| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
248701063 | pes2o/s2orc | v3-fos-license |

Identification and Verification of Feature Biomarkers Associated With Immune Cells in Dilated Cardiomyopathy by Bioinformatics Analysis
Objective: To explore immune-related feature genes in patients with dilated cardiomyopathy (DCM).

Methods: Expression profiles from three datasets (GSE1145, GSE21610 and GSE29819) of human cardiac tissues of DCM and healthy controls were downloaded from the GEO database. After data preprocessing, differentially expressed genes (DEGs) were identified with the 'limma' package in R software. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were then performed to identify biological functions of the DEGs. The compositional patterns of stromal and immune cells were estimated using xCell. Hub genes and functional modules were identified based on protein-protein interaction (PPI) network analysis with the STRING webtool and the Cytoscape application. Correlation analysis was performed between immune cell subtypes and hub genes. Hub genes with |correlation coefficient| > 0.5 and p value <0.05 were selected as feature biomarkers. A logistic regression model was constructed based on the selected biomarkers and validated in datasets GSE5406 and GSE57338.

Results: A total of 1,005 DEGs were identified. Functional enrichment analyses indicated that extracellular matrix remodeling and immune and inflammation disorders play important roles in the pathogenesis of DCM. Immune cells, including CD8+ T-cells, macrophages M1 and Th1 cells, were shown by immune cell infiltration analysis to be significantly changed in DCM patients. In the PPI network analysis, STAT3, IL6, CCL2, PIK3R1, ESR1, CCL5, IL17A, TLR2, BUB1B and MYC were identified as hub genes, among which CCL2, CCL5 and TLR2 were further screened as feature biomarkers by hub gene-immune cell correlation analysis. A diagnosis model was successfully constructed using the three biomarkers, with area under the curve (AUC) scores of 0.981, 0.867 and 0.946 in the merged dataset, GSE5406 and GSE57338, respectively.

Conclusion: The present study identified three immune-related genes as diagnostic biomarkers for DCM, providing a novel perspective of immune and inflammatory response for the exploration of DCM molecular mechanisms.
INTRODUCTION
Dilated cardiomyopathy (DCM) is defined as enlargement of the left or both ventricles with impaired contraction in the absence of abnormal loading conditions or coronary disease. The estimated prevalence of DCM is >1:250 in the population (Hershberger et al., 2013). It accounts for a considerable portion of heart failure (HF) (Reichart et al., 2019) and is the leading cause of heart transplantation (Weintraub et al., 2017). DCM results from a diverse range of etiologies, including genetic alterations, viral infection, drugs and alcohol, with a heterogeneous pathophysiological mechanism (Felker et al., 2000).
Immune and inflammatory responses play an important role in cardiovascular diseases such as myocardial infarction (Kologrivova et al., 2021) and atrial fibrillation. As for DCM, myocardial damage, whether from a genetic or environmental etiology, triggers inflammation and recruits immune cells to the heart. Regional inflammation causes tissue fibrosis, which stiffens the heart and promotes the progression to dilation and HF (Schultheiss et al., 2019). Myocardial inflammation is related to poor long-term outcomes in DCM (Nakayama et al., 2017). Immune cells, especially T lymphocytes and macrophages, promote myocardial inflammation and contribute to ventricular remodeling (Comarmond and Cacoub, 2017; Jain et al., 2021). For example, it has been reported that polarization of macrophages towards M2 was associated with ventricular remodeling and poor long-term prognosis in DCM (Nakayama et al., 2017). In addition, a significant increase in the number of Th1 and Th17 cells was observed while the number of Treg cells decreased in DCM patients (Wei et al., 2017; Liu et al., 2021). Infiltrated immune cells release cytokines and chemokines, such as TGF-β1, IL-1β, and TNF, promoting collagen deposition, fibrosis and cardiac remodeling (Schultheiss et al., 2019). Microarray profiling research has also shown that the expression of some immune-related genes, such as IL-6, CXCL10 and TLR3, is dysregulated in the left ventricle of DCM (Qiao et al., 2017). Based on the potential relationship between inflammation and DCM, some immunological therapies have been reported, including immunosuppressants (Parrillo et al., 1989; Frustaci et al., 2009), immunoadsorption (Bian et al., 2021) and IL-1 inhibitors (Van Tassell et al., 2017; De Luca et al., 2018). However, these immune-based therapies are either unsatisfactory or not fully confirmed by large randomized, multi-center research. It is therefore necessary to better understand the pathogenetic mechanism of DCM.
As shown in Figure 1, we integrated several GEO datasets. Through systematic bioinformatics analyses, we identified differentially expressed genes (DEGs) between DCM and healthy cardiac samples and explored the potential pathological mechanism of DCM by functional enrichment analysis, immune cell infiltration analysis and protein-protein interaction network analysis. Moreover, we constructed a three-gene diagnostic model via logistic regression analysis. Finally, we confirmed the validity of the diagnostic model in another two datasets, GSE5406 and GSE57338. This article is the first to explore the pathogenesis of DCM from the perspective of immunity and inflammation with bioinformatics, and we hope our analyses will provide potential targets for future in-depth research.
GEO Datasets
The DCM RNA expression datasets were collected from the online GEO database (http://www.ncbi.nlm.nih.gov/geo/). The keywords "dilated cardiomyopathy", "Homo sapiens", and "expression profiling by array" were used in the initial search, and 184 DCM-related studies were found. The following criteria were then used to further screen datasets: 1) the study includes DCM cases vs healthy controls; 2) tissue samples were obtained from the left ventricle; 3) the sample size was greater than 10. Three datasets that met the above criteria and were generated on the same platform were combined for analysis. Another two datasets derived from other platforms were used as validation datasets. The processed data of GSE1145 (platform: GPL570, including 11 samples of control and 12 samples of DCM), GSE21610 (Schwientek et al., 2010) (platform: GPL570, including eight samples of control and 21 samples of DCM), GSE29819 (Gaertner et al., 2012) (platform: GPL570, including six samples of control and seven samples of DCM), GSE5406 (Hannenhalli et al., 2006) (platform: GPL96, including 16 samples of control and 86 samples of DCM) and GSE57338 (Liu et al., 2015) (platform: GPL11532, including 132 samples of control and 82 samples of DCM) were downloaded as expression matrices with the R package 'GEOquery' (Davis and Meltzer, 2007). The mRNA expression profiles of controls and targeted patients were extracted and log2 transformed before further analysis (only if they had not already been log2 transformed).
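The download and transformation step can be sketched in R as follows. This is a minimal illustration, not the authors' exact script: the accession list mirrors the three merged sets, and the max-value check used to decide whether data are already log2-scaled is a common heuristic that we assume here.

```r
library(GEOquery)  # Bioconductor package used by the authors
library(Biobase)

accessions <- c("GSE1145", "GSE21610", "GSE29819")
expr_list <- lapply(accessions, function(acc) {
  eset <- getGEO(acc, GSEMatrix = TRUE)[[1]]   # processed series matrix
  m <- exprs(eset)                             # probes x samples
  # heuristic: very large maxima suggest the values are not yet log2-scaled
  if (max(m, na.rm = TRUE) > 50) m <- log2(m + 1)
  m
})
names(expr_list) <- accessions
```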
Data Preprocessing and DEGs Identification
The 'limma' (Ritchie et al., 2015) and 'sva' (Parker et al., 2014) packages in R software (R version 4.0.3) were used to correct intra- and inter-batch effects. The prcomp function was used to perform principal component analysis (PCA). Next, probe annotation was performed: probes annotated to more than one gene were removed, and for multiple probes annotated to the same gene, the first one encountered was retained. Finally, the 'limma' package was also used for DEG identification between different groups with cut-off values of adjusted p-value < 0.05 and |fold change| ≥ 1.5. In addition, DEGs were also identified by applying the robust rank aggregation (RRA) algorithm (Kolde et al., 2012) with the same criteria. Venn diagrams (http://bioinformatics.psb.ugent.be/webtools/Venn/) were used to summarize the overlapping DEGs between the 'sva' and RRA approaches. The boxplot function and the 'ggplot2' and 'pheatmap' packages were then used to plot gene expression boxplots, volcano plots and heatmaps.
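A condensed sketch of this pipeline in R is shown below; `merged`, `batch` and `group` are placeholder objects (a probe-collapsed genes x samples matrix, the study of origin, and the DCM/control labels), and the fold-change cut-off is applied on the log2 scale:

```r
library(sva)
library(limma)

# remove the inter-study batch effect with ComBat from 'sva'
merged_corrected <- ComBat(dat = merged, batch = batch)

# PCA on samples before/after correction (cf. Figures 2A-D)
pca <- prcomp(t(merged_corrected), scale. = TRUE)

# DEG identification with 'limma'
design <- model.matrix(~ group)               # intercept + DCM effect
fit <- eBayes(lmFit(merged_corrected, design))
tab <- topTable(fit, coef = 2, number = Inf, adjust.method = "BH")
degs <- subset(tab, adj.P.Val < 0.05 & abs(logFC) >= log2(1.5))
```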
Functional Enrichment Analysis
To further analyze the functions of DEGs, the R package 'clusterProfiler' (Yu et al., 2012) was used to perform Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis. The cut-off values for GO and KEGG were set as p < 0.05. The 'enrichplot' package was used to draw dot plots for the results of functional enrichment analysis.
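In code, the enrichment step can look like the following sketch; `deg_entrez` is an assumed vector of Entrez IDs for the DEGs (the symbol-to-Entrez mapping is not shown):

```r
library(clusterProfiler)
library(org.Hs.eg.db)
library(enrichplot)

ego <- enrichGO(gene = deg_entrez, OrgDb = org.Hs.eg.db,
                ont = "BP", pvalueCutoff = 0.05)   # repeat for "CC", "MF"
ekegg <- enrichKEGG(gene = deg_entrez, organism = "hsa",
                    pvalueCutoff = 0.05)

dotplot(ego, showCategory = 10)    # dot plot of the top enriched terms
```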
Immune Cell Infiltration Analysis
xCell (Aran et al., 2017) is a gene signature-based method to estimate the content of immune and stromal cells; it was validated using extensive in-silico simulations and cytometry immunophenotyping. The xCell package was applied to the normalized merged data to portray the cellular heterogeneity landscape of left ventricular expression profiles. We compared the cell distribution differences between the two groups using t-tests, with the cut-off value set at p < 0.05. Cell types whose scores differed significantly between the two groups were categorized based on their traits into three categories: "lymphoid and myeloid cells", "stem cells" and "stromal cells and others", and were visualized using the 'ggplot2' and 'ggpubr' packages. Correlation analyses of immune cell subtypes and hub genes were performed with the 'psych' and 'corrplot' packages. The Pearson correlation coefficient was used to assess the strength of correlation. Hub genes whose absolute correlation coefficient with immune cells was >0.5 with p-value < 0.05 were selected for further study.
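A sketch of the scoring and screening logic in R follows; `expr` (an HGNC-symbol-rowed matrix) and `group` are placeholders, and the xCell row label "Macrophages M1" is assumed to match the package's naming:

```r
library(xCell)   # github.com/dviraran/xCell

scores <- xCellAnalysis(expr)          # cell types x samples

# per-cell-type t-test between DCM and control samples
pvals <- apply(scores, 1, function(s)
  t.test(s[group == "DCM"], s[group == "control"])$p.value)
sig_cells <- names(pvals)[pvals < 0.05]

# Pearson correlation of a hub gene with a cell type, as in the screen
ct <- cor.test(expr["CCL2", ], scores["Macrophages M1", ],
               method = "pearson")
keep <- abs(ct$estimate) > 0.5 && ct$p.value < 0.05
```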
Protein-Protein Interaction Network and Gene Module Identification
The Search Tool for the Retrieval of Interacting Genes (STRING, https://string-db.org) is a webtool that provides validated and predicted information on PPIs (Szklarczyk et al., 2019). The list of DEGs was uploaded to the STRING website to detect significant protein interactions with a minimum interaction score >0.7. The network was then exported and visualized with Cytoscape 3.7.1 software (Cline et al., 2007). The CytoHubba plugin (Chin et al., 2014) was used to identify hub genes with high degree, and the results were visualized directly in Cytoscape. Additionally, the MCODE plugin (Bader and Hogue, 2003) was used to identify highly interconnected clusters with the cutoff parameters set as follows: degree cutoff = 2, node score cutoff = 0.2, k-core = 2, max. depth = 100. The results were further screened with criteria set as MCODE score >4 and node number >5. Gene Ontology biological process enrichment analysis was performed on the significant modules.
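CytoHubba's degree-based ranking amounts to counting edges per node, which can be mirrored outside Cytoscape; a sketch with igraph, assuming a STRING edge export `edges` with columns node1 and node2 (the file layout is our assumption):

```r
library(igraph)

ppi <- graph_from_data_frame(edges, directed = FALSE)
hub_genes <- names(sort(degree(ppi), decreasing = TRUE))[1:10]
hub_genes   # top-10 degree nodes, analogous to the CytoHubba result
```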
Hub Genes Verification and Diagnosis Model Construction
Hub genes with |correlation coefficient| > 0.5 and p < 0.05 with immune cells were selected. The expression profiles of the selected biomarkers were visualized in boxplots and validated in another two datasets, GSE5406 and GSE57338. Then, a diagnosis model combining the selected biomarkers was constructed in the merged dataset by logistic regression using the 'glm' function and verified in GSE5406 and GSE57338. Receiver operating characteristic (ROC) curves were used to assess the discrimination ability of the key genes and the diagnosis model.
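A sketch of the model construction and validation; `train` and `valid` are placeholder data frames with a 0/1 DCM label and the three gene expressions, and pROC is used here as one common choice of ROC tool (the original does not name one):

```r
library(pROC)

model <- glm(DCM ~ CCL2 + CCL5 + TLR2, family = binomial, data = train)
pred  <- predict(model, newdata = valid, type = "response")
roc_v <- roc(valid$DCM, pred)
auc(roc_v)     # discrimination of the combined diagnosis model
plot(roc_v)    # ROC curve
```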
Data Preprocessing and DEGs Identification
Expression profiles of healthy controls and DCM patients from GSE1145, GSE21610 and GSE29819 were combined into a merged dataset containing 25 healthy controls and 40 DCM patients. As shown in Figures 2A,C, the merged dataset had a strong batch effect. After normalization, it was effectively removed (Figures 2B,D). In the differential expression analysis, 1,005 DEGs were recognized in the merged dataset using the 'sva' integration method, including 385 down-regulated genes and 620 up-regulated genes (Figure 2E). 179 DEGs, including 101 up-regulated genes and 78 down-regulated genes, were identified using the RRA integration method. Venn diagrams depict the DEGs across the two different integration methods (Supplementary Figure S1). DEGs obtained by the 'sva' method were used in the following analyses. All of the DEGs are displayed in a heatmap (Figure 2F).
Functional Enrichment Analysis
To assess the functions of the DEGs, GO and KEGG enrichment analyses were performed. The categories of GO analysis include biological process (BP), cellular component (CC) and molecular function (MF). The leading 10 enriched terms of each GO category and of KEGG with p value <0.05 were visualized in Figure 3.

The mainly enriched BP terms included epithelial cell proliferation and its regulation, extracellular matrix and structure organization, cell chemotaxis, leukocyte migration and its regulation, and inflammatory response (Figure 3A). The mainly enriched MF terms were extracellular matrix structural constituent, glycosaminoglycan, collagen and fibronectin binding, and G protein-coupled purinergic nucleotide receptor activity (Figure 3B). The mainly enriched CC terms contained collagen-containing extracellular matrix, endoplasmic reticulum lumen, external side of plasma membrane, myofibril, sarcomere, basement membrane, and collagen trimer (Figure 3C). Overall, the GO results indicated that the functions of the DEGs were mainly associated with extracellular structure reorganization and fibrosis, as well as immune and inflammatory abnormalities.

In the KEGG pathway enrichment analysis, DEGs were mainly enriched in the PI3K-Akt signaling pathway, cytokine-cytokine receptor interaction, the AGE-RAGE signaling pathway in diabetic complications, phagosome, viral protein interaction with cytokine and cytokine receptor, and the HIF-1 signaling pathway (Figure 3D).
Immune Cell Infiltration Analysis
xCell was used to estimate the cell composition heterogeneity of the left ventricle between DCM and controls. As shown in Figure 4, 18 cell types were significantly changed in DCM cardiac tissue compared to control samples; among these, the scores of CD8+ T-cells, cDC, adipocytes, fibroblasts, and smooth muscle cells in DCM were significantly increased, while the scores of macrophages M1 and Th1 cells were significantly decreased. Moreover, the correlations among immune cells were calculated using Pearson's correlation coefficients. As shown in Figure 5, iDC had the highest positive correlation with DC (Pearson's coefficient = 0.89). The correlation between macrophages and macrophages M1 was the second strongest (Pearson's coefficient = 0.78). Additionally, aDC, DC, CD8+ Tcm, macrophages, monocytes, and macrophages M1 had strong correlation coefficients with most of the remaining immune cells.
PPI Network and Gene Module Identification
As shown in Figure 6A, there were 633 edges among 350 proteins in the PPI network. The top 10 genes with the highest degree in this network were selected as the hub genes: STAT3, IL6, CCL2, PIK3R1, ESR1, CCL5, IL17A, TLR2, BUB1B, and MYC (Figure 6B). The expression of the top 10 hub genes across all datasets is displayed in the supplementary material. Then, the correlation coefficients between the 10 hub genes and the significantly changed immune cells were calculated. As shown in Figure 6C, CD8+ T-cells and macrophages M1 had significant correlations with most of the hub genes. Moreover, with the screening rule |Pearson's coefficient| > 0.5 and p < 0.05, we obtained three immune-related hub genes with potential diagnostic value: TLR2, CCL2 and CCL5.
Additionally, densely connected modules were identified using the MCODE plug-in. Nine modules were obtained from the PPI network (Supplementary Figure S2). The most enriched GO BP terms of each module are listed in Table 1. Module_1 was enriched in negative regulation of transcription from the RNA polymerase II promoter. Module_2 was enriched in sister chromatid cohesion. The enriched BP terms of Modules 3 and 4 were associated with metabolism. Module_5 was enriched in [...].

FIGURE 4 | xCell scores of immune and stromal cells between DCM and control heart tissues in merged dataset (A-C) Boxplots of "lymphoid and myeloid cells", "stem cells", "stromal cells and others" respectively. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Hub Genes Verification and Diagnosis Model Construction
As shown in Figure 7A, CCL2 and TLR2 were significantly downregulated, while CCL5 was significantly upregulated, in the merged dataset; this was verified in another two datasets, GSE5406 and GSE57338 (Figures 7B,C). Next, ROC curve analysis was performed to verify the diagnostic value of the selected biomarkers. In the merged dataset, the ROC curves of the three biomarkers revealed high diagnostic value for DCM (Figure 8A); the AUC (area under the curve) scores of CCL2, CCL5 and TLR2 were 0.963, 0.803 and 0.766, respectively. In verification dataset GSE5406, the AUC scores of CCL2, CCL5 and TLR2 were 0.733, 0.821 and 0.794, respectively (Figure 8C). In verification dataset GSE57338, the AUC scores of CCL2, CCL5 and TLR2 were 0.738, 0.831 and 0.836, respectively (Figure 8E). Finally, we used the three biomarkers to construct a diagnosis model by logistic regression and visualized it with ROC curves. The AUC score of the diagnosis model was 0.981, 0.867 and 0.946 in the merged dataset, GSE5406 and GSE57338, respectively (Figures 8B,D,F).
DISCUSSION
DCM, a heterogeneous disease, is a major cause of heart failure and heart transplantation worldwide. Both genetic mutations and many different environmental changes can cause cardiomyocyte damage or death and may, in either case, trigger myocardial inflammation, further promoting the progression of cardiomyopathy (Whelan et al., 2010; Lynch et al., 2015). Recent experimental and clinical evidence has suggested that abnormal activation of the immune system may be involved in the process of cardiac function deterioration (Kawai, 1999; Lynch et al., 2017; Brayson et al., 2019). Exploring the mechanism of key immune cells, pathways, and molecules in the pathophysiological process of cardiomyopathy can help clarify the special role of the immune system in the maintenance and imbalance of cardiac function to some degree, so as to provide potential immunosuppressive targets for future immunotherapy.

TABLE 2 | Functions of the three feature biomarkers.

| Gene | Functions |
|---|---|
| CCL2 | Chemoattractant for activated T lymphocytes and monocytes; promotes the infiltration of mononuclear cells; primary activator for macrophages; protects cardiomyocytes from death by autocrine and paracrine effects |
| CCL5 | Chemoattractant for T lymphocytes, monocytes, eosinophils and NK cells; promotes neutrophil and macrophage activation in myocardial infarction and myocarditis |
| TLR2 | Recognizes pathogen-associated molecular patterns (PAMP) and damage-associated molecular patterns (DAMP); induces expression of inflammatory cytokines and chemokines, resulting in an invasion of inflammatory cells; protects the heart in aged animals after transverse aortic constriction surgery |
GO annotation and KEGG pathway enrichment analysis of the DEGs in the merged dataset revealed that the immune and inflammatory response and extracellular matrix remodeling play important roles in the pathogenesis of DCM, reflected in terms such as myeloid leukocyte migration, cell chemotaxis, mononuclear cell migration, and extracellular matrix organization. Immune cell infiltration analysis indicated that the infiltration degrees of CD8+ T-cells and cDC were significantly enriched in DCM samples, while Th1 cells and macrophages M1 were enriched in healthy control tissues. According to the PPI network analysis, ten hub genes were selected as potential biomarkers of DCM. Finally, three feature immune-related hub genes were identified as biomarkers by both correlation and logistic analyses, with excellent AUC scores in the merged dataset of 40 DCM patients and 25 healthy controls. Our novel diagnosis model of DCM was constructed based on these three feature molecules and verified in another two datasets, GSE5406 and GSE57338. The functions of the three biomarkers are summarized in Table 2.

FIGURE 7 | Gene expression of the three selected hub genes (A) Boxplot of the three selected hub genes in merged dataset (B) Boxplot of the three selected hub genes in GSE5406 (C) Boxplot of the three selected hub genes in GSE57338. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
CCL2 (C-C motif chemokine ligand 2, also named MCP-1) and CCL5 (C-C motif chemokine ligand 5, also named RANTES) are two CC chemokines and play dual roles in inflammation and tissue repair. CCL2, the best studied chemokine, can be released by dendritic cells, monocytes, macrophages, smooth muscle cells and cardiomyocytes (Hanna and Frangogiannis, 2020). CCL2 is a potent chemoattractant for activated T lymphocytes and monocytes as well as a primary activator for macrophages (Rollins et al., 1988; Rollins, 1997). Previous studies suggested that CCL2 was upregulated in cardiac injury and was a key culprit in cardiac disease development and progression by promoting the infiltration of mononuclear cells (Hanna and Frangogiannis, 2020). CCL2 could also protect cardiomyocytes from cell death by its autocrine effect on cardiomyocytes and its paracrine effect on endothelial cells, which stimulated angiogenesis (Tarzami et al., 2002; Hong et al., 2005; Tarzami et al., 2005). CCL5 is known to be a potent chemoattractant for T lymphocytes, monocytes, eosinophils, and natural killer cells (Jarrah and Tarzami, 2015). It is released by endothelial cells, smooth muscle cells, activated T-cells, macrophages, and so on, upon inflammatory stimulus or infection (Jarrah and Tarzami, 2015). Although their function has been well studied on a systemic level, their role in cardiovascular disease, especially DCM, has not been fully elucidated. TLR2 (Toll-like receptor 2) is a pattern-recognition receptor protein critical for the initiation of the innate immune response, which recognizes both pathogen-associated molecular patterns (PAMP) and damage-associated molecular patterns (DAMP) (Bryant et al., 2015). Besides TLR4, TLR2 is the next most abundant toll-like receptor in heart tissue (Mann, 2011). The binding of ligand to TLR2 induces expression of inflammatory cytokines and chemokines (e.g., IL-1β, TNF-α, CCL2), resulting in an invasion of macrophages and other inflammatory cells (Kawasaki and Kawai, 2014). Most of the previous reports suggested that TLR2 played a detrimental, pro-inflammatory role (Yu and Feng, 2018). However, other studies indicated a protective role of TLR2 in aged animals and in mice after transverse aortic constriction surgery (Bualeong et al., 2016; Spurthi et al., 2018). In our study, we found that TLR2 was downregulated in DCM, which may render DCM patients more sensitive to injury stimuli, causing eventual cardiac dysfunction.

FIGURE 8 | Analysis of the disease predictive abilities of the three selected hub genes (A) ROC curve analysis of three selected hub genes in merged dataset (B) ROC curve analysis of diagnosis model using the three selected hub genes in merged dataset (C) ROC curve analysis of three selected hub genes in GSE5406 (D) ROC curve analysis of diagnosis model using the three selected hub genes in GSE5406 (E) ROC curve analysis of three selected hub genes in GSE57338 (F) ROC curve analysis of diagnosis model using the three selected hub genes in GSE57338.
The novelties of our study are as follows. Firstly, we were the first to use bioinformatics analyses to investigate the molecular mechanism of DCM from the perspective of immunity and inflammation. Secondly, we identified CCL2, CCL5 and TLR2 as potential diagnostic biomarkers of DCM. Nonetheless, there are several limitations that should not be ignored. First, it cannot be determined whether there is a cause-and-effect relationship between the gene expression differences and the pathophysiological mechanism of DCM, or whether these differences are merely compensatory changes. Second, the study was a retrospective data analysis; thus, detailed clinical and prognostic profiles, such as the left ventricular ejection fraction and the occurrence of adverse cardiovascular events in patients with DCM, were absent. This restricted further exploration of the key genes with respect to clinical features and outcomes. Finally, our study was based on bioinformatics analyses of transcriptomic data from public datasets, which may be inconsistent with the actual situation. Further clinical trials are needed to test our bioinformatics findings.
CONCLUSION
By bioinformatics analyses of public transcriptional data, CCL2, CCL5, and TLR2 were identified as potential biomarkers of DCM from the perspective of immune cell infiltration combined with logistic regression. More importantly, a diagnostic model of DCM based on these three feature genes was developed, which brings a new aspect to the current understanding of the pathogenesis of DCM; the three genes may serve as interesting targets for future in-depth studies.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here: GSE1145; GSE21610; GSE29819; GSE5406; GSE57338.
AUTHOR CONTRIBUTIONS
TZ and JQ conducted statistical analysis and drafted the article. TZ and MW were involved in the conception and design of the study. ZD, QL, ML, and CX contributed to picture processing and article reviewing. YX reviewed and proofread the article. YX provided effective scientific suggestions and supervision and created the final revision of the manuscript.
ACKNOWLEDGMENTS
We would like to thank the Gene Expression Omnibus (GEO) database for the precious data made freely available for scientific research.

| 2022-05-12T13:31:49.902Z | 2022-05-12T00:00:00.000 | {
"year": 2022,
"sha1": "396eb6f8604da4665309c4d95ee959639109f1ae",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "396eb6f8604da4665309c4d95ee959639109f1ae",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
220713949 | pes2o/s2orc | v3-fos-license |

Stiffness of Probability Distributions of Work and Jarzynski Relation for Initial Microcanonical and Energy Eigenstates
We consider closed quantum systems (into which baths may be integrated) that are driven, i.e., subject to time-dependent Hamiltonians. As a starting point we assume that, for systems initialized in microcanonical states at some energies, the resulting probability densities of work (work-PDFs) are largely independent of these specific initial energies. We show analytically that this assumption of "stiffness", together with the assumption of an exponentially growing density of energy eigenstates, is sufficient but not necessary for the validity of the Jarzynski relation (JR) for the above microcanonical initial states. This holds, even in the absence of microreversibility. To scrutinize the connection between stiffness and the JR for microcanonical initial states, we perform numerical analysis on systems comprising random matrices which may be tuned from stiff to nonstiff. In these examples we find the JR fulfilled in the presence of stiffness, and violated in its absence, which indicates a very close connection between stiffness and the JR. Remarkably, in the limit of large systems, we find the JR fulfilled, even for pure initial energy eigenstates. As this has no analogue in classical systems, we consider it a genuine quantum phenomenon.
I. INTRODUCTION
The long-standing question regarding whether, and in which way, closed finite quantum systems approach thermal equilibrium has recently gathered renewed attention. On the theoretical side, thermalization and equilibration have been investigated, e.g., for rather abstract settings [1][2][3][4][5][6] and also for more specific condensed-matter type systems [7][8][9][10]. In these works major concepts are the eigenstate thermalization hypothesis (ETH) and typicality, both of which will also play certain roles in the paper at hand. Developments in experiments on ultra-cold atoms now allow for testing what were merely theoretical results before; see e.g. Refs. [11][12][13]. Rather than just the existence of equilibration within closed quantum systems, lately the very peculiarities of the dynamical approach to equilibrium have moved to the center of interest [11,14]. Questions addressed in this context include limits on relaxation time scales and agreement of unitary quantum dynamics of closed quantum systems with standard statistical relaxation principles, such as Fokker-Planck equations [15][16][17][18], or, more generally, standard stochastic processes [19,20]. But also the emergence of universal non-equilibrium behavior involving work and driven systems is currently under discussion [21].
To a large extent universal non-equilibrium behavior may be captured by fluctuation theorems, see e.g. Ref. [22] and references therein. The Jarzynski relation (JR), a general statement on the work that has to be invested to drive processes also and especially far from equilibrium, is a prime example of such a fluctuation theorem. Many derivations of the JR from various starting grounds have been presented. These include classical Hamiltonian dynamics, stochastic dynamics such as Langevin or master equations, as well as quantum mechanical starting points [22][23][24][25][26][27]. However, all these derivations (except for Ref. [28]) assume that the system that is acted on with some kind of "force" is strictly in a Gibbsian equilibrium state before the process starts. (The notion of "the system" here routinely includes the bath.) Thus, this starting point differs significantly from the progress in the field of thermalization: there, the general features of thermodynamic relaxation are found to emerge entirely from the system itself, without any necessity of evoking external baths or specifying initial states in detail. Clearly, the preparation of a strictly Gibbsian initial state requires the coupling to a (super-)bath prior to starting the process. This situation renders rather exigent the question of whether or not the standard JR also holds for systems starting in other than Gibbsian states (e.g. micro-canonical states). Note that, other than for Gibbsian initial states, the answer to this question is expected to depend on specific properties of the considered systems.
In this context a property which we call "stiffness of work-distributions" has been suggested as a key ingredient for the validity of the JR for microcanonical initial states in Ref. [28]. In this pioneering work the validity of the JR is proven for classical systems initialized in microcanonical initial states, given that the systems feature stiffness and microreversibility. Moreover, for a classical Lorentz gas, stiffness and the validity of the JR for microcanonical initial states are numerically demonstrated. Furthermore, the JR was found to hold for micro-canonical initial states for some quantum spin-models exhibiting stiffness in a numerical study in Ref. [29]. The present work extends this line of research in various directions: we examine the validity of the JR not only for microcanonical initial states but also for initial pure energy eigenstates; the latter is conceptually beyond the scope of Ref. [28]. It is also important to note that stiffness is a sufficient but not a necessary condition for the validity of the JR; thus the practical relevance of stiffness is open to challenge. The numerical modelling in the paper at hand allows us to address this practical relevance by means of an investigation of the validity of the JR in the presence of stiffness, as well as in its absence. The latter is, to our best knowledge, so far missing in the literature. Furthermore, the results in the current paper do not rely on microreversibility.
The paper at hand is organized as follows: In Sec. II we introduce our basic hypothesis of probability density functions of work (work-PDFs) being largely independent of the respective energy for micro-canonical initial states. We call this property stiffness. The validity of the JR for micro-canonical initial states is shown to follow from this assumption (together with the routinely applied assumption of an exponentially growing density of energy eigenstates). With an additional assumption on the system dynamics, which we call smoothness, we derive the validity of the JR even for energy eigenstates. In Sec. III we introduce our modelling, which is partly based on random matrices. In Sec. IV we provide numerical results for micro-canonical initial states indicating a very strong correspondence between the validity of the JR and the stiffness of the system dynamics. In Sec. V we numerically show that the aforementioned smoothness assumption is also fulfilled for our modelling in the limit of large systems. This completes the demonstration of the existence of a class of systems which exhibit both stiffness and smoothness and thus fulfill the JR even for energy eigenstates. We close with a discussion.
II. STIFFNESS AND SMOOTHNESS OF WORK PDF'S AND JARZYNSKI RELATION FOR INITIAL MICROCANONICAL STATES AND ENERGY EIGENSTATES
The analysis at hand focuses exclusively on closed systems. While it is physically appropriate to interpret the examples in Sec. III in terms of "considered system" and "environment" or "bath", we technically treat the system+environment compound, regardless of the coupling strength, as one closed system. Thus, since there is no external source or sink of heat, any energy change of the full system is to be counted as work W (for an overview over different perspectives, see e.g. Ref. [30]). The measurement of the inner energy is described by a two-point projective measurement scheme. In this respect we choose the same starting point as employed in derivations of the JR as described, e.g., in Ref. [31] and references therein. However, while in Ref. [31] the assumption of a canonical, Gibbsian initial state is of vital importance, we base our consideration on much larger classes of initial states of the full system. The central role which the assumption of a strictly Gibbsian state plays in the aforementioned works is replaced by the assumption of "stiffness" of the work-PDFs (as introduced in Eq. (13)). We consider a system described by a time-dependent Hamiltonian H(t) during the time t ∈ [0, T], which induces a non-equilibrium process.
The corresponding unitary time-propagation operator U is defined by

U = 𝒯 exp{−i ∫₀ᵀ H(t) dt},  (1)

where 𝒯 is the time-ordering operator and we tacitly set ℏ = 1.
Let |i⟩ be the eigenstates of H(0) and |f⟩ the eigenstates of H(T). Let further ε_i and ε_f be the corresponding eigenvalues, respectively. Starting from the initial state |i⟩, p_{f←i} denotes the probability to make a transition into |f⟩:

p_{f←i} = |⟨f|U|i⟩|².  (2)

The average over the work-PDFs ⟨h(W)⟩_W starting from an initial state ρ(0) can be calculated for an arbitrary function h(W) of the work W:

⟨h(W)⟩_W = Σ_{i,f} Tr(ρ(0)|i⟩⟨i|) p_{f←i} h(ε_f − ε_i).  (3)

Tr(ρ(0)|i⟩⟨i|) is the probability to find the system after the first projective measurement in the initial state |i⟩, and p_{f←i} is the probability to make a transition from |i⟩ to the final state |f⟩. The work performed during this transition is W = ε_f − ε_i. One can easily show that these transition probabilities p_{f←i} are doubly stochastic:

Σ_i p_{f←i} = Σ_f p_{f←i} = 1.  (4)

In general these transition probabilities vary from eigenstate to eigenstate. We thus define the probability p_{F←i} to transition from an eigenstate |i⟩ into an energy interval E_F:

p_{F←i} = Σ_{f: ε_f ∈ E_F} p_{f←i}.  (5)

Here, δ is to be chosen large compared to the level spacing of the full system, but small compared to the involved energy scales of E, W. Note that I and F are integers used to address the initial (E_I) and final (E_F) energy intervals, respectively. This construction serves as a coarse-graining of the energy scale.
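Eq. (4) is straightforward to confirm numerically for any unitary; a minimal R sketch with a generic random Hermitian matrix (not the model of Sec. III):

```r
set.seed(1)
d <- 8
A <- matrix(complex(real = rnorm(d^2), imaginary = rnorm(d^2)), d, d)
H <- (A + Conj(t(A))) / 2            # random Hermitian matrix
U <- eigen(H)$vectors                # its eigenbasis forms a unitary
P <- Mod(U)^2                        # p_{f<-i} = |<f|U|i>|^2
range(rowSums(P))                    # all 1: doubly stochastic
range(colSums(P))                    # all 1
```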
In a similar way, we define the average probability to make a transition from an initial state |i⟩ from the energy interval E_I into an energy interval E_F:

p_{F←I} = Ω_I⁻¹ Σ_{i: ε_i ∈ E_I} p_{F←i}.  (6)

Ω_I and Ω_F denote the number of eigenstates of H(0) in the interval E_I and of H(T) in the interval E_F, respectively.
Note that these transition probabilities depend on the width δ of the final energy interval. Closely related to these transition probabilities is the so-called work probability density function (work-PDF), which describes the probability to perform the work W = (F − I)δ starting from an initial energy E = Iδ:

P_E(W) = δ⁻¹ p_{F←I}.  (7)
The transition probabilities and the work-PDFs are essentially the same, up to a constant rescaling factor. But in large systems these work-PDFs typically become independent of the concrete choice of δ [29]. Starting from Eq. (3), the average over the work-PDFs ⟨h(W)⟩_W for a function h(W), which does not vary significantly on the scale of δ, can be calculated from p_{F←I}:

⟨h(W)⟩_W = Σ_{I,F} Tr(ρ(0) Π_I) p_{F←I} h(Ē_F − Ē_I),  (9)

where Π_I denotes the projector onto the eigenstates in E_I and Ē_n = nδ is an approximation of the energies in the initial (n = I) interval E_I and of the final (n = F) interval E_F, respectively. From Eq. (4) we derive the following properties of p_{F←i} and p_{F←I}:

Σ_F p_{F←i} = Σ_F p_{F←I} = 1,  (10)

Σ_I Ω_I p_{F←I} = Ω_F.  (11)

Up to now we only defined various quantities and derived general statements, but did not make any assumptions. We now come to the derivation of the JR for microcanonical initial states. To begin with, we define the latter as

ρ_mc^I(0) = Ω_I⁻¹ Π_I.  (12)

In order to derive the JR for micro-canonical initial states we make two assumptions. First, we assume that the probability to make a transition from a state from the energy interval E_I into the energy interval E_F only depends on the difference of F and I:

p_{F←I} =: q_{F−I}.  (13)

We call this assumption stiffness. This assumption can also be expressed in terms of work-PDFs P_E(W): if these work-PDFs are independent of the initial energy E, then Eq. (13) is fulfilled. Our second assumption states that the densities of states (DOS) of the initial Hamiltonian, D_ini(E_I) := δ⁻¹ Ω_I, and of the final Hamiltonian, D_fin(E_F) := δ⁻¹ Ω_F, grow exponentially:

D_ini(E) = Z_ini exp{βE},  D_fin(E) = Z_fin exp{βE}.  (14)

Up to now β, Z_ini and Z_fin are just some positive real numbers. In the discussion below Eq. (16) these numbers are interpreted in terms of standard statistical thermodynamics.
Of course, Eq. (13) and Eq. (14) are not expected to hold for all energies E. Here we only require that these relations hold at least over an energy interval large enough to comprise almost the entire work-PDF.
To arrive at the JR for micro-canonical initial states, we start by calculating the average of exp{−βW} over the work-PDFs according to Eq. (9):

⟨exp{−βW}⟩_W = Σ_{I′,F} Tr(Ω_I⁻¹ Π_I Π_{I′}) p_{F←I′} exp{−β(Ē_F − Ē_{I′})} = Σ_F q_{F−I} exp{−β(Ē_F − Ē_I)}.  (15)
In the last step we evaluated the sum over I′ by using Tr(Ω_I⁻¹ Π_I Π_{I′}) = δ_{I,I′} and used the stiffness assumption Eq. (13). By substituting F by F′ + I − I′, while I′ is the new summation index and F′ an arbitrary but fixed integer, we get:

⟨exp{−βW}⟩_W = Σ_{I′} q_{F′−I′} exp{−β(Ē_{F′} − Ē_{I′})} = (Z_fin/Z_ini) Ω_{F′}⁻¹ Σ_{I′} Ω_{I′} p_{F′←I′} = Z_fin/Z_ini.  (16)

In the second step we used that the DOS of the initial and the final Hamiltonian grow exponentially according to Eq. (14). In the last step we used Eq. (11). Eq. (16) formally is a JR for the work-PDFs obtained by starting from microcanonical initial states, with the temperature replaced by a parameter describing the exponential growth of the DOS of the full system. As such, Eq. (16) already represents the main result of the present section. Note that Eq. (16) holds for arbitrary processes and its r.h.s. only contains static, process-independent model parameters.
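The step from Eq. (15) to Eq. (16) can be made concrete with a few lines of R: for a strictly exponential DOS, the Ω-weighted sum of Eq. (11), divided by Ω_F, coincides term by term with ⟨exp{−βW}⟩ evaluated with a stiff kernel. The Gaussian kernel q_m below is an arbitrary assumed example, not taken from the model:

```r
beta <- 1; delta <- 0.05
m <- -60:60                                   # m = F - I, so W = m * delta
q <- exp(-(m - 4)^2 / 80); q <- q / sum(q)    # some stiff kernel q_m
Omega <- function(n) exp(beta * delta * n)    # exponential DOS, cf. Eq. (14)
Fidx <- 200; Iidx <- Fidx - m                 # shifted summation index
lhs <- sum(Omega(Iidx) * q) / Omega(Fidx)     # l.h.s. of Eq. (11) / Omega_F
rhs <- sum(q * exp(-beta * delta * m))        # <exp(-beta W)> under stiffness
all.equal(lhs, rhs)                           # TRUE: Eq. (11) <=> JR value
```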
Formally the JR could be fulfilled for microcanonical initial states, even if Eq. (13) and Eq. (14) do not hold. In this sense these assumptions are stronger than the validity of the JR, or to rephrase, these assumptions represent sufficient but not necessary conditions. This peculiarity will be investigated in detail below. In an analogous way we can derive Eq. (16) for initial energy eigenstates ρ(0) = |i⟩⟨i| if we additionally assume that

p_{F←i} = p_{F←I}  (17)

holds for all i ∈ ℕ with ε_i ∈ E_I. This additional assumption means that the transition probabilities from an eigenstate |i⟩ to an energy interval E_F are smooth functions of the initial and final energy. We therefore call it "smoothness". The validity of this assumption is investigated in Sec. V in a finite-size scaling. In order to demonstrate an even closer analogy of Eq. (16) with the standard JR, it remains to be explained in which sense the r.h.s. of Eq. (16) may be considered as the familiar r.h.s. of the standard JR, e^{−β∆F}, where F is the free energy. Such an identification would hold if

Z_fin/Z_ini = e^{−β∆F}.  (18)

In order to judge whether or not Eq. (18) is justified, consider the logarithm of Eq. (14),

ln D_α(U) = βU + ln Z_α.  (19)

The index α ∈ {ini, fin} signals whether the equation refers to the initial or final Hamiltonian, respectively. Moreover, the discrete average energies Ē_I and Ē_F are replaced by the continuous parameter U. If one identifies, along the lines of Boltzmann's original approach, the entropy S_α as

S_α(U) = ln(δ D_α(U))  (20)

(where we tacitly set k_B = 1), one may convert Eq. (19) into

−β⁻¹ ln(δ Z_α) = U − β⁻¹ S_α(U).  (21)
Note that, in accordance with Eq. (14), ∂_U S_α = β, hence β has the meaning of an inverse temperature, and the r.h.s. of Eq. (21) is, accordingly, the free energy F as introduced in standard textbooks on phenomenological thermodynamics. In this sense Eq. (18) indeed holds, which entails the rewriting of Eq. (16) in a form closer to the familiar one:

⟨exp{−βW}⟩_E = exp{−β∆F},  (22)

where ⟨· · ·⟩_E denotes the microcanonical expectation value corresponding to energy E. This concludes our consideration on the validity of a JR for microcanonical initial states under the assumption of stiff work-PDFs.
III. MODELS AND DRIVING PROTOCOL
With the following numerical investigations we ascertain the pivotal relevance of stiff work-PDFs for the validity of the JR for microcanonical initial states. We therefore introduce a model that is partly based on random matrices. Within this model we can control the stiffness of the resulting work-PDFs via a single parameter ξ. This allows us to observe the influence of stiffness on the JR for microcanonical initial states. We consider an isolated system comprising a relatively small subsystem (denoted as "sys", H_sys) and a bigger part serving as heat bath (denoted by "bath", H_bath). Both parts may interact via H_int. Finally, a time-dependent force H_prot periodically drives the system. Concretely, we choose the small subsystem to be a spin and the time-dependent force to be a kind of microwave field, such that the whole model allows for an interpretation in terms of a spin-resonance experiment with a finite lifetime of the spin excitation, see Fig. 1. A very similar model (spin-GORM model) has previously been used to study relaxation in finite environments [32]. In detail the Hamiltonian of the full system reads:

H(t) = H_sys + H_bath + α H_int + H_prot(t).  (23)

The small subsystem is a simple two-level system, e.g. a spin-1/2 particle in a magnetic field B_z. The Hamiltonian of this subsystem is characterized as

H_sys = Σ_{j=0,1} E_j^sys |j⟩⟨j|, with level splitting E_1^sys − E_0^sys = B_z,  (24)

where the E_j^sys denote the eigenvalues of H_sys. We chose B_z = 0.5 throughout this paper.
The bath part is also defined by its energy levels E_n^bath, where N denotes the dimension of the bath; the levels are chosen such that they yield a (strictly) exponentially growing DOS Ω_bath(E) ∝ exp{βE}, comprising energies from E_min^bath = 0 to E_max^bath = 4.5. The constant β (which takes the role of an inverse temperature here) is set to 1. Note that for this model an exponentially growing DOS of the bath induces an approximately exponentially growing DOS of the full Hamiltonian.
As mentioned in the previous section, an exponentially growing DOS is one of the conditions (Eq. (14)) used here to derive the JR for micro-canonical initial states. As the DOS of many physical systems (spatially extended, with short-range interactions, etc.) is well approximated by an exponential within not too large an energy range, this condition is routinely imposed in this context and represents a natural cornerstone of the modelling. Note that this modelling corresponds to an "ideal heat bath", i.e., the temperature is always 1/β, regardless of the actual bath energy. We now define the interaction between the two parts of the system. We introduce the product basis notation |j, n⟩ := |j⟩ ⊗ |n⟩, where |j⟩ are the eigenstates of H_sys and |n⟩ those of H_bath. Regarding this product basis we define the interaction part H_int, whose matrix elements involve the random numbers R_nl and the band function f(ω) described below. R_nl = R_ln denote normally distributed random numbers with zero mean and unit variance.
To assess the rationale behind this modelling, consider the following.
The interaction H_int only allows transitions (for the non-driven model, i.e., for λ = 0) between energetically similar bath states. Direct transitions between states with significantly different bath energies are suppressed by the Gaussian function f(ω), i.e., their suppression is controlled by the respective variance σ²_int = 0.5. Within the validity of Fermi's golden rule, the decay rate γ of the z-component of the magnetization of the spin for some initial bath energy E^bath can be estimated as γ ∝ exp{β(1 − ξ)E^bath} for our model. In a physical system we would expect that γ depends on the temperature 1/β of the bath, but not on its actual energy. For ξ = 1 the rate γ actually becomes independent of the bath energy E^bath. We thus consider this the most physical case.
While it is not plainly visible, it is an actual and most important fact that ξ also controls the stiffness of the model. It turns out that stiff work-PDFs arise precisely at the above "most physical" case ξ = 1. For smaller and larger ξ, stiffness is lost. For clarity of presentation we do not discuss the inner workings of this "stiffness control mechanism" here, but simply present clear numerical evidence for its existence in App. A.

[Figure caption: Work-PDFs for two different bath-couplings α. For the weaker coupling one nicely sees two sharp peaks at W = ±B_z, resulting from spin-flips induced by the resonant irradiation. For the stronger coupling the work-PDF is much broader.]
We finally introduce the time-dependent protocol exclusively acting on the "sys" part:

H_prot(t) = λ sin(ω_prot t) σ_x ⊗ 1_bath,  (29)

with λ the driving strength. Thinking again of the system in terms of a spin-1/2 particle, the protocol describes a sinusoidally modulated magnetic field in the x-direction, as routinely used in spin-resonance experiments. We choose ω_prot = B_z = 0.5, i.e., the irradiation is on resonance. The duration of the protocol is set to T = 3.5 · 2π/ω_prot throughout this paper.
IV. JARZYNSKI RELATION FOR MICRO-CANONICAL INITIAL STATES AND VARIOUS SYSTEM CONFIGURATIONS
We consider a micro-canonical initial state ρ_mc^{I_0}(0) from the center of the spectrum of the initial Hamiltonian H(0), with an energetic width of about δ ≈ 0.06:

ρ_mc^{I_0}(0) = Ω_{I_0}⁻¹ Π_{I_0},  (30)

with Π_{I_0} the projector onto the eigenstates of H(0) in the interval E_{I_0}.
The dimension of the bath is set to N = 4000.
In order to quantify deviations from the perfectly fulfilled JR (Eq. (22)) we introduce the following definition:

D(ξ, α, λ) := ⟨exp{−βW}⟩ − exp{−β∆F}.  (31)

Since we consider cyclic processes, ∆F is equal to zero and exp{−β∆F} becomes equal to 1. If the JR holds for the considered set of parameters (ξ, α and λ), the corresponding quantifier D(ξ, α, λ) vanishes.
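For orientation, the sketch below shows how such a quantifier can be evaluated in R for a small toy analogue of the model: build H(0), time-step the driven propagator, extract the two-point-measurement transition probabilities, and evaluate D for one initial eigenstate. The bath-level construction, the explicit form of H_sys, the plain band-limited coupling (the ξ-dependent suppression factor is omitted), and all sizes are illustrative assumptions of ours, not the exact Eqs. (23)-(29):

```r
set.seed(7)
Nb <- 40                                        # toy bath dimension
beta <- 1; Emax <- 4.5
Eb <- log(1 + (1:Nb) / Nb * (exp(beta * Emax) - 1)) / beta  # ~exponential DOS
Bz <- 0.5; alpha <- 0.4; lambda <- 0.25; wprot <- 0.5
sx <- matrix(c(0, 1, 1, 0), 2); sz <- diag(c(1, -1))
R <- matrix(rnorm(Nb^2), Nb); R <- (R + t(R)) / sqrt(2)     # symmetric random
f <- exp(-outer(Eb, Eb, "-")^2 / (2 * 0.5))     # Gaussian band, sigma^2 = 0.5
H0 <- kronecker(Bz / 2 * sz, diag(Nb)) +        # assumed H_sys, splitting Bz
      kronecker(diag(2), diag(Eb)) +            # bath levels
      alpha * kronecker(sx, R * f)              # band-limited coupling
Hx <- kronecker(sx, diag(Nb))                   # driving operator
Tend <- 3.5 * 2 * pi / wprot; nt <- 300; dt <- Tend / nt
U <- (1 + 0i) * diag(2 * Nb)
for (k in 1:nt) {                               # stepwise time ordering
  es <- eigen(H0 + lambda * sin(wprot * (k - 0.5) * dt) * Hx, symmetric = TRUE)
  U <- es$vectors %*% (exp(-1i * es$values * dt) * t(es$vectors)) %*% U
}
es0 <- eigen(H0, symmetric = TRUE)              # cyclic process: H(T) = H(0)
P <- Mod(t(es0$vectors) %*% U %*% es0$vectors)^2  # p_{f<-i}
i0 <- order(abs(es0$values - 2))[1]             # eigenstate with energy near 2
W <- es0$values - es0$values[i0]                # work values for this column
D <- sum(P[, i0] * exp(-beta * W)) - 1          # deviation in the spirit of Eq. (31)
D
```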
The results for the micro-canonical initial states are displayed in Fig. 3. Light green means that the JR is fulfilled, while other colors indicate deviations.
In case of weak bath-couplings α or weak irradiation strengths λ, the JR is trivially fulfilled, even for micro-canonical initial states. For λ ≈ 0 we are in the limit of adiabatic following and we thus expect to actually perform zero work. For α ≈ 0 the sys- and bath-parts are decoupled. But nevertheless the reduced initial sys-state is a thermal state with inverse temperature β. So the protocol acts on a system prepared in a Gibbsian state. For this scenario it is well known that the JR holds. We therefore concentrate on the larger αs and λs.
For ξ = 1.0 the resulting work-PDFs are stiff, up to small fluctuations (see App. A). Since stiff work-PDFs imply the JR for micro-canonical initial states, the respective deviations in Fig. 3 are nearly zero.

For ξ = 0.6, 2.0 the resulting work-PDFs are not stiff (see App. A). In principle, the JR could still be fulfilled for microcanonical initial states, since stiffness is formally not a necessary condition. However, at both values, i.e., ξ = 0.6, 2.0, we find deviations from the JR "to both sides" (D_mc(ξ, α, λ) positive as well as negative). These deviations appear to depend systematically on α and λ and are nonzero for most α, λ. However, there are a few combinations of α and λ for which the JR is fulfilled, see the corresponding "light green corridors" in Fig. 3.
In App. B the dependence of the deviations D_mc(ξ, α, λ) on the initial energy E_0 is numerically investigated in more detail. We find that at ξ ≠ 1 the initial energy plays a crucial role for the resulting deviations, but not so at ξ = 1. Especially at the "light green corridors" in Fig. 3, left and right panels, the JR is violated for initial microcanonical states with energies other than E_0.
These numerical findings suggest that the stiffness of work-PDFs is crucial for the validity of the JR for microcanonical initial states.
V. VALIDITY OF THE JARZYNSKI RELATION FOR ENERGY EIGENSTATES AND FINITE SIZE SCALING
Up to now we only investigated the validity of the JR for micro-canonical initial states, Eq. (30). We now turn to initial states that are eigenstates of the initial Hamiltonian H(0). We denote these initial states as

ρ_es^i(0) = |i⟩⟨i|.  (32)

The energetic width of these states is δ = 0. In this sense they are fundamentally different from micro-canonical initial states. But in this section we will demonstrate that, in the limit of large bath dimension, both behave similarly regarding the JR. Again, we use Eq. (31) to check whether the JR is fulfilled or not. We define the corresponding deviations D_es(ξ, α, λ) completely analogously to the D_mc(ξ, α, λ) (cf. Eq. (31)), but with ρ_mc^{I_0}(0) replaced by ρ_es^i(0). Note that the average of the D_es(ξ, α, λ) over a pertinent range of i equals a corresponding D_mc(ξ, α, λ). Thus the following numerical results (Fig. 4) not only hold information about the sizes of the D_es(ξ, α, λ), but also about the finite-size scaling of the D_mc(ξ, α, λ).

[Figure 4 caption: The standard-deviations are nicely described by tilted parabolae. This suggests that the standard-deviations decrease as ∝ N^{−0.5}.]
A systematic survey of the D_es(ξ, α, λ) for all α, λ is numerically very costly. We thus concentrate on cases where the violation of the JR is pronounced for ξ ≠ 1, i.e., α = 0.4, λ = 0.25, cf. Fig. 3. Figure 4 shows statistical results on the D_es(ξ, α, λ) for increasing bath sizes N. (For clarity the results are displayed over the inverse bath size 1/N.) Displayed are the averages (diamonds) and standard deviations (vertical "error" bars) for a stiff system, ξ = 1, and two nonstiff systems, ξ = 0.6, 2. The statistics encompass 100 different D_es(ξ, α, λ) for adjacent i from the middle of the respective spectrum for each parameter set.
The following principles may be inferred from Fig. 4: the averages appear to be independent of the system size N, thus the D_mc(ξ, α, λ) are independent of the system size; hence Fig. 3 provides a representative picture also for bath sizes other (larger) than N = 4000. The standard deviations of the D_es(ξ, α, λ) decrease with bath size, presumably as ∝ N^{−0.5}, as suggested by the tilted parabolae.
These findings strongly indicate that, for stiff systems, the JR is indeed fulfilled even for pure initial energy eigenstates in the limit of large bath (total system) sizes. Note that in this case the statistical character of the corresponding work-PDFs is entirely due to pure quantum uncertainties. Furthermore, the JR appears to be always violated for pure initial energy eigenstates in the limit of large bath (total system) sizes if the system is nonstiff.
VI. DISCUSSION
In this article we analytically show that the Jarzynski relation also holds for a broad class of non-Gibbsian initial states in quantum systems under certain conditions. For micro-canonical initial states these conditions are: an exponentially growing DOS of the initial and final Hamiltonian, and stiff work-PDFs, i.e., work-PDFs that are independent of the initial energy. Moreover, numerics indicate that the converse also holds: systems that do not comply with the stiffness condition actually do violate the JR for micro-canonical initial states, independent of the size of the system.
In order to analytically show the validity of the Jarzynski relation for initial energy eigenstates we exploit an additional assumption on the work-PDFs called "smoothness", which is expected to hold for large systems. This expectation is supported by numerics for some examples, which show that the Jarzynski relation is fulfilled in the limit of large systems for systems that do exhibit smoothness, and violated for systems which do not.
To conclude, there appears to be a very tight link between the applicability of the Jarzynski relation and stiffness/smoothness for non-Gibbsian initial states, which deserves further exploration.

Appendix A

Figure 5 shows the probabilities to perform zero work. For ξ = 1.0 the probabilities p_E(0) appear to be approximately independent of E, while for ξ = 0.6 and ξ = 2.0 we find a significant dependence.
While for larger bath dimensions d the work-PDFs become smoother, the slope for ξ = 0.6, 2.0 appears to be independent of d.

Appendix B

In Sec. IV we considered deviations from the JR for various combinations of ξ, α and λ, but for a fixed initial energy E_0, and found that for some combinations of these parameters the JR appeared to be fulfilled even though condition Eq. (13) is violated. We now consider the dependence of these deviations on the energy of the initial state ρ(0), with the aforementioned parameters held constant. We consider micro-canonical initial states, defined according to Eq. (30), with various energies E. The resulting deviations D(ξ = 2.0, α = 0.45, λ = 0.15) are displayed in Fig. 6.
Note that for this parameter combination we found the JR fulfilled for the previously considered initial energy E_0. The data suggest that there is only a small energy range for which the JR is approximately fulfilled, and E_0 accidentally lies within this region. The energy dependence for other α and λ looks quite similar. So we can find specific micro-canonical initial states which comply with the JR, even if condition Eq. (13) is not fulfilled. But since this is a feature of a very specific combination of system and initial state, we conclude that the JR is not fulfilled by this system and driving protocol in general.
In contrast, for ξ = 1 there is a wide region of initial energies that fulfill the JR, which is a direct consequence of the conditions Eq. (13) and Eq. (14).

| 2020-07-24T01:00:25.309Z | 2020-07-23T00:00:00.000 | {
"year": 2020,
"sha1": "40830dadd2dbcb6fa02493312ccb5077b0ba2b44",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2007.11829",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "40830dadd2dbcb6fa02493312ccb5077b0ba2b44",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
}
247937078 | pes2o/s2orc | v3-fos-license |

Phenotypic frailty in people living with HIV is not correlated with age or immunosenescence
Background: It has been hypothesized that HIV-1 infection prematurely "ages" individuals phenotypically and immunologically. We measured phenotypic frailty and immune "aging" markers on T-cells of people living with HIV on long-term, suppressive anti-retroviral therapy (ART) to determine if there is an association between frailty and immunosenescence.

Methods: Thirty-seven (37) community-dwelling people living with HIV were measured for frailty using a sensor-based frailty meter that quantifies weakness, slowness, rigidity, and exhaustion. An immunological profile of the patients' CD4+ and CD8+ T-cell expression of cell surface proteins and cytokines was performed (n = 20).

Results: Phenotypic frailty prevalence was 19% (7/37) and correlated weakly with the number of past medical events accrued by the patient (r = 0.34, p = .04). There was no correlation of frailty with age, sex, prior AIDS diagnosis or HIV-1 viral load, or IFN-γ expression by CD4+ or CD8+ T-cells. There were more immune-competent (CD28+ CD57−) cells than exhausted/senescent (CD28− CD57+) T-cells.

Conclusion: Frailty in people living with HIV on long-term, suppressive ART did not correlate with aging or T-cell markers of exhaustion or immunosenescence.
Introduction
In the 1980s and '90s, the physical manifestations of untreated HIV-1 infection and the side effects of anti-retroviral therapy (ART) were common and obvious to all clinicians. For example, weight loss, cutaneous Kaposi's sarcoma, lipodystrophy, and neuropathy were commonplace. 1 One hypothesis advanced to explain the physical deterioration in people living with HIV was that the HIV-1 virus aged the patient "with an earlier occurrence of a phenotype that resembles the phenotype of frailty in older adults without HIV infection". 2 Another hypothesis claimed that HIV/AIDS "compress[ed] the aging process, perhaps accelerating comorbidities and frailty." 3 The implication was that HIV-1 infection, particularly when untreated, leads to premature aging and frailty. This concept still has adherents today, 4 although we have shown that ART actually reverses some of these processes. 5 It is a fact that untreated HIV-1 infection is associated with the onset of a frail phenotype in some individuals. In the current era, when therapy is advocated for all patients, it is therefore important to establish whether ART prevents, ameliorates, or reverses frailty and immunosenescence. The irreversible loss of physical function over time can be portrayed as a spiral of decreased mobility and activity leading to the pre-frail state and, finally, frailty. 6 Untreated HIV-1 infection also leads to immunosenescence, with the accumulation of CD8+ and CD4+ T-cell subsets associated with aging. For example, terminally differentiated CD28−/CD57+ T-cells found in untreated HIV-positive patients are the hallmark of immunosenescence, whereas the loss of naïve CD4+ and CD8+ T-cells demonstrates an ineffective response to the HIV-1 infection. 7 In aging uninfected individuals, there is a major decline in CD4+ and CD8+ cells expressing CD28, in itself an expression of immunosenescence. 8 In this work, we measure physical frailty as well as immunological markers of aging in a group of older patients, mostly men, on long-term, suppressive ART. Our goal was to determine if there is some association of frailty with immunosenescence in people living with HIV on long-term ART. Several markers of inflammation, such as CRP and IL-6, have been studied with regard to their contribution to aging and frailty, 9 and others have looked at socioeconomic factors. 10 However, we chose immunological markers of aging in people living with HIV, comparing markers in nonfrail with frail individuals living with HIV. We wondered if frailty was correlated with, or perhaps causally linked to, immunosenescence. We found that prolonged, suppressive ART restores many immunological parameters previously damaged by HIV, even returning some immunological measurements to normal values. 11 Although HIV targets the immune system and clearly contributes to frailty, both processes, frailty and immunosenescence, appear to be reversible. 5,11

Methods
Demographics
Our large urban HIV clinic serves the surrounding metropolitan area of Tucson and southern Arizona (approximately 1800 patients). The same physician (SAK) has followed these patients for over 20 years and >91% are virally suppressed (meaning <200 copies of HIV-1 RNA 12 ). Some subjects participated in previous aging/frailty studies. For this study we recruited patients who were compliant with ART, of an older age, virally suppressed for years, and who consented to cell harvesting for our studies of immunological aging. 11,13,14 Recruits were not chosen on the basis of frailty, as that was measured later. It is important to note that frailty was determined within a year of cell surface analysis. 11 The study was approved by the Institutional Review Board of the University of Arizona, Tucson, Arizona. Informed consent was obtained from all participants, and the study was performed in accordance with relevant guidelines and regulations approved by the University of Arizona Biosafety Committee. Baseline demographic and medical information was collected at the time of frailty measurement and included age, sex, occurrence of specific past medical events (heart disease, lung disease, neurological disease, arthritis, cancer, and surgery) resulting in hospitalization for two or more days in the last year or at any time in the past, the last CD4+ count, the last viral load value, and whether the patient had been previously diagnosed with AIDS. These data are available from a dedicated EMR system accessed by the investigators.
Frailty measurements
Measurements of frailty were performed using a sensor-based upper extremity method called the Frailty Meter (FM) (Frailty Meter™, Biosensics, Newton, MA). 15,16 The FM consists of one wrist-worn sensor and a wirelessly connected tablet. It works by quantifying weakness, slowness, rigidity, and exhaustion during a 20-second repetitive elbow flexion/extension task using the wrist-worn sensor. The four scores indicate how different aspects of the patient's performance contribute to their frailty and generate a frailty index score (FI) ranging from zero to one; higher values indicate progressively greater severity of frailty. Twenty (20) seconds was chosen from a prior study 18,19 in which a 20-second repetitive elbow flexion/extension exercise was long enough to capture alterations in elbow angular velocity due to the presence of the exhaustion phenotype (based upon the Fried frailty exhaustion phenotype 17 ), but not so long as to produce a noticeable alteration in those without the exhaustion phenotype. Using a machine learning model, the measured phenotypes are mapped onto a continuous FI scale ranging from 0 to 1. This methodology has been validated 15 against the Fried criteria, 17 which serve as the "gold standard" of the phenotypic measurement of frailty and which we performed in all prior frailty investigations. The sensor is attached to a tablet which provides a rapid readout of the results. We measured the FI using the dominant and non-dominant hand, as well as a dual-task exercise which entailed counting backward from 100 in increments of three while performing the elbow flexion with the dominant hand. In a previous study, it was demonstrated that the dual-task exercise allows distinguishing between older adults with and without cognitive impairment. 15 An FI over 0.27 indicates that the patient is frail.
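To make the scoring concrete, the sketch below combines the four phenotype scores and applies the 0.27 cut-off. The Frailty Meter maps the measured phenotypes through a machine-learning model whose details are not given in the text, so the equal-weight average used here is purely a hypothetical placeholder; only the 0-1 range and the 0.27 frailty cut-off come from the paper.

```python
def frailty_index(weakness, slowness, rigidity, exhaustion):
    """Map four phenotype scores (each assumed scaled to 0-1) to an FI.

    The real Frailty Meter uses a machine-learning model for this
    mapping; the unweighted mean below is only a stand-in.
    """
    scores = (weakness, slowness, rigidity, exhaustion)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("phenotype scores must lie in [0, 1]")
    return sum(scores) / len(scores)

def is_frail(fi, threshold=0.27):
    """An FI over 0.27 indicates that the patient is frail (per the text)."""
    return fi > threshold

# Example: a patient with moderate weakness and exhaustion.
fi = frailty_index(weakness=0.4, slowness=0.2, rigidity=0.1, exhaustion=0.5)
print(f"FI = {fi:.2f}, frail: {is_frail(fi)}")
```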
Measurement of Depression
We measured all subjects for the presence of depression using the Center for Epidemiological Studies-Depression (CES-D) test. The CES-D is a quick 20-question test developed to detect depression. 20 A score ≥22 indicates the possibility of major depression; scores between 15 and 21 suggest moderate to mild depression; a score <15 indicates no depression.
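These cut-offs translate directly into a small classifier; the sketch below simply encodes the thresholds stated above.

```python
def cesd_category(score):
    """Classify a CES-D score using the cut-offs given in the text."""
    if score >= 22:
        return "possible major depression"
    elif score >= 15:   # scores 15-21 inclusive
        return "moderate to mild depression"
    else:               # scores below 15
        return "no depression"

for s in (5, 18, 30):
    print(s, "->", cesd_category(s))
```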
Twenty (20) of 37 patients in this study completed the entire cell surface analysis.
Statistical analysis: frailty analysis
A Pearson's correlation coefficient and corresponding p-value were computed for the FI and CES-D scores against each clinical factor: age, last CD4+ cell count, and number of past medical events; binary variables were sex, past medical event, AIDS diagnosis, and whether virally suppressed (<200 copies/mL). Correlations greater than 0.8 or less than −0.8 indicate a strong linear relationship between the outcome and predictor, 21 and p-values less than 0.05 are considered statistically significant. All correlations and p-values were calculated in SAS version 9.4.
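The analysis was run in SAS; as an illustration only, an equivalent computation in Python with SciPy (our substitution, with made-up data) would look like this, together with the |r| > 0.8 and p < 0.05 interpretation rules used above.

```python
from scipy.stats import pearsonr

# Hypothetical paired observations: frailty index versus number of
# past medical events for a handful of participants.
fi     = [0.12, 0.25, 0.31, 0.18, 0.40, 0.22, 0.35]
events = [1, 2, 4, 1, 5, 2, 3]

r, p = pearsonr(fi, events)
strong      = abs(r) > 0.8   # |r| > 0.8: strong linear relationship
significant = p < 0.05       # significance threshold used in the study

print(f"r = {r:.2f}, p = {p:.3f}, strong: {strong}, significant: {significant}")
```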
Immunologic analysis
Cell population totals were graphed using median values with 95% confidence intervals. An unpaired, nonparametric Mann-Whitney test was used to determine statistical significance between different populations. A linear regression model was used for all linear plots comparing a population of CD4 and CD8 T-cell counts against FI or age. Statistical significance was obtained by comparing the slope of the population to a slope of zero. In the event both linear regression slopes were significantly non-zero, the slopes were compared to each other to determine if they were significantly different. The software used was GraphPad Prism version 7. The following cells and surface markers were investigated: total CD4 and CD8 cell counts/percentages; CD4 CD28, CD4 CD57, CD4 CD28 CD57, and CD8 CD28, CD8 CD57, CD8 CD28 CD57; CD4 and CD8 production of IFN-γ (PHA-stimulated and unstimulated); and CD8 TNF-α (PHA-stimulated and unstimulated).
[Figure 1 caption, panels (b)-(d): (b) CD8+CD28+/− cells versus the Frailty Index (dominant hand); no significant correlation between the Frailty Index and CD8+CD28+, r = 0.07222, or CD8+CD28−, r = 0.07222. (c) CD8+CD28 CD57 subsets versus the Frailty Index (dominant hand); the difference in the two cell populations was significant, p < .0001; no significant correlation between the Frailty Index and CD28+CD57−, r = 0.04129, or CD28−CD57+, r = 0.03515. (d) CD8+CD57 cells versus the Frailty Index (dominant hand); the difference in the two cell populations was significant, p < .0001; no significant correlation between the Frailty Index and CD8+CD57+, r = 0.0163, or CD8+CD57−, r = 0.0163.]
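The analyses described above were performed in GraphPad Prism; the sketch below reproduces the same two steps, a Mann-Whitney comparison and a slope-versus-zero regression test, in Python with SciPy (our substitution, using made-up cell counts).

```python
from scipy.stats import mannwhitneyu, linregress

# Hypothetical per-participant counts for two cell populations.
cd4_counts = [650, 720, 810, 540, 905, 760]
cd8_counts = [320, 410, 280, 500, 450, 390]

# Unpaired, nonparametric comparison of the two populations.
u, p_mw = mannwhitneyu(cd4_counts, cd8_counts, alternative="two-sided")

# Linear regression of a cell count against the frailty index;
# linregress reports a two-sided p-value for slope != 0.
fi  = [0.15, 0.22, 0.10, 0.35, 0.28, 0.19]
fit = linregress(fi, cd4_counts)

print(f"Mann-Whitney U = {u:.1f}, p = {p_mw:.3f}")
print(f"slope = {fit.slope:.1f}, p(slope != 0) = {fit.pvalue:.3f}")
```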
Results
Frailty measurements
Thirty-seven (37) HIV-positive patients, compliant with ART for years, completed frailty testing. The demographic data are presented in Table 1. The average participant was 60.3 years old, male, had <20 copies of HIV-1 RNA on the most recent viral load test, had taken ART for 15.8 years, had at least one past medical event (n = 33 [89%]), and did not have major depression as measured by the CES-D.
The majority of participants had FI scores that fell below 0.27, that is, they were not frail (the average FI was 0.2 ± 0.1). The number of participants that fit the definition of frail using the dominant hand, non-dominant hand, and dual-task tests was 7 (19%), 6 (16%), and 11 (30%), respectively. Table 2 shows the correlation between the FI, using each of the three methods of measurement, and several pertinent clinical variables. We found no statistically significant correlations between the independent health variables and the FI, except for the number of medical problems when using the dominant hand (r = 0.34, p-value = 0.04). There were no statistically significant correlations between the FI with either hand and the following: age, sex, last CD4 count, past medical event, AIDS diagnosis, and undetectable viral load. The seven frail patients (measuring the dominant hand) did have an average of 3.1 prior medical problems (as defined in Methods), compared to the non-frail participants, who had an average of 2.1 prior medical problems (Table 1).
For the CES-D score, there was a weak correlation with the number of past medical events (r = 0.35, p-value = 0.03) and a moderate correlation with the occurrence of past medical event(s) (r = 0.47, p-value = 0.003) (Table 3).
Immunologic measurements
We measured the surface expression of various HIV-1 and immune aging/exhaustion markers on CD4+ and CD8+ T-cells of the participants. For these results, we used the FI as measured with the dominant hand (since there was no difference between hands; see above). The participants had a mean CD4 cell count of 732 and all were virally suppressed (Table 1). Although the CD4+ and CD8+ T-cell counts were significantly different, there was no correlation between their numbers and the FI (Figure 1(a)). Likewise, there was no correlation of the FI with the following cell populations: CD8+CD28+/− (Figure 1(b)), CD8+CD28+/− CD57+/− (Figure 1(c)), CD8 CD57+/− (Figure 1(d)), CD4 CD57+/− (Figure 2(a)), and CD4 CD28+/− (Figure 2(b)). However, CD8+ cells did show a trend, although not significant, of TNF-α production in the unstimulated state, indicating a low level of cytokine production by CD8+ cells (Figure 2(c)). There was no correlation of the FI with CD4+ or CD8+ IFN-γ production (Figure 2(d)).
[Figure 2 caption: (a) CD4+CD57 cells versus the Frailty Index (dominant hand); the difference in the two cell populations was significant, p < .0001; no significant correlation between frailty and CD4+CD57+, r = 0.04644, or CD4+CD57−, r = 0.04636. (b) CD4+CD28 cells versus the Frailty Index (dominant hand); the difference in the two cell populations was significant, p < .0001; no significant correlation between the Frailty Index and CD4+CD28+, r = 0.0001972, or CD4+CD28−, r = 0.001972. (c) CD8+ TNF-α production versus the Frailty Index; a non-significant trend toward increasing cytokine production with the Frailty Index (r = 0.05027; p = .3561). (d) IFN-γ production versus the Frailty Index; the difference in the two cell populations was significant, p = .0001 (non-parametric Mann-Whitney test); no correlation between the Frailty Index and IFN-γ production by CD4 cells (r = 0.05276) or CD8 cells (r = 0.02869).]
Discussion
We have been interested in the effect ART has upon "phenotypic or physical frailty" and "premature aging" of T-cells. One standard method of determining frailty measures physical parameters, 17 whereas another approach adds up cumulative deficits to arrive at a frailty score. 22 We employed sensor-based technology to measure physical parameters of frailty. Although many investigators treat frailty occurring in the elderly and in people living with HIV as identical processes, we believe "frailty" in these two populations is quite different. The characteristics of each cohort have been delineated. 23 For example, in the community-dwelling elderly, frailty is age-related, but it is not in people living with HIV. 24 The prevalence of frailty in the community-dwelling elderly is ∼7%, whereas it is much higher in people living with HIV, ∼20%, where it is often transient and reversible. 5 Sarcopenia is a defining feature of frailty in the elderly but is not as important in people living with HIV, where depression is a common factor. 24 As mentioned above, frailty is often reversible in people living with HIV, 5 whereas this rarely occurs in the frail elderly. 24 Following frail people living with HIV over time, as we have done, we find frailty to be highly fluid. Others have noted the same phenomenon. A recent study found that 36% of HIV-positive and age-matched uninfected control individuals changed frailty status between two consecutive visits. 25 Nevertheless, the prevalent view assumes that the frailty occurring in community-dwelling elderly and in people living with HIV are similar or identical processes. 26 Our study found results similar to those we have reported before. 5,24 For example, seven of 37 HIV-positive subjects were frail, for a prevalence of 19%, and frailty status was not correlated with age, sex, the CD4+ cell count, being virally suppressed with ART, or whether the subjects had been diagnosed with AIDS in the past or were depressed (Table 2). The sole correlation with frailty we found was with the number of past medical problems (p = 0.04). Past medical problems correlated with depression (p = 0.003) (Table 3), and this relationship may explain the importance of depression in frail HIV-positive individuals that we noted before. 24 We also found that frailty is often reversible with institution of ART and that ART over time protects against frailty. 5 It is claimed that the immune aging or immunosenescence seen in HIV infection is similar to that which occurs in the uninfected elderly, characterized by accumulation of highly differentiated immunocompetent cells with a concomitant reduction in hematopoietic progenitor and naïve cells. 27 In HIV-specific CD8+ T-cells, this population of highly differentiated cells expresses CD57, loses its proliferative ability, and undergoes apoptosis. 28 These CD8 T-cells fall into four separate populations: CD28+CD57−, CD28−CD57+, CD28−CD57−, and CD28+CD57+. 29 In our subjects, all of whom were virally suppressed, there was no correlation of the FI with any acknowledged T-cell marker of senescence, including the absence of CD8+CD28+ or CD8+CD57+ cells. Although there was a residuum of TNF-α detected in some cells, there was no correlation between TNF-α expression and the FI (Figure 2(c)). This residual TNF-α expression may occur in some of our treated patients with lower total CD4+ cell counts. 30
Our work has demonstrated that long-term ART with higher CD4+ T-cell numbers improved patients immunologically, reducing HIV-specific CD8+ T-cell responses compared to those with lower CD4+ T-cell counts. 11 In that same study, older people living with HIV exhibited decreasing levels of CD8+ T-cell responses with increasing age. 11 Moreover, virologic control of HIV-positive patients on long-term ART was associated with a significant reduction in terminally differentiated T-cells, demonstrating decreased cell senescence and an improvement in the ratio of naïve to memory T-cells. 11 These studies showed a major improvement in the immunological markers associated with HIV-1 infection in older patients maintained on long-term ART.
Conclusion
We showed in this work that people living with HIV on long-term, suppressive ART exhibit similar prevalence rates of frailty to our previous studies (i.e., ∼20%), but frailty did not correlate with age, sex, prior diagnosis of AIDS, or immunosenescence. Specifically, frailty did not correlate with any CD4+ or CD8+ T-cell surface proteins known to be associated with immunosenescence such as is seen in frail, community-dwelling elderly individuals. The physical and immunological characteristics of frail, virally suppressed people living with HIV differed from those of the frail, community-dwelling elderly. Thus, it is likely that the etiology of frailty in these two populations is different. | 2022-04-05T06:22:54.766Z | 2022-04-04T00:00:00.000 | {
"year": 2022,
"sha1": "3c4f04a75d496256bab545b51ca1f94c3d250870",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/09564624221091455",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "a062b3c4769e1ce31bc16d24de34f18c292c862f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8186046 | pes2o/s2orc | v3-fos-license | Update of a comparative analysis of cost minimization following the introduction of newly available intravenous iron therapies in hospital practice
Background
The clinical need to be able to administer high doses of intravenous iron conveniently as a rapid infusion has been addressed by the recent introduction of ferric carboxymaltose and subsequently iron isomaltoside 1000. Neither requires a test dose. The maximum dose of ferric carboxymaltose is 1000 mg. The maximum dose of iron isomaltoside 1000 is based on 20 mg/kg body weight without a specified ceiling dose, thereby increasing the scope of being able to achieve total iron repletion with a single infusion. This ability to give high doses of iron is important in the context of managing iron deficiency anemia, which is associated with a number of clinical conditions where demands for iron are high. It is also an important component of the strategy as an alternative to blood transfusion. Affordability is a key issue for health services. Recent price changes affecting iron sucrose and ferric carboxymaltose, plus modifications to the manufacturers’ prescribing information, have provoked this update.
Methods
This study is a comparative analysis of the costs of acquiring and administering the newly available intravenous iron formulations against standard treatments in the hospital setting. The costs include the medication, nursing costs, equipment, and patient transportation. Three dosage levels (600 mg, 1000 mg, and 1600 mg) are considered.
Results and conclusion
The traditional standard treatments, blood and iron sucrose, cost more than the alternative intravenous iron preparations across the dose spectrum and sensitivities. Low molecular weight iron dextran is the least expensive option at the 1600 mg dose level but has the caveat of a prolonged administration time and requirement for a test dose. At the 600 mg and 1000 mg dose levels, both iron isomaltoside 1000 and ferric carboxymaltose are more economical than low molecular weight iron dextran. Iron isomaltoside 1000 is less expensive than ferric carboxymaltose at all dose levels. Newly available iron preparations appear to be clinically promising, cost effective, and practical alternatives to current standards of iron repletion.
Introduction
The ability to administer high doses of intravenous (IV) iron rapidly, without the need for a test dose, is an important development in the strategy for treating iron deficiency anemia (IDA). Ferric carboxymaltose was the first IV iron to be introduced to the UK that did not require a test dose. It can be given rapidly and administered at up to 20 mg per kg body weight to a ceiling of 1000 mg per infusion. 1 Iron isomaltoside 1000, whilst also administered rapidly, can be administered at up to 20 mg of iron per
kg of body weight. 2 The absence of a specific dose ceiling for iron isomaltoside 1000 offers the opportunity to deliver very high doses (total doses) in a single administration. 2 This may be of practical importance when calculating the treatment dose based on the Ganzoni formula, which incorporates amounts for replenishing body iron stores. 3 A number of disorders associated with IDA commonly require doses well in excess of 1000 mg. [4][5][6][7][8] A prerequisite for undertaking a cost minimization study is establishing similarity of outcome from the treatment options. 9 Iron treatment may be considered a "basic physiological requirement" (a micronutrient). 10 There is no evidence to indicate that the choice of IV iron formulation affects physiological uptake or iron metabolism. Thus, whilst the literature provides evidence of efficacy (as measured by a range of measurable outcomes) for each of the IV iron treatment options (including blood), there are no comparative data to suggest a physiological difference in performance. [11][12][13][14][15] It is also recognized that the legacy of adverse events (ADEs) experienced with Imferon ® (Fisons, Ipswich, UK), 16 a high molecular weight iron dextran formulation withdrawn from the European market almost two decades ago, has been superseded by subsequent treatments which are associated with low levels of similar ADEs; as such, any costs associated with ADEs are likely to be similar across the treatment options. 1,2,[16][17][18][19] The low level of ADEs associated with the two latest introductions (ferric carboxymaltose and iron isomaltoside 1000) is reflected in their approved modes of administration, as neither requires the administration of a test dose. 1,2 Patients receiving iron sucrose require a test dose prior to receiving their first dose. 18 Those receiving low molecular weight iron dextran require a test dose at the time of each administration. 19 The purpose of a test dose is to predose (challenge) the patient with a small amount of iron (eg, 20-25 mg) of the chosen formulation, followed by a period of observation to establish the likelihood of the formulation provoking an ADE. A test dose, followed by an observation period, extends the overall administration time and cost (nurse observation time). The arrival of two formulations where a test dose is explicitly excluded increases convenience (for both patients and health care professionals), reduces the overall administration time, and, furthermore, implicitly endorses the safety profile of the latest IV iron therapy options. 1,2 The original paper examining the comparative costs of IV iron therapy and standard blood transfusions was first published in March 2011. 20 However, a subsequent modification in the price of two of the products included in the initial analysis reduced the validity of the results. This subsequent study reflects the changed product acquisition costs, the current price of blood in England and Wales, and the most recently published nursing costs.
Background
Blood is a declining resource. The safety associated with the receipt of a blood infusion has progressively improved over the last decade but there are recognized risks (and costs) associated with a blood transfusion. 21 Strategies to reduce the risks have led to the imposition of restrictions on members of the population who can be blood donors. This has resulted in a decline in the volume of blood donated. Additionally, following the identification of blood-borne diseases in blood donated by UK donors (eg, prion-related diseases, including Creutzfeldt-Jakob disease), certain cohorts of the population are prevented from receiving blood and blood products prepared from blood donated in the UK.
The National Blood Transfusion Service has encouraged the conservation and appropriate use of blood and blood products. 15 The policy of reducing inappropriate blood use is aimed at reducing both the intrinsic risks associated with blood and risks associated with the process of matching and administration.
These developments have come at a time when the importance, implications, and prevalence of IDA are being appreciated and associated with a broad range of clinical conditions and situations, for example, in:
• chronic kidney disease patients, including renal transplant patients with/without erythropoietin replacement therapy
• patients undergoing various modes of dialysis therapy
• anemia associated with pregnancy (pre/postpartum, following hemorrhage)
• anemia following "high blood loss" surgical procedures; eliminating or reducing the need for postsurgery transfusion (eg, orthopedics, colorectal surgery)
• the elderly (often iron deficient and/or anemic); especially prior to surgery where blood loss may be significant
• IDA associated with anemia of chronic disease
• chronic IDA (often presenting with acute symptoms)
• chronic occult blood loss (inflammatory bowel disease)
• anemia associated with cancer or the use of chemotherapeutic agents
• menorrhagia (heavy uterine bleeding)
• chronic heart failure.
In these situations, iron-store repletion provides the substrate for erythropoiesis, thereby restoring or improving
hemoglobin levels. This can commonly be achieved without the need for concomitant erythropoiesis-stimulating therapy. 22 Anemia of chronic disease may be a comorbidity associated with a number of chronic conditions (eg, rheumatoid arthritis, inflammatory bowel disease). In these conditions, where hepcidin blocks both the absorption of iron from the gastrointestinal tract and the mobilization of stored iron, IV iron has been demonstrated to bypass these blocks. 23,24 Compared with oral iron, IV iron repletes iron stores more rapidly and can be given at high doses, as a total dose infusion, which improves compliance. 2,19,25 Oral iron is associated with poor tolerance, poor compliance, and a high frequency of ADEs. 26 It is poorly absorbed in patients with anemia of chronic disease and does not appear to bypass the immobilized iron stores. 23,24 As such, its role as a useful source of iron supplementation is limited and its low cost is often a "false economy." However, in spite of such caveats, it is still commonly used as first-line iron supplementation for patients diagnosed with chronic kidney disease.
The administration of IV iron may be considered a more physiological method of addressing chronic IDA than a blood transfusion. A transfusion addresses the acute symptoms of anemia, but is a poor and expensive source of iron, whereas IV iron provides physiologically available iron for both erythropoiesis and replenishing iron stores.
The purpose of this paper is to examine the comparative cost to the health care economy of the IV iron supplementation options, including blood transfusion. The economic importance is driven by the need to optimize the use of services in the current challenging financial climate whilst serving the needs of patients and maintaining patient safety. In these circumstances, value for money and the overall relative cost of treatments are important when making policy prescribing decisions. Given that all options will achieve similar clinical responses, a cost minimization analysis was undertaken to determine the least expensive option overall.
Methods
The costs of administering iron isomaltoside 1000 and ferric carboxymaltose are compared with the cost of administering a blood transfusion, iron sucrose, and low molecular weight iron dextran across a range of doses in a secondary care (hospital) setting. The cost model includes transportation, nursing, and equipment costs.
Initially, three matrix spreadsheets were established (one at each dose level) for the total costs of administering each of the options incorporating the sensitivity parameters (transport 10% and 20% of patients) and nurse grade 6 (nurse team leader) and 7 (nurse team manager) (ie, four total costs for each treatment option at the three dose levels).
From these, for each treatment option, and at each dose level, a mean cost was established with the maximum and minimum levels taken from their respective sensitivity calculations (Table 4). This allowed a comparison to be made and provided an indication of the level of robustness of the relative costs.
Cost differences between each of the three traditional treatments were calculated with reference to each of the two recently introduced formulations using the mean costs with the differences calculated as an absolute and percentage difference (Table 5).
Finally, a direct "head to head" comparison at each dose level was undertaken between the mean cost of ferric carboxymaltose and iron isomaltoside 1000, which provides the actual and percentage differences for each dose level (Table 6).
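The construction described above can be expressed compactly. Since the paper's actual per-option cost inputs are tabulated elsewhere and not reproduced here, every figure in this sketch other than the nurse rates and expected transport cost is a hypothetical placeholder; only the structure (three dose levels × four sensitivity combinations, then mean/minimum/maximum per option) follows the text.

```python
# Sensitivity matrix per the text: 3 dose levels x (2 transport shares x 2 bands).
doses       = (600, 1000, 1600)   # mg
transport   = (0.10, 0.20)        # 10% and 20% of patients transported
nurse_bands = (6, 7)              # team leader / team manager

def total_cost(option, dose_mg, share, band):
    """Placeholder standing in for one spreadsheet cell (GBP)."""
    drug_per_mg = {"iron isomaltoside 1000": 0.20,
                   "ferric carboxymaltose": 0.22}[option]   # hypothetical
    nurse_rate  = {6: 70.0, 7: 81.0}[band]   # GBP/hour, from the text
    # Assume a half-hour of attended nursing and GBP 60 expected transport
    # cost per transported patient (see the Transportation section below).
    return drug_per_mg * dose_mg + nurse_rate * 0.5 + share * 60.0

for option in ("iron isomaltoside 1000", "ferric carboxymaltose"):
    for dose in doses:
        cells = [total_cost(option, dose, s, b)
                 for s in transport for b in nurse_bands]
        mean = sum(cells) / len(cells)
        print(f"{option:24s} {dose:>5} mg: mean {mean:7.2f}, "
              f"min {min(cells):7.2f}, max {max(cells):7.2f}")
```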
Parameters for the revised cost model
Standard treatment comparators
Standard treatment will vary according to local practice and medical specialty. Traditionally, blood would have been the sole option in most of the indications/situations described. IV iron is used almost exclusively in
hemodialysis patients, whereas, in other situations, IV iron is progressively replacing the practice of administering a blood transfusion. As a standard treatment, blood is included as a comparator. Iron sucrose (considered a standard treatment) and low molecular weight iron dextran are also used as comparators.
Dose levels
The comparator doses were chosen to reflect clinical practice. Blood is transfused in multiples of "units." Each unit may be considered to approximate 200 mg of elemental iron.
Iron doses are commonly calculated using the Ganzoni formula. 3 It is not uncommon for an individual's requirement to be up to 2000 mg or higher across the range of conditions associated with anemia. [4][5][6][7][8] For the purposes of this cost minimization modeling, three levels of administration were chosen (to provide a dose sensitivity matrix): 600 mg, 1000 mg, and 1600 mg. These allowed direct comparison with units of blood.
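The Ganzoni formula itself is not reproduced in this paper; in its commonly published form, the total iron deficit (mg) is body weight (kg) × (target Hb − actual Hb, in g/dL) × 2.4 + iron stores (conventionally 500 mg). The sketch below encodes that form and converts a dose into the equivalent number of blood units at approximately 200 mg of elemental iron each, as noted above.

```python
def ganzoni_dose_mg(weight_kg, current_hb, target_hb, iron_stores_mg=500):
    """Total iron deficit per the Ganzoni formula (Hb values in g/dL).

    deficit = weight x (target Hb - current Hb) x 2.4 + iron stores,
    with 500 mg a common convention for the iron-store term.
    """
    return weight_kg * (target_hb - current_hb) * 2.4 + iron_stores_mg

def equivalent_blood_units(dose_mg, mg_iron_per_unit=200):
    """Each unit of blood approximates 200 mg of elemental iron."""
    return dose_mg / mg_iron_per_unit

dose = ganzoni_dose_mg(weight_kg=70, current_hb=9.0, target_hb=13.0)
print(f"iron deficit: {dose:.0f} mg "
      f"(~{equivalent_blood_units(dose):.1f} blood units)")
```

For this 70 kg example the calculated requirement (1172 mg) already exceeds the 1000 mg ceiling of ferric carboxymaltose, illustrating why the absence of a dose ceiling can matter.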
Bioavailability and efficacy
In preparing this cost minimization model, it was assumed that each of the IV iron preparations impacts erythropoiesis and enters iron stores in a similar manner, directly related to the dose administered. There is no evidence to suggest that incorporation of iron into reticulocytes, elevation of hemoglobin levels, or development of iron stores differs. [28][29][30][31][32] The administration of IV iron differs physiologically from the administration of a blood transfusion. Blood results in an immediate rise in hemoglobin level. Iron from a blood transfusion is then recycled as the erythrocytes expire, but the resulting elevation in iron stores and hemoglobin level are considered to be similar for the purposes of this study.
ADEs
In the cost modeling, no allowance was made for occurrence of ADEs. These are infrequent and similar for iron sucrose and low molecular weight iron dextran. [17][18][19] The summaries of product characteristics for iron isomaltoside 1000 and ferric carboxymaltose indicate that ADEs associated with their use will be similar to those of currently available IV iron formulations. 1,2,18,19 Blood has higher levels of risk, both as a product per se and from the potential human error associated with compatibility testing and administration. However, no cost has been allocated to the treatment of these ADEs.
Dose and rate of administration limitations
In the cost modeling, the dose (including any constraints), rate of administration, and need for a test dose were taken from the manufacturers' prescribing information (Tables 1-3).
The manufacturers' instructions for undertaking a test dose were carefully incorporated into the modeling. For example, in the case of low molecular weight iron dextran, when administered for the first time, a 25 mg dose is given and the patient
observed for 45 minutes. 19 The balance can then be administered if there are no ADEs. For the second and subsequent infusions, the first 25 mg of iron is infused over 15 minutes and, if there are no untoward events, the administration can be continued. 19 When a total dose is administered, the patient should be observed for a further hour after the completion of the administration. 19 For the administration of iron sucrose, a test dose is required only for the first administration to a patient. 18 For the purposes of this study, it was assumed that this was the second (or subsequent) administration of iron sucrose or low molecular weight iron dextran to a patient. For the specified range of doses, an observation period was included in the administration times for low molecular weight iron dextran (as indicated in the manufacturer's prescribing information). 19 Given the recommended dilution volume for preparing the infusion, it was assumed that the administration plus observation period would be similar at each dose level, that is, 6 hours in total. In this analysis, 10 minutes was allowed for setup time across the range of preparations.
Transportation
This is an important factor when considering IV iron supplementation. Across the spectrum of patients with IDA, a proportion will be short of breath, perhaps with palpitations, will invariably be nonambulatory, and will be transported to hospital on a stretcher or in a wheelchair. Additionally, a number will be elderly, frail, and disabled. In the UK, ambulance services are paid for by the National Health Service. Two types of "transported" patient are considered: (1) those who are ambulatory, where the charge is GBP£12.00/single journey (GBP£24.00 return), and (2) those who are in a wheelchair or who require a stretcher, where the charge is GBP£48.00/single journey (GBP£96.00 return). 33 Across the spectrum of causes of IDA, it is difficult to establish the specific proportion of patients who require transportation and the ratio between ambulatory and nonambulatory patients. In this study, a sensitivity of 10% and 20% of patients requiring transportation is used. These percentages reflect transport requirements across the spectrum of patients with IDA. For example, few anemic pregnant women will require transport, but there will be a high demand for transportation by those undergoing dialysis or elderly persons undergoing surgery. It is assumed that those requiring transport will be equally split between those who are ambulatory and those who are nonambulatory (who require a stretcher or wheelchair).
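These assumptions reduce to a one-line expectation: with a fraction p of patients transported, split equally between ambulatory (GBP£24.00 return) and nonambulatory (GBP£96.00 return) journeys, the expected transport cost per treated patient is p × (0.5 × 24 + 0.5 × 96) = 60p. A sketch using only the figures above:

```python
def expected_transport_cost(p_transported,
                            ambulatory_return=24.0,
                            nonambulatory_return=96.0):
    """Expected transport cost (GBP) per treated patient, assuming the
    stated 50/50 split between ambulatory and nonambulatory journeys."""
    per_transported = 0.5 * ambulatory_return + 0.5 * nonambulatory_return
    return p_transported * per_transported

for p in (0.10, 0.20):   # the two sensitivity levels used in the model
    print(f"{p:.0%} transported -> GBP {expected_transport_cost(p):.2f} per patient")
```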
Giving sets, cannula, and dressing
For the purpose of this analysis, unit costs reported by Bhandari and Naudeer 34 were used. These were GBP£7.89 for a "giving set," GBP£0.74 for one cannula, and GBP£0.54 for a standard dressing.
Nursing time
The costs for 1 hour of patient-contact nursing time in the UK at midband 6 and 7 have risen to GBP£70.00 (+4.5%) and GBP£81.00 (+5.2%), respectively, and reflect the latest figures published by the Personal Social Services Research Unit (2009/10). 35,36 Nurse grades 6 (nurse team leader) and 7 (nurse team manager) are used to reflect the level of knowledge, experience, and responsibility required to run a nurse-led "anemia" service. In the cost allocations, assumptions are made with regard to allocating time to represent multitasking (ie, not dedicating sole time to an individual patient during a 6-hour low molecular weight iron dextran total dose infusion administration). Thus, for a short administration (approximately 30 minutes) time, a nurse is likely to attend for the duration. For an infusion taking about 60 minutes it is assumed that the nurse will spend 50% of their time with the patient, whereas, for a prolonged infusion of low molecular weight iron dextran, a nurse is considered to spend 33% of their time with the patient. (During the test-dose phase and observation phase this may be 100%.) The differences in administration times are reflected in the nursing time and, therefore, nurse costs required for the administration of each treatment option at each dose level.
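The nursing cost of one administration therefore depends on the hourly rate, the administration time, and the multitasking fraction. The sketch below encodes the attendance fractions stated above (100% for a short ~30-minute administration, 50% at ~60 minutes, 33% for a prolonged infusion); the exact minute boundaries between the regimes are our choice for illustration, not taken from the paper.

```python
NURSE_RATE = {6: 70.0, 7: 81.0}   # GBP per patient-contact hour (2009/10)

def attendance_fraction(admin_minutes):
    """Multitasking assumption from the text; cut-offs are illustrative."""
    if admin_minutes <= 30:
        return 1.0        # nurse attends for the whole short administration
    elif admin_minutes <= 60:
        return 0.5
    else:
        return 1.0 / 3.0  # prolonged infusion, eg, LMW iron dextran

def nursing_cost(admin_minutes, band):
    hours = admin_minutes / 60.0
    return NURSE_RATE[band] * hours * attendance_fraction(admin_minutes)

# A ~30-minute rapid infusion versus a 6-hour total dose infusion (band 6):
print(f"30 min:  GBP {nursing_cost(30, 6):.2f}")
print(f"360 min: GBP {nursing_cost(360, 6):.2f}")
```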
Cost of blood
The cost used in this modeling is now updated to that charged in England and Wales to NHS hospitals for 2010/11, which is GBP£125.00/unit (−6.37%) for red blood cells. (This does not include the cost of pretransfusion cross matching of a patient's blood or error checking. 41 )
Other costs
Expenditure considered minor or unlikely to be significant to the outcomes was excluded. This can be justified on the basis that, in any particular unit, the practice is likely to have a similar impact across the IV iron options. A cost deliberately omitted was that of the clinician. Whilst likely to be available during a transfusion, they would be undertaking other clinical/administrative duties, whereas a nurse would normally be responsible for administering the infusion and managing/monitoring the procedure. An example of a minor cost not included is that of the infusion fluid (normal saline), which costs GBP£0.70 per 250 mL. 34
Results
From Table 4, it is observed that both iron isomaltoside 1000 and ferric carboxymaltose are the lower cost options when compared with iron sucrose and blood at each dose level and across all levels of sensitivity. When compared with low molecular weight iron dextran, both have a lower cost at the 600 mg and 1000 mg levels; however, at the 1600 mg dose level, low molecular weight iron dextran offers a lower cost than both ferric carboxymaltose and iron isomaltoside 1000.
This same table indicates that low molecular weight iron dextran is less expensive than iron sucrose across the dose spectrum and across the sensitivity ranges. Blood is the highest cost option and this is without including the costs of cross matching.
The actual cost and percentage differences accruing from using either iron isomaltoside 1000 or ferric carboxymaltose compared with the current standard treatments are illustrated in Table 5. It is apparent throughout that greater savings are potentially realizable by adopting iron isomaltoside 1000 rather than ferric carboxymaltose and, on the single occasion that a traditional therapy is less expensive (ie, the 1600 mg dose of low molecular weight iron dextran), the saving is GBP£103.97 compared with ferric carboxymaltose but only GBP£32.52 when compared with iron isomaltoside 1000.
A direct comparison of the cost of using the latest two entrants at the three dose levels is presented in Table 6. The potential expenditure savings from using iron isomaltoside 1000 at each dose level range from GBP£3.02 at the 600 mg dose level (1.74%), to GBP£71.45 at the 1600 mg dose level (17.65%).
Discussion
Blood continues to be used to treat IDA, in the absence of acute blood loss, associated with a number of conditions. This is against NHS Blood and Transplant policies to reduce blood use, which include the use of IV iron as an alternative to blood. 15 Momentum is, however, gathering pace toward treating IDA with iron supplementation (normally with IV iron).
Renal medicine has been at the forefront of pioneering the use of IV iron. This practice historically developed following the introduction of iron sucrose. Iron sucrose can be administered in doses of up to 200 mg in a single administration and, as such, it was adopted in hemodialysis units. It is normally given to patients during one of their weekly hemodialysis sessions. This use of IV iron has resulted in a dramatic reduction in the requirement for blood transfusion.
The results of this updated cost minimization modeling (Tables 4 and 5) indicate that each of the IV iron options is less expensive than administering a blood transfusion at each iron repletion level, which may further encourage the consideration of iron as an alternative to blood in patients with IDA, especially in those diagnosed with chronic IDA.
Iron sucrose is well established as a standard treatment for IDA; however, its use is constrained by the maximum amount that can be given in a single administration. This is particularly pertinent when considering total iron repletion requirements. The Ganzoni formula is widely used to calculate these requirements. 3 This formula embraces repletion of iron stores (frequently 500 mg of iron). The resulting dose calculation, depending on the weight of the patient, prevailing hemoglobin level, target hemoglobin level, and cause of IDA, may be well above 1000 mg and may exceed 2000 mg. To achieve repletion with iron sucrose would require multiple administrations of 200 mg doses, which is impractical and inconvenient. Furthermore, apart from blood, it is the least attractive option from a cost perspective across the dose range under consideration.
Since the withdrawal of Imferon ® (high molecular weight iron dextran), low molecular weight iron dextran, available in the UK since the 1990s, has been at the forefront for clinicians wishing to administer large doses of iron at a single clinic visit. However, the prolonged infusion time may be considered a disadvantage both for the health service provider and the patient.
In this analysis, low molecular weight iron dextran has been shown to be less expensive than the standard treatments of blood and iron sucrose across the dose range. When compared to the more recently introduced options (ferric carboxymaltose and iron isomaltoside 1000) it is more expensive at the 600 mg and 1000 mg dose levels. At the 1600 mg dose level, ferric carboxymaltose is 34.6% more expensive to administer. Whilst iron isomaltoside 1000 is also more expensive, the difference is much less at 10.8% (GBP£32.52) and may perhaps be preferred (and justified) given the much reduced time required for administration, patient convenience, and potential increase in patient throughput (especially in the "payment by results" environment).
The direct comparison between iron isomaltoside 1000 and ferric carboxymaltose ( Table 6) suggests that iron isomaltoside 1000 offers potential savings when compared with ferric carboxymaltose at all three dose levels but especially at the higher dose end of the spectrum.
In the modeling, the key "drivers" are the cost of medication, the time for administration (affecting nurse resource), and transportation. The costs of the various iron formulations are National Health Service acquisition costs. The choice of available IV treatment options is limited to those included in this analysis. This analysis has not embraced oral iron supplementation. It is acknowledged that oral iron has a significant role in preventing and treating IDA and has a much lower cost than either IV iron formulations or blood. However, the patient population that receives IV iron or blood is largely defined as those for whom oral iron is not appropriate, where oral iron has not been tolerated, has resulted in an unacceptable level of side effects, where compliance is poor, or where treatment has not achieved target iron parameters (ie, hemoglobin or ferritin levels). 1,2,18,19 There is evidence to indicate that IV iron will achieve target parameters more quickly than oral iron and, more importantly, when compared with oral iron, overcomes the "hepcidin block" affecting iron absorption and mobilization of iron stores in patients where IDA is associated with chronic conditions. [23][24][25] The ability to give total dose repletion rapidly, in a single infusion, overcomes compliance issues and is highly convenient for a number of patient types (eg, the elderly, in pregnancy). It may be justified on the basis of achieving target hemoglobin and ferritin levels more rapidly than oral iron, for example, prior to elective surgery, thereby reducing the incidence of cancellations due to poor anemic status.
Conclusion
Parenteral iron treatment has advanced significantly as a result of the introduction of ferric carboxymaltose and subsequently iron isomaltoside 1000. The scope for administering rapid single high doses of iron, without the need for a test dose, to address IDA associated with various clinical conditions, is a welcome development. This further enhances the prospect of using IV iron as an alternative to a blood transfusion for treating chronic IDA. This may be particularly important in the strategy of reducing blood use and reducing the incidence and volume of blood transfusions in the UK. It is particularly pertinent to note that this can be undertaken at a cost well below that of blood.
This updated analysis confirms that blood as a source of iron is expensive and is the least attractive option from a cost perspective. Iron sucrose, which can be administered only in small 200 mg doses and requires a test dose, is also more expensive than the other available IV iron alternatives. Only at the highest dose level (1600 mg) does low molecular weight iron dextran offer cost savings worthy of any consideration compared with the two newest entrants.
Ferric carboxymaltose has a lower cost than iron sucrose and blood across the dose range and is only noticeably more expensive than low molecular weight iron dextran at the 1600 mg level, where the cost difference has to be balanced against administration time, convenience, and patient throughput. It is, however, more expensive than iron isomaltoside 1000 at each dose level (progressively, GBP£3.02 at 600 mg, GBP£17.92 at 1000 mg, and GBP£71.45 at the 1600 mg dose level).
Likewise, iron isomaltoside 1000 has a lower cost than iron sucrose and blood across the dose range; it compares favorably with low molecular weight iron dextran at the 600 mg and 1000 mg dose levels but is marginally more expensive at the 1600 mg level. Cost savings compared to ferric carboxymaltose prevail across the dose range but are less pronounced than previously published following this product's price reduction.
This analysis of the relative holistic cost of administering the treatment options endeavors to more closely reflect the "real world" situation when making prescribing policy decisions associated with the treatment of IDA.
Disclosure
The author reports no conflicts of interest in this work.
| 2017-04-06T02:21:13.892Z | 2011-12-12T00:00:00.000 | {
"year": 2011,
"sha1": "5daee5f64c6d2f3e47ab22744817d75504802697",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=11596",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d9019de0520bd3571183b48ade8f0a47027ebe7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15330950 | pes2o/s2orc | v3-fos-license | Muon Collider Overview: Progress and Future Plans
Besides continued work on the parameters of a 3-4 and 0.5 TeV CoM collider, many studies are now concentrating on a machine near 100 GeV that could be a factory for the s-channel production of Higgs particles. We mention the research on the various components in such muon colliders, starting from the proton accelerator needed to generate pions from a heavy-Z target and proceeding through the phase rotation and decay channel, muon cooling, acceleration, storage in a ring and the collider detector. We also mention theoretical and experimental R&D plans for the next several years that should lead to a better understanding of the design and feasibility issues for all of the components. This note is a summary of a report updating the progress on the R&D since the Feasibility Study of Muon Colliders presented at the Workshop Snowmass'96.
INTRODUCTION
Unlike protons, muons are point-like but, unlike electrons, they emit relatively little synchrotron radiation and can therefore be accelerated and collided in rings. As a result, a muon collider with a given energy reach could be smaller than either a proton or an electron machine. A 3 TeV muon collider (with effective energy comparable with that of the SSC) would fit on existing sites, such as BNL or FNAL (see Figs. 1, 2). Another advantage resulting from the low synchrotron radiation is the lack of beamstrahlung and the possibility of very small collision energy spreads. A beam energy spread of ∆E/E = 0.003% (equivalent to a CoM spread of ∆E/E = 0.002%) is considered feasible for a 100 GeV machine; and it has been shown that, by observing spin precession, the absolute energy could be determined to a small fraction of this width. These features become important in conjunction with the large s-channel Higgs production (µ+µ− → h, 43000 times larger than for e+e− → h), allowing precision measurements of the Higgs mass, width and branching ratios.
Such machines are clearly desirable. The questions are:
• whether they can be built and physics done with them
• what they will cost.
Much progress has been made in addressing the first question and the answer, so far, appears to be positive. It is too early yet to address the second. We have studied machines with center of mass energies of 100 GeV, 400 GeV and 3 TeV, defined parameters and simulated many of their components (see Tb.1). Most work has been done on the 100 GeV "First Muon Collider", the exact energy taken to be representative of the actual mass of a Higgs particle.
COMPONENTS
Proton Driver
The specification of the proton driver for the three machines is assumed to be the same: 10¹⁴ protons/pulse at an energy above 16 GeV and 1-2 ns rms bunch lengths. There have been three studies of how to achieve this. The most conservative, at 30 GeV, is a generic design. Upgrades of the FNAL (at 16 GeV) and BNL (at 24 GeV) accelerators have also been studied. Despite the very short bunch requirement, each study has concluded that the specification is attainable. Experiments have been done and are planned to confirm some aspects of these designs. [3]
Muon Production
Pion production has been taken from the best models available, but an experiment (BNL-E910) that has taken data, and is being analyzed, will refine these models. [4]
The assumed 20 T capture solenoid appears to be well within current technology (a coil with the specified field and aperture is now nearing completion at the National High Magnetic Field Laboratory, Florida State University). Capture, decay and phase rotation have been simulated, and have achieved the specified production of 0.3 muons per initial proton. The most serious remaining questions for this part of the machine are:
1. The nature and material of the target: The baseline assumption is that a liquid metal jet will be used, but the effects of shock heating by the beam, and of the eddy currents induced in the liquid as it enters the solenoid, are not yet fully understood.
2. The maximum RF field in the phase rotation. For the short pulses used, the current assumptions would be reasonably conservative under normal operating conditions, but the effects of the massive radiation from the nearby target are not known.
Both these questions can be answered in a target experiment planned to be performed within the next two years at the AGS. [5]
Cooling
The required ionization cooling is the most difficult and least understood element in any of the muon colliders studied. Ionization cooling is a phenomenon that occurs whenever there is energy loss in a strong focusing environment. Such an environment has existed, for instance, in the iron toroid muon calorimeters of several neutrino experiments, and a Monte Carlo simulation has shown [6] that cooling must have occurred there. But achieving the nearly 10⁶ reduction required is a challenge. Cooling over a wide range has been simulated using lithium lenses and ideal (linear matrix) matching and acceleration, and examples of limited sections of solenoid lattices with realistic accelerating fields have now been simulated. But the specification and simulation of a complete system has not yet been done. Much theoretical work remains: space charge and wake fields must be included; lattices at the start and end of the cooling sequences must be designed; lattices including liquid lithium lenses must be designed and studied; and the sections must be matched together and simulated as a full sequence. The tools for this work are nearly ready, and this project should be completed within two years. [7] Technically, one of the most challenging aspects of the cooling system appears to be:
• High-gradient RF (e.g., 36 MV/m at 805 MHz) operating in a strong (5-10 T) magnetic field, with beryllium foils between the cavities.
An experiment is planned that will test such a cavity, in the required fields, in about two years' time. On an approximately six-year time scale, a "Cooling Test Facility" is being proposed that could test ten-meter lengths of different cooling systems. [8] If they are required, there is also the need to develop:
• Lithium lenses (e.g., 2 cm diameter, 70 cm long, liquid lithium lenses with 10 T surface fields and a repetition rate of 15 Hz).
These may not be needed for the low energy "First Muon Collider", which would ease the urgency of this rather long-term R&D. Meanwhile, a short lithium lens is under construction at BINP (Novosibirsk, Russia).
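For context on why strong focusing and high-gradient RF matter here, the balance between ionization energy loss (which cools) and multiple Coulomb scattering (which heats) is commonly summarized by the standard transverse ionization-cooling rate equation. The form below is the widely quoted one, added here for the reader's reference; it does not appear in the report itself:

```latex
\frac{d\varepsilon_N}{ds}
  = -\frac{1}{\beta^{2}} \frac{dE_\mu}{ds} \frac{\varepsilon_N}{E_\mu}
    + \frac{1}{\beta^{3}} \frac{\beta_\perp E_s^{2}}{2 E_\mu m_\mu c^{2} L_R}
```

Here ε_N is the normalized transverse emittance, β the muon velocity in units of c, E_μ the muon energy, dE_μ/ds the ionization energy-loss rate in the absorber, β_⊥ the betatron focusing function at the absorber, E_s ≈ 13.6 MeV the characteristic scattering energy, and L_R the radiation length of the absorber material. Setting dε_N/ds = 0 gives the equilibrium emittance; it is smallest for low-Z absorbers (large L_R for a given dE/ds) and strong focusing (small β_⊥), which is what the lithium lenses and high-field solenoids discussed above are intended to provide.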
Acceleration
The acceleration systems are probably the least controversial, although possibly the most expensive, part of a muon collider. Preliminary parameters have been specified for acceleration sequences for the 100 GeV and 3 TeV machines, but they need refinement. In the low energy case, a linac is followed by three recirculating accelerators. In the high energy case, the recirculating accelerators are followed by three fast-ramping synchrotrons employing alternating pulsed and superconducting magnets. The parameters do not appear to be extreme, and it does not appear as if serious problems are likely.
Collider
The collider lattices are challenging because of their required very low intersection betas, high single bunch intensities, and short bunch lengths (see Tb.1); however, the fact that all muons will decay after about 1000 turns means that slowly developing instabilities are not a problem. Feasibility lattices have been generated for a 4 TeV case, and more detailed designs for 100 GeV machines have been studied. In the latter case, though still without errors, 5σ acceptances in both transverse and longitudinal phase space have been achieved in tracking studies. Beam scraping schemes have been designed for both the low energy (collimators) and high energy (septum extractors) cases.
Bunch length and longitudinal stability problems are avoided if the rings, as specified, are sufficiently isochronous, but some rf is needed to remove the impedance-generated momentum spread. Transverse instabilities (beam breakup) should be controlled by rf BNS damping.
The heating of collider ring superconducting magnets by electrons from muon decay can be controlled by thick tungsten shields, and this technique also shields the space surrounding the magnets from the induced radioactivity on the inside of the shield wall. A conceptual design of magnets for the low energy machine has been defined.
Although much work is yet to be done (inclusion of errors, higher order correction, magnet design, rf design, etc), the collider rings do not appear likely to present serious problems.
Neutrino Radiation and Detector Background
Neutrino radiation, which naturally rises as the cube of the energy, is not serious for machines with center of mass energies below about 1.5 TeV. It is thus not significant for the First Muon Collider; but above 2 TeV, it sets a constraint on the muon current and makes it harder to achieve the desired luminosities. However, advances in cooling, and correction of tune shifts, may still allow a machine at 10 TeV with substantial luminosity (> 10³⁵ cm⁻² s⁻¹). Background in the detector was, at first, expected to be a very serious problem. But after much work, shielding systems have evolved that limit most charged hadron, electron, gamma and neutron background to levels that are expected to be acceptable. Muon background, in the higher energy machines, is a special problem that can cause serious fluctuations in calorimeter measurements. It has been shown that fast timing and segmentation can help suppress this background, and preliminary studies of its effects on a physics experiment are encouraging. The studies are ongoing. [9] | 2014-10-01T00:00:00.000Z | 1998-06-01T00:00:00.000 | {
"year": 1998,
"sha1": "50abdec95fd66c76af561314e16118c056784fe2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cd4d08341ae99cfd6556a98c6a31b64a23a2dbff",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
1807867 | pes2o/s2orc | v3-fos-license | DBR: A Simple, Fast and Efficient Dynamic Network Reconfiguration Mechanism Based on Deadlock Recovery Scheme
Dynamic network reconfiguration is the process of replacing one routing function with another while the network keeps running. The main challenge is avoiding deadlock anomalies while keeping limitations on message injection and forwarding minimal. Current approaches, whose complexity is so high that their practical applicability is limited, either require extra network resources such as virtual channels or degrade the performance of the network during the reconfiguration process. In this paper we present a simple, fast and efficient mechanism for dynamic network reconfiguration which is based on regressive deadlock recovery instead of deadlock avoidance. The mechanism, referred to as DBR, guarantees deadlock-free reconfiguration based on wormhole switching (WS) and does not require additional resources. In this approach, the need for reliable message transmission has led to a modified WS mechanism which includes additional flits or control signals. DBR allows cycles to form, and when a deadlock occurs, the blocked messages time out. The mechanism then releases the buffers and channels held at the current node, and the source retransmits the message after a random time gap. Evaluation results reveal that the mechanism shows substantial performance improvements over the other methods and works efficiently in different topologies with various routing algorithms.
INTRODUCTION
Computers get faster, but the demand for computing resources seems to grow at an even faster rate. Depending on the application domain, this demand can be satisfied by either massively parallel computers or clusters of computers. Both approaches depend on high-performance interconnection networks such as Myrinet [1], Infiniband [2], Gigabit Ethernet [3,4], and Quadrics [5].
Interconnection networks are applied as the communication infrastructure of parallel processing systems, enabling the diverse processing, memory, storage, and I/O components of a system to communicate.
They are found in high-end servers [6] in the form of system area networks [7] as well as in multicore processors [8] as networks-on-chip (NoCs) [9] at the other end of the spectrum. The network plays a critical role in determining system performance and dependability, since the interaction and cooperation of the other system components ultimately depend on its ability to establish communication paths between them [10].
Various switching mechanisms have been described in the literature for interconnection networks including packet switching (PS), virtual cut-through (VCT) and wormhole switching (WS) [12].
WS [11] (also referred to as wormhole routing) has become the most widely used switching mechanism for multicomputers and distributed shared-memory multiprocessors, and it is also being used for networks of workstations [1]. In WS, a message is fragmented into elementary units, called flits, for transmission and flow control [12].
In PS and VCT, messages are completely buffered at a node. As a result, messages consume network bandwidth in proportion to the network load. On the other hand, wormhole-switched messages may block while occupying buffers and channels across multiple routers, precluding access to the network bandwidth by other messages [12].
The need for reliable message transmission has also led to a modified WS mechanism that includes additional flits or control signals (e.g., acknowledgments or padding flits). This particular technique was proposed as compressionless routing by its developers [13]. With the increasing probability of failure and growing reliability concerns for interconnection networks, fault-tolerance has quickly become an indispensable part of such systems. Thus, it is necessary to provide an efficient fault-tolerant mechanism that keeps the system running despite the presence of faults.
Fault-tolerance is defined as the ability of a system to pursue its operation, even in the presence of faults [14]. Reliability, availability and dependability are the three terms most applicable to fault-tolerance [15]. Due to their large application area, interconnection networks are found in systems with high requirements for reliability and continued operation.
The use of fault-tolerance mechanisms ensures that in case of a component failure the system keeps working, albeit possibly in a degraded mode, until the failed component is repaired. Basically, there are three ways to cope with faults in interconnection networks: component redundancy, fault-tolerant routing algorithms, and reconfiguration techniques [16]. Component redundancy is the easiest but a costly way: when a failed component is detected in the system, it is simply replaced by its redundant copy.
Fault-tolerant routing algorithms aim at preventing messages from traversing faulty components by providing some kind of routing path redundancy. To this end, messages must be able to be routed through alternative paths that circumvent or avoid faulty regions of the network. Fault-tolerant routing schemes should be designed to tolerate a certain number of faults while still guaranteeing deadlock freedom in the network. However, to fulfill these requirements, fault-tolerant routing strategies often need additional network resources such as virtual channels or extra hardware at switches or routers.
By applying reconfiguration [17], any number of faults can be tolerated as long as the network remains physically connected. Once a fault is detected, the affected part of the configuration is identified and the new topology is discovered. A new routing scheme is then computed and the required components in the network are updated. The main disadvantage of reconfiguration is the high message delays that may occur during the reconfiguration process.
Reconfiguration techniques can be either static or dynamic. Static reconfiguration techniques require the network traffic to be completely stopped before any routing table is changed, so that the network is emptied. Since the routing algorithm used after the reconfiguration process is different, all the paths for each source-destination pair need to be recomputed. Owing to the network down-time, i.e., halting message injection, static reconfiguration may cause strong performance degradation during the reconfiguration process and largely impacts message latency. This issue prevents static reconfiguration techniques from being used in systems with high performance requirements.
Unlike static reconfiguration, in dynamic reconfiguration the transition from one routing function to another is performed while the functional parts of the network remain fully operational, i.e., there is no network down-time and no halting of message injection. Compared with static reconfiguration, this typically reduces the number of messages that miss their quality-of-service deadline. The problem with this approach is that, in general, two different and individually deadlock-free routing functions may be prone to deadlock if they coexist in the network. This means that, in a dynamic reconfiguration, there is a transition phase between the old and new routing functions during which reconfiguration-induced deadlocks may occur. Another drawback of dynamic reconfiguration is that it usually requires extra resources.
In this article, we introduce a simple, fast and efficient method for dynamic network reconfiguration based on regressive deadlock recovery instead of deadlock avoidance. DBR guarantees a deadlock-free reconfiguration based on WS and does not require additional resources. In this approach, the need for reliable message transmission has led to a modified WS that includes additional flits or control signals. This can be achieved by padding messages [13]. Further, a message cannot leave the source node until the header flit reaches its destination. Deadlock recovery is achieved through a time-out mechanism [18]. DBR allows cycles to be formed; when a deadlock then occurs, the affected messages suffer a time-out. This releases the buffers and channels at the node currently holding the header, which retreats back to the source node along the path it had reserved, and the routing table is updated. When a message experiences a transmission failure due to a time-out at an intermediate node, the source retransmits the message after a random time gap.
The rest of the paper is organized as follows. Related work is presented in section 2. In Section 3, we present DBR method and describe implementation details. In Section 4, DBR is evaluated. Finally, in Section 5, conclusions are provided.
RELATED WORK
Faults in a network appear in several different forms, such as hardware faults, software bugs, or malicious sniffing or removal of packets. The first step in dealing with errors is to understand the nature of component failures and then to develop simple models that allow us to reason about a failure and the methods for handling it. By nature, faults are classified as either random or systematic. Random faults are usually hardware faults affecting system components with a certain probability, while systematic faults, such as software failures, are not random: a component either has them or it does not [12]. We assume that such permanent failures are detected and contained at a node or link boundary. Thus, faults are assumed to be fail-stop [19], meaning that we do not consider Byzantine (i.e., malicious) faults [12]. In the context of fault-tolerant routing, these are common assumptions [12,[19][20].
Faults can also be classified by their duration as transient or permanent [12]. Transient faults stay in the system for only a short duration, while permanent faults remain in the system until it is repaired. Permanent faults may be handled under either a dynamic or a static fault model. In a dynamic fault model, when a new fault is found, actions are performed to appropriately handle the faulty component, which allows the system to reconfigure at the hardware level and preserves the original network topology.
In some situations the guarantees of the routing algorithm and/or network topology may break, affecting the network dependability. This may happen when the topology of the network changes, either involuntarily due to faulty components or voluntarily due to the removal or addition of components. This normally requires the network routing algorithm (routing function) to be reconfigured in order to re-establish the connectivity of the entire network [22].
Unlike static reconfiguration techniques, dynamic reconfiguration techniques [17] do not require the network traffic to come to a complete stop. However, some packets must be removed from the network and re-injected later, which could cause a strong degradation in performance during the reconfiguration time. In the last decade several dynamic reconfiguration mechanisms have been proposed. Next we describe some of them.
In [17], a Partial Progressive Reconfiguration (PPR) technique is proposed, allowing arbitrary networks to migrate between two instantiations of up*/down* routing. The effect of load and network size on PPR performance is evaluated in [23].
Another approach is the NetRec scheme [24], which requires every switch to maintain information about the switches some hops away. Yet another approach is the Double Scheme (DS) [25], which uses two sets of virtual channels in the network that act as two disjoint virtual network layers during the reconfiguration. A methodology for deriving new reconfiguration processes for any given pair of old and new routing functions is given in [22]. An orthogonal approach, which may serve on top of all the above techniques, is explained in [26], where, for up*/down* routing, only some parts of the network need to be reconfigured. Solid theoretical support for proving dynamic reconfiguration design methodologies and techniques deadlock-free can be found in [27].
Moreover, a mechanism referred to as Simple Reconfiguration (SR) was suggested in [22]. In SR a token is issued to separate the messages routed with the old routing function from messages routed with the new routing function. Tokens advance through an output port in a switch once there are no more old messages passing through that output port (based on input and output dependencies generated by the old routing function). In this way, there are no cycles in the network, since there are never old messages behind new ones.
The above-mentioned mechanisms each lack at least one of the goals identified in this paper. In particular, PPR only works with routing functions that adhere to the up*/down* scheme. NetRec [24] is specially tailored for re-routing messages around a faulty node. It essentially provides a protocol for generating a tree that connects all neighbor nodes of a fault, and it drops packets to avert deadlocks during the reconfiguration period. DS is more flexible, in the sense that it can handle any topology and transition between any pair of deadlock-free routing functions; however, it requires the presence of two sets of virtual channels. The methodology in [22] requires complex computation in order to derive a safe reconfiguration process once the new routing function has been chosen, which consumes time and thus limits its applicability. The SR mechanism requires a token to be distributed over the entire network. Although it separates old and new traffic, it has two major drawbacks. The first is that its implementation is not straightforward, since the token distribution is based on the dependencies of the old routing function. The second is that messages suffer extra blocking, since new messages must wait for the tokens to advance.
DBR exhibits superior performance characteristics with respect to the following goals. First, messages are not blocked for any reason; they are routed as soon as possible, so message latency is minimized. Second, the nodes close to failed links or nodes are updated quickly. Third, the mechanism does not require additional resources at network components. Fourth, nodes react quickly in the presence of a reconfiguration process. The mechanism is based on WS and works with any routing algorithm implemented in the form of routing tables at the nodes.
It is worth mentioning that the authors in [21] suggested a protocol for dynamic network reconfiguration, referred to as PDR, in order to handle both deadlock and performance degradation. PDR provides an efficient approach to both deadlock detection and deadlock recovery. Although it has many advantages, such as superior performance at higher message injection rates and simplicity, it needs additional resources such as virtual channels to pursue its operation. In Section 4 we compare our method with DS and SR; in [21], PDR was also compared with these two mechanisms. We have avoided a direct comparison between PDR and DBR: DS and SR are based on deadlock avoidance, while PDR, like DBR, is based on deadlock recovery, and comparing the two would involve other performance parameters and comparisons of deadlock detection and recovery methods that are beyond the main focus of this paper. The comparison between these two mechanisms is therefore postponed to a separate paper.
Figure 1. Messages labeled New are sent from S1, S3 to d1, d3, respectively, while messages labeled Old are sent from S2, S4, S5 to d2, d4, d5, respectively.
DBR MECHANISM
We consider the use of two routing algorithms, referred to as Old and New. Routing information is distributed among the nodes by using routing tables. The mechanism is based on the fact that deadlock does not occur frequently, so recovery may be preferable to prevention. Indeed, the probability of deadlock is proportional to the traffic injection rate and inversely proportional to the availability of resources.
The DBR mechanism allows cycles to be formed (Figure 1). The basic idea is to remove a deadlock by releasing the reserved path (routing table). The fine-grained flow control and backpressure of wormhole routing are used to communicate to the nodes the routing tables, the routing status and the error condition, and the nodes use this information for deadlock handling. In DBR, deadlock is avoided by keeping track of whether the message header has reached the destination or not. If it has, no deadlock is possible; otherwise, after the message has been blocked for a particular time, the source tears down the partial message path (routing table) and tries again later. Thus, any deadlocked message (Old or New) will eventually have its path torn down.
To determine whether the message header has reached the destination, DBR takes advantage of the properties of messages under wormhole routing. Wormhole routing provides feedback in the form of flow control, which can be exploited to communicate acknowledgements. Messages have a fixed profile in the channels due to the small amount of buffering in wormhole routers. Hence, when the message is long enough, the sender can determine that the message header must have reached the destination once a sufficient number of flits have been injected into the network. Otherwise, the sender pads the message to ensure that the header reaches the destination before the last flit has been injected by the source. When the real data ends (at the rising edge of the pad signal), the receiver is informed by the pad signal; otherwise, the reserved path is released (at the falling edge of the pad signal) [13]. Figure 2 shows the single message format in DBR. For the sake of explanation, DBR is described as a fully deployed mechanism following the entire reconfiguration process, from the occurrence of the topological change (i.e., a failure) to the normal and final functioning of the network with the new routing algorithm. The following sections describe each step.
Figure 2. The message format for the DBR mechanism [13]. The pad signal is used to differentiate pad and data flits.
Status Information Distribution
In some situations the guarantees of the routing algorithm and/or network topology may break, naturally affecting the network dependability. This may happen when the topology of the network changes, either involuntarily due to faulty components or voluntarily due to the removal or addition of components. This normally requires the network routing algorithm to be reconfigured in order for network connectivity to be re-established. DBR is applied whenever a new routing algorithm is needed for the network. In some conditions a change in the topology does not necessarily lead to a change in the routing algorithm; this is the case when adding/removing nodes to/from the system. As the routing algorithm remains unchanged, there is no probability of deadlock and no need for a global reconfiguration process. On the other hand, a new node or link might be added, changing the topology; in that case, a change of the routing algorithm could achieve higher performance and thus activate a reconfiguration process. The routing algorithm might also need to be changed even if no topological change occurs, since a different routing algorithm might achieve higher performance on the same topology. However, the most significant topological change is when a node or link (or a group of them) fails. Once parts of the network become disconnected, altering the routing algorithm is required. Hence, in what follows we introduce an efficient mechanism for the situation in which a failure occurs.
In all the above-mentioned cases, the DBR reconfiguration mechanism is activated. To do so, a selected node runs a component responsible for detecting any topological change. The component periodically sends control signals to all nodes, and the nodes respond with the current status of their links and neighbor nodes. On detecting a topological change, new routing tables need to be computed. At the moment the nodes are notified, they use alternate paths to bypass the failed node. However, employing alternate paths is an additional mechanism that has nothing to do with the reconfiguration process itself; therefore, we focus only on the reconfiguration process (i.e., updating the tables).
Once the paths are computed, they must be distributed by sending all the new routing tables to all nodes through the network. Figure 3 shows an example of the sequence used to update the nodes. Once a node receives the new paths, it updates its routing table, removing the old one.
Additionally, in order to reduce the overhead of control traffic, only the differences between the old and the new routing tables might be sent to each node, as sketched below. Depending on the similarity of the routing algorithms, the percentage of control traffic reduction may be significant. Indeed, the evaluation will show that this improvement affects the effectiveness of the mechanism.
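As a rough illustration of the table-diff idea just described, the Python sketch below computes the per-node differences between an old and a new routing table. The dict-based table layout (destination → output port) and the function name are our own assumptions, not part of DBR's specification.

```python
def routing_table_diff(old_table, new_table):
    """Return only the routing entries that changed between two tables.

    Each table maps a destination node id to an output port; this
    layout is an illustrative assumption.
    """
    diff = {}
    # Entries whose output port changed, plus newly reachable destinations.
    for dest, port in new_table.items():
        if old_table.get(dest) != port:
            diff[dest] = port
    # Destinations that disappeared (e.g., a failed node) are marked None.
    for dest in old_table:
        if dest not in new_table:
            diff[dest] = None
    return diff

# Example: only destination 5 changed and destination 7 was removed, so
# two entries travel over the control channel instead of four full tables.
old = {3: "north", 5: "east", 7: "west", 9: "south"}
new = {3: "north", 5: "south", 9: "south"}
print(routing_table_diff(old, new))  # {5: 'south', 7: None}
```

The more similar the two routing functions are, the smaller the diff, which is exactly why the control-traffic reduction depends on that similarity.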
Deadlock Detection and Recovery
In order to keep track of paths that can potentially be released to break deadlocks, DBR detects deadlock by using a time-dependent selection function similar to those suggested in [18]. Besides, DBR exploits the tight coupling of wormhole routers for flow control to perform deadlock recovery. A detected deadlock is recovered by releasing the path.
To send a message, the sender first resets two parameters F and C, referred to as the flit counter and the blocking counter, respectively. The former indicates how many flits of the current message have been injected, while the latter states how long the message header has been blocked at a node. We also introduce a minimum flit-injection parameter for delivery guarantee, denoted F_path, defined as F_path = channel buffer depth (flits/channel) × distance to destination (in hops). For example, with 4-flit channel buffers and a 6-hop path, F_path = 24 flits.
The parameter F is incremented in each cycle in which a new flit is injected, and C is incremented in each cycle in which a flit cannot be injected. If the message length in flits (F) is shorter than the distance in flits (F_path), the message is padded to make its size equal to the distance, in order to keep the path reserved. If the sender is unable to send out the header, C is incremented; this continues on every cycle until the router succeeds in sending out the header or the value of C reaches the time-out interval (denoted by T) for that message. When C > T, the router changes its status to "deadlocked", meaning that a cycle has been formed.
If there are cyclic dependencies among the channels in the network (during the transition from Old to New), there is no path to escape from the cycles and the sender cannot inject a new flit for a period longer than T. This indicates a deadlock situation, so the sender launches a release signal to release the path. F and C are then reset and the same message is re-injected later.
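A minimal sender-side sketch of the F/C/T logic described above follows. The flit-level link interface (try_inject, release_path, idle) is an illustrative assumption, and the sketch applies the time-out to any blocked flit, a simplification of the header-oriented description in the text.

```python
import random

def send_message(link, flits, hops, buffer_depth, timeout):
    """Inject one message, padding it so the path stays reserved, and
    recover from a presumed deadlock by tearing the path down.

    F counts injected flits, C counts consecutive blocked cycles, and
    F_path = buffer depth (flits/channel) * distance (hops).
    """
    f_path = buffer_depth * hops
    payload = list(flits)
    if len(payload) < f_path:                       # pad short messages
        payload += ["PAD"] * (f_path - len(payload))
    F, C = 0, 0                                     # flit and blocking counters
    while F < len(payload):
        if link.try_inject(payload[F]):             # one flit sent this cycle
            F += 1
            C = 0
        else:
            C += 1                                  # blocked this cycle
            if C > timeout:                         # C > T: presumed deadlock
                link.release_path()                 # release buffers/channels
                return False                        # caller retries later
    return True

def send_with_retry(link, flits, hops, buffer_depth, timeout):
    """Re-inject after a random gap whenever the path was torn down."""
    while not send_message(link, flits, hops, buffer_depth, timeout):
        link.idle(random.randint(1, 100))           # random back-off gap
```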
EVALUATION
In this section we evaluate the proposed reconfiguration mechanism. To do so, we first present the evaluation methodology and traffic patterns. Then, we briefly describe the reconfiguration mechanisms used for comparison purposes. Finally, results and analysis are presented.
Evaluation Methodology
We have developed a detailed simulator that models the network at the cycle level, built on the event-driven Xmulator simulation framework [28]. The Orion power library [30] is integrated into our simulator to calculate the power consumption of the networks.
In order to determine the fault-tolerance of our methodology's variations, we have performed dynamic reconfiguration analysis for an 8×8 torus topology. WS is used as the flow control mechanism. The simulations have been performed using a base message size of 16 flits and a width of 128 bits for each flit. Moreover, each physical link is split into two virtual channels (VCs). We calculated the power consumption of the links of each router using a 70 nm technology library; in this technology, the clock frequency is set to 250 MHz, and the length of the links between two adjacent routers is set to 1 mm for the torus topology. We have also performed a large number of simulations in order to make the evaluation results independent of the relative positions of faults.
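For concreteness, the simulation parameters above can be collected as in the following sketch; the dataclass and its field names are our own and serve only to restate the configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimConfig:
    # Network and switching parameters of the evaluation setup.
    topology: str = "torus"
    dimensions: tuple = (8, 8)        # 8x8 torus
    switching: str = "wormhole"       # WS flow control
    message_size_flits: int = 16      # base message size
    flit_width_bits: int = 128
    virtual_channels: int = 2         # per physical link
    # Technology parameters used with the Orion power library.
    technology_nm: int = 70
    clock_mhz: int = 250
    link_length_mm: float = 1.0

config = SimConfig()
```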
Traffic Pattern
We consider two different traffic patterns when evaluating the network behavior: synthetic patterns and traces [14]. Synthetic patterns are widely used because they allow evaluating the network in the most generic way; when using them, every node has the same traffic injection rate. We evaluate the complete range of traffic injection rates, from low levels up to the saturation point. The synthetic traffic patterns used are uniform and hotspot [14].
• For uniform traffic, each source node sends messages to all the destinations with the same probability.
• For hotspot, 10% of the sources (selected randomly) inject traffic to the same destination (selected randomly), while the rest of the end nodes inject traffic to random destinations. This traffic pattern allows us to model the situation in which one or more end nodes are frequently accessed by the remaining end nodes (a disk server, for instance); a sketch of both generators follows this list.
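The sketch below shows the two synthetic destination generators just described; the helper names are hypothetical, and only the 10% hotspot fraction comes from the text.

```python
import random

def uniform_dest(src, nodes):
    """Uniform traffic: any node except the source, with equal probability."""
    return random.choice([n for n in nodes if n != src])

def make_hotspot_dest(nodes, fraction=0.10):
    """Hotspot traffic: a random 10% of sources all target one randomly
    chosen destination; every other source sends uniformly."""
    hotspot = random.choice(nodes)
    hot_sources = set(random.sample(nodes, max(1, int(fraction * len(nodes)))))
    def dest(src):
        if src in hot_sources and src != hotspot:
            return hotspot
        return uniform_dest(src, nodes)
    return dest

# Example on a 64-node (8x8) network.
nodes = list(range(64))
hotspot_dest = make_hotspot_dest(nodes)
```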
On the other hand, traces are based on capturing the traffic when running real applications. Traces contain the source, destination, injection time and the size of each sent message. They allow obtaining results in more realistic scenarios and let us compare them with the results obtained when using synthetic patterns. In this paper, some results obtained with this type of traffic pattern are shown.
The traces used were extracted from the execution of the FFT, LU, BARNES, RADIX, WATER-Nsquared and WATER-Spatial applications from the SPLASH-2 suite [29] on shared-memory multiprocessors. These types of applications are widely used when simulating multiprocessor systems for engineering and scientific computations.
Evaluation Mechanism
The DBR mechanism is evaluated when sending all the routing tables. We compare DBR with the SR and DS mechanisms.
In all the reconfiguration mechanisms, once a topology change is detected, the new routing tables, along with a control message, are sent to all the nodes through the control virtual channel. In the case of DS, one virtual channel is drained during the distribution of paths; once it is drained, control messages are sent to restore normal operation (the reconfiguration has finished). In the case of SR, the tokens that separate the old and the new traffic are sent to the nodes at the same time. More details of DS and SR can be found in [10].
Results and Analysis
We evaluate all the reconfiguration mechanisms for a random node failure in an 8×8 torus network. In this case, the old and the new routing algorithms are up*/down*. Figure 4 shows the average message latency in the 8×8 torus for each reconfiguration mechanism and for different injection rates. The figure reveals that the latency of the SR scheme increases significantly.
The reason is that SR experiences higher latency due to the blocking introduced by the tokens. This effect spreads rapidly, so messages experience higher latencies, and the phenomenon persists even after the reconfiguration process has finished. Further, DBR shows better results than DS and SR: Figure 4 confirms that DBR delivers superior performance at the three different injection rates, due to the structure of the approach as explained earlier. Moreover, each mechanism requires the same reconfiguration time regardless of the traffic rate. This is because in all cases the distribution of routing tables is the process that takes most of the time (tables are sent sequentially), and the amount of information to distribute is the same regardless of the traffic injection rate. Note also that control messages use the reserved control virtual channel and have higher priority than data messages.

Figure 5 shows the average message latency plotted against the generation time for the different reconfiguration schemes under uniform traffic. As can be seen in Figure 5(a), there are two time intervals (from 1,000 to 2,000 and from 15,000 to 16,000 cycles) in which the curve sharply increases. This behavior is rooted in the extra time spent releasing the path when a deadlock is detected and re-injecting the message. Even so, DBR shows better results in average message latency overall.

We have also evaluated the mechanisms with a variety of traffic patterns. Figure 6 shows the average message latency for an 8×8 torus network under different synthetic traffic patterns; the results are normalized to those of DBR. On average, DBR outperforms DS by 14% and SR by 29% in message latency. For programs such as WATER-Nsquared, where the traffic is distributed rather evenly across the nodes, DBR provides a greater improvement over SR than over DS.

To sum up, Table 1 collects the conclusions we can extract from the performed evaluations. As can be seen, the DBR mechanism achieves the best results for the evaluation parameters mentioned in the table, except for complexity, where DS shows the better result. Overall, in terms of latency, DBR outperforms both DS and SR. DBR normally achieves better performance than DS while not using any additional resources. SR, on the other hand, blocks traffic to prevent messages from being routed through the failed link, which, as we have seen, greatly increases message latency.
CONCLUSION
In this paper we have presented and evaluated the DBR mechanism. DBR is a dynamic reconfiguration mechanism that uses a regressive deadlock recovery scheme in order to guarantee the deadlock-freedom condition in the network. Our method can be used for any topology with various routing algorithms. DBR guarantees a deadlock-free reconfiguration based on wormhole switching (WS) and does not require additional resources; to our knowledge, this is the first implementation of a dynamic reconfiguration method with WS. In addition, the method guarantees message delivery during the reconfiguration by modifying WS to include additional flits or control signals. Evaluation results reveal that the mechanism shows substantial performance improvements over the other methods and works efficiently in different topologies with various routing algorithms. Moreover, it provides superior performance over SR under various traffic patterns, especially given that it requires the same amount of hardware. DBR normally achieves better performance than DS while not using any additional resources. As future work, we are planning to apply our approach to on-chip interconnection networks (NoCs) to improve performance and decrease power consumption. | 2012-11-25T01:31:50.000Z | 2012-10-31T00:00:00.000 | {
"year": 2012,
"sha1": "d898800bb3e1f13db36cfc50ee1cd226046a8a7d",
"oa_license": null,
"oa_url": "https://doi.org/10.5121/vlsic.2012.3502",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "a512aec75351c4bb4a7fbe75af57f6bc69793e6e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
263969564 | pes2o/s2orc | v3-fos-license | Prognostic Worth of Nrf2/BACH1/HO-1 Protein Expression in the Development of Breast Cancer
Abstract Objectives Nrf2/BACH1/HO-1 proteins have been implicated in the development and progression of tumors. However, their clinical relevance in breast cancer remains unclear and understudied. This study evaluated Nrf2/BACH1/HO-1 protein expression and its relationship with age, tumor grade, tumor stage, TNM, ER, PR, HER2, and histologic type. Methods 114 female breast cancer tissues and 30 noncancerous tissues were evaluated for Nrf2/BACH1/HO-1 protein expression using immunohistochemistry and Western blot. The relationships between the expression and clinicopathologic factors were assessed using the χ2 test. Results 74% of the cancerous samples had high Nrf2 protein expression, and 26% of them had low Nrf2 protein expression. Regarding the non-cancer samples, 43% had high Nrf2 protein expression and 57% had low Nrf2 protein expression (p < 0.002). 39% of the cancerous samples had high BACH1 protein expression, and 61% had low BACH1 protein expression. For the non-cancer samples, 80% had high BACH1 protein expression and 20% had low BACH1 protein expression (p < 0.031). 67% of the cancerous samples had high HO-1 protein expression, and 33% had low HO-1 protein expression. For the non-cancer samples, 17% had high HO-1 protein expression and 83% had low HO-1 protein expression (p < 0.001). The expression of Nrf2 and HO-1 correlated significantly with tumor grade, while BACH1 was significantly associated with tumor stage (p < 0.05). Conclusion Nrf2, BACH1, and HO-1 could be explored as biomarkers for cancer stage, progression, and prognosis.
Introduction
Breast cancer is the principal cause of cancer deaths in females worldwide. In 2012, around 1.7 million new cases were diagnosed worldwide, and this was estimated to rise to about 2.1 million in 2018, accounting for 1 in 4 cancer cases among females [1]. The pathogenesis of breast cancer is not entirely clear, as our comprehension of the various biological and molecular processes involved in the development of breast cancer is still incomplete.
Nuclear factor erythroid 2-related factor (Nrf2), a member of the basic leucine-zipper subfamily of transcription factors, plays an important role in the adaptive response to oxidative stress by directing several transcriptional activities. Nrf2 forms a complex with its repressor Kelch-like ECH-associated protein 1 (KEAP1) and cullin 3 (CUL3) ubiquitin ligase, which subjects Nrf2 to degradation in proteasomes under homeostatic conditions. However, under stress, Nrf2 is separated from KEAP1 and transported into the nucleus to form heterodimers with small Maf proteins, which then attach to the antioxidant response element (ARE) of the target genes, regulating their expression [2]. Nrf2 has been regarded as a tumor suppressor owing to its cytoprotective role against reactive oxygen species (ROS) and electrophilic stressors. Upregulated Nrf2 in cancer aids malignant cells in coping with increased levels of ROS and evading apoptosis via activation of metabolic and cytoprotective genes that promote cell growth [3]. Overexpression of Nrf2 plays a critical role in inducing cancer cell growth, proliferation, and survival and in decreasing the sensitivity of the cells to chemo/radiotherapeutic agents [4]. Reports indicate that upregulated Nrf2 expression results in lower overall survival and disease-free survival in breast cancer patients [5]. The pro-oncogenic role of Nrf2 in breast cancer cells and its anti-oncogenic capacity in healthy cells rely on metabolic adaptation, cell proliferation, and induction of Nrf2 [6,7].
BACH1, a cap 'n' collar transcription factor, is part of the basic leucine-zipper superfamily. BACH1 binds to Maf recognition elements to form heterodimers that mediate transcription in gene promoter regions. Expression of BACH1 in human tissues has been observed and analyzed. Upregulated MALAT1 and BACH1 are associated with shorter overall survival and disease-free survival in triple-negative breast cancer [8]. Modulation of high mobility group A2 and BACH1 promotes the proliferation and migration of breast cancer cells and inhibits apoptosis [9].
Heme oxygenase-1 (HO-1), a rate-limiting enzyme in heme catabolism, is induced in response to many stress stimuli, including heat shock, hypoxia, heme, ROS, and nitric oxide. When heme concentrations are low, BACH1 binds directly to the HO-1 promoter and silences it; at higher heme levels, however, BACH1 is relieved, followed by overexpression of HO-1 [10]. The induction of Nrf2 into the nucleus exports nuclear BACH1, upregulating HO-1 expression to inhibit apoptosis. Thus, Nrf2 and BACH1 are the main transcription factors involved in modulating HO-1. Notwithstanding the cytoprotective ability of heme oxygenase, it also plays a critical role in carcinogenesis. Inhibition of HO-1 abrogates dipeptidyl peptidase-4 inhibitor-induced upregulation of Nrf2 to prevent breast cancer metastasis [11]. Silencing of HO-1 suppresses breast cancer metastasis [12]. Modulation of BACH1/Nrf2 can reduce HO-1 to promote upregulated apoptosis and decreased proliferation of breast cancer cells, and vice versa [13].
Despite these observations, the clinical relevance of Nrf2, BACH1, and HO-1 in breast cancer remains understudied and unclear. This study investigated the expression of Nrf2, BACH1, and HO-1 protein in conjunction; the results suggest that the expression of these proteins could be a biomarker for the stage, progression, and prognosis of cancer.
Study Subjects and Tissue Samples
The study was approved by the Institutional Review Board of Jiamusi University, China, with ethical clearance number SYXK 2016-014. Breast cancer and noncancerous (control) tissues were obtained from 114 patients who underwent surgery at the First People's Hospital affiliated to Jiamusi University from 2015 to 2017. This was a cross-sectional study in which patients were selected based on the following inclusion criteria: complete medical history, no preoperative cancer treatment, histopathologic report with regional lymph node metastasis, and absence of distant metastasis. Patients with a history of other cancers were excluded from the study. The 114 breast cancer tissues and 30 adjacent noncancerous (control) tissues used for the study were collected, with the control tissues obtained from 30 of the patients as paired specimens. Pathological parameters such as age, tumor stage, tumor grade, tumor node metastasis (TNM) stage, ER, PR, HER2, and histologic type obtained from the pathology report of each patient were used for the analyses. The patients' average age was 50 years.
Immunohistochemical Staining of Nrf2, BACH1, and HO-1 Protein Expression
Hematoxylin and eosin-stained slides were used to study histopathologic features obtained from the pathology reports of patients to confirm the diagnosis. 4 μm formalin-fixed, paraffin-embedded tissue mounted on a slide was deparaffinized in 100% xylene (I and II) and rehydrated in ethanol 100% (I, II), 95%, 90%, 80%, 70% for 15 min and 5 min, respectively. After rinsing with phosphate-buffered saline, mounted sections were heated for 40 min in citrate solution and allowed to cool. Subsequently, the sections were treated with 5% bovine serum albumin for 30 min at 37°C and then incubated with either Nrf2 or HO-1 (1:100, Santa Cruz, CA, USA) or BACH1 (1:50, Santa Cruz, CA, USA) primary antibody overnight at 4°C. Following overnight incubation, sections were incubated with biotinylated goat anti-rabbit secondary antibody and washed three times with phosphate-buffered saline. 3,3ʹ-diaminobenzidine was applied to the mounted sections for 10 min; the sections were then washed and counterstained with Mayer's hematoxylin. After rinsing, the sections were dehydrated in ethanol, cleared in xylene, and observed under the microscope. A brown reaction and a deep blue-purple color indicated positive and negative staining, respectively. The immunoreactivity recorded was the average score observed by two pathologists using a semiquantitative method classified as follows: (0), negative, no staining; (1+), weak staining, ≤10% stained cells; (2+), moderate staining, 11−50% stained cells; (3+), strong staining, >50% stained cells. The sum of the scores for both intensity and proportion was used as a measure of the expression.
Western Blotting
Minced tumor tissues lysed in 500 µL cell lysis buffer for 30 min at 4°C were centrifuged at 12,000 g for 15 min and then run on SDS-PAGE, initially at 60 V for 15 min and then increased to 110 V to complete the process in 2 h. Proteins were transferred to PVDF membranes, after which the membranes were blocked and incubated overnight with either Nrf2, BACH1, HO-1 or β-actin primary antibody. The subsequent day, the membranes were rinsed three times, 5 min per wash, using Tris-buffered saline with Tween 20. The membranes were then incubated with HRP-labeled goat anti-rabbit IgG for an hour at room temperature and rinsed three times with Tris-buffered saline with Tween 20. The protein-reactive bands were intensified with a chemiluminescence kit and exposed to X-ray film, and images were captured with LabWork 3.0 (UVP Inc., Upland, CA, USA).
Statistical Analysis
Statistical analysis was performed using SPSS version 21.0 (SPSS Inc., Chicago, IL, USA). The χ2 test was used to determine the correlation between Nrf2, BACH1, and HO-1 expression and the clinicopathologic features. The Mantel-Haenszel-Cochran test was used to calculate the relationship between Nrf2 and HO-1 expression and tumor grade, and between BACH1 and tumor stage. p < 0.05 was deemed significant.
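As a hedged illustration of the χ2 analysis described above, the sketch below re-tests the Nrf2 high/low counts (reconstructed from the reported percentages: roughly 84/114 high in cancer versus 13/30 high in controls); it uses SciPy rather than the SPSS software actually employed in the study.

```python
from scipy.stats import chi2_contingency

# Rows: cancer, control; columns: high Nrf2, low Nrf2.
# Counts reconstructed from the reported percentages (74% of 114, 43% of 30).
table = [[84, 30],
         [13, 17]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p is expected to fall below 0.05
```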
Expression of HO-1 protein: 3 specimens showed negative staining (0), 35 specimens weak staining (1+), 19 specimens moderate staining (2+) and 57 specimens strong staining (3+). Immunostaining results showed that 76 (66.7%) patient tissues expressed high HO-1 and 38 (33.3%) expressed low HO-1 (Table 3). To determine the expression of Nrf2, BACH1, and HO-1, 114 breast cancer and 30 control tissues were evaluated by immunohistochemistry. We found that Nrf2 and HO-1 were highly expressed in the nucleus with low cytoplasmic staining, while BACH1 was expressed at low levels in the cytoplasm of cancerous tissues as compared to control tissues (Fig. 2).
Analysis of Nrf2 Protein Expression and Clinical Parameters
Immunohistochemical studies revealed that 74% of the cancerous samples had high Nrf2 protein expression and 26% of them had low Nrf2 protein expression. Regarding the non-cancer samples, 43% had high expression of Nrf2 protein and 57% had low expression of Nrf2 protein (Table 1). Expression of Nrf2 protein was associated significantly with tumor grade (p < 0.05). However, there were no significant differences in the expression of Nrf2 in terms of age, tumor stage, TNM, ER, PR, HER2, and histologic type. Furthermore, expression of Nrf2 protein in both cancerous and non-cancer tissues also showed a significant correlation (p < 0.05) (Table 1).
Relationship between Expression of BACH1 Protein and Clinical Characteristics
39% of the cancer tissues had high expression of BACH1 protein, while 61% had low expression of BACH1 protein. With regard to the non-cancer samples, 80% had high expression of BACH1 protein and 20% of them had low expression of BACH1 protein. BACH1 protein expression was not significantly correlated with age, tumor grade, TNM, ER, PR, HER2, or histologic type. However, the expression of BACH1 protein was significantly associated with tumor stage (p < 0.05) (Table 2).
Analysis of Correlation between HO-1 Protein Expression and Clinicopathologic Parameters
67% of the cancer tissues had high expression of HO-1 protein, while 33% had low expression of HO-1 protein. In contrast, 17% of the noncancerous tissues had high expression of HO-1 protein and 83% had low HO-1 protein expression (Table 3). Expression of HO-1 protein was significantly associated with tumor grade (p < 0.05). However, no significant correlation was found between the expression of HO-1 protein and age, and the expression of HO-1 was not significantly influenced by tumor stage, TNM, ER, PR, HER2, or histologic type. Furthermore, HO-1 protein expression in both cancerous and non-cancer tissues showed a significant correlation (p < 0.05) (Table 3).
Correlation between Expression of Nrf2, BACH1, and HO-1 and Its Significant Clinicopathologic Features
Cochran-Mantel-Haenszel statistics were used to assess the correlation between the expression of Nrf2 protein and tumor grade. The odds that poorly differentiated tissues express high Nrf2 are greater than those of well/moderately differentiated tissues. Moreover, for BACH1 and tumor stage, the odds that tissues of T3-T4 stage have low BACH1 expression are greater than those of T1-T2 stage. Finally, for HO-1 and tumor grade, the odds that well/moderately differentiated tissues express high HO-1 are greater than those of poorly differentiated tissues (p < 0.05) (Table 4).
Expression of Nrf2, BACH1, and HO-1 Protein by Western Blot
To further demonstrate the protein expression of Nrf2, BACH1, and HO-1, Western blot analysis was carried out in both cancerous and noncancerous tissues, as indicated in Figure 3.
Discussion
This study demonstrates the adverse effect of the expression of Nrf2, BACH1, and HO-1 on the clinical outcome of patients with breast cancer. Nrf2, a transcription factor, controls the expression of different antioxidant and cytoprotective genes, modulating the cellular response to oxidative and electrophilic stress. The modulation of nuclear Nrf2/BACH1 via CXCR3-B/CXCL4 signals can upregulate HO-1 expression to inhibit apoptosis [13]. This study found high expression of Nrf2 in 74% of breast cancer tissues, while it was positively expressed in 43% of the control tissues. Researchers have previously shown that the expression of Nrf2 is detected frequently in lung cancers, with expression rates of 74% and 77% [14,15], which is consistent with our findings. Moreover, we observed that the level of Nrf2 expression correlated with tumor grade and that its expression in poorly differentiated tumors is higher than in well/moderately differentiated tumors, consistent with a previous study [16]. Further assessment revealed that Nrf2 expression was more concentrated in the nucleus of breast cancer tissues, which is also consistent with that study [16].
Expression of nuclear Nrf2 protein plays a critical role in the growth and development of breast cancer. Therefore, functional Nrf2 activity in human breast carcinoma is reflected by nuclear Nrf2 immunoreactivity, and its relatively wide distribution indicates the relevance of an activated Nrf2 signaling pathway in breast cancer. Aberrant Nrf2 expression is associated with increased resistance to therapy in breast cancer; Nrf2 genes are downregulated in breast cancer after starvation, which is associated with increased ROS levels [7,17]. Hyperactivation of Nrf2 has been reported to upregulate glucose-6-phosphate dehydrogenase/HIF-1α/Notch1 signaling to promote migration and metastasis of breast cancer cells [18]. Recently, high expression of Nrf2 was shown to downregulate GSK-3β to enhance breast cancer [19]. Nrf2 siRNA reversed tamoxifen resistance in tamoxifen-resistant breast cancer with upregulated production of Nrf2-dependent antioxidant proteins [20]. Thus, compared to Nrf2-negative subjects, residual breast cancer in Nrf2-positive patients after surgical treatment might be able to proliferate rapidly and metastasize despite adjuvant therapy, resulting in high resistance and poor prognosis of breast cancer.
As BACH1 is a transcriptional suppressor that negatively modulates various genes playing key roles in cell cycle progression, apoptosis, and the oxidative stress response [21], it may be considered a potential tumor suppressor. Our study showed low expression of BACH1 in 61.4% of breast cancer tissues and a lower frequency of low expression (20%) in control tissues, similar to a previous report [22]. Moreover, this study revealed that low expression of BACH1 correlates significantly with tumor stage. Conversely, high expression of BACH1 has been reported to promote growth and proliferation of breast cancer [8,23]. Overexpression of the long noncoding RNA SNHG5 has been shown to enhance breast cancer growth and glycolysis by upregulating the expression of BACH1 through targeting miR-299 [23]. HO-1 has been reported to be highly upregulated in tumor tissues, facilitating tumor proliferation and metastasis and reducing sensitivity to chemotherapy [11,12,24]. In the current study, out of 114 breast cancer tissues, high expression of HO-1 was observed in 76 (67%), whereas much lower expression of HO-1 was found in control tissues (5 of 30, 17%), consistent with previous studies [24,25]. Moreover, the results demonstrated that upregulated HO-1 expression in breast cancer was associated with tumor grade, which is also in agreement with a previous study [26]. Furthermore, the results showed that well/moderately differentiated tumors express high HO-1 compared to poorly differentiated tumors. This could be due to the fact that under low or normal heme conditions, BACH1 binds to the ARE and represses the expression of HO-1; however, when the heme level is elevated, BACH1 binds to heme, allowing Nrf2 to bind to the ARE and transcriptionally upregulate HO-1 expression to break down the heme [10]. The breakdown of heme produces antioxidant and antiapoptotic molecules as by-products, with the latter preventing apoptosis and increasing tumor cell proliferation. Therefore, invasive and metastatic tumors go along with upregulated expression of HO-1. However, a thorough evaluation of the HO-1 expression pattern in breast cancer revealed an important and interesting phenomenon: in contrast to the rate of expression and intensity of HO-1 in the cytoplasm, nuclear HO-1 expression was higher, which is similar to a previous study [27]. The translocation of HO-1 from the cytoplasm to the nucleus has been suggested to be an important factor associated with the protective effects of HO-1 expression in tumors, conferring some mechanisms of tumor proliferation, angiogenesis, and drug resistance [28]. These results at least partially support our observation that nuclear expression of HO-1 might be substantially associated with the malignancy of breast cancer; we suggest that it is critically relevant to examine the expression pattern of HO-1 in tumors when assessing the roles of HO-1 in cancers. HO-1 knockout has been demonstrated to promote cisplatin-induced apoptosis and to abolish proliferation and migration of breast cancer [29]. In triple-negative breast cancer patients, upregulated HO-1 expression has been reported to be significantly associated with poor disease-free survival, poor overall survival, and a lower pathological complete response rate [30].
Conclusion
The correlation of Nrf2 and HO-1 with tumor grade and of BACH1 with tumor stage suggests that Nrf2, HO-1, and BACH1 could serve as potential biomarkers for cancer stage, progression and prognosis, as well as targets for therapy. Furthermore, in vivo and in vitro molecular studies are needed to evaluate their potential application in diagnosis and treatment. Therapies that can suppress Nrf2 and HO-1 and upregulate BACH1 might hold promise for the treatment of breast cancer.
Fig. 3. a, b A representative Western blot analysis in cancerous and control tissues. Data are representative of three independent experiments (n = 3, means ± SD). *p < 0.05 versus control.
Table 1. Nrf2 expression in breast cancer and its correlation with clinicopathologic parameters
Table 2. Relationship between breast cancer BACH1 expression and its clinicopathologic parameters
Table 3. Correlation between breast cancer HO-1 expression and its clinicopathologic parameters
Table 4. Correlation between the expression of Nrf2, BACH1, and HO-1 and their significant clinicopathologic features. The * was used to indicate the significant numbers. | 2023-10-14T06:17:44.895Z | 2023-10-12T00:00:00.000 | {
"year": 2023,
"sha1": "c04338e775675950443da4a26e901bac66eb5a66",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1159/000534534",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "64cba33663145a6b90558b1209463d6dd69fef6b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250713664 | pes2o/s2orc | v3-fos-license | PHOTOGRAPHY: THE POTENTIAL OF IMAGE AS ILLUSTRATION TOOL IN CHILDREN'S LITERATURE BOOKS
From the conception of photographic art as a visual text, which narrates, informs and promotes interactions, this article aims to reflect on the potential of photography as an illustrative tool. It is understood that a literary work illustrated with photographic images allows the expansion of the interpretative and dialogical limits between the reader and the work. Adele Enersen's work When my baby dreams was chosen for the investigation: a narrative composed of scenarios created manually using homemade artifacts and captured through photographic records. Under a qualitative approach, the bibliographic research methodology was chosen, carried out through an interpretative analysis that allowed us to observe the means and instruments used in the composition of the work. As a theoretical basis, it relies on the studies of Santaella (2012), Flusser (2000) and Manguel (2001) on materiality, photography and literature, as well as other authors who contemplate these themes. In this perspective, we seek to expand the principles of photographic materiality and identify the potential of its use as literary illustration. It is concluded that storytelling combined with photographic illustrations contributes significantly to reading comprehension and reader involvement, highlighting the materiality of the photographic image as an important characteristic of the literary illustration technique.
Introduction
The image is always multiple, even if it is only one. A double, a cause of alteration in the one before whom it is presented.
It always comes, even if you have witnessed its formation, eager to assert itself, as if it too asked to exist, as an escape from a kingdom where only being and life fit.
María Zambrano (1986, p. 12)
In this text, the image is considered a potential for the expressiveness of language. In view of this, one argumentative strand holds that the image emanates multiple textualities, which provokes or demands literary experiences from the reader, in an intense search for the apprehension of the emitted meanings, full of "verbalized robes" (BAKHTIN, 2003), requiring a dialogical action from the reader.
In this perspective, this study takes as its theoretical basis the conception of language as a form of integration and interaction of the subject with the world around him, with his peers and with himself, configured as a dynamic and constant search whose purpose is expressiveness and social communication through utterances. Thus, language is understood as a manifestation of the word, which, when written, spoken, gesticulated, sung, staged, or even photographed, becomes significant when given from someone to someone else (BAKHTIN, 2012).
When analyzing a work of children's literature composed entirely of photographic images, it is clear that the illustration acquires another perspective for its readers. Therefore, there is a need to reflect on this technique or, as Benjamin (1994) and Newhall (2013) consider it, this art called "photography".
Photographic art can be understood as a visual text, which narrates, informs and promotes interactions. In this regard, this article discusses the potential of photography as an illustrative tool, through a reflection on the materiality of the image as well as the materiality of the printed work. Thus, it appears that a literary work illustrated with photographic images allows the expansion of the interpretative and dialogical limits between the reader and the work.
Photography becomes a form of expression of language, a manifestation of the materiality of a visual text, since it is assumed that language is a form of expression and social representation. For Oliveira (2008, p. 43), the art of illustration present in children's literature books has an inherent language that reached "[...] its fullness as a language [...]" at the beginning of the 20th century, when an illustrated work for children came to be understood beyond words, that is, as a text that expresses, that communicates, that dialogues with the reader.
Thus, it is considered that when a narrative is composed with photographic illustration, it can help the reader expand his or her interpretative and dialogical limits with the visual text, going beyond the limits of a frame. Based on Manguel (2001, p. 291), the photographic image is able to create a discursive and dialogical space with the reader because "[...] it is also a stage, a place for representation" that, in turn, opens itself to the interpretive eyes of the reader.
In this perspective, the objective of this research is to reflect on some of the principles of the materiality of the photographic image as an illustrative tool in printed works of children's literature, based on the analysis of the book Os sonhos do meu bebê, by writer and illustrator Adele Enersen. The work under analysis is a narrative consisting of scenarios created manually using homemade artifacts and captured through photographic records, and the support of these images transitioned from hypertext format to printed form during the work's trajectory in the literary market.
Thus, this article is supported by the studies of Santaella (2012), Santaella and Nöth (1999), Flusser (2000) and Manguel (2001) on photography, materiality and literature, as well as other authors who contemplate these themes and help weave a network of reflections that contribute to a greater understanding of photographic and literary materiality. The analysis followed a qualitative approach and, to this end, the bibliographic research methodology was chosen, through an interpretative analysis that allowed us to observe the means and instruments used in the composition of the work.
For better organization of the proposed reflection, the text is divided into three thematic sections: the first reflects on the materiality of the image, the following section presents the work under analysis together with a reflective appreciation of the photographic illustration, and finally, in a third moment, the materiality of the printed work is discussed.
The materiality of the image
To reflect on the materiality of the image is, above all, to rethink its definition. According to Santaella (2012), it is customary to define an image as a two-dimensional element, such as a drawing, a painting, an engraving, or a photograph, or as a three-dimensional element, such as a sculpture, so that it provokes recognition through the relationships of similarity by which representation occurs.
For Santaella (2012, p. 14), every image "[...] implies a frame and a field". The field refers to the space for the inscription or occupation of the image, while the frame refers to the idea of delimitation, of demarcation. For the author, the word image carries ambiguity and polysemy, as it can be applied to real and visible contexts as well as to non-visual realities. It is therefore possible to speak of the domain of mental images, of perceptible images, and of images as visual representation (SANTAELLA, 2012).
The concept of image as visual representation refers to the fact that such images are elements created and produced by subjects within a given society. Visual representation is an artificial form of creation; that is, it requires the mediation of specific skills, techniques, and instruments, can be inscribed on a given surface or captured by optical resources, and may present itself in motion or be fixed. This perspective suits the discussions of this study, which involves photography, the object of the proposed analysis, given that "[...] although the images represent recognizable figures, these figures have the function of representing meanings that go beyond what the eyes see". The image carries a symbolic aspect that overlays other layers of meaning, beyond the dimension of what lies before the reader's eyes.
Given that subjects live surrounded by matter, the images of the visible world are captured by the eyes and kept in the mind. Reserved in memory, these visual memories are accessed by thought through mental images. Materialized by the imaginary, by thought, or by our hands through art, images belong to the immaterial or the material domain. For Manguel (2001, p. 21), there is no thought that can be realized without the image, for "[...] the images, as well as the words, are the matter of which we are made".
Photography is thus understood as a material representation: an idea or a fantasy that takes shape and becomes matter detectable to the mind and the eyes. Materiality is understood to be an integral part of the image, regardless of the technological procedure that consolidates it. According to Laurentiz (2004, p. 3), the "[…] materiality of the image is not an exclusive consequence of communication technologies, since any image, printed, drawn, photographed, etc., carries this potential in itself".
According to Santaella and Nöth (1999), what was imagined now presents itself as a palpable and visible object because it gains a visual representation, a physical format, a dimension. The authors emphasize that the domain of images is divided as follows: "[...] the first is the domain of images as visual representations: drawings, paintings, prints, photographs and cinematographic, television, holo and infographic images belong to this domain. In this sense, images are material objects, signs that represent our visual environment. The second is the immaterial domain of images in our mind. In this domain, images appear as visions, fantasies, imaginations, schemes, models or, in general, as mental representations. Both domains of the image do not exist separately, as they are inextricably linked already in their genesis. There are no images as visual representations that have not arisen from images in the minds of those who produced them, just as there are no mental images that have no origin in the concrete world of visual objects" (SANTAELLA; NÖTH, 1999, p. 15).
According to Laurentiz (2004), the images of the material and immaterial domains do not exist separately from each other; they coexist, one giving rise to the other. Images of the material field are composed from mental representations, and mental images come from what is visually concrete. Once created and materialized in a way visible to the eyes, the image becomes accessible to the reader who, from then on, will not only read, understand, and interpret what he sees, but also record the observed visual representation in his memories. The mental image becomes a reference for verbal, written, or future visual representations, this time through the reader's mind. For Joly (1994, p. 20), "[...] a mental representation is elaborated in an almost hallucinatory way and seems to borrow its characteristics from the vision".
The composition of a visual representation thus allows a stimulating mediation between what was seen and what was imagined by the reader. Following Joly (1994, p. 13), the image "[...] designates something that, although not always referring to the visible, borrows some features from the visual and, in any case, depends on the production of a subject: imaginary or concrete, the image passes through someone, who produces or recognizes it".
Naves (2019) explains that the image has two distinct functions, which sometimes occupy opposite positions. According to Santaella and Nöth (1999, p. 18), the representation function is associated with an expressive role and must "serve the representation of the world", whereas the communication function is linked to an appellative role and must serve the "mediation of thoughts among people".
Enersen materializes scenarios from diverse representations, such as oceans and lawns, based on memories and visual references stored in her memory. Photography is thus understood as a visual representation of the concrete elements around us, components present in the visual field and signs of our visible world, as Laurentiz (2004, p. 2) explains.
However, the represented images also originate in the immaterial field of our mind, generated in our imagination from fantasies and mental creations. This work of children's literature embodies this immateriality throughout its narrative, guiding the reader on a journey through the dream world of a sleeping baby.
It is thus clear that the two fields do not exist separately: material images and immaterial images are linked together, so that one causes or promotes the existence of the other. In other words, visual representations are created from other images previously existing in the mind of those who produced them, just as mental images are understood to originate from objects belonging to the visual world (LAURENTIZ, 2004).
According to Manguel (2001, p. 21), images, like stories, inform us and communicate, both as mental representations and as visual representations, because for the author every thought process requires images: "[...] the soul never thinks without a mental image".
The work and the photographic illustration
To develop this study, we opted for a qualitative research approach of a descriptive character, a focus well suited to research that works with meanings and other subjective, that is, non-quantifiable or non-measurable, data, as explained by Martins (2015). The aim is to analyze the typographic elements present in the literary work and, from the photographic image, to identify the art made up of colors, shapes, lines, and other visual devices that enable the construction of the narrative and its reading.
Illustration in children's literature books plays a fundamental role in engaging the young reader; hence the choice of Adele Enersen's work, for its creative and original use of photographs as illustrative art in a work for children, which occupied an innovative space in her literary production.
Adele Enersen won the attention of the literary market when she became prominent on the internet through personal photographs published on a blog directed at her own family members. The cover of the work, shown in Image 1 in the English and Portuguese versions, presents the illustrative photograph of a sleeping baby in a fictional landscape composed of fabrics. Here, at the junction of image and text, the reader recognizes that the baby on the cover is presented by his mother or father; in this sense, Joly (1994) points out that image and word need and complement each other to function properly.
The cover design was developed by Jennifer Rozbruch, a space in which the identity of the work under analysis can be seen, using a photograph of a baby sleeping on a backdrop made of fabrics. The image displays a landscape at once fictitious and real in the eyes of a child, in which the visual representation of a starry sky evokes the idea of a child dreaming. The initial image awakens a feeling of warmth, perhaps due to the use of an orange blanket to represent the moon, the place where the baby sleeps, and a navy blue rug likely representing the immensity of the sky on a dark night.
In the first pages of the book, the author dedicates herself to presenting the main character of the story, Mila, her newborn daughter. There is a brief introduction with comments on the origin and elaboration of the work. On one page there is the text and on the other the photograph of the sleeping daughter's face. Text and image are surrounded by an illustrative dotted line, the same line that also surrounds the drawings of two butterflies accompanying Mila's photograph, a framing device that runs through the entire work.
Entering the pages of the book, one notes that the story is built on Enersen's constant effort to create a dialogical relationship between image and text. The author and illustrator conceives a narrative based on photographs, striving to bring the visual text closer to the written text by inventing scenarios composed of artifacts recognizable to readers' eyes; the choice of household items such as blankets, cloths, and pillows accentuates the effect of familiarity with the content of the work. She endeavors to bring fantasy images closer to the reality described in the narration, and all the landscapes are composed of artifacts that awaken in the reader a feeling of warmth combined with a sense of familiarity.
The creative and distinctive character of Enersen's art is undeniable: her work brings together images rich in detail, varied colors, and fantastical scenarios attested by photographic reality. Enersen's gaze is defined by sensitivity, and the images clearly reflect the illustrator's profile, her devotion to motherhood, and her closeness to photographic art, resulting in a genuine harmony between the visual narrative and the writing.
One can observe the subjects photographed, the elements used, the colors of the images, the material chosen to compose the scenes, the specific characteristics of the drawings, and the preference for everyday objects such as fabrics, materials familiar to the reader, devices that increase the chances of bringing reader and work closer together by making the stories more familiar, stimulating, and attractive.
Books that value and consider children's sensory needs play an essential role in "[...] the reader's cognitive, affective and motor development", says Naves (2019, p. 95). The choice of concrete artifacts as expressiveness and art thus influences the reader's attention, in addition to favoring aesthetic appreciation.
We reflect our own identity and the reality that surrounds us; we create based on who we are and what we have already learned and experienced in life. What we produce is based on what we see, believe, hear, and know, and through art we find a channel for the expression of ideas, ideologies, and worldviews, declare Nikolajeva and Scott (2011).
In this sense, Ramos (2013) explains that it is important for those who deal with children's books to draw closer to the universe of images. Children, even very young ones, already understand the language of visual representations because "[...] they are at a stage of development in which the sensations, linked to the shapes, colors and textures, are still at their fingertips" and have not suffered excessive influence from rationalization, explains Ramos (2013, p. 41), who adds how essential it is that the mediator of this reading process, that is, the reader who mediates between the book and the child, be aware of the potential of visual narratives.
Illustration loaded with meanings is capable of embracing the reader, creating connectivity, an interaction between reader and narrative that draws the child, who even at a very young age already understands the language of images, into the context of the story with its colors and shapes. Following Benjamin (2009, p. 69-70), the child, uncensored by sense, not only observes and interprets the images but also penetrates the "[...] colorful splendor of the pictorial world", being received as a participant in the illustrated narrative, which enchants, surprises, and captivates the reader with its colorfully adorned backdrops.
Images, like texts, communicate and promote dialogue. In addition to enabling multiple interpretations, the image allows the reader to see beyond what is framed; as Alberto Manguel (2001, p. 29) states, the image as a work of art is "[...] a device to communicate ideas, sensations, a vast poetry". We thus understand that the photographic image narrates and conveys a message, an idea. From the perspective of the gaze, the photographs invite the reader to reflect, insofar as they lead him to perceive "beyond the image".
The photographic art found in Enersen's work has the potential to connect with the 21st-century generation, because these are records that offer readers a distinctive illustration, capable of enchanting through both reality and fantasy. In this regard, the historian Mauad (1996, p. 2) clarifies that photography plays "[...] the role of an instrument of a documentary memory of reality".
Photography, even when fanciful, thus lends an effect of legitimacy to the illustration. Santaella (2012) names this effect a principle called testimony: a testimony of the real, a proof that the object was there in front of the camera, an image that cannot be denied because the photograph attests to its presence in that given time and space; from this derives the documentary power of the photographic image.
The materiality of the printed work
The use and practice of photography in the daily life of today's population has become a feature of culture. The work was composed from photographic images, building a narrative that maintains this characteristic. Enersen's literary production, printed and illustrated with photographic images, thus represents a remarkable property of contemporaneity; that is, it presents itself as a material record of cultural change, as Donald F. McKenzie (1999, p. 28) puts it.
It is known that the book, like the photograph, is aimed at a recipient who can later access this object and assume the role of reader of the information saved and preserved in it (MAUAD, 1996, p. 9). For Goulart (2016, p. 70), the book in its printed form triggers materiality, an act of remembrance based on the affective links built by the relationship between reader and book.
According to Goulart (2011), the moment of appropriation or understanding of reading becomes unique for the reader, and it can occur in complicity with the book as object. In the attempt to avoid the possible disappearance of information and/or reading experiences that mark a significant time, an affective link forms between the reader and the materiality of a work. For Borges (1985, p. 12), this is justified by the fact that the printed work becomes an "extension of memory and imagination", which also dialogues with Soares (2016, p. 153), who describes the printed work as a device that eternalizes records, persists in time, and becomes something stable, therefore monumental.
Another aspect that represents the characteristic of materiality in Enersen's work is the transformation of its originally digital content into a printed work. Initially, the photographic images created by Enersen were presented only on computer screens, published on the pages of a personal blog whose address appears on the back cover of the printed book. According to the author and illustrator, the act of photographing her daughter Mila, still a newborn, was just a hobby, and the blog was a way to share images of her baby with her friends and family.
Image 3: Print screen from Adele's online blog. Source: Author Adele Enersen's blog: http://milasdaydreams.blogspot.com
Content exposed as hypertext has a multilinear and multi-sequential format. Through the blog platform, the screen space allows the author to change, insert, or remove content whenever she wishes, also making room for the participation of readers, who add comments freely alongside Enersen's images and texts. The educator Soares (2002, p. 151) points out that "[...] hypertext is dynamic and is perpetually in motion".
For Naves (2019), a work adapts to the needs and requirements of the market; that is, the book reflects and represents a culture. The work under analysis displays such characteristics through several elements, among them the use of photography and the devices recorded in the scenarios that tell the story. The technical image, currently found in both printed and digital media, shows itself capable of following a market need, adapting and diversifying to appear in scientific, journalistic, or even literary media. As Santaella (2012, p. 82) writes, "[...] with digital media, photos also migrated to computer screens. With that, they lead to the last characteristic consequences that photography has always brought with it since its birth: nomadism and ubiquity".
In this way, photography stands out as a differential made evident to the reader from the very presentation of the cover, in which a photographic image of a sleeping baby, registered by Enersen, carries this relevant and significant characteristic, capable of increasing the reader's interest in the still unknown content inside the work.
It is noteworthy that, in addition to the materiality of the image already discussed in this article, there is also the materiality made present through the printing of the images on palpable pages with a defined structure. Enersen's work has several editorial devices that emphasize the materiality of both print and image, enabling dialogue, exploration, and interaction with the reader and allowing the production of meanings through the handling of pages and the visualization of elements familiar to the reader's universe, ensuring that the book is part of a sensory experience "marked by equally playful and pleasurable situations", as stated by Silva and Chevbotar (2016, p. 61).
Enersen's creations have won thousands of admirers around the world, and her photographs of such creativity later resulted in the publication of her first book. Following Soares (2002, p. 154), the creation of the book, edited and printed, reveals to readers and admirers of Enersen's art a new material, an object, palpable and defined. The printed work now has a dimension, a linearity, a structure, a sequence, a number of pages, a totality, for its owner can identify its beginning, middle, and end. The book represents a structural unit; it is a physical object that grants the author the materialization of her words, explains Goulart (2016, p. 69).
Regarding the role of the editorial team in this process, Goulart (2016, p. 71) describes print and text as distinct from each other, since "[...] the authors do not write books, they write texts which are transformed into books, artifacts thought and designed by an editorial team". The art that first belonged only to the writer or illustrator comes, after the editorial work of creation, to belong to these new agents. Soares (2002) reports that the printing technology used in the production of literary works today brings new actors to the book industry and, specifically, to the work under analysis.
According to Soares (2002, p. 153-154), the printed work becomes not only stable and monumental but also something controlled, because "[...] they create many and several instances of control of the text" that intervene in and regulate its production. The production of Enersen's printed work thus involves an editorial team, with members such as a translator, an original preparer, a proofreader, a cover designer, an interior designer, and even a second illustrator.
The author produces the text, but the editorial team produces the physical book. As described by Chartier (1994; 1999), there is a distinction between text and print, between textual production and the work of fabricating the book: authors do not write books, they write texts that are transformed into books, artifacts thought and designed by an editorial team.
The book materialized as a printed work comes to present evident structural characteristics, details that guide the eyes through the colors of the pages and the illustrated themes, in short, "clearly defined limits", explains Soares (2002, p. 150). The printed book offers the possibility of creating reading protocols, which in the work under analysis are the colors, the predominance of a single theme maintained every three double pages, and the visual harmonization created by an illustration complementary to the photographs.
The editorial work ensures that the colors, drawings, and strategic positions of texts and images guide and enchant the eyes and imagination of the reader, who follows this journey of events and colorful images with each page turned. According to Walter Benjamin (2009, p. 69), the magic and enchantment that connects the reader to the illustrations is described as follows: "But it is not the things that jump from the pages towards the children, it is the child himself who penetrates the colorful splendor of the pictorial world". For Benjamin (2009), the child penetrates the colored images, becoming a participant in the illustrated story. The book thus presents itself as an object loaded with information and ideas, offering support to the message transmitted by the image; in this respect, the printed book represents a cultural sign that displays man's desire to fix what society writes and reads, say Goff and Nora (1976). The book object stores, records, registers, and materializes the words, intentions, ideas, and images of a society and its culture, with the characteristics of a specific time. McKenzie (1999) points out that the book presents itself as a document of cultural transformation.
The book is designed and materialized for handling by the reader, which did not happen on the screen. The opportunity to manipulate Enersen's work allows the reader to become familiar with the book object, as it permits the discovery of new properties and characteristics through the image, as Ribeiro et al. (2016, p. 91) point out.
The materiality of printed books allows manual and exploratory use of the object, favoring the reader's approach to the work. The possibility of handling printed works brings the child closer to the book, for according to Silva and Chevbotar (2016), the child learns from sensory explorations. Ramos (2013) likewise describes children as being at a stage of development in which emotions and senses are sensitive to stimuli, often enhanced by interaction and pleasure.
The materiality of the printed book also allows an adult to mediate the literary reading process for children. This practice plays a fundamental role because it indicates, for younger readers, "the use for which the book object was created", in addition to demonstrating how to use the book, with the turning of pages and proper handling (RIBEIRO et al., 2016, p. 90).
The physical structure of the work directs the reader to its handling: the book object is palpable, presents texture and a format, and, even when filled with texts and illustrations, the printed book is above all an object that invites its owner to a certain behavior of interaction.
Enersen's book, in printed form, allows physical contact with the materiality of a visual narrative, letting the small reader handle, turn, and leaf through its pages. According to Sampaio and Lima (2015, p. 21), this interaction creates possibilities for the child to establish meaningful relationships with the narrative, make inferences, and explore more details by looking at and examining the illustrations in her own way, holding the book with her own hands.
When the materiality of the book is considered as a space for the production of meanings, it is believed that the first contact that triggers the act of reading happens in the exteriority presented by the work (GOULART, 2016). The subject-reader draws on the sensations that the printed work can offer, a sensory reading taking place there, explains Martins (1986). The book object, in its materiality, invites the reader to certain postures, choices, and uses, and it does so because "[...] before being a written text, a book is an object; it has shape, color, texture, volume, smell. You can even hear it if you flip through its pages" (MARTINS, 1986, p. 42).
Final considerations
In this study, we sought to highlight the use of photography as literary art from the material aspect of the image in the printed work of children's literature. Through a reflexive analysis, it was shown that the image materialized by Enersen makes it possible to expand the interpretative limits of the work, since the photograph represents the record of a unique moment, the capture of an instant charged with senses.
It was observed that Enersen's choice of photographic illustration is close to the language of today because it takes advantage of new technologies. Ramos (2013, p. 133) explains that "[...] technologies collaborate to change narrative forms and discourses" and points out the need to delight and surprise young readers, so as to captivate them and foster the reading habit.
As a result, it was noticed that the use of photographic illustrations in a literary work favors the transmission of knowledge, benefiting the approximation between reader and work, whether printed or digital. The materiality of the image in printed works of children's literature, which can be characterized by its different supports, brings textuality and an aesthetic appreciation that calls for reading comprehension and influences the modes of interaction between the reader and the context of the narrative.
It is concluded that the act of storytelling combined with the reading of photographic illustrations contributes significantly to the reader's understanding and involvement. The materiality of the photographic image in printed works of children's literature brings relevant characteristics and elements to literary art, for it constitutes a language, an expressiveness, that calls for proximity between the reader and the materiality of the image and promotes the production of meanings through interaction and integration between the real and the imaginary world, between the reader, the visual text, and the book object. Highlighted as a constitutive element of a visual representation, the photographic image can thus contribute to the technique of literary illustration, expanding the aesthetic and symbolic potential of the image as narrative art.
Enersen photographed Mila, her newborn daughter, asleep in colorful and fanciful settings, resulting in creative and inventive images that have won millions of admirers around the world. Enersen's creations thus became a book, a printed work illustrated with photographic images. The work, with the original English title When my baby dreams, was first launched in 2012 in English-speaking countries by Balzer & Bray. In Brazil, in the same year, the work was launched by the publisher Sextante, translated into Portuguese by Angélica Lopes as Os sonhos do meu bebê.
Image 1: Book front cover: When my baby dreams (United States edition) and Os sonhos do meu bebê (Brazilian edition). Source: English - https://www.amazon.com.br/When-My-Baby-Dreams-English-ebook/dp/B01764RR80; Portuguese - https://www3.livrariacultura.com.br/sonhos-do-meu-bebe-os-29607895/p?utmi_cp=8787&adtype=pla&utmi_cp=8102&gclid=CjwKCAjw6qqDBhB-EiwACBs6x0K57yBJcbXn3xhuh0V08c1eCYM8axjrmoKIXRpoQmU2p4nmruIXlxoCfwEQAvD_BwE
Image 2: Pages 1 and 2 of Adele's Brazilian version book. Source: Researchers' files | 2022-07-21T15:20:22.981Z | 2022-06-28T00:00:00.000 | {
"year": 2022,
"sha1": "3bc8fb690b98372830ff3120fe1ec9893a52f0a7",
"oa_license": "CCBY",
"oa_url": "https://periodicos.unespar.edu.br/index.php/sensorium/article/download/4557/4881",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "709ae2f2404624e159d7f6fb3ada6ef7c609ce1d",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
235808852 | pes2o/s2orc | v3-fos-license | Rapid immunoassay and clinical evaluation of the SARS‐CoV‐2 antibody assay on the real express‐6 analyzer
Abstract We developed a rapid and simple magnetic chemiluminescence enzyme immunoassay on the Real Express‐6 analyzer, which can simultaneously detect immunoglobulin G and immunoglobulin M antibodies against the SARS‐CoV‐2 virus in human blood within 18 min, and which was evaluated in clinical studies to verify its clinical efficacy. We selected blood samples from 185 COVID‐19 patients confirmed by polymerase chain reaction and 271 negative patients to determine the clinical detection sensitivity, specificity, stability, and precision of this method. We also surveyed the dynamic variance of viral antibodies during SARS‐CoV‐2 infection. This rapid immunoassay test has huge potential benefits for rapid screening of SARS‐CoV‐2 infection and may help clinical drug and vaccine development.
| INTRODUCTION
Near the end of 2019, many cases of unexplained pneumonia occurred in Wuhan City, Hubei Province. The illness spread quickly throughout the city and eventually over the entire country. 1 By early January 2020, it was confirmed that it was an acute respiratory infection caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), with the disease being named coronavirus disease 2019 (COVID-19). 2 However, the virus soon found its way around the world, and by the beginning of March 2020, the World Health Organization (WHO) officially labeled the disease a pandemic. 3 As of April 2021, SARS-CoV-2 had spread to 223 countries, and there had been 147 539 302 confirmed cases, including 3 116 444 deaths. 4 SARS-CoV-2 spreads by human-to-human transmission and mostly affects elderly and immunocompromised persons. 5 The rapid spread of SARS-CoV-2 has caused considerable damage to public health and the economy. 6,7 In the absence of treatment for this virus, accurate and rapid diagnosis of SARS-CoV-2 is the cornerstone of efforts to control the epidemic and save people's lives. Currently, detection of viral nucleic acid by real-time polymerase chain reaction (RT-PCR) has become the standard diagnostic method for COVID-19. 8,9 However, the performance of RT-PCR depends on many factors, such as sample collection skill, sample type, disease progression, and the quality and consistency of the PCR assay used. 10,11 Therefore, there is an urgent need for a rapid, simple-to-use, sensitive, and accurate test to identify patients infected with SARS-CoV-2 and prevent virus transmission.
Early diagnosis, isolation, and treatment are essential to cure the disease and control the epidemic. Antibody detection is of great significance in the diagnosis of infected patients and helps to identify the stage of the infection. 12 On this basis, we developed a magnetic chemiluminescence enzyme immunoassay test product that can detect IgG and IgM simultaneously in human blood within 18 min. Here, we retrospectively describe 456 serum samples examined by IgG/IgM antibody detection. All samples are from HwaMei Hospital, University of Chinese Academy of Sciences. This study may provide a reference for the clinical profile of SARS-CoV-2 patients confirmed by antibody detection and may help to further investigate the potential relationship between immune antibodies and disease progression.
| Antibody detection
The SARS-CoV-2 antibodies (IgG and IgM) of the subjects were detected with the SARS-CoV-2 antibody test kit on the Real Express-6 analyzer.
We redetected the IgG and IgM concentrations of known positive and negative plasma samples with the SARS-CoV-2 antibody test kit.
| Precision of the SARS-CoV-2 antibody test kit
A negative sample pool, approximately 30 ml, was prepared by combining leftover antibody-negative samples (IgG < 278.8 U/ml, IgM < 6.6 U/ml). Similarly, critical positive (278.8 U/ml < IgG < 320 U/ml, 6.60 U/ml < IgM < 7.59 U/ml) and medium/strong positive (IgG > 320 U/ml, IgM < 7.59 U/ml) pools, approximately 30 ml each, were prepared by diluting a positive clinical sample with part of a negative sample. Aliquots of 400 μl were prepared from each pool and frozen at −20℃. Two controls and three samples containing different concentrations of analyte were assayed in duplicate, with two runs per day, one lot of reagent for each run, and two replicates per run. The repeatability and between-lot precision study was performed by assaying each sample with one lot of reagent 10 times. The between-day precision study was performed by thawing each respective aliquot to room temperature and running it over 20 days. The mean and SD were calculated for each sample, and the coefficient of variation (CV) was determined as CV (%) = (SD × 100)/mean.
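As a minimal illustration of the precision statistic defined above, the following Python sketch computes the coefficient of variation for one pooled sample; the replicate values are invented for demonstration and are not data from this study.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = (SD x 100) / mean, as defined in the precision protocol above."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return sd * 100.0 / mean

# Hypothetical replicate readings (U/ml) for a medium/strong positive pool.
replicates = [310.2, 305.8, 298.4, 312.1, 301.9, 307.3, 299.5, 304.0, 308.8, 302.6]
print(f"mean = {statistics.mean(replicates):.1f} U/ml, "
      f"CV = {coefficient_of_variation(replicates):.2f}%")
```

A CV computed this way can then be checked against an acceptance criterion such as the CV ≤ 15% reported later in this paper.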
| Cross-reactivity of the SARS-CoV-2 antibody test kit
In total, samples from patients positive for other pathogens, including Influenza A virus antibodies, respiratory syncytial virus antibodies, EBV VCA IgM, CMV IgG, and C. pneumoniae IgM, were tested to evaluate cross-reactivity (Table 1).
| CT examination and image analysis
The patients underwent chest CT examinations on admission. All CT images were reviewed independently by two experienced radiologists. The image features included lesion distribution, local or bilateral patchy shadowing, lesion density, and interstitial abnormalities. Additionally, the CT scan was obtained every 5 days or in case of deterioration during hospitalization.
| The stability studies
To evaluate the stability of the SARS-CoV-2 antibody test kit, we tested the IgG and IgM levels of two samples (n = 2) stored at three different temperatures (Figure 2). When stored at room temperature, these samples were stable for 7 days, with unchanged IgG and IgM levels. In addition, the samples were stable for at least 60 days when stored at −20℃, with IgG and IgM levels consistent with those at the other storage temperatures.
| The cross-reactivity studies
The cross-reactivity studies for the SARS-CoV-2 IgG and IgM test kits were designed to evaluate potential cross-reactants, and the results are shown in Table 1. The cross-reaction of the IgG and IgM test kit with Influenza A virus antibodies was 8.33%, lower than that of colloidal gold (25.00%). The IgG presented a cross-reaction of 0.00% with respiratory syncytial virus antibodies, lower than colloidal gold and IgM (6.67%). The cross-reactions of both IgG and IgM were 11.76% with EBV VCA IgM, lower than colloidal gold (17.65%). Similarly, the cross-reactions of IgG and IgM were 10.00% and 0.00%, respectively, for CMV IgG, compared with 20.00% for colloidal gold. In addition, the cross-reactions of IgG and IgM were 7.14% and 14.29%, respectively, for C. pneumoniae IgM, significantly lower than that of colloidal gold (21.43%). Taken together, these results indicate the low cross-reactivity of the IgG and IgM assays compared with colloidal gold.
| Precision study of SARS-CoV-2 IgG and IgM test kit
To investigate the precision of the SARS-CoV-2 IgG and IgM test kit, we examined three aspects: repeatability, between-lot, and between-day precision. The results are summarized in Table 2. The stability of the SARS-CoV-2 IgG/IgM test kit is good and little affected by temperature. For the cross-reaction, we used the SARS-CoV-2 IgG/IgM test kit and colloidal gold to detect the IgG and IgM levels against 10 viruses and found that the positive rate of the IgG/IgM kit was lower than the colloidal gold result.
| Antibody level and Chest CT features
For precision, the negative detection rate of negative samples was 100%, the positive detection rate of borderline positive samples was more than 95%, and the positive detection rate of medium/strong positive samples was 100%, with CV ≤ 15%. Meanwhile, the IgG positive rate was always higher than that of IgM, a phenomenon also observed in the study by Zhang et al. 13 Based on the above, we analytically and clinically evaluated the qualitative assay and report that it performs reliably and precisely, consistent with manufacturer specifications. Currently, viral nucleic acid RT-PCR, CT imaging, and hematology parameters are the primary tools for clinical diagnosis of the infection. 14 Chest CT has been proposed as an ancillary approach for screening individuals with suspected COVID-19 pneumonia during the epidemic period and for monitoring treatment response according to dynamic radiological changes. 15 Although detection of viral RNA by either RT-PCR or sequencing is the gold standard for COVID-19 diagnosis, it still suffers from limitations such as being labor-intensive and time-consuming. 16,17 Testing for SARS-CoV-2-specific antibodies in the blood of patients is a good choice for rapid, simple, and highly sensitive diagnosis of SARS-CoV-2. 18 Serologic tests could provide much-needed insight into the adaptive immune response against SARS-CoV-2, the exposure history of an individual, transmission patterns, and potential donors of convalescent plasma. 19 Therefore, we also studied the dynamic variance of viral antibodies during SARS-CoV-2 infection. We found that the IgG seroconversion was earlier than that of IgM, which is similar to Long et al. 20 Moreover, the antibody levels increased rapidly during the first two weeks. Studies have found that IgM antibodies appear about 2 weeks after infection, while IgG antibodies last for months or even years. 21 Another study showed that the IgM antibody appeared within 1 week after SARS-CoV-2 infection and persisted for 1 month or even longer, while the IgG antibody is usually produced in about 10 days. 12 In addition to its diagnostic value, our study revealed a strong negative correlation between clinical severity and antibody levels. 22
TABLE 1 Cross-reactivity of non-SARS-CoV-2 viruses
The IgM and IgG could be used to understand the epidemiology of SARS-CoV-2 infection and to help determine the level of humoral immunity in patients. 23
| CONCLUSION
We developed a rapid SARS-CoV-2 IgG/IgM antibody test using magnetic chemiluminescence enzyme immunoassay technology. | 2021-07-13T21:51:56.815Z | 2021-07-13T00:00:00.000 | {
"year": 2021,
"sha1": "aeaf7f26850dc47a30455a9dc868b3a24e81124e",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8426859",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "e73db18a540444dd1281662f59fdcafa3b9cc03a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
45651609 | pes2o/s2orc | v3-fos-license | Intranasal Remifentanil as an Adjunct to Oral Midazolam Sedation in Pediatric Dental Patients
Introduction
Oral sedatives are often required for children who require dental restoration. Most children are able to receive this in the dental office, often using nitrous oxide with minimal sedation to facilitate the procedure. However, there are children who will not cooperate during the examination process or during the actual dental procedure without sedation [1]. This can be due to young age, behavior issues or non-compliant behavior. As a result, these children are scheduled for oral moderate sedation in the office setting. We routinely use oral midazolam as our sedation method. Midazolam has a wide safety margin and is effective in about 80% of the patients [2]. We use a dose of 1 mg/kg up to a maximum of 20mg, wait 30 minutes where the child is monitored in the preoperative area before going into the operating room.
In 2012 and 2013, there was a national drug shortage affecting sedatives and other anesthetic agents. This resulted in several months of very short supply of oral midazolam. With a waiting list of over 3 months, cancelling these children's procedures would have added burden for them and their families. We decided to evaluate an alternative sedation regimen that would conserve oral midazolam use. In the past, we have used intranasal sufentanil as an adjunct to oral midazolam; however, during this drug shortage period, the only parenteral opiate available was remifentanil. Remifentanil is a newer synthetic opiate that has a rapid onset and a very short half-life of 8 minutes [3,4].
In addition, remifentanil has a very short and stable context-sensitive half-time. As a result, it has been used intravenously in pediatric patients, including critically ill neonates. It has pronounced cardiac stability and is a potent respiratory depressant [5]. It is usually administered by infusion; IV bolus use has also been reported, but the risks of apnea and muscle rigidity have limited this approach [6,7]. There appears to be very limited experience with using remifentanil intranasally. A paper by Verghese et al. [8], published in 2008 in Anesthesia and Analgesia, reported on intranasal remifentanil and intubating conditions in a study involving 188 children aged 1 to 7 years. Remifentanil was dosed at 4 mcg/kg after induction of anesthesia, and some patients had blood levels checked for kinetic analysis. Peak plasma levels occurred after 4 minutes, and intubating conditions were superior in the remifentanil group compared to placebo. There were no side effects or complications noted secondary to the remifentanil [8]. We initiated a quality assessment (QA) review process to evaluate this new adjunct sedation medication. We were interested in the efficacy, side effects, and effective dose. The aim of this report is to describe our experience with intranasal remifentanil as an adjunct to oral midazolam sedation.
Methods
For the QA process, a nurse not involved in the clinical care of the child collected data prospectively. Data collection included patient demographics, drug dosing and administration times, sedation quality, number of dental procedures and complications such as desaturation, bradycardia, muscle rigidity or tachyphylaxis.
After this QA process was completed and the results discussed in our department, we obtained IRB approval for publication of the data from a retrospective review of the QA database. The sedation method included 0.7 mg/kg oral midazolam (maximum dose 14 mg) with routine ASA monitoring. The first dose of remifentanil was given in the preoperative area with the parents present. Remifentanil solution does not cause pain on administration [9]. Five minutes later the child was taken to the operating room and monitored with pulse oximetry, heart rate, and non-invasive blood pressure. Oxygen was delivered via nasal cannula. Naloxone and flumazenil were immediately available for intranasal administration if required.
The concentration and dose of remifentanil and the number of administrations evolved as we gathered more experience with the technique. Initially we used 50 mcg/ml, giving 1 or 2 doses of 1 mcg/kg, and eventually we used 100 mcg/ml at a 2 mcg/kg dose, up to 4 doses as required. The maximum dose used was 60 mcg. The 1 mg vial of remifentanil powder was dissolved in saline to produce the desired concentration. Then 0.6 ml of remifentanil was drawn up into 1 ml luer lock syringes for use during the day; all unused remifentanil was wasted and documented after the day was complete. All doses were weight based and drawn into luer lock syringes. All remifentanil was administered using a disposable mucosal atomization device (MAD®, LMANA). This device attaches to a luer lock syringe and deposits a fine spray during administration, ensuring an even spread of the medication onto the mucosal surface of the nose. Further doses were given as deemed indicated by the operating dentist.
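The weight-based arithmetic described above can be summarized in a short sketch. The function below mirrors the high-dose schedule (2 mcg/kg from a 100 mcg/ml dilution, capped at 60 mcg and 0.6 ml); the helper name and the worked example are illustrative only, not clinical guidance.

```python
def in_remifentanil_dose(weight_kg, dose_mcg_per_kg=2.0, conc_mcg_per_ml=100.0,
                         max_dose_mcg=60.0, max_volume_ml=0.6):
    """Return (dose_mcg, volume_ml) for one intranasal administration.

    A sketch of the weight-based arithmetic in the text; not clinical guidance.
    """
    dose = min(weight_kg * dose_mcg_per_kg, max_dose_mcg)
    volume = dose / conc_mcg_per_ml
    if volume > max_volume_ml:
        raise ValueError("volume exceeds the 0.6 ml intranasal limit")
    return dose, volume

# Example: a 20 kg child receives 40 mcg per dose, drawn up as 0.4 ml.
print(in_remifentanil_dose(20))  # -> (40.0, 0.4)
```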
Sedation quality was assessed using the Richmond Agitation Sedation Score (RASS), which the QA observer recorded at various times during the procedure. The dentist and the observer also independently rated the overall quality of the sedation on a visual analogue scale (VAS), 1-10.
The database review yielded data on 74 patients who received oral midazolam and intranasal remifentanil. Patient demographics are shown in Table 1. The mean patient age was 5.5 years. Each patient had a median of 3 teeth procedures performed during the sedation. The drug doses are shown in Table 2. There was significant variation in the dosing of remifentanil, as we changed our dosing schedule several times. The midazolam dose was within the dosing parameters of this technique. Remifentanil dosing was divided into three different schedules (Table 3): the initial evaluation (low, n = 11), the second dosing schedule (intermediate, n = 10), and the final evaluation (high, n = 53). Dose escalation occurred through both an increased dose and an increased number of remifentanil intranasal administrations. Most of our reported experience is with the high dose regimen, with a mean total remifentanil dose of 1.8 mcg/kg.
Procedure times are shown in Table 4. The mean procedure time was 30 minutes, with a discharge time of 55 minutes. We compared this to data from our sedation QA database for full dose oral midazolam (data taken from the six-month period prior to remifentanil use, n = 83). The full dose midazolam had a mean discharge time of 70 minutes, significantly longer (p < 0.01) than for the IN remifentanil patients.
The quality of the sedation appeared to be significantly better in the high dose group of patients (Table 5). The RASS scores between the three groups were similar for oral dosing of midazolam, the initial IN remifentanil dose, and entering the operatory. The high dose group had significantly better RASS scores for the procedure (p = 0.004) compared to the other dosing schedules. The benefit was also noted initially in the recovery room (p = 0.02). The dentist's VAS assessment (Table 6) was significantly better for the high dose group (p = 0.004), as was the observer's VAS assessment (p = 0.001).
Two patients experienced desaturation episodes (Table 7): one in the low dose group (Patient 9) and one in the high dose group (Patient 25). There were no cardiac complications. There were no complaints of nausea or vomiting, chest rigidity, or tachyphylaxis. Comments noted during the QA review included several concerning the short duration of effect (10-12 minutes) of the IN remifentanil and the need to re-dose before the effect had worn off; in fact, the next dose was most effective if given before the child became uncooperative.
A pharmacokinetic simulation plot (Excel spreadsheet, based upon the published kinetic data in Table 8), shown in Figure 1, demonstrates the changes in remifentanil blood levels when administering 4 doses of intranasal remifentanil in the manner we have reported. The rapid onset of the IN remifentanil allows a quick increase in the blood level, which also falls quickly due to the short half-life. The blood level increased with each IN administration, indicating that a steady state had not yet been achieved.
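To make the shape of that simulation concrete, here is a minimal one-compartment sketch in Python rather than Excel. The elimination rate constant reflects the 8-minute half-life cited in the introduction; the absorption rate constant and volume of distribution are assumed values chosen so the single-dose peak falls near the reported 4 minutes, not the fitted parameters of Table 8.

```python
import math

def simulate_in_remifentanil(dose_times_min, dose_mcg_per_kg=2.0,
                             ka=0.6,                 # assumed absorption rate (1/min)
                             ke=math.log(2) / 8.0,   # 8-min elimination half-life
                             vd_l_per_kg=0.35,       # assumed volume of distribution
                             t_end=60.0, dt=0.5):
    """Superpose Bateman (first-order absorption/elimination) curves for
    repeated intranasal doses; returns (times, concentrations in mcg/L)."""
    times = [i * dt for i in range(int(t_end / dt) + 1)]
    conc = []
    for t in times:
        c = 0.0
        for td in dose_times_min:
            if t >= td:
                tau = t - td
                c += (dose_mcg_per_kg / vd_l_per_kg) * ka / (ka - ke) * (
                    math.exp(-ke * tau) - math.exp(-ka * tau))
        conc.append(c)
    return times, conc

# Four doses roughly 12 minutes apart, as in the re-dosing pattern described.
times, conc = simulate_in_remifentanil([0, 12, 24, 36])
peak = max(conc)
print(f"peak ~ {peak:.1f} mcg/L at t = {times[conc.index(peak)]:.1f} min")
```

Consistent with the text, each successive dose in this sketch starts from a higher residual level, so the peaks climb without reaching a steady state within the session.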
Discussion
In light of recent drug shortages, this QA report and the subsequent data analysis review the off-label use of intranasal remifentanil for moderate sedation in the pediatric population. In this study, the higher dose schedule of 100 mcg/ml at a 2 mcg/kg dose, up to 4 doses, provided adequate sedation to these patients with minimal side effects. In the high dose group the depth of sedation was consistent with moderate sedation (RASS: -1 or -2), and the procedures were all completed satisfactorily.
Among all the patients receiving this treatment, only two episodes of desaturation (to 90%) occurred, with no intervention required. In addition, no episodes of tachyphylaxis, nausea, or chest rigidity were observed. Intranasal adjunct medication is an attractive option due to the ability to give multiple doses without requiring the cooperation of the patient [10][11][12]. Intranasal medications often have a faster onset; in this case, remifentanil's peak effect is within 5 minutes, which facilitates a safe titrate-to-effect method, as the peak effect can be seen before the next dose is given. This onset delay also means the intranasal approach reduces the risk of apnea and rigidity that could be seen with bolus remifentanil [8]. The remifentanil must be diluted to an appropriate concentration; the optimal volume for an intranasal medication is less than 0.5 ml, as larger volumes result in greater unpredictability because excess drug is swallowed, delaying its effect and exposing it to first-pass metabolism. The volume we used for all cases was a maximum of 0.6 ml, irrespective of the dose given.
Whenever a drug must be mixed, the risk for error increases; moreover, a small volume error of 0.1 ml could result in a dose error of 25% [13]. When using potent opiates as part of a sedation method, extra care must be taken with the preparation and administration. A simple reminder that the volume should never be greater than 0.6 ml could help limit the risk of a severe overdose.
The use of the MAD improves the distribution of intranasally administered medications. There is a small dead-space volume (0.05 ml) that should not be a problem unless very small volumes are being used [14].
The pharmacokinetic simulation demonstrates, however, that this dosing regimen can still result in remifentanil levels that could cause apnea. Higher blood levels of remifentanil in a patient who has also received a benzodiazepine further increase this potential risk. The study by Verghese et al. [8] demonstrated no problems using 4 mcg/kg as a single dose; however, in those patients apnea was not a problem and was actually desired to facilitate the intubation process, which was not the endpoint in our analysis. We used a step-wise dosing schedule to evaluate the effects of remifentanil, increasing the individual dose as well as the number of doses in a structured manner to ensure that safe sedation was given. The high dose remifentanil schedule appears to be a safe and effective dosing method, using 3-4 doses with at least 5 minutes between each subsequent dose.
Remifentanil is one of the higher-cost sedation agents available at present. A 1 mg vial from our supplier costs about $70; if this can be used between multiple patients, the cost may be acceptable. This report has several limitations. It is a retrospective review of a QA database. There was also no randomization of patients nor blinding of the observer or dentists. The patients were a convenience sample that presented to the clinic on days we were able to do the QA analysis and as such are reflective of our sedation population; however, our population may be significantly different from those in other university or private offices. Therefore, a single-center study may not be generalizable. This is the first report of the use of intranasal remifentanil as a sedative in pediatric patients. There may be several other opportunities for such a rapid-acting, titratable, painless, non-IV parenteral sedation method, such as patients requiring short painful procedures, patients with chronic pain needing breakthrough management, or as an anesthesia premedication.
Conclusion
As a result of the national drug supply shortage, this review explored a unique modality for the administration of remifentanil in a pediatric dental population. Intranasal remifentanil appeared safe and effective; however, it was labor intensive due to the multiple-dose iteration schedule we utilized for safety reasons. Due to the small sample size and limited demographic, further prospective randomized studies are necessary in both the adult and pediatric populations. | 2019-03-17T13:08:25.665Z | 2017-08-29T00:00:00.000 | {
"year": 2017,
"sha1": "2e02103b98ac6a10f6d991c97492eb77de0242bc",
"oa_license": "CCBY",
"oa_url": "https://juniperpublishers.com/jaicm/pdf/JAICM.MS.ID.555618.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "25d51d7ffaaa4dea6ae2c37c7b11bef82f3a9942",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256277003 | pes2o/s2orc | v3-fos-license | Detailed genetic and clinical analysis of a novel de novo variant in HPRT1: Case report of a female patient from Saudi Arabia with Lesch–Nyhan syndrome
Background: Hypoxanthine-guanine phosphoribosyltransferase (HPRT1) deficiency is an inborn error of purine metabolism responsible for Lesch–Nyhan syndrome (LNS). The disease is inherited in an X-linked recessive manner and predominantly affects male individuals. Female individuals can carry a mutation as heterozygotes, but they are typically asymptomatic because of the random inactivation of the affected allele. Nevertheless, although rare, heterozygous female individuals may manifest LNS with its full characteristics. Herein, we describe a female patient from Saudi Arabia with LNS. Results: The patient (a 4-year-old girl) presented with typical characteristics of the disease, which include global developmental delay, self-mutilation, hyperuricemia, hypotonia, speech delay, spasticity, and seizures. Her general biochemical laboratory results were normal except for high levels of uric acid. The abdominal MRI/MRS, mostly unremarkable, showed bilateral echogenic foci within the renal collecting system. Genetic testing (whole-exome sequencing, iterative variant filtering, segregation analysis, and Sanger sequencing) pointed to a novel de novo frameshift variant in HPRT1. An X-inactivation assay using HpaII showed the presence of a 100% skewed X chromosome carrying the affected allele. RT-PCR of the cDNA indicated complete loss of the expression of the normal allele. Conclusion: Our study presents a female patient with a severe case of LNS, found to be the 15th female patient with the disease in the world. The study emphasizes the need for a streamlined protocol that will help the early and accurate diagnosis of female LNS patients, to avoid unnecessary interventions that lead to costly patient care.
In this study, we report a novel de novo frameshift mutation that led to LNS in a female patient from Saudi Arabia. The HPRT1 deficiency was confirmed based on previously described functional studies (Hara et al., 1982;Ogasawara et al., 1989;van Bogaert et al., 1992;Yukawa et al., 1992;Aral et al., 1996;De Gregorio et al., 2000;Rinat et al., 2006).
Case report
A female patient from a nonconsanguineous Saudi family was recruited from a medical genetics clinic at the King Faisal Specialist Hospital and Research Centre (KFSHRC) ( Figure 1A). Peripheral blood samples were collected into EDTA tubes from the affected girl and her parents after obtaining the signed informed consent approved by the institutional review board (KFSHRC Research Advisory Council, RAC#2120022). Skin biopsy collected from the affected individual was used for primary skin fibroblast culture.
Molecular genetic analysis
Genomic DNA (gDNA) was isolated from blood using a Gentra® Puregene DNA Purification Kit (Gentra Systems, Inc., Minneapolis, MN, US), according to the manufacturer's instructions. Whole-exome sequencing (WES) was performed on the patient's DNA, as described previously (Aldhalaan et al., 2021; Scala et al., 2022). To confirm the result, gDNA samples were amplified by PCR using HPRT1-specific primers (forward GTGAAAAGGACCCCACGAAG and reverse CAAATTATGAGGTGCTGGAAGGA). The total RNA was extracted from the cultured fibroblasts using a QIAamp® RNA Blood Mini Kit (QIAGEN®, Hilden, Germany). cDNA was synthesized using a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific Corp.). HPRT1-specific primers targeting the variant site were used for RT-PCR (forward CAAAGATGGTCAAGGTCGCA and reverse ACAGTTTAGGAATGCAGCAACT). Direct sequencing of PCR and RT-PCR products was performed on an ABI PRISM 3100 Genetic Analyzer (Thermo Fisher Scientific Corp.), according to the manufacturer's recommendations. Quantitative RT-PCR experiments were performed with a quantitative SYBR green qPCR assay (Thermo Fisher Scientific Corp.). The HPRT1 primers used for RT-PCR were utilized to quantify the level of mRNA. The experiments were performed on an ABI PRISM 7700 cycler (Thermo Fisher Scientific Corp.) using the PCR-efficiency-corrected −ΔΔCt method. The expression levels were normalized to GAPDH (FW: 5′-TGC ACCACC AAC TGC TTA GC-3′; REV: 5′-GGC ATG GAC TGT GGT CAT GAG-3′, GenBank NM_002046), as described previously (Livak and Schmittgen, 2001; Schmittgen, 2001). Variant frequency was obtained from the beta gnomAD browser and an in-house database (n = 2,379).
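As an illustration of the relative quantification step just described, the following is a minimal sketch of the 2^(−ΔΔCt) computation of Livak and Schmittgen (2001); all Ct values in it are hypothetical placeholders, not measurements from this study.

```python
# Relative expression by the 2^(-ΔΔCt) method (Livak and Schmittgen, 2001).
# All Ct values below are hypothetical placeholders for illustration only.

def delta_delta_ct(ct_target_sample, ct_ref_sample,
                   ct_target_calibrator, ct_ref_calibrator):
    """Fold change of the target gene relative to a calibrator,
    normalized to a reference gene (e.g., GAPDH)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample              # normalize sample
    delta_ct_calibrator = ct_target_calibrator - ct_ref_calibrator  # normalize calibrator
    ddct = delta_ct_sample - delta_ct_calibrator
    return 2.0 ** (-ddct)

# Example: HPRT1 in patient fibroblasts vs. a control, normalized to GAPDH.
fold_change = delta_delta_ct(ct_target_sample=28.4, ct_ref_sample=18.1,
                             ct_target_calibrator=24.9, ct_ref_calibrator=18.0)
print(f"Relative HPRT1 expression: {fold_change:.3f}")
```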
X-inactivation analysis
Analysis of the X-inactivation pattern was performed, as previously described (Torres and Puig, 2017). Briefly, the total genomic DNA was extracted from whole blood and fibroblast of the index case. The DNA was incubated with and without the HhaI restriction enzyme at 37°C for 16 h. Amplification of each sample was performed with the digested and undigested genomic DNA using FAM-labeled primers targeting the human androgen receptor (AR) (sense: 5′-TCCAGAATCTGTTCCAGAGCGTGC-3′ FAM; antisense: 5′-GCTGTGAAGGTTGCTGTTCCTCAT-3′) to distinguish between maternally and paternally inherited alleles. The resulting products from the digested and undigested PCR were run on an ABI 3130 genetic analyzer and analyzed by GeneMapper 4.0 software (Thermo Fisher Scientific Corp.).
Clinical features
The patient is a 4-year-old Saudi girl born to a nonconsanguineous couple after a full-term uneventful pregnancy via vaginal delivery (Figure 1A). Her mother is a 30-year-old Moroccan woman, and her father is a 42-year-old Saudi man who has seven unaffected children from his previous marriage. There was no family history of a similar condition. The patient's prenatal and natal histories were all normal. At 2 years of age, she was referred to our institution for global developmental delay. According to her mother, she was neither rolling over nor able to sit, walk, or say any words. Based on the patient's status, her developmental age was estimated at 3-4 months. At 3 years of age, because of COVID-19 precautionary measures, she received a consultation over the phone. She had developed self-mutilation and had an elevated serum uric acid level. After contacting her father, she was admitted for lip surgery to address her injuries at a local hospital. Molecular testing was utilized to assess her case. According to her WES results, a heterozygous variant in HPRT1 was identified as the most plausible candidate matching her clinical phenotype. At 4 years of age, she displayed abnormal movements and was admitted electively to the hospital for investigation. During her hospitalization course, she was advised to see an occupational therapist to assess her self-mutilation, and she was offered a splint to help decrease her injuries. Her abnormal movements were then investigated by brain magnetic resonance imaging (MRI) performed under general anesthesia; the MRI revealed no abnormality. A routine electroencephalogram (EEG) was performed for 20 min and showed a continuous EEG recording with the normal anterior-to-posterior gradient. The EEG indicated mild non-specific cerebral dysfunction over the left temporal head region. Her abdominal ultrasound (US) showed a normal liver and spleen, but the urinary bladder was under-filled. Her kidneys demonstrated a slightly small right kidney for her age and bilateral echogenic foci within the renal collecting system (Figures 1B, C). These findings could represent uric acid deposition versus early nephrocalcinosis; there was no apparent shadowing that would indicate renal stones. She had an elevated serum uric acid level of 370 µmol/L. Her CBC and chemistry results were all within normal ranges (Table 1). Ophthalmologist consultation showed intermittent exotropia. Allopurinol was given to the patient to control the uric acid level. The therapeutic modalities are merely supportive rather than curative, such as the use of a wheelchair for mobility, protective masks and elbow restraints to prevent self-injuries, and reconstructive surgeries.
Molecular genetics
The result of the WES analysis revealed a single conceivable candidate, a novel de novo heterozygous frameshift variant in exon 8 of HPRT1 (NM_000194.2: c.539delG:p.Gly180Aspfs*10), which is found neither in gnomAD nor in the in-house Saudi exomes (n = 2,379). The variant lies in the highly conserved catalytic domain of HPRT1. Sanger sequencing of DNA demonstrated both the normal and the mutant HPRT1 alleles in the affected female patient, while her parents did not carry the mutant allele, indicating that the mutation is de novo (Figure 1D). The variant was classified, according to the American College of Medical Genetics (ACMG) guidelines, as likely pathogenic, with the criteria being PVS1 (null frameshift variant) and PM2 (absent from gnomAD exomes). Sanger sequencing of the amplified genomic DNA (Figure 1D) revealed a one-base deletion in the patient. Such a change presumably leads to a premature stop codon and causes a severely truncated protein (Figure 2A).
RT-PCR analysis (Figure 2B) did not show any significant size difference on an agarose gel (2%). However, Sanger sequencing of the cDNA prepared from the total RNA extracted from the cultured fibroblasts of the affected female patient revealed the complete absence of the mRNA of the normal allele and showed only the transcription of the abnormal allele of HPRT1 (Figure 2C). The absence of a normal allele implicates the presence of non-random inactivation of the X chromosome. Based on the assumption that there is non-random inactivation of the affected allele, we utilized a popular inactivation assay to test our hypothesis and checked the methylation pattern using the HhaI restriction enzyme. The methylation status of this methylation-sensitive enzyme's restriction sites near the polymorphic CAG repeat in the first exon of the human androgen receptor (AR) locus correlates with X chromosome inactivation. Hence, we analyzed the X-inactivation pattern of blood DNA from the affected patient and her parents. Moreover, the same experiment was performed using the fibroblast DNA, which was available only from the affected individual. The results are shown in Figure 3A. When the genomic DNA from the whole blood samples was amplified without HhaI digestion, two polymorphic alleles at the AR locus [AR1 (260) and AR2 (280)] can be seen, as shown in Figure 3A. The assay indicated that the AR1 allele is from the mother and the AR2 allele is from the father. However, after HhaI digestion of the blood from the affected girl and the mother, only the AR1 allele could be amplified, indicating the presence of non-random X-inactivation in the patient (Figure 3A).
We encountered a female patient from Saudi Arabia showing typical characteristics of the disorder. Throughout her developmental stages, the 4-year-old girl showcased the typical phenotype of LNS, including general developmental delay, self-mutilation, hyperuricemia, hypotonia, delayed speech, spasticity, and seizures (Table 1). Her EEG results revealed a mild non-specific cerebral dysfunction over the left temporal head region. The results of the US revealed a small right kidney for her age and bilateral echogenic foci within the renal collecting system. In our case, the function of one of the HPRT1 alleles was disrupted by a de novo frameshift mutation (c.539delG) in HPRT1, whereas the transcription of the normal allele was inhibited by the non-random inactivation of the second X chromosome, similarly to previously published female cases.
A wide range of genetic mutations has been detected across all 14 previously reported cases and our case (Figure 3B). Two female cases reported by Aral et al. (1996) and De Gregorio et al. (2000) exhibited nonsense mutations, p.Arg170* and p.Tyr153*, respectively. Only one female patient was identified to carry a stop mutation (Yamada et al., 1996). Two missense mutations (p.Glu14Lys and p.Tyr72Cys) were detected in two different female patients; interestingly, these patients were diagnosed at a later age compared to the other cases. There was only one splice site mutation (c.609+4A>G), reported twice (Jinnah et al., 2000; De Gregorio et al., 2005). One patient carried a severe translocation [46,XX,t(X;2)(q26;p25)] and exhibited by far the most characteristics of LNS reported in the literature (Rinat et al., 2006). There was only one microdeletion of HPRT1, reported by Ogasawara et al. (1989). The previously reported cases and mutations are listed in Supplementary Table S1.
The classic phenotype of LNS in male individuals includes the neurological sequelae, hypotonia, athetoid movements, intellectual disability, dysarthria, and self-mutilation. These are the prominent signs of LNS and are important to differentiate for the correct diagnosis of patients at an early stage. Such features have been reported in many female cases as well. Yet, because of the rare occurrence of the syndrome in female patients, it is often missed or misdiagnosed (Fu et al., 2014), or it is confused with neurological disorders such as muscular degenerative disorders, psychological disorders, or several types of cerebral palsy (Hara et al., 1982; Yukawa et al., 1992; De Gregorio et al., 2005). An example is the case of a female patient reported by Hara et al. (1982). The patient was brought to the hospital owing to delays in her developmental stages, with various symptoms and signs that align with the classical phenotype of LNS. Nonetheless, she was only evaluated for LNS at the age of 5, after exhibiting involuntary self-mutilation leading to the loss of half of her lower lip. It was reported that the girl showed complete deficiency of HPRT1 with a threefold increase in APRT activity, which is a classical characteristic of LNS and has been observed in male subjects as well. Enzyme assay is used to confirm the diagnosis of LNS by assessing HPRT1 (2 ± 0.39 μmol/g Hb/min) and APRT (0.41 ± 0.17 μmol/g Hb/min) enzyme activities; the HPRT1 enzyme activity in erythrocytes is used as a confirmatory test. Moreover, laboratory examinations are used to assess the signs and symptoms of LNS, such as the dramatically high levels of uric acid due to absent purine recycling, exceeding the normal values of 208-428 μmol/L in male and 155-357 μmol/L in female individuals.
A decreased level of HPRT1 enzyme activity in erythrocytes has been detected, with a subsequent increase in APRT enzyme activity, in some patients (Ogasawara et al., 1989; van Bogaert et al., 1992; Yukawa et al., 1992; Yamada et al., 1994; Jinnah et al., 2000; De Gregorio et al., 2005; Rinat et al., 2006). On the other hand, almost all cases showed an increase in serum uric acid levels; similarly, an increased serum uric acid level (403 µmol/L) was observed in our patient. A subsequent consequence of hyperuricemia in LNS patients is kidney micro-injuries and kidney stones (nephrolithiasis); if these conditions go untreated, they can eventually progress to kidney failure. Such consequences have been reported in two female cases: a 29-year-old suffered from nephrolithiasis in one of her kidneys, a high serum uric acid level of 583 µmol/L, and true gout, and a two-month-old presented with advanced bilateral nephrolithiasis and acute renal failure.
From reviewing the published cases, it can be said that the most severe characteristics and signs have been seen in two female cases (Hooft et al., 1968; Hara et al., 1982). Both patients had a complete deficiency of HPRT1 activity, which could be an indicator of deterioration in their cases. Enzyme deficiency has been associated with psychomotor delays in LNS in male individuals, whereas, in our case, severe dystonia and neurological sequelae, choreoathetotic movements, mild to severe mental deterioration, early onset of psychomotor delay, and psychological impairment were observed.
In conclusion, genetic testing, inclusive of an X-chromosome inactivation assay, coupled with serum analysis and the basic clinical manifestations of LNS, can be useful for an early and quick differential diagnosis of the disease in female patients. LNS should be considered in female patients exhibiting its typical characteristics, as is done for male patients.
Data availability statement
The data analyzed in this study is subject to the following licenses/ restrictions: The data cannot be publicly released due to patient confidentiality. Requests to access these datasets should be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by the King Faisal Specialist Hospital and Research Centre, Office of Research Affairs, Research Advisory Council, and Research Ethics Committee. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Author contributions
NK and ZR conceptualized, designed, and supervised the project, and were involved in genetic analysis and drafting the manuscript. AA and HA performed the experiments and were involved in writing the manuscript. JA and SA were involved in acquisition of clinical data and reviewed the patients' charts. ZR supervised and drafted the clinical descriptions.
"year": 2022,
"sha1": "16daa186f81b3719f537f57e29a674411e64bb19",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2022.1044936/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "16daa186f81b3719f537f57e29a674411e64bb19",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Application of Deep Learning in Photovoltaic Array Fault Identification
A photovoltaic power station cannot operate normally without its photovoltaic array, which also accounts for about half of the cost of the photovoltaic system. Failures and low efficiency are the two main causes of energy loss in photovoltaic power plants, and among them photovoltaic array failure is undoubtedly one of the important causes. In this article, the author uses the strength of deep neural networks in fault identification to build a fault identification model for photovoltaic arrays.
Research Purposes
Photovoltaic power generation is a new-energy power generation technology whose energy conversion, from light energy into electrical energy, is accomplished by the photovoltaic effect of semiconductor materials. A photovoltaic power generation system generally includes four parts: the combiner box, the photovoltaic array, the controller, and the inverter. The photovoltaic array is composed of a large number of solar panels connected in series and packaged together; the photovoltaic power generation device adds a power controller on top of the photovoltaic array [1].
Photovoltaic systems often suffer efficiency drops in engineering applications, mainly because photovoltaic arrays are particularly vulnerable to adverse external environments. Field data show that, owing to interference from the external environment, photovoltaic systems typically lose about 20% of their efficiency; that is, their actual efficiency is only about 80% of the nominal array efficiency [2]. According to published research, the main culprit for low photovoltaic system efficiency is the mismatch problem caused by shading [3]. Output characteristics of photovoltaic modules that deviate from expectation are generally caused by dust, aging, partial shading [4][5], and similar factors that reduce array efficiency; this is the mismatch phenomenon, and it significantly reduces the power generation efficiency of the photovoltaic system. Therefore, faults in the photovoltaic system must be identified quickly and accurately so that the mismatch problem and the hot spot effect can be mitigated, thereby improving system reliability. The working status of the photovoltaic array directly determines the output power of the photovoltaic system and is its central component. Consequently, achieving high-efficiency photovoltaic power generation while minimizing accident rates requires real-time detection and diagnosis of photovoltaic modules.
Photovoltaic Array Failure Analysis
In the commissioning of a project, some problems in the materials, design, and technology of the photovoltaic array are difficult to detect, yet the array will gradually fail over time under site conditions. The failure types of photovoltaic arrays are roughly as follows. (1) Short-circuit fault. One of the most common faults in photovoltaic arrays is a short circuit of the components: mechanical vibration and similar damage to the batteries inside the array, unfavorable weather causing local corrosion that damages the insulation, erroneous manual wiring, and other causes can all easily produce short circuits. If a large-scale short-circuit fault occurs in a photovoltaic power station, it can very easily cause a fire. Therefore, regularly checking for short-circuit faults in the photovoltaic array is imperative and is a top priority in the inspection of a photovoltaic power station.
(2) Open-circuit fault. If an open-circuit fault occurs, the affected array path is inevitably broken; normal power supply basically cannot be maintained, resulting in lost electrical energy. (3) Aging failure. When photovoltaic cells work under load for a long time, it is difficult to guarantee that the array materials will not corrode and deteriorate, and aging cannot be avoided, given that photovoltaic power generation systems mainly operate under unfavorable external conditions such as beach and desert areas and remote mountainous regions. Aging of a photovoltaic array includes both the aging of the packaging materials and the aging of the photovoltaic cells themselves. Degradation of the packaging material is mainly caused by long-term ultraviolet radiation, and various aging phenomena appear, for instance the backplane and the photovoltaic cell encapsulation film (EVA) turning yellow. The cells themselves also age to some extent over time, depending on the production process and the cell type. Aging is an irreversible process: it not only lowers the output power of the photovoltaic array but also accelerates its damage under uninterrupted long-term use.
(4) Cover fault. Self-shading, temporary shading, and building shading are the different kinds of cover at a photovoltaic power station. Self-shading refers to shadows cast between the photovoltaic arrays themselves (left, right, front, and rear) or shading by equipment such as combiner boxes. Temporary shading refers to partial shading from external factors such as dust, snow, fallen leaves, and bird droppings, which must be cleaned when necessary. Building shading generally concerns surrounding vegetation, nearby buildings, and so on; careful initial planning of the power station can largely avoid these failures. The hot spot effect easily occurs when the photovoltaic array is shaded and left untreated for a long time, and the material encapsulating the array surface is then damaged. Although this fault is recoverable, the output power of the photovoltaic system is still dissipated.
Data-driven Fault Identification Methods
At present, fault identification mainly relies on computer-based analysis. Existing data are acquired with sensor technology or other collection methods, and data-driven algorithms produce the identification results. Compared with traditional methods, this class of methods greatly improves the accuracy of fault identification and fault type classification. The most commonly used data-driven fault identification methods include statistical methods, signal processing techniques, and machine learning together with deep learning methods.
The most common signal processing techniques for fault identification are the FFT (Fast Fourier Transform) and wavelet decomposition. The FFT is a fast algorithm developed from the DFT (Discrete Fourier Transform). Because the FFT has good sensitivity in the frequency domain and can promptly detect the vibration signal triggered by a fault, it is applied in fault identification. For example, in [6] the authors use the Fourier transform to obtain the spectral characteristics of the inverter's instantaneous voltage for inverter fault detection and identification, using the spectral difference as the feature quantity and thereby making a good judgment on normally closed faults. Wavelet decomposition is a decomposition method based on the wavelet transform of the acquired waveforms. It can analyze frequency-domain content locally in the time domain and remedies a shortcoming of the traditional Fourier transform, which cannot handle non-stationary signals. Wavelet decomposition has filtering performance good enough to process the data smoothly: it can eliminate the noise and interference, that is, the various unstable components, in real data, so as to obtain more accurate and simplified data. For example, [7][8] use wavelet decomposition in data preprocessing, and fault detection is completed by extracting wavelet coefficients.
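As an illustration of the FFT-based feature extraction described above, the following is a minimal sketch; the signal, sampling rate, and frequency band are synthetic placeholders rather than measurements from [6].

```python
import numpy as np

# Minimal sketch: FFT spectral features from a sampled electrical signal,
# in the spirit of the spectral-difference approach of [6]. Synthetic data.

fs = 10_000                         # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)       # 100 ms analysis window
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size    # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# A simple feature: energy in a band where a fault signature is expected.
band = (freqs >= 100) & (freqs <= 500)
band_energy = np.sum(spectrum[band] ** 2)
print(f"Band energy (100-500 Hz): {band_energy:.4e}")
```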
In fault identification systems, statistical methods are efficient and intelligent, but their disadvantage is a heavy reliance on expert knowledge. Statistical methods generally include correlation analysis, variance, mean, regression analysis, and difference analysis. Correlation analysis examines the degree of correlation between variables, and Pearson coefficient analysis [9] is the most widely used. Linear correlation is judged according to the value of the correlation coefficient

ρXY = Cov(X, Y) / √(D(X) D(Y)),

where Cov(X, Y) is the covariance of X and Y, D(X) and D(Y) are the variances of X and Y, and |ρXY| ≤ 1. The value of |ρXY| reflects the degree of correlation between the variables X and Y: the larger the value, the stronger the correlation, and vice versa. When |ρXY| = 0, there is no linear relationship between X and Y. Correlation analysis selects candidate feature values from the variables according to their degree of correlation; this method usually does not require expert advice. Variance, mean, and difference analysis are analytical methods that directly and concretely express the variation of a sample. Linear and nonlinear regression belong to regression analysis, which reflects the relative changes of several variables. Reference [10] modeled a real case and realized fault warning and identification with a linear regression algorithm.
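The following is a minimal sketch of how the Pearson coefficient can screen candidate features against a fault label, with |ρXY| > 0.5 as an illustrative threshold; the data are synthetic placeholders.

```python
import numpy as np

# Minimal sketch of Pearson-correlation feature screening. The features,
# labels, and the 0.5 threshold are illustrative assumptions only.

rng = np.random.default_rng(0)
n = 200
voltage = rng.normal(30.0, 2.0, n)          # candidate feature 1
current = rng.normal(8.0, 0.5, n)           # candidate feature 2
label = (voltage < 28.5).astype(float)      # toy fault indicator

def pearson(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

for name, feat in (("voltage", voltage), ("current", current)):
    rho = pearson(feat, label)
    print(f"{name}: rho = {rho:+.3f}, selected = {abs(rho) > 0.5}")
```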
Research on deep learning and machine learning methods is relatively mature; as a branch of artificial intelligence algorithms, they further improve on earlier fault identification approaches. Machine learning explores how to give machines a learning ability similar to that of humans. It has achieved remarkable results, is applied in many fields, and offers good classification performance. The basic binary classification structure distinguishes accurately and effectively between the normal state and the fault state; to discriminate each specific fault type, several binary classifiers can be combined. The SVM (Support Vector Machine) [11] is the most frequently used machine learning method in this domain. The basic SVM model is a maximum-margin linear classifier [12]: it separates data in the feature space and realizes classification by discriminating between the fault state and the normal operating state. Reference [13] demonstrates the superiority of the support vector machine for fault identification. Through continuous innovation, traditional neural networks overcame the gradient-diffusion problem and gave rise to deep learning methods. Abstract, iterated transformation of the received signal is a salient feature of deep learning [14]. Machine learning and deep learning methods divide into deep models and shallow models. Shallow models are not effective at extracting data feature values, because they generally contain only one or two nonlinear transformation layers, so their limitations are greater. Deep learning can better approximate complex functions: it adds hidden layers, trains layer by layer, and uses the training results of the previous layer as the training input of the next layer. Not only can the gradient-diffusion problem be effectively contained, but the deeper characteristics of the data itself can also be fully learned. The development of deep learning algorithms has gone through a long process, with computer vision the earliest application area. Deep learning networks perform outstandingly at feature extraction, especially in image classification and processing, and so they have many applications in the fault identification of various kinds of equipment; for example, [15][16][17] use various deep learning algorithms to complete fault identification for industrial equipment. Considering the practical problems of specific working environments, such as strong coupling, nonlinearity, and large volumes of high-order complex data, it is not easy for conventional methods to meet the demands of the work, so introducing deep learning methods into the fault identification of various components has undoubtedly become a hot topic.
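As an illustration of the SVM-based two-class structure discussed above, here is a minimal sketch using scikit-learn; the synthetic (voltage, current) features and class centers are assumptions for demonstration, not measured photovoltaic data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Minimal sketch of an SVM normal/fault classifier. Synthetic features stand
# in for measured (voltage, current) pairs from a photovoltaic array.

rng = np.random.default_rng(1)
normal = rng.normal([30.0, 8.0], [1.0, 0.3], size=(300, 2))
fault = rng.normal([24.0, 6.5], [1.5, 0.5], size=(300, 2))
X = np.vstack([normal, fault])
y = np.array([0] * 300 + [1] * 300)     # 0 = normal, 1 = fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # maximum-margin classifier
print(f"Test accuracy: {clf.score(X_te, y_te):.3f}")
```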
Photovoltaic Cell Model Construction and Simulation
Photovoltaic arrays are usually assembled from many solar cells and are among the most important parts of a photovoltaic system. The smallest power generation unit in a photovoltaic system is the solar cell, which generates an electromotive force by absorbing light energy through the photovoltaic effect. The current produced by a single solar cell cannot meet production needs, so the photovoltaic array connects multiple solar cells in series and in parallel to scale up the electrical energy. The basic structure of a solar cell does not change in the series or parallel configuration, so an understanding of the photovoltaic array can be obtained by analyzing a single solar cell. A solar cell can be treated as equivalent to a nonlinear small DC power supply; however, considering realistic operating conditions and the cover fault among the main photovoltaic array faults discussed above, the cell may be shaded, so this paper establishes a double-diode (double-exponential) mathematical model [18][19]. Under partial shading, the photovoltaic cell becomes reverse biased; once the reverse voltage accumulated across the PN junction is too high, a rapidly increasing current forms and, in severe cases, the PN junction is easily broken down. In the model, I_D is the current flowing through the diode (unit A), I_ph is the photocurrent (unit A), and R_s and R_sh are the series and parallel equivalent resistances (unit Ω).
In the formula, I_s is the reverse saturation current of diode D; V_br is the avalanche breakdown voltage (unit V); A is the quality factor of diode D; T is the reference temperature, 300 K; α and n are avalanche breakdown characteristic constants; q is the electron charge, 1.6×10⁻¹⁹ C; and K is the Boltzmann constant, 1.38×10⁻²³ J/K. The bypass diode protects the cell against the negative voltage formed in a partially shaded state and prevents the avalanche breakdown effect; therefore, the avalanche term in equation (4.1) is generally ignored in circuits with bypass diodes. In practice, I_ph and I_s vary continuously with irradiance and temperature. Based on the above mathematical model, the system simulation of the photovoltaic array is completed on the Matlab/Simulink platform; the simulation diagram is shown in Figure 4.2.
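To make the double-diode model concrete, the following sketch solves the implicit current-voltage relation numerically, ignoring the avalanche term as discussed above for circuits with bypass diodes; all parameter values are illustrative assumptions, not fitted values from this paper.

```python
import numpy as np
from scipy.optimize import brentq

# Double-diode cell model without the avalanche term. Illustrative parameters.
q, k = 1.6e-19, 1.38e-23        # electron charge (C), Boltzmann constant (J/K)
T = 300.0                       # reference temperature (K)
Iph, Is1, Is2 = 5.0, 1e-9, 1e-6 # photocurrent and diode saturation currents (A)
A1, A2 = 1.0, 2.0               # diode quality factors
Rs, Rsh = 0.01, 100.0           # series / shunt resistance (ohm)
Vt = k * T / q                  # thermal voltage

def residual(I, V):
    """Kirchhoff balance: zero at the operating current for terminal voltage V."""
    Vd = V + I * Rs
    return (Iph
            - Is1 * (np.exp(Vd / (A1 * Vt)) - 1.0)
            - Is2 * (np.exp(Vd / (A2 * Vt)) - 1.0)
            - Vd / Rsh
            - I)

voltages = np.linspace(0.0, 0.6, 41)
currents = [brentq(residual, -50.0, Iph + 1.0, args=(V,)) for V in voltages]
for V, I in list(zip(voltages, currents))[::10]:
    print(f"V = {V:5.2f} V  ->  I = {I:6.3f} A")
```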
Current-voltage Characteristic Analysis
The output characteristics of the photovoltaic array, that is, its current-voltage characteristics, often deviate from the ideal case, generally because of shadows and component aging. In particular, when the hot spot effect occurs in a photovoltaic panel, the power supply efficiency of the array is reduced by the sharp rise in temperature, which in severe cases can bring down the entire system. Figure 4.3 simulates the current-voltage curve at different temperatures, with the light intensity held at the standard irradiance (G_r = 1000 W/m²). It is apparent that temperature influences the short-circuit current and the open-circuit voltage of photovoltaic cells to different degrees: as the temperature increases, the short-circuit current increases (a positive correlation), while the open-circuit voltage decreases (a negative correlation).
Fig.4.3 I-V curves at different temperatures
Irradiance is the energy source of the photovoltaic array, and temperature also affects the power generation efficiency of the photovoltaic system. The current-voltage characteristics of photovoltaic modules change with irradiance as well. Figure 4.4 shows the current-voltage curves obtained under different irradiance conditions, with the temperature held at the standard 25°C.

DNN Structure. A DNN (Deep Neural Network) is a multi-layer neural network structure with good application value in classification projects [20]. It has the advantages of processing raw data directly and of actively learning and extracting feature values, and it can meet the demands of many practical engineering applications. A DNN comprises a multi-layer network: the first layer is the input layer, the middle layers are hidden layers (which can be configured according to the actual situation), and the last layer is the output layer.
Fig.4.5 DNN model
Suppose the input vector of the input layer is X_i, L is the number of layers of the network, u_lj is the input of the j-th neuron of the l-th layer, Y_lj is the output of the j-th neuron of the l-th layer, and Y_l is the output of the l-th layer. Then the output of any neuron in any layer can be expressed as Y_lj = f(u_lj) = f(Σ_i w_lji Y_(l−1)i + b_lj), and, generalizing to the entire layer, the output of the l-th layer can be expressed as Y_l = f(W_l Y_(l−1) + b_l), where W_l and b_l are the weights and biases of the l-th layer and f is the activation function. Deep learning models generally have a multi-layer structure with nonlinear relationships, so compared with shallow models their training rate and classification accuracy are higher. The activation function supplies exactly this nonlinearity; differentiability and monotonicity are also indispensable features of an activation function. For example, when the activation function is f(x) = x (a linear transformation), there is essentially no difference between a shallow neural network and a deep neural network, and the training rate and accuracy do not change; when the activation function is nonlinear, the neural network becomes a nonlinear model. The neural network itself has good training properties and feature extraction ability: by training to approximate the functional relationship between the expected output and the input, the network can be made to approach complex functions to the greatest possible extent. The differentiability of the activation function is required by the gradient computation in the SGD (stochastic gradient descent) algorithm, and monotonicity can be fully satisfied by ensuring that the output function of a shallow network is a convex function.
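The layer-by-layer forward pass Y_l = f(W_l Y_(l−1) + b_l) described above can be sketched in a few lines; the layer sizes, random weights, and sigmoid activation here are illustrative assumptions (a real model would be trained with SGD as discussed).

```python
import numpy as np

# Minimal sketch of the DNN forward pass Y_l = f(W_l Y_{l-1} + b_l).
# Layer sizes and weights are illustrative, untrained placeholders.

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # differentiable, monotonic activation

layer_sizes = [2, 16, 16, 4]          # (I, V) input -> two hidden layers -> 4 fault classes
weights = [rng.normal(0, 0.5, (m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

def forward(x):
    y = x
    for W, b in zip(weights, biases):
        u = W @ y + b                 # u_l: input to layer l
        y = sigmoid(u)                # Y_l: output of layer l
    return y

x = np.array([0.8, 0.6])              # normalized current-voltage feature vector
print("Class scores:", np.round(forward(x), 3))
```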
Identification Results and Analysis
According to the current-voltage characteristic curves, irradiance and temperature both affect the current and voltage of the photovoltaic array. The influence of temperature on the output power P is calculated and shown in Table 5.1, and the statistics of the power variations caused by irradiance are shown in Table 5.2. According to the data in the tables, both irradiance and temperature cause large changes in the output power of photovoltaic modules, so it is necessary to classify the types of module faults under the various conditions. The selected DNN network structure is shown in Figure 5.1, the DNN algorithm flow chart is shown in Figure 5.2, and the test results for power varying with irradiance are shown in Table 5.3. The results in Table 5.3 show that when the current-voltage pair is used as the eigenvalue matrix, the system accuracy is the highest. Figure 5.3 shows the trends of the loss function and accuracy with current-voltage as the input.
Fig.5.2 DNN algorithm flow chart
The experimental treatment of the influence of temperature on the output power follows the same principle as above; the current-voltage eigenvalue matrix is again used with the DNN algorithm for training, and the accuracy obtained is 95.76%. The trend compared with the traditional MLP algorithm is shown in Figure 5.4; similarly, Figure 5.5 shows the training trends of the two algorithms under different irradiance. It is apparent that the advantages of the DNN algorithm over the MLP algorithm are reflected in the accuracy of fault detection and the reduction of the test-sample loss function. This article first surveyed the main fault types currently present in photovoltaic arrays, then analyzed the newer data-driven fault identification methods, and then simulated the different operating states of photovoltaic modules based on the mathematical model of the photovoltaic array. The current-voltage curves under different conditions and their effect on the output power P make it possible to complete the fault identification of the photovoltaic array. Relying on the deep neural network, this paper provides a photovoltaic array fault identification model based on the DNN algorithm. The DNN method is well suited to conditions with a large number of fused eigenvalues and fits photovoltaic module fault identification well. However, this article did not try other, possibly better-performing intelligent algorithms for the fault identification of photovoltaic arrays; similar ideas could also be carried over to more in-depth research on photovoltaic inverter circuits.
"year": 2021,
"sha1": "3fb1313f678ce44f62cc0353cab238143d1dfef6",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1852/4/042085/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "77fc0853366e0ef533b8f6de76eb0fdf769eb77e",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
Another Generalization of the Reed-Muller Codes
The punctured binary Reed-Muller code is cyclic and was generalized into the punctured generalized Reed-Muller code over GF(q) in the literature. The major objective of this paper is to present another generalization of the punctured binary Reed-Muller code. Another objective is to construct a family of reversible cyclic codes that are related to the newly generalized Reed-Muller codes.
II. q-CYCLOTOMIC COSETS MODULO n AND AUXILIARIES

To deal with cyclic codes of length n over GF(q), we have to study the canonical factorization of x^n − 1 over GF(q). To this end, we need to introduce q-cyclotomic cosets modulo n. Note that x^n − 1 has no repeated factors over GF(q) if and only if gcd(n, q) = 1. Throughout this paper, we assume that gcd(n, q) = 1.
Let Z_n = {0, 1, 2, ..., n − 1}, denoting the ring of integers modulo n. For any s ∈ Z_n, the q-cyclotomic coset of s modulo n is defined by

C_s = {s, sq, sq², ..., sq^(ℓ_s − 1)} mod n,

where ℓ_s is the smallest positive integer such that s ≡ sq^(ℓ_s) (mod n); thus ℓ_s = |C_s| is the size of the q-cyclotomic coset. The smallest integer in C_s is called the coset leader of C_s. Let Γ_(n,q) be the set of all the coset leaders. We then have C_s ∩ C_t = ∅ for any two distinct elements s and t in Γ_(n,q), and

∪_{s ∈ Γ_(n,q)} C_s = Z_n.

Hence, the distinct q-cyclotomic cosets modulo n partition Z_n. Let m = ord_n(q), and let α be a generator of GF(q^m)*. Put β = α^((q^m − 1)/n). Then β is a primitive n-th root of unity in GF(q^m). The minimal polynomial m_s(x) of β^s over GF(q) is the monic polynomial of smallest degree over GF(q) with β^s as a zero. It is now straightforward to see that this polynomial is given by

m_s(x) = ∏_{i ∈ C_s} (x − β^i),

which is irreducible over GF(q). It then follows from the partition above that

x^n − 1 = ∏_{s ∈ Γ_(n,q)} m_s(x),

which is the factorization of x^n − 1 into irreducible factors over GF(q). This canonical factorization of x^n − 1 over GF(q) is fundamental for the study of cyclic codes.
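Since the whole construction rests on these cosets, the following short Python sketch computes the q-cyclotomic cosets modulo n and their coset leaders directly from the definition; the parameters q = 2, n = 15 are illustrative choices, not values from the paper.

```python
from math import gcd

# q-cyclotomic cosets modulo n, computed directly from the definition above.
def cyclotomic_cosets(q, n):
    assert gcd(q, n) == 1
    cosets, seen = {}, set()
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q) % n                  # multiply by q modulo n
        cosets[min(coset)] = sorted(coset)   # key = coset leader
        seen.update(coset)
    return cosets

cosets = cyclotomic_cosets(2, 15)            # n = 2^4 - 1, so m = ord_15(2) = 4
for leader, cs in sorted(cosets.items()):
    print(f"C_{leader} = {cs}")
assert sum(len(c) for c in cosets.values()) == 15   # the cosets partition Z_n
```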
III. THE PUNCTURED GENERALIZED REED-MULLER CODES OVER GF(q)

Let m be a positive integer and let n = q^m − 1 from now on, where q = p^s, p is a prime and s is a positive integer. For any integer a with 0 ≤ a ≤ n − 1, we have the q-adic expansion

a = Σ_{j=0}^{m−1} a_j q^j, where 0 ≤ a_j ≤ q − 1,

and we write ω_q(a) = Σ_{j=0}^{m−1} a_j. Let α be a generator of GF(q^m)*. Let ℓ = ℓ_1(q − 1) + ℓ_0 < q(m − 1), where 0 ≤ ℓ_0 ≤ q − 1. The ℓ-th order punctured generalized Reed-Muller code PGRM_q(ℓ, m) over GF(q) is the cyclic code of length n = q^m − 1 with generator polynomial

g_R(x) = ∏_{a : 0 < ω_q(a) < (q−1)m − ℓ} (x − α^a).

Since ω_q(a) is a constant function on each q-cyclotomic coset modulo n, g_R(x) is a polynomial over GF(q). By definition, g_R(x) is a divisor of x^n − 1.
The parameters of the punctured generalized Reed-Muller code are known and given in the following theorem [1,Theorem 5.4.1].
Let 1 denote the all-one vector of length n. Then GF(q)1 is a subspace of GF(q)^n with dimension 1. A proof of the following property can be found in [2].
The dual codes PGRM_q(ℓ, m)^⊥ and the original codes PGRM_q(ℓ′, m) are closely related. When q = 2, PGRM_q(ℓ, m) becomes the punctured binary Reed-Muller code. Hence, PGRM_q(ℓ, m) is indeed a generalization of the original punctured binary Reed-Muller code. Other properties of the code PGRM_q(ℓ, m) can be found in [1] and the book chapter [2].
The only purpose of introducing the codes PGRM q (ℓ, m) in this section is to show the difference between the punctured generalized Reed-Muller codes PGRM q (ℓ, m) and the new family of generalized Reed-Muller codes to be introduced in the next section.
IV. ANOTHER GENERALIZATION OF THE PUNCTURED BINARY REED-MULLER CODES
A. The newly generalized codes ✵(q, m, ℓ)

Let m be a positive integer and let n = q^m − 1, where q = p^s, p is a prime and s is a positive integer. For any integer a with 0 ≤ a ≤ n − 1, we have the q-adic expansion

a = Σ_{j=0}^{m−1} a_j q^j, where 0 ≤ a_j ≤ q − 1.

The Hamming weight of a, denoted by wt(a), is the Hamming weight of the vector (a_0, a_1, ..., a_{m−1}).
Let α be a generator of GF(q^m)*. For any 1 ≤ ℓ ≤ m, we define the polynomial

g_(q,m,ℓ)(x) = ∏_{a : 1 ≤ a ≤ n−1, 1 ≤ wt(a) ≤ ℓ} (x − α^a).

Since wt(a) is a constant function on each q-cyclotomic coset modulo n, g_(q,m,ℓ)(x) is a polynomial over GF(q). By definition, g_(q,m,ℓ)(x) is a divisor of x^n − 1.
Let ✵(q, m, ℓ) denote the cyclic code over GF(q) with length n and generator polynomial g_(q,m,ℓ)(x). To analyse this code, we set

I(q, m, ℓ) = {a : 1 ≤ a ≤ n − 1 and 1 ≤ wt(a) ≤ ℓ}.

The dimension of the code ✵(q, m, ℓ) is equal to n − |I(q, m, ℓ)|.
Proof: As shown earlier, I(q, m, ℓ) is the union of some q-cyclotomic cosets modulo n. The total number of elements in Z_n with Hamming weight i is equal to C(m, i)(q − 1)^i, where C(m, i) denotes the binomial coefficient. It then follows that

|I(q, m, ℓ)| = Σ_{i=1}^{ℓ} C(m, i)(q − 1)^i.

Hence, the dimension k of the code is given by

k = n − Σ_{i=1}^{ℓ} C(m, i)(q − 1)^i.

It then follows from the BCH bound that d ≥ (q^(ℓ+1) − 1)/(q − 1). When q = 2, the code ✵(q, m, ℓ) clearly becomes the classical punctured binary Reed-Muller code. Hence, ✵(q, m, ℓ) is indeed a generalization of the original punctured binary Reed-Muller code.
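Before turning to the dual codes, here is a small numerical sanity check, under illustrative parameters of our own choosing, that the enumeration of I(q, m, ℓ) agrees with the closed-form dimension k = n − Σ_{i=1}^{ℓ} C(m, i)(q − 1)^i from the proof above.

```python
from math import comb

def wt(a, q, m):
    """Number of nonzero digits in the q-adic expansion of a."""
    w = 0
    for _ in range(m):
        w += (a % q) != 0
        a //= q
    return w

q, m, ell = 3, 4, 2
n = q**m - 1
I = [a for a in range(1, n) if 1 <= wt(a, q, m) <= ell]
k_enum = n - len(I)
k_formula = n - sum(comb(m, i) * (q - 1)**i for i in range(1, ell + 1))
print(f"n = {n}, |I| = {len(I)}, k = {k_enum}")
assert k_enum == k_formula
```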
B. The dual codes ✵(q, m, ℓ)^⊥

When q = 2, the parameters of the dual code ✵(q, m, ℓ)^⊥ are given by Theorems 1 and 2. Therefore, we need to study the dual code ✵(q, m, ℓ)^⊥ only for the case q > 2.

Open Problem 1. Is it true that the minimum distance of the code ✵(q, m, ℓ) equals (q^(ℓ+1) − 1)/(q − 1) for q > 2?
We will need the following theorem ( [8], see also [11, p. 153]).
Theorem 4 (Hartmann-Tzeng Bound). Let C be a cyclic code of length n over GF(q) with defining set T. Let A be a set of δ − 1 consecutive elements of T and B = {jb mod n : 0 ≤ j ≤ s}, where gcd(b, n) < δ. If A + B ⊆ T, then the minimum distance d of C satisfies d ≥ δ + s.
The following theorem gives information on the parameters of the dual code ✵(q, m, ℓ) ⊥ .
Proof: The desired conclusion on the dimension of ✵(q, m, ℓ) ⊥ follows from the dimension of ✵(q, m, ℓ). What remains to be proved is the lower bound on the minimum distance d ⊥ . Let ✵(q, m, ℓ) c denote the complement of ✵(q, m, ℓ), which is generated by the check polynomial of ✵(q, m, ℓ). It is well known that ✵(q, m, ℓ) c and ✵(q, m, ℓ) ⊥ have the same length, dimension and minimum distance.
By definition, the defining set of ✵(q, m, ℓ)^c is I(q, m, ℓ)^c = Z_n \ I(q, m, ℓ). It is straightforward to verify that A + B ⊂ I(q, m, ℓ)^c. Note that n ∈ A + B; in this case, we identify n with 0.
Clearly, A is a set of q^(m−ℓ) − 1 consecutive elements in the defining set I(q, m, ℓ)^c. The desired conclusion on d^⊥ then follows from Theorem 4. When q = 2, the lower bound on the minimum distance d^⊥ of ✵(q, m, ℓ)^⊥ given in Theorem 5 is achieved. It is open whether this lower bound can be improved for q > 2.
Open Problem 2. Determine the minimum distance d ⊥ of the code ✵(q, m, ℓ) ⊥ .
To further study the dual code ✵(q, m, ℓ)^⊥, we need to establish relations between wt(a) and wt(n − a) for a ∈ Z_n. Let a ∈ Z_n and let a = Σ_{j=0}^{m−1} a_j q^j be the q-adic expansion of a. We define γ(a) = |{0 ≤ j ≤ m − 1 : a_j = q − 1}|, the number of q-adic digits of a equal to q − 1. Then we have the following lemma, whose proof is straightforward and omitted.
Lemma 6. For a ∈ Z_n, we have wt(n − a) = m − γ(a).

For 0 ≤ i ≤ m, let N(i) = {a ∈ Z_n : wt(a) = i}. Clearly, |N(i)| = C(m, i)(q − 1)^i. The following lemma will be useful later. It is easily seen that

|{1 ≤ a ≤ n − 1 : wt(a) = i and γ(a) = h}| = C(m, i) C(i, h) (q − 2)^(i−h).

This completes the proof.
When q = 2, we always have the equality wt(a) = m − wt(n − a) for all a, since γ(a) = wt(a) in the binary case. As a result, ✵(q, m, ℓ)^⊥ is the even-weight subcode of ✵(q, m, m − 1 − ℓ) when q = 2. Experimental data show that one of I(q, m, m − ℓ) and −I(q, m, ℓ)^c is not a subset of the other. Consequently, one of ✵(q, m, ℓ)^⊥ and ✵(q, m, m − ℓ) is not a subcode of the other.
C. The BCH cover of the cyclic code ✵(q, m, ℓ)
Recall that n = q^m − 1. For any i with 0 ≤ i ≤ n − 1, let m_i(x) denote the minimal polynomial of α^i over GF(q). For any 2 ≤ δ ≤ n, define

ḡ_(q,n,δ,b)(x) = lcm(m_b(x), m_{b+1}(x), ..., m_{b+δ−2}(x)),

where b is an integer, lcm denotes the least common multiple of these minimal polynomials, and the addition in the subscript b + i of m_{b+i}(x) always means integer addition modulo n. Let BCH(q, n, δ, b) denote the cyclic code of length n with generator polynomial ḡ_(q,n,δ,b)(x). When b = 1, the code BCH(q, n, δ, b) is called a narrow-sense primitive BCH code with designed distance δ.
The BCH cover of a cyclic code is the BCH code with the smallest dimension containing the cyclic code as a subcode. Theorem 9. ✵(q, m, ℓ) is a subcode of BCH(q, n, (q ℓ+1 − 1)/(q − 1), 1).
Proof: In the proof of Theorem 3, we have shown that {1, 2, ..., (q^(ℓ+1) − 1)/(q − 1) − 1} ⊆ I(q, m, ℓ). The desired conclusion then follows.
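The inclusion used in this proof is easy to test numerically; the following sketch (with example parameters of our own choosing, and the same wt helper as before) checks that every integer below the designed distance has q-adic weight at most ℓ.

```python
# Check that {1, ..., (q^(l+1)-1)/(q-1) - 1} is contained in I(q, m, l),
# i.e., that each such a has 1 <= wt(a) <= l. Example parameters only.

def wt(a, q, m):
    w = 0
    for _ in range(m):
        w += (a % q) != 0
        a //= q
    return w

for q, m, ell in [(2, 6, 2), (3, 4, 2), (5, 3, 1)]:
    delta = (q**(ell + 1) - 1) // (q - 1)
    assert all(1 <= wt(a, q, m) <= ell for a in range(1, delta)), (q, m, ell)
    print(f"q={q}, m={m}, l={ell}: {{1,...,{delta - 1}}} lies in I(q,m,l)")
```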
V. A FAMILY OF REVERSIBLE CYCLIC CODES FROM THE CODES ✵(q, m, ℓ)
A cyclic code C of length n over GF(q) is called reversible if (c_{n−1}, c_{n−2}, ..., c_0) ∈ C whenever (c_0, c_1, ..., c_{n−1}) ∈ C. The conclusions of the following theorem are known in the literature, and are easy to prove. We will employ some of them later.
Theorem 10. Let C be a cyclic code over GF(q) with generator polynomial g(x). Then the following statements are equivalent.
• C is reversible.
• g(x) is self-reciprocal. • β^{−1} is a root of g(x) for every root β of g(x) over the splitting field of g(x).
If C is a reversible cyclic code of length n over GF(q), then C + C^⊥ = GF(q)^n. Such a linear code is called a linear code with complementary dual (LCD), as its dual code is equal to its complement.
LCD cyclic codes over finite fields are interesting in both theory and applications [9], [10], [14]. An important application of LCD codes in cryptography was recently documented in [4]. This is our major motivation for constructing LCD codes.
We now employ the codes ✵(q, m, ℓ) to construct reversible cyclic codes. To this end, we need to make some preparations.
Let g_(q,m,ℓ)(x) be the polynomial of (5), which is the generator polynomial of the cyclic code ✵(q, m, ℓ). Let g*_(q,m,ℓ)(x) denote the reciprocal of g_(q,m,ℓ)(x). Set

g(x) = (x − 1) · lcm(g_(q,m,ℓ)(x), g*_(q,m,ℓ)(x)).

Let ✵(q, m, ℓ) denote the cyclic code of length n over GF(q) with generator polynomial g(x). It follows from Theorem 10 that ✵(q, m, ℓ) is reversible. Information on the parameters of the reversible cyclic code ✵(q, m, ℓ) is given in the theorem below.
By Theorem 3, the degree of g_(q,m,ℓ)(x) is known, and the desired conclusion on the dimension then follows. In this case, it follows from the proof of Theorem 3 that g(x) has the roots α^i for all i in the set I(q, m, ℓ) ∪ (−I(q, m, ℓ)) ∪ {0}. The desired conclusion on the minimum distance then follows from the BCH bound.
Proof: The conclusion on the minimum distance comes from the BCH bound. We now prove the conclusion on the dimension of the code. It follows from Lemmas 11, 6 and 7 that a ∈ I(q, m, m/2) ∩ (−I(q, m, m/2)) if and only if wt(a) = m/2 and every nonzero q-adic digit of a equals q − 1, i.e., a = (q − 1) Σ_{j∈S} q^j for some set S ⊆ {0, 1, ..., m − 1} with |S| = m/2. As before, let g_(q,m,m/2)(x) be the generator polynomial of the code ✵(q, m, m/2). Then the generator polynomial of the reversible code is given by g(x). Therefore,

deg(g(x)) = 2 deg(g_(q,m,m/2)(x)) + 1 − deg(gcd(g_(q,m,m/2)(x), g*_(q,m,m/2)(x))) = 2 deg(g_(q,m,m/2)(x)) + 1 − |I(q, m, m/2) ∩ (−I(q, m, m/2))|.

The desired conclusion on the dimension then follows. We point out that the dimension of the code ✵(q, m, m/2) is equal to zero when q = 2. Hence, the code is nontrivial only when q > 2.
The second contribution of this paper is the construction of the reversible cyclic codes ✵(q, m, ℓ), which are based on the cyclic codes ✵(q, m, ℓ). The dimension of the reversible codes was settled for all ℓ with 1 ≤ ℓ ≤ ⌈m/2⌉. A lower bound on the minimum distance of the reversible cyclic code was developed, but the minimum distance itself is unknown. It would be nice if Open Problems 3, 4 and 5 could be resolved.
"year": 2016,
"sha1": "8f6429ea8dda0b1407ea84f9ecb75f0cb5b0022c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "350bf7ef2c574f9e0e3b94c497e06107cd7ac8d1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
ON THE STABILIZATION FOR THE HIGH-ORDER KADOMTSEV-PETVIASHVILI AND THE ZAKHAROV-KUZNETSOV EQUATIONS WITH LOCALIZED DAMPING
In this paper we prove the exponential decay of the energy for the high-order Kadomtsev-Petviashvili II equation with localized damping. To do that, we use the classical dissipation-observability method and a unique continuation principle introduced by Bourgain in [3], here extended to the high-order Kadomtsev-Petviashvili equation. A similar result is also obtained for the two-dimensional Zakharov-Kuznetsov (ZK) equation. The method of proof works better for the ZK equation, so we were led to make some subtle modifications to it to include KP-type equations. In fact, to reach a key estimate we use an anisotropic Gagliardo-Nirenberg inequality to drop the y-derivative of the norm.
1. Introduction. The main purpose of this work is to study the exponential decay of energy for the initial value problem associated to the high-order Kadomtsev-Petviashvili equation

∂_t u + α∂_x³u + β∂_x⁵u + γ∂_x⁻¹∂_y²u + u∂_x u = 0,   (1)

where u = u(x, y, t) (with (x, y, t) ∈ R × R × R⁺) is a real-valued function, α, β and γ are real parameters with β ≠ 0, and the operator ∂_x⁻¹ denotes the anti-derivative defined by

(∂_x⁻¹ f)(x, y) = ∫_{−∞}^{x} f(s, y) ds   (2)

and, via the Fourier transform, by

(∂_x⁻¹ f)^(ξ, η) = (iξ)⁻¹ f̂(ξ, η).   (3)

More precisely, we shall study the initial value problem for (1) with (x, y) in an appropriate bounded domain and with suitable boundary conditions in order to make the energy dissipate; we point out the details below. When β = 0 and γ = ±1, equation (1) becomes the usual Kadomtsev-Petviashvili (KP) equation

∂_t u + ∂_x³u + γ∂_x⁻¹∂_y²u + u∂_x u = 0,   (4)

introduced by B. B. Kadomtsev and V. I. Petviashvili in [23] in order to study the transverse stability of the solitary wave solutions of the Korteweg-de Vries (KdV) equation. In both equations (4) and (1), γ = −1 corresponds to the focusing case (KP-I type), while γ = +1 corresponds to the defocusing case (KP-II type). Analogously to (4), the fifth-order KP equation (1) is derived by taking into account weak transverse effects in the y direction of the so-called Kawahara equation

∂_t u + ε∂_x³u + ∂_x⁵u + u∂_x u = 0, where ε ∈ {−1, 0, 1},   (5)

instead of the KdV equation. Also known as the fifth-order Korteweg-de Vries equation, (5) was deduced by Kawahara in [24] in his study of oscillatory solitary waves, which occur precisely when the coefficient of the fifth-order derivative term dominates that of the third-order one. See also [16] and [17] for the derivation of this equation in the context of one-dimensional gravity-capillary waves. Equation (1) is an infinite-dimensional Hamiltonian system with a Hamiltonian (6) in which the sign + corresponds to γ = −1 (KP5-I) and the sign − corresponds to γ = +1 (KP5-II). The Hamiltonians are (at least formally) constant along the trajectories of (1), i.e.,

(d/dt) H(u(t)) = 0.   (7)
The Cauchy problem for higher-order KP equations has been extensively studied (see [5,13,15,20,26,31,32,33,35,42,43] and references therein); for the IVP associated to (4) see, e.g., [2,14,18,21,22] and references therein. Now we shall detail the main matter of this work. As pointed out in (7), the energy E(u(t)) := (1/2) ∫_{R²} u² dxdy is a conserved quantity. However, in the bounded framework (x, y) ∈ Ω := (0, L) × (0, L), the energy may be dissipated. In this sense, we consider the IVP for the high-order KP equation in a bounded domain, in the presence of a localized non-negative function a(x, y) ∈ L^∞(Ω) as a damping term:

∂_t u + α∂_x³u + β∂_x⁵u + γ∂_x⁻¹∂_y²u + u∂_x u + a(x, y)u = 0 in Ω × (0, T),   (8)

supplemented with suitable boundary conditions and an initial datum u(0) = u_0, where the operator ∂_x⁻¹ is in this case defined by (∂_x⁻¹ f)(x, y) = ∫_0^x f(s, y) ds. Under the above boundary conditions and the restrictions β < 0 and γ > 0, the total energy associated to (8), given by

E(t) = (1/2) ∫_Ω u²(x, y, t) dxdy,   (9)

is in fact dissipated along the flow (see Proposition 2); i.e., t ↦ E(t) is nonincreasing. The following basic question arises: does E(t) → 0 as t → +∞ and, if so, is it possible to find a rate of decay of E(t)? We refer to [39,40,41,29] for studies of this problem in the context of the KdV equation and some of its generalizations, and to [44] for an analogous approach for the Kawahara equation. However, as far as we know, the only work dealing with stabilization of two-dimensional nonlinear dispersive equations via the technique presented here on bounded domains is Gomes and Panthee [11], where the KP-II equation is studied. For some related results regarding high-dimensional nonlinear dispersive equations we refer to [4,6,7,28] and references therein.
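To indicate where the damping enters, the following is a purely formal sketch of the energy computation behind this monotonicity; it assumes that, under the (unstated here) boundary conditions of (8) with β < 0 and γ > 0, the dispersive and nonlinear terms contribute nonpositive boundary terms, as the actual Proposition 2 would make precise.

```latex
% Formal energy identity for the damped equation (8): multiply by u and
% integrate over \Omega. The dispersive and nonlinear terms are ASSUMED to
% yield nonpositive boundary contributions under the boundary conditions
% of (8) (with \beta < 0 and \gamma > 0).
\begin{aligned}
\frac{d}{dt}E(t)
  &= \int_\Omega u\,u_t \,dx\,dy \\
  &= -\int_\Omega u\bigl(\alpha u_{xxx} + \beta u_{xxxxx}
        + \gamma\,\partial_x^{-1}u_{yy} + u u_x + a(x,y)\,u\bigr)\,dx\,dy \\
  &\le -\int_\Omega a(x,y)\,u^2 \,dx\,dy \;\le\; 0 .
\end{aligned}
```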
Here, following the strategy described in [39,44], we use the boundary conditions from (8) together with the damping function a(x, y), and then we employ compactness arguments and the UCP property of (1) stated in Theorem 1.2 below to prove that the decay of the energy is exponential in time. We shall obtain the following result. Theorem 1.1. Let γ > 0. Given R > 0, let u be a solution of (8) with α > 0, β < 0 and data u_0 ∈ X_0(Ω) satisfying ||u_0||_{L²(Ω)} ≤ R, and let E(u(t)) be the energy defined by (9). Then the energy E(u(t)) decays exponentially; i.e., there exists δ = δ(R) > 0 such that E(u(t)) ≤ C e^{−δt} E(u_0) for all t ≥ 0, where C = C(R, T) > 0.
According to the dissipation-observability method (see Section 2), if the energy dissipates in an expected way (that is, meeting an observability condition), then it decays exponentially in time. In order to prove the observability condition for (8) we face some difficulties due to the order of the equation, the dimension, and the presence of the term ∂_x⁻¹∂_y²u. The structure of equation (1) led us to drop the y-derivative in the norm of H²(Ω) (see Lemma 3.1 below), adding an extra difficulty to the usual a priori estimate argument. To overcome that trouble, we employ a useful anisotropic Gagliardo-Nirenberg inequality to obtain an estimate in the space H²_x(Ω) instead of the classical one.
Our method of proof relies on a unique continuation principle (UCP) instead of the classical Holmgren uniqueness theorem. More precisely, we shall prove and use the following unique continuation result.
with α, β and γ real constants, where β ≠ 0. Let u = u(x, y, t) be a smooth solution to (11). Inspired by the method introduced by Bourgain in [3], our approach to the UCP extends the results of Esfahani and Pastor [8] and Panthee [38] to the fifth-order KP equation (1) without restriction on the parameters α, β, γ. In the proof of Theorem 1.2 we follow closely the ideas of Esfahani and Pastor [8] to reduce the problem to a one-dimensional one by choosing some key parameters appropriately. (For a thorough survey of the so-called UCP we suggest reading the introduction of Esfahani and Pastor [8] and the references mentioned therein.) For the existence of smooth solutions associated to the IVP (11), we refer to the following local well-posedness result due to Iório and Nunes [19] for initial data in X_s(R²): for each φ ∈ X_s(R²), there exists T > 0, depending only on ||φ||_{X_s}, and a unique solution u to (11) such that u ∈ C([0, T]; X_s(R²)) ∩ C¹([0, T]; H^{s−5}(R²)). (12) Furthermore, the map φ → u is continuous from X_s to the space in (12). Moreover, T can be chosen independent of s.
Regarding the well-posedness of the problem (8), we prove the existence and uniqueness of a global solution in Y for initial data u_0 in X_0(Ω) (see the Appendix, Sec. 5). A similar approach to the one described above for the KP-5 equation (1) also works for the Zakharov-Kuznetsov (ZK) equation

∂_t u + α∂_x(∂_x² + ∂_y²)u + u∂_x u = 0,   (13)

where u = u(x, y, t) is a real-valued function and α ≠ 0 is a real parameter. In this case, the UCP result needed (similar to Theorem 1.2) was proved by Panthee in [37]. The existence of smooth solutions is obtained using the classical parabolic regularization method, as in Iório and Nunes [19]. Equation (13) was formally derived by Zakharov and Kuznetsov [45] as a long-wave, small-amplitude limit of the Euler-Poisson system in the "cold plasma" approximation in the context of plasma physics (see also [27] for a rigorous justification of this formal long-wave limit). The Zakharov-Kuznetsov equation can be seen as a higher-dimensional extension of the Korteweg-de Vries model of surface wave propagation, quite different from the KP equation, which is obtained as an asymptotic model of several nonlinear dispersive systems under a different scaling.
Similarly to (8), we consider (13) in a bounded domain Ω = (0, L) × (0, L), under the presence of a localized non-negative function a(x, y) ∈ L ∞ (Ω) as damping term, as follows The total energy associated with (14) is given by Using the same strategy and applying similar tools to those in the proof of Theorem 1.1 we get the following result. Theorem 1.3. Given R > 0, let u be a solution of (14) with α > 0 and data u 0 ∈ L 2 (Ω), satisfying u 0 L 2 (Ω) ≤ R, and let E(u(t)) be the energy defined by (15). Then the energy E(u(t)) decays exponentially.
The proof of Theorem 1.3 is a little simpler than the proof of Theorem 1.1 because the structure of the equation (14) enjoys good symmetry in L 2 (Ω), so that we avoid to resort to the strategy of restricting Sobolev's norm to a single direction (x-direction) as we have done for the KP-5 equation. Furthermore, thanks to the lower order of the ZK equation, we only need to estimate u 2 L 2 ((0,T );H 1 (Ω)) by E(u 0 ) as key one (see Lemma 4.1 below). Remark 1. It worth mentioning that the technique presented seems to extend for ZK equations in higher dimention. Indeed, a careful look at Lemma 4.1 reveals that it works for ZK equations in dimension 2 ≤ n ≤ 6. Therefore we might obtain stabilization for ZK equations in dimension up to n = 6 whenever one can prove a UCP result similar to that for the two dimensional ZK. However, as far as we know the UCP for for higher order ZK equations is unknown.
The work is organized as follows. In Section 2 we prove the unique continuation result for the high order KP equation and exhibit a sketch of the dissipationobservability method. In Section 3 we prove the result of stabilization of the high order KP equation (Theorem 1.1) and in Section 4 we prove the result of stabilization for the ZK equation (Theorem 1.3). We finish with an appendix in which we prove the global existence of solution to (8) for initial data in X 0 (Ω) (see the definition in (17)).
Notation. Given any positives constants C, D, by C D we mean that there exists a constant c > 0 such that C ≤ cD; and, by C ∼ D we mean C D and D C.
By F{φ} or φ we denote the Fourier transform of u, defined as Given Ω = (0, L) × (0, L) ⊂ R 2 , we defne the space X k (Ω) to be the Sobolev space endowed with the norm and the space
Preliminary and basic results.
Here we first establish an extension of the unique continuation result proved by Esfahani and Pastor in [8] for high-order KP equations. We finish the section with a brief outline of the dissipation-observability method which we use to conclude the proof of our main results.
2.1.
Unique continuation for the high-order KP equations. Our goal here is to prove Theorem 1.2. The main idea is to use the method introduced by J. Bourgain in [3]. As in Esfahani and Pastor [8], we choose some key parameters so that the issue should be reduced to an one-dimensional problem. We make direct use the following result.
Proof of Theorem 1.2. We suppose by contradiction that there exists t ∈ I such that u(t) = 0. The integral equation associated to (11), for t 1 , t 2 ∈ I, is where Now taking the Fourier transform in space variable on (18), we get where without loss of generality we are assuming ∆t := t 2 − t 1 > 0.
Using the change of variable s = t − t 1 we can write (19) as Since u(t), t ∈ I, has compact support, it follows from Paley-Wiener theorem that u(t 2 ) has an analytic continuation to C 2 as where λ = (ξ, η) and σ = (θ, δ), with the parameters λ and σ to be chosen later and We have the following two cases to consider. 1. γ = −1. In this case, we shall prove that the equation in (11) behaves as the Kadomtsev-Petviashvili-I (KP-I) in Esfahani/Pastor [8]. In fact, taking δ ∼ θ, with θ = 0, δ < 0, and ξ > 0, η > 0 large enough such that we get from (22) that From this is enough to follow the proof of Theorem 1.3 in [8], from inequality (4.7) onwards therein, to get a contradiction.
2. γ = 1. In this case, the equation in (11) behaves as the Kadomtsev-Petviashvili-II (KP-II). It is enough to take δ ∼ θ, with θ = 0 and δ > 0, and ξ > 0, η > 0 satisfying (23). In fact choosing the referred parameters we arrive to the estimate from which we can get a contradiction as before. We note that (25) differs from (24) by the sign of δ, but we note out that fact does not have affected the obtaining of the contradiction.
2.2. The dissipation-observability method. We present here a sketch from the dissipation-observability method which we will employ to get theorems 1.1 and 1.3. For more details see for instance [39,46,47].
Consider Ω ⊂ R n a domain. Let A be a linear operator, and B a nonlinear operator with domain dense in L 2 (Ω). Let u be a solution to the evolution equation in L 2 (Ω), under suitable initial-boundary conditions. Suppose that the evolution associated to (26) satisfies a semi-group property and that the energy where −Q(u) = Ω u Au + B(u) dx.
Therefore in order to prove the exponential decay of the energy E(u(t)) it is enough to prove the inequality (28).
Stabilization for the high-order KP-II equation.
In this section we prove Theorem 1.1. In order to do that we use dissipation-observability method and follow closely the ideas posed in [39,44].
Consider the damped high-order KP model (8). Let E(u(t)) be the energy defined in (9). We also define Next, we prove that the energy E(u) is a decreasing function of t.
Proof. In fact, a(x, y)u 2 dxdy. Now using integration by parts and the boundary conditions from (8) we get (33).
From Proposition 2 we have that the energy is dissipated, i.e., E(u(t)) ≤ E(u 0 ), for all t > 0. So, we are able to get the following crucial result to establish the proof of Theorem 1.1.
In what follows we shall consider, without loss of generality, α = −β = γ = 1. With Lemma 3.1 and Theorem 1.2 in hand, we can prove the following result, from which, according to the Proposition 1, we get the observability inequality.
Lemma 3.2. Let Q(u) be defined in (32). Then, for any T > 0 and R > 0 there exist a positive constant C = C (R, T ) > 0 such that for all solution of (8) with u 0 L 2 (Ω) ≤ R.
Proof. We prove (40) by contradiction using (34) and the unique continuation result stated in Theorem 1.2. We shall follow the ideas posed in the proof of Theorem 2.2 from [39], but with important changes in some norms and key estimates. Suppose that (40) is not true. Then for each positive integer n there exists a solution u n of (8) such that nQ(u n ) u n 2 L 2 ((0,T );L 2 (Ω)) . In this case, we have a sequence {u n } of solutions such that lim n→∞ u n 2 L 2 ((0,T );L 2 (Ω)) Let {λ n } and {v n } be sequences defined respectively by λ n = u n L 2 ((0,T );L 2 (Ω)) and v n (x, y, t) = 1 λ n u n (x, y, t), so that v n L 2 ((0,T );L 2 (Ω)) = 1.
Using the weak lower semicontinuity of convex functionals we have which implies that a(x, y)v ≡ 0 in Ω × (0, T ). Since a(x, y) > 0 in Γ c we get that v ≡ 0 in Γ c × (0, T ). We notice that the limit v satisfies where λ ≥ 0 is the limit of λ n as n → ∞. In either case λ = 0 or λ > 0, we shall employ the UCP provided by Theorem 1.2 to conclude that v ≡ 0 in Ω × (0, T ). To do so, we must find an smooth extension of v in R 2 . Let Z = (δ, L − δ) × (δ, L − δ) and define the function Since Γ ⊂ (δ, L − δ) × (δ, L − δ), this extension is as smooth as v. Besides, w solves where which is compactly supported in H s (R 2 ), s > 2. From Theorem A, the IVP (53) has a smooth solution w. Therefore, by the unique continuation property established in Theorem 1.2, we conclude that w ≡ 0 in Ω × (0, T ). As a result we have that v ≡ 0 in Ω × (0, T ) which contradicts (50). Thus (40) holds.
3.2.
Proof of Theorem 1.1. It is enough to employ Corollary 1 using Proposition 1 with the Q(u) defined in (32). In fact, from Lemma 3.2, we have This implies, from Proposition 1 that the observability inequality (29) holds. Therefore, the energy E defined in (9) (14) enjoys better symmetry in L 2 (Ω) and the equations is of third order, so we will not need to resort to the strategy of restricting Sobolev's norm to a single direction (x direction) as we have done for the KP-5 equation. Hence, we give only the main steps here.
Consider the damped ZK model (14). Let E(u(t)) be the energy defined in (15). We also define Q(u(t)) = α 2 L 0 (∂ x u(0, y, t)) 2 dy + 1 2 A straightforward computation shows that the energy is a decreasing function of t. In fact, using integration by parts and boundary conditions from (14), we have Lemma 4.1. Let u be a solution of (14), with α > 0. Then, there exist constants C 1 , C 2 > 0 depending on α,L and T , such that where E( · ) is the functional defined in (15).
From now on, the analysis is very close to that one employed to obtain Lemma 3.2. Indeed, we have: for all solution of (14) with u 0 L 2 (Ω) ≤ R.
Finally, an argument similar to that one used in (55) provides the conclusion of our analysis. This completes the proof of Theorem 1.3.
Proof. In fact, integration by parts gives so the proof is concluded. | 2021-07-16T00:06:07.175Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "1b3630512ca81e27bd935fe20f32e94ba01c2af5",
"oa_license": null,
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=624c793c-6b08-44b4-8b79-e4f080748252",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c06142a026794578c38b680ea61d8192b52cf4ce",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
5361827 | pes2o/s2orc | v3-fos-license | Comparison of approaches to estimate confidence intervals of post-test probabilities of diagnostic test results in a nested case-control study
Background Nested case–control studies become increasingly popular as they can be very efficient for quantifying the diagnostic accuracy of costly or invasive tests or (bio)markers. However, they do not allow for direct estimation of the test’s predictive values or post-test probabilities, let alone for their confidence intervals (CIs). Correct estimates of the predictive values itself can easily be obtained using a simple correction by the (inverse) sampling fractions of the cases and controls. But using this correction to estimate the corresponding standard error (SE), falsely increases the number of patients that are actually studied, yielding too small CIs. We compared different approaches for estimating the SE and thus CI of predictive values or post-test probabilities of diagnostic test results in a nested case–control study. Methods We created datasets based on a large, previously published diagnostic study on 2 different tests (D-dimer test and calf difference test) with a nested case–control design. We compared six different approaches; the approaches were: 1. the standard formula for the SE of a proportion, 2. adaptation of the standard formula with the sampling fraction, 3. A bootstrap procedure, 4. A approach, which uses the sensitivity, the specificity and the prevalence, 5. Weighted logistic regression, and 6. Approach 4 on the log odds scale. The approaches were compared with respect to coverage of the CI and CI-width. Results The bootstrap procedure (approach 3) showed good coverage and relatively small CI widths. Approaches 4 and 6 showed some undercoverage, particularly for the D-dimer test with frequent positive results (positive results around 70%). Approaches 1, 2 and 5 showed clear overcoverage at low prevalences of 0.05 and 0.1 in the cohorts for all case–control ratios. Conclusion The results from our study suggest that a bootstrap procedure is necessary to assess the confidence interval for the predictive values or post-test probabilities of diagnostic tests results in studies using a nested case–control design.
Background
An essential step in the evaluation process of a (new) diagnostic test is to assess the diagnostic accuracy measures [1][2][3][4]. Traditionally the sensitivity and specificity are studied but another important measure is the predictive value, i.e. the absolute probability that the disease is present or absent given the test result, so-called post-test probability [5]. Typically, diagnostic accuracy studies use a cross-sectional design in a series or cohort of patients that is defined by the suspicion of the target disease under study. This suspicion is usually defined by the presented symptoms or signs. All patients then undergo the index (e.g. new) tests and subsequently the prevailing reference test or standard [5,6]. Subsequently the predictive values or post-test probabilities of the test results, as well as the sensitivity and specificity can be estimated.
An efficient alternative for this full cohort design is the nested case-control design, in which the controls and cases are sampled from a pre-defined cohort [5][6][7][8]. This design is particularly advantageous for diagnostic research purposes when the prevalence of the disease is rare, when the index test is costly or difficult to perform, and when using stored (e.g. biological) material from existing cohorts or biobanks [5][6][7]9]. Limitations, strengths and rationale of the nested case-control design are extensively discussed in the literature, mostly for etiologic research [8,10,11], but also recently for the evaluation of diagnostic tests [5,6,9].
As an important aim in diagnostic research is to estimate the absolute probability of having the disease given test results (predictive values or post-test probability), the nested character of the design in a cohort with known size is essential. In non-nested or regular casecontrol studies, controls are sampled from a source population with unknown size. The prevalence of the disease and hence the predictive values can thus not simply be estimated [5,6]. Only relative probabilities, like the odds ratio, can directly be estimated. However, absolute disease probabilities can be estimated, if cases and controls are sampled from an existing, pre-defined cohort, by weighing with the inverse sampling fraction [5].
For example, consider a full-cohort approach in which the index test result and reference test results are assessed for all patients. Say the index test is an expensive dichotomous biomarker (genomic) measurement requiring human material that is frozen for all cohort members in a biobank. The positive predictive value (PPV) of the marker result is a aþb , and the negative predictive value (NPV) d cþd ( Figure 1, Table A, see legend of Figure 1 for explanation of variable names).
In a nested case-control design, one samples from the full cohort (commonly) the human material of all subjects with a positive reference test (cases), but only a fraction (see cell b1 and d1, Figure 1, Table C) of those with a negative reference test (controls). The expensive index test is thus only retrieved or measured in the human material of the sampled cases and controls.
However, the estimation of the standard error (SE) of the predictive values derived from a nested case-control diagnostic accuracy study is not at all straightforward. When simply using the standard formula for the SE of a proportion ( , where π is the proportion, here predictive value or absolute disease probability, and n the number of patients, the question is which value for n to use. The actual observed (measured) number of cases and controls does not correspond to the estimated proportion (too low [12].We compared the approach proposed by Mercaldo with five other approaches using simulated datasets based on an empirical published diagnostic study among patients suspected of deep venous thrombosis. We studied several clinically relevant combinations of disease prevalence and casecontrol ratios.
Patient data
We used data from a published cross-sectional diagnostic study that collected a cohort of 2086 adult patients suspected of deep vein thrombosis (DVT) in primary care [13,14]. In brief, the general practitioners systematically documented information on patient history and physical examination. Physical examination included swelling of the affected limb and difference in circumference of the calves calculated as the circumference (in centimeters) of affected limb minus circumference of unaffected limb, further referred to as calf difference test. The calf-difference was considered to be abnormal if the difference in circumference between the legs was more then 3 cm. Subsequently, all patients underwent D-dimer testing.
Depending on the hospital to which the patient was referred in the original study the ELISA approach (VIDAS, Biomerieux, France) or the latex assay approach (Tinaquant, Roche, Germany) was used to determine the D-dimer level. The test was considered abnormal if the latex assay yielded a D-dimer level ≥400 ng/mL (Tinaquant, Roche, Germany) or ≥500 ng/mL for the ELISA assay (VIDAS, Biomerieux, France) [15]. Values were dichotomized: normal versus abnormal. In the present approachological study, we focus on the calf difference and D-dimer test as index tests. Presence of DVT (yes/no) was assessed in all patients with the reference test (repeated compression ultrasonography of the symptomatic leg).
Nested case-control samples
We first studied a source population based on the original data set ( Figure 2, line 1), with a prevalence of DVT of 0.1 (140 cases, 1260 controls), reflecting a relatively rare disease situation that commonly directs case-control studies ( Figure 2, line 2). The diagnostic accuracy parameters estimated for this source population serve as the commonly unknown true parameter values (see below and Table 1). Subsequently, we mimicked a cross-sectional cohort study of the same size as the source population, i.e. 1400 patients that were drawn with replacement from our source population (cohort, Figure 2, line 3).
A nested case-control sample was then created from the cohort (Figure 2, line 4). We included all patients with DVT (cases) from the corresponding cohort in the nested case-control sample, and an equally sized random sample from the subjects without DVT (controls): case-control ratio = 1:1. To prevent too much sampling errors (random variation), we repeated the above approach 1000 times, creating 1000 study cohorts from the Original data set n = 2086 Nested case-control sample Ratio 1:1 Nested case-control sample Ratio 1:2 Nested case-control sample Ratio 1:3 Nested case-control sample source population and hence 1000 nested case-control samples. In the 1000 nested case-control samples we estimated the predictive values of both index tests and their uncertainty (standard error and 95% CI) using the six approaches described below. All this was also done for three other case-control ratios: 2 controls per case (ratio 1:2); 3 controls per case (1:3); and 4 controls per case (1:4). The prevalence of the 1000 cohorts was thus not fixed across the different cohorts, though with a mean prevalence of 0.1 (95% CI 0.08-0.12). The actual prevalence of the corresponding cohort was used for all subsequent calculations in the nested case-control sample. Finally, the entire process of creating the 1000 study cohorts and 1000 corresponding nested-case control samples (with the four different case-control ratios), was repeated for a source population (n=1400) with a DVT prevalence of 0.05 (70 cases) and 0.2 (280 cases).
Approaches to estimate the uncertainty of predictive values of a diagnostic test from a nested case-control study
We compared six approaches to estimate the 95% CI of the predictive values obtained from the nested case-control samples, for the two index tests (D-dimer test and calf circumference difference). The point estimates of the predictive values were obviously the same for all six approaches, while the standard error estimates and hence 95% CI could vary. We describe the approaches for the predictive value of a positive result (positive predictive value = PPV). They can mutatis mutandis be applied to the negative predictive value (NPV). Notations used below, refer to those used in
Estimate the standard error of the PPV (SE(PPV))
using the standard formula for the SE of a proportion with the actually observed number of patients in the nested case-control sample: The 95% confidence interval can simply be calculated as PPV ± 1.96*SE(PPV)Calculating the SE with the actually observed numbers in the nested case-control samples (i.e. without correction for the sampling fraction that is used to estimate the correct PPV, using the upweighting by the samping fraction as shown in Table 1), agrees to the number of patients actually measured. However, the proportions in approach (1) do not correspond to the e stimated (corrected) PPV.
2. Estimate the SE(PPV) using the standard formula for SE of a proportion with correction for the sampling fraction in the numerator of approach 1 above, but not in the denominator: The correction is only applied to the numerator as this reflects the (corrected) PPV estimates. Applying the correction also to the denominator, would make the SE incorrectly too small: a larger number of patients than actually observed would then be used in the SE estimation. 4. The approach recently described by Mercaldo and colleagues [12]. This approach uses the prevalence from the underlying study cohort (not to be confused with our 'true' source population, see above) and the sensitivity and specificity estimated from the casecontrol sample to calculate the correct PPV. Not only the PPV can be estimated using the sensitivity (Sens) , specificity (Spec) and prevalence (p), but also the SE (PPV): 5. Weighted logistic regression. This is an ordinary logistic regression model with outcome disease present (y/n) and one covariable (index test result, positive or negative), with weights for cases and controls. The model can be written as log odds (PPV) = log ppv 1Àppv = α + β ×. With × =1 for a positive index test result. Each case receives a weight w(cases) = N 1 N (rather than simply weight 1) and each non-case receives weight w(non-cases) = The covariance matrix is estimated with the correct number of observed (N1) patients, since case and controls were weighted in the analysis.
Use the approach by Mercaldo and colleagues
(approach 3) [12] on the log odds scale. One uses the sensitivity (Sens) , specificity (Spec) and prevalence (p) in the known study cohort, to estimate the SE of the logit(PPV) by:
Statistical analysis
The PPVs of both index tests were thus calculated using the weighting approach from Figure 1. We then estimated the 95% confidence interval of the PPV using the six approaches above. From the 1000 nested case-control samples, the average 95% confidence interval width and the coverage probability were estimated. The narrower the average confidence interval width, the more precise the estimated predictive value [16]. The coverage probability is the proportion of the 1000 confidence intervals that included the true PPV estimated from of the source population. The coverage should not fall outside two SE's of the nominal probability (p) [16]. Nominal p is 0.05 for a 95% confidence interval, with SE(nominal p) = 0.0069 for a simulation study with 1000 repetitions (Se(p) = ffiffiffiffiffiffiffiffiffiffi ffi , with B the number of repetitions). The corresponding coverage ranges from 0.936 -0.964. If the coverage probability of the PPV's falls outside this interval we speak of "substantial undercoverage" for lower coverage probability (<0.936), or overcoverage for higher (>0.964) coverage probability. The ideal estimation approach has a coverage close to 95% and a small 95% confidence interval of the estimated predictive values.
All analyses were executed for the four case-control ratios, and for the three different disease prevalence's in the source population.
Analyses were performed with R version 2.6.0 [17]. Table 1 shows the accuracy estimates of both index tests as estimated from the source population. The PPV of both tests was low and the NPV of both tests was high as a result of the low prevalence of DVT. For both tests, the PPV increased and NPV decreased with increasing prevalence of DVT. The D-dimer test was very sensitive with limited specificity. The calf difference test was moderately sensitive and specific. The D-dimer test was positive in 978 (70%) patients for a DVT prevalence of 0.1. The calf-difference test was positive in 568 (41%) patients. Changing the prevalence of diseases did not change the percentage of positive tests. As expected, for both tests, the sensitivity, specificity and diagnostic odds ratio were similar for each prevalence. The point estimate for the PPV and NPV obtained with weighted logistic regression were similar (respectively 0.14 and 0.99) to those obtained with the standard approach. Approaches one, two and five showed clear overcoverage at low prevalences of 0.05 and 0.1 in the cohorts for all case-control ratios. They showed less overcoverage at a prevalence of 0.20 and even an undercoverage (Figure 3 and 4, approach 5). Approach three yielded slight overcoverage for lower case-control ratios (1:1, 1:2) and for low prevalences (0.05 and 0.01). Approaches four and six showed undercoverage for higher case-control ratios (1:3, 1:4). Extreme undercoverage was seen at a prevalence of 0.20 (Figure 3 and 4, left panels) for both approach four and six.
Results
In general, approach one showed the largest confidence interval width corresponding to the overcoverage, whereas approach four and six showed very similar and small widths. Approach three showed slightly larger widths then approach four and six (Figure 3 and 4, right panels).
Discussion
We compared six approaches for estimating the confidence intervals of predictive values or post-test probabilities of diagnostic test results when a nested case-control design is used. using simulations in a large empirical diagnostic study, the six approaches were compared in terms of coverage and the width of the 95% confidence intervals. Our data show that a bootstrap procedure (approach 3) seems to be the preferred approach, although it was only slightly better than the other approaches. Approaches 4 and 6 showed some undercoverage, particularly for the D-dimer test with frequent positive results (positive results around 70%). Approaches 1, 2 and 5 showed overcoverage. For a prevalence of 0.2 in the underlying cohort and a case-control ratio of 1:4 all approaches showed substantial undercoverage. In fact a case-control ratio of 1:4 implies a prevalence of 0.2 in the nested casecontrol sample. Hence, one may argue that a full cohort study is to be preferred, when the disease prevalence in the cohort is 0.2 or higher. Indeed, case-control studies are notably advantageous when the prevalence of a disease in the cohort is rare (i.e. below 0.1).
By applying a nested case-control design in diagnostic accuracy studies the number of patients undergoing the index test can be substantially reduced, hereby increasing the efficiency of the particular study [6,8,10,11]. This becomes more important if the index test comes with large patient burden, is costly, the disease is rare, and when stored biological material is used for measuring new tests, e.g. from proteomics, metabolomics or genomics. Previously it has been shown that by applying a correction for the sampling fraction precise point estimates of the predictive values can be obtained [5]. We found that applying a bootstrap procedure to estimate the confidence intervals around these predictive values, yields adequate results for the uncertainty in the estimated predictive values. Limitation of this approach can be that, due to the low numbers, in some of the bootstrap samples one of the cells of the 2×2 table remains empty, The latter did not happened in our simulation. If this happens PPV may be estimated with a continuity correction for low numbers.
The predictive values obtained with the approach recently discussed by Mercaldo and colleagues were equal to those derived with the weighted approach from Figure 1. For the lower prevalence's (0.05 and 0.10) the coverage of approaches 4 and 6 was between 0.90 and 0.95 which were similar to those found by Mercaldo and colleagues themselves [12]. With increasing case-control ratio and increasing prevalence, the Mercaldo and colleagues approach yielded more undercoverage. This could be due to the fact that in their original paper the case-control ratio was not explicitly varied, although in their equation the case-control ratio implicitly has influence on the SE and hence the confidence interval. Besides the study by Mercaldo and colleagues we are not aware of any other studies coping with this issue of uncertainty of predictive values estimated from nested case control studies.
A limitation of our study could be that we looked at only one original cohort in our simulations and studied only two index tests. Although the results for the different combinations simulated are alike, it is thinkable that for other combinations of disease prevalence, cohort sizes, and diagnostic accuracy of the index tests, the results could slightly differ. We certainly realize that DVT is not a true rare disease and most diagnostic studies on DVT are done on a full-cohort and not on a nested case-control sample. Therefore, we slightly modified the prevalence in the full cohort to better mimick the rare-disease situation, which we needed for our comparisons.
By using a fixed cohort size (n=1400) for the different prevalence's, the size of the nested case-control samples varied ( Figure 2). This could have influenced our results slightly since the SE and the confidence interval depends on the number of observations. Alternatively one could use a fixed number of cases in with varying cohort sizes for different prevalence's.
Conclusion
Our case-study suggests that in diagnostic accuracy studies using a nested case-control design, one can apply a simple bootstrap procedure to obtain a confidence interval for the post-test probabilities or predictive values of the index test results. For our data-set, the bootstrap procedure showed the best combination of coverage and 95% confidence interval width, compared with the other approaches. Our findings and inferences can also be applied to nested case control studies that investigate the predictive values of results from other kind of tests, for example prognostic tests. | 2017-06-17T17:45:24.534Z | 2012-10-31T00:00:00.000 | {
"year": 2012,
"sha1": "e77babfba355e6a1a7d59710b69570633a46c47d",
"oa_license": "CCBY",
"oa_url": "https://bmcmedresmethodol.biomedcentral.com/track/pdf/10.1186/1471-2288-12-166",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3027b8cdf3846686a2ae3ab2287445d07aa0ead3",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
257380844 | pes2o/s2orc | v3-fos-license | How I prescribe prolonged intermittent renal replacement therapy
Prolonged Intermittent Renal Replacement Therapy (PIRRT) is the term used to define ‘hybrid’ forms of renal replacement therapy. PIRRT can be provided using an intermittent hemodialysis machine or a continuous renal replacement therapy (CRRT) machine. Treatments are provided for a longer duration than typical intermittent hemodialysis treatments (6–12 h vs. 3–4 h, respectively) but not 24 h per day as is done for continuous renal replacement therapy (CRRT). Usually, PIRRT treatments are provided 4 to 7 times per week. PIRRT is a cost-effective and flexible modality with which to safely provide RRT for critically ill patients. We present a brief review on the use of PIRRT in the ICU with a focus on how we prescribe it in that setting.
Introduction
Prolonged Intermittent Renal Replacement Therapy (PIRRT) is the term that broadly encompasses 'hybrid' forms of renal replacement therapy (RRT). PIRRT treatments are provided for a longer duration than are intermittent hemodialysis (IHD) treatments (6-12 h vs. 3-4 h, respectively) but not 24 h per day as is done for continuous renal replacement therapy (CRRT). PIRRT is typically provided 4 to 7 times per week [1].
While PIRRT is less commonly used in ICUs than IHD or CRRT, its use has been progressively increasing in low-and middle-income countries [2, 3] since its initial descriptions in the literature in the late 1990s [4,5]. Its routine use in some high-income countries (e.g., institutions in New Zealand [6] and Canada [7]) is also longestablished. It is a cost-effective (as compared to CRRT [7,8]) and flexible modality with which to safely provide RRT for hemodynamically unstable patients. During the COVID-19 pandemic, PIRRT was rapidly adopted at some institutions to maximize their acute RRT capacity during surge. [9][10][11].
Indications for PIRRT
KDIGO 2012 guidelines state that CRRT is the treatment of choice for hemodynamically unstable patients, including those on extracorporeal support such as ECMO. However, at that time data on PIRRT were scarce. At present, PIRRT is used as a substitute for CRRT to treat hemodynamically unstable patients with acute kidney injury (AKI) or ESRD [12]; it can also be used in patients during de-escalation of treatment in the ICU [13], or as a substitute for IHD. Less well-studied than IHD or CRRT, there is no evidence suggesting significant differences in mortality or kidney recovery with the use of PIRRT to manage severe AKI in critically ill patients as compared to CRRT [14]. Reducing the efficiency of solute clearance (thereby reducing osmotic shifts) and extending the duration of treatment (thereby lowering the ultrafiltration rate) make PIRRT less likely to provoke hemodynamic instability during RRT (HIRRT) relative to IHD [15]. As an intermittent therapy, PIRRT facilitates the performance of diagnostic imaging, rehabilitation, and other procedures, and can often be provided overnight. In certain situations, PIRRT is relatively contraindicated. For patients with intoxications or extreme electrolyte disturbances where highly efficient small molecule clearance is desired, IHD should be favored over PIRRT (or CRRT). Conversely, in patients with traumatic brain injury, increased intracranial pressure or severe hyponatremia, CRRT should be favored over PIRRT (or IHD).
PIRRT modalities
PIRRT can be delivered using a standard IHD machine (with a connection to a central purified water-supply or the use of a portable/built-in reverse-osmosis machine) or a CRRT machine using standard commercially available CRRT solutions. In either case, adjustments are made to the blood flow rate (Qb), and dialyzate rate (Qd) and/or replacement fluid rates. These modifications are made to reduce the efficiency of solute clearance relative to standard IHD (and provide it for a longer duration) or increase clearance relative to CRRT (and provide it for a shorter duration). When using a conventional IHD machine to provide PIRRT, the machine software may not allow the Qd to be reduced enough to markedly decrease the efficiency of solute clearance. In such cases, a CRRT or pediatric IHD dialyzer (filter) with a relatively small surface area may be utilized to further reduce efficiency. Depending on the machines used and local experience, specific PIRRT modalities utilize diffusive clearance (i.e., hemodialysis; e.g., sustained low-efficiency (daily) dialysis [SLED/SLEDD]), convective clearance (i.e., hemofiltration; e.g., accelerated veno-venous hemofiltration [AVVH]) or both (i.e., hemodiafiltration; e.g., sustained low-efficiency (daily) diafiltration [SLED-f /SLEDD-f ]).
Vascular access
Vascular access considerations for patients with AKI are similar to when prescribing CRRT [16]. For patients with pre-existing kidney failure and an arteriovenous fistula (AVF) or arteriovenous graft (AVG), unless IHD-trained nurses are routinely involved in the provision of PIRRT and measures are in-place to prevent dislodgement of access needles, a hemodialysis catheter is required for PIRRT.
Anticoagulation
There is less need for anticoagulation with the use of PIRRT compared with CRRT, largely due to the higher Qb. In the absence of another indication for anticoagulation, we prescribe PIRRT without any anticoagulation (i.e., saline flushes only). When anticoagulation is indicated due to issues with filter clotting or otherwise, unfractionated heparin is most commonly used. If CRRT machines are used to provide PIRRT and regional citrate anticoagulation is possible, it is the option of choice. Table 1 details sample PIRRT prescriptions according to whether a conventional IHD machine or a CRRT machine is being used and relative to standard IHD and CRRT treatments. Successful development and implementation of routine PIRRT protocols necessitate a collaborative approach. The input of nephrologists, critical care physicians, nurses, pharmacists and administrators is required.
Complications/safety
When ordering PIRRT that is delivered using a conventional IHD machine, use of a low dialyzate temperature (i.e., 35-35.5 °C) [17], relatively high dialyzate sodium and calcium concentrations (e.g., 145 mmol/L and 1.5 mmol/L, respectively) may help mitigate HIRRT [18]. In patients with significant hyponatremia (e.g., serum sodium ≤ 130 mmol/L), the dialyzate sodium should be reduced to a level that will prevent overly rapid correction assuming that equilibration between the serum and dialyzate sodium will occur before the end of treatment. When using a conventional IHD machine with online generation of dialyzate, dialyzate bicarbonate levels must also be reduced to allow for generation of dialyzate sodium concentrations at the lower end of what the machine allows (typically ~ 130 mmol/L). Similarly, when ordering dialyzate potassium concentration, it is safest to assume that complete equilibration will occur prior to the end of the treatment. Thus, unless the patient is profoundly hyperkalemic and/or more-rapid correction is mandated (i.e., serum potassium ≥ 6.5 mmol/L or acutely rising) then a dialyzate potassium of 4 mmol/L can be used routinely to avoid precipitating hypokalemia.
Hypophosphatemia is a frequent complication of any continuous or prolonged RRT and is often under recognized [19]. Hypophosphatemia during RRT can lead to tissue hypoxia [20] and is associated with prolonged ventilator dependence [21]. Pre-emptive management is key since effects of phosphate depletion can occur even without overt hypophosphatemia. At one author's (AV) institution, the PIRRT protocol calls for starting oral supplementation when serum phosphate is less than 1.1 mmol/L. At the other author's (EC) institution, where IHD equipment is used to provide PIRRT, a phosphate additive is routinely added to dialyzate when serum phosphate is less than 1.6 mmol/L. Other pre-emptive strategies include using phosphate-containing solutions (if CRRT equipment is used to provide PIRRT). Intravenous phosphate supplementation may be required for moderate to severe hypophosphatemia (< 0.6 mmol/L).
Antibiotic and other medication dosing data in PIRRT are limited and, ideally, should be considered in conjunction with the input of a critical care or nephrology pharmacist. For medications cleared during RRT, augmented or additional dosing may be required. For example, intravenous vancomycin may need to be given immediately before and after a 10-12 h PIRRT session to ensure an adequate therapeutic level during and post-treatment. Table 2 provides additional details regarding dosing of selected antibiotics in patients receiving PIRRT [22][23][24][25][26][27][28], a topic that has been explored in greater detail by other reviews [29,30].
Dose/adequacy
Unlike dosing recommendations for CRRT and IHD (based on RENAL [31] and ATN [32] trials), there is no standard recommendation for dosing of PIRRT. Despite significant pitfalls in its use, urea kinetics remain the mainstay of determining adequacy of clearance during RRT, even in AKI. When prescribing PIRRT as a substitute for CRRT, a minimum weekly standard Kt/Vurea of 6 may be required. If using as a substitute for IHD or as a transition therapy, then lower flow rates or decreased frequency of treatments may suffice, as weekly standard Kt/Vurea recommendations for IHD is 2 [1]. It should be noted that volume overload is also an indication for RRT and frequency of PIRRT treatments ultimately will also depend on volume status and metabolic derangements such as hyperkalemia.
Conclusions
The various forms of PIRRT used in ICU allow for costeffective and flexible treatments for critically ill patients with kidney failure. As detailed in Table 1, practical considerations related to its application depend on whether IHD or CRRT machines are used to provide PIRRT. As is the case for our colleagues who prescribe CRRT [16], at institutions that provide PIRRT, we similarly advocate for its protocolized application accompanied by routine monitoring of quality and safety. [29,30]. We suggest that all anti-infective agents for critically ill patients receiving PIRRT are prescribed in conjunction with a critical care pharmacist and guided by directly measured levels, whenever possible
Anti-infective Agent [Relevant REFs] Suggested Dosing Regimen* Comments
Vancomycin [22,23] Loading dose of 2400 mg then 1600 mg post-treatment Clearance with PIRRT is ~ 3X higher than is described for CRRT Ongoing dosing guided by post-PIRRT trough levels Piperacillin [24,25] Fluconazole [28] Loading dose of 800 mg followed by 400 mg twice daily (q12h or pre-and post-PIRRT) Recommendation based on Monte Carlo simulations using a pharmacokinetic model of PIRRT. Directly measured pharmacokinetic data for fluconazole (and most anti-infective agents) are limited in this setting | 2023-03-08T15:19:25.368Z | 2023-03-08T00:00:00.000 | {
"year": 2023,
"sha1": "edac02d3f1bad065c205f436a70332a30ddd6fec",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/counter/pdf/10.1186/s13054-023-04389-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "edac02d3f1bad065c205f436a70332a30ddd6fec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
86688584 | pes2o/s2orc | v3-fos-license | Granulomatosis with Polyangiitis Presenting with Bilateral Hearing Loss and Facial Paresis
G ranulomatosis with polyangiitis (GPA) is a rare vascular inflammatory disease that affects the upper and lower respiratory tract and kidneys. Although the etiology of GPA is unknown, it is thought to be triggered by environmental events among patients with genetic susceptibility. Due to the frequency of upper respiratory tract involvement (70%-100%), otorhinolaryngologic symptoms may be the first clinical manifestation of disease.
G ranulomatosis with polyangiitis (GPA) is a rare vascular inflammatory disease that affects the upper and lower respiratory tract and kidneys. Although the etiology of GPA is unknown, it is thought to be triggered by environmental events among patients with genetic susceptibility. 1 Due to the frequency of upper respiratory tract involvement (70%-100%), otorhinolaryngologic symptoms may be the first clinical manifestation of disease. 1
Case Presentation
We present a case of a patient who uniquely presented with bilateral otitis externa, with subsequent hearing loss and unilateral facial paresis as the initial signs of GPA. The Temple University School of Medicine Institutional Review Board determined that this case report is exempt from review. A 68-year-old woman with a history of bilateral otitis externa and septal perforation following cocaine use initially presented with hearing loss and aural fullness of 2 weeks' duration. Her examination revealed purulent otorrhea, external auditory canal edema, and erythema consistent with acute otitis externa. She was treated and subsequently lost to follow-up. A month later, she presented to the emergency room with left facial paresis and bilateral hearing loss. There, head computed tomography (CT) showed partial opacification of the left middle ear and mastoid and absence of the nasal septum and turbinates ( Figure 1). Chest CT showed bilateral pulmonary nodules.
She presented again to the otolaryngology service. At that time, no mass was present on nasopharyngoscopy, and her facial function was normal. On otologic examination, she had developed left serous otitis media and mixed hearing loss. Myringotomy and tube placement were performed. Cultures of mucopurulent material grew methicillin-sensitive Staphylococcus aureus, which was treated with an extended course of antibiotics but did not resolve. Laboratory evaluation for sarcoidosis was negative. Follow-up imaging showed an apical pleural mass, bilateral pulmonary nodules, and renal and breast masses. Core biopsy of the breast and incisional breast biopsy revealed microabscesses and poorly formed granulomas with giant cells. Immunostains (CD20, CD3, CD5, CD43, CD10, bcl-2, CD21, S-100, CD68, and CD1a) supported a reactive process. Polymerase chain reaction studies for B-cell clonality and stains for acid-fast and fungal organisms were negative. Serum myeloperoxidase antibody was slightly elevated, as was proteinase 3 antibody. Subsequent lung wedge biopsy was indicative of GPA, including areas of geographic necrosis, granulomatous inflammation, and capillaritis ( Figure 2). She was subsequently treated with rituximab and prednisone for 2 weeks and is currently maintained on azathioprine and prednisone with supplemental Prolia (denosumab), calcium D, and cholecalciferol for severe steroid-induced osteoporosis.
Discussion
Patients with GPA frequently present with a number of head and neck complaints, including epistaxis, rhinorrhea, nasal obstruction, and spontaneous septal perforation. 2 Otologic manifestations, including serous otitis media, chronic otitis media, hearing loss, vertigo, and mastoiditis, are found among 35% of patients with GPA. GPA-induced otitis media can be associated with facial palsy and hearing loss (either sensorineural or conductive) and does not resolve with antibiotic therapy or surgery. 3 Spontaneously resolving facial paralysis, as noted in this case, has not previously been described.
Nonenhanced CT images can indicate the extent of the disease, but findings are not specific for GPA. 4 The nasal septum may be perforated and the turbinates shortened, and the paranasal sinuses demonstrate mucosal thickening and opacification. 4 Subglottic stenosis occurs frequently.
Inflammation of the temporal bone results in soft tissue density in the mastoid and middle ear cavities, typically bilaterally.
Diagnosis of GPA is confirmed by histology and serologic testing, including elevated myeloperoxidase-and proteinase 3-ANCA levels. Head and neck biopsies are often nondiagnostic unless obtained from the paranasal sinuses (45% vs 84%). 5 Pathognomonic biopsy characteristics include geographic necrosis, poorly formed granulomas, scattered giant cells, and microabscesses. 4,5 Geographic necrosis is described as basophilic patches of tissue with serpiginous borders surrounded by granulomatous inflammation with histiocytes or giant cells (Figure 2). Vascular changes include fibrinoid necrosis of blood vessel walls and granulomatous inflammation. Renal biopsy typically reveals active glomerulonephritis. 1 Untreated severe GPA can lead to 90% mortality within 2 years, with death due to renal or respiratory failure. 1 With treatment, 5-year survival is 75% to 88%. Classically, cyclophosphamide and glucocorticoids have been utilized to control severe GPA; however, rituximab has been approved by the Food and Drug Administration as an alternative to cyclophosphamide since 2011. Less severe disease may be treated with trimethoprim-sulfamethoxazole or dapsone. Treatment duration is tailored to control of symptoms, and maintenance of remission is continued for 18 months. Relapses are common (up to 93%). 1 Permanent otolaryngologic sequelae include saddle nose and nasoseptal deformities and hearing loss. 2,4,5 Complex systemic illnesses can present with otolaryngologic manifestations and may present solely with head and neck complaints. This case highlights the importance of maintaining a wide differential during the evaluation of these patients.
Author Contributions
Taha Mur, first author, drafting, design of the work, final approval, accountability for all aspects of the work; Marian Ghraib, drafting, data analysis, final approval, accountability for all aspects of the work; Jasvir S. Khurana, drafting, interpretation of the data, final approval, accountability for all aspects of the work; Pamela C. Roehm, corresponding author, conception and design of the work, drafting, final approval, accountability for all aspects of the work. | 2019-03-28T13:33:18.628Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "c2ef2fe430e8617eed9e68fbc3fc28be308bcc3f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/2473974x18818791",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c2ef2fe430e8617eed9e68fbc3fc28be308bcc3f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18357636 | pes2o/s2orc | v3-fos-license | Modification of the structural and electrical properties of graphene layers by Pt adsorbates
The properties of graphene are strongly affected by metal adsorbates and clusters on graphene. Here, we study the effect of a thin layer of platinum (Pt) metal on exfoliated single, bi- and trilayer graphene and on chemical vapor deposition-grown single-layer graphene by using Raman spectroscopy and transport measurements. The Raman spectra and transport measurements show that Pt affects the structure as well as the electronic properties of graphene. The shift of peak frequencies, intensities and widths of the Raman bands were analyzed after the deposition of Pt with different thicknesses (1, 3, 5 nm) on the graphene. The shifts in the G and 2D peak positions of the Raman spectra indicate the n-type doping effect by the Pt metal. The doping effect was also confirmed by gate-voltage dependent resistivity measurements. The doping effect by the Pt metal is stable under ambient conditions, and the doping intensity increases with the increasing Pt deposition without inducing a severe degradation of the charge carrier mobility.
Introduction
In the last few years, graphene, a flat monolayer of carbon atoms arranged in a hexagonal network, has had tremendous attraction for researchers due to fascinating properties such as its very high mobility and quantum electronic transport [1][2][3]. However, the absence of a band gap in pristine graphene makes it unsuitable for digital device applications [4,5]. There are still many obstacles to overcome for graphene to be adopted as a device material. Recently, the doping of graphene has drawn much interest because it is crucial to fabricate integrated devices such as logic circuits [6]. Recently, metal adatoms and clusters on graphene have been a topic of great interest since they can locally dope or modify the band structure [7,8]. The interaction of electrons in graphene with surface adsorbates like metals and molecules is an important issue for high electronic mobility, doping and applications in sensors [9,10]. The metal adatoms can also induce the major structural deformation of graphene. It has already been found that the absorption of metal nanoparticles changes the structural and electronic properties of graphene [11,12]. Metals on graphene surfaces are employed as an electrical contact, which is an essential device element. Therefore, it is important to understand their influence on the structure and electronic properties of graphene. In general, a Pt contact is used as an electrode in many graphene devices. However, the physics implicated at the interface between the metallic electrodes and graphene remains ambiguous. The modification of defects in graphene by other metal adatoms has already been studied. However, no paper is available for study on the interaction of a thin layer of Pt film on graphene. Using the density functional theory, Giovannetti, Khomyakov and their co-workers extensively studied the electronic structure and charge transfer of graphene/metal interfaces. They Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
showed that graphene is physisorbed onto Al, Ag, Cu, Au and Pt surfaces [13,14]. However, the experimental exploration of metal/graphene systems has been limited so far, and it is important to understand their influence on the structural/ electronic properties of graphene.
In this paper, we report the effect of a thin layer of Pt on exfoliated single-, bi-and trilayer graphene as well as on single-layer graphene grown by chemical vapor deposition (CVD) using Raman spectroscopy and transport measurements. The Raman spectra and transport measurements show that Pt affects the structure as well as the electronic properties of graphene. The Raman bands were analyzed before/after the deposition of Pt with different thicknesses (1, 3, 5 nm) on graphene. The Raman spectra and transport measurements indicate the n-type doping effect by Pt deposition.
Experimental section
Pt metals with different thicknesses (1, 3, and 5 nm) were deposited on exfoliated single, bi-and trilayer graphene and on single-layer CVD-grown graphene by a thermal evaporation system in a high vacuum (∼8 × 10 −7 torr) at a rate of 1.0 Å s −1 The Pt deposition was monitored by a quartz oscillator and confirmed with atomic force microscopy (AFM). To avoid sample heating during deposition, the sample stage was cooled by a continuous water flow. The Raman spectra were measured before and after the metal deposition on the same graphene sample. The Raman spectra were recorded at room temperature with a Renishaw microspectrometer. The laser wavelength was 514 nm; to avoid a laser-induced heating effect, the power was kept at 1.0 mW. The diameter of the laser spot was less than 1 μm. The Dirac points (or the change in the neutrality point) of the graphene samples were observed by a gate-voltage dependent resistivity measurement using a 4-probe configuration method with a lock-in amplifier at room temperature.
The exfoliated single, bi-and trilayer graphene were obtained from the bulk graphite using a standard Scotch Tape method and were confirmed with optical and Raman spectra, whereas the CVD-grown single-layer graphene was obtained using thermal CVD. The growth process of the graphene is as follows: The graphene used in this study was grown on 25 μm thick copper foils from Alfa Aesar (99.8% pure) via thermal CVD. A mechanically polished and electropolished copper foil was inserted into a thermal CVD furnace. The furnace was evacuated to ∼10 -4 torr and heated to 1010°C under an H 2 gas flow (∼10 -2 torr). After the temperature became stable at 1010°C, both the CH 4 (20 standard cubic centimeters per minute (sccm)) and the H 2 (5 sccm) were injected into the furnace for 8 min to synthesize the graphene. After the graphene synthesis, the sample was cooled at a rate of 50°C min −1 to room temperature [15,16]. The grown graphene film was then transferred to the SiO 2 /Si wafer as follows: The Cu foil was etched in an aqueous solution of ammonium persulfate (APS). The surface of the graphene was spin-coated with polymethyl methacrylate (PMMA), and the sample was then baked at 70°C for 10 min The PMMA coating was applied to prevent the graphene films from cracking and folding during the transfer to a desired substrate. The PMMA/graphene film was washed with deionized water after the Cu foil had been completely dissolved; it was then transferred onto the Si/SiO 2 wafer. The PMMA film was removed with acetone. The graphene sample was Figure 1 shows the Raman spectra of the exfoliated pristine single-layer graphene (SLG), the bilayer graphene (BLG) and the trilayer graphene (TLG). Figure 1(a) shows the single Lorentzian fit of the 2D peak for the SLG. A broad 2D peak is fitted with four Lorentzian curves, as can be seen in figure 1(b), which confirms the BLG. Figure 1(c) shows the fitting of the broad 2D peak in the TLG by six Lorentzian curves. Figure 1(d) shows the Raman spectra of the exfoliated pristine SLG, BLG and TLG sample. The absence of the D peak in all of the samples indicates the high quality of our samples. We used AFM to measure the thickness and morphology of each graphene sample after Pt deposition. Figure 2 depicts the surface topology and line profile of the graphene by using AFM after 1, 3 and 5 nm Pt deposition on the CVDgrown graphene. All scans were taken in a tapping mode under ambient conditions, and the scan area was kept at 10 × 10 μm. The line profiles of the graphene sample after 1, 3 and 5 nm Pt deposition are shown in figure 2. The line profiles were obtained across the boundary between the deposition area and covered area. However, the boundary is not clear for the 1 nm deposition in figure 2(a). The thickness measurement by the AFM is consistent with the nominal thickness measured by the quartz oscillator (thickness monitor) for the Pt film deposition. The thicknesses of the graphene samples after 1, 3 and 5 nm Pt deposition were measured as 1.1, 3.2 and 5.1 nm, as seen in figures 2(b), (d) and (f).
Results and discussion
The Pt film's morphology is examined in figures 2(a), (c) and (e). The deposition is not uniform and shows clusters for the 1 nm Pt deposition, as shown in figure 2(a), whereas the films become more uniform and smooth for the 3 and 5 nm Pt deposition. The morphology of the Pt films is also examined using scanning electron microscopy (SEM). Figure 3(a) shows the SEM image of the pristine CVD-grown graphene. The SEM image clearly shows the high quality of the pristine graphene with little residue. The SEM image of the CVDgrown graphene after the 1 nm Pt deposition shows clusters all over the graphene's surface, as shown in figure 3(b). For the 3 and 5 nm Pt deposition, the films are uniform and continuous, as seen in figures 3(c) and (d).
The Raman spectra of the exfoliated and CVD-grown SLG before and after the 1, 3 and 5 nm Pt deposition are shown in figure 4. The G and 2D peaks of the pristine exfoliated SLG are observed at 1581 cm −1 and 2683 cm −1 , respectively, as shown in figures 4(a) and (b). These figures also show the Raman spectra of the exfoliated SLG after the 1, 3 and 5 nm Pt deposition. A change in the intensities and the full width at half maximum (FWHM) of the G, 2D and D peaks are observed after the 1, 3 and 5 nm Pt coating. The I D / I G ratio increases in the exfoliated SLG from 0.01 to 0.3 after the deposition of 1 nm of Pt. The ratio of I 2D /I G becomes 2.39 compared to 3.30 in pristine SLG. After the 3 nm Pt deposition on the exfoliated SLG, the ratio of the I 2D /I G is significantly reduced to 1.86. The ratio of the I D /I G is 0.41 for the exfoliated SLG after 3 nm of Pt metal deposition. The 2D band is a single peak at 2671 cm −1 , which is red-(downward) shifted by 13 cm −1 from the pristine graphene. On the other hand, the G band of the graphene after the Pt deposition is found to be at 1587 cm −1 , which is blue-(upward) shifted by 6 cm −1 . After the 5 nm Pt metal deposition, a more pronounced D band peak is observed. The ratio of I D /I G is 0.50 for the graphene after 5 nm of Pt deposition. The intensity of the 2D band is significantly reduced, and the ratio of the I 2D / I G becomes 1.25. The 2D band is observed at 2665 cm −1 , which is red-shifted by 19 cm −1 from the pristine graphene. The FWHM of the G peak has been observed at 15.5, 13.1, 10.2 and 9.7 cm −1 before and after the 1, 3 and 5 nm Pt metal deposition, respectively. The blue shift of the G peak, the red shift in the 2D peak and the reduction of the FWHM of the G peak indicate an n-doping effect [17][18][19][20][21], which is also confirmed by the electrical measurement of the graphene device. For further verification, we also analyzed the CVD-grown SLG by using the same metal of the 1, 3 and 5 nm thick Pt film. Figures 4(c) and (d) show the Raman spectra of the CVD-grown SLG before and after the 1, 3 and 5 nm Pt deposition. The G band was observed at 1582 cm −1 for the pristine CVD-grown SLG, whereas it was found at 1585, 1588 and 1591 cm −1 for the 1, 3 and 5 nm Pt deposition, respectively. The ratio of I D /I G was changed to 0.28, 0.37 and 0.47 after the 1, 3 and 5 nm Pt deposition, respectively, as compared to the 0.02 ratio for the pristine CVD-grown graphene. The 2D band of the pristine graphene was observed at 2684 cm −1 , whereas those after the Pt metals with thicknesses of 1, 3 and 5 nm were observed at 2679, 2674 and 2668 cm −1 , respectively, which is red-shifted by 16 cm −1 from the pristine CVD-grown graphene. The ratio of I 2D /I G was changed to 2.3, 1.9 and 1.35 after the 1, 3 and 5 nm Pt deposition, respectively, compared to the 2.9 ratio for the pristine CVDgrown graphene.
Both the n-and p-type doping can lead to the reduction of the FWHM of the G-peak, as already reported in [18]. We have done a Lorentzian fitting of the G-peak for the pristine 1, 3 and 5 nm Pt-coated CVD-grown graphene samples, as shown in figure 4(e). The FWHM of the G peak has been observed as 16, 13.4, 11.4 and 10.3 cm −1 before and after the 1, 3 and 5 nm Pt metal deposition, respectively. We found that the FWHM of the G-peak decreased after the Pt deposition. However, the strain usually results in an increase of the FWHM of the G-peak, as reported in [22]. Furthermore, it was reported that the G-peak splits into two distinct G and G' peaks if the strain is large enough. It was also reported that the increasing strain resulted in the red shift for both the G and 2D peak positions [22]. However, in this experiment, we observed a blue shift in the G peak position, a red shift in the 2D peak position and a reduction of the FWHM of the G peak position after the Pt deposition. Therefore, we conclude that the shift of the Raman peaks in this experiment is due to doping rather than strain [17][18][19][20][21]. We note that the exfoliated graphene is more subject to n-type doping than is the CVDgrown graphene. We believe some of the organic residues or impurities that formed during the transfer process hindered the coupling between the Pt and carbon atoms of the graphene. Figures 5(a) and (b) show the Raman spectra of the exfoliated BLG before and after the 1, 3 and 5 nm Pt deposition. Before the Pt deposition, the G band was observed around 1581 cm −1 for the exfoliated BLG. On the other hand, the G band of the bilayer graphene after the 1, 3 and 5 nm Pt deposition was found to be at 1583 cm −1 , 1586 cm −1 and 1589 cm −1 . The ratio of I 2D /I G became 0.90 after the 5 nm Pt deposition, whereas it was 1.36 in the pristine BLG. The 2D peak was found at 2705 cm −1 , 2701 cm −1 and 2696 cm −1 after the 1, 3 and 5 nm Pt deposition, respectively. The red shift of 11 cm −1 in the 2D band was found in the exfoliated BLG after the 5 nm Pt deposition. It is known that the blue shift of the G band positions and the red shift of the 2D band positions indicate the n-doping of the graphene. Figures 5(c) and (d) show the Raman spectra of the exfoliated TLG before and after the 1, 3 and 5 nm Pt deposition. Before the Pt deposition, the G band was observed to be around 1582 cm −1 for the exfoliated TLG. On the other hand, the G band of the graphene after the 1, 3 and 5 nm Pt deposition was at 1584, 1585.5 and 1587 cm −1 . The ratio of I 2D /I G became 0.65 compared to 0.8 in pristine TLG. The red-shifted 2D peaks were found at 2706 cm −1 , 2703 cm −1 and 2699 cm −1 after the 1, 3 and 5 nm Pt deposition. Again, the blue shift of the G band positions and the red shift of the 2D band positions are indicative of n-doping of the TLG [4,7,[17][18][19][20][21]. The n-type doping effect is also confirmed by the electrical measurement of the same graphene sample after the 1, 3 and 5 nm Pt deposition. Figure 6(a) shows the I D /I G ratio before and after the 1, 3 and 5 nm Pt deposition in exfoliated SLG, BLG and TLG. The I D /I G ratio for the pristine state in SLG, BLG and TLG is around zero, indicating the high quality of the graphene samples, but after the Pt deposition, the I D /I G ratio increases significantly. A larger change is observed for the SLG compared to the BLG and TLG. The I D /I G ratio is 0.5 in the SLG, while it is 0.25 in the TLG. 
Figure 6(b) shows the I 2D /I G ratio before and after the 1, 3 and 5 nm Pt metal deposition in exfoliated SLG, BLG and TLG. The largest change in the I 2D / I G ratio is observed in the SLG. The I 2D /I G ratio changes from 3.1 to 1.1 in the SLG, while the I 2D /I G ratio changes from 0.8 to 0.65 in the TLG. Figure 7 shows the gate-dependent resistivity of the pristine exfoliated SLG, BLG, TLG and CVD-grown singlelayer graphene. The maximum in the gate-dependent resistivity identifies the back gate-voltage (V g ), which corresponds to the Dirac point, while the slope indicates the mobility of the charge carriers in the graphene [23][24][25]. The Dirac point of the pristine graphene sample is found around V g = 0 V, indicating an undoped feature of the pristine graphene. The deposition of Pt on the graphene surface causes the shift of the Dirac point toward negative gate-voltages, showing the ntype doping effect [26,27]. The Dirac points were shifted from 5 V to −12, −36 and −57 V after the 1, 3 and 5 nm Pt deposition in the exfoliated SLG, as shown in figure 7(a). The Dirac points shifted toward more negative gate-voltages as we increased the Pt thicknesses, which increased the n-type doping. For further verification, we also measured the Dirac points for the CVD-grown SLG. In figure 7(b), the Dirac points were observed at −13, −32 and −52 V after the 1, 3 and 5 nm Pt deposition, respectively. Figure 7(c) shows the gatevoltage dependent resistivity for the pristine exfoliated BLG and the 1, 3 and 5 nm thick Pt deposited in the BLG. The Dirac points were shifted from 0 V to −15, −29 and −46 V after the 1, 3 and 5 nm Pt deposition, respectively. Figure 7(d) shows the gate-voltage dependent resistivity for the pristine exfoliated TLG and the 1, 3 and 5 nm thick Pt deposited in the TLG. The Dirac points moved from 0 V to −11, −24 and −41 V after the 1, 3 and 5 nm Pt deposition, respectively. To check the stability of the Pt doping, we also measured the sample after exposing it to an oxygen gas flow for a certain amount of time, as shown in figure 7(e). The CVD-grown SLG with the 5 nm thick Pt deposition was used for a stability check. No significant change was observed in the gate-voltage-dependent resistivity after 30 min of exposure. We also checked the stability of the Pt doping. After leaving the samples under ambient conditions for one day, their transport properties remained the same. Figure 8(a) shows the electron and hole mobility before and after the deposition of different Pt film thicknesses. The mobility was obtained using relation μ = (1/C g )(∂σ/∂V g ), where σ is the conductivity of the samples, and V g is the gatevoltage [28][29][30][31]. The mobility of the graphene decreased slightly with the increasing Pt thickness. However, the change in the mobility of the BLG and TLG was smaller than that of the SLG with the Pt deposition, as shown in figure 8(a). Figure 8(b) shows the change in the carrier concentration after the 1, 3 and 5 nm Pt deposition. The carrier concentration increases with the increasing Pt deposition. The trend of the carrier concentration change is similar, but the largest increase was observed in the SLG. The reduced doping effect in the BLG and TLG as compared to the SLG may be due to the influence of Pt, which is dominant in the top layer of the BLG and TLG.
Conclusion
Using Raman spectroscopy and transport measurements, we studied the interaction of thin layers of Pt with exfoliated single, bi-and trilayer graphene and CVD-grown single-layer graphene. The Raman spectra and transport measurements indicate that Pt affects the structural as well as the electronic properties of the graphene. The shifts in the G and 2D peak positions indicate the n-doping of graphene by the Pt metal. The doping effect is also confirmed by gate-voltage dependent resistivity measurements. However, it was found that Pt affects the characteristics of the SLG more than the BLG or TLG. We have also verified the stability of our graphene devices under ambient conditions and in an oxygen flow. | 2017-06-14T00:27:55.507Z | 2014-09-08T00:00:00.000 | {
"year": 2014,
"sha1": "0a734590c939ada90fc7e6e1ad2f45a0f42f7e4b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1468-6996/15/5/055002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d86966046c5f10ae98267667068e8e481b23894",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
24247422 | pes2o/s2orc | v3-fos-license | Oncologic and obstetric outcomes of conservative surgery for borderline ovarian tumors in women of reproductive age
Objective To compare the oncologic and obstetric outcomes in reproductive-age females with borderline ovarian tumors (BOTs) treated with cyst enucleation (CE) or unilateral salpingo-oophorectomy (USO). Methods The medical records of patients with BOTs treated between 1998 and 2014 were retrospectively reviewed. The recurrence rates in the USO and CE groups were compared, and the postoperative obstetric outcomes were assessed via telephone survey. Results Eighty-nine patients with BOTs underwent USO, and 19 underwent CE. Of these, six patients had recurrent BOTs. The recurrence rate was significantly lower in the USO group (3/89, 3.4%) than in the CE group (3/19, 15.8%) (P=0.032). All patients with recurrent disease were successfully treated with further surgery. Of the 76 patients interviewed by telephone, 71 (93.4%) resumed regular menstruation after surgery. Twenty-six of the 32 patients (81.3%) who attempted to conceive had successful pregnancies. USO (19/24, 79.2%), like CE (7/8, 87.5%), resulted in favorable pregnancy rates for patients with BOTs. Conclusion USO is a suitable fertility-preserving surgery for women with BOTs. CE is also an acceptable option for select patients.
Introduction
Borderline ovarian tumors (BOTs) are characterized by the presence of cellular proliferation and nuclear atypia without stromal invasion. They represent 10% to 15% of all epithelial ovarian tumors [1]. Compared to invasive epithelial ovarian cancers, BOTs typically present in younger women, are diagnosed at earlier stages, and have better prognoses [2]. The median age at diagnosis is 45 years, and 34% of patients are of childbearing age (under 40 years) [3]. The age at first pregnancy now exceeds 30 years in many developed countries [4]. Therefore, surgery for younger women diagnosed with BOTs has moved from radical treatment to a more conservative approach [5]. Fertility-sparing surgery (FSS) is safe, feasible, Oncologic and obstetric outcomes of conservative surgery for borderline ovarian tumors in women of reproductive age Vol. 60, No. 3, 2017 and widely accepted and performed [6][7][8]. Although many studies have investigated FSS in BOTs [9], data regarding specific obstetric outcomes among the different FSS subtypes are limited. The aim of the present study was to investigate the obstetric and oncologic outcomes of two FSS subtypes in reproductive-age women with BOTs.
Materials and methods
We reviewed the medical records of patients pathologically diagnosed with BOTs between 1998 and 2014. The study sub-jects included patients who underwent primary surgery at our institution as well as those referred for comprehensive staging operations after initial surgery at another clinic. Pathologically diagnosed with intraepithelial carcinoma or microinvasion was not included in this study. Subjects also had to be reproductive-age women (under 40 years) who were initially treated with FSS. FSS was defined as preservation of the uterus and at least part of one ovary. It was classified into two subtypes: unilateral salpingo-oophorectomy with or without contralateral ovarian cyst enucleation (USO), and unilateral or bilateral cyst enucleation (CE). Patients were treated with adjuvant platinum-based chemotherapy at the discretion of their physi- (Table 1). Demographic, clinical, pathological, surgical, obstetrical, and follow-up data were extracted from the medical records. The pathology slides were reviewed centrally by two expert pathologists. Telephone interviews were conducted to assess obstetric outcomes such as menstruation, pregnancy attempts, successful pregnancy, and usage of assisted reproductive technology (ART). Disease recurrence rates and pregnancy rates were compared between the USO and CE groups. Recurrence-free survival was defined as the time from the initial surgery to disease recurrence or censor date. Survival curves and rates were calculated using the Kaplan-Meier method. The differences in survival were assessed using the log-rank test. Frequency distributions were compared using the chisquared test and Fisher's exact test, and both mean and median values were compared between the two groups using the Student's t-test. A P-value of ≤0.05 in a two-sided test was statistically significant. All statistical analyses were performed using the SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA). This study was approved by the institutional review boards.
Patient characteristics
Of the 108 patients who met our inclusion criteria, 89 had undergone USO and 19 had undergone CE. The baseline patient characteristics are presented in Table 1. There were no significant differences in terms of age at diagnosis, CA 125 levels, surgical approach, parity, histological type, stage, peritoneal implant, and adjuvant therapy between the two groups. With regard to the histologic type, 28 were serous (25.9%), 72 mucinous (66.7%), and nine others (endometrioid, mixed cell type; 7.4%). After the initial surgery, nine USO patients USO (14.6%) and two CE patients (10.5%), who mainly had stage Ic and II BOTs, received adjuvant platinumbased chemotherapy. Nine patients received carboplatin with paclitaxel, and two received cisplatin with cyclophosphamide.
Oncologic outcomes
The median follow-up period was 37.4 months in the USO group, and 25.4 months in the CE group. Six patients developed recurrent disease 9 to 67 months after the initial surgery. The median recurrence-free interval was 24 months. The rate of recurrence was significantly higher in the CE group than in the USO group (15.8% vs. 3.4%, P=0.032) ( Table 1). The 5-year recurrence free survival rate was significantly higher in the USO group than in the CE group (95.7% vs. 78.8%, P=0.022) (Fig. 1). Table 2 summarizes the oncologic outcomes of patients with recurrent disease. All six patients had recurrent BOTs, none of which were invasive in nature. Regarding histologic subtypes, four were mucinous, one was serous, and one was seromucinous. Case 1 originally involved a 16-cm left ovarian cyst that was treated with left USO. After 41 months, a computed tomography scan revealed a right ovarian cyst that was treated with a right adnexectomy. The two other cases in the USO group involved recurrent disease in the contralateral ovary and uterus respectively, and were managed with radical surgery including hysterectomy. In the CE group, the sites of recurrence were the ipsilateral ovary, contralateral ovary and both ovaries. All three patients underwent a second FSS; in case 6 as well, six cycles of platinum-based adjuvant chemotherapy was administered at the time of surgery. There were no disease-related deaths, and all patients were alive with no evidence of disease after surgery. telephone. Of these, six refused to participate in the interview, but the remaining 76 were able to provide information on their menstrual cycles and obstetric histories (Table 3). Of the 76 patients, 71 resumed regular menstruation and 5 had irregular menstruation; none experienced premature menopause. Four patients were pregnant at the time of FSS and had simultaneous cesarean sections. All four delivered healthy full-term babies.
Obstetric outcomes
Of the 73 patients with stage I BOTs, 31 attempted to conceive, of which 25 were successful. Of the three patients with advanced BOTs, one attempted to conceive and had two successful singleton pregnancies. In case 1 (Table 2), the patient developed a recurrent borderline tumor on the contralateral ovary, and succeeded in conceiving and delivering a full-term baby vaginally 32 months following her second FSS (USO).
In the USO group, 19 of the 24 women (79.2%) who attempted to conceive had a total of 25 pregnancies; this included two who underwent ovulation induction using clomiphene citrate and in vitro fertilization. These pregnancies resulted in 21 full-term deliveries. There were three spontaneous abortions and one ongoing pregnancy at the time of analysis. In the CE group, seven of the eight women (87.5%) who attempted to conceive had a total of eight pregnancies; this included one who underwent ovulation induction using clomiphene citrate and in vitro fertilization. These pregnancies resulted in seven full-term deliveries and one ongoing pregnancy. None of the patients underwent radical surgery after delivery.
Discussion
Fertility-sparing treatments are defined as procedures that preserve the uterus and some functional ovarian tissue [10,11]. Several studies have compared the oncologic outcomes of radical surgeries and FSS [11][12][13][14]. However, few studies have compared the oncologic and obstetric outcomes of FSS subtypes (USO vs. CE) [15,16]. One such study, which compared the oncologic outcomes of USO and CE, found that CE patients have a higher recurrence rate than USO patients [16]. These findings are consistent with the results of the present study, which found that the recurrence rate was significantly higher following CE (15.8%) as compared to after USO (3.4%, P=0.032) ( Table 1).
In the present study, all six patients with recurrent disease Left salpingo-oophorectomy with right ovarian wedge resection with partial omentectomy during cesarean section.
Se Yun Lee, et al. Borderline ovarian tumors at reproductive age had recurrent BOTs and not an invasive cancer. They were successfully treated with further surgery (Table 2). This was consistent with Song et al. [16]'s hypothesis, that despite the substantial risk of relapse following CE for BOT, this approach does not impair patient survival. The histology of the BOT is also an important consideration. Mucinous type BOTs predominate in East Asia, including Korea [17]. A previous study on BOT types found that 31% were serous and 68% mucinous [17]; these numbers are similar to those of the present study's (26% vs. 67%) ( Table 1). Recent studies suggest that mucinous-type BOTs may not be benign, and instead, have a 13 cumulative risk of recurrence in the form of invasive carcinoma at 10 years [1,[17][18][19]. Uzan et al. [8] has also suggested that mucinous BOTs are 'high-risk' in that invasive recurrence is likely after FSS in stage I disease. Therefore, the authors of these studies concluded that USO is preferable to CE for patients with mucinous BOTs. In regions with a high prevalence of mucinous BOTs such as Korea, USO might be considered the FSS of choice rather than CE, consistent with the result of present study.
Regarding obstetric outcomes, the reported pregnancy rate for BOT patients ranges from 40% to 100% [2,12,[20][21][22][23][24], although the use of different surgical approaches in those studies was limited. Vasconcelos and de Sousa Mendes [24] conducted a meta-analysis of 32 studies, and found that the pregnancy rate for women who underwent USO was 45.4% (n=21/46), and the rate for women who underwent CE was 40.3% (n=26/61). In the present study, 26 of the 32 patients (81.3%) who tried to conceive had successful pregnancies ( Table 3). The pregnancy rates of the present study were 79.2% (n=19/24) in the USO group and 87.5% (n=7/8) in the CE group. Cystectomies tend to preserve fertility better than adnexectomies because less ovarian tissue is removed. However, in the present study, the pregnancy rates between the two groups were not significantly different (P=0.615). These results are consistent with those of a previous study [16], and suggest that the obstetric outcomes after USO or CE are promising. The majority of patients also had successful term pregnancies with no congenital anomalies.
Some researchers believe that the appearance of invasive implants on the peritoneal surface portends a less favorable prognosis in patients with BOTs [25]. As such, adjuvant chemotherapy can be considered for these patients, using the regimen typically used for epithelial ovarian cancer. However, studies have shown that postoperative adjuvant chemotherapy fails to lower the relapse rate or improve the survival rate in both the early and advanced stages of BOTs [26,27]. This present study included 11 of 108 patients (10.2%) who received adjuvant chemotherapy. The majority had FIGO (International Federation of Gynecology and Obstetrics) stage Ic disease or above, and received the treatment before 2005 according to the discretion of their physician. Of the 11 patients who received adjuvant chemotherapy, only one (case 3 in Table 2) developed recurrent disease 67 months later. She then underwent successful radical surgery, whereupon no evidence of disease remained. An adverse effect was observed in one patient who developed grade 2 leukopenia. Four patients succeeded in conceiving spontaneously after chemotherapy, and delivered full-term babies. Considering the incidence of recurrence in this present study, there was no benefit to receiving adjuvant chemotherapy. This was consistent with the findings from previous studies [1,[28][29][30]. As such, we would carefully conclude that adjuvant chemotherapy can be avoided for BOT patients with a strong desire to bear children. This study had several limitations. First, it was a retrospective analysis that was limited to a single center. Second, comprehensive surgical staging was not considered for all subjects. Third, the length of follow-up was insufficient. Especially, in CE group, the median follow-up period was shorter as 25.4 months than USO group. Fourth, the obstetric outcomes were subjective and depended on telephone interviews. More objective parameters, such as preoperative and postoperative ovarian function (follicle stimulating hormone, anti-Mullerian hormone, antral follicle count), should have been included.
The strengths of the current study include the relatively large sample size. In addition to confirming the effectiveness of FSS as a treatment for BOT, the current study has demonstrated the favorable obstetric outcomes of FSS in women under 40, and has compared the pregnancy rates between two FSS subgroups (USO and CE).
In this study, the recurrence rates in patients with BOTs treated with USO (3.4%) were significantly lower than in patients treated with CE (15.8%) (P=0.032). In addition, both USO (79.2%) and CE (87.5%) had excellent obstetric outcomes. Therefore, USO is an appropriate fertility-sparing treatment for young women with BOTs. Meanwhile, in some patients, CE may be the only viable option due to their previous history of unilateral oophorectomy or salpingo-oophorectomy, or bilateral BOTs. In our study, all recurrent lesions were BOTs located in the remaining ovary, and were successfully treated by secondary surgery. Therefore, CE is still an acceptable option, but should be limited to selected patients. | 2017-08-30T09:57:04.369Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "2a16ccd6bb3a6a7e65cb3fbfd12da892788a5d8d",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5468/ogs.2017.60.3.289",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a16ccd6bb3a6a7e65cb3fbfd12da892788a5d8d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119648732 | pes2o/s2orc | v3-fos-license | Rational Minimax Iterations for Computing the Matrix $p$th Root
In [E. S. Gawlik, Zolotarev iterations for the matrix square root, arXiv preprint 1804.11000, (2018)], a family of iterations for computing the matrix square root was constructed by exploiting a recursion obeyed by Zolotarev's rational minimax approximants of the function $z^{1/2}$. The present paper generalizes this construction by deriving rational minimax iterations for the matrix $p^{th}$ root, where $p \ge 2$ is an integer. The analysis of these iterations is considerably different from the case $p=2$, owing to the fact that when $p>2$, rational minimax approximants of the function $z^{1/p}$ do not obey a recursion. Nevertheless, we show that several of the salient features of the Zolotarev iterations for the matrix square root, including equioscillatory error, order of convergence, and stability, carry over to case $p>2$. A key role in the analysis is played by the asymptotic behavior of rational minimax approximants on short intervals. Numerical examples are presented to illustrate the predictions of the theory.
1. Introduction. In recent years, a growing body of literature has highlighted the usefulness of rational minimax iterations for computing functions of matrices [25,26,7,8,4]. In these studies, f (A) is approximated by a rational function r of A possessing two properties: r closely (and often optimally) approximates f in the uniform norm over a subset of the real line, and r can be generated from a recursion. A prominent example of such an iteration was introduced by Nakatsukasa and Freund in [26], where it was observed that rational minimax approximants of the function sign(z) = z/(z 2 ) 1/2 obey a recursion, allowing one to rapidly compute sign(A) and related decompositions such as the polar decomposition, symmetric eigendecomposition, SVD, and, in subsequent work, the CS decomposition [8]. An analogous recursion for rational minimax approximants of z 1/2 has recently been used to construct iterations for the matrix square root [7], building upon ideas of Beckermann [2]. There, the iterations are referred to as Zolotarev iterations, owing to the role played by explicit formulas for rational minimax approximants of sign(z) and z 1/2 derived by Zolotarev [31].
The aim of this paper is to introduce a family of rational minimax iterations for computing the principal p th root A 1/p of a square matrix A, where p ≥ 2 is an integer. Recall that the principal p th root of a square matrix A having no nonpositive real eigenvalues is the unique solution of X p = A whose eigenvalues are contained in {z ∈ C | −π/p < arg z < π/p} [15,Theorem 7.2]. The iterations we propose reduce to the Zolotarev iterations for the matrix square root [7] when p = 2, but when p > 2, they differ from the Zolotarev iterations in several important ways. Notably, for all integers p ≥ 2, the iterations generate a rational function of r of A which has the property that for scalar inputs, the relative error e(z) = (r(z) − z 1/p )/z 1/p equioscillates on a certain interval [a, b] (see Section 2 for our terminology). Remarkably, when p = 2, e(z) equioscillates often enough to render max a≤z≤b |e(z)| minimal among all choices of r with a fixed numerator and denominator degree [7]. This optimality property is the hallmark of the Zolotarev iterations, and it allows one to appeal to classical results from rational approximation theory to estimate the maximum relative error. When p > 2, no such optimality property holds. Much of this paper is devoted to showing that the rational minimax iterations for the p th root still enjoy many of the same desirable features as the Zolotarev iterations for the square root, despite the absence of optimality in the case p > 2. We take care to present our results in such a way that when p = 2, the salient features of the Zolotarev iterations are recovered as special cases.
There are a number of connections between the iterations we derive and existing iterations from the literature on the matrix p th root. We have already mentioned that they reduce to the Zolotarev iterations when p = 2. For arbitrary p ≥ 2, the two lowest order versions of our rational minimax iterations are scaled variants of the Newton iteration and the inverse Newton iteration [15,Chapter 6], [3,Section 6], [18]. In another limiting case, our iterations reduce to the Padé iterations [21,Section 5]. Relative to these iterations, the rational minimax iterations offer advantages primarily when the matrix A has eigenvalues with widely varying magnitudes. As an extreme example, if p = 3 and A is Hermitian positive definite with condition number ≤ 10 16 , convergence is achieved in double-precision arithmetic after just 2 iterations when using our type-(6, 6) rational minimax iteration. In contrast, up to 5 iterations are needed when using the type-(6, 6) Padé iteration. Our numerical experiments indicate that the situation is similar, but less dramatic, for non-normal matrices with eigenvalues away from the positive real axis.
This paper is organized as follows. In Section 2, we review the Zolotarev iterations for the matrix square root by summarizing the contents of [7]. In Section 3, we introduce rational minimax iterations for the matrix p th root and present our main results: Theorem 3.1, Theorem 3.2, and their corollaries. Proofs of these results are provided separately in Section 4. Finally, Section 5 presents numerical experiments that illustrate the predictions of the theory.
2. Background: Zolotarev iterations for the matrix square root. Let us summarize the Zolotarev iterations for the matrix square root and their key properties [7]. Let R m,ℓ denote the set of all rational functions of type (m, ℓ) -ratios of polynomials of degree ≤ m to polynomials of degree ≤ ℓ. We say that a function r(z) = g(z)/h(z) in R m,ℓ has exact type (m ′ , ℓ ′ ) if, after canceling common factors, g(z) and h(z) have degree exactly m ′ ≤ m and ℓ ′ ≤ ℓ, respectively. The number d = min{m − m ′ , ℓ − ℓ ′ } is called the defect of r in R m,ℓ . In most of what follows, z is a real variable; we use the letter z since the behavior of r on C will play an important role later in the paper.
Given a continuous, increasing bijection f : [0, 1] → [0, 1] and a number α ∈ (0, 1), let r m,ℓ (z, α, f ) denote the best type-(m, ℓ) rational approximant of f (z) on [f −1 (α), 1]: For m ∈ N and ℓ ∈ {m − 1, m}, the Zolotarev iteration of type (m, ℓ) for computing the square root of a square matrix A reads It is proven in [7] that in exact arithmetic, X k → A 1/2 and α k → 1 with order of convergence m + ℓ + 1 for any A with no nonpositive real eigenvalues. In floating point arithmetic, it is necessary to reformulate the iteration to ensure its stability; we detail the stable reformulation of (2.3-2.4) later on.
With the exception of the cases are not known. However,r m,ℓ (z, α, p √ ·) can be computed numerically; see Section 5 for details. Note that the cost of computingr m,ℓ (z, α, p √ ·) is independent of the dimension of A, so it is expected to be negligible for problems involving large matrices.
As with the square root iteration (2.3-2.4), it is necessary to reformulate the p th root iteration (3.1-3.2) to ensure its stability. This is accomplished by considering the iteration for Y k = X 1−p k A and Z k = X −1 k implied by (3.1-3.2). Exploiting commutativity, we have where h ℓ,m,p (z, α) = r m,ℓ (z, α, p √ ·) −1 . (We swapped the order of the first two indices to emphasize that h ℓ,m,p (z, α) is a rational function of type (ℓ, m), not (m, ℓ).) The remainder of this section presents a series of results about the behavior of the iteration (3.1-3.2) and its counterpart (3.3-3.5). Proofs of these results are given in Section 4.
Functional iteration.
A great deal of information about the behavior of the iteration (3.1-3.2) (and hence (3.3-3.5)) can be gleaned from a study of the functional iteration The following theorem summarizes the properties of the functional iteration (3.6-3.7). In the interest of generality, it focuses on a slight generalization of (3.6-3.7) that reduces to (3.6-3.7) when the function f appearing below is f (z) = z 1/p . The theorem makes use of the following terminology. A continuous function g(z) is said to equioscillate m times on an interval for some σ ∈ {−1, 1}. It is well-known that the minimax approximants (2.1) are uniquely characterized by the property that equioscillates at least m + ℓ + 2 − d times on [f −1 (α), 1], where d is the defect of r m,ℓ (z, α, f ) in R m,ℓ [28, Theorem 24.1]. We will be particularly interested in those functions f for which: (3.i) For every α ∈ (0, 1) and m, ℓ ∈ N 0 , r m,ℓ (z, α, f ) has exact type (m, ℓ). Furthermore, equioscillates exactly m + ℓ + 2 times on [f −1 (α), 1], achieves its maximum at z = f −1 (α), and achieves an extremum at z = 1.
The function is f (z) = z 1/p satisfies this hypothesis; see Lemma 4.8 for a proof.
Let us discuss the meaning of this theorem. It states that the iteration (3.8-3.9) generates a function f k (z) ≈ f (z) with the following curious property: The maximum relative error in f k (z) on the interval [f −1 (α), 1] is equal to the maximum relative error in the best rational approximant of f (z) on a much smaller interval [f −1 (α k−1 ), 1]. Indeed, as k increases, the length of [f −1 (α), 1] remains constant, whereas the length ), assuming f is smooth enough near z = 1. That is, ε k → 0 with order of convergence m + ℓ + 1.
For most functions f , the iteration (3.8-3.9) is not useful, as it (rather circularly) uses f (and f −1 ) to generate an approximation of f . Furthermore, the approximation it generates need not be a rational function of z. The function f (z) = z 1/p , however, is exceptional, in that the iteration (3.8-3.9) -which reduces to (3.6-3.7) for this fgenerates a rational function f k (z) without requiring the evaluation of any p th roots.
A similar result holds for the coupled iteration (3.3-3.5).
Note that the bounds above imply corresponding bounds on the relative errors When A is non-normal and/or has eigenvalues away from the positive real axis, the behavior of the matrix iteration (3.1-3.2) (and hence (3.3-3.5)) is dictated by the behavior of the scalar iteration (3.6-3.7) on complex inputs z. This has been analyzed in detail for the case p = 2 in [8], but for p > 2, numerical experiments indicate that the scalar iteration converges in a subset of the complex plane with fractal structure, a typical feature of iterations for the p th root. We study this behavior numerically in Section 5. It remains an open problem to determine theoretically the convergence region {z ∈ C | lim k→∞ f k (z) = z 1/p } for the iteration (3.6-3.7).
Special cases.
For certain values of m, ℓ, and p, the theory above recovers some known results from the literature. We discuss these situations below.
3.3.1. Square roots. When p = 2, m ∈ N, and ℓ ∈ {m − 1, m}, a remarkable phenomenon occurs, allowing us to draw the connection between Theorem 3.1 and the results of [7] that we alluded to earlier. For these p, m, and ℓ, the function f k (z) is a rational function of type (m k , ℓ k ), where (m k , ℓ k ) is given by (2.5). In both the case ℓ = m − 1 and the case ℓ = m, we have follows from the theory of rational minimax approximation that f k (z) is the best rational approximant of √ z of type (m k , ℓ k ) on [α 2 , 1]: In particular, for every k ≥ 1. This shows that Theorem 3.1 includes [7, Theorem 1] as a special case.
The preceding proposition shows that when (m, ℓ) = (1, 0), the iteration (3.1-3.2) reads This is a scaled variant of the popular Newton iteration [15,Equation 7.5] for the matrix p th root. The scaling heuristic above is reminiscent of one proposed by Hoskins and Walton [17], but theirs is based on type-(1, 0) rational minimax approximants of z (p−1)/p . On the other hand, when (m, ℓ) = (0, 1), the iteration (3.1-3.2) reads In terms of the matrix Z k = X −1 k , the iteration for X k becomes which is a scaled variant of the inverse Newton iteration [15, Equation (7.12)] for computing A −1/p .
Padé iterations.
We recover one more family of iterations by considering the limit as α ↑ 1 in (3.1-3.2).
Below, we say that a family of rational functions {r α ∈ R m,ℓ | α ∈ (0, 1)} converges coefficientwise to r 1 ∈ R m,ℓ as α ↑ 1 if the coefficients of the polynomials in the numerator and denominator of r α , appropriately normalized, approach those of r 1 as α ↑ 1.
It follows that the iteration (3.1-3.2) reduces formally to [12]. In terms of For later use, it will be convenient to definê 3.4. Stability of the coupled matrix iteration. As alluded to earlier, the uncoupled matrix iteration (3.1-3.2) exhibits numerical instability, whereas the coupled iteration (3.3-3.5) does not. We justify the latter claim below.
We recall the following definition. A matrix iteration X k+1 = g(X k ) with fixed point X * is said to be stable in a neighborhood of X * if the Fréchet derivative of g at X * has bounded powers at X * [15,Definition 4.17] We first address the stability of the coupled Padé iteration (3.21-3.22).
Consider now the coupled minimax iteration (3.3-3.5). Theorem 3.1 established that α k converges to 1 in (3.5). We argue in Section 5 that when α k is close to 1, it is numerically prudent to set α k (and all subsequent iterates) equal to 1, thereby reverting to the Padé iteration (3.21-3.22). Since the latter iteration is stable, it follows that the aforementioned modification of (3.3-3.5) is stable as well.
By the same reasoning as above, the function has the property that s m,ℓ (z, α ′ , f ) − 1 equioscillates m + ℓ + 2 times on [α ′ , 1] with extrema ±ε ′ , and it achieves its extrema at the endpoints by the assumption (3.i).
Rate of convergence.
It remains to show that the order of convergence of ε k to 0 is m + ℓ + 1. As we explained in the paragraph below Theorem 3.1, it suffices to note that when f is C m+ℓ+1 in a neighborhood of 1, Indeed, this, together with (3.11), gives assuming f −1 is Lipschitz near 1 and f −1 (1) = 1. Below, we give more precise information about the constant implicit in (4.5). We begin with a lemma that shows, in essence, that the uniform error in the best type-(m, ℓ) rational approximant of a function g(z) on a small interval [−δ, δ] is about 2 m+ℓ times smaller than the uniform error in the type-(m, ℓ) Padé approximant of g(z).
Remark 4.7. The near equioscillation of R in the proof above can be used to show that R is close to r δ : The argument is essentially the same as the one used in [29, p. 429-430] to show that Carathéodory-Féjer approximants are close to minimax approximants on small intervals.
It is now a simple matter to estimate the constant implicit in (4.5). As ε → 0, the above lemma gives and c f,δ is the Taylor coefficient of (z − 1 + δ) m+ℓ+1 in the difference between f (z) and its type-(m, ℓ) Padé approximant about z = 1 − δ. A short calculation shows that It follows that in the iteration (4.3), we have
Proof of Theorem 3.2.
Having proved Theorem 3.1, we now verify that the function f (z) = z 1/p satisfies the hypothesis (3.i), and we prove Theorem 3.2.
We begin by establishing a few properties of the minimax approximants r m,ℓ (z, α, p √ ·). The proof of the following lemma is similar to that in [27,Lemma 2], which studies rational functions of type (ℓ + 1, ℓ) that minimize the maximum absolute error on [0, 1] rather than the maximum relative error on [α, 1], α > 0. The proof makes use of the following terminology. A Chebyshev system of dimension N on an interval I ⊆ R is a linearly independent set {g j (z)} N j=1 of continuous functions on I with the property that any nontrivial linear combination N j=1 c j g j (z) has at most N −1 (distinct) roots in I. Proof. Suppose that r(z) = g(z)/h(z), where g(z) and h(z) are polynomials of exact degree m ′ ≤ m and ℓ ′ ≤ ℓ, respectively. Observe that the function belongs to the space W spanned by which is a Chebyshev system on [a, b] of dimension m ′ + ℓ ′ + 2. Thus, z 1/p h(z)e(z) has at most m ′ + ℓ ′ + 1 zeros on [a, b]. In particular, e(z) has at most m ′ + ℓ ′ + 1 zeros on [a, b], so it equioscillates at most m ′ + ℓ ′ + 2 times on [a, b]. But e(z) equioscillates From this we conclude that d = 0, m ′ = m, ℓ ′ = ℓ, and e(z) equioscillates exactly m + ℓ + 2 times on [a, b]. Let a ≤ z 0 < z 1 < · · · < z m+ℓ+1 ≤ b be the points at which e(z) achieves its extrema on [a, b]. Suppose that z 0 > a or z m+ℓ+1 < b. By considering the graph of e(z), one easily deduces that there exists c ∈ R such that e(z)− c has at least m+ ℓ + 2 roots in [a, b]. But z 1/p h(z)(e(z) − c) = z 1/p h(z)e(z) − cz 1/p h(z) ∈ W, so z 1/p h(z)(e(z) − c) has at most m ′ + ℓ ′ + 1 = m + ℓ + 1 roots in [a, b]. In particular, e(z) − c has at most m + ℓ + 1 roots in [a, b], a contradiction. It follows that z 0 = a and z m+ℓ+1 = b.
It remains to verify that the signs in (4.9-4.10) are correct. Consider the dependence of e(z) on the parameters a and b. Denote this dependence by e(z; a, b). By an argument similar to the one made in the proof of Lemma 4.4, the maps a → e(a; a, b) and b → e(a; a, b) are continuous on (0, b) and (a, ∞), respectively. These maps also have no zeros, since e(z; a, b) has a nonzero extremum at z = a for every 0 < a < b < ∞. Now, for small δ > 0, the proof of Lemma 4.5 shows that for where c f is the coefficient of (z − 1) m+ℓ+1 in the Taylor expansion of P m,ℓ,p (z) − z 1/p about z = 1. In particular, e(1 − δ; 1 − δ, 1 + δ) has the same sign as c f T m+ℓ+1 (−1) = (−1) m+ℓ+1 c f for δ close to 0, which, as we verify below in (4.12), is positive. By continuity, e(a; a, b) > 0 for every 0 < a < b < ∞, and (4.9-4.10) follow.
The preceding lemma shows that the function f (z) = z 1/p satisfies the hypothesis (3.i), so Theorem 3.2 will follow if we can show that the constant C(m, ℓ, p) in the estimate (3.12) is given by (3.13). In view of the general estimate (4.8), it suffices to determine the coefficient c f of the leading-order term c f (z − 1) m+ℓ+1 in P m,ℓ,p (z) − z 1/p , where P m,ℓ,p (z) is the Padé approximant (3.19) of z 1/p about z = 1. This is given by [10,Lemma 3.12] (4.11) Inserting this into (4.8) and noting that f ′ (1) = 1 p and (4.12) we obtain (3.13).
Proof of Proposition 3.6. Trefethen and Gutknecht
Note that a more robust option for computing minimizers of the maximum absolute error |r(z) − f (z)| is the Chebfun function minimax [6]. However, Chebfun currently does not support minimization of the maximum relative error |(r(z) − f (z))/f (z)|.
Algorithm 5.1 summarizes the implementation of the rational minimax iteration (3.3-3.5). For simplicity, it focuses on the type (m, m) iteration. The type (m, ℓ) iteration with ℓ = m is similar, but the form of the partial fraction expansion of h ℓ,m,p (z, α) varies with ℓ. In the algorithm, the eigenvalues of A with the smallest and largest magnitudes are denoted λ min (A) and λ max (A), respectively. Compute h m,m,p (z, α k ) and its partial fraction expansion h m,m,p (z, α k ) = a 0 + m j=1 a j z + b j . 8: The choices of α 0 and τ used in the algorithm are motivated by Corollary 3.4: they ensure that the spectrum of A/τ is contained in the annulus {z ∈ C | α p 0 ≤ |z| ≤ 1}. In particular, if A is Hermitian positive definite, then the spectrum of A/τ is contained in [α p 0 , 1], and Corollary 3.4 is directly applicable. Neither λ min (A) nor λ max (A) need to be computed accurately; our experience suggests that estimates can be used without significantly degrading the algorithm's performance.
where ∆ = 10 −15 is a relative error tolerance. This is a generalization to arbitrary p of the termination criterion described in [7,Section 4.3].
Floating point operations. If A is n×n and (a 0 I +W ) p−1 is computed with binary powering in Line 9 of Algorithm 5.1, then the cost of each iteration in Algorithm 5.1 is about (6 + 2m + β log 2 (p − 1))n 3 flops, where β ∈ [1, 2] [15, p. 72]. In the first iteration, the cost reduces to (2 + 2m + β log 2 (p − 1))n 3 flops since Z 0 = I. If parallelism is exploited, then the m matrix inversions in Line 8 can be performed simultaneously, as can Lines 9-10. The effective cost of such a parallel implementation is (4 + β log 2 (p − 1))n 3 flops in the first iteration and (6 + β log 2 (p − 1))n 3 flops in each remaining iteration. Further savings in computational costs can be achieved when p = 2; see [7, Section 4.2] for details.
Scalar iteration.
Asymptotic convergence rates. To verify the asymptotic convergence rates predicted by Theorem 3.2, we computed ε k = 1−α k 1+α k , k = 1, 2, 3, for various choices of m, ℓ, p, and ε 0 . Table 5.1 reports the results for three such choices. (We selected values of m, ℓ, p, and ε 0 so that the asymptotic regime was reached before convergence to machine precision occurred.) The table demonstrates that the ratios ε k /ε m+ℓ+1 k−1 approach the constant C(m, ℓ, p) given by (3.13). Note that the entry in the row k = 3 of the last column should be ignored, since ε 3 is below machine precision in that instance.
Complex inputs. To study the behavior of the rational function f k (z) generated by the type-(m, ℓ) iteration (3.6-3.7), we numerically computed the sets for various choices of δ, α, m, ℓ, and p. The boundaries of these sets are plotted in Fig. 5.1. They are plotted in the (log 10 |z|, arg z) coordinate plane rather than the usual (Re z, Im z) coordinate plane to facilitate viewing. The shaded regions in the plots correspond to points z ∈ C for which lim k→∞ f k (z) = z 1/p . Numerical evidence indicates that at these points, lim k→∞ f k (z) ∈ {e 2πij/p z 1/p | j ∈ {1, 2, . . . , p − 1}}. Furthermore, the shaded regions have a fractal structure. Both of these phenomena are typical features of iterations for the p th root when p > 2 [5]. In each plot, one of the boundaries has been selected arbitrarily and labelled with its index k. Each unlabelled boundary has an index which differs by +1 from that of its nearest inner neighbor. Shaded regions correspond to points z for which lim k→∞ f k (z) = z 1/p . with eigenvalues in S(k), then the iteration (3.1-3.2) converges in at most k iterations with a relative tolerance δ in the 2-norm. As an example, the plot in row 3, column 2 of Fig where this time f k (z) is the rational function generated by (3.6-3.7) with the initial condition α 0 = α replaced by α 0 = 1. By Proposition 3.6, the sets T (k) characterize the convergence behavior of the Padé iteration (3.19) (and its coupled counterpart (3.21-3.22)) with the initial iterate scaled by 1/α p/2 . lie very near but not on the nonpositive real axis, a simple workaround is to compute A 1/2 using any algorithm for the matrix square root, and then compute ((A 1/2 ) 1/p ) 2 . One can also compute ((A 1/2 s ) 1/p ) 2 s with s > 1, as in [13,16], but the advantages of minimax approximation over Padé approximation become less pronounced as s increases, since A 1/2 s has eigenvalues clustered near 1 for large s.
Matrix iteration.
To test Algorithm 5.1, we applied it to a collection of matrices of size 10 × 10 from the Matrix Computation Toolbox [14]. We selected those 10×10 matrices in the toolbox with condition number ≤ u −1 (where u = 2 −53 denotes the unit roundoff) and with spectrum contained in the sector {z ∈ C : | arg z| < 0.9π}. We also included those matrices whose spectrum could be rotated into the aforementioned sector by multiplying A by a suitable scalar e iθ , θ ∈ [0, 2π]. A total of 41 matrices met these criteria. and (8,8), and the built-in Matlab function funm. The Padé iterations were implemented using Algorithm 5.1 with Lines 1-2 replaced by τ = 1/ |λ min (A)λ max (A)| and α 0 = 1. The results indicate that the algorithms under consideration behave in a forward stable way, with relative errors mostly lying within a small factor of uκ (p) (A). In Table 5.2, the number of iterations used by each iterative method on the 41 tests are recorded. In analogy with the results of [7], the rational minimax iterations very often converged more quickly than the Padé iterations on these tests.
6.
Conclusion. This paper has constructed and analyzed a family of iterations for computing the matrix p th root using rational minimax approximants of the function z 1/p . The output of each step k of the type-(m, ℓ) iteration is a rational function r of A with the property that the scalar function e(z) = (r(z)− z 1/p )/z 1/p equioscillates (m + ℓ + 1) k + 1 times on [α p , 1], where α ∈ (0, 1) is a parameter depending on A.
With the exception of the Zolotarev iterations (i.e. p = 2 and ℓ ∈ {m − 1, m}), this equioscillatory behavior does not render max α p ≤z≤1 |e(z)| minimal among all choices of r with the same numerator and denominator degree. Nevertheless, we have shown that many of the desirable features of the Zolotarev iterations carry over to the general setting. A key role in the analysis was played by the asymptotic behavior of rational minimax approximants on short intervals.
Several topics mentioned in this paper are worth pursuing in more detail. Remark 4.3 leads naturally to a family of rational minimax iterations for the matrix sector function sect p (A) = A(A p ) −1/p . As α ↑ 1, these iterations likely reduce to the Padé iterations for the sector function studied by Laszkiewicz and Ziętak [21,Section 5], so the results therein could inform an analysis of the convergence of the rational minimax iterations on matrices that are non-normal and/or have spectrum away from the positive real axis. Another topic of interest is computing the action of A 1/p on a vector b using rational minimax iterations. Li and Yang [22] address a similar task: computing the action of a spectral filter on b using Zolotarev iterations for sign(z). It my may be possible to construct a similar algorithm for computing A 1/p b. Finally, the functional iteration (3.6-3.7) is of interest in its own right, as it offers a method of rapidly generating rational approximants of z 1/p with small relative error, a tool that may have applications in, for instance, numerical conformal mapping [11]. | 2019-03-14T21:27:25.000Z | 2019-03-14T00:00:00.000 | {
"year": 2019,
"sha1": "be8f10e22318cef0c04feb928411ecd90397da77",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1903.06268",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "be8f10e22318cef0c04feb928411ecd90397da77",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
216618013 | pes2o/s2orc | v3-fos-license | MOLECULAR DETECTION OF PSEUDOMONAS AERUGINOSA ISOLATED FROM MINCED MEAT AND STUDIES THE PYOCYANIN EFFECTIVENESS ON PATHOGENIC BACTERIA
This study was aimed to collected Minced meat from the local markets in Baghdad governorate during 2018, and examined for the presence of Pseudomonas aeruginosa, in order to extract and purify pyocyanin and examined it as an antimicrobial activity against pathogenic bacteria in foods. Fifteen isolates were isolated from 50 samples and identified as P. aeruginosa using the API20E system and finally confirmed with PCR using 16SrRNA gene. Four tested media were used for the production of pigment after incubation within 72 h, One strain which given a vigorous pigmentation was chosen and extracted with chloroform and HCl then analyzed with Gas chromatography (GC-Mass) which showed a sharp peak at the time of acquisition of 27.13 minutes at the chromatographic analysis recognized with mass spectrometry as Hemipyocanin (alpha-hydroxy phenazine) which produced molecular ion with intensive peak at 205 m/z. Agar well diffusion technique was applied for estimating the antimicrobial activity of purified (pyocyanin) with variable concentrations (25, 50, 75 and 100 mg/ml) which monitored toward Gram-negative and Gram-positive bacteria that isolated of minced meat. Escherichia coli and staphylococcus aureus was the most affected with pyocyanin were followed by Serratia marcescens and Klebsiella sp. at the same level. While Enterobacter sp, Bacillus cereus, Proteus mirabilis, and Proteus vulgaris showed intermediate sensitivity, the Pseudomonas fluorescens was shown low sensitivity to pyocyanin.
INTRODUCTION
The shelf life of foods is identified being the period when the food quality remains satisfying within severe conditions of storage, distribution, and display. Spoilage is the method in which food has degenerated and turns into unacceptable for humans being or its quality is diminished turning food improper for selling or consumption (13). Several bacterial isolates which are particular as spoilage organisms (SSO) of meat, fish and poultry that can be identified through the ability for analyzing the nitrogenous components and generating the volatile compounds such as (ketones, esters, and aldehydes) that responsible for the flavor that will be formed at the point of spoilage. Some organisms primarily cause a change in sugars by oxidation and producing alkali and other organisms produce a fluorescent pigment (3). From the total of microflora, Pseudomonas spp. may represent the minority at the beginning of shelf life of the food then become dominant at the end. Phenazines are comprised the most significant extracellular pigments that produce from genus Pseudomonas, P. aeruginosa which is rod shape, aerobic and a Gram-negative opportunistic pathogen. Pseudomonas aeruginosa has a distinctive feature through synthesized of the blue-green, chloroform-soluble compound called pyocyanin (1-hydroxy-s-methelphenazine) (9). A number of virulence factors are secreted by P. aeruginosa which is considered the physiological and pathological effects of these bacteria. Of these virulence factors, Pyocyanin is phenazine oxidation pigment with lowmolecular-weight that produced by P. aeruginosa (14). The Pyocyanin production is regulated by sensing the quorum, which involves a cell-dependent synthesis of signaling molecules that modify the expression of virulence genes (19). In spite of the fact, that pseudomonad has repeatedly been described for its pathogenicity; the capability of these microorganisms to produce antimicrobial pigment has opened the opportunity to an application of this agent as a biological regulator (19). Pyocyanin has antimicrobial activity toward wide different microorganisms, which may assist P. aeruginosa through eliminating competing microorganisms; pyocyanin serve as an antimicrobial agent, selectively inhibitors for gram-positive and gram-negative bacteria rather than Pseudomonas spp. The redoxactive phenazine compound (Pyocyanin) which kills bacterial cells by the production of reactive oxygen intermediates. P. aeruginosa resists pyocyanin because of the limited redox cycling of this compound and that under conditions favoring pyocyanin production; catalase and superoxide dismutase activities are increased.Researchers created numeral and substantial modern antimicrobial agents within the latest thirty years; simultaneously the resistance of bacteria to the antimicrobial agents has more progressed. The aim of this study is to isolate various isolates of P. aeruginosa from minced meat with purifying and discriminate the pyocyanin pigment by conventional methods and study the pyocyanin properties as antimicrobial activity toward some pathogenic bacteria.
MATERIALS AND METHODS

Sampling
This survey was carried out during 2018; 50 fresh minced meat samples were randomly collected from Baghdad supermarkets, Iraq. The samples were stored in an ice box during transport to the laboratory of the Market Research and Consumer Protection Center, University of Baghdad, for examination.

Isolation and identification of P. aeruginosa and target bacteria

P. aeruginosa was isolated from the minced meat specimens using blood agar, nutrient agar and Pseudomonas cetrimide agar (Oxoid™), together with selective media for each target microorganism. First, twenty-five grams of fresh minced meat were homogenized in 225 ml of peptone water; the samples were then cultivated on the selective agar media by streaking and pour-plate techniques and incubated at 35 °C for 48 h (16).
The distinctive pigmentation was observed, and the physiological, microscopic and biochemical characters of the isolates were compared with the official descriptions given in Bergey's Manual of Determinative Bacteriology; isolates recognized as P. aeruginosa were then confirmed with the API 20E system.
Brain heart infusion agar slants were used to preserve the pure strains (16).

DNA extraction

Genomic DNA of P. aeruginosa was extracted for PCR amplification according to the instructions of the DNA extraction kit (G-spin™, iNtRON, Korea). Bacterial culture was transferred to a microcentrifuge tube and centrifuged at 13,000 rpm for one minute; lysozyme buffer was added to the tube and the lysozyme was completely dissolved by vortexing. After lysis was complete, the centrifugation was repeated twice and the pellet was washed with buffer; the extracted DNA was stored at 4 °C until use. The purified DNA was electrophoresed on a 1.0% agarose gel: five microlitres of DNA were combined with three µl of bromophenol blue loading dye, and photographs were taken under 350 nm UV light (Sambrook and Russell, 2001).
Detection of P. aeruginosa using 16S rRNA
Partial amplification of the 16S rRNA gene was conducted using the primer pair shown in Table 1, producing a 150 bp amplicon, in 25 µl reactions containing 100 pmol of each primer, master mix (PCR buffer, Taq polymerase, MgCl2 and dNTPs) and 100 ng of template DNA. The amplification conditions were: initial denaturation at 95 °C for 5 min, followed by 30 cycles of 95 °C for 30 s, 60 °C for 30 s and 72 °C for 45 s, with a final extension at 72 °C for 10 min. PCR amplicons were analyzed by 2% agarose gel electrophoresis, and photographs were taken under 350 nm UV transillumination (1). A Nanodrop 1000 was used to determine the purity of the DNA extracted from all 15 P. aeruginosa isolates, and the DNA concentration was measured at 260/280 nm (1).
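As a side note on the Nanodrop step just described, the short sketch below shows how an A260/A280 purity ratio and a dsDNA concentration are derived from raw absorbance readings. The 50 ng/µl-per-absorbance-unit factor is the standard dsDNA conversion, and the example readings are invented for illustration; none of this comes from the paper's own analysis.

```python
# A minimal sketch, not from the paper's pipeline: converting Nanodrop absorbance
# readings into the A260/A280 purity ratio and dsDNA concentration reported above.
# The 50 ng/ul conversion factor is the standard dsDNA assumption.

def dna_quality(a260: float, a280: float, dilution_factor: float = 1.0):
    """Return (A260/A280 purity ratio, dsDNA concentration in ng/ul)."""
    purity = a260 / a280                            # ~1.8 indicates pure dsDNA
    concentration = a260 * 50.0 * dilution_factor   # Beer-Lambert shortcut for dsDNA
    return purity, concentration

# Invented example readings: A260 = 0.12, A280 = 0.066 gives a ratio of ~1.82,
# within the 1.7-2.0 purity range reported for these isolates.
ratio, conc = dna_quality(0.12, 0.066)
print(f"A260/A280 = {ratio:.2f}, [dsDNA] = {conc:.1f} ng/ul")
```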
Table 1. Primers used for PCR detection of P. aeruginosa
Primer          Sequence (5'-3')            Amplicon size (bp)
Pseudomonas-F   CTACGGGAGGCAGCAGTGG         150
Pseudomonas-R   TCGGTAACGTCAAAACAGCAAAGT

Extraction, purification and characterization of the pigment produced by Pseudomonas isolates

P. aeruginosa isolates that gave vigorous pigmentation were selected and grown in Pseudomonas broth at 37 °C for 48 h for pigment production. The pigment-rich broth culture was then centrifuged (10,000 rpm for 15 min), and the supernatant was collected, filtered through a 0.45 µm membrane filter and used as the crude extract (7). Chloroform and HCl were used to extract the pigment from the crude extract: chloroform was combined with the broth culture at a proportion of 2:1. The mixture was agitated on a shaker for 2 min and then separated into two discrete layers, one being the pigment (a blue solvent layer) and the other the residual culture material. The blue layer was collected, and 0.1 N HCl (20% of the blue layer's volume) was added and vortexed, generating an upper pink acidified layer. The pink layer was then neutralized with Tris base, and the neutralized layer was re-extracted with chloroform. The entire procedure was repeated several times to obtain purified pigment (7).
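The solvent ratios in the extraction protocol above lend themselves to a small worked example. The sketch below computes the implied volumes; note that the stated 2:1 proportion is read here as culture:chloroform, which is an assumption since the original wording does not specify which component is the larger part, and the input volumes are arbitrary.

```python
# Illustrative helper (not part of the original protocol): computing the solvent
# volumes implied by the stated ratios. The 2:1 proportion is assumed to mean
# culture:chloroform, and the input volumes below are arbitrary examples.

def extraction_volumes(culture_ml: float, blue_layer_ml: float):
    chloroform_ml = culture_ml / 2.0   # assumed 2:1 culture-to-chloroform ratio
    hcl_ml = 0.20 * blue_layer_ml      # 0.1 N HCl at 20% of the blue layer volume
    return chloroform_ml, hcl_ml

chloroform, hcl = extraction_volumes(culture_ml=100.0, blue_layer_ml=40.0)
print(f"Chloroform: {chloroform} ml, 0.1 N HCl: {hcl} ml")  # 50.0 ml, 8.0 ml
```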
GC-MS chromatography of Pseudomonas aeruginosa pyocyanin
Pyocyanin was analyzed using a gas chromatography-mass spectrometry (GC-MS) system with an autosampler (PerkinElmer, USA), fitted with a carbowax capillary column (30 m × 0.25 mm ID, 0.25 µm film thickness; Intercut DB5MS, Japan). One µl of the extracted pyocyanin was injected by autosampler into the capillary column. Helium was used as the carrier gas. Injector and detector temperatures were set at 280 °C. The column temperature was programmed initially at 40 °C for 1 min and then increased at a rate of 5 °C per min to a final temperature of 290 °C. Pigments were separated at a constant pressure of 96.1 kPa with a column flow of 1.71 ml/min. Peaks were identified by comparing their mass spectra against the mass spectral database (7).
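For readers reconstructing the oven program above, the short sketch below computes its length and the oven temperature at the reported 27.13 min peak, using only the stated hold time, ramp rate and end temperature; it is an illustrative calculation, not part of the instrument's software.

```python
# Illustrative sketch of the GC oven program described above: hold at 40 C for
# 1 min, then ramp at 5 C/min to a final 290 C. It reports the program length
# and the oven temperature at the reported 27.13 min retention time.

INITIAL_C, HOLD_MIN, RAMP_C_PER_MIN, FINAL_C = 40.0, 1.0, 5.0, 290.0

def oven_temperature(t_min: float) -> float:
    """Oven temperature (C) at elapsed time t_min into the program."""
    if t_min <= HOLD_MIN:
        return INITIAL_C
    return min(INITIAL_C + RAMP_C_PER_MIN * (t_min - HOLD_MIN), FINAL_C)

ramp_minutes = (FINAL_C - INITIAL_C) / RAMP_C_PER_MIN          # 50 min of ramping
print(f"Program length: {HOLD_MIN + ramp_minutes:.0f} min")    # 51 min to reach 290 C
print(f"Oven at 27.13 min: {oven_temperature(27.13):.1f} C")   # ~170.7 C at elution
```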
Screening of P. aeruginosa pyocyanin for antimicrobial activity
The antimicrobial activity of pyocyanin toward each isolated bacterium was assessed using the well diffusion technique on Mueller-Hinton agar (MHA) under aerobic conditions. One hundred µl of bacterial suspension was poured onto the surface of the MHA, spread with an L-shaped glass rod, and left for 10 minutes to allow the bacteria to settle; 120 µl of purified pyocyanin at different concentrations (25, 50, 75 and 100 mg/ml) was then added to wells prepared in the same plate, and the plates were incubated at 37 °C for 24-48 h. The diameter of the inhibition zone around each well, representing the antimicrobial activity of pyocyanin, was measured (6).
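A minimal sketch of how the resulting zone-of-inhibition readings can be tabulated and ranked is shown below. All diameters are invented placeholders, not the values in Table 3, and only three organisms are listed for brevity.

```python
# Hypothetical sketch: tabulating inhibition-zone diameters (mm) from the well
# diffusion assay and ranking organisms by mean zone across the four pyocyanin
# concentrations. Every diameter below is an invented placeholder.

zones = {  # organism -> zone diameters (mm) at 25, 50, 75 and 100 mg/ml
    "Escherichia coli":        [14, 18, 21, 25],
    "Staphylococcus aureus":   [14, 17, 21, 24],
    "Pseudomonas fluorescens": [6, 7, 8, 9],
}

# Rank organisms from most to least sensitive by mean inhibition zone.
for organism, diam in sorted(zones.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{organism:<24} mean zone = {sum(diam) / len(diam):.1f} mm")
```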
Statistical analysis
The Statistical Analysis System (SAS) program (18) was used to analyze the effects of the different factors on the investigated parameters. The least significant difference (LSD) test was used to compare differences among the means of this investigation.
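The LSD criterion used above can be written out explicitly. The sketch below assumes a one-way ANOVA with equal group sizes, taking the mean square error and error degrees of freedom from the ANOVA table; the numbers passed in are illustrative, not from this study.

```python
# A minimal sketch of the least significant difference (LSD) test, assuming a
# one-way ANOVA with equal group sizes; MSE and error df come from the ANOVA
# table. The values below are illustrative placeholders.
from scipy import stats

def lsd(mse: float, df_error: int, n_per_group: int, alpha: float = 0.05) -> float:
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)   # two-tailed critical t
    return t_crit * (2 * mse / n_per_group) ** 0.5

# Two group means differ significantly if |mean_i - mean_j| exceeds this threshold.
print(f"LSD(0.05) = {lsd(mse=4.2, df_error=36, n_per_group=3):.2f} mm")
```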
RESULTS AND DISCUSSION

Isolation and identification of P. aeruginosa
Bacterial isolates from the minced meat samples were cultivated on blood agar and MacConkey agar; isolates showing hemolytic activity were selected and re-cultured on nutrient agar and selective agar. These isolates were identified morphologically and microscopically, and the results showed several isolates belonging to several genera, as shown in Table 2 (including 11 isolates of Serratia marcescens and 12 isolates of Enterobacter sp.), which were chosen as target bacteria. P. aeruginosa is an opportunistic human pathogen belonging to the bacterial family Pseudomonadaceae; it is common in the environment, in clean water, soil and contaminated food, and has also been widely isolated from fish, meat products and canned food (4, 5). Morphological and biochemical features confirmed that P. aeruginosa forms smooth, large, irregular colonies surrounded by bluish-green coloration, with a grape-like odor. All the isolates were aerobic, catalase positive and nitrate reduction positive, showed oxidative metabolism on Hugh-Leifson medium, and appeared Gram-negative on microscopic examination, with rod shapes and motility. The results of the biochemical characterization determined with the API 20E system for P. aeruginosa are shown in Figure 1; these results are consistent with those observed by (20), who identified P. aeruginosa isolates from food.
Figure 1. API 20E result of isolated P. aeruginosa

Molecular characterization
In the microbiology laboratory, P. aeruginosa is a very common isolate, and its identification by conventional biochemical tests, commercial kits or automated means can be a somewhat expensive process; moreover, 24 hours or more may be needed to carry out the identification. Identification of the isolates as P. aeruginosa was therefore confirmed by PCR with the specific 16S rRNA primers. The purity of the extracted DNA ranged from 1.7 to 2.0. The extracted DNA was visualized under 350 nm UV light after electrophoresis on a 1% agarose gel at 70 volts for 30 min. The PCR products of the isolates were detected on a 2.0% agarose gel, stained with RedSafe stain and electrophoresed at 70 volts for about 1:30 h; the 15 lanes in Figure 2 were captured with a 350 nm ultraviolet (UV) transilluminator, showing a band size of 150 bp alongside a 100 bp DNA ladder, a result reported previously by (12) and confirmed by (11).

Figure 2. PCR amplicons of the 16S rRNA gene for P. aeruginosa isolates (1-15), visualized under 350 nm UV light.

Differentiation of the 16S rRNA gene permits comparison between bacterial organisms at the genus level, as well as classification of isolates at multiple levels. A comparable 16S rRNA gene sequence result was noted by (15), who analyzed 5 Pseudomonas isolates with 99% nucleotide sequence similarity to P. aeruginosa, despite considerable variation in pyocyanin production.
Production of pyocyanin
During growth of P. aeruginosa on the four tested media (blood agar, nutrient agar, Mueller-Hinton agar and MacConkey agar), it was found that various nutritional media can be utilized by P. aeruginosa for the biosynthesis of pyocyanin. In this investigation, pigment production began during the first 24 h of growth, and maximal pigment production was reached after 48 h, although isolate No. 4 achieved the highest yield after 72 h. Among the examined strains, 4 of the 15 (26.6%) had the ability to produce pigment vigorously within 48 h of incubation, as shown in Figure 3.
Figure 3. Growth of P. aeruginosa on the tested media, with pigment production
The pyocyanin generated by P. aeruginosa is used in many clinical microbiology laboratories as an adjunct test within the battery of procedures adopted for the identification of P. aeruginosa. In previous research, pyocyanin production and catalase activity were enhanced when P. aeruginosa was grown in low- and high-phosphate succinate media under phosphate-limited conditions (10).
Extraction and chemical analysis of pigment
In the current investigation, chloroform was the solvent added to separate pyocyanin from the culture supernatants. The chloroform-extracted pyocyanin layer changed in color from bluish to pinkish red upon acidification with 0.1 N HCl, indicating the presence of pyocyanin pigment.
Gas chromatographic analysis of the chloroform extract of P. aeruginosa revealed a sharp peak at a retention time of 27.13 minutes, recognized as hemipyocyanin (alpha-hydroxyphenazine) through mass spectrum analysis, which provided an intense molecular ion peak at 205 m/z; its structure is presented in Figure 4.

Figure 4. Mass spectrum analysis of pyocyanin.

GC-MS of pyocyanin in the current investigation revealed the presence of phenazine and the hemipyocyanin compound. A prior GC-MS analysis by (14) supports this result, having identified the related hemipyocyanin pigment extracted from P. aeruginosa by mass spectrometry following gas chromatography, with an ion peak at 211 m/z against an estimated 211.09 for C13H11N2O. The result is also consonant with the previous studies of (2), who demonstrated a molecular ion of the protonated purified pyocyanin compound at m/z 196.

Antimicrobial activity against the target bacteria

The antimicrobial activity of purified pyocyanin at different concentrations (25, 50, 75 and 100 mg/ml) was examined against the Gram-negative and Gram-positive bacteria isolated from minced meat. One strain was chosen for the production of pyocyanin, which was evaluated for its antibacterial activity by the agar well diffusion technique. Of the various pigment concentrations used, 25 mg/ml showed the least activity, with a moderate inhibition zone on the agar plate. The intermediate concentrations of 50-75 mg/ml showed significant activity, with larger zones of inhibition, while 100 mg/ml, the highest concentration and purity, recorded the largest inhibition zone compared with 25, 50 and 75 mg/ml; the results are presented in Table 3 and Figure 5. The bacteria most affected by pyocyanin were E. coli and Staphylococcus aureus, followed by Serratia marcescens and Klebsiella sp. at the same level. Enterobacter sp., Bacillus cereus, Proteus mirabilis and Proteus vulgaris showed intermediate sensitivity, while Pseudomonas fluorescens showed low sensitivity to pyocyanin. These conclusions are in accordance with (10), who noted that the phenazine compound has antimicrobial activity toward Bacillus subtilis strains and Escherichia coli. There was considerable variation in the resistance of the different bacterial strains exposed to pyocyanin; this variation may be attributed to the lipid content of the cell walls of Gram-negative and Gram-positive bacteria, which may account for the differences in sensitivity to the pyocyanin antibiotic. Increasing the pyocyanin concentration from 50 mg/ml to 100 mg/ml improved and enhanced the antimicrobial activity; the antibiotic activity of pyocyanin is therefore concentration dependent. Pyocyanin undergoes redox cycling and enhances intracellular oxidant stress under aerobic conditions. This leads to the generation of reactive oxygen species (ROS) such as hydrogen peroxide and superoxide, and these ROS compounds are able to inhibit the growth of microorganisms (8).

Figure 5. Antimicrobial activity of purified pyocyanin against target bacteria.

The current investigation concluded that the pyocyanin extracted from P. aeruginosa isolated from minced meat was hemipyocyanin and has an antimicrobial function as a competitive agent against infectious and pathogenic bacteria that contaminate food. The presence of other bacteria could serve as a signal alerting P. aeruginosa, and the consequent increase in pyocyanin production would help P. aeruginosa to compete with these microbes and protect the food from contamination with pathogenic bacteria. | 2020-04-09T09:21:57.628Z | 2019-08-30T00:00:00.000 | {
"year": 2019,
"sha1": "3330bdf63d1607e49681b339f77951775cdc8817",
"oa_license": "CCBY",
"oa_url": "https://jcoagri.uobaghdad.edu.iq/index.php/intro/article/download/764/604",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d42e2f827c50f5eec6fc980502feec27b765b2db",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
208234295 | pes2o/s2orc | v3-fos-license | An EPAC1/PDE1C-Signaling Axis Regulates Formation of Leading-Edge Protrusion in Polarized Human Arterial Vascular Smooth Muscle Cells
Pharmacological activation of protein kinase A (PKA) reduces migration of arterial smooth muscle cells (ASMCs), including those isolated from human arteries (HASMCs). However, when individual migration-associated cellular events, including the polarization of cells in the direction of movement or rearrangements of the actin cytoskeleton, are studied in isolation, these individual events can be either promoted or inhibited in response to PKA activation. While pharmacological inhibition or deficiency of exchange protein activated by cAMP-1 (EPAC1) reduces the overall migration of ASMCs, the impact of EPAC1 inhibition or deficiency, or of its activation, on individual migration-related events has not been investigated. Herein, we report that EPAC1 facilitates the formation of leading-edge protrusions (LEPs) in HASMCs, a critical early event in the cell polarization that underpins their migration. Thus, RNAi-mediated silencing, or the selective pharmacological inhibition, of EPAC1 decreased the formation of LEPs by these cells. Furthermore, we show that the ability of EPAC1 to promote LEP formation by migrating HASMCs is regulated by a phosphodiesterase 1C (PDE1C)-regulated “pool” of intracellular HASMC cAMP but not by those regulated by the more abundant PDE3 or PDE4 activities. Overall, our data are consistent with a role for EPAC1 in regulating the formation of LEPs by polarized HASMCs and show that PDE1C-mediated cAMP hydrolysis controls this localized event.
Introduction
Agents that increase cyclic AMP (cAMP) signaling largely inhibit migration of arterial smooth muscle cells (ASMCs), including those ASMCs isolated from human arteries (HASMCs). For instance, studies have shown that the pan-cellular increases in cAMP caused by agents that activate all transmembrane adenylyl cyclases, such as forskolin, or that inhibit all cellular cAMP-hydrolyzing phosphodiesterases (PDEs), like isobutyl-methyl-xanthine (IBMX), consistently reduce ASMC migration [1]. For these reasons, cAMP-elevating agents have long been seen as attractive agents through which to reduce ASMC migration in several conditions, including in-stent restenosis, where PDE4 inhibition reduces neointima formation and inhibits vascular cell adhesion molecule 1 (VCAM-1) expression and histone methylation in an exchange protein activated by cAMP (EPAC)-dependent manner [2,3]. Interestingly, notwithstanding the observation that increased cAMP signaling results in reduced levels of ASMC migration, when the numerous steps involved in coordinating cellular migration are studied individually, it is found that they are equally likely to be inhibited or promoted [1,4-6].
Cell Culture and siRNA Transient Transfections
Human arterial smooth muscle cells (HASMCs) were isolated from discarded, unused portions of the internal thoracic artery in coronary artery bypass graft surgeries, as described previously [16], from donor patients at Kingston General Hospital (KGH); additional HASMCs were purchased from Cell Applications. For tissues obtained from KGH, their use in this research study (SURG-334-15; "Endothelial cell function in human hearts") was approved by the Queen's University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board (HSREB). HASMCs were cultured in smooth muscle basal medium (SMBM) with the smooth muscle growth medium bullet kit (SMGM-2) (Lonza), supplemented with 10% fetal bovine serum (FBS), cultured at 37 °C in 5% CO2, and used between passages 4-9. For siRNA transfection, HASMCs were cultured in basal SMBM containing Lipofectamine 3000 (Invitrogen) and siRNA (Sigma) in a 1:1 ratio, and the medium was changed to SMGM-2 5 h post transfection. Experiments were conducted 48 h post transfection. The siRNA sequences used are listed in Table 1. All siRNAs were purchased from Invitrogen.
Chemotactic Leading Edge Protrusion (LEP) Assay
HASMCs resuspended in SMBM basal media were plated as a monolayer on the upper surface of gelatin-coated (ddH2O supplemented with 0.25% gelatin (Biorad)), 3-µm pore, 24-mm-diameter BD Falcon Corning® FluoroBlok™ cell culture inserts, as described previously [9,17]. Chemotaxis was initiated by adding 0.5% FBS in SMBM media to the underside of the inserts to allow cells to form leading edge protrusions (LEPs) for 4 h. Pharmacological activators or inhibitors were added to the top of the insert prior to the addition of FBS to the underside of the inserts. The following drugs were used: CE3F4 (Tocris), 8-CPT-2′-O-Me-cAMP (Biolog), Compound 33 ((C33), a generous gift from Dr Guy Breitenbucher; Dart Neurosciences), PF-04827736 (Sigma), cilostamide (Calbiochem) and Ro 20-1724 (Calbiochem). To visualize the extent of LEPs, inserts were fixed with paraformaldehyde (4% (v/v)), rinsed with Hank's Balanced Salt Solution (HBSS), and incubated for 1 h with phalloidin-tetramethylrhodamine B isothiocyanate (1:1000; Sigma) and DAPI (1:1000; Thermofisher) in 0.3% bovine serum albumin (BSA) diluted in HBSS. Inserts were mounted on glass slides, and the extent of LEPs was measured by quantifying the total fluorescence of phalloidin-TRITC on the bottom of the insert, as a measure of the total density of LEPs formed. In each case, 5 images were taken per transwell, covering all 4 quadrants as well as the center of the transwell. In experiments in which we controlled for the number of cells applied to the top of the transwell, this was measured by counting the number of nuclei on the top of the transwell in a similarly unbiased and representative sampling of this structure. Images were captured with a Zeiss Axiovert S100 microscope and imaged with Slidebook software. LEP quantification was conducted by processing the images using Image Pro software, where the threshold tools were used to segment the LEPs, followed by counting the pixel density of the area occupied by the LEPs.
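The quantification step above was performed in Image Pro; the sketch below is a rough open-source analogue using scikit-image, with Otsu's method standing in for the manual threshold tool (an assumption, since the study's threshold settings are not specified) and a hypothetical file name.

```python
# An illustrative re-implementation (scikit-image), not the Image Pro workflow
# actually used in the study: threshold the TRITC (phalloidin) channel to segment
# protrusions, then report the pixel density of the segmented area.
from skimage import io, filters

def lep_pixel_density(tritc_image_path: str) -> float:
    img = io.imread(tritc_image_path, as_gray=True)
    threshold = filters.threshold_otsu(img)   # automatic global threshold
    mask = img > threshold                    # pixels assigned to LEPs
    return mask.sum() / mask.size             # fraction of the field covered by LEPs

# density = lep_pixel_density("transwell_bottom_field1.tif")  # hypothetical file
```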
RNA Isolation, Reverse Transcription, and qPCR
HASMC RNA was isolated using the Qiagen RNeasy mini kit (Qiagen) as per the manufacturer's instructions, followed by measurement of RNA purity and concentration using a Nanodrop 1000 (Thermo Scientific). cDNA was synthesized using a Qiagen Omniscript RT kit, according to the manufacturer's instructions. qPCR reactions were performed using PowerUp™ SYBR™ Green Master Mix (Thermo Fisher Scientific) with 2 ng cDNA template and the primers listed in Table 2. Thermocycler conditions on the QuantStudio 5 Real-Time PCR System were the following. PCR stage: Step 1, 95 °C, 15 min; Step 2, 60 °C, 1 min, repeated 40×. Melt curve stage: Step 1, 95 °C, 15 min; Step 2, 60 °C, 1 min; Step 3, dissociation, 95 °C, 1 s.
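The section above gives the qPCR run conditions but not the quantification formula; the standard delta-delta-Ct calculation sketched below is therefore an assumption about the downstream analysis, and the Ct values are invented placeholders.

```python
# Hedged sketch of relative expression via the standard delta-delta-Ct method.
# The paper does not state its quantification formula, so this is an assumption;
# all Ct values below are invented placeholders.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. the control condition, normalized to a reference gene."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# e.g. PDE1C in siPDE1C vs. siCtrl cells, normalized to a housekeeping gene:
fold = relative_expression(ct_target=27.9, ct_ref=18.1,
                           ct_target_ctrl=25.2, ct_ref_ctrl=18.0)
print(f"Fold change = {fold:.2f}")   # ~0.16, i.e. ~84% knockdown
```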
Statistical Analysis
All data presented were analyzed using GraphPad Prism software. Data in this study were collected from at least three independent experiments unless otherwise stated and are presented as means ± SEM. Statistical comparisons between two groups were made using an unpaired, two-tailed Student's t-test, and multiple comparisons were assessed using a one- or two-way analysis of variance (ANOVA), followed by the appropriate post-hoc test as indicated in the figure captions. A p value < 0.05 was considered significant.
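The same two comparisons can be reproduced outside GraphPad Prism; the sketch below uses SciPy and statsmodels, with invented placeholder data, to run the unpaired two-tailed t-test and a one-way ANOVA with Tukey's post-hoc test.

```python
# Minimal sketch of the statistical tests described above, using SciPy and
# statsmodels in place of GraphPad Prism; all data arrays are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

siCtrl  = np.array([1.00, 0.95, 1.05])   # normalized LEP densities (placeholders)
siEPAC1 = np.array([0.42, 0.38, 0.45])
siPDE1C = np.array([1.55, 1.62, 1.48])

# Two-group comparison: unpaired, two-tailed Student's t-test.
t, p = stats.ttest_ind(siCtrl, siEPAC1)
print(f"t = {t:.2f}, p = {p:.4f}")

# Multiple groups: one-way ANOVA, then Tukey's post-hoc test.
f, p_anova = stats.f_oneway(siCtrl, siEPAC1, siPDE1C)
print(f"F = {f:.1f}, p = {p_anova:.4f}")

values = np.concatenate([siCtrl, siEPAC1, siPDE1C])
groups = ["siCtrl"] * 3 + ["siEPAC1"] * 3 + ["siPDE1C"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```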
Pharmacological Inhibition, or RNAi-Mediated Silencing, of EPAC1 Reduces Formation of Leading-Edge Protrusions (LEPs) in HASMCs
Using a combination of approaches, we assessed the role of EPAC1, the sole EPAC expressed in HASMCs [18], in coordinating the ability of these cells to generate polarized LEPs in response to a chemotactic gradient. Thus, treating HASMCs with an EPAC1-silencing siRNA decreased EPAC1 expression (Figure 1A) and antagonized the ability of these cells to generate LEPs (Figure 1B,C). Similarly, inhibiting EPAC1 pharmacologically with a selective EPAC1 inhibitor, CE3F4 (20 µM) [19,20], also markedly reduced the ability of HASMCs to generate LEPs in response to an FBS gradient (Figure 1D). In contrast, but consistent with the idea that EPAC1 is effectively activated in migrating HASMCs, the addition of the EPAC1-activating cAMP analogue 8-CPT-2′-O-Me-cAMP (100 µM) [19] did not significantly alter the number of LEPs formed by these cells in our experiments (Figure 1E).
Selective Pharmacological Inhibition of HASMC PDEs Differentially Impacts Their Capacity to Generate LEPs
While previous studies have shown that pharmacological inhibition of the dominant HASMC cAMP PDEs, namely PDE1, PDE3, or PDE4, like PKA activation, reduced their migratory capacity [1,21], we hypothesized that selective pharmacological inhibition of PDE1, PDE3, or PDE4 might differentially impact the ability of these cells to form LEPs. Interestingly, while selective inhibition of HASMC PDE3 activity with cilostamide (5 µM) [1,22] reduced LEP formation in HASMCs, PDE4 inhibition with Ro 20-1724 (10 µM) did not (Table 3). Unexpectedly, pharmacological inhibition of PDE1 activity (C33, 1 µM) [23,24] in these cells markedly promoted the formation of LEPs in our experiments (Table 3). Although HASMCs have been reported by us and others to express both PDE1A and PDE1C gene-encoded enzymes, since PDE1C preferentially hydrolyzes cAMP compared to PDE1A, we next investigated the possibility that PDE1 inhibitors acted by inhibiting cAMP hydrolysis by PDE1C. Consistent with this hypothesis, silencing PDE1C (Figure 2A) increased the formation of LEPs (Figure 2B,C) and obviated the LEP-promoting effects of the PDE1 inhibitor, C33 (Table 4).

Figure 1. (A) Detection of EPAC1 by immunoblotting of samples obtained from a representative experiment in which HASMCs were treated with siCtrl or siEPAC1 for 48 h is shown. Quantitating levels of EPAC1 in similar samples obtained from n = 3 independent experiments showed that siEPAC1 transfection significantly reduced EPAC1 levels when normalized to tubulin, as assessed using the Student's unpaired t-test, **** p < 0.0001. (B) Representative images, obtained at either 10× or 40× magnification, of actin-stained LEPs detected on the lower levels (bottom) of FluoroBlok™ transwells (3-µm pores) following a 4 h exposure of HASMCs to an FBS gradient. Prior to exposure of these cells to the FBS gradient, the cells had been transfected either with a control siRNA (siCtrl) or an EPAC1-targeting siRNA (siEPAC1) for 48 h. Actin (red) and nuclei (blue) were visualized by incubating fixed cells with TRITC-conjugated phalloidin or DAPI, respectively (scaling bars, 50 µm). Note: Since the 3 µm pores precluded migration of HASMCs to the lower level of these transwells, no DAPI (blue) staining is present in these images. (C) Quantification of the LEPs formed by siCtrl or siEPAC1 transfected HASMCs following their exposure to the FBS gradient is shown. The statistically significant reduction in LEPs formed by siEPAC1 HASMCs compared to siCtrl HASMCs was determined by comparing results obtained in n = 3 independent experiments using the Student's unpaired t-test, **** p < 0.0001.
Figure 2. (A) The reductions in PDE1C protein and mRNA were both statistically significant, as assessed using the Student's unpaired t-test, **** p < 0.0001. (B) Representative images, obtained at either 10× or 40× magnification, of actin-stained LEPs detected on the lower levels (bottom) of FluoroBlok™ transwells (3-µm pores) following a 4 h exposure of HASMCs to an FBS gradient. Prior to exposure of these cells to the FBS gradient, the cells had been transfected either with a control siRNA (siCtrl) or a PDE1C-targeting siRNA (siPDE1C) for 48 h. Actin (red) and nuclei (blue) were visualized by incubating fixed cells with TRITC-conjugated phalloidin or DAPI, respectively (scaling bars, 50 µm). Note: Since the 3 µm pores precluded migration of HASMCs to the lower level of these transwells, no DAPI (blue) staining is present in these images. (C) Quantification of LEP formation in siCtrl or siPDE1C transfected HASMCs in response to exposure to an FBS gradient for 4 h in n = 3 independent experiments, assessed using the Student's unpaired t-test, **** p < 0.0001.

Table 3 footnote: Values are means of n = 3 independent experiments ± SEM. 1 p < 0.0001 compared with vehicle-treated HASMC LEP formation, as determined by a one-way analysis of variance (ANOVA) and Dunnett's multiple comparisons test.
Silencing HASMC EPAC1 Obviates PDE1 Inhibition-Directed LEP Formation
Since inhibition or silencing of the cAMP effector EPAC1 reduced LEP formation, while PDE1 inhibition or PDE1C silencing promoted the formation of these structures, we next investigated whether EPAC1 promoted LEP formation through a PDE1C-sensitive mechanism. To investigate this idea, we determined whether silencing EPAC1 would antagonize PDE1 inhibitor-mediated formation of LEPs in these cells. Thus, while PDE1 inhibition with either C33 (1 µM) or PF-04827736 (1 µM) [25] promoted LEP formation in control HASMCs, neither of these PDE1 inhibitors was able to rescue LEP formation in EPAC1-silenced cells (Figure 3A,B) or in cells in which EPAC1 was inhibited (Figure 3C). Also, while silencing PDE1C promoted LEP formation in control cells, this basal effect and the ability of EPAC1 inhibition to promote LEP formation were lost in PDE1C-silenced HASMCs (Figure 3D). To rule out the possibility that our results reflected reduced adhesion of HASMCs to the upper surface of the transwells upon PDE1 inhibition or EPAC1 silencing, or a loss of cells during the treatment periods, we counted the HASMC nuclei present on the upper surfaces of the transwells in which we detected changes in LEP numbers. As shown in Figure 3E-H, inhibition or silencing of either PDE1C or EPAC1 did not significantly impact the number of HASMCs on the upper surface of the FluoroBlok™ transwell in our studies.
Figure 3. (A) Representative images of actin-stained LEPs detected on the lower levels (bottom) of FluoroBlok™ transwells (3-µm pores) following a 4 h exposure of HASMCs to an FBS gradient. Prior to exposure of these cells to the FBS gradient, the cells had been transfected either with a control siRNA (siCtrl), an EPAC1-targeting siRNA (siEPAC1), or a PDE1C-targeting siRNA (siPDE1C) for 48 h. Actin (red) and nuclei (blue) were visualized by incubating fixed cells with TRITC-conjugated phalloidin or DAPI, respectively (scaling bars, 50 µm). Note: Since the 3 µm pores precluded migration of HASMCs to the lower level of these transwells, no DAPI (blue) staining is present in these images. (B) Quantification of the density of LEPs in siCtrl or siEPAC1 transfected cells treated as above. Data from n = 3 independent experiments were normalized to appropriate controls and significance was calculated using a two-way ANOVA and Tukey's post-hoc analysis, *** p < 0.001, **** p < 0.0001. (C) Quantification of LEP density in HASMCs treated with DMSO (0.1% v/v) or this same concentration of DMSO containing C-33 (1 µM) in the presence or absence of CE3F4 (20 µM). Data from n = 3 experiments were normalized to the vehicle DMSO and significance was determined with a two-way ANOVA and Tukey's post-hoc analysis, *** p < 0.001, **** p < 0.0001. (D) Quantification of LEP density in siCtrl or siPDE1C HASMCs treated with DMSO (0.1% v/v) in the presence or absence of CE3F4 (20 µM). Data from n = 3 experiments were normalized to siCtrl control values and significance was assessed using a two-way ANOVA and Tukey's post-hoc analysis.

Table 4 footnote: Values are means of n = 3 independent experiments ± SEM. 1 p < 0.001 compared with vehicle-treated HASMC LEP formation, as determined by a two-way ANOVA and Tukey's multiple comparisons test. 2 p > 0.05; means are not significantly different between siCtrl (C33) and si1C (DMSO), or between si1C in the presence or absence of C33.
Discussion
Herein we show that silencing or inhibiting HASMC EPAC1 decreased the ability of these cells to generate LEPs in response to a chemotactic gradient. In addition, we show that this EPAC1 dependence for the generation of these actin-based leading-edge structures is regulated selectively by a source of intracellular cAMP that is regulated by PDE1C activity, but not by PDE3 or PDE4 activities. These data add to our understanding of the known dichotomous actions of cAMP, and its effectors, PKA and EPAC1, in the control of HASMC migration-associated activities. Specifically, these data show that EPAC1, like PKA, positively influences the formation of HASMC LEPs in the presence of a gradient and identify a potentially important role for PDE1C in these effects.
Previous work has shown that modulating EPAC1 activity could either increase or decrease ASMC migration. Indeed, these earlier studies showed that EPAC1 activation with 8-CPT-2′-O-Me-cAMP promoted rat aortic SMC migration and facilitated ASMC accumulation into neointimal lesions formed following damage to murine femoral arteries [26]. Consistent with this, mice deficient in EPAC1 had reduced neointimal hyperplasia and ASMC migration under similar experimental conditions [26,27]. Interestingly, when SMCs isolated from human saphenous vein samples were used in experiments, EPAC1 activation negatively regulated their PDGF-induced migratory responses [28]. In this context, our work has begun to "unpack" these more global effects of EPAC1 in ASMCs and shows that inhibiting EPAC1 activity, or markedly reducing its expression, inhibited LEP formation by migrating HASMCs. Our findings under conditions in which we pharmacologically activated EPAC1 were consistent with the idea that subjecting HASMCs to a chemotactic gradient maximally activated EPAC1. Of course, further studies will be required to determine if this is the case, and more importantly, whether subjecting these cells to an FBS gradient selectively activates the fraction of EPAC1 localized at the leading edge of these cells. In addition, methodological differences between our study and others may also account for the differences which we observed. For instance, it was recently reported that EPAC1 could regulate SMC migration in a time- and concentration-dependent manner. Thus, it was reported that while high concentrations of the EPAC1 activator 8-CPT-2′-O-Me-cAMP (30-50 µM) significantly increased SMC migration compared to control untreated SMCs for up to 6 h, these effects were lost and replaced with inhibitory effects at longer time points [29].
With regards to the mechanism by which PDE1C regulates LEP formation in HASMCs, our data suggest that PDE1C regulates a distinct pool of cAMP from those regulated by PDE3 or PDE4. While some of our earlier and ongoing studies have shown that PDE1C is likely important in regulating the ability of PKA to impact LEP formation via effects on the store-operated calcium entry (SOCE) system in HASMCs [30], significant further work will be required in order to determine how PDE1C regulates EPAC1-mediated effects in these cells. In this context, previous work by others showed that the impact of SOCE in mediating cellular migration is influenced by the relative adhesive strength properties of the matrix and the cells [31]. Thus, it may be that PDE1C and EPAC1 will interact to regulate LEP formation differently when different matrix proteins are used. Indeed, EPAC1 may facilitate SMC LEP formation by interacting with different integrins depending on the ECM protein tested [32]. In addition, it is likely that EPAC1 may impact SMC LEP formation by interacting with agents that control microtubule stability, since EPAC1 has been shown to regulate microtubule elongation [33]. For example, the low molecular weight GTPase Rap1, an EPAC1 effector, was shown to be activated at the leading edge of migrating vascular endothelial cells, and this was shown to accompany microtubule extension [34]. Furthermore, the importance of EPAC1 in regulating cell migration and microtubule stability was also previously reported, when inhibition of EPAC1 disrupted microtubule organization [35]. Another recent study identified the importance of graded cAMP signaling in mediating axonal guidance by promoting microtubule growth and membrane protrusion [36]. Therefore, future studies will be required to determine the impact of EPAC1 on microtubule stability in mediating ASMC protrusion and migration, and perhaps in other systems as well. Given the importance of Ca2+ signaling in guiding directed cell migration in processes such as mesodermal sheet migration and gastrulation, axonal growth cone steering in developing neurons, and metastasis [36-38], the PDE1C/EPAC1 axis supports a connection by which Ca2+ and cAMP signaling systems may interact with one another locally to guide these physiological and pathological processes. In the context of HASMC LEP formation, PDE1C is known to be induced in migratory and proliferative vascular SMCs [21,39,40]; thus this signaling axis provides a potential molecular target to mitigate vascular diseases in which SMC migration is dysregulated, such as atherosclerosis and restenosis. | 2019-11-22T00:56:10.452Z | 2019-11-20T00:00:00.000 | {
"year": 2019,
"sha1": "866ef739162698311ec0cdb7f1e791c875fbaed7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/cells8121473",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb21267e227d3dfc1ddeb7403b00c1de588d748d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
11424873 | pes2o/s2orc | v3-fos-license | Tissue engineering and the future of hip cartilage, labrum and ligamentum teres
As the field of hip arthroscopy continues to evolve, the biological understanding of orthopaedic tissues, namely articular cartilage, labral fibro-cartilage and the ligamentum teres, continues to expand. Similarly, the need for biological solutions for the pre-arthritic and early arthritic hip continues to be a challenge for the sports medicine surgeon and hip arthroscopist. This article outlines existing biological and tissue-engineering technologies, some already used in clinical practice and others still in development, and how these biological and tissue-engineering principles may one day influence the practice of hip arthroscopy. This review of hip literature is specific to emerging biological technologies for the treatment of chondral defects, labral tears and ligamentum teres deficiency. Of note, not all of the technologies described in this article have been approved by the United States Food and Drug Administration, and some of the described uses of the approved technologies should be considered 'off-label' uses.
INTRODUCTION
Hip arthroscopy has evolved significantly since Burman's [1] first 1931 report of the arthroscopic appearance of intra-articular structures. As hip arthroscopy has rapidly evolved, the specific instrumentation for performing the operations as well as the indications for surgery has expanded [2]. As the field of hip arthroscopy continues to evolve, the biological understanding of orthopaedic tissues, namely articular cartilage, labral fibro-cartilage and the ligamentum teres, continues to expand. Similarly, the need for biological solutions for the pre-arthritic and early arthritic hip continues to be a challenge for the sports medicine surgeon and hip arthroscopist.
This article will briefly review the native hip anatomy and common pathology encountered in patients undergoing hip arthroscopy. The focus will then shift to outlining existing biological and tissue-engineering technologies, some already used in clinical practice and others still in development, and how these biological and tissue-engineering principles may one day influence the practice of hip arthroscopy. This article is forward-thinking and certainly not all-inclusive, as many of the technologies described have not been specifically tested in the hip joint; however, after reviewing the article, the reader will have a better understanding of recent tissue-engineering and biological technologies that may influence clinical practice in the years to come. We conclude this article by briefly mentioning some of the techniques used at our institute's regenerative medicine laboratory; although not all of the techniques described are being investigated for orthopaedic purposes, we hope that mention of these technologies will prompt future orthopaedic and musculoskeletal investigation. Of note, some of the described uses of the approved technologies should be considered 'off-label' uses, as not all of the technologies described in this article have been approved by the United States Food and Drug Administration to be used as described.
NATIVE HIP ANATOMY AND PATHOLOGY
The hip is a weight-bearing ball and socket joint that is deeply seated and congruent, with a great deal of anatomical constraint, especially when compared with the shoulder joint. Two main forms of cartilage exist in the hip joint: articular hyaline cartilage, derived from innominate fusion of osteochondral complexes, and labral fibrocartilage [3-5]. The hip joint also contains the ligamentum teres, which has an embryological and early developmental role in hip joint formation, yet its role in pathology remains a matter of some debate. The ligamentum teres may not simply represent vestigial anatomy, with some recent literature suggesting a more structural role [6-8]. The hip joint derives its vascularity from the innominate fusion, and much of the innervation of the joint is shared with surrounding layers and muscle-tendon units [5].
The challenge for the hip arthroscopist is to address various lesions of the articular cartilage, fibrocartilagenous labrum and the ligamentum teres in a minimally invasive manner. The pre-arthritic and early arthritic hip can be particularly challenging surgically because of its deep anatomic location, and the relatively high physiological loads, forces and stresses seen by the joint.
HIP ARTICULAR CARTILAGE
For the hip arthroscopist, articular cartilage defects can be addressed in the pre-arthritic or early arthritic stage. The gold standard for end-stage hip arthritis is total hip arthroplasty, but there are many patients who have articular cartilage wear that may not be significant enough to warrant total joint arthroplasty [9-11]. To date, several articular cartilage strategies have been employed to help restore focal and larger cartilage defects in the active patient. Some of these articular cartilage strategies include autologous chondrocyte implantation, microfracture, composite grafting [e.g. synthetic TruFit bone graft substitute (Smith & Nephew Inc., Andover, MA, USA)], osteochondral autograft transfer system (Arthrex Inc., Naples, FL, USA) and fresh frozen allograft [12-20].
When examining cartilage, the 'gold standard' is hyaline cartilage, the native cartilage of the hip joint, composed mainly of type II collagen with layers of functional cells and extracellular matrix [12,21,22]. Marrow stimulation, or the microfracture technique, has been utilized for hip articular cartilage restoration, with the goal of stimulating pluripotent cells of the inner pelvic table and proximal femur to restore focal cartilage defects [15-18, 21, 22]. The microfractured cartilage that replaces the injured region of articular cartilage does share some characteristics of hyaline cartilage; however, there are key matrix components, including aggrecan, that are not expressed optimally in the regenerated cartilage [21].
There are numerous reports in the literature that demonstrate favourable clinical outcomes when performing microfracture for the articular surface of the hip joint [15-18]. Philippon et al. [15] examined the percentage of fill of articular cartilage defects on the acetabular side of the hip joint after microfracture surgery, as evaluated by second-look revision hip arthroscopy. They reported excellent results of 95-100% coverage of the isolated acetabular chondral lesions at an average of 20 months follow-up for eight of their nine patients [15]. Similarly, Karthikeyan et al. [16] demonstrated adequate macroscopic fill of acetabular articular defects with associated femoroacetabular impingement on second-look arthroscopy after microfracture had been performed an average of 17 months prior. In addition, microscopic histological evaluation of the tissue demonstrated fibrocartilage in the region of the microfracture. In all, 19 of the 20 patients in the series demonstrated a mean fill of 96%. Domb et al. [17] studied patient-reported outcome measures and demonstrated significant clinical improvement after microfracture was performed in their patient cohort at 2-year follow-up. The outcome measures assessed were the modified Harris Hip Score, the Non-Arthritic Hip Score (NAHS), the Hip Outcome Score-Activities of Daily Living, and the Hip Outcome Score-Sport Specific Subscale. These scores improved in their cohort; interestingly, the improvement occurred for both a workers' compensation and a non-workers' compensation cohort at 2-year follow-up when compared with pre-operative scores [17]. In elite athletes who underwent hip arthroscopy with microfracture compared with elite athletes who underwent hip arthroscopy without microfracture, McDonald et al. [18] demonstrated that the additional procedure of microfracture surgery did not preclude the athlete from returning to a high level of competition.
Other, more isolated case series have demonstrated some early success with mosaicplasty and autologous chondrocyte transplantation for the treatment of hip articular defects [19,20]. Hart et al. [19] demonstrated in their case report the feasibility of mosaicplasty of femoral head cartilage defects with autologous grafting from non-weight-bearing portions of the knee. The authors of this report state that their experience with mosaicplasty techniques in the hip is limited but that, theoretically, in certain clinical situations, the technique may benefit the patient with a cartilage defect. Fontana et al. [20] compared the results of autologous chondrocyte transplantation (n = 15 patients) versus simple debridement (n = 15 patients) for hip acetabular chondral defects of Outerbridge grade three or four with an area greater than 2 cm2. They demonstrated an improved Harris Hip Score at a mean of 74 months after the procedure in the autologous chondrocyte transplantation group when compared with the simple debridement group. As a whole, early isolated case series and reports have demonstrated feasibility and some favourable outcomes for the arthroscopic treatment of hip chondral lesions [15-22].
HIP LABRUM
The hip labrum is a unique fibrocartilaginous biological structure. It has multiple purposes, including increasing joint stability, congruity and the overall depth of the ball-and-socket joint, allowing for a biological seal that is thought to be protective of the hip articular cartilage [5,23,24]. In some instances of injury, primary repair of the hip labrum with suture anchor constructs is sufficient to reconstitute the mechanical function of the labrum. In circumstances of more severe injury, hip labral reconstruction must be undertaken in order to reconstitute the labrum's function [25-29]. Graft choices reported in the literature include both allograft and autograft from gracilis/hamstring tendon, quadriceps tendon, iliotibial band, ligamentum teres or tensor fascia lata [25-29].
Costa Rocha et al. [26] reported a case series of four patients, followed for 2 years, who underwent labral reconstruction with semitendinosus hamstring allograft, with improved function in three of their four patients as demonstrated by improved Oxford Hip Score, HOS and Global Treatment Outcome Score. Park and Ko [27] published a case report demonstrating the feasibility of using quadriceps tendon as an autograft option for labral reconstruction, although their follow-up at the time of publication was only 3 months. Domb et al. [28] reported a cohort study of 11 labral reconstruction versus 22 labral resection patients, with significant improvement in the NAHS and Hip Outcome Score for Activities of Daily Living in the reconstruction cohort at a minimum of 2 years of follow-up. Ayeni et al. [25] systematically reviewed the available literature regarding hip labral reconstruction in 2014 and concluded that there are promising short-term functional and patient-reported outcome benefits to reconstruction, although they noted that long-term follow-up was not reported in the literature [25].
Potential graft choices for reconstruction can include autograft, allograft, xenograft or biological tissue-engineered scaffolds, each with its own benefits and shortcomings. Autografts allow the labrum to be reconstructed with a patient's own tissue, but harvest of the graft may lead to donor site morbidity, which in some instances can be severe. Allografts offer an off-the-shelf solution for labral reconstruction but carry a small risk of rejection and disease transmission. Both autograft and allograft technologies undergo a period of 'labralization' in which there is local inflammation, followed by remodelling whereby the reconstructed graft tissue undergoes biological transformation into labral tissue [30-32]. A tissue-engineered xenograft or scaffold solution may, in theory, improve integration and speed 'labralization' of the tissue by decreasing the time needed for remodelling. In theory, an ideal tissue-engineered solution for labral reconstruction would be naturally derived, compatible with the host tissue and immune system, have refined architecture to allow for rapid integration, and have sufficient biomechanical integrity to withstand post-surgical rehabilitation [30-32]. Although large series with long-term clinical follow-up of labral reconstruction with tissue-engineered graft solutions are not presently available, the field of tissue engineering offers an exciting potential solution that could theoretically improve recovery time and post-operative morbidity, along with optimizing ultimate integration and function after labral reconstruction surgery.

Fig. 1. Images of a tissue-engineered pig xenograft, a naturally derived scaffold for tendon, ligament or labral reconstruction. (A) An overview scanning electron microscopy view of the macrostructure of the tissue-engineered graft; the pore size of the xenograft has been optimized for cell infiltration while still providing mechanical strength. (B) A closer view of the pore architecture of the xenograft. (C) A three-dimensional micro-computed tomography reconstruction of the pore architecture of the xenograft; three-dimensional reconstruction can be used to study the microporosity and interconnectedness of the graft architecture (special thanks to Drs Patrick W. Whitlock and Thorsten M. Seyler for providing the xenograft samples depicted in this image).
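To make the micro-CT porosity measurement mentioned in the Fig. 1 caption concrete, the sketch below computes porosity as the void fraction of a binarized reconstruction. It assumes a segmented scaffold/void voxel volume and uses a synthetic random array in place of real micro-CT data.

```python
# Illustrative sketch (not from the cited micro-CT workflow): once a reconstruction
# is binarized into scaffold (1) and void (0) voxels, porosity is simply the void
# fraction. The random volume below stands in for real segmented micro-CT data.
import numpy as np

rng = np.random.default_rng(0)
volume = (rng.random((64, 64, 64)) > 0.6).astype(np.uint8)  # 1 = scaffold material

porosity = 1.0 - volume.mean()         # fraction of voxels that are pore space
print(f"Porosity = {porosity:.1%}")    # ~60% for this synthetic volume
```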
LIGAMENTUM TERES
Recently, the biomechanical role of the ligamentum teres has been better appreciated and defined [6-8]. In certain instances, significant ligamentum teres disease can be associated with pain and general hip dysfunction in the pre-arthritic hip. The biomechanical role of the ligamentum teres in various hip positions has been studied in an animal model system [6]. In fact, isocentric reattachment of the ligamentum teres is not only feasible in an animal model system, but was demonstrated to serve as a 'natural check-rein' to dislocation of the hip joint. Further, ligamentum teres reconstruction in an animal model system was successfully performed without restricting hip motion or causing abnormal cartilage pressures [6]. Similar techniques have been described in an isolated case report by Simpson et al. [8], in which ligamentum teres reconstruction was performed for recurrent pain associated with feelings of hip instability despite previous hip arthroscopy. In theory, graft options such as those used for anterior cruciate ligament reconstruction in the knee would be feasible options in the future. The use of both allografts and autografts is plausible, and their advantages and disadvantages have been reviewed previously in this article. Similarly, future work may focus on an off-the-shelf tissue-engineered solution or graft option for ligamentum teres reconstruction, using principles currently being applied for biological engineering of orthopaedic tissue, as previously described [32].
EMERGING TISSUE-ENGINEERING TECHNOLOGIES

This section provides the general framework of the emerging biological field of regenerative medicine. In all, there has been a recent effort at many institutions, including our own, to combine diverse expertise to address various maladies in modern medicine. Although in its infancy, regenerative medicine is a field that has exciting potential for orthopaedic applications, and specifically for hip arthroscopy. Currently, specific efforts to utilize and apply regenerative medicine technologies for hip arthroscopy purposes are limited. However, by providing a brief discussion, we hope to highlight how this field may potentially inspire future solutions to many of the shortcomings of biological healing and reconstruction that currently limit hip arthroscopy [33-39].
Regenerative medicine is a relatively new and emerging field in which many medical, biological and physical science principles are being applied to address the shortage of biological organs and tissues available for transplantation or therapy secondary to primary organ or tissue pathology and failure. The field of regenerative medicine is more diverse than simply attempting to grow replacement organs. The field as a whole can be broadly divided into tissue engineering, diagnostic platforms, cellular therapies, healing therapies and supporting technologies. Tissue engineering is a branch of regenerative medicine in which replacement tissues and organs, as well as other body parts such as skin or ears, are developed ex vivo. Diagnostic platforms have also emerged from regenerative medicine experimentation, where genetic and pathogen detection tests have been developed and engineered tissue is being used for pre-clinical drug testing. Cellular therapies are constantly being explored, in which pluripotent cells, such as stem cells, are being investigated for their reparative properties in the setting of pathological states. Further, healing therapies, which may change the diseased environment, are being actively investigated in regenerative medicine; an example would be a biological dressing used to treat a contaminated wound or to improve healing after a burn by modifying the injured environment. Finally, supporting technologies have emerged as an essential aspect of regenerative medicine, where cell harvesting kits and novel delivery techniques have been developed as part of the regenerative medicine armamentarium, to be used as tools for implementing regenerative technology in the clinical setting [36-40].
An example of regenerative medicine technology would be harvesting and then isolating pluripotent stem cells from a patient. Those cells could then be cultured in vitro so that they continue to proliferate. These proliferative cells could then be seeded on a scaffold (Fig. 1), which allows for three-dimensional orientation and potentially helps with organization and differentiation of the stem cells. Various growth factors and mechanical stimuli, such as those provided by a bioreactor (Fig. 2), can be used to help the stem cells differentiate and remodel into mature, end-organ tissue. At the end of the tissue-engineering process, the surgeon may have an engineered tissue for replacement and reconstruction, which in theory can be optimized to speed the healing and recovery process (Fig. 3) [36][37][38][39]. When faced with a clinical dilemma in which a tissue-engineered solution may be of value, the surgeon can think of varying degrees of regenerative medicine complexity based upon the theory of the 'reconstructive ladder' (Fig. 4). As one moves up the rungs of the reconstructive ladder, the complexity of the regenerative medicine solution increases. Lower rungs consist of molecular or cell-based therapies that may aid in producing a more conducive healing environment. Full organ transplantation, or use of a mature tissue-engineered construct consisting of a naturally derived scaffold that has been seeded with stem cells and matured in a bioreactor, is a more complex and 'higher rung' solution on the reconstructive ladder. Our institution has completed the tissue engineering of urethra, vagina, bladder neck, ureter and bladder. Currently, blood vessel, heart valve, sphincter muscle, ear, finger/digits, kidney, nerve, skin and skeletal muscle are being developed [35][36][37][38][39]. The use of the novel technologies of bioprinting and micro-organoids has allowed for the production of microscopic liver structure, bladder tissue structure, testis, cardiac muscle and kidney structure [35]. It is our hope that with continued collaborative efforts these technologies may be applied to orthopaedic sports medicine and arthroscopic applications.
CONCLUSION AND DISCUSSION OF FUTURE DIRECTIONS
The future of hip arthroscopy and the use of biological agents for arthroscopic hip surgery are exciting. The primary limitations are mainly cellular and mechanical. The use of tissue engineering and regenerative medicine technology represents an exciting paradigm for addressing orthopaedic sports medicine problems. Although biological and tissue engineering solutions for hip arthroscopy are presently limited, the future has much potential for the continued evolution of existing technologies that may ultimately improve surgical outcomes, speed recovery and decrease post-operative rehabilitation limitations.
CONFLICT OF INTEREST STATEMENT
Allston J Stubbs serves as a paid consultant for a company or supplier: Smith and Nephew; he or a direct family member has stock or stock options in Johnson and Johnson; he received research support as principal investigator from a company or supplier: Bauerfeind AG; he serves on the editorial boards of VuMedi.com and Journal of Arthroscopy; he has board membership and/or committee appointments for the following organizations: International Society for Hip Arthroscopy, American Orthopaedic Society for Sports Medicine, Arthroscopy Association of North America.
Sandeep Mannava has received institutional research support from Wake Forest Innovations for work not related to the subject of this publication and he has been issued a United States patent entitled 'tissue tensioning device and related methods.' Elizabeth A Howse has no potential conflict of interest to declare.

Fig. 4. Schematic diagram demonstrating the concept of the reconstructive ladder of tissue engineering and regenerative medicine. As one proceeds up the rungs of the reconstructive ladder, the regenerative medicine construct becomes more complex and structured. Based upon the clinical scenario, one can attain a tissue-engineered solution from any rung of the reconstructive ladder. | 2016-05-12T22:15:10.714Z | 2015-08-11T00:00:00.000 | {
"year": 2015,
"sha1": "4ef1f3dd1398dd64d9df3296071b69dd5b498755",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jhps/article-pdf/3/1/23/5054648/hnv051.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d925f8d3cd11c93ccd29dfafdf6f95098dda6a29",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5997277 | pes2o/s2orc | v3-fos-license | Distribution of modular symbols for compact surfaces
We prove that the modular symbols, appropriately normalized and ordered, have an asymptotically normal distribution for all cocompact subgroups of SL_2(R). We introduce hyperbolic Eisenstein series in order to calculate the moments of the modular symbols.
Introduction
Let M be a hyperbolic Riemann surface of finite volume. Hence the universal covering of M is the upper half plane, H, and the covering group, Γ, is a discrete subgroup of PSL_2(R). Let f(z)dz be a holomorphic 1-form on M. If c is a curve on M we may integrate f(z)dz along the curve to get ∫_c f(z)dz.
We have a bijection between the covering group Γ and the fundamental group π_1(M, ẑ_0) given by sending γ ∈ Γ to the unique geodesic between z_0 and γz_0 in H, where z_0 lies above ẑ_0, and then projecting this curve to M. By integrating along this curve we get an additive homomorphism

γ ↦ ∫_{z_0}^{γz_0} f(z)dz   (1)

from Γ to C. We wish to study the distribution of the values of this map.
In [10] we considered this map in the case where M is non-compact and f is cuspidal. In applications to analytic number theory and elliptic curves this is often the relevant setup. In topology and geometry, on the other hand, the case of M compact usually attracts more attention. In [10,Theorem B] we found that with the correct normalization and ordering the values of the map (1) are normally distributed when M has a cusp and f is cuspidal. In this paper we obtain a similar result in the case where M is compact and f satisfies a different condition than being cuspidal.
The result in [10] was proved using Eisenstein series twisted with modular symbols introduced by Goldfeld ( [2,3]). The definition of these requires the existence of a cusp and can therefore not be used in the compact case. In this paper we introduce hyperbolic Eisenstein series. They exist also in the compact case. We then 'twist' these with modular symbols to obtain the distributional result described below.
We can handle a slightly more general setup than described above. Let Γ ⊆ PSL_2(R) be discrete and cocompact and let M = Γ\H be the associated quotient space. Let γ_1 ∈ Γ be hyperbolic, i.e. |tr(γ_1)| > 2. For simplicity we assume that γ_1 is diagonal, acting as z ↦ µz, where µ > 0. This may always be obtained from any hyperbolic γ_1 by conjugation with g ∈ SL_2(R). By possibly considering γ_1^{−1} instead of γ_1 we may assume µ > 1. We further assume that f(z)dz is a holomorphic 1-form on Γ\H which satisfies

∫_{z_0}^{γ_1 z_0} f(z)dz = 0.

(We note that such f always exist whenever the genus of Γ\H is strictly larger than 1.) We define

[γ, f] = √( vol(Γ\H) / log((a² + b²)(c² + d²)) ) ∫_{z_0}^{γz_0} f(z)dz,

which for fixed f gives a map from the quotient Γ_1\Γ to C. Here Γ_1 is the cyclic subgroup of Γ generated by γ_1, and a, b, c and d are the entries of γ.
Our main theorem is the following distributional result.
Theorem A. Assume f has Petersson norm 1. Then [γ, f] has an asymptotically normal distribution. More precisely, for any fixed rectangle R in C,

#{γ ∈ Γ_1\Γ : (a² + b²)(c² + d²) ≤ T, [γ, f] ∈ R} / #{γ ∈ Γ_1\Γ : (a² + b²)(c² + d²) ≤ T} → (1/2π) ∫_R e^{−(x²+y²)/2} dx dy

as T → ∞.

Theorem B. Assume f has Petersson norm 1. Then

[γ, α] = √( vol(Γ\H) / log((a² + b²)(c² + d²)) ) ∫_{z_0}^{γz_0} α

has an asymptotically normal distribution. More precisely, for any fixed rectangle R in C, the analogous counting ratio converges to the corresponding Gaussian measure of R.

In order to prove such results we introduce hyperbolic Eisenstein series, defined by

E_{γ_1}(z, s) = Σ_{γ ∈ Γ_1\Γ} (ℑ(γz)/|γz|)^s.

This converges absolutely for ℜ(s) > 1 by Lemma 3.1 below. We then go on to study the basic properties of this series.
Theorem C. The function E_{γ_1}(z, s) has meromorphic continuation to the whole s-plane. At a regular point s_0, E_{γ_1}(z, s_0) is square integrable on Γ\H. The poles are located at −2n + s_j, where s_j(1 − s_j) is an eigenvalue of the automorphic Laplacian and n ∈ N. The point s = 1 is a simple pole and the residue at s = 1 is 2 log µ / vol(Γ\H).
Most of this follows quite straightforwardly after applying the resolvent to the identity

ΔE_{γ_1}(z, s) = s(s − 1)E_{γ_1}(z, s) − s²E_{γ_1}(z, s + 2)

of Lemma 3.2 below. Once the above theorem is established we can use the method of complex contour integration to get the following result, which may be interpreted as a result on the number of closed homotopy classes on a compact hyperbolic Riemann surface.
Theorem D.
We show that 1 − δ = 7/8 + ε is valid if there are no small eigenvalues. Let f_i be modular forms of weight 2 with respect to Γ and let α_i = ℜ(f_i(z)dz) or α_i = ℑ(f_i(z)dz). We shall write α = α_1. The (real) modular symbols are defined by

⟨γ, α_i⟩ = ∫_{z_0}^{γz_0} α_i.

Assume that ⟨γ_1, α_i⟩ = 0 for i = 1, . . . , n. We now 'twist' the hyperbolic Eisenstein series with these modular symbols, as done by Goldfeld [2,3] for the usual non-holomorphic Eisenstein series, by setting

E_{γ_1, α_1,...,α_n}(z, s) = Σ_{γ ∈ Γ_1\Γ} ⟨γ, α_1⟩ · · · ⟨γ, α_n⟩ (ℑ(γz)/|γz|)^s.

We then go on to study the analytic properties of this function.
The last claim of the theorem enables us to give rather good bounds on the growth of the modular symbols.
Theorem F. For any ε > 0 we have ⟨γ, f⟩ = O_ε(((a² + b²)(c² + d²))^ε).

We go on to study the possible singularity at s = 1. We estimate the pole order and determine the leading term in the Laurent expansion in many cases. We then go on to study the behavior on vertical lines, and we arrive at the following theorem.
Theorem G. The function E_{γ_1, α_1,...,α_n}(z, s) grows at most polynomially on vertical lines with ℜ(s) > 1/2. This puts us in a position where we can use the method of contour integration to calculate the moments of the random variable defined by the left-hand side of (2). Once we have calculated these moments, Theorem A follows from a classical theorem in probability theory.
Acknowledgments: I am grateful to Professor A. B. Venkov for drawing my attention to [6] and for stimulating discussions regarding hyperbolic Eisenstein series. I am also grateful to Professors E. Balslev and Y. N. Petridis for remarks concerning an early draft of this paper.
The resolvent of the automorphic Laplacian
For the methods used in this paper it is very important to introduce the resolvent of the automorphic Laplacian. The automorphic Laplacian is closely related to the ordinary hyperbolic Laplacian

Δ = y²(∂²/∂x² + ∂²/∂y²).

We shall briefly recall the relevant definitions and properties. Let Γ ⊆ PSL_2(R) be discrete and cocompact and M = Γ\H the associated quotient space under the action γ: H → H, z ↦ (az + b)/(cz + d). The quotient can be given the structure of a Riemann surface with H as a branched cover (see e.g. [12, §1.5]). The branch points are at the elliptic points, i.e. the points which are fixpoints of γ ∈ Γ with |tr(γ)| < 2. When there are no such points H is the universal cover.
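The Möbius action just described can be checked numerically. Below is a minimal Python sketch (the matrix entries are illustrative, not from the paper) verifying the standard identity ℑ(γz) = ℑ(z)/|cz + d|² for determinant-one matrices, and the factorization (a² + b²)(c² + d²) = |ai + b|²|ci + d|² that underlies the ordering of group elements at z = i:

```python
def mobius(g, z):
    """Apply g = (a, b, c, d) with ad - bc = 1 to a point z in the upper half plane."""
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

g = (2.0, 1.0, 1.0, 1.0)   # det = 1, trace = 3 > 2: a hyperbolic element (illustrative)
a, b, c, d = g
z = 0.3 + 1.7j

w = mobius(g, z)
# standard identity: Im(gz) = Im(z) / |cz + d|^2 when det g = 1
assert abs(w.imag - z.imag / abs(c * z + d) ** 2) < 1e-12

# at z = i the ordering quantity factorizes: (a^2+b^2)(c^2+d^2) = |ai+b|^2 |ci+d|^2
lhs = (a ** 2 + b ** 2) * (c ** 2 + d ** 2)
rhs = (abs(a * 1j + b) * abs(c * 1j + d)) ** 2
assert abs(lhs - rhs) < 1e-12
print(w, lhs)
```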
The automorphic Laplacian L_Γ is the closure of the operator acting on smooth functions in L²(Γ\H) by f ↦ Δf, where f: H → C is Γ-automorphic and smooth. The operator L_Γ is selfadjoint with −L_Γ non-negative. By the maximum principle, L_Γu = 0 if and only if u is constant. There is a complete orthonormal system of smooth eigenfunctions ψ_0, . . . , ψ_i, . . . The spectral theorem (see [4, VI §5.3]) asserts in our case that

−L_Γ = Σ_i λ_i P_i,

where P_i is the projection onto the line spanned by ψ_i. We shall use P_{λ_i} to denote the projection onto the λ_i-eigenspace. It is convenient to introduce a variable s subject to the condition λ = s(1 − s). Hence the s-plane is a two-sheeted covering of the λ-plane, and the right half plane ℜ(s) > 1/2 cut along 1/2 < s ≤ 1 corresponds to the λ-plane cut along the positive real axis.
The resolvent is the bounded operator R(s) = (L_Γ + s(1 − s))^{−1}. From the spectral theorem we may conclude (see [4, VI §5.2]) that

R(s) = Σ_i P_i / (s(1 − s) − λ_i).

We note that for s(1 − s) close to λ_i, the difference R(s) − P_{λ_i}/(s(1 − s) − λ_i) is regular in s. Hence if u(z, s) ∈ L²(Γ\H) is meromorphic in s with a pole of order k − 1 at s_i, where s_i(1 − s_i) = λ_i, and leading term u_{k−1}(z), then R(s)u(z, s) has a pole of order at most k. The pole order is k if and only if P_{λ_i}u_{k−1} is nonzero, and if this is the case then this projection gives the leading term. If this is not the case the pole order is strictly less than k. We shall often use the above expression for s_0 = 1. Since ψ_0 = vol(Γ\H)^{−1/2}, the projection there reduces to

P_{λ_0}u = (1/vol(Γ\H)) ∫_{Γ\H} u dµ.
Hyperbolic Eisenstein series
In this section we define hyperbolic Eisenstein series related to γ_1. The construction is a weight 0 real-analytic analogue of the holomorphic hyperbolic Eisenstein series of weight k ≥ 2 considered in e.g. [9,6]. We shall only develop the theory of these hyperbolic Eisenstein series to the point needed to prove Theorem A. We shall have more to say about these series in [13]. We fix, in this section, a unitary character χ: Γ → S¹ which is trivial on Γ_1.
Definition 3.1. The hyperbolic Eisenstein series related to γ_1 is defined by

E_{γ_1}(z, s) = Σ_{γ ∈ Γ_1\Γ} χ̄(γ) (ℑ(γz)/|γz|)^s.

It is easy to see that this is well defined in the domain of absolute convergence, and that it is (χ, Γ)-automorphic, i.e. E_{γ_1}(γz, s) = χ(γ)E_{γ_1}(z, s).
Lemma 3.1. The series defining the hyperbolic Eisenstein series is absolutely convergent for ℜ(s) > 1, uniformly on compact subsets.

Proof. The proof is closely modeled after the proof of the convergence of the usual Eisenstein series given in [5, Theorem 2.1.1]. We note that A = {z ∈ H : 1 < |z| < µ} is a fundamental domain for Γ_1. By the Γ_1-invariance of ℑ(z)/|z| we may assume that γz ∈ A for all γ ∈ Γ_1\Γ.
For any ε > 0 we consider the hyperbolic ε-neighborhoods of the points γz; here d(z, z′) denotes the hyperbolic distance between z and z′. As in [5] we find that there exists Λ_ε > 0, depending only on ε, bounding the number of such neighborhoods meeting a fixed compact set. If we choose ε small enough we may dominate the terms of the series accordingly. From this inequality all the claims of the lemma easily follow.
We note that the above proof also applies when the group is cofinite.
Lemma 3.2. The hyperbolic Eisenstein series satisfies

ΔE_{γ_1}(z, s) = s(s − 1)E_{γ_1}(z, s) − s²E_{γ_1}(z, s + 2).
Proof. We note that since Δ commutes with the SL_2(R) action, it suffices to show that

Δ(ℑ(z)/|z|)^s = s(s − 1)(ℑ(z)/|z|)^s − s²(ℑ(z)/|z|)^{s+2},

which is elementary. We omit the details.
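The omitted elementary computation can be checked symbolically. A short SymPy sketch, assuming the identity as reconstructed above, verifies the pointwise relation for the summand ℑ(z)/|z| in Cartesian coordinates:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
s = sp.Symbol('s')

base = y / sp.sqrt(x**2 + y**2)     # Im(z)/|z| in Cartesian coordinates
u = base**s

# hyperbolic Laplacian: Delta = y^2 (d^2/dx^2 + d^2/dy^2)
lap = y**2 * (sp.diff(u, x, 2) + sp.diff(u, y, 2))

# right-hand side of the lemma
rhs = s * (s - 1) * u - s**2 * base**(s + 2)

# dividing by u leaves a rational expression that simplifies to zero
print(sp.simplify((lap - rhs) / u))   # expect 0
```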
Theorem 1. The function E_{γ_1}(z, s) has meromorphic continuation to the whole s-plane. At a regular point s_0, E_{γ_1}(z, s_0) is square integrable on Γ\H. The poles are located at −2n + s_j, where s_j(1 − s_j) is an eigenvalue of the automorphic Laplacian and n ∈ N. If χ = 1 the pole at s = 1 is simple with residue 2 log µ / vol(Γ\H).
Proof. From Lemma 3.1 we get that E_{γ_1}(z, s) ∈ L²(Γ\H, dµ(z)) when ℜ(s) > 1, and we can therefore apply the resolvent to the expression in Lemma 3.2. We get

E_{γ_1}(z, s) = −s²R(s)E_{γ_1}(z, s + 2).   (9)

Since E_{γ_1}(z, s) is holomorphic for ℜ(s) > 1, this gives meromorphic continuation to ℜ(s) > −1, with possible poles at the points s_j with s_j(1 − s_j) = λ_j. Once this has been established, Eq. (9) gives continuation to ℜ(s) > −3. Repeating this process we obtain meromorphic continuation to the whole s-plane.
The pole order at s = 1 follows from the discussion at the end of section 2, and this also gives the residue: unfolding the integral gives ∫_{Γ\H} E_{γ_1}(z, 3) dµ(z) = 2 log µ, from which the residue 2 log µ / vol(Γ\H) follows.
Hyperbolic Eisenstein series twisted with modular symbols
In this section we shall introduce hyperbolic Eisenstein series twisted with modular symbols. The analytic properties of these functions contain the information that will eventually enable us to conclude Theorem A.
Whenever g is a holomorphic or harmonic 1-form on Γ\H we shall write

⟨γ, g⟩ = ∫_{z_0}^{γz_0} g,

where γ ∈ Γ. We shall call this the modular symbol related to g. It is easy to see that this is independent of the path chosen and also that it is independent of z_0. We shall sometimes write ⟨γ, f⟩ instead of ⟨γ, f(z)dz⟩.
Let ω_k, k = 1, . . . , n, be holomorphic 1-forms on Γ\H. If we assume that ⟨γ_1, α_k⟩ = 0 for k = 1, . . . , n, we may construct an associated family of hyperbolic Eisenstein series E_{γ_1}(z, s, ε), in which each summand is weighted by a unitary character χ_ε built from the modular symbols ⟨γ, α_k⟩. We will use the following convention: a function with a subscript variable denotes the partial derivative of the function in that variable. When the sum is absolutely convergent, the partial derivatives E_{γ_1, ε_1,...,ε_n}(z, s, 0) recover, up to explicit constants, the twisted series introduced above; this is analogous to the function introduced by Goldfeld in [2,3]. We notice that these functions may be seen as the coefficients in a power series expansion in ε of E_{γ_1}(z, s, ε) around the point ε = 0. Our aim in this chapter is to understand the analytic properties of this series in a neighborhood of the point s = 1. It is these properties that will enable us later to prove the distribution result stated in the introduction. We consider the space L²(Γ\H, χ̄_ε) of square integrable functions that transform with the character χ̄_ε under the action of the group, and we introduce unitary operators U(ε) from L²(Γ\H) to L²(Γ\H, χ̄_ε), given by multiplication by the corresponding exponential of the antiderivatives of the α_k. We set L(ε) = U(ε)^{−1}ΔU(ε) and D_{γ_1}(z, s, ε) = U(ε)^{−1}E_{γ_1}(z, s, ε). Then using Lemma 3.2 we see that D_{γ_1}(z, s, ε) satisfies the analogous differential equation (13). From (13) we see that the D- and E-families are related, and by termwise differentiation we find the corresponding relations between their partial derivatives whenever the sum is absolutely convergent (see Lemma 4.2 below). From (13) we also infer (15). These relations between the D_{γ_1, ε_1,...,ε_n}(z, s, 0) and the E_{γ_1, ε_1,...,ε_n}(z, s, 0) functions show that one family determines the other.
In order to ensure that the functions we shall be studying are well defined, we need the following crude bound on the antiderivative of a modular form of weight 2. We notice that the proof uses the same starting point as Hecke's bound on the Fourier coefficients of modular forms (see e.g. [11, Chapter VII Theorem 4.5]).
Proof. The integral in question is independent of the path chosen, so we choose the direct line between z_0 and z. On the assumptions of the lemma we have |f(w)| ≤ M/ℑ(z) when w is on the line between z_0 and z, and the claimed bound follows.

The above lemma enables us to prove the following:

Lemma 4.2. Let n ≥ 1. For ℜ(s) sufficiently large we have D_{γ_1, ε_1,...,ε_n}(z, s, 0) ∈ L²(Γ\H, dµ).

Proof. Since A = {z ∈ H : 1 < |z| < µ} is a fundamental domain for Γ_1, we may choose representatives of Γ_1\Γ such that γz is in A. If we assume |z_0| > µ, we can now use Lemma 4.1 to bound the modular symbols in the sum; the lemma then follows from Lemma 3.1. We may remove the assumption |z_0| > µ by using (15): relation (15) shows that D_{γ_1, ε_1,...,ε_n}(z, s, 0) ∈ L²(Γ\H, dµ) without the assumption on |z_0|.
We define the corresponding auxiliary quantities as in [10]; the following lemma then holds. Proof. The proof of Lemma 2.2 in [10] may be used without changes. We notice that in the present case δ(α_k) = 0.
Here ε̂_k means that we have excluded ε_k from the list. When ℜ(s) is sufficiently large we can use Lemma 4.2 and invert (19) and (20) by applying the resolvent of the automorphic Laplace operator R(s) = (Δ_Γ + s(1 − s))^{−1}. We obtain identities (21) and (22), which will turn out to be of great importance for the proofs of many results in this and the following chapter. As a starting point we prove the following result. Proof. This is induction on n. For n = 0 we quote Theorem 1, while the induction step follows from (21) and (22).
This proves Theorem E. Proof. This follows from a classical theorem due to Landau (see e.g. [11, Chapter VI, Proposition 2.7]).
Corollary 4. Let f(z)dz be a holomorphic 1-form on Γ\H such that ⟨γ_1, f⟩ = 0. For any fixed z ∈ H and ε > 0 we have ⟨γ, f⟩ = O_ε((|az + b||cz + d|)^ε).

Proof. From Theorem 2 and (12) one easily finds that, for any m ∈ N, the series Σ_{γ∈Γ_1\Γ} ⟨γ, f⟩^m (ℑ(γz)/|γz|)^s has meromorphic continuation to C and that it is analytic in ℜ(s) > 1. Using Landau's result again one may conclude that the above series is absolutely convergent for ℜ(s) > 1. Since the terms in an absolutely convergent series tend to zero, we get that ⟨γ, f⟩^m (ℑ(γz)/|γz|)^s tends to zero as |az + b||cz + d| → ∞. Hence the claimed bound follows by taking m large. We note that putting z = i we obtain Theorem F. We shall now show how we can obtain the Laurent expansions of D_{γ_1, ε_1,...,ε_n}(z, s, 0) from (21) and (22). We start by showing that R(s)L_{ε_k}(0)D_{γ_1, ε_1,...,ε̂_k,...,ε_n}(z, s, 0) is regular. To this end we need the following lemma. We apply this to the second integral in (23). Since f_j is holomorphic, the integral reduces to an integral over the boundary of the fundamental domain. The boundary of the fundamental domain is the union of conjugated sides. These conjugated sides cancel in the integral and we get the result.
Using this we can now prove that R(s)L_{ε_k}(0)D_{γ_1, ε_1,...,ε̂_k,...,ε_n}(z, s, 0) is regular at s = 1.
Proof. For n = 0 we quote Lemma 1, and for n = 1, (21), Lemma 4.6 and the discussion at the end of section 2 give the result. Assume that the result is true for all n ≤ n_0. By (22), (18), Lemma 4.6 and the fact that R(s)(−8π²⟨w_k, w_l⟩D_{γ_1, ε_1,...,ε̂_k,...,ε̂_l,...,ε_n}(z, s, 0)) can have pole order at most 1 more than D_{γ_1, ε_1,...,ε̂_k,...,ε̂_l,...,ε_n}(z, s, 0) at s = 1, we obtain the result about the pole orders. Note also that R(s)s²D_{γ_1, ε_1,...,ε_n}(z, s + 2, 0) always contributes at most a simple pole. For even n we notice that, by induction and using the discussion at the end of section 2, the coefficient of (s − 1)^{−n/2−1} is a sum over pairings of the α_k, where the prime in the product means that we have excluded α_k, α_l from the product and enumerated the remaining differentials accordingly. The result follows.
Using this we can prove the analogous statement for the twisted Eisenstein series themselves. Proof. This follows from (16) and Lemma 4.7.
Growth on vertical lines
By Corollary 3 we see that E_{γ_1, ε_1,...,ε_n}(z, s, 0) = O_K(1) for ℜ(s) = σ > 1 and z in a fixed compact set K. In this section we show that if we only require σ > 1/2, we still have at most polynomial growth on the line ℜ(s) = σ.
This proves the second part of Theorem C.
The identity which is going to boost the induction is (22). By the induction hypothesis and (18) we have the bound (33) on L_{ε_kε_l}(0)D_{ε_1,...,ε̂_k,...,ε̂_l,...,ε_m}(z, s, 0). Estimating the first term separately and combining these two bounds with the Sobolev embedding theorem, as in the proof of Lemma 5.1, we get an estimate which establishes (32) for m = n.
Hence we have proved Theorem G.
Estimates of summatory functions
In the two preceding sections we found the pole structure of the twisted hyperbolic Eisenstein series and we showed that, as a function of s, it has at most polynomial growth on vertical lines. In this section we state and prove two technical propositions that enable us to use these properties to get estimates on certain summatory functions.
We shall formulate the results in terms of general Dirichlet series. Fix {f_n} ⊂ R_+, a non-decreasing sequence that tends to ∞ as n → ∞. We note that (i) implies that the series is uniformly convergent on compact subsets. By the Phragmén-Lindelöf theorem we may replace (iv) by the weaker assumption that for any fixed h ≤ σ ≤ 2, the function H_a(s) grows at most polynomially on vertical lines. If s = 1 is a pole of order l > 1, and d_{−l} is the leading term in the Laurent expansion of H_a(s), then the summatory function acquires a corresponding main term with a power of log T.

Proof. Let φ_U: R → R, U ≥ U_0, be a family of smooth decreasing functions, equal to 1 near the origin and vanishing for large t, with transition region of length comparable to 1/U, and let

R_U(s) = ∫_0^∞ φ_U(t)t^{s−1}dt

be the Mellin transform of φ_U. Then we have the estimates (43) and (44), both uniform for ℜ(s) bounded: the first is a mean value estimate, while the second follows by successive partial integration and a mean value estimate. The Mellin inversion formula now gives

Σ_n a_nφ_U(f_n/T) = (1/2πi) ∫_{(c)} H_a(s)R_U(s)T^s ds.

We note that by (44) and (iv) the integral is convergent. We now move the line of integration to the line ℜ(s) = h by integrating along a box of some height and then letting this height go to infinity. By (iv) and (44) we find that the contribution from the horizontal sides goes to zero. Since we assume that s = 1 is the only pole of the integrand with ℜ(s) ≥ h, using Cauchy's residue theorem we obtain

Σ_n a_nφ_U(f_n/T) = Res_{s=1}(H_a(s)R_U(s)T^s) + (1/2πi) ∫_{(h)} H_a(s)R_U(s)T^s ds.

If we choose c = b + ε the last integral is convergent and O(T^hU^{b+ε}).
Assume that H_a(s) has a pole of order l with (s − 1)^{−l} coefficient d_{−l}. Then if l > 1 we expand the residue as a triple sum. The first factor in the sum is independent of U and T, while the second is independent of T and bounded in U. The third factor has leading term T(log T)^{n_3} and a remainder O(T(log T)^{n_3−1}). Hence the leading term is the one corresponding to n_1 = n_2 = 0, n_3 = l − 1 and we get, using (43),
the asymptotic behaviour of the smoothed sum Σ_n a_nφ_U(f_n/T). If l = 1 then by (43), Res_{s=1}(H_a(s)R_U(s)T^s) = d_{−1}T + O(T/U). If H_a(s) has a nonsimple pole we choose U = log T, and in the simple pole case we choose U = T^{(1−h)/(b+1+ε)} in order to balance the error terms. At this point we note that if a_n is non-negative for all n, then by further requiring φ_U(t) = 0 if t ≥ 1 and φ̃_U(t) = 1 for t ≤ 1, we have

Σ_n a_nφ_U(f_n/T) ≤ Σ_{f_n≤T} a_n ≤ Σ_n a_nφ̃_U(f_n/T),

from which it easily follows that the middle sum has an asymptotic expansion. In the case where s = 1 is a simple pole we choose U = T^{(1−h)/(2−h+ε)} to balance the error terms. Since a_n ∈ R for many of the applications we have in mind, we shall also deal with this situation. We let H(s) be the sum corresponding to a_n = 1 for all n.

Proposition 6.2. Assume that H(s) satisfies the conditions of Proposition 6.1 with parameters h′, b′. Assume that for any ε > 0 we have a_n = O((f_n)^ε) as n → ∞ and that H_a(s) satisfies (i)-(iv). Assume further the appropriate condition on the order of the pole of H_a(s) at s = 1. If s = 1 is a simple pole, the summatory function has the corresponding asymptotic expansion; if s = 1 is a pole of order l > 1, with c_l the leading term in the Laurent expansion of H_a(s), the analogous expansion with a power of log T holds.

Proof. We may re-use most of the proof of the last proposition. To get a result without φ_U from (45) and (46), we notice that if we choose φ_U such that φ_U(t) = 1 for t ≤ 1, then

Σ_n a_nφ_U(f_n/T) = Σ_{f_n≤T} a_n + Σ_{T<f_n≤T(1+1/U)} a_nφ_U(f_n/T).
From a_n = O((f_n)^ε) we see that we may evaluate the last sum in the following way. For any ε > 0 it is at most a constant times T^ε(Σ_{f_n≤T(1+1/U)} 1 − Σ_{f_n≤T} 1), which is controlled by the expansion for H(s). Using this with the above choices of U we get the result.
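The smoothing device in the two propositions can be illustrated numerically. The Python sketch below uses hypothetical choices f_n = n and a_n = 1 and a C-infinity cutoff equal to 1 on [0, 1] and 0 beyond 1 + 1/U; the discrepancy between the sharp and smoothed sums is supported on T < f_n ≤ T(1 + 1/U) and is O(T/U), as in the proofs above:

```python
import numpy as np

def phi(t, U):
    """Smooth decreasing cutoff: 1 for t <= 1, 0 for t >= 1 + 1/U."""
    out = np.ones_like(t)
    out[t >= 1 + 1 / U] = 0.0
    mid = (t > 1) & (t < 1 + 1 / U)
    x = (t[mid] - 1) * U              # rescale the transition zone to (0, 1)
    f = lambda u: np.exp(-1.0 / u)    # building block of a C-infinity step
    out[mid] = f(1 - x) / (f(1 - x) + f(x))
    return out

T, U = 10_000.0, 100.0
n = np.arange(1, int(T * (1 + 1 / U)) + 2, dtype=float)

sharp = np.sum(n <= T)                # sum over f_n <= T with a_n = 1
smooth = np.sum(phi(n / T, U))        # smoothed sum entering the Mellin argument

print(sharp, smooth, smooth - sharp)  # discrepancy is O(T/U), here at most ~100
```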
We note that under the assumptions of the above proposition, with the exceptions that H_a(s) should be regular at s = 1 and that H(z, s) should have a simple pole at s = 1, we may conclude that Σ_{f_n≤T} a_n = o(T). The proof of this is identical to the proof of the above with d_{−1} = 0.
Setting z = i we obtain Theorem D. The estimation of the remainder term depends on the growth estimates of E γ 1 (z, s) available to us and the existence of small eigenvalues of the Laplacian. Assuming no small eigenvalues Lemma 5.1 enables us to conclude 1 − δ = 7/8 + ε.
The distribution of modular symbols
We now show how to obtain a distribution result for the modular symbols from the asymptotic expansions of Corollary 8. We renormalize the modular symbols as in Theorem B; the relevant asymptotic expansions hold with a power saving of T^{−δ} for some δ > 0. Now let X_T be the random variable whose probability measure assigns to R ⊂ C the proportion of the normalized modular symbols, taken over γ ∈ Γ_1\Γ ordered by |az + b||cz + d| ≤ T, that lie in R. (By convention we set ⟨γ, α⟩/log(|az + b||cz + d|) = 0 if |az + b||cz + d| ≤ 1; note that there are only finitely many such elements.) We consider the moments of X_T, given in (55). We notice that the right-hand side gives the moments of the bivariate Gaussian distribution with correlation coefficient zero. Hence by a result due to Fréchet and Shohat (see [7, 11.4.C]) we conclude the following: As an easy corollary we get the following result about the distribution of harmonic differentials as T → ∞.
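The limiting moments in (55) are those of a bivariate Gaussian with zero correlation. A quick Monte Carlo sketch (illustrative, with independent standard normal coordinates) compares empirical mixed moments against the closed form E[X^m Y^n] = (m − 1)!!(n − 1)!! for even m, n, and 0 otherwise:

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 2_000_000))   # zero correlation: independent coordinates

def double_factorial(k):
    return 1 if k <= 0 else k * double_factorial(k - 2)

for m in range(5):
    for n in range(5):
        empirical = np.mean(X**m * Y**n)
        exact = double_factorial(m - 1) * double_factorial(n - 1) if m % 2 == 0 and n % 2 == 0 else 0.0
        print(m, n, round(empirical, 3), exact)
```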
The same holds for ℑ(⟨γ, f⟩). We note that by putting z = i in Corollary 10 and Theorem 9 we obtain Theorem A and Theorem B. | 2014-10-01T00:00:00.000Z | 2003-08-06T00:00:00.000 | {
"year": 2003,
"sha1": "1e1fdf9c0d4b9abd0f98c33169af274e167b98a9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1e1fdf9c0d4b9abd0f98c33169af274e167b98a9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
262727088 | pes2o/s2orc | v3-fos-license | Gold nanorods with conjugated polymer ligands: sintering-free conductive inks for printed electronics
A straightforward route to hybrid nanostructures of metal cores with conductive polymers and their application as sintering-free inks is described.
Introduction
The accelerating market of printed electronics requires inks that are suitable for large-area, high-throughput and low-cost production of lightweight and flexible conductive materials. Relevant market drivers are touchscreen panels, memory components, organic photovoltaics, radio frequency identification (RFID) tags and optoelectronic devices. 1 Preferable deposition methods are solution-based processes such as inkjet printing using inks containing metal or conductive metal oxide colloids. [2][3][4] Successful printing requires suitable inks. The performance of nanoparticle-based inks depends on their colloidal stability under the conditions that occur during printing. 1,5 Conventionally, bulky organic molecules are used as ligands to ensure colloidal stability; they provide steric stabilization to the nanoparticles. After deposition, these ligands impede the contact between the particles and limit electrical conduction. Organic molecules represent insulating barriers; post-deposition treatments to remove them after drying are required. Thermal sintering at high temperatures and with long residence times is hard to reconcile with polymer substrates and roll-to-roll printing processes. 6 Alternative post-treatment methods with reduced time and thermal budgets include plasma, laser, infrared (IR), microwave and intense pulsed light treatments. 1,7-10 Some of them can remove organic ligands and improve electrical transport in less than a minute, but the resulting volume shrinkage can rupture the material.
Sintering-free inks avoid these challenges altogether. Grouchko et al. developed a self-sintering metal nanoparticle ink with a non-volatile destabilizing agent. 11 Upon solvent evaporation, the concentration of this destabilizing agent increases and detaches the ligand from the particles. Detachment leads to metal-metal contacts, but the ink with its rapidly decreasing colloidal stability is hard to handle.
Here, we introduce a sintering-free nanoparticle ink in which conjugated polymer ligands lend the particles chemical stability due to strong multidentate binding to the metal surface, colloidal stability and compatibility in different relevant solvents, and good electron transport properties in the dry state. Kanehara and coworkers synthesized tailored phthalocyanine ligands and demonstrated that aromatic systems can provide mobile electrons in the ligand shells. 2,12 We adapted this idea with polymer-coated particles and created hybrid particles with increased colloidal stability using commercially available conjugated polymers such as poly[2-(3-thienyl)-ethyloxy-4-butylsulfonate] (PTEBS) with an average molecular weight of 40-70 kDa. To ensure that the π-electrons couple to the metal surface, polythiophene derivatives were used; they bring the π-system in close proximity to the gold because they contain a sulfur heteroatom in the aromatic ring. Polymer chains with more than 100 repeat units, molecular weights of more than 20 kDa, and highly polar side chains can provide colloidal stability in polar solvents.
We demonstrate the effectiveness of polythiophene ligands in nanoparticle inks based on gold nanorods (AuNRs). AuNRs are anisotropic nanoparticles that show lower percolation thresholds than spherical particles and thus provide large conductivities at low volume fractions. 13 The rods can be synthesized using a well-established protocol that yields narrow size distributions and negligible shape impurities. 14 After synthesis, AuNRs are capped by a cetyltrimethylammonium bromide (CTAB) double layer (AuNR@CTAB). 15 The ligand plays a crucial role in the anisotropic particle growth 14,16,17 but leads to poor colloidal stability unless the AuNRs are kept in excess CTAB. 16 Most existing strategies for the stabilization of AuNRs are based on large, non-conductive polymers that provide stability even if the CTAB has not been exchanged completely. [17][18][19][20] The poor colloidal stability of the AuNR@CTAB system renders ligand exchange with small molecules challenging. 16,18 The few successful exchange protocols that exist for small ligands 17,21 require unusual and non-conductive ligands or multi-step ligand exchange protocols. The rods' anisotropy presents an additional challenge: surface properties of the different crystal planes presented on the rod are not equivalent. It is possible to specifically exchange ligands only at the tips of the rods. [22][23][24] For nanoparticle inks, ligand exchange protocols have to be chosen such that a homogeneous ligand shell forms, unless anisotropic particle interactions during ink processing are desirable.
Here we describe a facile and straightforward ligand exchange procedure to modify AuNR@CTAB with PTEBS. We prove the complete exchange of the ligand and discuss the binding site and arrangement of the polymer chains on the surface of the AuNRs. The resulting colloidal dispersion was stable in water and in a mixture of polar solvents over months. We formulated inks and used them to deposit conductive patterns that immediately reached conductivities in the range of annealed metal inks. The protocol is also readily applicable to other polythiophenes, and we demonstrate its compatibility with poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS), a polymer mixture commonly used in organic electronics.
Nanorod synthesis and ligand exchange
AuNRs were synthesized using a published protocol. 25 Transmission electron microscopy (TEM) images of the as-synthesized AuNR@CTAB are shown in Fig. 1a and b. The mean length and width were 115 and 25 nm, respectively, both with 6% relative standard deviation (Fig. 1c). As-synthesized AuNR@CTAB exhibited maxima of longitudinal localized surface plasmon resonance (L-LSPR) and transversal resonance (T-LSPR) at 909 and 508 nm, respectively (Fig. 1d). Washed AuNR@CTAB (excess CTAB below 100 mM) 26 were incubated with a solution of PTEBS in water. After the ligand exchange, the remaining free (new and old) ligands were separated from the nanorods by centrifugation. The ligand exchange protocol was optimized to provide full coverage of the surface of the studied particles. We determined that a polymer addition to the dispersion equivalent to at least 7 mg m−2 (polymer mass/particle surface area) was required to obtain colloidally stable AuNR@PTEBS. We recommend a polymer to surface area ratio equivalent to 10 mg m−2 and 8 h incubation for optimal stability. Details on the ligand exchange protocol and its optimization are described in the ESI.† Fig. 2b illustrates the surface of a nanorod before and after modification. The constitutional formulas of CTAB and PTEBS suggest that the nanorods' surface charge should reverse during a ligand exchange process. The observed change in zeta potential from +25 mV to −40 mV confirms a successful ligand exchange.
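The dosage criterion above is specified per unit particle surface area. The short sketch below estimates how much polymer this implies per gram of gold for rods of the measured dimensions; the rods are modelled as flat-ended cylinders, and the bulk density of gold (19.3 g cm−3) is a textbook value not taken from the paper:

```python
import math

length_nm, width_nm = 115.0, 25.0   # mean rod dimensions from TEM (Fig. 1c)
rho_gold = 19.3e-21                  # g per nm^3; bulk gold density, assumed here
dose_mg_per_m2 = 10.0                # recommended polymer dose per particle surface

r = width_nm / 2
area_nm2 = 2 * math.pi * r * length_nm + 2 * math.pi * r**2  # wall plus two flat ends
vol_nm3 = math.pi * r**2 * length_nm

mass_per_rod_g = vol_nm3 * rho_gold
area_per_gram_m2 = area_nm2 * 1e-18 / mass_per_rod_g

print(f"specific surface area: {area_per_gram_m2:.1f} m^2 per g of gold")
print(f"polymer needed: {dose_mg_per_m2 * area_per_gram_m2:.0f} mg per g of gold")
```

With these assumptions the rods expose roughly 9 m² of surface per gram of gold, so the 10 mg m−2 dose corresponds to about 90 mg of polymer per gram of gold.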
Surface chemistry characterization
The UV-vis spectrum of the AuNR@PTEBS showed a blue-shift in both LSPR maxima compared to AuNR@CTAB (Fig. 2c), indicating an increased dielectric constant in the direct vicinity of the nanorods. 27 We attribute the strong shift to the π-electrons of the conductive polymer that couple with the conduction band of gold. Subtraction of the AuNR@CTAB spectrum from the AuNR@PTEBS spectrum revealed the characteristic absorption band of PTEBS at λmax = 415 nm (Fig. 2c, inset) from the polymer attached to the gold surface.
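The difference-spectrum step is a simple array operation. A sketch, assuming two absorbance spectra stored as two-column CSV files on a common wavelength grid (hypothetical file names):

```python
import numpy as np

# hypothetical files: column 0 = wavelength (nm), column 1 = absorbance,
# both spectra recorded on the same wavelength grid
ctab = np.loadtxt("aunr_ctab.csv", delimiter=",")
ptebs = np.loadtxt("aunr_ptebs.csv", delimiter=",")

wl = ctab[:, 0]
diff = ptebs[:, 1] - ctab[:, 1]     # difference spectrum

window = (wl > 350) & (wl < 500)    # search near the reported 415 nm band
peak_wl = wl[window][np.argmax(diff[window])]
print(f"difference-spectrum maximum at {peak_wl:.0f} nm")
```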
The completeness of the ligand exchange was confirmed by IR spectroscopy. Fig. 3a compares the fingerprint regions of pure PTEBS, AuNR@CTAB and AuNR@PTEBS. Pure polymer exhibited characteristic vibration bands of the sulfonate group 28 (νs: 1042 cm−1; νa: 1175 cm−1) in the side chain. The original rods, AuNR@CTAB, exhibited two prominent peaks at 910 and 960 cm−1. After ligand exchange, AuNR@PTEBS showed only the vibrations of the sulfonate group. The signals in the region of the two prominent peaks from CTAB were negligible, confirming that CTAB had been removed and replaced by PTEBS on the surface of the AuNRs.
Raman spectroscopy was used to determine the PTEBS-Au binding motifs (Fig. 3b). The signal of the Au-bromide bond of AuNR@CTAB occurred at a Raman shift of 182 cm−1, as reported in the literature. 20,26 Modified AuNRs did not show this but two other peaks at 172 and 278 cm−1. The broad peak at 278 cm−1 is in the region where Au-S bonds are typically found. 20 To clarify whether the peak at 172 cm−1 originated from the aromatic ring or from the side chain of PTEBS, AuNR@CTAB were plasma-cleaned until the Au-bromide bond was no longer visible in the Raman spectra. The cleaned surface was dipped into pure thiophene. The resulting spectra (Fig. S4†) evidenced that both peaks found for AuNR@PTEBS arise from thiophene rings adsorbed onto gold. We conclude that PTEBS binds to the AuNRs with its conductive backbone as a multidentate ligand.
TEM images of AuNR@PTEBS showed dry ligand shells with thicknesses that varied between 0.7 nm and 2.1 nm (Fig. 4a). Thermogravimetric analysis (TGA) on thoroughly washed AuNR@PTEBS resulted in 2.9% mass loss after heating to 800 °C. We converted this value to an average shell thickness using a geometrical model described in the ESI (Table S1†). The packing density of the polymer was estimated for the packing depicted in Fig. 4b with the distance of two neighboring polymer chains (a = 0.90 nm), 28 the π-stacking distance (b = 0.38 nm), 29 and the size of one monomer unit (c = 0.39 nm). 29 The dry density of perfectly packed PTEBS on the AuNR surface was 7.5 monomers per nm³ (3.5 g cm−3) according to this model. This corresponds to a volumetric shrinkage upon ligand removal of 16.1% and an average dry ligand shell thickness of 0.9 nm, in the range of thicknesses observed in TEM.
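The quoted packing density follows directly from the three spacings. A sketch reproducing the 7.5 monomers per nm³ figure; note that converting to a mass density requires the monomer molar mass, which is not given in the excerpt, so the value below is an assumption chosen to be consistent with the reported 3.5 g cm−3:

```python
a, b, c = 0.90, 0.38, 0.39    # nm: chain spacing, pi-stacking distance, monomer length

n_density = 1.0 / (a * b * c)  # monomers per nm^3 for the ideal packing
print(f"{n_density:.1f} monomers per nm^3")       # ~7.5, as reported

N_A = 6.022e23
M_monomer = 281.0              # g/mol, assumed monomer mass (not given in the excerpt)
rho = n_density * 1e21 * M_monomer / N_A          # 1 nm^-3 = 1e21 cm^-3
print(f"{rho:.1f} g per cm^3")                    # ~3.5 with this assumption
```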
Conjugated polymers can bind to a surface face-on or edge-on (Fig. 4c). 30 The binding type affects the electronic properties of the coated particle: the edge-on configuration creates spacing between the conjugated polymer backbone and the metal surface. D. Tanaka et al. and Y. Abe et al. demonstrated that the spacing between the metal surface and a π-electron system affects electronic coupling. 31,32 Hence, face-on adsorption of the polymer onto the AuNRs is beneficial for the conductivity of particle-particle interfaces. Our Raman study shows that PTEBS binds face-on with its conductive backbone and not with its side chains. This is in accordance with results previously reported for poly(3-hexylthiophene) (P3HT), a polymer with the same backbone, that adsorbs face-on on the Au(111) surface. 33 According to the TGA data, each AuNR is surrounded by an average of three polymer layers that bind to the gold and to each other through π-stacking interactions.
We estimated the binding strength of the multidentate ligand from the amount of desorbed polymer measured by inductively coupled plasma mass spectrometry (ICP-MS). A dilute particle dispersion was thoroughly purified to remove free polymer, and the closed vessel was shaken at room temperature for one week to reach equilibrium. All particles were then removed by centrifugation. We found 1.1 ± 0.02 ppm of sulphur in the freshly purified, particle-containing sample and 0.4% of it (4.9 ± 0.07 ppb) in the supernatant of the centrifuged sample, demonstrating strong binding of the polymer.
Colloidal stability
Inkjet inks are typically formulated in a mixture of solvents; often water-alcohol mixtures are most convenient. 5 The poor colloidal stability of AuNR@CTAB in such mixtures limits their use in printed electronics. The rods aggregate even in pure water unless excess CTAB is added, and small amounts of short-chain alcohols or acetone precipitate them. 16 We compared the colloidal stability of AuNR@CTAB to that of AuNR@PTEBS by centrifuging them, separating the supernatant, and adding different solvents to redisperse them after washing. It was easy to fully redisperse AuNR@PTEBS in short-chain alcohols and in acetone. In a second experiment we introduced rods into solvent mixtures by first dispersing AuNRs in water and adding the second solvent subsequently to a final ratio of 75/25 solvent/water (v/v). Fig. 5a shows that the AuNR@CTAB dispersions responded to the addition of methanol and acetone with a color change that was visible to the naked eye after seconds. The corresponding blueshift and the decrease in intensity of the L-LSPR band (Fig. 5a) are due to side-by-side assembly of the nanorods (Fig. 5c). 34 Aggregates of AuNR@CTAB in methanol and acetone precipitated irreversibly after minutes. The same experiments with AuNR@PTEBS yielded stable dispersions (Fig. 5b) with a slight red-shift in the L-LSPR bands that is due to the change of refractive index (RI) caused by the second solvent.
We formulated inks from AuNR@PTEBS in water, short-chain alcohols, acetone, and their mixtures. Their shelf lives depended on the polarity of the solvent, as expected for electrosterically stabilized colloids with a zeta potential of −40 mV. Inks in pure acetone or alcohols remained stable for 1-2 weeks. Increasing water content increased stability, and a fully aqueous ink with 100 mg mL−1 (12 wt%) particle content was stable under shaking for at least 10 months.
Conductivity of deposited inks
The electron transport properties of the modified AuNRs were measured in films deposited from concentrated inks. We compared the results to the properties of AuNRs coated with thiol-terminated poly(ethylene glycol) (AuNR@PEG-SH). 20 Dense lines of AuNRs with a thickness of 1 ± 0.2 µm, determined by profilometry, were deposited onto sputtered gold electrodes through masks (Fig. 6a and b) with inks that contained 25 mg mL−1 (3 wt%) AuNRs in water/methanol (25/75; v/v). No post-treatment was performed after drying at room temperature (see detailed deposition parameters in the ESI†). Their conductivity was calculated from measured current-voltage (I-V) curves (Fig. 6c). Note that, in Fig. 6c, current is normalized to the thickness of each line so that the resistivity of the material is equal to the inverse of the slope. AuNR@PTEBS lines were conductive without any further treatment; they exhibited a resistivity of 7.0 × 10−6 Ω m, equivalent to a sheet resistance of 276 mΩ sq−1 per mil, with a relative standard deviation of 15%. The resistivity of as-deposited lines of AuNR@PEG-SH was above the limits of our measurement (330 Ω m), as expected for inks containing a non-conductive polymer ligand.
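The 'per mil' sheet resistance is simply the resistivity divided by a 1 mil (25.4 µm) reference thickness. A sketch reproducing the quoted figure of merit, and showing the basic resistivity calculation for a line of illustrative (assumed) geometry:

```python
rho = 7.0e-6                   # ohm*m, measured resistivity of the AuNR@PTEBS lines

mil = 25.4e-6                  # m; 1 mil reference thickness
print(f"{rho / mil * 1e3:.0f} mOhm/sq per mil")   # ~276, as reported

def resistivity(R_ohm, length_m, width_m, thickness_m):
    """rho = R * A / L for a line with rectangular cross-section."""
    return R_ohm * width_m * thickness_m / length_m

# illustrative (assumed) geometry: a 5 mm long, 1 mm wide, 1 um thick line of 35 ohm
print(resistivity(R_ohm=35.0, length_m=5e-3, width_m=1e-3, thickness_m=1e-6))
```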
A 30 min exposure to a H2/Ar-plasma removed the organic ligands from the AuNR@PEG-SH film to below the detection limits of Raman spectroscopy, and the film became conductive. The structure of the nanorods was largely retained during the treatment (Fig. 6b), but the ligand shell removal caused volume shrinkage. The resistivity of the plasma-annealed AuNR line was 4.5 × 10−6 Ω m (177 mΩ sq−1 per mil). This is about one order of magnitude above the resistivities reported for fully sintered, nanoparticle-based inks where individual particles cannot be distinguished anymore. 35 The resistivity of the AuNR@PTEBS lines (7.0 × 10−6 Ω m) is less than double that of the plasma-treated AuNRs. This resistivity is about 250 times that of bulk gold and similar to that of nichrome. 36 It is considerably lower (about 10 000 times) than that of purely organic conductive polymers and mixtures such as poly(3,4-ethylenedioxythiophene) with polystyrene sulfonate (PEDOT:PSS). 37 The sintering-free formulation can be applied like a regular ink: we loaded a fountain pen with AuNR@PTEBS at 25 mg mL−1 (2.6 wt%) in an isopropanol/water 10/90 (v/v) mixture and drew a circuit on glossy paper (Fig. 6c, inset). The pattern dried within minutes and was conductive enough to power a light-emitting diode (LED).
Applicability to other polymers
We assessed the versatility of the developed ligand exchange protocol using a structurally different polymer: PEDOT:PSS. This polymer fulfills the requirements for ligands listed above. No changes in the ligand exchange procedure were required to successfully coat AuNRs with PEDOT:PSS.
The resulting AuNR@PEDOT:PSS dispersions possessed a negative zeta potential (−45 mV) similar to that of AuNR@PTEBS and blueshifted LSPRs. They formed stable inks in short-chain alcohols and in acetone. The resistivity of deposited lines of AuNR@PEDOT:PSS was 9.9 × 10−7 Ω m (39 mΩ sq−1 per mil), 5 times lower than the resistivity of plasma-annealed AuNRs and 7 times lower than that of the AuNR@PTEBS line. We believe that the soft PEDOT:PSS shell increases the effective contact area between nanorods. Further experiments have to be performed to clarify the exact mechanism of inter-particle charge transfer.
Long-term stability is a critical property for printed electronics, and PEDOT:PSS is acidic enough to corrode metals. 38 We performed long-term experiments and stored samples under ambient conditions. Lines of both AuNR@PTEBS and AuNR@PEDOT:PSS retained their electrical performance for at least 1 year. No visible signs of degradation occurred.
Ink requirements for printing
Printing requires inks with good colloidal stability and suitable rheological properties. Agglomeration leads to inhomogeneous deposition and equipment damage, 5 and inappropriate fluid properties and wetting behavior drastically reduce printing quality. 39 Our particles are colloidally stable in a wide range of formulations. As an example, we investigated the rheological properties of a formulation that is suitable for inkjet printing, AuNR@PEDOT:PSS at 100 mg mL−1 (12 wt%) in isopropanol/water (10/90; v/v), and measured its density (ρ), viscosity (η) and surface tension (γ). The Ohnesorge number (Oh, eqn (1)) characterizes fluids in inkjet printing:

Oh = η/√(ργa),   (1)

where a is a characteristic length, usually the nozzle diameter. Stable printing is possible 39 for Oh = 0.1-1.0, which implies an upper limit of the nozzle diameter of 3.3 µm for our ink, suitable for very high resolution printing. It is straightforward to tune the viscosity; for example, adding 1 mg mL−1 of excess PEDOT:PSS (i.e. 0.1 wt% of the liquid ink) increased the viscosity to 4.6 cP, made the ink suitable for larger low-cost nozzles, and retained the conductivity of the printed lines.
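A sketch evaluating the reconstructed Ohnesorge relation; the fluid parameters below are assumptions (the measured values are garbled in this copy of the text), chosen only to illustrate how a micrometre-scale nozzle limit arises and how raising the viscosity relaxes it:

```python
import math

eta = 1.2e-3    # Pa*s, assumed viscosity of the dilute aqueous ink
rho = 1.0e3     # kg/m^3, assumed density
gamma = 40e-3   # N/m, assumed surface tension

def ohnesorge(eta, rho, gamma, a):
    return eta / math.sqrt(rho * gamma * a)

# Oh decreases as the nozzle diameter a grows, so Oh = 0.1 sets the upper limit on a
a_max = eta**2 / (rho * gamma * 0.1**2)
print(f"a_max = {a_max * 1e6:.1f} um, Oh(a_max) = {ohnesorge(eta, rho, gamma, a_max):.2f}")

# raising the viscosity to 4.6 cP (excess PEDOT:PSS) relaxes the limit considerably
eta2 = 4.6e-3
print(f"at 4.6 cP: a_max = {eta2**2 / (rho * gamma * 0.1**2) * 1e6:.0f} um")
```

With these assumed values the limit is a few micrometres for the low-viscosity ink and roughly fifty micrometres at 4.6 cP, consistent with the trend described in the text.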
Conclusions
In summary, thiophene-based conjugated polymers with polar side chains prove to be highly suitable ligands for AuNRs in electronic applications. We developed a simple and straightforward protocol to obtain concentrated, stable colloidal inks suitable for printing. IR and Raman spectroscopy confirmed the complete exchange of CTAB on the AuNRs. PTEBS binds in a face-on configuration with, on average, 3 layers of polymer π-stacked on the gold surface, a configuration that facilitates electron transport through particle-particle interfaces. Deposited, untreated films reached conductivities comparable to plasma-annealed AuNRs, and no signs of degradation were observed after storing them for one year under ambient conditions. The ligand exchange protocol is also applicable to other polythiophenes, as we demonstrated by preparing AuNR@PEDOT:PSS inks. Printed films of AuNR@PEDOT:PSS reached conductivities that surpassed those of AuNR@PTEBS films.
We expect the developed ligand exchange protocol to be applicable to polythiophene derivatives beyond the two examples presented here. The concept is not limited to AuNRs: other conductive or semi-conductive nanoparticles can be coated with conductive polymer ligands to increase inter-particle charge transfer. Nanoparticle-based materials in many applications can profit from this concept. Sintering-free conductive particle packings are a step towards conductive composites of nanoparticles in insulating polymer matrices. | 2018-09-07T08:09:32.150Z | 2016-03-15T00:00:00.000 | {
"year": 2016,
"sha1": "1b95f7950c1325f0679b1921e44e06284369fc9e",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/sc/c6sc00142d",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b95f7950c1325f0679b1921e44e06284369fc9e",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
31017741 | pes2o/s2orc | v3-fos-license | Helicobacter species ribosomal DNA in the pancreas, stomach and duodenum of pancreatic cancer patients
AIM: To determine whether gastric and enteric Helicobacter species are associated with pancreatic cancer. METHODS: Patients with exocrine pancreatic cancer (n = 40), neuroendocrine cancer (n = 14), multiple endocrine neoplasia type 1 (n = 8), and chronic pancreatitis (n = 5) were studied. Other benign pancreatic diseases (n = 10) and specimens of normal pancreas (n = 7) were included as controls. Pancreatic tissue specimens were analyzed by Helicobacter-specific PCR-assay and products were characterized by denaturing gradient electrophoresis and DNA-sequencing. From a subset of the pancreatic cancer patients, gastric and/or duodenal tissue as well as gallbladder and ductus choledochus tissue were analyzed. Gallbladder and choledochus samples were included as controls. Stomach and duodenum samples were investigated to analyze whether a gastric helicobacter might disseminate to the pancreas in pancreatic cancer patients. Pancreatic specimens were analyzed by Bacteroides-specific PCR for detecting the translocation of indigenous gut microbes to the diseased pancreas. RESULTS: Helicobacter DNA was detected in pancreas (tumor and/or surrounding tissue) of 75% of patients with exocrine cancer, 57% of patients with neuroendocrine cancer, 38% of patients with multiple endocrine neoplasia, and 60% of patients with chronic pancreatitis. All samples from other benign pancreatic diseases and normal pancreas were negative. Thirty-three percent of the patients were helicobacter-positive in gastroduodenal specimens. Surprisingly, H. bilis was identified in 60% of the positive gastroduodenal samples. All gallbladder and ductus choledochus specimens were negative for helicobacter. Bacteroides PCR-assay was negative for all pancreatic samples. CONCLUSION: Helicobacter DNA commonly detected in pancreatic cancer suggests a possible role of the emerging pathogens in the development of chronic pancreatitis and pancreatic cancer.
INTRODUCTION
Pancreatic carcinoma, an extremely aggressive cancer with very poor prognosis, is one of the leading causes of cancer-related death in the Western world [1]. Consistently reported risk factors are age and cigarette smoking [1], whereas approximately 5% of pancreatic cancers seem to be primarily related to genetic traits [2]. Some early case control and data register studies suggested that chronic pancreatitis is a risk factor for pancreatic cancer [3,4], which has been confirmed recently in prospective studies of chronic pancreatitis [5,6]. Helicobacter pylori (H pylori), the prototype species of the genus Helicobacter, which colonizes the stomach mucosa and causes acute and chronic gastritis as well as peptic ulcer disease, is a major predisposing factor for gastric cancer in human beings [7]. Other Helicobacter and Campylobacter species, including bile tolerant enteric Helicobacter species, colonize the intestine and hepatobiliary tract of many mammals and birds. Helicobacter hepaticus is the type organism of the enteric Helicobacter species and induces chronic active hepatitis, liver fibrosis, hepatocellular carcinoma, as well as inflammatory bowel disease in susceptible inbred strains of mice [8,9]. Enteric Helicobacter species such as H. pullorum, H. canis and H. cinaedi are associated with hepatitis in poultry, dogs, and macaques, respectively, as well as gastroenteritis and bacteremia in human beings [9]. H pylori and the enteric species H. bilis, as well as H pylori-like H. species 'liver', are associated with biliary tract cancer, some chronic liver diseases as well as hepatocellular carcinoma in humans [10][11][12][13][14]. Two serology-based case-control studies have shown an association between H pylori and pancreatic cancer [15,16], suggesting a possible relationship between helicobacter infections and pancreatic cancer development, which is supported by a recent study of Helicobacter species in a small number of patients with pancreatic exocrine cancer [17].
The purpose of the present investigation was to analyze the prevalence of gastric and enteric Helicobacter species DNA in samples from pancreatic cancer and chronic pancreatitis. Specimens from some benign pancreatic diseases (cysts or adenoma) as well as normal pancreatic tissue from patients with colon or choledochus cancer were included as controls. Translocation of some enteric bacteria to a diseased pancreas was analyzed by Bacteroides genus-specific PCR assay on pancreatic tissue.
Patients
All patients were operated on at the Department of Surgery, Lund University Hospital. Formalin-fixed, paraffin-embedded pancreatic tissue samples from 84 patients were obtained from the Department of Pathology at the same hospital. Prior to de-embedding, one pathologist (U.S.) reviewed all samples, and approximately 100 mg of each tissue type was taken from the paraffin blocks with the tip of a scalpel. By carefully comparing the blocks with the slides, it was ascertained that a pure tissue type, i.e. tumor or normal etc., was obtained.
The tissue samples were from consecutive patients with primary exocrine cancer, predominantly of ductal type (PC) (n = 40, 20 females, mean age 59 years, range 44-77 years), neuroendocrine cancer (NE) (n = 14, 6 females, mean age 58 years, range 15-84 years), and multiple endocrine neoplasia type 1 (MEN) (n = 8, 1 female, mean age 52 years, range 42-69 years). In addition to a tumor specimen available from all 62 PC-, NE-and MEN patients, a sample of adjacent normal tissue was obtained from 41 patients (66%). Atrophic pancreatic tissue was available from 13 of the PC patients. Thus, 116 pancreatic tissue samples (1.9 samples per patient in average) were analyzed. Samples from patients with chronic pancreatitis (CP) of alcoholic, idiopathic, and epithelioid cell granulomatosis etiology (n = 5, 1 female, mean age 52 years, range 42-79 years), were also examined.
Pancreatic tissue specimens from patients with benign (other than pancreatitis) pancreatic diseases (n = 10, 8 females, mean age 55 years, range 24-71 years) such as mucinous cystadenoma (n = 5), serous cystadenoma (n = 3), pancreatic cysts (n = 2), as well as histologically normal pancreatic tissue samples from patients with cancer of ductus choledochus (n = 4, 1 female, mean age 65 years, range 55-75 years), colon (male, mean age 60 years), duodenum (male, mean age 64 years), and retroperitoneal fibrosis (male, mean age 57 years), were included as con-trols (collectively denoted C). Gastric tissue samples of the antrum and/or fundus, as well as duodenal samples, were obtained from 23 PC patients and four NE-and MEN patients. An average of 2.5 stomach and/or duodenum specimens was tested per patient. Specimens of the gallbladder (n = 18) and ductus choledochus (n = 8) were also obtained from PC-, NE-and MEN patients. This study was approved by the Research Ethics Committee at Lund University (LU 726-02).
Preparation of DNA
Paraffin-embedded tissue samples were heated at 60 ℃ for 10 min to melt excess paraffin, aseptically transferred to new micro-centrifuge tubes, washed in xylene for 2 × 5 min, rehydrated through graded ethanol (990 mL/L and 950 mL/L for 2 × 5 min and 700 mL/L for 5 min), and finally washed for 5 min in double-distilled water. Subsequently, the samples were homogenized in 170 mmol/L phosphate-buffered saline pH 7.2 by using a plastic microcentrifuge tube-adapted pestle. Pancreatic tissue samples were homogenized at 10 g/L and other gastrointestinal specimens at 40-50 g/L. DNA was extracted from 100 µL of each homogenate by the QIAamp DNA Mini Kit Tissue protocol (Qiagen, Hilden, Germany) according to the manufacturer's instructions. All DNA samples were stored at -20 ℃.
Helicobacter genus-specific PCR
DNA-extracts were amplified in a GeneAmp 2700 Thermocycler (Applied Biosystems, Foster City, CA, US) by semi-nested PCR-assay for Helicobacter species as previously described [18] with primers constructed by Goto et al [19] . The forward primer 1F (5´-CTATGACGGGTATCCGGC-3´) and reverse primer 1R (5´-CTCACGACACGAGCTGAC-3´) were used in the first step. In the second step, primer 1F and reverse primer 2R (5´-TCGCCTTCGCAATGAGTATT-3´) were used. Precautions were taken to minimize the risk of PCR cross contamination as described recently [20] . Detection of PCR-products was done in agarose gels as described previously [18] .
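When adapting a published assay, it is routine to sanity-check the primers. A Biopython sketch computing nearest-neighbour melting temperatures for the genus-specific primers quoted above, using the library's default buffer conditions:

```python
from Bio.SeqUtils import MeltingTemp as mt

primers = {
    "1F": "CTATGACGGGTATCCGGC",
    "1R": "CTCACGACACGAGCTGAC",
    "2R": "TCGCCTTCGCAATGAGTATT",
}

for name, seq in primers.items():
    # nearest-neighbour melting temperature with Biopython's default conditions
    print(f"{name}: Tm = {mt.Tm_NN(seq):.1f} C ({len(seq)} nt)")
```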
Bacteroides genus-specific PCR
To rule out the non-specific translocation of a common gut microbe to the pancreas among the studied patients, a pancreatic tissue sample (n = 84) from each patient was amplified by nested PCR with primers for the glutamine synthase gene of Bacteroides species. Primers BFR-1 (5´-ACTCTTTGTATCCCGACGATT-3´) and BFR-2 (5´-GAGGTTGATGCCTGTATCGGT-3´), described by Kane et al [21], were used. For the second step, internal primers (forward primer BFR-3: 5´-GACAAAAACATCACCCGGGT-3´ and reverse primer BFR-4: 5´-GCCCAGCTTGTGACACTCTATT-3´), based on the sequence of the BFR1-BFR2 PCR-product, were constructed using the Vector NTI Suite version 8.0 (Informax, Frederick, MD, US). The specificity of the Bacteroides species PCR-assay was evaluated using strains from the ATCC and CCUG. PCR-mixtures were prepared as described previously [18]. Amplification conditions for the first step were at 94 ℃ for 4 min; 30 cycles at 94 ℃ for 30 s, at 60 ℃ for 30 s, at 72 ℃ for 45 s; and finally at 72 ℃ for 5 min. Conditions for the second step were at 94 ℃ for 10 min; 35 cycles at 94 ℃ for 30 s, at 60 ℃ for 30 s, at 72 ℃ for 30 s; and finally at 72 ℃ for 5 min. Genomic DNA (0.1 ng) of B fragilis was used as a positive control.
Denaturing gradient gel electrophoresis
Denaturing gradient gel electrophoresis (DGGE) analysis of the V6-7 region of Helicobacter species 16S rDNA was performed as described previously [18] . Diluted (10×) first step PCR-products were amplified in the second step using forward primer GC-1F (5´-GCGGCCGCCCGTCCC-
DNA-sequence analysis
Nucleic acid products of the Helicobacter genus-specific PCR-assay were purified from agarose gels using the Montage DNA Gel Extraction Kit (Millipore, Bedford, MA, US), or from DGGE-gels as described previously [18]. DNA-sequence reactions were performed using the ABI PRISM dRhodamine Terminator Cycle Sequencing Ready Reaction Kit version 3.0 (Applied Biosystems) with modifications. One microliter of a BigDye mix and 1.5 µL of sequencing buffer (10 µL of 10× PCR-buffer II, 6 µL 25 mmol/L MgCl2, 4 µL double-distilled water) were prepared in a total volume of 10 µL with primers (1F or 2R) and template according to the manufacturer's instructions. Products of the sequence reaction were aligned and the closest homologous DNA was identified by BLASTn analysis as described elsewhere [17].
Helicobacter PCR of pancreas
As a rule, two pancreatic specimens were obtained from each patient in the PC-, NE- and MEN-groups. If at least one specimen was positive, the patient was considered Helicobacter-positive. Hence, 75% (tumor and/or surrounding tissue) of the PC-, 57% of the NE-, and 38% of the MEN patients were positive for the genus Helicobacter (Table 1). Three (one alcoholic, one idiopathic and one with epithelioid cell granulomatosis) of five patients with chronic pancreatitis were Helicobacter-positive. All benign tissue samples from cystadenoma and pancreatic cyst patients, as well as pancreatic tissue from the remaining C-patients, were negative for the genus Helicobacter (Table 1).
Helicobacter DNA was detected in 48% of tumors of PC patients (11 PC patients were PCR-negative in the tumor but positive in surrounding normal or atrophic tissue; hence, a higher proportion of PC patients overall [75%] than of PC tumors [48%] was Helicobacter-positive), 57% of neuroendocrine tumors, and 38% of MEN tumors (Table 2). Atrophic and normal pancreatic tissues of the PC patients were positive in 69% and 36% of the samples, respectively, whereas only 5% of normal pancreatic tissues surrounding NE and MEN tumors were Helicobacter-positive (Table 2). If a PC patient was positive for Helicobacter species in a tumor sample, the surrounding normal tissue was often negative, and vice versa. Positive atrophic tissue was common in both tumor-negative and tumor-positive patients (Table 2).
Helicobacter PCR in the stomach, duodenum, gallbladder and ductus choledochus
Stomach and/or duodenum samples demonstrated a Helicobacter-positive result in 33% of the patients (Table 1). Of the 40 PC patients, stomach and/or duodenum samples were obtained from 23. Among these 23 patients, 30% were positive in the stomach and/or duodenum, whereas 74% were positive in the pancreas. Four PC patients and one NE patient were simultaneously positive both in the pancreas and in a gastric and/or duodenal specimen (Figure 1). All gallbladder and ductus choledochus tissue samples were negative in the Helicobacter genus-specific PCR (Table 1).
Bacteroides genus-specific PCR
The expected 600-bp product was amplified with genomic DNA of B. fragilis as a template in the first step of the assay. However, after nested analysis, the expected 228-bp fragment was also produced with E. cloacae DNA. Extracted genomic DNA of the other tested reference strains was negative. A pancreatic sample from each patient (n = 84), including tumor, benign and normal tissue, was analyzed. None of the samples showed positive amplification for the genus Bacteroides in the nested PCR-assay.
DGGE-and DNA sequence analysis
Forty-six Helicobacter 16S ribosomal DNA PCR-products were identified by DGGE, and 20 of those were subjected to DNA-sequence analysis. DGGE revealed four migration profiles, similar to the reference strains of H pylori, H. sp. flexispira, H. bilis and H. hepaticus (Figure 1). The two methods were in concordance except for two PCR-products that migrated with H. sp. flexispira in DGGE analysis but showed the highest similarity to H. cinaedi after sequence and BLASTn analysis. Twenty-seven (of 29) identified H pylori sequences were amplified from pancreatic tissue samples. Sequences of H. sp. flexispira (n = 7) and H. cinaedi (n = 2) were only detectable in pancreatic tumor specimens (Figure 1, patient no. 7). Agarose- and DGGE-gel images, as well as the results of DNA-sequencing of pancreatic and gastric and/or duodenal samples of 7 patients, are shown in Figure 1.
DISCUSSION
We analyzed the prevalence of Helicobacter species ribosomal DNA by PCR in paraffin-embedded pancreatic and gastroduodenal samples from patients with pancreatic cancer, in tissues of benign pancreatic diseases, and in controls of normal pancreas (from choledochus and colon cancer patients). Helicobacter species DNA was identified in the pancreas of 75% of the PC patients, 57% of the NE patients, 38% of the MEN patients, and 60% of the patients with chronic pancreatitis. Other benign pancreatic diseases and the normal pancreas controls were all Helicobacter-negative (Table 1). Detection of Helicobacter species in the pancreas is thus related to pancreatic cancer and chronic pancreatitis. Serological studies have previously demonstrated an association between serum antibodies to H pylori and pancreatic cancer [15,16]. Moreover, H pylori increases the severity of tissue inflammation and production of proinflammatory cytokines in a rat model of ischemia/reperfusion-induced pancreatitis [22]. The distribution of Helicobacter ribosomal DNA in tumor and normal tissue of pancreatic cancer patients was also studied. Helicobacter species were commonly detected in tumors (48% of PC, 57% of NE, and 38% of MEN). The prevalence of helicobacter was much lower in the normal pancreas surrounding NE and MEN tumors (5%) than surrounding PC (36%) (Table 2). There are at least two possible explanations. One is that helicobacter bacteria and/or helicobacter DNA may be taken up and retained by diseased tissues such as tumors; the other is that helicobacter cells and/or cell debris in the pancreas may be implicated in the genesis of PC and are therefore also found in the non-tumor pancreas, whereas in NE and MEN helicobacters may be an epiphenomenon and are thus uncommon in the non-tumor pancreas. Consistent with the findings in normal tissues surrounding tumors in PC patients compared with NE and MEN, we have previously detected Helicobacter species in liver tissue surrounding primary liver carcinoma but not colorectal liver metastases [23]. Hence, a possible participation of Helicobacter species in the genesis of exocrine pancreatic and liver carcinoma has to be further explored. Moreover, PCR detection of H pylori in the liver is associated with cirrhosis in hepatitis C patients with or without hepatocellular carcinoma [24], in analogy to H pylori-associated tissue inflammation in chronic atrophic gastritis progressing to gastric cancer [7,25].
Table 3 (caption): Distribution of Helicobacter 16S ribosomal DNA sequences identified using DNA-sequence and/or DGGE-analysis of the V6-region.
A majority of the PCR-products from pancreatic cancer samples are related to 16S ribosomal DNA sequences of H pylori previously identified in gallbladder tissue samples [26], and to H. sp. 'liver', a putative subspecies of H pylori with liver tropism, detected in hepatocellular carcinoma [10], supporting the hypothesis of an H pylori subpopulation with hepatobiliary tropism [27]. We also identified H. cinaedi and H. sp. flexispira taxon 8 in some exocrine pancreatic cancer samples (Table 3). These species, with a broad mammalian host range, have been isolated from patients with bacteremia and gastroenteritis [28].
A low prevalence of H pylori DNA was found in gastric and duodenal samples from the patients in this study (Table 3). At least half of the adult human population is infected with H pylori [7], but only 14% of the PCR-products from the stomach of patients with exocrine pancreatic cancer could be identified as H pylori. However, H. bilis was identified in 60% of the helicobacter-positive gastroduodenal specimens. H. bilis is associated with chronic hepatitis and chronic enteric inflammation in susceptible laboratory mouse strains [28] and has recently been identified in both diseased human gallbladder and bile by PCR-based methods [12]. To our knowledge, H. bilis has not previously been detected in tissue samples of the human stomach and small intestine. The reason why the detection rate of H pylori is low in the stomach and duodenum of PC patients remains obscure. It may partly be explained by the fact that most or all patients were given metronidazole preoperatively.
We analyzed stomach and duodenum tissue specimens from pancreatic cancer patients to determine whether gastric Helicobacter species, such as H pylori, may disseminate to the pancreas in pancreatic cancer patients. However, in patients who were Helicobacter-positive in both the stomach and the pancreas, the Helicobacter species identified in the pancreas differed from those in gastroduodenal tissue (Figure 1). Moreover, many pancreas-positive PC patients were negative in stomach samples and vice versa, which does not support migration of helicobacter microorganisms colonizing the stomach to the pancreas in the studied PC patients.
B. fragilis constitutes a part of the indigenous microflora of the human gut, predominantly of the colon. A nested PCR-assay for the genus Bacteroides was designed to study whether a major constituent of the normal microflora of the lower bowel could translocate to a diseased pancreas. None of the pancreatic tissue specimens was positive for Bacteroides spp., suggesting that bacterial translocation from the bowel to the pancreas does not frequently occur in patients with cancer or a benign pancreatic disease. However, the indigenous microflora may more easily be washed off tissues during sample preparations such as paraffin de-embedding. Putative pathogens, such as some Helicobacter species, might be more closely associated with the gut mucosa.
Chronic inflammation is a characteristic feature of gastric, colon and hepatobiliary tract cancers [29]. Bacterial cell-wall peptidoglycan and lipopolysaccharide can stimulate human innate immunity and induce inflammation. Bacterial DNA containing so-called CpG motifs has been shown to activate macrophages and neutrophils, to promote cell migration, and to induce B cell activation and hyper-IgM production in patients with primary biliary cirrhosis [30,31]. Proinflammatory cytokines, reactive oxygen species and other inflammatory mediators are associated with chronic Helicobacter-induced tissue inflammation [25]. Such factors probably increase genomic DNA damage and cell proliferation, as well as inactivate tumor-suppressor genes, events also associated with malignant transformation of pancreatic cells [1,6,32]. Tumor-associated chronic inflammation [29] may be induced and maintained by bacteria or bacterial cell debris, originating from helicobacter and/or other microbial species.
In conclusion, 16S ribosomal DNA of gastric H pylori and some enteric Helicobacter species is commonly detectable in tissue samples from patients with pancreatic cancer but not from controls. To further explore a possible role of gastric and enterohepatic Helicobacter species in pancreatic malignancy, more studies of pancreatic cancer and the emerging Helicobacter genus and related organisms are needed.
"year": 2006,
"sha1": "9f164a7d032e8f0a9bcb1d9fe5fcbe37918f0b85",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v12.i19.3038",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f90800c1edfeb532f141af0946d886af51fbccb1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Nationwide trends in incidence, healthcare utilization, and mortality in hospitalized heart failure patients in Taiwan
Abstract Aims The objective of this study was to estimate the nationwide annual incidence, healthcare utilization, and mortality among hospitalized heart failure (HF) patients in Taiwan. Methods and results People aged 20 years or older who had been newly admitted for HF between 2010 and 2015 were identified from Taiwan's National Health Insurance Research Database. For 124 816 patients with incident HF hospitalizations between 2010 and 2012, we further analysed their treatment patterns, healthcare utilization, and mortality during the index hospitalization and within 3 years following discharge from the index hospitalization. The age-stratified incidences declined by 10-20% in people aged 55 years or older, but increased by ~4% among people younger than 45 years old between 2010 and 2015. For all incident hospitalized HF patients, the percentages of patients who visited the emergency room, were rehospitalized, or were treated with guideline-directed medical therapy were highest in the first year. Approximately two-thirds of subsequent hospitalizations were due to non-HF and non-cardiovascular causes. The all-cause mortality rate during the index hospitalization was 8.5%, whereas the mortality rates at 30 days, 90 days, 180 days, 1 year, 2 years, and 3 years following discharge were 3.5%, 8.9%, 14.4%, 22.5%, 33.9%, and 42.8%, respectively, for those surviving the index HF hospitalization. Non-cardiovascular disease-related deaths accounted for nearly 60% of all deaths. Conclusions Our study reveals that, in contemporary Taiwan, the >10% annual mortality following the first year of hospitalization, the 30% of deaths occurring outside the hospital, and the 60% of deaths from non-cardiovascular causes, along with the decreasing use of guideline-directed medical therapy, highlight sectors requiring more attention.
Introduction
Heart failure (HF) is associated with substantial risks of hospitalization and mortality and is regarded as an emerging pandemic, with an estimated 26 million patients worldwide. 1,2 To identify management gaps and allocate healthcare resources adequately for HF, contemporary population-level epidemiology with longer-term follow-up information is of vital importance. However, studies regarding the population-based prevalence and incidence of HF and its temporal trends are scarce, particularly in Asia. 3,4 In the real world, the diagnosis of HF is usually made on clinical grounds, 5 which casts doubt on the reliability of population-based prevalence data. Hospitalization for HF, compared with HF diagnosed in outpatient settings, is more reliable in disease ascertainment and a powerful predictor of rehospitalization and mortality. 6 In Taiwan, the crude incidence of HF hospitalization was 271 per 100 000 persons in 2005, according to a random sample of 1 million people from the National Health Insurance Program. 7 However, patients with a prior history of HF were not excluded, and no adjustment for a standard population was made, rendering comparisons with other studies challenging. Recent epidemiological studies showed a decreasing trend in standardized HF incidence in Western countries, 8,9 whereas there are no data from Asian countries. 4 Prior hospital-based registries worldwide consistently demonstrated that, in patients with HF hospitalization, the risks of death and recurrent hospitalization are greatest in the first 30 to 60 days after discharge, with rates approaching 15% and 30%, respectively. 10,11 Nevertheless, there is a paucity of data concerning longer-term (>2 years) outcomes in patients with HF hospitalization, which might show an even greater discrepancy between hospital-based registries and population-level epidemiology given the various limitations of chronic disease management in real life. Patients with HF often suffer from frailty and malnutrition, portending a worse prognosis indirectly related to HF. 12 To assess the impact of HF on morbidity and mortality comprehensively, non-HF-related hospitalization, non-HF-related mortality, and out-of-hospital mortality (deaths occurring outside the hospital) should be emphasized as well.
The objective of the present study was to fill the knowledge gaps regarding the current trends in the incidence of hospitalized HF and its longer-term (3 year) outcomes among patients newly hospitalized for HF from the 23 million people of Taiwan from 2010 to 2015, by using Taiwan's National Health Insurance Research Database (NHIRD). The NHIRD contains nationwide claims-based data embedded with comprehensive data on healthcare utilization. 13 Through this analysis, gaps in the real-world management of HF can be identified.
Data source
We performed a population-based retrospective longitudinal study based on data from Taiwan's NHIRD between 1 January 2009 and 31 December 2015.
The NHIRD is a nationwide database comprising anonymous eligibility and enrolment information as well as claims for visits, procedures, and prescription medications for more than 99% of the entire population (23 million) of Taiwan. Individual patients are recorded as entering the NHIRD when they are covered by Taiwan's National Health Insurance system, which is a mandatory, single-payer health insurance programme in Taiwan established in 1996. The NHIRD is organized by the government and operated by the National Health Insurance Administration. For each visit, the NHIRD has recorded dates (outpatient visits, admissions, and discharges), medical resource utilization (outpatient and inpatient visits), costs of services, medication prescriptions, and up to five diagnoses according to the International Classification of Diseases, 9th Edition (ICD-9 CM codes). The completeness and accuracy of the NHIRD are ensured by the Ministry of Health and Welfare and the National Health Insurance Administration and maintained by the Health and Welfare Data Science Center. The database has been described in detail elsewhere 14 and has been the source for numerous epidemiological studies published in peer-reviewed journals. 15 Mortality data obtained from the National Death Registry in Taiwan were used to estimate all-cause and cause-specific mortality rates according to the ICD, 10th Edition (ICD-10 CM codes). The accuracy of the coding has been validated by previous studies. 16,17

Ethical statement

The identification numbers for all entries in the NHIRD were encrypted to protect the privacy of individual patients. The study protocol was approved by the Institutional Review Board of the National Taiwan University Hospital (No. 201701105W).
Study population
This study is composed of two study designs, including a cross-sectional survey for exploring the nationwide temporal trends and a longitudinal cohort design for assessing the long-term healthcare utilization and cause-specific mortality in incident hospitalized HF patients in Taiwan.
For calculation of nationwide temporal trends, we identified adult patients, legally defined as aged 20 years or older in Taiwan, who had been admitted for HF, defined as a hospitalization with either a primary diagnosis of HF or with one of the first two secondary diagnoses being HF (ICD-9-CM codes: …).

To assess long-term healthcare utilization and cause-specific mortality in hospitalized HF patients in Taiwan, we identified incident hospitalized HF patients between 2010 and 2012 as a cohort. The cohort entry date was defined as the admission date of the incident HF hospitalization, and the index date was defined as the discharge date of the incident HF hospitalization. After cohort identification, patients were followed from the index date to whichever of the following events came first: (i) death or (ii) the end of the 3 year follow-up period.
Measurement of healthcare utilization
We collected detailed information about health service use, including pharmacological treatments, non-pharmacological treatments, outpatient visits, emergency department visits, and inpatient hospitalizations during the incident hospitalization and within 1, 2, and 3 years after the index date (Table 1). To comprehensively understand current treatment performance in HF, the pharmacological treatment evaluated in this study consisted of guideline-directed medical therapy (GDMT), treatment for symptom targets, and treatment for underlying diseases. The American College of Cardiology Foundation/American Heart Association introduced the term 'GDMT' in 2013 to represent Class I recommended therapies. 18 The GDMT, currently deemed one of the major performance measures for optimal treatment of HF, includes the use of any beta-blockers, angiotensin-converting enzyme inhibitor (ACEI), angiotensin II receptor blocker (ARB), loop diuretics, thiazide diuretics, and aldosterone antagonist. 18 To assess the indirect impact of HF on morbidity, non-HF-related hospitalization, including hospitalization for other cardiovascular diseases and other diseases (except HF and cardiovascular diseases), was also analysed.
Measurement of mortality
To estimate all-cause and cause-specific mortality during the incident HF hospitalization and within 30 days, 90 days, 180 days, 1 year, 2 years, and 3 years after the index date, we used the National Death Registry in Taiwan, which records the cause of death for all deceased citizens. Causes of death and their corresponding codes are presented in Table 2. In addition to overall mortality, in-hospital and out-of-hospital mortality were also reported to assess the impact of HF on mortality comprehensively. In-hospital mortality was defined as a death in a patient who was hospitalized on the day of death. Because a substantial number of patients undergo terminal discharge 19 under critical conditions, which is also called impending-death discharge 20 or going home to die 21 and is a traditional custom in Taiwanese society, patients discharged within 3 days before the day of death were also considered in-hospital mortality. 20 Out-of-hospital mortality, death occurring outside the hospital, was defined as a death in a patient who was not hospitalized within 3 days before the day of death. A death in a patient brought to the emergency room (ER) on the day of death was also defined as out-of-hospital mortality. A rule-based sketch of this classification is given below.
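These definitions amount to a simple classification rule. A minimal Python sketch follows (the function and field names are hypothetical, not NHIRD variables), assuming the date of death and the date of the most recent hospital discharge are known for each patient.

```python
from datetime import date
from typing import Optional

def classify_death(death_day: date, last_discharge: Optional[date],
                   hospitalized_on_death_day: bool) -> str:
    """Classify a death as in-hospital or out-of-hospital, following the
    definitions above: deaths during hospitalization, or within 3 days of a
    (possibly terminal, 'going home to die') discharge, count as in-hospital;
    deaths of patients brought to the ER on the day of death count as
    out-of-hospital."""
    if hospitalized_on_death_day:
        return "in-hospital"
    if last_discharge is not None and 0 <= (death_day - last_discharge).days <= 3:
        return "in-hospital"  # terminal discharge shortly before death
    return "out-of-hospital"

# Example: discharged 2 days before dying at home -> counted as in-hospital.
print(classify_death(date(2012, 5, 10), date(2012, 5, 8), False))
```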
Statistical analysis
Age-stratified (20-44, 45-54, 55-64, 65-74, 75-84, and ≥85 years) and calendar year-stratified crude incidence rates of hospitalized HF from 2010 to 2015 in Taiwan were reported as estimates per 100 000 person-years at risk. Age-standardized overall and sex-stratified incidence rates were calculated annually using the direct standardization method, with the standard population taken from the World Health Organization in 2000. 22 Poisson regression adjusted for age and calendar year was adopted to estimate the relative risk of HF between men and women. The annual percentage change in incidence between 2010 and 2015, stratified by age and sex, was calculated by using Joinpoint Trend Analysis software (National Cancer Institute, Bethesda, Maryland, USA). 23 Information on the baseline characteristics was retrieved from claims data from outpatient and inpatient visits 1 year prior to the index date. Data were presented as numbers (n) and frequencies (%) for categorical data, means and standard deviations (SD) for normally distributed continuous data, or medians and inter-quartile ranges for non-normally distributed continuous data.
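As an illustration of the direct standardization step, the short sketch below computes an age-standardized rate from stratum-specific counts. All numbers are hypothetical placeholders, and the standard-population weights are only indicative of the WHO 2000 reference; the published tables should be consulted for the exact values.

```python
import numpy as np

age_bands    = ["20-44", "45-54", "55-64", "65-74", "75-84", "85+"]
cases        = np.array([1200, 2400, 6200, 11000, 15500, 8300])      # hypothetical
person_years = np.array([8.9e6, 3.5e6, 2.7e6, 1.4e6, 7.0e5, 2.0e5])  # hypothetical

# Indicative weights for the adult strata of the WHO 2000 standard population.
std_weights = np.array([0.47, 0.12, 0.09, 0.06, 0.03, 0.01])

stratum_rates = cases / person_years                     # per person-year
standardized = np.sum(stratum_rates * std_weights) / np.sum(std_weights)
print(f"age-standardized incidence: {standardized * 1e5:.1f} per 100 000 person-years")
```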
Results
Trends in the incidence of hospitalized heart failure from 2010 to 2015
The age-standardized, overall annual incidence of hospitalized HF decreased by 13% during the study period in Taiwan (from 204.1 per 100 000 people in 2010 to 177.2 per 100 000 people in 2015) (P for trend <0.05), and the overall decline was consistent across sex groups (Figure 1). The incidence of hospitalized HF was 17% higher in men than in women in general [relative risk of incidence between men and women = 1.17 (1.16, 1.18); P < 0.0001]. Because of steeper declines in incidence in women during the study period, the incidence of hospitalized HF was ~10% higher in men than in women in 2010 and 25% higher in men in 2015 (Figure 1). Age-stratified estimates showed that the incidence declined by 13.1%, 19.8%, 18.9%, and 10.7% among people aged 55-64, 65-74, 75-84, and 85+, respectively (P < 0.005, Figure 2). However, the incidence remained the same for people aged 45-54 and even increased by ~4% in people younger than 45 years old (P for trend = 0.0108). Despite the decreasing trend in the age-standardized incidence of hospitalized HF, the absolute annual number of individuals presenting with incident HF hospitalization increased by 3.6% (from 44 631 in 2010 to 46 109 in 2015).
Table 1 (caption): Healthcare utilization of incident hospitalized HF cases during index hospitalization and within 3 years after discharge.
Characteristics of patients with incident heart failure hospitalization
We identified 124 816 patients with incident HF hospitalization between 2010 and 2012 as a cohort and estimated their healthcare utilization during the index hospitalization and within 3 years after discharge. The detailed baseline characteristics are summarized in Supporting Information, Table S2.
Treatment patterns and healthcare utilization
The detailed information on treatment patterns and healthcare utilization is summarized in Table 1. During the index hospitalization, 85.8% of the 124 816 incident HF patients had been treated with GDMT (25.7% received ACEI, 24.8% received ARB, 35.2% received beta-blockers, and 27.0% received aldosterone antagonists), and 77.2% had been managed with treatments for symptom targets. Only 20.9% of patients had been treated with both ACEI/ARB and beta-blockers. The in-hospital mortality during the index hospitalization was 8.5%.
In the first year, 97% and 62% of the incident hospitalized HF patients had outpatient visits and ER visits, respectively. The mean numbers of outpatient visits and ER visits were 34.5 (SD 24.4) and 3.0 (SD 3.7), respectively. The percentages of patients having outpatient visits remained stable in the second and third years, whereas the percentages of patients having ER visits decreased to ~50%.
During the first year, the average number of admissions was 2.3, with a median length of stay of 16 days, among the 114 253 (91.5%) patients surviving the index hospitalization. Hospitalizations directly related to HF constituted only 34% of all hospitalizations, whereas 61% of hospitalizations were due to non-HF and non-cardiovascular causes. During the second and third years, the average numbers of admissions were similar to the number in the first year, but the percentages of patients with recurrent hospitalizations decreased (50.6% and 46.9% for the 88 489 and 75 402 patients surviving the first and second years, respectively). Only 29.3% and 25.3% of hospitalizations during the second and third years were directly related to HF, whereas 64.6% and 66.1% of hospitalizations were due to non-HF and non-cardiovascular causes.
For pharmacological treatments, the percentages of patients treated with GDMT and receiving treatments for symptom targets were both highest in the first year and declined gradually in the second and third years. For instance, the percentage of the combined use of ACEI/ARB and beta-blockers was 20.9% during the index hospitalization, 37.9% in the first follow-up year, and 28.4% in the third follow-up year. However, treatments for underlying diseases such as statins, dipeptidyl peptidase-4 inhibitors, and new oral anticoagulants increased across the years.
Notably, patients who died outside the hospital (out-of-hospital death) accounted for ~30% of all mortality cases (32.0%, 31.2%, and 30.9% of all deaths in the first, second, and third years, respectively) (Figure 3B and 3C and Supporting Information, Figures S1 and S2). Non-cardiovascular disease-related deaths accounted for nearly 60% of all deaths during the entire study period. Approximately one-fourth of cardiovascular disease-related deaths were due to ischaemic heart diseases, and another one-fourth were HF-related in general. The detailed information about cause-specific in-hospital and out-of-hospital mortality is shown in Supporting Information, Tables S1 and S2.
Discussion
Our study provides nationwide, 'real-world' longitudinal follow-up estimates regarding annual incidence, healthcare utilization, treatment patterns, and mortality among incident hospitalized HF patients in contemporary Taiwan. There are four major observations. First, the age-standardized incidence of hospitalized HF declined over time during the study period, especially in women and people aged 55 years or older. However, the absolute number of incident hospitalized HF patients increased slightly, mainly due to the more pronounced increase in ageing populations. The slight increase in the age-standardized incidence of hospitalized HF in people younger than 45 years of age merits attention. Second, among patients with incident hospitalized HF, the average number of admissions remained the same, while the proportions of patients with recurrent hospitalizations and ER visits decreased across the years. This finding indicates that hospitalization begets hospitalization: patients with recurrent hospitalizations carry a much higher risk of further admission. Third, we found that the cumulative 1 year mortality (including deaths during the index hospitalization) was nearly 30%, and the annual mortality remained above 10% in the second and third years. The persistently high mortality contrasts with the decreased prescription of GDMT across the years. Finally, it is noteworthy that ~30% of deaths occurred outside the hospital and 60% of deaths were due to non-cardiovascular causes across the years. These sectors are often overlooked in the clinical care of HF from a country's perspective.
Incidence trends in hospitalized heart failure patients
There are several recent nationwide studies investigating the disease burden of HF worldwide that are similar to our study. Christiansen and colleagues reported a decreased incidence of new HF hospitalization among patients older than 50 years from 1995 to 2012 by using the Danish nationwide database. 9 They also noted an increased incidence in the younger population. There are other similar findings between our work and the study of Christiansen et al. First, they found that the mean age at incident HF hospitalization of men was younger than that of women (men: 72 vs. women: 78 years). In our study, the mean age at incident HF hospitalization was 72 and 76 years for men and women, respectively. Second, the percentage of female HF patients decreased from 49% in 1995 to 44% in 2012 in Denmark. In our study, the percentage of women in the incident HF population decreased from 51% in 2010 to 49% in 2015. Third, the percentage of younger (≤50 years of age) patients in the HF population doubled from 3% in 1995 to 6% in 2012 in Denmark, while the proportion of HF patients younger than 45 years increased from 3.4% in 2010 to 3.9% in 2015 in our cohort. Recently, Conrad et al. 8 assessed the temporal trends in the incidence of HF, based on both inpatient and outpatient diagnoses from the Clinical Practice Research Database in the UK. From 2002 to 2014, the age-standardized incidence of HF also decreased, from 358 to 332 per 100 000 person-years. Likewise, the absolute number of new HF patients increased from 750 127 to 920 616. Overall, the findings of a decreasing trend in the age-standardized incidence of HF and an increased absolute number of incident HF patients were consistent across studies. The increase in the incidence of HF in younger people could result from suboptimal awareness and less effective control of cardiovascular risk factors.
Treatment patterns
The prescription of all kinds of GDMT was still suboptimal in contemporary Taiwan. Compared with previous studies evaluating GDMT in HF populations, the prescription of combined ACEI/ARB and beta-blockers was almost half of that in the USA (61%) 24 and in older Japanese patients (67%), 25 but was similar to that in patients in India (30%). 26 Further, we showed that the use of these medications was most frequent in the first year and then declined in the second and third years. The declining trend in GDMT use parallels the declining proportions of patients with recurrent hospitalization and mortality. However, the annual rates of both recurrent hospitalization (>45%) and mortality (>10%) beyond 1 year after the index hospitalization are still high. In other words, there is no warranty of freedom from recurrent hospitalization and mortality within 3 years following HF hospitalization. Efforts focusing on physicians and patients to enhance the long-term adoption of GDMT in patients with incident HF hospitalization should be exerted. When the haemodynamic status of HF patients improves, physicians should implement complete GDMT, which might not be tolerated initially, and uptitrate the doses to optimal levels, rather than being reluctant to adjust, or even withdrawing, GDMT. 18,27 Long-term adherence to GDMT should be emphasized and routinely assessed for every HF patient, no matter how stable he or she appears. 28
Healthcare utilization
In contrast to the suboptimal prescription of GDMT, over 97% of incident HF patients had an average 35 outpatient visits annually throughout the 3 year follow-up period, which reflects the effective coverage of national health insurance programme in Taiwan. However, the extensive outpatient visits among patients with HF did not translate into better outcomes. In this nationwide claim-based study, we showed a high first-year readmission rate of 62.8%. This number is similar to the 59% first-year readmission rate in the claim-based population-level study of patients with incident hospitalization for HF in the Italian region of Lombardy in 2011. 29 Despite a decreasing trend in recurrent hospitalization and ER visits across the years, the readmission rate herein remained 46.9% at the third year following index hospitalization, which is even higher than the 10-44% first-year readmission rates observed in registry-based studies. 11,29 Regarding the causes of readmission, it is noteworthy that 60-66% of readmissions were due to non-cardiovascular causes across the 3 year follow-up period, whereas the proportions of readmissions directly due to HF declined from one-third in the first year to one-fourth in the third year. This finding is similar to the observation obtained among incident HF patients in Olmsted County from 1987 to 2006, which showed that 62% of readmissions were attributed to non-cardiovascular causes. 30 These findings indicate that non-cardiovascular co-morbidities make an important contribution to the burden of recurrent hospitalization in patients with HF. Both cardiovascular and non-cardiovascular co-morbidities should be meticulously managed to ameliorate the grave prognosis of HF.
Mortality
We found an 8.5% mortality rate during the index hospitalization among the incident hospitalized HF patients. This result is similar to those reported in a recent review, which showed that in-hospital mortality varied from 3% to 10% in multicentre HF registries and nationwide databases. 11 Our finding regarding the mortality rate of 22.5% in the first year after HF discharge echoes previous studies in other countries. 31,32 Chen et al. 33 conducted a nationwide cross-sectional study in the USA and reported that the risk-adjusted 1 year mortality in prevalent hospitalized HF patients was 31.7% in 1999 and 29.6% in 2008. Yeung et al. 32 conducted a population-based cross-sectional study in Ontario, indicating that the 30 day and 1 year mortality rates were ~16% and 34% for hospitalized HF patients. In general, the 1 year mortality following discharge from HF hospitalization ranges from 9% to 34% among studies 9,11 and shows no definite evidence of declining worldwide.
Our study further examined the 3 year longitudinal changes in the mortality in incident hospitalized HF patients. We herein showed that the mortality in the second and third years remained substantial (14.8% and 13.4% for surviving HF patients, respectively). According to official information from the Taiwanese government, 34 the annual mortality rate was 0.7% in the general population. The remaining >10% annual mortality within 3 years following hospitalization in HF patients highlights the importance of continued GDMT optimization and adherence in the long run.
There are another two features regarding mortality in HF patients worth mentioning. First, deaths due to non-cardiovascular causes accounted for nearly 60% of all deaths among our HF patients. This finding is consistent with the 60-66% of readmissions being attributed to non-cardiovascular causes and reminds us that, in addition to GDMT, non-cardiovascular co-morbidities should be properly managed. Second, a consistent 30% of deaths occurred outside the hospital during the 3 year follow-up period. Even though the exact nature of these 'out-of-hospital' deaths is not certain, we assume that sudden cardiac death may contribute substantially. To curtail out-of-hospital death, optimization of GDMT and more widespread adoption of the implantable cardioverter defibrillator (ICD) in symptomatic HF with reduced ejection fraction (HFrEF) patients should be emphasized. Because of limitations in the reimbursement criteria in Taiwan, fewer than 1% of HF patients underwent ICD implantation throughout the 3 year follow-up period.
Strengths and limitations
The major strength of this study is that it is the first to use a contemporary population-level, country-specific database to explore the temporal trends in the incidence of hospitalized HF and to assess the longitudinal healthcare utilization, treatment patterns, and modes and causes of mortality within 3 years after the incident HF hospitalization in an Asian population. Most studies regarding the epidemiology of HF reported only incidence and mortality and did not examine treatment patterns, healthcare utilization, and modes of mortality. Considering the high disease burden of HF worldwide and the limited HF epidemiology information in Asia, results from this study fill this knowledge gap and could be more reliably generalized to other Asian countries and ethnic Chinese populations, compared with other surveys and registries. Further, the experiences learned from this study, such as low GDMT implementation and persistently high rehospitalization and mortality despite high frequencies of outpatient visits, could be referenced by countries with similar universal health coverage and taken as an example to refine the medical care of HF in a country-specific manner.
There are several limitations in this study. First, because there is no validation study to assess the accuracy of the diagnostic codes for HF in outpatient claims in the NHIRD, our study only focused on hospitalized HF patients. In other words, the interest of our study is in new HF hospitalizations rather than de novo HF. To avoid including patients with recurrent HF hospitalizations, those with either inpatient or outpatient diagnoses of HF in the previous 1 year were excluded. Given the >97% rates of outpatient visits among patients with HF in Taiwan, the inclusion of patients with prior HF should be very limited in this study. Second, owing to the advances in pharmacological treatment, the proportion of HF patients managed exclusively in the outpatient setting is increasing. Hence, inpatient data could not capture all HF cases, particularly milder ones. Third, because of the lack of information about left ventricular ejection fraction and other laboratory data such as N-terminal pro-brain natriuretic peptide, we could not assess disease severity or distinguish the types of HF (preserved, mid-range, or reduced ejection fraction). This not only limits the comparability of this study but also prevents us from assessing whether more accurate ways of diagnosing HF could partly explain the decreasing trend in the incidence of HF hospitalizations during these years. Fourth, we included only adult patients in this study, which precludes the applicability of our findings to non-adult populations. Finally, because terminal discharge (going home to die) is a well-adopted tradition in Taiwan, we adjusted our definition of in-hospital mortality to death occurring during hospitalization or within 3 days after discharge. Therefore, some HF patients who died outside the hospital might be erroneously assigned as in-hospital deaths, and the average 30% out-of-hospital mortality across the 3 year follow-up period might be underestimated.
By analysing the population-level, claim-based, country-specific NHIRD in Taiwan between 2010 and 2015, we showed a decreasing trend in the standardized annual incidence of HF hospitalization in people aged 55 years or older. On the other hand, both the standardized annual incidence of HF hospitalization in patients younger than 45 years of age and the absolute annual number of incident HF hospitalizations kept increasing, portending a rising burden on health care. The persistently high annual rates of mortality (>10%), mortality due to non-cardiovascular causes (~60% of all deaths), mortality occurring outside the hospital (~30% of all deaths), recurrent hospitalization (~50%), and hospitalizations due to non-cardiovascular causes (>60% of all hospitalizations) in the second and third years following the initial HF hospitalization highlight the substantial unmet needs in HF management in Taiwan, where the universal health coverage programme has been successfully implemented for decades.
Our findings suggest that, in addition to more complete and widespread adoption of GDMT in patients with HF, the more appropriate use of ICD and control of non-cardiovascular risk factors should be emphasized to further curtail the healthcare burden of HF and improve its grave prognosis.
Supporting information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Figure S1. The percentage of in-hospital mortality and out-of-hospital mortality contributing to total mortality.
Figure S2. The Kaplan-Meier curves of overall, in-hospital, and out-of-hospital mortality after discharge from the index heart failure (HF) hospitalizations (2010-2012) in Taiwan.
Table S1. 2010-2015 annual incidence of heart failure hospitalization in Taiwan, overall and sex-stratified.
Table S2. Baseline characteristics of patients with incident heart failure (HF) hospitalization between 2010 and 2012.
"year": 2020,
"sha1": "92fbefc60792856db061b69a9fa1000f43da5230",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ehf2.12892",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3ae7973ee108cb0af59aaf7c8ee6cb55433cff4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Submillimetre and far-infrared spectral energy distributions of galaxies: the luminosity-temperature relation and consequences for photometric redshifts
The spectral energy distributions (SEDs) of dusty high-redshift galaxies are poorly sampled in frequency and spatially unresolved. Their form is crucially important for estimating the large luminosities of these galaxies accurately, for providing circumstantial evidence concerning their power sources, and for estimating their redshifts in the absence of spectroscopic information. We discuss the suite of parameters necessary to describe their SEDs adequately without introducing unnecessary complexity. We compare directly four popular descriptions, explain the key degeneracies between the parameters in each when confronted with data, and highlight the differences in their best-fitting values. Using one representative SED model, we show that fitting to even a large number of radio, submillimetre and far-infrared (far-IR) continuum colours provides almost no power to discriminate between the redshift and dust temperature of an observed galaxy, unless an accurate relationship with a tight scatter exists between luminosity and temperature for the whole galaxy population. We review our knowledge of this luminosity-dust temperature relation derived from three galaxy samples, to better understand the size of these uncertainties. Contrary to recent claims, we stress that far-IR-based photometric redshifts are unlikely to be sufficiently accurate to impose useful constraints on models of galaxy evolution: finding spectroscopic redshifts for distant dusty galaxies will remain essential.
INTRODUCTION
The rest-frame far-infrared (far-IR) thermal emission from dust grains heated by various sources, namely the diffuse interstellar radiation field (ISRF) in galaxies, sites of active star formation, and a central active galactic nucleus (AGN), can dominate the spectral energy distribution (SED) of galaxies (Soifer & Neugebauer 1991; Sanders & Mirabel 1996). The most luminous galaxy apparent in the Universe (APM 08279+5255; Irwin et al. 1998) emits approximately 60 per cent of its bolometric luminosity in the far-IR waveband, while low-redshift galaxies with blue optical colours that were detected by the IRAS satellite also release about 60 per cent of their total bolometric luminosity as thermal radiation from dust (Mazzarella & Balzano 1986). Even the most quiescent spiral galaxies, such as the Milky Way, emit of order 30 per cent of their total luminosity from dust (Reach et al. 1995; Alton et al. 1998; Dale et al. 2001; Dale & Helou 2002). Dust emission remains important at high redshifts.
As compared with the rich variety of features in the SEDs of galaxies at near-IR, optical and ultraviolet wavelengths, the far-IR SED is simple, dominated by a smooth pseudo-thermal continuum emission spectrum. At most about 1 per cent of the emitted energy is associated with spectral lines from atomic fine-structure and molecular rotational transitions (Malhotra et al. 1997;Luhman et al. 1998;Combes, Maoli & Omont 1999;Blain et al. 2000). The mid-IR spectra of galaxies from 10 to 30 µm are expected to be significantly more complex, especially because of broad line emission from polycyclic aromatic hydrocarbon (PAH) molecules (Dale et al. 2001).
A variety of models have been used to describe the far-IR SEDs of dusty galaxies. We compare four well-constrained descriptions with data for a variety of types of galaxy, and highlight the importance both of degeneracies between the parameters and of the need to avoid baroque descriptions that require a greater number of parameters than can be justified and fixed by existing data. Using one uniform, self-consistent description of the SED, we discuss the accuracy of photometric redshifts that can be derived for high-redshift galaxies based on their observed colours, making assumptions concerning their SEDs. We describe in detail the degeneracy between redshift and dust temperature when fitting photometric data for high-redshift galaxies (Blain 1999b; Blain et al. 2002), and discuss the prospects for breaking this degeneracy using information about absolute luminosity, obtained from a luminosity-temperature (LT) relation for dusty galaxies. A narrow range of SEDs was included implicitly in recent discussions of the prospects for determining mm-wave photometric redshifts (Hughes et al. 2002; Aretxaga et al. 2003; Dunlop et al. 2003), which leads to encouraging results. We discuss existing data on the LT relation (Dunne et al. 2000; Stanford et al. 2000; Dale et al. 2001; Dale & Helou 2002; Garrett 2002; Barnard & Blain 2003; Chapman et al. 2003), which leads to a much less optimistic outlook for far-IR/submm photometric redshifts. The observed dispersion in the LT relation is the key quantity that limits the effectiveness of the technique.
In Section 2 we describe four SED models, and compare them with a range of observed galaxy SEDs. We highlight the consequences of errors in the fitted SEDs and the LT relation for determining photometric redshifts in Section 3. Finally, in Section 4, we describe the requirements for spectroscopic observations that will remove this uncertainty, and describe the opportunities that much more detailed far-IR SEDs measured using SIRTF 1 from 2003 will provide for better understanding the LT relation and for determining far-IR-based photometric redshifts.
SED DESCRIPTIONS
Various functions have been used to describe the quasi-blackbody far-IR/submm SEDs of dusty galaxies. The parameters that define the SED generally disguise the inevitably complex geometrical mix of dust grains at different temperatures in the interstellar medium of these galaxies, which are often disturbed and interacting, and sometimes very luminous. The far-IR emission is visible at different optical depths in both emitted and scattered radiation.
Single-temperature models
The simplest SED description is based on a blackbody spectrum Bν ∝ ν^3/[exp(hν/kT) − 1] at a single temperature T, as a function of frequency ν, modified by a frequency-dependent emissivity function εν ∝ ν^β, where β is in the range 1-2 (Hildebrand 1983). This yields an SED function

fν ∝ εν Bν ∝ ν^(3+β) / [exp(hν/kT) − 1].   (1)

Note that this function has an exponential Wien dependence when ν ≫ kT/h. It is necessary to modify this to a shallower form in order to agree with observed SEDs (see Fig. 1). A straightforward way to counteract the mid-IR Wien tail is to substitute a power-law SED, fν ∝ ν^(−α), at high frequencies, matching the power law and the thermal function (equation 1) with a smooth gradient at a frequency ν′, which requires the condition d ln fν(ν′)/d ln ν′ = −α to be satisfied. Three parameters are required to describe the SED: T, β and α. A minimal numerical sketch of this matching is given below.
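The smooth matching of the power law onto the thermal spectrum can be implemented directly. The Python sketch below is our own illustration of equation (1) with arbitrary normalization; the default parameters are the Arp 220 model-1 values from Table 1. The matching frequency ν′ follows from the condition d ln fν/d ln ν = −α, which reduces to x/(1 − e^(−x)) = 3 + β + α for x = hν′/kT and is solved by fixed-point iteration.

```python
import numpy as np

H_OVER_K = 4.799e-11  # h/k in kelvin per hertz

def sed_t_alpha_beta(nu, T=37.4, beta=1.5, alpha=2.9):
    """Equation (1) with a power law grafted smoothly onto the Wien side.
    Arbitrary normalization; nu in Hz, T in K."""
    # Solve x/(1 - exp(-x)) = 3 + beta + alpha for x = h*nu'/kT.
    target = 3.0 + beta + alpha
    x = target
    for _ in range(50):                    # converges rapidly for target > 1
        x = target * (1.0 - np.exp(-x))
    nu_prime = x * T / H_OVER_K

    def thermal(v):
        xx = np.minimum(H_OVER_K * v / T, 700.0)   # clip to avoid overflow
        return v ** (3.0 + beta) / np.expm1(xx)

    nu = np.atleast_1d(np.asarray(nu, dtype=float))
    return np.where(nu <= nu_prime,
                    thermal(nu),
                    thermal(nu_prime) * (nu / nu_prime) ** (-alpha))

# With these defaults the SED peaks at a few THz, as for Arp 220 in Fig. 1.
print(sed_t_alpha_beta(np.logspace(11, 14, 7)))
```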
The dust temperature T determines the frequency of the SED peak, the emissivity index β fixes the power-law index of the SED in the Rayleigh-Jeans regime, and α sets the slope of the mid-IR SED. This SED was used in the context of studying submm-wave galaxy evolution by Blain et al. (1999a), and has been used without the Wien correction to fit low-redshift SEDs by Dunne et al. (2000). An alternative 'optically thick' functional form substitutes a more complex emissivity function, εν ∝ [1 − exp(−(ν/ν0)^β)], to describe the expected increase in the optical depth of dust emission at higher frequencies, leading to an SED function

fν ∝ [1 − exp(−(ν/ν0)^β)] ν^3 / [exp(hν/kT) − 1].   (2)

This SED has been used by several authors, especially those dealing with the SEDs of galaxies and AGN at the highest redshifts (e.g. Benford et al. 1999; Isaak et al. 2002), where the SED is probed close to its rest-frame peak. This SED is identical to the T-α-β form at long wavelengths, but tends to a pure blackbody at frequencies greater than ν0, as expected from an optically thick source. This function also requires a power law to temper the SED on the Wien tail, using the parameter α. Four SED parameters are thus required in this model: T, α and β as before, plus ν0. There is a strong degeneracy between the value of ν0 and the values of T and β (see Section 2.4). Hence, a reasonable value of ν0 that corresponds to a frequency close to the 60- and 100-µm IRAS bands is usually assumed. Including this frequency-dependent opacity allows a more physical description of the SED, but the parameter ν0 is difficult to determine unambiguously from available observed data.
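For completeness, the thermal part of equation (2) can be written analogously; the following is a minimal sketch of our own (again with arbitrary normalization, and omitting the mid-IR power law), using Arp 220's model-2 parameters from Table 1 as defaults.

```python
import numpy as np

def sed_optically_thick(nu, T=56.0, beta=1.55, nu0=1.46e12):
    """Thermal part of equation (2): emissivity [1 - exp(-(nu/nu0)**beta)]
    multiplying a blackbody. Arbitrary normalization; nu, nu0 in Hz, T in K."""
    H_OVER_K = 4.799e-11  # h/k in kelvin per hertz
    nu = np.asarray(nu, dtype=float)
    x = np.minimum(H_OVER_K * nu / T, 700.0)
    return (1.0 - np.exp(-(nu / nu0) ** beta)) * nu ** 3 / np.expm1(x)
```

At ν ≪ ν0 the emissivity term tends to (ν/ν0)^β, recovering the optically thin form, while at ν ≫ ν0 it tends to unity and the spectrum approaches a pure blackbody, as described above.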
Models with multiple dust temperatures
Descriptions of the SED can include more than one dust temperature. Most notably, these include models based on radiative transfer calculations, in which a continuous distribution of sources is assumed in some geometry, and the temperature distribution of the dust as a function of position is calculated self-consistently to build up an SED (Granato, Danese & Franceschini 1996; Devriendt et al. 1999; Efstathiou, Rowan-Robinson & Siebenmorgen 2000). Note, however, that even for nearby galaxies the spatial and spectral resolution available is insufficient to constrain the ~10 parameters required to describe this type of model, even in the simplest spherical geometry. When the sub-arcsec resolution of the Atacama Large Millimeter Array (ALMA) interferometer and the James Webb (Next Generation) Space Telescope (NGST) are available at submm and near/mid-IR wavelengths, respectively, then radiative transfer models will have a role to play in interpreting observations. At present, the quality of available data does not justify the incorporation of such complexity.

Figure 1. The observed SEDs of three well-studied galaxies: the low-redshift (z = 0.019) Sb galaxy NGC 958 (Dunne & Eales 2001), the often-quoted prototype low-redshift (z = 0.018) ultraluminous dusty galaxy Arp 220, and the galaxy with the greatest apparent luminosity in the Universe, APM 08279+5255 (Lewis et al. 1998). Four SED models described in Section 2 are compared: a T-α-β model (equation 1), a model with a variable optical depth (equation 2), a model with both a cold and a warm dust component (equation 3) and a model with a power-law dust mass-temperature distribution (equation 4). The parameters required to fit the data in all four models are listed in Table 1: the numerical values differ, but all provide reasonable descriptions of the data, including the radio data for NGC 958 and Arp 220, which are not shown to avoid extending their abscissae over another 2 orders of magnitude. The plotted ranges of frequency are equal, demonstrating the range of different apparent dust temperatures/rest-frame SED peak frequencies and mid-IR spectral indices observed.

Table 1. Best-fitting parameters in the four SED models described in Section 2 (equations 1-4) required to reproduce the observed SEDs of three well-studied galaxies; see Fig. 1. Values marked 'f' were fixed in the fitting process, either to reduce the size of the parameter space being searched or to enforce a physically meaningful value.

NGC 958 (z = 0.019)
  Model 1 (T-α-β):           T = 28.8 ± 1 K;    α = 2.02 ± 0.2;  β = 1.5 f;                                    L = 3.1 × 10^11 L⊙
  Model 2 (optically thick): T = 33 ± 2 K;      α = 2.0 ± 0.2;   β = 1.5 ± 0.1;  ν0 = (2.9 ± 0.5) × 10^12 Hz;  L = 1.8 × 10^11 L⊙
  Model 3 (two-temperature): Tw = 24.8 ± 2 K;   α = 1.9 ± 0.2;   β = 2.0 f;      Fwc = 0.58 ± 0.2;             L = 2.6 × 10^11 L⊙
  Model 4 (power-law mass):  Tmin = 22.0 ± 1 K; γ = 7.9 ± 0.3;   β = 1.5 f;                                    L = 2.9 × 10^11 L⊙

Arp 220 (z = 0.018)
  Model 1 (T-α-β):           T = 37.4 ± 1 K;    α = 2.9 ± 0.2;   β = 1.5 f;                                    L = 1.41 × 10^12 L⊙
  Model 2 (optically thick): T = 56 ± 1.5 K;    α = 3.0 ± 0.1;   β = 1.55 ± 0.1; ν0 = (1.46 ± 0.1) × 10^12 Hz; L = 1.43 × 10^12 L⊙
  Model 3 (two-temperature): Tw = 42.3 ± 1 K;   α = 3.43 ± 0.3;  β = 2.0 f;      Fwc = 0.51 ± 0.1;             L = 1.47 × 10^12 L⊙
  Model 4 (power-law mass):  Tmin = 29.7 ± 1 K; γ = 8.8 ± 0.2;   β = 1.5 f;                                    L = 1.39 × 10^12 L⊙

APM 08279+5255 (z = 3.8)
  Model 1 (T-α-β):           T = 91 ± 5 K
  Model 2 (optically thick): T = 187 ± 10 K
  Model 3 (two-temperature): Tw = 83 ± 3 K
  Model 4 (power-law mass):  Tmin = 59.7 ± 3 K
Two more practical multi-temperature SED descriptions have been used. A two-temperature model by Dunne & Eales (2001) includes a cool component at a fixed temperature of Tc = 20 K to describe dust heated by the general diffuse ISRF of the galaxy, and a component of hotter dust at a temperature Tw, with a mass fraction Fwc, that is heated more intensely in star-forming regions. Each component is described by a modified blackbody ν^β Bν spectrum, assuming a fixed value of the emissivity index β = 2. The resulting SED function is

fν ∝ ν^β [Bν(Tc) + Fwc Bν(Tw)].   (3)

There are 3 free parameters in this model if Tc is fixed at 20 K and β is fixed at 2.0: Tw and Fwc to describe the thermal part of the SED, and a power-law index α to fix the mid-IR SED. A similar two-temperature model, with a greater spread between the temperatures, is described in the context of the dust emission from galaxies that are members of the Virgo cluster by Popescu et al. (2002).
A fourth, physically motivated, and yet still adequately constrained model was described by Dale et al. (2001), who assumed a power-law distribution of dust masses as a function of temperature, in which the mass of dust heated to a temperature between T and T + dT is given by m(T) ∝ T^(−γ). The spectral contribution to the SED from each temperature component is ν^β Bν, and so the composite SED is given by the integral

fν ∝ ∫ from Tmin to Tmax of ν^β Bν(T) T^(−γ) dT.   (4)

The value of γ effectively determines the mid-IR slope at frequencies ν ≫ kT/h, and has a close equivalence to α above. Thus there is no need to introduce another parameter to counteract the Wien tail of the SED here. The value of γ required to produce the same mid-IR spectrum as the other three models is γ ≃ 4 + α + β (Blain 1999a). The role of Tmin is equivalent to T in the other models, and determines the frequency of the peak of the SED, subject to a weak dependence on the value of γ. Tmin must always exceed the cosmic microwave background (CMB) temperature. The value of the maximum temperature Tmax is relatively unimportant, unless Tmin is very high: Tmax was always set to 2000 K to represent the sublimation temperature of the dust. We have thus chosen to keep Tmax fixed and vary both Tmin and γ to fit the data. This model has the minor practical disadvantage that an integral must be performed, or a look-up table employed, to evaluate the SED. In the updated model of Dale & Helou (2002) the value of the emissivity parameter β at wavelengths longward of 100 µm is parametrized as a function of the intensity of the ISRF. For simplicity, we adopt a constant value β = 1.5 here. For all four models an additional component of synchrotron radio emission was added, by assuming the conventional far-IR-radio correlation (Condon 1992) between the 1.4-GHz radio emission and the flux densities in the 60- and 100-µm IRAS passbands. This correlation holds with a 0.2-dex dispersion over 4 orders of magnitude in luminosity, and should provide a reasonable representation of the expected radio flux in the absence of additional radio emission from electrons accelerated by an AGN. The details of the extrapolation method are described in Blain (1999a).

Figure 2 (caption): likelihood contours for fits of the T-α-β model to the SED of NGC 958. Fits to all the data in Fig. 1, plus radio data, are shown in the upper panel, while in the lower panel the results are derived for only the subset of data at 850, 450, 100 and 60 µm for consistency with Dunne & Eales (2001). The contours are spaced by unit standard deviations away from the best-fitting values, which are marked by a star. A fixed best-fitting value of α = 2.02 is assumed, and β spans the physically plausible range 1 → 2. There is a strong degeneracy between T and β in both panels. Between the two panels the best-fit points remain on a similar T-β trend line, but lie on different tracks. The best-fitting point from the restricted data set prefers the physically implausible β > 2 region. Note that if β is fixed at 1.5, then the value of T = 28.8 K listed in Table 1 is the best-fitting value in the upper panel.
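Equation (4) is straightforward to evaluate numerically. The sketch below is our own minimal illustration with arbitrary normalization, using a trapezoidal quadrature in ln T and the Arp 220 model-4 values from Table 1 as defaults.

```python
import numpy as np

def sed_powerlaw_mass(nu, T_min=29.7, gamma=8.8, beta=1.5,
                      T_max=2000.0, n=2000):
    """Equation (4): composite SED from dust masses m(T) ~ T**(-gamma)
    between T_min and T_max, each component contributing nu**beta * B_nu(T).
    Arbitrary normalization; nu in Hz, temperatures in K."""
    H_OVER_K = 4.799e-11  # h/k in kelvin per hertz
    nu = np.atleast_1d(np.asarray(nu, dtype=float))
    lnT = np.linspace(np.log(T_min), np.log(T_max), n)
    T = np.exp(lnT)
    x = np.minimum(H_OVER_K * nu[:, None] / T[None, :], 700.0)
    contrib = T ** (-gamma) * nu[:, None] ** (3.0 + beta) / np.expm1(x)
    return np.trapz(contrib * T[None, :], lnT, axis=1)  # dT = T dlnT
```

Because m(T) falls steeply (γ ≈ 8-9 in Table 1), the integrand is dominated by temperatures just above Tmin, which is why Tmin plays the role of T in the single-temperature models.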
Comparison of the SED models with data
The four SED models described above were fitted to data for three well-studied galaxies with very different luminosities and rest-frame SEDs. The results are shown in Fig. 1 and Table 1. The data were obtained from IRAS between 12 and 100 µm, from SCUBA at 450 and 850 µm, from other ground-based mm-wave telescopes, and from the VLA at 1.4 GHz in the cases of NGC 958 and Arp 220. The SEDs are reasonably well sampled all the way from the mid-IR to the radio waveband. All four models can provide a good description of these SEDs. In models 1 and 4, the emissivity index β was fixed to 1.5 to reduce the size of the parameter space to search in the maximum-likelihood fitting routine, without great loss of generality or hampering the quality of the fit. A key degeneracy between fitted values of T (Tmin in model 4) and β is described in the next section.
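As an illustration of this kind of fit, the sketch below performs a simple least-squares fit of the single-temperature thermal part of model 1 (ν^β B_ν with β fixed at 1.5) to photometric points. The flux densities and error bars are invented placeholders rather than the data of Fig. 1, and a complete treatment would add the mid-IR power law and the radio extrapolation.

import numpy as np
from scipy.optimize import curve_fit

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8

def greybody(nu, T, amp, beta=1.5):
    # nu^beta B_nu(T), with nu rescaled to keep the amplitude of order unity
    return amp * (nu / 1e12)**(beta + 3.0) / np.expm1(H * nu / (KB * T))

wavelengths_um = np.array([850.0, 450.0, 100.0, 60.0])
nu = C / (wavelengths_um * 1e-6)
flux = np.array([0.8, 3.0, 40.0, 30.0])   # hypothetical flux densities [Jy]
err = 0.15 * flux                          # assumed 15 per cent errors

popt, pcov = curve_fit(greybody, nu, flux, p0=(35.0, 10.0), sigma=err)
print("best-fitting T = %.1f K" % popt[0])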
The width of the peak of the model SEDs, defined as the fractional frequency range over which the SED is reduced to half of its peak value, is the feature that differs most significantly from model to model, but never does so by more than a factor of approximately 1.5. The different functional forms of the SED require values of temperature that can have a large dispersion. This is especially true for APM 08279+5255. In model 2 the dust cloud is inferred to become optically thick at frequencies less than the peak of the SED, requiring a much higher temperature to describe the data than in the optically thin models, reflecting the lower effective value of β close to the peak of the SED in the optically thick model. In model 3 the peak frequency of the fitted SED is consistent with the data, and the luminosity is within the range spanned by the other three models (see Fig. 1 and Table 1); hence, model 3 still provides an adequate description of the data. For the other galaxies, the temperature required to fit the data in model 1 is systematically less than that in the optically thick model 2, again because of the effectively smaller value of β at the SED peak in model 2.
The temperature of the warm component required to fit SEDs in the two-temperature model 3 depends on the relative intensities of the 25- and 100-µm IRAS flux densities. The value of Tmin required to fit the data in model 4 with a power-law dust mass distribution is always lower than in the other models, reflecting its definition as a lower limit to a distribution of hotter temperatures. The factor by which it is lower than in the other models depends on the value of the mass-temperature function index γ (equation 4): a steeper decline in the proportion of dust at higher temperatures leads to a smaller difference, while a greater fraction of hot dust corresponds to a greater difference.
Degeneracies between fitted SED parameters
The most significant practical degeneracy in fitting the results shown in Fig. 1 in all four models is between the dust temperature T and the emissivity power-law index β, as illustrated in Fig. 2. This occurs because the peak frequency of the SED scales approximately with the product (3 + β)T, and so an increase in β must be compensated by a decrease in T in order to reproduce the data. In model 4 there is an effective dust temperature T that reflects the range of temperatures present. For a fixed value of γ this effective temperature is determined by the value of Tmin. Considering a subset of the data can modify the position of the best-fitting values significantly along the extended direction of the probability contours in the figure. The other pairs of parameters, T-α and α-β, do not show such a degeneracy (the probability contours determined for their fit to the SED data are almost circular; Fig. 3), and so these pairs of parameters have unique, well-determined values when the model provides a good description of the observed SED. Note that the bolometric far-IR luminosity of the galaxy that is derived by integrating the SED in frequency from 100 GHz to the frequency equivalent to a wavelength of 1 µm changes along the ridge of the probability curve shown in the upper panel of Fig. 2. The inferred luminosity increases smoothly along the ridge from 2.6 × 10^11 L⊙ when T = 23 K to 4.4 × 10^11 L⊙ when T = 40 K, with L = 3.1 × 10^11 L⊙ at the best-fitting value of 28 K if β = 1.5. Hence, there is little practical difficulty in using this description: neither the luminosity nor the peak frequency of the SED differs significantly as the permitted region in the T-β parameter space is traversed. There is, however, a real problem in trying to associate the values of the fitted parameters with the true physical properties of the dust grains that generate the emission.
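The trade-off can be verified directly: for a ν^β B_ν spectrum the peak lies at x = hν/kT satisfying (3 + β)(1 − e^−x) = x, so ν_peak ∝ x*(β) T, and any (T, β) pair with the same value of x*(β) T reproduces the same peak frequency. A short sketch (ours):

import numpy as np
from scipy.optimize import brentq

H, KB = 6.626e-34, 1.381e-23

def x_peak(beta):
    # root of (3 + beta)(1 - exp(-x)) = x, the dimensionless peak position
    return brentq(lambda x: (3.0 + beta) * (1.0 - np.exp(-x)) - x, 0.5, 20.0)

def nu_peak(T, beta):
    return x_peak(beta) * KB * T / H

target = nu_peak(40.0, 1.0)   # fix the peak given by T = 40 K, beta = 1
for beta in (1.0, 1.5, 2.0):
    T_match = target * H / (KB * x_peak(beta))
    print(f"beta = {beta:.1f}: T = {T_match:.1f} K gives the same SED peak")

The matched temperatures decrease as β increases, reproducing the anti-correlated ridge seen in Fig. 2.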
With the exception of T and β in all models (Tmin and β in model 4) and the pairs T-ν0 and Fwc-Tw in models 2 and 3 respectively, the parameters are well determined. The well-determined pairs of parameters T-α and α-β are illustrated in Fig. 3, with almost circular probability contours. The probabilities for the relatively ill-constrained pairs discussed above are illustrated in Figs 4 and 5. There are two points to note from these probability figures. First, along the track of maximum probability in these figures, the value of the associated bolometric far-IR luminosity of the galaxy changes by only 5-10 per cent. This is much less variation than the factor of 2 variation across the wider range of the T-β parameter space shown in Fig. 2. Secondly, Fwc is not defined accurately even by the excellent data for NGC 958, as shown in Fig. 5. Hence, this is likely to provide the least informative presentation of SED data amongst the four models used. For APM 08279+5255, the extra population of 20-K dust in this model can be seen generating a break in the low-frequency slope of the SED in Fig. 1. The mass of cold dust present in far-IR luminous galaxies may dominate the total mass of dust at all temperatures, and provide information concerning the history of metal enrichment within them, but it is far from energetically dominant, and difficult to measure to better than a factor of a few.

Figure 4 (caption). As Fig. 2, for the optically thick SED (model 2) fitted to data for Arp 220. The shape of the probability contours illustrates the degeneracy between temperature and the frequency at which a galaxy becomes optically thick, ν0. α = 3.0 and β = 1.5 are assumed, the best-fitting values.
For consistency with our earlier treatments (Blain et al. 1999a; Barnard & Blain 2003) we will adopt the T-α-β model (model 1) in the following discussions. Subject to the effective dust temperature T being slightly different from that in the other SED parametrizations, this model provides a good description of observations for galaxies ranging from low-luminosity spirals to the most luminous high-redshift systems (Fig. 1). The results that follow are not only valid for this description of the SED, but are generic results that apply to all four descriptions discussed above.
PHOTOMETRIC REDSHIFTS
The well-defined pseudo-thermal SED of dusty galaxies (Fig. 1) offers a prospect of recognizing the redshifts of galaxies with the same intrinsic SEDs by comparing far-IR and submm colours. This was discussed in the context of identifying high-redshift galaxies amongst more numerous low-redshift galaxies in a shallow submm-wave survey by Blain (1998), and for observations of the first generation of hard-to-identify submm galaxies, with flux-density limits from IRAS observations, by Hughes et al. (1998) and Eales et al. (1999). Eales et al. noted that it is not possible, a priori, to be certain of whether a dusty galaxy is hot and far away, or cool and close by, and that there could be significant consequences for the cosmological implications of the population of submm-luminous galaxies if the dust temperature/redshift is not estimated correctly.

Figure 5 (caption). An illustration of the significant degeneracy between the warm dust temperature parameter Tw and the warm dust mass fraction parameter Fwc obtained from fitting to the SED data for Arp 220, at a known redshift z = 0.018, using the two-temperature model 3. The large extent of the error ellipse in the direction of Fwc reflects the general difficulty of determining a dust mass from SED data, illustrating the difficulty of associating parameter values in an SED fit with the true physical properties of dust grains in an observed galaxy.

If the dust temperature defining the SED is too hot, and/or the redshift of the population is too great, then the cosmological importance of submm galaxies can be overstated. The reason for the similar effects of increasing both redshift and temperature is that the peak of the SED is determined by the value of ν/T in the exponential term of the Planck function. Redshifting the spectrum by a factor of (1 + z) in frequency ν thus has a directly equivalent effect to modifying the temperature T by the same fraction.
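This equivalence is easy to check numerically: the normalized observed colours of an optically thin greybody depend only on hν(1 + z)/kT, so a hot distant source and a cool nearby one with the same T/(1 + z) are indistinguishable. A minimal sketch (ours):

import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8

def observed_shape(wavelength_um, T, z, beta=1.5):
    # observed-frame spectral shape of an optically thin greybody
    nu_rest = (1.0 + z) * C / (wavelength_um * 1e-6)
    f = nu_rest**(beta + 3.0) / np.expm1(H * nu_rest / (KB * T))
    return f / f.max()   # normalization removes distance/luminosity factors

w = np.array([850.0, 450.0, 250.0, 100.0])
print(observed_shape(w, T=80.0, z=3.0))   # hot and distant...
print(observed_shape(w, T=20.0, z=0.0))   # ...identical colours to cool and nearby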
By assuming a narrow range of SED templates, with a tight distribution of dust temperatures, when trying to match a redshift (Dunlop et al. 2003), this large degeneracy can appear to vanish, leading to unrealistically optimistic estimates of the accuracy of the derived redshifts of ∆z ≃ 0.5. Unless a representative range of SED templates is available, perhaps including the full range of observed SEDs of dusty galaxies, with temperatures from less than 20 K (Reach et al. 1995) to more than 80 K (Table 1), the errors on photometric redshifts could be underestimated. The true error is at least as great as the fractional uncertainty in the dust temperature, even if there are very small errors on the photometric data themselves.
This dust temperature-redshift degeneracy is illustrated in Fig. 6 for the photometric data available for both the low-redshift, low-luminosity galaxy NGC 958, which includes radio data, and the high-redshift, high-luminosity dusty QSO APM 08279+5255. The availability of radio data reduces the degeneracy somewhat, as the different emission mechanisms lead to different spectral indices (Carilli & Yun 1999; Yun & Carilli 2002); however, the radio-submm colour is still a much better indicator of the ratio T/(1 + z) than of T and z separately (Blain 1999a). The degeneracy lies along the locus T ∝ (1 + z): see Fig. 6. If a fraction of the radio emission is generated by an AGN, then the derived redshift will be underestimated by perhaps a large amount.

Figure 6 (caption). An illustration of the degeneracy between the dust temperature T and the redshift z fitted to SED data for two dusty galaxies in Fig. 1, disregarding their known redshifts: NGC 958 at z = 0.019 (upper panel) and APM 08279+5255 at z = 3.8 (lower panel). The T-α-β model 1 SED is assumed, with α and β fixed at their values from Table 1 to minimize the scatter in the fitted values. The data define a narrow track aligned with the locus of constant T/(1 + z). The radio data for NGC 958 lead to the reduction in probability from low to high temperatures along the track of the contours in the upper panel. Observations of colours alone, even with radio data, provide a strong constraint only on the ratio T/(1 + z).
A practical example of a high-redshift galaxy for which photometric redshifts may be sought is SMM J14011+0252, with a known redshift z = 2.56 (Frayer et al. 1999) and a well-determined radio-far-IR SED (Ivison et al. 2001). The galaxy has an achromatic gravitational lensing magnification of a factor of 2.5. Upper limits from IRAS provide weak constraints on its dust temperature and mid-IR spectral index α. Values of T and z that provide good fits to the SED data are shown in Fig. 7, setting aside the known redshift z = 2.56. The true redshift corresponds to T ≃ 33 K; note, however, that the extent of the 1σ probability contour in the fit, even in the lower panel for which the assumed fractional errors on the photometric data points are reduced to an artificially low 2 per cent, is T = 29 (+14, −9) K or z = 2.2 ± 1.2, hardly useful redshift information. Without increasing the accuracy of the observed data, no useful result for redshift z alone can be quoted, as the probability contours are open in the upper panel.
A luminosity-temperature (LT) relation to the rescue?
The fit for SMM J14011+0252, even with greater observational accuracy assumed, shows the large degeneracy between temperature and redshift when fitting photometric data. Is it possible to improve the redshift accuracy by inferring the luminosity for the galaxy if T and z are taken to lie along the tracks of maximum probability in the direction of T ∝ (1 + z) in Figs 6 and 7? If there is a known link between luminosity and dust temperature, an LT relation (Dunne et al. 2000; Dale et al. 2001; Blain et al. 2002; Dale & Helou 2002; Barnard & Blain 2003; Chapman et al. 2003), then a colour-magnitude diagram could be used to locate galaxies on the degenerate T ∝ (1 + z) tracks, allowing redshifts to be determined. The inclusion of luminosity information is an implicit assumption in the photometric redshift technique with a narrow range of SED parameters discussed by Hughes et al. (2002), Aretxaga et al. (2003) and Dunlop et al. (2003).
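Schematically, the procedure amounts to intersecting the luminosity inferred along the degenerate track with an assumed LT relation; the crossing point fixes T, and hence z. The sketch below illustrates the idea with hypothetical stand-in functions (the anchor point, power-law slopes and normalizations are invented for illustration, not the fitted relations of this paper):

import numpy as np
from scipy.optimize import brentq

T_REF, Z_REF = 33.0, 2.56          # anchor: fitted T at the known redshift

def z_of_T(T):
    # redshift along the degenerate track, with T ∝ (1 + z)
    return (1.0 + Z_REF) * T / T_REF - 1.0

def L_track(T):
    # hypothetical luminosity [L_sun] inferred along the track
    return 3e12 * (T / T_REF)**3.0

def L_LT(T):
    # hypothetical low-redshift LT relation, L ∝ T^2
    return 2.5e12 * (T / 38.0)**2.0

T_star = brentq(lambda T: L_track(T) - L_LT(T), 15.0, 80.0)
print(f"crossing at T = {T_star:.1f} K, implied z = {z_of_T(T_star):.2f}")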
In Fig. 8 we show the bolometric luminosity L inferred for each temperature T and redshift z along the track of maximum probability in both the upper and lower panels of Fig. 7. The luminosity is plotted as a function of temperature, but the unique redshift associated with each temperature can be read from the tracks in Fig. 7. The ranges of fitted LT values that lie within 1σ of the most probable track in Fig. 7 are also shown, enclosed by the thinner solid and dashed lines in Fig. 8. The inferred luminosity increases with increasing temperature/redshift, and does so more rapidly than the LT relation inferred for low-redshift IRAS galaxies by Chapman et al. (2003), which is shown by the dotted lines in Fig. 8 and has a scatter of 0.14 dex in the interquartile range.
By comparing the two panels of Fig. 7 and the form of the LT curves for each panel shown in Fig. 8, it is clear that reducing the size of the errors on the photometric data does not change the width of the inferred track in the luminosity-temperature/redshift space significantly, whereas including luminosity information does restrict the range of plausible temperatures/redshifts for galaxies with photometric data to those lying between the thin dotted lines. Note, however, that the 1σ spread in the low-redshift LT relation covers a range T = 38 ± 10 K, a dispersion of about 25 per cent.

Figure 7 (caption). A fit to temperature and redshift derived from all existing data for the z = 2.56 submm galaxy SMM J14011+0252 (Frayer et al. 1999; Ivison et al. 2001). In the upper panel the true observational errors are assumed, while in the lower panel the fractional errors are set to only 2 per cent, about five times better than the real data. In both panels there is a huge degeneracy in the direction T ∝ 1 + z, and so little meaningful redshift information is available from the colours alone, even with unreasonably small errors. In the lower panel there is a clear maximum probability. The best-fitting line is also deflected between low and high temperatures due to the different temperature dependence of the submm and radio SEDs.
The associated redshift, obtained by reading off the maximum probability track in Fig. 7, based on unrealistically accurate data, indicates a range z = 2.9 ± 0.9; the result for the less precise existing data shown in the upper panel of Fig. 7 is z = 3.4 ± 1.1. While these ranges include the true redshift z = 2.56, the uncertainties make the results of little use when compared to the exact values of T and L that could be found from a spectroscopic redshift. The spread in the derived redshifts, assuming the measured scatter in the low-redshift LT relation (Chapman et al. 2003), is a factor of 2 greater than the photometric redshift accuracy claimed from the SMM J14011+0252 data by Hughes et al. (2002; z = 2.9 (+0.6, −0.4)). The corresponding range in luminosity covers 6 × 10^12 to 2 × 10^13 L⊙, a factor of approximately 3 (Fig. 8). Photometric redshifts cannot be obtained to the accuracy proposed by Hughes et al. (2002) without assuming an unreasonably tight dispersion in the LT relation, by taking an inadequate range of SED functions into account.

Figure 8 (caption; opening truncated). [...] Fig. 7. In both cases the luminosity is corrected for the known magnification factor of 2.5. Each temperature corresponds to a different redshift, T ∝ (1 + z). At the known redshift z = 2.56 the fitted temperature is approximately 35 K (Fig. 6). The low-redshift LT relation derived from IRAS data and its 0.14-dex interquartile range uncertainty (Dale et al. 2001; Dale & Helou 2002; Chapman et al. 2003) is shown by the thick and thin dotted lines respectively. The uncertainty in the LT relation dominates the error on an inferred T-z value. A scatter in temperature T of approximately 15 per cent corresponds to a scatter in inferred z of approximately 25 per cent.

Figure 9 (caption). The bolometric luminosity inferred from photometric data for the z = 0.019 spiral galaxy NGC 958, the z = 0.018 ULIRG Arp 220 and the ultraluminous z = 3.87 galaxy APM 08279+5255 (demagnified by a factor of 50), compared with the result for SMM J14011+0252 and the low-redshift LT relation shown in Fig. 8. The errors on the track traced for all three galaxies are very small. The curves for NGC 958 and Arp 220 intersect the LT relation close to their expected temperatures, but the curve for APM 08279+5255 does not.
Knowledge of an LT relation for submm galaxies is thus unlikely to rescue far-IR/submm/radio photometric redshifts from the temperature-redshift degeneracy. The error in the derived temperature/redshift for an individual galaxy is expected to be dominated by the uncertainty in the LT relation. Once a spectroscopic redshift is obtained for a submm galaxy, however, its temperature and luminosity can be determined quite accurately: compare the widths of the probability contours in Figs 5-7 at constant redshift. Temperatures and luminosities accurate to about 20 and 50 per cent, respectively, are thus expected.
Note that the Chapman et al. (2003) low-redshift LT relation shown by the dotted lines in Fig. 8 is consistent with estimates of the temperature of the bulk of the submm-selected galaxy population derived by comparing their multiwavelength properties (Trentham, Blain & Goldader 1999). Using the T-α-β SED description (equation 1), these temperatures lie close to 40 K at luminosities of several × 10^12 L⊙, assuming a high redshift (z > 1).
The SEDs discussed earlier, for NGC 958, Arp 220 and APM 08279+5255, can be analyzed in the same way as the data for SMM J14011+0252 shown in Fig. 8 (see footnote 2). The resulting tracks in the LT diagram are shown in Fig. 9. The LT curves for NGC 958 and Arp 220 intersect with the low-redshift Chapman et al. LT relation at temperatures of approximately 28 and 38 K respectively. These results are very close to their fitted temperatures, taking into account their redshifts, which are 29 and 38 K respectively in model 1 (Table 1). Hence, in the absence of redshift information, a photometric redshift derived for both of these galaxies would be reliable, although the uncertainty would be dominated by the LT relation.
However, for APM 08279+5255 the curves intersect at T ≃ 34 K, nowhere near its true temperature of T ≃ 80 K (in model 1), even after correcting for magnification by an assumed factor of 50. This is at least a warning that some galaxies would have very discrepant photometric redshifts derived from far-IR and submm data using this technique. This galaxy is significantly hotter than other high-redshift dusty galaxies, but it is certainly an interesting object, and owing to its great luminosity, one that could be found quite easily in future far-IR and submm-wave surveys. The observational errors for all three of these brighter galaxies are much smaller than for SMM J14011+0252, and so the discrepancies in their photometric redshifts definitely reflect a dispersion in the LT relation rather than errors in the data points.

Footnote 2. SMM J14011+0252 seems to set an unfortunate precedent for studies of submm galaxies: it is unusually bright in the optical range, and it shows no sign of the presence of an AGN, unlike most other identified galaxies. It lies squarely on the radio-far-IR correlation. Although it is relatively easy to study, SMM J14011+0252 is perhaps unrepresentative of submm galaxies as a whole.

Figure 10 (caption). The LT values derived for 83 IRAS BGS galaxies observed by SCUBA (Dunne et al. 2000), fitted using the T-α-β SED description. The different symbols represent different redshifts: z < 0.01, open square; 0.01 ≤ z < 0.02, filled square; 0.02 ≤ z < 0.03, empty triangle; 0.03 ≤ z < 0.04, filled triangle; 0.04 ≤ z < 0.05, empty circle; and z ≥ 0.05, filled circle. Larger symbols represent more accurate results. The overplotted solid and dashed lines trace the loci of a 0.5-Jy 60-µm source and a 60-mJy 850-µm source, respectively, both at z = 0.02. They show the direction in which observational selection effects could truncate the distribution of points. The dashed 850-µm curve runs parallel to the distribution of the data points, and so there might be a selection effect against detecting hot sources in the sample. However, almost all the targeted IRAS sources were detected at 850 µm, and so the lack of sources at the top left of the field probably reflects a genuine absence of hot, low-luminosity galaxies in the IRAS sample.
In order to estimate reliable photometric redshifts from submm, far/mid-IR and radio observations it is necessary to be certain of the nature of, and scatter in, the LT relation. This can be investigated using a variety of samples of IR-luminous galaxies with known redshifts. It is also important to determine whether the relation evolves with redshift. If it does, then this could provide insight into the astrophysics of dusty galaxies, in addition to important information for finding photometric redshifts.
Observed LT relations
The accurate determination of the temperature and luminosity of dusty galaxies with known redshifts using a three-parameter T-α-β SED description was illustrated above. In order to determine a temperature reliably, the redshift must be known, and flux density data must be available at frequencies both above and below the peak of the SED. Radio data can be used as a proxy for mid-/far-IR data if the galaxies can be assumed to lie on the far-IR-radio correlation. The number of galaxies for which all the required far-IR/radio and submm information is available is relatively small. We now derive LT relations from three different samples of galaxies to investigate the properties of the relation using the limited existing data.

Figure 11 (caption). The LT relation of 72 galaxies in both the IRAS Faint Source Catalog and the VLA-FIRST radio survey catalogue (Stanford et al. 2000), fitted using the T-α-β SED description. The different symbols represent different redshifts: z < 0.1, open square; 0.1 ≤ z < 0.2, filled square; 0.2 ≤ z < 0.3, empty triangle; 0.3 ≤ z < 0.4, filled triangle; 0.4 ≤ z < 0.5, empty circle; and z ≥ 0.5, filled circle. Better-fitting data are plotted using a larger symbol. The solid and dashed overplotted lines show the direction in which observational selection effects would be important, and trace the loci of a 100-mJy 60-µm source and a 1-mJy 1.4-GHz source, respectively, both at z = 0.25. Both curves cut directly across the cloud of points, and so the lack of sources away from the cloud of points at luminosities L > 10^12 L⊙ is not likely to be due to selection effects.

Figure 12 (caption). The LT relation for 18 galaxies detected at radio wavelengths using MERLIN and in the mid-IR using ISO in the HDF (Garrett 2002), fitted using the T-α-β SED description (equation 1). The different symbols represent different redshifts: z < 0.2, open square; 0.2 ≤ z < 0.4, filled square; 0.4 ≤ z < 0.6, empty triangle; 0.6 ≤ z < 0.8, filled triangle; 0.8 ≤ z < 1, empty circle; and z ≥ 1, filled circle. Larger symbols represent more accurate results. Note that the temperatures are derived by assuming that the far-IR-radio correlation holds, and their errors are much larger than those for the other samples shown in Figs 10 and 11. The solid and dashed overplotted lines show the direction in which observational selection effects would be important, and trace the loci of a 0.5-mJy 15-µm source and a 50-µJy 1.4-GHz source respectively, both at z = 0.5. Both lines cut directly across the points, and so the lack of sources away from the trend is not likely to arise from selection effects.
Low-redshift IRAS galaxies
We have already discussed the low-redshift LT relation derived from a large sample of IRAS galaxies by Chapman et al. (2003): see the dotted lines in Figs 8 and 9. However, these results are based on radio and far-IR data alone. It would be very useful to include submm data at intermediate wavelengths in order to be sure of the form of the SED.
A much smaller number of low-redshift galaxies in the IRAS catalogue were observed at submm wavelengths in the SLUGS survey by Dunne et al. (2000) and Dunne & Eales (2001). The results provide important information about both the local, low-luminosity LT relation and galaxy SEDs. Many of these galaxies also have radio data from the NED database, and we have exploited this information, assuming the far-IR-radio correlation to improve the accuracy of the derived values of bolometric luminosity and temperature for these galaxies as compared with the SLUGS values. The resulting SEDs are thus obtained by combining radio data with the flux densities at 850, 100 and 60 µm considered by Dunne et al. (2000). The resulting quantities for 83 galaxies in the SLUGS sample are shown in Fig. 10, along with two lines that trace the luminosity and temperature of a galaxy that is required to generate the typical flux density of a galaxy in the sample at a typical redshift for the sample. These curves provide an indication of the possible role of selection effects in limiting the extent of the scatter in the derived LT relation. If the lines lie parallel to a correlation in the plotted points, then some of the correlation could be due to selection effects acting to reduce the intrinsic scatter, by removing galaxies from the sample on the low-luminosity side of the line. This is not the only way in which selection effects could modify the observed or inferred scatter in an LT relation; however, it provides a direct indication of whether or not selection effects are likely to be significant.
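These loci can be computed as sketched below: for each temperature, scale a greybody so that its observed flux density just reaches the survey limit at an assumed redshift and luminosity distance, then integrate for the bolometric luminosity. This is our illustration, not the exact procedure used for the figures; the flux limit, redshift and distance in the example are representative values only.

import numpy as np
from scipy.integrate import quad

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8
L_SUN, MPC = 3.83e26, 3.086e22

def f_nu(nu, T, beta=1.5):
    return nu**(beta + 3.0) / np.expm1(H * nu / (KB * T))

def selection_locus_L(T, S_limit_jy, wavelength_um, z, D_L_mpc, beta=1.5):
    # bolometric L [L_sun] needed for a greybody at T to reach the flux limit
    D = D_L_mpc * MPC
    nu_rest = (1.0 + z) * C / (wavelength_um * 1e-6)
    # S_obs = (1 + z) L_nu(nu_rest) / (4 pi D_L^2), with L_nu = A f_nu
    A = 4.0 * np.pi * D**2 * (S_limit_jy * 1e-26) / ((1.0 + z) * f_nu(nu_rest, T, beta))
    L_bol, _ = quad(lambda nu: A * f_nu(nu, T, beta), 1e11, 2e13)
    return L_bol / L_SUN

# e.g. a 60-mJy 850-um limit at z = 0.02 (D_L ~ 85 Mpc assumed)
for T in (20.0, 30.0, 40.0):
    print(T, selection_locus_L(T, 0.06, 850.0, 0.02, 85.0))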
For the SLUGS data, the role of selection effects is unlikely to be very significant, as almost all of the targeted IRAS galaxies were detected at submm wavelengths (Dunne et al. 2000). Note that the luminosities and temperatures of the Milky Way (Reach et al. 1995; L ~ 3 × 10^10 L⊙; T ≃ 17 K) and NGC 891 (Alton et al. 1998; L ≃ 5.3 × 10^9 L⊙; T ≃ 20 ± 2 K) lie at luminosities slightly below the cloud of data points, whereas the low-luminosity, hot starburst galaxy M82 (L ≃ 2.7 × 10^10 L⊙; T = 42 ± 2 K) is significantly offset above the cloud. The apparent dispersion in the LT relation shown in Fig. 10 may thus be less than the true dispersion. The scatter in the temperatures derived for SLUGS galaxies is comparable to the scatter determined from the SEDs of low-redshift IRAS galaxies shown in Fig. 8 (Chapman et al. 2003).
VLA-IRAS galaxies at moderate redshifts
At higher redshifts, IRAS flux densities are available only for the most luminous galaxies. The fluxes from the IRAS Faint Source Catalog (FSC) were correlated with data from the wide-field VLA-FIRST radio survey (Becker, White & Helfand 1995) to provide information on the LT relation for a large sample of galaxies with bolometric luminosities greater than ~10^12 L⊙ by Stanford et al. (2000). The LT values for these galaxies were calculated assuming that the far-IR-radio correlation holds. The LT values for the 72 galaxies from this sample with reliable redshifts that are not fitted by extremely cold temperatures (T < 5 K; such fits signal radio-loud sources) are plotted in Fig. 11. About 25 further galaxies from the sample of Stanford et al. fall into the very cold category, indicating a likely ~25 per cent contamination fraction in the sample from radio-loud AGN that emit a radio flux density greater than that expected from the observed low-redshift far-IR-radio correlation. The tracks of the lines through the data confirm that neither radio nor IR selection effects should severely bias the LT relation from the sample of Stanford et al., which lies on a slightly hotter track, and is scattered by a greater amount to higher temperature as compared with the SLUGS sample. This suggests that hotter, perhaps more AGN-rich, galaxies are represented in this more luminous, partially radio-selected sample. A similar sample of about 40 luminous southern galaxies, with radio images from the Molonglo telescope and redshifts from the 2dF multi-object spectrograph, has recently been compiled by Sadler et al. (2002), and could be analyzed in a similar way. Our understanding of the LT relation would be much improved if submm fluxes could be determined for these galaxies with known moderate redshifts and radio and far-IR flux densities. These observations would both provide tighter constraints on their positions in the LT plane, and offer the possibility to search for evolution in the LT relation (Chapman et al. 2003).
Faint radio and mid-IR selected galaxies in the Hubble Deep Field
Another sample useful for investigating the LT relation comprises the 18 z ~ 1 faint radio galaxies detected at 15 µm using ISO in the Hubble Deep Field (HDF; Garrett 2002). These galaxies are sampled at mid-IR wavelengths much shorter than the peak of their SEDs, but the radio data provide a proxy for the 60-100-µm emission close to the peak, assuming that the far-IR-radio correlation holds. A coarse upper limit to the 60-µm flux densities of the galaxies is provided by an XSCANPI analysis (footnote 3) of the relevant IRAS scans. A constraint on the 60-µm flux densities of these galaxies is essential for fitting temperatures to the radio-mid-IR data. The inferred LT relation is shown in Fig. 12: it exhibits a remarkably narrow dispersion, but the errors on the data points are large. Galaxies at the mid-IR wavelengths probed have spectral features associated with emission from PAH molecules, and so this tight correlation is all the more surprising. A link between the intensity of PAH emission and the bolometric luminosity of a galaxy has been commented on for a sample of only five galaxies by Haas, Klaas & Bianchi (2002). An apparent link between 15-µm emission and bolometric luminosity for a larger sample of low-redshift galaxies is discussed in Sections 5.3 and 5.5 of Dale & Helou (2002). The tracks of typical galaxies in the sample overplotted on Fig. 12 show that selection effects are unlikely to be responsible for the tight correlation.
Footnote 3. http://www.ipac.caltech.edu/ipac/services/xscanpi.html

It seems unlikely that the mid-IR flux density of a galaxy could be better correlated with its temperature and luminosity than data obtained closer to the peak of the SED. This would require some underlying correlation between the slope of the mid-IR SED and the total luminosity that is not apparent in existing datasets. Much larger samples from SIRTF, from 2003 onwards, should resolve this question.
Submm-selected high-redshift galaxies
A few distant submm- and far-IR-selected galaxies (Ivison et al. 2001; Frayer et al. 1998, 1999; Chapman et al. 2002a) also have redshifts, and many more have recently been identified using deep radio-selected samples (Chapman et al. 2002b). Initial results indicate that it is likely that the values of temperature and luminosity derived for high-redshift submm-selected dusty galaxies display a similar scatter to that shown in Fig. 11. These samples are destined to grow in size, and will provide the ultimate test of the high-redshift LT relation and the reliability of far-IR photometric redshifts for high-redshift galaxies. This sample is extremely useful, as it consists of the target population for which photometric redshifts are sought at far-IR and submm wavelengths. Inferred values of luminosity and temperature for high-redshift dusty galaxies with known redshifts can be found in Chapman et al. (2002b).
LT relations from a combination of samples
The combined LT relation for all three samples is compared in Fig. 13, segregated between samples by the plotting symbol. The LT relations estimated for low-redshift IRAS galaxies by Chapman et al. (2003), based on 60-100 µm colours and the SED templates of Dale et al. (2001), and for both merging and quiescent galaxies at low redshifts by Barnard & Blain (2003) are represented by the lines. Note that the temperature inferred by Chapman et al. (2003) is Tmin as defined in equation (4), and so is lower by 10-20 per cent as compared with the definitions used here, reducing the difference between the solid line that represents the Chapman et al. LT relation and the dashed line representing the LT relation for luminous merging galaxies.
The dispersion in the LT relation exceeds 25 per cent (0.1 dex) at all luminosities. These results demonstrate that the dispersion in the LT relation within individual samples is less than the dispersion between samples. The Stanford et al. sample is more likely to be significantly affected by radio emission from AGN, which could introduce additional scatter, causing an overestimate of luminosity and an underestimate of temperature. High-luminosity sources that would be selected in submm-wave surveys are thus observed to have dust temperatures that range between 25 and 60 K. The median temperature and its RMS scatter are about 45 ± 10 K, a dispersion of approximately 0.1 dex.
Galaxies with low dust temperatures are difficult to select in the far-IR surveys from which these LT results are drawn, and so there may not necessarily be a lack of very luminous, cool galaxies, despite the absence of points in the lower right corner of Fig. 13. By contrast, there appears to be a real lack of hot, low-luminosity galaxies in the SLUGS and Stanford et al. IRAS-selected samples. An exception is one of the closest, brightest IRAS galaxies, M82.
Despite its low luminosity of only 2.7 × 10^10 L⊙, M82 has a temperature of 42 ± 2 K, in the empty upper left region of the figure. Such hot dwarf galaxies are difficult to find in the IRAS survey because of the limited survey volume for low-luminosity galaxies; however, much deeper surveys using SIRTF may find that this region of the LT plane is more thoroughly populated with moderate-redshift galaxies.
The same points are replotted in Fig. 14, this time segregated by plotting symbol in redshift rather than by sample. There is a natural tendency for more luminous galaxies to be selected at greater redshifts. As the population of dusty galaxies is known to evolve strongly in luminosity with redshift, this trend should be all the more apparent. However, at all redshifts the scatter in the LT relation from sample to sample appears to span the full range of the scatter seen in the combined population. The dispersion thus appears to reflect a real spread in the physical properties of galaxies present at all redshifts, and not to be due solely to a systematic evolution in a tightly dispersed LT relation with redshift. There is no strong evidence in samples of IRAS-selected galaxies for the LT relation to evolve with redshift over the approximate range 0 < z < 0.3 (Chapman et al. 2003).
The significant dispersion in the LT relation indicates that there are limited opportunities for discriminating between hot-distant and cool-closer galaxies on the grounds of their far-IR/submm colours even with an assumed LT relation. With an LT dispersion of at least 25 per cent (Fig. 13), the fractional accuracy of a photometric redshift determination is never likely to be better than 30 per cent. Current evidence thus suggests that spectroscopic redshifts will remain essential to interpret the nature of submm-selected galaxies, unless increased sample sizes reveal that the existing samples of high-redshift dusty galaxies are scattered by a larger amount in the LT plane than the true distribution, which is currently well determined only at low redshifts (Chapman et al. 2003). We think that this is unlikely, and rather that the observed scatter in the distribution will remain the same or grow larger as more information becomes available. Larger samples will provide a better description of the distribution of galaxies in the wings of the LT distribution, and unearth further examples of rare galaxies like M82 and APM 08279+5255, that are not represented in the LT plane at the current sampling rate.
FUTURE SED MEASUREMENTS
The greatest change in our understanding of the properties of dusty galaxies will be brought about by the launch of SIRTF in 2003 January. With a 0.85-m aperture, and capable of diffraction-limited imaging in far-IR bands at effective wavelengths of 24, 70 and 160 µm, SIRTF will provide key information on the SEDs of at least several million galaxies. Unlike galaxies detected by IRAS, these will extend well beyond a redshift of 1, and will be located to an accuracy of order 5 arcsec. By combining the positions of galaxies detected by SIRTF with spectroscopic redshifts obtained as part of the Sloan Digital Sky Survey (SDSS), which is reasonably complete to z ≃ 0.3, it should be possible to obtain SED data for of order 10^4 galaxies with exact redshifts, and thus to determine the form of the LT relation discussed in Fig. 14 in much more detail. At that point the accuracy of photometric redshifts based on far-IR colours combined with an LT relation can be assessed realistically, at least at low to moderate redshifts.

Figure 13 (caption). The combined LT scatter for all galaxies shown in Figs 10-12: Dunne et al. (2000), squares; Garrett (2002), lozenges; and Stanford et al. (2000), circles. The better-fitting galaxies are represented by larger symbols. The overplotted lines show the results from other low-redshift LT investigations: the solid line shows the result of Chapman et al. (2003), and the dashed and dotted lines represent the results for merging and quiescent galaxies by Barnard & Blain (2003) respectively. Galaxies appear to avoid the hot, low-luminosity region to the upper left of the figure, which is not likely to be empty due to systematic selection effects.

Figure 14 (caption). The combined LT relation illustrated in Fig. 13, this time as a function of redshift. Galaxies at z ≤ 0.2 are represented by empty squares, at 0.2 < z ≤ 0.5 by filled squares, at 0.5 < z ≤ 1 by empty triangles, at 1 < z ≤ 2 by filled triangles, at 2 < z ≤ 3 by empty circles and at z > 3 by filled circles. There is a natural trend to greater luminosities/temperatures at higher redshifts. In each redshift interval temperatures range over the full extent of the scatter in the diagram, and there is no evidence for a significant change in the LT relation with redshift (Chapman et al. 2003).
The information can be extended out to higher redshifts by considering optical photometric redshifts, derived reliably from spectral breaks in the SDSS data for galaxies too faint/distant to be targeted for SDSS spectroscopy. At the same time, many of the galaxies with known high redshifts shown in Fig. 14, which already have some spectral information at far-IR/submm wavelengths, will be targets for SIRTF. More information will thus soon be available about their SEDs. By combining radio observations with several far-IR data points it should be possible to generate a more accurate LT relation, and to measure any changes in the far-IR-radio correlation with increasing redshift by the end of SIRTF's mission in approximately 2008. The true potential for far-IR and submm-wave photometric redshifts can then be assessed securely. The reliability of the far-IR-radio correlation can in the meantime be investigated using data for any galaxy with a known redshift that has three accurate flux density measurements, one each at radio, submm and far-IR wavelengths.
CONCLUSIONS
We have discussed the description of the SEDs of dusty galaxies using four different models that are appropriate to describe the data available at present and likely to be generated by forthcoming space missions. One of the parameters in each model always describes the peak frequency of the thermal dust SED (a 'temperature'), while two spectral indices, the 'α' and 'β' parameters, describe the hot and cold dust components respectively. Observational data constrain these SED descriptions to within 10 per cent accuracy across the full range of interesting wavelengths, from about 20 µm to deep in the radio waveband. It is important to be careful in associating the inferred values of dust temperatures, emissivities and masses with the values of real physical parameters.
There is a huge degeneracy between temperature and redshift when fitting the SED of a distant galaxy. Assuming some link between the luminosity and the SED allows this degeneracy to be broken, but a range of available information indicates that this relationship has a very considerable scatter, of up to a factor of 2. Unless this LT relation is known to have a scatter as narrow as the required accuracy of the photometric redshift, continuum far-IR/submm/radio photometric redshifts are almost useless for constraining the redshift of an individual galaxy.
In order to finally assess their usefulness it is essential to quantify the LT relationship accurately, based on a large number of galaxies with known redshifts and well-sampled SEDs. This information is not available at present, but will be generated by SIRTF. Only if the true scatter in the LT relation turns out to be less than about 20 per cent will the photometric redshift technique be useful. Spectroscopic observations to fix the redshifts and SEDs of dusty galaxies remain essential to understand the population, and it is important to develop new types of spectroscopic instruments that can address these questions, for example wide-band detectors for multiple CO emission lines (Bradford et al. in preparation). | 2014-10-01T00:00:00.000Z | 2002-09-21T00:00:00.000 | {
"year": 2002,
"sha1": "66d0ca08415e629c601a57641fd63865c110a232",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/338/3/733/4271207/338-3-733.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "66d0ca08415e629c601a57641fd63865c110a232",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
8947148 | pes2o/s2orc | v3-fos-license | Isolated secretion granules from parotid glands of chronically stimulated rats possess an alkaline internal pH and inward-directed H+ pump activity.
Secretion granules have been isolated from the parotid glands of rats that have been chronically stimulated with the beta-adrenergic agonist, isoproterenol. These granules are of interest because they package a quantitatively different set of secretory proteins in comparison with granules from the normal gland. Polypeptides enriched in proline, glycine, and glutamine, which are known to have pI's greater than 10, replace alpha-amylase (pI = 6.8) as the principal content species. The internal pH of granules from the treated rats ranges from 7.8 in a potassium sulfate medium to 6.9 in a choline chloride medium. The increased pH over that of normal parotid granules (approximately 6.8) appears to reflect the change in composition of the secretory content. Whereas normal mature parotid granules have practically negligible levels of H+ pumping ATPase activity (Arvan, P., G. Rudnick, and J. D. Castle, 1985, J. Biol. Chem., 260, 14945-14952), the isolated granules from isoproterenol-treated rats undergo a time-dependent internal acidification (approximately 0.2 pH unit) that requires the presence of ATP and is abolished by an H+ ionophore. Additionally, an inside-positive granule transmembrane potential develops after ATP addition that depends upon ATP hydrolysis. Two independent methods have been used that exclude the possibility that contaminating organelles are the source of the H+-ATPase activity. Together these data provide clear evidence for the presence of an H+ pump in the membranes of parotid granules from chronically stimulated rats. However, despite the presence of H+-pump activity, fluorescence microscopy with the weak base, acridine orange, reveals that the intragranular pH in live cells is greater than that of the cytoplasm.
pH of acidic intracellular compartments [40]) to cultured pituitary tumor cells causes a diversion of nascent glycoproteins from normal storage in granules into an intracellular pathway leading to constitutive discharge (37). Third, vacuoles located at the trans-face of the Golgi complex in fibroblasts and cultured hepatoma cells accumulate similar weak bases that can be visualized by electron microscopy (2, 45). Fourth, certain enzymes involved in Golgi/post-Golgi processing of secretory polypeptide precursors (20, 21, 27, 31, 33) exhibit acidic pH optima with only low activity levels at neutral pH. Finally, in exocrine pancreatic cells, condensing vacuoles, but not mature granules, accumulate the biogenic amine, serotonin (48), by a process that in other cell types is known to be driven by ATP-dependent H+ pumping.
In parallel with segregation, packaging, and storage of secretory proteins, the course of exocrine granule formation in the parotid acinar cell involves the progressive disappearance (by removal or suppression) of selected Golgi activities (such as nucleoside diphosphatase [41], acid phosphatase [23], and galactosyl transferase [11]) with retention (or emergence) of other enzymatic activities necessary for granule functions. Thus, the nearly undetectable levels of H+-ATPase in mature parotid granules may represent sorting or suppression that proceeds in the absence of any sustained role for H+ pumping in normal storage of (parotid) exocrine secretory proteins, whereas the presence of H+-ATPase activity in granules that accumulate biogenic amines reflects a sustained role in intragranular packaging and storage in certain other cell types.
In the present study it was our intention to perturb parotid secretory composition in an attempt to evaluate parameters which could influence granule packaging, including H+-ATPase activity. To achieve this goal, we have altered the transcriptional program of parotid cells by chronic stimulation (in vivo) with the β-adrenergic agonist, isoproterenol (52). The major phenotypic effects of this treatment on the acinar cell are the increased granule size and number (10) and the dramatic alteration of salivary composition such that the normal spectrum of secretory proteins is largely replaced by a family of highly basic species (38).
We have isolated a highly purified fraction of granules from chronically stimulated parotid tissue and have confirmed that the content of stored proteins is markedly different from that found in normal granules. Further, we have found that these granules (both in vitro and in situ) exhibit an alkaline rather than an acidic internal pH, yet they contain inward-directed H+-translocating ATPase activity. By contrast, a different activity found in the Golgi/post-Golgi region (galactosyl transferase, [43]) remains efficiently excluded from the granule compartment. These findings raise the interesting possibility that under selected conditions, the H+-ATPase activity may be purposefully retained in the exocrine storage compartment.
A portion of the studies described herein has appeared in the form of an abstract (6).
Isolation of Parotid Granules from Isoproterenol-treated Rats
All steps for processing parotid tissue were carried out at 4°C. Rats were killed by cardiac incision under ether anesthesia. The enlarged parotid glands of two or three rats were excised, cleaned of connective tissue, minced thoroughly with razor blades, and gently homogenized (with four strokes in a cylindrical glass, Teflon pestle homogenizer at 1,300 rpm) in 33 ml (~15% wt/vol) of ice-cold sucrose (0.3 M), 4-morpholinopropane sulfonic acid (MOPS) (2 mM [pH 6.9]), and MgCl2 (0.2 mM). The resulting suspension was then spun at 300 g for 2.5 min; unbroken cells, nuclei, and some secretory granules sediment under these conditions. The supernatant fluid was saved; the pellet was rehomogenized in another 33 ml of fresh medium and respun as before. This supernatant was pooled with the first, adjusted to 1.2 mM EDTA, filtered through one layer of nylon screen (20-µm mesh), and dispersed with three gentle strokes in a Dounce homogenizer (tight pestle). This "homogenate" (nuclei removed) forms the basis for recoveries calculated for enzymes assayed.
The homogenate was mixed with 2 vol of buffered Percoll medium: 0.3 M sucrose, 86% Percoll, 1 mM EDTA, and 2 mM MOPS, pH 6.9 (Percoll contributes <12 mosM to the osmolality of the diluted homogenate). This mixture was loaded into eight polycarbonate centrifuge tubes containing a 2-ml cushion of buffered Percoll medium and spun in a rotor (model 60 Ti; Beckman Instruments, Inc., Fullerton, CA) at 15,000 rpm for 30 min.
Granules were banded in the lower (denser) quarter of the self-formed gradients; most other organelles were near the top. The collected granule band was mixed, reloaded into two polycarbonate centrifuge tubes, and spun again in the 60 Ti rotor at 25,000 rpm for 30 min. Granules were well separated from a faint underlayer containing a few erythrocytes and nuclei and an overlying layer mostly comprising mitochondria. These layers were saved along with the residual gradient fluid for assays. The granule band was collected and diluted at least fivefold with a solution containing 0.3 M sucrose, 1 mM EDTA, 2% polyethylene glycol, and 2 mM MOPS (pH 6.9), which serves to reduce the buoyant density of the medium and favors disaggregation of Percoll from the granule surfaces. Granules were then pelleted by centrifugation at ~2,300 g for 30 min. A significant number of intact granules remained in the final supernate under these conditions (chosen to minimize the sedimentation of Percoll). The final pellets were white and thus similar in gross appearance to those from normal glands.
Biochemical Analyses
Enzymatic activities of α-amylase, cytochrome c oxidase, γ-glutamyl transferase, β-N-acetyl glucosaminidase, and UDP-galactosyl transferase were determined as described previously (5). Protein was assayed with fluorescamine as described by Udenfriend et al. (49) using BSA as standard.
For amino acid analysis, parotid granule content proteins and 10 µmol norleucine (used as an internal standard) were hydrolyzed for 20 h at 110°C in 6 N HCl. Amino acids were resolved and quantitated using a Dionex D-500 analyzer (Durrum Instruments Co., Sunnyvale, CA).
SDS PAGE of secretory polypeptides (reduced with 2-mercaptoethanol) was carried out on 10-15% (wt/vol) polyacrylamide linear gradients using the Laemmli discontinuous buffer system (32). After electrophoresis, gels were fixed and stained in 0.04% Coomassie Brilliant Blue in 25% isopropanol plus 10% acetic acid (19) and destained in 10% acetic acid. Omission of isopropanol from destaining solutions aids in retaining proline-rich secretory proteins in the stained polypeptide profile.
Microscopy, Immunolocalization, and Autoradiography
In preparing samples for routine observation by light and electron microscopy, parotid tissue and granule suspensions were fixed for ≥3 h at 4°C in 3% (wt/vol) glutaraldehyde and 1% (wt/vol) formaldehyde in sodium cacodylate (or phosphate) buffer (pH 7.2). Granule samples were pelleted by centrifugation (3-5 min, ~3,000 g) after aldehyde fixation and all specimens were then postfixed in OsO4, stained with uranyl acetate, dehydrated, and embedded (in either Epon 812 or Spurr's resin) as described previously (16). Methylene blue-stained 0.5-µm sections were examined using a Zeiss photomicroscope, whereas thin sections (stained with uranyl acetate and lead citrate) were viewed on a Philips 300 electron microscope.
Immunolocalization of parotid secretory proteins was carried out by indirect immunofluorescence on tissue that had been fixed in phosphate-buffered 3% formaldehyde and 0.05% glutaraldehyde, frozen, cryosectioned (5 µm), and permeabilized with 0.3% (wt/vol) Triton X-100 (18). Tissue sections were incubated with rabbit antisera prepared to either purified α-amylase or basic proline-rich proteins and stained subsequently using rhodamine-conjugated goat anti-rabbit IgG.
In preparation for acridine orange (AO) fluorescence microscopy, the parotid glands of chronically stimulated rats were dispersed into a mixture of small cell clumps, acini, and individual cells, using collagenase digestion and mild mechanical shear by repeated pipetting through a series of siliconized glass pipettes of progressively decreasing diameter (1.0-0.4 mm), followed by sieving through 200-µm nylon screen (25). The dispersion medium consisted of 10 ml Eagle's modified minimal essential medium containing ~0.4 U collagenase (see Materials below), 0.1% BSA and 0.01% soy bean trypsin inhibitor, 15 mM Hepes-NaOH [pH 7.4], and was continuously oxygenated with 100% O2 at 37°C.
To collect cell populations containing mast cells, the same medium without collagenase was used for lavage of the rat peritoneal cavity, followed by gravity sedimentation at 0°C. Cells of both types were incubated with 5 µM AO and examined under the microscope within 5-10 min after exposure to the pH probe. Specimens were photographed using both phase illumination (with the condenser diaphragm slightly offset to improve resolution of intracellular organelles) and epifluorescence with a fluorescein filter.
In autoradiographic studies, granule suspensions were incubated with ~2 µCi [3H]methylamine (under conditions identical to those described for biophysical measurements of internal pH, see below) and then were fixed for 60 min at 0°C by the addition of one-seventh volume of 46.5% glutaraldehyde (final concentration, 6.6%) containing 155 mM lithium phosphate buffer (selected for minimal permeability; pH 7.0) (final concentration, 22 mM) and tracer amounts of radioactive methylamine to maintain the extragranular methylamine concentration as a constant. Granules were then sedimented by centrifugation (2 min, ~3,000 g) and fixation was continued overnight at 4°C in a fresh solution of 6.6% glutaraldehyde and 22 mM lithium phosphate (pH 7.0). Subsequently, the pellets were postfixed in OsO4, dehydrated, and embedded in the usual manner (16). Autoradiography was performed on ~100-nm sections of embedded granule pellets (12). Quantitation of autoradiographic grain distributions was carried out on uniform-magnification electron micrographs that were representative of the top, middle, and bottom of pellets (50). Stereologic measurement of the volume fraction of granules in these preparations was made using point-count analysis on a quadratic lattice (50).
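The point-count estimate itself is simple arithmetic (the volume fraction Vv is estimated by the point fraction Pp, i.e. the fraction of lattice points landing on granule profiles); a minimal sketch with hypothetical counts:

def volume_fraction(points_on_granules, total_points):
    # stereological point-count estimator: volume fraction ≈ point fraction
    return points_on_granules / total_points

# e.g. 412 of 1,000 quadratic-lattice points falling on granule profiles
print(volume_fraction(412, 1000))   # -> 0.412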
Measurement of Internal Aqueous Volume and Internal pH of Isolated Granules
These determinations were performed as described for parotid granules from normal rats (7); [14C]sucrose (marker of the excluded H2O volume of granule pellets) was added (1% of total volume) just before termination of incubation by centrifugation. Both [14C]methylamine and [3H]acetate were used as probes of ΔpH; all tracers were used at the concentrations described previously (7). The equilibrium distributions of these probes were calculated with the aid of parallel measurements of intragranular aqueous space (44). Unless otherwise indicated, granules were incubated with either Li2SO4 or MgSO4/Na2ATP such that both sets of samples were maintained at equal osmolality.
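The calculation implied here is the standard weak-base/weak-acid distribution method: for a probe whose neutral form equilibrates across the membrane, the in/out accumulation ratio gives ΔpH. A minimal sketch (ours, not the authors' code; the example ratio is hypothetical):

import math

def internal_ph_weak_base(ratio_in_over_out, ph_out):
    # methylamine (pKa >> pH): pH_in = pH_out - log10([base]in/[base]out)
    return ph_out - math.log10(ratio_in_over_out)

def internal_ph_weak_acid(ratio_in_over_out, ph_out):
    # acetate (pKa << pH): pH_in = pH_out + log10([acid]in/[acid]out)
    return ph_out + math.log10(ratio_in_over_out)

# e.g. methylamine excluded (ratio < 1) from granules in a pH 6.9 medium
print(internal_ph_weak_base(0.13, 6.9))   # -> ~7.8, an alkaline interior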
Measurement of Transmembrane Potential in Isolated Granules
Effects of ATP on Δψ were determined using the equilibrium distribution of tracer amounts of 86Rb+ in the presence of valinomycin (44). In previous studies (8, 24), good agreement was observed between measurements of inside-positive Δψ using 86Rb+ exclusion and S14CN− accumulation; however, the 86Rb+ method was chosen in the present study to avoid background binding of the probe (encountered with S14CN−). Valinomycin (final concentration, 10 µM) was added in absolute ethanol (≤0.5% contribution to sample volume). All incubations were carried out at 25°C in parallel with measurements of intragranular aqueous space. Membrane potential values were calculated using the out-in concentration ratios of radioactive cation according to the Nernst equation (44).

Antisera prepared in rabbits against purified α-amylase and against proline-rich proteins (both from the rat parotid) were the kind gifts of Dr. Richard S. Cameron (Department of Cell Biology, Yale Medical School).
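The Nernst arithmetic used above for Δψ from the 86Rb+ distribution is shown in the sketch below (ours; the twofold exclusion in the example is hypothetical):

import math

R = 8.314       # gas constant [J mol^-1 K^-1]
F = 96485.0     # Faraday constant [C mol^-1]

def membrane_potential_mv(ratio_out_over_in, temp_c=25.0, charge=1):
    # delta-psi (inside minus outside) = (RT/zF) ln([out]/[in]) for a cation
    T = temp_c + 273.15
    return 1000.0 * R * T / (charge * F) * math.log(ratio_out_over_in)

# e.g. 86Rb+ excluded twofold from the granule interior at 25°C
print(membrane_potential_mv(2.0))   # -> ~ +17.8 mV, inside-positive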
Purity and Recovery of Secretion Granules Obtained from Parotid Glands of Isoproterenol-treated Rats
Although interest in the enlarged secretion granules of parotid tissue from isoproterenol-treated rats has been longstanding (9, 10, 47), these granules have not been isolated previously. To obtain a representative population of granules that would be suitable for biophysical studies, we developed a method of isolation using isoosmotic media (see Materials and Methods). Initial processing steps used to prepare the homogenate have been modified from those described previously for normal tissue (7,11) because the tissue from treated animals requires less vigorous homogenization to achieve disruption and because the enlarged granules sediment more readily. Major granule purification is obtained by Percoll density gradient centrifugation, similar to an approach used recently to purify adrenal chromaffin granules (13).
The purity and recovery of granules have been evaluated morphologically and by analysis of marker enzymes. Fig. 1, a and b presents comparative light micrographs of parotid tissue from normal and treated rats, emphasizing that chronic isoproterenol treatment induces an increase in the size and number of granules. Immunofluorescence micrographs showing the localization of α-amylase (Fig. 1 c) and proline-rich proteins (Fig. 1 d) from treated tissue reveal a uniform distribution of these secretory polypeptides among the acinar cells and their granules. This suggests the absence of major compositionally distinct subpopulations. Electron microscopic observation (Fig. 1 e) of the granule fraction from the treated rats reveals that the secretory granules have been purified extensively. Further, the diameters of isolated granules (1.4-2.0 µm) are the same as those observed in situ (Castle, J. D., unpublished observations, and reference 10), suggesting that the fraction is representative of the total granule population.
The distribution of marker enzyme activities during granule purification is shown in Table I. α-Amylase, a secretory granule marker, is recovered at >20% of the total homogenate activity (which represents ≥50% of the granules that remained intact after homogenization). It is important to note that the parotid acinar cells of normal fasted rats can be considered to be unusually enriched in storage granules, even before fractionation (~31% of the cell volume is occupied by granules [10]). Chronic isoproterenol treatment results in a further enrichment in these structures (~66% of the cell volume occupied by granules) and the volume fraction of other organelles is substantially reduced (10). Thus, the 3.5-fold purification of these granules measured biochemically (as an increase in the relative specific activity of amylase, Table I) indicates substantial purity and is in the same range as values reported previously for other highly purified exocrine granule preparations (11). In contrast, the measurements of β-N-acetylglucosaminidase and cytochrome c oxidase indicate that the specific activities of these lysosomal and mitochondrial markers are, respectively, 4.5- and 9.5-fold lower than those of the homogenate. In the case of UDP-galactosyl transferase, the relative specific activity declines 10-fold (Table I), signifying that selected trans-Golgi activities are still effectively excluded from the granule compartment in the chronically stimulated tissue.
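As a reading aid for Table I (a standard definition, not stated explicitly in the original), the relative specific activity (RSA) of a marker is

\[
\mathrm{RSA} = \frac{(\text{marker activity per mg protein})_{\text{fraction}}}{(\text{marker activity per mg protein})_{\text{homogenate}}}
\]

so RSA = 3.5 for amylase denotes enrichment, whereas RSA = 1/4.5 ≈ 0.22 for β-N-acetylglucosaminidase denotes depletion of lysosomes.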
The common granule and plasma membrane marker, γ-glutamyl transferase (4), increases approximately fourfold in activity per wet weight of tissue after 10 d of isoproterenol treatment. More than 20% of the activity present in the Percoll density gradient is associated with the granule band and is likely to be associated with granule membranes since plasmalemmal elements are not detected morphologically in the granule fraction. No attempt was made to quantitate the recovery of elements of the endoplasmic reticulum (observed to be present at very low levels by electron microscopy).

Figure 1 legend (fragment): Bar, 10 µm. Light micrograph of parotid tissue from a rat that had received 10 daily injections of isoproterenol (b). Enlarged acini (example is outlined) contain greatly enlarged acinar cells in which the basal cytoplasm is intensely stained and pale-staining secretion granules (~1.7 µm diameter) fill >50% of the cell volume. Bar, 10 µm. Immunolocalization of α-amylase (c) and basic proline-rich proteins (d) in the parotid gland of isoproterenol-treated rats. Cryosections were reacted with antibodies as described in Materials and Methods. Note the similar granule staining pattern for both secretory species. Bars, 10 µm. Low power electron micrograph of the secretion granule fraction from isoproterenol-treated rats (e). Variations in density of individual granules probably reflect the variable preservation of secretory species by fixation. Bar, 1 µm.
Isoproterenol Induces Changes in Parotid Granule Content Polypeptides; Effects on Chemically Assayable Protein
Repeated isoproterenol injections cause a profound change in the relative quantities of the secretory polypeptides found in rat parotid saliva (1,9,36,38). Specifically, the levels of amylase, DNase, and RNase decline in comparison to a family of more than six proteins (pI's >10) that are highly enriched in proline, glutamine, and glycine. These basic proline-rich proteins increase from <2% to more than two-thirds of total parotid secretory protein during a 10-d isoproterenol treatment (38). This shift in granule content polypeptides (emphasizing a spectrum of pink-staining basic proline-rich proteins) is illustrated in Fig. 2. Because basic proline-rich proteins contain little or no tyrosine and lower amounts of lysine in relation to other secretory proteins (38), we questioned the applicability of conventional protein assays (34,49) for comparing amounts of protein (used to normalize the internal aqueous space measurement) between granules from normal and amplified tissues. Table II shows the absolute and relative amino acid contents for granule lysates from normal and 10-d injected rats. In each case the analyses were conducted on amounts which by the fluorescamine assay (49) were equivalent to 25 µg of a serum albumin standard. Evidently, the sample from the chronically stimulated rats has a total amino acid content nearly 2.5-fold greater than that of the control. Increases in the amounts of only three amino acids (proline, glutamine, and glycine) account for >95% of this discrepancy and emphasize the relative prominence of basic proline-rich proteins as secretory proteins in the amplified tissue.
Internal Aqueous Volume and pH of Isolated Granules from Chronically Stimulated Parotid Glands
Intragranular aqueous space was measured for twelve separate preparations of parotid granules suspended in 300-350 mosM media. The mean internal space observed was 3.6 µl/mg protein with individual determinations ranging from 3.1 to 4.2 µl/mg.

Figure 2 legend (fragment): Although the samples applied in a and b were each ~50 µg (by fluorescamine assay), the actual amount in b is much higher (Table II). The shift in polypeptide composition to yield a secretory spectrum highly enriched in basic proline-rich proteins (arrowheads) and with decreased amylase (A) content as a result of isoproterenol treatment is evident. In addition, a family of closely spaced bands of unknown identity, extending between apparent Mr 43-55 K, appears in treated samples.
Table II footnotes: Granules were lysed by successive freeze-thaw, hypoosmotic shock (by aqueous dilution), and brief sonication. After centrifugation (172,000 g·min), samples of the supernatant fluid equivalent to 25 µg BSA by fluorescamine assay were hydrolyzed for 20 h in 6 N HCl with 10 nmol norleucine as an internal standard; they were then analyzed for amino acid content. ** Analyses were corrected for the destruction of threonine and serine (5 and 10%, respectively). § Tryptophan and cysteine are not quantitated, but each is nearly absent from proline-rich proteins of the rat parotid gland (38).

Figure 3 legend (fragment): 1 mM ATP in a medium (~350 mosM) containing sucrose, 100 mM KCl, and 50 mM MOPS-NaOH (pH 7.10). External pH changed ~0.03 U throughout incubation. Intragranular pH was determined using [14C]methylamine distribution. Broken line, control; solid line, plus Mg-ATP. Samples were incubated in duplicate; the difference between duplicates was ~10%. In different preparations, the magnitude of the ATP-dependent acidification was always <0.4 pH unit and >0.1 pH unit.
The variation depends on the composition of the medium, with internal volumes being generally larger in media containing KCl rather than sucrose. If the protein values are "corrected" for underdetection of granule protein content (according to the results presented in Table II) then the mean internal aqueous space reduces to ~1.5 µl/mg protein equivalent.
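As a back-of-envelope check (assuming the ~2.4-fold underdetection of granule protein implied by the amino acid analysis of Table II):

\[
V_{\text{corr}} = \frac{V_{\text{meas}}}{f_{\text{underdetection}}} \approx \frac{3.6~\mu\mathrm{l/mg}}{2.4} \approx 1.5~\mu\mathrm{l/mg~protein~equivalent}
\]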
Intragranular pH was measured initially in a sucrose-containing medium using three separate isotope distribution procedures (Table III A). Two of the measurements rely on the equilibrium distribution of a weak base ([14C]methylamine) or a weak acid ([3H]acetate) (44). The third measurement is based on the equilibrium distribution of 86Rb+ (a probe of transmembrane potential) under conditions (footnote 2) where ΔpH is equal in magnitude but opposite in direction to Δψ. A nearly identical internal pH ≥7.7 is obtained by all three procedures. This value is considerably higher than that reported for normal parotid granules (pHin ~6.8 [7]) and all other types of secretion granules studied to date.
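The condition invoked in footnote 2 can be written out explicitly (a sketch of the standard relation; with CCCP present, H+ is assumed to be at electrochemical equilibrium, i.e., the electrical and chemical terms of the proton electrochemical potential cancel):

\[
F\,\Delta\psi = 2.303\,RT\,(\mathrm{pH_{in}} - \mathrm{pH_{out}})
\;\Longrightarrow\;
\Delta\psi~(\mathrm{mV}) \approx 59\,(\mathrm{pH_{in}} - \mathrm{pH_{out}})\quad(25^{\circ}\mathrm{C})
\]

so the 86Rb+ distribution, which reports Δψ, simultaneously fixes pHin.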
Although pHin is not affected by reducing the ionic strength of the medium (when the buffer concentration is decreased 10-fold), ionic composition has a notable effect (Table III B). Adding KCl or choline chloride causes a considerable decrease in pHin, whereas the presence of K2SO4 causes an increase above the value obtained in buffered sucrose. Apparently the parotid granules of isoproterenol-treated rats exhibit increased ionic permeabilities or decreased internal buffer capacity (or both) as compared with parotid granules from untreated rats (7).
Detection of H + Pump Activity in Parotid Granules of Chronically Stimulated Rats
Isolated granules from the treated rats were tested for their ability to translocate H+ into the granule interior in an ATP-dependent fashion. The generation of both ΔpH and Δψ was detected.
Effect of ATP on Intragranular pH
In each of 10 granule preparations, addition of ATP resulted in measurable acidification of the intragranular space. Typically, ATP-dependent intragranular acidification of ~0.2 pH units is observed (with variability depending on the conditions employed [14]); conditions favoring a large acidification (high levels of chloride in the medium) tended to result in a greater degree of granule lysis, and for this reason such conditions were not employed routinely. Fig. 3 shows that in contrast to control samples (no ATP), samples containing 10 mM ATP (without an ATP-regenerating system) exhibit a ~0.3-pH unit acidification in 20 min, with an additional ~0.07-pH unit decrease at 1 h (and without further acidification up to 2 h, not shown). Similar acidification is observed using 1 mM exogenous ATP; however, a systematic analysis of the ATP-concentration dependence of granule acidification has not yet been made. Fig. 4 illustrates other properties of internal acidification, showing both pHin and pHout after a period of incubation. The pH of the external medium tends to be more constant in 50 mM MOPS buffer (Fig. 4, B and C) than in 5 mM MOPS (Fig. 4 A). Of particular importance is the observation that unlike ATP, addition of AMP-PNP (a nonhydrolyzable ATP analog, Fig. 4 A) does not result in an increase in ΔpH, thus serving to exclude possible effects of ATP that are independent of ATP hydrolysis (footnote 3). By contrast, GTP (Fig. 4 B) appears to promote intragranular acidification (possibly reflecting the presence of a nucleoside diphosphokinase activity [17]).

Footnote 2: As in our previous studies with 86Rb+ in the presence of valinomycin (3), the ability to measure ΔpH with a probe of Δψ requires the presence of the proton ionophore CCCP to insure that H+ is in equilibrium across the membrane (i.e., H+ electrochemical potential, 0).
The effects of various ATPase inhibitors and uncouplers were examined (Fig. 4 C). Efrapeptin, at a dose that inhibits >90% of parotid mitochondrial ATPase (8), fails to influence the ATP-dependent acidification of these granules. Sodium vanadate, which inhibits ATPases that proceed through a phosphorylated enzyme intermediate (35), also is ineffective. By contrast, a partial (and, for unknown reasons, variable) inhibition of intragranular acidification is obtained when granules are exposed to Nbd-Cl, a compound that inhibits the H+ pumps of chromaffin and platelet granules (17). Finally, addition of CCCP abolishes completely the ATP-driven acidification; this effect rules out the possibility that the observed acidification is due to passive H+ movement (footnote 4).
Effect of ATP on Transmembrane Potential
Experiments were undertaken to check for inside-positive changes in Δψ that depended on ATP hydrolysis as observed in acidic organelles known to contain electrogenic H+ pumps (17,24,26,30). For this purpose, effects of AMP-PNP and ATP were studied in parallel. Δψ was measured using the equilibrium distribution of 86Rb+ in the presence of valinomycin (8,24,44). Results with AMP-PNP (over a 45-min time period, Fig. 5 A) demonstrate a slow but progressive exclusion of the positively charged probe, consistent with a gradual shift of Δψ to a more inside-positive value. Although the reason for this shift in the baseline Δψ is not established, it may be explained by an H+-diffusion potential since the conditions required to measure Δψ (nonionic medium [pH 7.2]) result in an intragranular pH of 7.7-7.8 (Table III A), which favors inward-directed H+ diffusion (footnote 5). From the first time point measured, the presence of ATP results in an increase in Δψ over that observed in AMP-PNP-containing samples. This difference is ~14 mV at 5 min and progresses to ~31 mV by 45 min (Fig. 5 B). Despite the shift in baseline (in the presence of AMP-PNP) we take these data to indicate that parotid granules from the treated rats are capable of generating an inside-positive membrane potential which depends on ATP hydrolysis. This potential is less than that seen for chromaffin granules (50-70 mV at 30-40 min; 8, 26, 30), but much greater than that seen in normal parotid granules (~2 mV at 30 min using the 86Rb+ method; 3, 8).

Footnote 4: Note in Fig. 4 c that the addition of CCCP actually results in an elevated intragranular pH above control values, consistent with the presence of net fixed-positive charges in the granule interior.

Footnote 5: It is unlikely that the baseline 86Rb+ exclusion seen in Fig. 5 represents H+ pumping driven by endogenous ATP because the ATP concentration within these granules has been measured at <10 µM (Castle, J. D., unpublished observations).
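As a back-of-envelope reading (not given in the original), the ~31 mV ATP-dependent difference noted above corresponds via the Nernst relation to roughly a 3.4-fold additional exclusion of the probe:

\[
\frac{[^{86}\mathrm{Rb}^{+}]_{\mathrm{out}}}{[^{86}\mathrm{Rb}^{+}]_{\mathrm{in}}} = 10^{\Delta\psi/59~\mathrm{mV}} = 10^{31/59} \approx 3.4
\]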
Contaminating Organelles Contribute Minimally to ATP-Dependent Acidification of the Granule Fraction
Two approaches were used to exclude the possibility that the H+ pump activity described above might occur within vesicular contaminants rather than in the granules. First, from a series of representative electron micrographs, we measured the volume fraction (50) occupied by contaminants of the granule preparation in order to predict the extent of acidification expected of such structures as the exclusive source of H+ pump activity. In three independent experiments, the internal volume of nongranule structures (which consist partly of organelle contaminants and partly of the membranes of damaged granules) averaged only 2.3% of the total internal volume (see Table IV A). Consequently, a measured acidification of 0.2 pH units, if ascribed entirely to contaminants, would require a selective acidification in these structures of >8 pH units. Such a magnitude seems extremely unlikely for vesicles consisting of biological membrane studied in vitro.
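The >8 pH unit figure follows from a simple linear apportionment of the measured acidification over the contaminant volume fraction (a deliberately rough scaling: because pH is logarithmic in concentration, this is an order-of-magnitude argument rather than an exact probe-ratio calculation):

\[
\Delta\mathrm{pH}_{\text{cont}} \approx \frac{\Delta\mathrm{pH}_{\text{meas}}}{f_{\text{cont}}} = \frac{0.2}{0.023} \approx 8.7
\]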
In a second approach we sought to establish directly the relative contributions of granules and organelle contaminants to the internal [3H]methylamine content measured in biophysical experiments, using electron microscope level autoradiography to detect [3H]methylamine in specimens fixed after incubation. We reasoned that if contaminants were responsible for weak base accumulation in pH measurements, they would contain the probe at a level that is disproportionately large compared with their contribution to total internal volume.

Table IV legend: (A) Three separate preparations of granules from chronically stimulated rat parotid tissue were prepared for electron microscopy as described in Materials and Methods. Using the point-count procedure described in reference 50, the internal volume of granules (mean diameter ~1.5 µm) was measured for a total of 503 granule profiles taken from representative photographs of the top, middle, and bottom of pellets. The remaining structures were similarly counted, and mean values ± SD were determined. In general, most of the contaminant structures were found in the top regions of pellets, which make a minimal contribution to the biophysical measurements (44). (B) Results are shown for two preparations of granules in which equal-sized samples of granules were incubated with [3H]methylamine in a medium adjusted to 350 mosM with sucrose and containing 100 mM choline chloride and 7 mM MOPS-NaOH (pH 7.1) for 20 min at 25°C. The samples were then processed in parallel for autoradiographic measurements as described in Materials and Methods. Grains overlying any portion of granule profiles were counted as granule-associated, with a similar measure for nongranule structures. Using this method, ~6% of grains were found to lie over organelle-free background. Mean values ± SD are shown.
Further, in the event that the retention of internal label were complete, measurement of autoradiographic grain density (in the presence and absence of ATP) might identify directly the acidifying structures. Quantitative measurements performed on a series of electron micrographs (Table IV B) indicate that only 2-3% of the grains occur over nongranule structures, a value that is not affected by the addition of ATP. By contrast, the vast majority of grains are over granules in the absence or presence of ATP, and the granule portion of total labeling is similar to the granule portion of total volume (Table IV). Although we did observe an increase in the grain density over granules in the presence of ATP (of ~1.3-fold in two separate experiments, which would correspond to a ΔpH of ~0.13 units) we are unable to conclude that we have measured granule acidification directly with this procedure, because the increase is not statistically significant and retention of internal label during processing was only ~75%. Nevertheless, the present data suggest that internal labeling is proportional to the fractional contribution to total internal volume and thus augment the case against contaminants of the granule fraction as a major source of acidification activity. From these morphological considerations, we find the support for H+-ATPase activity in the membranes of parotid granules from isoproterenol-treated rats to be compelling.
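Assuming grain density scales linearly with internal methylamine content (a simplification), the grain-density ratio R maps onto ΔpH through the weak-base relation:

\[
\Delta\mathrm{pH} \approx \log_{10} R, \qquad \log_{10}(1.3\text{--}1.35) \approx 0.11\text{--}0.13
\]

consistent with the ~0.13 pH unit figure quoted above for the ~1.3-fold increase.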
Acridine Orange Fluorescence Microscopy
The novel finding of storage granules whose internal pH is alkaline under most ionic conditions yet which possess inward-directed H+ translocase activity raises the interesting question of the intragranular pH in living cells, where millimolar levels of ATP are maintained and ionic conductances are carefully regulated. Initially we attempted to check for exclusion or concentration of a weak base probe detectable by immunocytochemistry (2). Unfortunately, the results were inconclusive due to difficulties in preserving the internal structure of these unusual granules (this occurred largely during prefixation in the absence of glutaraldehyde before the immunoperoxidase reaction).
To avoid the preservation problem, we used a different weak base probe, AO, to examine live cells from dispersed parotid acini by fluorescence microscopy. The fluorescence emission of AO is concentration-dependent; it emits a green color at low concentrations and an orange color (reflecting oligomer formation) at higher concentrations when viewed with a fluorescein filter. In each of three preparations, a reproducible pattern of fluorescence was obtained, namely, that granules visible by phase microscopy appear to exclude the fluorescent probe in comparison with the surrounding cytoplasm (Fig. 6). Nuclei fluoresce brightly (AO also intercalates into the DNA), and the surrounding cytoplasm exhibits a weak green staining. In most cases the apical (granule-rich) pole of the cell is even less fluorescent; orange-stained granules are never seen. In favorable planes of focus through these large, three-dimensional cells, a honeycomb-like pattern corresponding to green-stained cytoplasm surrounding unstained granules can be appreciated (Fig. 6 B). By contrast, rat peritoneal mast cells (known to contain granules with an acidic interior, [29]) when viewed under identical conditions (Fig. 6, C and D) characteristically show a cytoplasm replete with orange-staining granules, demonstrating intragranular concentration of the weak base. Thus, although AO fluorescence microscopy does not yield a precise measurement, it suggests that the pH in situ of the granules of the chronically stimulated rat parotid is higher than that of the cytosol.
Discussion
We have shown previously that mature exocrine granules from the rat parotid have an internal pH slightly below neutrality (7) and exhibit almost no inward-directed H+ pump activity (8). In the present studies, we have employed chronic β-adrenergic stimulation of the same tissue to cause a change in transcription (52), which results in a quantitative alteration in the spectrum of granule-content polypeptides (Fig. 2) as well as morphological changes in the granule population (Fig. 1, b-d). We believe that the biophysical findings presented in this report result primarily from this change in granule composition.
The passive distribution of H+ is such that granules isolated from chronically stimulated rats have an internal pH of ~7.7 when measured in a pH ~7.1 sucrose-containing medium (Table III A). Evidently, these granules possess net fixed-positive internal charges, consistent with their observed enrichment in basic proteins (Table II). These data lend support to the notion that intragranular pH is influenced to a large degree by content molecules (analogous to the influence of fixed-negative internal charges in chromaffin granules, [39]). In contrast to normal parotid or chromaffin granules, however, the internal pH of these unusual granules varies with external ionic conditions (Table III B), suggesting that membrane ionic permeability is relatively large with respect to internal buffer capacity.
ATP addition to the isolated granules from isoproterenol-treated rats results in the generation of an internal acidification (Fig. 3) that is abolished by the proton ionophore CCCP (Fig. 4 C), and the generation of an inside-positive Δψ that depends upon ATP hydrolysis (Fig. 5). Other studies (not shown) reveal an ATP hydrolase activity associated with this granule fraction that, as in the case of chromaffin granules (but not in a fraction containing mature parotid granules, [8]), is inhibited by 20 µM Nbd-Cl (which also partially inhibits acidification, see Fig. 4 C). Further experimentation will be needed to test whether inhibition of both ATP hydrolase and H+ pump activities is mediated through the same protein(s).

Figure 6. AO staining of dispersed parotid acini from stimulated rats, and from mast cells. Paired phase and fluorescence micrographs were taken within 5-10 min of addition of 5 µM AO as described in Materials and Methods. In all cases parotid acinar cells (A and B) exhibit bright nuclear staining surrounded by fainter yellow-green or green cytoplasmic staining. By contrast, the apical granule population is unstained, and in favorable planes of focus (B) green-stained cytoplasm outlining individual granules is seen. For orientation, a nucleus (n) is marked in A, corresponding to the bright green staining (upper left) of B. Examination of rat peritoneal mast cells under identical conditions (C and D) shows that they exhibit punctate orange AO staining of their acidic granules surrounding a central nucleus. Bars, 10 µm. Doses of AO >5 µM resulted in no change in the unstained appearance of the internal populations of parotid granules but caused the punctate orange staining of mast cell granules to become obscured within the dense red-orange granule population.
Several lines of evidence serve to exclude the possibility that contaminants in the granule fraction are a major source of the H+-ATPase activity. The lysosomal contribution to H+-ATPase activity must be very small, because the level of lysosomal contamination (as judged by the relative specific activity of β-N-acetylglucosaminidase, see Table I) is approximately threefold lower than that obtained with normal parotid granule fractions (7) in which reliable acidification could not be demonstrated (8). The lack of inhibition of acidification by efrapeptin and vanadate (Fig. 4 C) suggests that contributions by mitochondrial and other selected H+-ATPases are also negligible. Stereologic measurements made on the fraction (Table IV A) suggest that the internal volume contribution of nongranule structures to this fraction is not sufficient to account for the magnitude of the observed acidification. Finally, [3H]methylamine autoradiography of the granule fraction, despite practical limitations, reveals a preponderance of granule-associated autoradiographic grains rather than a localized grain density over nongranule structures (Table IV B). These data, taken together, argue strongly in favor of the interpretation that acidification is a granule membrane activity.
In recent investigations, we have begun to address what biochemical features provide the characteristic properties and function of secretion granules (footnote 6) (11). Studies performed herein show that it is possible to make major changes in the internal composition of these structures without compromising their storage capabilities (footnote 7). In fact, when the underdetection of granule content protein (Table II) is taken into account, the internal packaging (as judged by intragranular aqueous space measurements) of parotid granules from isoproterenol-treated rats is equal to that found in normal rats (7). Such flexibility in packaging diverse content molecules (such as those with high isoelectric points) suggests the presence of adaptive mechanisms designed to meet a range of internal storage requirements. This machinery may normally reside in the membranes associated with the trans-Golgi where concentration usually begins. We have hypothesized elsewhere (footnote 8) that the acidifying ATPase could be a part of this machinery, and it now seems reasonable to propose that the finding of H+ pumping in parotid granules from isoproterenol-treated rats signifies a selective and purposeful retention of this activity (and perhaps others) for a function in this exocrine storage compartment. By contrast, the Golgi activity, galactosyl transferase (43), remains efficiently excluded from the granules (Table I), suggesting that it continues to play no important role in granule function per se.

Footnote 6: Cameron, R. S., P. L. Cameron, and J. D. Castle, manuscript submitted.
Since it is presumed that the H+-ATPase is operant in the intracellular milieu, our results with acridine orange in situ (Fig. 6) make a distinction between a compartment which can acidify and one which is acidic. If the presence of H+ pump activity represents a compensatory response to the augmented presence of alkaline secretory proteins, then this compensation is simply of insufficient magnitude to effect an accumulation of AO. High intragranular buffering or other regulated ion conductances could account for this observation. However, it should be pointed out that an elevated intragranular pH does not preclude a role for H+ pump activity in facilitating (or maintaining) the packaging of secretory content. Indeed, such a role may well be manifest in the

Footnote 7: In other studies (reference 42; Castle, J. D., unpublished observations) it has also been found that discharge of parotid granules can proceed normally under the present conditions of chronic isoproterenol stimulation. In other experiments (not reported herein), a direct comparison of parotid granules from uninjected rats and rats injected 3, 6, and 10 d with isoproterenol showed progressive alterations in their content composition, their internal (uncorrected) volume, their internal pH, and their ability to acidify upon ATP addition. This information, coupled with recently developed procedures for obtaining subfractions of normal parotid granules that contain content of different posttranslational age (28, 51), underscores the unusual potential of this system for exploring mechanisms of exocrine granule formation, packaging, and storage.
"year": 1986,
"sha1": "3c4557bba61e1741c8f1a84b0b53e8027b5d16c2",
"oa_license": "CCBYNCSA",
"oa_url": "http://jcb.rupress.org/content/103/4/1257.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "934939388f7db9ec3b414a7604fbedfc54b20092",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Editorial: "Omics" revolution in elucidating the virulence and resistance in Staphylococcus aureus
Editorial on the Research Topic
"Omics"revolution in elucidating the virulence and resistance in Staphylococcus aureus Staphylococcus aureus is one of the most common causes of bacterial infections in humans, and the bacterial pathogen with the highest number of attributable deaths (Collaborators, 2022). The human nasal cavity serves as the natural reservoir for S. aureus. Although many healthy individuals are colonized with S. aureus, only some develop infections (Wertheim et al., 2005). The transition from a commensal to an opportunistic pathogen is not fully understood, but virulence factors likely play a significant role in initiating and facilitating infection processes (Howden et al., 2023). This Research Topic features articles showcasing the application of cutting-edge molecular biology methods ("Omics") to elucidate the virulence and resistance of S. aureus.
The convergence of resistance and virulence is an intriguing phenomenon increasingly observed in many bacterial species (Li et al., 2021; Biggel et al., 2022). One of the archetypes of this convergence is the emergence of the virulent community-associated methicillin-resistant S. aureus (MRSA) USA300 strains (Nimmo, 2012). To shed light on the events that shaped the evolution of this lineage, Bianco et al. investigated the evolution of the pandemic MRSA strain USA300 by analyzing and comparing genomic sequences of circulating USA300 strains and USA300 strains that predate the dominance of this expansive clone. They notably uncovered a pre-epidemic branching clade consisting of both methicillin-susceptible S. aureus (MSSA) and MRSA isolates (already Panton-Valentine leukocidin (PVL)-positive) circulating around the world that diverged from the USA300 lineage prior to the establishment of the South American and North American epidemics. The treatment of infections caused by S. aureus can be challenging, as recurrences or chronicity may occur despite appropriate therapy (Tuchscherr et al., 2020). The study by Klein et al. found that even when belonging to the same clone, S. aureus isolated from different body sites and infection foci may exhibit differences in virulence and resistance phenotypes. This phenotypic plasticity and heterogeneity can be attributed to the integration of Sa3int bacteriophages into the β-hemolysin (hlb) gene, which results in the truncation of the hlb gene and the insertion of genes encoding staphylokinase (sak) and staphylococcal complement inhibitor (scn), leading to a highly plastic immune evasive phenotype.
Blocking bacterial virulence to promote pathogen killing and elimination by the immune system is an interesting alternative treatment approach (Ford et al., 2020). The study by Zhou et al. investigated the role of the small RNA SprC in the metabolism and virulence of S. aureus N315, using RNA-Seq for transcriptomics analysis. Of over 2,497 identified transcripts, the SprC-mutant N315 S. aureus exhibited 23 downregulated differentially expressed genes, mainly related to metabolism and pathogenesis. Considering the emergence of drug resistance in S. aureus, such "pathoblockers" may be a promising alternative treatment strategy.
Traditionally, the clinical severity of S. aureus infections is associated with the presence or absence of certain genes coding some of the various S. aureus virulence factors (Howden et al., 2023). However, the impact of the expression levels of these virulence factors has been underexplored, largely due to the lack of high-throughput quantification methods for virulence proteins. In the study conducted by Pivard et al., the authors investigated the quantitative virulomes of 136 S. aureus isolates using a targeted proteomic approach. Their findings revealed that several virulence factors, including PVL, were associated with severity parameters in a dose-dependent manner, providing the proof of concept that "expression matters" in pathogen virulence and can be inferred from in vitro culture of the corresponding strain.
Nasal colonization with S. aureus is associated with an increased propensity to acquire infections (Bode et al., 2010). Therefore, understanding the mechanisms of persistent nasal colonization may help identify novel targets and strategies to decolonize high-risk patients. In their study, Salgado et al. used serial passaging in a murine colonization model together with genome sequencing to demonstrate that changes occurred in genes associated with the cell surface and metabolism, which might indicate niche adaptation in S. aureus to promote long-term colonization.
The articles presented in this Research Topic showcase the promising use of "OMICs" technologies in advancing research on S. aureus virulence and resistance. Specifically, the application of transcriptomics and proteomics adds a new functional and mechanistic dimension to elucidating the pathophysiology of S. aureus infections. By gaining a deeper understanding of the correlation between virulence factors and clinical outcomes, we may be able to improve diagnostic and therapeutic strategies for S. aureus infections.
"year": 2023,
"sha1": "328cb59fe511910a08bf519bae797b793d7cf937",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "328cb59fe511910a08bf519bae797b793d7cf937",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Recent advances on anticancer and antimicrobial activities of directly-fluorinated five-membered heterocycles and their benzo-fused systems
Due to the importance of fluorinated heterocycles as main components of marketed drugs, where 20% of anticancer and antibiotic drugs contain fluorine atoms, this review describes the reported five-membered heterocycles and their benzo-fused systems having directly connected fluorine atom(s). The in vivo and in vitro anticancer and antimicrobial activities of these fluorinated heterocycles are well reported. Some fluorinated heterocycles were found to be lead structures for drug design developments, where their activities were almost equal to or exceeded the potency of the reference drugs. In most cases, the fluorine-containing heterocycles showed a promising safety index via their reduced cytotoxicity in non-cancerous cell lines. SAR studies indicated that various electron-donating or electron-withdrawing substituents on the fluorinated heterocycles significantly affected the anticancer and antimicrobial activities.
Introduction
The worldwide spread of various types of cancers and microbes, which frequently lead to death, is regarded as one of the world's greatest challenges. Multidrug resistance (MDR) is one of the primary causes of the failure of cancer chemotherapy, where cancer cells can survive several chemotherapy treatments with various structures and mechanisms of action.1-3 Researchers are devoted to finding effective drugs with minimum toxicity to avoid serious side effects of cancer drugs. One research approach is to introduce a new active moiety or atoms onto heterocyclic compounds to enhance their effectiveness. Researchers have discovered that ring-fluorinated five- and six-membered nitrogen heterocyclic compounds have promising antibacterial and anticancer potencies.4,5 Due to the high electronegativity and small size of the fluorine atom, the introduction of fluorine atom(s) into biologically active heterocyclic compounds can remarkably improve the chemical and physical properties of the entire molecules.6-9 In addition, direct fluorination via replacing hydrogen with fluorine on heteroaromatic rings was found to be very effective for the metabolic stability of the new fluorinated heteroaromatics.10 About 20% of the marketed pharmaceutical drugs contain fluorine atom(s).11,12 One of the oldest fluorinated anticancer drugs was 5-fluorouracil (Fig. 1), which was approved in 1957.13 Among directly fluorinated heterocycles, sunitinib (Fig. 1) was the first anticancer drug approved by the FDA in 2006 for the treatment of renal cell carcinoma (RCC).14-17 Abemaciclib was approved in 2017 for the treatment of advanced or metastatic breast cancers,18 and pralsetinib (Fig. 1) was approved in 2020 for the treatment of metastatic RET fusion-positive non-small-cell lung cancer.19 In addition, cedazuridine,20 binimetinib,21 selumetinib22 and gemcitabine23 (Fig. 1) were also approved as anticancer drugs in the market. Moreover, fluorinated heterocycles were reported as the core component of some antibiotic drugs that were approved in the market, such as cephalosporins (β-lactam-based fluorinated thiazine derivatives), delafloxacin, levofloxacin and ciprofloxacin, as shown in Fig. 2.24-26 The importance of fluorine-based small molecules in drug discovery forced chemists and drug developers to overcome challenges associated with the insertion of fluorine atom(s) into small organic molecules. Several synthetic routes to fluorinated heterocycles were described in the literature, particularly chemical and electrochemical tools, or routes employing readily available fluorinated reagents.33-37 Late-stage fluorination was reported as an exceptionally effective access route for the synthesis of complex small molecules, involving metal-catalysed procedures to facilitate C-F bond formation with potential industrial applications.38 The above facts attracted our attention to collect all reports about the anticancer and antibacterial activities of the directly fluorinated five-membered and fused heterocyclic compounds during the last two decades, from 2003 till the end of 2023. We highly expect that this review article will help a wide range of researchers interested in pharmaceutical drugs to discover effective derivatives for the treatment of various cancers and widespread bacterial diseases.
Anticancer activity of fluorobenzofuran derivatives
Ayoub et al. screened the anticancer activity of the fluorinated benzofuran derivatives 1-9 (Fig. 3) against the human colorectal adenocarcinoma cell line HCT116 using a WST-1 assay.
Anticancer activity of fluoroindole derivatives
Pidugu et al. tested indole-2-carboxylic acids 10 (Fig. 4) as inhibitors of the human apurinic/apyrimidinic endonuclease 1 (APE1).40 The results indicated that compound 10 formed aggregates and was a weak inhibitor of APE1 under conditions that disrupted compound aggregation. 5-Fluoroindole-2-carboxylic acid (10) was found to inhibit APE1 with an IC50 of 10 µM. APE1 is elevated in cancers and is correlated with increased tumor progression, decreased survival and reduced sensitivity to chemotherapy.
Two series of 6-fluoroindole derivatives 11 and 12 (Fig. 5) were synthesized and screened for their cytotoxicity against HCT-116 CRC cells, with activity in the low nanomolar range.41 Among them, compound 12 could induce G2/M phase arrest via regulating cyclin B1 expression, produce excess reactive oxygen species (ROS), and target tubulin in CRC cells. Compound 12 significantly increased the G2/M phase population in HCT-116 cells, indicating an anti-microtubule mechanism. Cyclin B1, a G2/M marker protein, was up- and down-regulated after 12 and 48 hours of treatment, respectively, indicating high cytotoxicity and cell death after 48 hours. Compound 12 treatment significantly reduced β-tubulin proteolysis at a 1:300 pronase ratio. Molecular docking confirmed the binding mode and activity of 12 with tubulin, with the 5-methoxyl and 1-amino functional groups interacting through hydrogen bonds. The additional aminoethyl group formed hydrogen bonds with Y202, confirming the direct binding target of 12. In vivo, compound 12 significantly suppressed tumor growth, achieving 65.3% and 73.4% inhibition at doses of 5 and 10 mg kg−1 d−1 (i.v., 21 d), much better than the 54.1% of Taxol at 7 mg kg−1. In addition, compound 12 showed better in vivo tolerance compared to that of 11 (R = Me) (only 3 mg kg−1 tolerated, intraperitoneal, i.p.) and no major organ-related toxicity. The 4-aminoethoxyphenyl indole-chalcone 12 represents a lead compound in the chemotherapy of CRC for further drug development.
Fluoroindole-tethered chromene derivatives 13a,b (Fig. 6) were synthesized and evaluated for anticancer properties against A549, PC-3, and MCF-7 cancer cell lines. The derivatives 13a and 13b displayed promising cytotoxic activity with IC50 values ranging between 7.9-9.1 µM. Molecular docking revealed that 13a and 13b had a good binding affinity towards tubulin protein, better than the positive control (crolibulin), and molecular dynamics simulations further demonstrated the stability of the ligand-receptor interactions.42

ang and his colleagues designed and synthesized a series of phenylfuran-bisamide derivatives 14a-c (Fig. 7) as P-glycoprotein (P-gp) inhibitors based on target-based drug design and screened them on MCF-7/ADR cells. The results revealed that the introduction of an electron-withdrawing fluorine substituent on the indole ring (6-F, 5-F, 4,6-di-F) was beneficial to the reversal activity. The reversal activity could be ordered as 14b (5-F) > 14c (4,6-di-F) > 14a (6-F), which demonstrated that fluorine, with its stronger electron-withdrawing ability, at the 5-position of the indole might be more favourable to the reversal activity of the target skeleton.43 The fluorinated derivative 14b exhibited low cytotoxicity and promising MDR reversal activity (IC50 = 0.0320 µM, reversal fold = 1163.0), 3.64-fold better than the third-generation P-gp inhibitor tariquidar (IC50 = 0.1165 µM, reversal fold = 319.3). The results of western blot and rhodamine 123 accumulation assays verified that compound 14b exhibited excellent multidrug resistance (MDR) reversal activity by inhibiting the efflux function of P-gp but not its expression. Furthermore, molecular docking showed that compound 14b bound to target P-gp by forming double H-bond interactions with residue Gln 725. These results suggest that compound 14b might be a potential MDR reversal agent acting as a P-gp inhibitor in clinical therapeutics and provide insight into design strategy and skeleton optimization for the development of P-gp inhibitors.43
45The study investigated the biochemical mechanism of antiproliferative effects of the target indolinone derivatives.It showed a dose-dependent inhibition of Erk and Akt phosphorylation in HuH7 cells, while HepG2 cells showed reduced Akt phosphorylation but not Erk phosphorylation.The study also demonstrated a reduction in cell cycle marker cyclin-D1.
Chen et al. described the synthesis of the 5(6)-mono-uorinated indolin-2-one, series 16 and 17 bearing the aminosulfonyl moiety, and screened their growth inhibitory activities on HCC (HuH7 and Hep3B) cells (Fig. 10).It was reported that they showed large variations of activities with IC 50 values 0.09 mM to >30 mM (HuH7) and 0.36-13.6mM (Hep3B).The 6-mon-ouorinated indolin-2-one 16l was found to be the most potent compound in this series, where IC 50 values were 0.09 mM (HuH7) and 0.36 mM (Hep3B), superior to both sunitinib and sorafenib.The IC 50 values of sunitinib and sorafenib were 5.6 and 5.4 (HuH7) and 5.2 and 5.4 (Hep3B).Thus, structure 16l was considered as a lead structure for drug design investigations.All uorinated indolin-2-one derivatives inhibited the phosphorylation of RTKs in HuH7, notably FGFR4 and HER3, and decreased tumor load in a mouse model.
Inhibition of the VEGFR-2 signaling pathway has already become one of the most promising approaches for the treatment of cancer.Gao et al. designed and synthesized a series of 4-uoroindole derivatives 18 and 19 and evaluated their inhibitory activity of VEGF(R)-2 kinase inhibition (Fig. 11).Most of these compounds demonstrated high anti-angiogenesis activities via VEGFR-2 in enzymatic proliferation assays on a nanomolar scale.In particular, compound 19g showed signicant activity with an IC 50 value of 3.8 nM.SAR was reported to study the effect of substitutions at the aniline ring linked to the pyridine motif on the activity.The SAR result declared that the removal of the p-methyl group led to a higher potency against VEGFR-2.The presence of the p-sulfonamide group dramatically decreased the potency by 3.4-fold (IC 50 = 31 nM) than the m-sulfonamide group (IC 50 = 9 nM).Thus, the effect of the position of substituent was crucial.Compound 18c showed similar potency with compound 18b (IC 50 = 31 nM).Furthermore, 4,6-disubstituted compounds showed an improved VEGFR-2 inhibitory potency than those having 3,4-disubstituted or trimethoxy groups on the ring.SAR was also reported to investigate the effect of substitutions at the aniline ring linked to the pyrimidine motif on the activity.Compounds (19f-19o) with an electron-withdrawing group at the aniline ring exhibited high VEGFR-2 inhibitory activities; particularly, compound 19g displayed a potent inhibitory activity with an IC 50 value of 3.8 nM.The selectivity of derivative 19g was evaluated on a panel of tyrosine kinases, where it showed good potency against VEGFR-1, PDGFR-a, and PDGFR-b, with high selectivity over VEGFR-3 and excellent selectivity over ErbB2, ErbB4, EGFR, ABL, EPH-A2, FGFR-1, FGFR-2, and IGF-1R.The described results conrmed that such compounds could act as lead structures for the development of more selective anticancer medication. 47okhale et al. described the synthesis of a series of quinazolinone-based 5-uoroindole hybrids 20a-i (Fig. 12) and studied their in vitro cytotoxic activity using a sulphorhodamine B (SRB) bioassay method.The bioassay was carried out against MCF-7 (human breast adenocarcinoma), HepG2 (human liver hepatocellular) and the non-tumorigenic Vero cell lines.Among the synthesized compounds, 20a, 20f, and 20i provided significant activities against the tested cell lines.It was also reported that HepG2 cells were more sensitive to all the synthesized hybrids than MCF-7 cells.Compound 20f presented the highest potency against MCF-7, HepG2 and Vero cell lines with IC 50 values of 42.4, 15.8 and 50.5 mM compared with 23.7, 18.8 and 45 mM for the standard reference 5-uorouracil.The presence of three uorine atoms in 20f might be responsible for its higher activity in all the cell lines.Thus, compound 20f showed substantial anticancer activity and could be used as a lead for further investigations where it had very low toxicity against the non-cancerous Vero cell line. 48ecoraro and co-workers 49 synthesized two series of the amino-triazine derivatives 21 and 22 (Fig. 
13) to test their ability to lock into the nucleotide-binding pocket, occupying the space of the ATP and hampering the kinase enzymatic activity.The modulatory effect of the amino-triazine derivatives on the pyruvate dehydrogenase kinases (PDKs) was tested, where many derivatives were found to modulate the PDKs enzymatic activity with a marked selectivity against the PDK1 and PDK4 isoforms.Considering the high degree of similarity between PDK1 and HSP90 (heat shock protein 90), compounds 21c and 22c were tested for their ability to inhibit its enzymatic activity (Fig. 13).Interestingly, the two uorinated derivatives, 21c and 22c, were extremely active against HSP90.These results were also validated by molecular modelling, which demonstrated the ability of the newly synthesized triazine derivatives to t into the nucleotide-binding pocket of PDK1.The newly developed molecules of types 21 and 22 also exhibited very promising antiproliferative activities against KRAS (mutated pancreatic cancer) wild-type and mutant PDAC cells, namely BxPC-3 and PSN-1, respectively, with IC 50 values ranging from lowmicromolar to sub-nanomolar level.These ndings supported further development of new classes of PDK inhibitors relevant to PDAC treatment.
A series of chalcone-based 5(6)-uoroindole derivatives 23 and 11 (Fig. 14) were designed, and their inhibitory activity against colorectal cancer (CRC) was explored.Compound 11 exhibited signicant inhibitory activity toward HCT116 cells (IC 50 = 4.52 nM), CT26 cells (IC 50 = 18.69 nM), CRC organoids (IC 50 = 1.8-2.5 nM), HCT116-xenogra mice (TGI value of 65.96% at the dose of 3 mg kg −1 ), and APC min/+ mice (adenoma number inhibition rate of 76.25% at the dose of 3 mg kg −1 ).Meanwhile, the related mechanisms mediated by 11 for CRC were also studied in detail.This study supported that indolechalcone compounds were a class of promising lead microtubule-targeting drugs (MTAs) and highlighted the potential of 11 to combat CRC. 50ynthesis and anticancer activity of the 4-uoroindoline derivatives 24a,b and 25 were studied (Fig. 15).The anticancer activity was measured via selective inhibition of endoplasmic reticulum kinase (PERK) enzyme assay.All the synthesized 4-uoroindoline derivatives displayed PERK inhibitory activity with IC 50 ranging between 0.5 and 2.5 nM.The derivative 24a showed a 3-fold increase in the PERK inhibitory activity (IC 50 = 0.8 nM) compared with its non-uorinated analogue (IC 50 = 2.5 nM).Thus, the effect of uorine substitution was substantial for the inhibition of PERK.Owing to the potent activity of 24a, it was selected for advancement to preclinical development.In addition, all the uorinated compounds 26a-c and 27 (Fig. 15) were tested for activity against Endoplasmic Reticulum Kinase (PERK) enzyme and exhibited a pIC 50 value > 6.9 against PERK [where pIC 50 = −log(IC 50 )]. 51,52he biological study of compound 24c (Fig. 16) was extensively reported. 53It was found that treatment of mice with compound 24c led to inhibition of tumor growth in multiple human tumor xenogras.This compound was chosen as a candidate for preclinical studies.It was found to be an ATP- was reported for twice daily dosing of compound 24c.Thus, compound 24c was assigned as a lead for the development of any PERK inhibitor in human subjects. 53ome 5(6)-uoroindole-carboxamide derivatives, 28, 29 and 30 (Fig. 17), were designed and described as inhibitors of androgen receptor binding function-3 (AR-BF3) using an enhanced green uorescent protein (eGFP) AR transcriptional bioassay.Most of the synthesized compounds were noticed as promising selective AR-BF3 inhibitors and exhibited substantial activity against wild-type and drug-resistant prostate cancer cells.From the tested series, three compounds, 29c, 28d, and 29d, were the most potent BF3 inhibitors with eGFP IC 50 values of 0.7, 0.6, and 0.43 mM, respectively.Compounds 29c, 28d, and 29d also had the ability to reduce the levels of prostate-specic antigen (PSA), which is a serine protease, and showed IC 50 values of 0.84, 0.5, and 0.53 mM, respectively.Compound 29c had more solubility than compounds 28d and 29d at high concentrations; thus, compound 29c, having IC 50 values of 0.70 and 0.84 mM in eGFP and PSA, respectively, was assigned as a possible candidate for future clinical applications. 54,55antak et al. described the synthesis of the thiazole-based 5-uoroindole derivatives 31a,b (Fig. 
18) and investigated their in vitro anticancer activities in human cervical (HeLa), breast (MDA-MB-231), embryonic kidney 293 (HEK293T), prostate (PC-3, LNCaP and castration-resistant prostate cancer cell line C4-2) cancer cell lines.The MTT bioassay was conducted in the presence and absence of FBS (fetal bovine serum) using doxorubicin as a reference drug.Compound 31a exhibited selective cytotoxicity towards HEK293T and HeLa cells with IC 50 values of 12.10 and 3.41 mM, compared with doxorubicin IC 50 = 0.84 and 0.45 mM, respectively, without FBS; however, the analogue 31b was inactive.In addition, compound 31a showed lower potency against HeLa cell lines with IC 50 = 32.48 mM and was inactive towards HEK293T, in the presence of FBS.The mechanism of action disclosed that the cytotoxicity against HeLa cells employed the induction of cell death by apoptosis. 56he glyoxylamide-based 5(6)-uoroindole derivatives 32a-c (Fig. 19) were synthesized and assigned for their in vitro cytotoxicity against a panel of human cancer cell lines; HEK293T, MDA-MB-231, HeLa, PC-3, C4-2, and pancreatic BxPC-3 using doxorubicin as a reference drug.The bioassay experiments Interestingly, compound 32b displayed no toxicity in mammalian cells.Compounds 32a, and 32c were completely inactive towards all the tested cell lines. 57he directly uorinated bis-indole derivatives 33a-k (Fig. 20) were synthesized by Mahboobi et al. and examined their inhibitory activity against Fms-like tyrosine kinase 3 (FLT3), which is active in many cases of acute myeloid leukemia (AML), and the platelet-derived growth factor receptor tyrosine kinase (PDGFR).Most of the derivatives exhibited selectivity towards both FLT3 and PDGFR kinases; in particular, compounds 33g and 33h were the best active FLT3 inhibitors with IC 50 values of 0.34 and 0.17 mM, respectively.Compounds 33c and 33k showed the highest PDGFR inhibitory activity with IC 50 = 0.4 and 0.5 mM, respectively.SAR study showed that the 5-F substitution (33a) reduces activity at both kinases, where uorine atom might interact with the inner pocket as indicated by the results of the 5-methoxy, 5-benzyloxy, and 5-piperidinylethoxy derivatives 33c, 33f, and 33k, respectively.The larger substituents with polar termini were superior to the more hydrophobic groups, where a 10-fold decrease of PDGFR inhibition from 5-piperidinylethoxy 33k to 5-benzyloxy 33f compounds was reported.It was mentioned that the more lipophilic group was preferentially transferred into the interior via hydrophobic patches. 58umar et al. reported the synthesis and anticancer activity of the uorinated-indole derivatives 34a-d (Fig. 21).The bioassay experiment was carried out against three human cancer cell lines: prostate (PC3), lung (A549), and pancreas (PaCa2), following the WST-8 protocol.The synthesized compounds showed moderate to high anticancer activities, and in particular, compound 34b was found to be the most potent inhibitor and selective against A549 cells with an IC 50 value of 0.8 mM.The designed hybrids were found to increase tubulin polymerization, giving these hybrids a chance to act as microtubulestabilizing agents.SAR study assigned that compound 34b, having uoro and methoxy groups at the C-6 position of the two indole rings, exhibited the best anticancer activity against A549 cells. 59 The 3-uoroindole derivative 35 was designed and synthesized by Wu et al. and evaluated for its inhibitory activity against the HepG2 cell line and B-Raf kinase (Fig. 
The 3-fluoroindole derivative 35 was designed and synthesized by Wu et al. and evaluated for its inhibitory activity against the HepG2 cell line and B-Raf kinase (Fig. 22). Compound 35 displayed substantial inhibitory potency in HepG2 cells and against B-Raf, with IC50 values of 2.50 and 1.36 μM, respectively, compared with the reference drug sorafenib (IC50 values of 14.95 and 0.032 μM). Compound 35, which showed a 6-fold improvement in inhibitory activity in HepG2 cells over sorafenib, provided the potential for further research as a lead compound.[60] The anticancer activities of the fluorinated 2,3-dimethylindole 36 and tetrahydrocarbazole 37a-c derivatives (Fig. 23) were assessed against human pancreas carcinoma (Panc1), lung carcinoma (GIII) (Calu1), kidney adenocarcinoma (ACHN), colon cancer (HCT116), non-small cell lung carcinoma (H460), and normal breast epithelium (MCF10A) cell lines. The bioassay results, using the propidium iodide (PI) staining method, confirmed that compound 36 exhibited significant activity against both the GIII-Calu1 and Panc1 cell lines, with IC50 values of 3.1 and 3.2 μM, respectively. The fluorinated tetrahydrocarbazoles 37a-c presented good activity against all the tested cell lines, with IC50 values ranging between 4.9-7.4 μM. Noteworthy, all the derivatives 36 and 37a-c were inactive against the normal cell line MCF10A.[61]
Anticancer activity of fluorinated pyrazoles and indazoles
The 4-fluoropyrazole derivatives 38a and 38b were patented by Glick et al. as inhibitory agents of the hydrolytic activity of the mitochondrial F1F0-ATPase (Fig. 24). It is known that transport ATPases carry out the hydrolysis of ATP, leading to the transport of ions, which is relevant for the treatment of tumors. The inhibitory activity of the 4-fluoropyrazole derivatives 38a and 38b against the F1F0-ATPase was determined by testing their ability to inhibit ATP synthesis, and the IC50 values were <10 μM. Furthermore, compounds 38a and 38b were assessed for their cytotoxicity in human Ramos cells, and the bioassay results gave EC50 values <10 μM.[62] Yasuma et al. patented the synthesis and the acetyl-CoA carboxylase 2 (ACC2) inhibitory activity of the fluoropyrazole derivatives 39 and 40 (Fig. 25).[63] ACC2 inhibition is useful for the treatment of obesity, diabetes and cancer. The inhibition rates against ACC2 of compounds 39 and 40 were 89% and 99%, respectively, at 10 μM.
The anticancer activity of the fluorinated indazole derivatives 41-46 (Fig. 26) via inhibition of phosphoinositide 3′-OH kinase (PI3-kinases) was investigated by Baldwin et al.[64-67] The prepared examples 41-46 were tested in one or more of the PI3Kδ, PI3Kα, PI3Kβ and/or PI3Kγ assays and were found to have mean pIC50 values ≥ 5. Certain compounds were also tested in T-cells using flow cytometry and were reported to have mean pIC50 values ≥ 5.
The fluoroindazole derivatives 47 and 48 were evaluated by Boys et al. as inhibitors of PDGFR, a type III receptor tyrosine kinase, for the treatment of cancer (Fig. 27). The synthesized compounds 47 and 48 were evaluated for their inhibition of PDGFR-β phosphorylation in the human fibroblast cell line HS27, with IC50 values of 21.9 and 17.6 nM, respectively.[68] Rehwinkel et al. patented the anticancer activity of the fluorinated pyrimido[1,2-b]indazole 49 (Fig. 28) via its inhibition of PI3K/Akt; compound 49 exhibited potent PI3K/Akt inhibitory activity with an IC50 value of 0.093 μM.[69] Fluoroindazole derivatives 50a-c (Fig. 29), having a dihydropyridine moiety at position 5, were synthesized by Michels et al. and evaluated for their c-Met tyrosine kinase inhibitory activity through in vitro, ex vivo, and in vivo assays for the treatment of cancer. These compounds were found to have potent in vitro c-Met tyrosine kinase inhibitory activity, with IC50 values of 14-20 nM.[70,71] The fluoroindazole derivative 51 was tested for its inhibitory activity against heat-shock protein 90 (Hsp90) (Fig. 30); this compound showed potent Hsp90 inhibition and represented a good candidate for the treatment of proliferative diseases. In a related study, a series of derivatives including 52a, 53a, and 54a was screened for anticancer activity against cell lines derived from nine cancer types (leukemia, non-small cell lung, colon, CNS, melanoma, ovarian, renal, prostate, and breast cancers) using the SRB (sulforhodamine B) protein assay to estimate cell viability or growth. Most of the tested substrates demonstrated promising anticancer activity, with GI50 values (concentrations causing 50% inhibition of cell growth) in the nanomolar to micromolar range. In particular, compounds 52a and 54a exhibited excellent inhibition against most of the cancer cell lines, with GI50 < 10 nM, while compound 53a showed moderate cytotoxic activity, with GI50 < 10 μM. Therefore, the presence of a piperazine ring in the alkylene spacer was responsible for increasing the cytotoxic activity.[73]
Anticancer activity of fluorinated benzimidazoles
The fluorinated pyrazolylbenzimidazole hybrid molecules 55a-d (Fig. 32) were reported by Reddy et al. and tested for their anticancer activity against three human tumor cell lines, lung (A549), breast (MCF-7), and cervical (HeLa), and against normal keratinocyte (HaCaT) cells, using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) growth inhibition assay protocol. All compounds showed potent growth inhibition against the A549, MCF-7, and HeLa cell lines; in particular, compound 55b was the most potent, with IC50 values in the range of 0.95-1.57 μM. Flow cytometry revealed that all compounds arrested MCF-7 cells in the G1 phase of the cell cycle via the down-regulation of cyclin D2 and CDK2. Fluorescent staining and DNA fragmentation studies showed that cell proliferation was inhibited by the induction of apoptosis. Moreover, the compounds led to the collapse of the mitochondrial membrane potential (ΔΨm), and increased levels of reactive oxygen species (ROS) were noted. This remarkable biological activity made compound 55b a promising new candidate for the development of cancer therapeutics.[74] The pyrazolyl-based fluorobenzimidazole hybrids 56-58 (Fig. 33) were assigned as inhibitors of PDK1 and of cell proliferation/cell vitality. The inhibition of PDK1 (pyruvate dehydrogenase kinase 1), a protein-coding gene, and of the interleukin-1 receptor-associated kinases IRAK-1 and IRAK-4 was evaluated. Compounds 56b, 57a and 57c demonstrated the highest inhibition of PDK1, with IC50 values ranging between 1.0 nM and 0.1 μM; in particular, compound 56b showed powerful inhibition of all of PDK1, IRAK-1 and IRAK-4, with IC50 values of 1.0 nM to 0.1 μM.[75]
Anticancer activity of fluorinated benzothiazoles and benzoxazoles
Lion et al. examined the in vitro anticancer activity of the fluorinated benzothiazole derivatives 59a-c (Fig. 34) against the human breast cancer cell lines MCF-7 (oestrogen receptor positive) and MDA-MB-468 (oestrogen receptor negative), and the human colon carcinoma cell lines HCT-116 and HT-29. Interestingly, the 5-fluoro derivative 59a displayed the most potent antiproliferative activity against all the tested cell lines, MCF-7, MDA-MB-468, HCT-116 and HT-29, with GI50 values of 0.37, 0.41, 0.08 and 0.41 μM, respectively. In addition, the ability of the fluorinated benzothiazoles 59a,b to inhibit human thioredoxin signalling was measured using a modified protocol of the insulin reduction assay; the fluorinated derivatives 59a,b inhibited thioredoxin signalling at micromolar concentrations.[76] The cytotoxicity of the 6-fluorobenzothiazole derivatives 60a,b (Fig. 35) was evaluated in vitro against four human cancer cell lines: prostate (PC-3), lung (A-549), leukemia (THP-1), and colon (Caco-2). Compounds 60a,b were highly active against leukemia (THP-1) cells, exhibiting IC50 values of 1 and 0.9 μM, respectively. A docking study of the synthesized ligands against the epidermal growth factor receptor, using ArgusLab flexible docking, was carried out to rationalize their observed activity. Compounds 60a,b served as useful leads for further anticancer drug development.[77] Aiello et al.[78] synthesized two series of fluorinated heterocycles, namely the fluorinated benzothiazole 61a-c and benzoxazole 62-63a-i derivatives, and evaluated their anticancer potency against the MCF-7 and MDA-468 breast cancer cell lines. The fluorinated benzothiazoles 61a-c were more active against the ER−ve human breast cancer cell line MDA-468, giving GI50 values of 0.20-0.5 μM. In the MCF-7 (ER+ve) human breast cancer cell line, the most active compounds were found to be the fluorinated benzothiazole derivatives 61a and 61b (GI50 of 0.57 and 0.40 μM, respectively) (Fig. 36). In addition, the 5-fluorobenzoxazole 62a exhibited high potency against both cell lines, MCF-7 and MDA-468, with GI50 values of 0.36 and 0.27 μM, respectively. Overall, the fluorinated benzoxazole series 62a-j (Fig. 36) was more active against the MDA-468 cell line, with IC50 values varying between 0.017 and 98.6 μM. Against the generally less sensitive MCF-7 cell line, the benzoxazoles displayed activity with GI50 values ranging between 0.36 and 90.7 μM. The 6-fluorobenzoxazole compounds 63d and 63h (bearing 3,4-dimethoxy and 3,4,5-trimethoxy substituents on the phenyl ring) were found to be potently active against the MDA-468 cell line, giving GI50 values of 17 and 37 nM, respectively.
Anticancer activity of fluorinated thiazoles and isoxazoles
The fluorinated pyrazolylthiazoles 65a,b were reported as positive allosteric modulators of metabotropic glutamate receptors (mGluR4) (Fig. 38). The activity was tested on recombinant human mGluR4a receptors by monitoring the alteration in intracellular Ca2+ concentration, employing the fluorescent Ca2+-sensitive dye Fluo-4 AM and FLIPR (Fluorometric Imaging Plate Reader). The mean EC50 values resulted from at least three separate experiments with the fluorinated compounds in duplicate, and the bioassay results confirmed that the reported compounds were positive allosteric modulators of human mGluR4 receptors, with EC50 values <100 nM. A series of fluorinated spiro derivatives 66 and 67 was also synthesized (Fig. 39) and probed for cytotoxicity against human glioblastoma (GBM6) and triple-negative breast cancer (MDA-MB-231) cells. The bioassay protocol was carried out using the MTT method, with concentrations ranging between 0.8 μM and 100 μM for 72 hours. Two fluorinated derivatives, 66l and 67f, exhibited robust anti-proliferative effects against both GBM6 and MDA-MB-231 cells, with IC50 values of 36.08 and 43.21 μM (GBM6) and 68.18 and 79.80 μM (MDA-MB-231), respectively. Thus, compound 66l displayed better inhibition than 67f against both GBM6 and MDA-MB-231 cells, indicating a superior cytotoxic effect for the spiro-ether over the corresponding spiro-lactone.[81]
Anticancer activity of fluorinated furans
A series of purine nucleosides fluorinated at the 3′-position of the sugar moiety, 68 and 69 (Fig. 40), was synthesized by Ren et al., and their anticancer activity was probed. The fluorinated nucleosides were examined against the HCT116 (colon cancer) and 143B (osteosarcoma) cell lines. Most of these compounds demonstrated inhibition of the growth of HCT116 and 143B cancer cells at sub- or low-micromolar concentrations. The fluorinated sugars 68a, 69b, 69c, 69f, 69m and 69n showed the highest inhibition of cancer cell growth, with IC50 values of 0.5-1.0 μM against both HCT116 and 143B. From the SAR study, it was concluded that the protected fluorinated purine ribosides 69c and 69d demonstrated 10-fold higher anticancer activity than their deprotected analogues 68d and 68e. In addition, the 3′-fluorinated purine nucleosides 68b, 68c, 68d, 68l and 69d showed moderate potency, but the other derivatives did not show detectable activity against the evaluated cancer cell lines.[82]
Anticancer activity of fluorinated pyrazolopyrimidines
The fluorinated pyrazolopyrimidines 70a,b (Fig. 41) were reported to be useful for the treatment of cancer via the regulation of mTOR (mammalian target of rapamycin), which regulates cell proliferation, autophagy and apoptosis. The IC50 values were determined from dose-response curves in duplicate, and both fluorinated compounds inhibited mTOR with IC50 values on a nanomolar scale, between 0.5 and 2 nM.[83] Fraley et al. designed the fluorinated pyrazolo[3,4-d]pyrimidinone derivatives 71a,b (Fig. 42) and assigned them as inhibitors of the mitotic kinesin KSP for the treatment of cellular proliferative diseases, such as breast cancer. Mitotic arrest and apoptosis were measured using FACS (fluorescence-activated cell sorting) analysis by measuring the DNA content in a treated population of cells. The tested compounds 71a,b were reported to inhibit KSP with IC50 values ≤50 μM. The EC50 for mitotic arrest and apoptosis was derived by plotting the compound concentration on the x-axis and the percentage of cells in the G2/M phase of the cell cycle on the y-axis, and the cytotoxic EC50 was determined by plotting the % inhibition of cell growth for each titration point on the y-axis against the compound concentration on the x-axis.[84]
Antimicrobial activity of fluorinated indoles
The in vitro antibacterial activity of the thiazole-based 5-fluoroindole molecular hybrids 31a,b (Fig. 18 above) was described by Tantak and his group against two Gram-positive bacterial strains, Bacillus subtilis (B. subtilis) and Staphylococcus aureus (S. aureus), and two Gram-negative strains, Escherichia coli (E. coli) and Pseudomonas putida (P. putida), using ciprofloxacin as a reference drug. The tested compounds showed moderate activity towards the bacterial strains, wherein compound 31b showed selective inhibition of the Gram-negative bacteria (E. coli and P. putida), better than of the Gram-positive ones (S. aureus and B. subtilis).[56] Two hybrids of benzofuran-based 5-fluoroindole, 72a,b (Fig. 43), were synthesized and tested for their in vitro antimicrobial activities against three Gram-positive bacteria (B. subtilis, S. aureus, and Staphylococcus epidermidis (S. epidermidis)), four Gram-negative bacteria (Salmonella typhi (S. typhi), E. coli, Klebsiella pneumoniae (K. pneumoniae) and Pseudomonas aeruginosa (P. aeruginosa)), as well as four fungal strains (Candida albicans (C. albicans), Aspergillus niger (A. niger), Aspergillus flavus (A. flavus), and Aspergillus fumigatus (A. fumigatus)), using the cup plate method at 100 μg mL−1 with ciprofloxacin and miconazole as positive controls. Both tested compounds showed significant activity towards the bacterial strains but low activity against the fungal strains. Compound 72b showed promising antibacterial activity against S. aureus, P. aeruginosa and S. typhi, almost similar to or better than the standard ciprofloxacin; compound 72a showed good activity against S. aureus and B. subtilis, and compound 72b also possessed good activity against S. epidermidis.[85] Zhang et al. designed and synthesized some fluorinated indol-2-one molecular hybrids 73a-e and 74 (Fig. 44) and evaluated their in vitro inhibitory activity against the growth of clinically isolated MRSA (methicillin-resistant S. aureus) and VRE (vancomycin-resistant Enterococcus) strains. The prepared molecular hybrids showed promising activity against the Gram-positive pathogens. Interestingly, compound 73a significantly inhibited the MRSA (Chaoyang) strain, with a MIC value of 1.56 mg L−1, comparable to vancomycin (MIC = 1.0 mg L−1). The MIC values of compounds 73b, 73c, and 73d were 0.39 mg L−1 against the MRSA (Chaoyang) strain. Compounds 73c and 73d presented anti-MRSA activities much better than their activities against S. aureus, against which their MIC value was 0.78 mg L−1, and they also showed promising inhibition of VRE, with a MIC value of 1.56 mg L−1. Thus, compounds 73c and 73d displayed distinguished inhibition of the Gram-positive bacteria MRSA and VRE; the presence of a fluorine atom at the 7-position led to enhancement of the anti-MRSA activity, and such compounds were considered potential leads in the discovery of antibacterial inhibitors to combat drug resistance.[86] The synthesis of the 5-fluoroisatin derivatives 75 was reported, and their anti-tubercular activity was evaluated (Fig. 45). The anti-tubercular activity was measured using the Alamar blue assay against the Mycobacterium tuberculosis strain MTB H37Rv. All compounds showed varied (weak to good) inhibitory activity against MTB H37Rv; in particular, compound 75d, having an N-phenylpiperazine group, was the most active against MTB H37Rv (ATCC 27294), with MIC = 6.25 μg mL−1, as compared to the standard drug isoniazid (MIC = 1.6 μg mL−1).[87]
The glyoxylamide-based 5(6)-fluoroindole derivatives 32a-c (Fig. 19 above) were also synthesized and evaluated for their in vitro antibacterial activity against two Gram-positive (B. subtilis and S. aureus) and three Gram-negative (E. coli, P. putida, and K. pneumoniae) bacterial strains. Compound 32a displayed the best antibacterial activity against the Gram-negative strains, with MIC = 12.5 μg mL−1 against all three tested Gram-negative bacterial strains and MIC = 25 μg mL−1 against the tested Gram-positive ones, compared with the chloramphenicol reference drug (MIC = 16 μg mL−1 for all bacterial strains). Moving the fluorine atom from position C-5 to C-6 of the indole moiety, or arylation of the indole N-1, led to the inactive derivatives 32b and 32c. The killing kinetics of compound 32a showed 95% bactericidal activity against the Gram-negative bacteria within the first 2 h, indicating a fast bactericidal action through quick inhibition of bacterial proliferation. Compound 32a also showed good bactericidal activity against the Gram-positive bacteria B. subtilis (>80%) and S. aureus (>65%) within the initial 2 h of incubation. In addition, compound 32a showed no toxicity in any of the human cell lines; therefore, this compound appeared to target bacteria-specific pathways and was identified as a promising antibacterial agent.[57] Another research group designed some 4-fluoroindole derivatives 76-80 (Fig. 46) containing geranyl or n-octyl moieties at N-1 and determined their antimycobacterial activity against Mycobacterium bovis (M. bovis) BCG and M. tuberculosis H37Rv using the broth dilution method. The reported fluorinated derivatives showed variable potencies against M. bovis BCG and M. tuberculosis H37Rv. The turbidity method was employed to determine the MIC values required to reduce bacterial growth by 50% (MIC50). Compound 76a demonstrated good potency against M. bovis BCG (MIC50 = 5 μM) and M. tuberculosis H37Rv (MIC50 = 3.5 μM) and was found to be soluble at 25 μM, 25 °C and pH 7.4. The SAR study revealed that the 4-fluoro analogues 77a, 79a and 80a (all with MIC50 = 2 μM) were twice as potent as the other fluorinated regioisomers 77b-d, 79b-d and 80b-d (all with MIC50 = 4 μM) against M. bovis BCG. Moreover, compounds 77a, 78a, 78c, 79a, and 80a were found to be highly effective against M. tuberculosis H37Rv, where all of them presented potent activities, with MIC50 values of 2-3 μM, against the virulent tubercle bacillus.[88] The tetralone-based 5-fluoroindole molecular hybrid 81 (Fig. 47) was synthesized, and its antibacterial and antitubercular activities were screened. Compound 81 displayed reasonable inhibitory activity against E. coli and S. aureus, with MIC values of 12.5 and 25 μg mL−1, respectively, compared with ciprofloxacin (MIC = 2.5 μg mL−1). It also showed the same potency against both S. typhi and M. tuberculosis, with MIC = 50 μg mL−1, compared with ciprofloxacin (MIC = 2.5 and 5.0 μg mL−1, respectively).[89] Deswal and co-workers reported the synthesis of the fluorinated indolin-2,3-dione derivatives 82 and 83 (Fig. 48) and evaluated them against two Gram-positive (B. subtilis and S. epidermidis) and two Gram-negative (P. aeruginosa and E. coli) bacterial strains. Using ciprofloxacin as the reference drug, the serial dilution method was employed. The minimum inhibitory concentration (MIC) results revealed that most of the fluorinated hybrids (82a-e, 83a-e) provided variable activity against the tested bacterial strains, with MIC values ranging between 0.016-0.038 μmol mL−1.
Compound 82b displayed high antibacterial activity against S. epidermidis and B. subtilis, with a MIC of 0.0075 μmol mL−1 against both, and 82c also showed high activity against S. epidermidis, with a MIC of 0.0082 μmol mL−1, compared to ciprofloxacin (MIC = 0.0047 μmol mL−1). The other fluorinated hybrids 82 and 83 showed variable activities against the tested bacterial strains, with MIC values ranging between 0.015-0.038 μmol mL−1. It was found that compounds with electron-withdrawing groups had better antibacterial activity than those having electron-donating groups. The in vitro antifungal activity of 82a-e and 83a-e against two fungal strains (A. niger and C. albicans) revealed moderate to excellent activity; specifically, the hybrids 82a, 82d and 83c displayed promising activity against A. niger, with MIC values of 0.0075, 0.0082 and 0.0092 μmol mL−1, respectively, better than the antifungal drug fluconazole (MIC = 0.0102 μmol mL−1). For C. albicans, however, the hybrids 82a, 82d and 82e showed good activity (MIC = 0.0075-0.0090 μmol mL−1), comparable to fluconazole (MIC = 0.0051 μmol mL−1).[90] Some 1H-1,2,3-triazole-based 5-fluoroisatin hybrids 84a-c were synthesized and examined for their in vitro antimycobacterial activities against multi-drug-resistant tuberculosis (MDR-TB) and M. tuberculosis H37Rv. The tested compounds showed significant inhibitory activity against MDR-TB and MTB H37Rv, with MIC values ranging between 0.25-1 μg mL−1 and 0.20-0.78 μg mL−1, respectively, better than the ciprofloxacin drug (MIC = 4.0 and 3.12 μg mL−1, respectively). The hybrid 84b (MIC = 0.20 μg mL−1) displayed the best activity, with an inhibitory action against MTB H37Rv 16-fold better than that of the reference drug ciprofloxacin. Compound 84c showed extraordinary potency against MDR-TB (MIC = 0.25 μg mL−1), 16 times more active than the reference drug ciprofloxacin. Thus, the reported 5-fluoroisatins 84 were worthy of further development for possible anti-tuberculosis drug therapy (Fig. 49).[91]
Antimicrobial activity of fluorinated pyrazoles and indazoles
The carboxamide-based 4-fluoropyrazole derivatives 85a-f (Fig. 50) were invented and patented as antifungal agents via measurement of their inhibitory activity on Sclerotinia sclerotiorum (S. sclerotiorum), Rhizoctonia cerealis (R. cerealis), Gaeumannomyces graminis (G. graminis), and Valsa mali (V. mali). The in vitro inhibitory activity of the invented 4-fluoropyrazoles was examined on plant pathogenic fungi at a concentration of 50 mg L−1 using a hyphal linear growth rate method. All compounds showed antifungal activity ranging between moderate and excellent; in particular, compound 85b exhibited excellent inhibitory activity against G. graminis and V. mali, with inhibition rates of 100% and 86.9%, respectively, and compound 85e showed an inhibition rate of 92.8% against Rhizoctonia solani.[92] Some oxadiazole-containing 4-fluoroindazoles 86a-d (Fig. 51) were tested for their antibacterial activity against B. subtilis and E. coli and their antifungal activity against A. niger, employing the cup-plate bioassay at a concentration of 1000 μg mL−1. Compound 86a presented 72.2% inhibition against both B. subtilis and E. coli and 50% inhibition against A. niger. The unsubstituted oxadiazole compound 86c presented the most potent inhibitory activity (94.4%) against B. subtilis and E. coli, with almost equal potency to ciprofloxacin, and showed 50% inhibition against A. niger. Moreover, the amino derivative 86d showed 94.4% inhibition against A. niger, compared with griseofulvin.[93]
Park et al. described the in vitro antifungal activities of some chiral fluorinated indazole hybrids 87a-e (Fig. 52) against eleven fungi (Candida spp. and Aspergillus spp.): C. albicans, Candida krusei, Candida glabrata, Candida lusitaniae, Cryptococcus neoformans (C. neoformans), Candida tropicalis (C. tropicalis), Candida parapsilosis, Aspergillus fumigatus, A. niger, A. flavus and Aspergillus terreus. The antifungal potencies were determined by an in vitro broth microdilution assay. The synthesized compounds displayed variable inhibition potencies against most of the tested fungal pathogens, and their MIC80 values were determined. The 5-fluoroindazole derivatives 87a and 87b, which showed the most potent and broad-spectrum antifungal activity, were reported as promising lead candidates for antifungal therapy.[94]
Antimicrobial activity of fluorinated benzazoles
The in vitro anti-mycobacterial activities of the fluorobenzimidazole derivatives 88 and 89 (Fig. 53) against the M. tuberculosis H37Rv strain, determined by the MABA method, were reported; the tested compounds presented moderate activity.
Compounds 88 and 89 displayed antitubercular activity against the pathogenic MTB H37Rv strain, both with MIC = 25 μg mL−1, better than the naturally occurring antitubercular product sesamin (MIC = 50 μg mL−1).[95] Al-Harthy et al. prepared a series of 5-fluorobenzothiazole derivatives 90a-h (Fig. 54) and screened their antimicrobial activity against a number of bacterial strains, E. coli, K. pneumoniae, Acinetobacter baumannii (A. baumannii), P. aeruginosa, and S. aureus, and against two fungal strains, C. albicans and C. neoformans, at a concentration of 32 μg mL−1. Overall, all compounds presented antibacterial activity against all tested strains, especially S. aureus, and showed more activity towards Gram-positive bacteria than Gram-negative ones. Compounds 90a and 90b, in particular, displayed the best inhibitory potency against S. aureus, with 92.34% and 81.42% growth inhibition, respectively; both had MIC = 32 μg mL−1, compared to the tamoxifen reference standard with MIC = 10 μg mL−1. Moreover, only compound 90b showed strong inhibitory potency against the fungal strain C. neoformans, with 103.06% growth inhibition.[96] Jauhari et al. described the 6-fluorobenzoxazole derivative 64 (Fig. 37 above) as an antimicrobial agent using the disc diffusion method at a concentration of 10 μg mL−1. Interestingly, compound 64 exhibited significant antifungal activity against both A. flavus and A. niger, with 90% and 95% growth inhibition, respectively. On the other hand, the antibacterial screening against P. aeruginosa, S. aureus, and K. pneumoniae showed remarkable activity, with inhibition zone diameters of 20-25 mm.[80]

Antimicrobial activity of fluorinated benzosiloxaboroles

Krajewska et al. synthesized the benzosiloxaborole derivatives 91a-d (Fig. 55) and investigated their antimicrobial activity against the Gram-positive cocci methicillin-sensitive S. aureus (MSSA) and methicillin-resistant S. aureus (MRSA), as well as MRSA clinical strains. The tested compounds 91a-d exhibited high activity, with MIC values ranging between 3.12-6.25 mg L−1, and also presented moderate activity against Enterococcus faecalis and Enterococcus faecium (MIC values of 25-50 mg L−1). The studies confirmed that compound 91d, having a sulfonamide moiety, demonstrated the best inhibitory activity (MIC = 3.12 mg L−1), was not cytotoxic at concentrations close to its MIC value for S. aureus, and was considered a potential antibacterial agent against S. aureus MRSA.[97] The fluorine-containing benzosiloxaborole derivatives 92a-f (Fig. 56) were designed by Brzozowska et al. and screened for their antimicrobial potency. The bioassay data were based on 14 bacterial and 7 yeast standard strains, and the MIC and MBC (minimal bactericidal concentration) values were calculated following CLSI (Clinical and Laboratory Standards Institute) and EUCAST (European Committee on Antimicrobial Susceptibility Testing) methodology. The fluorinated compounds were more active against yeast strains than against bacteria. Compounds 92b and 92c were the most active, and both demonstrated promising antifungal activity against C. tropicalis, with MIC values of 0.78 and 1.56 mg L−1, respectively. Compound 92b showed significant antifungal activity against Candida guilliermondii IBA-155, C. tropicalis IBA-171, Saccharomyces cerevisiae ATCC-9763 and Saccharomyces cerevisiae IBA-198, with MIC values ranging between 0.78-6.25 mg L−1.[98]
Conclusions
It was reported that 20% of the anticancer and antibiotic drugs approved by the FDA contain fluorine atom(s). The current review article outlines numerous directly ring-fluorinated heterocycles (five-membered and their benzo-fused systems) with potent in vivo and in vitro anticancer and antimicrobial activities. Thereby, many compounds were considered lead structures for drug design and development. Some fluorinated heterocycles were found to be promising and selective inhibitors against a wide array of human cancer cell lines on a nanomolar scale. Some fluorinated heterocycles were also established as highly potent bactericidal and fungicidal agents via strong inhibition of various bacterial and fungal strains, with potency almost equal to or better than that of the appropriate reference drugs. In most cases, the reported fluorine-containing heterocycles demonstrated a promising safety index via their reduced cytotoxicity in non-cancerous cell lines, and such derivatives are interesting candidates for anticancer or antibiotic drug discovery. In addition, several fluorinated heterocycles were found to be useful for the regulation of cell proliferation, autophagy and apoptosis. SAR studies showed that the position of the fluorine atom on the ring-fluorinated heterocycles greatly affected the anticancer activity, which in some cases was much better than that of the corresponding reference drugs. Furthermore, various electron-donating or electron-withdrawing substituents on the fluorinated heterocycles significantly influenced the anticancer and antimicrobial activities. Mechanism-of-action studies disclosed that the cytotoxicity against cancer cells involved the induction of cell death by apoptosis. This review is expected to open a new avenue for the development of anticancer and antimicrobial therapeutics. Charts 1 and 2 demonstrate the leading structures that showed higher inhibitory activities than the reference drugs.
Thoraya Abd Elreheem Farghaly was born in Cairo, Egypt, in 1974. She received her B.Sc. (1996), M.Sc. (2002) and Ph.D. (2005) degrees from the University of Cairo. She is a Professor of Organic Chemistry in the Chemistry Department, Faculty of Science, University of Cairo, and has worked at Umm Al-Qura University since 2014. She joined the scientific school of Prof. A. S. Shawali in 1997 and has published more than 220 papers in the area of the chemistry of hydrazonoyl halides, heterocyclic chemistry and bioactive heterocyclic compounds.
Compounds 1 and 2 inhibited cell proliferation by approximately 70%, with IC50 values of 19.5 and 24.8 μM, respectively. Both compounds 1 and 2 were reported to inhibit the expression of the anti-apoptotic protein Bcl-2 and to cause concentration-dependent cleavage of PARP-1, as well as DNA fragmentation of approximately 80%.
He held an Alexander von Humboldt (AvH) Fellowship at Hanover University in 2004-2005 with Prof. A. Kirschning (in the area of polymer-supported palladium-catalyzed cross-coupling reactions) and made three short AvH visits in 2007, 2008 and 2012 to Prof. P. Metz at TU Dresden (in the field of metathesis reactions in domino processes). Since May 2007, he has been a full Professor of Organic Chemistry at the Faculty of Science, Cairo University, and he worked as a Professor of Organic Chemistry at the Chemistry Department, Kuwait University, from September 2013 until August 2017. He has received a number of national awards: the Cairo University Award in Chemistry (2002), the State Award in Chemistry (2007), the Cairo University Award for Academic Excellence (2012) and the Cairo University Merit Award (2017). He has published about 160 scientific papers, reviews and book chapters in distinguished international journals, with more than 3700 citations of his work (Scopus h-index 33).
"year": 2024,
"sha1": "f637a69111e8be5b70ee8370a93ae9abe299b788",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f637a69111e8be5b70ee8370a93ae9abe299b788",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Embracing Crowdsensing: An Enhanced Mobile Sensing Solution for Road Anomaly Detection
Road anomaly detection is essential in road maintenance and management; however, continuously monitoring road anomalies (such as bumps and potholes) with a low-cost and high-efficiency solution remains a challenging research question. In this study, we put forward an enhanced mobile sensing solution to detect road anomalies using mobile sensed data. We first create a smartphone app to detect irregular vehicle vibrations that usually imply road anomalies. Then, the mobile sensed signals are analyzed through continuous wavelet transform to identify road anomalies and estimate their sizes. Next, we innovatively utilize a spatial clustering method to group multiple driving tests' results into clusters based on their spatial density patterns. Finally, the optimized detection results are obtained by synthesizing each cluster's member points. Results demonstrate that our proposed solution can accurately detect road surface anomalies (94.44%) with high positioning accuracy (within 3.29 meters on average) and an acceptable size estimation error (with a mean error of 14 cm). This study suggests that implementing a crowdsensing solution could substantially improve the effectiveness of traditional road monitoring systems.
Introduction
"No one knows how many potholes are out there, but we all agree there are a ton of them." The U.S. Federal Highway Administration (FHWA) estimates that about 52% of the U.S. highways are in a miserable condition [1]. A newly released report-Repair Priorities 2019 shows that the percentage of "poor condition" roads in the U.S. has rapidly increased from 14% to 20% between 2009 and 2017 [2]. The category of "poor condition" road is defined by FHWA, which contains excessive road anomalies, such as potholes, bumps, and ruts. Road anomalies can not only negatively impact driving experience, but they also damage vehicle components, cause economic loss, even lead to car crashes. The American Automobile Association estimates that pothole damage costs three billion U.S. dollars in vehicle repairs nationwide annually [3]. Meanwhile, approximately one-third of traffic fatalities occur on poor-condition roads each year [4]. Therefore, effectively detecting road anomalies has become a fundamental social need, which requires immediate attention.
Traditional road anomaly detection has been conducted through three main types of approaches: 3D laser scanning, vision-based image processing, and vehicular vibration-based methods.
Related Studies
Different studies have been conducted to identify road anomalies (e.g., potholes and bumps) using smartphone sensors. Among the available mobile sensors, accelerometers are the most sensitive for capturing vehicle jerks when hitting bumps and potholes. Existing methods for analyzing acceleration signals can be broadly classified into two categories: (1) threshold-based methods and (2) machine learning methods. In recent studies, signal processing techniques, such as wavelet transforms, have started to be adopted to analyze mobile sensed signals. Meanwhile, implementing crowdsensing solutions has become a promising research direction, which shows significant potential to obtain more reliable detection results by synthesizing data provided by the public.
Threshold-based methods detect road anomalies by extracting extreme values from acceleration signals. Astarita et al. [14] explored the effectiveness of built-in smartphone accelerometers for detecting speed bumps and potholes using a threshold-based method. In their study, the extreme peak values along the curve of z-axis acceleration were treated as direct indicators for identifying bumps and potholes, and three filters were utilized to eliminate data noise and enhance the peak signals. The result demonstrated that speed bumps could be successfully identified by the extreme peak values of the filtered z-axis acceleration with an accuracy of 90%; however, this method was less useful for locating potholes, with a detection rate of around 65%. Mednis et al. [17] compared different threshold-based methods for identifying road anomalies from acceleration signals. A dedicated accelerometer was installed on a vehicle to sense its vibration, and the authors found a specific data pattern while hitting potholes: acceleration readings close to 0 m/s² on all three axes. They therefore created the G-ZERO algorithm and compared it with three other methods, Z-THRESH, Z-DIFF, and STDEV(Z). The results demonstrated that this new method could achieve 90% accuracy for detecting road anomalies. Rishiwal and Khan [18] proposed a simple threshold-based solution to measure the severity of bumps and potholes. Continuous series of z-axis acceleration were collected to represent vehicle vibrations when driving along a road, and a set of thresholds generated through empirical tests was used to examine the z-axis acceleration, extract road anomalies, and label their severity levels (1 to 3) with an accuracy of 93.75%. Zang et al. [19] attempted to use bicycle-mounted smartphones to measure the conditions of pedestrian and bicycle lanes. Their study also implemented a threshold-based method to extract significant spikes from the curve of vertical acceleration, and these spikes were recognized as road anomalies. The authors validated their result with 10 ground-truth samples and achieved 100% detection accuracy.
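To illustrate the flavor of these threshold-based detectors, here is a minimal Z-THRESH-style sketch (our own illustration, not the tuned detectors from the cited studies; the 4.0 m/s² threshold is an arbitrary placeholder):

```python
import numpy as np

def z_thresh(z_acc, threshold=4.0):
    """Flag samples whose |z-axis acceleration| exceeds a fixed threshold.

    z_acc: 1-D array of filtered z-axis accelerations (m/s^2).
    Returns the indices of candidate road anomalies.
    """
    z_acc = np.asarray(z_acc)
    return np.flatnonzero(np.abs(z_acc) > threshold)
```

Each flagged index would then be mapped back to a GPS position; the empirical difficulty discussed below is that a threshold tuned for one vehicle, phone mount, and road rarely transfers to another.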
Machine learning methods have also been intensively utilized in road anomaly detection. Kalim et al. [20] created a new mobile app called CRATER to identify potholes and speed bumps through machine learning methods. In their study, the authors also used the built-in accelerometer to capture vehicle shocks and vibrations while driving, and a set of features (e.g., mean, maximum, and minimum speed) was generated from the collected signals. Five classifiers were compared, including naïve Bayes, support vector machine (SVM), decision tables, decision trees, and supervised clustering. The results demonstrated that SVM performed best among the five methods and could successfully identify potholes and speed bumps with accuracy rates of 90% and 95%, respectively. Meanwhile, this paper also attempted to obtain more reliable results by leveraging crowdsourced data: potholes had to be reported by more than five different users before being published on the web map. Celaya-Padilla et al. [21] utilized a different machine learning approach to check for the existence of speed bumps. The authors first installed hardware sensors (a three-axis accelerometer and a gyroscope) on a vehicle to measure vehicle vibration. The collected data series were split into two-second subsets, and each subset was manually labeled as with or without a speed bump. Then, seven statistical features (e.g., mean, variance, and skewness) were generated from each axis of the two sensors' measurements for each subset. These features were selected through a multivariate feature selection strategy supported by genetic algorithms and finally fed to logistic regression models to identify whether a speed bump existed in each subset. This study achieved a detection accuracy of 97.14%. A similar study was conducted by Silva et al. [22], who used a random forest classifier to detect road anomalies from mobile sensed data. Fifty statistical features were generated from each subset of the collected data series, with each subset containing 125 continuous three-axis accelerometer measurements. After applying a feature selection procedure, 25 features were selected and used in the classification model. This method achieved 77.23%-93.91% accuracy for distinguishing roads with and without anomalies in different experimental settings.
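The pipeline these machine learning studies share can be summarized in a short sketch (illustrative window length and a reduced feature set; the cited papers use larger feature sets and different classifiers):

```python
import numpy as np
from sklearn.svm import SVC

def window_features(z_acc, win=200):
    """Cut a z-axis acceleration series into fixed windows and
    compute simple per-window statistics (mean, std, min, max)."""
    feats = []
    for start in range(0, len(z_acc) - win + 1, win):
        w = z_acc[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)

# Hypothetical usage: X = window_features(signal); y = manual labels
# (1 = window contains an anomaly). Then, for example:
# clf = SVC(kernel='rbf').fit(X_train, y_train); clf.predict(X_new)
```

The labeling step is exactly the laborious part criticized in the knowledge gaps below: every training window must be annotated by hand against ground truth.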
Wavelet analysis has a superior ability for analyzing continuously changing signals, which shows great potential to aid in interpreting mobile sensed data. Wei et al. [23] calculated wavelet statistics using an official roughness dataset to characterize road surface roughness. Results demonstrated that the obtained wavelet statistics showed a high correlation with officially measured roughness indexes. Recent studies have attempted to use wavelet transforms to recognize bumps and potholes from mobile sensed data series. For example, Bello-Salau et al. [24] were the first to integrate the wavelet transform (WT) into road anomaly detection. In their study, the authors combined a discrete WT model with the scale-space filtering algorithm to denoise the vehicle vibration signals collected from a dedicated accelerometer (NI myRIO-1950). Then, a fixed threshold was used to extract abnormal values from the denoised signals to identify the road anomalies (e.g., bumps and potholes). This study achieved relatively high accuracy for detecting bumps (96%) and potholes (94%). Rodrigues et al. [25] conducted a similar study to evaluate the effectiveness of a different discrete WT, the Haar wavelet transform (HWT), for detecting potholes. The authors first created an Android-based mobile app to collect data from the built-in smartphone accelerometer. Then, HWT was applied to the z-axis accelerations at different decomposition levels to generate wavelet coefficients, which could highlight the abnormal variations when hitting potholes. Thresholds were generated based on the mean value and the standard deviation of the calculated wavelet coefficients and were used to label the collected signals as potholes, intermediate irregularities, or acceptable perturbations. However, the authors only used two manually collected potholes to validate their result, which was not statistically sufficient.
Implementing crowdsensing solutions would be exceptionally beneficial in road anomaly detection, as it allows continuous monitoring of road surface conditions by leveraging public contributed data with little or even zero economic cost. Li et al. [15] proposed a crowdsensing solution to assess road surface conditions. The authors first used an improved threshold-based method to detect potholes. Then, the crowd sensed potholes within a 10-meter radius were aggregated into one pothole through a simple averaging procedure. Sabir et al. [26] conducted a similar study to enhance the accuracy of the detected road anomalies. In their study, the public reported potholes within a 5-meter radius were clustered to eliminate duplicated reports. Meanwhile, road anomalies had to be reported by different users before final confirmed. This study could successfully detect 90% of speed breakers and 85% of potholes.
Knowledge Gaps
Although existing studies have proven efficient at identifying road anomalies using mobile sensed data, they also expose some knowledge gaps which need to be addressed, including:

1. Existing detection methods have apparent limitations. Threshold-based methods need extensive empirical studies to obtain highly reliable thresholds. However, these thresholds mostly need to be adjusted and even re-tested when applied in different locations, which, in turn, significantly limits the repeatability of threshold-based methods. Machine learning methods usually require an extensive model training process based on a vast amount of labeled data, which is laborious and time-consuming. Utilizing the wavelet transform (WT) can be more efficient for analyzing mobile sensed data; however, integrating WT into road anomaly detection is still at a preliminary stage. To date, only a few studies have reported on the utilization of discrete WT, and the implementation of continuous wavelet transform (CWT) is still underexplored.

2. Pothole size estimation is lacking. Most existing studies focus only on identifying and locating potholes; however, few studies investigate how to estimate potholes' sizes using mobile sensed data. The damage caused by potholes varies by their sizes, and patching a pothole can cost about $35 to $50 U.S. dollars. Therefore, accurate and timely pothole size estimation is of great importance, which can help local governments wisely allocate budget to fix hazardous potholes.

3. Prior crowdsourcing solutions are too simple to synthesize public contributed results efficiently. How to leverage crowd sensed data to achieve better road anomaly detection is still an underexplored question. Currently, only a few studies have attempted to address this question, and they rely on simple crowdsensing strategies (e.g., averaging the crowd sensed data). However, these studies cannot effectively integrate public contributions to optimize the detection result.
Solution and New Contributions
To fill the above-referenced knowledge gaps, we propose an enhanced mobile sensing approach to detect road anomalies. In this study, we first acquire mobile sensors' data, including three-axis accelerometer and GPS, through a customized mobile app: PotholeAnalyzor. We then use wavelet analysis to identify road surface anomalies (such as bumps and potholes) and measure their sizes based on the mobile sensed data. Finally, we innovatively synthesize different driving tests' results through a spatial clustering method, Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN), to optimize the detection results.
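To make that clustering step concrete, the following is a minimal sketch of how detections pooled from several runs could be grouped with the hdbscan Python package; it is our illustration, not the paper's exact pipeline, and the coordinates, min_cluster_size, and synthesis-by-averaging are illustrative assumptions.

```python
import numpy as np
import hdbscan  # pip install hdbscan

# Hypothetical detections pooled from several driving tests:
# each row is (latitude, longitude) of one reported anomaly.
detections = np.array([
    [33.5801, -101.8745], [33.5801, -101.8744],  # same pothole, two runs
    [33.5823, -101.8790], [33.5824, -101.8791],
    [33.5900, -101.8600],                        # isolated report (likely noise)
])

# min_cluster_size=2 keeps only anomalies seen in at least two runs;
# the haversine metric expects coordinates in radians.
clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric='haversine')
labels = clusterer.fit_predict(np.radians(detections))

# Synthesize each cluster's member points into one optimized location.
for label in set(labels) - {-1}:  # -1 marks noise points
    members = detections[labels == label]
    print('confirmed anomaly near', members.mean(axis=0))
```

The appeal of a density-based method here is that it needs no fixed merge radius and automatically discards isolated, unconfirmed reports as noise.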
Compared with prior studies, this study makes three new contributions to road anomaly detection, including:

1. Implement a new method. To the best of our knowledge, this study marks the first attempt to test the performance of CWT in road anomaly detection.

2. Provide a solution for pothole size estimation. Pothole size estimation plays an important role in road surface management; however, it has not been considered in prior studies. This study uses an innovative wavelet-based approach to extract size information for road surface anomalies, which is a new solution to an existing problem.

3. Put forward an enhanced mobile sensing approach. There are some drawbacks associated with crowd sensed data, such as data inaccuracy and redundancy. This study is among the first to investigate how to optimize road anomaly detection results by spatially clustering different driving tests' detection results.
Methods
In this study, we propose an enhanced crowdsensing approach to detect road anomalies by taking advantage of CWT and spatial clustering methods. The detection process goes through three main stages as shown in Figure 1, including (1) mobile sensors' data acquisition and preprocessing, (2) road anomaly detection and size estimation, and (3) result optimization by clustering crowd sensed data.
This section details the data and methods used in each processing stage, respectively. We first create an Android-based mobile app-PotholeAnalyzor to acquire research data from two smartphone sensors (e.g., GPS and accelerometer). Next, the mobile collected raw data is preprocessed to clean, transform, and organize datasets before conducting analysis. Then, we make the first attempt to use CWT to analyze mobile sensed signals for identifying road anomalies and estimating their sizes. Finally, the detected bumps and potholes are confirmed and optimized by clustering multiple driving tests' results.
Data Acquisition and Preprocessing
Former studies have proven that a smartphone accelerometer works well for capturing irregular vehicle vibrations when hitting potholes or bumps [15][16][17]. By integrating with GPS data, these abnormal acceleration signals can be geotagged, which can aid in identifying and locating road anomalies. Although some studies suggest that the gyroscope can measure smartphone orientation and generate additional features to characterize vehicle motion, this study only utilizes one smartphone motion sensor, the accelerometer, for two reasons: (1) the accelerometer is the most direct motion sensor for measuring vehicle vibrations and has proven powerful enough for capturing abnormal signals; (2) utilizing two motion sensors at a high sampling rate (e.g., 100 Hz) can drain the smartphone battery much faster, which would significantly limit the implementation of the proposed solution.
In this study, we collect data from a smartphone accelerometer and GPS through a customized mobile app. The collected raw accelerometer's data is preprocessed through three steps: data reorientation, data smoothing, and geotagging accelerometer's measurements using GPS data.
Mobile Sensor Data Collection
To obtain the mobile sensors' data, we create a mobile app, PotholeAnalyzor, using Android application program interfaces (APIs). PotholeAnalyzor can record real-time sensed accelerometer measurements, timestamps, and GPS coordinates. Please note that smartphones must be fixed on the vehicle using smartphone holders during data collection, which avoids noise caused by device sliding.
The accelerometer measures both the real acceleration force and earth gravity. To eliminate the influence of earth gravity, Android provides a linear acceleration sensor, which isolates and removes the force of gravity from the accelerometer measurements using a low-pass filter and a high-pass filter. Refer to [15,27] for a detailed explanation.
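As a rough illustration of that gravity-isolation idea, here is a minimal sketch following the filter pattern described in the Android sensor documentation (not the exact code behind the platform's linear acceleration sensor; the smoothing factor is an illustrative assumption):

```python
ALPHA = 0.8  # illustrative low-pass smoothing factor

gravity = [0.0, 0.0, 0.0]

def linear_acceleration(raw_xyz):
    """Split one raw accelerometer sample into gravity and linear motion."""
    linear = [0.0, 0.0, 0.0]
    for i in range(3):
        # Low-pass filter: keep the slowly varying gravity component.
        gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * raw_xyz[i]
        # High-pass filter: raw minus gravity keeps the vehicle vibration.
        linear[i] = raw_xyz[i] - gravity[i]
    return linear
```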
This study analyzes linear accelerometer measurements to detect road anomalies. The sampling rate of the accelerometer is set to 100 Hz while GPS is set to 1 Hz. Figure 2 shows the app's user interface, which contains a dynamic chart showing the z-axis acceleration and a Google Maps visualizer tracking the driving path using GPS.
Data Reorientation
To ensure the effectiveness of mobile sensed acceleration for capturing vehicle jerks while hitting potholes, data reorientation needs to be implemented to align the accelerometer's axes with the vehicle's axes: the x-axis and y-axis of the accelerometer should measure the horizontal movement of the vehicle, while the z-axis should be perpendicular to the vehicle and sense its vertical vibration, which is directly caused by road anomalies [5]. Euler Angles have been widely proven to be effective for reorienting accelerometers. In this study, we reorient the accelerometer measurements through Euler Angles as follows [14,28]:

a'_x = a_x cos β + a_y sin α sin β + a_z cos α sin β,
a'_y = a_y cos α − a_z sin α,
a'_z = −a_x sin β + a_y sin α cos β + a_z cos α cos β,

where α and β are the two Euler Angles, roll and pitch, a_x, a_y, a_z are the raw accelerometer measurements along the three axes, and a'_x, a'_y, a'_z are the reoriented three-axis accelerations.
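A compact sketch of this reorientation step, assuming the roll and pitch angles have already been estimated elsewhere (e.g., from the gravity vector); variable names are ours:

```python
import numpy as np

def reorient(raw_xyz, roll, pitch):
    """Rotate raw accelerometer samples into the vehicle frame.

    raw_xyz: (n, 3) array of raw readings; roll/pitch in radians.
    Returns an (n, 3) array whose z column is the vertical vibration.
    """
    sa, ca = np.sin(roll), np.cos(roll)
    sb, cb = np.sin(pitch), np.cos(pitch)
    # Rotation matrix implementing the three Euler-angle equations above.
    R = np.array([[cb, sa * sb, ca * sb],
                  [0.0, ca, -sa],
                  [-sb, sa * cb, ca * cb]])
    return raw_xyz @ R.T
```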
Data Smoothing
Removing data noise is an essential step in signal analysis, and mobile sensed measurements inevitably contain noise. In this study, we implement a high-pass filter to wipe off noise and enhance signal patterns, which is conducted as:

α = t / (t + dT),
y_i = α (y_{i−1} + x_i − x_{i−1}), i = 1, …, n,

where x_i is the ith raw sample, y_i is the ith smoothed value, t is the current time tag, dT is the event delivery rate, and n is the number of samples, which refers to the number of z-axis accelerometer measurements in this study. Figure 3 shows the comparison between the raw data and the processed data, which indicates that noise can be efficiently eliminated, with an enhanced data pattern after filtering.
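The same filter takes only a few lines of Python (a sketch of the standard smartphone high-pass filter; the update rule shown above is our reconstruction from the paper's variable list):

```python
import numpy as np

def high_pass(x, t, dT):
    """High-pass filter a raw signal x sampled every dT seconds.

    t is the filter time constant ('current time tag' in the text);
    larger t removes more of the low-frequency drift.
    """
    alpha = t / (t + dT)
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        # y_i = alpha * (y_{i-1} + x_i - x_{i-1})
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y
```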
Geotagging
The sampling rates of GPS (1 Hz) and accelerometer (100 Hz) are different. To identify the locations of road anomalies, we need to geotag each accelerometer measurement by leveraging GPS readings. In this study, we adopt a scheme proposed in [6] to integrate these two sensors' data. First, the original GPS readings (latitude, longitude, height) are transformed into earth-centered earth-fixed (ECEF) coordinates (x, y, z). Then, we find two temporal-nearest GPS readings for each accelerometer measurement by matching their timestamps. Last, the accelerometer measurement can be geotagged through a linear interpolation scheme based on its temporal distance to its two nearest GPS points.
(x, y, z) = (x_0, y_0, z_0) + ((t − t_0) / (t_1 − t_0)) · ((x_1, y_1, z_1) − (x_0, y_0, z_0)),

where (x, y, z) are the calculated ECEF coordinates for the accelerometer measurement with timestamp t, and (x_0, y_0, z_0) and (x_1, y_1, z_1) are two consecutive GPS readings with timestamps t_0 and t_1, which are the temporally nearest GPS points to the acceleration measurement.
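A minimal sketch of this geotagging step, assuming the GPS fixes have already been converted to ECEF coordinates (np.interp performs exactly the linear interpolation above; array names are ours):

```python
import numpy as np

def geotag(acc_times, gps_times, gps_ecef):
    """Interpolate an ECEF position for every accelerometer timestamp.

    acc_times: (n,) accelerometer timestamps (seconds)
    gps_times: (m,) GPS timestamps, sorted ascending
    gps_ecef:  (m, 3) GPS positions in ECEF coordinates
    """
    coords = np.empty((len(acc_times), 3))
    for k in range(3):  # interpolate x, y, z independently
        coords[:, k] = np.interp(acc_times, gps_times, gps_ecef[:, k])
    return coords
```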
Road Anomaly Detection and Size Estimation
From a digital signal perspective, each piece of accelerometer recording is a sum of multiple signals with varying frequencies and amplitudes. The amplitude signature of a road anomaly is very sensitive to the acquisition platform and conditions, such as driving speed and the type of vehicle; therefore, amplitude-based detection approaches are often site-specific and unreliable. Frequency-based methods are much more stable because they focus on identifying unique frequency components that are indicative of surface roughness and road anomalies. Fourier analysis and wavelet analysis are the two most popular frequency-based approaches. The use of Fourier analysis in road surface roughness characterization [29,30], however, suffers from a major limitation: the lack of association between the spatial domain and the frequency domain, such that locating a certain spectral anomaly on the distance profile is difficult with Fourier analysis. Wavelet analysis, on the other hand, is a superior option because it not only reveals the frequency components of the road profile but also identifies where a certain spectral anomaly exists in the spatial domain. Previous applications of wavelet analysis in this field have yielded satisfactory results in road roughness assessment and the detection of surface irregularities, e.g., [23]. In this study, we extend this application and discuss the use of wavelet analysis in pothole detection and pothole size estimation.
Continuous Wavelet Transform
We detect potholes and estimate their sizes by performing the continuous wavelet transform on the preprocessed data. We chose CWT over the discrete wavelet transform (DWT) because CWT results are easier to interpret, given that CWT operates at every scale (frequency) and the shifting of the wavelet function is continuous. The one-dimensional CWT is defined as [31]:

C(a, τ) = (1/√a) ∫ f(x) ψ*((x − τ)/a) dx, (8)

where C is the output wavelet coefficient, f(x) is the preprocessed input signal as a function of location x, a is the scale parameter (inversely related to spatial frequency), τ is the position parameter, and ψ* is the complex conjugate of the mother-wavelet function, which is chosen based on the feature of interest.
In this study, we use the order 3 Daubechies wavelet (DB3) as the mother-wavelet (Figure 4), as recommended by [23]. There is a correspondence between wavelet scale and frequency: a smaller scale corresponds to a compressed wavelet, which is high in frequency, while larger scales correspond to a stretched wavelet, representing lower frequency. As defined in Equation (8), a wavelet coefficient is a function of both wavelet scale and position. Scale controls the compression or stretching of the wavelet, and position controls the shifting of the wavelet function. For each scale (corresponding to a certain degree of wavelet compression or stretching), the wavelet examines every location on the input signal by continuously moving along the distance axis. Therefore, the final output is a two-dimensional matrix in scale (frequency)-location space, which is then converted to a matrix of percentages of energy (the sum of all elements in the matrix equals 1). CWT produces high wavelet coefficient values at scales where the oscillation in the wavelet correlates best with the signal feature. With a proper choice of mother-wavelet that approximates the target signal (in this case, our target signal is the accelerometer recording when hitting a pothole), the wavelet coefficient image will highlight the target location at the right scale.
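For illustration, the sketch below builds the scale-location energy matrix in Python with PyWavelets. Note that PyWavelets' cwt does not implement Daubechies wavelets, so the Morlet wavelet stands in here for the DB3 wavelet used in this study:

```python
import numpy as np
import pywt

def wavelet_energy(signal, dx, scales=np.arange(1, 65)):
    """Scale-location matrix of wavelet energy percentages.

    The paper uses MATLAB's CWT with the db3 wavelet; PyWavelets' cwt
    does not support Daubechies wavelets, so 'morl' is an illustrative
    stand-in. dx is the spacing between samples along the road.
    """
    coeffs, _freqs = pywt.cwt(signal, scales, 'morl', sampling_period=dx)
    energy = coeffs ** 2
    return energy / energy.sum()  # percentage of energy: entries sum to 1
```

High values in the returned matrix flag the locations (columns) and scales (rows) where the wavelet correlates strongly with the signal, which is the basis of the size estimation described next.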
Pothole Size Estimation
CWT generates a high-value response when the wavelet shifts to a pothole location. The raw wavelet coefficient images, however, do not come with a meaningful scale that corresponds to pothole size and usually capture irrelevant information such as random road noise and the vibration of the engine. Therefore, we further process the wavelet coefficient images with the following steps:
1. Convert the unitless wavelet scales to physical scales in meters using the algorithm provided by the MATLAB Wavelet Toolbox [32].
2. Multiply the scale axis by a scaling factor, which relates the converted wavelet scales to the sizes of targets. This scaling factor is determined by field experiments at a test site and is kept constant unless the data acquisition platform is changed (in this study, we obtained a value of 0.3 for generic vehicles, including sedans and SUVs).
3. Clean the wavelet coefficient images by thresholding (only keep values that are greater than N times the overall average; in this case, we use N = 18).
4. Apply a 2-D Gaussian filter to remove noise and combine detections that correspond to the same pothole. The center of each highlighted zone is then considered the center of a detected pothole.
5. Obtain the size estimation for each detected pothole (highlighted zones on the wavelet coefficient image).
The final result contains two pieces of information: pothole location (step 4) and pothole size (step 5). It is necessary to state that the choice of scaling factor and threshold value may be subject to change in other data acquisition settings, because the signals can be influenced by the coupling between road and vehicle. For example, data acquired by a pickup truck with large tires and a harder suspension may require a different set of processing parameters. Also note that, since the mobile device mainly measures vehicle vibrations along a driving path, we only estimate the maximum driving-dimensional length of road anomalies in this study. Here, the driving dimension of an anomaly is parallel to the road driving direction, as illustrated in Figure 5.
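To make the post-processing concrete, the sketch below implements steps 3 to 5 on the scale-converted energy matrix. The Gaussian filter width and the peak-based size read-out are our assumptions, since the text does not fix them:

```python
import numpy as np
from scipy import ndimage

def detect_potholes(energy, positions, phys_scales, n_thresh=18, sigma=2.0):
    """Steps 3-5: threshold the (already scale-converted) energy matrix,
    smooth it, and report one (location, size) pair per highlighted zone.

    energy: (n_scales, n_positions) matrix; positions and phys_scales give
    the physical meaning of its columns and rows. sigma is an assumed width.
    """
    cleaned = np.where(energy > n_thresh * energy.mean(), energy, 0.0)
    smoothed = ndimage.gaussian_filter(cleaned, sigma=sigma)
    labels, n = ndimage.label(smoothed > 0)   # connected highlighted zones
    detections = []
    for k in range(1, n + 1):
        rows, cols = np.nonzero(labels == k)
        center = positions[int(cols.mean())]           # zone center -> location
        peak = np.argmax(smoothed[rows, cols])         # strongest response
        size = phys_scales[rows[peak]]                 # scale at peak -> size
        detections.append((center, size))
    return detections
```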
Result Optimization by Clustering Crowd Sensed Data
Using smartphone sensors to detect vehicle jerks is a highly efficient solution for identifying road anomalies; however, it also has some significant drawbacks. For example, the detection result depends entirely on whether the vehicle actually strikes road anomalies, yet vehicle wheels only run over a small portion of the pavement surface, which significantly limits the detection coverage. Meanwhile, a single user's detection result can be influenced by various factors, such as the vehicle model, phone model, driving skills, etc. Therefore, in this study, we implement a crowdsensing solution to optimize the detection results by mining publicly contributed data. We hypothesize that the significant similarities among crowd-sensed data can be used to obtain more reliable detection results than a single user's results.
In this study, we innovatively implement spatial clustering methods to group crowd sensed results into clusters based on their similarities. Then, each cluster's member points are further synthesized to form a unique point using weighting schemes, which represents a confirmed road anomaly.
Density-Based Clustering
Density-Based Spatial Clustering of Applications with Noise (DBSCAN) has been extensively utilized to analyze spatial patterns and can effectively separate concentrated points (clusters) from discrete points (noise) [33,34]. Implementing DBSCAN requires two parameters: 1) the minimum number of points to form a cluster (C_min) and 2) the search distance (d) used to define neighbors. The clustering procedure classifies data points into three classes [34]:
• Core point: a point with at least C_min neighbors; points within distance d of the tested point are counted as its neighbors.
• Border point: a point that is counted as a neighbor of a core point but does not have at least C_min neighbors of its own.
• Noise point: a point that is neither a core point nor a border point.
The clustering procedure of DBSCAN contains the following main steps:
1. Choose a random sample point from the dataset as a starting point (p).
2. Identify the neighbors of p using the customized search distance.
3. If p is a core point, mark it as visited and form a cluster from the core point and all its connected points. Connected points include p's neighbors and all reachable points (within a d radius) of its neighbors.
4. If p is not a core point, retrieve an unvisited point from the dataset as a new starting point and repeat the process.
5. The process ends when all points are marked as visited or assigned to a cluster.
Hierarchical DBSCAN (HDBSCAN) is an enhanced density-based clustering method proposed by Campello et al. in 2013 [35]. This method integrates DBSCAN with a hierarchical clustering algorithm, which significantly extends the ability of DBSCAN to identify clusters of varying densities. As one of the most data-driven clustering methods, HDBSCAN has only one required parameter, C_min. One prominent advantage of HDBSCAN is that it generates probability scores for the sample points; the probability score indicates the likelihood of a point being involved in a cluster. Refer to [36] for a detailed explanation of HDBSCAN.
In this study, we implement HDBSCAN to group the crowd-sensed road anomalies. Each identified cluster is recognized as a unique road anomaly. Meanwhile, this process can also aid in filtering out low-quality publicly contributed results through a simple procedure: points labeled as noise or with low probability scores are eliminated from the clustering result.
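A minimal sketch of this filtering step. It uses scikit-learn's HDBSCAN (available since scikit-learn 1.3; the original hdbscan package exposes the same labels_ and probabilities_ attributes), and the minimum cluster size shown is an assumed value:

```python
import numpy as np
from sklearn.cluster import HDBSCAN  # scikit-learn >= 1.3

def filter_crowd_points(points, c_min=3, min_prob=0.5):
    """Cluster crowd-sensed anomaly points and drop low-quality ones.

    points: (n, 2) NumPy array of projected coordinates in meters.
    Noise points (label -1) and members with probability < min_prob are
    treated as low-quality contributions and removed, as in the text.
    """
    model = HDBSCAN(min_cluster_size=c_min).fit(points)
    keep = (model.labels_ >= 0) & (model.probabilities_ >= min_prob)
    return points[keep], model.labels_[keep], model.probabilities_[keep]
```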
Weighting Schemes
After removing the low-quality crowd-sensed data, we utilize two weighting schemes to synthesize each cluster's members into one data point. First, we calculate the weighted median center for each cluster to represent the locations of the final determined anomalies. The median center is the location which minimizes the distance to all features in a group. The median center is less influenced by outliers than the mean center, making it a more reliable measure of central tendency [37]. Mathematically, the median center needs to satisfy the following objective function [37]:

minimize over (u, v): Σ_{i=1..n} w_i √((x_i − u)² + (y_i − v)²),

where x_i and y_i are the coordinates of the ith point, u and v are the coordinates of the weighted median center, w_i is the weight of the ith point, which refers to the probability score in this study, and n is the number of points. Meanwhile, a weighted average scheme is used to optimize the size estimation result for each cluster:
S_opt = (Σ_{i=1..n} w_i s_i) / (Σ_{i=1..n} w_i),

where n is the number of points in a cluster, s_i is the estimated size of the ith point, w_i is the weight of the ith point, which refers to the probability score in this study, and S_opt is the recalculated size for the cluster. Through these two weighting schemes, we can effectively leverage crowd-sensed data to obtain an optimized detection result.
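A sketch of both weighting schemes. Because the weighted median center has no closed-form solution, a Weiszfeld-style iteration is used here, which is one standard way to minimize the objective function above:

```python
import numpy as np

def weighted_median_center(xy, w, iters=100, eps=1e-9):
    """Weiszfeld iteration for the weighted median center (u, v) that
    minimizes sum_i w_i * distance(point_i, center).

    xy: (n, 2) array of coordinates; w: (n,) array of probability scores.
    """
    center = np.average(xy, axis=0, weights=w)  # start at the weighted mean
    for _ in range(iters):
        d = np.linalg.norm(xy - center, axis=1)
        d = np.maximum(d, eps)                  # guard against divide-by-zero
        coef = w / d
        center = (coef[:, None] * xy).sum(axis=0) / coef.sum()
    return center

def weighted_mean_size(sizes, w):
    """Weighted average of member size estimates, S_opt."""
    return float(np.dot(w, sizes) / np.sum(w))
```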
Experiment Settings
To verify the effectiveness of our method, we manually collected 24 road anomalies as ground truth points from two parking lots at Texas A&M University. These anomalies were positioned using a hand-held GPS, a GARMIN GPSMAP 78, with high positioning accuracy (~3 meters). This 3-meter positioning accuracy is sufficient in this study to evaluate the performance of mobile-sensed data (which has 5- to 10-meter positioning error) and to help road maintainers locate road anomalies. Meanwhile, we carefully measured each pothole's driving-dimensional length using a ruler to form a ground-truth dataset. Figure 6 illustrates the spatial distribution of the obtained ground truth data.

Table 1 shows our experiment settings. In this experiment, we tested each parking lot five times with two different drivers, at approximately 30 miles per hour (mph) driving speed. One driver drove a 2009 Toyota Corolla with a Moto X Pure phone running our PotholeAnalyzor app to detect each parking lot three times. Another driver drove a 2009 Toyota RAV4 with an iPhone 8 running a similar iOS app, CrowdSensor, to detect each parking lot twice. Drivers' explicit permission was obtained before collecting sensor data. The accelerometer sampling rate for both phones was set to 100 Hz; GPS was set to 1 Hz. By increasing the variability of the experiment (drivers, phones, vehicles, etc.), we were able to effectively assess the performance of our method for processing crowd-sensed data.
Wavelet Analysis Results
After data collection, we first eliminated the noise of Z-axis acceleration data and geotagged each data point using GPS readings. Then, we analyzed the processed Z-axis acceleration series to identify road anomalies and measure their sizes.
As illustrated in Figure 7, the upper subplot shows the input signal: the preprocessed z-axis acceleration. We then performed CWT on the signal to calculate its similarity with the mother wavelet at continuous scales, as shown in the middle subplot. The lower subplot shows the filtered high wavelet coefficients, which indicate a high possibility that an anomaly of a specific size exists. The red circles indicate the locations and sizes of ground truth points. The results demonstrate that wavelet analysis can efficiently identify, locate, and measure abnormal signals caused by hitting road anomalies.
Meanwhile, we further explored the influence of driving speed on the detection results. In this experiment, we tested a road segment from Parking Lot 2 three times at different driving speeds (namely, 20 mph, 30 mph, and 40 mph). This road segment contains four bumps of the same size (0.4 meters). Figure 8 shows the detection results generated from the three driving tests. All four bumps were successfully identified (yellow lines in the right-side subplots) in all three driving tests, with acceptable size estimation results (~0.25 to 0.5 meters). This indicates that our proposed method achieves stable performance for detecting road anomalies at different driving speeds. It is also worth noting that the detection results (yellow lines) show a positioning difference from the ground truth points (red circles) when driving at 40 mph (bottom-right subplot in Figure 8). This is because the GPS sampling rate is 1 Hz, making the positioning more susceptible to error at high driving speeds. Therefore, we suggest implementing this approach at driving speeds under 40 mph to achieve higher positioning accuracy for road anomalies.
Optimized Detection Results by Mining Crowd Sensed Data
After obtaining detection results from each driving test, we implemented HDBSCAN to group the ten detection result sets (five for each study site) based on their similarities, which can aid in eliminating low-quality publicly contributed data and enhancing detection accuracy. Figure 9a,b illustrate the detection results obtained from the five driving tests at both study sites. These two subplots show that most of the detected anomalies are concentrated around ground truth points; however, a portion of detected points (~24% in this study) still lie relatively far (more than 10 meters) from ground truth points. This implies that the detection results obtained from a single driving test are not reliable. To optimize our results, we first applied HDBSCAN to the five detection result sets to form clusters. HDBSCAN automatically groups sample points into clusters or noise based on their spatial density patterns. Meanwhile, it also generates a probability score for each point, indicating its likelihood of being involved in a cluster. In this study, clustering noises and cluster member points with low probability scores (less than 0.5) were regarded as low-quality contributed points and eliminated from the detection results. Figure 9c,d show the clustering results for both study sites after eliminating low-quality contributed points. Through this procedure, the points with a large distance to the cluster centers were successfully removed. Finally, we calculated the weighted median center for each cluster to synthesize multiple contributed points into one point, which represents the optimized location of a detected road anomaly. Figure 9e,f show that the optimized detection results (yellow dots) match the ground truth points (red dots) very well. Meanwhile, we also used a weighted average scheme based on cluster probability scores to recalculate the driving-dimensional size of each final confirmed road anomaly.
Result Evaluation
To better evaluate the performance of this enhanced crowdsensing solution in road anomaly detection, we compared our method with a widely utilized threshold-based method, Z-THRESH, and an improved threshold-based method with a simple crowdsensing strategy:
• Method 1: z-axis accelerometer measurements exceeding a 0.4 g threshold are counted as road anomalies.
• Method 2: an improved threshold-based detection method integrated with a simple crowdsensing strategy: anomalies need to be reported by more than three users before being finally confirmed, and the location of a confirmed anomaly is calculated by averaging all the contributed points.
Since Method 1 does not specify how crowd-sensed data should be synthesized, we applied the same crowdsensing strategy used in Method 2 to Method 1 to fuse the five driving tests' results. In this study, we compared these two methods with our enhanced solution in terms of detection efficiency and positioning accuracy.
The detection efficiency is evaluated from three perspectives:
1. Accuracy: correctly detected anomalies (NCDA) / total detected anomalies.
2. Coverage: detected ground truth points (NDGT) / total ground truth points.
3. Detection redundancy: (NCDA − NDGT) / NCDA.
In this experiment, detected anomalies within a 10-meter radius of any ground truth point are counted as correctly detected anomalies. For each ground truth point, if it matches any detected anomaly within a 10-meter radius, it is counted as a detected ground truth point. Note that each ground truth point may be matched with more than one detected anomaly; therefore, we also checked the detection redundancy of each method.
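These three metrics are straightforward to compute once detections are matched to ground truth; a sketch under the 10-meter rule:

```python
import numpy as np
from scipy.spatial.distance import cdist

def detection_metrics(detected, truth, radius=10.0):
    """Accuracy, coverage, and redundancy under the 10-m matching rule.

    detected, truth: (n, 2) arrays of projected coordinates in meters.
    """
    d = cdist(detected, truth)
    ncda = int((d.min(axis=1) <= radius).sum())  # correctly detected anomalies
    ndgt = int((d.min(axis=0) <= radius).sum())  # detected ground truth points
    accuracy = ncda / len(detected)
    coverage = ndgt / len(truth)
    redundancy = (ncda - ndgt) / ncda if ncda else 0.0
    return accuracy, coverage, redundancy
```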
Meanwhile, we calculated the distance between detected anomalies and their corresponding ground truth points to compare the positioning accuracy of the different methods. Table 2 presents the comparison results for the three methods. The results demonstrate that the proposed enhanced crowdsensing solution achieved the highest detection accuracy (94.44%), far superior to the other two methods (43.90% and 64.71%). Our approach also achieved the same coverage rate as Method 2. Moreover, by applying spatial clustering methods, we can synthesize crowd-sensed points into highly reliable detection results with no redundant detected anomalies and higher positioning accuracy. More importantly, this study added a new dimension to road anomaly detection by estimating the driving-dimensional size of each road anomaly. We used two methods to synthesize the size estimations of cluster member points into one final result: one averages all member points' estimated values, and the other calculates the weighted mean based on each member point's cluster probability score. Figure 10 shows the size estimation results of these two methods. The centerline of each box represents the mean estimation error, and the box's upper and lower bounds represent the mean plus and minus one standard deviation, respectively. The figure indicates that our method can effectively estimate the driving-dimensional size of road anomalies with acceptable error. Meanwhile, the weighted mean shows a lower mean error and a smaller standard deviation in Figure 10, indicating that the weighted average scheme synthesizes crowd-sensed data better than a simple average.
Discussion and Conclusions
Road anomaly detection is of great importance in road maintenance and management. Continuously monitoring road anomalies with a low-cost and high-efficiency solution is a fundamental social need; however, it remains a complicated and unsolved research task. In this study, we proposed an enhanced mobile sensing approach to detect road anomalies and measure their sizes using smartphone sensors. To the best of our knowledge, this study marks the first attempt to utilize CWT in road anomaly detection. We are also among the first to explore the implementation of spatial clustering methods (HDBSCAN) for synthesizing crowd sensed results.
In this study, a built-in smartphone accelerometer and GPS were first utilized to capture and geotag vehicle vibrations. Next, CWT was adopted to extract and analyze abnormal mobile sensed signals when vehicles are hitting road anomalies. Then, we utilized a spatial clustering method, HDBSCAN, to group different driving tests' detection results into clusters based on their spatial density patterns. Each cluster's member points were finally synthesized into a unique road anomaly.
To verify the effectiveness of the proposed method, we validated it against 24 manually collected road anomalies and compared its performance with a widely utilized threshold-based method, Z-THRESH, and a preliminary crowdsensing approach proposed by Li et al. [15]. Our experiments demonstrated that wavelet analysis outperforms conventional threshold-based methods, identifying abnormal vehicle vibrations caused by road anomalies more effectively from mobile-sensed data. By spatially mining the crowd-sensed results, our enhanced mobile sensing solution achieved the highest road anomaly detection accuracy (94.44%) among the three tested methods, with higher positioning accuracy (within 3.29 meters on average). More importantly, our approach successfully estimated the driving-dimensional size of bumps and potholes based on the calculated wavelet coefficients with acceptable size estimation error (a mean error of 14 cm). This could be enormously beneficial in helping local governments allocate road maintenance budgets wisely to fix hazardous potholes.
This study demonstrated that the mobile sensing approach is efficient for detecting road anomalies. It also proved the potential and effectiveness of mobile crowdsensing solutions for conducting large-scale sensing and monitoring tasks. Leveraging crowd-sensed data allows road surface conditions to be monitored continuously at little additional economic cost, which substantially improves the effectiveness of traditional road monitoring systems.
However, some technical barriers exist that limit the implementation of crowdsensing solutions at the current stage. For example, mobile crowdsensing is significantly constrained by smartphone hardware: low-quality mobile sensor data may lead to unreliable detection results, and collecting mobile sensor data at a high sampling rate can drain a phone battery in several hours, or even faster. To overcome these limitations, a comprehensive crowdsensing quality-control strategy should be proposed and formalized in future work, which could further eliminate low-quality crowd-sensed data (e.g., data collected using low-quality sensors or devices, or data collected while driving at high speed). We could also further optimize the mobile-based analysis algorithm, reduce the computing load, and choose a more appropriate sensor sampling rate instead of 100 Hz, which may potentially extend smartphone battery life. In future work, we will improve the proposed solution in the following ways:
1. Propose a new anomaly size estimation solution. In this study, we only estimated the driving-dimensional size of road anomalies. In fact, the depth of potholes is also a critical factor for assessing pothole damage. In future work, we will attempt to measure the depth of road anomalies by analyzing the amplitude of mobile-sensed abnormal vibration signals.
2. Improve the performance of the crowdsensing solution. Using spatial clustering methods can efficiently eliminate low-quality contributed data points and optimize detection results. However, the density-based clustering method may mis-cluster two neighboring potholes into the same group, which could hurt detection accuracy. In future work, we will test different spatial clustering methods, compare their performance, and formalize a crowdsensing strategy that synthesizes crowd-sensed data with further improved accuracy.
3. Put forward a real-time road anomaly detection system. Drivers can sense road surfaces using smartphones in real time. With a certain number of reliable data contributors, we can potentially update road detection results on a daily, or even hourly, basis. In future work, we will attempt to recruit vehicles from local governments (e.g., garbage trucks, police vehicles) to build a real-time road anomaly monitoring system that continuously monitors road surface conditions with high accuracy.
It is worth noting that, to make autonomous vehicles a reality, vehicular sensing techniques are undergoing an unprecedented revolution, which also shows great potential for facilitating the implementation of crowdsensing solutions for assessing road quality. Nowadays, each commercial vehicle is equipped with approximately 4,000 sensors [38,39]. These sensors empower vehicles to collect thousands of signals through controller area network (CAN) bus technology, which can monitor the vehicle and its surrounding environment in real time. These vehicular sensors have a higher sampling rate and better data quality than smartphone sensors, which facilitates more precise detection results. Meanwhile, light detection and ranging (LiDAR) provides a compelling sensing capability for autonomous vehicles [40,41]. Vehicular LiDAR can simultaneously scan and generate high-resolution 3-D representations of the immediate vicinity, which could help identify road anomalies and bumpy road segments more effectively. Therefore, we believe that a vehicular crowdsensing system could be the next-generation approach for large-scale sensing and monitoring, with higher data quality, faster data transmission, and better precision. The proposed solution should remain promising and efficient for the foreseeable future.
Funding: The open access publishing fees for this article have been covered by the Texas A&M University Open Access to Knowledge Fund (OAKFund), supported by the University Libraries and the Office of the Vice President for Research.
Coastal Wetland Shoreline Change Monitoring: A Comparison of Shorelines from High-Resolution WorldView Satellite Imagery, Aerial Imagery, and Field Surveys
Introduction
Coastal wetlands serve as a natural barrier between marine and terrestrial habitats and provide essential ecosystem services such as fish and wildlife habitat, carbon sequestration, and natural flood control for upland areas [1,2]. External forcing from sea-level rise, storms, and anthropogenic modifications [3,4] creates highly dynamic conditions for coastal wetland evolution, often causing wetland loss through shoreline erosion, interior peat collapse, and submergence. Shoreline erosion is a primary cause of wetland loss in many parts of the world [5], and erosion has been linked to wind-driven waves, sediment availability and delivery, boat traffic, and sea level rise [6–10]. Changes in sea level, sediment delivery, and storm frequency and intensity in coastal areas due to climate and other environmental changes increase the threat of these hazards to wetland survival [11,12]. Environmental monitoring and assessment are critical for detecting the impacts of environmental change and developing adaptive management strategies [11,13]. This is particularly true for coastal areas where erosion hazards threaten critical habitats such as coastal wetlands.
Shoreline change analysis (SCA) is a common monitoring procedure for evaluating coastline dynamics and the vulnerability of communities and habitats to erosion hazards, such as sea level rise, storms, and anthropogenic modifications [14–16]. SCA involves the repeated measurement of shoreline position over time and estimation of the rate of erosion or accretion based on movement trends. The most common method is to calculate the slope of the linear trend of distance against time using ordinary least squares [17]. SCA relies on consistency in two critical elements: identification of the shoreline and how its position is mapped. A shoreline is defined as the boundary between land and water, but because that boundary may be partial or gradual, mapping consistency requires the use of a shoreline proxy [18]. For sandy beach environments, a common practice is to identify the shoreline position based on elevation and a tidal datum, such as mean high water. Other proxies may be the high water line, wet-dry line, cliff base or top, or low water line. For marsh shorelines, many of the water line features are obscured by vegetation; therefore, the outer edge of the vegetation is dubbed the "apparent shoreline" [19,20]. As with many shoreline proxies, boundary delineation is influenced by ambiguity and interpretation; however, the water-vegetation boundary can be identified both in the field and from remotely sensed data and, therefore, provides a consistent proxy for wetland shoreline change analyses.
Modern shoreline position is determined from several different types of source data, including field surveys and remote sensing [21]. One of the primary modern sources of shoreline data is laser altimetry, such as Light Detection and Ranging (lidar), where an elevation proxy is identified to delineate the shoreline position [22,23]. Though lidar has been used to map shorelines for coastal wetlands [24], laser altimeter data are often unavailable, or their accuracy is limited for salt marsh shorelines due to poor laser penetration through the dense vegetation [25–27]. Additionally, lidar collection times are irregular or focused on a specific episodic event, such as post-storm assessment, making lidar unreliable for regular monitoring of shorelines in coastal wetland habitats. Aerial imagery is another primary source for mapping wetland shoreline position since it is collected semi-regularly (approximately every 2-3 years for most coastal areas in the United States). The disadvantage of aerial imagery is that each image covers a small area and is collected along a flight path, so adjacent images may be collected on considerably different days or tidal cycles, impacting data consistency [28]. This inconsistency challenges automated procedures for delineating the shoreline position (typically the wet-dry line), making it necessary to manually map the shoreline boundary by digitizing within Geographic Information System (GIS) software and manually correcting for tidal stages [22,29]. Manual digitization of shorelines is costly and time- and labor-intensive, but it continues to be a standard for modern shoreline mapping [29,30].
Advances in remote sensing allow for pixel- and object-based classifications of satellite imagery to extract shoreline features, and previous studies have used satellite imagery to extract shoreline positions using various methods [31–35]; however, most of these satellite-based studies focused on beach environments. Maglione et al. [36] developed a method for extracting estuarine shorelines using high-resolution (<2 m spatial resolution) WorldView (WV) imagery (Maxar Technologies, Inc.) but did not evaluate or quantify the accuracy of the delineated shorelines against other available shoreline data. High-resolution satellite imagery could be a valuable source of data for shoreline delineation due to its regular return interval for repeated collection, consistent spectral characteristics, high spatial resolution, and broad-scale coverage. The combination of these factors could make high-resolution satellite imagery more cost-effective and efficient for high-frequency environmental monitoring of shoreline change than aerial imagery or lidar.
With the introduction of high-resolution satellite imagery with frequent return intervals, satellite-derived wetland shoreline data could provide the same spatial and temporal detail as other sources of data, including field-based Global Positioning System (GPS) or aerial imagery-derived shoreline data, while gaining greater spatial coverage and reducing the cost of shoreline monitoring by either replacing GPS field surveys or reducing the required survey frequency. In this study, we used a semi-automated procedure to map wetland shorelines from WV imagery from 2013 to 2020 and compared them to contemporaneous shoreline data from GPS and digitized aerial imagery for study sites at the Grand Bay National Estuarine Research Reserve, Moss Point, MS, USA.
Study Area
In 1972, the Coastal Zone Management Act was passed, establishing the National Estuarine Research Reserve (NERR) system in the United States (US). The NERR system was designed to facilitate long-term research and monitoring, education, and stewardship of estuarine habitats [37]. In 1999, the Grand Bay NERR (GNDNERR), located in the Northern Gulf of Mexico in the state of Mississippi (Figure 1), was designated through a partnership between the National Oceanic and Atmospheric Administration (NOAA) and the Mississippi Department of Marine Resources [38]. The GNDNERR also overlaps portions of the Grand Bay National Wildlife Refuge, located within Alabama and Mississippi. The GNDNERR comprises approximately 73 km² of relatively undisturbed estuarine habitat and contains a variety of habitats such as wet pine savanna, maritime forests, tidal creeks, salt pans, wetlands, bayous, and bays [39]. Grand Bay has diurnal astronomical tides (microtidal, with 0.42 m average amplitude) and experiences wind-driven water level fluctuations. The shoreline of Grand Bay is largely vegetated by the saltmarsh grasses Juncus roemerianus Scheele, Spartina alterniflora Loisel., and Spartina patens (Aiton) Muhl., with some sandy shorelines along highly dynamic margins.
Wetland loss in the form of shoreline erosion is a pressing management concern at GNDNERR [38,39] and within the northern Gulf of Mexico in general [16,40,41]. Shorelines in some areas of GNDNERR are eroding more than 2 m per year [42]. With a relative sea level rise of 0.41 cm yr−1 [43], the erosion rates are higher than would be expected from wetland retreat due to sea level rise alone. Exposure to wind-driven waves and reduced sediment supply may all contribute to the high shoreline erosion rates. Wetland shoreline positions at various sites within the reserve have been monitored using GPS field surveys on a semi-quarterly basis since 2013 (Figure 1 and Table 1). Sites are generally named after their geographic location, while a few are associated with monitoring stations, and include the following: Bayou Heron Mouth (BHM); Middle Bay North, West, and South (MBN, MBW, and MBS, respectively); Grand Batture East (GBE); Bird Island (BSI); North Jose Bay, also known as the Spartina Sentinel Site (SPAL); Met Station Island (MET); and Point aux Chenes North, Middle, and South (PACN, PACM, and PACS, respectively). The goal of the monitoring program is to understand wetland shoreline dynamics at a finer spatiotemporal scale than could be achieved with large-scale remote sensing techniques. Due to the labor-intensive nature of field-based surveys, the current study focuses on eleven field sites with different shoreline types and wind-wave exposure and explores a semi-automated technique to map wetland shorelines using WorldView satellite imagery. (Table 1 sediment codes: M, fine-grained/mud; S, sand; Ms, mud with shells or shell hash; Ss, sand with shells or shell hash.)
Data
WV-derived shorelines (WVS) were compared to vector digital shoreline data from two other data sources: GPS-based shorelines (GPSS) and aerial imagery-derived shorelines (AIS). Details on data collection and vectorization are included in the following sections for all three shoreline data sets.
GPS Data
Since 2013, shoreline positions have been surveyed using real-time kinematic (RTK) GPS at eleven locations in GNDNERR to quantify shoreline change rates. GPS data were collected using a Trimble R8 Model 3 Global Navigation Satellite System (GNSS) and TSC3 data collector from 2013 to 2018, or a Trimble R10 GNSS system and TSC3 data collector from 2018 to 2020. Each receiver was attached to a 2 m graphite rod with a mounted foot to obtain both horizontal and vertical shoreline position. The positional accuracy of Trimble R8 Model 3 GPS points was ±10 millimeters (mm) + 1 part per million (ppm) root mean square (RMS) horizontal error and ±20 mm + 1 ppm RMS vertical error [44]. The horizontal error of the Trimble R10 GPS points was ±8 mm + 0.5 ppm RMS, and the vertical error was ±15 mm + 0.5 ppm RMS [45]. The GPS points were collected roughly 5 to 10 m apart along the vegetation-water boundary, which typically represented the top of an erosional scarp; where an erosional scarp was not visible, the most suitable shoreline position based on dense shoreline vegetation was mapped. After field data collection, the GPS data were imported into ArcGIS software by Esri [46] as points. Points were connected into lines to create a polyline feature class using the ArcGIS tool Points to Line within the Data Management toolbox for each site and year surveyed.
During May of 2021, additional field RTK GPS and site-descriptive data were collected at each site to measure salt marsh platform elevations and estimate platform slope. Three to five cross-shore transects were selected at each study site depending on the shoreline length (extra transects were collected at sites with longer shorelines). Along each transect, multiple GPS points were surveyed, including two locations in the marsh interior, at the marsh-estuary shoreline (the scarp crest, if present), and two points in the nearshore (one point at the scarp toe, if present). General site descriptions were also noted, including the approximate percent cover of vegetation species present at the marsh shoreline using the 1 m quadrat technique [47] and nearshore sediment properties (mud/fine-grained sediments, sand, or presence of shells). These data provided information regarding the cross-shore profile, including the marsh platform elevation and slope, which were used to correct satellite-derived shoreline features for water inundation distance (described in Section 2.2.1, WorldView-Derived Shoreline Accuracy, and Equation (3)).
Aerial Imagery-Derived Shoreline Data
Orthoimagery from the National Agriculture Imagery Program (NAIP) of the U.S. Department of Agriculture was downloaded via the U.S. Geological Survey (USGS) Earth Explorer (https://earthexplorer.usgs.gov/, accessed on 23 February 2021) for available dates between 2013 and 2020. NAIP collected new imagery every 2 to 3 years, with each state following its own cycle. Since GNDNERR is located along the border of two states, parts of the reserve are surveyed more frequently. A total of five NAIP acquisition dates were identified that had coincident spatial and temporal coverage with WV or GPS collection dates for the study sites (Table 2). NAIP imagery has a 1 m ground sample distance with a ground positional accuracy of 5 m. Shoreline position was identified using the land/water boundary as a shoreline proxy for vegetated shorelines, or the wet/dry line wherever beaches were present seaward of the marsh. Shoreline boundaries were digitized at a scale of 1:1,500 from natural color imagery [48].
WorldView-Derived Shoreline Data
High-resolution satellite imagery was obtained for collection dates that overlap the available GPSS and AIS data from either of the two WV satellites with color and infrared spectrum data (WorldView-2 [WV2] or WorldView-3 [WV3], © Maxar Technologies, 2020). Briefly, both WV satellites collect high-spatial-resolution imagery (1.84 and 1.24 m, respectively) in eight spectral bands, including five bands for visible wavelengths (coastal blue, blue, yellow, green, and red) and three bands for infrared wavelengths (red-edge and two near-infrared bands). A total of ten dated images between 2013 and 2020 were selected that provided either complete or partially complete coverage of the study area and were collected as close to the date of the GPS field-based shoreline as possible (Table 2). If the closest dated WorldView image had extensive clouds covering the shoreline or minimal study area coverage, the next closest WorldView image date was selected. At least one image was obtained for each year, except for 2017, for which three images were selected. The three images in 2017 were collected in May, August, and December and provided additional information on how seasonality might impact the automatic shoreline extraction methodology. Images were radiometrically and atmospherically corrected and then pansharpened using ERDAS IMAGINE 2020 (version 16.6.0) to obtain measures of ground reflectance. To improve comparisons between WVS and AIS, images were automatically co-registered to high-resolution aerial imagery (NAIP) using the AutoSync Workstation toolbox in ERDAS Imagine. First, an NAIP image mosaic was created for an extent larger than the WV image coverage. AutoSync generates automatic tie points between two images, in this case the WV and NAIP images. The tie points coincident on both images were used to adjust the WV image to the corresponding location on the NAIP. Tie points with an error value greater than 1 m were removed. The co-registration of the WV imagery improved its spatial accuracy to less than 3.5 m and allowed for the direct comparison of WV and NAIP-derived shoreline data.
To generate vector shorelines from WV images, we modified the methodology described by Maglione et al. [36]. All WV images were classified into binary land-water rasters using tools within ArcGIS. First, the normalized difference vegetation index (NDVI) was calculated using WV band 5 in the visible red spectrum (RED) and band 7 in the near infrared spectrum (NIR1) using the following Formula (1):

NDVI = (NIR1 − RED) / (NIR1 + RED). (1)

NDVI is used to estimate the density of vegetation; therefore, it distinguishes between vegetation and water or bare soil [49]. Since wetland shorelines in the study area are densely vegetated with salt marsh grasses, the NDVI provided the best approximation of the shoreline position. Maglione et al. [36] provide the following NDVI values for land-water classification: vegetation is classified as high values (above 0.2), water is represented by low values (usually less than −0.2), and soil falls somewhere between −0.2 and 0.2. However, the exact threshold used to identify the shoreline may differ depending on the type of wetland, shoreline, and image acquisition parameters. This procedure worked well for vegetated shorelines but was inadequate in areas where sandy or shell beaches were present seaward of the salt marsh, a feature of marsh adjacent to former barrier islands and with high wave energy. Sandy beaches in this region tend to be bright white from high quartz content [50]. To improve the shoreline classification for these beach shorelines, we selected a static threshold (5000) for band 8 (NIR2), which shows high reflectance for the white sand and shell beach. The "beach" classification was merged with the NDVI vegetation layer to create a final binary land-water raster.

Several tools within ArcGIS were used to clean raster boundaries and produce a vector shoreline; if not otherwise specified, the default parameters were used. The raster was generalized using the Expand and Shrink tools (using 1 cell) to remove any isolated and extraneous pixels, then filtered with Boundary Clean (Spatial Analyst toolbox) to smooth edges. The filtered raster was then converted into polygons (Raster to Polygon tool in the Conversion toolbox) and polylines (Polygon to Polyline in the Data Management toolbox). The polylines were smoothed using the Polynomial Approximation with Exponential Kernel (PAEK) algorithm and a 2-m smoothing filter to reduce the cell-structured appearance. Sometimes multiple shorelines were identified, including interior marsh ponds or streams, or shorelines located outside the study area (due to a larger image extent); these extraneous shoreline vectors were manually deleted to produce a clean estuary-marsh shoreline geospatial data set.
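A condensed sketch of this classification step on NumPy arrays; the band array names and helper function are illustrative, and the thresholds follow the values quoted above (site-specific thresholds may differ by image):

```python
import numpy as np

def land_water_mask(red, nir1, nir2, ndvi_thresh=0.2, beach_thresh=5000):
    """Binary land (1) / water (0) raster from WorldView bands.

    red = band 5 (visible red), nir1 = band 7, nir2 = band 8.
    ndvi_thresh follows the guidance in the text; the exact threshold
    used for a given wetland image may differ.
    """
    red = red.astype('float64')
    nir1 = nir1.astype('float64')
    ndvi = (nir1 - red) / (nir1 + red + 1e-9)   # Formula (1)
    vegetation = ndvi > ndvi_thresh             # densely vegetated marsh
    beach = nir2 > beach_thresh                 # bright quartz sand / shell
    return (vegetation | beach).astype('uint8')
```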
Data Analysis
Several analysis techniques were selected to evaluate the accuracy of the WVS relative to field measurements and to determine how well WVS replicated other methods for calculating short-term shoreline change rates. Most of these analyses were conducted in ArcGIS [46] and R [51], utilizing a package called Analyzing Moving Boundaries Using R (AMBUR) [52].
WorldView-Derived Shoreline Accuracy
Since field-based GPSS is the most accurate available shoreline position data, we compared the WVS data to GPSS to estimate error in the WVS methodology. Comparisons were made on WVS and GPSS data collected as close together as possible (Table 2). Most comparisons were made on data collected less than two months apart to reduce error associated with the time between data collections, such as changes in shoreline position and seasonal tidal cycles. The exception was the 2019 data sets, which were approximately five months apart but were the only available data for that year.
Two methods were used to estimate the WVS error based on GPSS measurements. The first was to calculate the distance between the GPSS points and the closest WVS vector (the WVS could be either landward or seaward of the GPSS point, thereby always providing a positive value) (Figure 2a). This was performed in ArcGIS using Near Analysis, which measures the distance to the closest feature between two data sets. Distances are calculated in meters based on the closest available node on the WVS vector at any angle from the GPS point.
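A minimal open-source analogue of this Near Analysis step is sketched below with shapely; the toy coordinates and function usage are ours, not the ArcGIS workflow.

```python
# Point-based accuracy check: always-positive distance from each GPS point to
# the nearest point on the WVS vector, regardless of landward/seaward side.
from shapely.geometry import LineString, Point

wvs = LineString([(0.0, 0.0), (10.0, 1.0), (20.0, 0.0)])  # toy WVS vector (m)
gpss_points = [Point(2.0, 0.5), Point(12.0, 1.5)]         # toy GPSS points

distances_m = [p.distance(wvs) for p in gpss_points]
print(distances_m)
```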
The second method was to connect the GPSS points to create vector shorelines in ArcGIS, then calculate the distance between the GPSS and WVS along cross-shore transects (Figure 2b). This calculation was performed using AMBUR [52]. AMBUR is a package within the statistical program R that calculates the distance and rate of change for shorelines using a cross-shore transect-based method. The program generates transects connecting offshore and onshore baselines parallel to the shoreline at a set interval distance (for our analyses, we chose 10-m increments between transects). We chose 10-m transect increments to coincide with the spatial resolution of the GPS data, which were collected approximately 5 to 10 m apart. The program generates points where the transects intersect the shorelines, and the distances between these points yield shoreline movement distances and rates of change for each transect. Several shoreline change statistics are calculated by AMBUR, but this analysis used the net distance of change (Δx). The Δx is the distance in meters (m) between the earliest and latest shoreline and provided an estimate of the difference between the GPSS and WVS pairs for each year (the distance between their positions along the transect). Shoreline change rates calculated using only GPSS data were compared with WVS- and AIS-only rates to determine whether WVS provided an analysis comparable to more commonly used remote sensing techniques. Shoreline change rates (also called the linear regression rate, LRR, or shoreline rate-of-change) were also calculated using AMBUR. The shoreline change rate (LRR) is the slope of the best-fit line of the linear regression of shoreline distance against calendar date, in meters per year (m yr−1). A negative value indicates erosion, while a positive value indicates accretion. Shoreline change statistics also include a 95% confidence interval (c.i.) used to estimate the confidence in the rate-of-change statistic. We used the ± c.i. to create shoreline change categories. If both LRR + c.i. and LRR − c.i. were negative, the shoreline was classified as eroding; if both were positive, the shoreline was classified as accreting; if one was positive and the other negative, the general trend could not be ascertained (uncertainty as to whether the shoreline was accreting or eroding) and the transect was therefore classified as stable or indeterminate.
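The sketch below shows how a single transect's LRR and its confidence-interval-based classification could be computed outside AMBUR; the sample dates and distances are invented, and scipy stands in for AMBUR's internal regression.

```python
# Linear regression rate (LRR) for one transect: slope of shoreline distance
# against decimal date, classified with the 95% confidence interval.
import numpy as np
from scipy import stats

dates = np.array([2013.4, 2014.6, 2016.3, 2018.1, 2020.0])  # decimal years
dist = np.array([0.0, -2.1, -5.9, -9.8, -15.2])             # m along transect

res = stats.linregress(dates, dist)
half_ci = stats.t.ppf(0.975, len(dates) - 2) * res.stderr   # 95% c.i. half-width
lo, hi = res.slope - half_ci, res.slope + half_ci           # m/yr bounds

if hi < 0:
    trend = "eroding"
elif lo > 0:
    trend = "accreting"
else:
    trend = "stable/indeterminate"
print(f"LRR = {res.slope:.2f} m/yr ({trend})")
```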
In addition, shoreline change rates from GPSS data (LRR_GPSS) were used to correct the Δx estimate for errors due to changes in shoreline position between the GPSS and WVS collection dates. To correct the data, we first used AMBUR to estimate LRR_GPSS using all available dated GPSS vector data from 2013 to 2020, which provided an average shoreline change rate. The fraction of a year between the GPSS and WVS dates of collection, multiplied by LRR_GPSS, provided an estimate of how much the shoreline would have moved between dates. This value was used to adjust the calculated Δx and account for the possible change in shoreline position between sample dates, using the following Formula (2):

Δxt = Δx − (LRR_GPSS × Δt), (2)

where Δxt is the time-corrected difference, Δt is the time difference between the GPSS and WVS surveys (in fraction of a year), and LRR_GPSS is the shoreline change rate calculated using GPSS field surveys in AMBUR.
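Formula (2) amounts to the one-line function below; the function name and the sign convention (subtracting the expected movement from the raw offset) are our reading of the text.

```python
def temporal_correction(dx: float, lrr_gpss: float, dt_years: float) -> float:
    """Formula (2): dx is the raw GPSS-WVS difference (m), lrr_gpss the
    GPSS-derived change rate (m/yr), dt_years the time between surveys (yr).
    Sign convention is an assumption, not stated explicitly in the text."""
    return dx - lrr_gpss * dt_years
```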
In addition, water level can impact marsh shoreline detection by either obscuring the shoreline (inundating the marsh when water level is high) or confusing detection by exposing nearshore vegetation, such as sparse marsh grasses or seagrass blades. We could find no simple method for correcting shoreline vector position for water level, so we developed the following technique, modified from methods developed for beach environments [53]. If the water level was above the marsh platform elevation (determined as the mean of the marsh platform elevation data; data collection described in Section 2.2.1), a simple correction was applied to adjust the horizontal difference between the two shoreline vectors using the following Equation (3):

Δxtw = Δxt − (h − Em) / ms, (3)

where Δxtw is the tidally corrected difference between GPSS and WVS, Δxt is the difference before water-level correction, h is the water level height at the time of image collection, Em is the elevation of the marsh platform, and ms is the marsh slope. To obtain an estimate of the marsh platform slope (ms), we plotted the field-collected GPS elevations against the distance from the shoreline and calculated the slope of the linear trend (Figure 3). There was a great deal of variability in marsh slope, as shown by the low R² value (0.26) and the site-based slope calculations that range from 0.01 to 0.12 (Table 1), with a mean of 0.06 ± 0.03. However, we chose to use 0.07 as a conservative estimate of marsh slope for the study region, rather than spatially resolved GPS elevations from each study site, to reduce the possibility of over-correcting the shoreline position, for two reasons: (1) marsh tidal flooding has many properties other than slope that influence inundation distance and impede water flow, which cannot be accounted for, such as surface roughness, sediment type, and vegetation; and (2) if the marsh vegetation canopy is above the water surface, our NDVI-based method, which detects vegetation, could classify it as "land" despite surface inundation. The two methods provided different ways of evaluating the accuracy of the WVS in comparison to the best available data (GPSS). The first method provided straightforward differences between field measurements and WVS estimates. The second allowed for temporal adjustments to account for shoreline change between sample dates and for examining the impact of water level on WVS estimates. By using two methods (point-based and transect-based) and applying time difference and water level corrections (Δxtw), we provide a robust evaluation of the WVS methodology in comparison to GPSS data. Data were summarized by calculating the mean ± 95% c.i. by study site and date for each point or transect (depending on the method used).
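Equation (3) can be sketched as follows; the guard clause for non-inundated conditions and the function name are our additions.

```python
def tidal_correction(dx_t: float, h: float, marsh_elev: float,
                     marsh_slope: float = 0.07) -> float:
    """Equation (3): dx_t is the difference before water-level correction (m),
    h the water level at image time (m NAVD88), marsh_elev the mean marsh
    platform elevation (m NAVD88), marsh_slope the regional slope estimate."""
    if h <= marsh_elev:              # marsh not inundated: no correction
        return dx_t
    return dx_t - (h - marsh_elev) / marsh_slope
```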
Shoreline Change Comparisons
SCA for WVS was applied to the full study area of GNDNERR using AMBUR. The linear rate of change (LRR) statistic was selected to provide the shoreline rate of change. Cross-shore transects located within the eleven shoreline erosion study sites were classified with the study site name. Additionally, SCA from WVS data were compared to rates from the two other methodologies (AIS and GPSS data) to determine whether WVS provided a cost-effective and repeatable methodology for calculating shoreline rates of change. It is important to note that full WVS were created for each WV image date for the entire GNDNERR study area, whereas the GPSS and AIS were available only for the eleven study sites. This is due to availability, as well as the time- and cost-intensive nature of on-the-ground surveys and manually digitized shorelines. For these analyses, three sets of shoreline change rates were calculated using AMBUR (described in Section 2.3.1, WorldView-Derived Shoreline Accuracy) using exclusively WVS (LRR_WVS), GPSS (LRR_GPSS), and AIS (LRR_AIS) vector data dated from 2013 to 2020 (2014 to 2020 for AIS data). Shoreline change values from the WVS and AIS data sets were compared to GPSS for each transect within the study sites using absolute differences and Bland-Altman plots [54-56]. Bland-Altman plots are a data plotting method used to analyze the agreement between two data sets. By comparing LRR_WVS and LRR_AIS to LRR_GPSS, we evaluate whether shorelines derived from the semi-automated method can yield results similar to field data (presumably the most accurate method) and to the traditional method of manual digitization of shorelines from aerial imagery.
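A generic Bland-Altman plot of two rate sets can be produced as below; the matplotlib usage and the 1.96-sigma limits of agreement are standard practice rather than details taken from the study.

```python
# Bland-Altman agreement plot for per-transect rates, e.g. LRR_WVS vs LRR_GPSS.
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(a, b, label_a="WVS", label_b="GPSS"):
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean, diff = (a + b) / 2.0, a - b
    md, sd = diff.mean(), diff.std(ddof=1)
    plt.scatter(mean, diff, s=10)
    plt.axhline(md, color="b")                    # mean difference
    for k in (-1.96, 1.96):                       # 95% limits of agreement
        plt.axhline(md + k * sd, color="b", ls="--")
    plt.xlabel(f"Mean of {label_a} and {label_b} rates (m/yr)")
    plt.ylabel(f"{label_a} - {label_b} (m/yr)")
    plt.show()
```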
Results
The WV-derived shoreline procedure described in Section 2.2.3 was applied to ten WV images. We visually compared shoreline vectors to the temporally coincident WV imagery, displayed as both natural color and color infrared, and found that an NDVI value of 0.21 consistently provided an adequate representation of the shoreline (vegetation-water boundary). The accuracy of the automated technique for mapping salt marsh shoreline position using WV satellite data was quantified by comparing WV-derived shorelines to field-collected GPS shoreline data using both a point- and a transect-based technique. Next, we examined shoreline change rates from all three methods to evaluate whether satellite-based shorelines could be used for future short- and long-term monitoring of wetland shoreline change.
WVS and GPSS Comparisons
WVS accuracy was estimated by calculating the mean distance between GPSS points collected during a similar time period as the WV image for the eleven sites throughout the study area. The mean difference between GPSS points and WVS position was 2.03 ± 0.08 m but ranged from 0 to 20 m. The mean difference showed large variability between study sites, ranging from 0.70 ± 0.06 to 4.71 ± 0.52 m (Table 3). The sites most accurate in comparison to GPS measurements were SPAL and MBS, both with less than 1 m error between WV and GPS shore positions. The shorelines of both sites have a visible scarp that is vegetated with dense marsh grasses (20-60% estimated percent cover of Spartina alterniflora), and nearshore sediments are fine-grained mud. Sites with a difference of greater than 2 m between GPSS and WVS were PACS, BSI, and GBE, all sites with a sandy nearshore sediment type. To calculate the mean for the transect-based method, we first took the absolute value of each difference in order to accurately account for both negative and positive values (seaward or landward offsets).
Table 3. Mean difference between Global Positioning System shorelines (GPSS) and WorldView-derived shorelines (WVS) (Δx) by study site using both a point- and a transect-based method. Transect data were corrected for differences between collection dates (Δxt) and water level (Δxtw). N is the number of points or transects for each study site.

To account for temporal inconsistencies associated with the date of imagery capture, the mean distance between GPS point data and WVS was also calculated for each date-paired data set. We evaluated ten image dates, one for each year from 2013 to 2020, with three different data sets in 2017. The mean differences between WVS and GPSS measurements were less variable between years, ranging from 1 to 3 m (Table 4). Water level varied between WV image dates from −0.34 to 0.54 m North American Vertical Datum of 1988 (NAVD88), and the image dates represented all four seasons. The net difference between GPSS and WVS ranged from 1.4 ± 0.18 to 3.16 ± 0.34 m. The 2014 data had the lowest error, with only 4 days between the image date and GPS data collection and water levels below the average marsh platform elevation. The highest error between the WVS and GPSS occurred with the August 2017 image, where the WV and GPS collection dates were only five days apart but the image was collected during a higher water level (0.54 m NAVD88). Overall, differences between GPSS and WVS were small, even when uncorrected for water level variations and temporal differences between image and field data collection dates, and well within the image geolocation accuracy (<3.5 m).
Differences between the GPSS and WVS were higher using the transect-based method than the point-based method, even after accounting for the time between survey dates and water level, which reduced the difference by 4 to 26%, with the greatest reductions occurring at sites BSI and PACS. For each date-paired data set, the distance between GPSS and WVS ranged from 2.46 ± 0.67 to 3.85 ± 0.76 m without water level or temporal corrections; with corrections, the difference between the two data sets decreased, with the highest value being 3.58 ± 0.75 m for the 2014 data set. Temporal and water level corrections accounted for approximately 7 to 13% of the error in these values.
Shoreline Change Analyses
A total of 2422 cross-shore transects were created at an approximate 10-m spacing along the GNDNERR estuarine shoreline. All transects intersect between 4 and 10 WVS, with a temporal coverage of 2.5 to 7 years between 2013 and 2020. Using LRR_WVS and its c.i. to classify the shoreline change category, approximately 73.1% of the measured rates indicated shoreline erosion, 25.7% were stable or indeterminate (the confidence interval indicates the trend could be either erosional or depositional), and 1.2% of shorelines showed accretion. The mean shoreline erosion rate was −2.46 ± 0.10 m yr−1 (N = 1770 transects) and the mean accretion rate was 2.12 ± 0.48 m yr−1 (N = 30 transects).
Availability of AIS and GPSS limited comparative analyses to transects with all three data sets. A total of 358 transects also contained four or more dated shorelines from each data source. The correlation between LRR_WVS and LRR_GPSS was statistically significant (R² = 0.89, p-value < 0.001). The correlation plot shows an increase in point spread for the highly erosive measurements, falling below the trend line, suggesting a slight overestimation of LRR_WVS in locations with high erosion rates (Figure 4). The correlation between LRR_AIS and LRR_GPSS was also significant (R² = 0.93, p-value < 0.001). The scatter plot indicates a few values where AIS provided an overestimation of shoreline erosion at weakly erosive locations.
Mean shoreline change calculated for each site using GPSS, WVS, and AIS data is depicted in Figure 5a. Both LRR_WVS and LRR_AIS were similar to LRR_GPSS, with the exception of the PACS and PACN sites, where LRR_WVS indicated higher erosion rates than LRR_GPSS. The differences between LRR_GPSS and, respectively, LRR_WVS and LRR_AIS provide an indication of the ability of each shoreline data source to accurately estimate shoreline change calculated from GPSS data (Figure 5b). The mean difference between LRR_WVS and LRR_GPSS was 0.64 ± 0.09 m yr−1, and between LRR_AIS and LRR_GPSS it was 0.44 ± 0.05 m yr−1. The difference between LRR_WVS and LRR_GPSS was lower than that for AIS at sites MBW, BSI, and PACN, whereas the difference between LRR_AIS and LRR_GPSS was lower at PACM and PACS.
Discussion
The overall difference of the WV-derived shorelines from field-based GPS measurements was low, at 2 m, and is lower than the geolocation accuracy of the pansharpened WV imagery (approximately 3.5 m). These results support the conclusion that high-resolution satellites provide a valuable data source for monitoring shoreline change in coastal wetland environments. Shorelines with high discrepancies in comparison to field measurements were highly dynamic shorelines exposed to wind-waves from the Gulf of Mexico [57] and with high long-term erosion rates [42]. Site characteristics included a gradual slope or indistinct scarp, low (<30%) or no vegetation cover (exposed shoreline), and the presence of shells or sand along the shoreline and in the nearshore (displayed on imagery as white sandy beach). The sand and shells indicate that the shoreline would not have been identified by the threshold NDVI technique, but rather by the beach threshold analysis step, because the goal of this study was to focus on vegetated shorelines. Since vegetated estuarine shorelines have been largely overlooked in the literature, a basic approach for sandy shorelines was adopted to include them in this study. The mixed shoreline type (vegetation and beach) is not unique to Grand Bay and can be found frequently along other estuarine and marsh-dominated coastlines; therefore, a methodology that adopts a mixed analysis approach to address multiple shoreline types would be more appropriate for regional or national estuarine shoreline mapping programs. The simple method used here could be improved in light of other research that uses high-resolution imagery (WV and other satellite data) to delineate beach shorelines [22,31,53], which may provide a way to improve delineations of wetland shorelines that are bordered by sandy beach. In addition, we found that an NDVI value of 0.21 consistently provided an adequate representation of the shoreline (vegetation-water boundary) for the coastal marsh habitat of southern Mississippi. This value may not provide an adequate boundary when applied to other wetland habitat types, such as mangroves or salt marshes where Juncus or Spartina are not the dominant species, and should be investigated further.
The transect-based method resulted in higher differences between the field-based data and satellite shorelines than the point-based comparisons. This could be due to several, possibly compounding, reasons. First, the transect-based method requires the GPSS points to be converted to a line; therefore, the shoreline between each GPS point is not a "true" shoreline. As a result, the transects measure the difference between an approximation of the shoreline from GPS data (based on adjacent measurements) and the satellite-derived shoreline. This approximation could introduce error depending on the distance between points and shoreline sinuosity. Second, the transect-based analysis method is well documented and used by many researchers for shoreline change analyses, but was developed for sandy, ocean-facing shorelines [14,17,30,58-63]; estuarine and wetland shorelines are generally more sinuous and spatially complex than ocean-facing beach shorelines. Because the shorelines frequently curve and bend, transects that extend from a baseline at regular intervals toward the shore may intersect the shoreline at an obtuse or acute angle rather than directly perpendicular to the shoreline (an example can be seen in Figure 2 at a small-scale spit feature). When transects intersect shorelines at angles greater or less than 90°, the distances between shoreline vectors are impacted and therefore the rates change. Techniques to reduce this effect include increasing transect frequency or creating curvilinear baselines that closely match the bends of the shoreline; however, both options would require a greater time investment for data analyses. Other methodologies for evaluating shoreline changes over time, such as point-based techniques that evaluate distances between shoreline points [59,64], "fuzzy boundary" techniques [65], Bayesian methods [15], or machine learning [66,67], may be appropriate for the gradual, indistinct boundaries common to wetland and estuarine shorelines. Calkoen et al. [68] evaluated machine learning techniques against ordinary least squares regression techniques (the transect-based approach explored here) for predicting future shoreline change, but research that evaluates different shoreline extraction methods and their impact on the statistical calculation of shoreline change rates for estuarine shorelines is a topic that warrants greater attention.
When corrected for shoreline change rate and water level, the difference between GPSS and WVS decreased, indicating that survey date and water level have an impact on the position of satellite-derived shorelines. Therefore, field verification data should be collected as close as possible to the date of satellite image collection, with the maximum interval dependent on shoreline change rates. The maximum interval between field and image collection dates would be less of a concern for slowly changing coastlines, whereas it would have a greater impact on rapidly changing (eroding or accreting) shorelines. This process could explain the discrepancies between WVS and GPSS, particularly at locations with high erosion rates. Shoreline position was not adjusted for water level prior to the rate-of-change analyses, and therefore the higher rates of change may be a product of changing water level conditions. Other studies applied water level corrections to satellite-derived beach shorelines [31,53], but we could not find examples where similar corrections were applied to vegetated shorelines. Other variables, such as vegetation and soil characteristics, could impact both sensor detection of shoreline position and the distance of inundation; we therefore used a conservative estimate of marsh slope to avoid over-correction and present our analysis to demonstrate the need to consider these impacts on results. There was a great deal of variability in marsh slope, as shown by the low R² value and the site-based slope calculations that range from 0.01 to 0.12, highlighting the potential importance of this variable in evaluating water level impacts on the detection of shoreline position. To our knowledge, this is the first attempt to correct shoreline position for marsh inundation from high water level at the time of image collection, and the method requires refinement. Methods could be improved with greater attention to the characteristics of wetland inundation and the interpretation of vegetated shoreline position by optical sensors. Additionally, methods that have been shown to improve wetland classifications from Landsat might be adopted to improve the overall delineation of wetland shoreline features [69]. Another option is to select imagery at a consistent water level or tidal datum to reduce variability caused by water level. Given the high revisit time of many high-resolution satellites (approximately once per day for WV2 and WV3), the availability of cloud-free imagery at appropriate water levels could be substantial. Considering the timing and environmental conditions during image collection when using high-resolution satellite imagery to delineate vegetated shorelines is important, since tidal flooding and vegetation can impact shoreline position.
Conclusions
One of the greatest challenges to environmental monitoring is access to timely and consistent data that can be efficiently analyzed to support short- and long-term management decision-making, restoration planning, and resiliency studies. Coastal wetlands have not received as much attention as ocean-facing sandy beaches for broad-scale shoreline change assessments, but wetlands are critically important resources that protect coastal communities from storms, provide habitat and refugia for economically important fish and shellfish species, act as water purifiers for floodwaters, and store carbon within organic-rich sediments. In many areas of the United States and the world, modern high-resolution wetland shorelines are non-existent, or data are out-of-date due to data limitations or the labor-intensive process of mapping these areas. The availability of high-resolution satellite imagery and new developments in rapid image analysis techniques can help fill the data gap and provide critical information for coastal wetland monitoring programs. Primary conclusions from this research include:
- A simple procedure to auto-delineate wetland shorelines from WorldView imagery, compared with field-survey data, achieved an accuracy of approximately 2 m, ranging from 0 to 20 m. Shorelines with a gradual nearshore slope and sparse shoreline vegetation (bare mud or beach) may reduce boundary distinction and introduce positional error.
- Shoreline change analyses calculated exclusively from wetland shorelines extracted from WorldView imagery were strongly correlated with shoreline change calculations from field-based data (R² = 0.89, p-value < 0.001), indicating that these satellite-derived shorelines can provide an adequate assessment of short-term shoreline change by extending the applicability of field-based surveys to much larger areas.
- The timing of image collection and water level are important considerations when selecting imagery. Further characterization of the impact of these factors on wetland shoreline position could improve future analyses and methodology.
- Improved auto-delineation of the mixed shoreline types (wetland, sandy beaches, rocky cliffs, etc.) that are common in estuaries is possible, and evaluating transect-based shoreline change analyses against other methodologies, such as fuzzy boundaries or pixel-based analyses, may have greater success for the gradual or indistinct boundaries commonly found in coastal wetlands and estuaries.
Shorelines derived from high-resolution (meter-scale spatial resolution) satellite data with superior spatiotemporal coverage can provide a valuable data source for managers for frequent (e.g., annual) and consistent broad-scale monitoring of coastal wetlands, or after extreme erosion events.
Figure 1. Map of the Grand Bay National Estuarine Research Reserve (red boundary line) with white stars depicting the location of shoreline erosion study sites (a). The inset map (b) shows the location of the study region on the border of Mississippi and Alabama, USA, in the northern Gulf of Mexico. Site names refer to: BHM = Bayou Heron Mouth; MBN, MBW, MBS = Middle Bay North, West, and South, respectively; GBE = Grand Batture East; BSI = Bird Island; SPAL = North Jose Bay; MET = Met Station Island; PACN, PACM, PACS = Point aux Chenes North, Middle, and South, respectively. Data sources: Shoreline from © OpenStreetMap contributors (https://www.openstreetmap.org/, accessed on 30 October 2018). Image basemap from © Maxar Technologies, 2020 (https://www.maxar.com/, accessed on 28 July 2021). All rights reserved.
Figure 2. Differences between shorelines from WorldView satellite imagery and field-based surveys from Real-time Kinematic Global Positioning System (GPS) are estimated in two ways: (a) the distance between the GPS point and the nearest WV shoreline vector, and (b) the distance between the GPS-approximated shoreline (created by connecting the points) and the WVS using intersections along transects.
Figure 3. Scatter plot of marsh platform elevations against distance from the shoreline (the shoreline is located at 0) with the least-squares linear regression line (blue solid line), indicating that the marsh platform slope is approximately 0.07. Red dashed lines depict the 95% prediction interval and the grey-shaded region shows the 95% confidence interval. NAVD88 = North American Vertical Datum of 1988. Site names refer to: BHM = Bayou Heron Mouth; MBN, MBW, MBS = Middle Bay North, West, and South, respectively; GBE = Grand Batture East; BSI = Bird Island; SPAL = North Jose Bay; MET = Met Station Island; PACN, PACM, PACS = Point aux Chenes North, Middle, and South, respectively.
Figure 4. Bland-Altman plots showing the rate of shoreline change from Global Positioning System shorelines (GPSS) only versus shoreline change rates calculated from (a) WorldView-derived shorelines (WVS) exclusively and (b) aerial imagery-derived shorelines (AIS) exclusively. The solid blue line depicts the mean and the blue dashed lines show the 95% confidence interval.
Figure 5. Mean shoreline change rate (a) and mean absolute difference between the shoreline change rate calculated using Global Positioning System (GPS) data and shoreline change rates from two remote sensing data sets: WorldView (WV) satellite imagery and aerial imagery (AI) (b).
Table 1. Site location, vegetation, elevation, and sedimentary characteristics for the eleven Grand Bay National Estuarine Research Reserve shoreline erosion study sites.
Table 2. The dates of collection for WorldView satellite imagery, Real-time Kinematic Global Positioning System (GPS) field surveys, and aerial imagery shoreline (AIS) data. The remote sensing data were paired by year with the closest available GPS field survey date.
Table 4. Mean difference between Global Positioning System shorelines (GPSS) and WorldView-derived shorelines (WVS) (Δx) by year using both a point- and a transect-based method. Transect data were corrected for differences between collection dates (Δxt) and water level (Δxtw). N is the number of points or transects for each satellite image date.
"year": 2021,
"sha1": "e27e8c4a16b36af366a6e0006c53e8473a51f3ef",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/13/15/3030/pdf?version=1628057812",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d5bb195cdc0ea76118743468c999eb9d83603d24",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?
Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences. In this paper, we introduce a method for evaluating whether neural models can learn systematicity of monotonicity inference in natural language, namely, the regularity for performing arbitrary inferences with generalization on composition. We consider four aspects of monotonicity inferences and test whether the models can systematically interpret lexical and logical phenomena on different training/test splits. A series of experiments show that three neural models systematically draw inferences on unseen combinations of lexical and logical phenomena when the syntactic structures of the sentences are similar between the training and test sets. However, the performance of the models significantly decreases when the structures are slightly changed in the test set while retaining all vocabularies and constituents already appearing in the training set. This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.
Introduction
Natural language inference (NLI), a task whereby a system judges whether a given set of premises P semantically entails a hypothesis H (Dagan et al., 2013; Bowman et al., 2015), is a fundamental task for natural language understanding. As with other NLP tasks, recent studies have shown a remarkable impact of deep neural networks in NLI (Williams et al., 2018; Wang et al., 2019; Devlin et al., 2019). However, it remains unclear to what extent DNN-based models are capable of learning the compositional generalization underlying NLI from given labeled training instances. Systematicity of inference (or inferential systematicity) (Fodor and Pylyshyn, 1988; Aydede, 1997) in natural language has been intensively studied in the field of formal semantics. Among the various aspects of inferential systematicity, in the context of NLI we focus on monotonicity (van Benthem, 1983; Icard and Moss, 2014) and its productivity. Consider the following premise-hypothesis pairs (1)-(3), which have the target label entailment:
(1) P: Some [puppies ↑] ran.
H: Some dogs ran.
(2) P: No [cats ↓] ran.
H: No small cats ran.

As in (1), for example, quantifiers such as some exhibit upward monotone (shown as [... ↑]), and replacing a phrase in an upward-entailing context in a sentence with a more general phrase (replacing puppies in P with dogs as in H) yields a sentence inferable from the original sentence. In contrast, as in (2), quantifiers such as no exhibit downward monotone (shown as [... ↓]), and replacing a phrase in a downward-entailing context with a more specific phrase (replacing cats in P with small cats as in H) yields a sentence inferable from the original sentence. Such primitive inference patterns combine recursively, as in (3). This manner of monotonicity and its productivity produces a potentially infinite number of inferential patterns. Therefore, NLI models must be capable of systematically interpreting such primitive patterns and reasoning over unseen combinations of patterns. Although many studies have addressed this issue by modeling logical reasoning in formal semantics (Abzianidze, 2015; Mineshima et al., 2015; Hu et al., 2019) and testing DNN-based models on monotonicity inference (Yanaka et al., 2019a,b, 2020), the ability of DNN-based models to generalize to unseen combinations of patterns is still underexplored.
Given this background, we investigate the systematic generalization ability of DNN-based models on four aspects of monotonicity: (i) systematicity of predicate replacements (i.e., replacements with a more general or specific phrase), (ii) systematicity of embedding quantifiers, (iii) productivity, and (iv) localism (see Section 2.2). To this end, we introduce a new evaluation protocol where we (i) synthesize training instances from sampled sentences and (ii) systematically control which patterns are shown to the models in the training phase and which are left unseen. The rationale behind this protocol is twofold. First, patterns of monotonicity inference are highly systematic, so we can create training data with arbitrary combinations of patterns, as in examples (1)-(3). Second, evaluating the performance of models trained with well-known NLI datasets such as MultiNLI (Williams et al., 2018) might severely underestimate the ability of the models, because such datasets tend to contain only a limited number of training instances that exhibit the inferential patterns of interest. Furthermore, using such datasets would prevent us from identifying which combinations of patterns the models can infer from which patterns in the training data.
This paper makes two primary contributions. First, we introduce an evaluation protocol using systematic control of the training/test split under various combinations of semantic properties to evaluate whether models learn inferential systematicity in natural language. (The evaluation code will be publicly available at https://github.com/verypluming/systematicity.) Second, we apply our evaluation protocol to three NLI models and present evidence suggesting that, while all models generalize to unseen combinations of lexical and logical phenomena, their generalization ability is limited to cases where sentence structures are nearly the same as those in the training set.
2 Method

2.1 Basic idea

Figure 1 illustrates the basic idea of our evaluation protocol on monotonicity inference. We use synthesized monotonicity inference datasets, where NLI models should capture both (i) the monotonicity directions (upward/downward) of various quantifiers and (ii) the types of various predicate replacements in their arguments. To build such datasets, we first generate a set of premises G_d^Q by a context-free grammar G with depth d (i.e., the maximum number of applications of recursive rules), given a set of quantifiers Q. Then, by applying the elements of G_d^Q to a set of functions for predicate replacements (or replacement functions for short) R, each of which rephrases a constituent in the input premise and returns a hypothesis, we obtain a set D_d^{Q,R} of premise-hypothesis pairs, defined as

D_d^{Q,R} = {(P, r(P)) | P ∈ G_d^Q, r ∈ R}.

For example, the premise Some puppies ran is generated from the quantifier some in Q and the production rule S → Q, N, IV, and is thus an element of G_1^Q. Applying this premise to a replacement function that replaces a word in the premise with its hypernym (e.g., puppy ⊑ dog) yields the premise-hypothesis pair Some puppies ran ⇒ Some dogs ran in Figure 1.
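A toy rendering of this generation step is sketched below; the two-noun grammar, the lexicon, and the single hypernym replacement are a drastically reduced, hypothetical stand-in for the full CFG and replacement functions described in the Data creation section.

```python
# Toy sketch of D = {(P, r(P)) | P in G_1^Q, r in R}: premises from a tiny
# grammar, hypotheses via replacement functions.
import itertools

QUANTIFIERS = ["some", "a few", "no", "few"]
NOUNS, IVS = ["dogs", "rabbits"], ["ran", "walked"]

def premises_depth1():
    return [f"{q} {n} {v}" for q, n, v in itertools.product(QUANTIFIERS, NOUNS, IVS)]

def r_hypernym(p):                 # lexical replacement dogs ⊑ animals
    return p.replace("dogs", "animals") if "dogs" in p else None

R = [r_hypernym]
D = [(p, r(p)) for p in premises_depth1() for r in R if r(p) is not None]
print(D[:2])
```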
We can control which patterns are shown to the models during training and which are left unseen by systematically splitting D_d^{Q,R} into training and test sets. As shown on the left side of Figure 1, we consider how to test the systematic capacity of models on unseen combinations of quantifiers and predicate replacements. To expose models to primitive patterns regarding Q and R, we fix an arbitrary element q from Q and feed various predicate replacements into the models from the training set of inferences D_d^{{q},R}, generated from combinations of the fixed quantifier and all predicate replacements. Also, we select an arbitrary element r from R and feed various quantifiers into the models from the training set of inferences D_d^{Q,{r}}, generated from combinations of all quantifiers and the fixed predicate replacement.
We then test the models on the set of inferences generated from unseen combinations of quantifiers and predicate replacements; that is, we test them on the set of inferences D_d^{Q\{q}, R\{r}}. Similarly, as shown on the right side of Figure 1, we can test the productive capacity of models on unseen depths by changing the training/test split based on d. For example, by training models on D_d^{Q,R} and testing them on D_{d+1}^{Q,R}, we can evaluate whether models generalize to one deeper depth. By testing models with an arbitrary training/test split of D_d^{Q,R} based on the semantic properties of monotonicity inference (i.e., quantifiers, predicate replacements, and depths), we can evaluate whether models systematically interpret them.
2.2 Evaluation protocol
To test NLI models from multiple perspectives of inferential systematicity in monotonicity inferences, we focus on four aspects: (i) systematicity of predicate replacements, (ii) systematicity of embedding quantifiers, (iii) productivity, and (iv) localism. For each aspect, we use a set D_d^{Q,R} of premise-hypothesis pairs. Let Q = Q↑ ∪ Q↓ be the union of a set of selected upward quantifiers Q↑ and a set of selected downward quantifiers Q↓ such that |Q↑| = |Q↓| = n. Let R be a set of replacement functions {r_1, ..., r_m}, and let d be the embedding depth, with 1 ≤ d ≤ s.
Example (4) below is an element of D_1^{Q,R}, containing the quantifier some in the subject position and a predicate replacement using the hypernym relation dogs ⊑ animals in its upward-entailing context, without embedding.
(4) P: Some dogs ran ⇒ H: Some animals ran

I. Systematicity of predicate replacements
The following describes how we test the extent to which models generalize to unseen combinations of quantifiers and predicate replacements. Here, we expose models to all primitive patterns of predicate replacements like (4) and (5) and all primitive patterns of quantifiers like (6) and (7). We then test whether the models can systematically capture the difference between upward quantifiers (e.g., several) and downward quantifiers (e.g., no), as well as the different types of predicate replacements (e.g., the lexical relation dogs ⊑ animals and the adjective deletion small dogs ⊑ dogs), and correctly interpret unseen combinations of quantifiers and predicate replacements like (8) and (9). Here, we consider a set of inferences D_1^{Q,R} whose depth is 1. We move from harder to easier tasks by gradually changing the training/test split according to combinations of quantifiers and predicate replacements. First, we expose models to primitive patterns of Q and R with the minimum training set. Thus, we define the initial training set S_1 and test set T_1 as follows:

S_1 = D_1^{{q},R} ∪ D_1^{Q,{r}},  T_1 = D_1^{Q\{q}, R\{r}},

where q is arbitrarily selected from Q, and r is arbitrarily selected from R. Next, we gradually add to the training set the inferences generated from combinations of an upward-downward quantifier pair and all predicate replacements. In the examples above, we add (8) and (9) to the training set to simplify the task. We assume a set Q′ of upward/downward quantifier pairs (q↑, q↓) with q↑ ∈ Q↑ and q↓ ∈ Q↓, and we consider the set perm(Q′) of permutations of Q′. For each p ∈ perm(Q′), we gradually add the set of inferences generated from p(i) to the training set S_i, with 1 < i ≤ n − 1. Then, we provide a test set T_i generated from the complement Q̄_i of the quantifiers already added to the training set; that is,

S_i = S_{i−1} ∪ D_1^{p(i),R},  T_i = D_1^{Q̄_i, R\{r}}  (1 < i ≤ n − 1).

To evaluate the extent to which the generalization ability of models is robust to different syntactic structures, we use an additional test set generated using three production rules. The first is the case where one adverb is added at the beginning of the sentence, as in example (10).
(10) P_adv: Slowly, several small dogs ran
H_adv: Slowly, several dogs ran

The second is the case where a three-word prepositional phrase is added at the beginning of the sentence, as in example (11).
(11) P_prep: Near the shore, several small dogs ran
H_prep: Near the shore, several dogs ran

The third is the case where the replacement is performed in the object position, as in example (12).
(12) P_obj: Some tiger touched several small dogs
H_obj: Some tiger touched several dogs

We train and test models |perm(Q′)| times, then take the average accuracy as the final evaluation result.
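The initial split above can be expressed directly over (quantifier, replacement) keys, as in the hypothetical helper below; actual premise-hypothesis pairs would then be generated per key.

```python
# S_1 pairs one fixed quantifier q with all replacements plus one fixed
# replacement r with all quantifiers; T_1 holds the remaining unseen
# (quantifier, replacement) combinations. Function name is ours.
import itertools

def split_s1_t1(Q, R, q, r):
    s1 = [(qi, ri) for qi, ri in itertools.product(Q, R) if qi == q or ri == r]
    t1 = [(qi, ri) for qi, ri in itertools.product(Q, R) if qi != q and ri != r]
    return s1, t1

s1, t1 = split_s1_t1(["some", "no"], ["hypernym", "adj_deletion"], "some", "hypernym")
```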
II. Systematicity of embedding quantifiers
To properly interpret embedding monotonicity, models should detect both (i) the monotonicity direction of each quantifier and (ii) the type of predicate replacement in the embedded argument. The following describes how we test whether models generalize to unseen combinations of embedding quantifiers. We expose models to all primitive combination patterns of quantifiers and predicate replacements like (4)-(9), with a set of non-embedding monotonicity inferences D_1^{Q,R}, and some embedding patterns like (13), where Q_1 and Q_2 are chosen from a selected set of upward or downward quantifiers such as some or no. We then test the models with an inference with an unseen quantifier, several, as in (14), to evaluate whether models can systematically interpret embedding quantifiers.
(13) P: Q_1 animals that chased Q_2 dogs ran
H: Q_1 animals that chased Q_2 animals ran

(14) P: Several animals that chased several dogs ran
H: Several animals that chased several animals ran

We move from harder to easier tasks of learning embedding quantifiers by gradually changing the training/test split of a set of inferences D_2^{Q,R} whose depth is 2, i.e., inferences involving one embedded clause.
We assume a set Q′ of upward/downward quantifier pairs as before. We train and test models |perm(Q′)| times, then take the average accuracy as the final evaluation result.
III. Productivity
Productivity (or recursiveness)
is a concept related to systematicity, referring to the capacity to grasp an indefinite number of natural language sentences or thoughts with generalization on composition. The following describes how we test whether models generalize to unseen, deeper depths in embedding monotonicity (see also the right side of Figure 1). For example, we expose models to all primitive non-embedding/single-embedding patterns like (15) and (16).

IV. Localism
According to the principle of compositionality, the meaning of a complex expression derives from the meanings of its constituents and how they are combined. One important concern is how local the composition operations should be (Pagin and Westerståhl, 2010). We therefore test whether models trained with inferences involving embedded monotonicity locally perform inferences composed of smaller constituents. Specifically, we train models with examples like (17) and then test the models with examples like (15) and (16). We train models with D_d and test the models on ∪_{k∈{1,...,d}} D_k with 3 ≤ d ≤ s.
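Both the productivity and localism splits reduce to partitioning the synthesized data by depth, as in this sketch (`dataset` maps depth to example lists; the helper function is hypothetical):

```python
def depth_split(dataset, train_depths, test_depths):
    """Productivity: e.g., train_depths={1, 2}, test_depths={3, 4, 5}.
    Localism: train_depths={d}, test_depths={1, ..., d}."""
    train = [ex for d in train_depths for ex in dataset[d]]
    test = [ex for d in test_depths for ex in dataset[d]]
    return train, test
```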
Data creation
To prepare the datasets shown in Table 1, we first generate premise sentences involving quantifiers from a set of context-free grammar (CFG) rules and lexical entries, shown in Table 6 in the Appendix. We select 10 words from among nouns, intransitive verbs, and transitive verbs as lexical entries. The set of quantifiers Q consists of eight elements; we use a set of four downward quantifiers Q↓ = {no, at most three, less than three, few} and a set of four upward quantifiers Q↑ = {some, at least three, more than three, a few}, which have the same monotonicity directions in their first and second arguments. We thus consider n = |Q↑| = |Q↓| = 4 in the protocol in Section 2.2. The ratio of each monotonicity direction (upward/downward) in the generated sentences is set to 1:1. We then generate hypothesis sentences by applying replacement functions to premise sentences according to the polarities of constituents. The set of replacement functions R is composed of the seven types of lexical replacements and phrasal additions in Table 2. We remove unnatural premise-hypothesis pairs in which the same words or phrases appear more than once. For embedding monotonicity, we consider inferences involving four types of replacement functions in the first argument of the quantifier in Table 2: hyponyms, adjectives, prepositions, and relative clauses. We generate sentences up to depth d = 5. There are various types of embedding monotonicity, including relative clauses, conditionals, and negated clauses. In this paper, we consider three types of embedded clauses: peripheral-embedding clauses and two kinds of center-embedding clauses, shown in Table 6 in the Appendix.
The number of generated sentences increases exponentially with the depth of embedded clauses. Thus, we limit the number of inference examples to 320,000, split into 300,000 examples for the training set and 20,000 examples for the test set. We guarantee that all combinations of quantifiers are included in the set of inference examples for each depth. Gold labels for generated premise-hypothesis pairs are automatically determined according to the polarity of the argument position (upward/downward) and the type of predicate replacement (with a more general/specific phrase). The ratio of gold labels (entailment/non-entailment) in the training and test sets is set to 1:1.
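The automatic gold-label rule is the polarity calculation sketched below: an upward-entailing position replaced with a more general phrase (or a downward-entailing position with a more specific one) yields entailment; the function name and string labels are ours.

```python
def gold_label(polarity: str, replacement: str) -> str:
    assert polarity in {"upward", "downward"}
    assert replacement in {"more_general", "more_specific"}
    entails = (polarity == "upward") == (replacement == "more_general")
    return "entailment" if entails else "non-entailment"

print(gold_label("downward", "more_specific"))  # entailment (cf. example (2))
```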
To double-check the gold labels, we translate each premise-hypothesis pair into a logical formula (see the Appendix for more details). The logical formulas are obtained by combining lambda terms in accordance with meaning composition rules specified in the CFG rules in the standard way (Blackburn and Bos, 2005). We prove the entailment relation using the theorem prover Vampire, checking whether a proof is found in time for each entailment pair. For all pairs, the output of the prover matched the entailment relation automatically determined by the monotonicity calculus.
Models
We consider three DNN-based NLI models. The first architecture employs long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997). We set the number of layers to three with no attention. Each premise and hypothesis is processed as a sequence of words using a recurrent neural network with LSTM cells, and the final hidden state of each serves as its representation.
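A minimal PyTorch rendering of this baseline is sketched below; the 3-layer, 200-dimensional LSTM and 300-dimensional embeddings follow the text, while the concatenation of the two sentence vectors and the linear classifier head are our assumptions.

```python
import torch
import torch.nn as nn

class LSTMEncoderNLI(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid=200, n_labels=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # init with GloVe in practice
        self.lstm = nn.LSTM(emb_dim, hid, num_layers=3, batch_first=True)
        self.out = nn.Linear(2 * hid, n_labels)

    def encode(self, ids):                 # final hidden state as representation
        _, (h, _) = self.lstm(self.emb(ids))
        return h[-1]

    def forward(self, premise_ids, hypothesis_ids):
        pair = torch.cat([self.encode(premise_ids), self.encode(hypothesis_ids)], dim=-1)
        return self.out(pair)
```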
The second architecture employs multiplicative tree-structured LSTM (TreeLSTM) networks (Tran and Cheng, 2018), which are expected to be more sensitive to hierarchical syntactic structures. Each premise and hypothesis is processed as a tree structure by bottom-up combinations of constituent nodes using the same shared compositional function, input word information, and between-word relational information. We parse all premise-hypothesis pairs with the dependency parser in the spaCy library to obtain tree structures. For each experimental setting, we randomly sample 100 tree structures and check their correctness. In both LSTM and TreeLSTM, the dimension of the hidden units is 200, and we initialize the word embeddings with 300-dimensional GloVe vectors (Pennington et al., 2014). Both models are optimized with Adam (Kingma and Ba, 2015), and no dropout is applied.
The third architecture is a Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2019). We use the base-uncased model pre-trained on Wikipedia and BookCorpus from the pytorch-pretrained-bert library, fine-tuned for the NLI task using our dataset. In fine-tuning BERT, no dropout is applied, and we choose hyperparameters that are commonly used for MultiNLI. We train all models over 25 epochs or until convergence and select the best-performing model based on its performance on the validation set. We perform five runs per model and report the average and standard deviation of their scores.
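A fine-tuning sketch is given below using the current Hugging Face transformers API as a stand-in for the pytorch-pretrained-bert library named above; the sentence-pair encoding with [CLS]/[SEP] is the same scheme.

```python
from transformers import BertForSequenceClassification, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One premise-hypothesis pair; fine-tune with cross-entropy on the gold labels.
batch = tok(["Some dogs ran"], ["Some animals ran"], return_tensors="pt", padding=True)
logits = model(**batch).logits
```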
Experiments and Discussion
I. Systematicity of predicate replacements
Figure 2 shows the performance on unseen combinations of quantifiers and predicate replacements. With the minimal training set S_1, the accuracy of LSTM and TreeLSTM was almost at chance, but that of BERT was around 75%, suggesting that only BERT generalized to unseen combinations of quantifiers and predicate replacements. When we trained BERT with the training set S_2, which contains inference examples generated from combinations of one pair of upward/downward quantifiers and all predicate replacements, the accuracy was 100%. This indicates that, after being shown two kinds of quantifiers in the training data, BERT could distinguish between upward and downward for the other quantifiers. The accuracy of LSTM and TreeLSTM increased with increasing training set size but did not reach 100%. This indicates that LSTM and TreeLSTM also generalize to inferences involving similar quantifiers to some extent, but their generalization ability is imperfect.
When testing the models with inferences where adverbs or prepositional phrases are added to the beginning of the sentence, the accuracy of all models significantly decreased. This decrease becomes larger as the syntactic structures of the sentences in the test set become increasingly different from those in the training set. Contrary to our expectations, the models fail to maintain accuracy on test sets whose only difference from the training set is the structure with the adverb at the beginning of a sentence. Of course, we could augment the dataset with that structure, but doing so would require feeding all combinations of inference pairs into the models. These results indicate that the models tend to estimate the entailment label from the beginning of a premise-hypothesis sentence pair, and that the inferential systematicity needed to draw inferences involving quantifiers and predicate replacements is not completely generalized at the level of arbitrary constituents.

II. Systematicity of embedding quantifiers
Figure 3 shows the performance of all models on unseen combinations of embedding quantifiers. Even when adding the training set of inferences involving one embedded clause and two quantifiers step-by-step, no model showed improved performance. The accuracy of BERT slightly exceeded chance, but the accuracy of LSTM and TreeLSTM was nearly the same as or lower than chance. These results suggest that all the models fail to generalize to unseen combinations of embedding quantifiers, even when they involve similar upward/downward quantifiers.

III. Productivity
Table 3 shows the performance on unseen depths of embedded clauses. The accuracy on D_1 and D_2 was nearly 100%, indicating that all models almost completely generalize to inferences containing previously seen depths. When D_1 + D_2 was used as the training set, the accuracy of all models on D_3 exceeded chance. Similarly, when D_1 + D_2 + D_3 was used as the training set, the accuracy of all models on D_4 exceeded chance. This indicates that all models partially generalize to inferences containing embedded clauses one level deeper than those in the training set.
However, the standard deviations of BERT and LSTM were around 10, suggesting that these models did not consistently generalize to inferences containing embedded clauses one level deeper than the training set. While the distribution of monotonicity directions (upward/downward) in the training and test sets was uniform, the accuracy of LSTM and BERT tended to be lower for downward inferences than for upward inferences. This also indicates that these models fail to properly compute the monotonicity directions of constituents from syntactic structures. The standard deviation of TreeLSTM was smaller, indicating that TreeLSTM robustly learns inference patterns containing embedded clauses one level deeper than the training set. However, the performance of all models trained with D_1 + D_2 significantly decreased on D_4 and D_5. Performance also decreased for all models trained with D_1 + D_2 + D_3 on D_5. Specifically, the performance of all models, including TreeLSTM, significantly decreased on inferences containing embedded clauses two or more levels deeper than those in the training set. These results indicate that all models fail to develop productivity on inferences involving embedding monotonicity.

IV. Localism
Table 4 shows the performance of all models on the localism of embedding monotonicity. When the models were trained with D_3, D_4, or D_5, all performed at around chance on the test set of non-embedding inferences D_1 and on the test set of inferences involving one embedded clause, D_2. These results indicate that even if models are trained with a set of inferences containing complex syntactic structures, they fail to locally interpret the constituents of those inferences.
Performance of data augmentation
Prior studies (Yanaka et al., 2019b; Richardson et al., 2020) have shown that, given BERT initially trained with MultiNLI, further training with synthesized instances of logical inference improves performance on the same types of logical inference while maintaining the initial performance on MultiNLI. To investigate whether the results of our study are transferable to current work on MultiNLI, we trained models with our synthesized dataset mixed with MultiNLI and checked (i) whether our synthesized dataset degrades the original performance of models on MultiNLI and (ii) whether MultiNLI degrades the ability to generalize to unseen depths of embedded clauses. Table 5 shows that training BERT on our synthetic data D_1 + D_2 and MultiNLI increases the accuracy on our test sets D_1 (46.9 to 100.0), D_2 (46.2 to 100.0), and D_3 (46.8 to 67.8) while preserving accuracy on MultiNLI (84.6 to 84.4). This indicates that training BERT with our synthetic data does not degrade performance on commonly used corpora like MultiNLI while improving performance on monotonicity, which suggests that our data-synthesis approach can be combined with naturalistic datasets. For TreeLSTM and LSTM, however, adding our synthetic dataset decreases accuracy on MultiNLI. One possible reason is that a pre-training-based model like BERT can mitigate catastrophic forgetting across various types of datasets.
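The mixing itself is simple concatenation and shuffling, as in this hypothetical helper; the sampling ratio between the two sources is a design choice the text does not fix.

```python
import random

def mix_datasets(multinli, synthetic, seed=0):
    combined = list(multinli) + list(synthetic)
    random.Random(seed).shuffle(combined)
    return combined
```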
Regarding the ability to generalize to unseen depths of embedded clauses, the accuracy of all models on our synthetic test set containing embedded clauses one level deeper than the training set exceeds chance, but the improvement becomes smaller with the addition of MultiNLI. In particular, with the addition of MultiNLI, the models tend to make wrong predictions in cases where a hypothesis contains a phrase not occurring in the premise but the premise nevertheless entails the hypothesis. Such inference patterns are contrary to the heuristics in MultiNLI (McCoy et al., 2019). This indicates that there may be some trade-offs in performance between the inference patterns in the training set and those in the test set.
Related Work
The question of whether neural networks are capable of processing compositionality has been widely discussed (Fodor and Pylyshyn, 1988; Marcus, 2003). Recent empirical studies illustrate the importance and difficulty of evaluating the capability of neural models. Generation tasks using artificial datasets have been proposed for testing whether models compositionally interpret training data from the underlying grammar of the data (Lake and Baroni, 2017; Hupkes et al., 2018; Saxton et al., 2019; Loula et al., 2018; Hupkes et al., 2019; Bernardy, 2018). However, these conclusions are controversial, and it remains unclear whether the failure of models on these tasks stems from their inability to deal with compositionality.
Previous studies using logical inference tasks have also reported both positive and negative results.
Assessment results on propositional logic (Evans et al., 2018), first-order logic (Mul and Zuidema, 2019), and natural logic (Bowman et al., 2015) show that neural networks can generalize to unseen words and lengths. In contrast, Geiger et al. (2019) obtained negative results by testing models under fair conditions of natural logic. Our study suggests that these conflicting results come from an absence of perspective on combinations of semantic properties.
Regarding assessment of the behavior of modern language models, Linzen et al. (2016) and Goldberg (2019) investigated their syntactic capabilities by testing such models on subject-verb agreement tasks. Many studies of NLI tasks (Liu et al., 2019; Glockner et al., 2018; Poliak et al., 2018; Tsuchiya, 2018; McCoy et al., 2019; Rozen et al., 2019; Ross and Pavlick, 2019) have provided evaluation methodologies and found that current NLI models often fail on particular inference types or learn undesired heuristics from the training set. In particular, recent works (Yanaka et al., 2019a,b; Richardson et al., 2020) have evaluated models on monotonicity, but did not focus on the ability to generalize to unseen combinations of patterns. Monotonicity covers various systematic inferential patterns and is thus an adequate semantic phenomenon for assessing inferential systematicity in natural language. Another benefit of focusing on monotonicity is that it provides hard problem settings against heuristics (McCoy et al., 2019), which fail to perform downward-entailing inferences where the hypothesis is longer than the premise.
Conclusion
We introduced a method for evaluating whether DNN-based models can learn systematicity of monotonicity inference under four aspects. A series of experiments showed that the capability of three models to capture systematicity of predicate replacements was limited to cases where the positions of the constituents were similar between the training and test sets. For embedding monotonicity, no models consistently drew inferences involving embedded clauses whose depths were two levels deeper than those in the training set. This suggests that models fail to capture inferential systematicity of monotonicity and its productivity.
We also found that BERT trained with our synthetic dataset mixed with MultiNLI maintained performance on MultiNLI while improving the performance on monotonicity. This indicates that though current DNN-based models do not systematically interpret monotonicity inference, some models might have sufficient ability to memorize different types of reasoning. We hope that our work will be useful in future research for realizing more advanced models that are capable of appropriately performing arbitrary inferences.
Context-free grammar for premise sentences

Production rules:
S → NP IV1
NP → Q N | Q N S'
S' → WhNP TV NP | WhNP NP TV | NP TV
IV1 → IV1 Adv | IV1 PP | IV1 or IV2 | IV1 and IV2

Lexicon:
Q → {no, at most three, less than three, few, some, at least three, more than three, a few}
N → {dog, rabbit, lion, cat, bear, tiger, elephant, fox, monkey, wolf}
IV1 → {ran, walked, came, waltzed, swam, rushed, danced, dawdled, escaped, left}
IV2 → {laughed, groaned, roared, screamed, cried}
TV → {kissed, kicked, hit, cleaned, touched, loved, accepted, hurt, licked}

Embedded clauses occur in the first argument of the quantifier. To generate natural sentences consistently, we use the past tense for verbs; for lexical entries and predicate replacements, we select those that do not violate selectional restrictions. A minimal sampling sketch follows.
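The grammar is straightforward to sample from. The sketch below uses a reduced, hypothetical subset of the rules and lexicon purely to illustrate the generation procedure:

```python
import random

rng = random.Random(0)

# Reduced, hypothetical subset of the grammar above; multiword quantifiers
# are treated as single terminals.
GRAMMAR = {
    "S":   [["NP", "IV1"]],
    "NP":  [["Q", "N"]],
    "Q":   [["no"], ["some"], ["at most three"], ["more than three"]],
    "N":   [["dog"], ["rabbit"], ["lion"], ["wolf"]],
    "IV1": [["ran"], ["walked"], ["swam"], ["left"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol; anything not in GRAMMAR is a terminal."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))  # e.g. "some rabbit swam"
```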
To check the gold labels for the generated premise-hypothesis pairs, we translate each sentence into a first-order logic (FOL) formula and test whether the entailment relation holds by theorem proving. The FOL formulas are compositionally derived by combining the lambda terms assigned to each lexical item in accordance with the meaning composition rules specified in the CFG rules in the standard way (Blackburn and Bos, 2005). Since our purpose is to check the polarity of monotonicity marking, vague quantifiers such as few are represented according to their polarity. For example, we map the quantifier few onto the lambda term $\lambda P\lambda Q.\neg\exists x(\mathrm{few}(x)\wedge P(x)\wedge Q(x))$. Table 7 shows all results on embedding monotonicity. It indicates that all models partially generalize to inferences containing embedded clauses one level deeper than the training set, but fail to generalize to inferences containing embedded clauses two or more levels deeper.
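The label-checking step can be illustrated with NLTK's first-order theorem provers; the formulas below are hand-written stand-ins for the compositionally derived ones, not the paper's actual translations:

```python
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read = Expression.fromstring

# Does "some dogs ran" entail "some animals ran", given that dogs are animals?
premise = read('exists x.(dog(x) & ran(x))')
axiom = read('all x.(dog(x) -> animal(x))')
hypothesis = read('exists x.(animal(x) & ran(x))')

print(ResolutionProver().prove(hypothesis, [premise, axiom]))  # True
```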
Time-violating rotation of the electromagnetic wave polarization plane by a diffraction grating
The equations describing T-violating photon scattering by a diffraction grating are obtained. It is shown that a T-violating rotation of the photon polarization plane appears under diffraction by a non-centrosymmetric diffraction grating. The rotation angle rises sharply when the conditions for photon resonance transmission are satisfied.
1 Introduction
Since the discovery of CP-violation in the decay of K-mesons (Christenson, Cronin, Fitch and Turlay (1964)), a few attempts have been undertaken to observe this phenomenon experimentally in different processes. However, those experiments have not been successful. At present, novel, more precise experimental schemes are actively discussed: observation of the atomic (Lamoreaux (1989)) and neutron (Forte (1983), Fedorov, Voronin and Lavin (1992)) electric dipole moment; the T(time)-violating atomic (molecular) spin rotation in a laser wave and the T-violating refraction of a photon in a polarized atomic or molecular gas (Baryshevsky (1993, 1994)).
In accordance with Baryshevsky (1993, 1994), the P(parity)- and T-violating dielectric permittivity tensor ε_ik is given by

ε_ik = δ_ik + 4πρχ_ik = δ_ik + (4πρ/k²) f_ik(0),   (1)

where χ_ik is the polarizability tensor of the matter, ρ is the number of atoms (molecules) per cm³ and k is the photon wave number. The quantity f_ik(0) is the tensor part of the zero-angle amplitude of elastic coherent scattering of a photon by an atom (molecule), f(0) = f_ik(0) e'*_i e_k. Here e and e' are the polarization vectors of the initial and scattered photons. Indices i = 1, 2, 3 refer to the coordinates x, y, z, respectively; repeated indices imply summation. At zero external electric and magnetic fields, the amplitude f_ik(0) can be written as

f_ik(0) = f^ev_ik + (ω²/c²)[ iβ^P_s ε_ikl n_l + iβ^P_t ε_iml Q_mk n_l + (1/2) β^T_t ( Q_im ε_mlk n_l + Q_km ε_mli n_l ) ],   (2)

where f^ev_ik is the P-, T-even (invariant) part of f_ik(0), the remaining terms form the P-, T-violating part f^{P,T}_ik, β^{P,T}_{s,v,t} is the scalar (vector, tensor) P-, T-violating polarizability of an atom (molecule), ε_ikl is the totally antisymmetric unit tensor of rank three, n = k/k, and Q_ik is the spin quadrupolarization tensor of the atoms (molecules). In view of (1), (2), the T-violating processes affect the dielectric permittivity ε_ik and, as a result, the refraction index only in media with polarized atoms (molecules) of spin equal to or larger than 1; this part of ε_ik is proportional to β^T_t. If the atoms' (molecules') spins are unpolarized, only the P-violating term f_ik(0) = (ω²/c²) iβ^P_s ε_ikl n_l exists. The term proportional to β^P_s describes the P-violating rotation of the light polarization plane in metallic vapours (Barkov and Zolotariov 1978, Bouchiat and Pottier 1986, Khriplovich 1991). As shown in Baryshevsky (1993, 1994), when an atom interacts with two coherent electromagnetic waves, the energy of this interaction depends on the T-violating scalar polarizability β^T_s. Interaction of an atom (molecule) with two waves can be considered as a process of rescattering of one wave into the other and vice versa. Then, as follows from the expression for the effective interaction energy, the amplitude f(k', k) of photon scattering by an unpolarized atom (molecule) at a non-zero angle is given by (Baryshevsky 1994)

f(k', k) = (ω²/c²){ α_s (e'*·e) + iβ^P_s (n + n')·[e'* × e] + β^T_s (n − n')·[e'* × e] },   (3)

where k' is the wave vector of the scattered photon, n' = k'/k', and α_s is the scalar P-, T-invariant polarizability of the atom (molecule). Expression (3) holds true in the absence of external electric and magnetic fields. It should be emphasized that expression (3) for the elastic scattering amplitude can be derived from general principles of symmetry. Indeed, there are four independent unit vectors, n, n', e and e', which completely describe the geometry of the elastic scattering process. The elastic scattering amplitude f(k', k) depends on these vectors and is a scalar. Obviously, one can compose three independent scalars from these vectors: (e'*·e), ν₁ = (n + n')·[e'* × e] and ν₂ = (n − n')·[e'* × e]. As a result, the scattering amplitude can be written as

f(k', k) = f_s (e'*·e) + f^P_s ν₁ + f^T_s ν₂,   (4)

where f_s is the P-, T-invariant scalar amplitude, f^P_s is the P-violating scalar amplitude, and f^T_s is the P-, T-violating scalar amplitude. It can easily be found from (3), (4) that the term proportional to β^T_s (f^T_s) vanishes in the case of forward scattering n' → n. Vice versa, in the case of back scattering n' → −n the term proportional to β^P_s (f^P_s) becomes equal to zero. Thus, one can conclude that the T-violating interactions manifest themselves in processes of scattering by atoms (molecules).
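The forward/backward behaviour of the decomposition (4) is easy to verify numerically. The sketch below assumes the ν₁, ν₂ structures as written above; the amplitude values are placeholders, not numbers from the paper:

```python
import numpy as np

def scattering_amplitude(e, e_p, n, n_p, f_s, f_P, f_T):
    """Elastic amplitude decomposed into the three scalar invariants of Eq. (4)."""
    s0 = np.vdot(e_p, e)                 # (e'* . e), P-, T-even
    cross = np.cross(np.conj(e_p), e)    # e'* x e
    nu1 = np.dot(n + n_p, cross)         # vanishes for back scattering (n' -> -n)
    nu2 = np.dot(n - n_p, cross)         # vanishes for forward scattering (n' -> n)
    return f_s * s0 + f_P * nu1 + f_T * nu2

# Forward scattering: the T-violating invariant drops out.
n = np.array([0.0, 0.0, 1.0])
e = np.array([1.0, 0.0, 0.0], dtype=complex)
e_p = np.array([0.0, 1.0, 0.0], dtype=complex)
print(scattering_amplitude(e, e_p, n, n, f_s=1.0, f_P=1e-7, f_T=1e-11))
```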
However, scattering processes are usually incoherent and their cross sections are too small to allow observation of the T-violating effect. A different situation takes place for diffraction gratings in the vicinity of the Bragg resonance, where the scattering process is coherent. As a result, the intensities of the scattered waves increase strongly: for instance, in the Bragg (reflection) diffraction geometry the amplitude of the diffracted-reflected wave may reach unity. This gives us an opportunity to study the T-violating scattering processes (Baryshevsky 1994).
In the present paper, equations describing T-violating scattering by a diffraction grating are obtained. It is shown that the photon's refraction index in a non-centrosymmetric grating depends on the T-violating amplitude f^T_s. This can result in a new phenomenon: the T-violating rotation of the photon polarization plane. It is also shown that the rotation angle rises sharply in the back-scattering diffraction geometry when the conditions of photon resonance transmission are satisfied.
2 The P-, T-violating diffraction of electromagnetic waves by a diffraction grating

The phenomenon of P-, T-invariant diffraction of electromagnetic waves by diffraction gratings has been studied in detail for a very wide range of wavelengths (see, for example, Shih-Lin Chang (1984), Tamir (1988), Maksimenko and Slepyan (1997)). According to these works, the equations of dynamical diffraction can be derived from the Maxwell equations if the permittivity tensor ε_ik(r, ω) of a spatially periodic grating is known.
To include the P-, T-violating processes into the diffraction theory, let us consider the microscopic Maxwell equations

∇×E = −(1/c) ∂B/∂t,  ∇×B = (1/c) ∂E/∂t + (4π/c) j,  ∇·E = 4πρ,  ∇·B = 0,   (5)

where E is the electric field strength and B is the magnetic field induction, ρ and j are the microscopic densities of the electric charge and the current induced by an electromagnetic wave, and c is the speed of light. The Fourier transformation of these equations (i.e. E(r, t) = (2π)⁻⁴ ∫ E(k, ω) e^{ikr} e^{−iωt} d³k dω and so on) leads to the equation for E(k, ω):

(k² − ω²/c²) E(k, ω) − k (k·E(k, ω)) = (4πiω/c²) j(k, ω),   (6)

where n = k/k.
In the linear approximation, the current j(r, ω) is coupled with E(r, ω) by the well-known dependence

j_i(r, ω) = ∫ σ_ij(r, r', ω) E_j(r', ω) d³r',   (7)

with σ_ij(r, r', ω) as the microscopic conductivity tensor, being a sum of the conductivity tensors of the atoms (molecules) constituting the diffraction grating:

σ_ij(r, r', ω) = Σ_A σ^A_ij(r − R_A, r' − R_A, ω).   (8)

Here σ^A_ij is the conductivity tensor of the A-type scatterers. The summation is over all atoms (molecules) of the grating.
In a diffraction grating, the tensor σ_ij(r, r', ω) is a spatially periodic function. This allows one to derive the expansion of j_i(k, ω) from (7) as

j_i(k, ω) = Σ_τ σ^c_ij(k, k − τ, ω) E_j(k − τ, ω),   (9)

where σ^c_ij is the Fourier transform of the conductivity tensor of a grating's elementary cell and τ is the reciprocal lattice vector of the diffraction grating. Using current representation (9), one can obtain a set of equations from (6):

(k² − ω²/c²) E_i(k) − k_i k_j E_j(k) = (ω²/c²) Σ_τ χ̂_ij(k, k − τ) E_j(k − τ).   (10)

The tensor of the diffraction grating susceptibility is given by

χ̂_lj(k, k − τ) = (4πi/ω) σ^c_lj(k, k − τ) = (4πc²/ω²) F_lj(k, k − τ).   (11)

Here F_lj(k, k − τ) = (iω/c²) σ^c_lj(k, k − τ) is the amplitude of coherent elastic scattering of an electromagnetic wave by a grating elementary cell from a state with wave vector k − τ to a state with wave vector k.
The amplitude F_lj is obtained by summation of the atomic (molecular) coherent elastic scattering amplitudes over a grating's elementary cell:

F_lj(k, k − τ) = Σ_{A=1}^{N_c} ⟨ f^A_lj(k, k − τ) e^{−iτR_A} ⟩,   (12)

where f^A_lj is the coherent elastic scattering amplitude by an A-type atom (molecule), R_A is the gravity-center coordinate of the A-type atom (molecule), N_c is the number of atoms (molecules) in an elementary cell, and the angular brackets denote averaging over the coordinate distribution of scatterers in a grating's elementary cell.
The amplitude f_lj has been given by equations (3), (4). From (11), (12) and (4) one can obtain an expression for the susceptibility χ̂_lj of the elementary cell of an optically isotropic material:

χ̂_lj(k, k − τ) = χ_s(τ) δ_lj + iχ^P_s(τ) ε_mlj (n + n')_m + χ^T_s(τ) ε_mlj (n − n')_m,   (13)

where χ_s(τ) is the scalar P-, T-invariant susceptibility of an elementary cell, χ^P_s(τ) is the P-violating, T-invariant susceptibility of the elementary cell, and χ^T_s(τ) is the P- and T-violating susceptibility of the elementary cell. Then, using (10), (11), (13), we can derive a set of equations (14) describing the P- and T-violating interaction of an electromagnetic wave with a diffraction grating; when χ^P_s = χ^T_s = 0, equations (14) reduce to the conventional set of equations of the dynamical diffraction theory (Shih-Lin Chang (1984)).
3 The phenomenon of T-violating rotation of the photon polarization plane by a diffraction grating
Let us suppose, first of all, that the photon frequency ω and the wave vector k are such that the Bragg diffraction conditions |k ± τ| = |k| are not fulfilled exactly, and the inequality |χ̂_ij(k, k − τ)/α_τ| ≪ 1 holds true, where α_τ = [k² − (k − τ)²]/k² characterizes the deviation from the exact Bragg condition. In this case, the diffracted wave amplitude is much smaller than the transmitted one, E(k − τ) ≪ E(k), and perturbation theory can be applied for the further analysis. As a result, in the first approximation of perturbation theory one can derive from (10) that

E_i(k − τ) ≈ (1/α_τ) χ̂_ij(k − τ, k) E_j(k).   (15)

Substitution of (15) into (10) results in the diffraction equations (16), which can be rewritten in a simpler form by introducing the effective permittivity tensor

ε̂^eff_ij(k, ω) = (1 + χ_s(0)) δ_ij + Σ_{τ≠0} (1/α_τ) χ̂_il(k, k − τ) χ̂_lj(k − τ, k).   (17)

One can see that even far away from the exact Bragg conditions, where the diffracted wave amplitudes are small, a spatially periodic isotropic medium manifests optical anisotropy, characterized by the effective permittivity tensor ε̂^eff_ij(k, ω).
Let a photon be incident on a grating normally to its reflection planes; in other words, let the photon wave vector k₀ be antiparallel to the reciprocal lattice vector τ, i.e. k₀ ↑↓ τ. In this case, the back-scattering diffraction regime can be realized for photons with wave numbers defined by the relation k ≈ τ/2. If, nevertheless, the inequality |α_τ| ≫ |χ̂_ij(k, k − τ)| holds true, we can use set of equations (16), in which there is only one term satisfying the conditions τ ↑↓ k and τ ≃ 2k. Let the coordinate axis z be parallel to k₀, τ. In this case, the tensor χ̂_ij has nonzero components only at i, j = 1, 2. As a result, set of equations (17) can be rewritten in the form (19), with the permittivity tensor (20) containing, besides the isotropic part 1 + χ_s(0), a term proportional to χ^P_s(0) and a term proportional to χ_s(±τ)χ^T_s(∓τ)/α_τ. The term proportional to χ^P_s(0) describes the P-violating and T-invariant rotation of the light polarization plane about the direction n. This term does not depend on the structure of the diffraction grating and exists for any ordinary spatially isotropic medium. In contrast, the term proportional to χ^T_s is T-violating and depends on the grating's structure; it looks like the term proportional to χ^P_s and is responsible for polarization plane rotation about the direction ν_τ = τ/τ. It is known that the phenomenon of polarization plane rotation arises when right- and left-circularly polarized photons have different indices of refraction in a medium, n₊ and n₋, respectively. This means that the tensor ε_ij is diagonal for a given circular polarization and, consequently, the set of equations (19) splits into two independent equations. Really, let us write (19) in the vector notation (21), and let e₁ be the unit polarization vector of a linearly polarized photon, e₂ = [n × e₁], e₁ ⊥ e₂ ⊥ n. Then the unit vectors corresponding to the circular polarizations are e± = (e₁ ± ie₂)/√2. For right (e₊) and left (e₋) circularly polarized photons, the field E can be represented as E = c^(±) e^(±). As a result, it follows from (19), (21) that the equations decouple (22), and the corresponding refractive indices n± are obtained from (20) (equation (23)). The angle of photon polarization plane rotation is defined by

ϑ = (kL/2) Re(n₊ − n₋),   (24)

where L is the photon path length in the medium and Re n± is the real part of n±. The expression for ϑ can then easily be derived from (24), (23) (equation (25)). One can thus conclude that the T-violating interaction results in the phenomenon of T-violating rotation of the photon polarization plane. The effect manifests itself when the condition

Re i[ χ_s(τ) χ^T_s(−τ) − χ_s(−τ) χ^T_s(τ) ] ≠ 0

holds true. It follows from (13) that the susceptibilities χ^{P,T}_s(τ) can be presented as a sum of parts even and odd in τ (equations (26), (27)): χ(τ) = χ₁(τ) + χ₂(τ) with χ₁(−τ) = χ₁(τ) and χ₂(−τ) = −χ₂(τ). In view of (26), (27), expression (25) takes the form (28). So, the T-violating rotation arises in the case of a nonzero odd part of the susceptibility, χ₂(τ) ≠ 0. Such a situation is possible if an elementary cell of the diffraction grating does not possess a center of symmetry.
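For orientation, Eq. (24) amounts to a one-line calculation once the circular refractive indices are known; the numbers below are purely illustrative, not values from the paper:

```python
import numpy as np

def rotation_angle(n_plus, n_minus, k, L):
    """Polarization-plane rotation from circular birefringence, Eq. (24):
    theta = (k*L/2) * Re(n+ - n-)."""
    return 0.5 * k * L * np.real(n_plus - n_minus)

k = 2 * np.pi / 500e-7          # wave number of 500 nm light, in cm^-1
n_plus = 1.0 + 1e-7 + 1e-12j    # placeholder circular indices
n_minus = 1.0 - 1e-7 + 1e-12j
print(rotation_angle(n_plus, n_minus, k, L=1.0), "rad")
```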
In accordance with (28), the angle of the T-violating rotation grows as α_τ → 0. However, the condition |χ_s(τ)/α_τ| ≪ 1 is violated as α_τ → 0, where the amplitudes of the diffracted and transmitted waves become comparable, E(k − τ) ≃ E(k), and, consequently, perturbation theory becomes inapplicable. A rigorous dynamical diffraction theory must be applied in this case.
4 The T-violating polarization plane rotation in the Bragg diffraction scheme
Let the Bragg condition be fulfilled for only one diffracted wave and violated for all other possible ones. This allows us to restrict ourselves to the two-wave approximation of the dynamical diffraction theory (Shih-Lin Chang (1984)). In that case, set of equations (14) reduces to two coupled equations, which for the back-scattering diffraction scheme k₀ ↑↓ τ take the form

(k²c²/ω² − 1 − χ_s(0)) E(k) = χ̂(k, k − τ) E(k − τ),
((k − τ)²c²/ω² − 1 − χ_s(0)) E(k − τ) = χ̂(k − τ, k) E(k).   (29)

Based on the above consideration, we can conclude that set of equations (29) can be diagonalized for a photon of a given circular polarization. Let a right-circularly polarized photon (e₊) be incident on the diffraction grating. The diffraction process, as follows from (29), results in the appearance of a back-scattered photon with left circular polarization e^τ_₋. This is because the momentum of the back-scattered photon k' = k − τ is antiparallel to the momentum k of the incident one. Obviously, a left-circularly polarized photon will produce a right-circularly polarized back-scattered one.
Thus, for circularly polarized photons the set of vector equations (29) can be split into two independent sets of scalar equations (30). Note that equations (30) are identical in form to the conventional equations of two-wave dynamical diffraction (Shih-Lin Chang (1984)). This allows us to write down a solution immediately, without derivation (see, for example, Shih-Lin Chang (1984)). As a result, the amplitude of the transmitted electromagnetic wave at the output is given by (32), where L is the thickness of the diffraction grating. Consider now the diffraction of a photon with linear polarization e₁, which is a superposition of two opposite circular polarizations, e₁ = (e₊ + e₋)/√2. In this case the amplitude of the transmitted wave E'(r) can be presented as the superposition (38). In the case under consideration c₊ ≠ c₋; as follows from (38), this results in a change of the photon polarization at the output.
Let us analyze expression (32) for the transmitted wave amplitude more attentively. According to (32), the amplitude oscillates as a function of α, i.e., as a function of the wavelength, with maxima at points defined by the condition

k₀ Re(√ε₁^± − √ε₂^±) L = 2πm, or Re(√ε₁^± − √ε₂^±) = 2πm/(k₀L),   (39)

with m an integer.
Note first of all that the condition m = 0 dictates, at ε₁^± − ε₂^± → 0, the limit transition which determines the thresholds of the total Bragg reflection band, inside which there is a quickly damped inhomogeneous wave in the diffraction grating. As a result, the transmitted wave amplitude is small.
Let Re χ_s1,2 ≫ Im χ_s1,2, i.e., the absorption is assumed to be sufficiently small to satisfy the condition k₀ Im(ε₁ − ε₂)^ev L ≪ 1, which admits consideration of the diffraction grating as an optically transparent medium with (ε₁ − ε₂)^ev and χ_s as real functions. Let now the condition k₀ (ε₁ − ε₂)^ev L = 2πm be fulfilled at m ≠ 0. This condition defines the resonance transmission in the grating and allows us to rewrite formula (40) in the form (41). By substituting (41) into (33), (32), one can express E^± by (42), where the second term in (42) is small, so expression (42) can be written in the form (44) with the phase terms (45). Using this equation, one can find the angle of the polarization plane rotation (46), where the first term in the right-hand part defines the P-violating, T-invariant rotation angle (47) and the second one corresponds to the T-violating rotation (48); the sign (−) is for α₁ and the sign (+) is for α₂. The imaginary part of the T-violating polarizability, Im χ^T_s1,2, is responsible for the T-violating circular dichroism: due to that process, a linearly polarized photon acquires a circular polarization at the diffraction grating's output. The degree of circular polarization of the photon is determined from relation (49). It should be pointed out that the resonance transmission condition is satisfied at a given m for two different values of α. This is because it is possible to approach the Brillouin (total Bragg reflection) bandgap both from high and from low frequencies.
The T-violating parts of the rotation angle are opposite in sign for α₁ and for α₂. This gives an additional opportunity to distinguish the T-violating rotation from the P-violating, T-invariant rotation. Indeed, the P-violating rotation does not depend on the back Bragg diffraction in the general case, because the P-violating scattering amplitude equals zero for back scattering (see (32)-(35)).
Different types of diffraction gratings are designed for use in the optical and longer-wavelength ranges. However, it should be noted that the successful observation of the P-violating rotation was performed by studying light transmission through gas targets. There are many theoretical calculations for atoms of such gases: see, for example, Khriplovich (1991) for Bi, Tl, Pb, Dy. From that point of view, it would be preferable to use gases for studying the T-violating phenomena of polarization plane rotation and dichroism, applying the experience accumulated earlier. At first glance, there is a serious problem of how to create such a diffraction grating in a gas. Nevertheless, the problem can be solved if we make use of some well-known results of the electromagnetic theory of waveguides (Jackson (1962), Tamir (1988), Maksimenko and Slepyan (1997)). According to this theory, there is a correspondence between wave processes in waveguides with periodically modulated boundaries and homogeneous filling, and such processes in regular waveguides filled with a periodic and, generally, anisotropic medium. Let us consider a regular waveguide constituted by two plane-parallel surfaces, for example, two metallic mirrors (see Figure 1). Let us then place a plane diffraction grating (Figure 2) on the surface of the mirror (Figure 3) and fill the waveguide with the studied gas. Because of the above-stated correspondence, such a system is equivalent to a regular plane waveguide filled with a gas whose permittivity tensor is spatially periodic, the permittivity modulation period being equal to the grating period. As the chosen plane grating has an asymmetric profile (Figure 2), the corresponding virtual volume grating of the permittivity turns out to be noncentrosymmetric and thus satisfies the above requirements for displaying the T-violating phenomena.
Let ε̂_ij(r, ω) = 1 + χ_ij(x, z) be the permittivity of the waveguide under consideration, with χ_ij(x, z) a periodic function with respect to z. As stated above, such a waveguide can be modeled by a waveguide with the effective permittivity ε̂^eff(z, ω) = 1 + χ̂^eff(z), which is a periodic function of z and is independent of x; this can be shown mathematically starting from the Maxwell equations. For a gas density ρ = 10¹⁶–10¹⁷ cm⁻³, the P-violating rotation of the polarization plane is characterized by the angle ϑ_P = k Re χ^P_s(0) L ≅ 10⁻⁷ rad/cm × L. As a result, in our case the parameter φ = kχ^T_s(τ)L turns out to be φ ≅ 10⁻¹⁰–10⁻¹¹ rad/cm × L, and can be even smaller by the factor h/d, where h is the corrugation amplitude of the diffraction grating and d is the distance between the waveguide's mirrors. Assuming this factor to be ∼10⁻¹, we find φ ≅ 10⁻¹¹–10⁻¹² rad/cm × L. Thus, the final estimate of the T-violating rotation angle ϑ_T is

ϑ_T ≅ 10⁻¹ (k₀ χ_s(τ) L)² × (10⁻¹¹–10⁻¹²) rad/cm × L.

In a real situation the polarizability of a grating χ_s(τ) may exceed unity; however, our analysis has been performed under the assumption χ_s ≪ 1. If, for example, we take χ_s = 10⁻¹ and k₀ = 10⁴, then ϑ_T ≃ 10⁻⁶–10⁻⁷ rad/cm × L³ and, consequently, for L = 1 cm we will have an amplification by a factor of 10⁵.
As can be seen, we have obtained a T-violating rotation angle ϑ_T of the same order as ϑ_P. This makes experimental observation of the phenomenon of T-violating polarization plane rotation possible.
It should be noted that the manufacturing of diffraction gratings for ranges of longer wavelength than visible light may be simpler. That is why we would like to attract attention to the possibility of studying the T-violating polarization plane rotation in the vicinity of frequencies of atomic (molecular) hyperfine transitions, for example, for Cs (transition wavelength λ = 3.26 cm) and Tl (λ = 1.42 cm).
5 Conclusion
Thus, we have shown that the phenomenon of T-violating polarization plane rotation appears when a photon is scattered by a volume diffraction grating. The phenomenon grows sharply in the vicinity of the resonance transmission condition. An experimental scheme, based on a plane waveguide with a diffraction grating as a mirror and a gas filling, has been proposed, which enables real experiments on observation of the T-violating polarization plane rotation to be performed. The rotation angle has been shown to be ϑ_T = 10⁻⁶–10⁻⁷ rad/cm × L³, where L is the waveguide length (thickness of the equivalent volume diffraction grating).
213094068 | pes2o/s2orc | v3-fos-license | Structural, optical and conductivity study of hydrothermally synthesized TiO2 nanorods
TiO2 nanorods were synthesized by the hydrothermal method using commercially available TiO2 nanopowder (P25) as a precursor. This work mainly focuses on the study and comparison of the properties of P25, 20 mg TiO2 nanorods and 40 mg TiO2 nanorods by different characterizations. Fourier transform infrared spectroscopy (FTIR) confirmed the formation and presence of TiO2 nanorods through the shift of peak positions from 1433 cm−1 to 1424 cm−1 and 1420 cm−1. The x-ray diffraction (XRD) results indicate that the crystallinity of the TiO2 nanorods increased significantly, as confirmed by the variation in diffraction peak intensity, and the peak at 2θ = 25.23° confirms the anatase phase. The field emission scanning electron microscope (FESEM) images clearly show the formation and presence of TiO2 nanorods. Thermogravimetric analysis (TGA) and differential thermal analysis (DTA) reveal increased thermal stability, and differential scanning calorimetry (DSC) shows an increase in the melting temperature of the TiO2 nanorods. The UV–vis absorption spectra show a redshift of the absorption peak towards higher wavelength, which expands the optical activity of the TiO2 nanorods. The optical band gap energy was found to decrease from 5.3 eV for P25 to 5.2 and 4.9 eV for the 20 mg and 40 mg samples respectively. The dielectric constant increased twofold and the dielectric loss by almost ten times compared with those of P25. The current versus voltage (I-V) characteristics show a linear curve, revealing easier current flow in the TiO2 nanorods. From the obtained results, it can be concluded that TiO2 nanorods are suitable for potential applications.
Introduction
In recent years, metal oxides have played a major role in the fields of nanoelectronics, physics, chemistry and materials science [1]. Metal oxides and their combinations are used in both dielectric and conducting applications, in devices such as sensors, microelectronic circuits, piezoelectric devices and fuel cells. Metal oxides can be classified into four groups, namely high conductors (e.g. SrRuO3), superconductors (e.g. YBa2Cu3O7), semiconductors (e.g. TiO2, ZnO, SnO2, CuAlO2) and insulators (e.g. Al2O3, MgO, BeO). Metal-oxide-based semiconductors and insulators have the most application-oriented properties, as they play a key role in the electronics domain [2]. Among the available metal oxides, TiO2 is one of the best materials in the field of medical electronics owing to properties such as non-toxicity, biocompatibility, chemical stability, high band gap, high dielectric constant and low cost [3]. In addition, TiO2 has unique physical, chemical and electronic properties [4]. TiO2 nanoparticles belong to the transition metal oxide family. Naturally occurring TiO2 crystals have three polymorphs (phases), namely rutile, anatase and brookite [4]. Among these polymorphs, rutile and anatase have the wider application, but rutile is the most stable at room temperature. TiO2 is available in different forms such as bulk material, nanoparticles, nanowires, nanorods and nanotubes. TiO2 nanoparticles have unique physical and chemical properties due to their limited particle size and high volume density. They are among the most important metal oxide nanoparticles and are popular for their enormous and diverse applications: day-to-day products such as foods, paints, pharmaceuticals, cosmetics, plastics, toothpaste, glazes and enamels [5]; advanced applications in the energy field such as storage cells and photovoltaic cells; environmental applications such as water purification, air purification and photocatalysis [6]; and biomedical fields such as biosensing and drug delivery. TiO2 bulk material and TiO2 nanoparticles are inexpensive, but nanotubes, nanowires and nanorods are quite expensive; to overcome this, this work concentrates on fabricating TiO2 nanorods inexpensively by the hydrothermal method using P25 (TiO2 nanopowder) as a precursor. The major existing methods to fabricate TiO2 nanorods are surfactant-directed, electrochemical, microwave irradiation, alumina templating and hydrothermal synthesis. Among them, the hydrothermal method is the best for fabrication of TiO2 nanorods, with advantages such as high-quality materials with small diameters of about 10 nm, low cost and convenience [7]. The method is also not harmful, the temperature can be controlled manually, the duration of the process is short, and the setup is user-friendly and can be placed within a room. In this paper, TiO2 nanorods are prepared by the hydrothermal method using commercially procured P25 to study the structural, surface morphological, optical, thermal and electrical conductivity properties. The comparison among P25, 20 mg and 40 mg TiO2 nanorods is the major aim of this work.
According to the results, among these ratios the 40 mg TiO2 nanorods are the best in thermal stability, crystal size, structure and conductivity. These studies help to establish that the TiO2 nanorods are suitable for potential applications. Finally, the paper concludes that the 40 mg TiO2 nanorods are the best ratio for potential applications compared with P25 and the 20 mg TiO2 nanorods, as explained in detail below.
Experimental work
2.1. Materials used

TiO2 nanopowder (P25) of high purity was purchased from Sigma-Aldrich, USA; sodium hydroxide (NaOH, AR grade) and hydrochloric acid (HCl, AR grade) were purchased from FINAR for the synthesis of TiO2 nanorods.
Preparation of TiO 2 nanorods
The TiO2 nanopowder (P25) is used as the precursor. 10 N NaOH is dissolved in 40 ml of distilled water, 0.5 g of P25 is added, and the solution is transferred to a Teflon-lined autoclave and hydrothermally treated for 24 h at 130 °C, then allowed to cool for 4 to 5 h at room temperature. After cooling, the precipitate collected from the autoclave is washed with deionised water 2 to 3 times (water treatment), followed by an acid treatment (HCl) until the pH of the precipitate becomes 7 (neutralization). Finally, the powder is dried at 60 °C to produce TiO2 nanorods. The synthesis process is shown in Scheme 1.
Characterization techniques
An x-ray diffractometer (Rigaku MiniFlex II) was used to study the structural properties of the samples between 0 and 60° at a scanning rate of 5° per minute. The chemical composition and functional groups of the samples were examined using Fourier transform infrared spectroscopy (FTIR, Bruker ALPHA) in the spectral range 4000–500 cm−1. The surface morphology of the samples was studied with a field emission scanning electron microscope (FESEM, Zeiss Sigma) at an operating voltage of 15 kV and a magnification scale of 2 μm. Thermal studies, namely thermogravimetric analysis (TGA), differential thermal analysis (DTA) and differential scanning calorimetry (DSC), were performed on an SDT Q600 (TA Instruments) between room temperature and 800 °C at a scanning rate of 10 °C per minute under a nitrogen flow of 20 ml per minute. A two-probe workstation (Keithley) was used to examine the current-voltage (I-V) characteristics of the samples in the voltage range −5 V to +5 V. The dielectric studies of the samples were carried out using an impedance analyzer (Agilent 4294A Precision Impedance Analyzer) in the frequency range from 40 Hz to 10 kHz.
Fourier transform infrared analysis
Fourier transform infrared (FTIR) analysis was used to examine the functional groups and chemical composition, recorded in the wavenumber range 4000–500 cm−1, of the TiO2 nanorods, as shown in figure 1. For the sample P25, the peak at 3311 cm−1 indicates the hydroxyl group (-OH); that band shifted to 3304 cm−1 in the 20 mg sample and to 3291 cm−1 in the 40 mg sample due to the formation of TiO2 nanorods, and the broad absorption band between 3400 and 3200 cm−1 indicates the stretching vibration of the hydroxyl group (O-H) [8][9][10][11]. The band at 2928 cm−1 is assigned to C-H stretching [9,12] in P25, and this peak shifts slightly, with increasing intensity, to 2926 cm−1 and 2921 cm−1 in the 20 mg and 40 mg samples respectively; the band at 1433 cm−1 in P25 is shifted to 1424 cm−1 and 1420 cm−1 for the 20 and 40 mg samples respectively.
The C-O stretching band at 1091 cm−1 is shifted to 1084 cm−1 and 1071 cm−1 respectively, with increased peak intensity. The peaks at 1559, 1433 and 1375 cm−1 (P25), 1562, 1424 and 1372 cm−1 (20 mg) and 1571, 1426 and 1370 cm−1 (40 mg) correspond to Ti-O-C groups [12]. The observed FTIR shifts in peak positions and variations in peak intensity confirm the change in chemical bonds and hence the formation of TiO2 nanorods.

Figure 2 shows the FESEM images of P25 and the 20 mg and 40 mg TiO2 nanorods, which clearly reveal the formation of nanorods.

In the XRD patterns (figure 3), there is a broad peak at around 2θ = 13° in P25; its decreased intensity or disappearance for the 20 mg and 40 mg samples is attributed to the initiation of the phase transformation of TiO2 nanoparticles into TiO2 nanorods. The diffraction peak at 2θ = 25.23° confirms the anatase phase [15] of pure P25, and the corresponding anatase diffraction peak becomes stronger and more intense in both the 20 mg and 40 mg TiO2 nanorods. The variations in peak positions are clearly observed, and the peak intensity increases significantly with increasing weight ratio. These results indicate that the crystallinity increased significantly in the nanorods.
Thermal analysis
The TGA/DTA techniques are widely used to evaluate the thermal stability of materials; the TG/DTA curves of P25 and of the TiO2 nanorods at 20 mg and 40 mg are shown in figure 4. Three major weight-loss regions are observed in the TG curves [16,17]. The first stage, observed at 30–100 °C, is attributed to the removal of water or moisture content due to the breaking of carbon-hydrogen bonds in the sample. The second stage, observed between 110 and 230 °C, corresponds to the loss of dopants and any acid content from the sample. The major weight loss is observed in the third stage, around 550–575 °C, and is related to the destruction of the backbone of the polymer chain, whereby the sample loses its originality. From figure 4 it is noticed that the thermogravimetric temperatures of P25 and the 20 mg and 40 mg TiO2 nanorods increased from 557 °C to 567 °C and 575 °C respectively, with a corresponding increase in the decomposition temperature (319, 327 and 329 °C for P25, 20 mg and 40 mg respectively), which indicates the increased thermal stability of the TiO2 nanorods.
Differential scanning calorimetry (DSC)
The melting temperature (Tm) of the samples was determined by the DSC technique. DSC curves of the P25, 20 mg and 40 mg samples are shown in figure 5. The thermal stability and melting temperature of the TiO2 nanorods were examined between room temperature and 700 °C. It is observed that the melting temperature increased with increasing ratio of TiO2 nanorods, which is responsible for enhancing the crystallinity.
The DSC curves exhibit two exothermic peaks, in the temperature ranges 135–148 °C and 528–549 °C. The exothermic peaks at 135 °C, 141 °C and 148 °C for P25, 20 mg and 40 mg respectively are assigned to the adsorption of water or removal of moisture content from the TiO2 nanorods [17]. The increase in melting temperature, seen in the exothermic peaks at 528 °C, 542 °C and 549 °C, is attributed to the phase transformation of TiO2 nanoparticles into TiO2 nanorods [17]; as a result, the crystallinity of the TiO2 nanorods increased. These results correlate with the XRD analysis.
3.6. UV-visible absorption spectra

UV-visible absorption spectra of P25 and of the 20 and 40 mg TiO2 nanorods are shown in figure 6. Information about the band structure of compounds can be obtained from optical absorption spectra: an electron is excited from a lower to a higher energy state by absorbing a photon (inter-/intraband or excitonic transition). The main absorption wavelength of the TiO2 nanorods corresponds to the intrinsic absorption of the anatase phase of TiO2. The absorption shows a redshift towards higher wavelength, from 272 nm to 276 nm and 279 nm for P25, 20 mg and 40 mg TiO2 nanorods respectively, revealing that the valence band shifts toward the conduction band, resulting in a narrower band gap. Hence the energy required for electrons to transit from the valence band to the conduction band (optical excitation) decreases due to the redshift in the absorption [18]. The obtained results suggest that, after the formation of TiO2 nanorods, the optical response is expanded, improving the photocatalytic properties of the TiO2 nanorods [19].
Optical band gap energy
The optical band gaps of the samples were calculated using the relation between the absorption coefficient (α) and the incident photon energy (hν):

αhν = A(hν − Eg)^n,

where A is a constant, Eg is the optical band gap and n = 1/2 for direct allowed transitions, so that (αhν)² varies linearly with hν near the absorption edge. An indirect band gap corresponds to a transition of an electron from the valence to the conduction band that must be assisted by a phonon of the right magnitude of crystal momentum, since the bottom of the conduction band does not correspond to zero crystal momentum in indirect-band-gap materials; the absorption coefficient then has a different dependence on the photon energy [20]. The direct optical band gap values were extracted from the linear portion of the (αhν)² versus photon energy (hν) plots [21], as shown in figure 7. The optical band gaps decreased from 5.3 eV to 5.2 eV and 4.9 eV for P25, 20 mg and 40 mg TiO2 nanorods respectively.
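The Tauc-type extrapolation used here can be sketched as follows; the absorption data and fitting window are synthetic, chosen only to show the procedure:

```python
import numpy as np

def tauc_direct_band_gap(h_nu, alpha, fit_window):
    """Estimate a direct optical band gap by extrapolating the linear part of
    (alpha*h*nu)^2 versus h*nu to zero. fit_window = (E_low, E_high) selects
    the linear region, normally chosen by inspection of the plot."""
    y = (alpha * h_nu) ** 2
    lo, hi = fit_window
    mask = (h_nu >= lo) & (h_nu <= hi)
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
    return -intercept / slope  # x-intercept of the linear fit = Eg

# Synthetic absorption edge near 5 eV, for illustration only.
E = np.linspace(4.0, 6.0, 200)
alpha = np.sqrt(np.clip(E - 5.0, 0, None)) / E
print(tauc_direct_band_gap(E, alpha, (5.2, 5.8)))  # ~5.0 eV
```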
Dielectric property
The dielectric permittivity of P25 and of the TiO2 nanorods at 20 mg and 40 mg was studied as a function of frequency at room temperature; the dielectric constant (ε′) is presented in figure 8 and the dielectric loss (ε″) in figure 9.
The dielectric constant (ε′) of a material is related to the dipole polarizability, which arises from electric dipoles that can change their orientation of polarization when subjected to an applied electric field. The ε′ of P25 and of the TiO2 nanorods at 20 and 40 mg is frequency dependent at low frequency, because the dipoles respond significantly to the applied electric field and the contribution of the space-charge effect to the polarization tends to increase at lower frequencies, resulting in a high dielectric constant. The ε′ of P25 and of the TiO2 nanorods at 20 and 40 mg is frequency independent in the higher-frequency region, because the dipoles are unable to respond to the applied field and behave as if tightly bound, which maintains a constant, low dielectric constant [22]. The dielectric constant (ε′) and dielectric loss (ε″) were calculated using the relations

ε′ = Cd/(ε₀A) and ε″ = ε′ tan δ,

where C is the capacitance, d the sample thickness, A the electrode area, ε₀ the permittivity of free space and tan δ the loss tangent. It is observed that ε′ and ε″ of P25 and of the TiO2 nanorods at 20 and 40 mg decrease suddenly with increasing frequency, which may be because the dipole polarization fails to change its direction of orientation with the applied field, or the polarization of the dipoles decreases when the dipole rotation cannot follow the electric field changes at high frequencies, resulting in the decrease of the dielectric constant. ε′ and ε″ increase at low frequencies due to the accumulation of charges between the sample and the electrode, which increases the charge-carrier density through the increased dissociation of ion aggregates; hence a relaxation phenomenon occurs, which results in the increase of the dielectric parameters at low frequency and contributes to the increase of ionic conductivity [23].
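A minimal sketch of the two relations, assuming a parallel-plate pellet geometry with placeholder readings from the impedance analyzer:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_params(C, tan_delta, d, A):
    """Relative permittivity and dielectric loss from parallel-plate data:
    eps' = C*d/(eps0*A), eps'' = eps' * tan(delta)."""
    eps_real = C * d / (EPS0 * A)
    return eps_real, eps_real * tan_delta

# Hypothetical pellet: 1 mm thick, 1 cm diameter electrode, 50 pF at tan(delta)=0.02.
print(dielectric_params(C=50e-12, tan_delta=0.02, d=1e-3, A=7.85e-5))
```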
3.9. Current-voltage (I-V) characteristics

The I-V characteristics of the samples were examined at room temperature using a two-probe Keithley workstation, as shown in figure 10. The I-V characteristics reveal ohmic behaviour, i.e., the current increases linearly with the applied voltage in the range −5 V to +5 V. The current of the TiO2 nanorods increases with the field-induced polarization under the applied bias voltage, which is attributed to enhanced electrical conductivity [24]. By improving the interface and electrode contacts it is possible to obtain good electrical characteristics with high stability, which is helpful for high-quality device applications of the TiO2 nanorods. Due to the formation of TiO2 nanorods, charge recombination decreases and the charge-transfer rate increases, because the straightened nanorods provide a path for easy flow. As a result, the electron density and hence the conductivity increase.
Conclusion
The TiO2 nanorods were prepared by the hydrothermal method, and their structural, morphological, thermal, optical and conductive properties were investigated. The phase transformation of TiO2 from nanoparticles to nanorods, through changes in the chemical structure or bonds, was confirmed by the FTIR analysis. The surface morphology was studied by FESEM, and the formation of fine TiO2 nanorods was observed. The increased peak intensity is evidence of enhanced crystallinity, as observed in the XRD analysis. Thermal studies were carried out by TGA/DTA, and the results confirmed the increased thermal stability of the TiO2 nanorods; the DSC analysis revealed an increase in the melting temperature from 528 °C to 542 and 549 °C for P25 and the 20 mg and 40 mg TiO2 nanorods respectively. UV-vis spectroscopy exhibits a redshift in the wavelength, with the energy band gaps decreasing from 5.3 eV to 5.2 and 4.9 eV for P25 and the 20 mg and 40 mg TiO2 nanorods respectively. The dielectric properties increase at low frequency and decrease with increasing frequency. The I-V characteristics show a linear increase of current with applied voltage. Finally, the obtained results prove that the characteristics of the TiO2 nanorods are more favourable than those of P25 for potential applications; hence, hydrothermally synthesized TiO2 nanorods may outperform P25 nanoparticles.
The role of double ionization on the generation of doubly charged ions in copper vacuum arcs: insight from particle-in-cell/direct simulation Monte Carlo methods
Metal vapour vacuum arcs are capable of generating multiply charged metallic ions, which are widely used in fields such as ion deposition, ion thrusters and ion sources. According to the stationary model of the cathode spot, those ions are generated mainly by electron-impact single ionization in a step-wise manner, i.e. M → M⁺ → M²⁺ → ⋯. This paper is designed to study quantitatively the role of double ionization, M → M²⁺, in the breakdown initiation of copper vacuum arcs. A direct simulation Monte Carlo (DSMC) scheme for double ionization is proposed and incorporated into a 2D particle-in-cell (PIC) method. The super-particles of Cu²⁺ ions generated from different channels are labelled independently in the PIC-DSMC modelling of vacuum arc breakdown. The cathode erosion rate based on PIC modelling is about 40 μg/C in the arc burning regime, which agrees well with previous experiments. The temporal discharge behaviours, such as arc current, arc voltage and ionization degree of the arc plasma, are influenced negligibly by the inclusion of double ionization. However, additional Cu²⁺ ions are generated near the cathode during breakdown initiation through the double ionization channel, with a lower kinetic energy on average. Therefore, the spatial distribution and energy spectra of Cu²⁺ ions differ with and without double ionization. This paper provides a quantitative research method to evaluate the role of multiple ionization in vacuum arcs.
Introduction
Metal vapour vacuum arcs are widely used in deposition systems [1], circuit breakers [2], ion sources [3], electrical thrusters [4], etc. The vacuum arc belongs to the arc discharge regime, which manifests itself as a high-current, low-voltage discharge in its current-voltage characteristic [5].
Although the burning voltage of the arc is typically several tens of volts, the initiation of the vacuum breakdown requires a voltage up to several thousands of volts. The reason is that a surface electric field on the order of 10⁹ V/m is required for the explosive emission or thermo-field emission of electrons from the cathode [6]. Accompanying the emission of electrons, metal vapours/plasmas are emitted from the cathode surface. The essential part of a vacuum arc is the cathode spot, which consists of a highly ionized, multiply charged metallic plasma. In fact, metallic ions with average charge states up to +6 were observed in short-pulse high-current discharges in vacuum [7,8].
The generation mechanism of such highly charged ions is not clear, partly due to the controversial models of cathode spots. One is a stationary model [9], in which the ions are generated mainly, in a mild manner, by electron-impact ionization of metallic vapours. The other is a non-stationary model, in which the ions are generated mainly, in a violent manner, by explosive emission of dense plasma [10,11]. The two models may both be present in different stages of the life cycle of cathode spots, depending on the current densities, the local heating, the surface morphology and other factors. For example, in the non-stationary model, the dense plasma may result from a rapid phase transition of the cathode material directly from the solid state to a non-ideal plasma [12], with the local temperature beyond the critical point of the cathode material, while in the stationary model the metallic vapours are emitted from the already melted cathode surface, with the local temperature in the range from the melting point to the boiling point of the cathode material [13][14][15].
This paper focuses on the stationary model and examines, by the particle-in-cell (PIC)/direct simulation Monte Carlo (DSMC) method, the contribution of double ionization to the generation of Cu²⁺ ions.
The PIC-DSMC method is a first-principles calculation tool, which is widely used in simulation studies of vacuum arcs [16][17][18][19][20]. In previous researches, the role of double ionization Cu → Cu²⁺ was presumably neglected without further justification from either experimental or modelling studies; most studies took the step-wise ionization Cu → Cu⁺ → Cu²⁺ → ⋯ for granted in vacuum arc modelling. In this paper, the generation of doubly charged copper ions Cu²⁺ directly from neutrals is studied, for at least two reasons. The first reason is that electrons can gain enough energy in the vacuum breakdown stage to overcome the double ionization threshold when they collide with neutrals.
The second is that in the generation of Cu²⁺, double ionization is a single-step process, while step-wise ionization is a two-step consecutive process. The study here is expected to give firm evidence of what role double ionization plays in the generation of multiply charged ions.
General description of the PIC-DSMC method
Cathode plasma is composed of large numbers of particles of different species, i.e. electrons, multiply charged ions and neutrals. For each species, the temporal evolution of the distribution function f(r, v, t) in real space r and velocity space v satisfies the nonlinear Boltzmann equation (BE) [21]

∂f/∂t + v·∇_r f + (F/m)·∇_v f = (∂f/∂t)_coll,   (1)

where F is the force acting on a particle of mass m.
Because the cathode plasma is highly ionized, the number densities of neutral particles may be very close to, or even less than, those of charged particles, and the neutrals cannot be viewed as a background. The DSMC method is used to describe the collision operator on the right-hand side of Eq. (1),

(∂f/∂t)_coll = ∫∫ ( f' f₁' − f f₁ ) g σ(g, Ω) dΩ d³v₁,   (2)

in which the distribution functions for all species need to be solved. This is in contrast to the test-particle MCC method for linear Boltzmann equations [22],

(∂f/∂t)_coll = ∫∫ ( f' F' − f F ) g σ(g, Ω) dΩ d³v₁,   (3)

in which the distribution function F of the neutral background is usually assumed to be Maxwell-like and is not solved. The collision processes are simulated by a Monte Carlo technique over all colliding super-particles inside each cell, one by one. Taking binary collision as an example, a pair of super-particles, called the projectile and the target respectively, is chosen randomly every time step.
The mass of the projectile and target particles is denoted as m₁ and m₂, and their velocities as v₁ and v₂ respectively. The collision probability p_coll is calculated as

p_coll = 1 − exp[ −n_t σ(ε_rel) u Δt_coll ].   (4)

In Eq. (4), Δt_coll is the collision time step, u = |v₁ − v₂| is the relative velocity, n_t is the number density of the target particle, ε_rel is the collision energy of the relative motion between projectile and target, and σ is the integrated cross section. The collision occurs if the collision probability p_coll is larger than a random number R₀₁ evenly distributed in [0, 1). By enforcing the momentum conservation law during the collision, the post-collision velocities can be written as

v₁' = v_cm + m₂/(m₁ + m₂) u',  v₂' = v_cm − m₁/(m₁ + m₂) u',   (5)

where the superscript prime means the velocity after collision and v_cm = (m₁v₁ + m₂v₂)/(m₁ + m₂). The magnitude of the post-collision relative velocity is

|u'| = √(u² − 2ΔE/μ),  μ = m₁m₂/(m₁ + m₂),   (6)

where ΔE is the change of kinetic energy during the collision; it equals zero for elastic collision as a special case. The deflection of u is defined by the scattering and azimuth angles. The scattering angle is determined by the normalized differential cross section I through another random number R₀₁. In all cases of electron-copper collisions here, isotropic scattering is assumed.
The azimuth angle is set to be randomly distributed between 0 and 2π.
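A minimal sketch of this binary-collision step (Eqs. (4)-(6)) is given below; the exponential acceptance form and isotropic scattering follow the text, while all inputs are placeholders:

```python
import numpy as np

rng = np.random.default_rng()

def binary_collision(v1, v2, m1, m2, n_t, sigma, dt, dE=0.0):
    """Accept/reject one DSMC pair via Eq. (4); on acceptance, scatter
    isotropically with kinetic-energy change dE (Eqs. (5)-(6))."""
    u = v1 - v2
    g = np.linalg.norm(u)
    p_coll = 1.0 - np.exp(-n_t * sigma * g * dt)
    if rng.random() >= p_coll:
        return v1, v2                       # no collision this time step
    mu = m1 * m2 / (m1 + m2)                # reduced mass
    g_new = np.sqrt(max(g * g - 2.0 * dE / mu, 0.0))
    cos_t = 1.0 - 2.0 * rng.random()        # isotropic scattering angle
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = 2.0 * np.pi * rng.random()        # azimuth uniform in [0, 2*pi)
    u_new = g_new * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)
    return v_cm + m2 / (m1 + m2) * u_new, v_cm - m1 / (m1 + m2) * u_new
```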
Regarding the left-hand side of the BE, the motion of a non-relativistic super-particle satisfies Newton's second law, and the details of the numerical scheme can be found in the paper on the PIC method [23]. The electric field inside the plasma is calculated from the Poisson equation

∇²φ = −ρ_q/ε₀,  E = −∇φ,   (7), (8)

where ε₀ is the permittivity in vacuum and ρ_q is the charge density, with contributions from all charged particles. The Poisson equation and the BE are coupled with each other through the electric field/potential and the charge density. The field is interpolated from the mesh grids to the particle positions, and the charge density is scattered from the particle positions r_j to the mesh grids r_k. The assignment of charge density can be written as

ρ_q(r_k) = (1/V_k) Σ_α q_α W_α S(r_k, r_α),   (9)

where W_α is the weight of particle α, S(r_k, r_α) is the shape factor, and V_k is the cell volume. The popular strategy for the shape factor is the so-called cloud-in-cell (CIC) scheme, which is a first-order weighting scheme. On a 2D rectangular grid, the CIC scheme is written as

S(r_k, r_α) = (1 − |z_k − z_α|/Δz)(1 − |r_k − r_α|/Δr) if |z_k − z_α| < Δz and |r_k − r_α| < Δr, and 0 otherwise.   (10)
The cell volume is calculated by the method proposed by Verboncoeur [24]. At the same time, the interpolation of the field from the mesh grids to the particle positions is done in an inversely analogous way, consistent with that of the charge density.
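A minimal CIC deposition sketch on a uniform rectangular grid is shown below; normalization by the cell volume and the cylindrical-geometry correction of Verboncoeur [24] are left out for brevity:

```python
import numpy as np

def cic_deposit(rho, x, z, q_w, dx, dz):
    """Cloud-in-cell charge assignment on a 2D rectangular grid (Eq. (10)).
    rho: 2D array indexed [ix, iz]; q_w: charge times super-particle weight.
    Division by the cell volume is applied afterwards."""
    fx, fz = x / dx, z / dz
    ix, iz = int(fx), int(fz)
    wx, wz = fx - ix, fz - iz              # fractional offsets within the cell
    rho[ix,     iz    ] += q_w * (1 - wx) * (1 - wz)
    rho[ix + 1, iz    ] += q_w * wx * (1 - wz)
    rho[ix,     iz + 1] += q_w * (1 - wx) * wz
    rho[ix + 1, iz + 1] += q_w * wx * wz

rho = np.zeros((8, 8))
cic_deposit(rho, x=1.25e-7, z=0.6e-7, q_w=1.0, dx=5e-8, dz=5e-8)
```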
Detailed DSMC scheme of double ionization process
The DSMC scheme of double ionization for equally weighted super-particles is shown below. For not equally weighted particles, either the rejection method [25] or the merging method [26] can be consulted. Without loss of generality, the process of double ionization is written as

e + A → A²⁺ + 3e.
The velocities of the two reactants are known, and the DSMC scheme is to find the post-collision velocities of the four products. A strict formulation would require complete knowledge of the differential ionization cross sections to determine the energy partition and scattering angles between the primary and secondary electrons. Unfortunately, this information is not available right now, even for a single incident energy of electrons impacting on copper. Therefore, the double ionization is decoupled into four stages of binary collisions, and the scheme for each stage is quite mature. In the first stage, the neutral A undergoes an inelastic collision, in which the electron loses an energy equal to the double-ionization threshold. In the second stage, the meta-stable atom A* splits into an electron and a doubly charged ion, similar to auto-ionization. In this stage, the newborn particles inherit the velocity and position of the old particle.
In the third stage, the primary electron e₁ and the secondary electron e₂ undergo an elastic collision, similar to a Coulomb collision. In the last stage, one of the electrons splits into two electrons, and the double ionization is completed.
It is noted that the second stage and the last stage do not conserve electric charge and mass within a single stage; however, the total process conserves charge, momentum, energy and mass through the combination of the four stages.
The splitting scheme in the last stage is shown as an example. To enforce the conservation of momentum and energy simultaneously, the velocity vectors after splitting form the two edges of a rectangle, and the velocity vector before the collision is the diagonal of the rectangle. The deflection of velocity between e₅ and e₃ is described by the scattering angle χ and the azimuth angle ψ. To find the velocity of e₅, the following steps are used. The first step shortens the velocity vector v of e₃:

|v₅| = |v| cos χ.   (16)
The second step transforms the velocity from the laboratory frame to a local frame in which only the z-component of the velocity vector is non-zero:

v_local = R(θ, φ) v.   (17)

In Eq. (17), θ is the angle between the z-axis and v before the collision, and φ is the angle between the x-axis and the projection v_xy of v on the xy plane before the collision.
The third step scatters the velocity in local frame The last step transforms the velocity from the local frame back to the laboratory frame The transforming matrix can be written as cos cos cos sin sin R( , ) sin cos 0 sin cos sin sin cos
Simulation models
The double ionization scheme is incorporated into the 2D3V PIC-DSMC method in cylindrical geometry developed previously [27]. The modelling starts from a complete vacuum between two plane electrodes, and the domain is z = 6 μm and r = 24 μm. The spatial step is 50 nm in both the z and r directions. The electron time step is 1 fs, and the time step of heavy particles is ten times that of electrons. The tracked super-particles include electrons, Cu neutrals, Cu⁺, Cu²⁺, Cu³⁺ and Cu⁴⁺, and all super-particles share the same weight of 100. Both simulation sets include electron-neutral elastic collisions, Coulomb collisions, charge-exchange collisions between neutrals and singly charged ions, and neutral-neutral elastic collisions, besides ionization collisions. In the first set of simulations, only step-wise ionization from Cu to Cu⁴⁺ is considered: Cu → Cu⁺ → Cu²⁺ → Cu³⁺ → Cu⁴⁺. In the second set, double ionization is added for the process of electron impact on copper neutrals, and the Cu²⁺ super-particles generated by the single ionization channel Cu⁺ → Cu²⁺ and the double ionization channel Cu → Cu²⁺ are labelled independently. The cross sections for the important channels relevant to double ionization are from [28][29][30] and are plotted in Figure 1. Other cross sections used in this simulation can be found in a separate paper [31] and references therein. Figure 1. The cross sections [28][29][30] for the important channels relevant to double ionization.
A module for the external circuit is coupled to the PIC simulation of the plasma. The cathode is grounded, and the anode is connected to a 2.9 kV high-voltage source through an external RC circuit with parameters R = 1 kΩ and C = 1 pF. To mimic the initiation of vacuum breakdown, an artificial tip of width 200 nm is located in the centre of the cathode with a nominal field enhancement factor of 35. In the initial stage, the tip-enhanced electric field is capable of extracting electrons from the cathode surface, with the electron current density calculated by the Fowler-Nordheim formalism. The neutral flux emitted from the cathode is set to a constant ratio of 3% of the emitted electron current density. No ions are injected at the surface or in the volume of the simulation domain; they are generated only by electron-impact ionization of neutrals and ions. As ion densities build up, extra neutral fluxes are sputtered into the simulation domain by energetic ions incident upon the electrodes, with the sputtering yield given by Yamamura and Tawara [32].
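The field-emission boundary condition can be illustrated with a short numerical sketch of the elementary Fowler-Nordheim equation, J = (a F²/φ) exp(−b φ^{3/2}/F) with F = βE. The constants below are the standard first and second FN constants; the barrier-shape correction factors, and the exact form used in the simulation code, are not given in the text, so this is a hedged illustration rather than the actual implementation. The copper work function value is also an assumption.

```python
import math

A_FN = 1.541434e-6   # A eV V^-2, first Fowler-Nordheim constant
B_FN = 6.830890e9    # eV^-3/2 V m^-1, second Fowler-Nordheim constant

def fowler_nordheim_j(E_macro, beta=35.0, phi=4.5):
    """Elementary FN current density (A/m^2) for a macroscopic field E_macro
    (V/m), field enhancement factor beta and work function phi (eV).
    Barrier-shape correction factors t(y), v(y) are omitted in this sketch."""
    F = beta * E_macro                   # local (tip-enhanced) surface field
    return A_FN * F**2 / phi * math.exp(-B_FN * phi**1.5 / F)

# example: 2.9 kV across the 6 um gap gives a macroscopic field of ~4.8e8 V/m
E = 2.9e3 / 6e-6
print(f"J_FN ~ {fowler_nordheim_j(E):.3e} A/m^2")
```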
The assumed 3% ratio of emitted neutral flux to electron current density is quite close to the value (2.5 ± 0.3)% obtained from a molecular dynamics study of the erosion of a copper nanotip by arc plasma [33]. Although the nominal ratio of 3% is set constant, the calculated specific cathode erosion rate varies with time between 20 and 40 μg/C, as shown in Figure 2 for the first simulation set. The erosion rate is calculated as the ratio of transferred mass to transferred charge. The time-varying erosion rate is ascribed to the time-dependent returning flux of heavy particles and the sputtering flux.
When the arc settles down to a steady burn at a time later than 0.6 ns, the cathode erosion rate reaches a stable value. The value calculated by PIC-DSMC (~40 μg/C) agrees well with measured experimental data of 35-40 μg/C from different groups [34,35].
Results and discussion
In this section, the simulation results from both sets are compared in order to unveil the role of double ionization during the vacuum breakdown. Firstly, we study the temporal evolution of macroscopic behaviour such as the current-voltage characteristic, the ionization degree, and the average charge state of the arc discharge. Next, the microscopic behaviour, namely the spatial distribution of the Cu2+ number density at different stages of the arc discharge, is studied. Lastly, the energy spectra of Cu2+ ions at different stages are also investigated. The super-particles of Cu2+ ions generated by the different ionization channels are labelled independently to trace their effects.

Temporal evolution of macroscopic discharge behaviours

Figure 3 compares the temporal evolution of (a) the arc voltage V and (b) the arc current I for both simulation sets. It is not surprising that for both macroscopic parameters a negligible difference is observed during the vacuum breakdown with or without consideration of double ionization. During the discharge, the voltage drop between the electrodes is calculated self-consistently with the external circuit. According to the simulation results, the current-voltage characteristic can be divided into three stages: the initiation, the breakdown, and the burn process. Because this paper focuses on the generation of multiply charged ions, the extinction process of the cathode spot is not simulated, to save computation time. In the initiation process, the gap voltage between the electrodes is high and the current is very low, although a local low-density plasma is already produced in the near-cathode region. In the breakdown process, the voltage drops and the current increases sharply, and the local plasma propagates toward the anode to form a conductive channel. In the burn process, a low arc voltage and a high arc current are reached and maintained in a steady state.
The temporal behaviour of the ionization degree and the average ion charge state shows a trend similar to that of the current-voltage characteristic: the influence of double ionization is negligible. In the initiation process, the ionization degree quickly increases, and the average charge state is about unity.

To explain the above results, the super-particle numbers of Cu+ and of the labelled Cu2+ from the second simulation set are shown in Figure 5. The number of Cu+ increases with time very quickly due to electron-impact ionization of Cu neutrals (black solid line). Once the number of Cu+ super-particles has grown to more than 100, the probability of single ionization of Cu+ to generate Cu2+ is quite high (green dashed line). However, double ionization of Cu neutrals to generate Cu2+ sets in slightly later (blue dotted line), because a large number of Cu neutrals are destroyed in the single ionization process. The inset shows the ratio of Cu2+ generated by the double ionization channel Cu -> Cu2+ to that generated by the single ionization channel Cu+ -> Cu2+. Only in the early stage of arc initiation is the number of Cu2+ generated by double ionization comparable to that generated by single ionization. After 0.2 ns, the number of Cu2+ generated by double ionization is less than 10% of that generated by single ionization.

Figure 7(a) is mirrored along the cathode for comparison. The most probable energy for Cu2+ ions when double ionization is considered is around 100 eV, whereas without double ionization it generally shifts to a higher energy around 130 eV. However, the shapes of the ion energy spectra at 1.0 ns almost coincide for the two simulation sets, with a most probable energy around 40 eV. The decrease of the most probable energy of Cu2+ ions during the expansion of the cathode plasma toward the anode is consistent with the simulation results in [20], in which the ion velocity along the symmetry axis is accelerated to 2×10^4 m/s (corresponding to a kinetic energy of 133 eV) and then quickly falls.
Energy spectra of Cu2+ ions
The energy spectra of the labelled Cu2+ ions generated by double and single ionization in the second simulation set are shown in Figure 9. At 0.3 ns, in Figure 9(a), it is found that the most probable energy of the doubly ionized Cu2+ ions is much lower than that of the singly ionized Cu2+ ions. Because the electron mass is much smaller than that of heavy particles, the electron-impact ionization process does not notably alter the velocity of the heavy particle. Therefore, the doubly ionized Cu2+ ions inherit their velocity essentially from the slow Cu neutrals, while the singly ionized Cu2+ ions inherit theirs from the faster Cu+ ions. Neutral Cu particles cannot gain energy directly from the electric field; hence those doubly ionized Cu2+ ions have a low energy on average. After a long evolution time of about 1 ns, the memory effect of the initial velocity is no longer evident during the expansion of the cathode plasma toward the anode. Therefore, similar shapes of the energy spectra are observed for Cu2+ ions generated by both single and double ionization, as shown in Figure 9(b).
Conclusions
In summary, a DSMC scheme for double ionization is proposed which conserves momentum, energy, charge, and mass during the collision. The DSMC scheme is then incorporated into a 2D3V PIC code to investigate the role of double ionization in vacuum arc breakdown. Besides double ionization, the model also includes electron-neutral elastic collisions, step-wise electron-impact ionization, neutral-neutral elastic collisions, charge exchange between neutrals and singly charged ions, and Coulomb collisions. A module for the external circuit is coupled to the PIC simulation. The PIC-DSMC modelling starts from a complete vacuum between two plane electrodes.
An artificial tip with nominal field enhancement is assumed in the centre of the cathode, where field-emission electrons are injected into the simulation domain. A constant ratio of neutral flux to electron current density is used, but the calculated cathode erosion rate is time-dependent, with values between 20 and 40 μg/C. Based on the simulated arc voltage and current, the discharge can be divided into three stages: initiation, breakdown, and burn. Although the general discharge behaviour in the arc burning regime is not influenced, the number density and energy spectra of Cu2+ ions during breakdown initiation are altered by the inclusion of double ionization. This paper provides a quantitative method to evaluate the role of multiple ionization in vacuum arcs.
"year": 2021,
"sha1": "fe63c4600d74a3c76c378abc7171ac5efef12dc3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fe63c4600d74a3c76c378abc7171ac5efef12dc3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Normative Data of the Trail Making Test Among Urban Community-Dwelling Older Adults in Japan
Introduction Population aging is likely to increase the number of people with dementia living in urban areas. The Trail Making Test (TMT) is widely used as a cognitive task to measure attention and executive function among older adults. Normative data from a sample of community-dwelling older adults are required to evaluate the executive function of this population. The purpose of this study was to examine the Trail Making Test completion rate and completion time among urban community-dwelling older adults in Japan. Methods A survey was conducted at a local venue or during a home visit (n = 1,966). Cognitive tests were conducted as a part of the survey, and TMT Parts A (TMT-A) and B (TMT-B) were completed after the completion of the Japanese version of the Mini-Mental State Examination (MMSE-J). Testers recorded TMT completion status, completion time, and the number of errors observed. Results In the TMT-A, 1,913 (99.5%) participants understood the instructions, and 1,904 (99.1%) participants completed the task within the time limit of 240 s. In the TMT-B, 1,839 (95.9%) participants understood the instructions, and 1,584 (82.6%) participants completed the task within the time limit of 300 s. The completion rate of TMT-B was 90.2 and 41.8% for participants with an MMSE-J score of >23 points and ≤23 points, respectively. Results of multiple regression analyses showed that age, education, and the MMSE-J score were associated with completion time in both TMTs. Conclusion In both TMTs, completion time was associated with age, education, and general cognitive function. However, not all participants completed the TMT-B, and the completion rate was relatively low among participants with low MMSE-J scores. These findings may help interpret future TMT assessments.
INTRODUCTION
Population aging is a global phenomenon, occurring at high rates in Eastern and South-Eastern Asia, Latin America, and the Caribbean (United Nations, The Population Division of the Department of Economic and Social Affairs, 2020). Japan is one of the most rapidly aging societies in the world; even in urban areas where the current aging rate is relatively low, the size of the aging population is expected to increase rapidly in the near future (Cabinet Office Japan, 2018). Approximately 30% of Japan's total population lives around Tokyo (Statistics Bureau of Japan, 2021). Population aging is likely to increase the number of people with dementia living in urban areas (Asada, 2013). That study of dementia prevalence in Japan estimated that the prevalence increases when urban areas are included, and that the overall prevalence in Japan is about 15%. Progress in dementia prevention and management requires insight into cognitive function in older adults, in particular the faculties that directly affect the activities of daily living. Deficits in the attention and executive function needed to carry out tasks smoothly according to presented procedures can impair quality of life in urban areas, where the environment is prone to change.
The Trail Making Test (TMT) is widely used as a cognitive task to measure attention and executive function among older adults (Arbuthnott and Frank, 2000;Tombaugh, 2004). It involves connecting randomly arranged circles with a pencil, and comes in Parts A (TMT-A) and B (TMT-B), which are used for functional evaluation of patients with brain injury (Reitan, 1958). In TMT-A, numbers are written in circles, and test takers are asked to connect the numbers in ascending order. In TMT-B, numbers or letters are written in circles, and test takers are asked to connect them alternately and in ascending order. In both TMTs, the time to completion is the main evaluation index. Processing speed such as that required for visual search is strongly reflected in the results of TMT-A, and working memory and cognitive flexibility are involved in TMT-B (Lezak et al., 2012). The TMT is not only sensitive to changes in cognitive function due to brain injury (Reitan, 1958); it may also detect those that occur due to aging and education (Hashimoto et al., 2006;Specka et al., 2021). The TMT is widely used in the evaluation of cognitive function in older adults, including evaluation indicators in intervention studies (Jacobs et al., 2020;Suzuki et al., 2020), driving performance in older adults (Vaucher et al., 2014), and studies examining the relationship between physiological indicators and attention function (Uchida et al., 2020). Both TMT scores are sensitive to progressive cognitive decline associated with dementia (Greenlief et al., 1985). Older adults with poor TMT-B performance have problems performing the activities of daily living (Bell-McGinty et al., 2002).
Normative data from a sample of community-dwelling older adults are required to evaluate the executive function of this population. Some studies have examined normative values for TMT completion time in healthy older adults (Tombaugh, 2004; Mitrushina et al., 2005; Cangoz et al., 2009; Fallman et al., 2020), but few reports have accounted for TMT completion status (Seo et al., 2006; Wei et al., 2018; Specka et al., 2021). TMT-B involves complex processes, and understanding what percentage of test takers complete the test is necessary to contextualize completion time values in community-dwelling older adults. Excluding data from adults unable to complete TMT-B may result in a standardized TMT-B value that overestimates the population's ability to complete this test. By classifying the completion status and examining the characteristics of older adults who have difficulty understanding the instructions or completing the test within the time limit, it becomes possible to identify appropriate methods of evaluating executive function with the TMT-B while reducing the burden on participants. In addition, although TMT completion time is affected by variables such as age, education, and general cognitive function, few previous studies have reported on the impact of these characteristics in the context of large-scale normative data for community-dwelling older adults in urban areas.
This study aimed to examine the normative TMT completion rate and completion time values for older adults living in urban areas using data from the Takashimadaira study (Iwasaki et al., 2020), a large-scale survey of community-dwelling older adults. Normative data from large-scale surveys may contribute to executive function evaluations using the TMT in older adults living in urban areas. Simultaneously evaluating the TMT completion status, age, sex, education, and general cognitive function of participants allows us to examine associations between participant characteristics and test performance. This study is the first to comprehensively examine the association between the TMT and these variables using large-scale data on older adults. By using large-scale data to present normative values based on each attribute associated with TMT performance, it is possible to detect whether attention and executive function are lower than expected for age in a diverse group of older adults in the community. This contributes to early screening for cognitive decline in old age, such as mild cognitive impairment.
MATERIALS AND METHODS
This study was conducted from 2016 to 2017 in the Takashimadaira area of Itabashi Ward, which is located on the north side of Tokyo, Japan. Takashimadaira is a large housing complex built during the 1970s, a high-growth period in Japan. Within Itabashi Ward, Takashimadaira is home to a high percentage of adults aged ≥65 years, and the aging of the urban population is occurring ahead of other areas (the total population is approximately 32,500). A mail survey was conducted as a primary survey of all adults aged ≥70 years (n = 7,614) living in this area. The people who responded to the first mail survey (n = 5,432) were invited by letter to participate in the second survey, which involved face-to-face health-related interviews. The second survey was conducted at a local venue or during a home visit. A total of 1,966 people who responded to the TMT were eligible for this study. The participants' demographic characteristics are presented in Table 1. Because we wanted to examine completion status and times for a diverse population with mixed characteristics in the community, we did not apply any exclusion criteria to our sample. For the normative data, datasets were created both for all participants and for participants without neurological symptoms.
Data Collection and Variables
Data on demographic characteristics were collected using the first mail survey. Cognitive tests were conducted as part of the secondary survey, and the TMTs were performed after the Mini-Mental State Examination (MMSE-J) (Folstein et al., 1975; Sugishita and Hemmi, 2010). The cognitive tests were conducted by trained nurses or psychologists. Information on education was missing for 69 participants and MMSE-J scores were missing for 16, owing to response refusals and survey errors; in one case, both education and MMSE-J information was missing. The Japanese version of the TMT used in this study was based on the original version (Reitan, 1958) and had been used in previous studies (Suzuki et al., 2014, 2020). Both parts of the TMT consisted of 25 scattered circles drawn on the examination paper. In TMT-A, the circles were numbered from 1 to 25, and the participants were asked to draw lines connecting the numbers in ascending order as quickly as possible. In TMT-B, the circles contained either numbers from 1 to 13 or the first 12 characters of the Japanese Hiragana alphabet, and participants were required to connect numbers and characters alternately. Practice tests were performed for both parts, involving eight circles with a layout different from that used in the final test; all participants received instructions on how to perform the task until they understood it, and participants who failed to understand the training task did not perform the final tests; their outcome was recorded separately. During the final test, the time required to complete the task was recorded (in seconds), as were participant errors; when errors were noted, participants were asked to stop and then resume the task from the place where they had made the error. The time limit was set to 240 and 300 s for Parts A and B, respectively. The implementation status, completion time, and number of errors were recorded by the tester.
We distinguished between two types of reasons for failure to complete the task to understand the type of difficulties associated with it: failure to understand the instructions and failure to complete the task within the time limit despite understanding the instructions. These items were recorded by the testers and cross-checked by other testers. The time required by participants to complete the test was aggregated to obtain a normative value.
Statistical Analysis
The participants' characteristics and normative test values were examined. Density plots are shown for all participants and for participants stratified to exclude older adults with neurological symptoms (history of stroke, Parkinson's disease, dementia, and current depression). Correlation analyses were performed between the demographic variables and completion times on TMT-A and TMT-B. Multiple regression analysis was performed to examine the impact of age, years of education, sex, and MMSE-J scores on TMT completion time; these analyses were performed separately for all participants and for participants without neurological symptoms. Log-transformed completion times were used for the multiple regression analyses. Chi-square tests and analyses of variance were performed to compare participant characteristics among three groups: those who completed the test, those who did not complete it within the time limit, and those who failed to understand the instructions. Bonferroni's method was used for multiple comparisons. All analyses were performed using IBM SPSS Statistics for Windows version 25 (IBM Corp., Tokyo, Japan).
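As a rough illustration of the regression step, the following Python sketch fits a multiple regression of log-transformed completion time on age, education, sex, and MMSE-J score. The dataframe and column names are hypothetical, not the study data, and statsmodels reports unstandardized coefficients, whereas the paper reports standardized betas.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical dataframe with one row per participant
df = pd.DataFrame({
    "tmt_b_seconds": [95, 140, 210, 120, 180, 105, 160, 230],
    "age": [72, 78, 85, 74, 81, 70, 77, 88],
    "education_years": [12, 9, 8, 16, 12, 14, 10, 6],
    "sex_male": [1, 0, 0, 1, 1, 0, 1, 0],
    "mmse_j": [29, 27, 23, 30, 26, 29, 28, 22],
})

# log-transform the right-skewed completion times, as described in the paper
y = np.log(df["tmt_b_seconds"])
X = sm.add_constant(df[["age", "education_years", "sex_male", "mmse_j"]])

# ordinary least squares; coefficients here are unstandardized
model = sm.OLS(y, X).fit()
print(model.summary())
```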
Ethics Approval and Participants Consent Statement
This study was conducted in accordance with the ethical principles of the Declaration of Helsinki and was approved by the Ethics Committee of the Tokyo Metropolitan Institute of Gerontology (approval number 9 and 31 in 2016). Written informed consent was obtained from all the participants prior to the survey.
Completion Rates of the Trail Making Tests in Older Adults
Data that were invalid due to tester errors and data from participants who could not perform the TMT due to sensory or physical dysfunction were excluded. A total of 1,922 and 1,917 participants provided valid TMT-A and TMT-B data, respectively (shown in Figure 1). For TMT-A, 1,913 (99.5%) participants understood and undertook the task, and 1,904 (99.1%) completed it within the time limit. For TMT-B, 1,839 (95.9%) participants understood the procedure, and 1,584 (82.6%) completed it within the time limit. Most participants were able to complete TMT-A, whereas 17.4% of the participants were unable to complete TMT-B. Participants who completed TMT-B within the time limit were younger than the non-completers and had more years of education and higher MMSE-J scores (p < 0.01). Among the non-completers, the participants who could not understand the instructions were older and had lower MMSE-J scores (p < 0.01). No sex-based differences were found among the three groups (Table 2). Of the participants who could not understand the instructions, 34.6% had some neurological condition, as did 22.7% of the participants who understood the instructions but were unable to complete the task within the time limit. Participants with a history of stroke were more likely to be able to complete the task within the time limit (p < 0.01).
Normative Data of the Trail Making Tests in Older Adults
Trail Making Test completion time values are shown in Table 3.
The density plots are shown in Figure 2. The mode, median, and mean TMT-A completion time values were 37, 46, and 52.7 s, respectively, indicating a log-normal distribution (shown in Figure 2A). The TMT-B completion time values also showed a log-normal distribution, with mode, median, and mean values of 107.0, 126.5, and 137.4 s (shown in Figure 2B). The shape of the distribution did not change when older adults with neurological symptoms were excluded (shown in Figures 2C,D). Details of the normative data are available in the Supplementary Material. Data excluding participants with neurological symptoms showed a completion rate of 99.5% for TMT-A and 85.2% for TMT-B (shown in Supplementary Appendix 2A,B).
Relationships Between Completion Times and Demographic Characteristics
Multiple regression analyses were used to examine the impact of demographic characteristics on TMT-A and TMT-B completion times (Table 4). For all multiple regression analyses, residual histograms and plots confirmed that there were no problems with normality or homoscedasticity. TMT-A completion time was associated with MMSE-J scores (β = −0.33, p < 0.01) more strongly than with age (β = 0.27, p < 0.01) or education (β = −0.13, p < 0.01). TMT-B completion time was associated with MMSE-J scores (β = −0.29, p < 0.01). Similarly, among cognitively healthy older adults, MMSE-J scores (β = −0.30, p < 0.01) were associated with TMT completion time more strongly than age (β = −0.26, p < 0.01). Sex was associated with TMT-B (β = −0.08, p < 0.01) but not with TMT-A completion times (β = −0.02, p = 0.59). In the overall sample and among cognitively healthy older adults, males performed more slowly than females. There was a moderate correlation between TMT-A and TMT-B completion times (r = 0.44).
DISCUSSION AND CONCLUSION
In this study, we examined the normative values of the TMT-A and TMT-B completion time, and their association with participant age, education, sex, and general cognitive function among community-dwelling older adults living in urban areas.
While the completion rate of TMT-A was high, approximately 4% of the participants failed to understand the TMT-B instructions, and 13% could not complete the task within the time limit. The distribution of completion time values for the participants who completed the TMT was right-skewed, and the mean TMT completion time values were accordingly higher than the median values; this finding is consistent with previous studies (Tombaugh, 2004). This skew reflects the influence of older adults who took a long time to complete the test; the right-skewed distribution persisted even when the analysis was limited to healthy older participants. Data excluding those with neurological symptoms showed higher completion rates for both TMT-A and TMT-B, but the differences were small (0.4 percentage points for TMT-A and 2.6 percentage points for TMT-B). While most older adults whose MMSE-J scores were below the cut-off value failed to complete the TMT-B, the TMT-A could be performed by most older adults with neurological symptoms. In addition, this study cohort included older adults with MMSE-J scores of <23 points who did not have dementia (Ura et al., 2020). TMT-A can be completed even by older adults experiencing a decline in general cognitive function, suggesting it may help evaluate attention function in community surveys over time.
Fewer participants with a history of stroke were unable to understand the instructions in TMT-B. A similar, although not significant, trend was observed for Parkinson's disease and current depression. For these patients, cognitive decline is associated with disease, and an extended time limit may be useful for properly assessing executive function. On the other hand, in the case of dementia, the number of participants who could not complete the task within the time limit or could not understand the instructions was higher. Even within the time limit, the task should be discontinued if it is unlikely that the participant will be able to complete it, thereby reducing the burden on the participants.
The TMT completion time was associated with age and general cognitive function test scores; this finding is consistent with a previous study (Mitrushina et al., 2005). However, an effect of sex was observed for TMT-B but not for TMT-A performance. Regarding sex effects on the MMSE-J scores, a subgroup analysis of participants who completed TMT-B revealed that males scored significantly lower than females (data not shown). This finding indicates that men can complete TMT-B even if their MMSE-J score is low. Consequently, men who completed TMT-B tended to do so more slowly than women, resulting in an association between sex and test completion time. The MMSE-J score and age were strongly associated with TMT performance in both Parts A and B. These associations were also found in an analysis that excluded participants with neurological symptoms. These results suggest that the TMTs may help detect age-related changes in attention and executive function before their clinical manifestation. TMT-B non-completers were older and had lower MMSE-J scores than completers. TMT-B is a complex task, and performance may be affected by cognitive decline. For this reason, TMT-B results were excluded from a study that reported standardized scores for cognitive tests in older adults (Fallman et al., 2020). Overall, 13.9% of the participants who understood the TMT-B instructions could not complete the task within the time limit. In studies involving TMT-B, older adults with cognitive impairment may be naturally excluded. In addition, participants with low MMSE scores who are expected to have difficulty completing TMT-B may experience psychological distress when asked to complete the task (Brayne et al., 2007; Fowler et al., 2012); therefore, this task is not recommended for use with cognitively impaired individuals. Screening tests for mild cognitive impairment evaluate whether TMT-B can be completed with simple instructions (Nasreddine et al., 2005, 2012). This approach may help evaluate executive function without the risk of causing distress.
Half of the participants who completed TMT-B made errors, resulting in the loss of time. Testers should be highly trained to effectively detect and record such errors; these records may help interpret TMT-B results.
To adequately evaluate TMT normative data, the following six criteria are deemed critical (Mitrushina et al., 2005): (1) fifty cases are considered a desirable sample size; (2) information regarding medical and psychiatric exclusion criteria is important; (3) age group intervals; (4) reporting of education levels; (5) reporting of intellectual levels; and (6) reporting of means and standard deviations, and preferably ranges, for total time in seconds for each part of the TMT. In this study, criteria (2) through (6) are met. However, there are subgroups of older participants with fewer than 50 cases. For example, in the TMT-B for ages 90 and older, there are 17 completers with MMSE-J scores of 24 or higher and only 5 completers with scores of 23 or lower (Supplementary Appendix 1B). As sample size decreases, the influence of outliers increases, resulting in a reversal: the average completion time for the former group was 183.8 s, while that for the latter group was 158.8 s, so the group with higher cognitive function had a slower completion time. These data should be considered reference information rather than normative data.
A comparison of the matchable portions of the completion times in the present study with those of a similar study on TMT normative data conducted in Germany (Specka et al., 2021) showed that the differences were within a few seconds for both TMT-A and TMT-B. The difference in TMT-B was largest for 70-74-year-olds with higher education, with a difference of about 8 s, the Japanese data being slower. Specka et al. (2021) excluded older adults with mild cognitive impairment in addition to those with neurological symptoms. The Japanese data do not exclude mild cognitive impairment, and this may be reflected in the difference in the normative data. However, in other areas the differences were only about 3 s, suggesting that cultural differences have essentially little effect on the speed of TMT execution. This study has some limitations. The older age groups in this study were too small for normative data; data for ages above 85 should be added. The sample size for participants in their 70s is large but may include undetected early dementia and mild cognitive impairment. Accumulation of data separating healthy older adults from those with mild cognitive impairment could lead to earlier detection of cognitive decline. Although this study used reliable data to establish normative values for older adults living in urban areas, it did not include older adults living in rural areas. Highly educated and high-functioning older adults may be over-represented in urban areas; thus, the presented findings may not apply to the general older adult population. This is a sampling bias of this study, which conducted its initial recruitment by mail in an urban area. Recent studies have shown that urban life is more protective against cognitive decline than rural life (Robbins et al., 2019; Hirst et al., 2021). However, this effect tends to be observed only in the early stages; the subsequent decline tends to be rapid (Xiang et al., 2018). Surveys of the general older adult population may result in completion rate, completion time, and error rate values lower than those presently reported. For older people with low MMSE-J scores, more accurate criteria are needed to determine normative TMT data.
This study provided comprehensive TMT normative data stratified by age, education, sex, and general cognitive function for older adults living in an urban area of Japan. In both TMTs, completion time was associated with age, education, and general cognitive function. However, not all participants could complete TMT-B, particularly those with low MMSE-J scores. These findings may support the interpretation of past and future study results using TMTs.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the Tokyo Metropolitan Institute of Gerontology (approval number 9 and 31 in 2016). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
HS and NS: conceptualization, methodology, project administration, and writing-review and editing. HI, AE, CU, FM, SO, and MK: data curation. SO, NS, and HS: formal analysis. NS, MK, SO, and HS: investigation. YW, SS, and SA: project administration, resources, and supervision. HS: visualization and writing-original draft. All authors read and approved the published version of the manuscript.
FUNDING
This work was supported by the Tokyo Metropolitan Government.
"year": 2022,
"sha1": "a6d78f72d6434de0d97566e05737a6c1b622c92b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a6d78f72d6434de0d97566e05737a6c1b622c92b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1H NMR Spectroscopy to Characterize Italian Extra Virgin Olive Oil Blends, Using Statistical Models and Databases Based on Monocultivar Reference Oils
During the last few years, the global demand for extra virgin olive oil (EVOO) has increased. Olive oil represents a significant percentage of world fat consumption, driving an important development of its market. In this context, the problems related to counterfeiting and product fraud are becoming extremely relevant. Thus, the quality and authenticity control of EVOOs is nowadays mandatory. In this study we focused on the use of the 1H NMR technique associated with multivariate statistical analysis to characterize Italian commercial EVOO blends. In particular, a specific database including 126 monocultivar EVOO reference samples was used to characterize a total of 241 Italian EVOO blends over four consecutive harvesting years. Moreover, the effect of the minor components (phenolic compounds) on the qualitative characterization of blended EVOOs was also evaluated. The correlation analysis of classification scores obtained using two pairwise orthogonal partial least square-discriminant analysis models (built with major and combined major-minor components NMR data) revealed that both could be profitably used to generally classify the studied Coratina-containing blends.
Introduction
To date, due to its sensorial, nutraceutical, and other well-known health properties, extra virgin olive oil (EVOO) is considered one of the most important classic products of the Mediterranean countries, often called "the yellow gold" [1][2][3]. World olive oil production is still essentially concentrated in the Mediterranean basin, in particular in Spain and Italy, two countries accounting for almost all world exports (60% Spain and 20% Italy). The Italian product covers on average 15% of world production (compared with 45% for Spain) [4]. Italian EVOO is recognized worldwide as a high-value product thanks to its well-established "health appeal", quantitatively limited production, and organoleptic characteristics [1,5]. Indeed, the Italian food market is widely known for its richness of varieties and high-quality products [6]. Over the years, the high national and international commercial value of Italian EVOOs has led to their adulteration with low-quality foreign olive oils and, in some cases, with the addition of other low-cost edible vegetable oils of uncertain origin. Therefore, the establishment and systematic implementation of a reliable quality control methodology for certifying EVOO authenticity is a priority issue, addressed by European Union Regulation 182 of 6 March 2009 [7].

OPLS-DA models were therefore built using both the standard zg NMR spectra and the combined zg-noesy NMR spectra. The latter better account for the phenolic components responsible for the bitterness/pungency characteristics of the blends, essentially due to their expected Coratina content. The studied blend samples were classified by prediction on these two models, and the correlation observed between the resulting classification scores suggests a reasonable efficiency of both models. This NMR-based method could be profitably used as a tool to classify commercial oil samples, putting a gate around high-quality blend EVOOs and defining their characteristics with respect to the blend constituents.
Materials and Methods
Chemicals: All chemicals were of analytical grade: CDCl3 (99.8 atom % D) and tetramethylsilane (TMS, 0.03 v/v%) were purchased from Armar Chemicals (Döttingen, Switzerland). The numbers of reference samples per cultivar were chosen according to their use in blend production, as declared by the providers. Since the model was designed for the classification of Coratina-based blends, the number of Coratina oil samples was higher, in order to account for the prevalence of the Coratina cultivar and its variability within the Apulia Region [27]. All samples were stored in sealed dark glass bottles at room temperature in the dark until NMR analysis. The information reported on the label of each sample is described in detail in Table 1.
Sample Preparation for NMR Analyses
Each sample was prepared by dissolving ~140 mg of olive oil in a volume of deuterated chloroform (CDCl3) calculated on the basis of a 13.5% oil/86.5% CDCl3 (w/w) ratio (standard Bruker methodology). Then, 600 µL of the prepared mixture was transferred into a 5 mm diameter NMR tube and subjected to spectroscopic analysis. The oil samples were provided by the producers before bottling. According to the previously published workflow chart illustrating the procedure sequence for blend EVOO origin assessment [12], 1H NMR analyses were performed within one month of receiving the samples, within the optimal shelf life [30].
Acquisition and Processing of 1 H NMR Spectra
1H NMR spectra were acquired using a Bruker Avance spectrometer (Bruker Italia, Milano, Italy) operating at a proton frequency of 400.13 MHz at T = 300 K, equipped with a PABBI 5-mm inverse-detection probe with a z-axis gradient coil. The experiments were conducted under full automation after loading the individual samples on a Bruker Automatic Sample Changer (BACS-60) interfaced with the IconNMR software (Bruker Italia, Milano, Italy). For each sample, both a standard 1H zg NMR experiment and a multi-suppressed 1H noesygpps NMR experiment (with suppression of the strong fatty acid signals) were performed, in order to characterize the fatty acid signals and to enhance the signals of the minor components through suppression of these strong fatty acid signals, respectively.
Each 1H zg NMR spectrum was acquired under the following conditions: zg pulse program, 64k time domain, spectral width 20.5555 ppm (8223.685 Hz), p1 12.63 µs, pl1 −1.00 dB, 16 repetitions. Each multi-suppressed spectrum was acquired under the conditions: noesygpps1d.comp2 pulse program, 32k time domain, spectral width 20.5555 ppm (8223.685 Hz), p1 12.63 µs, pl1 −1.00 dB, 32 repetitions. The chemical shifts of the sample signals were referenced to the internal standard (TMS), whose signal was set at 0 ppm. The spectra were acquired and processed using the Topspin 3.1 software (Bruker Italia, Milano, Italy). The 1H NMR spectra were segmented into fixed rectangular buckets (0.04 ppm width) and integrated with the Amix 3.9.15 software (Analysis of Mixtures, Bruker BioSpin GmbH, Rheinstetten, Germany). Bucketing of the 1H zg and 1H noesygpps NMR spectra was performed within the ranges 10.0-0.5 ppm and 10.0-5.6 ppm, respectively. In both cases, the spectral region between 7.6 and 6.9 ppm was excluded to remove the residual solvent signal area from the analysis. In order to minimize small differences due to olive oil concentration and/or experimental conditions among samples, total-sum normalization was then applied [31]. The Pareto scaling procedure, performed by dividing the mean-centered data by the square root of the standard deviation, was applied to the variables. The data tables containing all the aligned, bucket-reduced spectra were used for statistical analysis.
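The bucketing, total-sum normalization, and Pareto scaling steps can be sketched numerically as follows. This is a minimal illustration with synthetic data, not the Amix/SIMCA implementation; bucket edges and the solvent-exclusion rule are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
ppm = np.linspace(10.0, 0.5, 4096)                 # chemical-shift axis
spectra = np.abs(rng.normal(size=(5, ppm.size)))   # 5 synthetic 1H spectra

# 0.04 ppm rectangular buckets over 10.0-0.5 ppm
edges = np.arange(10.0, 0.5 - 1e-9, -0.04)
buckets = []
for lo, hi in zip(edges[1:], edges[:-1]):          # lo < hi in ppm
    if lo >= 6.9 and hi <= 7.6:
        continue                                   # skip buckets inside the solvent region
    mask = (ppm > lo) & (ppm <= hi)
    buckets.append(spectra[:, mask].sum(axis=1))   # integrate each bucket
X = np.column_stack(buckets)

# total-sum normalization: each spectrum's buckets sum to 1
X = X / X.sum(axis=1, keepdims=True)

# Pareto scaling: mean-center, then divide by the square root of the SD
X = (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))
```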
Multivariate Statistical Analysis Applied to NMR Spectroscopy Data
After the data processing, multivariate statistical analysis was performed using the Simca-P version 14 (Sartorius Stedim Biotech, Umeå, Sweden) software. In particular, unsupervised principal component analysis (PCA) and supervised partial least squares discriminant analysis (PLS-DA) and orthogonal partial least squares discriminant analysis (OPLS-DA) were performed. PCA is a chemometric method applied in order to extract the maximum information from a multivariate data structure, reducing it to a few linear combinations of the variables [32]. PCA is used, in the first data processing step, to obtain a general overview of the sample distribution and of possible groupings into homogeneous clusters, also identifying possible outliers [33]. The correlation between the cluster distribution of the analyzed samples and the considered classes (such as variety and/or geographical origin) is then assessed by supervised multivariate statistical analyses. PLS-DA is the most widely used supervised analysis for discriminating between sample classes with different characteristics [34]. PLS-DA is performed in order to obtain the maximum separation between groups of observations, together with information about the variables responsible for the observed separation, by rotating the principal components (the axes that express the variance of the data) [35]. OPLS-DA is a modification of the PLS-DA technique that filters out variation not directly related to the discriminating response. This is achieved by separating the portion of the variance useful for predictive purposes from the non-predictive variance (which is made orthogonal). The result is a model characterized by improved interpretability [36]. In our study, when six categories (all the different cultivars) were considered, PLS-DA rather than OPLS-DA was preferred for classification purposes [37]. On the other hand, OPLS-DA was used in the pairwise comparisons between the Coratina and sweetener cultivars (the latter considered as a single class) in order to better specify the molecular components responsible for the observed discrimination, OPLS-DA being a superior discriminating tool for two-class models [37]. The robustness of the predictive ability of the OPLS/PLS-DA models was evaluated by the misclassification tables and the classification list. The classification list provides the overall classification results according to the predicted values of the Y variables (YPredPS) for the observations in the prediction set. Membership of a class depends upon the value of YPredPS (the classification score): a value close to one indicates membership of the workset class. Class membership was defined as follows: YPredPS < 0.35 (including negative values), the observation does not belong to the class; 0.35 < YPredPS < 0.65, the observation is borderline; YPredPS > 0.65, the observation belongs to the class [38]. The misclassification table is complementary to the classification list but presents the classification results in a more compact format. The internal cross-validation default method (7-fold) and the permutation test (40 permutations), both available in the SIMCA-P software [38,39], were used to validate the statistical models. The quality of the obtained models was described by the R2 and Q2 parameters. The first (R2) is a cross-validation parameter indicating the portion of data variance explained by the model and represents the goodness of fit.
R2X and R2Y indicate the fraction of variance of the X and Y matrices, respectively. The second (Q2) is a goodness-of-prediction parameter representing the portion of variance in the data predictable by the model. The values of R2X(cum), R2Y(cum) and Q2(cum) represent the cumulative R2X, R2Y and Q2 up to the specified component. The minimal number of necessary components can be defined from the point at which the R2(cum) and Q2(cum) parameters show completely diverging behavior as the model complexity increases [39]. For each of these three model parameters, a value greater than 0.5 indicates good model quality [25]. The variables responsible for the observed discrimination (loadings) were identified using the S-line plot statistical tool. The S-line plot, which plots the loading vectors for two components (usually the first two), is specifically tailor-made for NMR spectroscopy data. Indeed, this plot resembles an NMR spectrum in which the original buckets, representing the loadings and colored according to the absolute value of the correlation p(corr)[1], are displaced in opposite directions depending on their values and the considered components.
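A minimal sketch of the two-class discriminant step with the 0.35/0.65 decision rule is shown below, using scikit-learn's PLSRegression as a stand-in for SIMCA's PLS-DA. All data, class labels, and names are illustrative, not the study data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
# illustrative bucket table: 40 reference oils x 200 buckets, two classes
X = rng.normal(size=(40, 200))
y = np.repeat([1.0, 0.0], 20)       # 1 = Coratina, 0 = sweetener cultivars
X[:20, :10] += 1.0                  # inject a class-dependent signal

pls = PLSRegression(n_components=2).fit(X, y)

def classify(y_pred):
    """SIMCA-style decision rule on the predicted class score (YPredPS)."""
    if y_pred > 0.65:
        return "belongs to class"
    if y_pred >= 0.35:
        return "borderline"
    return "does not belong"

X_new = rng.normal(size=(3, 200))   # hypothetical blend samples
X_new[0, :10] += 1.0
for score in pls.predict(X_new).ravel():
    print(f"YPredPS = {score:+.2f} -> {classify(score)}")
```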
New Reference Model: PLS-DA Analysis
Starting from a previously published work [12], we used the same methodology: a reference model built with monocultivar EVOO samples from specific geographical origins was used to obtain an indication of the composition of a test set of Coratina-based blends expected to contain the same cultivars from the same geographical areas. Thus, we built a new reference model with 126 reference oils of the kinds mostly used to produce commercial EVOOs. These were monocultivar EVOO samples from specific geographical origins (Coratina, Cima di Mola, Ogliarola, and Cellina from Apulia; Carolea and Rossanese from Calabria). The supervised PLS-DA analysis performed on the zg NMR spectra (Figure S1) gave a model in which 6 components explained 95.3% and 75% of the total variance of the X and Y matrices, respectively, and 71.1% of the predicted variance (R2X = 0.953; R2Y = 0.750; Q2 = 0.711). The PLS-DA model was tested for non-casualty by the permutation test, performed with 40 cycles of random permutations of the Y variables [34]. The model is considered successfully validated when the R2-intercept and Q2-intercept do not exceed 0.3-0.4 and 0.05, respectively [38]. The permutation test exhibited R2-intercept and Q2-intercept pair values of 0.0163 and −0.307, 0.0259 and −0.212, 0.0236 and −0.237, 0.066 and −0.13, and 0.0119 and −0.238 for Coratina, Ogliarola, Cima di Mola, Rossanese, and Cellina di Nardò, respectively, thereby demonstrating that the PLS-DA classification model was successfully validated (Figure S2). The NMR signals indicating the molecular constituents distinctive for each extra virgin olive oil class and discriminating along the t[1] component were defined by examining the loading line plot for the model (Figure 1b). The variables at chemical shifts (δH) of 1.3, 2.02, and 5.34 ppm, corresponding to the methylene (n-CH2), allylic (-CH2CH=CH-), and olefinic (-CH=CH-) protons of oleic acid, respectively, are higher for Coratina oil with respect to the sweetener cultivars, in accordance with literature data [40]. As known, oleic acid (C18:1) is the main monounsaturated fatty acid (MUFA) in olive oil and is known to play a protective role against several diseases, such as liver dysfunction and gut inflammation [41]. Interestingly, a signal ascribable to squalene (1.7 ppm) was observed in the loading line, revealing a higher relative content in the Coratina class of this triterpene, known for important health properties and for improving olive oil stability, and thus shelf life [42,43]. On the contrary, the resonance at δH 1.26 ppm, corresponding to the methylene protons (n-CH2) of the saturated acyl groups, showed higher values for the sweetener cultivars than for the Coratina class. The analysis of the trend plot for the discriminating 1.26 ppm variable (Figure 2) showed a significantly higher content of saturated fatty acids, associated with the sweetness flavour characteristic, for the Carolea sample class, allowing discrimination among the sweetener cultivars. The Carolea cultivar is known to be characterized by high levels of palmitic (C16) and stearic acids [28].
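The permutation validation can be sketched in simplified form as follows: the class labels are shuffled repeatedly, the model is refit, and its cross-validated predictive ability is compared with that of the real model. Note that SIMCA reports R2- and Q2-intercepts from a correlation plot, whereas this hedged sketch simply compares Q2-like values; the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 150))
y = np.repeat([1.0, 0.0], 30)
X[:30, :8] += 0.8                       # class-related structure

def q2(X, y):
    """7-fold cross-validated R^2, used here as a stand-in for SIMCA's Q2."""
    return cross_val_score(PLSRegression(n_components=2), X, y,
                           cv=7, scoring="r2").mean()

q2_real = q2(X, y)
q2_perm = [q2(X, rng.permutation(y)) for _ in range(40)]  # 40 permutations

# a non-casual model has a real Q2 well above every permuted Q2
print(f"Q2 (real) = {q2_real:.2f}; Q2 (permuted, max) = {max(q2_perm):.2f}")
```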
The obtained model was used to analyze, by 1H NMR, commercial blend samples from four different harvesting years (Experimental section). All the samples were declared to be Coratina-based blends also containing Ogliarola, Cellina, Cima di Mola, Rossanese, and Carolea as "sweetener" cultivars. In order to analyze the resulting classification scores with respect to the reference classes, each prediction set, constituted by the Italian commercial blends of a specific harvesting year, was predicted in the PLS-DA model (Figure 3a-d). Each blend sample was classified according to its classification score, which reflects the blend content (Table 2). Classification scores above the fixed limit of 0.65 resulted in assignment of a sample to that specific class; in all other cases, samples with classification scores below 0.65 were ranked as borderline between classes. On the other hand, all the analyzed oil test sets proved to be essentially mixed-composition blends based on Coratina (with smoother cultivars), according to their classification scores for Coratina higher than 0.35 (Table S1). The PLS-DA class discrimination occurs according to the differences in the spectral fingerprints of the sweeteners with respect to the Coratina cultivar, essentially observed along the t[1] component and already discussed (Figure 1b). Moreover, the discriminating ability of the PLS-DA model also results from the differences in the spectral fingerprints among the sweetener cultivars observed along the t[2] component (Figure S3). Specific information could also be obtained by examining the line plot for the model, indicating the 1H NMR chemical shifts of the signals, characteristic of specific metabolites, discriminating the classes along t[2]. A higher relative content of saturated fatty acids (1.26 ppm, corresponding to the methylene protons of the saturated acyl groups) was observed for the Carolea cultivar, whereas higher levels of polyunsaturated acyl groups (PUFA; signals at 1.34, 2.38, and 2.78 ppm) were observed for the Cima di Mola cultivar (Figure S3).
1 "0.65" is the fixed value above which observations are assigned to a class.
2016/2017 Harvesting Year
A total of 38 100% Italian commercial samples were classified by prediction in the PLS-DA model (Figure 3a). The predicted scores plot confirmed the consistency of the commercial blends with cultivars from the specific geographical origins declared by the supplier. The predicted samples of the mixed-cultivar oil set were clustered in the middle of the scores plot, slightly closer to the Coratina class, except for a small subset of samples clearly placed close to the "sweetener" cultivars. The analysis of the classification scores reported in the classification list (Table S1) and summarized in the misclassification table (Table 2) revealed that most of the predicted samples (33) were assigned to the Coratina class (classification scores for Coratina > 0.65). One sample was assigned to the Carolea class, and four were not assigned to any specific class. Nevertheless, for all the samples assigned to cultivars other than Coratina or not assigned to a specific cultivar, a classification score for Coratina higher than 0.35 was in any case observed (Table S1).
2017/2018 Harvesting Year
A total of 74 100% Italian commercial samples were classified by prediction in the PLS-DA model (Figure 3b). The bi-dimensional plot for the model revealed a clear, compact clustering of the commercial samples in the middle of the predicted scores plot. The placement of the commercial blends in the predicted scores plot confirmed their consistency with cultivars from the specific geographical origins declared by the supplier. The prediction of the commercial samples was evaluated from the classification score analysis. Most of the samples (63) were assigned to the Coratina class (classification scores for Coratina > 0.65), while 11 samples were not assigned to any specific class, their predicted values being borderline between classes and below the fixed 0.65 limit (Table 2, Table S1). Moreover, in this case, for most of the samples assigned to cultivars other than Coratina or not assigned to a specific cultivar, a classification score for Coratina higher than 0.35 was observed. Only three samples were characterized by a relatively smaller classification score for Coratina (~0.20, below the 0.35 limit) (Table S1).
2018/2019 Harvesting Year
A total of 80 100% Italian commercial samples were classified by prediction in the PLS-DA model. As observed in the bi-dimensional predicted scores plot for the model (Figure 3c), the predicted set is consistent with cultivars from the specific geographical origins declared by the supplier. In particular, the predicted samples of the mixed-cultivar oil set were clustered in the middle of the scores plot, confirming their blended composition. The misclassification table reports the assignments according to the prediction classification scores (Table 2). A substantial number of the test samples (24) were directly assigned to the Coratina class, confirming that Coratina is the predominant cultivar in the provided blend samples. A total of 20 samples were assigned to the Cellina class, in accordance with their position in the predicted scores plot. Seventeen samples overlapping the Cellina and Coratina classes were assigned to a combined Cellina-Coratina class. A further subset of 19 test samples was not assigned to any specific class, their predicted classification scores being borderline between classes and below the fixed 0.65 limit (Table S1). Nevertheless, for all the samples assigned to cultivars other than Coratina or not assigned to a specific cultivar, a classification score for Coratina higher than 0.35 was observed (Table S1).
2019/2020 Harvesting Year
A total of 49 100% Italian commercial samples were classified by prediction in the PLS-DA model. As observed in the bi-dimensional scores plot (Figure 3d), the predicted set is consistent with cultivars from the specific geographical origins declared by the supplier. In this harvesting year, the predicted test samples constituting the central cluster of the scores plot appear somewhat more evenly distributed along the first component tPS[1]. The misclassification table (Table 2) confirmed the visual inspection of the scores plot. A total of 39 samples were directly classified as Coratina or as the combined Cellina Coratina class, nine samples were assigned to the Cellina class, and one blend was not assigned to any specific class. Again, all the samples not directly assigned to Coratina or combined Coratina classes were in any case characterized by a classification score for Coratina higher than 0.35 (Table S1).
As is known, blended EVOOs are produced by combining monocultivar oils with different flavor and taste profiles in order to meet the demands of national and international markets. In this study, a database of 126 monocultivar samples was used to build a PLS-DA model. A total of 241 Coratina-based commercial blend samples, obtained using selected cultivars from specific geographical origins and produced over four consecutive harvesting years, were classified by prediction using the PLS-DA model. The classification scores obtained for the test samples of the considered harvesting years reflected their expected blend composition and could provide producers with useful information on the organoleptic aspects of the product (e.g., bitter and spicy). As already reported, a minimal harvesting-year effect is expected for the Coratina-based blends [13]. Therefore, the slight variability of the blend samples observed across production years could possibly be ascribed to a variation in the EVOO suppliers. Thus, the tailor-made dataset reported here could be used to assess the quality of commercial blend samples with similar declared composition and to highlight possible deviations from the expected standard product features. Our assignment purpose was limited to assessing, with statistical models, the samples' compliance with the expected blend characteristics defined by Coratina and a range of sweetener monocultivar reference oils from specific geographical origins. This assignment was performed simply according to the provider's declaration of the cultivars used, without considering the percentage composition. On the other hand, a multivariate analysis (MVA)-based NMR approach, such as PLS regression (PLSR), has proved a very useful tool to assess quantitative blend composition, although limited to simplified binary systems [24].
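The general workflow (fit a PLS-DA model on bucketed monocultivar spectra, then predict class scores for blend samples) can be sketched as follows. This is a hedged illustration, not the software used in the study: it assumes a bucket matrix X (samples by buckets) and cultivar labels, and uses the common practice of implementing PLS-DA as PLS regression on a one-hot class block; SIMCA-style preprocessing and component selection are omitted.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer

def fit_plsda(X, labels, n_components=5):
    # One-hot encode the cultivar labels and fit a PLS2 regression model,
    # the usual way PLS-DA is built on top of PLS regression.
    lb = LabelBinarizer()
    Y = lb.fit_transform(labels)
    pls = PLSRegression(n_components=n_components).fit(X, Y)
    return pls, lb

def predict_scores(pls, lb, X_new):
    # The predicted Y values play the role of the classification scores
    # that the text thresholds at 0.65 and 0.35.
    return dict(zip(lb.classes_, pls.predict(X_new).T))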
Coratina vs. Sweeteners Cultivars: OPLS-DA Pairwise Analysis
In order to obtain a possible classification of the examined blends according to their expected bitterness and/or pungency characteristics, a pairwise OPLS-DA analysis was further performed by considering the Coratina cultivar class and the sweetener cultivars grouped together as a single class. In this respect, the resulting OPLS-DA models can be considered simplified classification tools with respect to the PLS-DA models described above. In the first instance, the default bucket table (zg acquisition) was used for this purpose. As expected, the obtained model was characterized by a marked separation between the two classes and described by excellent descriptive and predictive parameters: one orthogonal and one predictive component gave R²X = 0.673; R²Y = 0.951; Q² = 0.945 (Figure 4a). By examining the S line plot of the loadings for the model (Figure 4b), the molecular components discriminating the two classes were defined. As already described for the signals discriminating the classes along t[1] in the zg NMR PLS-DA model, the variables indicating signals with chemical shift (δH) at 1.3, 2.02, and 5.34 ppm, ascribable to oleic acid, are higher for Coratina oil. Moreover, the resonance at δH 1.26 ppm, corresponding to the saturated acyl groups, showed higher values for the sweetener cultivars than for the Coratina class. Because the phenolic compounds characteristic of the Coratina cultivar are responsible for the bitterness and/or pungency sensory attributes of the Italian EVOO blends, a further refined model was also considered. As known, the organoleptic attributes of pungency and bitterness in olive oil are attributed to phenolic compounds [44,45]. Furthermore, phenolic compounds underlie the oxidative stability and the main nutritional properties of oils, which makes the analysis of the polyphenol contribution essential for extra virgin olive oil research [44]. In order to enhance the polyphenol contribution, another pairwise OPLS-DA analysis, using combined bucket-reduced NMR spectra, was performed.
The combined bucket table was generated by combining in one matrix the ¹H noesygpps NMR spectra (Figure S4) (within the range 10.0–5.6 ppm) and the ¹H zg NMR spectra (within the range 5.6–0.5 ppm), as previously reported [10]. Also in this case, the obtained OPLS-DA model for the two classes (Coratina and sweetener cultivars) showed a clear separation between the groups and very good statistical parameters (one predictive and one orthogonal component, R²X = 0.579; R²Y = 0.945; Q² = 0.936) (Figure 5a). The S line plot of the loadings for the model showed the molecular components of the triglyceride and unsaponifiable fractions as discriminant for the two groups (Figure 5b). Higher oleic acid could be observed for Coratina (1.3, 2.02, and 5.34 ppm), as already found in the t[1] line and S line plots of the loadings for the PLS-DA (Figure 1b) and OPLS-DA (Figure 4b) models built with the standard zg NMR spectra. Moreover, the Coratina class was also characterized by a higher content of phenolic moieties of oleuropein and ligstroside aglycones, such as tyrosol and hydroxytyrosol derivatives (6.78 ppm), as well as by higher levels of the secoiridoid derivatives oleocanthal and oleacein (9.22 ppm). These secoiridoid phenolics are known for their antioxidant and anti-inflammatory properties and for their organoleptic association with bitterness and pungency [45]. Therefore, these signals could also be related to a possible classification of the examined Coratina-based blends according to their expected bitterness and/or pungency characteristics. On the other hand, only a specific correlation study with organoleptic analyses could support this feature assignment for the oils classified by the model. In the case of the sweetener class, again, the variables (buckets) ascribable to saturated fatty acids (1.26 ppm) were observed as discriminating, together with signals of compounds associated with degradation processes (5.98 and 5.58 ppm) [46]. The predictive capability of the two models, described by their high Q² values, was then tested by classifying the whole Italian 100% EVOO blend test set (Table 3 and Table S2).
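The combined bucket table described above amounts to concatenating two spectral windows, column-wise, into a single data matrix. A minimal numpy sketch under stated assumptions (each spectrum set is an array of shape samples by buckets, with a matching ppm axis per bucket; variable names are hypothetical):

import numpy as np

def combined_bucket_table(noesy, noesy_ppm, zg, zg_ppm):
    """Join the phenolic region of the noesygpps spectra (10.0-5.6 ppm)
    with the major-components region of the zg spectra (5.6-0.5 ppm)
    into one matrix, one row per oil sample."""
    phenolic = noesy[:, (noesy_ppm <= 10.0) & (noesy_ppm > 5.6)]
    lipidic = zg[:, (zg_ppm <= 5.6) & (zg_ppm >= 0.5)]
    return np.hstack([phenolic, lipidic])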
For the OPLS-DA model built with the zg NMR spectra, the classification score analysis revealed that more than 98% of the 233 considered observations showed classification scores for Coratina higher than 0.35. In particular, 56% and 43% showed classification scores for Coratina above 0.65 and between 0.35 and 0.65, respectively. Only 1.2% of the classification scores showed values lower than 0.35 (although above 0.25), with the predicted samples assigned to the sweeteners class. A slightly lower percentage (96%) of the 233 observations showed classification scores for Coratina higher than 0.35 when predicted on the OPLS-DA model built with the bucket-reduced combined zg-noesy NMR spectra. Among them, 34% and 62% showed classification scores for Coratina above 0.65 and between 0.35 and 0.65, respectively. Also in this case, a small number of the predicted samples (4%) were assigned to the sweetener cultivars, showing classification scores for Coratina lower than 0.35 (although above 0.26). The model built with the combined bucket table seems to be characterized by a greater selectivity with respect to the prediction scores for Coratina when compared with the "standard" major-components model (zg spectra bucket table). Indeed, the combined-buckets model better accounts for the presence of specific components such as polyphenols and other molecules related to the expected bitterness and/or pungency characteristics of Coratina-based blend oils. Therefore, the use of the combined-buckets model, by tightening the requirements associated with the prediction scores for Coratina, results in an increased percentage of samples not assigned to any specific class or assigned to the sweeteners class (classification scores for Coratina between 0.35 and 0.65 or below 0.35, respectively). The relation between the classification scores for the two models (Figure S5) was also investigated. The regression was significant, with an acceptable R² value (0.7029) and slope (0.86), revealing a good correlation between the predictive OPLS-DA models built with the zg and the combined zg-noesy bucket-reduced spectra. Therefore, both models could be profitably used to classify the studied Coratina-based blends from specific geographical origins, taking into account only the major lipid fraction (standard zg spectra) or also the minor phenolic component (combined zg-noesy spectra). This classification may also constitute an indirect method to rank commercial Coratina-based blend samples according to their expected bitterness and/or pungency characteristics. ¹ "0.65" is the fixed value used to assign observations to classes above the limit.
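The reported relation between the two models' scores (R² = 0.7029, slope 0.86) is an ordinary linear regression between two score vectors. A hedged sketch of how such a comparison could be reproduced, with hypothetical score arrays standing in for the real per-sample values:

import numpy as np
from scipy.stats import linregress

# Hypothetical per-sample Coratina classification scores from the two models.
scores_zg = np.array([0.72, 0.55, 0.40, 0.81, 0.33])
scores_combined = np.array([0.66, 0.48, 0.38, 0.75, 0.30])

fit = linregress(scores_zg, scores_combined)
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue ** 2:.4f}")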
Conclusions
In this work, based on ¹H NMR data, a total of 241 commercial 100% Italian blend olive oil samples from four different harvesting years were classified by prediction using a reference olive oil database built with 126 monocultivar EVOOs. This dataset includes Carolea, Cellina, Cima di Mola, Coratina, Ogliarola, and Rossanese monocultivar olive oil samples from specific geographical origins. In particular, a supervised PLS-DA model was built and used for classification of the commercial blend samples. All the classified commercial blend samples proved to be essentially mixed-composition blends based on Coratina (with smoother cultivars), as shown by their classification scores for Coratina higher than 0.35.
In order to obtain a simple classification tool for ranking the examined blends according to their expected bitterness and/or pungency characteristics, an OPLS-DA analysis was also used in a pairwise comparison between Coratina and the sweetener cultivars considered as a single class. For this purpose, two different OPLS-DA models were obtained by using both the default (zg acquisition) and the combined (zg 0.5–5.6 ppm and noesygpps 5.6–10.0 ppm) bucket-reduced ¹H NMR spectra datasets.
The predictive capability of the two models, both characterized by good Q² values, was tested by evaluating the classification scores for Coratina of the Italian 100% EVOO blend test set. The analysis revealed that most of the 233 considered observations (98% and 96% for the OPLS-DA models obtained using the default and the combined bucket table, respectively) showed classification scores for Coratina higher than 0.35. The combined-buckets model better accounts for the presence of polyphenols and other molecules related to the organoleptic characteristics of Coratina-based oils. Accordingly, its higher discrimination power results in an increased percentage of samples (4%) characterized by classification scores for Coratina below 0.35. Nevertheless, the correlation analysis of the obtained classification scores showed that both OPLS-DA models can be profitably used to classify the studied Coratina-based blends from specific geographical origins, taking into account only the major lipid fraction (standard zg spectra) or also the minor phenolic component (combined zg-noesy spectra). Therefore, tailor-made databases based on ¹H NMR data of monocultivar oils from specific geographical origins could be profitably used to build a gate around high-quality blend EVOOs and to define their characteristics with respect to a specific monocultivar reference oil dataset. The described models may also offer an indirect method to classify commercial samples according to their expected bitterness and/or pungency characteristics, although a further specific correlation study with organoleptic analysis is required to support this result. Figure S1: Representative zg ¹H NMR spectrum of an EVOO sample; main metabolites are indicated. Figure S2: Permutation test performed with 40 cycles of random permutation of the Y variables on the PLS-DA models for the Coratina, Ogliarola, Cima di Mola, Cellina, Carolea, and Rossanese cultivars. Figure S3: Line plot for the Figure 1a model, indicating the ¹H NMR chemical shifts of the signals, characteristic of specific metabolites, discriminating the classes along t[2] and colored according to the correlation-scaled loading (*p(corr) ≥ |0.5|); the w*c[1] axis represents the weighted correlation vector. Figure S4: Representative noesygpps ¹H NMR spectrum of an EVOO sample; main metabolites are indicated. Figure S5: Relationship between the classification scores of the commercial blend samples predicted on the bucket-reduced combined zg-noesy NMR spectra.
Funding: This research received no external funding.
"year": 2020,
"sha1": "6c57ef011136e9656189e5081f791483ac2b1659",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/foods9121797",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "50c3827ddea8eb67bd002de9bcebc1c7b83646a3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
Headliners: Respiratory Health: Effects in Infants from Tobacco Smoke, Mold, and Older Siblings
Biagini JM, LeMasters GK, Ryan PH, Levin L, Reponen T, Bernstein DI, et al. 2006. Environmental risk factors of rhinitis in early infancy. Pediatr Allergy Immunol 17(4):278–284.
Many environmental exposures have been confirmed to affect children’s respiratory health, but few have been studied in very young children. Now NIEHS grantees Grace K. LeMasters, Jocelyn M. Biagini, and their colleagues at the University of Cincinnati and the Cincinnati Children’s Hospital Medical Center demonstrate for the first time the relationship between exposure to environmental tobacco smoke (ETS) and allergy in infants.
About one-fifth of all American adults smoke cigarettes, resulting in about 43% of children being exposed to ETS at home. ETS exposure, along with mold exposure, has been documented as a risk factor for health problems such as wheezing, asthma, and otitis media in both children and adults.
In the current study, the researchers observed the effects of ETS and indoor mold exposure on the development of rhinitis and symptoms such as nasal blockage, sneezing, and nasal itching in a cohort of 633 infants under the age of 1 enrolled in the Cincinnati Childhood Allergen and Air Pollution Study. They used interviewer-administered questionnaires to collect demographics and information on smoking habits, family health history, and other covariates. They also analyzed any upper respiratory symptoms of the infants recorded by the parents in a monthly diary. In addition, they performed a skin-prick test on the parents and the infants (at approximately 12 months of age) to test for sensitivity to at least 1 of 15 airborne allergens.
The investigators found that exposure to ETS increased an infant’s risk of developing allergic rhinitis by almost threefold. They also found that exposure to mold in the home was associated with increased risk of upper respiratory infections but not allergy, which differed from previously reported research in older children and adults.
Other findings included a protective effect of having older siblings in the home. Infants with at least one older sibling were less likely to have allergic rhinitis by their first birthday. This finding supports the hygiene hypothesis, a theory that exposure to infectious agents early in life may decrease the risk for allergic diseases such as asthma later in life. Presumably, by having older siblings these infants were exposed to a wider variety of viruses and bacteria, causing their immune systems to develop in a way that decreased the risk of allergy.
The authors conclude that further research is necessary to confirm their results. Continued research is also needed to determine the components of cigarette smoke that cause these health effects, and to ascertain the role of possible gene–environment interactions.
Questions about the safety of the plasticizer di(2-ethylhexyl) phthalate (DEHP), particularly in regards to exposure during medical procedures such as transfusions, have swirled for decades, but especially in the last several years, given growing concerns about endocrine disruption. In October 2005, an independent panel of experts convened by the National Toxicology Program Center for the Evaluation of Risks to Human Reproduction (NTP-CERHR) sought to take stock of what is known and identify critical research needs regarding human exposure to DEHP, in particular its potential reproductive and developmental toxicity. Now that the independent experts have had their say, the NTP is weighing in with its interpretation.
Based on the expert panel's report, comments from stakeholders and peer reviewers, and new information published since the experts' meeting, the NTP released a draft brief in May 2006 about DEHP exposure and toxicity. With peer review completed in late August, the brief is now being finalized and will be added to the forthcoming NTP-CERHR monograph The Potential Human Reproductive and Developmental Effects of DEHP.
This monograph will comprise the CERHR expert panel report, a list of the panel experts, all public comments made about the report, and the NTP brief on DEHP. Although the brief summarizes what the expert report says, it is more than just an executive summary; it represents the NTP's view of the various public and peer-review comments and additional research studies received since the report was prepared.
The 2005 expert panel meeting marks the first time the CERHR has had a compound re-evaluated; a previous evaluation was published in 2000. The need for another just five years later underscores the intensity with which DEHP is being investigated.
"[Assessing] DEHP again shows that the CERHR process is evergreen," says Paul Foster, deputy director of the NTP-CERHR. "This is the first time that CERHR has gone back and said there's now been a significant amount of water that's gone under the bridge, and we should go back and re-evaluate to see whether or not any of our original conclusions have changed." According to Foster, the brief distills the intricate and detailed scientific knowledge of the monograph into information that educated laypeople can use to put concerns about the potential for DEHP toxicity into perspective.
Hard Science on a Softener
DEHP is an oily chemical that confers flexibility to rigid polyvinyl chloride plastic. DEHP-softened plastic appears in numerous products, including building materials, food packaging, and medical devices. Because DEHP does not form tight chemical bonds with the plastic, some amount can leach out, and the compound has been detected in packaged foods, indoor air, household dust, and various substances and paraphernalia associated with medical treatment (such as bagged blood and tubing).
DEHP has induced reproductive and developmental problems in male rodents, but there are scant and uncertain data for effects in humans. It is known, however, that low-level human exposure is widespread and that certain populations are more highly exposed. For example, according to the draft brief, newborns and infants undergoing particular medical procedures may have 100 to 1,000 times the exposure experienced by the general population.
Because animal studies indicate that the developing male reproductive system is especially vulnerable to adverse DEHP-associated effects, the expert panel, in its 2005 report, attached "serious concern" to critically ill male newborns and infants receiving prolonged medical treatment. The NTP concurred in its draft brief and also agreed that concern is warranted for male infants younger than 1 year and for the sons of women who underwent certain medical procedures while pregnant. Less concern was attached to low-level exposures in utero or after the first year of life, and there was minimal concern for adverse effects from typical background exposures.
Fairness and Balance
The draft brief is generally deemed fair by both scientists and stakeholders. "To me, it seemed to be very fair based on the discussions and deliberations at the expert review committee," says Foster.
The American Chemistry Council's Phthalate Esters Panel considered both the brief and the expert panel's report "fair, but very conservative," says Marian Stanley, the panel's senior director. "We're certainly pleased to see that the areas of concern have been lowered [from the 2000 report] for a couple of cases [children older than a year and pregnant or lactating women]. We think that's justified." The Phthalate Esters Panel disagrees, however, with the NTP's level of concern about DEHP exposure among newborns and infants. "DEHP medical devices have been used for better than fifty years, and there hasn't been any verified evidence of harm to humans. We don't believe that there needs to be as much concern for critically ill neonates because, as the FDA has said [in a July 2002 Public Health Notification], the treatment outweighs any risks from exposure to DEHP," says Stanley.
Health Care Without Harm (HCWH), a coalition of health and environmental groups that, among other issues, advocates replacing DEHP-containing medical devices with alternatives, was satisfied with the NTP's position. "We don't have any quibbles with [who was determined to be] medically exposed, because the panel has expressed serious concern about that and we agree with that," says Ted Schettler, science director of the Science and Environmental Health Network, on behalf of HCWH.
Schettler avoids defining a level of concern for DEHP exposure of pregnant and lactating women: "We remain concerned about that group of women. Whether we want to say it's some concern or more than that, we think it should be emphasized that in the general population, pregnant and lactating women are exposed not only to DEHP but also to other phthalates that work through a common toxicological mechanism. The committee wasn't charged with addressing aggregate exposures to multiple phthalates, but that's the real world."
Outstanding Questions
The question of aggregate exposures is, of course, a scientific dilemma facing the risk assessment community at large, not just the CERHR. Still, says Foster, "I think one of our weaknesses is that we do these evaluations based on single chemicals. I think what's emerging from a lot of the exposure information that's being published, mainly from the CDC but also from others in Europe, is that the population at large is exposed to multiple phthalates. We have not really devised an appropriate method yet for how we handle that and put it into a risk context." He adds that the CERHR system will need to be adapted as new, appropriate methodologies become available.
Another notable challenge is extrapolating results from animal studies to human health. "I think we're going to continue seeing much more research trying to tease out and figure out if the effects we see in rodents are relevant to humans. This isn't cut-and-dried research," says Stanley.
Research with nonhuman primates hasn't proven any simpler and represents one of the more contentious reactions to the brief. According to Schettler, there's disagreement about whether nonhuman primates, specifically marmosets, are less vulnerable to DEHP than rodents, as suggested by research published in the October 2005 issue of Birth Defects Research B: Developmental and Reproductive Toxicology. Industry-sponsored research indicates that marmosets are a good study model for predicting toxicity in humans, but the October 2005 study, published just as the expert panel meeting concluded, questions that belief, and the debate has not yet been satisfactorily resolved.
Also unresolved are questions about the metabolism of DEHP and its mechanisms of toxicity. The limited epidemiologic data reviewed in the draft brief raise questions that cannot be answered yet. Research is ongoing in all areas, however. "The science is still moving forward; the science is still being created," says Stanley. "As new techniques become available, there at some point is going to come a time when science suddenly takes a quantum leap and we can start understanding a lot more." -Julia R. Barrett
A Two-Way Street: Building Lasting Community Connections
It is human nature to remain committed to endeavors in which one feels personally invested. For the parents and children involved in studies at the Mount Sinai Center for Children's Environmental Health and Disease Prevention Research, most of whom are from low-income, minority communities such as East Harlem and the Bronx, this sense of commitment plays an important role in their continued participation in such studies. The center's Community Outreach and Translation Core (COTC) encourages this community kinship by partnering with community organizations to create workshops and educational activities that help keep children and their parents engaged in the studies.
According to COTC director Luz Claudio, the COTC staff have designed activities that pick up where organized educational activities at school leave off, encouraging children to learn new, useful information they can share with their parents. The activities are also culturally relevant and easy to take advantage of, which makes it easy to keep them going.
Claudio says one main goal of these programs is to expose the study participants to realistic, positive role models in the medical profession to encourage their interest in future medical careers. Another is to remind study participants that, through their participation in center studies, they are part of a national effort to improve and protect children's health. "The COTC educational activities provide direct benefits to the participants that go beyond their participation as study subjects providing data," says Claudio. "They are truly our partners in the scientific endeavor." In one current collaborative project popular with kids and parents alike, COTC staff have joined with the nonprofit City Parks Foundation to produce educational workshops aimed at increasing study participants' physical activity. Claudio points to a study published in the September 2006 issue of the American Journal of Public Health showing that low-income and minority neighborhoods have far fewer commercial physical activity-related facilities available. "This makes our workshops all the more important for these communities because there are few gyms or other sports facilities that are accessible to them," she says.
The workshops offered through the City Parks Foundation collaboration introduce the wonders of the outdoors to children who might not have spent much time there. Workshops include "Trees, Leaves and Worms" (observing nature in action), "My Nature Journal" (recording those observations), and ice-skating excursions in Central Park. "We want parents to learn that they can use New York City parks as learning resources that also provide free health benefits. Parents who become excited by the experiences are more likely to integrate these types of excursions into their children's lives," says Claudia DeMegret, director of education at the City Parks Foundation.
Other collaborations include family mini-golf with the Randall's Island Sports Foundation, a "Mad Hot Dancing!" class with salsa dancer Rodney Lopez (featured in the movie Mad Hot Ballroom), and a "Yummy Good" cooking class in partnership with the community organization Little Sisters of the Assumption. The center also distributes regular newsletters and fact sheets telling families where the program's outdoor activities are conducted.
Other workshops are designed specifically to demystify the scientific process and reinforce to the study participants how integral they are to the program. Kids can look at their own cells under a microscope in the "Your Body, Your Cells" workshop. They and their parents also learn to distinguish reliable and unreliable sources of health information on the web in the "On-line for Health" workshop. Scavenger hunts afford the children the opportunity to learn about different kinds of plastics and their varying levels of safety ("Plastics and More Plastics"), and they are also introduced to genetics and heredity ("Do These Genes Make Me Look Fat, and Other Things Genes Do").
Parents appreciate the diversity of entertaining learning opportunities offered to the children through the programs, and note how the kids connect these experiences to their role in the center's research projects. One parent (who is unidentified to protect the privacy of the study participant) observes, "They enjoy playing in the grass, being in the dirt, collecting leaves. . . . [After an event] they remember and talk about some of the things they did. I think the more they're exposed to things, the more interested they become in the study."
[Photo caption] Harming while healing? Concerns about potential reproductive effects of exposure to the plasticizer DEHP, including those to infants from uses in medical tubing and other equipment, prompted a new examination of the available health data by the NTP.
[Photo caption] Getting down and dirty with the environment. A program of the Mount Sinai Center for Children's Environmental Health and Disease Prevention Research and the New York City Parks Foundation engages children in activities that teach them the relevance of the environment, and of environmental research, to their health.
Geometry of Cascade Feedback Linearizable Control Systems
In this thesis, we provide new insights into the theory of cascade feedback linearization of control systems. In particular, we present a new explicit class of cascade feedback linearizable control systems, as well as a new obstruction to the existence of a cascade feedback linearization for a given invariant control system. These theorems are presented in Chapter 4, where truncated versions of operators from the calculus of variations are introduced and explored to prove these new results. This connection reveals new geometry behind cascade feedback linearization and establishes a foundation for future exciting work on the subject with important consequences for dynamic feedback linearization.
Chapter 3 is specifically concerned with the role of symmetry from the perspective of exterior differential systems, and provides a key property needed to define cascade feedback linearizable systems. Chapter 4 opens with the definition of a cascade feedback linearizable control system, and then presents the primary results of this thesis, including a new necessary condition for cascade feedback linearizable control systems, as well as the presentation of a new explicit class of such systems.
1.2. Control Systems. In this section we will define explicitly what we mean by a control system for the purposes of this thesis, as well as introduce several examples that will appear throughout.
Definition 1. Let M be a manifold such that M ≅loc R × Rⁿ × Rᵐ, with coordinates (t, x, u), where x = (x₁, . . . , xₙ) and u = (u₁, . . . , uₘ). A control system on M is an underdetermined system of ordinary differential equations

(1) ẋ = f(t, x, u),

where f(t, x, u) = (f₁(t, x, u), . . . , fₙ(t, x, u)). The coordinate t will denote time, and the variables x and u are the state variables and control variables, respectively. Additionally, denote by X(M) ≅ Rⁿ the state space of M, with the states x as local coordinates on X(M).
Definition 2. A solution or trajectory of a control system is any curve (t, x(t), u(t)) in M ≅loc R × Rⁿ × Rᵐ that satisfies equation (1).
As alluded to in the introduction, we mention that, in general, a control system could refer to many different types of differential equations or processes, e.g. PDEs, SDEs, discrete DEs, general stochastic processes, etc. The author is curious to know the extent to which the ideas in this thesis may be applied to other types of control systems, and will likely investigate this to some degree in the future. For now, however, we will be content with our above definition of a control system.
One important property of control systems is the question of controllability.
Definition 3. A control system is controllable if, for any two points p and q in X(M), there exists a solution to (1) such that x(t₀) = p and x(t₁) = q.
Studying controllability of control systems is of central importance in the overall field of control theory, and there are many different types of controllability and related notions. We will not explore this topic any further in this thesis, except briefly in Section 1.3 and in the following example. One can refer to [12] for more on controllability. We now list several examples of control systems that appear throughout this thesis.
Example 1. Solutions for this control system passing through the point x₀ = x(t₀) ∈ X(M) ≅ R⁴ are easily seen to be given by

(3) x₁ = f₂(t), x₂ = ḟ₂(t), . . . , x₄(t) = x₂(t) + C,

where C = x₄(t₀) − x₂(t₀). We then notice the algebraic constraint x₄(t) = x₂(t) + C for any choice of f₂(t) and for all t. Thus, this control system is not controllable. All remaining examples introduced in this section are controllable.
Example 2. [16]
The following is a control system of 3 states and 2 controls.
(4) ẋ₁ = ½(x₂ + 2x₃x₅), ẋ₂ = 2(x₃ + x₁x₅), . . .

This example first appears in [16] and will be important as an illustration of the main results in Chapter 4. It has the property of being cascade feedback linearizable, as shown in [16], and in particular it serves as an example of Theorem 18 in Chapter 4.
Example 3. [27][22]
(5) . . .

Example 3 above appears to be a nonlinear system of underdetermined ODEs. However, the control system is equivalent to a linear system in a precise way, to be defined in Sections 1.2 and 3.7.
The above system in Example 4 will be referred to as the BC system, for Battilotti and Califano, who introduced the system in [7]. This system is also cascade feedback linearizable, as will be demonstrated in Chapter 4.
Example 5. This relatively simple-looking control system will be used only once, and will not appear until Chapter 4. The control parameters are z^1_2 and z^2_2. It provides a nice demonstration of the necessary condition found in Theorem 17, which is one of the main results of this thesis.
where f and g are arbitrary functions and the aᵢ are constants, not all zero.

The above family of control systems possesses a familiar set of symmetries: the affine transformations of the real plane. Specific choices of the functions gᵢ and the constants aᵢ will be used to demonstrate various theorems in Chapter 3 and Chapter 4.
One important example to mention, which will not be explored in this thesis, is that of the planar vertical take-off and landing vehicle (PVTOL) control system, listed above. An in-depth analysis of the system regarding cascade feedback linearization will appear in a later work.
(10) ẋ₁ = c₁x₁ + c₃x₃ + u₁(a₀ + a₁x₁ + a₃x₃ + a₄x₄), ẋ₃ = u₁, . . .

Finally, Example 8 is an 11-parameter family of control systems. Any choice of the parameters leads to a system that is not linearizable (see Definition 6). However, this system does have the property of being linearized when additional differential equations are imposed. This concept will be made precise in Section 1.3.
All of the examples presented in this section have very different properties, and in terms of classification, are all inequivalent to one another.
1.3. Linear Control Systems. The most fundamental and well-studied class of control systems consists of those that are linear.
Definition 4. A control system (1) in n states and m controls is linear if it has the form

(11) ẋ(t) = Ax + Bu,

where A and B are n × n and n × m constant matrices, respectively.
In particular, Example 1 is a linear control system. We demonstrated that it was not controllable, which naturally makes one wonder about when a linear control system is controllable.

Theorem 1 (Kalman Condition) [32]. A linear control system (11) is controllable if and only if the n × nm matrix

(12) [B AB A²B · · · Aⁿ⁻¹B]

has rank n.
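A direct numerical check of Theorem 1 is straightforward. The sketch below (assuming dense numpy arrays and exact rank computation; function and variable names are illustrative) builds the matrix (12) and tests its rank:

import numpy as np

def is_controllable(A, B):
    """Kalman rank condition: stack [B, AB, ..., A^(n-1)B] and test rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Example: a single chain of two integrators, x1' = x2, x2' = u.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))   # -> True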
We will restrict ourselves to controllable systems from here on. Given a control system, it may be possible to change the state and control variables in such a way that the new system is a linear control system. In particular, a theorem of Brunovský [9] says that all controllable linear control systems may be put into the following form by a specific type of transformation.
Definition 5. [9] The Brunovský normal form is a linear control system (11) such that the matrix A consists of σᵢ × σᵢ block matrices Aᵢ, 1 ≤ i ≤ m, down the diagonal, where each Aᵢ is the σᵢ × σᵢ shift matrix with 1's on the superdiagonal and 0's elsewhere, and B consists of the corresponding unit column vectors.

Sometimes it is possible to transform a seemingly nonlinear control system to Brunovský normal form via a change of coordinates. Indeed, in [27] and [22], Example 3 was shown to be equivalent to a Brunovský normal form via a change of coordinates in which the new state variables have the form z = z(x) and the new control variables have the form v = v(x, u).

Definition 6. A control system (1) is called static feedback linearizable (SFL) if there is an invertible map (t, z, v) = (t, ϕ(x), ψ(x, u)) such that (1) transforms to a Brunovský normal form ż = Az + Bv. The control system is called extended static feedback linearizable (ESFL) if the map has the form (t, z, v) = (t, ϕ(t, x), ψ(t, x, u)).
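For concreteness, here is a small worked instance with m = 2 and indices σ = (2, 1), so n = 3; the choice of indices is only for illustration:

\[
A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad
\dot{z}_1 = z_2,\ \ \dot{z}_2 = v_1,\ \ \dot{z}_3 = v_2 .
\]

The system is two decoupled chains of integrators, one of length two and one of length one.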
Notice that the forms of the maps only take state variables to state variables, while the new controls are allowed to depend on the old controls and old state variables. This is what is meant by "feedback". For example, when one is driving a car, the current position of the car is used to determine how to change the steering wheel or acceleration to stay on the road. However, a driver has no way of controlling the shape or orientation of the road in order to keep the car on the road. This property is important for having meaningful solutions for a control system.
The first results concerning when a given nonlinear control system is SFL were given by Krener in [35], as well as by Brockett in [8] and then Jakubczyk and Respondek in [30]. Constructing explicit maps for SFL systems is harder, and that work was started by Hunt, Su, and Meyer in [27], and then a more geometric approach based on symmetry was developed in [20] by Gardner, Shadwick, and Wilkens, and finally work of Gardner and Shadwick [23], [21], and [22] provided what is now known as the GS algorithm for static feedback linearization.
The work of Vassiliou in [49] and [50] provides a way to construct the required maps for ESFL systems, as well as identifying when a given control system is ESFL. It is Vassiliou's work that will be central to this thesis. The main results of the two previously mentioned papers will appear in Chapter 2.
1.4. Dynamic Feedback Linearizable Control Systems.
A particularly desirable property for a control system is when solutions can be written purely in terms of arbitrary functions and their derivatives. Much like the case of Example 1, determining a solution curve then requires no integration, and involves only algebraic expressions of arbitrary functions and their derivatives. Solutions to Brunovský linear control systems always have this property. However, there are nonlinear systems that may also have this property and are not SFL.
Definition 7. A controllable control system is called explicitly integrable (EI) if generic solutions may be written as

(14) x(t) = A(t, z^i_0(t), z^i_1(t), . . . , z^i_{s_i}(t)), u(t) = B(t, z^i_0(t), z^i_1(t), . . . , z^i_{r_i}(t)),

where the z^i_0(t) are arbitrary functions and z^i_j denotes the jth derivative of z^i_0. Additionally, we may add the distinction of autonomous to an EI system if A and B have time dependence only through the functions z^i_0(t) and their derivatives. That is, A and B have the form

(15) x(t) = A(z^i_0(t), . . . , z^i_{s_i}(t)), u(t) = B(z^i_0(t), . . . , z^i_{r_i}(t)).

It turns out that EI systems are related to another type of linearization called dynamic feedback linearization.

Definition 8. A control system (1) is dynamic feedback linearizable (DFL) if there exists an augmented system, obtained from (1) by appending additional states and controls, such that the augmented control system is SFL.

There has been considerable effort to understand DFL systems, more than we can exhaustively list here. The concept first appears in [47], and was subsequently studied in [28], [13], and [14]. A geometric necessary condition based on ruled submanifolds was presented in [48]. A method for producing a DFL, if it exists, was the subject of work by Battilotti and Califano in [5], [6], and [7]. However, a complete classification of DFL systems has yet to be achieved.
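As a small worked instance of Definition 7, consider the double integrator in Brunovský form; it is EI, and in fact autonomous EI:

\[
\dot{z}_1 = z_2, \quad \dot{z}_2 = v:
\qquad
z_1(t) = w(t), \quad z_2(t) = \dot{w}(t), \quad v(t) = \ddot{w}(t),
\]

for an arbitrary function \(w(t)\). This is exactly the form (15), and no integration is required to produce trajectories.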
The 11-parameter family from Example 8 was shown to be DFL in [48] by differentiating twice along the control u₁. That is, if we augment (10) by the two equations obtained in this way, then the augmented system is SFL. We also have the following nonautonomous version of a dynamic feedback linearizable system.

Definition 9. A control system (1) is extended dynamic feedback linearizable (EDFL) if there exists an augmented system, obtained from (1) by appending additional states and controls, such that the augmented control system is ESFL.
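The precise augmented equations for (10) are elided in the source; a schematic version of this kind of prolongation by integrators (an assumption based on "differentiating twice along u₁", not the exact display from the source) reads:

\[
\dot{x} = f(t, x, u_1, u_2), \qquad \dot{u}_1 = w_1, \qquad \dot{w}_1 = v_1,
\]

where \((x, u_1, w_1)\) are the states of the augmented system and \((v_1, u_2)\) are its controls.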
In forthcoming work [16], it is shown that

Proposition 1. A control system is EDFL if and only if it is EI. Additionally, a control system is DFL if and only if it is an autonomous EI system.

Example 4 is DFL, and in fact, in Chapter 4 of this thesis, we will show that Example 4 is EI. In order to prove this, the theory of cascade feedback linearization (CFL) is introduced and applied in Chapter 4. The key idea is the existence of two particular kinds of ESFL systems whose trajectories may be "composed" in order to compute trajectories of Example 4. CFL systems are introduced in [51], where it is shown that such systems are EI. In light of this result and Proposition 1 from [16], we can say that any CFL system is EDFL. In particular, as is shown in [16], carrying out the CFL process tells one how to construct a simple augmented system that is ESFL, therefore demonstrating directly that a CFL system is EDFL.
We would also like to remark briefly on Example 7, mentioned in Section 2. The control system is shown to be DFL by Martin, Devasia, and Paden in [38], and in [51] it is shown to be CFL. However, the sizes of the augmented systems from the two constructions differ; namely, the CFL construction in [51] requires a larger augmented system. In another forthcoming work by the author, Example 7 will be explored more closely through the lens of CFL theory, demonstrating, in particular, that there is an augmented system of the same size as that presented in [38].
Geometry of Feedback Transformations and Linearization
2.1. Exterior Differential Systems, Distributions, and Derived Systems. In this section we provide some background on exterior differential systems (EDS) and derived systems. For a comprehensive account of EDS, refer to [10] and [29]. Throughout this thesis, we will assume that the ranks of all bundles that appear are constant on sufficiently small open sets unless otherwise specified.
Definition 10. An Exterior Differential System (EDS) is an ideal I in the exterior algebra of differential forms on a manifold M that satisfies the condition dI ⊂ I, where d is the exterior derivative.
We will always consider the case that an EDS is finitely generated as an ideal. We have two ways of generating an EDS: algebraically or differentially. That is,

(22) I = ⟨θ¹, . . . , θᵏ⟩_alg or I = ⟨θ¹, . . . , θᵏ⟩_diff,

where k is a positive integer, each θᵃ is a differential form on M, and ⟨θᵃ⟩_diff := ⟨θᵃ, dθᵃ⟩_alg. For shorthand, we will often drop the "diff" subscript, so that ⟨θᵃ⟩ = ⟨θᵃ⟩_diff. An important question about a given EDS is whether or not it admits integral manifolds.

Definition 11. An integral manifold of an EDS I on M is an immersed submanifold f : N → M such that f*ψ = 0 for all ψ ∈ I.
A straightforward example is the case of integral curves of a nowhere vanishing vector field X on a manifold Mⁿ. Let {θᵃ}, 1 ≤ a ≤ n − 1, span the space of all 1-forms ψ on M such that ψ(X) = 0. Then the set of integral manifolds of the EDS I = ⟨θᵃ⟩ contains the integral curves of X. If one considers the space L(X) = Span_{C∞(M)}{X}, then the set of all integral curves of vectors in L(X) is in 1-1 correspondence with the integral manifolds of ⟨θᵃ⟩.
Sometimes it is desirable to find an m-dimensional integral manifold f : N → M of I such that f*Ω ≠ 0 for a given m-form Ω = dx¹ ∧ · · · ∧ dxᵐ on M, where the xⁱ form part of a local coordinate system (x¹, . . . , xᵐ, yᵐ⁺¹, . . . , yⁿ). When this is the case, Ω is called an independence condition, and it plays the role of establishing independent variables for integral manifolds of the EDS. The requirement that f*Ω ≠ 0 is equivalent to claiming that (x¹, . . . , xᵐ) may be chosen as local coordinates for N. The integral manifold f : N → M may then be thought of as a graph (x, y(x)), where x = (x¹, . . . , xᵐ) and y = (yᵐ⁺¹, . . . , yⁿ). Returning to the example of a vector field X, it may be possible to pick a 1-form Ω = dt such that integral curves of X (and therefore of its associated EDS) may be written locally as graphs (t, x¹(t), . . . , xⁿ⁻¹(t)), where (t, x¹, . . . , xⁿ⁻¹) form coordinates for M.
Definition 12. Let {θᵃ}, 1 ≤ a ≤ r, and {θᵃ, ωⁱ}, 1 ≤ a ≤ r, 1 ≤ i ≤ m, be bases for sections of subbundles I, J ⊂ T*M, respectively. An EDS I = ⟨θ¹, . . . , θʳ⟩ is called a Pfaffian system. We say that I generates I, and write I = ⟨I⟩. If, in addition, I is given an independence condition Ω = ω¹ ∧ . . . ∧ ωᵐ such that dθᵃ ≡ 0 mod J for all 1 ≤ a ≤ r, then (I, Ω) is called a linear Pfaffian system.

Definition 13. Let I and Ĩ be two Pfaffian systems generated by the subbundles I and Ĩ of T*M, respectively. Then the sum of the two Pfaffian systems is defined to be

(23) I + Ĩ := ⟨I + Ĩ⟩.
Additionally, if I ∩ Ĩ is trivial, then the direct sum of the two Pfaffian systems is the Pfaffian system

(24) I ⊕ Ĩ := ⟨I ⊕ Ĩ⟩.

The EDS in this thesis will be either Pfaffian or linear Pfaffian systems. Since our systems will be Pfaffian, we will often formulate results using the dual notion of distributions.
Definition 14. A distribution V on a manifold M is a subbundle of the tangent bundle TM. An integral manifold of a distribution V is any submanifold N of M such that TN is a subbundle of V.
We will denote a distribution by a set of sections of TM that generate the distribution by C∞ linear combinations. That is, if V is a distribution of rank s, then

(25) V = Span_{C∞(M)}{X₁, . . . , Xₛ},

where the Xᵢ, 1 ≤ i ≤ s, are linearly independent sections of V ⊂ TM. In the case that we have a Pfaffian system, there is a natural distribution whose integral manifolds are the same as those of the Pfaffian system.
Definition 15. Let I ⊂ T*M be a subbundle. Then the annihilator of I is the subbundle of TM given by

(26) ann(I) = {X ∈ TM : θ(X) = 0 for all θ ∈ Γ(I)}.

Conversely, given a distribution V on a manifold M,

(27) ann(V) = {θ ∈ T*M : θ(X) = 0 for all X ∈ Γ(V)}

is a subbundle of the cotangent bundle of M. Moreover, ann(ann B) = B for any subbundle B of T*M or TM.
We now discuss two equivalent versions of an important theorem in the study of EDS and distributions.
Theorem 2. The following are equivalent statements of the Frobenius theorem.
(1) Let I = ⟨θ¹, . . . , θⁿ⁻ʳ⟩ be a rank n − r Pfaffian system on a manifold Mⁿ with r < n. If

(28) dθᵃ ∧ θ¹ ∧ · · · ∧ θⁿ⁻ʳ = 0 for all 1 ≤ a ≤ n − r,

then through any point p ∈ M there exists an r-dimensional integral manifold of I containing p. Furthermore, on a sufficiently small open neighborhood of p ∈ M, there exists a coordinate system (y¹, . . . , yⁿ⁻ʳ, xⁿ⁻ʳ⁺¹, . . . , xⁿ) such that

(29) I = ⟨dy¹, . . . , dyⁿ⁻ʳ⟩

and integral manifolds are determined by the equations y¹ = c¹, . . . , yⁿ⁻ʳ = cⁿ⁻ʳ, where the cᵃ, 1 ≤ a ≤ n − r, are constants.
(2) Let V be a distribution of rank r on a manifold Mⁿ. If [X, Y] ∈ Γ(V) for all X, Y ∈ Γ(V), then through any point p ∈ M there exists an r-dimensional integral manifold of V.
The condition (28) may also be stated as

(30) dθᵃ ≡ 0 mod I, for all 1 ≤ a ≤ n − r.

Condition (28) is also equivalent to saying that I is algebraically generated by 1-forms, i.e. I = ⟨θ¹, . . . , θⁿ⁻ʳ⟩_alg.
Definition 16. Pfaffian systems and distributions that satisfy the hypotheses of the Frobenius theorem are called Frobenius or completely integrable.
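For a single 1-form θ = P dx + Q dy + R dz on R³, condition (28) reduces to the vanishing of the coefficient of θ ∧ dθ. The following is a minimal symbolic check of this scalar, written as a hedged sketch in sympy; the function name and setup are illustrative and not from the source.

import sympy as sp

x, y, z = sp.symbols("x y z")

def frobenius_obstruction(P, Q, R):
    """Coefficient of theta ^ d(theta) for theta = P dx + Q dy + R dz;
    the Pfaffian system <theta> is Frobenius iff this vanishes."""
    return sp.simplify(
        P * (sp.diff(R, y) - sp.diff(Q, z))
        + Q * (sp.diff(P, z) - sp.diff(R, x))
        + R * (sp.diff(Q, x) - sp.diff(P, y))
    )

# The contact form dy - z dx, revisited below: the obstruction is nonzero,
# so <dy - z dx> is not Frobenius.
print(frobenius_obstruction(-z, sp.Integer(1), sp.Integer(0)))   # -> -1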
Certainly, not all Pfaffian systems are Frobenius, and indeed, one might be interested in measuring how far a system deviates from being completely integrable. One can do this by removing all the forms in I that obstruct the EDS from being Frobenius. This is the idea of the derived system.
Definition 17. Let I = ⟨I⟩ be a Pfaffian system. Then the Pfaffian system generated by

(32) I⁽¹⁾ = {θ ∈ Γ(I) : dθ ≡ 0 mod I}

is called the first derived system or derivation of I. If one starts with a distribution V, then the first derived system is defined as

(33) V⁽¹⁾ = Span{Γ(V) + [Γ(V), Γ(V)]}.

Informally, we will denote the derived system as

(34) V⁽¹⁾ = V + [V, V].

Notice that if I⁽¹⁾ = I then I generates a Frobenius Pfaffian system. Similarly, if V⁽¹⁾ = V then V is Frobenius. Furthermore, the derived system of a Pfaffian system is a diffeomorphism invariant, since pullback commutes with exterior differentiation and the wedge product.
Additionally, we mention that if a distribution and a Pfaffian system are related by V = ann I, then V⁽¹⁾ = ann I⁽¹⁾. This fact follows from the identity

(35) dθ(X, Y) = X(θ(Y)) − Y(θ(X)) − θ([X, Y]),

for any X, Y ∈ Γ(TM), θ ∈ Ω¹(M). As there is a first derived system, one can repeat the constructions in Definition 17 to generate a second derived system, and so on.
Definition 18. Let I = ⟨I⟩ for I a subbundle of T*M, and let V be a distribution. Then the lth derived system of I is defined inductively by

(36) I⁽ˡ⁾ = (I⁽ˡ⁻¹⁾)⁽¹⁾, I⁽⁰⁾ = I.

Similarly, for a distribution V,

(37) V⁽ˡ⁾ = (V⁽ˡ⁻¹⁾)⁽¹⁾, V⁽⁰⁾ = V.

The derived flag of a Pfaffian system is given by

(38) I = I⁽⁰⁾ ⊇ I⁽¹⁾ ⊇ · · · ⊇ I⁽ᵏ⁾,

where k is the smallest integer such that I⁽ᵏ⁺¹⁾ = I⁽ᵏ⁾. For a distribution V, the derived flag is

(39) V = V⁽⁰⁾ ⊆ V⁽¹⁾ ⊆ · · · ⊆ V⁽ᵏ⁾,

where k is the smallest integer such that V⁽ᵏ⁺¹⁾ = V⁽ᵏ⁾. The integer k is called the derived length of the EDS/distribution.
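Derived flags of distributions can be computed mechanically from Lie brackets of generators. The following sketch illustrates Definition 18 under stated assumptions: vector fields are given as sympy column matrices with polynomial coefficients, and ranks are computed symbolically, hence only generically valid (constant rank is assumed).

import sympy as sp

def lie_bracket(X, Y, coords):
    """[X, Y]^i = X^j dY^i/dx^j - Y^j dX^i/dx^j for column-matrix vector fields."""
    return Y.jacobian(coords) * X - X.jacobian(coords) * Y

def derived_flag_ranks(generators, coords, max_steps=5):
    """Ranks of V, V^(1), V^(2), ... where V^(l+1) = V^(l) + [V^(l), V^(l)]."""
    flag, current = [], list(generators)
    for _ in range(max_steps):
        flag.append(sp.Matrix.hstack(*current).rank())
        new = current + [lie_bracket(X, Y, coords)
                         for i, X in enumerate(current)
                         for Y in current[i + 1:]]
        if sp.Matrix.hstack(*new).rank() == flag[-1]:
            break
        current = new
    return flag

coords = sp.Matrix(sp.symbols("x y z"))
V = [sp.Matrix([1, coords[2], 0]), sp.Matrix([0, 0, 1])]   # ann(dy - z dx)
print(derived_flag_ranks(V, coords))                       # -> [2, 3]

The ranks [2, 3] reflect that the contact distribution, discussed below, is completely non-integrable: its derived flag terminates in the whole tangent bundle.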
Additionally, we have the following integer invariants of a derived flag.

Definition 19. Let V be a distribution with derived flag of derived length k, and set Δᵢ = rank V⁽ⁱ⁾ − rank V⁽ⁱ⁻¹⁾ for 1 ≤ i ≤ k. Then one can define the following lists of integers:
(1) The velocity of V: given by vel(V) = (Δ₁, . . . , Δₖ).
(2) The acceleration of V: given by accel(V) = (Δ²₂, . . . , Δ²ₖ), where Δ²ᵢ = Δᵢ − Δᵢ₋₁.

The last bundle in the derived flag is always Frobenius. We see that, for distributions, if V⁽ᵏ⁺¹⁾ = V⁽ᵏ⁾, then [Γ(V⁽ᵏ⁾), Γ(V⁽ᵏ⁾)] ⊆ Γ(V⁽ᵏ⁾), which is exactly the condition needed to apply the Frobenius theorem. In the case of a Pfaffian system, I⁽ᵏ⁺¹⁾ = I⁽ᵏ⁾ means that dθ ≡ 0 mod I⁽ᵏ⁾ for all θ ∈ I⁽ᵏ⁾. Hence the Frobenius condition is satisfied.
Definition 20. Given a Pfaffian system I = ⟨I⟩, where I is a subbundle of T*M, the first integrals or invariant functions of I are all non-constant functions f : M → R such that df ∈ Γ(I). Given a distribution V, the first integrals are given by all non-constant functions f such that X(f) = 0 for all X ∈ V.
Consider a completely integrable Pfaffian system I of rank n − r on a manifold Mⁿ. Then the Frobenius theorem says there is a coordinate system (y¹, . . . , yⁿ⁻ʳ, x¹, . . . , xʳ) in which the coordinate functions y¹, . . . , yⁿ⁻ʳ can be chosen so that I is generated by I = {dy¹, . . . , dyⁿ⁻ʳ} ⊆ T*M. Hence the coordinate functions y¹, . . . , yⁿ⁻ʳ are first integrals of I. Furthermore, in this coordinate system, any other first integral F of I must be of the form F(y¹, . . . , yⁿ⁻ʳ). Indeed, if F(x, y) is a first integral, then dF ∧ Ω_y = 0, where Ω_y = dy¹ ∧ · · · ∧ dyⁿ⁻ʳ ≠ 0. However, this means that (∂F/∂xⁱ) dxⁱ ∧ Ω_y = 0 for all 1 ≤ i ≤ r, and thus F has no dependence on xⁱ for all 1 ≤ i ≤ r.
Definition 21. If the derived flag of a system terminates in the zero ideal for an EDS (or is the entire tangent bundle for a distribution), then there are no first integrals of the system. We call such systems completely non-integrable.
A classic example of a completely non-integrable system is the EDS generated by a contact form on $\mathbb{R}^3$. Indeed, if $\mathcal{I} = \langle dy - z\, dx \rangle$, then it is clear that $d(dy - z\, dx) = -dz \wedge dx$. This 2-form is not zero modulo $dy - z\, dx$. Thus the first derived system is the zero ideal, and therefore this EDS has no first integrals. When we consider control systems as linear Pfaffian systems, we will also assume that such systems are completely non-integrable. This is a necessary condition for a control system to be controllable. In fact, the contact system on $\mathbb{R}^3$ is a simple example of a control system with 1 control and 1 state, where $x$ is the independent variable. The contact system has only curves as integral manifolds, and such curves are given in coordinates by $x \mapsto (x, y(x), y'(x))$. Another important bundle that is associated to a given EDS/distribution is the Cauchy bundle.
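As a quick check of the preceding example, here is a minimal sketch (assuming SymPy is available; representing vector fields as coefficient tuples is our own convention, not that of any geometry package) verifying that the annihilating distribution $V = \operatorname{ann}\langle dy - z\, dx\rangle$ is bracket-generating:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def bracket(X, Y):
    """Lie bracket [X, Y] of vector fields given as coefficient tuples."""
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(3))
        for i in range(3))

# V = ann<dy - z dx> is spanned by X1 = d/dx + z d/dy and X2 = d/dz.
X1 = (sp.Integer(1), z, sp.Integer(0))
X2 = (sp.Integer(0), sp.Integer(0), sp.Integer(1))

B = bracket(X1, X2)                   # (0, -1, 0), i.e. -d/dy
print(B)
print(sp.Matrix([X1, X2, B]).rank())  # 3: V^(1) = TR^3, so I^(1) is the zero ideal
```

One bracket already produces a direction outside $V$, matching the computation $d(dy - z\,dx) = -dz \wedge dx \not\equiv 0$.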
Definition 22. Let $V$ be a distribution on $M$. The Cauchy bundle of $V$ is
$$\operatorname{Char} V = \{X \in \Gamma(V) : [X, Y] \in \Gamma(V) \text{ for all } Y \in \Gamma(V)\}.$$
We call sections of $\operatorname{Char} V$ Cauchy characteristics. Note that $\operatorname{Char} V$ is integrable. This follows directly from the Jacobi identity on Lie brackets.
Let $X$ be any Cauchy characteristic for $\mathcal{I}$ and let $N$ be any $m$-dimensional integral manifold of $\mathcal{I}$ on $M^n$ that is transverse to $X$. Consider the family of submanifolds $N_s = \varphi_s(N)$, where $\varphi_s$ is the 1-parameter family of diffeomorphisms generated by the flow of $X$. Each submanifold in the family $N_s$ is an integral manifold of $\mathcal{I}$, and moreover, the manifold $\bigcup_s N_s$ is an $(m + 1)$-dimensional integral manifold of $\mathcal{I}$. Hence, knowledge of the Cauchy bundle of an EDS can be used to construct more integral manifolds to the EDS. Additionally, we will always assume that the rank of any Cauchy bundle that appears in this thesis is constant.
Definition 23. Let $\mathcal{I}$ be an EDS. Then a vector field $X$ is an infinitesimal symmetry of $\mathcal{I}$ if $L_X \psi \in \mathcal{I}$ for all $\psi \in \mathcal{I}$, where $L_X$ is the Lie derivative in the direction of $X$.
Cauchy characteristic vector fields turn out to be a special type of infinitesimal symmetry of an EDS. In general, the flow generated by an infinitesimal symmetry of an EDS will take integral manifolds to integral manifolds. However, we may not be able to construct higher dimensional integral manifolds as in the case of transverse Cauchy characteristics. In this thesis, symmetry plays a particularly important role and will be the main subject in Chapter 2.
We will frequently use a diffeomorphism invariant from [49] to identify particular types of distributions.
Definition 24. [50] Let $V$ be a distribution with derived length $k > 1$. Let $m_i = \operatorname{rank} V^{(i)}$, $\chi_i = \operatorname{rank} \operatorname{Char} V^{(i)}$, and $\chi^{i-1}_i = \operatorname{rank} \operatorname{Char} V^{(i)}_{i-1}$, where $\operatorname{Char} V^{(i)}_{i-1} := \operatorname{Char} V^{(i)} \cap V^{(i-1)}$. The refined derived type of $V$ is the list of lists $[[m_0, \chi_0], [m_1, \chi^0_1, \chi_1], \ldots, [m_k, m_k]]$.
2.2. The Resolvent Bundle. A particularly special structure that appears in the study of generalized Goursat bundles (see Section 2.6 below) is that of the resolvent bundle. In this section we present the definition of a resolvent bundle as well as important theorems from [49]. Let $V$ be a subbundle of $TM$ and consider the map
$$\sigma : \Gamma(V) \to \Gamma(\operatorname{Hom}(V, TM/V)), \qquad \sigma(X)(Y) = [X, Y] \bmod V.$$
The kernel of this map is exactly the Cauchy bundle of $V$.
Definition 25. [49] For $x \in M$, let
$$\Sigma_x = \{[X] \in \mathbb{P}(V_x) : \sigma(X)|_y \text{ has less than generic rank for all } y \text{ in a neighborhood of } x\}.$$
Then the singular variety of $V$ is the bundle $\Sigma(V) = \bigcup_{x \in M} \Sigma_x$. Additionally, for $X \in V$, any matrix representation of the homomorphism $\sigma(X)$ is called a polar matrix of $[X] \in \mathbb{P}V$.
Given some $[X] \in \mathbb{P}V$, the map $\deg_V : \mathbb{P}V \to \mathbb{N}$ is called the degree and is defined by $\deg_V([X]) = \operatorname{rank} \sigma(X)$. Note that for $X \in \operatorname{Char} V$, $\deg_V([X]) = 0$. For this reason, we consider the quotient $\pi : TM \to TM/\operatorname{Char} V$ and denote all quotient objects by an overbar, so that $\overline{TM} = TM/\operatorname{Char} V$.
Then we call $(V, \Sigma)$ a Weber structure on $M$. Furthermore, for a Weber structure $(V, \Sigma)$, let $R_\Sigma(V)$ denote the rank $q + c$ subbundle of $V$, containing $\operatorname{Char} V$, whose projectivization (after quotienting by $\operatorname{Char} V$) sweeps out $\Sigma$; this subbundle is called the resolvent bundle of the Weber structure.
2.3. Control Systems. Definition 27. A control system (1) may be recast as the Pfaffian system
$$\omega = \langle\, dx^i - f^i(t, x, u)\, dt \; : \; 1 \le i \le n \,\rangle$$
on $M \cong \mathbb{R} \times X(M) \times U(M)$, with independence condition $dt$. In the language of distributions, a control system is given by the rank $m + 1$ distribution $V = \operatorname{ann} \omega$, which in local coordinates is given by
$$V = \operatorname{span}\{\, \partial_t + f^i(t, x, u)\, \partial_{x^i},\; \partial_{u^1}, \ldots, \partial_{u^m} \,\}.$$
Additionally, we require that the Cauchy bundles of $\omega$ and $V$ be trivial.
The last part of this definition is found in the definition of Sluis in [48] on page 34/35. Control systems must be underdetermined ODE systems. That is, the differential equations (1) must have nontrivial dependence on all the specified control parameters in a way that is not redundant. This is the reason for the Cauchy bundle condition.
For control systems, we would like to pick the controls u 1 , . . . , u m to be functions of t so that a corresponding solution of the control system is the graph of a curve in M that passes through two desired points in X(M). If we assume that a control system is controllable, then it follows that as a Pfaffian system or distribution, the control system needs to be completely nonintegrable; otherwise integral curves would be "stuck" in submanifolds that foliate M. To see this in a bit more detail, assume that there is some k such that ω (k+1) = ω (k) is nontrivial. That is, the derived flag of ω terminates in a nontrivial Frobenius system at step k. Any integral curve of ω is also an integral curve of ω (k) . Let γ(t) = (t, x(t), u(t)) be such an integral curve, and let ω (k) be generated by the exact 1-forms dy 1 , . . . , dy s , so that locally we have a new coordinate system (t, y 1 , . . . , y s , z s+1 , . . . , z n+m ). Note that neither dt nor the du a may be in ω (k) ; otherwise they would belong to ω as well, and this is prohibited by the independence condition and the Cauchy characteristic condition, respectively. In these new coordinates, the curve γ(t) becomes γ̃(t) = (t, y(t), z(t)). However, since γ(t) is an integral curve of ω (k) , then so is γ̃(t), and thus y(t) = c for some constants c = (c 1 , . . . , c s ). Thus the curve γ̃(t) is contained in the submanifold defined by y = c. However, if we wanted to connect two points not in any such submanifold, then we could not connect those two points via an integral curve of ω. Hence, the controllability property implies that ω must be completely nonintegrable. Accordingly, for the remainder of this thesis, we will always assume that control systems are completely nonintegrable.
2.4. Lie Transformations. The classes of diffeomorphisms considered in this thesis fall under the umbrella of Lie transformation (pseudo)groups. The study of transformation pseudogroups has produced a rich literature of interesting results. We cannot give a full account here, but we would like to direct the interested reader to [11], [36], [37], and [46], as well as [40] for modern perspectives in this area. We will use the following definition of a Lie pseudogroup.
Definition 28. [46]
Let M be a differentiable manifold and let P be a collection of diffeomorphisms of open subsets of M into M. We say that P is a Lie pseudogroup if: (1) P is closed under restriction: if ϕ : U → M belongs to P so does ϕ| V for any V ⊂ U, open.
(2) Elements of P can be pieced together: If ϕ : U → M is a diffeomorphism and U = ∪ α U α with ϕ| Uα ∈ P then ϕ ∈ P.
(3) Compositions belong to P: if ϕ : U → M and ψ : V → M belong to P with ϕ(U) ⊆ V, then ψ ∘ ϕ : U → M belongs to P.
(4) Inverses belong to P: if ϕ : U → M belongs to P, then ϕ⁻¹ : ϕ(U) → M belongs to P.
(5) The identity diffeomorphism belongs to P.
Although we are interested in studying control systems invariant under certain infinite dimensional Lie pseudogroups of transformations, we will also work with transformations induced by finite dimensional Lie groups.
Definition 29. Let G be a Lie group of dimension r < ∞. A local Lie group is (up to Lie group isomorphism) any open subset of G containing the identity element e.
Unless otherwise specified, all Lie groups G in this thesis will be considered as local Lie groups. Furthermore, we note that Definition 29 is essentially Theorem 1.22 of [41] which says that every local Lie group may be realized as an open subset of a Lie group that contains the identity element.
Definition 30. [41]
Let M be a smooth manifold. A (local) Lie group of transformations acting on M is given by a (local) Lie group G, an open subset U with $\{e\} \times M \subseteq U \subseteq G \times M$, which is the domain of definition of the group action, and a smooth map Ψ : U → M with the following properties:
(1) If $(h, x) \in U$, $(g, \Psi(h, x)) \in U$, and $(gh, x) \in U$, then $\Psi(g, \Psi(h, x)) = \Psi(gh, x)$.
(2) For all $x \in M$, $\Psi(e, x) = x$.
(3) If $(g, x) \in U$, then $(g^{-1}, \Psi(g, x)) \in U$ and $\Psi(g^{-1}, \Psi(g, x)) = x$.
One may also write $g \cdot x$ for $\Psi(g, x)$.
When we refer to a Lie group of transformations, we will always be referring to a local Lie group of transformations unless otherwise specified. In practice, we will usually rely on the infinitesimal version of Lie transformations, which we now define.
Definition 31. [41]
Let G be a Lie transformation group acting on a smooth manifold M, and let $\mathfrak{g}$ be the Lie algebra of right-invariant vector fields on G. Then the infinitesimal action of $\mathfrak{g}$ on M is given by
$$\psi(v)|_x = (d\Psi_x)_e(v|_e) \tag{57}$$
for all $v \in \mathfrak{g}$ and $x \in U \subset M$, where $\Psi_x(g) = \Psi(g, x)$. Equation (57) defines a vector field, $\psi(v)$, on $U \subset M$.
The map $\psi : \mathfrak{g} \to \Gamma(TM)$ defined by (57) is a Lie algebra homomorphism, and sections of TM in the image of $\psi$ are infinitesimal generators of the group action G. We then have the following important theorem.
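For example (a standard illustration of ours): let $G = SO(2)$ act on $M = \mathbb{R}^2$ by rotations, $\Psi(\theta, (x, y)) = (x\cos\theta - y\sin\theta,\; x\sin\theta + y\cos\theta)$. Differentiating at $\theta = 0$ as in (57) yields the infinitesimal generator $\psi(v) = -y\,\partial_x + x\,\partial_y$, whose integral curves are the orbits $x^2 + y^2 = \text{const}$.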
Theorem 3. [41]
Let $w_1, \ldots, w_r$ be vector fields on a manifold M satisfying
$$[w_i, w_j] = c^k_{ij}\, w_k$$
for certain constants $c^k_{ij}$. Then there is a Lie group G whose Lie algebra has the given $c^k_{ij}$ as structure constants relative to some basis $v_1, \ldots, v_r$, and a local group action of G on M such that $\psi(v_i) = w_i$ for $i = 1, \ldots, r$, where $\psi$ is defined by (57).
Definition 32. The vector fields $w_1, \ldots, w_r$ in Theorem 3 are called the infinitesimal generators of the action of G on M.
Recall Definition 23, where we defined an infinitesimal symmetry of an EDS. If an EDS has infinitesimal symmetries that satisfy the hypotheses of Theorem 3, then there will be an associated local Lie group of transformations that, as will be discussed later, takes integral manifolds of an EDS to integral manifolds of the same EDS. It is often easier to work with infinitesimal symmetries, however, and in light of Theorem 3, the word "symmetry" will be used to mean either an element of a local Lie group of transformations G, or one of the vector fields that arise from an infinitesimal action of g on a manifold M. In Chapter 3, we will explore symmetries of Pfaffian systems as they apply to control systems. Next, we'll present some important facts about local Lie group actions that will allow us to investigate invariant integral manifolds of an EDS.
Definition 33. Let G be a local Lie group of transformations acting on a manifold M.
Define the stabilizer of a point $x \in M$ as the set
$$G_x = \{g \in G : (g, x) \in U \text{ and } \Psi(g, x) = x\}.$$
We say the action of G on M is free if for all $x \in M$, $G_x = \{e\}$, where $e$ is the identity element.
Definition 34. Let G be a local Lie transformation group acting on a manifold M. Then the orbit of the action through a point $x \in M$ is
$$\mathcal{O}_x = \{\Psi(g, x) : g \in G,\; (g, x) \in U\}.$$
Two points x and y in M are equivalent if and only if they belong to the same orbit. The space of equivalence classes endowed with the quotient topology is denoted M/G and is called the orbit space of the action of G on M.
Definition 35. The action of G on M is semi-regular if all orbits have the same dimension; the action is regular if, in addition, every point of M has arbitrarily small neighborhoods whose intersection with each orbit is a pathwise connected subset.
Remark: In other contexts, a regular group action may refer to a free and transitive group action; however, we have no need for this meaning of the word. We also mention that, if a group action on a manifold is regular as in Definition 35, then the orbits of the action are regular submanifolds, although the converse may not be true.
A classic example of a semi-regular, but not regular, group action on a manifold M is the case of an irrational flow on the 2-torus. The group G is the whole real line, and although each orbit is a 1-dimensional, immersed submanifold of M, every open set of any point on T 2 fails the definition of regularity since the orbits of the action are dense. However, if G is any nontrivial, finite open interval of R containing zero, then the corresponding irrational flow is a regular action by G on T 2 . The definitions of semi-regular and regular actions extend to the infinitesimal action of a Lie group as well. Additionally, the definition extends to any completely integrable distribution on M.
Definition 36. [41] Let V be a completely integrable distribution on a manifold M. If rank V is a fixed constant everywhere on M, then we say V is semi-regular. Furthermore, a semi-regular distribution V is regular if the integral manifolds of V have the property that for any x ∈ M, there exist arbitrarily small open sets U containing x such that the individual integral manifolds of V intersect U in pathwise connected subsets.
Let Γ denote the span over C ∞ (M) of the infinitesimal generators of the action on M of a Lie group G, which as a distribution, is always completely integrable by virtue of the Jacobi identity and Definition 23. If Γ is semi-regular or regular, then the action of G is also semi-regular or regular, respectively. As in the previous example, one may not always be guaranteed that a given distribution Γ on M corresponding to a Lie group action is regular or even semi-regular. However, we can always restrict to smaller open submanifolds of M and smaller open submanifolds of G containing the identity such that Γ is semi-regular or regular. For the remainder of the thesis, we will always assume that we have restricted to sufficiently small open submanifolds of M and G to guarantee that all actions are regular.
Theorem 5. [41]
Let M be a smooth n-dimensional manifold. Suppose G is an r-dimensional local Lie group of transformations which acts regularly and freely on M. Then the orbit space, or quotient manifold M/G, is a smooth (n − r)-dimensional manifold with a projection map π : M → M/G such that the following hold.
(1) π is a smooth map between manifolds.
(2) Two points x and y belong to the same orbit of G in M if and only if π(x) = π(y).
(3) If Γ denotes the Lie algebra of infinitesimal generators of the action of G on M, then the linear map $d\pi_x : T_x M \to T_{\pi(x)}(M/G)$ is surjective with kernel $\Gamma|_x$.
(4) If $\eta^1, \ldots, \eta^{n-r}$ are independent first integrals of the Lie algebra of infinitesimal generators Γ, then $(\eta^1, \ldots, \eta^{n-r})$ form a local coordinate system on M/G.
2.5. Extended Static Feedback Transformations (ESFTs).
We are concerned with two types of diffeomorphisms determining equivalence classes of control systems. They are known as static feedback transformations and extended static feedback transformations. The former of the two types of transformations are of particular interest to control theory as a whole and are generally well studied. The latter are a slightly broader type of diffeomorphism that allows for extra time dependence. To be precise:
Definition 37. A diffeomorphism $\Phi : M \to N$ of the form $\Phi(t, x, u) = (t, \varphi(x), \psi(x, u))$ is called a static feedback transformation (SFT). Two control systems $(M, \omega)$ and $(N, \eta)$ are static feedback equivalent (SFE) if there is an SFT $\Phi$ such that $\Phi^*\eta = \omega$.
And the slightly broader class of diffeomorphisms can be defined by:
Definition 38. A diffeomorphism $\Phi : M \to N$ of the form $\Phi(t, x, u) = (t, \varphi(t, x), \psi(t, x, u))$ is called an extended static feedback transformation (ESFT). Two control systems $(M, \omega)$ and $(N, \eta)$ are extended static feedback equivalent (ESFE) if there is an ESFT $\Phi$ such that $\Phi^*\eta = \omega$.
Although SFTs are more common in the control theory literature, we will need the use of both types. In particular, the last chapter necessarily requires that we use ESFTs. Since SFTs are a special type of ESFT, we will always refer to the more general case unless otherwise specified.
Example 9. Consider two control systems ω and η related by an ESFT Φ. Computing the pullback by $\Phi^{-1}$ of the forms that generate ω, we find $(\Phi^{-1})^*\omega = \eta$, so the two systems are ESFE.
2.6. Brunovský Normal Forms and Goursat Bundles. In this section we will explore a specific class of controllable linear control systems that are equivalent via ESFTs. We will introduce jet bundles, contact systems, and a generalization of these concepts known as Goursat bundles.
Definition 39. Let f, g : R → R m be two C n curves in R m . We say that f and g are equivalent via n-th order contact at a point x 0 ∈ R if the nth degree Taylor polynomials for f and g agree at x 0 . In particular, we denote the equivalence class of f as j n x 0 f and we call j n x 0 f the n-jet of f at x 0 .
Two functions are in 0-th order contact at x 0 if the graphs of f and g in R ×R m pass through the same point at x 0 , 1-st order contact if the graphs are mutually tangent to each other at x 0 , and so on. However, we are not only interested in n-jets of functions over a single point.
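For instance, $f(x) = e^x$ and $g(x) = 1 + x$ satisfy $f(0) = g(0) = 1$ and $f'(0) = g'(0) = 1$, so $j^1_0 f = j^1_0 g$; but $f''(0) = 1 \neq 0 = g''(0)$, so $j^2_0 f \neq j^2_0 g$. The two curves are in first-order contact at $x_0 = 0$ but not second-order contact.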
Definition 40. The jet bundle of order n is defined to be
$$J^n(\mathbb{R}, \mathbb{R}^m) = \{\, j^n_{x_0} f : x_0 \in \mathbb{R},\; f \in C^n(\mathbb{R}, \mathbb{R}^m) \,\}.$$
Furthermore, the space $\mathbb{R}$ in the notation $J^n(\mathbb{R}, \mathbb{R}^m)$ may be referred to as the source, and it is the image of the source projection map
$$\alpha : J^n(\mathbb{R}, \mathbb{R}^m) \to \mathbb{R}, \qquad \alpha(j^n_{x_0} f) = x_0.$$
We will often abbreviate the notation $J^n(\mathbb{R}, \mathbb{R}^m)$ to $J^n$ whenever there is no danger of ambiguity. In general, jet bundles can be defined for maps between any two differentiable manifolds M and N. For more on jet bundles see the text [45]. Let t be the local coordinate for $\mathbb{R}$ in $J^n$ and $(z^1_0, \ldots, z^m_0)$ the local coordinates for $\mathbb{R}^m$ in $J^n$. The jet bundle $J^n$ has local coordinates given by $(t, z^i_l)$, $1 \le i \le m$, $0 \le l \le n$. The n-jet lift of a function $f : \mathbb{R} \to \mathbb{R}^m$ is the curve $j^n f : \mathbb{R} \to J^n$ that has the parameterization given by
$$j^n f(t) = \Big(t,\; f^i(t),\; \frac{df^i}{dt}(t),\; \ldots,\; \frac{d^n f^i}{dt^n}(t)\Big).$$
Thus one can interpret local coordinates for a jet bundle so that the coordinate from $\mathbb{R}$ is the "independent variable", the coordinates $z^i_0$ are "placeholders" for the "dependent variables", and the $z^i_l$ may be thought of as "placeholders" for lth order derivatives of the "dependent variables". Consider the jet space $J^n(\mathbb{R}, \mathbb{R}^m)$. There is a natural Pfaffian system on this space whose integral manifolds correspond to the graphs of jets of functions from $\mathbb{R}$ to $\mathbb{R}^m$. This Pfaffian system is called the contact system or the Cartan system [29]. Note: the terminology "Cartan system" has another well established meaning in EDS theory as the retracting space; see Chapter 6.1 of [29].
Definition 41. The Pfaffian system on $J^n(\mathbb{R}, \mathbb{R}^m)$ generated by the 1-forms
$$\theta^i_l = dz^i_l - z^i_{l+1}\, dt,$$
for all $1 \le i \le m$ and $0 \le l \le n - 1$, is called the contact system, denoted $\beta^n_m$. Furthermore, denote by $C^n_m$ the distribution on $J^n(\mathbb{R}, \mathbb{R}^m)$ that is annihilated by the 1-forms $\{\theta^i_l\}$. Let $f : \mathbb{R} \to \mathbb{R}^m$ be any smooth function given in coordinates by $(t, f^1(t), \ldots, f^m(t))$. Then the n-jet $j^n f$ is an integral curve of $\beta^n_m$. Indeed, each $\theta^i_l$ is zero when restricted to the curve $j^n f(t)$, since
$$(j^n f)^*\theta^i_l = \Big(\frac{d^{l+1} f^i}{dt^{l+1}} - \frac{d^{l+1} f^i}{dt^{l+1}}\Big)\, dt = 0.$$
We now discuss the notion of prolongation of a jet space/contact system. First, notice that there is a surjective submersion $\pi : J^{n+1}(\mathbb{R}, \mathbb{R}^m) \to J^n(\mathbb{R}, \mathbb{R}^m)$ that simply forgets the coordinates of order $n + 1$. The canonical contact systems on these two jet spaces have the property that $\pi^*\beta^n_m \subset \beta^{n+1}_m$. For our purposes, we'll use the following restricted definition:
Definition 42. The prolongation of $(J^n(\mathbb{R}, \mathbb{R}^m), \beta^n_m)$ is $(J^{n+1}(\mathbb{R}, \mathbb{R}^m), \beta^{n+1}_m)$.
See [29] and [10] for the general definition of prolongation of exterior differential systems. In this thesis, we will frequently work on partial prolongations of jet spaces.
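As a quick sanity check of the computation above, the following sketch (assuming SymPy; the sample function $f(t) = \sin t$ and the jet order are our own choices) verifies symbolically that a jet lift annihilates the contact forms:

```python
import sympy as sp

t = sp.symbols('t')
f = sp.sin(t)                                 # our sample dependent variable
jet = [sp.diff(f, t, l) for l in range(4)]    # z_0, z_1, z_2, z_3 along j^3 f

# Pullback of theta_l = dz_l - z_{l+1} dt along t -> (t, jet(t)) has
# coefficient d/dt z_l(t) - z_{l+1}(t), which must vanish for l = 0, 1, 2.
for l in range(3):
    print(sp.simplify(sp.diff(jet[l], t) - jet[l + 1]))   # prints 0 each time
```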
Definition 43. Let $\kappa = \langle \rho_1, \ldots, \rho_k \rangle$ be a list of non-negative integers with $m = \rho_1 + \cdots + \rho_k$. The partial prolongation of $J^1(\mathbb{R}, \mathbb{R}^m)$ of type $\kappa$ is the space
$$J^\kappa = \Big( J^1(\mathbb{R}, \mathbb{R}^{\rho_1}) \times J^2(\mathbb{R}, \mathbb{R}^{\rho_2}) \times \cdots \times J^k(\mathbb{R}, \mathbb{R}^{\rho_k}) \Big)\big/\!\sim,$$
where the equivalence relation identifies the source manifolds of the factors, together with the contact system $\beta^\kappa$ generated by the pullbacks of the contact systems of the factors, where $k$ is the derived length of $\beta^\kappa$, and $\kappa = \langle \rho_1, \ldots, \rho_k \rangle$ is the list of natural numbers that defines the type of the partial prolongation of $J^1(\mathbb{R}, \mathbb{R}^m)$.
The contact system mentioned in Definition 43 represents an important class of control systems. Indeed, in [9] it was proven that any controllable linear system is equivalent via a linear feedback transformation to a partial prolongation of the form given in Definition 43.
Definition 44. The contact system $\beta^\kappa_m$ in Definition 43 is called a Brunovský normal form of type $\kappa$, and we will denote by $C^\kappa_m$ the distribution annihilated by the 1-forms in $\beta^\kappa_m$.
A Brunovský normal form is uniquely determined by its type $\kappa$. For example, a Brunovský normal form of type $\kappa = \langle 1, 2, 0, 0, 1, 1 \rangle$ on $J^\kappa(\mathbb{R}, \mathbb{R}^5)$ is generated by the 1-forms $\theta^i_l = dz^i_l - z^i_{l+1}\, dt$, taken for each dependent variable $z^i$ up to one less than its order. In this example, one can say that $J^\kappa$ has one variable of order 1, two of order 2, zero of orders 3 and 4, one of order 5, and one of order 6. So the type $\kappa$ is a list of the local coordinates on $J^\kappa$ categorized by order. As we will see later in this section, the type $\kappa$ of a Brunovský form is a diffeomorphism invariant. However, when working in coordinates on the partial prolongation of a jet space, it is often easier to use an alternative notation. Indeed, we can also write the partial prolongation of a jet space as
$$J^\kappa = \Big( \prod_{j=1}^{k} J^j(\mathbb{R}, \mathbb{R}^{\rho_j}) \Big)\big/\!\sim,$$
where the equivalence relation is the same as in Definition 43 in the sense that all source manifolds for each jet space are identified. For this description of $J^\kappa$ we can write local coordinates as $(t, j^{l_i} z^i)$, where $j^{l} z$ abbreviates the string $z_0, \ldots, z_l$. For the example of a partial prolongation of a jet space with type $\kappa = \langle 1, 2, 0, 0, 1, 1 \rangle$, one can then write the local coordinates as $(t, j^1 z^1, j^2 z^2, j^2 z^3, j^5 z^4, j^6 z^5)$.
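Concretely (spelling out the generators implied by the definitions above), the Brunovský form of type $\kappa = \langle 1, 2, 0, 0, 1, 1 \rangle$ lives on a $(1 + 2 + 3 + 3 + 6 + 7) = 22$-dimensional manifold and is generated by the 16 one-forms
$$\theta^1_0 = dz^1_0 - z^1_1\, dt, \qquad \theta^i_l = dz^i_l - z^i_{l+1}\, dt \ \ (i = 2, 3;\ l = 0, 1),$$
$$\theta^4_l = dz^4_l - z^4_{l+1}\, dt \ \ (0 \le l \le 4), \qquad \theta^5_l = dz^5_l - z^5_{l+1}\, dt \ \ (0 \le l \le 5).$$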
Proposition 4. [50]
Let $C^\kappa_m \subset TM$ be the distribution that annihilates the 1-forms in a Brunovský normal form $\beta^\kappa_m$ with type $\kappa = \langle \rho_1, \ldots, \rho_k \rangle$. Then the entries $[m_i, \chi^{i-1}_i, \chi_i]$ in the refined derived type of $C^\kappa_m$ satisfy relations determined entirely by the type $\kappa$; see [50]. Some of the most important geometric structures for this thesis are the generalized Goursat bundles [49]. The prototypical examples of Goursat bundles are exactly those subbundles $B^\kappa$ of the tangent bundle of some $J^\kappa$ that are annihilated by the Brunovský 1-forms on $J^\kappa$. Like the example of Brunovský forms, Goursat bundles have the property of being completely nonintegrable.
Furthermore, although Goursat bundles are examples of completely nonintegrable distributions, they are in some sense degenerate among such distributions. The growth of the derived flag is as slow as possible to still guarantee that the distribution is completely nonintegrable. Rank 2 Goursat bundles are the classical Goursat bundles that were studied by Goursat, Engel, and E. Cartan. Indeed, in the case that M is a 4 dimensional manifold, a rank 2 Goursat bundle on M is exactly an Engel structure.
What about bundles of higher rank? This is the work of Vassiliou in [49] and [50]. Indeed, a generalized Goursat bundle, or simply a Goursat bundle, is given by the following definition.
Definition 46. [49]
A subbundle $V \subset TM$ of derived length $k$ will be called a Goursat bundle of type $\kappa$ if:
(1) $V$ has the refined derived type of a partial prolongation of $J^1(\mathbb{R}, \mathbb{R}^m)$ whose type is $\kappa = \mathrm{deccel}(V)$;
(2) each $\operatorname{Char} V^{(i)}_{i-1}$, $0 < i < k$, is an integrable subbundle whose rank, assumed to be constant on M, agrees with the corresponding rank of $\operatorname{Char} (C^\kappa_m)^{(i)}_{i-1}$;
(3) in case $\Delta_k > 1$, $V^{(k-1)}$ determines an integrable Weber structure whose resolvent bundle is of rank $\Delta_k + \chi_{k-1}$.
Goursat bundles have a particularly nice normal form, and this is the main result of [49]. The paper [49] establishes the local normal form for generalized Goursat bundles constructively. However, in [50], the construction of local coordinates is streamlined into a nearly algorithmic procedure. We'll next outline this procedure, often referred to as procedure contact, and apply it to an example in detail.
2.7. ESF Linearizable Systems and Procedure Contact. In this section we'll present procedure contact from [50] and then apply the procedure to Goursat bundles that represent control systems. There will be additional requirements to make sure that the diffeomorphism created by procedure contact can be chosen to be an ESFT. At the end of this section, we prove a result that highlights the difference between ESFL and SFL systems.
Let $V$ be a Goursat bundle on a manifold M with derived length $k$. The Goursat bundle $V$ will induce one of two possible filtrations of TM; one for the case that $V$ has $\Delta_k > 1$ and the other for the case of $\Delta_k = 1$. To start, we'll assume that $\Delta_k > 1$. The associated filtration of TM for such a Goursat bundle is given by
$$\operatorname{Char} V^{(1)}_0 \subseteq \operatorname{Char} V^{(2)}_1 \subseteq \cdots \subseteq \operatorname{Char} V^{(k-1)}_{k-2} \subseteq R_\Sigma(V^{(k-1)}) \subseteq TM. \tag{82}$$
Similarly, there is also a filtration of $T^*M$ defined by taking the annihilators of all of the above subbundles, with $\Xi^{(j)} := \operatorname{ann}(\operatorname{Char} V^{(j)}_{j-1})$ and $\Upsilon\Sigma_{k-1}(V^{(k-1)}) := \operatorname{ann}(R_\Sigma(V^{(k-1)}))$. Each subbundle in these filtrations is integrable by Definition 46. In particular, these subbundles are diffeomorphism invariants of a given Goursat bundle, and hence their first integrals are also diffeomorphism invariants of the Goursat bundle. Such invariant functions will be used to construct the appropriate contact coordinates. We will not, however, need all first integrals of these subbundles. Notice that $\operatorname{Char} V^{(i)}_{i-1} = \operatorname{Char} V^{(i)} \cap V^{(i-1)}$. For $1 \le j \le k - 1$, let $\{\varphi_{1,j}, \ldots, \varphi_{\rho_j,j}\}$ denote functions whose differentials span the first integrals of the quotient $\Xi^{(j)}_{j-1}/\Xi^{(j)}$. Each $\varphi_{l_j,j}$ is called a fundamental function of order $j$. Now let $\{\varphi_{0,k}, \ldots, \varphi_{\rho_k,k}\}$ generate the first integrals of $\Upsilon\Sigma_{k-1}(V^{(k-1)})$. These will be the fundamental functions of order $k$.
Notice that there are ρ j fundamental functions of order j and ρ k + 1 fundamental functions of order k. The fundamental function ϕ 0,k will usually denote a local coordinate for the source of J κ .
The above theorem gives a way to explicitly construct the coordinates for the Brunovský normal form for a Goursat bundle (in the case that $\Delta_k > 1$). We now give the analogous result in the case that $\Delta_k = 1$. In this case, the Goursat bundle $V$ induces the filtration
$$\operatorname{Char} V^{(1)}_0 \subseteq \cdots \subseteq \operatorname{Char} V^{(k-1)}_{k-2} \subseteq \operatorname{Char} V^{(k-1)} \subseteq \Pi^{k-1} \subseteq TM. \tag{85}$$
In place of the resolvent bundle is a new integrable bundle $\Pi^{k-1} \subset V^{(k-1)}$.
Definition 48. [50]
Let $V$ be a Goursat bundle with $\Delta_k = 1$, $x$ a first integral of $\operatorname{Char} V^{(k-1)}$, and $Z$ any section of $V$ such that $Zx = 1$. Then the fundamental bundle $\Pi^{k-1} \subset V^{(k-1)}$ is defined inductively from $\operatorname{Char} V^{(k-1)}$ and $Z$. In the proof of Theorem 4.2 in [49] it is shown that $\Pi^{k-1}$ is integrable and has corank 2 in TM. Note also that $x$ is a first integral of $\Pi^{k-1}$ by virtue of filtration (85). We can now state the theorem that constructs contact coordinates for $V$ in the case that $\Delta_k = 1$.
These results can be summed up as a procedure for calculating local contact coordinates for a Goursat bundle.
Procedure Contact 1. [50]
Procedure A. Let $V \subset TM$ be a Goursat bundle with derived length $k > 1$ such that $\Delta_k > 1$. Then one can do the following to produce local contact coordinates for $V$:
(1) Build filtration (82) and its associated filtration of $T^*M$.
(2) Compute the resolvent bundle $R_\Sigma(V^{(k-1)})$ and its first integrals, which are the fundamental functions of order $k$.
(3) Compute the fundamental functions $\varphi_{l_j,j}$ of $\Xi^{(j)}_{j-1}/\Xi^{(j)}$.
(4) Fix any fundamental function of order $k$ of the resolvent bundle, denoted $x$, and any section $Z$ of $V$ such that $Zx = 1$.
(5) Furthermore, define the remaining contact coordinates to be
$$z^{l_j, j}_{s+1} = Z\big(z^{l_j, j}_s\big), \quad 0 \le s \le j - 1, \qquad \text{where } z^{l_j, j}_0 = \varphi_{l_j, j}. \tag{89}$$
The local coordinates for $J^\kappa(\mathbb{R}, \mathbb{R}^m)$ are given by $x$, $z^{l_j,j}_0$, and (89). In these coordinates $V$ has the form $C^\kappa_m$.
Procedure B. Let $V \subset TM$ be a Goursat bundle with derived length $k > 1$ such that $\Delta_k = 1$. Then one can do the following to produce local contact coordinates for $V$:
(1) Build filtration (85) and its associated filtration of $T^*M$ up to $\operatorname{Char} V^{(k-1)}$.
(2) Identify a first integral $x$ of $\operatorname{Char} V^{(k-1)}$ such that there is a section $Z$ of $V$ with the property $Zx = 1$. Then construct $\Pi^{k-1}$, thereby completing filtration (85).
(3) Construct the quotient bundles $\Xi^{(j)}_{j-1}/\Xi^{(j)}$.
(4) Compute the fundamental functions $\varphi_{l_j,j}$ of $\Xi^{(j)}_{j-1}/\Xi^{(j)}$.
(5) Compute the fundamental function of order $k$ as a first integral of $\Pi^{k-1}$ that is independent of $x$.
(6) Furthermore, define the remaining contact coordinates to be
$$z^{l_j, j}_{s+1} = Z\big(z^{l_j, j}_s\big), \quad 0 \le s \le j - 1, \qquad \text{where } z^{l_j, j}_0 = \varphi_{l_j, j}. \tag{90}$$
The local coordinates for $J^\kappa(\mathbb{R}, \mathbb{R}^m)$ are given by $x$, $z^{l_j,j}_0$, and (90). In these coordinates $V$ has the form $C^\kappa_m$.
Procedure contact produces a local diffeomorphism equivalence between a Goursat bundle and a contact system. In particular, the first integral $x$ in procedure contact plays the role of the source variable of some $J^\kappa$, so that $dx$ forms the independence condition for the linear Pfaffian system $(J^\kappa, \beta^\kappa_m)$. Therefore, if $V$ represents a control system with $dt$ as the independence condition, then integral curves of $V$ may not be sent to integral curves of $\beta^\kappa_m$ that are parameterized by $t$. The following theorem gives additional conditions that ensure that procedure contact produces an ESFT equivalence between a Goursat bundle $V$ representing a control system and a Brunovský normal form. Before we present some examples, it is important to discuss previous work concerning linearization of control systems via feedback transformations. The work of Gardner, Shadwick, and Wilkens [20] solved the recognition problem of understanding when a given control system is SF equivalent to a Brunovský normal form. Their discovery was that the symmetry pseudogroups of Brunovský forms completely characterize such systems. In [23], [22], Gardner and Shadwick devised an algorithm for transforming an SFL control system into Brunovský normal form. This approach used E. Cartan's method of equivalence and EDS theory and is considered by control theorists to be the best method for SF linearization of a control system.
However, the GS algorithm does have some shortcomings. For instance, it only applies to systems that are SF equivalent to a Brunovský normal form. This means it cannot fully address the question for control systems that are nonautonomous. Secondly, although the algorithm does indeed use the minimal number of integrations required to produce the SFT, one generally has to calculate the full structure equations in order to find the systems whose first integrals are used to construct contact coordinates. On the other hand, with Vassiliou's approach, one can first solve the recognition problem by computing the refined derived type and checking the integrability of the subbundles in (82) and (85), as opposed to calculating the full symmetry pseudogroup of the control system. Procedure contact allows one to find general ESFTs instead of just SFTs; furthermore, one need only compute first integrals of nonempty quotients of sequential subbundles of either (82) or (85), plus the first integrals of the final subbundle in these filtrations (either the resolvent bundle or the fundamental bundle). In this way, procedure contact also accomplishes the construction of an ESFT using the minimal number of integrations possible. Additionally, procedure contact is not restricted to control systems, and may be used to construct contact coordinates for any Goursat bundle (i.e. general diffeomorphism equivalence). The ability of procedure contact to produce ESFTs is especially important for the last steps of the cascade linearization process (see Chapter 4, Section 1). We would also like to remark that the construction of the remaining contact coordinates from procedure contact is reminiscent of the computation of higher order invariants from Olver's method of equivariant moving frames for Lie pseudogroups [43]. Indeed, the author believes that Olver's methods would be yet another way to construct an algorithm for producing contact coordinates for a Goursat bundle. To further highlight the comparison of procedure contact and the GS algorithm, we present the following example of Hunt-Su-Meyer [27], which was then linearized via the GS algorithm in [22]. Note that procedure contact can be executed in MAPLE or another suitable computer algebra program in a systematic way. For the sake of completeness, the following example has been computed with almost no details suppressed.
Example 10. First, we'll rewrite the control system as the distribution $V$.
Step 1: The derived flag of $V$ is computed directly. Hence $V$ has derived length 3, $\mathrm{vel}(V) = \langle 2, 2, 1 \rangle$, and $\mathrm{deccel}(V) = \langle 0, 1, 1 \rangle$. Since $\Delta_k = 1$ we will implement Procedure B. Next we compute the Cauchy bundles for $V^{(1)}$ and $V^{(2)}$. Let $C$ be a section of the Cauchy bundle of $V^{(1)}$, where the coefficients $T, b_1, b_2, c_1$, and $c_2$ of $C$ are smooth functions. It is enough to check $L_C$ applied to the linearly independent sections generating $V^{(1)}$. Doing so, we obtain equations (97)-(101). Equation (97) implies that $b_1 = b_2 = 0$, and either of equations (100) or (101) implies that the remaining coefficients vanish, since the corresponding directions lie in $V^{(2)}$ and not $V^{(1)}$. Therefore, any section of the Cauchy bundle must be of the form $a_1 \partial_{u^1} + a_2 \partial_{u^2}$ for arbitrary functions $a_1$ and $a_2$; that is, $\operatorname{Char} V^{(1)} = \operatorname{span}\{\partial_{u^1}, \partial_{u^2}\}$. Next, let $C$ be a section of the Cauchy bundle of $V^{(2)}$, again with smooth coefficients $T, b_1, b_2, c_1$, and $c_2$. Then applying the Lie derivative $L_C$ to the generating sections of $V^{(2)}$, we find that equations (103) and (106) force $c_1 = 0$ and $T = 0$, respectively. Hence $C = b_1 \partial_{x^3} + b_2 \partial_{x^5} + c_2 \partial_{x^4}$ for arbitrary smooth functions $b_1$, $b_2$, and $c_2$. Notice also that there is no need to check sections of $V^{(2)}$ with components from $\operatorname{Char} V^{(1)}$, since $\operatorname{Char} V^{(1)} \subset \operatorname{Char} V^{(2)}$. From here, it is easily deduced that the refined derived type of $V$ is $[[3, 0], [5, 2, 2], [7, 4, 5], [8, 8]]$. Checking that the relations in Proposition 4 are true and seeing that all the bundles in (85) (up to the fundamental bundle) are integrable, we see that $V$ must be a Goursat bundle. Furthermore, since $dt \in \operatorname{ann} \operatorname{Char} V^{(2)}$, by Theorem 9 we deduce that $V$ must be ESFL. Constructing the filtration of $T^*M$ (excluding the fundamental bundle) induced by $V$, we find (114).
Step 2: Notice that $t$ is a first integral of $\operatorname{Char} V^{(2)}$ and that $X(t) = 1$. Now the fundamental bundle $\Pi^2$ is constructed as in Definition 48.
Steps 3 and 4: There is only one non-empty quotient bundle to be computed for this step, $\Xi^{(2)}_1/\Xi^{(2)} = \{dx^4\}$, and therefore $z^{1,2}_0 = x^4$.
Step 6: Applying the final step of the procedure, we conclude that the remaining contact coordinates are given by (117)-(121). Thus $t$, $z^1_0 = x^5$, $z^2_0 = x^4$, and (117)-(121) define a static feedback transformation of $V$ to the Brunovský normal form $\beta^{\langle 0,1,1 \rangle}$. Next, we will present an example in which Procedure A must be applied.
Example 11. Consider the following control system that arises from selecting c 1 = e 3 = a 0 = a 1 = b 0 = 1, b 3 = −1, and all other constants equal to zero in Example 8.
The control system as a distribution is
The distribution $V$ is a Goursat bundle with type $\kappa = \langle 0, 2 \rangle$, and we will find contact coordinates by applying Procedure A.
Step 1: First we calculate the derived flag, and then the filtration (82), stopping short of the resolvent bundle. Using MAPLE, we find the derived flag, and, computing Cauchy bundles, the bundles $\operatorname{Char} V^{(1)}_0$ and $\operatorname{Char} V^{(1)}$.
Step 2: Next we calculate the resolvent bundle $R_\Sigma(V^{(1)})$. First, we need to compute the quotient $V^{(1)}/\operatorname{Char} V^{(1)} = \bar{V}^{(1)}$. Now we can compute the polar matrix of a section $E$ of $\bar{V}^{(1)}$ (see Definition 25). The polar matrix has less than generic rank when $a_0 = -a_1/u^1$. This relation cuts out the singular variety, and the resolvent bundle $R_\Sigma(V^{(1)})$ follows. Notice that the resolvent bundle is integrable.
Steps 3 and 4: We see that there are no nontrivial quotient bundles $\Xi^{(i)}_{i-1}/\Xi^{(i)}$, and hence no fundamental functions of order less than 2.
Step 5: Now we compute the first integrals of the resolvent bundle. By use of MAPLE, we find the first integrals (129).
Step 6: Let $Z = \frac{1}{u^1} X$, so that $Z(x) = 1$. Then we can construct the remaining contact coordinates by repeated application of $Z$, as in (89). Thus we have found local contact coordinates for this Goursat bundle $V$.
Notice that although we have found contact coordinates that put the Goursat bundle into normal form, it is not via an ESFT. Indeed, this is not possible since dt ∈ ΥΣ(V (1) ), and therefore the time coordinate cannot be singled out as the parameter for integral curves to V in Brunovský normal form. However, as mentioned in Section 3 of Chapter 1, this example can be prolonged twice to an ESFL system. The author has observed this property in a few other examples of control systems and conjectures the following: Conjecture 1. If a control system with at least 2 controls is a Goursat bundle, but cannot be transformed to Brunovský normal form via an ESFT, then there exists a DF linearization of the control system.
2.8. Background on the Euler Operator.
In Chapter 4, we will introduce an operator known as a truncated Euler operator, which will be used to establish the main results of this thesis. In this section, we will introduce some basic properties of the Euler operator from the theory of the calculus of variations. There is a vast literature on the calculus of variations, and much of the theory goes beyond our needs in this thesis. We primarily consider the geometric approach taken in [41] and to some extent [4]. Furthermore, we will restrict ourselves to real valued functions of a single real variable, but we mention that generalizations are straightforward and can be found in any of the works referenced in this section. The motivating problem in the calculus of variations is given by the following: Let $L : J^n(\mathbb{R}, \mathbb{R}) \to \mathbb{R}$ be a smooth function and $A = (a, c)$ and $B = (b, d)$ be two fixed points in the plane. Then for what functions $u(t)$ whose graphs connect $A$ and $B$ does the integral
$$L[u] = \int_a^b L(j^n_t u)\, dt \tag{134}$$
attain a minimum or maximum? In the physics literature, $L$ is the Lagrangian associated to some physical system, and asking that there be a smooth function $u(t)$ that minimizes this functional is to say the physical system possesses a principle of least action.
Definition 50. A function u(t) is an extremal of L if L[u] is a local maximum or local minimum on a space of functions with a given topology containing u(t).
Typically, the function space in question is some type of Banach space, and there are many considerations from functional analysis one has to check to ensure that extremals exist. We start with an analogy to optimization of real valued functions. We will compute a type of derivative of the functional and subsequently check if any associated critical points give rise to optimal solutions. The precise arguments needed to make this idea rigorous will not be presented here, but can be found in any introductory text on the calculus of variations such as [24]. The derivative we will calculate is called a variational derivative.
Definition 51. The variational derivative $\delta L[u]$ is defined by the condition that
$$\frac{d}{d\epsilon}\Big|_{\epsilon=0} L[u + \epsilon v] = \int_a^b \delta L[u](t)\, v(t)\, dt$$
for every smooth $v(t)$ vanishing at the endpoints. Analogously with optimization of functions of real variables, we have the following proposition.
Proposition 5. If $u(t)$ is an extremal of $L$, then $\delta L[u] = 0$.
A simple example is that of the arc length functional
$$L[u] = \int_a^b \sqrt{1 + \dot{u}(t)^2}\, dt,$$
which returns the length of the graph of the function $u(t)$ connecting two fixed points $A = (a, u(a))$ and $B = (b, u(b))$. Indeed, the idea of a variational derivative is to "perturb" the curve $u(t)$ by $\epsilon v(t)$ for any smooth function $v(t)$ such that $v(a) = v(b) = 0$, and some small $\epsilon$. First we compute the derivative of $L[u + \epsilon v]$ with respect to $\epsilon$ and evaluate at $\epsilon = 0$. Doing so, we obtain
$$\frac{d}{d\epsilon}\Big|_{\epsilon=0} L[u + \epsilon v] = \int_a^b \frac{\dot{u}\, \dot{v}}{\sqrt{1 + \dot{u}^2}}\, dt,$$
so that upon performing integration by parts, we arrive at
$$-\int_a^b \frac{d}{dt}\left(\frac{\dot{u}}{\sqrt{1 + \dot{u}^2}}\right) v\, dt = -\int_a^b \frac{\ddot{u}}{(1 + \dot{u}^2)^{3/2}}\, v\, dt,$$
and hence the variational derivative is $\delta L[u] = -\ddot{u}\,(1 + \dot{u}^2)^{-3/2}$. The only way for a function $u(t)$ to be an extremal is if the graph of $u(t)$ is the line segment connecting the two points $A$ and $B$. We now repeat this process for the more general case of (134). Doing so, we obtain
$$\frac{d}{d\epsilon}\Big|_{\epsilon=0} L[u + \epsilon v] = \int_a^b \sum_{i=0}^{n} (-1)^i \frac{d^i}{dt^i}\!\left(\frac{\partial L}{\partial u^{(i)}}\right) v\, dt,$$
where $v(t)$ is any smooth function that forces all boundary terms in the repeated integration by parts to vanish. Here, $d/dt$ is the total derivative in the multivariable calculus sense. The variational derivative is therefore
$$\delta L[u] = \sum_{i=0}^{n} (-1)^i \frac{d^i}{dt^i}\!\left(\frac{\partial L}{\partial u^{(i)}}\right).$$
It turns out that we can use the language of jets to describe the variational derivative in a more geometric way. We will not go too deeply into this subject here; however, [4] is an excellent reference for a modern geometric formulation of the calculus of variations.
The operator
$$D_t = \frac{\partial}{\partial t} + \sum_{l \ge 0} z_{l+1} \frac{\partial}{\partial z_l}$$
is the total derivative operator, and may be considered as a map from $C^\infty(J^n(\mathbb{R}, \mathbb{R}))$ to $C^\infty(J^{n+1}(\mathbb{R}, \mathbb{R}))$ for all $n \ge 0$.
Properly, the total derivative operator is a vector field on an infinite jet bundle. We will not go through the details here; however, we want to emphasize that the action of the total derivative operator on a function $f \in C^\infty(J^n(\mathbb{R}, \mathbb{R}))$, for some non-negative integer $n$, produces no issues of convergence since the function $f$ will have no dependence on jet variables with order greater than $n$. When the total derivative operator is applied to $L$, we find that $D_t(L) \circ (j^{n+1} f)(t)$ agrees with $\frac{d}{dt}\big(L(j^n_t f)\big)$. The variational derivative can also be written in terms of jet coordinates, and we give it a special name.
Definition 54. The variational derivative of a functional with Lagrangian $L$ is obtained by applying an operator to $L$ called the Euler operator. It is given by
$$E(L) = \sum_{l \ge 0} (-D_t)^l \left( \frac{\partial L}{\partial z_l} \right).$$
If $L$ is a function on $J^n(\mathbb{R}, \mathbb{R})$, then $E(L)$ defines a function on $J^{2n}(\mathbb{R}, \mathbb{R})$. When $E(L)$ is restricted to the $2n$-jet of some function $f : \mathbb{R} \to \mathbb{R}$, then the equation
$$E(L)\big(j^{2n}_t f\big) = 0$$
defines an order $2n$ ODE known as the Euler-Lagrange equation.
Although variational questions are of deep interest, we will be primarily concerned with properties of the total derivative operator and the Euler operator. In Chapter 4 we will introduce truncated versions of these operators, and it is important to understand how they differ from each other.
Proposition 6. Let $f \in C^\infty(J^n(\mathbb{R}, \mathbb{R}))$. Then $D_t f = 0$ if and only if $f$ is constant.
Proof. Let $f \in C^\infty(J^n(\mathbb{R}, \mathbb{R}))$. Then
$$D_t f = \frac{\partial f}{\partial t} + \sum_{l=0}^{n} z_{l+1} \frac{\partial f}{\partial z_l}.$$
If $f$ is constant, then $D_t f = 0$ immediately. Thus, assume that $D_t f = 0$. Indeed, this means that
$$-z_{n+1} \frac{\partial f}{\partial z_n} = \frac{\partial f}{\partial t} + \sum_{l=0}^{n-1} z_{l+1} \frac{\partial f}{\partial z_l}. \tag{147}$$
Since $f$ has no dependence on $z_{n+1}$, the right hand side of (147) has no dependence on $z_{n+1}$, so we must have $\partial f/\partial z_n = 0$. This means that $f \in C^\infty(J^{n-1}(\mathbb{R}, \mathbb{R}))$. We can then iterate this argument to conclude that $\partial f/\partial z_i = 0$ for all $0 \le i \le n$. On the final iteration we can then conclude that $\partial f/\partial t = 0$ as well. Therefore, $f$ must be a constant.
Theorem 10. Let $E$ be the Euler operator. Then
$$\ker E = \{\, D_t g : g \in C^\infty(J^n(\mathbb{R}, \mathbb{R})) \text{ for any } n \ge 0 \,\}.$$
We will not prove this theorem; however, a proof may be found in Chapter 4, Section 1 of [41]. Importantly, this means that two different Lagrangians may have the same Euler-Lagrange equations. Indeed, two Lagrangians $L$ and $\tilde{L}$ determine the same Euler-Lagrange equation precisely when $L - \tilde{L} = D_t g$ for some $g$.
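The following sketch illustrates Definition 54 and Theorem 10 using SymPy's Euler-Lagrange helper (assumptions: euler_equations, from sympy.calculus.euler, computes $E(L) = 0$ for one independent variable; the Lagrangians are our own choices, reusing the arc length example above):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
u = sp.Function('u')

# Arc length Lagrangian: E(L) = 0 recovers u'' = 0 (straight-line extremals).
L = sp.sqrt(1 + sp.diff(u(t), t)**2)
print(euler_equations(L, u(t), t))

# Adding a total derivative D_t g with g = u^2 (so D_t g = 2 u u') does not
# change the Euler-Lagrange equations, since D_t g lies in ker E (Theorem 10).
print(euler_equations(L + sp.diff(u(t)**2, t), u(t), t))
```

Both calls print the same Euler-Lagrange equation, consistent with the kernel description of $E$ above.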
Invariant Control Systems and Reconstruction
In this chapter we will explore certain phenomena of control systems that admit particular kinds of symmetries. In particular, we will be interested in studying quotient control systems that arise from a control system's "special" symmetries. We can use solutions for the quotient control system to construct individual trajectories to the original control system by essentially solving an ODE that arises from the group action. We are particularly interested in the case that the associated quotient control system is ESFL.
3.1. Exterior Differential Systems with Symmetry. In this section we discuss some important results concerning EDS with symmetry. The material in this section is primarily from [2] and [3]. Recall from Definition 23 that a vector field X is an infinitesimal symmetry of an EDS on a manifold M if the Lie derivative with respect to X of any form in I is in I. It is possible that an EDS has no nontrivial symmetries whatsoever, and this will usually be the case. However, control systems that arise from application will usually have plenty of symmetries because of some underlying physics. So in terms of applications, studying control systems with symmetry can be quite enlightening. It is also possible that an EDS has enough symmetries such that the associated group of symmetries G for the EDS has the general structure of a Lie pseudogroup, as in Definition 28. We will always restrict our attention to finite dimensional subgroups of symmetries. In particular, we will choose subgroups of small enough dimension so that the subgroup in question has strictly smaller dimension than the dimension of the manifold X(M). The reason for this restriction will become clear in our subsequent discussion. Our first goal is to recognize when a linear Pfaffian system I on a manifold M with a finite-dimensional group of symmetries G has the property that on the quotient manifold M/G, the forms in I descend to another linear Pfaffian system I/G, called the quotient system.
Definition 55. [3]
Let $\mathcal{I}$ be an EDS with symmetry group $G$. The quotient system or reduced system of $\mathcal{I}$ is defined as
$$\mathcal{I}/G = \{\, \psi \in \Omega^*(M/G) : \pi^*\psi \in \mathcal{I} \,\},$$
where $\pi : M \to M/G$ is the orbit projection map.
We are specifically interested in the case that the quotient of a Pfaffian system is again a Pfaffian system. To that end, we need the following definition.
Definition 56. [3] Let $\Gamma$ be a Lie algebra of infinitesimal symmetries of a Pfaffian system $\mathcal{I} = \langle I \rangle$. Then we say that $\Gamma$ is transverse to $\mathcal{I}$ if $\Gamma \cap \operatorname{ann} I = \{0\}$. We say that the symmetries are strongly transverse if $\Gamma \cap \operatorname{ann} I^{(1)} = \{0\}$.
Theorem 11. [3]
Let M be a manifold and consider a Pfaffian system I on M with finite dimensional Lie group of symmetries G such that dim G < dim M. Furthermore, assume that the Lie algebra of infinitesimal symmetries Γ for the action of G on M is strongly transverse to I. Then the quotient system I/G is also a Pfaffian system.
Example 12. Consider the following 5 state and 2 control Pfaffian system ω, with symmetry algebra Γ as in (156), where the $a_i$ are constants, not all zero. Then the Lie algebra of infinitesimal symmetries of ω is strongly transverse to ω (for generic functions $f$ and $g_i$), and hence the quotient system is a Pfaffian system as well.
It is reasonably direct to check that Γ forms a Lie algebra of infinitesimal symmetries for ω. Denote the generating vector fields of (156) as $X_1$ and $X_2$, respectively. Then $L_{X_i}\theta^{j+2} = 0$ for $i = 1, 2$ and $j = 1, 2, 3$, while the Lie derivatives of the remaining generators, recorded through (161), have right hand sides which all clearly belong to ω. Next we need to check that the symmetries are strongly transverse to ω. Let $V = \operatorname{ann} \omega$ and $A_i = g_i(t, x_3, x_4, x_5, u_2)\, x_2 e^{-u_1} a_i$, and notice that $\Gamma \cap \operatorname{ann} \omega = \{0\}$. Next we compute the derived system of $V$. So for sufficiently generic functions $g_1$, $g_2$, and $g_3$, we can see that Γ is strongly transverse to ω. If G is the 2-dimensional Lie group whose action on M is defined by the flows of Γ, then ω/G on M/G will be a Pfaffian system. Local coordinates on M/G can be defined in terms of the invariant functions of Γ. Indeed, the invariant functions of Γ are functionally independent, and hence they may also be chosen to represent local coordinates on the quotient manifold M/G. In these coordinates on M/G we find the quotient system (165), where $h_i = h_i(t, y_1, y_2, y_3, v_1)$, so that $\pi^* h_i = g_i$. Not only is ω/G a Pfaffian system, but it is also representative of a control system on M/G with 3 states and 2 controls. For this example, one can pick the $a_i$ and $h_i$ to make ω/G fit into nearly any of the normal forms presented in [52] (the only exceptions are normal form III of Theorem 2 and possibly the classes determined by case IV of Theorem 2 and case III of Theorem 1). Example 12 has, or can be made to have, other nice properties which we explore in the forthcoming sections of this chapter.
3.2. Control Admissible Symmetry Groups. Given a control system that can be represented by a completely nonintegrable distribution or Pfaffian system on a manifold M, there may be many different kinds of symmetry groups. However, we are interested in a particular class of symmetries that are specific to the study of control systems. We want to make sure that any action by a control system's symmetries will not mix up time, state variables, and control variables in any way inappropriate for control theory purposes. To be precise, we present the following definition.
Definition 57. [17] Let $M \cong_{loc} \mathbb{R} \times X(M) \times U(M)$, let ω be a Pfaffian system representing a control system, and let $\mu : G \times M \to M$ be a Lie transformation group with Lie algebra Γ that has the following properties:
(1) Γ is a Lie algebra of infinitesimal symmetries of ω;
(2) the action of G on M is free and regular;
(3) the action of G preserves the independence condition, i.e. $\mu_g^*(dt) = dt$ for each $g \in G$;
(4) if $\pi : M \to \mathbb{R} \times X(M)$ is the projection map, then $\operatorname{rank}(d\pi(\Gamma)) = \dim(G)$.
We say that such a group G is a control admissible symmetry group. We may abuse this language somewhat by using the word "symmetries" to reference either a control admissible symmetry group or its infinitesimal generators.
In particular, items (3) and (4) of Definition 57 force elements of G to act as ESFTs. It turns out that we have already encountered an example of such a symmetry group.
Example 13. The symmetry group G generated by Γ = {x 2 ∂ x 1 , 2x 1 ∂ x 1 + x 2 ∂ x 2 + ∂ u 1 } for the control system ω in Example 12 is an example of a control admissible symmetry group.
Let $(\epsilon_1, \epsilon_2)$ be local coordinates on the Lie group associated to Γ. To compute the group action from Γ, we simply find the flows of each generator in Γ on M. These flows are
$$\Phi^{X_1}_{\epsilon_1}(t, x, u) = (t,\; x_1 + \epsilon_1 x_2,\; x_2,\; x_3,\; x_4,\; x_5,\; u_1,\; u_2),$$
$$\Phi^{X_2}_{\epsilon_2}(t, x, u) = (t,\; x_1 e^{2\epsilon_2},\; x_2 e^{\epsilon_2},\; x_3,\; x_4,\; x_5,\; u_1 + \epsilon_2,\; u_2),$$
so that the action may be written as the composition
$$\mu((\epsilon_1, \epsilon_2), (t, x, u)) = \big(\Phi^{X_1}_{\epsilon_1} \circ \Phi^{X_2}_{\epsilon_2}\big)(t, x, u) = (t,\; x_1 e^{2\epsilon_2} + x_2 \epsilon_1 e^{\epsilon_2},\; x_2 e^{\epsilon_2},\; x_3,\; x_4,\; x_5,\; u_1 + \epsilon_2,\; u_2).$$
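As a sanity check (a sketch assuming SymPy; only the coordinates moved by the action are tracked), differentiating the action at the identity recovers the generators:

```python
import sympy as sp

e1, e2, x1, x2, u1 = sp.symbols('epsilon1 epsilon2 x1 x2 u1')

# Components of the action on the coordinates (x1, x2, u1) that actually move:
Psi = sp.Matrix([x1*sp.exp(2*e2) + x2*e1*sp.exp(e2),
                 x2*sp.exp(e2),
                 u1 + e2])

# d/d(epsilon1) at the identity gives x2 d/dx1 = X1:
print(Psi.diff(e1).subs({e1: 0, e2: 0}).T)   # Matrix([[x2, 0, 0]])
# d/d(epsilon2) at the identity gives 2 x1 d/dx1 + x2 d/dx2 + d/du1 = X2:
print(Psi.diff(e2).subs({e1: 0, e2: 0}).T)   # Matrix([[2*x1, x2, 1]])
```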
This action has exactly the form of an ESFT for any $\epsilon_1$ and $\epsilon_2$. Thus, the action by any element $g \in G$ on M is by an ESFT. Furthermore, the control admissible symmetry group G is isomorphic to Aff(ℝ), since the 2-dimensional Lie algebra Γ is not abelian. We remark that this group G may not be the entire (possibly pseudo-) group of control admissible symmetries for ω. A class of examples of such systems are those that are ESFL, since they will necessarily be invariant under the pseudogroup of contact transformations of their equivalent Brunovský normal form. Interestingly, there is at least one control system that is provably not ESFL and has an infinite dimensional control admissible symmetry group. Consider the control system in 7 states and 3 controls which is given as Example 2 in [7].
Example 14. The Battilotti-Califano (BC) system is the 7 state, 3 control Pfaffian system $\omega_{BC}$ generated by the forms $\theta^1, \ldots, \theta^7$. The control admissible symmetry group of the BC system is generated by the infinitesimal symmetries $\Gamma = \{X_1, X_2, X_3\}$, which depend on F, any real-valued smooth function on M that has no dependence on the controls, with $F_i = \partial F/\partial x_i$ and $F_t = \partial F/\partial t$.
Thus we see that the BC system has an infinite dimensional control admissible symmetry group due to the dependence on F. Furthermore, using procedure contact in MAPLE, we find that the refined derived type of the BC system is $[[4, 0], [7, 3, 4], [9, 5, 5], [11, 11]]$, which does not agree with the refined derived type of a Goursat bundle presented in Proposition 4. Hence the BC system is not ESFL.
The following theorem is an important result that guarantees that the quotient of a Pfaffian system by a control admissible symmetry group is again representative of a control system.
Theorem 12. [17]
Let G be a control admissible symmetry group of a control system ω on a manifold M such that G is strongly transverse to ω and dim(G) < dim X(M). Then the quotient system ω/G is a control system on M/G and has the same number of controls as ω.
One can easily verify that Example 12 has the property that its quotient system is again a control system. It is also true that there are subgroups of the infinite dimensional control symmetry group of the BC system that are strongly transverse to the BC system. Indeed, choose H to be the subgroup of control admissible symmetries of the BC system generated by $\Gamma_H$ as in (177), which arises from choosing F = 1. Computing the annihilator of $\omega_{BC}$, one can now see that $\Gamma_H$ is strongly transverse to $\omega_{BC}$. Hence $\omega_{BC}/H$ is a control system of 4 states and 3 controls. We see also that the flows generated by $\Gamma_H$ are straightforward to compute. Interestingly, these maps are necessarily ESFTs, as opposed to SFTs, despite the fact that the original system is autonomous.
3.3. ESFL Quotient Systems. In general, we have seen that we can find reductions of control systems and obtain control systems again provided that the group action belongs to the Lie pseudogroup of ESFTs. The resulting control systems in Example 12 are of 3 states and 2 controls and can therefore be classified according to [52]. However, we need not restrict ourselves to control systems whose quotients will fit into a broad classification scheme (as in general, there are presently none for higher numbers of states and controls). There is, however, a nice class of control systems that were introduced in Chapter 1 that are generated by Brunovský/contact differential forms as discussed in Chapter 2. Given a control system ω with control admissible symmetry group G, one may wonder when the resulting control system ω/G is ESFL. This is the content of [17], and we will state some of those key results here.
Definition 58. [17] A relative Goursat bundle $V \subset TM$ is a distribution of derived length $k > 1$ that has the following properties:
(1) the type numbers satisfy the same relations as those listed in Proposition 4;
(2) each $\operatorname{Char} V^{(i)}_{i-1}$, $0 < i < k$, is an integrable subbundle of the appropriate rank;
(3) if $\Delta_k > 1$, then $V^{(k-1)}$ determines an integrable Weber structure whose resolvent bundle is of rank $\Delta_k + \chi_{k-1}$.
Note that a relative Goursat bundle may have a non-trivial Cauchy bundle. Next we state the most important theorem of this section.
Theorem 13. [17]
Let Γ be the Lie algebra of a strongly transverse, control admissible symmetry group G of a control system V. If $\operatorname{Char}(V) = \{0\}$ and $\hat{V} := V \oplus \Gamma$ is a relative Goursat bundle, then the quotient system ω/G is a control system that is locally equivalent to a Brunovský normal form via a diffeomorphism. Furthermore, one can choose the diffeomorphism to be an ESFT if points (1) and (2) of Theorem 9 are true for $\hat{V}$.
This gives a direct way to check whether a control system ω with control admissible symmetry group G admits ESFL quotients.
Definition 59. A relative Goursat bundle will be called an ESF relative Goursat bundle if it satisfies the points (1) and (2) of Theorem 9.
We will once again use the examples that have previously been explored in this chapter. Indeed, let ω be the control system from Example 12 with $f = e^{u_2}$, $g_1 = \ln\!\big(1 + (u_2)^2\big)$, $g_2 = \sin(x_5)$, $g_3 = x_3$, $a_1 = 5$, $a_2 = 0$, and $a_3 = 1$. The control system ω is now the corresponding specialization of (156), with annihilator $V = \operatorname{ann} \omega$. Next we need to calculate the refined derived type of $\hat{V} = V \oplus \Gamma$ in order to apply Theorem 13. We can once again use procedure contact to determine the refined derived type of $\hat{V}$. Doing so, we find $[[5, 2], [7, 4, 5], [8, 8]]$. We then discover that the associated type numbers are those of a relative Goursat bundle by checking the conditions in Proposition 4. Next we check that $\operatorname{Char} \hat{V}^{(1)}_0$ is integrable. To do so, we use MAPLE to calculate the derived flag and the associated Cauchy bundles. We find that $\operatorname{Char} \hat{V}^{(1)}_0$ is integrable, and thus $\hat{V}$ is a relative Goursat bundle. Therefore, by Theorem 13 we can conclude that ω/G is a Pfaffian system representing a control system on M/G that is diffeomorphism equivalent to a Brunovský normal form. Furthermore, the diffeomorphism equivalence is an ESFT since $dt \in \Xi^{(1)}$ and $\{\partial_{u_1}, \partial_{u_2}\} \subset \operatorname{Char} \hat{V}^{(1)}_0$. Of course, one can see this directly by explicitly constructing ω/G as in equation (165), and then checking the refined derived type of ω/G. However, Theorem 13 provides a much simpler determination when an example presents itself with several control admissible symmetry groups. Thus Theorem 13 allows us to avoid needless computation, and subsequent ESFL testing, of several different quotient systems.
The quotient system for this example is given in (191). We can explicitly find the ESF linearization of (191) via procedure contact in MAPLE. The ESFT is given by
$$(t, y_1, y_2, y_3, v_1, v_2) \to (t, z^1_0, z^1_1, z^2_0, z^2_1, z^2_2), \tag{192}$$
where the contact coordinates are given explicitly in terms of $(t, y, v)$. The BC system also admits an ESFL quotient. Let $W = \operatorname{ann} \omega_{BC}$ and $\Gamma_H$ be as in (177). Furthermore, denote $\hat{W} = W \oplus \Gamma_H$. Using MAPLE to calculate the refined derived type of $\hat{W}$, we find $[[7, 3], [10, 6, 8], [11, 11]]$, which are precisely the type numbers of a relative Goursat bundle. We also find that (197) $\operatorname{Char} \hat{W}^{(1)}$ is integrable and is annihilated by $dt$. Hence by Theorem 13, we find that $\omega_{BC}/H$ is ESFL. We will further confirm that $\omega_{BC}/H$ is ESFL by constructing an explicit ESFT to a Brunovský normal form. First, the invariant functions of $\Gamma_H$ are computed, and they form a local coordinate system for the quotient manifold M/H. In these coordinates, the quotient system is given by $\omega_{BC}/H = \langle \theta^1, \ldots, \theta^4 \rangle$. The annihilator of $\omega_{BC}/H$ will be denoted W/H. Checking the refined derived type, we find $[[4, 0], [7, 3, 5], [8, 8]]$, which are the type numbers of the canonical contact system on $J^{2,1}$. Since $\rho_2 = 1$, we construct the filtration (85) for W/H. All these bundles are integrable, and thus we have confirmed that W/H is ESFL. Next, we use procedure contact B to build an ESFT between $\beta^{2,1}$ and $\omega_{BC}/H$. Indeed, the invariants of $\Xi^{(1)}_0/\Xi^{(1)} = \{dy_3, dy_4\}$ are $y_3$ and $y_4$, while the needed invariant of $\Pi^1$ is $y_2$. Hence, the ESFT is built from these invariants via (90).
3.4. Reconstruction of Integral Manifolds and the Contact Sub-connection. In the previous sections of this chapter we have seen when and how one can perform symmetry reduction of a control system so that the resulting reduced system is again a control system. Now, we want to be able to construct solutions to the original control system using the reduced control system. This is the content of [1], [2], [3] for general Pfaffian systems, and [51] when applied to control systems.
We'll start with the following definition.
Definition 60. [51] Let G be a Lie group and let M and $M_G$ be manifolds such that $\pi : M \to M_G$ is a right principal G-bundle, and let $VM$ be the vertical bundle $\ker \pi_*$. Let $\Pi_G \subset TM_G$ be a subbundle. A constant rank distribution $u \to H_u \subset T_uM$ is called a principal sub-connection relative to $\Pi_G$ if the following are true:
(1) $H_u \cap V_uM = \{0\}$ for each $u$;
(2) $d\pi(H_u) = \Pi_G|_{\pi(u)}$;
(3) $(R_g)_* H_u = H_{u \cdot g}$ for all $g \in G$;
(4) $u \to H_u$ is smooth.
If $\Pi_G = T(M_G)$, then this is the usual definition of a connection on a principal G-bundle. Before stating a proposition about principal sub-connections, we first present a result which gives conditions for when an integral manifold of an EDS descends to an integral manifold of a quotient system.
Proposition 8. [1]
Let G be a symmetry group of the EDS $\mathcal{I}$ whose action is regular and strongly transverse to $\mathcal{I}$. If $s : N \to M$ is an integral manifold of $\mathcal{I}$, then $\pi \circ s$ is an integral manifold of $\mathcal{I}/G$, where $\pi : M \to M/G$ is the quotient map.
In Theorem 15, the group directions are encoded by the right-invariant vector fields $R_a = \rho^b_a(\epsilon)\, \partial_{\epsilon^b}$; as a Pfaffian system, the same data may also be written in dual form. Theorem 15 says that the control system may be "decomposed" into a Brunovský normal form plus an underdetermined equation of Lie type, that is, an underdetermined ODE arising from the action of a Lie group on a manifold. Although equations of Lie type have very nice properties, as explained in [11], most of these properties only apply in the case that the associated group has nontrivial isotropy subgroups, i.e. the group does not act freely on M.
The group actions appearing in this thesis are free and hence much of the larger theory of equations of Lie type does not apply. The only exception to this is the case when the Lie group G is solvable. In this situation we are guaranteed to be able to find solutions with a finite number of integrations, but only for fixed trajectories of the quotient system.
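To illustrate (a sketch of ours for the abelian case, consistent with the dual form $\Theta_H$ used for the BC example below): when G is abelian, the reconstruction equation $\Theta_G = 0$ reads $\dot{\epsilon}^a(t) = p^a(t, z(t))$, so for a fixed trajectory $z(t)$ of the quotient system the group component is recovered by the quadratures
$$\epsilon^a(t) = \epsilon^a(0) + \int_0^t p^a(s, z(s))\, ds.$$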
Notice that the map $\hat{\varphi}$ in Theorem 15 is given by $\mu \circ \Gamma_\sigma$, and hence $\hat{\varphi}$ is an ESFT since $\varphi$ is an ESFT and G is a control admissible symmetry group.
Before starting an example, we wish to emphasize the difference between Theorem 15 and Theorem 14, as well as the importance of the contact sub-connection. Theorem 14 should be thought of as a "decomposition" of an integral manifold of an EDS $\mathcal{I}$ via an integral manifold of the quotient system and a Frobenius system induced by both the group action and the integral manifold of the quotient system. In Theorem 15, a normal form for the given control system ω is given via the contact sub-connection. One may still consider Theorem 15 as providing a "decomposition" of integral manifolds, or in this case, trajectories; however, it is the special structure of the quotient system that allows for an explicit formulation of $\Theta_G$ in coordinates.
We now construct the contact connection H_G for the BC system from Example 14. We already have the action of the group H from (181), as well as the projection map π : M → M/H defined by Inv Γ_H. We pick a cross-section of the projection map to be

(222) σ : (t, y_1, y_2, y_3, y_4, v_1, v_2, v_3) → (t, y_1, y_2, −y_3, y_4, 0, 0, 0, v_1, v_2, v_3).

Now by (208), we can conclude that the integral curves of ω/H are given by (223), c(t) = (t, y_1 = Ḟ_1(t), . . .), where F_1(t), F_2(t), and F_3(t) are arbitrary smooth functions. Next we wish to construct the contact connection H_H. We will use the dual form of the contact connection, which in this case is given in terms of Θ_H = dε^a − p^a(t, z) dt for 1 ≤ a ≤ 3. The form of Θ_H follows from the fact that H is abelian. Hence, we need only determine the functions p^a(t, z). In order to find these functions, we use MAPLE to compute μ*_t(ω_BC), where μ_t = (μ ∘ Γ_{H,σ} ∘ c)(t); each of the resulting equations (227)-(229) gives the form of one of the p^a(t, z). This yields the contact connection for the BC system with respect to the symmetry group H, together with its dual form. In the next chapter, we will introduce another linearization that may arise from the contact sub-connection. In particular, in Chapter 4 we will prove new results on the form of the contact sub-connection γ_G that allow one to determine whether γ_G (and hence ω) is or is not EDFL.
Overview of Cascade Feedback Linearization.
In the previous chapter we learned that a control system (M, ω) with control admissible symmetries can be put into a normal form adapted to said symmetries. In particular, if there is a quotient system (M/G, ω G ) that is ESFL, then the original control system is ESFT equivalent to a "linear" system plus an equation of Lie type. In this chapter we will explore the fourth item in the following definition of Cascade Feedback Linearization: Definition 62. Let (M, ω) be a control system with control admissible symmetry group G. Then we say that (M, ω) is cascade feedback linearizable (CFL) if: (1) The right group action of G on M is such that the orbit space M/G is again a manifold and the associated quotient system ω G is again a linear Pfaffian system with the same number of controls.
(2) (M/G, ω G ) is equivalent to a Brunovský normal form on the partial prolongation of a jet space (J κ , β κ ) via an ESFT, where β κ is the Pfaffian system of Brunovský normal forms, i.e. the canonical contact system on J κ .
(3) The original control system (M, ω) is ESFT equivalent to a normal form (J κ ×G, γ G ), where γ G = β κ ⊕ Θ G with Θ G a 1-form associated to the action of G on M. This may be interpreted as the local trivialization of a principal G-bundle over J κ with contact sub-connection 1-form γ G .
(4) The restrictions of (J κ × G, γ G ) to a certain family of submanifolds known as partial contact curves become ESFT equivalent to a Brunovský normal form.
The last item in the definition for a CFL system is possibly the most mysterious, and the main results in the remainder of this thesis concern necessary and sufficient conditions for a control system to have this property. It turns out that, at least in the case dim G = 1, the last step is related to truncated versions of familiar operators from the calculus of variations. These operators and some of their properties are described in Section 4.3 below.
Partial Contact Curve Reduction.
The final requirement for cascade feedback linearization is ESFT equivalence to Brunovský normal forms when the system on the principal G-bundle is restricted to what may be called "partial integral manifolds" of γ G on J κ (R, R m )×G. For m ≥ 2, we can always rewrite a Brunovský normal form as β κ = β ν ⊕β ν ⊥ , where κ = ν + ν ⊥ and m = m ν + m ν ⊥ , so that β ν and β ν ⊥ are the canonical contact systems on J ν (R, R mν ) and J ν ⊥ (R, R m ν ⊥ ), respectively.
Definition 63. We say that a submanifold Σ ν f ⊂ J κ × G is a codimension s partial contact curve of β κ = β ν ⊕ β ν ⊥ if Σ ν f is an integral manifold of β ν and s is the sum of the entries in ν ⊥ . It is described by the image of a map C ν f = j ν f × Id J ν ⊥ × Id G : R × J ν ⊥ × G → J κ × G for a choice of sufficiently differentiable f : R → R mν . In particular, we refer to a system γ G restricted to a family of such submanifolds of the form {Σ ν f : f ∈ C ∞ (R, R mν )} as a partial contact curve reduction of γ G and denote it byγ G .
One may find it odd that we will be restricting our control system to submanifolds. However, we note that the definition of a partial contact curve leaves open an arbitrary choice for the function f. Indeed, restriction to a particular partial contact curve is equivalent to a choice of m_ν controls and the states determined by that choice. If the resulting system γ̄_G is ESFL, then integral curves of γ̄_G can be expressed in terms of an arbitrary (up to some mild genericity conditions) function g : R → R^{m_{ν⊥}}, as well as in terms of the arbitrary choice of partial contact curve (again up to mild genericity conditions to be elaborated on later). Thus no real freedom of choice for the controls is lost by this process, since the choice of a partial contact curve is a choice of m − s controls. We now present an example. We will again use the BC system from Example 14. At the end of Chapter 3, we found the contact sub-connection of the BC system associated to the admissible control symmetry group H whose infinitesimal generators are given by (177). The contact sub-connection has the Brunovský normal form β_{2,1} as a component. We will decompose this Brunovský normal form as β_{2,1} = β_{0,1} ⊕ β_2. Specifically, we will be choosing our reduction along the copy of β_2 given by j²(z^1_0), by choosing z^1_0 = f(t) for an arbitrary smooth function f. This defines a codimension 2 partial contact curve and leads to a reduced contact sub-connection on J^{0,1} × H. The refined derived type of the reduced contact sub-connection is [[3,0], [5,2,3], [6,4,4], [7,5,5], [8,8]], which agrees with a Brunovský normal form of type κ̄ = ⟨1, 0, 0, 1⟩. Since ρ̄_3 = 1, we can construct the filtration (85), whose first bundle is {dt, dε^1, dε^2, dε^3, dz^2_0, dz^3_0}, and observe that all subbundles of the filtration are integrable and contain dt. Therefore, we can conclude that γ̄_G is ESFL to a Brunovský normal form of signature ⟨1, 0, 0, 1⟩ by Definition 46, Theorem 6, and Theorem 9. Thus we conclude that the BC system is CFL with respect to the symmetry group H.
We now mention an important consequence of a control system satisfying Definition 62.
Theorem 16.
[51] If a control system ω on a smooth manifold M is cascade feedback linearizable, then it is explicitly integrable.
As an example demonstrating Theorem 16, we once again use the BC system. First we construct the ESF linearization for (236) via procedure contact B, since ρ̄_k = 1. Using (238), we compute the bundles Ξ. Let j¹g_1(t) and j⁴g_2(t) be arbitrary smooth solutions to the canonical contact system on J^{1,0,0,1}. Inverting (239) and solving for (z, ε) in terms of j¹g_1(t) and j⁴g_2(t), we find a solution to γ̄_H. The contact curve reduction depends on an arbitrary smooth function f(t). Thus, if we append j²z^1_0(t) = j²f(t) to our solution, then we have described all integral curves of γ_H. Furthermore, if we pass through the ESFT φ̂ from Theorem 15, then we can find an explicit solution to the BC system in terms of arbitrary functions and their derivatives alone. That is, no integration is required to describe the trajectories of the BC system.
Truncated Euler Operator.
In the last step of the cascade feedback linearization process, we want to know why a contact sub-connection γ_G admits an ESFL partial contact curve reduction. Towards this goal, we will explore the structure of ESFL reductions more closely by directly analyzing the PDEs associated to the calculation of the refined derived type. Recall that a contact sub-connection on J^κ(R, R^m) × G has the form given previously, where 1 ≤ a ≤ m. Note that the terms involving only jet bundle coordinates look very similar to the notion of a total derivative on an infinite jet bundle [4]. In computing the derived flag of an ESFL reduction for this contact sub-connection, one finds that this truncated total derivative operator is iterated in such a way that a truncated Euler operator naturally appears. Hence, to better understand how the properties of these operators impact the refined derived type of an ESFL reduction, we will spend this section building some results about these types of operators. These truncated operators have properties similar to, but not exactly the same as, their full infinite jet bundle analogues. The differences between the truncated and non-truncated versions arise naturally in the proofs of results in this chapter, especially in Theorem 18.
Definition 64. The truncated total derivative and truncated sub-fiber total derivative operators on J^κ(R, R^m) are denoted D_t and D_{t,j}, respectively; here σ_i is the order of the jet for each z^i_0.
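In place of the missing displays, a form consistent with the computations below (in particular with D_t(z^i_{σ_i}) = 0, which underlies Propositions 9 and 10) is the usual total derivative with its top-order term dropped; we offer it as a plausible reconstruction, not a verbatim quotation:

D_t \;=\; \frac{\partial}{\partial t} \;+\; \sum_{i=1}^{m}\sum_{k=0}^{\sigma_i - 1} z^i_{k+1}\,\frac{\partial}{\partial z^i_k}\,,
\qquad
D_{t,j} \;=\; \frac{\partial}{\partial t} \;+\; \sum_{k=0}^{\sigma_j - 1} z^j_{k+1}\,\frac{\partial}{\partial z^j_k}\,.

Here the sub-fiber operator restricts the sum to the single index j; this restriction is likewise an assumption about the intended definition.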
Proposition 9. The first integrals of the truncated total derivative operator D_t are generated by functions I^i_k, 0 ≤ k ≤ σ_i, constructed recursively in the proof. Proof. The proof follows by induction on k for each i. It is immediate that D_t(I^i_0) = 0, thus establishing the base case. Now assume that the identity holds for some k > 0 and that D_t(I^i_{k−1}) = 0. Then a direct computation (244) shows the claim; in the last line of (244) the invariant I^i_{k−1} appears evaluated at t = t_k. The induction hypothesis means that the recursive formula is true for k − 1, and hence I^i_{k−1}(t_k, z) = z^i_{σ_i−k+1}. Thus, D_t(I^i_k(t, z)) = 0 for all k ≤ σ_i. It is easy to see that all the dI^i_k are linearly independent of each other, and upon a quick dimension count we find that there are precisely Σ_{i≤m}(σ_i + 1) functions I^i_k. The total dimension of J^κ is 1 + Σ_{i≤m}(σ_i + 1); hence, the I^i_k are a complete set of invariants for D_t.
Note that we can explicitly write formulas for the invariant functions in Proposition 9, for each k = 0, . . . , σ_i.
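In place of the missing display, a closed form consistent with the recursion in the proof of Proposition 9 (and annihilated by the truncated total derivative as reconstructed above) is, as our reconstruction,

I^i_k(t,z) \;=\; \sum_{l=0}^{k} \frac{(-t)^l}{l!}\, z^i_{\sigma_i - k + l}\,, \qquad 0 \le k \le \sigma_i\,,

so that, for example, I^i_0 = z^i_{\sigma_i} and I^i_1 = z^i_{\sigma_i - 1} - t\, z^i_{\sigma_i}.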
An important observation about the proposition above is that if f : J^κ → R is a differentiable function such that D_t(f) = 0, then f cannot have dependence on arbitrary functions of t alone. With this in mind, we will now characterize the time-independent invariant functions of D_t; the first two of these are computed directly.

Proposition 10. The t-independent invariant functions of D_t are generated by the functions J^i_1 := z^i_{σ_i} together with functions J^i_k for 2 ≤ k ≤ σ_i. Proof. To prove that each J^i_k is an invariant of D_t, we simply compute D_t(J^i_k) for all 1 ≤ i ≤ m and 2 ≤ k ≤ σ_i. The sum in equation (251) is telescopic and the last term vanishes; hence D_t(J^i_k) = 0. We finish with a dimension count. The J^i_k are invariants of both D_t and ∂_t. Thus, for each i = 1, 2, . . . , m, there are precisely σ_i independent invariants of D_t and ∂_t that may be chosen to generate all invariants of D_t and ∂_t. It is clear that all the dJ^i_k are linearly independent. Therefore, the J^i_k generate the t-independent invariants of D_t.
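In place of the missing display, the following small case illustrates the kind of generator the proposition produces; it is our own example, not necessarily the thesis's normalization. For m = 1 and σ = 2, besides J_1 = z_2 one may take

J_2 \;=\; z_1^2 - 2\, z_0 z_2\,, \qquad D_t(J_2) \;=\; 2 z_1 z_2 - 2 z_1 z_2 \;=\; 0\,,

and J_2 is manifestly independent of t, so it is annihilated by both D_t and ∂_t, as required.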
Next, we define the truncated Euler operator and prove some properties about its kernel. Its definition is in terms of the truncated total derivative operators. The truncated Euler operator appears naturally when computing terms arising in the derived flag of a contact sub-connection. In particular, an understanding of its kernel will lead to insight about how the symmetry group of a control system can act on the manifold and have the property that the control system is CFL with respect to that symmetry.
Definition 65. For each τ_i ≤ σ_i, the truncated Euler operator E_{τ_i} on J^κ(R, R^m) is defined, for any sufficiently differentiable function f on J^κ, in terms of the truncated total derivative operator D_t on J^κ. We can also define the truncated sub-fiber Euler operator E_{τ_i,j} by using the truncated sub-fiber total derivative operator D_{t,j} in place of D_t.
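In place of the missing display, the standard truncated analogue of the Euler-Lagrange operator, consistent with the kernel statements that follow, would read (an assumption on our part)

E_{\tau_i}(f) \;=\; \sum_{k=0}^{\tau_i} (-D_t)^k\!\left( \frac{\partial f}{\partial z^i_k} \right),

with D_{t,j} in place of D_t for the sub-fiber version E_{\tau_i,j}.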
Proposition 11. The kernel of the truncated Euler operator of any order τ_i ≤ σ_i on J^κ(R, R^m) contains the set

(254) K_{E_{τ_i}} := { f(z) = D_t g(z) | g ∈ C¹(J^κ(R, R^m)) and ∂g/∂z^i_⋯ = 0 },

where D_t is the truncated total derivative operator on J^κ(R, R^m).
Proof. The result above effectively follows from repeated applications of an identity for E_{τ_i} ∘ D_t; in the case that l_i = 0 the identity simplifies.

Corollary 1. If f = D_t g, where D_t is the truncated total derivative operator on the jet space J^κ(R, R^m), g is a function on J^κ(R, R^m), and E_{τ_i} is the truncated Euler operator of order τ_i ≤ σ_i, then E_{τ_i}(f) is given by an explicit correction term which does not, in general, vanish.
This is in contrast to the classical theory of the calculus of variations, which has a modern geometric formulation on J^∞, outlined wonderfully in [4]. In that work, the kernel of the Euler operator is the space of all functions on J^∞ that are equal to the total derivative of some other function on J^∞. Proposition 11 and Corollary 1 above highlight the difference between the kernel of the full Euler operator and that of our truncated version. This discrepancy is necessary for our work, and for finding examples of CFL systems; Theorem 18 below makes this fact clear. One more important remark about the truncated Euler operator is required here. If f ∈ C^∞(J^{2n}(R, R^m), R) for some n > 0, then the truncated Euler operator annihilates f if and only if either f is constant or ∂f/∂z^i_k = 0 for all k ≥ n and 1 ≤ i ≤ m.
4.4. PDEs for the Refined Derived Type.
In the first chapter, the idea of the "refined derived type" of a distribution/EDS was discussed. Recall that a given distribution admits a local normal form via diffeomorphism to a standard Goursat bundle if and only if the refined derived type of the distribution in question is the same as that of a Goursat bundle and the appropriate filtration, given by either (85) or (82), is integrable. Furthermore, to ensure that the diffeomorphism is an ESF transformation, the control directions must be contained in the Cauchy characteristic bundle of the first derived distribution, and the 1-form dt must either annihilate the Cauchy characteristic bundle of the penultimate derived distribution or annihilate the resolvent bundle, whichever applies by procedure contact. In this section, we will look at the equivalent PDE conditions for the refined derived type of distributions on J^{σ_1}(R, R) × G that are ESF-equivalent to the Goursat bundle associated with some J^{σ_1+1}(R, R) when dim G = 1. For this case in particular, we mention that the integrable filtration condition will be satisfied automatically due to the necessary rank conditions on the derived flags of our distributions.
Remark 1. We emphasize here that the above-mentioned J σ 1 +1 (R, R) with contact distribution is not a prolongation of the contact distribution on J σ 1 (R, R). It is better to think of the relationship as "anti-prolongation," in that, instead of a derivative being added to the represented control system in Brunovský normal form, a new state is being added via some kind of anti-differentiation.
Below is a proposition that is equivalent to a special case of Theorem 13 in [17]. The purpose of this proposition is to recognize explicit PDE conditions that will be necessary and sufficient for the reduction of a contact sub-connection to be ESFL. As will be seen in the next section, the specific PDE conditions to be satisfied will be conditions on truncated Euler operators of the function associated to the right hand side of the equation of Lie type.
If

(262) H̄_G = {X, ∂_{z^1_{σ_1}}}

is the partial contact curve reduction along the partial contact curves that annihilate the Brunovský forms θ^i_{l_i} for 2 ≤ i ≤ m, then H̄_G is ESFL if and only if the following hold:
• For 1 ≤ k ≤ σ_1, the kth derived flag of the reduced contact sub-connection is spanned by X together with Ȳ_0, . . . , Ȳ_k, where

(264) Ȳ_k = [X, Ȳ_{k−1}] = Q_k ∂_ε + (−1)^k ∂_{z^1_{σ_1−k}}, with Ȳ_0 = ∂_{z^1_{σ_1}},

and the coefficients Q_k are defined recursively, initialized with Q_0 = 0. For k = σ_1 + 1, the derived flag is the full tangent bundle; that is, H̄_G is bracket generating.
• For each 0 ≤ k ≤ σ_1 − 1, the Cauchy characteristics of H̄_G^(k+1) contain Ȳ_k.

Proof. We start by noting that, for the first bullet point, the requirement that the rank of the derived flag of H̄_G increases by one at each step is a necessary condition for ESF linearizability. Indeed, this condition would be sufficient to show that H̄_G is locally diffeomorphism-equivalent to a Goursat bundle. However, for the stricter class of ESF equivalence, the second bullet point adds an additional condition to guarantee sufficiency and necessity. The second bullet point is equivalent to the condition from Theorem 9 that ∂_{z^1_{σ_1}} is a Cauchy characteristic for H̄_G^(1).

In this section we give an important necessary condition for a contact sub-connection to be ESFL via partial contact curve reduction. It is a condition on the truncated Euler operator applied to a function arising from the group action. We will prove the following theorem:

Theorem 17. Let (t, z, ε) = (t, j^{σ_1}z^1_0, . . . , j^{σ_m}z^m_0, ε) be local coordinates for J^κ × G, where dim G = 1. Furthermore, assume that Θ_G = dε − p(z) dt, and let γ̄_G be the reduction of γ_G by codimension 1 partial contact curves defined by j^{σ_l}z^l_0 = j^{σ_l}f_l(t) for all l ≠ i, for some 1 ≤ i ≤ m and m − 1 arbitrary smooth functions f_l(t). If γ̄_G is ESFL, then the truncated Euler operator applied to p̄(z) must be degenerate, in the sense that Ē_{σ_i}(p̄(z)) has no dependence on z^i_k for k ≥ 1.
Proof. Without loss of generality, take i = 1. For notational simplicity we will drop the superscript '1' on all z^1_l variables. Now recall the fundamental bundle in filtration (85) from Chapter 2. In this case, our fundamental bundle Π^{σ_1} is defined recursively, and each Π^k for k ≤ σ_1 must be Frobenius for an ESFL system. Consider the Pfaffian system defined by

(278) F^{σ_1} := ann(Π^{σ_1}) = ⟨dt, dε − ψ⟩.
So, upon computing d²ψ = 0, we quickly conclude that Ē_{σ_1}(p) has no dependence on z_i for 1 ≤ i ≤ n, and must therefore be degenerate.
Theorem 17 provides, for the first time, a coordinate-specific obstruction to ESFL partial contact curve reducibility. In particular, if an ESFL partial contact curve reduction exists, then computation of Ē_{σ_1}(p(z)) can inform one about an appropriate choice of codimension 1 partial contact curve reduction to achieve the ESF linearization. There is also the added bonus that computing Ē_{σ_1}(p(z)) is a straightforward, albeit potentially tedious, calculation.
As an application of Theorem 17, recall Example 5 from Chapter 1. It already has the form of a contact sub-connection on J^{0,2} × G, where G ≅ R. The contact sub-connection may be written down explicitly for i = 1, 2. Reduction along the z^1_0 jet coordinates does not produce an ESFL system. Let z^2_l = f^(l)(t) for all l = 0, 1, 2, so that p̄(z) = e^{z^1} | 2021-02-18T02:15:38.991Z | 2021-02-17T00:00:00.000 | {
"year": 2021,
"sha1": "e4ed0eeb42323fc8b87906f23875979afc8b78bf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e4ed0eeb42323fc8b87906f23875979afc8b78bf",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
73719937 | pes2o/s2orc | v3-fos-license | Graphic-Card Cluster for Astrophysics (GraCCA) -- Performance Tests
In this paper, we describe the architecture and performance of the GraCCA system, a Graphic-Card Cluster for Astrophysics simulations. It consists of 16 nodes, each equipped with 2 modern graphic cards, the NVIDIA GeForce 8800 GTX. This computing cluster provides a theoretical performance of 16.2 TFLOPS. To demonstrate its performance in astrophysics computation, we have implemented a parallel direct N-body simulation program with a shared time-step algorithm in this system. Our system achieves a measured performance of 7.1 TFLOPS and a parallel efficiency of 90% for simulating a globular cluster of 1024K particles. Compared with the GRAPE-6A cluster at RIT (Rochester Institute of Technology), the GraCCA system achieves more than twice the measured speed and an even higher performance-per-dollar ratio. Moreover, our system can handle up to 320M particles and can serve as a general-purpose computing cluster for a wide range of astrophysics problems.
Introduction
The gravitational N-body simulation plays a significant role in astrophysics, including planetary systems, galaxies, galactic nuclei, globular clusters, galaxy clusters, and the large-scale structure of the universe. The number of particles involved (denoted as N) ranges from O(10) in planetary systems to O(10^10) in cosmological simulations. Since gravity is a long-range force, the main challenge of such simulations lies in the calculation of all N² pairwise interactions; therefore, any simulation involving more than 10^6 particles must chiefly employ a mean-field scheme (see below). In the case of a collisional system, the evolution timescale is roughly determined by the two-body relaxation time, which is proportional to N/log(N) (Spitzer, 1987). It implies that the total simulation time approximately scales as O(N³) (Giersz & Heggie, 1994; Makino, 1996). Therefore, the size of such astrophysical simulations is usually limited. For example, for a CPU with 10 GFLOPS (Giga FLoating-point Operations Per Second) sustained performance, it would take more than 5 years to simulate the core collapse in a globular cluster with N = 64K.
A common way to speed up the N 2 force calculation is to adopt the individual time-step scheme (Aarseth, 1963) along with block time-step algorithm (McMillan, 1986;Makino, 1991). The former assigns a different and adaptive time-step to each particle. Since the characteristic time-scale in some astrophysical simulations varies greatly between a dense region and a sparse region, it is more efficient to assign an individual time-step to each particle. The latter normally quantizes the time-steps to the power of two and advances particles group-by-group. Such an algorithm is especially suitable for vector machines and cluster computers, since a group of particles may be advanced in parallel. Moreover, it also reduces the time for predicting the particle attributes.
An alternative approach to improve performance is to replace the direct-summation scheme with an approximate and efficient scheme which has better scaling than O(N²). Examples of such schemes include the Barnes-Hut tree code (Barnes & Hut, 1986), the Particle-Mesh (PM) code (Klypin & Holtzman, 1997), the Particle-Particle/Particle-Mesh (P³M) code (Efstathiou & Eastwood, 1981), and the Tree-Particle-Mesh (TPM) code (Xu, 1995). These schemes are efficient and can deal with a large number of particles. Accordingly, they are often used in large-scale structure simulations. The drawbacks of such schemes are their limited accuracy and their inability to handle close encounters, which makes them inappropriate for studying some physics, such as core collapse in globular clusters.
To achieve both accuracy and efficiency, one needs a high-performance computer with a direct-summation algorithm. The development of GRAPE (GRAvity piPE) (Sugimoto et al., 1990; Makino et al., 2003; Fukushige et al., 2005) was made for this purpose. It is special-purpose hardware dedicated to the calculation of gravitational interactions. By implementing multiple force-calculation pipelines to calculate multiple pairwise interactions in parallel, it achieves ultra-high performance. The latest version, GRAPE-6, comprises 12288 pipelines and offers a theoretical performance of 63.04 TFLOPS. There is also a less powerful version, GRAPE-6A, released in 2005. It is designed for constructing a PC-GRAPE cluster system, in which each GRAPE-6A card is attached to one host computer. A single GRAPE-6A card has 24 force-calculation pipelines and offers a theoretical performance of 131.3 GFLOPS. Some research institutes have constructed such PC-GRAPE clusters (Fukushige et al., 2005; Johnson & Ates, 2005; Harfst et al., 2007; MODEST¹), where the peak performance is reported to be about 4 TFLOPS. However, the main disadvantages of such a system are the relatively high cost, the low communication bandwidth, and the lack of flexibility due to its special-purpose design (Portegies Zwart et al., 2007). By contrast, the graphic processing unit (GPU) now provides an alternative for high-performance computation (Dokken et al., 2005). The original purpose of the GPU is to serve as a graphics accelerator for speeding up image processing and 3D rendering (e.g., matrix manipulation, lighting, fog effects, and texturing). Since these kinds of operations usually involve a great number of data to be processed independently, the GPU is designed to work in a Single Instruction, Multiple Data (SIMD) fashion that processes multiple vertexes and fragments in parallel. Inspired by its advantages of programmability, high performance, large memory size, and relatively low cost, the use of the GPU for general-purpose computation (GPGPU²) has become an active area of research since 2004 (Fan et al., 2004; Owens et al., 2005, 2007). The theoretical performance of GPUs has grown from 50 GFLOPS for the NV40 GPU in 2004 to more than 500 GFLOPS for the G80 GPU (which is adopted in the GeForce 8800 GTX graphic card) in late 2006. This high computing power mainly arises from the fully pipelined architecture plus the high memory bandwidth.
The traditional scheme in GPGPU works as follows (Pharr & Fernando, 2005; Dokken et al., 2005). First, physical attributes are stored in a randomly-accessible memory in the GPU, called texture. Next, one uses high-level shading languages, such as GLSL³, Cg (Fernando & Kilgard, 2003), Brook (Buck et al., 2004), or HLSL⁴, to program the GPU for the desired applications. After that, one uses a graphics application programming interface (API) such as OpenGL⁵ or DirectX⁶ to initialize the computation, to define the simulation size, and to transfer data between PC and GPU memory. Note that the original design of the graphic card is to render calculation results to the screen, which only supports 8-bit precision for each variable. So finally, in order to preserve 32-bit accuracy, one needs to use a method called "frame buffer object" (FBO) to redirect the calculation result to another texture memory for further iterations. In addition, this method also makes the iterations in the GPU more efficient. For example, in many GPGPU applications the entire computation may reside within the GPU memory (except for initializing and storing data on the hard disk), which minimizes the communication between the GPU and the host computer.
__________
¹ see http://modesta.science.uva.nl
In February 2007, the NVIDIA Corporation releases a new computing architecture in GPU, the Compute Unified Device Architecture (CUDA) (NVIDIA, 2007), which makes the general-purpose computation in GPU even more efficient and user friendly.
Compared with the traditional graphics API, CUDA views the GPU as a multithreaded coprocessor with a standard C language interface. All threads that execute the same kernel in the GPU are divided into several thread blocks, and each block contains the same number of threads. Threads within the same block may share their data through an on-chip parallel data cache, which is small but has much lower memory latency than the off-chip DRAMs. So, by storing common and frequently used data in this fast shared memory, it is possible to remove the memory-bandwidth bottleneck for computation-intensive applications.
For the hardware implementation, all stream processors in the GPU are grouped into several multiprocessors. Each multiprocessor has its own shared memory space and works in a SIMD fashion. Each thread block mentioned above is executed by only one multiprocessor, so these threads may share their data through the shared memory. Scientific computations such as the finite-element method and particle-particle interactions are especially suitable for GPGPU applications, since they can easily take advantage of the parallel-computation architecture of the GPU. In previous works, Nyland et al. (2004) and Harris (2005) implemented the N-body simulation in GPU but with limited performance improvement. More recently, a 50-fold speedup over a Xeon CPU was achieved by using the GeForce 8800 GTX graphic card and the Cg shading language, but it is still about an order of magnitude slower than a single GRAPE-6A card. Elsen et al. (2007) achieved nearly 100 GFLOPS sustained performance by using the ATI X1900XTX graphic card and the Brook shading language. Hamada and Iitaka (2007) proposed the "Chamomile" scheme by using CUDA, and achieved a performance of 256 GFLOPS for the acceleration calculation only. Belleman et al. (2007) proposed the "Kirin" scheme, also using CUDA, and achieved a performance of 236 GFLOPS for acceleration, jerk, and potential calculations. Although the works of Hamada & Iitaka and Belleman et al. have outperformed what can be achieved by a single GRAPE-6A card, these are either a sequential code that applies to a single GPU (Hamada & Iitaka, 2007) or a parallel code that has only been tested on a 2-GPU system (Belleman et al., 2007).
Consequently, their performances are still not comparable to those of GRAPE-6 and GRAPE-6A clusters.
Based on these works, we have built a 32-GPU cluster named GraCCA, which is compatible with CUDA and has achieved a measured performance of about 7 TFLOPS.
In this paper, we describe the architecture and performance of our GPU cluster. We first describe the hardware architecture in detail in Section 2, and then our implementation of parallel direct N-body simulation in Section 3. We discuss the performance measurements in Section 4. In Section 5, we give a theoretical performance model, and finally a discussion of comparison with GRAPE, stability of GraCCA, and some future outlook are given in Section 6.
GPU cluster
In this section, we first show the architecture of GraCCA, and then discuss the bandwidth measurement between PC and GPU memory, an issue that can be the bottleneck of the overall performance.
Table 1. The main components of a single node in GraCCA. Apart from graphic cards, the other components are similar to those of a general PC cluster.
Each graphic card is installed in a PCI-Express x16 slot, and each node is connected to a gigabit Ethernet switch. Figs. 2 and 3 are photos of our GPU cluster and of a single node, respectively. We use MPI as the API to transfer data between different CPU processes (including two processes in the same node); each process drives one GPU. For transferring data between PC memory and GPU memory, we adopt the CUDA library as the API. Since the GPU is capable of ultra-fast computation, the communication between PC and GPU memory could be a bottleneck if it is not sufficiently optimized. We illustrate this point in the next section. By installing two graphic cards in a single PC, we maximize the performance of a single computing node. Moreover, as shown in Section 2.2, this architecture also utilizes the total bandwidth between PC and GPU memory more efficiently.
Bandwidth between PC and GPU memory
Data transfer between PC and GPU memory contains two parts: from PC to GPU memory (downstream) and from GPU to PC memory (upstream). Although the theoretical bandwidth of PCI-Express x16 is 4 GB/s in each direction, it is well known that for the traditional OpenGL API the effective bandwidth is asymmetric. So it is more prudent to measure them separately.
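As an illustration of how such a measurement can be carried out, the following sketch times repeated cudaMemcpy calls with CUDA events. It is our own sketch, not the code used for Figs. 4 and 5; the buffer size and iteration count are arbitrary choices.

#include <cstdio>
#include <cuda_runtime.h>

// Time a host<->device copy of 'bytes' bytes, repeated 'iters' times,
// and return the effective bandwidth in GB/s.
static float measure_bw(void *dst, void *src, size_t bytes,
                        cudaMemcpyKind kind, int iters) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dst, src, bytes, kind);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return (bytes * (float)iters / (ms * 1.0e-3f)) / 1.0e9f;
}

int main() {
    const size_t bytes = 32 << 20;      // 32 MB test buffer (arbitrary)
    const int    iters = 100;
    void *h_buf, *d_buf;
    cudaMallocHost(&h_buf, bytes);      // pinned host memory
    cudaMalloc(&d_buf, bytes);
    printf("downstream (host->GPU): %.2f GB/s\n",
           measure_bw(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, iters));
    printf("upstream   (GPU->host): %.2f GB/s\n",
           measure_bw(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost, iters));
    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}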
Direct N-body simulation in GPU cluster
To demonstrate the practicability and performance of GraCCA, we have implemented the direct N-body simulation in this system. In the following, we first describe the single-GPU implementation in detail, followed by the parallel algorithm.
Single-GPU implementation
To implement the gravitational N-body calculation in a single GPU, we follow the basic ideas of the Chamomile scheme (Hamada and Iitaka, 2007) and the Kirin scheme (Belleman et al., 2007), but with some modifications and a more detailed description.
As described in Section 1, one of the most important features of CUDA and the GeForce 8800 GTX graphic card is the small but fast on-chip shared memory. It is the key to fully exploiting the computing power of the GPU. In addition, all threads executed in the GPU are grouped into several thread blocks, and each of these blocks contains the same number of threads. For simplicity, we use the term "Grid Size (GS)" to denote the number of thread blocks, and "Block Size (BS)" to denote the number of threads within each thread block. Therefore, the total number of threads is given by GS*BS.
In our current implementation, both BS and GS are free parameters which should be given before compilation. Also note that only threads within the same thread block may share their data through shared memory.
In our current implementation, only the acceleration and its time derivative (jerk) are evaluated by the GPU. Other parts of the program, such as advancing particles, determining the time-step, and decision making, are performed in the host computer. Fig. 6 shows the schematic diagram of our single-GPU implementation for the acceleration and jerk calculations. Following the convention in N-body simulation, interacting particles are divided into i-particles and j-particles. The main task of the GPU is to calculate the acceleration and jerk on i-particles exerted by j-particles according to Eqs. (1) and (2), where m, r_ij, v_ij, a, j, and ε are the mass, relative position, relative velocity, acceleration, jerk, and softening parameter, respectively. To make it more clear, we use P_ks to denote the pairwise interaction between the s-th i-particle and the k-th j-particle. So, to match the CUDA programming model and extract the maximum performance of the GPU, all N² pairwise interactions are grouped into (N/BS)² groups (denoted as G_mn, m = 1, 2, ..., N/BS, n = 1, 2, ..., N/BS), as expressed by Eq. (3). Each group G_mn contains BS² pairwise interactions between i-particles and j-particles. Groups within the same column are computed by the same thread block sequentially.
[Figure: The interaction groups computed by Block(1) are highlighted with a blue border. The red regions in the i-particle and j-particle arrays are the particles used to compute the group G_1,1.]
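Equations (1) and (2) are missing from the text above; the standard softened forms, which match the listed variables and the 57-operation count adopted in Section 4 (e.g., following Makino & Aarseth, 1992), read

\mathbf{a}_i \;=\; \sum_{j} m_j\, \frac{\mathbf{r}_{ij}}{\left(r_{ij}^2+\varepsilon^2\right)^{3/2}}\,, \qquad
\mathbf{j}_i \;=\; \sum_{j} m_j \left[ \frac{\mathbf{v}_{ij}}{\left(r_{ij}^2+\varepsilon^2\right)^{3/2}} \;-\; \frac{3\left(\mathbf{v}_{ij}\cdot\mathbf{r}_{ij}\right)\mathbf{r}_{ij}}{\left(r_{ij}^2+\varepsilon^2\right)^{5/2}} \right],

with r_ij = r_j − r_i and v_ij = v_j − v_i; we present these as the conventional expressions rather than as a verbatim quotation of the paper's displays.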
Each thread block is comprised of BS threads, and each Thread(s) evaluates the acceleration and jerk on the s-th i-particle exerted by j-particles. For example, P_2,1, P_3,1, …, P_BS,1 are evaluated by Thread(1); P_1,2, P_3,2, …, P_BS,2 are evaluated by Thread(2), etc. This kind of computation decomposition fully exploits the immense parallel computing power of the modern GPU. Besides, since threads within the same thread block may share their data through the fast shared memory, each thread only needs to load one j-particle (the s-th j-particle) into the shared memory. This reduces the number of data transfers between device memory and shared memory, the former of which has much higher memory latency and lower bandwidth than the on-chip memory. Moreover, since the number of pairwise force calculations in G_mn is proportional to BS², but the number of data loads from device memory to shared memory is proportional to BS, we can further eliminate this memory-bandwidth bottleneck by choosing a larger BS (128, for example). On the other hand, because different threads evaluate the force on different i-particles, we may store the information of i-particles in per-thread registers instead of shared memory.
The calculation procedure of a force loop may be summarized as follows (a CUDA sketch is given after this list): (1) The host computer copies the data of i-particles and j-particles from PC memory to device memory through the PCI-Express x16 slot.
(2) Each thread loads the data of i-particle into registers based on one-to-one correspondence.
(3) Each thread loads the data of j-particle into shared memory based on one-to-one correspondence.
(4) Each thread block evaluates one group of pairwise interactions (G mn ).
(7) GPU copies the acceleration and jerk on i-particles from device memory back to PC memory through PCI-Express x16 slot.
Note that by iterating over j-particles first, all data of i-particles may stay in the same registers during the calculation of a whole column of G_mn. Moreover, when switching from one G_mn to another, each thread only needs to reload 7 variables (mass, position, and velocity of j-particles) instead of 12 (position, velocity, acceleration, and jerk of i-particles) from the device memory. So, it reduces the communication time between device memory and on-chip memory, and results in better performance, especially for small numbers of particles.
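To make the steps above concrete, here is a minimal CUDA kernel in the spirit of the description. It is our sketch, not the authors' code: the Particle layout, the assumption that the particle number is a multiple of BS, and the launch configuration in the comments are all illustrative choices.

// Launch as: force_kernel<<<GS, BS, BS * sizeof(Particle)>>>(...),
// with GS * BS equal to the number of i-particles.
struct Particle {
    float4 posm;   // x, y, z, mass
    float4 vel;    // vx, vy, vz, (unused)
};

__global__ void force_kernel(const Particle *j_ptcl, const Particle *i_ptcl,
                             float4 *acc, float4 *jerk, int nj, float eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // step (2): i-particle in registers
    float4 pi = i_ptcl[i].posm;
    float4 vi = i_ptcl[i].vel;
    float3 a  = {0.0f, 0.0f, 0.0f};
    float3 jk = {0.0f, 0.0f, 0.0f};

    extern __shared__ Particle sh[];                // one tile of BS j-particles
    for (int tile = 0; tile < nj; tile += blockDim.x) {
        sh[threadIdx.x] = j_ptcl[tile + threadIdx.x];  // step (3): cooperative load
        __syncthreads();
        for (int k = 0; k < blockDim.x; ++k) {         // step (4): BS^2 interactions
            float4 pj = sh[k].posm, vj = sh[k].vel;
            float dx  = pj.x - pi.x, dy  = pj.y - pi.y, dz  = pj.z - pi.z;
            float dvx = vj.x - vi.x, dvy = vj.y - vi.y, dvz = vj.z - vi.z;
            float r2   = dx * dx + dy * dy + dz * dz + eps2;
            float rinv = rsqrtf(r2);
            float mr3  = pj.w * rinv * rinv * rinv;    // m_j / r^3
            float rv   = 3.0f * (dx * dvx + dy * dvy + dz * dvz) / r2;
            // the i = j pair contributes zero because dx = dvx = 0 (eps2 > 0)
            a.x  += mr3 * dx;   a.y  += mr3 * dy;   a.z  += mr3 * dz;
            jk.x += mr3 * (dvx - rv * dx);
            jk.y += mr3 * (dvy - rv * dy);
            jk.z += mr3 * (dvz - rv * dz);
        }
        __syncthreads();
    }
    acc[i]  = make_float4(a.x,  a.y,  a.z,  0.0f);  // step (7) copies these back
    jerk[i] = make_float4(jk.x, jk.y, jk.z, 0.0f);
}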
Finally, to integrate the orbits of particles, we currently adopt the fourth-order Hermite scheme (Makino & Aarseth, 1992) with the shared time-step algorithm. For time-step determination, we first use the formula of Aarseth (1985), where ν is an accuracy parameter, to evaluate the time-step dt for each particle, and then adopt the minimum of these as the shared time-step.
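The formula itself is missing here; the standard Aarseth (1985) criterion, which we take to be the one intended, reads

\Delta t_i \;=\; \sqrt{ \nu\, \frac{ |\mathbf{a}_i|\,|\mathbf{a}^{(2)}_i| + |\dot{\mathbf{a}}_i|^2 }{ |\dot{\mathbf{a}}_i|\,|\mathbf{a}^{(3)}_i| + |\mathbf{a}^{(2)}_i|^2 } }\,,

where a^(2) and a^(3) denote the second and third time derivatives of the acceleration.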
Parallel algorithm
To parallelize the direct N-body simulation in GraCCA, we adopt the so-called "Ring Scheme". In this scheme, all GPUs are conceptually aligned in a circle. Each GPU contains a subset of N/N_gpu i-particles (denoted as Sub-I, where N_gpu denotes the total number of GPUs). Besides, the j-particles are also divided into N_gpu subsets (denoted as Sub-J), and a force loop is composed of N_gpu steps. During each step, each GPU evaluates the force from a Sub-J on its own Sub-I, and then transfers the data of Sub-J between different GPUs.
The calculation procedure of a force loop may be summarized as follows (see the host-side sketch after this list): (1) Initialize the acceleration and jerk backup arrays of each Sub-I as zeros.
(2) Copy the mass, position, and velocity arrays of each Sub-I to that of Sub-J.
(3) Use GPU to compute the acceleration and jerk on Sub-I exerted by the current Sub-J.
(4) Use CPU to sum the computing results of GPU with the backup arrays.
(5) Send the data of Sub-J to the GPU in the clockwise direction and receive the data of Sub-J from the GPU in the counterclockwise direction. Replace the data of the current Sub-J with the received data. Note that in this scheme, we may use the non-blocking send and receive (ISEND and IRECV in MPI) to start the data transfer before step (4). The next force loop will wait until the data transfer is complete. By doing so, we can reduce the network communication time since it is partially overlapped with the force computation (Dorband et al., 2003).
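A host-side sketch of one force loop under this scheme is given below. It assumes each particle is packed as 7 floats (mass, position, velocity), matching the 28 bytes per j-particle quoted in Section 5.1; gpu_force is a hypothetical wrapper around the kernel of Section 3.1, and the buffer layout is our own choice.

#include <mpi.h>
#include <algorithm>
#include <cstring>

// Hypothetical host-side wrapper around the single-GPU force kernel.
void gpu_force(const float *subi, const float *subj, int n,
               float *acc_part, float *jrk_part);

// One force loop of the ring scheme; each of the nprocs ranks drives one
// GPU and owns n = N/N_gpu particles.
void ring_force_loop(float *subi, float *subj, float *subj_next,
                     float *acc, float *jrk, float *acc_part, float *jrk_part,
                     int n, int rank, int nprocs) {
    int left  = (rank - 1 + nprocs) % nprocs;       // counterclockwise neighbour
    int right = (rank + 1) % nprocs;                // clockwise neighbour
    std::memset(acc, 0, 3 * n * sizeof(float));     // step (1): clear backup arrays
    std::memset(jrk, 0, 3 * n * sizeof(float));
    std::memcpy(subj, subi, 7 * n * sizeof(float)); // step (2): Sub-J := Sub-I

    for (int s = 0; s < nprocs; ++s) {
        MPI_Request req[2];
        if (s < nprocs - 1) {                       // step (5), started early so the
            MPI_Isend(subj,      7 * n, MPI_FLOAT,  // transfer overlaps step (3)
                      right, 0, MPI_COMM_WORLD, &req[0]);
            MPI_Irecv(subj_next, 7 * n, MPI_FLOAT,
                      left,  0, MPI_COMM_WORLD, &req[1]);
        }
        gpu_force(subi, subj, n, acc_part, jrk_part); // step (3): GPU partial forces
        for (int i = 0; i < 3 * n; ++i) {             // step (4): CPU accumulation
            acc[i] += acc_part[i];
            jrk[i] += jrk_part[i];
        }
        if (s < nprocs - 1) {
            MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
            std::swap(subj, subj_next);             // received tile becomes current
        }
    }
}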
Performance
In this section, we discuss the performance of GraCCA for direct N-body simulation. For all performance-testing simulations, we used the Plummer model with equal-mass particles as initial condition and adopted the standard units (Heggie & Mathieu, 1986), where gravitational constant G is 1, total mass M is 1, and total energy E is -1/4. This initial condition is constructed by using the software released by Barnes (1994). For software, we used Linux SMP kernel version 2.6.16.21-0.8, gcc version 4.1.0, CUDA Toolkit version 0.8 for Linux x86 32-bit, CUDA SDK version 0.8.1 for Linux x86 32-bit, and Linux Display Driver version 97.51.
In the following, we first discuss the optimization of GS and BS. We then assess the performance of the single-GPU system, and finally the performance of the multi-GPU system.
Optimization of GS and BS
As mentioned in Section 3.1, both GS (number of thread blocks) and BS (number of threads within each thread block) are free parameters in our current implementation.
In theory, in order to maximize the utilization of GPU resources, both GS and BS should be chosen as large as possible. But on the other hand, a larger BS would introduce a higher cumulative error (Hamada and Iitaka, 2007). So, it is necessary to determine the optimized values of GS and BS. The GeForce 8800 GTX has exactly 16 multiprocessors. Since each thread block is executed by only one multiprocessor, executing a kernel in the GPU with GS ≦ 16 will result in 16−GS "idle" multiprocessors. On the other hand, for GS = n*16, n = 2, 3, 4, ..., each multiprocessor processes more than one thread block concurrently, which enables a more efficient utilization of GPU resources. As shown in Fig. 8, a single-GPU system is able to achieve its maximum performance for GS ≧ 32.

For the GRAPE system (Makino et al., 2003; Fukushige et al., 2005), a time estimate is provided to evaluate the system speed. In a similar fashion, the total calculation time per step for a single-GPU system can be expressed as

T_single = T_host + T_PCIe + T_GPU,    (5)

where T_host is the time for the host computer to predict and correct particles, as well as to determine the next time-step, T_PCIe is the time for transferring data between PC and GPU memory through the PCI-Express x16 slot, and T_GPU is the time for the GPU to calculate the acceleration and jerk. It is clear from Fig. 10 that for N ≧ 4K, the performance curve has a slope of 2, which is the signature of the N² calculation. It also verifies that for large N, both T_host and T_PCIe are negligible. But for N < 4K, an insufficient number of particles results in inefficient utilization of GPU resources. Moreover, the time for communication between PC and GPU memory and for computation in the host computer become non-negligible. These factors further reduce the performance. We will describe the performance modeling for single-GPU calculation in detail in Section 5.1. The measured performance in GFLOPS is then estimated as

GFLOPS = N_FLOP · N² / T_single,    (6)

where N_FLOP is the total number of floating-point operations for one pairwise acceleration and jerk calculation, and T_single is the average calculation time per step in the single-GPU system. Here we adopt N_FLOP = 57 (Makino et al., 2003; Fukushige et al., 2005) in order to compare with the result of the GRAPE system. As discussed above, the performance drops for small values of N (N < 4K) due to data communication, host-computer computation, and insufficient threads in the GPU. On the other hand, for N ≧ 16K, the single-GPU system approaches its peak performance, which is about 250 GFLOPS for acceleration and jerk calculations. We note that the performance of the single-GPU system is limited by the computing power of the GPU itself. Also note that here we use T_single instead of T_GPU in Eq. (6) for calculating GFLOPS. It makes Fig. 11 more practical and illustrative since T_single and T_GPU could be significantly different for small N (see Fig. 18).

Multi-GPU performance

The total calculation time per step for a multi-GPU system can be expressed as

T_multi = T_host + T_PCIe + T_GPU + T_net,    (7)

where T_host, T_PCIe, and T_GPU are defined in the same way as in Section 4.2, and T_net is the time for transferring data between different nodes through the gigabit network. Note that for a dual-GPU system, the result is measured with two GPUs installed in the same node, which provides higher communication bandwidth between the two GPUs. In Fig. 12, all six curves have a slope of 2 for N/N_gpu ≧ 4K, which is consistent with Fig. 10. It shows that for large N, T_host, T_PCIe, and T_net are all negligible, giving T_multi ~ T_GPU. We will describe the performance modeling for multi-GPU calculation in detail in Section 5.2. Fig. 13 shows the results of performance measurements in GFLOPS as a function of the total number of particles for different numbers of GPUs. We can see that for N/N_gpu ≧ 16K, each system with a different number of GPUs approaches its peak performance. Moreover, it demonstrates a great scalability of our system. The maximum performance of the multi-GPU system is still limited by the computing power of the GPU itself. For the 32-GPU case, the system achieves a total computing power of 7.151 TFLOPS, in which case each GPU achieves a performance of 223 GFLOPS. It is about 89 percent of the peak performance of a single-GPU system.
In Figs. 12 and 13, the crossover points indicate that a system with more GPUs becomes marginally faster or even slower than a system with fewer GPUs. All these points appear when N/N_gpu = 1K. This result is consistent with Fig. 11, since when the number of particles changes from 2K to 1K, the performance of a single GPU drops by more than 50%. To demonstrate the GPU's ability to conduct the most time-consuming astrophysical computation, in Fig. 16 we show the temporal evolution of the core density in the Plummer model up to N = 64K. The core density is estimated by using the method proposed by Casertano and Hut (1985), with the faster convergence property suggested by McMillan et al. (1990). The time (x axis) in Fig. 16 is scaled by 212.75*log(0.11N)/N (Giersz & Heggie, 1994). Note that to capture the post-collapse behavior (e.g., gravothermal oscillation), one needs an alternative integration scheme such as KS regularization (Mikkola & Aarseth, 1998) to handle close two-body encounters and stable binaries (Makino, 1996). Currently this scheme is not yet implemented in our program.
Performance modeling
In this section, we construct a performance model of direct N-body simulation in GraCCA, and compare that to the measured performance. The performance model we adopt is similar to that of GRAPE-6 (Makino et al., 2003) and GRAPE-6A (Fukushige et al., 2005), but modified to fit the architecture of our system. In the following, we first present a performance model of the single-GPU system, and then follow the model of cluster system.
Performance modeling of single-GPU system
As mentioned in Section 4.2, the calculation time per step for direct N-body simulation in a single-GPU system (T_single) may be modeled by Eq. (5): T_single(N) = T_host(N) + T_PCIe,single(N) + T_GPU(N). T_host is the time spent on the host computer. In our current implementation, it may be written as

T_host = T_pred + T_corr + T_timestep,    (9)

where T_pred is the time for prediction, T_corr is the time for correction, and T_timestep is the time for time-step determination. All of these operations are roughly proportional to the number of particles, so we may rewrite Eq. (9) as

T_host(N) = t_host · N,    (10)

where the lower-case letter "t" represents the computation time per particle. This number is mainly determined by the computing power of the host computer, and is roughly the same for different N. So in our performance model, we take t_host as a constant and T_host(N) is directly proportional to N. T_PCIe,single is the time spent on data transfer over the PCI-Express x16 lanes. Since the effective bandwidth between PC and GPU memory in the single-GPU case is different from the multi-GPU case (see Section 2.2), here we use the subscript "single" to emphasize the difference. T_PCIe,single may be written as

T_PCIe,single = T_i + T_j + T_force,    (11)

where T_i is the time for transferring i-particle position and velocity downstream to GPU memory, T_j is the time for transferring j-particle mass, position, and velocity downstream to GPU memory, and T_force is the time for transferring i-particle acceleration and jerk upstream to PC memory. The lower-case "t" again represents the communication time per particle; these may be written as t_i = 24/BW_down, t_j = 28/BW_down, and t_force = 24/BW_up, where BW_down and BW_up represent the downstream and upstream bandwidth, respectively. So, by measuring BW_down and BW_up (see Figs. 4 and 5), we may estimate t_PCIe,single. Finally, T_GPU is the time spent on force calculation in the GPU. It may be expressed as

T_GPU(N) = t_pair · N²,    (12)

where t_pair is the calculation time for a single pairwise interaction. Note that T_GPU scales with N². The measured results of t_host, t_PCIe,single, and t_pair are recorded in Table 2.

Table 2. Measured results of performance parameters in single- and multi-GPU systems (in units of milliseconds).
t_host = 2.746 × 10⁻⁴, t_PCIe,single = 4.745 × 10⁻⁵, t_PCIe,multi = 7.606 × 10⁻⁵, t_net = 2.725 × 10⁻⁴, t_pair = 2.139 × 10⁻⁷

Fig. 17 shows the wall-clock time per step predicted by this performance model (denoted as model 1), along with the measured performance for comparison (open triangles are the measured results). The agreement between model 1 and the measured result is quite good, except for the case with a small number of particles (N < 4K). The discrepancy originates from the lower
PCI-Express x16 bandwidth and the less efficient utilization of GPU resources at small N (see Figs. 4, 5, and 11). A refined model (model 2) takes the N-dependence of these parameters into account; relative to model 1, this changes the predicted performance for large N by less than 2%. The wall-clock time per step predicted by model 2 is also presented in Fig. 17. It is clear that for N < 4K, model 2 is in better agreement with the measured performance than model 1. Fig. 18 shows the relative ratios of T_host, T_PCIe,single, and T_GPU in model 2. Since T_GPU scales with N², while T_host and T_PCIe,single scale with N, T_GPU/T_single increases with N.
This feature is clearly verified in Fig. 18: T_GPU/T_single reaches 71% for N = 4K and 91% for N = 16K. These predicted ratios are consistent with the timing measurements discussed in Section 4.2.
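For concreteness, the model-1 prediction can be evaluated directly from the constants in Table 2, assuming T_host and T_PCIe,single scale linearly with N and T_GPU = t_pair N², as stated above:

// Model-1 wall-clock time per step (ms) for a single GPU, from Table 2.
double t_single_model(double N) {
    const double t_host = 2.746e-4;  // ms per particle, host computation
    const double t_pcie = 4.745e-5;  // ms per particle, PCIe transfers
    const double t_pair = 2.139e-7;  // ms per pairwise force calculation
    return (t_host + t_pcie) * N + t_pair * N * N;
}
// Example: t_single_model(65536) is ~940 ms, dominated by the N^2 term,
// corresponding to 57 * N^2 / T ~ 260 GFLOPS via Eq. (6).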
Performance modeling of multi-GPU system
Following the performance model of the single-GPU system, the calculation time per step in our GPU cluster may be modeled analogously to Eq. (5), with an additional network term, T_multi = T_host + T_PCIe,multi + T_GPU + T_net, where n ≡ N/N_gpu is the number of i-particles held by each GPU. The subscripts "multi" in t_PCIe,multi and f_PCIe,multi are used to highlight the difference in bandwidth between single- and multi-GPU systems; for the latter case, the effective bandwidth is modified by the efficiency factor f_PCIe,multi. T_net is the time for transferring the data of j-particles through the gigabit network. In the ring communication topology, T_net may be expressed as

T_net = t_net · N,    (15)

where t_net is the time for transferring a single particle. It may be expressed as t_net = 28/BW_net, where BW_net is the average measured bandwidth of the gigabit network. The estimated values of t_PCIe,multi and t_net are recorded in Table 2. Note that for the dual-GPU case, since the two GPUs are installed in the same node, we set T_net = 0 in our performance model.
Discussion
In this section, we address the comparison of performance between GraCCA and the GRAPE system, the stability of GraCCA, and finally a discussion of future work.
Multi-GPU system
The GRAPE-6 system was built in 2002 (Makino et al., 2003). It comprises 64 processor boards, each of which has a theoretical peak speed of about 1 TFLOPS.
Thus, the GRAPE-6 has a total performance of about 64 TFLOPS and can handle up to 32 million particles. However, the sustained performance achieved in practical situations is lower. In comparison with GRAPE-6 and the GRAPE-6A cluster at RIT, our GPU cluster consisting of 32 GeForce 8800 GTX graphic cards has a measured performance of about 7.1 TFLOPS, which is more than two times higher than that of the GRAPE-6A cluster at RIT. Although it is still about one-third of the measured performance of GRAPE-6, our system only costs about $32K (including all components within the cluster) and can store up to 320M particles. Stated in another way, we have achieved a performance-per-dollar about 35.5 times better than that of the GRAPE-6 system and 33.3 times better than that of the GRAPE-6A cluster at RIT. Furthermore, in contrast to GRAPE-6 and GRAPE-6A, which are special-purpose computers, modern graphic cards are fully programmable. So our GPU cluster is more flexible and can actually serve as a general-purpose computer. Finally, modern graphic cards only support single-precision accuracy at present (NVIDIA, 2007) (the NVIDIA Corporation has announced that GPUs supporting double-precision accuracy will become available in late 2007). By contrast, the GRAPE hardware uses a 64-bit fixed-point format to accumulate the acceleration (Makino et al., 2003), and therefore results in a higher accuracy than the GPU. This issue has been addressed by Belleman et al. (2007), Hamada & Iitaka (2007), and Portegies Zwart et al. (2007).
Stability of GraCCA
Although commercial graphic cards are generally thought to have a relatively short time between failures, we have not experienced such instability. For example, the core-collapse simulation for 64K particles took about 1 month, and the run did not experience any system crash; it was paused several times only due to manual interruptions. However, improper coding in a GPU program may easily and instantly lead to system idle or system crash. By contrast, improper coding in a CPU program generally only results in a forcible process termination.
Future outlook
Currently, we only use the shared time-step scheme for the purpose of performance measurements. This scheme is expected to be inaccurate for the orbits of close pairs and may have an artifact of collision. In order to maximize the efficiency of direct N-body simulation as well as to improve the accuracy for close pairs, we will adopt the individual time-step scheme along with the block time-step algorithm⁷. Two issues may arise when we switch to this scheme. First, as illustrated in Fig. 11, the performance of a single GPU drops dramatically for N < 4K in our current implementation, which is mainly caused by the insufficient number of threads. Although this is not a problem in the shared time-step scheme, since we are more interested in large-N systems, it can suppress the performance in the individual time-step scheme, where the number of i-particles to be updated in each step is much smaller than the optimal number of particles (N). One solution to this problem is to equally divide the force calculation of a single i-particle into several parts, each computed by one thread. In this way, we can keep the number of threads in the GPU large enough even for small N.
__________
⁷ The scheme of parallel individual time-steps along with the block time-step algorithm has been implemented after the submission of this paper, and the results and comparison will be reported in a separate paper.
The second issue is that the ratio of communication time in network (T net ) to total calculation time (T multi ) becomes worse. There are two ways to get around this problem. One is to use a faster network, such as Infiniband or Myrinet. Another is to adopt a more efficient scheme for parallel force computation (Makino, 2002;Harfst et al., 2007).
In addition, instead of using the direct-summation scheme for the entire gravitational system, we may treat the direct N-body computation as a GPU kernel to be embedded in a general cosmology computation. Most cosmology problems deal with dark matter particles, which are inherently collisionless, and for which the gravitational force is given by the mean field. However, in the densest core regions, the computation of the mean field is limited by the grid resolution. The dynamical range of spatial resolution is therefore severely limited to at most 4 orders of magnitude in mean-field calculations. Conventionally, one circumvents this resolution problem by employing the direct N-body computation only for those particles in the densest cores, to increase the local force resolution. The direct N-body part turns out to be the most time consuming. This is where the GPU computation comes into play.
The GPU program can replace the existing CPU sub-routine of direct N-body calculations. The replacement will shorten the cosmology computation time by a sizable factor.
Finally, Gualandris et al. (2007) have presented the first highly parallel, grid-based N-body simulations. It opens a paradigm for the GraCCA system to connect with the grid-computing community in the future. | 2018-12-29T08:11:40.751Z | 2007-07-20T00:00:00.000 | {
"year": 2007,
"sha1": "8e6ce545dda0779466c864fe9e74cff0437a10ff",
"oa_license": null,
"oa_url": "http://ntur.lib.ntu.edu.tw/bitstream/246246/163728/1/21.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8e6ce545dda0779466c864fe9e74cff0437a10ff",
"s2fieldsofstudy": [
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
166616063 | pes2o/s2orc | v3-fos-license | The Influence of Architectural Practice in Poland on Cultural Heritage – Selected Problems
The condition of the architect's profession in Poland has changed significantly over the last thirty years. This does not mean that important progress can be observed in every part of this specific occupation today. Of course, the political transformation that began in this country in 1989 has brought many advantages. The economic situation of architects today differs greatly from the past. Meanwhile, alarming developments have occurred in many examples of adaptive re-use of old buildings and gentrification of historical areas. The purpose of the research was to recognize the reasons for controversial actions in the field of architectural design related to conservation issues. The main aim of the article is to outline a few problems of contemporary practice which have an impact not only on modern but also on historical architecture. The principal results are summarized in the following sentences. Contracts are very significant: there is often a lack of proper balance between an investor and an architectural studio, primarily in the design stage. The essential chapters of contracts contain information about remuneration for services, usually quite low, and penalties for delays and withdrawals from the contract. In practice, the position of architects in Poland is very weak. Another problem is legal regulation, which seems rather inadequate to reality in the area of historical buildings' preservation. A further issue is the fact that designers often have no relevant knowledge, experience or interest in the field of tangible cultural heritage. Also, the architectural organisations operating in Poland seem not to act strongly enough on the discussed topics. There are many things to do in the matter of public orders, workshops, interventions and others. The discussion and conclusions include proposals for ways of changing the situation. Some of them have already begun; others can be improved. The content of the paper may help to understand why economic growth can be a negative factor in preventing ancient buildings from damage. The values of heritage are part of the basis of architectural design. Combining them with modern needs to find the best solution is difficult, even for professionals, but it can be done properly, owing to the continuous development of theoretical findings.
The condition of architect’s profession in Poland has changed significantly in the last thirty years. It doesn’t mean that important progress can be observed in every part of this specific job nowadays. Of course, there are many advantages of a political transformation, which were in place in this country starting from the year of 1989. The economic situation of architects today differs very much from the past. Meanwhile, alarming things have occurred in many examples of adaptive re-use of old building and gentrification of historical areas. The purpose of the research was to recognize reasons of controversial actions in the field of architectural design related to conservation issues. The main aim of the article is to outline a few problems of contemporary practice, which have impact not only on modern, but also on historical architecture. Among others, principle results are mentioned in the next couple of sentences. Very significant are contracts. In many times, there is a lack of proper balance between an investor and an architectural studio. It concerns primarily the part of the designing process. The essential chapters of contracts have some information about remuneration for services, usually quite low, and punishments for delays and withdrawals from the contract. In practice, the position of architects in Poland is very weak. The other problem is law regulations that seems to be rather inadequate to reality in the area of historical buildings’ preservation. Another issue is the fact that a designer has often no relevant knowledge, experience and interest in the field of tangible cultural heritage. Also, the architectural organisations, operating in Poland, seems to act not strong enough in discussed topics. There are many things to do in the matter of public orders, workshops, interventions and others. The discussion and conclusions include proposal of ways of changing the situation. Some of them have begun already, others can be improved. The content of the paper may help to understand why economic growth can be a negative factor in preventing ancient buildings from damage. Values of heritage are part of the basis of architectural design. Combining them with modern needs to find the best solution is difficult, even for professionals, but can be conducted properly, due to continuous development of theoretical findings.
Introduction
The purpose of the work is to explore some particular issues which are important for preserving tangible cultural heritage. The conservation of historical settings is a body of knowledge that should be widely promoted in society, especially among investors. One of the objectives of the work is to find reasons why the condition of many old buildings in Poland has deteriorated significantly in recent years. Another is related to the possibilities architects have of changing this situation. Do they aim at preservation, or rather at creating completely new structures? It is also important to identify some of the regulations which adversely affect contemporary architectural practice in the above-mentioned country.
The method of observation was chosen to examine practical problems, as in the search for the reasons behind the unsatisfactory conditions of working as a professional architect in Poland. Of course, critical analysis can also be very useful; it was partly conducted to outline the theoretical background.
Theory of the conservation of ancient monuments and its importance for architects
The discourse on the restoration and conservation of ancient monuments has lasted for more than a hundred years. Among the many individuals of the nineteenth century, two adversaries are probably the most recognized as authors of opposing statements: Eugène Emmanuel Viollet-le-Duc and John Ruskin indicated two different approaches to historical buildings. The discourse continued after their deaths and helped emphasise the significance of conservation, thanks to three theoreticians working in Vienna - Camillo Sitte, Alois Riegl and Max Dvořák. The findings of these and other researchers later allowed important statements to be gathered in many international doctrinal acts established in the twentieth century. Among others, two of them have probably had the greatest influence on architectural practice in the field of old buildings' adaptation design, i.e. The Athens Charter of the Restoration of Historic Monuments (1931) and The Venice Charter for the Conservation and Restoration of the Monuments and Sites (1964).
A modern society has its own foundations, which are grounded in philosophy. It has evolved over thousands of years, but it can be taken into consideration by the architect working today. Geoffrey Galt Harpham, in the article "Architecture and ethics: 16 points", deliberates: 'How would the architect in a classical society answer the questions about whose justice and which rationality should govern his practice?' [1, p. 35]. He points out next: 'If that society were regulated by Aristotelian principles, the answer would be: those of the polis, the civic public space in which people meet to deliberate the common good. For Aristotle ethics is a sub-field of the larger category of politics, and this fact indicates a great deal about what counts in a classical society, as legitimate ethical reflection. In such a society, there would be little room for a caped seducer in any profession. For ethical attitude would be tested in interpersonal relations, above all in the relation of friendship.' [1, p. 35]. There is a vast difference between the contemporary practice of architecture and its counterpart in Ancient Greece. Societies, too, have changed beyond all recognition, and the separation between architects and ordinary people is noticeable. The importance of cultural heritage is gradually emerging in the awareness of the inhabitants of historic towns, and designers are usually conscious of that. Understanding an adequate approach to design is crucial if someone sets out to change a historical setting that has evolved over centuries. Which values are essential for beneficiaries? After explaining the reasons for A. W. N. Pugin's attitude to the Gothic style, David Watkin, in his book "Morality and Architecture Revisited", noticed completely different arguments in the convictions of another famous architect of the 1800s who admired the same style. He argued that 'Viollet-le-Duc seems in certain passages to be one of those writers who see only two possible alternatives for architecture: either as capricious fashion, arbitrary and trivial, or as the expression of some external centre of gravity such as social and political ideals, technological necessity, or the spirit of the age.' [2, p. 29]. However often criticized, these authors of the Gothic Revival awakened social interest in old buildings.
In the middle of the nineteenth century, the dissociation of the architectural profession from the crafts became more noticeable; progress in this respect depended on the particular situation in different countries. For instance, as can be read in the book titled "Ethics and the Practice of Architecture", architectural education in the United States was established after 1850. Before that time, Americans who wanted to become architects had to travel to Europe to obtain formal studies in France and work in an atelier [3]. There was no independent Poland at this time, but a similar evolution took place in this country as well.
Students of architecture in Poland are taught the theory of the conservation of historical monuments, and there are also many classes in conservation design and related subjects. The educational base is quite solid, but it must nevertheless be said that not every adept of architecture uses it properly; students' main concerns are often the subjects devoted to designing new buildings and urban spaces. After all, every architect should have some knowledge of the values of monuments, for instance based on the systems proposed by Alois Riegl and Walter Frodl. Learning about conservation issues ought to continue after graduation if someone wants to work in this special field, since a lack of competence in this matter could harm either cultural heritage or a professional career. To broaden their own knowledge, architects may attend post-graduate studies concerning historical monuments, but in reality only a minority does so. There is no necessity to do so if someone does not want to work with old buildings; otherwise, developing these personal skills should be a must.
An essential problem in architecture today is preserving historical structures, and one of the most important tasks is to save the values that heritage possesses. Regina Maga-Jagielnicka, in the article "Axiology of space as a base of reflection on attitudes in the architect's profession", evoked the example of the Old Town in Wrocław to examine some ethical issues associated with the transformation of this area. She considers: 'An attempt to answer the question of why throughout the centuries there have appeared so many spatial deformations in transforming the area that is so exceptional -often referred to as a phenomenon of urban planning and architecture -must lead to another reflection. If we assume that these actions were legal, perhaps what they lacked was axiological thinking.' [4, p. 22]. Maga-Jagielnicka continued: 'Among the values which were lost we could mention urban planning and architecture consistency, style, historical truth and respect for authorship.' [4, p. 22]. These considerations concerned mainly land development - but what if someone takes into account only an individual construction plot?
An instructive case study of "Adaptive Re-Use/Historic Preservation" was formulated in the already mentioned book "Ethics and the Practice of Architecture". It concerns a hypothetical situation of designing the restoration of a public building, but it contains a distinctive discussion which could help in the real world [3]. Such developments are created all over the world in places of historic significance. The essential dilemmas are not only rooted in philosophy, but are also associated with social, political and, among others, economic issues.
Free-market economy in the design branch
Poland, like many other countries of Eastern Europe, remained in the communist bloc from 1945 until 1989. At the end of this period, a heavy economic decline took place. There was a monopoly of state enterprises in the design branch, and indeed in almost the whole economy, up to the beginning of the 1980s, when the crisis hit hardest. Strangely enough, the situation was not the worst for valuable historical buildings. There were not many investments in old structures compared to later times. Besides, almost only one public company dealt with conservation design: Przedsiębiorstwo Państwowe Pracownie Konserwacji Zabytków (State Enterprise Monuments' Conservation Studios), in short PP PKZ. This situation was favourable for the heritage remaining in Poland, because highly skilled professionals specialised precisely in historical buildings designed the adaptations and interventions in old settings. There was plenty of time for proper research on the subjects of documentation, and the pressure to bring profits to the company was much lower than it is nowadays in an average enterprise.
Things changed rapidly after 1989. Gradually, most architects started their own enterprises. Running one's own business is not easy, and not every one of them achieved success. At first, in the 1990s, architectural design in Poland was a comparatively good way of earning money, but later the situation got worse. Firstly, there was a crisis in the building sector, during which payments for architects fell sharply due to strong rivalry between designers. Secondly, the position of developers became dominant over small architectural companies, which are often too weak to stand up to an investor. Finally, it is not only studios run by architects that design architecture in Poland: there are plenty of companies owned by engineers or technicians which offer services in this field. This is not proper, and probably unfair, but it is possible as long as a licensed architect signs the architectural project as the designer where such an obligation applies. Concerning the moral side of such actions, it must be said that they should be regarded as negative. The Code of Architects' Professional Ethics (in Polish: Kodeks Etyki Zawodowej Architektów) forbids such acts, but in practice they are quite common. To be frank, there is nothing wrong with architecture being designed by persons qualified in a different field, provided they work together with an experienced architect. An interesting example of cooperation of this kind was described by Andrew Ballantyne in the commentary "The Nest and the Pillow of the Fire" in the book titled "What is architecture?". He wrote about the experience of Ludwig Wittgenstein, the famous twentieth-century philosopher, in the field of design. The subject was the project of a house in the Kundmanngasse in one of Vienna's old districts. Wittgenstein, together with the architect Paul Engelmann, designed the modernist home for his sister in the neighbourhood of historical buildings. Ballantyne remarked about the philosopher: 'He was sensitive to the effects of architecture and knew the frisson that can be felt on contact with great architecture, but he found that it simply was not there in the house which he had designed, with painstaking care, and so -characteristically -he walked away from the practice of architecture. So far as he was concerned, there was simply no point in him being an architect if he was not going to be a great architect'.
Discussion
The aforementioned problems of contemporary practice in the architectural profession are only part of a bigger whole. The regulations laid down in law could mislead a layperson into the conviction that old monuments are well protected. This is not true, and practitioners know the many gaps and shortcomings of the legal provisions.
There are two main architectural organisations in Poland, i.e. the Association of Polish Architects (in Polish: Stowarzyszenie Architektów Polskich; in short, SARP) and the Chamber of Architects of the Republic of Poland (Izba Architektów Rzeczypospolitej Polskiej; IARP). They play an important role in the profession in this country. Membership in IARP is obligatory if one wants to provide architectural services and hold the relevant licensure; joining SARP is voluntary. For at least a couple of years, a certain friction between the main authorities of these organisations has been observable. It brings few benefits; indeed, such quarrels may affect the profession rather negatively. There are many problems to be solved by architects, and they should also take on issues associated with designing in the field of cultural heritage. One of these is monitoring who works in this kind of service: the exclusion of non-architects from this part of the profession is essential, since in practice only selected professionals have the appropriate skills, experience and time to prepare the documentation of a historical monument's adaptation well. Management of change is a modern doctrine in conservation. It makes far-reaching transformations of settings possible. On the one hand, giving room for creation is a necessity of proper design; on the other hand, invalid principles can easily lead to failure. A careful approach to conducting the complicated design process may produce excellent results, but this is not assured: many circumstances might deeply harm the process. One of the former governments tried to deregulate many professions, among them the architect and the urban planner, wanting to make it easier to obtain the right to work in professional services. In fact, the changes were not equal in particular cases. For urban planners it was a disaster, because the authorities cancelled many relevant legal regulations and their Chamber had to be liquidated. Now it looks very clear that a huge mistake was made. The present government intends to bring back licensure in the profession of urban planner, which is positive, because access to it should not be wide open to everyone. After a couple of years, it became clear to the authorities that the experiment in a profession responsible for space was not worth continuing. Fortunately, the basic rules in architecture have remained largely the same for more than a dozen years. Of course, there have been some changes, but they do not depreciate the profession as happened in urban planning; for instance, the length of the period of apprenticeship as a designer's assistant was shortened. In spite of all this, it is not easy to become a licensed architect in Poland, and that fact can be assessed as favourable, since working in this profession carries huge responsibility. Liability, too, is an inseparable part of the spectrum of problems that occupy an architect's mind. The architectural organisations should be more involved in the government's process of creating new legal acts related to the built environment.
Favourable conditions in the job may raise coherent standards of service. A high level of practice is crucial in designing in the field of historic monuments; only if someone has ability, talent and understanding in this matter can sophisticated results appear. Recognizing old estates as works of art could often help them obtain proper respect. Adequate research on the building is a must, as it allows one to define which values are most important, and the theory of conservation is very useful in that assessment. Focusing on values such as authenticity, integrity, cultural identity, testimony of the past, state of preservation or originality should be part of the job. The evaluation of aesthetic values is similar to an architectural approach, and the structural condition of a monument is also important. Nevertheless, utility is often considered the most precious value, especially by the owner of the building. But an architect responsible for a project must also attend to other principles which matter here. One of them is the rule that new units should have an adequate scale compared to the historical neighbourhood. The designer must not forget about proper views of the heritage, and new extensions or other volumes ought to fit the historical substance. Compliance with the mentioned conservation principles could be made more directly obligatory in Polish legal acts; there is a lack of formal obligation to act according to many of the important rules in the legal provisions in force. In fact, restoration issues are not represented enough in the building regulations.
Introducing transformations obviously influences an existing structure, for example cultural heritage, and designers should be aware of which interventions are positive for a historical setting and which are negative. A number of workshops associated with conservation and restoration issues are offered by IARP, and similar actions deserve special note. The changes may become more visible in different historical places. Architectural conduct strongly influences existing cultural heritage and can cause irreversible damage in space. Many examples of this can be noticed in the Bydgoskie District in Toruń: this historic area experienced a few demolitions of old buildings in order to make space for new investments. Architects should avoid such commissions as long as the historical building still exists; otherwise, they may be accused of causing loss in the space. The gentrification and revitalisation of historic districts, especially in towns like Warsaw, Kraków or Toruń, are probably inevitable processes. The great role of architects is to engage responsibly with the remaining values and to listen to people's needs.
Examples of good practice are not a rarity in Poland's architectural dealings with historic plots. One of them is the Chopin Centre in Warsaw, a beautiful new building designed by Bolesław Stelmach and his partners. It replaced an old city palace, which no longer existed at the moment of the call for an architectural competition for this site. The shape of the new building represents aesthetic value as well as respect for its neighbourhood and for the former building on the site.
Deliberations about values are an essential part of the design process. Plenty of considerations associated with assessing what is worth retaining and what can be changed arise before the right decision is made. Inevitable difficulties can occur, such as conflicts of interest between the investor and the local community. On which side will the architect stand? The desired solution would be to remain independent of these opponents, but in practice real situations are often complicated by the relationships implied by the contract between the two parties. The architect depends on the developer because of the rules stated there. Of course, there is no obligation to sign, but a majority of public procurement contracts in the design branch have paragraphs that are unfavourable for architects. IARP tries to change the situation and protests in many cases where unjust or even illegal obligations have been found in a proposed contract. These acts, in spite of being only the tip of an iceberg, give some hope for the future.
The two parties to a contract have to accept its provisions. In practice, in the case of adapting a building it is difficult to decide everything at the beginning, because unforeseen issues can arise during either the design process or construction. Harsh rules in a contract of works are associated with penalties, which are often incredibly high compared to the low remuneration, and the risk of withdrawal from the contract makes the situation even worse. Sometimes there is no sense in engaging in a process at all if the rules are constructed unfairly and do not balance the rights and duties of the contracting parties. Cases of problems with cash flow are not rare. Raising the standards of practice, which differ among architects, also correlates with the financial situation of a studio; unsatisfactory payment for services is a persistent disadvantage of an unstable market.
Conclusions
To sum up, it must be explained that only some of the problems could be evoked in this article. The work unveils a part of the difficult practical conduct that doubtless influences cultural heritage. Only a couple of controversial practices were briefly described, although they are quite often seen in historical areas. The problems highlighted lead to the conclusion that an individual has little chance of changing today's reality; only the effort of a large group of practitioners may provide a favourable solution. Although there are difficulties related to economics, politics, legal regulations and so on, designers are able to make improvements both outside and inside their own conduct.
"year": 2019,
"sha1": "d5b7507759fcbfba73d9eb90d4b3a8e8ae91964e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/471/8/082066",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c7875be8ffb11f25927d869a6f7adf8ad85ff65a",
"s2fieldsofstudy": [
"History",
"Art"
],
"extfieldsofstudy": [
"Physics",
"Political Science"
]
} |
Influence of ARHGEF3 and RHOA Knockdown on ACTA2 and Other Genes in Osteoblasts and Osteoclasts
Osteoporosis is a common bone disease that has a strong genetic component. Genome-wide linkage studies have identified the chromosomal region 3p14-p22 as a quantitative trait locus for bone mineral density (BMD). We have previously identified associations between variation in two related genes located in 3p14-p22, ARHGEF3 and RHOA, and BMD in women. In this study we performed knockdown of these genes using small interfering RNA (siRNA) in human osteoblast-like and osteoclast-like cells in culture, with subsequent microarray analysis to identify genes differentially regulated from a list of 264 candidate genes. Validation of selected findings was then carried out in additional human cell lines/cultures using quantitative real-time PCR (qRT-PCR). The qRT-PCR results showed significant down-regulation of the ACTA2 gene, encoding the cytoskeletal protein alpha 2 actin, in response to RHOA knockdown in both osteoblast-like (P<0.001) and osteoclast-like cells (P = 0.002). RHOA knockdown also caused up-regulation of the PTH1R gene, encoding the parathyroid hormone 1 receptor, in Saos-2 osteoblast-like cells (P<0.001). Other findings included down-regulation of the TNFRSF11B gene, encoding osteoprotegerin, in response to ARHGEF3 knockdown in the Saos-2 and hFOB 1.19 osteoblast-like cells (P = 0.003–0.02), and down-regulation of ARHGDIA, encoding the Rho GDP dissociation inhibitor alpha, in response to RHOA knockdown in osteoclast-like cells (P<0.001). These studies identify ARHGEF3 and RHOA as potential regulators of a number of genes in bone cells, including TNFRSF11B, ARHGDIA, PTH1R and ACTA2, with influences on the latter evident in both osteoblast-like and osteoclast-like cells. This adds further evidence to previous studies suggesting a role for the ARHGEF3 and RHOA genes in bone metabolism.
Introduction
Osteoporosis is a common and debilitating bone disease that is characterised by a low bone mineral density (BMD), which leads to an increased risk of fracture [1]. The disease is particularly prevalent in postmenopausal women due to a reduction in oestrogen production, with subsequent effects on bone as well as intestinal and renal calcium handling [2]. In addition to the effects of oestrogen, calcium and other environmental factors on bone structure, there is a strong genetic effect on peak bone mass (attained in early adult life), bone loss and fracture rates [3,4]. Twin and family studies suggest that 50-90% of the variation in peak bone mass [5][6][7] and 25-68% of the variance in osteoporotic fracture is heritable [4,8,9]. The genome-wide linkage scanning approach has identified at least 11 replicated quantitative trait loci (QTL) for BMD [10][11][12], including the 3p14-p22 region of the human genome (LOD 1.1-3.5) [11][12][13][14].
We have previously identified significant associations between variation in the RHOA and ARHGEF3 genes, which are both located within the 3p14-p22 region, and BMD in women [15,16]. The functions of these genes are related, with the product of the ARHGEF3 gene (the Rho guanine nucleotide exchange factor (GEF) 3) specifically activating two members of the RhoGTPase family: RhoA (encoded for by the RHOA gene) and RhoB [17]. RhoA is involved with regulating cytoskeletal dynamics and actin polymerisation [18] and has been shown to have a role in osteoblast differentiation [19,20] and osteoclastic bone resorption [21].
Given the associations that we have previously identified between the RHOA and ARHGEF3 genes and BMD, coupled with the evidence in the literature suggesting a role for RhoA in osteoblasts and osteoclasts, we decided to further investigate the role of these genes in these particular cell types. Knockdown of the RHOA and ARHGEF3 genes was achieved using small interfering RNA (siRNA) in a human osteoblast-like cell line and in osteoclast-like cells derived from a donor, with subsequent microarray analysis to identify genes that were differentially regulated. Replication of selected significant findings was then conducted in additional human osteoblast-like cell lines and in osteoclast-like cells from additional donors.
Ethics Statement
All subjects that donated blood samples for isolation of peripheral blood mononuclear cells (PBMCs) provided written informed consent and the institutional ethics committee of Curtin University approved the experimental protocol.
Experimental Approach
To identify genes involved in osteoblast and osteoclast function that are potentially influenced by the RHOA and ARHGEF3 genes, we examined the influence of knockdown of these two genes on 264 candidate genes in an osteoblast-like cell line and osteoclast-like cells obtained from a donor, in triplicate, by microarray analysis. The microarray results showed significant alterations in the expression of a number of the candidate genes, 7 of which were studied in greater detail to validate the findings, based on quantitative real-time PCR (qRT-PCR) studies of the 7 genes in 3 additional osteoblast-like and osteoclast-like cell cultures/lines.
Cell Culture
The osteoblast-like cell lines used for the gene knockdown experiments included: Saos-2, derived from osteosarcoma tissue (American Type Culture Collection (ATCC) No. HTB-85) [22]; hFOB 1.19, derived from immortalised foetal osteoblasts (ATCC No. CRL-11372) [23]; and MG-63, derived from osteosarcoma tissue (ATCC No. CRL-1427) [24]. These cell lines are all human in origin and were cultured in DMEM (Sigma-Aldrich, St. Louis, USA) pH 7.4 supplemented with 4.77 g/l HEPES, 3.7 g/l NaHCO3, 10% (v/v) foetal bovine serum (FBS) and 1% (v/v) penicillin/streptomycin (100 units penicillin and 100 µg streptomycin per ml of media). The osteoclast-like cells used in these studies were differentiated from PBMCs (process described below) and were cultured in α-MEM (Invitrogen, Carlsbad, USA) pH 7.4 supplemented with 2.2 g/l NaHCO3, 10% (v/v) FBS and 1% (v/v) penicillin/streptomycin. All cells were cultured at 37 °C with 5% CO2 and the medium was changed every 2-3 days. Total RNA was harvested from each culture using the RNeasy Mini Kit (Qiagen, Hilden, Germany) and reverse transcription of the RNA was performed using the QuantiTect Reverse Transcription Kit (Qiagen, Hilden, Germany). Quantitation of total RNA was performed using an ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, USA).
Isolation of Peripheral Blood Mononuclear Cells and Osteoclastogenesis
Osteoclast-like cells were differentiated from PBMCs isolated from 4 male donors of European descent aged 48 ± 15 years (mean ± SD). Each batch of cells was isolated from 30 ml whole blood collected in 10 ml K2EDTA Vacutainer tubes (Becton, Dickinson and Company, Franklin Lakes, USA). Anti-coagulated whole blood samples were centrifuged at 2,200 rpm for 10 min at room temperature before buffy coats were collected and diluted to a total volume of 4 ml with 1× phosphate buffered saline (PBS). The cell suspension was then gently layered over 3 ml of Ficoll-Paque (Pfizer, New York, USA) before being centrifuged again at 1,600 rpm for 40 min at room temperature. The PBMC layer was collected and washed by re-suspension in 6 ml 1× PBS and centrifuged at 800 rpm for 10 min at room temperature. The wash step was repeated on the cell pellet before the cells were re-suspended in 5 ml medium supplemented with 10 ng/ml macrophage colony stimulating factor (M-CSF) (Invitrogen, Carlsbad, USA) and seeded directly into either a 24-well tissue culture plate or 25 cm² tissue culture flask. After two days, the medium was replaced with medium supplemented with 10 ng/ml M-CSF and 100 ng/ml receptor activator of nuclear factor kappa-B ligand (RANKL) (Invitrogen, Carlsbad, USA). The cells were then grown using this medium formulation for 17 days while osteoclastogenesis occurred.
Osteoclast-like cells were stained for tartrate resistant acid phosphatase (TRAP) using a chromogenic TRAP enzyme substrate to confirm production of the TRAP enzyme as an indicator of the osteoclast phenotype. This involved washing the cells with 1× PBS, fixation with 4% (v/v) paraformaldehyde for 15 min, and washing 3 times with 1× PBS before incubation with filtered TRAP stain solution at 37 °C for 25 min. The stained cells were then washed 3 times with 1× PBS prior to visualisation using light microscopy.
siRNA Knockdown
Transfection of cells with siRNA sequences was used to knockdown expression of the ARHGEF3 and RHOA genes. Transfections were performed using HiPerFect Transfection Reagent (Qiagen, Hilden, Germany). Two different siRNA sequences were used in tandem to knockdown expression of each gene. There is evidence to suggest that the RhoA protein has a half-life of up to 31 h [25], therefore a minimum gene knockdown period of 48 h was used to ensure an effect at the protein level. Negative controls treated with AllStars Negative Control siRNA (Qiagen, Hilden, Germany) were included in each experiment. All knockdown experiments were performed in triplicate. Knockdown of the ARHGEF3 and RHOA genes did not appear to influence the proliferation or viability of any of the cell types studied.
Knockdown in Osteoblast-like Cells
siRNA knockdown experiments were performed in 24-well tissue culture plates. For the Saos-2, hFOB 1.19 and MG-63 osteoblast-like cell lines, each well was seeded with 5 × 10⁴ cells. Cells were grown for 24 h before fresh medium was added to each culture and transfections were performed using a final siRNA concentration of 30 nM with 6 µL transfection reagent per well. Cells in each well were incubated with the transfection mix for 48 h at 37 °C prior to washing with 1× PBS and extraction of total RNA.
Knockdown in Peripheral Blood Mononuclear Cells/ Osteoclast-like Cells
500 µL of freshly isolated PBMCs were aliquoted into 24-well tissue culture plates. Osteoclastogenesis was stimulated and confirmed microscopically and biochemically by TRAP staining, as described previously. siRNA knockdown experiments were performed using a final siRNA concentration of 100 nM with 6 µL transfection reagent per well. Cells in each well were incubated with the transfection mix for 48 h at 37 °C prior to washing with 1× PBS and extraction of total RNA.
RNA Extraction and Microarray Analysis
A total of 18 RNA samples, 9 from Saos-2 and 9 from osteoclast-like cell cultures (donor 1), were used for the microarray analysis. Each set of 9 comprised 3 cultures treated with siRNA specific for ARHGEF3, 3 treated with siRNA specific for RHOA and 3 treated with negative control siRNA. Total RNA was extracted from each culture using the RNeasy Mini Kit (Qiagen, Hilden, Germany). The quality and quantity of all RNA samples were checked prior to microarray analysis using a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, USA). 10 µL of each RNA sample was amplified using the TotalPrep RNA Amplification Kit (Applied Biosystems, Foster City, USA) before microarray analysis was performed using the HumanHT-12 v3 Expression BeadChip Kit (Illumina, San Diego, USA). The HumanHT-12 BeadChip profiles the expression of more than 25,000 annotated genes derived from NCBI RefSeq (Build 36.2) [26]. The complete results from the microarray analyses performed in this study have been submitted to The University of Western Australia's Research Data Online resource.
Gene Selection
While data were generated for most of the >25,000 genes included on the microarray, only 264 candidate genes were selected for statistical analysis in order to limit the potential for false positives. These candidate genes were selected on the basis of the following criteria: genes thought to have potentially important roles in osteoblast (n = 45) or osteoclast function (n = 62), or genes thought to play a role in the RhoA/ARHGEF3 signalling pathway (n = 157).
Quantitative Real-time PCR
qRT-PCR was used to determine the degree of gene knockdown achieved and to validate microarray results for selected targets. Reverse transcription of RNA samples was first performed using the QuantiTect Reverse Transcription Kit (Qiagen, Hilden, Germany). The resulting cDNA was then amplified using the QuantiFast SYBR Green Kit (Qiagen, Hilden, Germany) in conjunction with an iQ5 Multicolor Real-Time PCR Detection System (Bio-Rad, Hercules, USA). cDNA samples were diluted in 1× TE buffer before analysis. QuantiTect Primer Assays (Qiagen, Hilden, Germany) were used to amplify most gene transcript sequences. Bioinformatics analysis revealed that the QuantiTect Primer Assay for the candidate gene ACTA2 amplifies only one of the two transcript variants for this gene. Therefore, a custom primer pair was designed for this gene using the web-based Primer3 software package [27]. The human 18S ribosomal RNA gene (RRN18S) was selected as an internal reference for this work to allow for normalisation of the data for variations in the quantity of cDNA added to each reaction. The reaction efficiency of each primer pair was calculated by amplifying a 10-fold dilution series of target sequence across 5 orders of magnitude. This was performed to confirm that the amplification efficiency of each gene of interest is no more than 10% from that of the internal reference, as recommended by Schmittgen and Livak [28]. The log template dilution (x-axis) was plotted against the cycle threshold (CT) value obtained for each dilution (y-axis), with the slope of the line used for calculation of amplification efficiency using the equation m = -(1/log E), where m is the slope of the line and E is the reaction efficiency. A reaction efficiency of 2.0 equates to a perfect doubling of amplicon product during each PCR cycle. All reactions were performed in triplicate with the mean CT value used in the statistical analysis. Melting-curve analysis was performed on all real-time PCR products to confirm amplification of a single DNA sequence. A random selection of PCR products was also subjected to agarose gel electrophoresis for additional confirmation of the specificity of amplification.
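As a worked illustration of this standard-curve calculation (a sketch under our own assumptions, not the authors' analysis script; the CT values below are invented), the efficiency follows from the fitted slope as E = 10^(-1/m):

import numpy as np

def amplification_efficiency(log_dilution, ct_values):
    # Fit CT (y) against log10 template dilution (x); from m = -(1/log E)
    # it follows that E = 10 ** (-1 / m). E = 2.0 means a perfect doubling
    # per cycle, corresponding to a slope of about -3.32.
    m, _intercept = np.polyfit(log_dilution, ct_values, 1)
    return 10.0 ** (-1.0 / m)

# Hypothetical 10-fold dilution series across 5 orders of magnitude
log_dil = np.array([0.0, -1.0, -2.0, -3.0, -4.0])
ct = np.array([15.1, 18.5, 21.8, 25.2, 28.4])
print(amplification_efficiency(log_dil, ct))  # close to 2.0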
Microarray Statistical Analysis
Differential expression analysis of the microarray data using the Illumina custom error model was performed using the BeadStudio v3.4.0 software package (Illumina, San Diego, USA). Samples treated with the negative control siRNA were specified as the reference group. The raw microarray gene expression data were normalised using the quantile normalisation algorithm [29], which adjusts the sample signals to minimise the influence of variation arising from non-biological factors (e.g. pipetting variation) [30]. Background subtraction was performed on the data to minimise the variation in background noise between arrays and to remove signal resulting from non-specific hybridisation [31]. Once background subtraction has been performed on the data, the expected signal for unexpressed targets is zero. The data were corrected for multiple testing using the Benjamini-Hochberg False Discovery Rate algorithm [32].
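The BeadStudio implementation is proprietary, but the Benjamini-Hochberg adjustment itself is a standard procedure; a minimal sketch (ours, not the BeadStudio code) is:

import numpy as np

def benjamini_hochberg(pvals):
    # The adjusted p value for the i-th ranked raw p value is p_(i) * n / i,
    # with a running minimum taken from the largest rank downwards so that
    # the adjusted values stay monotone over the ranked list.
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0.0, 1.0)
    return out  # FDR-adjusted p values, in the same order as the input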
Real-time PCR Statistical Analysis
Gene expression ratios were calculated using the comparative CT method as described by Schmittgen and Livak [28]. Briefly, the ΔCT (CT of the test gene − CT of the internal reference) was calculated for each gene of interest in each sample in the test and control groups. This figure was then entered into the equation 2^−ΔCT, with the mean ± standard error calculated for each of the test and control groups. 2^−ΔCT values for test and control groups were analysed using an unpaired t-test to determine whether differences in expression were statistically significant. Combined 2^−ΔCT values for the osteoclast-like cells were examined by 2-way analysis of variance (ANOVA) (note that this combined analysis was not performed for the osteoblast-like cells due to potential variation in the maturation state and gene expression profile of each cell line). Significant associations are defined as P < 0.05.
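For clarity, a small sketch of the comparative CT calculation as described above (the triplicate CT values are hypothetical, and SciPy's standard unpaired t-test stands in for whatever statistics package the authors used):

import numpy as np
from scipy import stats

def relative_expression(ct_gene, ct_ref):
    # delta-CT = CT(test gene) - CT(internal reference); expression = 2^(-delta-CT)
    return 2.0 ** -(np.asarray(ct_gene) - np.asarray(ct_ref))

kd = relative_expression([26.1, 26.4, 26.2], [12.0, 12.1, 12.0])   # knockdown wells
ctl = relative_expression([24.3, 24.1, 24.2], [12.1, 12.0, 12.1])  # negative control wells
t_stat, p_val = stats.ttest_ind(kd, ctl)  # unpaired t-test on the 2^-dCT values
print(kd.mean() / ctl.mean(), p_val)      # expression ratio vs control, P value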
Osteoblast Microarray Results
Knockdown of the ARHGEF3 and RHOA genes was validated in the Saos-2 cells by qRT-PCR prior to microarray analysis. For the ARHGEF3 and RHOA genes, 81% and 79% knockdown was achieved respectively in these cells (Fig. 1A and B). Of the 202 candidate genes examined in the osteoblast-like cells, gene knockdown resulted in significant changes in expression of 10 genes after adjustment for multiple testing ( Table 1). Knockdown of ARHGEF3 resulted in significant changes to the expression of 8 genes: TNFRSF11B, SP7, ALPL, ANGPTL2, GNA11, MYO9B, GNAI2 and PFN1. For RHOA knockdown, 2 genes were affected: PTH1R and ACTA2. Table S1 contains the microarray results for all of the candidate genes examined in the Saos-2 cells (P values corrected for multiple testing).
qRT-PCR Validation and Replication of Microarray Results for Targeted Genes in Osteoblast-like Cell Lines
Both of the differentially regulated genes in the RHOA knockdown group (PTH1R and ACTA2) and 2 from the ARHGEF3 knockdown group (TNFRSF11B and ALPL) were then selected for confirmatory and replication studies using qRT-PCR. While the microarray results suggested that 8 of the 202 genes examined could potentially be regulated by ARHGEF3, the TNFRSF11B and ALPL genes were selected based on a number of factors including their importance to bone metabolism, their level of expression in the cell type and the size and statistical significance of the regulatory effect. These 4 genes were thus examined in one additional replication study experiment in Saos-2 cells as well as in two additional osteoblast-like cell lines, hFOB 1.19 and MG-63. For the ARHGEF3 and RHOA genes, 75% and 68% knockdown was achieved respectively in the replication batch of Saos-2 cells, 75% and 77% respectively in the hFOB 1.19 cells and 84% and 83% respectively in the MG-63 cells (Fig. 1A and B). The average knockdown achieved across all of the osteoblast-like cell lines as determined by qRT-PCR was 76.8% for RHOA and 78.7% for ARHGEF3 (Fig. 1A and B).
The influence of gene knockdown on TNFRSF11B, ALPL, PTH1R and ACTA2 expression is shown in Fig. 2A-D. A highly significant down-regulation of ACTA2 was observed in response to RHOA knockdown in each of the osteoblast-like cell lines examined (P < 0.001). The qRT-PCR results also confirmed the up-regulation of the PTH1R gene in response to RHOA knockdown observed in the microarray screen (P < 0.001); however, neither the hFOB 1.19 nor the MG-63 cell lines expressed this particular gene. The qRT-PCR studies showed ARHGEF3 knockdown had a significant influence on TNFRSF11B expression in the Saos-2 and hFOB 1.19 cell lines (P = 0.003-0.02); however, little influence was seen in the MG-63 cells. ARHGEF3 knockdown had no consistent influence on ALPL expression.
Osteoclast Microarray Results
The osteoclastic phenotype of the cells was confirmed by expression of the genes encoding the osteoclastic biochemical markers TRAP (ACP5), cathepsin K (CTSK) and calcitonin receptor (CALCR) from the microarray output. The ACP5 and CTSK genes were found to be expressed at particularly high levels in this cell type (mean microarray signal > 14,000 fluorescence units).
Knockdown of the ARHGEF3 and RHOA genes was validated in the osteoclast-like cells by qRT-PCR prior to microarray analysis. For the ARHGEF3 and RHOA genes, a mean knockdown of 63% and 84% was achieved respectively in the osteoclast-like cells from donor 1 (Fig. 1C and D). Of the 219 candidate genes examined in this cell type, gene knockdown resulted in significant changes in expression of 17 genes after adjustment for multiple testing ( Table 2). ARHGEF3 knockdown was found to significantly influence the expression of 12 genes: CCL5, HLA-C, SNCA, TNF, OSCAR, CD44, BIRC3, ITGB7, ITGAE, ITGAL, ITGA3 and ITGAM. For RHOA knockdown, 9 genes were found to be significantly influenced: TNF, THBS2, CCL5, ITGB7, ARHGDIA, IGF1, ACTA2, MYL9 and ITGAE. Of these, the effect of RHOA knockdown on the ACTA2 gene was also observed in the osteoblast-like cells. Table S2 contains the microarray results for all of the candidate genes examined in the osteoclast-like cells (P values corrected for multiple testing).
qRT-PCR Validation and Replication of Microarray Results for Targeted Genes in Osteoclast-like Cells
In the osteoclast studies, two of the differentially regulated genes from each of the knockdown experiments were selected for validation and replication analysis by qRT-PCR in osteoclast-like cells from 3 additional donors. These included the CCL5 and OSCAR genes for ARHGEF3 knockdown, and ARHGDIA and ACTA2 genes for RHOA knockdown (Fig. 3A-D).
For the ARHGEF3 and RHOA genes, 41% and 36% knockdown was achieved respectively in the donor 2 cells, 52% and 45% respectively in the donor 3 cells and 25% and 32% respectively in the donor 4 cells (Fig. 1C and D). The efficiency of knockdown of ARHGEF3 and RHOA averaged 45.3% and 49.3% respectively in this cell type, substantially lower than that observed in the osteoblast-like cells (76.8% vs 49.3% for RHOA, P = 0.07; 78.7% vs 45.3% for ARHGEF3, P = 0.007). The knockdown was considerably lower than desired; however, there was some evidence from the overall analysis to suggest that knockdown of RHOA reduces the expression of ACTA2 in this cell type (P = 0.002 by ANOVA). RHOA knockdown also caused a significant overall reduction in ARHGDIA expression in the osteoclast-like cells (P < 0.001 by ANOVA). While some significant changes were seen for cells from particular donors, the influence of ARHGEF3 knockdown on CCL5 and OSCAR was inconsistent.
Discussion
We previously reported associations between polymorphism in the RHOA and ARHGEF3 genes and bone density in women, and in this study investigated the potential role of these genes in the regulation of bone cells. The knockdown of these two genes showed clear effects on the expression of a number of potentially relevant genes and pathways in two of the major bone cell types - osteoblasts and osteoclasts. Greater gene knockdown levels were achieved in the osteoblast-like cells than in the osteoclast-like cells.
Concerning the studies performed in the osteoblast-like cells, expression of the ACTA2 gene was found to be significantly downregulated by RHOA knockdown in all three osteoblast-like cell lines examined (Saos-2, hFOB 1.19 and MG-63), with an average expression ratio of 0.35 seen in knockdown cell cultures relative to control cell cultures by qRT-PCR. The ACTA2 gene encodes the alpha 2 actin cytoskeletal protein, which is a major component of the smooth muscle cell contractile apparatus and accounts for around 40% of the total protein and around 70% of the total actin in smooth muscle cells [33,34]. There have been few studies on the role of the ACTA2 gene product in bone metabolism, however there is evidence in the literature to suggest that the ACTA2 gene is regulated by RhoA signalling. Mack et al. [35] found that expression of constitutively active RhoA in rat smooth muscle cell cultures increased the activity of the Acta2 promoter, whereas inhibition of RhoA decreased the activity of the promoter. They also found that stimulation of actin polymerisation in these smooth muscle cells increased the activity of the Acta2 promoter by 13-fold [35]. In addition, Zhao et al. [36] reported that static tensile forces applied to rat fibroblasts stimulates the promoter activity of the Acta2 gene through the Rho signalling pathway. Collectively, these data suggest that expression of the ACTA2 gene may be regulated through the RhoA signalling pathway, and the results presented here support this.
Knockdown of the ARHGEF3 gene in both the discovery and replication experiments with Saos-2 cells resulted in significant down-regulation of the levels of TNFRSF11B (osteoprotegerin) mRNA. This effect was replicated in the hFOB 1.19 cells, but not in the MG-63 cell line. It is not clear why this effect was not seen in the MG-63 cells; it may be an effect specific to that cell line. TNFRSF11B mRNA levels were significantly higher in the MG-63 cells than in the hFOB 1.19 and Saos-2 cells, in line with studies by Pautke et al. [37]. There are well described differences in the expression patterns between osteoblast-like cell lines reported elsewhere [37][38][39].
In addition to these findings, knockdown of the RHOA gene in both the discovery and replication batches of Saos-2 cells resulted in significant up-regulation of PTH1R (parathyroid hormone 1 receptor) mRNA levels, although expression of this gene was not detected in the hFOB 1.19 or MG-63 cell lines. Both PTH1R and TNFRSF11B have a major role in the stimulation of osteoclastogenesis upon exposure to parathyroid hormone (PTH), suggesting that the ARHGEF3 and RHOA genes may be involved in this process. Radeff et al. [40] found that treatment of UMR-106 rat osteoblast-like cells with Clostridium difficile toxin B, which specifically inhibits the Rho proteins (including RhoA) through glucosylation of the nucleotide binding site [41], reduced PTH-induced expression of the Il6 gene, the product of which has been shown to promote osteoclastogenesis [42]. The authors concluded that the Rho proteins are an important component of PTH signalling in osteoblasts and may have a role in the activation of the intracellular messenger protein kinase C alpha [40]. Another study, published by Wang and Stern [43], found that UMR-106 rat osteoblast-like cells transfected with dominant negative RhoA and treated with PTH and/or calcitriol increased production of TNFSF11 mRNA (encoding RANKL) and reduced production of TNFRSF11B mRNA, stimulating osteoclastogenesis of co-cultured RAW 264.7 mouse monocyte/macrophage-like cells [43]. However, when these cells were transfected with constitutively active RhoA and treated with PTH and/or calcitriol, the levels of TNFSF11 and TNFRSF11B mRNA did not change significantly and osteoclastogenesis of the RAW 264.7 cells failed to occur [43]. These results led the authors to suggest that RhoA signalling can inhibit hormone-stimulated osteoclastogenesis through effects on RANKL and osteoprotegerin expression in osteoblasts [43]. No consistent effect of ARHGEF3 knockdown on ALPL expression could be found; however, a higher expression of this gene was found in Saos-2 cells compared to the other cells investigated, including the MG-63 cells, in line with the findings of Pautke et al. [37].
One limitation of the gene expression data in the osteoclast-like cells was that consistently high gene knockdown (>60%) was not achieved in some of our experiments, and a greater level of knockdown may show more substantial changes than seen in our studies. Nevertheless, some interesting results were obtained. Expression of the ARHGDIA and ACTA2 genes was found to be significantly reduced in response to RHOA gene knockdown. The product of the ARHGDIA gene is a Rho GDP dissociation inhibitor (GDI) which acts as a negative regulator of several of the RhoGTPases [44]. RhoGDIs maintain the Rho proteins in their inactive GDP-bound state by inhibiting the exchange of GDP for GTP [45] and by restricting membrane anchoring [46]. The down-regulation of ARHGDIA expression seen in the RHOA knockdown osteoclast-like cells in this study could be a compensatory mechanism for the reduced expression of the RHOA gene. The influence of RHOA knockdown on expression of the ACTA2 gene adds further support to the earlier suggestion that expression of this gene is regulated by the RhoA signalling pathway.
In conclusion, knockdown of the ARHGEF3 and RHOA genes in bone cells of human origin reveals important regulatory changes, including significant down-regulation of the ACTA2 gene, encoding the cytoskeletal protein alpha 2 actin, in both osteoblast-like and osteoclast-like cells in response to RHOA knockdown. RHOA knockdown also resulted in up-regulation of the PTH1R gene in the Saos-2 osteoblast-like cell line and down-regulation of ARHGDIA in osteoclast-like cells, whereas ARHGEF3 knockdown caused down-regulation of the TNFRSF11B gene in the Saos-2 and hFOB 1.19 osteoblast-like cells. These findings add further evidence to previous studies suggesting a role for the RHOA and ARHGEF3 genes in bone metabolism. Future work in this area could include confirmatory studies investigating the influence of over-expression of the ARHGEF3 and RHOA genes in these cell types and examination of effects at the protein level.
"year": 2014,
"sha1": "9660e9f6f753bc4531d72476883ecaf50180c3ff",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0098116&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93c5117f647f31e2c1620d32ee58afabbf0aa3b0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Chinese Diaspora Communities' Knowledge of and Engagement with Advance Care Planning: A Systematic Integrative Review
Objectives: To synthesize evidence regarding Chinese diasporas' understanding, experience, and factors influencing engagement with advance care planning. Methods: A systematic integrative review using content analysis. Seven electronic databases (Embase, CINAHL, SCOPUS, Web of Science, Medline (OVID), PsycINFO, and The Cochrane Library) and gray resources were searched for studies from January 1990 to March 2022. Study quality appraisal was undertaken. Results: 27 articles were identified and rated as moderate to good. Two overarching and interrelated themes were identified, "Awareness and knowledge" and "Engagement with advance care planning." There are low levels of awareness of, knowledge of, and engagement with advance care planning among Chinese diasporas. Findings highlight that this is influenced by two key factors. First, the geographic context and the legal, cultural, and social systems within which Chinese diasporas are living act as a potential catalyst to enhance awareness of and engagement with advance care planning. Second, aspects of Chinese diasporas' original culture, such as filial piety and a taboo surrounding death, were reported to negatively affect the promotion of and engagement with advance care planning. Significance of Results: Chinese diasporas are intermediaries between two divergent cultures that intertwine to strongly influence engagement with advance care planning. Hence, a bespoke, culturally tailored approach should be accommodated in future research and practice for Chinese communities in multicultural countries to further advance palliative and end-of-life care awareness among this group.
Introduction
In the last decade, public health palliative care has gained recognition and momentum globally, and has been advocated for communities to improve the experience of health, dying, and bereavement. 1 Advance care planning (ACP), traditionally advocated for the elderly or those diagnosed with a life-limiting condition, has seen a gradual shift in global and national policy 2,3 encouraging people to think about planning ahead regardless of their age or condition.
Advance care planning allows individuals to define, plan, and record their wishes and preference for future medical treatment and care. 3 It aims to help ensure individuals obtain the care they desire that is consistent with their values, goals and preferences when they no longer have the capacity to make any care decisions for themselves. 3 In the last two decades, Western cultural practices have led the understanding, delivery, and practice of advance care planning across the world. 4,5 This may have resulted in many such initiatives being Western-driven with little consideration of the needs of different ethnic groups.
However, international migration and diasporic populations have led to rapidly changing demographic characteristics across Western society. Debates on the accessibility and applicability of health care by different groups have raised questions regarding the provision of culturally appropriate healthcare in general. 6,7 Evidence suggests that mainstream healthcare and palliative care often do not serve ethnic populations effectively, [7][8][9] with barriers related to culture, language, awareness, and adaptation reported. 7,10,11 The Chinese community represents the biggest and fastest-growing ethnic community around the globe, 12 yet engagement with advance care planning remains low, 13 similar to that of other ethnic minorities. [14][15][16] Some authors attribute this to differing cultural, sociodemographic, and health-related factors. 13,17 Lee et al, 17 in their review, emphasized the appropriateness and importance of collectivism and familism as major decision-making influences among Chinese people from Eastern and Western cultures, rather than individual autonomy and self-determination.
However, few advance care planning public health campaigns exist that are tailored to the multicultural societies in which these communities live. 18,19 It could therefore be argued that the developmental experience of advance care planning in Western countries may not be aligned with ethnic minorities. A previous review by Jia et al 13 systematically synthesized the evidence regarding advance care planning among Chinese communities and recommended the need for campaigns to consider the Chinese communities' traditional social norms and culture. However, to date, there is a lack of evidence drawing together the empirical and gray literature to inform a fuller picture of Chinese diaspora engagement with and understanding of advance care planning. Consequently, this study aims to review and synthesize the evidence regarding Chinese diasporas' understanding and experience of advance care planning, and the factors influencing their engagement with it.
Design
A systematic integrative review was conducted, guided by Whittemore and Knafl's 20 methodological approach, enabling the integration of evidence from multiple designs. 21 This review was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 22
Search Strategy
A comprehensive literature search was carried out for peer-reviewed papers published in English and Chinese from January 1990 to March 2022. The start date reflects the introduction of terms relating to advance care planning, such as advance directives and living wills. 23 Seven bibliographic databases were searched (Embase, CINAHL, Web of Science, Medline (OVID), SCOPUS, PsycINFO, and The Cochrane Library (Cochrane Central Register of Controlled Trials, Cochrane Methodology Register)).
The electronic search was supplemented by hand searches of gray literature from the reference lists of included studies and other gray resources including EThOS, ProQuest Dissertations and Theses Global, OpenDOAR, and GreyNet. The search terms included a combination of two key terms, namely "advance care planning" and "Chinese diaspora," combined with medical subject heading terms and text words (see Table 1). As "advance care planning" and "advance directive" are used interchangeably, 3 the term "advance directive" and related terms were also included to assure the recall ratio. Search strategies were tailored for each bibliographic database (see Appendix 1 for the CINAHL search strategy).
Inclusion and Exclusion Criteria
Articles were included if they presented empirical studies about advance care planning among Chinese diasporas. There was no restriction by country. Table 2 provides detailed inclusion and exclusion criteria.
Selection Process
The results of searches from each database were exported and managed by Zotero software, where duplicates were removed. A two-step process was used for screening:

1. Two reviewers (ZL & FH) independently read and eliminated studies on the basis of title and abstract against the identified inclusion and exclusion criteria. All articles considered relevant by each reviewer were included in the full-text evaluation.

2. Two independent reviewers evaluated the full-text studies against the inclusion criteria to identify the final articles included in this review.
Any discrepancies in study selection were discussed by both reviewers and adjudicated by a third reviewer (EB). To enhance rigor, the third reviewer screened a random selection of 10% of the included papers. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) diagram 22 was used to record each step of the screening process for visual representation. The search resulted in a sample of 1657 papers, with 27 studies included (see Figure 1).
Critical Appraisal
Articles were assessed for risk of bias independently by two reviewers (ZL & FH) using a range of appraisal tools aligned to the study designs 24,25 (see Appendix 2). The Mixed Methods Appraisal Tool (MMAT) is a 5-item assessment tool that has been widely used in previous studies for its ease of use, efficiency, and reliability. As guided by the tool's developer, 25 MMAT appraisals were presented with detailed information on each criterion rather than as a total score for each study. Appraisals by the Joanna Briggs Institute (JBI) Critical Appraisal Tools were presented and classified by risk of bias as high (a score below 49%), moderate (50-74%), and low (75% and above), calculated by counting the number of "yes" answers and expressing them as a percentage of the questions in the tool. The quality of papers was assessed as "high" in 5 qualitative, 9 quantitative, and 1 mixed-methods studies, and "moderate" in 7 qualitative and 5 quantitative studies (see Appendix 2). All 27 studies were included in the synthesis.
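To make the appraisal arithmetic concrete, the small sketch below is illustrative only: the function name and the example answers are hypothetical, while the percentage bands mirror those described above.

```python
def classify_jbi(answers):
    """Classify risk of bias from JBI checklist answers ('yes'/'no'/'unclear').

    The score is the percentage of 'yes' answers over all questions in the
    tool; the bands mirror those described in the review above.
    """
    pct = 100 * sum(a == "yes" for a in answers) / len(answers)
    if pct >= 75:
        return pct, "low risk of bias"
    if pct >= 50:
        return pct, "moderate risk of bias"
    return pct, "high risk of bias"


# Hypothetical 10-item checklist with 8 'yes' answers -> 80% -> low risk.
print(classify_jbi(["yes"] * 8 + ["no", "unclear"]))
```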
Data Extraction and Analysis
Data were extracted independently by two reviewers (ZL & FH) using a generic data extraction template, and disagreements were mediated by a third reviewer (EB) (Table 3). Key extracted information included (a) author and location of research; (b) year of publication; (c) aims, objectives, and/or research questions; (d) characteristics of the study population; (e) methodology; (f) major findings and information relevant to the research questions; and (g) limitations. The data extraction process was based on the four stages identified by Whittemore and Knafl 20 : data reduction, data display, data comparison, and conclusion drawing. Given the diversity of methodologies, the data were synthesized using content analysis, which facilitates the identification of patterns and commonalities; findings were then contrasted in line with shifting perspectives to allow critical analysis of the data. 26 The initial analysis was completed by the lead author and the themes were reviewed for accuracy by the team.
Advance directives were the focus of 3 studies, and advance care planning was the focus of the remaining studies. Three advance directive studies 27,43,48 addressed a set of outcomes: advance directive related knowledge, beliefs and attitudes, intention, completion, and associated factors. The remaining studies regarding advance care planning were reported in detail.
[Table 3 excerpt, study-level findings: facilitators to advance directive completion included longer residence in the United States, greater English proficiency, and family support. Hinderer and Lee 49 estimated the impact of a culturally tailored nurse-driven educational intervention on attitudes toward advance directives, advance directive completion, and advance care planning discussions; cultural beliefs and knowledge related to advance care planning were found to be important predictors of Chinese Americans advancing in their stage of readiness. Major inhibitors included a lack of knowledge or misunderstandings leading advance care planning to be negatively perceived; a sense of no urgency and/or procrastination; difficulty facing or initiating talk of death topics with adult children; and a possible unwillingness to make a commitment about future end-of-life situations. Abbreviations: OG, older group; YG, younger group; ACP, advance care planning; AD, advance directive; EOL, end-of-life; GPs, general practitioners; HCPs, healthcare professionals; TPB, theory of planned behavior; HBM, health belief model.]
Australian elders, reporting higher levels of awareness among participants who had completed higher education. However, education was not reported as a consistent influencing factor by Ng et al, 39 whose cross-sectional study, in which 273 (67.4%) participants were Chinese diaspora residing in Singapore, found that educational attainment had no association with advance care planning. The reason for these divergent results was not documented.
Most of the studies retrieved investigated the influence of age on awareness and knowledge levels. Although the majority of papers focused on middle-aged (>48 years) to older participants (>65 years), the evidence suggested that higher receptivity toward advance care planning was observed with advancing age. 32,38,39,41,45 For example, Ng et al, 39 in a cross-sectional study of the Chinese general public (n = 406) (>21 years) in Singapore, reported that about 14% of participants were aware of advance care planning, and that these represented an older cohort (50.8 years vs 46.2 years, p = 0.045, t = 2.0, df = 402). However, the influence of age on awareness of advance care planning was not consistent across studies. 38,41,45 For example, in a cross-sectional study undertaken in the United States of patients (n = 179) aged 55+ recruited via a community medical unit, Dhingra et al 41 reported no statistically significant associations between any of the sociodemographic factors, including age, and awareness of advance care planning. Age was also not associated with knowledge level, with Ye et al 51 and Lee et al 47 reporting moderate knowledge levels of advance care planning/advance directives among Chinese American elders. Furthermore, even among those who reported an awareness of advance care planning, misconceptions were common, often conflating it with living wills or euthanasia. 18,31,32 Other authors highlighted language among Chinese diasporas as a factor influencing the awareness and knowledge of advance care planning. 30,32 In a study of older Chinese American adults (n = 34) using focus groups, Yonashiro-Cho et al 30 found that participants in English-speaking groups had a greater understanding of, and familiarity with, advance care planning than those in Mandarin- and/or Cantonese-speaking groups. This suggests that language ability may affect the ease with which participants not only become aware of, but also gain information about, advance care planning. Both Yap et al 32 and Yonashiro-Cho et al 30 recommended the need for culturally tailored language materials to educate and facilitate the Chinese diaspora's engagement with advance care planning.
Several papers reported on the implementation of culturally tailored educational interventions, 46,47,[49][50][51] all of which reported significantly improved outcomes. For example, in a study of Chinese American adults (n = 72), Lee et al 47 provided educational material in both English and Mandarin guided by the Five Wishes, a type of legal advance directive document in the United States, and found that knowledge and engagement significantly improved. A similar programme was also noted in the study conducted by Hinderer and Lee. 49 However, all these retrieved studies were conducted in the United States and were limited to quasi-experimental methods, raising questions about the generalizability of the results to other countries.
Acculturation was found by several studies to have an influential role in discussions of advance care planning among Chinese living in multicultural countries. [30][31][32]36,40,42,45 Participants who had greater proficiency in English [30][31][32]45 or had lived in the host countries longer 40,42 were found to be more likely to engage with advance care planning. For example, in a study undertaken in the United States, Lee et al 31 found that older Chinese diaspora generations who lacked English proficiency tended not to engage in advance care planning. However, this was not an issue for younger generations, who were multilingual.
Other facilitating factors were the occurrence of a health-related problem (ie, falls, hospitalization, a decline in health) and/or the diagnosis of a life-limiting condition, which acted as key triggers to engagement. 30 Several authors who undertook their research in America and Singapore 33,39,46,52 noted that participants who regarded themselves as healthy did not feel any need to engage in advance care planning discussions, regardless of age and geographical location. Only one study, by Wong et al, 45 contrasts this view. Adopting a cross-sectional design in Australia, its findings indicated no significant association between having a chronic illness or cancer and participating in advance care planning. However, this result was based upon a small sample (n = 26), of whom only seven had engaged in advance care planning.
Culture was also reported to partially affect the promotion of, and engagement with, advance care planning. Jiao and Hussin 35 undertook a small-scale qualitative study in Malaysia, a highly collectivistic society, which reported that none of the 13 participants had engaged in advance care planning discussions. Several authors have also highlighted traditional Chinese culture, in which a taboo surrounding death, fear of causing upset, and concern about placing psychological burdens on family members hindered such topics from being broached. 32,35 Discussing dying and making advance care plans were found to be considered taboo subjects regardless of sociodemographic factors. 29,31,35,36,38,41,52 Some studies indicated that participants preferred the initiation of such conversations to be led by healthcare professionals or community representatives rather than by themselves or family. [28][29][30][31][32][33][34]37 Furthermore, Lee et al 31 indicated that both older and younger Chinese Americans expressed concerns about burdening their families, which inhibited their engagement with advance care planning. The experience and impact of family burden on advance care planning conversations are echoed in other studies, 29,31,36,37,52 which indicated that the burden tends to be a double-edged sword. Fear of causing upset, of facing one's own mortality, and the realization that caring for the older person is the duty and burden of the remaining family members were key barriers to engagement. However, Yap et al's 32 qualitative study of 30 older Chinese Australians found that many participants were open to discussing death, end-of-life, and advance care planning. They suggested that the low uptake of advance care planning among Chinese Australians might not be culturally motivated but rather due to language barriers that prevent access to health information and services.
Several studies 29,32,34,39,[40][41][42]44 identified facilitators to advance care planning engagement, such as social and health-related networks. The influence of a strong family culture was viewed as the foundation for promoting family involvement in decision-making. Liu 40 reported that family cohesion acts as a moderator. A similar finding was reported by Wang et al, 44 who undertook a cross-sectional study of 260 Chinese Americans aged 55+ years and found that family relationships had a significant positive overall effect on attitudes toward family involvement in end-of-life discussions. However, conflicting evidence regarding the family's influence exists. Wang et al 43 previously reported no correlation between family cohesion and the completion of advance directives among older Chinese Americans. Moreover, Pei et al 42 found that family conflict, not cohesion, was associated with engagement in advance care planning and end-of-life discussion.
Main Findings/Results of the Study
These findings highlight that awareness of, knowledge of, and engagement with advance care planning in Chinese diaspora communities are variable. Two factors, geographical context and culture, were found to be particularly important.
Chinese diaspora living in countries where advance care planning is supported by legal, cultural, and social systems are more likely to have awareness and knowledge of it and to engage in these conversations. Although knowledge of and engagement with advance care planning remain low internationally, the United States was the country most prominent in the promotion of advance care planning. 46,47,[49][50][51] The concept of advance care planning first emerged and was advocated in the United States. 23 Federally funded hospitals and nursing homes are required through legislation, the Patient Self-Determination Act of 1990, to provide an opportunity for the public to familiarize themselves with and complete an advance directive. 53 It is pertinent to note that challenges exist around the usage of language and terminology in different cultural contexts. Across the papers, the terms advance care planning and advance directive are used synonymously, yet they have different procedures, focuses, and distinct meanings. The implications of this for the general public, particularly the Chinese diaspora, are unknown. The supportive social contexts that embed advance care planning may help to explain the divergence in findings. 13,14,54,55 The importance of geographical context is echoed in McIlfatrick et al's 56 study, which highlighted the importance of government-driven policies and a positive social atmosphere in promoting advance care planning. However, only one paper in this review indicated the role of policy as an influence. Chiang et al 36 found that the Chinese diaspora assign great weight to national policy and align their behavior with it. This likely stems from the role of, and trust often placed in, government in Chinese cultures. Strong policy initiatives at the health-system and institutional levels are considered an influential factor in the acceptance of advance care planning among Chinese populations. 4 Second, the findings from the review indicated that culture partially affects the promotion of, and engagement with, advance care planning. 32,35 In traditional Chinese culture, a common perspective on death is a pragmatic acceptance of its inevitability, and this is also reported in the Chinese diaspora. 57,58 However, as this review confirms, death is viewed as taboo and death-related issues as sensitive topics. Many believe that conversations on death-related topics could invite misfortune and place burdens on families. 32,35 This may help to explain why the Chinese population prefers indirect communication approaches rather than directly discussing end-of-life care plans or advance care planning with family members or healthcare professionals. As Jia et al 13 proposed, effective communication strategies need to be tailored to individuals and culturally appropriate. This is also echoed in other diasporas globally. 16 The Chinese tradition of reciprocal filial piety, in which adult children are expected to look after elders, was found to be an influence on engagement with advance care planning. 57,59,60 However, the evidence of this influence is unclear. Some research suggests filial relations in the West are consistent with those supported by the reciprocal aspect of filial piety in Chinese societies.
57,59 However, evidence from this review suggested that, in an attempt to reduce the burden of planning for the future, members of the Chinese diaspora generally prefer others in authority (ie, healthcare professionals or community representatives), rather than themselves, to initiate advance care planning. [28][29][30][31][32][33][34]37 The findings from the review also indicated the influence of a strong family culture in decision-making, reflected in another review, 54 which is a characteristic of Asian culture. This emphasizes the importance of familism, rather than individual autonomy and self-determination, in making major decisions. 61 It is imperative, therefore, to understand these cultural differences to help inform public health approaches to enhance knowledge of and engagement with advance care planning.
This review highlights some gaps in the evidence base with regard to the influence of cross-cultural integration and generational differences on advance care planning engagement.
What This Study Adds?
Advance care planning has been advocated as one way in which to improve the Chinese diaspora's end-of-life care experience. However, evidence suggests knowledge and uptake of it are low across multicultural countries. This study updates previous reviews on components of advance care planning for Chinese diasporas and highlights that the Chinese diaspora's awareness of, knowledge of, and engagement with advance care planning do not follow a linear process. In addition to the sociodemographic factors recognized in previous studies as influencing engagement in advance care planning, two additional considerations were identified. First, in the geographical context within which the Chinese diasporas are living, the legal, cultural, and social systems act as a catalyst to enhance awareness of and engagement with advance care planning. However, most studies, especially those that investigated bespoke culturally tailored advance care planning educational interventions, were conducted in the United States and limited to quasi-experimental methods. There is a lack of evidence in other multicultural countries such as the UK. Second, the Chinese diaspora's original culture has a significant impact on engagement with advance care planning. It is crucial to accommodate their traditional cultural beliefs in the practice of advance care planning. This review indicates the lack of high-quality culturally tailored educational interventions to improve knowledge of advance care planning. It is therefore imperative to conduct more research to address these issues, in turn promoting Chinese diaspora engagement with advance care planning across multicultural countries.
Strengths and Limitations
While this comprehensive systematic global literature review was guided by standard methodology, it has several limitations. First, this review only included English- and Chinese-language studies, excluding studies in other languages. Secondly, this review included papers published from 1990 to 31st March 2022, and new studies published after this date may not be reflected in the analysis, so the conclusions should be treated with caution. Finally, a plethora of terms is used to denote advance care planning, and it is possible that some terms were missed.
Conclusion
The review provided an international insight into the Chinese diaspora's knowledge of, and engagement with, advance care planning. Overall, the results indicate that Chinese diaspora engagement is not a linear process but is influenced by a myriad of sociodemographic factors. Such findings are not novel and have been reported elsewhere; however, the influence of identity and culture has been neglected in the delivery of, and engagement with, advance care planning among diaspora groups. The geographical context within which the Chinese diaspora are living, as well as their original culture, were found to be key factors influencing engagement. Therefore, a culturally tailored approach should be adopted in future research and practice for Chinese communities in multicultural countries, especially in the UK. | 2023-02-28T06:16:44.485Z | 2023-02-27T00:00:00.000 | {
"year": 2023,
"sha1": "0a75d3b18e99fa555249e9294d3d60fe4c336c5e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/08258597231158321",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "833f87040d0d4cfc36cb5157f7e8abcdb82f7d3b",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248651723 | pes2o/s2orc | v3-fos-license | A case report of profound atrioventricular block in an endurance athlete: how far do you go?
Abstract Background Athletes presenting with 1st-degree atrioventricular block (AVB) on a 12-lead electrocardiogram (ECG) may present a diagnostic conundrum, especially when the block is significantly prolonged and associated with higher degrees of block. A pragmatic stepwise approach to the evaluation of these patients is, therefore, crucial. Case summary A 19-year-old waterpolo player was referred for assessment of 1st-degree heart block and one isolated episode of syncope. All other cardiac investigations were within normal limits except for a 24-h ambulatory ECG, which showed Mobitz I AVB and episodes of 2:1 block occurring in the context of Wenckebach. An electrophysiological study (EPS) was performed which effectively excluded infranodal conductive tissue disease, confirming physiological intranodal block. Discussion The increase in vagal tone is one of the physiological adaptations to an increased demand in cardiac output in athletes, which explains the presence of 1st-degree AVB in up to 7.5% of athletes. The presence of 2:1 AVB on 24-h ECG raises doubts as to whether the 1st-degree AVB on the resting ECG is pathological or physiological, especially considering this particular patient had suffered an episode of syncope. When this diagnostic uncertainty persists despite non-invasive investigations, including cardiopulmonary exercise testing, invasive EPS may be required to assess the refractoriness of the AV node and at what level within the cardiac conduction system block occurs. The electrophysiological study can effectively rule out infranodal disease by confirming physiological intranodal block using incremental atrial pacing.
Introduction
First-degree atrioventricular block (AVB) is a common training-related change, found in up to 7.5% of athletes on a resting electrocardiogram (ECG). 1 The electrophysiological study (EPS) assessment of the atrioventricular (AV) node in endurance athletes favours intrinsic adaptation, independent of vagal tone. 2 Downregulation of the funny channel (HCN4) is one of the most common causes of physiological bradycardia in athletic individuals, leading to a corresponding drop in the funny current, an important pacemaker mechanism. 3,4 Nevertheless, further evaluation is warranted in the presence of arrhythmic symptoms, a broad QRS, an abnormal axis, or an inappropriate sinus rate response during exercise. 1 The presence of high-grade AVB on prolonged ambulatory ECG monitoring should also raise the suspicion of cardiac pathology. We present a case of profound AVB in a competitive endurance athlete suffering from syncope.
Case presentation
A 19-year-old male Caucasian competitive waterpolo player, engaging in 20 h of moderate- to high-intensity physical activity on a weekly basis, was referred for cardiovascular assessment following an isolated episode of syncope 1 year previously during competition. This had occurred soon after the patient came out of the pool at the end of the race. It was preceded by lightheadedness and nausea and lasted 30 s. He was fully oriented and recovered spontaneously a few minutes later. He was referred to hospital and all investigations, including blood tests, telemetry, and a computerized tomography pulmonary angiogram, were normal. Physical examination was largely unremarkable, with sinus bradycardia (50 b.p.m.), normal blood pressure, and no added heart sounds. He had never been symptomatic before that point. There was no family history of sudden cardiac death. He was not on any regular medications. The patient was eventually referred to the Sports Cardiology Clinic months later after having discussed this with his family doctor during his routine yearly screening appointment. The presence of symptoms in the context of AVB prompted referral.
A 12-lead ECG revealed sinus bradycardia with profound 1st-degree AVB (Figure 1). The PR interval was measured as 365 ms. The QRS satisfied voltage criteria for left ventricular (LV) hypertrophy. QRS duration and axis were normal, with no evidence of QRS fragmentation. A transthoracic echocardiogram was consistent with an athletic heart. The LV and right ventricular (RV) volumes were at the upper limits of normal. The systolic function of both ventricles was also towards the lower limits of normal, with an LV and RV ejection fraction of 53% and 52%, respectively. Diastolic parameters were normal, with the left atrium mildly dilated. 24-h ambulatory ECG monitoring revealed prolonged periods of sinus bradycardia, 1st-degree AVB and intermittent 2nd-degree Mobitz I (Wenckebach) AVB (Figure 2). Nocturnal episodes of 2:1 AVB in the context of Wenckebach were also recorded (Figure 3). He did not report any symptoms during ambulatory ECG monitoring.
Cardiopulmonary exercise testing (CPET) was normal with a VO2max of 42.5 mL/kg/min (93% of predicted). Ventilatory efficiency was normal (VE/VCO2 28.4). There was a normal blood pressure response. No arrhythmias were recorded. Atrioventricular conduction was also normal throughout the test, with PQ prolongation again seen in recovery. There was no objective evidence of cardiac or ventilatory limitation to exercise (heart rate recovery at 1 min was 28 b.p.m.; O2/pulse was 17.1 mL/beat, which was 104% of predicted). Cardiac magnetic resonance (CMR) imaging was also performed because of possible arrhythmic symptoms in the context of conduction abnormalities and ventricular function towards the lower limits of normal. This revealed a low-normal LV and RV ejection fraction (LVEF 52%, RVEF 51%), with normal chamber dimensions. No macroscopic fibrosis was present on late enhancement sequences. Family screening of both his parents and his two siblings was negative. The pros and cons of genetic testing were also discussed. In the absence of a definite clinical phenotype, the team opted against referral.
All these secondary investigations failed to confirm the presence of a definite cardiac phenotype. The presence of symptoms increased the suspicion of conduction disease in the context of early dilated cardiomyopathy. He was referred for an EPS for better risk stratification and phenotypic characterization, a pre-requisite prior to giving full clearance for competitive sports. Baseline sinus node recovery time was measured at 231 ms, the baseline AH interval at 307 ms and the HV interval at 38 ms. Atrioventricular Wenckebach was achieved by pacing the atrium at 740 ms (81 b.p.m.). This improved to 320 ms (188 b.p.m.) following the administration of isoprenaline. The AH interval shortened to 88 ms. No His signal was recorded when in AVB, confirming the presence of intranodal physiological AVB (Figure 4). Atrioventricular block improved with rapid atrial pacing and after administering an isoprenaline infusion, effectively ruling out infranodal block. He was reassured and advised to undergo biannual surveillance with echocardiography and ambulatory ECG monitoring. His initial symptoms were attributed to vasovagal syncope in the context of overtraining. This tallied with the acute increase in training volume in preparation for the competition at the time. Since his evaluation, he has not had any recurrence of his symptoms and is training normally. Based on this comprehensive evaluation, no medical treatment has been necessary so far.
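For readers converting between the pacing cycle lengths and heart rates quoted above, the standard relation (not stated explicitly in the report) is

$$\text{rate (b.p.m.)} = \frac{60\,000}{\text{cycle length (ms)}}$$

so atrial pacing at 740 ms corresponds to 60,000/740 ≈ 81 b.p.m., and 320 ms to 60,000/320 ≈ 188 b.p.m., matching the reported values.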
Discussion
Individuals who regularly engage in at least 4 h of physical activity per week undergo structural, functional, and electrical adaptations in the heart, collectively known as the athlete's heart. 5 Increased vagal tone and enlarged cardiac chambers help accommodate the increased demand in cardiac output. 6,7 The type and degree of cardiac adaptation are influenced by age, gender, ethnicity, and sporting discipline. Extreme athletic adaptation may at times overlap with cardiac pathology, a region traditionally known as the grey zone. 8 First-degree AVB is a typical training-related change, found in up to 7.5% of athletes on a resting ECG. 1 Electrophysiological study assessment of the AV node in endurance athletes favours intrinsic adaptation, independent of vagal tone. 2 Biomechanical and mechanical effects induced by dilatation and hypertrophy have been proposed as possible mechanisms. Further evaluation is normally only warranted in the presence of arrhythmic symptoms, a broad QRS, an abnormal axis, or an inappropriate sinus rate response during exercise. 5 The presence of high-grade AVB on prolonged ambulatory ECG monitoring should also raise the suspicion of cardiac pathology. Pathological AVB in young athletic individuals may be a manifestation of an inherited disorder (Lamin A/C dilated cardiomyopathy, SCN5A arrhythmogenic cardiomyopathy or Brugada syndrome, myotonic dystrophy, PRKAG2 syndrome). 1 Various factors may, however, help differentiate physiological adaptation from pathology (Figure 5). The case presented illustrates several important factors in the diagnostic work-up of an athlete in the grey zone. Further evaluation should be targeted at phenotypic characterization and risk stratification. 5 Symptoms and family history are extremely important. Symptoms strongly suggestive of cardiac arrhythmias should be evaluated comprehensively. A relevant family history of premature conduction disease, cardiomyopathy or sudden cardiac death should also raise suspicion of a familial disorder. The absence of symptoms and of a relevant family history would favour physiological remodelling.
A resting 12-lead ECG is undoubtedly very important in the diagnostic work-up. The absence of pathological ECG patterns, including high-grade AVB, a wide QRS, and repolarization anomalies (pathological T wave inversion, Q waves, ST-segment depression, QT prolongation), strongly favours athletic adaptation. 1 The presence of a pathological ECG pattern should always prompt referral for a comprehensive diagnostic work-up, especially in symptomatic patients. Prolonged ambulatory ECG monitoring may help record high-grade AVB and/or malignant ventricular arrhythmias throughout the day. The presence of either one will again favour cardiac pathology.
Echocardiography is regarded as the first-line imaging modality in ruling out significant structural heart disease. The presence of dilated and/or hypocontractile chambers may raise suspicion of a cardiomyopathic process. The absence of structural heart disease would undoubtedly favour athletic adaptation. The presence of chamber dilatation, systolic dysfunction, and/or fibrosis on CMR will all help ascertain the likelihood of pathology in athletes presenting in the grey zone. 9,10 Using athlete-specific reference ranges may also help decrease the false-positive rate that could otherwise result in the misdiagnosis of an otherwise young, healthy athletic individual. 11 As discussed previously, normal AV conduction during exercise favours physiological adaptation. An inappropriate sinus rate response is traditionally present in athletes who present with high-grade AVB. 1 A cardiopulmonary exercise assessment may give more information on cardiorespiratory fitness and evidence of cardiac limitation to exercise. A normal VO2max and a normal physiological response will strongly favour athletic adaptation. Standard exercise tolerance testing is a reasonable alternative when CPET is not routinely available.
Current guidelines also encourage referral for invasive EPS when diagnostic uncertainty persists despite non-invasive evaluation. 12 This may help differentiate intranodal from infranodal AVB.
The PR interval in the case presented was within the normal range for an athletic individual. 1 The presence of syncope prompted further evaluation as per current guidelines, 12 using a systematic stepwise approach. 13 2:1 AVB raised the suspicion of significant conduction disease. The presence of low/normal systolic function of both ventricles also raised the suspicion of a cardiomyopathy. Both of these findings may raise the suspicion of an early Lamin A/C cardiomyopathy phenotype.
Several factors elicited during the evaluation were, however, reassuring. Aquatic athletes often show significant cardiac remodelling in response to the high physiological demand. Normal sinus rate response during exercise, normal cardiorespiratory fitness, absence of ventricular arrhythmias on ambulatory monitoring, and no fibrosis on CMR all favoured athletic adaptation. Negative family screening also helped rule out a familial inherited disorder.
The presence of syncope in the context of 2:1 AVB may traditionally have been a reasonable indication for permanently pacing this individual. Profound electrical remodelling in athletes may, however, occasionally overlap with cardiac pathology, which is why a comprehensive secondary evaluation is warranted in borderline cases. He would otherwise have been falsely labelled with a non-existent cardiac phenotype, itself carrying important lifelong implications (sporting career, family planning, life insurance policy). Implanting a pacemaker in an aquatic athlete would undoubtedly also have led to a higher risk of lead fracture due to repetitive ipsilateral arm movements. Now more than ever, the latest European Society of Cardiology sports cardiology guidelines encourage a shared decision-making approach, respecting the athlete's autonomy after providing all the relevant information about the impact of sport and the potential adverse events which may occur. 14 A meticulous, comprehensive assessment in a tertiary centre is strongly advised in such difficult cases. Electrophysiological studies may help rule out cardiac pathology in athletes presenting with profound athletic remodelling.
Lead author biography
Dr Mark Abela is a Cardiology registrar practicing at Mater Dei Hospital. He has finished speciality training in Cardiology and has undergone a fellowship in Sports Cardiology and Inherited Cardiac Conditions at St George's Hospital in London. His main academic and clinical interests are athletic cardiac adaptation, cardiac screening, inherited cardiac conditions, and cardiac rehabilitation. He is currently the clinical lead at Mater Dei Hospital for the Sports Cardiology Service, the Inherited Arrhythmia Clinic Service, and the Cardiopulmonary Exercise Testing modality. He sits on the medical committees of several sporting disciplines.
Supplementary material
Supplementary material is available at European Heart Journal-Case Reports online. | 2022-05-10T15:44:09.561Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "415185f07bc7568a098a9290fe40e86a1d6e18d7",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ehjcr/advance-article-pdf/doi/10.1093/ehjcr/ytac190/43518442/ytac190.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0131c16cea8d3fcd3922a2679551144786dd3ce4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
236213173 | pes2o/s2orc | v3-fos-license | Effects of fadolmidine, an α2‐adrenoceptor agonist, as an adjuvant to spinal bupivacaine on antinociception and motor function in rats and dogs
Abstract α2-Adrenoceptor agonists such as clonidine and dexmedetomidine are used as adjuvants to local anesthetics in regional anesthesia. Fadolmidine is an α2-adrenoceptor agonist developed especially as a spinal analgesic. The current studies investigate the effects of intrathecally administered fadolmidine with a local anesthetic, bupivacaine, on antinociception and motor function in conscious rats and dogs. The antinociceptive effects of intrathecal fadolmidine and bupivacaine alone or in combination were tested in the rat tail-flick and dog skin twitch models. The durations of motor block in rats and in dogs were also assessed. In addition, the effects on sedation, mean arterial blood pressure, heart rate, respiratory rate and body temperature were evaluated in telemetrized dogs. Concentrations of fadolmidine in plasma and spinal cord were determined after intrathecal and intravenous administration in rats. Co-administration of intrathecal fadolmidine with bupivacaine increased the magnitude and duration of the antinociceptive effects and prolonged motor block without hypotension. The interaction of the antinociceptive effects was synergistic in nature in rats. The concentration of fadolmidine in plasma was very low after intrathecal dosing. Taken together, these studies show that fadolmidine as an adjuvant to intrathecal bupivacaine provides enhanced sensory-motor block and enables a reduction of the doses of both drugs. The results indicate that co-administration of fadolmidine with intrathecal bupivacaine was able to achieve an enhanced antinociceptive effect without hypotension and could thus represent a suitable combination for spinal anesthesia.
| INTRODUCTION
The intrathecal injection of a variety of α2-adrenoceptor agonists, such as clonidine and dexmedetomidine, has been shown to produce significant analgesia, and these drugs are extensively used in anesthesia and intensive care medicine. 24,40 However, these drugs tend to induce hypotension, bradycardia and sedation, which limits their role to that of adjuvant analgesics. 18,21,67 One local anesthetic, bupivacaine, is capable of achieving adequate pain relief and therefore is commonly used in spinal anesthesia. However, the short duration of action and dose-dependent cardiovascular adverse effects, such as hypotension, tend to limit the use of bupivacaine. 45,52 In the clinic, the administration of vasoconstrictors is required to maintain blood pressure, although the use of vasoconstrictors offers another benefit, that is, decreasing the systemic absorption of local anesthetics. 8,18,56 α2-Adrenoceptor agonists, like clonidine and dexmedetomidine, when combined with local anesthetics, have been shown to enhance the analgesic effect by prolonging the duration of the sensory-motor block of local anesthetics. 10,58,67 The combination allows a reduction of the doses of both drugs and, furthermore, causes fewer side effects in perioperative anesthesia. 8,15,57 Fadolmidine, 3-(1H-imidazol-4-ylmethyl)-indan-5-ol, is an α2-adrenoceptor agonist developed especially for spinal analgesia. 31 Fadolmidine has been demonstrated to induce an antinociceptive effect after intrathecal administration in rats, 33,43,44,48,49,61 dogs 34 and sheep. 20 Furthermore, due to its pharmacokinetic properties, fadolmidine passes poorly across the blood-brain barrier 20,31,43,49 and does not distribute significantly to the central nervous system. 62 During a 24-h continuous intrathecal infusion of fadolmidine in dogs, a good antinociceptive effect was achieved without any signs of adverse effects such as hypotension, respiratory depression and hypothermia, which were evident during the intrathecal infusion of clonidine. 34 The effects of intrathecal fadolmidine as an adjuvant to local anesthesia have not been studied previously. The aim of the present study was to evaluate whether the combination of intrathecally administered fadolmidine with bupivacaine would exert antinociceptive effects (an increase in the thermal response latency) assessed with the rat tail-flick and dog skin twitch models. Both the rat tail-flick and dog skin twitch tests are well established and validated methods to assess the efficacy of analgesic drugs. 4,9,64 A noxious heat stimulation of the tail and of the skin of the lower back produces a nociceptive reflex response (a flick of the tail away from the heat source 9 and a contraction of the trunci cutaneous musculature of the lower back, 17 respectively) without changes in spontaneous or evoked behavioral responses of the animals. 3 The duration of motor block of bupivacaine was assessed by measuring motor scores and rotarod performance in rats and by defining the duration of hind limb paralysis in dogs. In addition to rats, dogs were used in this study because the duration of subarachnoid conduction motor blockade in dogs has been shown to be qualitatively similar to the values for spinal anesthesia reported in humans. 22 Furthermore, the dog seems to be a more appropriate species than rodents for the evaluation of the cardiovascular effects of α2-adrenergic compounds.
19,29,34 Therefore, the effects of bupivacaine and fadolmidine on safety parameters such as sedation, mean arterial blood pressure (MAP), heart rate (HR), respiratory rate and body temperature were determined in dogs fitted with telemetry transmitters. Furthermore, the concentrations of fadolmidine in plasma and spinal cord were determined after intrathecal and intravenous administration in rats. α2-Adrenoceptor-induced antinociception has been shown to be sex-specific and attenuated by oestrogen in female rats. 39
| Test formulations for pharmacodynamic experiments
In rats, fadolmidine and bupivacaine were dissolved and diluted in sterile purified water (Aquasteril ® , Orion Corporation) and administered by a Hamilton syringe in a volume of 10 µl. In dogs, fadolmidine and bupivacaine were dissolved and diluted in sterile physiological saline (Natrosteril ® , Orion Corporation) and administered by syringe in a volume of 0.5 ml. The intrathecal injections of drugs were followed by an additional saline injection of 10 µl in rats and 0.5 ml in dogs to flush the drug remaining in the catheter lumen.
| Test formulations for pharmacokinetic experiments
An unlabeled stock solution of the test substance was first prepared. Preparation of a test formulation for intrathecal administration: a measured amount of 3H-labeled fadolmidine in methanol was evaporated to dryness under a gentle flow of nitrogen at 30℃. The residue was dissolved in an aliquot of the unlabelled fadolmidine stock solution described above. Then the pH of the solution was adjusted to 6.0 with 0.1 M NaOH and finally its volume was brought to 1.5 ml by adding purified water. The target concentration of the test compound in the solution was 0.100 mg/ml and the radioactivity 111 MBq/ml. Preparation of a test formulation for intravenous administration: a measured amount of 3H-labeled fadolmidine in methanol was evaporated to dryness and dissolved in a dilution of the above stock solution. Then the pH of the solution was adjusted to 6.0 with 1 M NaOH and finally its volume was brought to 40 ml by adding purified water. The target concentration of the test compound in the solution was 3 µg/ml and the radioactivity 3.7 MBq/ml. The radioactivities of the intrathecal (2 samples, 10 µl/sample diluted with 1990 µl of water) and intravenous (2 samples, 40 µl/sample diluted with 1960 µl of water) solutions were counted in a Wallac 1214 RackBeta liquid scintillation counter using six parallel aliquots of each sample. Specific radioactivity was calculated taking into account the dilution factors and sample volumes. The solutions for intrathecal and intravenous administration were stored at 4℃ and were used for dosing within 3 days of preparation.
Prior to the drug treatment, the rats were fasted overnight. Food was made available to those rats remaining in the study 3 h after dosing. Tap water was available ad libitum except during dosing and sampling. On the dosing day, a single intrathecal bolus dose (10 µl) of the 3H-fadolmidine formulation was given via the intrathecal catheter to 36 male and 36 female rats. The intrathecal formulation was followed by the same volume of physiologic saline (Natrosteril®, Orion Corporation). For intravenous dosing, a single bolus dose (300 µl) of the 3H-fadolmidine formulation was given via the tail vein to 36 male and 36 female rats.
| Intrathecal catheterization in rats
Intrathecal catheters were implanted under midazolam (5 mg/kg) anesthesia. There was a recovery period of at least 3 days between experiments.
Within each drug, the animals were randomized to the different dose groups according to the Latin square principle.
| Intrathecal catheterization and implantation of telemetry transmitter in dogs
Intrathecal catheterization and implantation of radio-telemetry transmitters were undertaken simultaneously under sterile conditions. Intrathecal catheterization was performed according to the method of Atchison et al 6 with minor modifications. Briefly, anesthesia was induced with medetomidine hydrochloride 40 µg/kg, i.m. (Domitor® 1 mg/ml, Orion Corporation, Finland) and maintained with propofol 6.5 mg/kg as a bolus intravenous injection followed by an infusion of 0.9 ml/kg/h (Diprivan® 10 mg/ml, Zeneca). Surgical areas were shaved and prepared with Betadine® solution. The dog's head was positioned in a holder. A sterile technique with autoclaved instruments was applied to make a small skin incision between the skull base and C1, and the dura was exposed. An incision was made in the dura and a clear nylon 19G epidural catheter (Portex®, Portex Limited) was inserted through it and advanced intrathecally. During the measurements (sedation, antinociception, MAP, HR, respiratory rate, and body temperature) the dogs (n = 5) were standing on the operating table.
| Tail-flick test in rats
The rat tail-flick test was performed with an analgesia meter (Ugo Basile). In the experiments with either fadolmidine or the combination of bupivacaine with fadolmidine, the following time points were used: 0.5, 1, 2, 4 and 6 h. The fadolmidine and bupivacaine doses (n = 7/dose group) used for isobolographic analysis are presented in Table 1.
| Motor score, Rotarod performance and body temperature measurements in rats
The measurements were performed in the same rats: first the motor score was determined, immediately followed by the assessment of rotarod performance and the measurement of body temperature. Motor function was scored by a slightly modified method of Penning and Yaksh. 46 Motor function was evaluated by grading bilaterally the following parameters: (1) sedation (scored 0-2), (2) the placing/stepping reflex of the left (scored 0-2) and right (scored 0-2) hind legs, (3) the muscle tone of the right (scored 0-2) and left (scored 0-2) hind legs, assessed by stretching the legs, and (4) the righting reflex (scored 0-2). The scores were 0 = absent, 1 = impaired and 2 = normal, giving a normal baseline score of 12. The duration of action on the motor score was defined as the time from drug dosing to the first measurement time point at which the motor score had returned to the normal baseline value of 12. The muscle tone of the fore limbs (right (scored 0-2) and left (scored 0-2)) was also measured. Animals with a pre-test score of 16 were accepted for the study. The effect on motor co-ordination was evaluated on a rotarod treadmill for rats (Ugo Basile) consisting of four drums (diameter 70 mm, 4 r/min) separated by five flanges. After training, only those rats that were able to stay on the rotating rod for at least 2 min were selected for testing. Rectal temperature was measured with a digital thermometer (Ellab) at a depth of 2 cm.
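To make the scoring arithmetic explicit, the following small helper is an illustrative sketch: the function name and the example sub-scores are hypothetical, while the items and the 0-2 grading mirror the description above.

```python
def motor_score(sedation, step_left, step_right, tone_left, tone_right, righting):
    """Composite motor score: six items, each graded 0 (absent) to 2 (normal)."""
    return sedation + step_left + step_right + tone_left + tone_right + righting


# A fully normal rat scores 12; adding normal fore-limb tone (2 + 2 points,
# scored separately) gives the pre-test acceptance score of 16.
assert motor_score(2, 2, 2, 2, 2, 2) == 12
print(motor_score(2, 1, 1, 0, 0, 2))  # hypothetical partial block -> 6
```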
| Skin twitch response in dogs
The thermally evoked skin twitch response was measured using a probe with an approximately 1 cm surface area, maintained at 62.5 ± 0.5℃. The probe was applied sequentially to the shaven thoracolumbar areas of the animal's back. When a brisk contraction of the local musculature within 1-3 s of probe placement was detected, the probe was removed and the latency recorded.
Failure to respond within 6 s (the cut-off time to prevent tissue damage) was assigned as the latency. During the study no tissue damage was noted even with the 6 s cut-off time. For analytical purposes, the nociceptive response is presented as the mean of the two latencies.
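The antinociceptive responses are later summarized as the percent maximum possible effect (%MPE; see Figure 6). The formula is not spelled out in the text, but %MPE is conventionally computed from the baseline latency, the post-drug latency and the cut-off time as

$$\%\text{MPE} = 100 \times \frac{\text{post-drug latency} - \text{baseline latency}}{\text{cut-off latency} - \text{baseline latency}}$$

so that an animal failing to respond within the 6 s cut-off scores 100%.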
| Motor function measurement in dogs
The time to onset and the duration of motor block were evaluated following the intrathecal injection. Onset of motor blockade was defined as the time between completion of the intrathecal injection and the time when the dog's hind limbs were unable to support its weight. The duration of motor blockade was defined as the time from onset of motor blockade to the time when the animal was again able to support its own weight.
| MAP and HR measurement in dogs
MAP and HR were recorded and analysed using the Dataquest IV telemetry system. The hemodynamic values of all cardiac cycles within the sampling periods were averaged.
| Sedation, respiratory rate and body temperature measurements in dogs
Drug-induced sedation was monitored simultaneously with the telemetry recording. Sedation was scored (0-4) according to the following criteria: 0 = normal alertness and responsiveness to the investigators, 1 = quiet response, eyes closed, but readily alerted and retaining head tone continuously, 2 = quiet, drowsing, eyes transiently closed, minimal neck tone, but arousable, 3 = significant depression, eyes remain shut, loss of neck tone, difficult to arouse, 4 = not arousable, total loss of neck tone, no overall response to strong stimuli applied to paws. The behavioural assessment points were before drug administration (0) and 0.5, 1, 1.5, 2, 3, 4 and 6 h after drug dosing.
The respiratory rate was measured by observation of chest expansion and contraction. The measurement points were before drug administration (0) and at 0.5, 1, 1.5, 2, 3, 4 and 6 h after drug dosing.
The effect on body temperature (measured rectally at a depth of 3-4 cm with a thermometer) was assessed before dosing (0) and 1, 2, 3, 4 and 6 h after drug dosing. From the dose-response curves, the ED50 values of the combination of bupivacaine and fadolmidine were calculated (Table 1).
| Study design in dogs
The isobolographic analysis of the analgesic interaction was performed graphically by the methods of Tallarida et al 60 and Tallarida. 59 The type of interaction was calculated using two equations.
Equation 1 ([59], equation 3):

$$\frac{z_1}{z_1^*} + \frac{z_2}{z_2^*} = 1$$

where z1 is the ED50 dose of fadolmidine in the combination and z1* is the ED50 dose of fadolmidine alone; z2 is the ED50 dose of bupivacaine in the combination and z2* is the ED50 dose of bupivacaine alone. A combination whose left-hand side equals 1 is additive; values below 1 indicate synergy.
Equation 2 ([59], equation 4):

$$Z_{add} = \frac{z_1^*}{p_1 + R\,p_2}$$

where z1* and z2* are as above, p1 is the proportion of drug 1, p2 is the proportion of drug 2, and R is the relative potency (z1*/z2*). The additive point (dose) must be calculated for fadolmidine and for bupivacaine separately. In the statistical models, there were two within-dog factors, that is, dose (d) and time (t), with the time points before dose administration being used as a covariate. If the level of probability (p) was <.05 (considered statistically significant), then a pair-wise comparison was conducted.
Contrasts (pair-wise comparisons) were also made to characterize the differences in more detail. Significance levels of *p < .05, **p < .01 and ***p < .001 were considered statistically significant. Statistical analyses were performed with SAS® software (SAS Institute Inc.). Plasma and spinal cord radioactivity values are presented as mean ± SD.
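As a worked illustration of the isobolographic calculation (Equations 1 and 2 above), the short sketch below uses hypothetical ED50 values and dose proportions, not the values measured in this study, to show how the additive total dose and the interaction index are obtained.

```python
# Hypothetical ED50 values (µg) for the two drugs given alone.
z1_star = 2.0    # drug 1 (e.g., fadolmidine) alone
z2_star = 60.0   # drug 2 (e.g., bupivacaine) alone
p1, p2 = 0.1, 0.9            # fixed dose proportions in the mixture
R = z1_star / z2_star        # relative potency, as in Equation 2

# Additive total dose predicted by Equation 2.
Z_add = z1_star / (p1 + R * p2)
print(f"Predicted additive ED50 of the mixture: {Z_add:.1f} µg")

# Suppose the experimentally observed ED50 of the mixture were half that:
Z_obs = 0.5 * Z_add
z1, z2 = p1 * Z_obs, p2 * Z_obs   # component doses at the observed ED50

# Interaction index from Equation 1: 1 = additive, < 1 = synergistic.
gamma = z1 / z1_star + z2 / z2_star
print(f"Interaction index: {gamma:.2f} (< 1, i.e., synergistic)")
```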
| Nomenclature of targets and ligands
Key protein targets and ligands in this article are hyperlinked to corresponding entries in http://www.guidetopharmacology.org, the common portal for data from the IUPHAR/BPS Guide to PHARMACOLOGY, 25 and are permanently archived in the Concise Guide to PHARMACOLOGY 2019/20. 2
| Antinociception in rats
The antinociceptive effects of intrathecal bupivacaine and fadolmidine, alone and in combination, were assessed in the rat tail-flick test.
| Motor function and body temperature in rats
The interaction between intrathecal bupivacaine and fadolmidine on motor function was studied by measuring the motor score and rotarod performance; body temperature was also recorded (Table 3).
| Antinociception and motor block in dogs
The antinociceptive effect and the duration of motor block of intrathecal fadolmidine 60 µg and bupivacaine 3 mg, alone and in combination, in the skin twitch test in dogs are presented in Figure 6.
| MAP and HR in dogs
The effects of intrathecally administered fadolmidine 60 µg, bupivacaine 3 mg and their combination on MAP and HR are presented in Table 4.
| Respiratory rate, body temperature and sedation in dogs
The effects of intrathecally administered fadolmidine 60 µg, bupivacaine 3 mg and their combination on respiratory rate and body temperature are presented in Table 4.
| Concentrations of fadolmidine in plasma and spinal cord in rats
Total and dose-corrected (after intrathecal administration) radioactivity in plasma, and the corresponding concentration in mass equivalents of 3H-fadolmidine (free base) in the spinal cord, after intrathecal and intravenous administration at a dose of about 3 µg/kg to rats, are presented in Figure 7. Mass equivalents after intrathecal dosing were corrected (dose-corrected) according to the radioactive dose ratio in order to allow for comparison.
FIGURE 5 Effects on motor score, determined every 10-15 min after drug dosing. The effect of bupivacaine at doses of 0, 1, 3, 10, 30, 50 and 100 µg (n = 8/dose) in study 1 (A), and 0, 100 and 300 µg (n = 8/dose) in study 2 (B), on motor score (12 = normal muscle tone, 0 = muscle tone absent) over time after intrathecal administration in rats. Values are presented as mean ± SD.
TABLE 2 Motor score (maximum effect and duration of action, min) observed after intrathecal administration of bupivacaine at each dose (µg) in study 1 and study 2 in rats.
| DISCUSSION
In this study, the effects of intrathecal fadolmidine, an α2-adrenergic agonist, together with the local anesthetic bupivacaine on the sensory-motor block were evaluated in rats and dogs. In addition, the effects of the compounds on safety parameters such as MAP, HR, respiratory rate and body temperature were evaluated in dogs.
TABLE 3 The time course of the effects of bupivacaine at the dose of 300 µg, fadolmidine at the doses of 0.3, 1, 3 and 10 µg, and the combination of bupivacaine 300 µg with fadolmidine 0.3, 1, 3 and 10 µg on motor score (12 = normal muscle tone, 0 = muscle tone absent), rotarod performance (s, maximum measurement time 120 s) and body temperature (°C), measured after intrathecal injection in rats.
produced an increase in the magnitude and duration of the antinociceptive response (an increase in thermal response latency) when compared to that evoked by both compounds on their own.
Additionally, the co-administration prolonged the duration of bupivacaine-induced motor block but did not affect its onset time when compared to the value for bupivacaine alone. Furthermore, the duration of sensory block was much longer than the duration of motor block. The isobolographic analysis of the rat data revealed that the interaction of nociceptive response of fadolmidine and bupivacaine was synergistic in its nature.
Previously, intrathecally administered fadolmidine has been reported to induce antinociception in a rat tail-flick test 33,49,61 and in a dog skin twitch test. 34 Furthermore, fadolmidine induced emesis only once in a single dog, although dogs are known to be a species very sensitive to α2-adrenergic agonist-induced emesis. 53 The locus coeruleus has been reported to be the site in the brain mediating the sedative effects of fadolmidine. 62 The results further support our belief that fadolmidine has a weak ability to redistribute to the supraspinal space after its spinal administration. Furthermore, in rats, fadolmidine (0.3-3 µg) alone exerted no effects on motor function, sedation and body temperature but did accentuate the motor function impairment and the hypothermia induced by bupivacaine.
FIGURE 6 Time course of the antinociceptive effect (%MPE; percent maximum possible effect) and the duration of motor block (line; the values are means) of intrathecal fadolmidine 60 µg and saline (0.5 ml), alone and combined with bupivacaine 3 mg, in the skin twitch test in dogs (n = 5). The antinociceptive effects were statistically significantly increased for fadolmidine 60 µg (p < .001), bupivacaine 3 mg (p < .05), and the combination of bupivacaine 3 mg and fadolmidine 60 µg (p < .0001) compared to saline during the measuring time (6 h). Each %MPE point represents mean ± SEM.
TABLE 4 Effects of intrathecal saline (0.5 ml), bupivacaine 3 mg, fadolmidine 60 µg and the combination bupivacaine 3 mg + fadolmidine 60 µg on mean arterial blood pressure, heart rate, respiratory rate and body temperature in dogs. Values are presented as mean ± SD, n = 5. Comparisons: *p < .05, **p < .01, significant difference from the saline response.
Leino, Viitamaa, and Salonen conducted experiments and performed data analysis.
All authors wrote or contributed to the writing of the manuscript.
None contributed new reagents or analytic tools.
DATA AVAILABILITY STATEMENT
The generated and analyzed data that support the findings of this study are available from the corresponding author upon reasonable request. | 2021-07-25T06:17:04.563Z | 2021-07-24T00:00:00.000 | {
"year": 2021,
"sha1": "669bd561162770af0ae720d9109fd0786ac59e37",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/prp2.830",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ae6659c07df4bd408b56830790203c5e87e7ca2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233198009 | pes2o/s2orc | v3-fos-license | Communication Scheduling for Control Performance in TSN-Based Fog Computing Platforms
In this paper we are interested in real-time control applications that are implemented using Fog Computing Platforms consisting of interconnected heterogeneous Fog Nodes (FNs). Similar to previous research and ongoing standardization efforts, we assume that the communication between FNs is achieved via the IEEE 802.1 Time Sensitive Networking (TSN) standard. We model the control applications as a set of real-time flows, and we assume that the messages are transmitted using scheduled traffic that is using the Gate Control Lists (GCLs) in TSN. Given a network topology and a set of control applications, we are interested to synthesize the GCLs for messages such that the Quality-of-Control (QoC) of control applications is maximized and the deadlines of real-time messages are satisfied. We have proposed a Constraint Programming (CP)-based solution to this problem, and developed an accurate analytical model for QoC, which, together with a metaheuristic search employed in the CP solver can drive the search quickly towards good quality solutions. We have evaluated the proposed strategy on several test cases including realistic test cases and also validate the resulted GCLs on a TSN hardware platform and via simulations in OMNET++.
I. INTRODUCTION
We are at the beginning of a new industrial revolution (Industry 4.0), which will bring increased productivity and flexibility, mass customization, reduced time-to-market, improved product quality, innovations and new business models. However, Industry 4.0 will only become a reality through the convergence of Operational and Information Technologies (OT & IT), which are currently separated in a hierarchical pyramid (Purdue Reference Model [1]) and use different computation and communication technologies. OT consists of cyber-physical systems that monitor and control physical processes that manage, e.g., automated manufacturing, critical infrastructures, smart buildings and smart cities. These application areas are typically safety-critical and real-time, requiring guaranteed non-functional properties such as real-time behavior, reliability, availability, safety and security, and are often required to show compliance to industry-specific standards. OT uses proprietary solutions, imposing severe restrictions on the information flow.
Instead, a new paradigm, called Fog Computing, is envisioned as an architectural means to realize the IT/OT convergence in Industrial IoT [2], which cannot be realized using Cloud Computing. According to NIST, ''Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources [. . .] that can be rapidly provisioned and released with minimal management effort or service provider interaction'' [3]. The OpenFog IEEE standard defines Fog Computing as a ''system-level architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things'' [4]. We define Edge Computing as a new architectural paradigm in which the resources of an edge server are placed at the edge of the network. A Fog Computing Platform (FCP) hosts applications of mixed-criticalities, which have different requirements in terms of safety, timeliness and control performance. TSN supports multiple traffic types, and hence is suitable for mixed-criticality applications running on an FCP. (The references for all TSN sub-standards can be easily found via IEEE Xplore.) Applications with tight timing constraints typically use Scheduled Traffic (ST) implemented via IEEE 802.1Qbv, which defines a Time-Aware Shaper (TAS) mechanism that enables the scheduling of messages based on a global schedule table. The scheduling relies on the 802.1ASrev clock synchronization mechanism [15], which defines a global notion of time. Thus the devices are synchronized, and the global schedule is formed. Applications that need bounded latency but do not have stringent latency and jitter requirements can use the IEEE 802.1BA Audio Video Bridging Systems (AVB) traffic type. Best-Effort (BE) traffic compliant with IEEE 802.3 Ethernet can be used for non-critical applications that do not need timing guarantees. ST traffic has the highest priority, followed by AVB and BE. AVB mechanisms are intended to prevent the starvation of lower priority BE flows.
In this paper we address control applications virtualized on a distributed FCP, which are implemented as tasks running on FNs that exchange messages over TSN. We assume, similar to the related work, that the messages use the ST traffic type. In this context, the scheduling of ST messages has a strong impact on the Quality-of-Control (QoC), i.e., the control performance [16]. Given the network topology of the FCP and the set of mixed-criticality applications, for which we know their communication flows and their routing, we are interested in synthesizing the TSN ST communication schedules such that the QoC is maximized and the mixed-criticality application requirements, e.g., deadlines, are satisfied. We have proposed a Constraint Programming (CP)-based solution for deriving the ST communication schedules. We have previously addressed the problem of scheduling tasks on an FCP for QoC [17], which is orthogonal to the message scheduling problem. However, to facilitate the integration of task and message schedules, our CP implementation also creates space in the communication schedule timelines where tasks need to execute.
A. CONTRIBUTIONS
The related work, discussed in Sect. VII, has shown that the communication synthesis has a strong impact on control performance. TSN has become a de-facto standard in several areas, including industrial applications. Although there has been much work on scheduling ST traffic in TSN, very few researchers have addressed scheduling in TSN for control performance [16], [18]. Compared to these works, the main contributions of this paper are as follows. We formulate the ST scheduling for QoC as an optimization problem and propose a scalable CP-based solution to solve it. Our CP formulation considers all the relevant constraints of TSN, e.g., frame isolation and forwarding delay, resulting in realistic schedules that have been validated via simulations in OMNET++ and on a TSN hardware platform. We consider a more realistic model of control applications and provide a more accurate measure of QoC compared to previous work, based on JitterTime. JitterTime uses time-consuming simulations of the control application behavior, and hence it cannot be integrated into a CP solver, since the search would not scale. Thus, we propose a novel analytical model for the QoC evaluation within the CP formulation. In addition, we have used a metaheuristic search strategy in the CP solver to quickly obtain good quality solutions, enabling us to handle large test cases.
B. OUTLINE OF THE PAPER
The system model is presented in Sect. II, where the architecture, the applications and the internals of a TSN switch are described. We formulate our problem in Sect. III. An introduction to control theory is presented in Sect. IV. In Sect. V, the details of our proposed method are given. We evaluate our proposed method in Sect. VI on several test cases. The related work is presented in Sect. VII, and Sect. VIII concludes the paper.
II. SYSTEM MODEL
This section presents the architecture and application models. Table 1 summarizes the notation used. The application model consists of a set of periodic messages that are sent via flows over a distributed Fog-based architecture that consists of end systems interconnected via links and switches that use TSN.
A. ARCHITECTURE MODEL
The architecture is modeled as a directed graph G = {V, E}, where V = ES ∪ SW is the set of vertices and E ⊆ V × V is the set of edges. A vertex νi ∈ V represents a node in the architecture which is either an end system (ES) or a network switch (SW). An ES is either the source (talker) or the destination (listener) of an application flow, whereas an SW forwards the frames of flows. Nodes have input (ingress) and output (egress) ports. We denote the set of egress ports of a node with νi.P. A port pj ∈ νi.P is linked to at most one other node. The set of edges E represents bi-directional full-duplex physical links. Thus, a full-duplex link between the nodes νi and νj is denoted with both ℓi,j ∈ E and ℓj,i ∈ E; a link is attached to one port of the node νi and one port of the node νj.
Each link ℓi,j is characterized by the tuple ⟨s, d, mt⟩ denoting the speed of the link in Mbit/s, the transmission delay function of the link, and the macrotick, i.e., the time granularity of an event for the link, in µs. The transmission delay of a frame on a link, ℓi,j.d(size), is calculated based on the frame's size and the link speed. For example, transmitting a maximum transmission unit (MTU)-sized IEEE 802.1Q Ethernet frame of 1,542 bytes on a 1 Gbit/s link would take 12.33 µs. The function d is a notation used in the constraints in Sect. V and is attached to the link concept, i.e., ℓ.d(size) means d(ℓ.s, size).
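For concreteness, here is a minimal sketch of the delay function that reproduces the MTU example above; the class and method names are our own, not taken from the paper's implementation.

```java
// Illustrative sketch (names are ours, not from the paper's implementation):
// transmission delay of a frame on a link, as described above.
public final class LinkDelay {

    /** Transmission delay in microseconds for a frame of sizeBytes
     *  on a link of speedMbps Mbit/s: (size * 8 bits) / (speed bits/us). */
    static double transmissionDelayUs(double speedMbps, int sizeBytes) {
        double bits = sizeBytes * 8.0;
        double bitsPerMicrosecond = speedMbps; // 1 Mbit/s = 1 bit/us
        return bits / bitsPerMicrosecond;
    }

    public static void main(String[] args) {
        // MTU-sized 802.1Q frame of 1,542 bytes on a 1 Gbit/s (1000 Mbit/s) link
        // prints 12.336, matching the 12.33 us figure in the text (truncated there)
        System.out.printf("%.3f us%n", transmissionDelayUs(1000.0, 1542));
    }
}
```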
A route ri ∈ R, where R is a set of routes, is an ordered list of links, starting with a link originating from a talker ES and ending with a link to a listener ES. The number of links in the route ri is denoted with |ri|, and is at least 2 since we assume there is at least one SW in the route. We define the function u : R × N0 → E to return the jth link of the route ri.
An architecture model with three ESes and two SWs is presented in Fig. 2, where the thick lines are physical links. We also show in the figure examples of how the notation is used, e.g., for a link tuple, ports, and routes.
B. TSN SWITCH MODEL
In the introduction we have motivated the use of TSN and the choice of traffic type for application messages, i.e., Scheduled Traffic (ST) that is being sent based on schedule tables in the switches using the IEEE 802.1Qbv ''Enhancements for Scheduled Traffic'' amendment. Here we model the details of a TSN switch needed to formulate our problem. For further details on how TSN works, the reader is directed to the respective standards.
A TSN switch consists of ingress ports, a switching fabric, priority queues, gates, a Gate Control List (GCL) and egress ports, see Fig. 3. The switching fabric receives flows from the ingress ports and forwards each flow to the egress port pi, according to the frame's route. The egress port, which has a set of eight priority queues pi.Q (according to the IEEE 802.1Q standard [19]), stores the frames of a flow in the relevant priority queue qj ∈ pi.Q in First-In-First-Out (FIFO) order. A subset of the priority queues is used for the ST traffic and the remaining queues are used for the less critical traffic, similar to [20]. Each frame has a Priority Code Point (PCP) field in the frame header that specifies the priority.
According to the 802.1Qbv standard, transmission of traffic from each queue is regulated by an associated gate which opens and closes based on a predefined GCL, which contains the opening and closing times of the switch gates. Frames queued in a queue can be transmitted when the gate is open and cannot be transmitted when the gate is closed. In this paper we assume that the GCLs are deterministic, i.e., the flows are isolated from each other: only the frames of one flow are present in a queue at a time, see [20] for details.
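To make the gate mechanism concrete, the sketch below models a simplified GCL as a cyclic list of gate states and checks whether a given queue may transmit at a time instant. This is a didactic abstraction of the 802.1Qbv behavior, not an actual switch data structure, and the cycle and queue values are invented for the example.

```java
// Didactic abstraction of an 802.1Qbv Gate Control List (not an actual
// switch data structure): a cyclic schedule of entries, each holding the
// set of open gates (one bit per priority queue) for a time interval.
public final class GateControlList {

    record Entry(long durationUs, int openGatesMask) {}

    private final Entry[] entries;
    private final long cycleUs;

    GateControlList(Entry... entries) {
        this.entries = entries;
        long sum = 0;
        for (Entry e : entries) sum += e.durationUs();
        this.cycleUs = sum;
    }

    /** True if the gate of queue (0..7) is open at absolute time tUs. */
    boolean isOpen(int queue, long tUs) {
        long t = tUs % cycleUs;            // the GCL repeats every cycle
        for (Entry e : entries) {
            if (t < e.durationUs())
                return (e.openGatesMask() & (1 << queue)) != 0;
            t -= e.durationUs();
        }
        throw new AssertionError("unreachable");
    }

    public static void main(String[] args) {
        // 100 us cycle: for the first 20 us only queue 7 is open (used for
        // ST here purely as an example choice), then the other queues open.
        GateControlList gcl = new GateControlList(
                new Entry(20, 1 << 7), new Entry(80, 0b0111_1111));
        System.out.println(gcl.isOpen(7, 10));   // true  (ST window)
        System.out.println(gcl.isOpen(0, 10));   // false (blocked)
        System.out.println(gcl.isOpen(0, 150));  // true  (second cycle, 50 us in)
    }
}
```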
Related work has ignored the forwarding delay that a frame experiences in a switch, which is the time it takes a frame to get from the input (ingress) port to the queue of the output (egress) port. This forwarding delay is not related to the time the frame spends in the queue. However, since delays have an impact on QoC [21], we have decided to capture the forwarding delay in our model; it depends on the particular TSN switch implementation. Hence, we denote the forwarding delay with νi.d(c), which takes c (the frame size in bytes) as input and returns the time delay in µs. In the experiments we measured this delay for the TSN implementation reported in [22].
C. APPLICATION MODEL
An FCP hosts multiple applications of mixed-criticalities, e.g., critical control applications, real-time applications, and best effort applications. Applications are typically modeled as interacting periodic real-time tasks that exchange messages, see [17] for how application tasks can be modeled. In this paper we address the configuration of the TSN communication infrastructure, hence we focus on messages. Sect. III discusses how tasks and messages can be put together in a system-level configuration.
Our model consists of a set of applications, which can either be control applications, for which their QoC is important, or real-time applications. Note that control applications are also real-time, but not all real-time applications are control applications. The set of control applications is denoted with Γ. The tasks of both control and real-time applications exchange messages, which, if the tasks are on different ESes, are transmitted using flows. The set of all flows (also called streams) in the system, both control and real-time flows, is denoted with S.
Each flow si ∈ S is responsible for sending the frames that encapsulate the data of an application message, and it is characterized by the tuple ⟨p, c, t, d⟩ denoting the priority, the size in bytes, the period in milliseconds and the flow deadline, i.e., the maximum allowed end-to-end delay, in milliseconds. The priority of a flow is in the range from 0 to 7, where 0 is the highest priority (with respect to the eight priority queues of a switch egress port).
As mentioned, flows are periodic and may have different periods. We define the hyperperiod as the least common multiple of the periods of all flows. Depending on its period, the frames of a flow will have to be transmitted multiple times within a hyperperiod, and we refer to each such transmission as an instance of a flow. The number of instances for a flow s i is denoted with |s i |, and is derived from the period of the flow t and the hyperperiod. For example, for three flows with the periods of 4, 5 and 3 ms, the hyperperiod would be 60 ms and the flows will have 15, 12 and 20 instances respectively.
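The hyperperiod and the instance counts can be computed directly; the sketch below (our own helper code, not from the paper) reproduces the 4/5/3 ms example above.

```java
import java.util.List;

// Minimal sketch: hyperperiod as the least common multiple of all flow
// periods, and the per-flow instance count |s_i| = hyperperiod / period.
public final class Hyperperiod {

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    public static void main(String[] args) {
        List<Long> periodsMs = List.of(4L, 5L, 3L); // the example from the text
        long hyperperiod = periodsMs.stream().reduce(1L, Hyperperiod::lcm);
        System.out.println("hyperperiod = " + hyperperiod + " ms"); // 60 ms
        for (long t : periodsMs) {
            // 15, 12 and 20 instances, respectively
            System.out.println("period " + t + " ms -> "
                    + (hyperperiod / t) + " instances");
        }
    }
}
```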
Each flow si is transmitted via a route rj, which is captured by the function z : S → R that maps the flows to the routes. We assume that each flow is associated with only one route but several flows may share the same route. We also assume that the flows are unicast, i.e., there is only one listener for a flow. Our model can easily be extended to handle multicast flows, i.e., flows that have multiple listeners, by adding each talker-listener pair as a stand-alone flow with additional constraints. We assume that the routes are fixed and given. Determining routing in TSN is a problem orthogonal to scheduling. Researchers have shown how to integrate routing with scheduling [23] and have concluded that shortest-path routing is appropriate in most network topologies, with the exception of mesh networks that have a lot of redundant links. Our system model, including the Constraint Programming model from Sect. V-B, can be extended to include routing optimization, if needed.
We define a frame for each instance 1 ≤ m ≤ |si| of the flow si and on each link 1 ≤ k ≤ |rj| of the route rj, and denote it with f k i,m. A frame f k i,m is associated with the tuple ⟨φ, l⟩ denoting the start time of the frame (offset φ) and its duration (length l).
A control application γi ∈ Γ is characterized by the tuple ⟨K, I, O⟩ denoting the control transfer function, the set of input flows, and the set of output flows. The control transfer function γi.K captures the control law of the application, see Sect. IV for more details. The set of input flows γi.I is a subset of S which represents the control I/O flows that are generated by sensors (i.e., ESes in the network) and deliver data to the control application running on an ES. The set of output flows γi.O is a subset of S which represents the control I/O flows that are generated by the control function running on an ES and deliver data to actuators (i.e., ESes in the network).
III. PROBLEM FORMULATION
We formulate the problem as follows: Given (1) the set of all flows S in the system, for both the control and the real-time applications, (2) the details of the control applications Γ, (3) the network graph G, and (4) a set of routes R, we are interested in synthesizing the GCLs in the network such that (a) all the flows in the system are schedulable (their deadlines are satisfied) and (b) the QoC of control applications, as defined in Sect. IV-C, is maximized. Synthesizing the GCLs is equivalent to determining (i) the frames' offsets f k i,m.φ, and (ii) the frames' lengths f k i,m.l. An example solution, considering the network from Fig. 2 and the flows from Table 2, is presented in Fig. 4. The solution is depicted as a Gantt chart where the rows are the resources (links) and the rectangles labeled with the flow names si depict the frames' offsets and lengths.
As discussed, the network configuration problem we address in this paper is orthogonal to the problem of configuring the tasks, e.g., deciding their mapping to the cores of an ES and their scheduling. Researchers have proposed several ways of putting together the schedules for tasks and messages in a global system configuration, e.g., by combining the formulation of their scheduling problems [24] or by iteratively integrating the task and message scheduling. The solution presented in this paper for flows can be combined with the formulation for tasks from [17]. In addition, to support the integration of the GCLs that we determine with task schedules derived separately, we maximize the time duration where tasks have to execute, denoted with E in Fig. 4; see Sect. V-C for its definition.
IV. CONTROL THEORY
This section gives the essentials of the theory needed for the calculation of the QoC. We start with the definition of a Feedback Control System (FCS) in Sect. IV-A, where the mathematical representation of a plant and the associated controller, as well as the control design principle, are described. Afterwards, we continue with the model we use for implementing a control application and a brief definition of the control performance and the effect of timing on it, in Sect. IV-B. Finally, we define the QoC in Sect. IV-C and present the approach we use in this work for calculating it.
A. FEEDBACK CONTROL SYSTEMS AND CONTROL DESIGN
A dynamical system around an equilibrium point is modeled as a mathematical relation between its inputs and outputs, and described with a transfer function [25]. The transfer function, commonly called the Plant, is defined in the form G(s) = Y(s)/X(s), where Y(s) is the output, X(s) is the input, and G(s) is the transfer function, all defined in the frequency domain. An FCS, or alternatively a control application, uses sensors to sample the plant's outputs Y(s), calculates the deviation E(s) from the desired output R(s) and uses the control function K(s) to generate the control signal U(s), which is applied by actuators. In this paper, we assume that the desired output R(s) is zero, which results in E(s) = Y(s). The control function K(s) defines the mathematical relation between the deviation E(s) of the plant feedback from the desired output and the control signal U(s). A simple FCS is depicted in Fig. 5, where W(s) are the disturbances applied to the plant inputs.
An FCS is implemented as a periodic real-time application running on an FCP, whose period depends on the system plant G(s). The shorter the period, the faster the controller is able to respond to the disturbances, but the more computational power is required (which is a bottleneck on real-time systems where the resources are constrained). To this end, choosing the right application period while designing an FCS is an optimization problem. It is common to choose the period based on a rule of thumb which determines the period based on the bandwidth of the closed-loop system [26]. On the other hand, choosing an appropriate control law to be implemented in the control function K(s) has an impact on the resources needed for the calculation and on its response to the disturbances. Several control laws have been proposed in the literature for control functions [25].
B. MODELING AND TIMING OF FEEDBACK CONTROL SYSTEMS
The implementation of an FCS consists of three periodic events: (i) receiving the input data from sensors, (ii) calculating the control signal with the control function K(s), and (iii) sending the control signal data to actuators that apply the signal to the plant. Without loss of generality, we assume that each FCS receives the input from exactly one sensor and sends signal data to exactly one actuator. We also assume that the three periodic events have the same period.
We map our FCS model to the control application model described in Sect. II-C as follows: A control application γi is an FCS that has the control function γi.K, equivalent to K(s), running on the node νj (which is an ES) in the network G. The associated sensor is also an ES node that transmits a periodic network flow sm ∈ γi.I to the node νj as the destination via TSN. The generated control signal U(s) is also a periodic network flow sn ∈ γi.O, transmitted from the node νj to the associated actuator, which is also an ES. To this end, the sets of input flows γi.I and output flows γi.O each have exactly one member, sm and sn respectively. Concerning our FCS model, the control function γi.K is ready for execution when its input has arrived, i.e., when the node νj receives the input flow sm, and produces the control signal sn when it terminates. Thus the control signal sn needs to be transmitted after the reception of the input signal sm and the execution of the control function γi.K. We formulate this constraint in Sect. V-B.
While designing an FCS, for finding a suitable control law and tuning it, several parameters such as the damping ratio, the phase margin and the gain margin (see [25] for more details) have to be determined. These parameters affect the accuracy and rapidity of the FCS, together called the control performance, in opposite directions. The performance of an FCS is associated with its rise-time T rise, peak-time T peak, settling-time T settling and steady-state error.
The rise-time T rise is defined as the time it takes for the output response to reach 90% of the input value. The rise-time shows how fast the controller can react to the disturbances exerted on the dynamical system. The peak response is defined as the highest output response the controller reaches before settling to the desired value. The peak plays an important role in the robustness of the controller against disturbances. The settling-time T settling is defined as the time it takes for the output response to reach 98% of the input value. The settling-time shows how fast the controller can reach the desired state. The steady-state error is the minimum deviation of the controller output response from the desired state; it shows the accuracy of the controller. Fig. 6 shows the step response of a sample control loop where these associated parameters are depicted.
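To illustrate these definitions, the sketch below estimates the rise and settling times from a sampled step response. The response trace is invented for the example, and we read "settling" in the common "stays within a ±2% band of the final value" sense, consistent with the 98% threshold above.

```java
// Illustrative computation (our own, with a made-up response trace) of the
// rise-time and settling-time definitions given above.
public final class StepMetrics {

    public static void main(String[] args) {
        double dt = 0.01;                               // sample period [s]
        double[] y = new double[500];                   // made-up step response
        for (int k = 0; k < y.length; k++) {
            double t = k * dt;                          // decaying oscillation
            y[k] = 1 - Math.exp(-2 * t) * Math.cos(6 * t);
        }
        double target = 1.0;

        double riseTime = Double.NaN;                   // first time y >= 90%
        for (int k = 0; k < y.length; k++)
            if (y[k] >= 0.9 * target) { riseTime = k * dt; break; }

        double settlingTime = 0.0;                      // last excursion outside +/-2%
        for (int k = 0; k < y.length; k++)
            if (Math.abs(y[k] - target) > 0.02 * target) settlingTime = k * dt;

        System.out.printf("T_rise ~ %.2f s, T_settling ~ %.2f s%n",
                riseTime, settlingTime);
    }
}
```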
Furthermore, for a given FCS whose design parameters are determined, the control performance changes at runtime due to the discrete-time nature of real-time systems. Ideally, all three events of an FCS should execute with the shortest delay between the events and without timing variations (jitter). A time delay decreases the phase margin of the FCS, leading to worse control performance. Jitter, i.e., the deviation from the periodic timing of an event, also negatively impacts the control performance.
We assume that the time delay and jitter apply only to the event of network message transmission; the execution of the control application is ignored in this paper and is only addressed as the required time interval needed between the reception and transmission of the input and output flows. We also consider the input-output jitter of the control application, which is the maximum deviation of the worst-case delay between the sensors' sampling and the actuators' actuation, covering the timing of the communication links from and to sensors and actuators.

FIGURE 6. Step response of a sample control loop [17].
C. QUALITY OF CONTROL
In real-time systems where control applications are running, preserving QoC (which is used interchangeably to mean ''control performance'') is a necessity. The QoC can be captured in a cost function which can also be used to evaluate the performance of the controller. A common choice is to use a quadratic cost function of the form J = lim_{T→∞} (1/T) E[ ∫_0^T (x(t)^T Q1 x(t) + u(t)^T Q2 u(t)) dt ], where x is the plant state, u is the control input, and the weighting matrices Q1 and Q2 tell how much deviations in the different states and the control input should be penalized. A larger value of such a control performance cost function means worse QoC, typically corresponding to an increased settling time, a larger steady-state error and a peak time closer to the rise time of the system. The value of the cost J depends on several criteria, such as the input-output jitter of a control application as well as the end-to-end response of the control application (the delay between sampling and actuation). Generally, the control performance is degraded when the end-to-end response is longer than what the control application is designed for, or when the control application experiences input-output jitter in each iteration; see [27] for more details. The amount of each criterion's impact depends on the control function. To this end, the calculation of the QoC is possible via a simulation of the control function behavior. Tools such as Jitterbug [28], JitterTime [27] and TrueTime [29] have been proposed to simulate the control function behavior. Jitterbug can calculate the QoC based on fixed or random jitter applied to the inputs and outputs of a control function. It can also be used to design controllers concerning the stability margin of the control function. JitterTime can calculate the QoC based on the input and output schedules as well as the control task schedules. Also, it can be employed to analyze the sensitivity of a control function to delays and jitter. TrueTime can simulate the execution of a control application based on given schedule tables, making the analysis of the control output possible. Thus, in this paper we employ JitterTime to calculate the QoC with the cost function of Eq. (2).
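As an illustration of such a cost, the sketch below evaluates a discretized scalar version of the quadratic cost over a sampled trajectory. The weights and the trajectory are made up for the example; in the paper the accurate value is computed by JitterTime.

```java
// Illustrative, discretized evaluation of a scalar quadratic cost
// J = (1/T) * sum_k (q1*x_k^2 + q2*u_k^2) * dt over a sampled trajectory.
public final class QuadraticCost {

    static double cost(double[] x, double[] u, double q1, double q2, double dt) {
        double j = 0;
        for (int k = 0; k < x.length; k++)
            j += (q1 * x[k] * x[k] + q2 * u[k] * u[k]) * dt;
        return j / (x.length * dt);      // time-average over the horizon
    }

    public static void main(String[] args) {
        int n = 1000;
        double dt = 0.001;
        double[] x = new double[n], u = new double[n];
        for (int k = 0; k < n; k++) {     // decaying state, proportional control
            x[k] = Math.exp(-3 * k * dt);
            u[k] = -2 * x[k];
        }
        // larger J means worse QoC
        System.out.printf("J = %.4f%n", cost(x, u, 1.0, 0.1, dt));
    }
}
```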
JitterTime takes the sending and receiving times of the sensor and actuator flows, which can be extracted from the GCLs, and simulates the behavior of a control application with the given timing of the control application's inputs and outputs. More information about the inner workings of JitterTime and its use cases can be found in [27].
V. CONSTRAINT PROGRAMMING
The communication scheduling problem, as a decision problem, has been proved to be NP-complete in the strong sense [30]. To this end, we propose an optimization strategy called Control-Aware Communication Scheduling Strategy (CACSS), based on a CP formulation that uses search heuristics inside the CP solver.
As shown in Fig. 7, CACSS takes as inputs the architecture and application models and outputs a set of the best solutions found during the search. As mentioned, CACSS is based on a CP formulation (the ''CP Solver'' box in the figure). CP is a declarative programming paradigm that has been widely used to solve a variety of optimization problems such as scheduling, routing, and resource allocation. With CP, a problem is modeled through a set of variables and a set of constraints, see the ''CP model'' box. Each variable has a finite set of values, called its domain, that can be assigned to it (see Sect. V-A). Constraints restrict the variables' domains by bounding them to a range of values and defining relations between the domains of different variables.
CACSS visits solutions that satisfy the constraints defined in Sect. V-B and evaluates them using the objective function defined in Sect. V-C to check if the solution is an improving solution, i.e., better than the best solutions found so far. Ideally, for the QoC calculation, the objective function should use JitterTime. However, tools such as JitterTime and Jitterbug use time-consuming simulations of the control application behavior, and hence they cannot be integrated into a CP solver, since the search would not scale. Thus, we propose a novel analytical model for the QoC evaluation within the CP formulation, see Sect. V-C. Every time the CP solver finds an improving solution (the ''New Solution'' box), we call JitterTime (the ''JitterTime'' box) to calculate the accurate, simulation-based QoC value. By default, the CP solver systematically performs an exhaustive search by exploring all the possibilities of assigning different values to the variables. However, such a search is intractable for NP-complete problems; therefore we instead employ a metaheuristic search, see Sect. V-D.
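The decision flow just described can be sketched as follows. All names are our own placeholders (the actual implementation uses the Google OR-Tools CP solver and JitterTime, whose APIs are not reproduced here); the point of the structure is that the expensive simulation is invoked only for improving solutions.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

// Schematic of the CACSS decision flow: a cheap analytical proxy drives the
// search, and the expensive simulation-based QoC is only computed for
// improving solutions. Placeholder names throughout.
public final class SearchLoopSketch {

    interface Schedule {}                          // a candidate GCL assignment

    static Schedule search(List<Schedule> feasibleCandidates,
                           ToDoubleFunction<Schedule> analyticalProxy,   // Eq. (9)
                           ToDoubleFunction<Schedule> jitterTimeQoC) {   // simulation
        double bestProxy = Double.POSITIVE_INFINITY;
        Schedule best = null;
        for (Schedule s : feasibleCandidates) {    // solver enumerates feasible sols
            double proxy = analyticalProxy.applyAsDouble(s);
            if (proxy < bestProxy) {               // improving w.r.t. the proxy
                bestProxy = proxy;
                best = s;
                double qoc = jitterTimeQoC.applyAsDouble(s); // accurate QoC, logged
                System.out.printf("improving: proxy=%.3f, QoC=%.3f%n", proxy, qoc);
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // dummy stand-ins for candidates and for both evaluation functions
        List<Schedule> candidates = List.of(new Schedule() {}, new Schedule() {});
        search(candidates, s -> Math.random(), s -> Math.random());
    }
}
```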
A. CP MODEL
We define two sets of decision variables for the CP model, which are associated with the frame offsets and the frame lengths, respectively. Each decision variable is associated with a finite domain, where the domain of a frame length variable contains exactly one element, i.e., the CP solver initially decides the values of the frame length variables.
B. CONSTRAINTS
We define five constraints that regulate the network traffic and relate the domains of the CP variables. The CP solver only finds feasible solutions, i.e., solutions in which all the constraints are met.
The Link Overlap constraint imposes the restriction that a physical link must not transmit more than one frame at a time, i.e., no two frames may share a physical link at any point in time. The constraint is defined in Eq. (4).
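A validator-style reading of this constraint (Eq. (4) itself is not reproduced in this text, so this is our interpretation of the stated intent) boils down to an interval-intersection test between any two frames scheduled on the same link.

```java
// Validator-style sketch of the Link Overlap constraint's intent (our own
// reading): two frames scheduled on the same link must not overlap in time.
public final class LinkOverlapCheck {

    /** True if intervals [o1, o1+l1) and [o2, o2+l2) intersect. */
    static boolean overlaps(long o1, long l1, long o2, long l2) {
        return o1 < o2 + l2 && o2 < o1 + l1;
    }

    public static void main(String[] args) {
        // frame A at offset 100 us, 12 us long; frame B at 105 us, 12 us long
        System.out.println(overlaps(100, 12, 105, 12)); // true  -> infeasible
        System.out.println(overlaps(100, 12, 112, 12)); // false -> back-to-back OK
    }
}
```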
The Route constraint enforces the ordered propagation of a frame along its associated route, from its talker all the way to its listener. The constraint also enforces that forwarding a frame from a node starts only after the frame has completely arrived at the node, taking the propagation delay into account. We define the constraint in Eq. (5), where δ is the network precision, i.e., the worst-case difference between the nodes' clocks in the network according to the 802.1AS clock synchronization mechanism [15].
∀si ∈ S, ∀m ∈ [1, .., |si|], ∀k ∈ [1, .., |rj|), rj = z(si), ℓv,w = u(rj, k), ℓw,x = u(rj, (k + 1)). We define the Isolation constraint in Eq. (6) to avoid the displacement of frames in the switch queues. The constraint imposes the restriction that no two same-priority frames on the same link arrive at the ingress port of a switch simultaneously. In other words, either a frame is received after or before any other frame on the same link, or only frames of different priorities on the same link are received at the same time. This constraint enforces the order of frame transmission in the switch schedules, see [20] for more details. In Eq. (6), δ again represents the network precision.
The Deadline constraint, defined in Eq. (7), imposes the restriction that a flow is received by its listener within its deadline. This constraint is equivalent to requiring that the time interval between the scheduled transmission of a stream from its talker and its reception by the listener is smaller than the deadline.
∀si ∈ S, ∀m ∈ [1, .., |si|], rj = z(si), ℓa,b = u(rj, 1), ℓy,z = u(rj, |rj|). The Control Precedence constraint enforces every instance of a control application's output flows to be scheduled for transmission after the complete reception of the same-instance input flows at the listener. Thus, the control application's output flows are transmitted from the talker node after the execution of the control function has terminated, which requires the complete reception of the input flows. The constraint is defined in Eq. (8).
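The intent of the Deadline and Control Precedence constraints can likewise be expressed as simple validator checks; the sketch below is our reading of Eqs. (7) and (8), whose bodies are not reproduced in this text, and all names are our own.

```java
// Validator-style sketch (our reading of Eqs. (7) and (8)).
public final class ControlChecks {

    /** Deadline: reception at the listener minus the scheduled transmission
     *  at the talker must be at most the flow deadline. */
    static boolean meetsDeadline(long talkerStartUs, long listenerEndUs,
                                 long deadlineUs) {
        return listenerEndUs - talkerStartUs <= deadlineUs;
    }

    /** Control precedence: the output flow is transmitted only after the
     *  same-instance input flow has been fully received; the gap between
     *  the two is the task execution interval E of Sect. V-C. */
    static boolean meetsPrecedence(long inputArrivalUs, long outputStartUs) {
        return outputStartUs >= inputArrivalUs;
    }

    public static void main(String[] args) {
        System.out.println(meetsDeadline(0, 950, 1000));   // true
        System.out.println(meetsPrecedence(400, 380));     // false: too early
    }
}
```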
C. ANALYTICAL QoC CP MODEL AND OBJECTIVE FUNCTION
The CP solver propagates the constraints over the search space and removes the infeasible solutions (which do not satisfy the constraints), resulting in the solution space. Afterwards, the CP solver picks the first solution from the solution space and determines the value of the objective function for this solution. The CP solver then searches for better solutions in terms of the objective function until no such solutions can be found.
In this work, we are interested in finding the solutions which have better QoC. Since calculating the QoC needs a simulation of the control application's behavior, the integration of QoC calculation tools such as JitterTime in the CP model is impossible due to their runtime. Thus, we propose using an analytical model for QoC as the objective function in the CP model, which aims to drive the search to solutions that are as close as possible (in terms of the QoC value obtained with JitterTime simulations) to the solutions that would be obtained if JitterTime were used as the objective function for the search.
Our proposed analytical model captures, within the CP formulation: (i) minimum jitter for the end-to-end input-output flows, (ii) a maximum interval between the reception of the input flow and the transmission of the output flow (which is equivalent to minimum input flow delay and minimum output flow delay), denoted with E and called the task execution interval, and (iii) minimum jitter for the task execution interval.
Let us illustrate these aspects using the example in Fig. 4 where we have a Gantt chart for the execution of an example control loop depicting components of our analytical model. In this toy example, we have a control application γ 1 which has s 1 as the input flow and s 2 as the output flow. The application's control function γ 1 .K is running on the node ν 1 . The flow s 1 is transmitted from the sensor node ν 4 and routed via the switch ν 3 to the node ν 1 and has the same period as the control application, denoted with P in the figure. The flow s 2 is transmitted from the node ν 1 and routed via the switch ν 2 to the actuator node ν 5 and also has the same period as the control application.
The node ν1 runs the control function once its input flow s1 arrives and transmits the flow s2 upon the termination of the control function. Thus, the larger the task execution interval E, the more likely it is that the control function, implemented as tasks, can be scheduled for execution on the node ν1. Since we need to define the CP objective function to be minimized, and the control application has the known period P, the objective is to minimize ω1 and ω2, which are, respectively, the input flow and the output flow end-to-end delays. Furthermore, we are interested in minimizing the variation of the task execution interval E, which increases the schedulability of the control function. This is also formulated as minimizing the input and output flows' jitter.
Additionally, minimizing ω1 and ω2 and their variation positively impacts the QoC, since the control function receives the plant's samples faster and without variation, and the control signal is applied to the plant faster and without variation as well. However, the control function, implemented as tasks, could be scheduled for execution anywhere in the execution slice; because of the jitter-free and short-delay input and output, this downside of the task scheduling can be compensated.
We define the QoC analytical function in Eq. (9), where the term ω1 captures the input flow delay, ω2 the output flow delay, ω3 the input flow jitter, ω4 the output flow jitter and ω5 the jitter of E. The range of all the ω terms is from 0, for no delay/jitter, to 1, for a delay/jitter equal to the control application's period. The delay and jitter trade-off is controlled by the weight β, which can direct the search towards either optimized delay or optimized jitter, depending on the type of the control applications. A larger β value drives the search towards smaller jitter. The β value can be determined by analyzing, using JitterTime, the behavior of the control function regarding its sensitivity to jitter and delay. JitterTime simulates the behavior of a control function with given delay and jitter values. Hence, given different delays and jitters, JitterTime is capable of determining the sensitivity ratio. Thus, we use JitterTime for analyzing the sensitivity and determining the β value for a control function.
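Since the exact combination in Eq. (9) is not reproduced in this text, the sketch below uses an assumed weighting, (1 − β) on the delay terms and β on the jitter terms, which is consistent with the statement that a larger β drives the search towards smaller jitter; treat it as illustrative only.

```java
// Hedged sketch of the analytical objective: the weighting below is our
// assumption, not the literal Eq. (9). All omegas are normalized to [0, 1]
// by the control application's period, as stated in the text.
public final class AnalyticalQoC {

    static double omega(double valueUs, double periodUs) {
        return Math.min(valueUs / periodUs, 1.0);     // 0 = none, 1 = one period
    }

    static double objective(double beta,
                            double inDelay, double outDelay,         // omega1, omega2
                            double inJitter, double outJitter,       // omega3, omega4
                            double execJitter) {                     // omega5
        return (1 - beta) * (inDelay + outDelay)
             + beta * (inJitter + outJitter + execJitter);
    }

    public static void main(String[] args) {
        double p = 4000;                              // 4 ms period, in us
        double obj = objective(0.7,
                omega(300, p), omega(250, p),         // delays
                omega(0, p), omega(0, p),             // zero I/O jitter
                omega(10, p));                        // small E jitter
        System.out.printf("objective = %.4f (smaller is better)%n", obj);
    }
}
```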
D. SEARCH STRATEGY
In this work we used the Google OR-Tools [31] CP solver. We configured this solver to use a metaheuristic as the search strategy. A search strategy specifies the order of selecting the CP model variables for assignment and the order of selecting the values from the domain of a variable. The metaheuristic strategy does not guarantee optimality, but it is effective in finding good quality solutions in a reasonable time.
We used the same metaheuristic strategy as [16], based on a Tabu Search metaheuristic algorithm [32], which aims to avoid the search process being trapped in a local optimum by increasing the diversification and intensification of the search. We apply the metaheuristic strategy to the set of offset variables f k i,m.φ that represent control-I/O flows. In this strategy, once a control application is scheduled with the respective minimum objective value, its variables are treated as keep variables whose values should not be changed. We also used the SolveOnce strategy for the set of length variables f k i,m.l.
VI. EVALUATION
The structure of this section is as follows: we first describe our test setup and the test cases we used for evaluation in Sect. VI-A followed by comparing our proposed Control-Aware Communication Scheduling Strategy (CACSS) with the related work in Sect. VI-B. Afterwards, we evaluate our proposed method on the synthetic test cases in Sect. VI-C. In Sect. VI-D we evaluate CACSS on a realistic test case. We also validate the generated GCLs using the OMNET++ simulator in Sect. VI-E. Finally, we used the generated GCLs and validated them on a TSN hardware platform in Sect. VI-F.
A. TEST CASES AND SETUP
We implemented CACSS in Java using Google OR-Tools [31] as the CP solver and ran it on a computer with an i9 CPU at 3.6 GHz and 32 GB of RAM. We have considered a time limit for the CP solver of 10 to 100 minutes, depending on the test case size. For the evaluation we set the macrotick, the network precision and the link speed to 1 µs, 0 µs and 100 Mbit/s, respectively.
We have generated thirteen synthetic test cases, which all include control applications inspired by the industrial domain. The control applications have different control functions for controlling plants of the form of Eq. (10), where a and b are randomly chosen from [50, 100, 150] and [100, 200, 300, 400], respectively. We have used Jitterbug for designing the control function K with the LQG control law [28], as discussed in Sect. IV-C. The test case sizes progressively increase in the number of ESes, SWs and flows (and, correspondingly, control applications). The flows are generated randomly with various sizes that fit in single MTU-sized frames, various periods, all of the form 2^n ms, n = {0, 1, 2, 3, 4}, and various priorities. The details of the synthetic test cases are depicted in Table 3, where the sixth column shows the total number of flow frames.
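For concreteness, a generator for flows with the stated parameter ranges could look as follows; this is our own sketch, not the authors' generation tool, and the seed and flow count are arbitrary.

```java
import java.util.Random;

// Sketch of how synthetic flows like those in Table 3 could be drawn:
// periods of the form 2^n ms for n in {0..4}, sizes fitting one MTU frame,
// and priorities 0..7 (0 highest, per the flow model above).
public final class FlowGenerator {

    record Flow(int sizeBytes, int periodMs, int priority) {}

    public static void main(String[] args) {
        Random rnd = new Random(42);                  // fixed seed for repeatability
        for (int i = 0; i < 5; i++) {
            int periodMs = 1 << rnd.nextInt(5);       // 1, 2, 4, 8 or 16 ms
            int sizeBytes = 64 + rnd.nextInt(1542 - 64 + 1); // up to one MTU frame
            int priority = rnd.nextInt(8);
            System.out.println(new Flow(sizeBytes, periodMs, priority));
        }
    }
}
```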
We have also considered a realistic test case, an autonomous mobile robot, called AMR. The AMR case consists of 27 flows varying in size between 100 and 1,500 bytes, with periods between 1 ms and 40 ms and deadlines smaller than or equal to the respective periods. We used Jitterbug for designing the control functions from the plant in Eq. (10). The details of the realistic test case are shown in Table 6. Additionally, we generated three test cases for the evaluation on a hardware platform. The generated GCLs are implemented on the platform, and the end-to-end (E2E) delay (the time between sending a frame from its source to the time it arrives at its destination) and the jitter of the flows are measured. The details of the test cases are shown in Table 5. The hardware platform is presented in [22] and consists of three TSN switches that are connected in a daisy-chain manner. The first and the last switches contain internal ESes. The links are full duplex with a speed of 1 Gbps, and flows can be sent from both ESes. A schematic of the hardware platform is shown in Fig. 8, where the measurement points are marked.
B. COMPARISON WITH THE RELATED WORK
Let us first compare qualitatively the features of our CACSS with the approaches of the related work, i.e., (i) Zero-Jitter GCL (ZJGCL) proposed in [20] and (ii) Frame-to-Window GCL (FWGCL) proposed in [33]. Table 4 summarizes the model features, where the first column lists the compared features. CACSS and ZJGCL consider the scheduling of each individual flow frame and lead to zero-jitter solutions under the jitter-optimized condition, whereas FWGCL schedules ''windows'' which may contain several frames, reducing the size of the GCLs at the expense of introducing jitter. Considering flow frames for scheduling, CACSS and ZJGCL both enforce ''frame isolation'', which results in frames with zero jitter; see [20] for a discussion of the need for frame isolation to create deterministic GCLs. All three approaches consider the network precision.
The main advantage of CACSS over the related work is the modeling of control applications, i.e., the precedence constraints of input and output flows and the task execution interval. None of the related works considers the control application modeling, which makes the assessment of QoC impossible. To integrate the evaluation of control performance into the optimization, we have formulated the QoC analytically, capturing the minimization of the input-output and execution jitter of control applications and also leaving enough time for the control functions to be executed. In addition, CACSS also considers a model for the forwarding delay of SWs, which makes the schedules more accurate with respect to a TSN hardware implementation.
We have also performed a quantitative comparison of our proposed method CACSS with the ZJGCL approach from the related work. Note that a comparison between ZJGCL and FWGCL is provided in [33], and since FWGCL introduces scheduling flexibility at the expense of jitter, it will lead to worse control performance. Due to this, and for space reasons, we have not compared against FWGCL. ZJGCL does not consider control performance; hence, in order to facilitate a comparison, we have reimplemented ZJGCL using a CP formulation and added constraints that enforce the construction of valid solutions, i.e., that schedule the output flows to be transmitted after the reception of the input flows and to be received close to their deadlines (leaving enough space for the execution of the control functions). The GCLs obtained with both CACSS and ZJGCL were then evaluated using JitterTime, which accurately measures the control performance of each solution. The evaluation results are depicted in columns 8 and 9 in Table 3. The results show that CACSS has generated schedules with significantly better QoC than ZJGCL. The average QoC cost for ZJGCL is 64% larger. ZJGCL schedules flows such that jitter becomes zero; this is useful but not sufficient for a good QoC value, which also depends on the input-output jitter and the input/output delay. In addition, our method also maximizes the task execution intervals, which supports the integration of the resulting schedules with the schedules for tasks. In contrast, the ZJGCL GCLs would have to be drastically modified before they can be integrated with task schedules.
C. EVALUATION ON SYNTHETIC TEST CASES
We evaluated the performance of CACSS on the synthetic test cases from Table 3. Our solution has successfully scheduled all the test cases, and the schedules have zero jitter. We first evaluate the runtime of our proposed solution. The solution runtime in milliseconds for each test case is given in column 10 in Table 3. As depicted in the table, the runtime increases with the total number of frames, i.e., with larger test cases. As mentioned, we have given a time limit to the solver, between 10 and 100 min., depending on the test case size. All runs have finished well before the time limits, which means that CACSS was able to determine the optimal results in terms of the objective function value from Eq. (9). This shows that, using our analytical QoC model inside the CP formulation, we are able to solve large test cases in a reasonable time.
Columns 7 and 8 in Table 3 show the objective function value of Eq. (9) and the QoC measured with JitterTime (which corresponds to the J value captured by Eq. (2)). The question is whether driving the search with the analytical objective, which is a ''proxy'' for QoC, as we do in CACSS, is as good as driving the search with J, which is the actual QoC value. Hence, we were interested to determine if our analytical QoC model is able to drive the search to solutions with good QoC. Thus, for test case 5 from Table 3, we have replaced the fast analytical QoC model in the CP formulation with the simulation-based, slow-but-accurate JitterTime QoC value. We have run CACSS for test case 5 with both setups, using the analytical objective from Eq. (9) vs. the QoC value J obtained with a call to JitterTime. The results are shown in Fig. 9, where we compare the two values (y-axis) during the search, i.e., during the iterations listed on the x-axis. On the y-axis we have the percentage deviation of the analytical objective and J from their best respective values obtained at the end of the search; in the last iteration, the deviation is zero, because we have the best value for both of them. As we can see in the figure, our analytical model of QoC closely tracks the simulation-based model of QoC, which supports our hypothesis that the analytical QoC model is a good proxy objective function for guiding the search.
D. EVALUATION ON A REALISTIC TEST CASE
We also evaluated CACSS on the autonomous mobile robot (AMR) realistic test case. The results of the evaluation are presented in Table 6, where column 2 shows the number of control applications. In the realistic test case, we assumed that the link speed is 1 Gbps. The CACSS has successfully scheduled all the flows in the test case and achieved a good QoC, which is captured by the objective function (column 4 of the table).
E. OMNET++ VALIDATION
We have used the OMNET++ simulator with the TSN NeSTiNg extension [34] to validate the generated GCLs, and also measured the average delay and jitter of the solutions. Our goal was to evaluate the correctness and the accuracy of our proposed solution within a realistic simulation environment.
The NeSTiNg extension of OMNET++ ignores the forwarding delay, so to facilitate a fair comparison we updated our CACSS approach, creating a variant that considers a zero forwarding delay (ZFD), named CACSS-ZFD. We took the synthetic test case 1 from Table 3 and synthesized the GCLs with both CACSS and CACSS-ZFD. We simulated the schedules of all the synthetic and realistic test cases from Tables 3 and 6 using OMNET++. The schedules behave as expected, and the delays we extract from the OMNET++ simulations are identical to the values obtained by our CACSS. Let us provide more details for one of the test cases. Fig. 10 shows the architecture of the synthetic test case 1 implemented in OMNET++. The simulation is run for a hyperperiod, which is 16 ms, and the results are depicted in Table 7, where the observed and reported end-to-end (E2E) delays are shown in µs for OMNET++ and CACSS, respectively.
Our validation experiment shows that the generated GCLs are correct and all the flows meet their requirements. The values of the observed E2E delay from OMNET++ (column 2) are equal to the values reported by CACSS-ZFD (column 3), which is expected, since they both use the same assumptions, e.g., ignoring the forwarding delay. Moreover, the maximum jitter is the same for all the solutions and equals zero.
F. EVALUATION ON A HARDWARE PLATFORM
We have also evaluated the performance of CACSS on the hardware platform from [22], and in this context we removed the assumption that the forwarding delay is ignored. For this evaluation, we assumed that all the SWs are of the same type as presented in [22]. The authors proposed an equation for capturing the forwarding delay d in µs as a function of the flow size c in bytes. Although we are using this TSN switch hardware implementation in a different application scenario compared to [22], since the forwarding delay model depends on the hardware implementation and not on the application scenario, their delay model is also applicable to our case.
To be closer to a real implementation, CACSS can also consider the scheduling of PTP flows for time synchronization. These PTP flows have precedence constraints, which are already addressed in CACSS. We have considered that the PTP flows are implemented as high-priority time-sensitive traffic that is scheduled along with the network flows.
The generated GCLs are implemented on the SWs, and the maximum delay and jitter of the flows are measured at the measurement points shown in Fig. 8. We have used the three small ''hardware test cases'' from Table 5, where 4 flows are sent between ES1 and ES2 via SW1, 2 and 3. The three test cases differ in their flows' periods and deadlines, which are in the range of thousands of µs. The measurements were taken over several minutes using an oscilloscope, resulting in hundreds of thousands of samples. The results are depicted in Table 5, where columns 7 and 9 show the maximum delay and jitter values reported by CACSS and columns 8 and 10 show the maximum delay and jitter values measured on the hardware platform. The deviation of the measured and reported maximum E2E delay values is small, less than 1 µs for all the flows in all the test cases. Although the measured maximum E2E jitter is non-zero for all the flows in all the test cases, the values are very small, in the nanoseconds range, without any effect on the deadlines or the control performance.
Let us illustrate the small variations measured in E2E delay for the hardware test case 2 from Table 5. Fig. 11 shows the measured E2E latencies in all samples for each flow, s 1 to s 4 . The x-axis has the measured value of the E2E delay and the y-axis has the number of samples in which this value was measured. Although, as mentioned, the deviations are very small compared to the values reported by our CACSS, this shows the importance of considering realistic assumptions in the problem formulation. Note that the worst-case values of these variations can be added to the network precision δ introduced in Sect. V-B in order to guarantee that deadlines are satisfied.
VII. RELATED WORK
There is already a lot of research on Fog Computing, focused mostly on aspects related to quality-of-service (QoS) [35]-[37], with limited attention to safety-critical and real-time applications such as those used in the industrial domain. Real-time and safety-critical systems require guarantees for non-functional properties such as timing, e.g., that the deadlines are satisfied. Also, control applications have to fulfill non-functional properties related to control performance, e.g., QoC. Addressing the QoC for control applications in the Fog is still an open issue, with researchers investigating the degradation of control applications [38]-[40]. For example, [18] focuses on the routing and scheduling of messages of control applications to protect them from instability. The authors propose the control of the queue gate statuses via GCLs with careful consideration of the non-determinism of messages.
The co-design of control and real-time systems is, however, a well-studied area [41]-[46], tackling the design of controllers and the scheduling of the control tasks and messages with respect to the control performance. The co-design procedure involves designing control applications such that the controller is robust against degradation due to the scheduling of the tasks and messages.
The control performance is not only affected by the scheduling of tasks but also by the scheduling of messages in networks. On one hand, researchers have addressed the configuration of communication aiming at increased control performance [41], [47], [48], but very few works address TSN. On the other hand, there is a lot of work on routing and scheduling for TSN, see the discussion below, but none of these works considers QoC. The work in [18] has considered routing and scheduling in Deterministic Ethernet, but lacks TSN-specific features, which makes it difficult to implement the results, and uses an SMT formulation that cannot optimize the solutions and does not scale for large problem sizes. Our initial investigations in [16], [18] address QoC and consider the particularities of TSN, but use a simplified model for control applications.
Researchers have addressed the routing and scheduling problems in TSN and have employed different approaches for the optimization, such as heuristics, metaheuristics and mathematical programming, e.g., ILP and Satisfiability Modulo Theories (SMT).
An example heuristic approach is [49], where the packets do not wait in switch queues, called no-wait scheduling. The authors propose a Tabu Search metaheuristic to optimize the flowspan, which may become larger because of the no-wait scheduling, and also let lower-priority traffic use the residual bandwidth. Wisniewski et al. [50] increase the flexibility of the scheduling by employing a greedy-based heuristic approach which is less resource demanding and can be implemented on industrial equipment on the field floor. A greedy-based heuristic approach is also proposed in [51], where the authors aim to generate joint network routing and communication schedules that are fault-tolerant, within a reasonably short time. Arestova et al. [52] propose a hybrid genetic algorithm for the communication scheduling and network routing to find a near-optimal solution in a reasonable time, also optimizing the bandwidth to allow more less-critical traffic to be transmitted. A heuristic list scheduler for joint communication scheduling and network routing is proposed in [53], where multicast traffic and application distribution are allowed, and bandwidth is optimized. The same problem is addressed in [54], where a genetic algorithm is employed, and in [23], where multiple traffic types are considered.

FIGURE 11. The details of the measured E2E delay of flows in test case 2 from Table 5, implemented on the hardware platform. Thick lines are kernel density estimates.
The use of SMT solvers for the communication scheduling was first proposed in [55]. The author proposes a general method for off-line scheduling of communication and uses the SMT solver as the back-end solver. The SMT-based model for TT-schedules shows promising results and scales well with the problem size. Craciunas et al. [20] propose an SMT model for the traffic scheduling which generates solutions that are jitter-free and minimize the number of used port queues in the network switches. The authors also propose frame and flow isolation constraints and evaluate them on several tests concerning the runtime and the number of used queues. Craciunas et al. derive general traffic-regulating constraints for SMT solvers in [56], which introduces windows in GCLs and maps the frames to them. Another SMT model, based on ''array theory encoding'', is proposed in [33], where the authors see the GCL windows as array elements, allowing more relaxed scheduling by permitting jitter and having fewer GCL entries. However, the implementation of the proposed method proves resource-demanding. The trade-off between the GCL length and the runtime is well studied in [57].
The SMT-based schedulers have been extended for the benefit of other applications. For example, in [58], the authors combine the traffic scheduling and network routing problems to achieve the minimum delay for AVB traffic. The traffic scheduling combined with task scheduling is studied in [24], where an SMT solver is employed to schedule network messages and tasks on a networked computation platform which is equipped with a time-triggered network. Park et al. [59] propose a genetic algorithm approach to schedule the communication in TSN where preemption is allowed. The proposed algorithm shows increased reliability in the generated solutions. The communication scheduling concerning the security of control applications is addressed in [60], where the authors aim to increase the resilience of the control applications to malicious interference.
VIII. CONCLUSION AND FUTURE WORK
In this paper, we have addressed the problem of scheduling real-time traffic via TSN on an FCP, aiming at improving the performance of industrial control applications and addressing the timing requirements of real-time applications. The scheduled traffic in TSN is regulated through the Gate Control Lists (GCLs), which allow the transmission of flows by opening and closing the switch gates.
We have proposed a Constraint Programming-based solution for determining the GCLs such that the control performance (in terms of QoC) is maximized and the deadlines are satisfied. The solution models the problem through a set of constraints and uses a QoC analytical model inside the objective function for optimizing the solution. We also employ a metaheuristic search strategy to quickly drive the search towards good quality solutions. Our CP solution for messages is extensible and can be integrated with CP task scheduling models from the literature. In addition, we aimed at introducing space in the timeline of message schedules, increasing the probability of successfully integrating our GCLs with the tasks running on the end systems.
As the results show, the solution has successfully scheduled the flows in all test cases and has also achieved good QoC for the control applications. We have used OMNET++ and JitterTime to validate the results and the performance of the proposed QoC analytical model. We have also implemented the resulting GCLs on a TSN hardware platform.
"year": 2021,
"sha1": "fe88065816e18c0425ce08ba84e3611424b27ad4",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09387313.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "fe88065816e18c0425ce08ba84e3611424b27ad4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Added Value of Postoperative Neurotrophins/Peptide Mixture in Treating L5 Motor Weakness in Lumbar Disc Prolapse: A Preliminary Report of Multicenter Randomized Controlled Study
Background data: Neurotrophins/peptide mixture is a porcine brain-derived peptide preparation with pharmacodynamic properties similar to those of endogenous neurotrophic factors. No study has evaluated the postoperative role of neurotrophins/peptide mixture in the recovery of postdiscectomy motor weakness. Purpose: This study aims to evaluate the effect of postoperative neurotrophins/peptide mixture treatment on the recovery of L5 motor weakness after lumbar discectomy compared with placebo. Study design: A prospective randomized controlled study (preliminary report) was conducted. Patients and methods: In total, 15 patients (group I) with L5 weakness who received a postdiscectomy adjuvant neurotrophins/peptide mixture were compared with group II (15 postdiscectomy patients with L5 weakness) treated with a placebo. The whole patient population was followed up at 2 weeks, 1 month, 3 months, 6 months, and 1 year for assessment of motor recovery. Results: The mean postoperative Medical Research Council score was significantly improved in both groups; however, the improvement was faster in group I than in group II. The mean Medical Research Council score improvement was significantly higher in group I than in group II at 2 weeks, 1 month, 3 months, and 6 months; however, it was statistically insignificant at 1 year. At 1-year follow-up, 80% of cases in group I had improved motor power up to grade 5 compared with 40% of cases in group II. The rest of the patients reached grade 4 in both groups. There was no motor deterioration after improvement in either group. There were no reported drug-related adverse effects in group I. Conclusion: Neurotrophins/peptide mixture may be an efficient and safe adjunctive postoperative treatment for discogenic L5 motor weakness. It may accelerate recovery of nerve injury in an acute setting, which may be a result of accelerating nerve regeneration; however, the overall improvement was comparable to placebo (2022ESJ2601).
Introduction
Intervertebral disc prolapse most commonly occurs between the fourth and fifth lumbar and between the fifth lumbar and first sacral vertebrae; only ~5% become symptomatic [1]. The posterolateral location is the most common direction of herniation (~90–95%) because the lateral extension is the weakest part of the posterior longitudinal ligament, thus causing compression of the traversing L5 nerve root [2]. Progressive and significant motor weakness of dorsiflexion of the big toe is the most common indication for surgical discectomy [3].
Neurotrophins/peptide mixture (Cerebrolysin) is a mixture of peptides and free amino acids purified from pig brain; it can cross the blood–brain barrier and is believed to have effects similar to those of endogenous neurotrophic factors on cell growth, proliferation, migration, and differentiation [4,5]. Several fragments of neurotrophic factors, which are believed to stimulate neurotrophic signaling pathways, have been identified in Cerebrolysin by immunoassay, including nerve growth factor, brain-derived neurotrophic factor, ciliary neurotrophic factor, and glial cell line-derived neurotrophic factor [6,7]. Cerebrolysin has been used in several neurological conditions, such as dementia and cerebral stroke, with significant improvement and no reported significant adverse reactions [6,8–10].
No study has evaluated the postoperative role of Cerebrolysin in the recovery of L5 motor weakness. The present study aimed to assess the added effect of Cerebrolysin as a postdiscectomy adjuvant medication for patients with L4–L5 disc prolapse with L5 weakness.
Patients and methods
Between January 2016 and January 2021, 30 patients who presented with L5 motor deficit secondary to lumbar 4–5 disc protrusion/extrusion were included in the study. All included cases had motor power less than grade 4 on the Medical Research Council (MRC) grading scale. The exclusion criteria were as follows: complete L5 motor paralysis (i.e., MRC score 0), spondylolysis, translational and angular lumbar instability, diabetes, smoking, epilepsy, renal impairment, and accompanying generalized peripheral neuropathy. Radiological evaluation was accomplished using plain radiographs (anteroposterior, lateral, and flexion/extension views) and MRI to determine the level of disc herniation, location of the disc (central, posterolateral, and foraminal), percentage of canal compromise, and possible migration of the disc.
Informed consent was given by all participants before surgery after a clear explanation of expected complications and the design of the study. The patients were recruited randomly into two groups, with 15 patients each, by asking them to pick up one of the shuffled sealed envelopes for treatment allocation. All patients were treated by conventional open discectomy under general anesthesia via unilateral fenestration, flavectomy, and removal of the protruded/extruded disc material, and then closure in a standardized fashion.
Postoperatively, group I patients received an adjuvant course of intramuscular Cerebrolysin 5 ml/day, 5 days per week for 4 weeks, whereas group II patients received a placebo in the form of normal saline injection. Both groups received the same regimen of postoperative NSAIDs in the form of oral celecoxib 200 mg as a single, after-meal dose once daily for 2 weeks. Follow-up was scheduled at 2 weeks, 1 month, 3 months, 6 months, and 1 year postoperatively. Regular neurological examinations and muscle power scoring using the MRC score were performed at each visit, with MRC score improvement calculated. The MRC improvement was calculated as the postoperative score at each follow-up minus the preoperative score. Collected data were analyzed blindly by an independent biostatistician using SPSS software 20 (SPSS Inc., Chicago, Illinois, USA) and MegaStat software version 10.1 (McGraw-Hill). Descriptive statistics were computed for all data; data analysis was done using the χ² test for categorical data and the Kruskal–Wallis and Mann–Whitney tests for ordinal variables. P values less than 0.05 were considered statistically significant.
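For readers who wish to reproduce this kind of analysis, the sketch below applies the named tests to toy data with SciPy; the MRC scores are fabricated placeholders (the original analysis used SPSS/MegaStat, not Python).

```python
from scipy.stats import mannwhitneyu, kruskal, chi2_contingency

# Fabricated MRC scores at one follow-up visit (ordinal, 0-5).
group_i  = [4, 5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4]
group_ii = [3, 4, 4, 4, 5, 3, 4, 4, 5, 4, 4, 3, 4, 4, 5]

# Two-group comparison of ordinal scores.
u, p_mw = mannwhitneyu(group_i, group_ii, alternative="two-sided")

# Kruskal-Wallis for comparing more than two groups/time points.
h, p_kw = kruskal(group_i, group_ii)

# Chi-squared for categorical data, e.g., sex distribution per group.
#            male  female
table = [[12, 3],   # group I
         [10, 5]]   # group II
chi2, p_chi, dof, _ = chi2_contingency(table)

print(p_mw, p_kw, p_chi)  # compare each against alpha = 0.05
```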
Results
No case missed the follow-up schedule in either group. All cases were followed up for at least 1 year. There were three females and 12 males in group I, whereas there were five females and 10 males in group II (P = 0.09). The mean age was 34.3 ± 6.2 years (range, 25–49 years) in group I and 31.9 ± 5.6 years (range, 23–44 years) in group II (P = 0.14). The mean preoperative MRC scores showed no statistically significant difference between the two groups (Table 1). Postoperatively, the mean MRC score was higher in group I in all follow-up periods. These differences were statistically significant in the follow-up periods from 2 weeks to 6 months (P < 0.001), whereas they were statistically insignificant at the 1-year follow-up (P = 0.084) (Table 1). In group I, the mean MRC score improved significantly in all follow-up readings (P < 0.001). Using post-hoc analysis, the P value was statistically significant at the 2-week follow-up compared with preoperative scores, became insignificant when compared with the subsequent follow-up readings until 6 months, and then became statistically significant again when comparing 6-month values with 1-year values (Table 2). In group II, the mean MRC scores improved insignificantly at 1 month, and then the P values became statistically significant until the 1-year follow-up (Table 3).
The mean MRC score improvement was higher in group I than in group II during all follow-up periods. The differences were statistically significant at 2 weeks, 1 month, 3 months, and 6 months. However, at the 1-year follow-up, the difference was statistically insignificant (Table 4). At the 1-year follow-up, 80% of cases (12 cases) in group I had motor power grade 5, whereas the other three cases reached grade 4. In group II, only 40% of cases (six cases) had motor power grade 5 and the other nine patients reached grade 4 at the final follow-up (Fig. 1). No case deteriorated after improvement in either group. There were no reported drug-related complications in group I.
Discussion
Cerebrolysin is a complex mixture of balanced and stable biologically active oligopeptides and free amino acids. The neuroprotective properties of Cerebrolysin are attributed to many of its constituents, as it is believed to contain several nerve growth factors such as glial cell-derived neurotrophic factor [7]. Many theories have been suggested to explain the neuroprotective mechanism of Cerebrolysin, such as reduction of amyloid protein deposition, control of interleukin-1 expression and thus reduction of inflammation, reduction of calcium uptake in nerve cells, and antagonism of apoptosis by inhibiting the abnormal metabolism of nitric oxide [11–13].
Many clinical studies have been conducted on the beneficial effect of Cerebrolysin in pathological CNS conditions, mostly brain conditions such as dementia, cerebral stroke, and traumatic brain injuries, with significant improvement, faster recovery, and no reported significant adverse effects. An earlier experimental study examined intrathecal administration of Cerebrolysin in adult rats after avulsion of the C5 ventral roots and suggested that Cerebrolysin can reduce avulsion-induced loss of adult rat motoneurons. Recently, Haggag et al.
[23] evaluated the local application and injection of Cerebrolysin hydrogel after facial nerve axotomy in 72 rats and found a statistically significant improvement in facial nerve regeneration, with enhanced Schwann cell and axonal growth, compared with the control group. In another experimental study on peripheral nerve lesions, including post-traumatic brachial plexopathy and compressive radial nerve injury, Cerebrolysin was reported to be associated with more rapid neurological recovery than other therapies, which could support the use of Cerebrolysin in the treatment of acquired peripheral nervous system diseases [24]. Moreover, the effects of intraperitoneal Cerebrolysin injections in a type 2 diabetic peripheral neuropathy mouse model revealed that the number, diameter, and area of myelinated nerve fibers increased in the sciatic nerves of these mice after administration of Cerebrolysin [11]. After a thorough review of the literature, the only clinical study performed on the effect of Cerebrolysin on peripheral neurological lesions was a single-blinded randomized clinical trial conducted on 52 patients with Bell's palsy. The author found that Cerebrolysin did not affect the overall recovery rate compared with a placebo; however, it had a significant effect on the speed of recovery [25].
In this study, we evaluated the efficacy of Cerebrolysin in treating L5 motor weakness as an adjunctive treatment after surgical discectomy versus surgical discectomy plus placebo. We found that Cerebrolysin, when given after surgical discectomy, had a good effect on the speed of improvement compared with placebo. Although the improvement in MRC score was significantly faster in the Cerebrolysin group than in the placebo group, the overall 1-year MRC score and 1-year MRC improvement were statistically comparable between groups; this could be attributed to the small sample size. The small sample size is one of the study's limitations; however, this is a preliminary report and part of an ongoing study being carried out on a larger scale. More studies with prolonged follow-up periods and larger populations are recommended to assess the therapeutic effect of Cerebrolysin in different peripheral nerve disorders. Moreover, the evaluation of higher doses and longer treatment durations is recommended in future studies. To the best of our knowledge, this is the first clinical study to evaluate the postoperative role of the neurotrophins/peptide mixture Cerebrolysin in compressive motor weakness after decompression.
Conclusion
Neurotrophins/peptide mixture (Cerebrolysin) may be an efficient and safe adjunctive postoperative treatment for discogenic L5 motor weakness. It may accelerate recovery of nerve injury in an acute setting, which may be a result of accelerating nerve regeneration; however, the overall improvement was comparable to placebo.
Conflict of interest
There are no conflicts of interest.
"year": 2022,
"sha1": "2a59f3181a6e4091f588b766ee7ce6f500fa4837",
"oa_license": "CCBYNCSA",
"oa_url": "https://esj.researchcommons.org/cgi/viewcontent.cgi?article=1262&context=egyspinej",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0ce9aa19ad4d0bb5168af0e8e2f9eea0bc76ebfe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Low‐carbohydrate high‐fat weight reduction diet induces changes in human gut microbiota
Abstract Obesity has become a major public health problem in recent decades. More effective interventions may result from a better understanding of microbiota alterations caused by weight loss and diet. Our objectives were (a) to calculate the fiber composition of a specially designed low‐calorie weight loss diet (WLD), and (b) to evaluate changes in the composition of gut microbiota and improvements in health characteristics during WLD. A total of 19 overweight/obese participants were assigned to a 20%–40% calorie-reduced low‐carbohydrate high‐fat diet for four weeks. Protein and fat content in the composed diet was 1.5 times higher than that in the average diet of the normal weight reference group, while carbohydrate content was 2 times lower. Food consumption data were obtained from the assigned meals. Microbial composition was analyzed before and after the WLD intervention from two sequential samples by 16S rRNA gene sequencing. During WLD, body mass index (BMI) was reduced by an average of 2.5 ± 0.6 kg/m² and stool frequency was normalized. The assigned diet induced significant changes in fecal microbiota. The abundance of bile‐resistant bacteria (Alistipes, Odoribacter splanchnicus), Ruminococcus bicirculans, Butyricimonas, and Enterobacteriaceae increased. Importantly, the abundance of bacteria often associated with inflammation, such as Collinsella and Dorea, decreased in parallel with a decrease in BMI. Also, we observed a reduction in bifidobacteria, which can be attributed to the relatively low consumption of grains. In conclusion, weight loss results in significant alteration of the microbial community structure.
diet and human health and their associations with obesity, scientists have extensively studied the gut microbiome in recent years. Despite much work, it remains unclear how our diet and gut microbiome influence weight management and its association with obesity status.
Obesity has been linked with altered gut microbiota through additional energy harvest (Turnbaugh et al., 2006). Diet is one of the main factors that modulate gut microbiota because changes in bacterial abundance can occur very rapidly after the digested food has reached the colon (David et al., 2014). Although plant-based diets have a significant effect on gut microbiota, animal-based diets have been shown to have a greater impact (David et al., 2014). Nutritional profiles of the dietary patterns differ greatly in total energy and the content of macronutrients and dietary fiber (DF). Bacterial fermentation of DF has a major influence on intestinal function: the production of organic acids influences glucose and lipid metabolism, the immune system, and hormone secretion (Chambers et al., 2018).
While caloric and macronutrient intake has been implicated in microbiome modulation, the effect of fiber content in weight reduction diets on the gut microbiota has been less extensively studied (Table A1). Weight reduction studies have demonstrated disparate results with regard to changes in the composition of the microbiota, which may be partially due to differences in the amount and choice of fiber in the diet. Thus, evaluating DF-associated effects during weight loss is crucial because reducing the daily caloric intake can also lead to a reduction in DF intake (Brinkworth et al., 2009).
Generally, interventional studies that investigate the relationship between food intake and the gut microbiome are often performed by supplementing extracted and purified fiber within a regular diet; however, baseline consumption of foods can remain different (Baxter et al., 2019; Dewulf et al., 2013). To circumvent this problem, uniform diets can be designed using prepared foods for all participants; however, it can be costly to manage food delivery, and the variety of foods that can be prepared in this way is too restrictive for the subjects. Alternatively, more subject-friendly diets can be formulated based on whole food components. To use this approach, a database of whole foods with detailed nutrient information is required, which would enable one to fine-tune diets with specified amounts of selected nutrients. With regard to microbiota, fiber composition should be determined in detail because fibers (type and amount) drive the modulation of gut microbiota (Dewulf et al., 2013; Walker et al., 2011).
We aimed to characterize the specific fiber content and sources in both weight loss and habitual diets, to analyze the fecal microbial composition of obese and normal weight subjects, and to investigate the effect of WLD on fecal microbial consortia.
| Study design
A group of overweight/obese participants began a voluntary low-calorie low-carbohydrate high-fat WLD interventional program to follow the appointed regime for at least four weeks. Inclusion criteria for participants included a BMI of >28 kg/m² in the intervention arm (maximum 25% in the BMI range 28–30 kg/m²), a BMI of >18 kg/m² in the reference group, no previous history of gastrointestinal (GI) disease, no reported antibiotic use in the preceding 3 months or use of any medication known to alter bowel motility, no history of food allergies, not taking medications, not pregnant nor breast-feeding, no specific dietary choices, and the ability to adhere to an omnivorous diet.
Participant recruitment for the intervention arm was carried out in April and May 2017, and the 4-week WLD was carried out in June 2017. Reference group recruitment was carried out in March-April 2017, and sampling in April-August 2017, described in detail previously (Adamberg et al., 2020).
| Diet design for WLD
Assigned diet plans included 3 main meals (30 ± 5% E each) and 1 dessert (10% E) per day. Subject-specific daily energy intake was calculated based on daily expenditure, which accounted for body weight, activity, and energy restriction of 30 ± 10%. Meals in the diet were randomly selected from specifically designed recipes, which consisted mostly of animal-based foods (meat and dairy products) as a protein source, supplemented with vegetables, fruits, and a limited amount of cereals. Participants were provided with individual daily meal plans based on their energy needs.
Subjects were asked to confirm that they had eaten the prescribed meals and they received personal assistance via phone calls if they were having any difficulties adhering to the diet. Diet plan meals were prepared by the subjects, and no additional food intake was allowed. None of the subjects consumed pre-or probiotics as supplements.
| Diet analysis in the WLD and reference groups
The subjects in the reference group were instructed to record their food consumption for at least one day before sampling, described in detail previously (Adamberg et al., 2020). For calculations, meals were decomposed to ingredients based on dietary instructions in the WLD group and based on consumed foods in the reference group. Assigned WLD and reference group consumption data were analyzed for energy, macronutrient, and total DF content based on the NIHD (The National Institute for Health Development) food composition database Nutridata v6/7.
| Diet quantification
Nutritional data were analyzed and normalized to a 1,000 kcal caloric intake and presented as the mean of a group ± SD. Analysis of the food and diet records was carried out as previously described (Adamberg et al., 2020). Briefly, all food components were decomposed into 35 primary and 72 secondary food groups, of which 46 dietary fiber-containing food groups were characterized based on the fiber patterns in raw food materials to cover similar products within the same category. Meat and milk-derived products were considered negligible sources of DF if they did not contain cereals, fruit, or vegetables. Specifically, the content of arabinoxylan, β-glucan, cellulose, inulin, lignin, and pectin was calculated for each category based on literature data on raw foods (Bengtsson et al., 1990; Dodevska et al., 2013, 2015; Herranz et al., 1981; Holtekjølen et al., 2006; Kalala et al., 2018; Karppinen et al., 2000; nut.s, 2020).
Analysis of food consumption data was carried out using custom R scripts.
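A minimal sketch of the per-1,000 kcal normalization described above is shown below in Python (the study itself used custom R scripts); the one-day nutrient record is an invented placeholder.

```python
# Hypothetical one-day intake record for one subject.
day = {"kcal": 1850, "fat_g": 103, "protein_g": 116,
       "carb_g": 105, "fiber_g": 21.5}

def per_1000_kcal(record: dict) -> dict:
    """Scale all gram fields to a 1,000 kcal reference intake."""
    factor = 1000 / record["kcal"]
    return {k: round(v * factor, 1)
            for k, v in record.items() if k.endswith("_g")}

print(per_1000_kcal(day))
# e.g. {'fat_g': 55.7, 'protein_g': 62.7, 'carb_g': 56.8, 'fiber_g': 11.6}
```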
| Fecal sampling and anthropometric data collection
The subjects were asked to collect fecal samples immediately after defecation with a sterile swab and to suspend the collected material in a buffer containing ammonium sulfate (40% solution), EDTA (16 mM), and sodium citrate (20 mM). With each fecal sampling, a Bristol stool scale score (BSS) was also recorded by the subjects. The samples were transported at room temperature to the laboratory and stored at −20°C before DNA extraction. For anthropometric measurements, body weight and height were measured before and after the intervention period. Height was measured to the nearest 0.5 cm and weight to the nearest 0.5 kg. BMI was calculated at the beginning and end of the study using the formula BMI = weight/height² [kg/m²]. Two fecal samples were collected sequentially before the intervention and four weeks later. Participants were provided stool collection swab kits, which contained saturated ammonium sulfate solution and EDTA-citrate buffer. Samples were delivered to the laboratory every three days and were stored at +2–6°C until extraction.
Reference group samples were collected similarly, but the interval between two sampling time points varied between 41 and 121 days, on average 61 days (Adamberg et al., 2020). On average, 84,324 reads per sample were obtained for the intervention group and 82,639 reads per sample for the reference group.
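The BMI computation itself is straightforward; a one-line Python equivalent of the formula above is given for completeness (the example values are arbitrary).

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index, kg/m^2, rounded to one decimal."""
    return round(weight_kg / height_m ** 2, 1)

print(bmi(102.0, 1.75))  # 33.3 -> within the study's intervention range (BMI > 28)
```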
| Taxonomic profiling of the sequencing data
DNA sequence data were analyzed using BION-meta according to the author's instructions (https://box.com/v/bion). First, sequences were cleaned at both ends using a 99.5% minimum quality threshold for at least 18 of 20 bases for 5′-end and 28 of 30 bases for 3′-end, then joined, followed by removal of shorter contigs than 350 bp.
Sequences were cleaned from chimeras and clustered by 95% oligonucleotide similarity (k-mer length of 8 bp, step size 2 bp). Lastly, consensus reads were aligned to the SILVA reference 16S rDNA database (v123) using a word length of 8 and a similarity cut-off of 90%. All mapped taxa with relative abundance <0.0001 were discarded and considered as potential noise.
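The length and abundance filters described above can be expressed in a few lines; the sketch below mirrors those two steps only (BION-meta performs the actual quality trimming and SILVA alignment), with made-up inputs.

```python
# Keep only joined contigs of at least 350 bp.
contigs = ["A" * 500, "C" * 340, "G" * 420]          # placeholder reads
contigs = [c for c in contigs if len(c) >= 350]

# Discard taxa below the relative-abundance noise threshold.
abundances = {"Alistipes": 0.012, "Dorea": 0.00008, "Butyricimonas": 0.003}
abundances = {taxon: a for taxon, a in abundances.items() if a >= 0.0001}

print(len(contigs), abundances)
# 2 {'Alistipes': 0.012, 'Butyricimonas': 0.003}
```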
| Statistical analysis
Based on sample size calculations, we estimated that with 17 participants, the study would have more than 80% power to detect a significant difference among weight loss study groups, assuming a mean BMI reduction by 3 kg/m 2 , with a mean BMI and standard deviation of 35 and 3.2 kg/m 2 , respectively, at an alpha level of 5%.
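The stated power calculation can be approximated with a paired t-test power analysis, treating the expected 3 kg/m² BMI reduction against a 3.2 kg/m² standard deviation as the effect size; this is our reconstruction of the calculation, not the authors' original code.

```python
from statsmodels.stats.power import TTestPower

effect_size = 3.0 / 3.2           # mean change / SD of change
power = TTestPower().power(effect_size=effect_size,
                           nobs=17, alpha=0.05,
                           alternative="two-sided")
print(round(power, 2))  # ~0.95, i.e., comfortably above 80%
```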
Statistical analysis included bacteria with average colonization frequency >70% and average abundance >0.001. Analysis of data was carried out in the R statistical programming language, version 3.5.0 (R Core Team, 2020). The resulting p-values were corrected for multiple comparisons for each phylogenetic level using Benjamini-Hochberg correction (FDR). A corrected p-value <0.1 was considered statistically significant.
Unless stated otherwise, corrected p-values are shown in the text.
Pairwise comparisons were evaluated using the Wilcoxon signed-rank test; for comparisons between the test and reference groups, the Kruskal–Wallis test was applied.
To control for within-subject variability, we used the subsequent sample pairs as within-subject controls and compared β-diversity before and after the intervention. This was also applied to reference group samples. The following cutoffs were used: *p < 0.05; **p < 0.01; ***p < 0.001; and ****p < 0.0001.
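A compact sketch of the paired testing with Benjamini-Hochberg correction at the stated 0.1 threshold follows; the abundance vectors are random placeholders, not study data.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Placeholder paired abundances (before vs. after) for several taxa.
before = rng.random((8, 19))
after = before + rng.normal(0.02, 0.05, size=before.shape)

pvals = [wilcoxon(b, a).pvalue for b, a in zip(before, after)]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.1, method="fdr_bh")
for i, (p, q, r) in enumerate(zip(pvals, p_adj, reject)):
    print(f"taxon {i}: p={p:.3f}, FDR-adjusted p={q:.3f}, significant={r}")
```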
| Agglomerative hierarchical clustering
Ward's agglomerative hierarchical clustering on a distance matrix was generated from a species by sample Bray-Curtis distance matrix. The method produces a dendrogram by treating each sample as a singleton cluster, merging pairs of clusters until all clusters have been merged into one big cluster containing all samples. Ward's agglomeration method minimizes the total within-cluster variance.
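The clustering step can be reproduced as below. Note that Ward's criterion formally assumes Euclidean distances, so applying it to a Bray-Curtis matrix, as described, is a pragmatic choice rather than a strictly principled one; the abundance matrix here is random placeholder data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = rng.random((38, 120))             # samples x species (placeholder)
X = X / X.sum(axis=1, keepdims=True)  # relative abundances

d = pdist(X, metric="braycurtis")     # condensed distance matrix
Z = linkage(d, method="ward")         # agglomerative merging, Ward criterion
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels[:10])
```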
| RESULTS
A total of 27 overweight/obese participants were enrolled in a WLD regime. Of those, one participant did not enter the study. By the end of the study, three participants were lost due to low compliance or incomplete baseline measurements, three canceled for unknown reasons, and one discontinued the weight loss program due to an unexpected antibiotic course. Thus, a total of 19 overweight/obese participants (14 females, 5 males, aged 25 to 43) with BMI ranging between 28.9 and 44.4 kg/m 2 successfully finished. A TREND flow diagram is displayed in Figure 1.
| Assigned diet was high in fat, reduced BMI, and normalized bowel habits
The diet used for all participants was rich in fat (55.6 ± 2.1 g / 1,000 kcal, 50% of total energy) and protein (62.7 ± 3.2 g / 1,000 kcal, 25%), and low in carbohydrates (56.5 ± 3.5 g / 1,000 kcal, 23%; Figure 2). Normalized intake values per 1,000 kcal were used because the caloric intake of the participants varied greatly due to highly dissimilar body weights. Both fat and protein were mainly sourced from animal-based foods (Figure A1). DF content was moderate (11.6 ± 1.1 g / 1,000 kcal), but it was comparable to levels in the average Estonian diet of the obese reference group (9.8 ± 4.8 g / 1,000 kcal). DF composition analysis based on food categories showed that most of the DF was cellulose and pectin (3.8 ± 0.4 and 2.4 ± 0.2 g / 1,000 kcal, respectively), which was higher than those in the obese reference group (2.5 ± 1.4 and 1.5 ± 1.0 g / 1,000 kcal, respectively). On the other hand, consumption of arabinoxylan and β-glucan was relatively low (1.4 ± 0.2 and 0.3 ± 0.1 g / 1,000 kcal, respectively), yet a similar intake was observed in the obese reference group (1.8 ± 1.1 and 0.5 ± 0.4 g / 1,000 kcal, respectively).
DF source analysis showed that vegetables were the main source of DF (5.7 ± 0.6 g / 1,000 kcal), while intake of DF originating from cereals was very low in the WLD group (1.4 ± 0.5 g / 1,000 kcal) compared to the reference group (2.0 ± 1.2 and 3.5 ± 2.0 g / 1,000 kcal, respectively). Specifically, consumption of DF from wheat, rye, and barley during WLD was low (0.5 ± 0.4 g / 1,000 kcal). Due to limited information about food processing, the content of resistant starch (RS) was excluded from our analysis.

Table 1: Participant characteristics of the WLD and reference group at baseline.
During the intervention, the BMI and body weight were significantly reduced on average by 2.5 ± 0.6 kg/m 2 and 7.7 ± 2.5 kg, respectively (p < 0.0002 for both) (Figure 3a,b).
Of the 14 participants who filled in questionnaires about gastrointestinal (GI) disturbances before and after WLD, not a single participant reported daily constipation or diarrhea, and only one subject reported daily flatulence at the end of the intervention (before the intervention: 1, 1, and 7 subjects, respectively). A detailed description of bowel habits before and after the intervention can be found in Table A2. The mean BSS value before WLD was 3.9 ± 1.5 and decreased to 3.4 ± 0.8 after the intervention, but in general BSS values stabilized (Figure 3c).
Bray-Curtis (B-C) distances of species composition within subjects reveal that the intervention resulted in significantly altered microbial profiles in the WLD group. Similarly, within two months, changes in the microbiota of the reference group were also significant, but not as extensive as in the test group (Figure 4b). Over the study period, a change in β-diversity was observed between participants in the test and reference groups (Figure A3). Overall, between-subject dissimilarities in the test group decreased more significantly, potentially due to a similar dietary regime with comparable nutrient composition. Hierarchical clustering revealed that most of the samples from before and after the intervention paired together (Figure A7).
Although within-subject B-C distances changed significantly, both samples from the same participant before and after WLD clustered together. Several taxa increased, but not significantly, after correcting for multiple comparisons (uncorrected p < 0.05, corrected p > 0.1). At the species level, changes were less pronounced but still noteworthy; for example, the abundance of B. adolescentis, B. longum, C. aerofaciens, D. formicigenerans, and D. longicatena was reduced after WLD, but not significantly (uncorrected p < 0.05, corrected p > 0.1).
Analysis of presence/absence data at the species level showed that several bacteria were less or more frequent after WLD. The most striking example was cellulose-degrading B. cellulosilyticus, which was more prevalent after the intervention and detected (>0.01%) in 12/19 samples after WLD compared to 4/19 samples before the intervention (Figure A5).

Participants in the WLD group were divided into three subgroups based on their initial abundance of significantly altered taxa: not detected, low, and high abundance groups (ND, LAG, and HAG, respectively). This analysis revealed that changes in bacterial levels depended on starting levels, meaning that bacteria in the LAG group increased more than in the HAG group. For example, abundances of Porphyromonadaceae, Rikenellaceae (Alistipes), Butyricimonas, Ruminococcaceae_UCG−002, Ruminococcus_1, and Odoribacter splanchnicus were significantly increased after WLD in the LAG group compared to insignificant changes in the HAG group (Figure A6). In the case of bacteria that decreased after WLD, the reduction of Bifidobacterium, Collinsella aerofaciens, and both Dorea species was significant in the HAG group, while significant changes were not observed in the LAG group (Figure A6).

Figure 5: Altered taxa in response to WLD and comparison with reference groups. Statistical analysis was carried out at the family, genus, and species levels. (a) Average abundance of altered taxa. (b) Uncorrected and corrected p-values between before and after samples from the same subject in the WLD group. (c) Fold change after WLD vs. before study and reference group abundances. The last six columns indicate the logarithmic fold change by colors ranging from dark blue to red. B-H corrected p < 0.1 (+), <0.05 (*), and <0.01 (**). See Figure A4 for full data on subject-level abundances.
| Comparison of pre-and post-intervention time points to normal weight, overweight, and obese reference groups
To investigate how WLD altered bacteria compared to the wider population, pre-intervention and post-intervention time points were compared with normal weight, overweight, and obese reference groups. Further analysis included only taxa, which were significantly altered in the intervention. No significant difference in bacterial abundances was detected between the obese reference group and the WLD group (Figure 5c, Figure A4). On the other hand, compared to the normal weight reference group, the abundance of
| DISCUSSION
This study evaluated the effect of a low-carbohydrate high-fat weight loss diet (WLD) on the intestinal microbiota of overweight/ obese subjects. We specifically analyzed the macronutrient, fatty acid, and dietary fiber composition of appointed WLD to elaborate the effect of weight loss on fecal microbiota and gastrointestinal (GI) symptoms. Another goal of this study was to compare microbiota between WLD and reference groups (normal weight, overweight, and obese individuals).
Consumption of dietary fiber (DF) in a habitual diet has been well
characterized by food sources (O'Neil et al., 2012), but weight reduction diets are poorly defined and only provide information about total DF, non-starch polysaccharides, or resistant starch (RS) content (David et al., 2014; Fava et al., 2013; Santacruz et al., 2009; Walker et al., 2011). To date, the most detailed dietary analysis of fiber intake in a habitual diet was conducted by Munch Roager et al. (2019), who analyzed the effects of whole-grain and refined-grain diets on adult human fecal microbiota. They measured the RS, arabinoxylan, and monosaccharide composition of whole-grain and refined-grain products in the diets utilized in the study. In another study, the same group investigated the effects of a low-gluten diet on fecal microbiota and analyzed the carbohydrate composition of DF in representative meals of the diets utilized in the study (Hansen et al., 2018). They observed that arabinoxylan-rich cereals were important to keep sufficient levels of bifidobacteria in the fecal microbiota (see below).
In our study, diet analysis revealed that DF content was moderate and slightly below the recommended value of 12.6 g/1000 kcal (Øverby et al., 2013), and was comparable with the DF content in the reference group, with a non-statistically significant upward trend.
Because plants contain a wide variety of DFs, we formed 11 main categories and 45 subcategories of foods to characterize the specific fiber composition of foods. From the main categories, the most abundant fiber source was vegetables, which provided approximately 50% of the total DF intake and subsequently determined the cellulose-rich nature of WLD. The second richest source of DF was fruits, which significantly increased the amount of pectin and lignin content in WLD. Because cereal consumption was low in WLD, it resulted in a low-to-moderate amount of arabinoxylan and β-glucan in the diet.
In our study, before introducing the WLD regime, the GI symptoms of subjects varied from low Bristol Stool Scale Score (BSS) to high BSS, and many reported frequent flatulence. After the fourweek intervention, these conditions normalized thereby demonstrating a positive effect of this WLD on GI symptoms. These effects can be explained by the high amount of vegetable fibers, for example, cellulose, pectin, and lignin in the WLD plan.
Previous weight-loss interventions have shown subject-specific deviations in community composition and considerable alterations in specific bacterial abundances (Ott et al., 2017). In our study, using NMDS analysis, we show that the microbial communities within participants drifted during the intervention and in some cases displayed large changes indicating more significant alterations in the microbial communities. According to the hierarchical clustering, our WLD intervention did not hamper subject-wise clustering, which has been also shown by Salonen et al. (2014).
Although baseline fecal microbiota had similar abundance profiles compared with reference groups, we observed a lower abundance of Christensenellaceae and a higher abundance of Dorea compared to the normal weight reference group. These bacteria are known to correlate with BMI (Goodrich et al., 2014). After four weeks of WLD intervention, the enterotype status and α-diversity were mostly unchanged. However, specific changes in the microbiota were observed, for example, a decrease in the number of Collinsella, Coprococcus, and Dorea species. These results correspond with results from other weight loss studies and dietary interventions, where the α-diversity (Ott et al., 2017) or enterotype status (Wu et al., 2011) were not affected by short-term calorie reduction yet an increase in α-diversity has been reported after long-term weight loss (Liu et al., 2017).
Our study corroborates previous findings concerning the reduction of bifidobacteria on carbohydrate-limited hypocaloric diets (Duncan et al., 2007; Salonen et al., 2014; Santacruz et al., 2009). However, an increased abundance of Bifidobacterium has been observed on a moderate-carbohydrate, fiber-rich weight reduction diet (Ott et al., 2017). This could be explained by the contrasting macronutrient profiles of the applied diets, because the WLD intervention in our study was limited in fiber and carbohydrate content compared to the dietary regime applied by Ott et al. (2017). In studies where diets aimed to maintain body weight, a supporting effect of high-carbohydrate diets on the abundance of bifidobacteria has been shown in comparison with high-fat diets (Fava et al., 2013). However, a decline in bifidobacteria has also been observed in a weight loss intervention on a macronutritionally balanced diet (Santacruz et al., 2009), a gluten-free diet (Palma et al., 2009), and a low-gluten intervention diet (Hansen et al., 2018). Thus, the reduction of Bifidobacterium levels after the WLD intervention can be attributed to the low intake of cereal grains and starchy vegetables. There is some evidence that bifidobacteria are supported by arabinoxylan oligosaccharides, although only a few studies show that long-chain arabinoxylan is bifidogenic (Hopkins et al., 2003; Monteagudo-Mera et al., 2018; Truchado et al., 2017). The growth of bifidobacteria is supported by dietary fructans (Dewulf et al., 2013), which are enriched in the wheat endosperm. In WLD, the intake of arabinoxylan and fructan was low-to-moderate but comparable to reference group values, and the nature of the food components suggests that RS intake could have been low, thus potentially limiting the growth of bifidobacteria. Studies that compare the microbiota of normal weight and obese subjects have shown contrasting results regarding Bifidobacterium, which has been associated with both high (Selma et al., 2016; Sepp et al., 2013; Verdam et al., 2013) and low BMI (Ignacio et al., 2016; Korpela et al., 2017; Santacruz et al., 2010) in adults and children. Our study supports the idea that the abundance of bifidobacteria is not conditionally dependent on weight or weight loss and is rather related to fiber type and carbohydrate content in WLD.
Another interesting shift after the WLD intervention is an increase in R. bicirculans. This trend has also been observed on a high-protein low-fat weight reduction diet (Salonen et al., 2014). It has been suggested that R. bicirculans selectively utilizes certain hemicelluloses, especially β-glucans and xyloglucan (XyG) (Wegmann et al., 2014).
Vegetables, especially leaves, are rich in XyG, which could also explain the increase in the prevalence of B. cellulosilyticus on the vegetable-rich low-cereal WLD (McNulty et al., 2013; Williams et al., 2017).
Collinsella and Dorea species, which were both reduced in WLD, have been associated with metabolic diseases (Candela et al., 2016;Duvallet et al., 2017;Gomez-Arango et al., 2018;Goodrich et al., 2014;Lahti et al., 2013;Liu et al., 2017;Zupancic et al., 2012). C. aerofaciens levels have been shown to decrease on a high-protein lowfat weight reduction diet (Walker et al., 2011). Diet-specific effects on C. aerofaciens have not yet been elucidated: an increase in prevalence has been observed after a high-cereal-grain diet (Foerster et al., 2014) and reduced abundance after vegetable and whole-grain fiber-rich fruit-free diet (Candela et al., 2016). Nutritional studies have shown that low-gluten intervention reduces the abundance of Dorea (Hansen et al., 2018), which agrees with our results because consumption of DF from wheat, rye, and barley during WLD was minimal. Biochemical tests have shown that both Collinsella and Dorea species exhibit low-carbohydrate fermentation while the latter species can consume sugars derived from arabinoxylan or fructose (Kageyama et al., 1999;Taras et al., 2002).
Enrichment of Butyricimonas and Rikenellaceae in lean subjects, negative correlation with BMI and triglyceride levels indicates that these taxa may promote health or contribute to the prevention of obesity (Goodrich et al., 2014;McNulty et al., 2013). Our study supports this idea because these taxa increased after WLD intervention. Furthermore, a high abundance of butyric-acid-producing Butyricimonas has been associated with normal weight and diets high in animal protein and saturated fats (Garcia-Mantrana et al., 2018).
High-fat diets have been previously associated with increased bile release (Cummings et al., 1978;David et al., 2014), while weight reduction diets can reduce serum bile acid (BA) (Biemann et al., 2016;Jahansouz et al., 2016;Straniero et al., 2017) and total fecal BA concentration (Kudchodkar et al., 1977). On the other hand, fecal BA concentrations were not altered during dietary weight loss therapy (Damms-Machado et al., 2015) but were reduced with a low-fat hypocaloric diet supplemented by high fiber (Reddy et al., 1988). Even though we did not analyze the BA concentrations in feces or plasma, the abundance of several bile-tolerant bacteria increased during WLD such as Rikenellaceae (Alistipes), Odoribacter splanchnicus, and Bilophila wadsworthia, which indicates that bile concentration may have increased in the GI tract. An increase of B. wadsworthia on a high-fat diet has also been observed in other studies (David et al., 2014).
| CON CLUS IONS
This study investigated changes in fecal microbiota during significant weight loss on a high-fat diet. In contrast with most weight loss studies, we characterized the DF sources and estimated the specific DF intake in the diets used which provides an additional layer of data to link microbiota alterations with diet. To our knowledge, this is the first publication that characterizes specific fiber intake and DF intake quantitatively by food subcategories in a weight loss study based on food composition data. WLD intervention both reduced BMI and improved GI symptoms. High vegetable intake increased the levels of cellulose and low-cereal intake reduced the levels of arabinoxylan and β-glucan content in the diet, which were accompanied by shifts in microbiota such as a reduction in the abundance of bifidobacteria. WLD supported the growth of bile-resistant bacteria, while the abundance of bacteria associated with inflammation was reduced. We conclude that the dietary intake of different fibers and the initial abundance of bacteria in the microbiota (low or high abundant groups) should be taken into account when analyzing the impacts of a weight reduction diet.
ACK N OWLED G M ENTS
This study is supported by the European Regional Development Fund (2014-2020.4.02.16-0058) and the Estonian Ministry of Education and Research (project IUT 1927). We would like to acknowledge all the subjects who joined this study.
CO N FLI C T O F I NTE R E S T
None declared.
AUTH O R CO NTR I B UTI O N S
Madis Jaagura: Conceptualization (supporting); Data curation
E TH I C S S TATEM ENT
The study protocol was approved by the Tallinn Medical Research Ethics Committee (TMEK no 1631). Informed consent was obtained from all subjects involved in the study.
DATA AVA I L A B I L I T Y S TAT E M E N T
All data are provided in full in this paper and its appendices.
"year": 2021,
"sha1": "90ece419727caff34f98aaa24d61fe9c3b044b7e",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mbo3.1194",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "90ece419727caff34f98aaa24d61fe9c3b044b7e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3,3′-({4-[(4,5-Dicyano-1H-imidazol-2-yl)diazenyl]phenyl}imino)dipropionic acid
The title compound, C17H15N7O4, is a push–pull non-linear optical chromophore containing a dialkylamino donor group and the dicyanoimidazolyl acceptor separated by a π-conjugated path. The benzene and imidazole rings are not coplanar, making a dihedral angle of 10.0 (2)°. In the crystal, molecules are linked by an extended set of hydrogen bonds and several motifs are recognized. Pairs of molecules are held together by hydrogen bonding between carboxy O—H donor groups and diazenyl N-atom acceptors, forming R 2 2(24) ring patterns across inversion centres. Four-molecule R 4 4(28) ring motifs are formed, again across inversion centres, through hydrogen bonding involving carboxy O—H donor groups and diazenyl and imidazole N-atom acceptors. Four-molecule R 4 4(42) patterns are formed among molecules related by translation and involve carboxy O—H and imidazole N—H donor groups with carbonyl O-atom and imidazole N-atom acceptors.
The rationalization of the local packing modes of chromophore units (Thallapally et al., 2002; Centore & Piccialli, 2012; Centore, Piccialli & Tuzi, 2013) is another crucial point, because many properties required for optimum device performance (e.g., electron mobility) critically depend on the packing no less than on strictly molecular properties. In our research group we are interested in the synthesis of new heterocyclic compounds, including metal-containing heterocyclic compounds (Takjoo et al., 2011; Takjoo & Centore, 2013), for applications as advanced materials and bioactive compounds, and in the analysis of crystal structures controlled by the formation of H bonds (Centore, Jazbinsek et al., 2012; Centore, Fusco, Jazbinsek et al., 2013). Following these issues, we report, in the present paper, the structural investigation of the title compound, shown in the Scheme. The title compound is a typical push-pull azo-dye, containing the dialkylamino as donor group and two cyano acceptor groups. Moreover, the cyano groups are attached to an electron-poor imidazole ring. The chromophore unit has been used in the synthesis of polymers showing quadratic NLO behaviour (Carella, Centore, Sirigu et al., 2004).
The molecular structure is shown in Fig. 1. The geometry around the donor N1 atom is substantially planar, indicating sp² hybridization (the sum of valence angles at N1 is 360°), and the pattern of bond lengths within the adjacent phenyl ring shows a certain degree of quinoidal character. All these structural features are in accordance with the expected π conjugation and push-pull character of the chromophore group.
The two aromatic rings are not coplanar, the dihedral angle between the mean planes being 10.0 (2)°; the π-conjugated part of the molecule has a slightly curved shape, as the result of small torsions around the bonds C10-N2, N2-N3 and N3-C13.
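For illustration, the dihedral angle between two ring mean planes can be computed from atomic coordinates as below: fit a least-squares plane to each ring and take the angle between the plane normals. The coordinates shown are placeholders, not the deposited structure.

```python
import numpy as np

def plane_normal(coords: np.ndarray) -> np.ndarray:
    """Unit normal of the least-squares mean plane through the atoms."""
    centered = coords - coords.mean(axis=0)
    # The right singular vector with the smallest singular value
    # is normal to the best-fit plane.
    return np.linalg.svd(centered)[2][-1]

def interplanar_angle(ring_a: np.ndarray, ring_b: np.ndarray) -> float:
    c = abs(np.dot(plane_normal(ring_a), plane_normal(ring_b)))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))

# Placeholder coordinates (angstroms): a flat 6-ring and a tilted 5-ring.
benzene = np.array([[np.cos(a), np.sin(a), 0.0]
                    for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
imidazole = np.array([[np.cos(a), np.sin(a), 0.17 * np.sin(a)]
                      for a in np.linspace(0, 2 * np.pi, 5, endpoint=False)])
print(round(interplanar_angle(benzene, imidazole), 1))  # ~10 degrees
```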
The molecules of the title compound have several H bonding donor and acceptor groups, and the crystal packing is dominated by the formation of H bonds (Table 1). Several H bonding motifs are recognized in the crystal packing (Allen et al., 1999; Steiner, 2002) and some of them are shown in Fig. 2
Refinement
The H atoms of the carboxy groups and of the imidazole ring were located in difference Fourier maps and their coordinates were refined.
All other H atoms were generated stereochemically and were refined using the riding model. For all H atoms, Uiso = 1.2 × Ueq of the carrier atom was assumed.
Computing details
Data collection: MACH3/PC Software (Nonius, 1996); cell refinement: CELLFITW (Centore, 2004); data reduction: XCAD4 (Harms & Wocadlo, 1995); program(s) used to solve structure: SIR97 (Altomare et al., 1999); program(s) used to refine structure: SHELXL97 (Sheldrick, 2008); molecular graphics: ORTEP-3 for Windows (Farrugia, 2012) and Mercury (Macrae et al., 2006); software used to prepare material for publication: WinGX (Farrugia, 2012).

In the weighting scheme, P = (Fo² + 2Fc²)/3; (Δ/σ)max < 0.001; Δρmax = 0.30 e Å⁻³; Δρmin = −0.33 e Å⁻³.

Special details

Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. Several crystal specimens were tested but their quality was, in general, rather poor, as witnessed by the relatively high fraction of low-intensity reflections. The poorly diffracting nature of the crystals is the reason for the relatively high R factors.
"year": 2013,
"sha1": "0ef62f8c9ef5a5b2afdf6bf51bad0c1dc730f322",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2013/05/00/bx2439/bx2439.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ef62f8c9ef5a5b2afdf6bf51bad0c1dc730f322",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Young Adults' Knowledge of State Cannabis Policy: Implications for Studying the Effects of Legalization in Vermont
Objectives. Cannabis policy evaluations commonly assume equal policy exposure across a state’s population using date of implementation as the key independent variable. This study aimed to explore policy knowledge as another measure of exposure and describe the sociodemographic, cognitive, and behavioral correlates of cannabis policy knowledge in young adults in Vermont. Methods. Data are from the PACE Vermont Study (Spring 2019), an online cohort study of Vermonters (12-25). Bivariate and multivariable analyses estimated prevalence ratios (PR) for correlations between knowledge of Vermont’s cannabis policy (allowed possession for adults 21 and older) and sociodemographics, cannabis use, and harm perceptions in 1,037 young adults (18-25). Results. Overall, 60.1% of participants correctly described the state’s cannabis policy. Being younger, Hispanic, non-White race, and less educated were inversely correlated with policy knowledge. Ever (PR=1.37; 95% CI 1.16-1.63) and past-30-day cannabis use (PR=1.27; 95% CI 1.12-1.45) were positively correlated with policy knowledge. Policy knowledge was more prevalent among young adults who perceived slight risk of harm from weekly cannabis use (vs. no risk; aPR=1.28; 95% CI 1.11-1.48) or agreed that regular cannabis use early in life can negatively affect attention (vs. disagree; aPR=1.55; 95% CI 1.22-1.97). Conclusion. Findings suggest that 40% of Vermont young adults in the study were unaware of current state cannabis policy and that policy knowledge was lower in younger, less educated, Hispanic, and non-White young adults. Future research should explore using a measure of policy knowledge as an exposure or moderator variable to better quantify the effects of changes in cannabis legal status on perceptions and use in young people.
In 2018, Vermont became the first state to legalize possession of cannabis for adults aged 21+ through the legislative process, distinguishing Vermont from prior states that legalized cannabis through ballot initiatives (Zezima, 2018). As of July 1, 2018, individuals aged 21+ could legally possess up to an ounce of cannabis, as well as two mature and four immature plants per household, and the state eliminated penalties for limited possession by those aged 21+ (General Assembly of the State of Vermont, 2018). Vermont legalized medical cannabis and decriminalized possession prior to the 2018 policy changes (General Assembly of the State of Vermont, 2004, 2013). In 2020, Vermont became the eleventh state to legalize a taxed and regulated retail cannabis market and the second to do so through the legislature (National Conference of State Legislatures, 2021a); the state plans to open the market in 2022 (General Assembly of the State of Vermont, 2021). As of December 2021, other states (Connecticut, New York, Virginia, and New Mexico) have legalized a regulated retail cannabis market through state legislation (National Conference of State Legislatures, 2021a). Vermont's 2018 cannabis policy mirrors cannabis policies in Washington, DC at the time, and resembles legislation in Montana, Maine, New Mexico, New York, Virginia, and Connecticut during transitions to a regulated market (Commonwealth of Virginia; "Connecticut General Assembly," 2021; Lahut & Lee, 2021; Lopez, 2020; Maine State Legislature, 2021; Metropolitan Police Department, 2014; "Montana Marijuana Regulation and Taxation Act," 2020; National Conference of State Legislatures, 2021a; Victor, 2021).
Previous studies (Brooks-Russell et al., 2019;Cerda et al., 2017;Fleming et al., 2016;Paschall & Grube, 2020) have identified cannabis harm perceptions and use as key outcomes for evaluating the impact of changes to cannabis legal status on youth and young adults. Data from national surveillance have shown Vermont young adults report lower perceptions of harm from cannabis use and higher average annual cannabis initiation rates compared to the U.S. overall (SAMHSA, 2019). This is consistent with national cross-sectional data suggesting that higher cannabis harm perceptions protect against cannabis use (Terry-McElrath et al., 2017). Young adults in particular may be impacted by cannabis policy changes given cannabis is increasingly the first substance tried in adolescence (Keyes et al., 2019) and the high prevalence of alcohol and drug use among this age group Grant et al., 2004;Pearson et al., 2012;Rath et al., 2012). Since substance use behaviors developed in young adulthood may persist throughout life (Arnett, 2005), substance use prevention and early intervention are beneficial to public health. Additionally, data from a national sample of young adults suggest that changes to cannabis policies may affect behavior change, with 9% of current non-users of cannabis reporting that they would use cannabis if legalized and 14% of current users reporting they would use cannabis more often after legalization (Cohn et al., 2017). Results from studies using National Survey on Drug Use and Health data showed that young adults (aged 18-25) from states with medical cannabis had lower cannabis risk perceptions compared to young adults in states without medical cannabis policies (Schuermeyer et al., 2014;Wen et al., 2019). While many aspects of cannabis legal status may impact individual beliefs and behavior (e.g., retail market, social norms; (Carliner et al., 2017), these findings highlight that policy implementation may impact use behaviors and individual attitudes and beliefs about cannabis.
Cannabis policies vary by state. These variations may include how the policy is enacted (i.e., ballot initiative vs. state legislative process) and specific components of the law (e.g., legal to buy or sell, number of plants legal to own). A systematic review of the effect of cannabis legal status on individual beliefs highlights that these differences, including knowledge of the policy and its specific components, may impact individual beliefs about cannabis (Carliner et al., 2017). Given the relationship between cannabis policy knowledge and attitudes and beliefs about cannabis (Carliner et al., 2017), state measures of policy awareness could inform state public health communication efforts. Colorado's Responsibility Grows Here campaign, for example, focuses on responsible cannabis consumption and includes messages targeting understanding of the state's cannabis policy (Colorado Department of Public Health and Environment, 2021). Outcome evaluations may also benefit from accounting for policy knowledge in their analyses.
Existing evaluations of changes in cannabis legal status assess policy implementation based on the year in which the policy was implemented (Johnson & Guttmannova, 2019), which assumes equal policy awareness and exposure across the population. However, policy awareness may differ based on sociodemographic characteristics or experience with cannabis, which would suggest the need for more nuanced evaluations of policy implementation that account for differences between population subgroups. The goal of this study was twofold: first, to explore policy knowledge as an alternate measure of policy exposure and second, to describe the prevalence and correlates of knowledge of Vermont's cannabis policy in young adults, the age group with the highest past-month cannabis use in the state (SAMHSA, 2019).
METHODS
The Policy and Communication Evaluation (PACE) Vermont Study is an ongoing online cohort study conducted in Vermont youth and young adults aged 12-25 designed to understand the impact of state-level policies and communication campaigns on substance use beliefs and behaviors in young Vermonters. Eligible participants were Vermont residents aged 12 to 25 years who were willing to complete three 10-to 15-minute web-based surveys over a 6-month period. Recruitment was conducted by Hark, a Vermont-based digital design and marketing firm (Hark Inc), over a 10-week period (March 26-June 4, 2019). Participants were recruited via the following three main mechanisms: 1) web-based recruitment including both paid and unpaid advertising, 2) community recruitment through partner organizations, and 3) participant referrals via a personalized link. Further details on study methods are available elsewhere (Villanti et al., 2020). Participants represented each of the 14 counties in the state, with the distribution by county generally reflecting 2018 population estimates for Vermont youth and young adults, and past 30-day substance use estimates in the PACE Vermont sample were similar to those estimated in the National Survey on Drug Use and Health (Vermont Department of Health, 2019; Villanti et al., 2020). The study was approved by the University of Vermont and Vermont Department of Health's Institutional Review Boards and received a Certificate of Confidentiality from the National Institutes of Health. Data for the current analyses were limited to the 1,037 young adults aged 18-25 who completed Wave 1 (March 26-June 4, 2019) of the PACE Vermont Study. The current study focuses on young adults, the age group with the highest prevalence of past 30-day cannabis use in Vermont (SAMHSA, 2019).
Measures
Knowledge of state cannabis policy. The term "marijuana" was used throughout the survey rather than "cannabis" to reflect language used by large national and state-level surveys (e.g., Vermont Youth Risk Behavior Survey, Monitoring the Future, National Survey on Drug Use and Health; Jones et al., 2020; Miech et al., 2020; SAMHSA, 2019; Schulenberg et al., 2020). To assess knowledge of cannabis law, all participants were asked, "Marijuana law recently changed in Vermont. Which of the following best describe Vermont's new marijuana law?" with the following response options: 1) "Legal for anyone to use," 2) "Legal for people 21+ to use," 3) "May use in public," 4) "Allowed for medical use," 5) "May own up to two plants," and 6) "Legal to sell." Respondents were asked to select all applicable choices. All responses to this item were categorized as either "correct marijuana policy knowledge" or "incorrect marijuana policy knowledge." Participants were incorrect if they selected "Legal to sell," "May use in public," or "Legal for anyone to use," as these were incorrect statements about key components of the law. Young adults who did not select any of the incorrect responses were considered to have correct knowledge if they 1) selected "Legal for people 21+ to use" and "May own up to two plants," or 2) selected "Allowed for medical use," "Legal for people 21+ to use," and "May own up to two plants." Correct responses were considered with or without inclusion of "Allowed for medical use" as medical use has been legal in Vermont since 2004 but was not included in the 2018 legal status change (National Conference of State Legislatures, 2021a, 2021b); therefore, some participants may not have selected "Allowed for medical use" despite it being part of current cannabis law.
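For concreteness, the coding rule above can be expressed as a short decision procedure. The sketch below is illustrative only: the option labels are taken from the survey item quoted above, but the function name and data structures are ours, not the study's actual codebook.

```python
# A minimal sketch of the knowledge-coding rule described above.
INCORRECT_OPTIONS = {"Legal to sell", "May use in public", "Legal for anyone to use"}
REQUIRED_OPTIONS = {"Legal for people 21+ to use", "May own up to two plants"}
# "Allowed for medical use" may be selected or omitted without affecting the coding.

def has_correct_policy_knowledge(selected: set) -> bool:
    """Code a select-all-that-apply response as correct (True) or incorrect (False)."""
    if selected & INCORRECT_OPTIONS:     # any incorrect statement -> incorrect
        return False
    return REQUIRED_OPTIONS <= selected  # both key components must be selected

# Correct with or without the medical-use option; incorrect statements dominate.
print(has_correct_policy_knowledge(
    {"Legal for people 21+ to use", "May own up to two plants"}))            # True
print(has_correct_policy_knowledge(
    {"Allowed for medical use", "Legal for people 21+ to use",
     "May own up to two plants"}))                                           # True
print(has_correct_policy_knowledge({"Legal for anyone to use"}))             # False
```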
Cannabis beliefs. Cannabis harm perceptions were assessed with the item "How much do you think people risk harming themselves (physically or in other ways) if they use marijuana weekly?" Response choices were "great risk," "moderate risk," "slight risk," and "no risk." Participants were also asked to identify the substance in cannabis that makes a person high, with response options "CBD," "THC," "Neither," "Both," or "Don't know." Responses were collapsed to three categories: 1) correctly identified THC only, 2) did not identify THC only (i.e., "CBD," "Neither," "Both"), and 3) don't know. Beliefs about the effects of cannabis use were assessed by agreement ("Strongly agree," "Agree," "Disagree," "Strongly disagree," or "Don't know") with the following statements developed from evidence presented in a government report by the Vermont Department of Health (Vermont Department of Health, 2016): a) "Regular marijuana use during early years of life can negatively affect attention and memory in adulthood;" b) "Teens who use marijuana weekly or more often have twice the risk of depression or anxiety;" c) "Approximately 1 in 6 teens who start using marijuana before age 14 develop addiction;" and d) "Teens who use marijuana have lower academic performance and worse job prospects-and those who continue using marijuana regularly show a decrease in IQ 20 years later." Responses were collapsed into three categories: agree ("strongly agree" and "agree"), disagree ("strongly disagree" and "disagree"), and don't know.
Cannabis use. Respondents received the following statement before the cannabis use survey items: "The next questions are about marijuana use. Marijuana also is called pot, weed, or cannabis. Marijuana is usually smoked, either in cigarettes, called joints, or in a pipe. It is sometimes cooked in food or used in concentrates. Hashish is a form of marijuana that is also called 'hash.' One form of hashish is hash oil. These questions do not relate to the use of cannabidiol (CBD) products." Ever use of cannabis was measured with "Have you ever, even once, used marijuana or hashish?" Respondents chose from the response options "yes," "no," and "I don't know," and ever use was coded as a binary variable (ever use = 1; never use and "don't know" = 0). Ever users were asked "How long has it been since you last used marijuana or hashish?" with current use collapsed into a dichotomous variable (1 = use in the past 30 days, 0 = no use in the past 30 days).
Covariates. Sociodemographic measures included age (grouped as 18-20 years and 21-25 years), sex assigned at birth, race, ethnicity, and education completed. Subjective financial status was included as a proxy for socioeconomic status in young adulthood (Williams et al., 2017). Respondents were asked "Considering your own income and the income from any other people who help you, how would you describe your overall personal financial situation? Would you say you:" with the following response options: 1) "Live comfortably," 2) "Meet needs with a little left," 3) "Just meet basic expenses," and 4) "Don't meet basic expenses."
Data Analysis
Survey weights were developed post-hoc from population estimates of females and males between the ages of 12 and 25 (year by year) residing in each of Vermont's 14 counties in 2017 (the most current data available at the time of analysis) to correct for higher response by females and those residing in the most populous county (Chittenden County).
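In code, such post-hoc weighting can be sketched as a cell-level ratio of population counts to sample counts. Everything below (data frame layout, column names) is an illustrative assumption rather than the study's actual pipeline, which was run in Stata:

```python
import pandas as pd

def poststratify(sample: pd.DataFrame, population: pd.DataFrame) -> pd.DataFrame:
    """Attach a post-stratification weight to each respondent.

    sample:     one row per respondent with 'sex', 'age', and 'county' columns.
    population: one row per sex x age x county cell with a 'pop_count' column
                holding the 2017 population estimate for that cell.
    All names here are hypothetical stand-ins.
    """
    cells = ["sex", "age", "county"]
    counts = sample.groupby(cells).size().reset_index(name="n_sample")
    out = sample.merge(counts, on=cells).merge(population, on=cells)
    out["weight"] = out["pop_count"] / out["n_sample"]  # cell population / cell sample size
    return out
```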
All analyses were conducted using survey (svy) procedures in Stata/SE statistical software version 16 (StataCorp LP) to account for survey weighting. Missing data (range of item-level missingness: 0%-2.8%) were handled through listwise deletion. Bivariate analyses examined differences in sociodemographics and ever and past-30-day cannabis use stratified by cannabis policy knowledge (correct vs. incorrect knowledge). Given the high prevalence of cannabis policy knowledge, multivariable modified Poisson regression models (Zou, 2004) were used to estimate the association between cannabis policy knowledge and cannabis harm perceptions, and knowledge of the psychoactive substance in cannabis, adjusted for age, sex, race and ethnicity, subjective financial status, and past-30-day cannabis use.
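A minimal Python analogue of the modified Poisson approach (Zou, 2004) is a Poisson GLM on the binary outcome with a robust sandwich variance, whose exponentiated coefficients are prevalence ratios. The data file and all column names below are placeholders, and this simplified sketch does not reproduce Stata's full svy variance estimation:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pace_vermont_wave1.csv")  # hypothetical analytic data set

X = sm.add_constant(df[["age_21_25", "female", "non_white",
                        "fin_status", "past30_use"]])
model = sm.GLM(
    df["correct_policy_knowledge"],       # binary outcome (1 = correct knowledge)
    X,
    family=sm.families.Poisson(),
    freq_weights=df["survey_weight"],     # crude stand-in for survey weighting
)
fit = model.fit(cov_type="HC1")           # robust sandwich variance (modified Poisson)
prevalence_ratios = np.exp(fit.params)    # exponentiated coefficients = aPRs
ci = np.exp(fit.conf_int())               # 95% CIs on the prevalence-ratio scale
```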
RESULTS
The weighted sample of 1,037 young adults was primarily non-Hispanic White (84.3%) and approximately half were female (52.1%) with a mean age of 21.2 (SD=2.2) years (Table 1). When asked about their subjective financial status, most young adults in the sample met their needs with a little left or lived comfortably (69.9%). In addition, most had at least some college education (69.7%). Most of the sample reported ever cannabis use (70.6%), with 41.3% reporting past 30-day cannabis use.
Sixty percent of respondents reported correct knowledge of all aspects of Vermont's cannabis policy (Table 2). When asked, "Marijuana law recently changed in Vermont. Which of the following best describe Vermont's new marijuana law?" most young adults correctly indicated that cannabis was legal for people 21+ to use (91.5%) and for medical use (71.8%). Most participants accurately indicated that cannabis was not legal for anyone to use (98.3%), not allowed for public use (93.9%), and not legal to sell (94.8%). Most young adults correctly stated that the state's cannabis policy allowed the ownership of up to two plants (71.6%). A small proportion of young adults with correct knowledge responded "No" to whether cannabis was allowed for medical use (12.4%). Note. Abbreviations: PR, prevalence ratio. All findings account for survey weights. Number of observations missing data on the following variables: sex assigned at birth (n = 1); subjective financial status (n = 2); past 30-day marijuana use (n = 5). *"Other" race categorized by respondents who selected one of the following races: American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander. Number of observations responding "don't know" to ever marijuana use: n = 5. In a series of multivariable analyses adjusting for age, sex, race and ethnicity, subjective financial status, and past-30-day cannabis use, young adults who reported slight risk of harm from weekly cannabis use had a higher prevalence of cannabis policy knowledge (aPR=1.28; 95% CI, 1.11-1.48) than those who reported no risk (Table 3). Young adults who identified THC as the substance in cannabis that makes a person high had a greater prevalence of correct knowledge of cannabis policy (aPR=1.91; 95% CI, 1.30-2.79) than those who incorrectly identified THC as the psychoactive substance in cannabis. Participants who agreed that "regular marijuana use early in life can negatively affect attention" (aPR=1.55; 95% CI, 1.22-1.97) and young adults who did not know if early cannabis use impacts attention (PR=1.44; 95% CI, 1.06-1.95) had higher prevalence of cannabis policy knowledge than those who disagreed with this statement. Policy knowledge was more prevalent among young adults who responded that they did not know whether "one in six teens who start using marijuana before age 14 develop addiction" than those who disagreed with the statement (aPR=1.20; 95% CI, 1.01-1.43). Correct knowledge of the policy was not associated with responses to the following items: 1) "Teens who use marijuana have lower academic performance and worse job prospects" and 2) "Teens who use marijuana weekly or more often have twice the risk of depression." Note. Abbreviations: aPR, adjusted prevalence ratio. All modified Poisson models adjusted for age, sex, race/ethnicity, subjective financial status, and past 30-day marijuana use and account for survey weights. Number of observations missing data on the following variables: perceived risk of harm from weekly marijuana use (n = 1); effect of marijuana on attention (n = 1); effect of early marijuana use on addiction (n = 2); effect of early marijuana use on academic performance (n = 2); effect of weekly marijuana use on depression risk (n = 2).
DISCUSSION
Approximately 60% of Vermont young adults in the study correctly identified the state's cannabis policy in 2019 in the survey. This is a high proportion of Vermont young adults respondents with correct policy knowledge given that the law was enacted by the state legislature and likely received less political advertising than a public ballot measure. Correct knowledge of cannabis policy was associated with being older, non-Hispanic White, and more educated, as well as with past-month and ever cannabis use.
Compared to those who believed weekly cannabis use poses no risk, policy knowledge was more prevalent among young adults who believed cannabis use poses a slight risk of harm. Cannabis policy knowledge was associated with several cannabis beliefs, including identifying THC as the substance in cannabis that makes a person high, agreement that regular cannabis use early in life can negatively affect attention, and the impact of early cannabis use on addiction. Cannabis policy knowledge among young adults was not associated with other beliefs that cannabis use leads to depression or low academic and work performance. Differences in cannabis harm perceptions and knowledge among young adults may be explained by cannabis use status (Berg et al., 2015;Terry-McElrath et al., 2017), with more experienced users having a higher awareness of the policy.
On the other hand, 40% of young adult respondents did not demonstrate knowledge of state cannabis policy, highlighting variation in policy knowledge after implementation. Given that existing evaluations of state-level changes in cannabis legal status in youth and young adults rely on the dates of policy implementation, our findings indicate it may be important to account for policy knowledge in these evaluations (Brooks-Russell et al., 2019;Cerda et al., 2017;Fleming et al., 2016;Paschall & Grube, 2020). Associations between ever and current cannabis use and policy knowledge may indicate how policy awareness impacts population subgroups differently. Policy knowledge, therefore, could be used in several ways in sensitivity analyses to gain a more unbiased estimate of the effect of cannabis legalization on young adult beliefs and behaviors -as an alternate measure of policy exposure, as a control variable, or as a potential moderator. For example, current estimates may underestimate the effect of cannabis policy on behavior by grouping those without knowledge unlikely to change their behavior with those who have knowledge of the policy and may have considered or changed their behavior as a result. Using policy knowledge as an exposure variable in sensitivity analyses may identify an upper bound for the likely effect of cannabis policy change on key outcomes of interest; these estimates would be useful in modeling the expected long-term impacts of the policy. Second, findings that cannabis policy knowledge may differ by cannabis harm perceptions and use behavior in young adults are particularly salient as more states legalize cannabis and seek to evaluate policy effects (National Conference of State Legislatures, 2021b). Controlling for policy knowledge may reduce variability in findings across states and provide greater insight into the effects of changes in cannabis legal status on youth and young adult beliefs and behaviors. Third, using policy knowledge as a potential moderator of the relationship between date of policy implementation and outcomes of interest may identify differential patterns in change relevant to public health education efforts -for example, there may be greater changes in certain beliefs about cannabis use among those with policy knowledge that could be targeted in health communication programs. Assessment of policy knowledge may also identify subgroups of the population at risk for greater cannabis use following policy implementation and inform efforts to prevent cannabis uptake and use.
Strengths of the current study are a large online sample of young adults from across the state of Vermont, relevance to changes in state cannabis policy across the U.S., and data collection within nine months of the policy implementation. While this timeframe allowed for Vermonters to be affected by the policy, news about the change in legal status likely occurred months before our data collection. Other limitations of this study include: a convenience sample, the use of cross-sectional data, no questions about medical cannabis use, and a lack of data prior to the policy implementation in 2018. The sample was limited to participants from a small, largely rural, and non-Hispanic White state. Vermont's homogeneity was represented in the sample and prevented detailed analyses of cannabis policy knowledge by race and ethnicity. Prior to the 2018 cannabis policy change, Vermont young adults had a higher prevalence of past 30-day cannabis use (SAMHSA, 2017) and reported lower perceptions of harm from cannabis use compared to the national prevalence (Moss et al., 2018). A high prevalence of use and low harm perceptions may impact the representativeness of the current study results and may influence the associations between cannabis policy knowledge and beliefs. Additionally, policy knowledge is only one of several mechanisms by which policy impacts beliefs and behaviors (e.g., access, availability of cannabis, retailer licensing; Pedersen et al., 2021) and there are other outcomes related to cannabis legalization relevant to population health (e.g., criminal justice; Firth et al., 2019). Our analysis is limited to policy knowledge, though future policy evaluations will need to consider the various mechanisms by which changes in cannabis legal status impact a range of health outcomes to adapt state-level programming, and potentially the policies themselves, to protect public health.
Conclusion
Evidence from Vermont young adult respondents suggests that knowledge of changes in cannabis legal status is greater among older young adults (aged 21-25 years), who are of legal age to possess cannabis in Vermont, females, non-Hispanic White young adults, those with the highest education, and ever and current cannabis users. While sociodemographic factors are typically treated as covariates in existing policy studies, future evaluations of changes in state cannabis legal status that account for policy knowledge may improve estimation of the impact of policy change on harm perceptions and use of cannabis. The large proportion of young adults with correct policy knowledge, combined with a higher prevalence of policy knowledge among past 30-day cannabis users and young adults with low perceived risk of regular cannabis use, signals novel opportunities for state-level education on cannabis to ensure all young adults have accurate policy knowledge and are informed of the potential harms of cannabis use. Most young adults correctly understood the policy, and nearly all respondents correctly identified that cannabis was not "legal for anyone to use" and that it was "legal for people 21+ to use." A notable portion (40%), however, did not accurately identify all aspects of the policy, underscoring the potential to misattribute behavior change to policy implementation. Assessment of policy knowledge could be used in future evaluations to better estimate the effects of change to cannabis legal status on cannabis use behavior and beliefs and to inform public health efforts to prevent or reduce cannabis use in young people. | 2022-11-23T16:20:49.144Z | 2022-11-21T00:00:00.000 | {
"year": 2022,
"sha1": "0a005412a424aa564e7a72c0fba4dcdec7ae0d63",
"oa_license": "CCBYNCND",
"oa_url": "https://publications.sciences.ucf.edu/cannabis/index.php/Cannabis/article/download/116/76",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "be365297ecf6310140663b48cb709264b7101de3",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9226698 | pes2o/s2orc | v3-fos-license | Spectroscopic evidence of odd frequency superconducting order
Spin filter superconducting S/I/N tunnel junctions (NbN/GdN/TiN) show a robust and pronounced Zero Bias Conductance Peak (ZBCP) at low temperatures, the magnitude of which is several times the normal state conductance of the junction. Such a conductance anomaly is representative of unconventional superconductivity and is interpreted as a direct signature of an odd frequency superconducting order.
A drop in resistance is observed below 14 K due to the superconducting transition of the NbN layer. Below 14 K, the observed rise in low bias resistance is due to the opening of the NbN gap and the freezing out of sub-gap conductance. This rise of resistance below 14 K is reflective of a decreasing sub-gap resistance R_S to normal state resistance R_n ratio at low temperatures, and is therefore a signature of good quality junctions 21. The drop in resistance below 4 K is due to the evolution of a zero bias conductance peak. To confirm the non-superconducting nature of the TiN used in these experiments, the temperature dependence of the resistance of un-patterned films of TiN/GdN, grown in the same deposition run as that of the junctions, is shown in the bottom inset to Fig. 1. This shows no detectable superconducting transition above 1.6 K (the temperature limit of the cryostat used for Resistance vs Temperature (RT) measurements).
We observed that a wide range of properties can be obtained in TiN films by altering the nitrogen concentration. In order to obtain non-superconducting TiN, we have tuned the nitrogen concentration (8%) in the sputtering gas mixture.
In Fig. 2, we show the differential conductance curves of a junction with a 3 nm GdN barrier. The curves clearly show the emergence of a strong ZBCP as the junction is cooled to low temperatures. Identical characteristics have been found in all eight junctions on the same chip, and similar characteristics have also been found in all eight junctions on the chip of a tunnel junction with a thinner 2 nm GdN barrier, which has spin polarization (P) ~65% at 4 K. The ZBCP in all junctions is extremely robust, reproducible, and independent of magnetic field history. The behaviour of these S/I/N junctions at temperatures above which the ZBCP disappears (> 3 K) is well understood and has been addressed in detail in a previous publication 17.
It has been theoretically predicted that for spin active interfaces, in the tunnelling limit, a subgap state appears due to spin-dependent phase-shifts 22. This interface state is manifested via strong conductance peaks at voltages eV = ±Δ0 cos(ϑ/2), where ϑ is the spin-dependent phase shift that is present due to the FI. For ϑ = π the state is pinned to the Fermi level (zero bias). The appearance of this interface state is intimately linked to odd-frequency pairing 13.
ZBCPs are known to occur in several superconducting systems and for a variety of underlying physical mechanisms 4,23-25. A ZBCP analogous to our experimental observation is the case of d-wave superconductors 24, where it occurs due to the sign change of the order parameter at regions in the a-b plane.
For s-wave superconductors, an analogous phenomenon can be observed for a sign change of the spin dependent phase shift due to the FI, which translates to a phase shift of π. Such strong phase shifts 26,27 can be obtained when (a) quasiparticles normal to the interface are the major contributors to the transport process, (b) spin polarization by the barrier is high, and (c) the barrier profile is not sharp. All the above conditions are met by an NbN/GdN/TiN tunnel junction system, especially one of high spin polarization. An order of magnitude difference between the Fermi vectors of NbN 28 and TiN 29 results in quasiparticles normal to the interface being the major contributors to the transport. A previous study has shown that the NbN/GdN barrier is different from a conventional box type potential barrier, as a Schottky barrier forms at the NbN/GdN interface 20. The fact that all the conditions for obtaining a large spin-dependent phase-shift at the interface are met, taken in conjunction with the fact that the conductance spectra demonstrate a ZBCP, is a clear indication that this phase-shift is likely to have a value very close to π.
Theoretical model. The experimental data in Fig. 2 can be modelled by the theoretical conductance of an S/FI/N structure with a spin-dependent phase-shift close to π, as shown in Fig. 3. The conductance for a ballistic S/FI/N structure has been studied previously 22, and we have followed their analysis when fitting our experimental data. In the tunneling limit, we neglect the suppression of the superconducting order parameter and use the expression for the current density across the junction given in Eq. (1). The following quantities are defined in Eq. (1): D = D↑ + D↓, J_N is the current density when the superconductor is in its normal state (J_N ∝ D), D_σ and R_σ are the probability coefficients for transmission and reflection of spin σ carriers, respectively, β = (k_B T)^(-1), V is the applied voltage, T is the temperature, E is the quasiparticle energy, Δ is the superconducting gap, and ϑ is the spin-dependent phase-shift due to the magnetic barrier; the remaining auxiliary quantity is defined piecewise as acos(E/Δ) for |E| < Δ and acosh(E/Δ) for |E| > Δ (Eq. (2)). For the theoretically simulated conductance plots, we have differentiated Eq. (1) with respect to voltage and normalized the conductance against the normal-state conductance obtained at large voltages eV ≫ Δ. To model inelastic scattering, we have incorporated a Dynes parameter via the relation E → E + iΓ, where Γ provides the quasiparticles with a finite lifetime. The model also accounts for the large difference in tunnelling probability for majority and minority carriers, as expected for a strongly polarized FI. The temperature-evolution of the conductance spectra matches only qualitatively: the ZBCP vanishes experimentally more rapidly with temperature than in the theory, and the reason for this is unclear. However, it must be noted that the temperature dependence of the ZBCP is consistent with previous experimental observations of ZBCPs of qualitatively similar origin. STM measurements of the LDOS in Nb/Ho systems (due to odd frequency triplet superconductivity) observed the ZBCP disappearing at 660 mK 7, far below the superconducting transition of the Nb used in the experiment (T_c,Nb ~6.6 K; please refer to the supplementary information section of ref. 7), while ZBCPs in YBCO (originating from the sign change of the order parameter in d-wave superconductors) were only observed up to 40 K and 60 K (T_c,YBCO ~90 K) in refs 24 and 30, respectively. We therefore assume that the temperature dependence arises due to aspects of the theory which have not been fully understood.
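Since the display form of Eq. (1) is not reproduced above, a full reimplementation of the spin-active conductance is not attempted here. The sketch below illustrates only the two generic ingredients named in the text, applied to a plain BCS-like density of states: Dynes broadening via E → E + iΓ, and thermal smearing of the normalized tunneling conductance. The gap value, energy grid, and helper names are our own assumptions, not taken from ref. 22; with a plain BCS DOS this yields the usual gapped spectrum rather than a ZBCP, since it is the spin-active terms of Eq. (1) that move spectral weight to zero bias.

```python
import numpy as np

kB = 8.617e-5                     # Boltzmann constant (eV/K)
Delta0 = 2.6e-3                   # illustrative NbN-like gap (eV); assumed value
Gamma = 0.05 * Delta0             # Dynes parameter, as in the Fig. 3 caption

def dynes_dos(E):
    """BCS density of states with Dynes broadening E -> E + i*Gamma (normalized)."""
    Ec = E + 1j * Gamma
    return np.abs(np.real(Ec / np.sqrt(Ec**2 - Delta0**2)))

def didv(V, T):
    """Thermally smeared, normalized conductance: DOS convolved with -df/dE."""
    E = np.linspace(-10 * Delta0, 10 * Delta0, 20001)
    x = (E - V) / (kB * T)
    kernel = 1.0 / (4 * kB * T * np.cosh(x / 2) ** 2)   # -df/dE at energy E - eV
    return np.sum(dynes_dos(E) * kernel) * (E[1] - E[0])

# Bound-state condition quoted in the text: eV = +/- Delta0*cos(theta/2),
# which sits essentially at zero bias for a spin-mixing angle of 0.98*pi.
theta = 0.98 * np.pi
E_bound = Delta0 * np.cos(theta / 2)        # ~0.03 * Delta0

V = np.linspace(-3 * Delta0, 3 * Delta0, 301)
G = np.array([didv(v, T=4.0) for v in V])   # dimensionless; -> 1 for |eV| >> Delta0
```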
Conclusions
We have not seen oscillatory behaviour in the intensity of ZBCPs with the application of magnetic field, thus ruling out the possibility of attributing the observed ZBCP to possible Majorana bound states 4. ZBCPs occurring due to Kondo effects should, on application of an external magnetic field, separate out into double-peak structures 31. The strong intensity of the ZBCP (3.5 times the normal state conductance) rules out other possibilities like de Gennes-Saint-James resonances 23 or a pin-hole mediated junction, which in accordance with the BTK theory 32 should give rise to a maximum ZBCP intensity of twice the normal state conductance. ZBCPs could also occur due to the TiN layer turning superconducting, thus facilitating a critical current. However, the monotonic field-suppression and the observation of the ZBCP at high magnetic fields clearly indicate that Josephson effects do not cause the ZBCPs. Moreover, the top inset to Fig. 1 shows that the ZBCPs start to evolve at 2.8 K, 3.8 K and 3.6 K for 3 nm, 2 nm and 1 nm barrier thicknesses, respectively. Since the TiN layers for all these films were grown without breaking the vacuum and with the same plasma, this non-monotonic behaviour cannot be related to any possible superconductivity in TiN. However, such temperature dependence again points to an incomplete understanding of the theoretical origins of ZBCPs for unconventional superconducting orders. For a more detailed analysis, which rules out superconductivity in the TiN layer, please refer to the supplementary information section. Hence, none of the above possibilities are suitable in explaining the observed ZBCPs in our experiment.
The ZBCPs in NbN/GdN/TiN tunnel junctions therefore clearly establish an unconventional non-BCS type DOS, indicating odd frequency superconductivity evolving at NbN/GdN interfaces. The current discovery of odd frequency pairing is not only relevant for understanding superconductivity beyond the conventional scope of BCS theory, but also firmly establishes FIs as important material systems for developing active devices for superconducting spintronics 33.
Methods
The trilayered films of NbN/GdN/TiN are grown without breaking the vacuum in an ultra high vacuum chamber, by means of reactive dc magnetron sputtering in an atmosphere of argon and nitrogen. TiN is here grown as a (non-superconducting) metallic layer. Mesa type tunnel junctions were fabricated from sputtered tri-layered films by means of a fabrication procedure described elsewhere 21. The only difference was that instead of plasma etching, TiN had to be Ar ion milled controllably. Measurements were performed using a 3He dip probe in a closed cycle liquid helium cooled variable temperature insert capable of cooling down to 0.3 K. Spin polarization was calculated from resistance vs temperature measurements using a procedure described in a previous publication 16.
Figure 1. Temperature dependence of junction resistance. A 3 nm GdN junction measured at low bias. Top inset shows the Resistance vs Temperature (RT) dependence below the superconducting transition of the NbN layer for junctions of three thicknesses (1, 2 and 3 nm). Bottom inset shows the RT dependence of a bilayer film of GdN and TiN to demonstrate the absence of a superconducting transition in the TiN films used in this work.
Figure 2. Evolution of the Zero Bias Conductance Peak (ZBCP) with temperature. Differential conductance (dI/dV) measurements, normalised to the normal state conductance, of a 100 nm NbN/3 nm GdN/30 nm TiN tunnel junction showing the evolution of a ZBCP with decreasing temperature. Inset to the figure shows the temperature dependence of the intensity of the ZBCP.
Figure 3. Theoretical dI/dV curves for an S/FI/N junction as a function of applied voltage eV. Following the framework of ref. 22, we have used transmission probabilities D↑ = 0.20 and D↓ = 0.015 for each spin species, a spin-mixing angle of 0.98π, and set the Dynes parameter to 0.05Δ0. Inset: the formation of a zero-energy bound state at the interface due to spin-active Andreev reflection (AR) by the gap Δ0 (indicated by red circles), where an additional phase-shift close to π is picked up by the quasiparticles. | 2017-08-03T22:09:22.143Z | 2017-01-20T00:00:00.000 | {
"year": 2017,
"sha1": "300901762be6ec4f2c9d98d1b93eb41c88aa3a84",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep40604.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "300901762be6ec4f2c9d98d1b93eb41c88aa3a84",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
258430319 | pes2o/s2orc | v3-fos-license | Ocean Acidification as a Hyperobject: Mediating Acidic Milieus in the Anthropocene
Through the usage of Timothy Morton's hyperobjects (2013) as a heuristic, this essay aims to portray how Ocean Acidification can be read as a hyperobject affecting tropical seawaters and beyond. Furthermore, it illustrates how the arts and humanities, through their hermeneutical gaze, might help us grasp Ocean Acidification as a hyperobject and the wide array of other objects that act upon each other in such acidic oceanic waters. In this task, the article will close-read the Underwater Woman set of pictures by Christine Ren (2018), understanding the interpretation of art as a tool to reconnect cognition and emotion and to move from the understanding of a crisis to the feeling of such a crisis. Finally, it aims to shed light upon the implications arising from considering Ocean Acidification as a hyperobject. By connecting the theoretical, visual and political in the same narrative, this essay highlights the transformative potential of interpretation and thinking through hyperobjects. With this, the challenges of the Anthropocene are put at the forefront, situating specific events and problematics on a planetary scale.
In her work called The Underwater Woman (2018), Christine Ren presents the audience with different sets of images that trigger the viewers to recast and reconsider different problems and crises that are happening below the sea surface. In a storytelling manner and with an eclectic set of skills and disciplines that mixes photography, apnea, performance, dance and marine science, Ren dives deep into the issues that are threatening the seawaters of the Earth in a local and global sense. From jellyfish blooms due to the unbalance of ecosystems caused by climate change to trawling nets and overfishing, the evocative and performative media Ren creates allows the viewers to immerse themselves underwater and expand their understandings of the imagery common to the current anthropocentric view of Western rationalist ideology. In this sense, the conjunction between the performative arts and biology becomes one in order to broaden the perspectives of the viewers, unifying disciplines that have been separated in their discourses for many decades.

Image: Ren, 2018

This essay aims to utilize an Environmental Humanities and New Materialism lens in order to demonstrate how Ocean Acidification (OA) can be read as a hyperobject, immersing the analysis in an underwater milieu that challenges terrestrial Western ontological and phenomenological structures. Thinking through hyperobjects means that the different problems that OA causes in the tropics need to be situated on a planetary scale in the Anthropocene. As the Planetary Boundaries have shown 1, OA, species extinction and global warming are part of the same problem and should be understood as interrelated. Furthermore, the essay seeks to portray how the arts and humanities, through their hermeneutical gaze, might help us grasp OA as a hyperobject in relation to the different objects that act upon each other in acidic waters. By close-reading the Underwater Woman pictures by Christine Ren that deal with OA, this article thus argues that artistic representations can be crucial for generating awareness in people unacquainted with current climate science. This process reconnects cognition and formal knowledge with the world of emotions. Finally, it ambitiously aims to shed light upon the different implications of coming to terms with OA as a hyperobject interconnected with other agents. As such, the ocean becomes an element that helps us theorize the materiality of history, placing itself as a space of narratives that can challenge anthropocentrism and the given stories of human history.

1 The Planetary Boundaries frame the safe operating space for humanity in the Anthropocene, proposing nine processes that regulate the stability and resilience of the Earth system. Crossing one of them is thought to have non-linear catastrophic planetary consequences. These are: stratospheric ozone depletion, loss of biosphere integrity, chemical pollution and the release of novel entities, climate change, ocean acidification, freshwater consumption and the global hydrological cycle, land system change, nitrogen and phosphorus flows into the biosphere and oceans, and atmospheric aerosol loading (Rockström et al., n.p.).
In the turmoil of the vast number of global ecological crises currently ongoing, this essay focuses on the set of Ren's images that deal with Ocean Acidification. It is not a random choice but, in fact, one quite helpful for making clear that certain crises flowing invisibly with uneven temporal and spatial frames can be rendered visible in art. The artworks mediate these crises in ways accessible to the human eye by enacting a union between the rational findings of factual sciences and the hermeneutic and cathartic nature of certain artistic disciplines. In other words, by connecting the theoretical, visual and political in the same narrative complementing each other, this essay illustrates the transformative potential of interpretation and thinking through hyperobjects with the help of OA and Ren's work. Such a study reveals that Western rationality and the anthropocentrism of the current epoch might not be the best practice for coming to terms with events that happen in temporalities and spaces that human rationality itself cannot fully experience. Hence, mediations and a unity between the sciences and the performative arts might break the outdated division between them in a crisis that calls for cooperation, attentiveness, and care in a non-anthropocentric way.
With this, the challenges of the Anthropocene are put at the forefront, situating specific events and problematics on a planetary scale. That is, this article aims to contribute to the dismantling of the problematic "rationalism" by immersing analysis in an underwater and acidic milieu that will expand and reshape ontology, phenomenology and epistemology. By doing this, the rationalist and anthropocentric view that has dominated hegemonic capitalist human culture is challenged, arguing that the arts and the humanities together with scientific findings can steer the viewer towards a better and more egalitarian comprehension of the ecological crisis, weirding the coherence of the rational world as we know it.
Acidic Waters
Since the Industrial Revolution, the pH value of the oceans has fallen from 8.2 to 8.1. Given the logarithmic nature of the pH scale, this drop represents an average increase in acidity of roughly 30% over the past two centuries, as the ocean acts like a sponge to capture and store CO2 released into the atmosphere; one has to go back 55 million years to find a similar process in terms of Ocean Acidification (OA) (Hayes 3). OA occurs when CO2 is absorbed into seawater at a high rate. When this absorption takes place, chemical reactions happen. CO2 reacts with water molecules (H2O) to form carbonic acid (H2CO3). This, in turn, breaks down into a hydrogen ion (H+) and bicarbonate (HCO3-), generating, with the presence of all these hydrogen ions, a decline in the pH of the water or, in other words, acidifying seawater (NOAA n.p.). 2 These reactions "reduce the seawater pH, carbonate ion concentration and saturation states of biologically important calcium carbonate minerals" (NOAA n.p.). In areas that are bountiful in terms of sea life, seawater tends to be supersaturated with calcium carbonate minerals. Calcium carbonate minerals are the foundation of skeletons and shells of a wide array of marine lifeforms. Thus, ongoing acidification is de-saturating many oceanic ecosystems, which become undersaturated with calcium carbonate minerals, affecting the ability of some organisms to produce and maintain shells (NOAA n.p.). To put it another way, and although fully portraying this goes beyond the scope of this article, there is a historical-materialist connection between the history of capitalism and OA, and how capitalism has generated the unbalanced and multispecies/multiobject nature of the Anthropocene that, sooner or later, will have a global presence due to the transgression of the Planetary Boundaries.
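Because pH is a base-10 logarithm of hydrogen-ion concentration, the cited percentage can be checked with one line of arithmetic. The snippet below is a back-of-envelope illustration; the endpoints 8.2 and 8.1 are the rounded values given above, which is why it yields roughly 26% rather than exactly 30%.

```python
# pH = -log10([H+]), so a pH drop of 0.1 raises [H+] by a factor of 10**0.1.
def h_ion(pH: float) -> float:
    return 10.0 ** (-pH)   # hydrogen-ion concentration in mol/L

increase = h_ion(8.1) / h_ion(8.2) - 1
print(f"Relative increase in [H+]: {increase:.1%}")   # ~25.9%
```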
Yet, acid oceans can only be perceived through mediation. This means that bodies need to be submerged over a notable period of time to start sensing these chemical changes in the environment. Unlike other events such as coral bleaching, it is not visually spectacular and, unlike human-generated debris at sea, it is more difficult to conceptualize and isolate as an event (Hayes 3). OA is thus rather invisible and less immediate in terms of aggressiveness due to its chemical nature, compared to other stressors such as sea-surface temperature rises, trawling or seafloor mining. Yet, in such an acidic milieu, bodies are undone, put at stake and recast in their uncanny surroundings.
Here it is pivotal to acknowledge the obvious yet oft-denied fact that human culture is always inherently connected with the physical world, despite claims that humanity has disentangled itself from the world through technology. Recalling our entanglements now, in the Anthropocene, requires new ways of perceiving ecology, such as theoretical and mediative modes of understanding that describe the current epoch as constituted by infinite flows and forces (Alaimo 16). In this light, the ocean is no longer the aqua nullius 3 realm. Human intervention and capitalist accumulation have affected in one way or another most of the Earth's oceans and seas. These seas and oceans are now "understood in terms of its agency, its anthropogenic pollution and acidity, and its interspecies ontologies-all of which suggest that climate change is shaping new oceanic imaginaries" (DeLoughrey 34). In other words, it allows human beings to reconsider their existence and challenge established beliefs as meaning itself is put into question. Therefore, thinking ecologically here makes us situate the thinking body in a specific ecological field, understanding the roles that the different materialities play and how they constitute the exceptional relations between the body and its environment in the Anthropocene (Hayes 21).
As Timothy Morton (2018) pointed out, ecological awareness is a "detailed and increasing sense, in science and outside of it, of the innumerable relationships among lifeforms and between life and non-life" (Being Ecological 128). Therefore, theoretically speaking, the task of the Environmental Humanities is not just coming to terms with this statement, but also to figure out what this interconnection means in the realms of sensing and acting. Thus, here Morton's concept of hyperobject (2013) becomes of noteworthy relevance. Hyperobjects are "things that are massively distributed in time and space relative to humans" (Hyperobjects 1), which can be human-made and non-human entities, and share some common traits: all of them are viscous, molten, nonlocal, phased and interobjective. Hyperobjects are entities that exceed human apprehension due to their vastness, yet we can sense them in specific local manifestations. Therefore, hyperobject theory poses challenges to the ideas of nation, state, border, individualism, ecology, culture, ontology, anthropocentrism and capitalism. In fact, thinking through hyperobjects questions whether a multispecies and multi-object knowledge is possible and how current narratives can be challenged. Furthermore, this approach might help us adopt a theory of a structured natural necessity or nature of being in relation to how beings are modified in time and space, and how these changes can be philosophically understood in their milieus. In other words, approaching elements from this lens allows us to grasp the incongruences of time and space when it comes to non-anthropocentric agents and how relationships are not linear or balanced, since most of the relationships between human and non-human entities need to be understood in a time and space different from our own.
Ocean Acidification as a Hyperobject
Ocean Acidification can be perceived as a hyperobject as it seems to meet, a priori, the five features of hyperobjects. OA is viscous, as it adheres and affects the different living and non-living agents touched by it. It portrays an ecological interconnectedness that cannot be untied. Thus, thinking through and coexisting with OA leads us to the logical and material discernment that, as a hyperobject, the more you try to get rid of OA, the more you realize it is there. To put it another way, all lifeforms immersed in acidic waters are, in one way or another, affected by OA. Shells dissolve, coral reefs perish, and ecosystems as a whole become victims of this acidification. This viscosity sheds light upon the different simultaneous and contradictory temporalities and "the breakdown and (re)formation of new multitemporal relations" (Bastian and van Dooren 7).
In turn, OA is also molten 4. Like climate change, OA is molten in terms of time and space in the sense that it stretches and reshapes to such a vast extent that, even though it might be part of most of the seas and oceans in the world, humans are not able to logically grasp its limitations, making OA an uncanny realization. In other words, humans are "faced with the task of thinking at temporal and spatial scales that are unfamiliar, even monstrously gigantic" (Dark Ecology 25). Thus, the naked eye and human linear consciousness are unable to grasp the different, uneven, multiple time-spaces and realities in the current planetary epoch. OA is therefore an entity that lets us know that it exists, but we become accustomed to its existence without being able to perceive it as a whole. This is because humans have a logic based on a terrestrial milieu. 5 That is why, when we observe OA by an underwater lens and regard ontology, one can actually understand the fact that non-acidic oceans have become strange and acidic oceans have become the norm. In order to come to terms with such a trait, artificial mediation is needed to enable humans to fully grasp the molten existence of OA. Furthermore, OA is nonlocal and phased. Its nonlocality stems from the fact that hyperobjects are never experienced directly, since the immediate appearance of a hyperobject in the physical world does not correspond to its reality. To put it another way, as Morton (2013) postulated, "nonlocality means just that -there is no such thing, at a deep level, as the local. Locality is an abstraction" (47). It would take a lot of time for a human to perceive OA as it is without being immersed for a large period of time underwater, but we can in turn perceive it through the causes OA has in direct or indirect terms: ecosystem degradation, species extinction, scarcity of resources for local fisheries, just to name a few. In terms of phased agency, OA seems to be a parallax that comes and goes between and through different objects in three-dimensional space. However, if provided with another multidimensional lens, this would appear to be very different. In this light, climate change, fine dust, the biosphere or black holes are all hyperobjects if we consider the ideas of nonlocality and phasing (Eperjesi 238).

5 Human knowledge is terrestrial in the sense that the milieu in which humans coexist and interrelate with human and non-human entities is based on solid ground. As Melody Jue (2020) considered, when we are put in a milieu that is not terrestrial (for instance, when scuba-diving), the conceptions and feelings of time, space, mobility or breathing are put at stake and brought to a new realm. Thus, it might be interesting to try to veer towards this kind of non-terrestrial thinking when approaching coexistence, multispecies interactions and ecological thought.
Thinking through non-locality might allow theoreticians to expand the time-space framework in which ecological agents, objects and events unfold and intertwine with one another. In light of this, when we see that OA is most severely affecting the South-Pacific Ocean and the Tropic of Capricorn (Earth Institute n.p.), thinking through the nonlocality of OA makes us go beyond the framed region itself, acknowledging that a) OA is uneven on a planetary scale and b) the causes of OA are not found in the framed region per se. To put it differently, and although it is beyond the scope of this work to develop a full cause-effect narrative between globalized extractive capitalism perpetrated by the global north and OA, it is pivotal to understand that this causality is a fact as postulated by the Planetary Boundaries theory.
Going back to the tropics, Australia is the world's largest exporter of coal, iron ore, bauxite and alumina, amongst other mining assets. The extraction of these goods by far exceeds its domestic consumption (Granwal n.p.). As one of the countries that will be affected the most by the transgression of the Planetary Boundaries, Australia is nevertheless not addressing the problem seriously. Many areas, especially the Whitsundays/Cairns areas, solely rely on an ecosystem that is dying due to the transgression of the Planetary Boundaries and the eco-tourism that takes place there: the Great Barrier Reef. However, although it would be relatively easy to blame Australia, due to its proximity, industrial and mining force and the consequent emissions of pollutants, over other countries such as Papua New Guinea, Fiji or the FSM in the South-Pacific Ocean, in terms of the generation of OA, as a non-local hyperobject, OA allows us to stretch the causes to the world-ecology system. That is, for instance, one should not turn a blind eye to the fact that currently China is probably the major polluter in the world (Lu et al. 1423). Nonetheless, this is also an event of non-locality. The main reason why China is a major polluter, and hypocritically pinpointed by other countries as such, is because companies from all over the world have moved their production to China and Southeast Asia. They have done so to reduce labor costs, have easier access to cheap labor in general, and get away with fewer environmental regulations (Eperjesi 240). Pollution generated there is a symptom of a transboundary generation of pollution in the production-accumulation system in order to satisfy the demand for goods in the global north. However, this phasing moves even further away from mainland China and Australia, to put OA on a planetary scale as a consequence of coal-burning, mass-accumulation or transportation worldwide. OA is thus viscous, molten, non-local and phased. The sense of time-space in which OA flows makes it, as already mentioned, very difficult for the human eye and consciousness to grasp. Hyperobjects that flow in such time-spaces are like an "ultraslow-motion nuclear bomb" (Hyperobjects 125), as their effects are almost invisible until, in this case, entire ecosystems perish.
Finally, OA is interobjective as it is formed through and has effects on the relationships it generates with other objects. OA, in this sense, gets enmeshed in the strange interconnectedness in which almost all entities exist. Understanding such a vast mesh of relationships and agencies can lead one to understand the signification of thinking through hyperobjects. Namely, OA generates meshy relationships with full marine ecosystems, from plankton to sharks, from coral reefs to whales. In a multispecies and multi-object world, thinking about OA as a hyperobject, makes even clearer the meshy relationship that exists between human and non-human living beings, and beyond-human agents. As Bastian and van Dooren (2017) highlighted, "in these and other fundamental ways, this is a period in which relationships between life and death, creation and decay, have become uncanny; no longer entailing what was once taken for granted" (2). Then, as stated in the introduction of the text, realizing the existence of this mesh of different relationships makes us rethink the temporalities, spaces and synchronizations of the different lifeforms and forms of life that play part, in one way or another, within the mesh.
This interobjectivity, in turn, puts front and center the force of capitalism in the Anthropocene. Human activities based on the grounds of capitalist accumulation and the consequences it entails have infused non-human ecosystems with new substances that are alien to them. The different narratives of hyperobjects highlight the fact that "the battle against the capitalist production of climate change must be waged at several levels simultaneously" (Hartley 165). As a consequence, new meanings, relationships and (de)synchronizations between human, non-human and beyond-human entities have been generated. To put it another way, thinking through interobjectiveness in the mesh of the hyperobject that OA seems to be allows us to discern harmful alien agents that should not be in a given ecosystem, and yet that enter a synchronized system, desynchronizing and, in most of the cases, destroying the ecosystem itself. Hyperobjects in Morton's formulation seem, however, to embed some proliferating contradictions in our coexistence with other objects. They confront us with the strangeness of the world and an anti-romantic view of nature itself while, at the same time, generating a greater feeling of knowing and intimacy with the entities that surround us (Heise, 2014). Nonetheless, and considering these critiques and weaknesses, if we understand that the Earth is not just a non-living entity but a living one in the biological sense, sustained by the different knots of life and complex physiological and ecological processes, hyperobjects allow us to move well beyond the traditional view of ecology. That is, reason as an element coming from the Enlightenment is put into question and the idea of humanity as the center of life crumbles, because it stops making sense if we are to understand such vast and non-even processes. In addition, and as discussed below in this text, the recognition of hyperobjects might mean the end of modernity and reason as we know them in our multi-entity world similar to Gaia.
Therefore, thinking through OA as a hyperobject demonstrates the fact that we are living in the Anthropocene, an epoch that does not affect all regions at the same level or at the same time. Nevertheless, this patchiness will start to become blurrier as the effects of these transgressions move from local catastrophes to the planetary scale. Finally, this realization also acknowledges the risk that not coming to terms with these hyperobjects, which are gigantic and imperceptible without mediation, can lead to the transgression of the thresholds established by the Planetary Boundaries.
The effects of OA are not directly perceived by humans due to their slow-motion nature, invisibility and non-locality when seen through a three-dimensional lens. Thus, coming to terms with OA and its hyperobjectic nature calls for a rejection of human rationality extrapolated to natural ecosystems from a Cartesian, Spinozian or Leibnizian tradition. This rejection is necessary because these modes of reflection lead to an understanding of the physical world from anthropocentric models based on a terrestrial milieu as a starting point of analysis. Therefore, this postulation allows us to acknowledge that environmental ecology benefits from being discerned as an epistemological system based on an understanding of nonlinear systems and causality, including human, nonhuman and beyond-human agents in the mesh. Namely, it calls for a perception that rejects anthropocentric linear understandings, as linearity deprives them of the ability to unfold or unpack their agency in theoretical terms (Guattari 45; Bastian 99).
With that said, according to mainstream concepts, rationality presupposes that by knowing a subject from the outside, it can be grasped completely, whereas a hyperobject cannot be grasped in this way. It calls for the aforementioned deviation from traditional reason towards an ontology that discerns the planetary ecology as a world in which living and non-living agents coexist and interact with each other on their own individual and collective terms, and that get intertwined in the knots of life that shape ecology itself. Therefore, thinking through hyperobjects from a new materialist lens allows us to question anthropocentrism and the Anthropocene itself, as human culture as a product of the material world must be understood and analyzed as part of ecology and nature, not as its antithesis.
Mediating Hyperobjects: The Case of the Underwater Woman
Thus, to mediate OA in order to understand it, one needs to go beyond the 'natural' or the 'rational' to put it in a timescale understandable for the human three-dimensional perception at first sight. Here it might be interesting to think through Santiago Zabala's (2017) assumptions on art and emergency. For Zabala (2017), in the current age of globalized late-capitalism, every socioeconomic, cultural, political and ecological phenomenon is put at stake through an objective analysis. Only what is confined in the rationality of calculability is seen as the real, while the rest is obliterated from the discourse. Therefore, even though OA can be calculated through scientific reason in a way, it is very difficult to generate a discourse that allows the current rationality of the system to frame all the dangers it entails from a multispecies and multi-object perspective.
For Zabala (2017), the biggest emergency of all is the lack of emergency. In other words, although the media apparatuses are bombarding societies with emergencies, the current hegemonic conception in industrialized countries is that nothing new happens. Thus, reality seems fixed and secured, framed within ideology and seen through an invisible infrastructure (Zabala 7). In light of this, it is no surprise that the transgression of four of the nine Planetary Boundaries and the fact that planet Earth is facing no-return thresholds in the near future is in fact "an indication that the emergency they entail for our lives is hidden, absent" (Why Only Art Can Save Us 94).
According to Zabala (2017), as a hermeneutics philosopher, "the truth of art no longer rests in representations of reality but rather in an existential project of transformation" (10). Interpretation becomes an event that adds vitality to artistic representations, mobilizing our internal self. The shock produced by an artistic representation puts at stake the established and rational truths, calling for a consideration of the Other, such as the non-human agents. This process aids the viewing subject to recast and refine their political and cultural ideas with each situation so as to avoid falling into the capitalist realism surrounding us. To put it another way, interpretation becomes crucial in order to avoid clinging automatically to the ideological principles that are inherent in capitalism. As Zabala (2017) illustrates, artistic representations might inform us of the emergency at hand and also make us participate in a hermeneutical exercise to "call into question our comfortable existences" (122).

[Figure: The Underwater Woman, Christine Ren, 2018]

In this sense, images such as The Underwater Woman by Christine Ren (2018) help us approach the dangers of OA, its hyperobjectical traits and the emergencies it entails. The set of photographs portrays a woman entangled in different elements or events that are endangering the oceans (microplastic debris, trawling, overconsumption, coral bleaching and ocean acidification).⁶ This set, in turn, reveals the fact that humankind and its practices are directly linked to these ecological calls, and that we are directly enmeshed with them. These frames raise the question of emergence because they alter the notion of the mesh. That is, bearing Zabala in mind, they shed light upon emergencies in an alien milieu, quantified through rationality as individual emergencies but not connected with the different entities entangled in the mesh. Realizing this belonging and dependency on a holistic multispecies and multi-object mesh renders subjects disoriented. What is crucial then is that the subject is positioned vis-à-vis its dis-reorientation, not asking questions on "who is lost or who is foreign, who is comfortable or who has colonized, who decides where maps stop and start, but rather what kind of relationality explains who feels dis- or re-oriented" (Martin and Rosello 1).

[Figure: The Underwater Woman, Christine Ren, 2018]

Interesting for this article is the set of images that represent OA in Christine Ren's work. In these images, the mermaid, who resembles Botticelli's Venus in The Birth of Venus, is blowing air underwater towards a nautilus shell, dissolving it in the process. Seemingly inspired by David Liittschwager's shell photographs for the NOAA PMEL Carbon Program (2017), the carbonated air the mermaid is blowing is in fact acidic and puts into question the agency of humankind and its modus vivendi. Through the different elements present (humankind, acidic water, the ocean) and absent (reminiscences of accumulation, pollution or globalization, for instance), this project enhances hermeneutic engagements and critical connections between the different agents that play a part in OA. As briefly mentioned in the introduction of this essay, Ren challenges Western rationality and, with the union of different artistic and scientific disciplines, tries to modify and recast the narrative and the underwater imaginary that said rationality and anthropocentrism entail.
The choice of colors also appears as a very important element for Ren in order to create a coherent narrative and an illuminating experience for the viewer. The use of a very pale tone on the mermaid's skin, contrasting with an almost pitch-black background with shades of dark blue, reminds the viewer of the vast immensity of the ocean in opposition to the human. From a speculative lens, one could also say that the pale tone of the mermaid's skin, even though it is clearly inspired by Greek and Neoclassic sculpture, suggests the deathly agency humankind has acquired in terms of ecological degradation. The corpse-like tone and the inexpressive face of the mermaid, together with the carbonated air that she is blowing, can steer the viewer towards the realization that humankind is, in fact, dead inside. This realization can hardly be deemed coincidental, at least in terms of the motto and aims of Ren's work, since it has been this deathly anthropocentrism that has led the world towards an unparalleled ecological destruction and, in this case, the crisis that OA poses.
With that said, this set of images shows us how the hyperobjects theory makes sense with regard to OA. As aforementioned, OA takes place in time and space frames that are uncanny or, at least, complicated to understand through human rationality, and it also reflects the existence of humankind together with, and inseparable from, the living and non-living agents that coexist among us. By putting together a shell, carbonated air and a mermaid (half woman, half fish), the existence of OA as a hyperobject is brought front and center in a way that the human eye can perceive. In other words, the acidification of the ocean and the interconnectedness between the different agents that play a part in this mesh are illustrated in just one frame.
To sum up, this work falls into the paradigm of the globalization of art as it addresses and provides coherence both to our own humanity and to the necessity of finding meanings that the paradigm of the lack of emergencies is unable to provide (Danto xvi). Artistic expressions such as this one, due to their straightforward yet allegorical nature, do not rely solely on the individual connoisseur or the elevated knowledge of their viewers. This piece is clear in its message and intention, which is very well stated in all the sections of the website where viewers can access this work. The images also leave some room for the viewers to engage with them and think through them, putting the different crises that the ocean is suffering at the center of the debate, whether the viewer is acquainted with the problem at hand or not. With that said, images such as these have the power of meaning and the possibility of truth that depends upon the interpretation that viewers bring into play (Danto 155). As Zabala (2017) pointed out with Heidegger and Gadamer in mind, "hermeneutics does not seek compromises but interpretations, reactions and, most of all, interventions" (24).
As largely allegorical photographs, they aim to ontologically reconnect humankind with otherness, unmaking and reconfiguring the ways the viewers embody the experience of living in a world with acidic oceans. Thus, submerging ontology and thought, even if only metaphorically speaking, is of paramount importance to think with, interact with and reconnect with the seas and the different temporal strands that co-exist there. As Stacy Alaimo (2011) pointed out, "Submersing ourselves, descending rather than transcending, is essential lest our tendencies toward Human exceptionalism prevent us from recognizing that, like our hermaphroditic, aquatic evolutionary ancestor, we dwell within and as part of a dynamic, intra-active, emergent, material world that demands new forms of ethical thought and practice […] thinking with sea creatures may also provoke surprising affinities" (283). Consequently, the pictures portraying OA in Ren's work, when put under a hermeneutical and underwater gaze, force us to think differently in a non-terrestrial medium. In other words, an immersed analysis allows the viewer to become disentangled from terrestrial biases, recasting ideas on the political, the cultural and the ecological. Therefore, new multispecies and multi-object engagements challenge the anthropocentric terrestrial existence, epistemology and ontology.
Furthermore, Ren's work also puts into question the instability of the Anthropocene. Referring back to Niall Martin and Mireille Rosello's disorientation (2016), with their connection between the human, the non-human animal and the non-human material, the photographs disorient the viewer. This disorientation, in turn, brings front and center some problematic aspects of humanity's agency and, thus, of the prefix Anthropos in the Anthropocene, shedding light upon its potential inadequacy for framing the current geological age. Namely, the set of images challenges the problem that the Anthropocene poses by framing the planet as a holistic entity in a rather stable geological period. Even though one might think that the very idea of the Anthropocene already puts the notion of stability into question, the idea of a period dominated by the geological agency of humankind leaves some blanks in the discourse that might steer the human perspective towards an acceptance of a domination of the Anthropos over the other entities. In this period, abnormalities become an accepted commonplace. And, although it is not the aim of this essay to fully develop this matter, by abnormalities here one should understand, for instance, the idea of the physical and intellectual domination of humankind over the rest of the entities that co-exist on the planet. Therefore, the Anthropocene is, by no means, stable. The Anthropocene as an established term for the realist regime of geopower becomes central, and alternative nomenclatures such as Capitalocene, Necrocene or Chthulucene arise in order to reject anthropocentrism itself. When new nomenclatures arise, theory, ontology and the environment are rethought and open new spaces for discussion, critique and engagement. At the same time, this debate becomes of paramount importance for original thinking regarding the agency of humankind, the power of mass-accumulation, the current state of emergency and justice from a multispecies and multi-object lens.
In addition, these pictures shed light upon the fact that what a priori seemed normal or a commonplace becomes suspicious as the set of pictures brings the abnormalities to the surface. To put it differently, the ocean is portrayed as an area in which humankind has become abnormal due to its extractivist and accumulative practices, and that abnormality has become the norm in the Anthropocene. By scrutinizing the set of pictures on OA through hermeneutical interpretation, the viewer is able to recognize the abnormalities that are present both in OA and the Anthropocene while, at the same time, emotionally engaging with the ontological reparation of the different abnormalities. Namely, these pictures challenge the historical ignorance about oceanic waters, at least their depths and living compositions; dedicating greater attention to the waters and their increasing acidity speaks both to the fascination of the ocean and to the destruction of oceanic ecosystems due to human practices.
In light of this, it seems that the greatest challenge scientists have today is not to demonstrate, for instance, the fact that coral reefs are bleaching. What they are faced with is the unwillingness of governments to take real action, as very few have put degrowth policies into the debate and the IPCCs and COPs have failed to bring real changes to societies. Moreover, although science has been extremely helpful in terms of explaining the causes of the current crisis, it has failed to connect with a wider lay audience due to its complexity, desecrating the narrative and reconnecting with the lack of emergencies. Here, notwithstanding that visual media has been central in the conceptualization of the Anthropocene, this imagery has been left in a secondary position by scientific portrayals of the crisis. This, together with the overexploitation of certain topics in climate change media such as graphics portraying emissions, experts speaking or already classic images of glaciers melting, has led many to a feeling of exhaustion and detachment from what now seems mundane. Thus, there is the need to convey meaning with emotion. This necessity can be satisfied by the images presented by Christine Ren. They provide us with a connection between a scientific claim and an impactful representation that can appeal to an audience that is unacquainted with more formal knowledge of the crisis. The images originally connect the human and the non-human, making us think about the different relations that play a part, even if the viewer is not aware of what a theoretical hyperobject is. In other words, they allow the viewer to interpret the visual material from an underwater perspective, challenging Western rationality and reconnecting it with the realm of emotions. Consequently, the viewing process is questioned and reframed in an alien underwater milieu. As sociologist Peter Wagner (2016) put it, "the climate risk should have radically altered the human relation to nature, but it did not" (151). In this light, the inability to come to terms with hyperobjects such as OA calls for the sciences and humanities to join forces. This would allow these two broad disciplinary areas to reconsider what the "rational" is and to move beyond it towards a terrain of human significance and interpretation that steers beings towards an ontological intervention and the meaning of the emergencies. In other words, the arts and humanities, together with scientific data, can be important heuristics to illustrate the emergencies the world is facing because of their ability to create intensity and depth, difficult to find in other disciplines (Zabala 10). That is why, together with a rather hermeneutical approach, the arts and the humanities can be crucial to make the different emergencies of the contemporary world perceivable in the paradigm of the lack of emergencies, but also to re-orientate the so-called "subject" itself. To put it more forcefully, and paraphrasing Jennifer Fay (2018), art and visual media help us observe and experience the current ecological crisis as an aesthetic practice that is both a risk and a necessity to come to terms with. Namely, as Lynn Badia, Marija Cetinic and Jeff Diamanti (2020) illustrated, thinking through and with OA in an interdisciplinary manner calls "for us to consider that what it means to be a human observer is to already veer toward and with an altered sense of meaning-making, detailing, and also weirding the coherence of the world" (6).
Conclusion: Matter, Politics and Aesthetics in Acidic Waters
Objects, elements, or events such as Ocean Acidification exist in states that are impossible for the human eye to grasp fully, and their existence and interconnectedness with the other elements that surround and interact with them are hardly ever given full attention. As Graham Harman (2012) pointed out, "entities such as chairs, floors, streets, bodily organs, and the grammatical rules of our native language, are generally ignored as long as they function smoothly. Usually it is only their malfunction that allows us to notice them at all" (15). Thus, the increase in levels of acidification and its unbalanced consequences can steer us materially and ontologically to clearer understandings of the different theoretical dimensions inherent to OA if perceived as a hyperobject. As Zabala (2017) highlighted, "we cannot simply observe, describe, and understand emergencies without being part of them" (112). Here, artistic representations that trigger interpretation are of paramount importance. Hence, immersing ourselves within an acidic milieu with the help of visual and artistic material makes us realize the materialism entailed in OA and its hyperobjectic traits without having to be experts in climate science. Therefore, it is here where thinking through hyperobjects as a hermeneutical heuristic and, precisely, perceiving OA as a hyperobject can be very helpful to grasp the dissuasive agency of OA itself. As shown throughout this discussion, OA is not easily seen or perceived unless it is mediated somehow. Using the hyperobjects theory, together with the mediative tools that Christine Ren offers in her work, OA becomes something tangible for the human eye. Through subjective interpretation, the viewer can come to terms with the uneven and non-linear nature of such an event, the way in which it affects other entities and how it is an actual emergency in the current paradigm. Bearing this in mind, thinking through hyperobjects in the Anthropocene can help us understand that environmental law sometimes does not grasp the planetary emergency, centering itself instead on the role of states and the financial problematics embedded within them. Furthermore, it also allows us to come to terms with the fact that locality is an abstraction, and that abstraction is also necessary to frame factual policy and political activism in order to confront and portray the inconsistencies and recast the world-ecology.
The unbalanced, hyperobjectic nature that OA represents in the current epoch, and the difficulty that reason has in coming to terms with it, are a rather straightforward demonstration of the inadequacy of the established science, policy and economics approaches to understanding the current ecological crisis (Sörlin 788). To put it differently, the belief that science alone can solve the situation in which the Earth and its living agents find themselves is problematic, since the central cause of this crisis is industrial, capitalist human practices that are claimed to be "rational." Hence, even though the current ecological crisis hovers as an uncanny spectre over human reason, it still remains ungraspable for many economically-rich populations. The dangers are believed to happen at a geological distance that cannot jeopardize the comfort of their lives. At the same time, and as proposed throughout this essay, these dangerous events occur in space-time scales that are too complicated to be perceived fully by most individuals unless mediated.
The arts, humanities and social sciences can potentially generate a shift that challenges established truths such as the paradigm in which domination is largely based on previous formal privileges, such as inherited capital and, thus, social class positioning. It is here where artistic representations that move beyond the connoisseur and the scientifically-aware public play an important part, as they move beyond these previous formal privileges to generate both an internal and an external dialogue in their viewers. To put it another way, pieces such as The Underwater Woman break the boundaries of domination as they appeal not only to a viewer well educated in the arts, but also more broadly. They reconnect cognition and emotion in a way necessary to overcome the current indifference and the feeling of routine that most current climate change media produces. Nonetheless, in a world in which artistic, mediative and theoretical interventions and projects have been left aside, usually inaccessible for the less-wealthy or less-educated, the cultural, political and transformative dimension of art is undermined. Thus, one of the most important tasks that the arts, humanities and social sciences have is to bring this knowledge to a broader population while, at the same time, giving voice to the voiceless.
With that being said, the ocean presents itself as a multispecies, multi-object space with profound and divergent modes of sensorial and phenomenal experience. This, together with the differences that terrestrial and oceanic milieus present, calls for analytic lenses that expand and include modes of embodied experience. The Underwater Woman set of images on OA allows the viewers to interpret and immerse their thoughts metaphorically in a manner that can recast and challenge the given reality and its ontological constructions. Consequently, it might open new spaces to generate connections with the non-human, understanding that scientific certainty and Western rational epistemology might need to open up and offer space for non-human interactions in a multispecies world. As a result, the unknown and the known are brought together to generate fruitful, egalitarian and multispecies/multi-object dialogues. When mediated, the ocean is presented as a space that is less alien. In turn, we experience it as more familiar and intertwined with human existence, calling for new ways of understanding it and relating with it, shedding light upon the emergencies and temporal compression of the Anthropocene. | 2023-05-02T15:04:43.535Z | 2023-04-28T00:00:00.000 | {
"year": 2023,
"sha1": "d861d3c37429a8774e559b15c081f85a603a37b2",
"oa_license": "CCBYNC",
"oa_url": "https://ecozona.eu/article/download/4326/5526",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "96121df91a22184d5e1720b9bffc3733ae8918d8",
"s2fieldsofstudy": [
"Environmental Science",
"Art"
],
"extfieldsofstudy": []
} |
216336843 | pes2o/s2orc | v3-fos-license | Lipids source and degradation as revealed by molecular biomarkers in soils after acid-pretreatment: A case of a plantain soil under long-term cultivation
Extractable lipids are important components of soil organic matter (SOM) that have been used to trace the sources and degradation of SOM. The protection of lipids by soil minerals has been inferred from organic solvent extractions, but the extraction efficiency for some lipid compounds is low. This study applied a mild acid treatment (10% HF/1 M HCl, 1:4 w:v) to first remove most of the reactive mineral particles without altering the chemical structure of SOM. Based on the obtained lipid biomarker information, we observed that the efficiency of lipid extraction by organic solvents increased significantly after the removal of reactive minerals, so the acid treatment improved the accuracy of lipid quantification. The minerals showed significant differences in their selective protection of different lipid components: in this study, the proportion of protected n-alkanoic acids was 73~85%, that of n-alkanols 41~62%, and that of n-alkanes 26~46%. After the vegetation was replaced, the increased alkanoic acids and alkanes in the soil were input directly by plantain plant tissues, whereas the alkanols were probably input through the hydrolysis of wax esters. Under the influence of tillage, the carbon content at 0-20 cm decreased, suggesting that cultivation may enhance SOM degradation and accelerate SOM turnover. Understanding SOM behaviour in this area will provide important information for soil management and for evaluating carbon cycling in human-affected ecological systems.
Introduction
Soil organic matter (SOM) is the largest carbon pool in terrestrial ecosystems. It is estimated that total global soil organic carbon storage is about 1.6×10^18 g, higher than the atmospheric carbon pool (0.75×10^18 g) and the vegetation carbon pool (0.6×10^18 g) combined [1]. Therefore, even weak changes in the soil organic carbon pool will have a profound impact on soil properties and global climate change. The Xishuangbanna tropical rainforest is one of the most biodiverse areas in China and has been included in the Indo-Burma Biodiversity Hotspot [2]. Its natural ecosystems have been transformed into agricultural systems on a large scale, and the cultivation of cash crops has become an important economic pillar of local society. Vegetation transformation has already posed a serious threat to the biodiversity of the local ecosystem and to soil properties [3]. Understanding the dynamics of soil organic matter in this region is an important step in assessing the carbon cycle and will provide important information on soil carbon cycling and fixation mechanisms in the plateau terrestrial system. Current research focuses on lipid-related information. Lipids are insoluble in water but soluble in organic solvents, and include n-alkanoic acids, n-alkanols, n-alkanes, hydroxy alkanoic acids, ketones, steroids, terpenoids, acylglycerols, sugars, phospholipids and lipopolysaccharides [4]. Lipid biomarkers can be used to distinguish not only plant sources but also microbial and animal sources. Previous studies have systematically summarized lipid biomarkers such as chain lipids (alkanoic acids, alkanols and alkanes) and cyclic lipids (steroids, terpenoids) [5]. A carbon preference index (CPI) showing even-over-odd dominance for alkanoic acids and alkanols, and odd-over-even dominance for alkanes, is characteristic of a predominantly plant origin [6]. This information provides important knowledge for studying SOM turnover and the carbon cycle.
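Since the CPI criterion is used repeatedly below, a minimal sketch may make it concrete. One common formulation for n-alkanes averages the odd-homolog sum against the even homologs one carbon below and one above; the abundances here are hypothetical, for illustration only, and the exact carbon-number window varies between studies.

```python
# Minimal sketch: carbon preference index (CPI) for n-alkanes.
# Hypothetical peak abundances keyed by carbon number (not data from this paper).
abundance = {24: 1.0, 25: 4.2, 26: 1.1, 27: 6.8, 28: 1.3,
             29: 8.0, 30: 1.2, 31: 7.1, 32: 0.9, 33: 3.5, 34: 0.7}

odd = sum(v for c, v in abundance.items() if c % 2 == 1 and 25 <= c <= 33)
even_low = sum(v for c, v in abundance.items() if c % 2 == 0 and 24 <= c <= 32)
even_high = sum(v for c, v in abundance.items() if c % 2 == 0 and 26 <= c <= 34)

# Average of the two odd/even ratios; values well above 1 indicate a dominant
# plant origin, while values approaching 1 indicate advanced degradation.
cpi = 0.5 * (odd / even_low + odd / even_high)
print(f"CPI = {cpi:.2f}")
```

For alkanoic acids and alkanols the same calculation is applied with the roles of odd and even homologs reversed, since plants preferentially synthesize the even-numbered members of those classes.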
Li et al. [7] observed that the content of alkanoic acids increased six-fold after mineral removal, with the increase dominated by long-chain alkanoic acids, and that some indicative parameters, such as the CPI, changed significantly before and after mineral removal: before mineral removal, CPI decreased with increasing soil depth, whereas after mineral removal CPI increased with soil depth. Zegouagh et al. [8] also observed an increase in polar and long-chain compounds after reactive mineral removal. In the presence of minerals, the extraction efficiency and distribution pattern of lipids are thus greatly changed. Therefore, this study used the molecular biomarker method together with acid treatment to analyze the source and degradation of lipids in soil after long-term plantain cultivation, and to evaluate the regional carbon cycle and soil function with a more accurate approach.
Sample and preparation
The samples were collected from the plantain plantation of Xishuangbanna, Yunnan Province (101°01'-101°02'E, 22°46'-21°47'N). After removing the top plant litter, soil samples (BA) were taken at different depths: 0-20 cm (S), 20-40 cm (Z), and 40-60 cm (X). The background soil [9] was collected from the uncultivated area outside the plantain planting area, about 100 m away from its edge. All samples were freeze-dried, ground, passed through a 60 mesh sieve, and visible roots were removed manually. In order to remove the reactive minerals, all soil samples were pretreated according to the previous method [10]. Briefly, soil samples were mixed and shaken with acid (10% HF/1 M HCl) in a ratio of 1:4 (w:v) for 2 h. The mixture was then centrifuged at 2,500 r·min⁻¹ for 30 min and the supernatant was removed and collected.
Acid treatment supernatant analysis
All the solutions used and produced during the acid treatment were collected and mixed in a 5 L beaker, including the 10% HF/1 M HCl solution, deionized water (used to wash the soil until the pH no longer changed) and 2 M NaOH solution (used to adjust the pH of the mixed solution to neutral); the total volume of each solution was about 4~5 L. 40 mL of each solution was prepared for TOC determination, from which the total organic carbon content was calculated. The remaining solution was completely dried in an oven at 60 °C, and the residue was collected and weighed. Then, 15 g of the residue was subjected to the procedures of sections 2.3 and 2.4; the rest was dried and stored.
Sequential extraction of free lipid biomarkers
All soil samples and acid-treatment residues were extracted as described in the previous study [10]. Briefly, 30 mL of dichloromethane was added to the original soil (20 g), the acid-treated soil sample (15 g) or the dried residue of the acid-treatment supernatant (15 g), and sonicated for 15 min. The mixture was then centrifuged at 2,500 r·min⁻¹ for 20 min. The supernatant was filtered through a glass fiber filter (Whatman GF/A, 1.6 μm), and the filtrate was collected. The residue was then extracted successively with dichloromethane:methanol (1:1, v/v) and methanol under the same extraction conditions. All filtrates were collected, combined, concentrated by rotary evaporation, transferred to a 2 mL glass vial and completely dried under N2 gas. Each sample was extracted in duplicate.
Derivatization and GC-MS detection
The extracts were derivatized with a trimethylsilyl (TMS) reagent. Briefly, the extract was dissolved in 1 mL of dichloromethane:methanol (1:1, v/v). 100 μL of the sample was completely dried under N2 gas, then 90 μL of N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) and 10 μL of pyridine were added, and the mixture was reacted at 70 °C for 3 h. After cooling, it was diluted with 400 μL of n-hexane. All derivatized samples were detected by GC-MS (Agilent 7890A GC equipped with a 5975C quadrupole mass spectrometer). TMS derivatives of heptadecanoic acid and ergosterol were used as external standards for the quantification of extractable lipids. The GC was equipped with a DB-5MS fused silica capillary column (30 m × 0.25 mm i.d., 0.25 μm film thickness). The GC operating conditions were as follows: the oven temperature was held at 65 °C for 2 min, raised from 65 °C to 300 °C at 6 °C·min⁻¹, and finally held at 300 °C for 20 min, with helium as the carrier gas. The sample was injected at a 1:2 split ratio with an injector temperature of 280 °C. Samples (1 μL) were injected through an Agilent 7693 autosampler. The mass spectrometer operated in electron impact (EI) mode with an ionization energy of 70 eV, a scan range of 50 to 650 Da, and a solvent delay time of 8 min.
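As a quick sanity check of the oven program just described, the total run time per injection follows from the two holds and the ramp rate; a minimal sketch using the values stated above.

```python
# Total GC run time for the stated oven program:
# hold 65 °C for 2 min, ramp 65 -> 300 °C at 6 °C/min, hold 300 °C for 20 min.
initial_hold = 2.0                    # min
ramp_time = (300 - 65) / 6.0          # ≈ 39.2 min
final_hold = 20.0                     # min
total = initial_hold + ramp_time + final_hold
print(f"run time ≈ {total:.1f} min")  # ≈ 61.2 min per injection
```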
Results and discussion
3.1. Soil properties before and after acid treatment
Elemental analysis showed that the carbon content in the soil increased significantly after acid treatment (Table 1), mainly due to the loss of reactive mineral components. TSE/C (TSE: total solvent extract; the total organic solvent extract normalized to carbon content) and TSE/S (the total organic solvent extract normalized to the particle mass of the corresponding soil) increased owing to the enrichment or concentration of organic matter after reactive mineral removal [11] (Table 1), while the increase in TSE/Y (the total organic solvent extract normalized to the original soil mass) reflects the improved extraction efficiency of organic solvents after reactive mineral removal. Note: -S: 0-20 cm depth; -Z: 20-40 cm depth; -X: 40-60 cm depth; 30-: soils planted with plantain for 30 years (BA); B-: original soils [9]; C/N: atomic ratio; TSE/C: total solvent extract on the basis of carbon content (mg·g⁻¹ C); TSE/Y: total solvent extract on the basis of original soil mass (mg·g⁻¹ original soil); TSE/S: total solvent extract (mg·g⁻¹ particle mass of the corresponding soil); SL: soil particle mass loss after acid treatment; CL: soil carbon loss (based on the original soil mass) after acid treatment. All data are mean values. SD: standard deviation. A: after acid treatment.
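The three TSE normalizations differ only in the denominator, which the following sketch makes explicit; all masses here are hypothetical, since the paper reports only the resulting ratios.

```python
# Hypothetical masses, for illustration only (the paper reports only the ratios).
tse_mg   = 12.0   # total solvent extract recovered from one extraction
soil_g   = 15.0   # particle mass of the acid-treated soil actually extracted
orig_g   = 55.0   # original soil mass that yielded those 15 g after acid treatment
carbon_g = 1.8    # organic carbon contained in the extracted material

tse_per_c = tse_mg / carbon_g  # TSE/C: mg per g carbon
tse_per_y = tse_mg / orig_g    # TSE/Y: mg per g original soil
tse_per_s = tse_mg / soil_g    # TSE/S: mg per g particle mass of this soil
print(f"TSE/C={tse_per_c:.2f}  TSE/Y={tse_per_y:.2f}  TSE/S={tse_per_s:.2f}")
```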
After acid treatment, the mass loss [12] of the two soil samples was in the range of 65~67% [9] and 73~74% (BA), respectively, which should be attributed mainly to the loss of mineral mass. The carbon loss [13] was in the range of 42~51% [9] and 59~70% (BA), respectively, which should be attributed mainly to the loss of highly hydrophilic organic compounds and fine particles during the removal of reactive minerals. Therefore, we collected and tested the solutions produced during the acid treatment. TOC measurements showed that the TOC content of the solutions was in the range of 89.9~106.6 mg·L⁻¹ [9] and 114.0~140.0 mg·L⁻¹ (BA), respectively, indicating that the organic carbon content of the acid-treatment solution was considerable, and that some lipid biomarkers may have been lost with the solution. GC-MS analysis of the dried residue of the acid-treatment supernatant detected a large amount of sugars and a small amount of alkanes, but no other lipids. Alkanoic acids and alkanols are more hydrophilic than alkanes, yet they were not detected; this phenomenon has not been reasonably explained in the literature. Nevertheless, collecting and testing the acid-treatment solutions will further improve the precision of the quantification of lipid biomarkers in soil.
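The loss figures follow from simple mass balance; the sketch below illustrates the arithmetic, with the pooled solution volume treated as an assumption (the text gives about 4~5 L per solution) and the soil masses chosen as hypothetical values consistent with the quoted 73~74% (BA) range.

```python
# Dissolved organic carbon carried away by the acid-treatment solution:
toc_mg_per_l = 114.0   # measured TOC of a BA solution (lower end of 114.0~140.0)
volume_l = 4.5         # assumed volume of that solution (text: ~4-5 L)
dissolved_c_mg = toc_mg_per_l * volume_l   # ≈ 513 mg organic C per solution

# Mass loss of the soil particles across the acid treatment:
mass_before_g, mass_after_g = 20.0, 5.3    # hypothetical masses
mass_loss = 1 - mass_after_g / mass_before_g
print(f"dissolved C ≈ {dissolved_c_mg:.0f} mg, mass loss ≈ {mass_loss:.0%}")
```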
3.2. Interference of reactive minerals with lipid biomarker information
Organic solvent extractable lipids mainly include aliphatic lipids (alkanoic acids, alkanols and alkanes) and steroids [6]. We noticed an increase in the amount of extracted lipids after reactive mineral removal (Figure 1). However, this alone does not establish a correlation between mineral protection and carbon number, because large amounts of C16 and C18 alkanoic acids and their unsaturated homologues were detected in fresh plantain and bamboo leaf tissues, indicating that the increase in short-chain alkanoic acids may be related to their relatively rich source. The concentration of alkanols is lower than that of alkanoic acids, and the acid treatment also increased the alkanol concentration, though less markedly than for alkanoic acids. The amount of alkanols increased by 1.7~1.9 [9] and 2.3~3.7 (BA) times, respectively, indicating that about 41~62% of the alkanols were protected by reactive minerals. The effect of acid treatment on alkanes was the weakest: after acid treatment, the amount of alkanes increased by 1.4~1.9 times, indicating that 26~46% of the alkanes were protected by reactive minerals. Before and after the removal of reactive minerals, the distribution characteristics of the lipids changed, which may bias the source and degradation information inferred from biomarkers. A typical example is the CPI. The CPI is commonly used to indicate the degree of degradation of lipids, because the even (alkanoic acid) / odd (alkane) homologues are preferentially synthesized in plant tissues, so the CPI is greatest in fresh plant tissues. As degradation proceeds towards parity between the homologues, the even (alkanoic acid) / odd (alkane) homologues degrade preferentially, and the CPI decreases [14,15]. Our recently reported studies observed changes in some indicative parameters before and after acid treatment [7]. The most obvious change in the CPI observed in this study was that, in BA, the CPI decreased with increasing soil depth before acid treatment but increased with soil depth after acid treatment.
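The percentage-protected figures follow from the fold-increase in extract yield by simple arithmetic, as the sketch below makes explicit; the fold-increase values are those quoted in this paragraph, and the computed percentages reproduce the quoted ranges only approximately, since the paper's exact averaging across samples and depths is not shown.

```python
# If acid treatment raises the extractable amount of a lipid class by a factor k,
# the fraction previously bound by reactive minerals is (k - 1) / k = 1 - 1/k.
folds = {"alkanols [9]": (1.7, 1.9),
         "alkanols (BA)": (2.3, 3.7),
         "alkanes": (1.4, 1.9)}
for name, (lo, hi) in folds.items():
    print(f"{name}: {1 - 1/lo:.0%} ~ {1 - 1/hi:.0%} protected")
# e.g. 1 - 1/1.7 ≈ 41%, matching the lower end of the 41~62% quoted for alkanols.
```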
3.3. Lipid biomarker information as affected by cultivation
After vegetation replacement, an increase in all types of lipids was observed for all soil samples (Figure 2). Alkanoic acids increased mainly in the short-chain C16:1, C16 and C18 homologues, alkanes increased mainly in the long-chain C25, C27 and C31 homologues, and alkanols increased across the range. GC-MS results showed that, although the distribution characteristics of lipids in leaves and soil samples were the same, the most abundant alkanoic acids in leaves were C16 and C18, matching the alkanoic acids enriched in BA. The increase in alkanes was similar to that in alkanoic acids, indicating that the increased alkanoic acids and alkanes in BA may be derived primarily from the input of plantain plant tissue. A variety of unsaturated C18 n-alkanoic acids (C18:1, C18:2 and C18:3) were detected in both plant tissues, whereas C18:2 and C18:3 alkanoic acids were rarely detected in soil. This is because the double bonds of unsaturated alkanoic acids are easily oxidized [16], so they degrade rapidly in the soil. Thus, although direct input of fresh plantain plant tissue was observed in BA, the total organic carbon content decreased in the 0-20 cm cultivation layer, indicating that artificial cultivation activities enhanced lipid degradation in the soil. Note: a, d, g: distribution of n-alkanoic acids; b, e, h: distribution of n-alkanols; c, f, i: distribution of n-alkanes, each in soil planted with plantain and in original soil at different depths before and after the vegetation change; a, b, c: 0-20 cm depth; d, e, f: 20-40 cm depth; g, h, i: 40-60 cm depth.
Conclusion
After acid treatment, the extractability of lipids increased significantly. The protection of alkanoic acids by minerals was between 73 and 85%, that of alkanols between 41 and 62%, and that of alkanes between 26 and 46%. The lipid distribution characteristics also changed significantly: some indicative parameters (CPI, ACL, RLS) describing the source and state of organic matter changed, in some cases even reversing. During the acid treatment, some lipids were lost with the acid-treatment solution (mainly sugars and alkanes in this study). Therefore, reactive minerals should be removed prior to quantitative analysis of lipid biomarkers in soil to ensure accurate analysis, and the acid-treatment solution should be collected and tested. Artificial farming activities promote the input of organic matter but also enhance its degradation; thus, cultivation activities may accelerate the turnover of SOM. | 2020-04-02T09:26:19.600Z | 2020-03-24T00:00:00.000 | {
"year": 2020,
"sha1": "36f16ff21dc366368a013e73a3d7b6b2517848af",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/450/1/012018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "78f58a9afa0699a43487211abd553464f3da10cc",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |
119275581 | pes2o/s2orc | v3-fos-license | An ordinal index characterizing weak compactness of operators
We introduce an ordinal index which characterizes weak compactness of operators between Banach spaces. We study when classes consisting of operators having bounded index form a closed ideal, the distinctness of the classes, and the descriptive set theoretic properties of this index.
Introduction
In this work, we introduce an ordinal index for operators on Banach spaces in order to characterize weak compactness. The construction of such an index was inspired by the Szlenk index, which is used to characterize Asplund operators [5]. To define our index, we use a characterization of weakly compact operators due to James concerning sequences in the unit ball of the domain such that the images under a given operator have a certain convex separation property. Consequently, we denote our index by J. For a weakly compact operator A : X → Y between Banach spaces, we denote the James index of A by J(A). For an ordinal ξ, we let J_ξ denote the class consisting of all weakly compact operators A with J(A) ≤ ξ. We write J(A) = ∞ whenever A fails to be weakly compact, with the precise meaning of this made clear below. In this work, we establish the following results.

Theorem 1.1. For each ordinal ξ, J_{ω^{ω^ξ}} is a closed operator ideal. The class J_ω consists of all super weakly compact operators. The J index of an operator on a separable domain is countable if and only if that operator is weakly compact. For every ordinal ξ, there exists a weakly compact operator which does not lie in J_ξ, and that operator can be taken to have a separable domain if ξ is countable. The index J is a coanalytic rank on the class of operators between separable Banach spaces.
The index
In this work, all Banach spaces are assumed to be real, while the modifications for the complex case are straightforward. By "subspace," we shall mean a closed subspace. By "operator," we shall mean a bounded, linear operator between Banach spaces. We let B_X denote the closed unit ball of X. We let 2 = {0, 1}.
Given a set S, we let S^{<N} denote the finite sequences in S, including the empty sequence, denoted ∅. We let S^N denote the infinite sequences in S. Given t ∈ S^{<N} ∪ S^N, we let |t| denote the length of t, and for an integer i with 0 ≤ i ≤ |t|, we let t|_i denote the initial segment of t having length i. We let t|^i denote the tail of t which remains after removing t|_i from t. Given s, t ∈ S^{<N}, we let s⌢t denote the concatenation of s with t, listing the members of s first.
We define the order ≼ on S^{<N} by letting s ≼ t if and only if |s| ≤ |t| and s = t|_{|s|}. That is, s ≼ t if and only if s is an initial segment of t. We say a subset T ⊂ S^{<N} is a tree provided it is downward closed with respect to the order ≼. We let MAX(T) denote the maximal members of T. That is, MAX(T) consists of those members of T which do not have a proper extension in T. We define T' = T \ MAX(T), and call T' the derived tree of T. Note that T' is also a tree. We then define the transfinite derived trees T^ξ by T^0 = T, T^{ξ+1} = (T^ξ)', and T^ξ = ∩_{ζ<ξ} T^ζ when ξ is a limit ordinal. Note that there exists an ordinal ξ so that T^ξ = T^{ξ+1} = T^ζ for all ζ > ξ. We say T is ill-founded if there exists an infinite sequence (x_i) in S so that (x_i)_{i=1}^n ∈ T for all n ∈ N, and T is well-founded otherwise. Note that T is well-founded if and only if, whenever T^ξ = T^{ξ+1}, T^ξ = ∅. In the case that T is well-founded, we let o(T) denote the minimum ordinal ξ, called the order of T, such that T^ξ = T^{ξ+1} = ∅. If T is ill-founded, we write o(T) = ∞. We establish the convention that ξ < ∞ for all ordinals ξ.
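Since all later arguments iterate the derivative T ↦ T', a small sketch may help fix intuition. For a finite tree the order is simply the number of derivative steps needed to empty it, i.e., the length of its longest chain; the tree below is a hypothetical example, not one from the paper, and the empty sequence is omitted for simplicity.

```python
# Minimal sketch: order of a finite well-founded tree, stored as a set of
# non-empty tuples closed under initial segments.

def derive(tree):
    # The derivative keeps exactly the non-maximal nodes: those properly
    # extended by some other node of the tree.
    return {s for s in tree
            if any(len(t) > len(s) and t[:len(s)] == s for t in tree)}

def order(tree):
    # For a finite tree, each derivative removes the maximal nodes, so the
    # order is the number of iterations until nothing remains.
    steps = 0
    while tree:
        tree = derive(tree)
        steps += 1
    return steps

T = {(1,), (2,), (1, 1), (1, 2), (1, 1, 1)}  # a small hypothetical tree on N
print(order(T))  # 3: the longest chain (1) ≼ (1,1) ≼ (1,1,1) has length 3
```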
Suppose X is a Banach space. Suppose that K ⊂ X* is non-empty, symmetric, convex, and w*-compact (such sets will henceforth be called bodies). Define the seminorm |·|_K on X by |x|_K = sup_{x*∈K} x*(x). Given two non-empty subsets S_1, S_2 of X, we let d_K(S_1, S_2) = inf_{x∈S_1, y∈S_2} |x − y|_K. For ε > 0, we say a sequence t ∈ X^{<N} is (K, ε)-cs (for convexly separated) if for any 1 ≤ m < |t|, d_K(co(t|_m), co(t|^m)) ≥ ε. We consider the empty sequence to be (K, ε)-cs for every K and ε. We say that an infinite sequence is (K, ε)-cs if all of its initial segments are (K, ε)-cs.
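The separation condition is the engine of everything that follows, so it may help to restate it in display form; this is a direct transcription of the definitions just given, writing co(·) for the convex hull.

```latex
\[
|x|_K = \sup_{x^* \in K} x^*(x), \qquad
d_K(S_1, S_2) = \inf_{x \in S_1,\, y \in S_2} |x - y|_K,
\]
\[
t \in X^{<\mathbb{N}} \text{ is } (K,\varepsilon)\text{-cs}
\iff
d_K\bigl(\mathrm{co}(t|_m),\, \mathrm{co}(t|^m)\bigr) \ge \varepsilon
\quad \text{for all } 1 \le m < |t|.
\]
```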
Note that by the Hahn-Banach theorem, the condition that the sequence t = (x_i)_{i=1}^{|t|} is (K, ε)-cs is equivalent to the following: for every 1 ≤ m < |t|, there exists x* ∈ K so that for each 1 ≤ i ≤ m < j ≤ |t|, x*(x_i − x_j) ≥ ε. Note also that if A : X → Y is an operator, the condition that (x_i)_{i=1}^n is (A*B_{Y*}, ε)-cs is equivalent to the condition that (Ax_i)_{i=1}^n is (B_{Y*}, ε)-cs. That is, for any 1 ≤ m < n and non-negative scalars (a_i)_{i=1}^n with Σ_{i=1}^m a_i = Σ_{i=m+1}^n a_i = 1, ‖Σ_{i=1}^m a_i Ax_i − Σ_{i=m+1}^n a_i Ax_i‖ ≥ ε. Given a Banach space X, a body K ⊂ X*, and ε > 0, we let J(K, ε) denote the tree consisting of all (K, ε)-cs sequences in B_X^{<N}. We let j(K, ε) denote the order of J(K, ε). We define J(K) = sup_{ε>0} j(K, ε). If A : X → Y is an operator, we let J(A) = J(A*B_{Y*}). For an ordinal ξ, we let J_ξ be the class of all operators A so that J(A) ≤ ξ. The main result of this work is the following.
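For ease of reference, the index hierarchy just defined can be displayed in one place; this is a plain transcription of the definitions above, with script and fraktur letters used here only to distinguish the index from the tree, a typographical choice of ours.

```latex
\[
j(K,\varepsilon) = o\bigl(J(K,\varepsilon)\bigr), \qquad
\mathcal{J}(K) = \sup_{\varepsilon > 0} j(K,\varepsilon), \qquad
\mathcal{J}(A) = \mathcal{J}(A^{*} B_{Y^{*}}), \qquad
\mathfrak{J}_{\xi} = \{\, A : \mathcal{J}(A) \le \xi \,\}.
\]
```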
Theorem 2.1. For every ξ ∈ Ord, J_{ω^{ω^ξ}} is a closed operator ideal. Moreover, J_ω is the ideal of all super weakly compact operators and ∪_{ξ∈Ord} J_ξ is the ideal of weakly compact operators. For all ζ ∈ Ord, there exists ξ > ζ so that J_ζ ⊊ J_ξ.
A result of James
Recall that a bounded subset S of X is relatively weakly compact if and only if the w*-closure of S in X** is contained in X, where X is identified with its image under the canonical embedding into X**. From this we deduce that an operator A : X → Y fails to be weakly compact precisely when there exists y** in the w*-closure of AB_X (taken in Y**) which lies outside Y. The following result is due to James, and it is at the heart of our characterization.
Proposition 3.1. [8] If A : X → Y is any operator between Banach spaces and if y** lies in the w*-closure of AB_X but outside Y, then for any 0 < ε < dist(y**, Y) there exists a sequence (x_i) in B_X such that (Ax_i) is (B_{Y*}, ε)-cs. For such a sequence, no convex block of (Ax_i) can be norm convergent, which means no subsequence of (Ax_i) can be weakly convergent. Thus A : X → Y fails to be weakly compact precisely when there exists ε > 0 so that J(A*B_{Y*}, ε) is ill-founded, which happens if and only if there exists ε > 0 so that j(A*B_{Y*}, ε) = ∞. Thus we obtain the following portion of Theorem 2.1.
For the next result, we recall that an operator A : X → Y is called super weakly compact if whenever U is an ultrafilter, the induced operator A_U : X_U → Y_U between the ultrapowers is weakly compact.
We note the following, which follows from standard ultrafilter techniques: if 0 < δ < ε, if A : X → Y is an operator, and if U is any free ultrafilter on N, then j(A*B_{Y*}, ε) > ω implies the existence of an infinite sequence (χ_i) in B_{X_U} such that for every m ∈ N the convex hulls of (A_U χ_i)_{i=1}^m and (A_U χ_i)_{i>m} have norm distance at least ε, while the existence of such a sequence implies j(A*B_{Y*}, δ) > ω. To see this, note that j(A*B_{Y*}, ε) > ω implies the existence, for each n ∈ N, of (x_i^n)_{i=1}^n ∈ B_X^{<N} so that for all 1 ≤ m < n, the convex hulls of (Ax_i^n)_{i=1}^m and (Ax_i^n)_{i=m+1}^n have norm distance at least ε from each other. For each n ∈ N and i > n, let x_i^n = 0. Then the equivalence class χ_i ∈ B_{X_U} containing (x_i^n)_n is such that the convex hulls of (A_U χ_i)_{i=1}^m and (A_U χ_i)_{i>m} have norm distance at least ε from each other for all m ∈ N. Conversely, if (χ_i) ∈ B_{X_U}^N has the latter property, then we can find for all n ∈ N some sequence (x_i^n)_{i=1}^n ∈ B_X^{<N} having the former convex separation property with ε replaced by δ (recall that 0 < δ < ε were fixed constants). This means that j(A*B_{Y*}, δ) ≥ ω. Since J(A*B_{Y*}, δ) includes the empty sequence, j(A*B_{Y*}, δ) cannot be a limit ordinal, so j(A*B_{Y*}, δ) > ω. These observations immediately yield the next portion of Theorem 2.1. (iii) J_ξ(X, Y) is norm closed in L(X, Y), the space of operators from X to Y endowed with the operator norm.
Proof of Proposition 3.4. Both (i) and (ii) are trivial in the case that B is the zero operator, so we assume B is not. It is clear that we may assume ‖B‖ = 1. Taking the supremum over all ε > 0 gives (i). In order to complete the proof that J_{ω^{ω^ξ}} is an ideal, we need only show that it is closed under finite sums. For this, we need one more lemma, the proof of which will comprise the final section of this work.

Corollary 3.6. For each ξ ∈ Ord, J_{ω^{ω^ξ}} is a closed operator ideal.
We recall that in [1], the Bourgain ℓ_1 index of an operator was defined. Given A : X → Y and K ≥ 1, we let T_1(A, X, Y, K) consist of all finite sequences in B_X (including the empty sequence) whose images under A satisfy a K-lower ℓ_1 estimate; by this, we mean sequences (x_i)_{i=1}^n such that for all scalars (a_i)_{i=1}^n, ‖Σ_{i=1}^n a_i A x_i‖ ≥ K^{-1} Σ_{i=1}^n |a_i|. Moreover, it was shown in [1] that for any ordinal ξ, there exists a reflexive Banach space W_ξ so that the identity I_{W_ξ} on W_ξ satisfies NP_1(I_{W_ξ}, W_ξ, W_ξ) > ω^ξ. Thus these examples yield that for any ordinal ξ, there exists a weakly compact operator which does not lie in J_ξ. Moreover, for ξ countable, W_ξ can be taken to be separable. Thus the classes J_ξ exhaust the ideal of weakly compact operators, but each J_ξ is properly contained within the ideal of weakly compact operators. This observation completes the proof of Theorem 2.1.
We note that for any body K, the tree J(K, ε) is closed. That is, for any n ∈ N, the set of sequences of length n lying in J(K, ε) is closed. To see this, simply note that if 1 ≤ m < n is fixed and if for each j ∈ N a functional in K witnesses the separation for the j-th sequence, then any w*-cluster point x* of these functionals witnesses the separation for the limit sequence. Since our definition of body included w*-compactness, x* ∈ K. Therefore if X is separable, Bourgain's version of the Kunen-Martin theorem [4] implies that J(K, ε) is either ill-founded or has countable order. Thus if X is separable and A : X → Y is an operator, A is weakly compact if and only if J(A) < ω_1.
As observed in the previous paragraph, NP_1(A, X, Y) ≤ J(A). In general, however, there is no way to bound J(A) by NP_1(A, X, Y). A somewhat obvious example of this is James space, J, which fails to be reflexive but also fails to contain a copy of ℓ_1; therefore J(I_J) = ∞ while the ℓ_1 index of I_J is countable. More generally, for any set S, we may define the James tree space JT(S), where the Hamel basis (e_t)_{t∈N^{<N}} is replaced by the Hamel basis (e_t)_{t∈S^{<N}}. If S = [1, ξ], there exists a well-founded tree MT_ξ (for a specific example of such a tree, see [6]) on [1, ξ] with o(MT_ξ) = ξ + 1, and the tree (e_t)_{t∈MT_ξ} witnesses the fact that j([e_t : t ∈ MT_ξ], 1) > ξ, where ξ may be uncountable. As before, we deduce that [e_t : t ∈ MT_ξ] is reflexive (actually, [e_t : t ∈ T] is reflexive whenever T is well-founded, which may be shown by induction on the order as in the countable case). However, the ℓ_1 index of JT(S) cannot exceed the ℓ_1 index of JT. To see this, note that if ζ is the ℓ_1 index of JT, ζ is countable. If the ℓ_1 index of JT(S) exceeded ζ, then there would be a separable subspace X of JT(S) having ℓ_1 index exceeding ζ. But then there would exist a countable subset S_0 of S so that X ⊂ [e_t : t ∈ S_0^{<N}]. But the latter space is isometrically isomorphic to a subspace of JT. Thus there exists an ordinal ζ such that for any ordinal ξ, there exists a reflexive Banach space having J index exceeding ξ but ℓ_1 index not exceeding ζ, and we deduce that the J index cannot be controlled by the ℓ_1 index. Similarly, since every block sequence in JT dominates the ℓ_2 basis, JT cannot contain a copy of c_0, and we deduce that we cannot control the J index by the c_0 index. We may similarly deduce results for uncountable indices by passing to JT(S) and repeating the arguments for ℓ_1.
Descriptive set theoretic results
We wish to recall the coding of the class of operators between separable Banach spaces, modeled on Bossard's coding of the class of separable Banach spaces [3]. Let C(2^N) denote the space of all continuous functions on the Cantor set. Recall that SB denotes the space of all closed subspaces of C(2^N), endowed with the Effros-Borel structure, and that this structure is standard. That is, there exists a Polish topology on SB such that the Borel σ-algebra generated by this topology is the Effros-Borel σ-algebra. We fix such a topology on SB, to which we omit direct reference. Recall also [11] that there exists a sequence d_n : SB → C(2^N) of Borel functions, called selectors, such that for each X ∈ SB, d_n(X) ∈ X and D_X := {d_n(X) : n ∈ N} is dense in X. Recall also the definition of the space L ⊂ SB × SB × C(2^N)^N defined in [2] by (X, Y, Â) ∈ L if and only if Â(n) ∈ Y for all n ∈ N and there exists k ∈ N witnessing that the map d_n(X) ↦ Â(n) extends to a bounded, linear operator from X to Y of norm at most k. Then L codes the space of all operators between separable Banach spaces by taking A : X → Y to (X, Y, (A d_n(X))_n) for X, Y ∈ SB. By an abuse of notation, we identify operators with triples in this way. Moreover, L is a Borel subset of SB × SB × C(2^N)^N, and therefore it is also standard. We arrive at the following, which is the final portion of Theorem 1.1: the index J is a coanalytic rank on L ∩ WC, the class of weakly compact operators between separable Banach spaces.

Remark. It follows from this result that for any countable ξ, {(X, Y, Â) ∈ L : J(A) ≤ ξ} is a Borel subset of L, and for any analytic subset A of L, sup{J(A) : A ∈ A} is countable. Moreover, it follows from the proof, which involves a Borel reduction of the weakly compact operators to the well-founded trees on N, that L ∩ WC is coanalytic in L. These facts concerning coanalytic ranks can be found in [7].
Remark. It was shown in [2] that L ∩ WC is coanalytic complete, and in particular non-Borel, in L.
Proof. Let Tr denote the trees on N, topologized with the relative topology inherited from 2^{N^{<N}}. Let WF ⊂ Tr denote the well-founded trees in Tr. To show that J is a coanalytic rank, it suffices to show that there is a Borel map f : L → Tr such that f^{-1}(WF) = L ∩ WC and o(f(X, Y, Â)) = J(A) + 1 [7]. The second and third facts follow from an inessential modification of a similar argument from [1] concerning the indices NP_p. To see that f is Borel, as argued in [1], it is sufficient to note that membership of each finite sequence in f(X, Y, Â) is determined by a collection of countably many Borel conditions; since this is a collection of countably many Borel conditions, we deduce that f is Borel.
5. Proof of Lemma 3.5
5.1. The Hessenberg sum, results on simple colorings, tree multiplication. Recall that any ordinal ξ can be uniquely written ξ = ω^{α_1} n_1 + ... + ω^{α_k} n_k for ordinals α_1 > ... > α_k and natural numbers n_i (where k = 0 corresponds to ξ = 0) [13]. This is called the Cantor normal form of ξ. If ξ, ζ are two ordinals, by adding zero terms to the Cantor normal forms of ξ and ζ, we can express ξ = ω^{α_1} m_1 + ... + ω^{α_k} m_k and ζ = ω^{α_1} n_1 + ... + ω^{α_k} n_k, where the same ordinals α_i appear in both expressions. We then define the Hessenberg or natural sum by ξ ⊕ ζ = ω^{α_1}(m_1 + n_1) + ... + ω^{α_k}(m_k + n_k). Because it is rather inconvenient to include the empty sequence in our proofs below, we will be concerned with subsets T of S^{<N} \ {∅} such that T ∪ {∅} is a tree. Such sets are called B-trees. The notions of derived B-trees and orders can be relativized to B-trees.
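Written out, with a small worked example for concreteness; the example ordinals are ours, chosen only to contrast the natural sum with ordinary ordinal addition.

```latex
\[
\xi = \sum_{i=1}^{k} \omega^{\alpha_i} m_i, \quad
\zeta = \sum_{i=1}^{k} \omega^{\alpha_i} n_i
\;\Longrightarrow\;
\xi \oplus \zeta = \sum_{i=1}^{k} \omega^{\alpha_i} (m_i + n_i).
\]
% Example (exponents \alpha_1 = 1, \alpha_2 = 0):
%   (\omega \cdot 2 + 3) \oplus (\omega + 1) = \omega \cdot 3 + 4,
% whereas ordinary ordinal addition is not commutative and gives
%   (\omega \cdot 2 + 3) + (\omega + 1) = \omega \cdot 3 + 1.
```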
Given B-trees U and V and a monotone map θ : U → V, we say a function e : MAX(U) → MAX(V) is an extension of θ if for each s ∈ MAX(U), θ(s) ≼ e(s). We say a pair (θ, e) : U × MAX(U) → V × MAX(V) is an extended monotone map if θ is monotone and e is an extension of θ. To avoid cumbersome notation, we will often say an extended monotone map (θ, e) : U → V. Note that if V is a non-empty subset of a well-founded B-tree and if θ : U → V is any monotone map, then there exists an extension e of θ.
The following method for "multiplying" non-empty, well-founded B-trees is inspired by the "replacement trees" defined in [9]. We identify ∅ with (∅, ∅). Given a member x of S and n ∈ N, we let x^{(n)} denote the constant sequence in S which has length n and begins with x.
The intuition behind this construction is to build a "tree of trees," where we think of beginning with the tree T_0 and replacing each of its members with a tree isomorphic to T_1. The interested reader is invited to compare this process with the convolution of regular families and its effects on the Cantor-Bendixson index, discussed, for example, in [12]. Of particular interest to us will be the B-trees M_ξ, defined in [6]. We let M_1 = {(1)}, M_{ξ+1} = {(ξ+1)} ∪ {(ξ+1)⌢t : t ∈ M_ξ}, and M_ξ = ∪_{η<ξ} M_{η+1} when ξ is a limit. Note that when ξ is a limit ordinal, M_ξ is a totally incomparable union. We will also be interested in the B-trees [M_ξ, M_k] for k ∈ N. In this case, an arbitrary member t of [M_ξ, M_k] can be written uniquely in terms of the level it occupies, and we say that t is in the i-th level of [M_ξ, M_k]. The first level is naturally order isomorphic to M_ξ via the map t ↔ (t, k^{(|t|)}). Moreover, for i < k, if t is maximal in the i-th level, then the proper extensions of t which lie in the (i+1)-st level, a set which we will call the unit under (or beneath) t, form a set naturally order isomorphic to M_ξ. The following result is easily shown by induction on ζ for ξ held fixed. This result can also be deduced from the well-known and easy-to-see result that for B-trees T_0 and T_1, there exists a monotone function θ : T_0 → T_1 if and only if o(T_0) ≤ o(T_1) (see [10]). The following facts were shown in [6]: there exists ε ∈ 2 and an extended monotone map (θ, e) : M_{ω^ξ n} → M_{ω^ξ (2n−1)} so that for all s ≼ t ∈ MAX(M_{ω^ξ n}), ε = f(θ(s), e(t)).
The goal of this subsection is to discuss such colorings when S is finite and how to find "large subtrees" T_0 of T such that f|_{Λ(T_0)} is constant. To that end, we have the following result.
We will implicitly use these facts throughout. It will be very convenient, however, to allow the trees T, T_0, and T_1 to be other trees besides M_ξ for some ξ, so we do not state the lemma in this way.
Remark. The base case of Lemma 5.4 is equivalent to the finite Ramsey theorem from [14]: for any n ∈ N, there exists N = N(n) ∈ N so that for any M ∈ N with M ≥ N and any function f : {(i, j) : 1 ≤ i < j ≤ M} → 2, there exist 1 ≤ p_1 < ... < p_n ≤ M and ε ∈ 2 so that f(p_i, p_j) = ε for all 1 ≤ i < j ≤ n. We simply note that M_M is order isomorphic to {1, ..., M}, and so there is a natural bijection between Λ(M_M) and {(i, j) : 1 ≤ i < j ≤ M}. If f : M_ω → 2 is any function, then for each n ∈ N, we find a monotone map θ_n : M_n → M_{N(n)} and ε_n so that f(θ_n(t_i), θ_n(t_j)) = ε_n for each 1 ≤ i < j ≤ n, where M_n = {t_1, ..., t_n}, t_1 ≺ ... ≺ t_n. We then choose n_1 < n_2 < ... and ε ∈ 2 so that ε_{n_i} = ε for all i ∈ N, and let T_ε = ∪_i M_{n_i}. Then the monotone map θ_ε from T_ε to T is given by θ_ε|_{M_{n_i}} = θ_{n_i}. Then o(T_ε) = ω^1, and we set ξ_ε = 1. The monotone map θ_{1−ε} : M_1 → M_ω given by mapping the unique member of M_1 to any member of M_ω vacuously satisfies the condition required of it, since Λ(M_1) is empty.
Moreover, the ill-founded analogue of Lemma 5.4 is just the infinite Ramsey theorem. The ill-founded analogue would be that if f : Λ(T) → 2 is any function, where T is ill-founded, then there exist ε ∈ 2, an ill-founded tree T_0, and a monotone map θ : T_0 → T so that f(θ(s), θ(t)) = ε for all (s, t) ∈ Λ(T_0). This is precisely equivalent to the statement: For any function f : {(i, j) ∈ N × N : i < j} → 2, there exist ε ∈ 2 and natural numbers m_1 < m_2 < … so that f(m_i, m_j) = ε for all 1 ≤ i < j. This is because a tree is ill-founded if and only if it contains an infinite chain s_1 ≺ s_2 ≺ …, in which case (i, j) ↦ f(s_i, s_j) defines a 2-coloring of {(i, j) : i < j}, and we may define a monotone map from T_0 = {(1, …, n) : n ∈ N} by θ((1, …, n)) = s_{m_n}.

Remark 5.6. We will prove Lemma 5.4 by induction on ξ. We have already argued the base case. We have already noted that if ζ = 0, the existence of a monotone θ : M_{ω^ζ} → T so that f(θ(s), θ(t)) is constant on Λ(M_{ω^ζ}) = Λ(M_1) = ∅ is trivial. For ζ > 0, the existence of ε, T_ε with o(T_ε) = ω^{ξ_ε}, and a monotone map θ_ε : T_ε → T satisfying the conclusions of Lemma 5.4 is equivalent to the existence of a subset A of [0, ω^{ξ_ε}) with sup A = ω^{ξ_ε} and, for each ζ ∈ A, the existence of T_{ε,ζ} with o(T_{ε,ζ}) = ζ and a monotone map θ_{ε,ζ} : T_{ε,ζ} → T so that f(θ_{ε,ζ}(s), θ_{ε,ζ}(t)) = ε for all (s, t) ∈ Λ(T_{ε,ζ}). The tree T_ε can then be taken to be a totally incomparable union of the B-trees T_{ε,ζ} (or, formally, B-trees order isomorphic to the T_{ε,ζ} but made to be totally incomparable), the map θ_ε will be equal to θ_{ε,ζ} when restricted to T_{ε,ζ}, and o(T_ε) = sup_{ζ∈A} o(T_{ε,ζ}) = ω^{ξ_ε}. This fact will be prevalent, so we isolate it to avoid repetition during our proofs.
For the successor case, we will do some preliminary work. We return to the proof of the claim at the beginning of the previous paragraph. We prove the result by induction on n. If n = 1, the conclusion is vacuous when θ is the identity, since there is only one level. Assume the result holds for a given n ∈ N and let f : Λ([M_{ω^ξ}, M_{2^n}]) → 2 be as in the statement of the claim. Let T be the first 2^n − 1 levels of [M_{ω^ξ}, M_{2^n}], which is naturally order isomorphic to [M_{ω^ξ}, M_{2^n − 1}]. For t ∈ MAX(T) and U_t the unit beneath t, define f_t : U_t → 2^{|t|} by letting f_t(u) = (f(t|_i, u))_{i=1}^{|t|}. Using Proposition 5.2 and noting that U_t can be identified with M_{ω^ξ}, we deduce the existence of (ε^t_i)_{i=1}^{|t|} ∈ 2^{|t|} and a monotone map θ_t : U_t → U_t so that f_t(θ_t(u)) = (ε^t_i)_{i=1}^{|t|} for every u ∈ U_t. Define f′ : T × MAX(T) → 2 by letting f′(s, t) = ε^t_{|s|} and note that f′(s, t) = f(s, θ_t(u)) for any proper extension u of t. Using Proposition 5.3 and recalling that T is order isomorphic to [M_{ω^ξ}, M_{2^n − 1}], we deduce the existence of a monotone map θ : [M_{ω^ξ}, M_{2^n − 1}] → T, an extension map e : MAX([M_{ω^ξ}, M_{2^n − 1}]) → MAX(T) of θ, and ε_{n+1} ∈ 2 so that ε_{n+1} = f′(θ(s), e(t)) for any s ⪯ t ∈ MAX([M_{ω^ξ}, M_{2^n − 1}]). Let f″ : Λ([M_{ω^ξ}, M_{2^n − 1}]) → 2 be given by f″(s, t) = f(θ(s), θ(t)). Applying the inductive hypothesis, we obtain (ε_i)_{i=1}^{n} ∈ 2^n and a monotone map θ″ : [M_{ω^ξ}, M_n] → [M_{ω^ξ}, M_{2^n − 1}] so that for s ≺ t, with s in the i-th level, t in the j-th level, and i < j, f″(θ″(s), θ″(t)) = ε_j. We last define θ′ on [M_{ω^ξ}, M_{n+1}]. Let ι be the natural order isomorphism between [M_{ω^ξ}, M_n] and the first n levels of [M_{ω^ξ}, M_{n+1}]. Let θ′ be defined on the first n levels of [M_{ω^ξ}, M_{n+1}] by letting θ′ = θ ∘ θ″ ∘ ι^{−1} on those levels. It is straightforward to verify that for s ≺ t, s on the i-th level, t on the j-th level, 1 ≤ i < j ≤ n, f(θ′(s), θ′(t)) = ε_j. Fix t maximal in the n-th level of [M_{ω^ξ}, M_{n+1}] and let U_t be the unit beneath t. Let U_{e(θ″∘ι^{−1}(t))} be the unit beneath e(θ″∘ι^{−1}(t)) and let ι_t be the natural order isomorphism between U_t and U_{e(θ″∘ι^{−1}(t))}. Let θ′|_{U_t} = θ_{e(θ″∘ι^{−1}(t))} ∘ ι_t. Fix s ≺ u with s on the i-th level for some i ≤ n and u on the (n+1)-st level. Then there exists a unique t which is a maximal member of the n-th level of [M_{ω^ξ}, M_{n+1}] such that s ⪯ t ≺ u. Then by the properties of (θ, e) and the first sentence at the beginning of the paragraph (with s replaced by θ″ ∘ ι^{−1}(s), t replaced by e(θ″ ∘ ι^{−1}(t)), and u replaced by θ_{e(θ″∘ι^{−1}(t))} ∘ ι_t(u)), f(θ′(s), θ′(u)) = ε_{n+1}.

(ii) We first claim that for any k and any function g as in the statement of Proposition 5.7, there exists a monotone map θ : [M_{ω^ξ}, M_k] → [M_{ω^ξ}, M_k] taking the i-th level to the i-th level, so that the restriction of θ to a unit is an order isomorphism with a unit of [M_{ω^ξ}, M_k], and so that there exists (x_i)_{i=1}^{k} ∈ S^k so that if s is on the i-th level of [M_{ω^ξ}, M_k], g(θ(s)) = x_i. The result in the case that k = n|S| then follows by the pigeonhole principle, which guarantees the existence of 1 ≤ l_1 < … < l_n ≤ n|S| and x ∈ S so that x = x_{l_i} for all 1 ≤ i ≤ n, and using the method from part (i) for defining a monotone map from [M_{ω^ξ}, M_n] to [M_{ω^ξ}, M_k] mapping the i-th level to the l_i-th level and then composing this map with the θ from the claim. We prove the claim by induction on k. For k = 1, the result holds trivially, since the hypothesis guarantees that g is constant on the single unit of [M_{ω^ξ}, M_1].
Assume the result holds for a given k and assume g : [M_{ω^ξ}, M_{k+1}] → S is as in the statement. Fix t maximal in the first level. Note that the proper extensions of t form a set naturally order isomorphic to [M_{ω^ξ}, M_k]. Let E_t denote the proper extensions of t in [M_{ω^ξ}, M_{k+1}] and define g_t : E_t → S by letting g_t(s) = g(s). Identifying E_t with [M_{ω^ξ}, M_k] (as we may by the previous remarks) and using the inductive hypothesis, we obtain a monotone θ_t : E_t → E_t and a finite sequence as in the claim. Letting E_{e_0(t)} denote the proper extensions of e_0(t), we note that E_t is naturally order isomorphic to E_{e_0(t)}. Let p_t : E_t → E_{e_0(t)} denote the natural order isomorphism, and let θ|_{E_t} = θ_{e_0(t)} ∘ p_t. This θ is clearly seen to satisfy the claim with the sequence (x_i)_{i=1}^{k+1}, where x_1 is the common value of g on the first level of [M_{ω^ξ}, M_{k+1}], which consists of a single unit.
Proof of Lemma 5.4, successor case. Assume Lemma 5.4 holds for a given ξ ≥ 1 and fix a B-tree T with o(T) ≥ ω^{ξ+1} and any function f : Λ(T) → 2. Let S = {(γ_0, γ_1) : γ_0 ⊕ γ_1 = ξ}, and note that S is finite. We first claim that there exist natural numbers n_1 < n_2 < …, ε ∈ 2, a pair (ξ_0, ξ_1) ∈ S, and monotone maps θ′_i : [M_{ω^ξ}, M_{n_i}] → T so that (i) for each i ∈ N and (s, t) ∈ Λ([M_{ω^ξ}, M_{n_i}]) with s, t on different levels, f(θ′_i(s), θ′_i(t)) = ε, and (ii) for each i ∈ N and each unit U of [M_{ω^ξ}, M_{n_i}], there exist monotone maps φ_0 : M_{ω^{ξ_0}} → U and φ_1 : M_{ω^{ξ_1}} → U so that for j = 0, 1 and (s, t) ∈ Λ(M_{ω^{ξ_j}}), f(θ′_i(φ_j(s)), θ′_i(φ_j(t))) = j. We first show how this finishes the proof, and then we show this claim. Suppose for convenience that the ε in the claim is equal to 0. Then fix any i ∈ N and let U be the first unit of [M_{ω^ξ}, M_{n_i}]. Let φ_1 : M_{ω^{ξ_1}} → U be as in (ii) of the claim. Then T_1 = M_{ω^{ξ_1}} and θ_1 : T_1 → T defined by θ_1 = θ′_i ∘ φ_1 satisfy the j = 1 conclusion of Lemma 5.4. We will construct for each i ∈ N a monotone map ϕ_i : [M_{ω^{ξ_0}}, M_{n_i}] → [M_{ω^ξ}, M_{n_i}] taking each level to the corresponding level. We let V_∅ denote the first level of [M_{ω^{ξ_0}}, M_{n_i}] and ι_∅ : V_∅ → M_{ω^{ξ_0}} be the natural order isomorphism, and define ϕ_i on V_∅ to be φ_0 ∘ ι_∅. Next, assume ϕ_i has been defined on the first k levels of [M_{ω^{ξ_0}}, M_{n_i}] for some k < n_i and that ϕ_i takes the j-th level of [M_{ω^{ξ_0}}, M_{n_i}] to the j-th level of [M_{ω^ξ}, M_{n_i}] for each 1 ≤ j ≤ k. Fix t maximal in the k-th level of [M_{ω^{ξ_0}}, M_{n_i}], and let u be an extension of ϕ_i(t) which is a maximal member of the k-th level of [M_{ω^ξ}, M_{n_i}]. Let V_t be the unit beneath t and let U_u be the unit beneath u. Let ι_t : V_t → M_{ω^{ξ_0}} be the natural order isomorphism. Fix some monotone φ_t : M_{ω^{ξ_0}} → U_u as in (ii) and let ϕ_i be equal to φ_t ∘ ι_t on V_t. This completes the recursive construction, and it is clear that ϕ_i has the announced property. If s ≺ u for s, u lying in the same unit, then f(θ′_i(ϕ_i(s)), θ′_i(ϕ_i(u))) = 0 by (ii); if s and u lie on different levels, the same equality holds. This follows from (i) together with the fact that φ_{t_1} ∘ ι_{t_1}(s) and φ_{t_2} ∘ ι_{t_2}(u) lie on different levels when s and u do.
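The finiteness of the set S in the proof above is what one expects if ⊕ denotes the Hessenberg (natural) sum of ordinals, a reading this excerpt does not define but which is consistent with its use here: writing both ordinals in Cantor normal form over a common decreasing sequence of exponents, the natural sum adds coefficients pointwise, and each ordinal then has only finitely many decompositions γ_0 ⊕ γ_1 = ξ.

```latex
% Natural (Hessenberg) sum, assuming this is the intended reading of \oplus.
\[
  \xi = \sum_{i=1}^{m} \omega^{\alpha_i} k_i,\qquad
  \zeta = \sum_{i=1}^{m} \omega^{\alpha_i} l_i
  \quad (\alpha_1 > \dots > \alpha_m,\; k_i, l_i \in \mathbb{N} \cup \{0\}),
\]
\[
  \xi \oplus \zeta = \sum_{i=1}^{m} \omega^{\alpha_i} (k_i + l_i).
\]
```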
Remark. We remark that the proof of Lemma 5.4 makes it easy to construct examples showing that the result is sharp. That is: For any ξ ∈ Ord and for any pair (ζ_0, ζ_1) such that ζ_0 ⊕ ζ_1 = ξ, there exists a B-tree T with o(T) = ω^ξ and a function f : Λ(T) → 2 so that if ξ_0, ξ_1 ∈ Ord are ordinals with ξ_0 ⊕ ξ_1 = ξ, T_0, T_1 are B-trees, and if θ_0 : T_0 → T, θ_1 : T_1 → T are monotone maps such that for ε ∈ 2 and (s, t) ∈ Λ(T_ε), f(θ_ε(s), θ_ε(t)) = ε, then o(T_0) ≤ ω^{ζ_0} and o(T_1) ≤ ω^{ζ_1}. We only sketch the proof, since we do not use this fact in the sequel. We do, however, remark that for any T satisfying the assertion above and any monotone θ : M_{ω^ξ} → T, f′(s, t) = f(θ(s), θ(t)) defines a function f′ : Λ(M_{ω^ξ}) → 2 also satisfying the claim above. Therefore this claim is true if and only if it is true when T is replaced by M_{ω^ξ}.
Next, assume the result holds for an ordinal ξ and all pairs (ζ′_0, ζ′_1) with ζ′_0 ⊕ ζ′_1 = ξ. Let ζ_0 ⊕ ζ_1 = ξ + 1. Then one of ζ_0 and ζ_1 must be a successor; assume for convenience that ζ_0 = ζ′_0 + 1. Fix a function g : Λ(M_{ω^ξ}) → 2 satisfying the conclusion of this claim for the pair (ζ′_0, ζ_1). Note that ∪_k [M_{ω^ξ}, M_k] is a totally incomparable union. Define f : Λ(∪_k [M_{ω^ξ}, M_k]) → 2 by letting f(s, t) = 0 when s and t lie on different levels of [M_{ω^ξ}, M_k] (note that, of course, if s and t are comparable, they must both lie in [M_{ω^ξ}, M_k] for the same k), and by letting f(s, t) = g(ι(s), ι(t)) when s, t lie in the same unit of [M_{ω^ξ}, M_k], where ι is the natural order isomorphism from the unit containing s and t to M_{ω^ξ}. It is easy to check that the conclusion is satisfied by this construction.
We state a few more simple propositions which we need to complete the proof of Lemma 3.5. For a well-founded B-tree, we let . The content of this proposition is that we can remove from f the dependence on the second argument in (i), and remove from f the dependence on the third argument in (ii).
The scheme of the proof is as follows: Since [M_2, M_ζ] and M_ζ have the same order, and since [M_2, M_ζ] is essentially M_ζ with each of its nodes replaced by a pair of nodes, we can define p(s) and q(s) to be the sequences terminating at the first and second nodes, respectively, which took the place of the original node s.
We are now ready to prove Lemma 3.5. | 2015-08-23T18:12:41.000Z | 2015-08-09T00:00:00.000 | {
"year": 2015,
"sha1": "33141a61e8cfafb8d3657780d58e151fbff5f5ea",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "33141a61e8cfafb8d3657780d58e151fbff5f5ea",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
162509 | pes2o/s2orc | v3-fos-license | Contextual Influences in Visual Processing
Definition
Vision is the analysis of patterns in visual images with a view to understanding the objects and the physical processes in the world that generate them. Locally, visual patterns are highly ambiguous and subject to multiple interpretations. Image structures surrounding the pattern being analyzed can provide additional constraints or context to disambiguate the interpretation. The resulting ▶contextual influences are ubiquitous in visual perception and manifest at the neuronal level as the modulation of the activity of neurons by image structures outside their ▶classical receptive fields.
Characteristics
The study of contextual influences in visual processing has a long history in psychology and neuroscience [1]. Investigations of these effects in the visual system have focused on the ▶modulatory effect on the activity of a neuron by image structures outside its localized ▶receptive field. The classical approach employs the simplest stimuli such as bars and sinusoidal gratings to probe the interaction between the stimuli presented inside and outside a neuron's classical receptive field. A prevalent finding is that neurons in both the ▶primary visual cortex (striate cortex, V1) and the ▶extrastriate cortex exhibit ▶feature contrast enhancement, i.e., the cells respond better when the stimulus attributes in the area surrounding their receptive fields, such as bar orientation, are different from those inside their receptive fields (Fig. 1a).
Recent approaches seek to understand the neural basis of the perceptual interpretation of the local receptive field stimulus by changing the global image context (Fig. 1b). With this approach, a number of neural correlates of perception have been revealed, providing insights into the representation of subjective perceptual experience in the brain.
Contextual Influences in the Primary Visual Cortex
Neurons in the primary visual cortex receive converging input from the ▶lateral geniculate nucleus (LGN). A neuron's classical receptive field, also known as the minimum responsive field, is the part of visual space in which the presence of appropriate features can excite the neuron. By definition, stimulating the visual space outside a neuron's classical receptive field cannot evoke a response. Modulation of neuronal activity by surround stimulation can be observed, however, only when the neuron is responding to a stimulus presented to its receptive field. This modulation is called the nonclassical or ▶extra-classical receptive field effect. Such effects have been considered neural manifestations of contextual influences in visual perception.
A variety of extra-classical receptive field effects have been identified. A commonly reported phenomenon is called ▶surround suppression: the response of a neuron to an oriented bar or grating within its receptive field is suppressed when stimuli are simultaneously introduced to the surrounding area outside its receptive field. There are several types of surround suppression effects, mediated by a number of ▶local circuits as well as ▶recurrent feedback circuits [2]. The early phase of surround suppression is fast and is not sensitive to the exact parameters of the surround stimuli. However, the later phase of surround suppression is stimulus-specific. Simply put, while the neuron can detect the presence of stimuli in the surround immediately, its sensitivity to the precise nature of the surround stimulus or global context takes time to develop. The onset delay of this sensitivity varies considerably depending on the types of the stimuli and the spatial extent of the contextual stimuli.
One well-known stimulus-specific surround suppression, observed with an onset delay, is called ▶iso-orientation suppression. In this phenomenon, a neuron's response is stronger when the orientation of the surround stimulus is different from that of the center receptive field stimulus than when the orientations are the same. When the receptive field stimulus is a bar, iso-orientation suppression emerges at about 10 ms after the onset of the response to the receptive field stimulus [3]. When the receptive field stimulus is part of an oriented texture region significantly larger than the receptive field, the later part of the neuron's response is inversely proportional to the size of the region: the larger the region, the smaller the response. This results in a relative enhancement of response when the neuron's receptive field is inside a smaller region than when it is in the larger background region. Interestingly, the enhancement is uniform across the surface of a compact region, with a sudden drop-off at the region's border. Hence, it has been proposed to be a signal that could highlight a figure against its background and is called the ▶figure enhancement effect [4]. According to most studies, the onset delay of this figure enhancement effect is proportional to the size of the region. When the receptive field is at the center of a region that is six times larger than its size, the onset delay is typically 40 ms relative to response onset on average. The figure enhancement effect is more general than iso-orientation suppression as it has been observed in studies with motion or shape-from-shading stimuli without any orientation contrast between the receptive field stimulus and the surround [4,5].
Functionally, both iso-orientation suppression and figure enhancement can serve to enhance stimulus feature contrast, resulting in an increase in ▶perceptual saliency of the representation of less expected or surprising visual events to facilitate further processing. Indeed, it has been demonstrated that this response enhancement is directly proportional to perceptual saliency of the visual pattern, as measured in terms of the reaction time for target detection, and it is dissociable from luminance contrast or orientation contrast in the stimulus (Fig. 1b) [5]. The broader spatial extent and the longer onset latency of the figure enhancement effect suggest that, while iso-orientation suppression might be mediated primarily by inhibitory ▶local circuits, the figure enhancement or perceptual saliency effect likely involves additional long range facilitation circuits including recurrent ▶feedback from the extrastriate cortex, as suggested by both anatomical and deactivation studies.
Surround interaction can be quite complex and can vary according to the luminance contrast or the spatial scale of the stimuli. While surround modulation tends to be suppressive when the luminance contrast of the stimulus is strong, it can become facilitatory when the luminance contrast is weak. Neuronal ▶adaptation, well known in the ▶retina and LGN, is sensitive to the absolute luminance and luminance contrast levels in the entire scene. In a dark and low-contrast environment, retinal and LGN neurons are known to expand their receptive fields temporally and spatially with a simultaneous increase in their sensitivity gains. Such a strategy serves to optimize feature detection in the presence of noise. The contrast dependence in surround influence likely results from V1 neurons inheriting and extending these adaptation or optimization strategies.
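The entry stops short of naming a circuit model for this contrast-dependent surround modulation, but a common computational idealization in the literature is divisive normalization, in which the driven response is divided by activity pooled over the surround. The sketch below is a minimal illustration of that idea only; it is not a model drawn from the studies cited here, and the gain and semi-saturation constants are arbitrary illustrative values.

```python
def normalized_response(center_drive, surround_drive, sigma=0.2, gain=1.0):
    """Toy divisive-normalization account of surround modulation.

    center_drive:   feedforward drive from the classical receptive field
    surround_drive: pooled drive from stimuli outside the receptive field
    sigma:          semi-saturation constant (arbitrary illustrative value)

    At high contrast the surround term dominates the denominator and the
    response is suppressed; at low contrast (drives much smaller than
    sigma) the surround has little divisive effect, in line with the
    weaker surround suppression at low luminance contrast noted above.
    """
    return gain * center_drive**2 / (sigma**2 + center_drive**2 + surround_drive**2)

# High-contrast stimulus: adding a surround strongly suppresses the response.
print(normalized_response(1.0, 0.0), normalized_response(1.0, 1.0))    # ~0.96 vs ~0.49
# Low-contrast stimulus: the same relative surround barely changes it.
print(normalized_response(0.05, 0.0), normalized_response(0.05, 0.05)) # ~0.059 vs ~0.056
```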
Perceptual computations supported by the complex machinery in V1 likely go beyond feature detection and feature contrast enhancement. From a computational perspective, contextual effects reflect the influence of computational constraints, realized by neuronal connectivity and interaction, necessary for solving visual inference problems. Surround interaction can bring in contextual information to improve local estimates of visual cues, as evident in the observations that ▶orientation tuning curves and ▶disparity tuning curves tend to sharpen over time during the analysis of each visual image. The ▶retinotopic organization, the connection infrastructure, and the tuning properties of neurons in V1 make it ideally suitable for supporting a variety of visual computations. One such computation is the grouping of edges into contours and features into coherent regions. There is some evidence that V1 plays an important role in this computation, to be discussed below.

Contextual Influences in Visual Processing. Figure 1: Stimuli used in contextual modulation studies. (a) Classic center-surround stimuli that have been typically used in neurophysiological studies on iso-orientation surround suppression [3]. Neurons tend to respond better when the orientations of the center and surround gratings are different (left image) than when they are the same (right image). The red ellipse outlines the spatial extent of the receptive field of the neuron. A similar effect observed in a larger center patch with a significantly longer delay is called figure enhancement [4]. (b) Surround context can change the perceptual saliency of the receptive field stimulus. The receptive field stimulus is said to pop out from the background in the left image, but not in the right image. This pop-out phenomenon depends on 3D interpretation of the stimulus elements. Early visual neurons' activity is correlated with the perceptual saliency of this pop-out phenomenon [5].
First, the activity of some V1 neurons is enhanced if the surrounding bars outside their receptive fields line up with the bar presented within their receptive fields to form a longer contour (Fig. 2a).
Moreover, some V1 neurons respond to the ▶subjective contour of a ▶Kanizsa figure, even when no feature is presented to their classical receptive fields (Fig. 2b). There is also evidence that neurons can interpolate contours across the blind spot or behind an occlusion. Furthermore, collinear contours have been found to induce neuronal synchrony in V1 neurons of the same ▶orientation selectivity. Recently, it was also found that neurons with different orientation tunings, when stimulated simultaneously by curved contours, also exhibit an increase in synchrony or ▶effective connectivity, as revealed by multi-electrode recordings [6]. This dynamic change in effective connectivity between neurons as a function of stimulus is suggestive of a mechanism for ▶contour completion.
In addition, similar changes in effective connectivity have also been observed among spatially disjoint ▶disparity selective neurons when the 3D depth plane of the random dot stereogram stimulus intersects with the cells' optimal disparity tunings. This process appears to contribute to the gradual sharpening of the neurons' disparity tunings over time, providing a plausible mechanism for improving local estimates of visual cues based on global context. Such cooperative or mutual facilitatory mechanisms might also contribute to surface association by increasing the firing rates of the neurons analyzing different parts of the same visual surface simultaneously. The resulting enhanced and correlated activities, partly represented in the figure enhancement effect, can highlight the relevant coincident features in visual input as a group to provide a stronger drive for downstream neurons in the extrastriate cortex to learn explicit representations for higher order features and structures.
Contextual Influences in the Extrastriate Visual Cortex
The extrastriate cortex, downstream from the striate or primary visual cortex, is partitioned into many different visual areas. The feature contrast enhancement effect observed in V1 is also prevalent in extrastriate visual areas, expressed in the respective feature dimensions that neurons in those areas are tuned to. In area ▶MT (middle temporal), for example, the motion of surround stimuli has been shown to significantly modulate the response of a neuron to moving stimuli presented to its receptive field. The response of the neuron is suppressed when the direction of surround motion is the same as the motion detected in the neuron's receptive field. This is analogous to the iso-orientation suppression in V1 but in the motion domain. In addition, the disparity-tuned MT neurons also experience iso-disparity suppression.
The extrastriate cortical areas, however, exhibit some additional contextual effects that are rarely observed in the striate cortex. Many of these new contextual effects are concerned with the inference of 3D surfaces and their occlusion and depth-ordering relationships, also known as ▶figure-ground organization. In MT, it has been shown that the responses of direction-selective neurons to a motion stimulus are sensitive to the figure-ground context defined by the surrounding surface depth structures in a way that is consistent with the ▶Barber Pole illusion [7].
Several lines of evidence suggest that the computations underlying figure-ground segregation and 3D surface inference might start in visual area V2. First, a significant fraction of V2 neurons (and a small number of V1 neurons) have been shown to signal whether their receptive fields are at the left border or the right border of a figure in an image regardless of the polarity of contrast at the border (Fig. 3a).
A left-border-preferring neuron carries the information that the border within its receptive field belongs to (or is owned by) the surface or region to its right [8].
Contextual Influences in Visual Processing. Figure 2: Neurophysiological evidence of contour completion in V1. (a) Oriented bars in the surround (left image), when aligned with the receptive field stimulus to form a contour, can increase a cell's response to its receptive field stimulus (right image) (Kapadia, Westheimer and Gilbert 2000). The red ellipse outlines the spatial extent of the receptive field of the neuron. (b) The subjective contour of a Kanizsa illusory square can evoke a response in a V1 neuron even when no stimulus feature is present in its receptive field (red ellipse) (Lee and Nguyen 2001). The subtle addition of thin circles on the right image changes the perceptual interpretation of the image from a white square occluding four black circular disks, with a vivid subjective contour over the receptive field (left image), to that of a white square in a background visible through four circular windows on a white wall in front (right image).
A complementary, right-border-preferring neuron exists at the same location, and both neurons could form a push-pull pair for every border orientation. The activity of a set of such pairs of ▶border-ownership neurons in various orientations along the border of each region in an image can encode the depth-order relationship between the different image regions or inferred surfaces. Secondly, it has been found that neurons in V2, but not in V1, are sensitive to the mismatch in features between the images from each eye at visual locations where one surface occludes another [9]. The emergence of sensitivity to this surface occlusion cue in V2, known as the ▶Da Vinci stereo, further suggests that 3D surfaces and their occlusions are explicitly represented in V2. The figure-ground context made explicit in V2 could feed back to constrain the computation in V1, resulting in, for example, the figure enhancement effect. However, it should be noted that the figure enhancement effect in V1 has not been conclusively demonstrated to depend solely on figure-ground organization.
The perception of surface attributes such as brightness, shading and color depends very strongly on the interpretation of the underlying 3D surface geometry and the illumination direction in the visual scene. Two observations suggest that these surface attributes might also be inferred and represented in V2 because of the dependence of such inference on 3D surface interpretation. First, the neural correlate of ▶shape-from-shading pop-out, a perceptual phenomenon that crucially depends on 3D surface interpretation, is observed in V2 but not in V1 pre-attentively [4]. Second, the neural correlate of the ▶Cornsweet-O'Brien illusion, an illusion in perceived brightness induced by edge contrast, which ultimately can be traced back to surface geometry and lighting direction interpretations in natural scenes, is observed in V2 but not V1 [10] (Fig. 3b). There has been, however, some evidence for brightness representation in V1 [1]. It is possible that the construction of brightness representation is a gradual and distributed process, computed first in V1 based on surround luminance contrast, but achieving a more abstract and invariant representation in V2 as the 3D surface representation is made explicit. In general, neuronal activities tend to become progressively more abstract and more correlated with our subjective perceptual experience as one moves up the visual hierarchy.
In addition to global image structures, behavior, task demands and memory are also known to provide strong contextual information to influence visual perception and object recognition. ▶Attentional modulation of neuronal responses has been widely observed and studied in the extrastriate cortex (see ▶Visual Attention). Attentional effects in V1 are subtle and observable mostly when visual scenes are cluttered or in tasks that demand considerable spatial attention at precise locations such as the task of tracing a curve. Beyond V2, extrastriate neurons tend to have large receptive fields. Attentional modulation in neurons of these higher areas typically manifests as the selection of one relevant feature over the others present within their individual receptive fields. Attention can be voluntary, as in selecting a particular spatial location (spatial attention) or a particular feature (feature attention) in the receptive field for further analysis. But it can also be reflexive, driven or captured by the saliency of the stimuli computed automatically in early visual areas. The variety of ▶feature contrast and perceptual saliency effects observed in V1 and in the extrastriate cortex likely serves as a part of this reflexive attention mechanism. Recently, higher-order non-spatial contextual effects, such as context familiarity and associative memory, have also been shown to modify the activities of neurons in ▶inferotemporal cortex (IT) and medial temporal (MT) respectively.
From the perspective that vision is a process for inferring the various underlying environmental causes of visual patterns, such as the 3D geometry of surfaces, the identities of objects and the illumination direction in the scene, the extrastriate areas in the visual hierarchical system might be conceptualized as modules that provide explicit representation of these decomposable causes. Each extrastriate module furnishes an explanation of some aspect of the visual scene. The inference of the underlying causes involves integration of information across space and over time by neurons in the higher-order visual areas, which in turn provide a variety of context in which visual processing in the earlier visual areas can be refined. V1, with its neurons arranged in a spatially precise ▶retinotopic map and endowed with small localized receptive fields capable of representing fine details in images, might serve as a high-resolution buffer at which all the causes are combined together to synthesize an explanation of the visual input represented explicitly there. These interactive computations can bring about a very rich variety of contextual influences in V1 and the extrastriate cortex. The long latency of many of the contextual effects observed suggests that a substantial amount of recurrent interaction could have taken place. Computations involving such recurrent interaction predict the simultaneous emergence of perception-related signals in many visual and decision areas of the brain.

Contextual Influences in Visual Processing. Figure 3: Neurophysiological evidence of surface inference in V2. (a) A left-border cell will respond more strongly when its receptive field (red ellipse) is analyzing the left border of a figure (left image) than when it is analyzing the right border of the figure (right image), even when the visual pattern on the receptive field and in its immediate surround is identical [8]. This class of cells, observed primarily in V2, is said to convey information about border-ownership or surface occlusion. (b) In the Cornsweet-O'Brien illusion, the presence of a contrast edge can change the perception of the brightness of a region. A V2 neuron that prefers darkness over brightness would respond better to the perceptually darker region (left image) than to the perceptually brighter region (right image) even though the physical luminance of the receptive field stimulus in the two cases is exactly the same [10]. | 2015-03-12T23:57:50.000Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "13be6577d243faf54591e8159626e6048c366936",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Contextual_Influences_in_Visual_Processing/6604454/1/files/12094859.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bf5e60240f887e762348c8fdd24cf3d704c9f13e",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
21337837 | pes2o/s2orc | v3-fos-license | The Relationship between Language Ability and Cognitive Function in Patients with Schizophrenia
Objective Cognitive dysfunction is common in people with schizophrenia, and language disability is one of the most notable cognitive deficits. This study assessed the use and comprehension ability of the Korean language in patients with schizophrenia and the correlations between language ability and cognitive function. Methods Eighty-six patients with schizophrenia and a group of 29 healthy controls were recruited. We assessed both clinical symptoms and cognitive functions including Korean language ability. For clinical symptoms, the Positive and Negative Syndrome Scale, Clinical Global Impression-Schizophrenia Scale, and Social and Occupational Functioning Assessment Scale were used. For the Korean language ability assessment, a portion of the Korean Broadcasting System (KBS) Korean Language Test was used. The Short-form of Korean-Wechsler Adult Intelligence Scale, the Korean version of the University of California San Diego (UCSD) Performance-based Skills Assessment (K-UPSA), and the Wisconsin Card Sorting Test (WCST) were used to assess cognitive functions. Results Schizophrenic patients had significantly lower scores in the language and cognitive function tests both in the total and subscale scores. Various clinical scores had negative correlations with reading comprehension ability of the KBS Korean Language Test. The WCST and a part of the K-UPSA had positive correlations with multiple domains of the language test. Conclusion A significant difference was found between schizophrenic patients and controls in language ability. Correlations between Korean language ability and several clinical symptoms and cognitive functions were demonstrated in patients with schizophrenia. Tests of cognitive function had positive correlations with different aspects of language ability.
INTRODUCTION
Schizophrenia is a major mental illness with a prevalence rate of 1% in the adult population worldwide. Cognitive deficits are common symptoms in people with schizophrenia; 90% of patients have clinically relevant deficits in at least one cognitive domain, and 75% show deficits in two or more cognitive domains. 1) Such a wide range of cognitive deficits appears gradually after disease onset and impairs patients' functional reintegration. 2,3) Language disability is one of the most notable cognitive deficits in patients with schizophrenia. The two main components of language function are production (usage) and comprehension. Disability of verbal communication is a main symptom in the diagnosis of schizophrenia and usually appears in the form of thought disorder. Moreover, these language disabilities are predictive of deficits in social and occupational abilities. 4) Impaired language comprehension usually manifests as an inability to understand figurative usage of language. Therefore, patients with schizophrenia exhibit difficulties in understanding proverbs, words with ambiguous meanings, and grammatically complex and long sentences. 5) Among several neurocognitive domains, verbal learning and memory show the most pronounced deficits compared to other cognitive domains in patients with schizophrenia; they also have the highest variability. 6) Some studies have shown a strong association between verbal memory impairment and initial work function in patients with schizophrenia. 7) Furthermore, deficits in neuropsychological tests with linguistic components are found in both chronic schizophrenic patients and children at risk of developing schizophrenia, 8) in early stages of schizophrenia, 9) and at disease onset. 10) Of the two main theories of language disability in schizophrenia, the first describes structural and functional abnormalities of semantic memory, 11) and the second attributes the disability to abnormalities in the creation and use of context due to impairments in working memory or executive function. 12)
The neuropsychological tests used for evaluating cognitive function in schizophrenic patients in Korea typically do not include a tool to systematically evaluate ability of language usage. There are currently no tests to evaluate Korean language ability in everyday life for the schizophrenic patients. Furthermore, studies of Korean language ability in schizophrenic patients are very scarce. Recently, a Korean language test has been used by a few authorized institutes as an examination tool to measure universal Korean language ability in everyday life for the public. The Korean Broadcasting System (KBS) Korean Language Test is the nationally authorized language test to evaluate various language usage areas and has been continuously implemented since 2004. Therefore, the authors designed this study to evaluate the usefulness and the applicability of the KBS Korean Language Test to measure functional ability of language use in everyday life for schizophrenic patients, and to investigate correlations between language ability and other cognitive function.
Participants
This study included patients who were diagnosed with schizophrenia using the Diagnostic and Statistical Manual of Mental Disorders 4th edition (DSM-IV) 15) and who met the following additional inclusion criteria: (1) between 18 and 60 years of age, (2) no changes in medication for the past three months, (3) relatively stable maintenance of symptoms, and (4) in an out-patient hospital or vocational rehabilitation. Exclusion criteria included were (1) an intelligence quotient (IQ) below 70 according to the short-form of the Korean-Wechsler Adult Intelligence Scale (K-WAIS), (2) a history of substance abuse or dependency according to DSM-IV criteria, and (3) a medical history of brain damage or neurological disorder. The control group was recruited from healthy persons who worked as volunteer caregivers for patients with psychiatric disorders at a hospital and were similar in age and educational background to the patient group. All subjects gave written informed consent. This study was approved by the institutional review board of Inje University Busan Paik Hospital, Busan, Korea (IRB: 11-032).
For the patient group, sex, age, level of education, duration of illness, and kind and dosage of current medications were assessed through interview and review of medical records. The dosage of antipsychotic drugs was converted into a chlorpromazine equivalent dose. 16,17)
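As a purely illustrative sketch of this conversion step, the snippet below multiplies each drug's daily dose by a chlorpromazine-equivalence factor and sums the result. The factors shown are placeholders for whatever equivalence table the cited references 16,17) provide; they are not taken from this study.

```python
# Hypothetical chlorpromazine-equivalence factors (mg CPZ per mg of drug).
# Real analyses should take these from published tables such as refs 16,17).
CPZ_FACTORS = {"risperidone": 50.0, "olanzapine": 20.0, "quetiapine": 1.33}

def chlorpromazine_equivalent(daily_doses_mg):
    """Sum a patient's antipsychotic doses in chlorpromazine-equivalent mg/day."""
    return sum(CPZ_FACTORS[drug] * dose for drug, dose in daily_doses_mg.items())

print(chlorpromazine_equivalent({"risperidone": 4, "olanzapine": 10}))  # 400.0
```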
Clinical Assessments
For the evaluation of clinical symptoms, the Positive and Negative Syndrome Scale (PANSS), 18) the Clinical Global Impression-Schizophrenia scale (CGI-SCH), 19) and the Social and Occupational Functioning Assessment Scale (SOFAS) 20) were administered by psychiatrists.
The PANSS consists of 30 items in three subscales: 7 assessing positive symptoms; 7, negative symptoms; and 16, general psychopathology. Scores for each item range from 1 to 7 points and are compared to evaluation standards; higher scores indicate more severe psychopathology. The Korean version of the PANSS 21) was used. The CGI-SCH was developed to evaluate the general function of patients with schizophrenia via a short and standardized method by a psychiatrist. This scale evaluates 4 domains: positive, negative, depressive, and cognitive symptoms. The SOFAS evaluates the degree of a patient's social and vocational functions on a continuum ranging from 100 (optimal) to 1 (serious impairment).
Neuropsychological Assessments
Short form of Korean-Wechsler Adult Intelligence Scale (K-WAIS)
The K-WAIS is the Korean version of the Wechsler Adult Intelligence Scale-Revised (WAIS-R) and evaluates verbal and performance intelligence. This study used a short form consisting of the picture completion, arithmetic, digit symbol, and similarities sub-tests of the K-WAIS. In a study analyzing the usefulness of WAIS-R short forms in schizophrenia, the short version consisting of these four sub-tests proved easy to administer compared to other short forms, and showed a high correlation between estimated intelligence and WAIS-R-tested intelligence. 22)
Wisconsin Card Sorting Test (WCST): Computer Version 4
The original WCST consists of 128 cards categorized by combinations of four colors, shapes, and numbers. The subjects first acquire the concept of the different categories and, without special instructions during the test, have to discover the sorting principle (e.g., by color), which changes throughout the test. The WCST is a typical test of executive functions, including conceptual flexibility in response to feedback. In this study, the computerized WCST CV4 software program (Psychological Assessment Resources, Odessa, FL, USA) 23) was used. Common outcomes are Categories Achieved, Perseverative Errors, Perseverative Responses, total errors, and the number of attempts to achieve the first category.
Korean Broadcasting System (KBS) Korean Language Test
The KBS Korean Language Test is a language ability qualification test authenticated by the Korean government. The KBS Korean Language Test evaluates effective and fluent language abilities as well as accurate and creative language abilities reflecting actual lingual circumstances, which can be easily reproduced in everyday life. Language ability is broadly classified into five domains, and each domain includes a variety of subfields: grammar ability, which includes vocabulary and grammar; comprehension ability, which includes listening comprehension and reading comprehension; expression ability, which includes writing and speaking; originality ability, which includes creative language use; and language culture ability, which consists of comprehensive knowledge related to language.
For this study, we obtained approval to edit and use parts of the KBS Korean Language Test from the KBS Korean language promotion director. Among the entire set of evaluation domains, listening comprehension, reading comprehension, and creative language use abilities were considered further. Based on the analysis of test questions 17-20, we pre-selected questions that yielded over 90% correct answers, and further included some questions with over 80% correct answers to balance the level of difficulty, resulting in 30 questions altogether. This procedure selected relatively simple items that did not require specific test preparations. The test was conducted in two sessions with 15 questions each to minimize fatigue.
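As a purely illustrative mirror of this selection procedure, the sketch below filters a pool of items by their historical percent-correct, preferring items at or above 90% and topping up from the 80-90% band; the data structure and field names are invented for the example.

```python
def select_items(items, n_target=30):
    """Pick test items mirroring the selection rule described above.

    items: list of dicts with a 'pct_correct' field (invented structure).
    Prefers items answered correctly by >=90% of examinees, then fills the
    remainder from the 80-90% band to balance difficulty.
    """
    easy = [i for i in items if i["pct_correct"] >= 90]
    medium = [i for i in items if 80 <= i["pct_correct"] < 90]
    chosen = easy[:n_target]
    chosen += medium[: n_target - len(chosen)]
    return chosen

items = [{"id": k, "pct_correct": p} for k, p in enumerate([95, 92, 88, 84, 91])]
print([i["id"] for i in select_items(items, n_target=4)])  # -> [0, 1, 4, 2]
```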
Statistical Methods
Averages, standard deviations, and ranges were calculated for the continuous variables, and frequencies and percentages for discrete variables. Chi-square test, t-test, analysis of covariance (ANCOVA), and Wilcoxon rank sum test were used as statistical tests. Correlation analyses identified relationships between clinical symptoms and cognitive functions. The SAS 9.3 package (SAS Institute Inc., Cary, NC, USA) was used for analysis, and the significance level for all tests was set at p<0.05.
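The analysis itself was run with the SAS 9.3 package; purely as an illustration of how the named tests fit together, the sketch below reproduces the same battery with open-source tooling. The example data are invented, and ANCOVA would typically be run separately (e.g., with statsmodels' ols and anova_lm).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented example data standing in for patient/control scores.
patients = rng.normal(12.7, 7.0, 86)   # e.g., KBS Korean Language Test totals
controls = rng.normal(26.7, 1.6, 29)

# t-test for a continuous variable; Wilcoxon rank-sum as the nonparametric analogue.
print(stats.ttest_ind(patients, controls, equal_var=False))
print(stats.ranksums(patients, controls))

# Chi-square test for a discrete variable (e.g., sex distribution by group).
print(stats.chi2_contingency([[50, 36], [14, 15]]))

# Pearson correlation between a clinical score and a language score (alpha = 0.05).
panss_totals = rng.normal(70, 15, 86)
r, p = stats.pearsonr(panss_totals, patients)
print(r, p, p < 0.05)
```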
Demographic Data
The patient group consisted of 86 patients with schizophrenia and the control group, of 29 age-matched volunteers. Demographic and clinical characteristics are displayed in Table 1. The patients consisted of 50 men and 36 women (average age, 37.59±10.04 years) and the control group, of 14 men and 15 women (average age, 37.34±9.15 years). The level of education for the patient and control groups was 13.03±1.58 and 13.12±1.39 years respectively. The average duration of illness for the patient group was 174.21±112.96 months, and the dosage of antipsychotic drugs was 610.88±339.24 mg/day, converted into a chlorpromazine-equivalent dose.
Neuropsychological Assessment
Results for the neuropsychological and language assessments are summarized in Table 2. For the KBS Korean Language Test, the patients' average score was 12.70±6.99, and the controls' average score was 26.70±1.57, yielding a significant difference (p<0.001). This difference also manifested in the specific domains of listening comprehension, reading comprehension, and creativity, with the patient group achieving scores of 4.77±2.70, 4.55±2.72, and 3.38±2.61, respectively, and the control group achieving correspondingly higher scores.

The total score of the K-UPSA for the patient group was 65.69±17.48 and that for the control group was 91.77±3.98. The patient group showed a significantly lower performance on the total score and on all subdomains (financial domain, p=0.032; other subdomains, p<0.001).
Correlation between KBS Korean Language Test and Clinical Symptoms
The performance in the KBS Korean Language Test correlated with aspects of clinical symptoms in the current sample of patients with schizophrenia (Table 3). Although age, education level, and medication did not correlate with performance in the KBS Korean Language Test, the duration of illness was shown to have a significant negative correlation with the total score. IQ, as assessed by the short form of the K-WAIS, was found to have a positive correlation with the total score (p=0.017), and with the reading comprehension (p=0.022) and listening comprehension (p=0.008) domains, but not with the creativity domain.
The total score and all subscales of the PANSS had negative correlations with the reading comprehension domain (total score, p=0.002; positive subscale, p=0.003; negative subscale, p=0.012; general psychopathology subscale, p=0.021). For the CGI-SCH, the positive-symptoms subscale did not correlate with performance on the KBS Korean Language Test; however, the negative-symptoms subscale correlated negatively with the reading comprehension domain (p=0.049). Depressive symptoms, cognitive symptoms, and overall severity in the CGI-SCH had negative correlations with the reading comprehension domain (p=0.017, p<0.001, p=0.001) and total score (p=0.044, p=0.004, p=0.011). The SOFAS was shown to have a positive correlation with the reading comprehension domain of the KBS Korean Language Test (p=0.024).
Correlation between KBS Korean Language Test and Cognitive Function
In the WCST, Perseverative Responses and Perseverative Errors were not correlated with the results of the KBS Korean Language Test (Table 4), but the number of Categories Achieved was positively correlated with listening comprehension (p=0.021), creativity (p=0.007), and the total score (p=0.006).
The K-UPSA correlated with results of the KBS Korean Language Test in various domains (Table 4). The total score of the K-UPSA was positively correlated with all domains (reading comprehension, listening comprehension, creativity, and total score) of the KBS Korean Language Test. The comprehension/planning and household skills subscales did not correlate with language ability. The communication and transportation subscales were positively correlated with all language domains, but the financial subscale did not correlate with the creativity domain.
DISCUSSION
This study aimed to measure Korean language use and comprehension ability in everyday life in patients with schizophrenia and to investigate the correlation between Korean language ability and cognitive function. A portion of the KBS Korean Language Test was used for language assessment, the short form of K-WAIS and the WCST measured cognitive functions, and the K-UPSA assessed cognitive functions in everyday life.
Although the patient group and controls were matched for age and educational background, the control group had a higher IQ than the patient group, as measured with the short form of the K-WAIS. This result matches various studies assessing intelligence in patients with schizophrenia, who consistently show lower intelligence scores than normal controls across periods ranging from the premorbid phase and onset of schizophrenia 24) to the first episode of schizophrenia 25) and after progression of the disease. 26) The KBS Korean Language Test assessed the use and comprehension ability of the Korean language in everyday life, and we selected questions with a percentage of correct answers above 90% (to adjust the difficulty, some questions with a percentage above 80% were also chosen) to assess everyday abilities more easily. The results of the control group were similar to those of participants in the original KBS Korean Language Test study. To measure language ability of patients with schizophrenia, several methods have been applied in previous studies; the vocabulary test included in the WAIS-R, the Boston Naming Test, language proficiency tests, and other tools were mainly used to measure vocabulary. 25,27) Reading comprehension ability, that is, how accurately words are read, was measured with a reading comprehension test of the Wide Range Achievement Test. Since the KBS Korean Language Test assesses how accurately subjects can understand language using advertising descriptions and announcements, it more accurately reflects the language ability of everyday life. Since vocabulary forms the basis of language ability, the assessment of vocabulary ability is necessary to reliably assess language ability in schizophrenic patients. Although many studies focus on basic language ability, only a few have reproduced the use of language in everyday life. 28,29) The performance of schizophrenic patients in this study was poorer than that of the control group in the listening comprehension, reading comprehension, and creativity domains. Age and education level had no correlation with language ability, whereas the results of the intelligence test had a high correlation with measures of language ability.
Measurements of clinical symptoms using the PANSS and the CGI-SCH mainly correlated with the reading comprehension domain and did not show much association with the listening comprehension and creativity domains. This result is difficult to interpret based on the current data and requires additional studies that specifically relate listening comprehension and reading comprehension skills of schizophrenic patients to the clinical symptoms.
The patient group showed poorer performance than the control group on the WCST and the K-UPSA, which both assessed cognitive functions. These results were similar to those of existing studies. 14,27) Executive functions measured with the WCST correlated with Korean language abilities. The correlation between executive function and language comprehension ability has been described by another study. 30) However, this previous study mainly used the interpretation of proverbs or words with dual meaning to assess language comprehension, while the current study mainly used advertising descriptions, announcements, or instructions in which conveyance of the meaning is comparatively specific and clear. The UPSA is a tool that measures the cognitive function of patients through role-play, by setting up a situation that can occur in everyday life. 14) The K-UPSA showed a positive correlation with the KBS Korean Language Test in most of the domains. In accordance with the intended objective of this study, this can be interpreted as a relationship between language deficiency and the decline of cognitive function in everyday life. If treatments that can improve language ability are added to therapeutic programs aimed at improving cognitive functions, we can expect an overall improvement in cognitive functions in schizophrenic patients. Moreover, tests of language ability could be used as instruments for assessing the effects of a cognitive function treatment program.
This study has some limitations. First, the KBS Korean Language Test is not specifically developed for the assessment of language ability in schizophrenic patients. Moreover, the questions used for the Korean language ability assessment are limited to this study and cannot be used in other studies. Therefore, the development of a standardized assessment instrument is needed to accurately assess Korean language ability in schizophrenic patients. Second, although the age and education level of both groups were about the same, the measured intelligence of the patient group was significantly lower than that of the control group. This difference can affect the results of cognitive function tests; therefore, we should be cautious in comparing cognitive functions between patient and control groups. Third, medications taken by the schizophrenic patients were not controlled. Despite the fact that many medications, including antipsychotics, can affect cognitive function, the influence of the medications was not assessed in this study.
A review of the relevant literature found this to be the first study assessing the Korean language ability of patients with schizophrenia in everyday life. Although no standardized test instruments were used, it was confirmed repeatedly that language deficiencies were strongly correlated with executive and cognitive functions used in everyday life. For future studies in Korea, we should consider language ability as a crucial factor when assessing cognitive function, especially in the everyday lives of schizophrenic patients, and as a useful method of determining the effects of cognitive rehabilitation on cognitive function. Thus, the development of a systematic and standardized test that can accurately assess Korean language ability in schizophrenic patients is needed for future studies and intervention. | 2018-04-03T05:13:57.776Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "5f91609cf99129232d7ab64177aa107717972261",
"oa_license": "CCBYNC",
"oa_url": "http://www.cpn.or.kr/journal/download_pdf.php?doi=10.9758/cpn.2015.13.3.288",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6befd4050ac2e8d193ca05ccbeb792dbb5dd2fd",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201651315 | pes2o/s2orc | v3-fos-license | Enhancing the capture velocity of a Dy magneto-optical trap with two-stage slowing
Magneto-optical traps (MOTs) based on the $626\;{\rm nm}$, $136\;{\rm kHz}$-wide intercombination line of Dy, which has an attractively low Doppler temperature of $3.3\;\mu{\rm K}$, have been implemented in a growing number of experiments over the last several years. A challenge in loading these MOTs comes from their low capture velocities. Slowed atomic beams can spread out significantly during free-flight from the Zeeman slower to the MOT position, reducing the fraction of the beam captured by the MOT. Here we apply, for the first time in a Dy experiment, a scheme for enhancing the loading rate of the MOT wherein atoms are Zeeman-slowed to a final velocity larger than the MOT's capture velocity, and then undergo a final stage of slowing by a pair of near-detuned beams addressing the $421\;{\rm nm}$ transition directly in front of the MOT. By reducing the free-flight time of the Zeeman-slowed atomic beam, we greatly enhance the slowed flux delivered to the MOT, leading to more than an order of magnitude enhancement in the final MOT population.
I. INTRODUCTION
Dysprosium, which possesses the largest magnetic moment (µ ≈ 10µ_B) of any atomic species, has grown in popularity in the ultracold quantum gas community over the last decade [1][2][3][4][5][6][7][8][9][10][11]. The large magnetic moment, as well as several other useful properties, arises from its [Xe]4f^{10}6s^2 electronic configuration. The two 6s electrons give rise to a helium-like excitation spectrum, including a strong transition at 421 nm and a weak, intercombination transition at 626 nm. The unfilled 4f shell gives rise to narrow clock-like transitions. It also leads to spin-orbit coupling in the ground state, which is useful for many quantum simulations, including simulating gauge fields [11,12].
The narrow linewidth of the 626 nm transition in Dy corresponds to a low Doppler temperature of 3.3 µK, making it an attractive option for magneto-optical trapping. The downside of using a narrow transition is that the capture velocity of the MOT is lower than for a broader transition. Slowing an atomic beam to within a low capture velocity can lead to a situation where the slowed beam transversely spreads out so much that many slowed atoms miss the MOT. Cooling the transverse degrees of freedom of the atomic beam and increasing the capture velocity of the MOT by frequency-dithering the MOT light are two measures which are typically employed to mitigate this limitation [2][3][4][5][6]13], but their effectiveness can be limited.
In the present work, we add a new approach, which we refer to as "angled slowing," which applies a second stage of slowing to the atomic beam with a pair of low-power beams that intersect directly in front of the MOT. This allows us to choose a sufficiently large final velocity for the first stage of slowing (i.e., Zeeman slowing) that atoms do not spread out appreciably before reaching the MOT. This approach was introduced in an Yb experiment, where it gave a small enhancement to the MOT loading rate [14]. In our experiment, angled slowing enhances the MOT population by more than a factor of 20.
Compared to other methods which have been employed to increase the capture velocity of narrow-line MOTs, such as the two-stage MOT [1] and the core-shell MOT [15], the angled slowing approach requires fewer beams and less laser power.
In section II we briefly describe the aspects of our experiment that are similar to those previously reported by other experiments. In section III, the idea behind angled slowing and how it is particularly applicable to experiments with narrow-line MOTs is discussed. In section IV, we describe how we optimized the performance of angled slowing with respect to beam pointing, laser power, and frequency. In section V, the compression and detection sequence that follows the loading of our MOT is described, and the temperature and phase space density of the compressed MOT are reported.
II. EXPERIMENTAL SETUP
Due to the recent explosion in popularity of dysprosium, several groups have developed similar cooling and trapping protocols in parallel [2][3][4][5][6]13]. Here we briefly describe our approach, and give references to more detailed explanations of similar systems.
Our atomic beam of Dy is generated by a commercial molecular beam epitaxy oven [16] heated to 1250 °C. The dysprosium vapor is collimated into an atomic beam by a 7 mm diameter nozzle, which is 90 mm from the opening of the oven, followed by a 10 cm long, 7 mm diameter differential pumping tube that starts 19 cm from the nozzle.
We use 421 nm laser light for Zeeman slowing, transverse cooling, and absorption imaging. This light is generated using an M-Squared Ti:Sapphire laser and ECD-X fixed-frequency second-harmonic generation cavity (1.6 W total output), as well as two injection-locked laser diodes (90 mW total output each). The frequency of the Ti:Sapphire laser is stabilized by measuring the frequency of the doubled light with a HighFinesse WLM-7 wavemeter, and feeding back on a piezo-actuated mirror in the laser cavity using an Arduino Due. We periodically adjust for drifts in the calibration of the wavemeter (which are typically a few MHz per day) by checking the resonance frequency of the 421 nm transition via absorption imaging. A Toptica TA-SHG system (700 mW total output) generates the 626 nm light for the MOT. For frequency stabilization of this laser, we shift the light by about +1 GHz and employ a modulation transfer spectroscopy scheme to lock the laser frequency to a transition in a room-temperature iodine cell.
In the present work we slow the bosonic isotope ¹⁶²Dy, which has 25.5% natural abundance [17]. Our Zeeman slowing light consists of 300 mW of light addressing the 421 nm transition (Γ421 = 32.2 MHz), which comes to a focus at the position of the oven. Light enters the vacuum chamber with a beam diameter of about 2 cm, bouncing off of a 45-degree in-vacuum mirror as shown in Figure 1. This scheme was implemented so the entrance window for the slowing light does not get coated by the Dy atomic beam.
To minimize the effect of the Zeeman slower light on atoms trapped in our MOT, we use an increasing-field Zeeman slower design. This allows us to employ a larger detuning in our slowing beam, which reduces the losses due to scattering in the MOT. A counter-wound segment of coils at the end of the slower cancels the fringing magnetic field from the slowing coils at the position of the MOT. We use light detuned about 1.1 GHz from the zero-velocity transition, which resonantly addresses atoms moving at 480 m/s (close to the most probable velocity of the atoms emitted from the oven). We have an additional, uniformly wound bias coil running the length of the Zeeman slower, which creates a constant offset magnetic field inside the slowing region. This allows adjustment of the effective detuning of the Zeeman slower beam by up to several hundred MHz without needing to employ an acousto-optic modulator (AOM).
Our MOT is formed by three retroreflected 626 nm beams. Each beam has a diameter of 2.3 cm and a total power of 42 mW (±5%), corresponding to a (peak) saturation parameter of s ≈ 280. The quadrupole field's gradient along the strong direction is approximately 2.5 G/cm. To improve the capture velocity of the MOT we dither the frequency of the MOT light using a double-passed AOM. The dithering occurs at a frequency of 120 kHz and broadens the laser linewidth to 2.6 MHz (30Γ 626 ). Three pairs of rectangular coils in Helmholtz configuration allow us to cancel background magnetic fields, and will also allow us to employ feedback-and feedforward-based magnetic field stabilization schemes during future experiments.
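As a quick sanity check on these MOT parameters, the sketch below recomputes the two-level saturation intensity of the 626 nm line, I_sat = πhcΓ/(3λ³), and the resulting peak saturation parameter of a 42 mW Gaussian beam. Reading the quoted 2.3 cm as a 1/e² beam diameter is our assumption, not a statement from the text.

```python
import numpy as np

h, c = 6.62607015e-34, 2.99792458e8   # Planck constant, speed of light (SI)
lam = 626e-9                          # MOT transition wavelength (m)
Gamma = 2 * np.pi * 136e3             # natural linewidth (rad/s)

I_sat = np.pi * h * c * Gamma / (3 * lam**3)   # two-level saturation intensity (W/m^2)
w = 2.3e-2 / 2                                 # assumed 1/e^2 beam radius (m)
I_peak = 2 * 42e-3 / (np.pi * w**2)            # peak intensity of a 42 mW Gaussian beam
print(f"I_sat ≈ {I_sat * 100:.0f} uW/cm^2, s ≈ {I_peak / I_sat:.0f}")
# -> I_sat ≈ 72 uW/cm^2 and s ≈ 280, consistent with the quoted value
```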
III. ANGLED SLOWING
Atoms that have been slowed by a Zeeman slower must travel some nonzero distance at their final, slowed velocity from the end of the Zeeman slower to the position of the MOT. During this period of free flight, the transverse velocity distribution of the beam causes the atomic beam to spread out. If the free flight time is sufficiently long, then the atoms can spread out far enough that they are not captured by the MOT.
While this is not typically a limiting factor in experiments, the combination of an increasing-field slower and the narrow linewidth of the MOT transition creates a situation in which the transverse spread plays a significant role. To clarify this point, we compare Dy to the more common alkali MOTs.
The capture velocity of a MOT can be estimated by calculating the largest velocity that can possibly be slowed to a stop within the profile of the MOT beams. Assuming that the atoms scatter photons at the maximum possible rate Γ/2 across an entire beam diameter D, an expression for the capture velocity is given by

$v_c = \sqrt{\hbar k \Gamma D / m}$, (1)

where m is the atomic mass and k = 2π/λ is the wavenumber of the MOT light.
The spatial spread σ of the atomic beam can be estimated as

$\sigma \approx d \, v_{\rm trans}/v_{\rm long}$, (2)

where d is the free-flight distance, v_trans is the RMS transverse speed, and v_long is the average longitudinal speed. Collimation of the atomic beam by one or more apertures typically leads to a transverse velocity distribution with an RMS speed around 1% of the average (unslowed) longitudinal velocity [18]. Let us compare the case of a ⁸⁷Rb MOT to a ¹⁶²Dy MOT, taking typical values of D = 2 cm for the MOT beam diameters. For Rb, λ = 780 nm, m = 87 amu, and Γ = 2π × 6 MHz. This corresponds to a capture velocity of 67 m/s. A typical initial most-probable velocity for atoms effusing from a Rb oven is about 330 m/s, and so 3.3 m/s is a reasonable estimate of the RMS transverse speed of the atoms. If we consider an atomic beam slowed to the capture velocity value and an example free-flight distance of 10 cm, we can estimate the spread of the atomic beam to be

$\sigma_{\rm Rb} \approx 0.5\;{\rm cm}$, (3)

which is smaller than the size of the MOT beams.
For a Dy MOT, λ = 626 nm, m = 162 amu, and Γ626 = 2π × 136 kHz. This gives a capture velocity of 8 m/s. The most probable velocity of the Dy atoms effusing from our oven is about 480 m/s, so 4.8 m/s is a reasonable estimate of the average transverse speed. For a free-flight distance of 10 cm, we estimate the spread of the atomic beam to be

$\sigma_{\rm Dy} \approx 12\;{\rm cm}$, (4)

which is much larger than the size of the MOT beams.
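Equations (1) and (2) are easy to check numerically; the sketch below reproduces both capture velocities from the inputs quoted above. Note that the bare ratio of Eq. (2) gives ≈0.5 cm for Rb and ≈6 cm for Dy; the ≈12 cm quoted above is the same order of magnitude and may include an additional geometric factor not reconstructed here.

```python
import numpy as np

hbar, amu = 1.054571817e-34, 1.66053906660e-27

def capture_velocity(lam, Gamma, m, D):
    """Eq. (1): largest velocity stoppable over a beam diameter D at rate Gamma/2."""
    return np.sqrt(hbar * (2 * np.pi / lam) * Gamma * D / m)

D, d = 0.02, 0.10  # MOT beam diameter and free-flight distance (m)
vc_rb = capture_velocity(780e-9, 2 * np.pi * 6e6, 87 * amu, D)
vc_dy = capture_velocity(626e-9, 2 * np.pi * 136e3, 162 * amu, D)
print(f"v_c: Rb ≈ {vc_rb:.0f} m/s, Dy ≈ {vc_dy:.0f} m/s")  # ≈ 67 m/s and ≈ 8 m/s

# Eq. (2): transverse spread after free flight at the slowed (capture) velocity
print(f"sigma: Rb ≈ {100 * d * 3.3 / vc_rb:.1f} cm, Dy ≈ {100 * d * 4.8 / vc_dy:.0f} cm")
```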
We thus see that the narrow linewidth of the 626 nm transition already leads to a significantly larger spreading of the atomic beam than in a typical alkali MOT.
Employing an increasing-field slower, while effective in reducing scattering losses in the MOT due to the larger Zeeman slower laser detuning, further exacerbates the transverse spreading problem. One reason is that the larger detuning reduces the amount of off-resonant slowing that occurs during the free-flight distance. The more off-resonant slowing that occurs during the free-flight, the larger the initial exit velocity from the Zeeman slowing region can be. We can estimate the typical effect of off-resonant slowing in a Dy experiment: a typical Zeeman slower beam detuning in a spin-flip slower is around −18Γ421, with (resonant) saturation parameters of s₀ ≈ 1 [2]. If we assume that the slowed atoms scatter at a (detuned) saturation parameter of $s = s_0/(1 + 4\Delta^2/\Gamma^2) \approx 7.7 \times 10^{-4}$ over a 10 cm free-flight distance, then we can estimate that atoms with exit velocities as high as 13 m/s will be decelerated to within the capture velocity of the MOT.
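This off-resonant slowing estimate can be reproduced with a constant-deceleration approximation; ignoring the change in Doppler shift as the atoms slow is our simplification, adopted only to recover the quoted velocity scale.

```python
import numpy as np

hbar, amu = 1.054571817e-34, 1.66053906660e-27
m = 162 * amu                          # 162Dy mass
k = 2 * np.pi / 421e-9                 # 421 nm wavenumber
Gamma = 2 * np.pi * 32.2e6             # 421 nm linewidth (rad/s)
s0, delta = 1.0, -18 * Gamma           # typical spin-flip slower values from the text

s = s0 / (1 + 4 * (delta / Gamma) ** 2)     # detuned saturation parameter, ~7.7e-4
a = (hbar * k * Gamma / 2) * s / m          # deceleration at low saturation
v_c, d = 8.0, 0.10                          # capture velocity (m/s), free-flight distance (m)
v_exit = np.sqrt(v_c**2 + 2 * a * d)        # largest exit velocity still captured
print(f"s ≈ {s:.1e}, v_exit ≈ {v_exit:.1f} m/s")  # ≈ 13 m/s at this level of approximation
```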
A second reason is the increased free-flight distance due to the need for field-cancelling coils near the MOT. In an increasing-field Zeeman slower, the largest numbers of windings are closest to the MOT. As a result, it is necessary to compensate for the large fringing fields with an oppositely-wound compensation coil, so that the total residual magnetic field and field curvature at the position of the MOT is close to zero. Slowed atoms must thus travel an extra distance of several cm compared to the travel distance in spin-flip Zeeman slowers. In our experiment, the free-flight distance is 16 cm.
The purpose of the angled slowing scheme is to reduce the free-flight time by allowing atoms to exit the Zeeman slower at velocities well above the MOT's capture velocity. A few cm before the MOT, two beams with red detuning on the order of Γ intersect the atomic beam to provide a net longitudinal slowing force, slowing the atomic beam to within the capture velocity of the MOT. The transverse components of the two beams' scattering forces are oppositely oriented and thus cancel. In effect, the addition of the angled slowing beams increases the capture velocity of the MOT.
The advantage of using a pair of angled beams over a single beam colinear with the main Zeeman slower light, or adding a near-resonant sideband to the Zeeman slower, is that scattering losses in the MOT are avoided. Angled slowing also requires fewer beams and less laser power than the recently reported core-shell MOTs for alkaline-earth-like atoms [15]. The setup for the angled slowing beams is depicted in Figure 1.
Without employing angled slowing, optimization of our Zeeman slowing parameters led to a MOT population of about 10⁷ atoms. We observe more than a factor of 20 gain in the final population of our MOT when using angled slowing. As described in the next section, we found optimal angled slowing performance with only 7 mW per beam and a detuning of −50 MHz (−1.6Γ421). The beam diameters are about 5 mm, putting us far below the saturation regime (I_sat = 56 mW/cm²). Figure 2 shows the population with and without angled slowing as a function of MOT loading time.
IV. OPTIMIZATION OF ANGLED SLOWING
We determined the optimal alignment of our angled slowing beams by maximizing the steady-state MOT population as measured by the integrated 626 nm fluorescence scattered by the MOT. While σ − light is used to pump and cycle atoms that are being slowed by our Zeeman slower, the magnetic field magnitude and direction at the position where the angled slowing beams intersect the atomic beam are not easily known, so we varied the polarization of the angled slowing beams to maximize the MOT population after the pointing had been optimized.
The angled slowing light is prepared by frequency shifting light from our 421 nm master laser with a 500 MHz AOM in a double-pass configuration, and then splitting the shifted light into two separate fibers. With more than 200 mW of input power to this frequency shifting setup, thermal lensing in the AOM causes sensitively power-dependent variations in the spatial mode of the beams reaching the fibers, greatly reducing the fiber coupling efficiency. To avoid thermal lensing (and allow for more controlled variation of the angled slowing power via the RF power), we keep the power going to this AOM low, resulting in a maximum power of about 10 mW per beam in our angled slowing light.
Given this power constraint, we looked for an optimal combination of power and detuning for the angled slowing beams. Figure 3 shows the population after a fixed load time and fixed compression sequence (see the following section) as a function of both detuning (always red) and power per beam. The uncertainty in our beam power was at most ±15%, and the uncertainty in our frequency was about ±2 MHz, with the latter uncertainty arising from drifts in our wavemeter.
The general trends are explained by a simple physical picture: At small detunings, a small amount of power kicks some of the slowed flux to below the capture velocity of the MOT, but increasing power causes significant additional scattering in the nearly-zero-velocity atoms and causes them to turn around. At intermediate detunings, more flux is kicked out of the broad, slowed distribution to velocities below the capture velocity. Eventually, with enough power, off-resonant slowing begins to turn the atoms around again. At large detunings, the majority of the slowed flux is only addressed off-resonantly by the angled slowing beams. Eventually, atoms will also be turned around off-resonantly and thus there should be an optimal power for any given detuning.
At each detuning, we scanned the constant offset field in the slower, which is equivalent to scanning the Zeeman slower laser frequency and hence the final velocity. We found that the optimal bias field was the same for all detunings to within the step size we explored (steps of 1 A ≈ Γ421/2 of effective Zeeman slower detuning). We also found that the same bias field was optimal when loading a MOT without angled slowing. Together, these observations suggest that the final velocity distribution of the Zeeman-slowed atoms is broad compared to Γ421. If the final velocity distribution were narrow, we would expect the optimum to vary with the choice of angled slowing detuning.
V. COMPRESSION AND DETECTION
We load about 3 × 10⁸ atoms in 2 seconds with our optimized angled slowing parameters. To prepare the captured atoms to be loaded into an optical dipole trap (ODT) for evaporation, we compress the cloud over 50 ms and let the compressed cloud equilibrate for at least 300 ms. Compression consists of switching off the dithering of the MOT light frequency, and ramping the frequency from the initial detuning to within about a few linewidths of resonance. To minimize losses due to light-assisted collisions, and to reach the lowest final temperature of the cloud, the MOT beams are ramped down to a final power of 22 µW per arm. We also reduce the magnetic field gradient from 2.5 G/cm to 1.75 G/cm in order to further minimize losses. At the end of the compression, the MOT is approximately 400 µm × 400 µm × 800 µm. We lose up to half of our atoms during the 300 ms of post-compression equilibration, but obtain a net gain in phase space density due to the simultaneous reduction in temperature.
To detect the number of atoms captured in our trap, we perform absorption imaging using light resonant with the 421 nm transition. We expect a high degree of spin polarization in the m_J = −8 spin state as a result of the force of gravity on our narrow-line MOT, as discussed in [4], and so we image using σ⁻ light to address the m_J = −8 → m_J = −9 transition, which has a Clebsch-Gordan coefficient of nearly unity [19]. We let the cloud expand freely for between 10 ms and 30 ms before shining a 100 µs imaging light pulse. We have verified that we have a high degree of spin polarization by using σ⁺ light instead of σ⁻ light, and observing that the optical depth was reduced by more than an order of magnitude.
We measure the temperature of our cloud after compression by loading successive MOTs with identical parameters and varying the time-of-flight (TOF) after turning off the MOT beams and quadrupole. By fitting the cloud size as a function of the TOF, we can observe the mean speed of the cloud and hence the temperature. We observe faster expansion along the vertical direction than along the horizontal direction, corresponding to a "vertical temperature" of 6 µK and a "transverse temperature" of 13 µK.
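A common way to extract the temperature from such data is to fit the ballistic-expansion law σ(t)² = σ₀² + (k_B T/m)t². The sketch below illustrates the fit on synthetic data generated at the reported vertical temperature of 6 µK; the initial size and time points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, m = 1.380649e-23, 162 * 1.66053906660e-27  # Boltzmann constant, 162Dy mass

def cloud_size(t, sigma0, T):
    # ballistic expansion of a thermal cloud: sigma(t)^2 = sigma0^2 + (kB*T/m) t^2
    return np.sqrt(sigma0**2 + kB * T / m * t**2)

# hypothetical vertical-size data (s, m), generated at the reported 6 uK
t = np.linspace(5e-3, 30e-3, 6)
size = cloud_size(t, 200e-6, 6e-6)

popt, _ = curve_fit(cloud_size, t, size, p0=(1e-4, 1e-5))
print(f"fitted T ≈ {popt[1] * 1e6:.1f} uK")
```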
To obtain the optimal phase space density, $n\lambda_T^3$, we varied the MOT frequency, detuning, and gradient during the compression sequence. We used the size at large TOF (20−25 ms) as a proxy for velocity (and therefore temperature), which in combination with the measured number allowed for single-shot estimation of the phase space density. We optimized the phase space density both through manual parameter scans and by automating the search using a genetic algorithm, which converged after about 4 generations. We obtained similar results from both approaches, and measured an optimal phase space density of 10⁻⁵ after 10 seconds of loading.
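For orientation, the quoted phase space density can be estimated directly from nλ_T³. The atom number below is hypothetical, and reading the quoted cloud dimensions as Gaussian widths (and applying the vertical 6 µK to all axes) are our assumptions, made only to show that the quoted 10⁻⁵ is self-consistent.

```python
import numpy as np

h, kB, amu = 6.62607015e-34, 1.380649e-23, 1.66053906660e-27
m, T = 162 * amu, 6e-6                     # mass, temperature (K)
N = 1.5e7                                  # hypothetical post-equilibration atom number
sx, sy, sz = 200e-6, 200e-6, 400e-6        # quoted sizes read as Gaussian widths (m)

lam_T = h / np.sqrt(2 * np.pi * m * kB * T)        # thermal de Broglie wavelength
n0 = N / ((2 * np.pi) ** 1.5 * sx * sy * sz)       # peak density of a Gaussian cloud
print(f"PSD ≈ {n0 * lam_T**3:.0e}")                # ~1e-5, matching the reported value
```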
VI. CONCLUSION
In conclusion, we used the described angled slowing technique to reduce the effect of transverse atomic beam spreading on our MOT loading, effectively increasing the capture velocity of our narrow-line MOT. We observe more than an order of magnitude increase in the number of atoms captured in the MOT when the angled slowing is operated with optimal parameters, allowing us to load MOTs in the 10 8 regime in a few seconds. In our experiment, the combination of a narrow cooling transition, long free-flight distance, and reduced off-resonant slowing means that the free-flight time is particularly long; we believe that angled slowing can be of use in similarly designed experiments using species with narrow cooling transitions (such as Dy, Er, or Yb). Even in experiments where transverse spread can be avoided by employing other techniques, such as transverse Doppler cooling or the core-shell MOT configuration, the low power requirements and simple geometry of the angled slowing scheme may make it a comparatively attractive option. | 2019-08-27T19:47:51.000Z | 2019-08-27T00:00:00.000 | {
"year": 2020,
"sha1": "1aa8c77ea5007bf9077e8b6cd230cc09bceccea9",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/134029/2/PhysRevA.101.063403.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1aa8c77ea5007bf9077e8b6cd230cc09bceccea9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250642415 | pes2o/s2orc | v3-fos-license | Inequalities in Covid-19 Messaging: A Systematic Scoping Review
ABSTRACT The impact of the Covid-19 pandemic has been widely documented. While deaths are now in the millions and many more have been impacted in other ways, the impact of Covid-19 has not been felt equally, with it exacerbating existing inequalities and disproportionately impacting a number of populations. With this Covid-19 has created unprecedented challenges in relation to health communication, with the need to reach disadvantaged populations. This systematic scoping review sought to 1) synthesize the existing research regarding communication inequalities in the response to the Covid-19 pandemic, and 2) analyze the recommendations that emerge from this body of evidence on how to best address these inequalities. This review includes 40 studies that fell into three broad groups (1) those revealing a disadvantage or inequality in studies of general population; (2) those focussing on communication with sub-groups disproportionately affected by the pandemic; and (3) those reporting and evaluating practical attempts to address inequalities. The results largely corroborate those found in past pandemics, highlighting the role of sociodemographic, cultural/religious, and economic factors in facilitating/jeopardizing the public’s capacity to access and act upon public health messaging. In a number of studies it was encouraging to see recommendations from the literature – particularly, lessons learnt on the importance of community partnerships, trusted messengers and the co-creation of health and risk messages – being applied, however many challenges remain unmet. Covid-19 has also highlighted the need to actively tackle misinformation, something which was recognized, but largely unaddressed.
Introduction
A little over two years after its identification, Covid-19 had claimed almost 5.7 million lives (WHO, 2022). The pandemic forced one of the largest changes to life in living memory, with millions hospitalized, countries locked down and an unprecedented strain placed on healthcare systems. Beyond its vast impact, Covid-19 has exacerbated existing inequalities, with several disparities identified in risk and outcomes amongst a number of groups, among others, those who are older, those from racial/ethnic minority groups, migrants, those from lower socioeconomic backgrounds and certain occupational groups, such as transport, health and social care workers. The pandemic has also "brought to the fore the centrality of communication" (Viswanath et al., 2020, p. 1743) as a tool for implementing public health measures crucial for controlling the spread of Covid-19, such as non-pharmaceutical interventions (e.g., physical distancing or face covering), lockdown/quarantine interventions and, more recently, mass vaccination. With widespread (health) disparities across racial, ethnic, gender, geographic and educational lines and an increasingly diverse society, the question arises: How to effectively communicate these measures to different segments of the population, particularly those from underserved communities and others disproportionately impacted by the pandemic?
Prior to the pandemic, there was a consensus among communication scholars and practitioners that groups with less social power, such as minority language speakers, (forced) migrants and those living in poverty, may require targeted and tailored messaging to ensure they have equal access to health and risk information and are capable of acting upon it (Koval et al., 2021; Lin et al., 2014; Ryan et al., 2021; Savoia et al., 2013; Vaughan & Tinker, 2009). Lessons learnt from previous pandemics also indicated a need to use trusted messengers, deploy a mix of communication channels and formats, and most importantly, to actively work with communities in the co-creation of effective communication strategies (e.g., Vaughan & Tinker, 2009). In the context of Covid-19, the challenge of inclusive and equitable communication has further been compounded by uncertainty regarding the course of the pandemic, an explosion of health and scientific information, the polarization of target audiences, as well as the proliferation of dis/misinformation (Dan & Dixon, 2021).
Commenting on the relationship between broader societal inequalities and communication, Viswanath et al. (2020, p. 1744) have argued that "Covid-19 driven inequalities in economic, social and health sectors find a parallel in communication", while Watson (2020, np) has noted that "Covid-19 presents a special problem [. . .] as certain populations at increased risk of contracting and experiencing the worst effects of the virus are also at risk for inadequate health literacy". Put differently, inequalities in health and communication operate together and exacerbate each other, often through mechanisms in which access to information, literacy (be it in relation to a language, media, or health literacy) and trust play vital roles. With Covid-19 far from over, and the looming prospect of future pandemics, it is important to take stock and review the existing research in this area to inform both research and practice.
The aims of this scoping review are therefore twofold: 1) to understand what communication inequalities exist in the context of Covid-19 and which populations are (most) affected, and 2) to explore whether practices suggested in the past have been implemented to reach, engage, and communicate effectively with disadvantaged groups in the context of this pandemic. The review is not limited to one aspect of health communication but rather covers multiple facets, including research into exposure to different channels of health communication, information seeking, language, message framing, digital and health literacy, and trust in information sources. Inequality refers to differences, variations, and disparities in communication that had the potential to negatively impact groups defined by protected characteristics (age, race, ethnicity, religion and belief, and disability), gender, or socioeconomic disadvantage (e.g., low-income communities, homeless people).
Methods
This review followed a five-step process which included the definition of review questions, development of search strategy, study selection, data extraction, and synthesis (Arksey & O'Malley, 2005). The research process has been documented using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Page et al., 2021).
Search strategy
A systematic search was conducted utilizing four databases, Scopus, MEDLINE, CINAHL and PsycINFO, on 26 October 2021. The search syntax, which combines core concepts and terms related to Covid-19, health and risk communication, and inequalities, was tested and refined through several preliminary searches. The final search strategy is shown in Table S1 of the Supplemental Material. Our search yielded 1669 results; after removing duplicates (n = 258), 1411 records were retained for screening and eligibility assessment.
Screening and eligibility criteria
Titles and abstracts were independently screened by EK and RE to identify potentially relevant publications. Disagreements (n = 17) were resolved by discussion. Studies were included if they had been peer reviewed and: (1) focussed in whole or part on Covid-19 related risk and health communication, and (2) reported empirical research that revealed inequalities in Covid-19 communication and/or set out to explore or address such inequalities.
Studies were excluded if they did not meet these criteria (the itemized exclusion criteria are not recoverable from the extracted text: …). Following the abstract and title screening, 90 studies were retained for full text assessment. Full-text articles were also retrieved for any records for which eligibility could not be determined based on the abstract and title. Where full text was not available, the corresponding authors were contacted. If an answer was not obtained following a reminder and a two-month waiting period, the study was excluded (n = 6). Following the full text assessment, only those studies were retained that met the above outlined criteria in addition to containing extractable data about (in)equality in communication or message targeting to underserved groups defined either by protected characteristics (age, race, ethnicity, religion and belief, and disability), gender, or socioeconomic factors (e.g., low-income communities, homeless people). Disagreements at any stage were resolved through discussion.
The search strategy was updated and complemented on 12 January 2022 by hand searching selected journals of health communication (2020-2022) and by checking the reference lists of the retrieved papers, which yielded 6 additional records for full-text assessment. The full search and study selection process has been documented in Figure 1.
Data extraction and synthesis
Data extraction was undertaken by EK and RE. For 20% of studies, the data was extracted in duplicate by EK to ensure accuracy and consistency. The percentage of agreement was over 85% on all data extraction fields, which captured: the background (author, year, country and study aims), methods (study design, participants/sample size, demographics and Covid-19 impact for each sub-group where available), Covid-19 messaging (who is delivering the message to whom, what is the content of the messaging, how it is delivered, and who may be disadvantaged) and results and outcomes (including the reach, feasibility and acceptability of targeted Covid-19 health campaigns where available). The extracted data have been analyzed using a narrative synthesis approach to bring together the findings of and draw conclusions from the reviewed body of evidence.
Descriptive results
Our searches yielded 1675 references, of which 40 articles met the inclusion criteria. The majority of studies were carried out in the United States (n = 17), followed by China (n = 4) and the United Kingdom (n = 3). The geographical distribution of the remaining studies is shown in Figure 2. Much of this body of research focused on comprehensive public health guidance related to Covid-19 (i.e., advice for the public about preventive measures, common symptoms, restrictions, testing and vaccination), with some notable exceptions that had a much narrower focus: e.g., Higashi et al. (2021) (health information on Covid-19 and cancer), Blake et al. (2021) (accessing primary health care during the pandemic) and Montgomery et al. (2021) (hand hygiene among people experiencing homelessness). Following from the above, government and major healthcare bodies were the most common messengers, although messaging by faith leaders (e.g., Brewer et al., 2020), community organizations (e.g., Villani et al., 2021; Wieland et al., 2021), and physicians (e.g., Alsan et al., 2021; Torres et al., 2021) has also been researched in the context of the pandemic. The included studies also considered multiple channels of communication, from radio and television broadcasts (e.g., Alvarez et al., 2021; Woko et al., 2020) through digital platforms (e.g., Cheng et al., 2021; Kusters et al., 2021) and emergency text alerts (e.g., You & Lee, 2021; Yu et al., 2021) to loudspeakers and door-to-door distribution (e.g., Feinberg et al., 2021; Kalagy et al., 2021).
From a methodological perspective, the included studies relied predominantly on qualitative methods (n = 21), mainly interviewing, focus group discussions, and case studies. 14 studies used quantitative methods, mostly cross-sectional surveys, to assess the reach of Covid-19 messaging and its reception by and impact on different population groups. This review also identified two studies that stood as outliers amongst the quantitative papers: a randomized controlled trial (Alsan et al., 2021; Torres et al., 2021) and a linguistic landscape study (Kalocsányiová et al., 2021). Lastly, five articles reported mixed methods studies. Overall, the studies enrolled a total of n = 48,454 participants, excluding articles in which the exact participant numbers were unclear. The reviewed studies were categorized according to the study population and aim and then divided into three groups: (1) studies of general population that revealed inequality; (2) studies of sub-groups disproportionately affected by the pandemic; and (3) studies of practical attempts to address inequalities (Note 1). We discuss the results from each group separately in the next sections.
Studies of general population that revealed inequality
Most studies in this group explored differences in public perceptions and attitudes toward Covid-19 messaging, often with the aim to assess the association between risk and health communication and the uptake of Covid-19 health advice (e.g., Alvarez et al., 2021; McCaffery et al., 2020; Wang et al., 2020; Yu et al., 2021). A few studies (e.g., Higashi et al., 2021) were primarily intended to measure the extent to which communication inequalities exist; however, this was quite rare in the reviewed literature. The main characteristics of each study in this group are summarized in Table S2 of the Supplemental Material.
There were three studies that explicitly considered the role of language selection in Covid-19 related communication (Higashi et al., 2021; Kusters et al., 2021; McCaffery et al., 2020). Higashi et al. (2021) conducted a multimodal document review study to assess the equity of online Covid-19 information available to Spanish- and English-speaking cancer patients from seven major healthcare providers in the Dallas-Fort Worth area (US), where around 20% of the population is Spanish speaking. The authors concluded that Spanish speakers lacked equal access in both diversity of Covid-19 content and access to further resources, "leaving an already vulnerable cancer patient population at greater risk" (Higashi et al., 2021, p. 9). A similar study was conducted by Kusters et al. (2021), who examined the local health department websites of the top ten largest US cities by population. This latter study also found discrepancies in the amount, quality, and navigability of Covid-19 information available in languages other than English. In another anglophone setting, Australia, a survey completed by McCaffery et al. (2020) showed that respondents who reported speaking a language other than English at home (LOTE) experienced more difficulty in accessing and understanding government messaging than those who spoke English as their primary language. They were also more likely to endorse misinformation about Covid-19/vaccination. The same pattern of results was observed by McCaffery et al. (2020) among people with inadequate health literacy. The risk of excluding minority language speakers, including signers, from accessing crucial health information about Covid-19 was also noted by Blake et al. (2021) and Kalocsányiová et al. (2021). Blake et al. (2021) analyzed communications from government, media, and local general practitioner (GP) services in Te Papaioea (New Zealand) to understand how people were advised to seek care during lockdown. The study concluded that all three of these sources neglected the cultural and social diversity of the local population, in addition to relying primarily on access to telecommunications or the internet for their messaging. This further marginalized communities that were already disadvantaged, among them older people, Māori, Pasifika, and people with chronic health conditions and disabilities. The promotion of health care seeking behaviors was also investigated by Mayfield et al. (2021) in a sample of community clinic patients in North Carolina (US). Black and Latino/Hispanic patients and people without comprehensive insurance were identified as hard(er) to reach groups by the study authors, who acknowledged that existing barriers (such as digital poverty, lack of trust and a desire not to be contacted out of experiences of discrimination or fear of not being able to pay for healthcare) were likely to limit the impact of Covid-19 messaging campaigns. Trust in Covid-19 information sources, including mainstream and social media, public health authorities and the former US president (Donald Trump), was also central to the study of Woko et al. (2020), who investigated potential contributors to vaccination intention among Black Americans.
Age, education, and gender also emerged as important factors, particularly in relation to exposure to risk and prevention messages (Wang et al., 2020), and the impact of messaging on individuals (Alvarez et al., 2021; Wang et al., 2021; Yu et al., 2021). Taking a closer look at some of the studies, Wang et al. (2020) showed that older, less educated Chinese men reported lower exposure to risk communication messages than the general population. Similarly, Yu et al. (2021) detected age and gender differences in their study of emergency alert text messages in China. Their results were similar to those of You and Lee (2021), who found that engagement with emergency Covid-19 text messages was positively associated with both female sex and older age. Finally, Alvarez et al. (2021) also revealed a gendered pattern in their exploration of perceptions of mass media in Spain, with women reporting higher levels of anxiety and fear compared to men when watching, listening to, or reading news about the disease or the pandemic more broadly.
Interestingly, there were two studies that considered population heterogeneity in their design and yet revealed little to no differences related to race, educational status, gender, or healthcare status groupings (Torres et al., 2021; van Scoy et al., 2021). The randomized controlled trial of Torres et al. (2021), which aimed to determine whether public health messages delivered by physicians improved Covid-19 knowledge, beliefs, and practices and to assess the differential effectiveness of messages acknowledging the unequal burden of the disease on Black Americans, concluded that the intervention was equally impactful for both Black and White participants. There were no statistically significant differences by sex or political affiliation either, even though the campaign was more impactful among participants with lower income. Similarly, the research of van Scoy et al. (2021) into the perceptions of early pandemic communication in the US concluded that concerns about media messaging were equally distributed across sub-groups defined by race, gender, educational attainment, and healthcare worker status. Likewise, confusion, distrust and anxiety, attributed to flawed messaging, permeated the responses from all subgroups, even though respondents who were white, male or those with higher educational attainment were disproportionately affected by distrust.
In two of the studies (Jarynowski & Skawina, 2021; Kalocsányiová et al., 2021), the exposure to and content of public health messaging were explored from a geographical (area-level) perspective. Kalocsányiová et al. (2021) carried out a comparative linguistic landscape analysis in the London Borough of Hackney (UK). Their study revealed significant differences in the amount, content, and prominence of Covid-19 signage between more affluent and deprived neighborhoods, with signage in deprived areas, including signage about key preventive measures such as staying at home and/or self-isolation, limiting non-essential travel and wearing a face covering, lagging behind that in less deprived areas. In contrast, the geographical focus of Jarynowski and Skawina (2021) was much broader: based on "socio-epidemiological data" (which combined data based on internet searches, infection/death rates and vaccine refusal rates), the authors delineated several potential sub-populations in Poland in need of targeted messaging to boost vaccine uptake.
Finally, there was one study that evaluated the accessibility of Covid-19 information published by the World Health Organization (WHO) on its website (Fernández-Díaz et al., 2020). Using web content accessibility guidelines as a benchmark, Fernández-Díaz et al. (2020) concluded that the WHO website was not accessible to all citizens, among them groups of older people who have vision problems as a result of physical aging.
Studies of sub-groups disproportionately affected by the pandemic
There were eleven studies that explored the challenges of reaching certain sub-groups within the population and/or the messaging preferences of groups that have been disproportionately affected by the pandemic. These included religious and ethnic minorities (Elers et al., 2020; Garcia et al., 2021; Kalagy et al., 2021; Vanhamel et al., 2021), immigrants and speakers of minority languages (Brønholt et al., 2021; Wild et al., 2021), various age groups (Brown & Reid, 2021; Cheng et al., 2021), people living with a physical disability or mental illness (Bailey et al., 2021), and others who are particularly deprived, among them people experiencing homelessness, refugees, and Roma and traveler communities (Eshareturi et al., 2021; Montgomery et al., 2021). Detailed information about each study's design and outcomes is shown in Table S3.
The reviewed studies also converged in showing that individuals excluded from Covid-19 communications, be it because of a limited language proficiency or the absence of TV, radio and internet use in their community, were often the most vulnerable to Covid-19 - e.g., older people from immigrant or ethnic minority backgrounds, travelers and noncitizens in precarious employment who were far less likely to be able to work from home. Other key observations that emerged from this body of research related to different processes of message targeting, among them: the testing and tailoring of translated materials with culturally and linguistically diverse groups (Feinberg et al., 2021; Wild et al., 2021); the deployment of trusted messengers and venues with cultural significance to deliver Covid-19 related information (Garcia et al., 2021; Vanhamel et al., 2021; Wild et al., 2021); and the need for acknowledging the unequal burden of disease and historical trauma, for instance, in African Americans or amongst those with a disability (Bailey et al., 2021; Garcia et al., 2021). Vanhamel et al. (2021) and Wild et al. (2021) also noted that efforts to reach communities that are disproportionately affected by public health emergencies, such as Covid-19, risk being interpreted by the general population as singling out certain groups as disease-spreaders or rule-breakers. Health messaging should therefore also be geared toward the prevention and mitigation of enacted and anticipated stigma.
There were also some interesting findings with regards to the timing and frequency of (tailored) messaging during a pandemic. The study of Brønholt et al. (2021) shed light on minority language speakers' uncertainty and frustration about accessing essential Covid-19 information with delay (if at all), while the data collected by Bailey et al. (2021) suggested that a high volume and frequency of risk communication messages, combined with economic uncertainty and isolation brought on by stay-at-home orders, led to negative emotional responses (e.g., fatalism or heightened fear and anxiety) among those most vulnerable to Covid-19.
Studies of practical attempts to address inequalities
There was a total of 13 studies which implemented and assessed interventions, among them tailored communicative strategies, aimed at reducing Covid-19 related health inequalities. Out of these, 11 articles presented case studies of community engagement through multisector partnerships in the US, Iran, Israel, and Ireland (Brewer et al., 2020; Despres et al., 2020; Feinberg et al., 2021; Fletcher et al., 2020; Humeyestewa et al., 2021; Karamidehkordi et al., 2021; Liebman et al., 2020; Ramos et al., 2020; Romem et al., 2021; Villani et al., 2021; Wieland et al., 2021). In addition, this review also identified a virtual ethnography of a Chinese volunteer-driven disability support network (Dai & Hu, 2021) and a randomized control trial which investigated the effectiveness of physician-delivered Covid-19 prevention messages in Black and Latinx communities in the US (Alsan et al., 2021).
In the case studies, the focus was primarily on partnerships involving faith leaders (Brewer et al., 2020; Fletcher et al., 2020; Romem et al., 2021), indigenous community leaders (Humeyestewa et al., 2021), and community-health partnerships catering for the needs of those living on the margins of society (Despres et al., 2020; Feinberg et al., 2021; Villani et al., 2021; Wieland et al., 2021). The majority of communicative "interventions" outlined in these case studies (e.g., creation and dissemination of culturally tailored digital resources about Covid-19, virtual town halls and church services, door-to-door outreach, etc.) focused on both awareness-raising and the prevention of community outbreaks of Covid-19 through promoting protective measures and vaccination. The digital content curation model discussed in Despres et al. (2020) also addressed the inequitable impact of Covid-19 on Latinx people in the US while supplying community advocates with localized data tools, including blog posts exploring food (in)security or paid sick leave during the pandemic, and peer-modeled stories of Latinx people meaningfully responding to the Covid-19 crisis. There were also three case studies (Karamidehkordi et al., 2021; Liebman et al., 2020; Ramos et al., 2020) advising specifically agricultural/rural communities on how to modify their work practices and environment to halt the spread of Covid-19. One of them, Liebman et al. (2020), considered partnerships with researchers (e.g., microbiologists studying Covid-19 aerosols) as one of the best avenues for translating science into practical prevention strategies for those providing healthcare to agrarian populations.
In regard to evaluation and impact assessment, most of the studies relied on the authors' reflection on the action, i.e., a retrospective contemplation of the community engagement partnerships and their success in bringing timely and relevant Covid-19 information to communities, to draw results and conclusions about what works. Incorporation of community voices (including faith leaders' voices) in risk and health messaging, participatory generation of pandemic communications, active tackling of Covid-19 myths and misinformation, as well as regular revision of message contents in response to community concerns have all been identified as key facilitators of effective communication. To show impact, around half of the studies (Brewer et al., 2020; Despres et al., 2020; Feinberg et al., 2021; Fletcher et al., 2020; Karamidehkordi et al., 2021; Ramos et al., 2020; Wieland et al., 2021) reported reach and engagement data, although these were mostly limited to the number of viewings and/or individuals reached (Note 2). How the messaging impacted vulnerable communities' risk and efficacy perceptions and actual behavior (e.g., compliance with protective measures) was largely unexplored, leaving the effect these studies have had somewhat in doubt.
As indicated earlier, the review has also identified a randomized controlled trial which investigated the impact of public health messages tailored for Black and Latinx communities on Covid-19 knowledge and information-seeking (Alsan et al., 2021). The intervention consisted of video messages that varied by physician race/ethnicity, acknowledgment of racism/inequality, and community perceptions of mask wearing. Interestingly, the incidence of information-seeking increased for race-concordant messages for Black but not Latinx respondents. Other tailoring of the content (e.g., acknowledgment of unequal treatment in healthcare, economic difficulties and fears of deportation in public health videos) did not make a significant difference. The final study in this group was the virtual ethnography of Dai and Hu (2021): the study highlighted the empowering role of a disability support network which provided emergency Covid-19 communications in formats accessible to people with hearing impairment, visual impairment, and intellectual and developmental disabilities. Dai and Hu (2021) also offered examples of good practice for effective communication with individuals living with disabilities during a pandemic. The main characteristics of each study reviewed above are summarized in Table S4 of the Supplemental Material.
Discussion
The Covid-19 pandemic has challenged governments and public health bodies around the globe in developing effective communication strategies to ensure acceptance, uptake, and adherence to public health measures. This scoping review has provided an analysis and synthesis of data derived from 40 empirical studies focused specifically on communication inequalities in the context of the Covid-19 pandemic, including various explorations of communication interventions targeted at traditionally underserved groups and/or those disproportionately affected by the pandemic.
With respect to the first review aim (i.e., the nature of communication inequalities and affected groups), the results largely corroborated the findings from earlier pandemics (e.g., Lin et al., 2014; Vaughan & Tinker, 2009) by confirming the role of sociodemographic, cultural/religious, and economic factors in facilitating/jeopardizing the public's capacity to access and act upon crucial public health messaging. The studies focussing on communication outcomes such as information seeking, exposure to different communication channels, and trust in information sources (e.g., Cheng et al., 2021; Wang et al., 2020, 2021; Yu et al., 2021) also confirmed age, education, and gender as important social determinants of communication inequalities. Overall, the breadth of research focusing on the messaging needs and preferences of those disproportionately affected by the pandemic is encouraging as it reflects a commitment to tackle communication as well as broader inequalities in the context of Covid-19. At the same time, however, the sheer volume of Covid-19 specific research which uncovered and/or explored communication inequalities along racial, ethnic, economic, geographic, and educational lines (dozens of publications in less than two years) highlights serious inadequacies if not outright failures in our attempts to reach and provide much needed health information for those most at risk of Covid-19. It is important to emphasize that unmet information needs arose mostly from: (1) language barriers and insufficient or inadequate translation into community/migrant languages; (2) lack of information reflecting the lived experience of individuals and/or consideration of their specific circumstances or vulnerability; and (3) hard-to-access or ineffective communication channels; findings which are consistent with previous literature on pandemic communication. While a small number of studies stood in contrast to these points, finding no or little difference between different demographic groups (i.e., van Scoy et al. (2021), a study that was conducted in the US in the early stages of the pandemic), it is not unreasonable to suggest that this was because messaging was already so flawed that it made little difference between groups.
The second aim of this review was to identify and analyze strategies that have been implemented to reach, engage, and communicate with disadvantaged groups in the context of Covid-19. It was encouraging to see that recommendations from the literature, particularly lessons learnt on the importance of community partnerships, trusted messengers and the co-creation of health and risk messages, had been taken on board in multiple studies. Interestingly, while engaging community leaders and members in the tailoring and delivery of health communications has been considered "desirable practice for decades" (Ryan et al., 2021, p. 30; see also Vaughan & Tinker, 2009), limited empirical evidence was available prior to Covid-19 on how to best undertake these activities. This scoping paper partially fills this knowledge gap through the synthesis of evidence from eleven case studies of community-health partnerships and real-world implementations of Covid-19 campaigns tailored to local contexts and groups, among them indigenous and faith communities, (foreign) agricultural workers, and people living with disabilities. Incorporation of community voices in risk and health messaging while staying true to the facts, active tackling of Covid-19 myths and misinformation, as well as regular revision of message contents in response to community concerns have all been identified as key contributors to effective Covid-19 communication.
There are, however, also important limitations related to this body of evidence, most importantly the fact that impact was evaluated almost exclusively in terms of reach data (e.g., number of website viewings) without giving due consideration to the feasibility or acceptability of the proposed communicative measures (Note 3) or their real-world impact on adherence to Covid-19 measures or improved health outcomes. As to broader limitations that warrant consideration, the review is skewed toward studies from the US and the UK, and it may have missed some relevant studies due to our search strategy relying on English-only terms without language restrictions rather than on a comprehensive multilingual search strategy (Note 4). Because of the speed with which new research is emerging, it is likely that further studies on the topic are already available as preprints or journal publications (Note 5). Another limitation is the exclusion of gray literature, which could have filled some of the gaps in impact-related evidence.
The review also raises intriguing questions regarding the nature and extent of message customization, given that in the reviewed studies message targeting and tailoring (Note 6) often referred to adjustments in solely one or two of the key aspects, such as language selection (e.g., translation of materials into community or migrant languages without further editing), accessibility (in terms of format, processing, and comprehension difficulty), relevance of content (e.g., guidance on how to self-isolate in nomadic households), framing (e.g., messages emphasizing individual gains from getting vaccinated vs the collective good), appeal (e.g., use of religious imagery to convey the risks of disease), channel (e.g., ethnic language print), source and messenger (e.g., use of race-concordant physicians to deliver messages about the importance of mask-wearing) and trust-building (e.g., messages promoting mutual respect and solidarity). Interestingly, in some studies (Despres et al., 2020; Torres et al., 2021), communications recognizing racism, economic hardship, historical trauma and/or the disproportionate impact of Covid-19 on certain groups (e.g., Latinx or Black communities in the US) were considered a key if not primary form of message targeting, even though evidence around the effectiveness of such a strategy is scarce and inconclusive (see, e.g., Alsan et al., 2021).
Going forward, studies are needed that consider all the above aspects of message customization comprehensively, while also putting in place robust evaluation methods that can capture real-world effects (for example, in the form of increased adherence to public health measures or improved health outcomes in the communities concerned). Another gap in the literature relates to communication with individuals with special needs or disabilities and those who were required to shield throughout the crisis due to their age and/or underlying health conditions. Similarly, while much of the pandemic's frontline work fell on women, migrants, ethnic and racial minorities, and low-paid workers (OECD, 2022), relatively little evidence is currently available to understand the communication disadvantages faced specifically by these groups, whose jobs could not be done remotely and implied a higher risk of contagion throughout the pandemic. A further gap in the literature relates to the lifting of Covid-19 restrictions and how to best communicate with different segments of the public about the gradual phasing out of public health measures, which in some parts of the world have been in effect for over two years. Covid-19 has also highlighted the need to actively tackle misinformation, something which was recognized in the literature, but largely unaddressed.
As well as bringing about one of the largest changes to social life in living memory, Covid-19 has also created a distinct challenge in relation to health communication, with governments, health authorities and others having to deal with substantial uncertainty and unprecedented volumes of misinformation. This has been compounded amongst those who have been disproportionately impacted by the pandemic. While it remains important that we continue to provide information to people that is both accessible and that resonates with them, the Covid-19 pandemic has also highlighted the need for tailored approaches to tackle misinformation. Several of the studies flagged misinformation as a concern, but very few addressed it specifically. Finally, and while unprecedented in many ways, Covid-19 has shown how little we have learnt (or at least applied) from past pandemics in relation to health communication. There is a pressing need to address this as the pandemic continues to impact lives in the coming years.

Notes
1. Each paper was included in one group only.
2. Brewer et al. (2020) and Wieland et al. (2021) also reported positive outcomes related to the feasibility and acceptability of their emergency risk communication proposals.
3. With the exception of Alsan et al. (2021), Brewer et al. (2020) and Wieland et al. (2021).
4. The search returned only English-language publications.
5. A search conducted on 8 April 2022 on OSF preprints (https://osf.io/preprints/), which aggregates search results from over thirty preprint providers, returned two manuscripts which are potentially relevant for this review.
6. We recognize that targeting usually draws on audience segmentation to develop and use group-specific messages, while tailoring fits messages to individual characteristics and preferences. This distinction, however, was not present in many of the papers reviewed in this study, most probably due to the different disciplinary backgrounds the authors drew on.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
The author(s) reported there is no funding associated with the work featured in this article.
Figure 2. Distribution of studies by country. | 2022-07-20T06:17:36.435Z | 2022-07-19T00:00:00.000 | {
"year": 2022,
"sha1": "850e38279d7f5643eeb405c32133b1f785ec1620",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/10410236.2022.2088022?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "12169cf933eedb1148b633a9afa46a1d03163df2",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
197470614 | pes2o/s2orc | v3-fos-license | H2−H∞ control of discrete-time nonlinear systems using the state-dependent Riccati equation approach
ABSTRACT A novel H2−H∞ state-dependent Riccati equation control approach is presented, providing a generalized control framework for discrete-time nonlinear systems. By solving a generalized Riccati equation at each time step, the nonlinear state feedback control solution is found to satisfy mixed performance criteria guaranteeing quadratic optimality with an inherent stability property in combination with H∞ type of disturbance attenuation. Two numerical techniques to compute the solution of the resulting Riccati equation are presented: the first is based on finding the steady-state solution of the difference equation at every step, and the second is based on finding the minimum solution of a linear matrix inequality. The effectiveness of the proposed techniques is demonstrated by simulations involving the control of an inverted pendulum on a cart, a benchmark mechanical system.
Introduction
The Hamilton-Jacobi equation (HJE) is a traditional approach to characterizing the optimal control of nonlinear systems. The solution of the HJEs provides the necessary and sufficient optimal control conditions for systems modelled by nonlinear dynamics. When the controlled system is linear time-invariant and the performance index is of linear quadratic regulator (LQR) type, the HJEs reduce to algebraic Riccati equations (AREs). As for the H∞ nonlinear control problem, the optimal control solution is equivalent to solving the corresponding Hamilton-Jacobi inequalities (HJIs). However, HJEs and HJIs, which are first-order partial differential equations and inequalities, cannot be solved for more than a few state variables.
Motivated by the success of linear system optimal control methods, a great deal of research over the last decade has been devoted to approximating the solutions of HJEs and HJIs. As powerful alternatives to HJE/HJI techniques, the state-dependent linear matrix inequality (SDLMI) and the state-dependent Riccati equation (SDRE) techniques have provided very effective algorithms for synthesizing nonlinear feedback controls. Both SDLMI and SDRE utilize state-dependent linear representations; some of the earliest work can be found in Cloutier (1997), Cloutier, D'Souza, and Mracek (1996); Huang and Lu (1996) and Mohseni, Yaz, and Olejniczak (1998). The purpose behind SDLMI is to convert a nonlinear system control design into a convex optimization problem involving state-dependent linear matrix inequality solutions. The recent development in numerical algorithms for solving convex optimization provides very efficient means for solving LMIs (Boyd, Ghaoui, Feron, & Balakrishnan, 1994). If a solution can be expressed in LMI form, then there exist efficient algorithms providing globally optimal numerical solutions. Therefore, if the LMIs are feasible, then the SDLMI control technique provides optimal solutions at each step for a given state for nonlinear system control problems. As pointed out in Jeong, Feng, Yaz, and Yaz (2010), Yaz (2009), and Wang, Yaz, and Long (2014a, 2014b), SDLMI provides an effective method to synthesize nonlinear feedback control achieving nonlinear quadratic regulator (NLQR), H∞ and positive realness performance criteria.
The SDRE control has emerged as a general design method since the mid-1990s, providing a systematic and effective design framework for nonlinear systems. Motivated by linear quadratic regulator control via the algebraic Riccati equation (ARE), Cloutier et al. extended the result to the nonlinear quadratic regulator problem by using state-dependent coefficient matrices, as pointed out in Cloutier (1997) and Cloutier et al. (1996). A discrete SDRE method is developed in Dutka, Ordys, and Grimble (2005). Due to its computational advantage and guaranteed local stability, the SDRE method is of practical importance and has a wide range of applications, including robotics, missiles, aircraft, satellites/spacecraft, unmanned aerial vehicles (UAVs), ship systems, autonomous underwater vehicles, automotives, process control, chaotic systems, biomedical systems, guidance and navigation, etc. A recent survey of the development of the SDRE method can be found in Cimen (2008, 2010). Traditionally, SDRE approaches address the nonlinear quadratic regulator problem. The contribution of this manuscript is to propose a novel H2−H∞ SDRE control approach with the purpose of providing a generalized control framework for discrete-time nonlinear systems. By solving the generalized SDRE at each time step, the optimal control solution is found to satisfy mixed performance criteria guaranteeing quadratic optimality with an inherent stability property in combination with H∞ type of disturbance reduction (Basar & Bernhard, 1995; Van der Schaft, 1993). Two numerical solution procedures, one involving the steady-state solution of a generalized Riccati difference equation and the other involving a state-dependent LMI, are also given. The effectiveness of the proposed technique is demonstrated by simulations involving the control of a benchmark mechanical system. The paper is organized as follows: In the second section, the system model and the performance criteria are introduced. In the third section, the derivation of the H2−H∞ SDRE controller is provided. The optimal control solution can be obtained by solving the generalized SDRE. To solve the generalized SDRE, a difference SDRE and an SDLMI solution are also presented to provide computational alternatives. The fourth section contains an illustrative example involving the control of the inverted pendulum on a cart. Finally, the conclusions are summarized in the fifth section. The following notation is used in this work: $x \in \mathbb{R}^n$ denotes an n-dimensional real vector with norm $\|x\| = (x^T x)^{1/2}$, where $(\cdot)^T$ indicates transpose. $A \geq 0$ for a symmetric matrix denotes a positive semi-definite matrix. $l_2$ is the space of infinite sequences of finite-dimensional vectors with finite energy: $\sum_{k=0}^{\infty} \|x_k\|^2 < \infty$.
System model and performance index
Consider the input-affine discrete-time nonlinear system given by the following difference equation: $$x_{k+1} = A_k x_k + B_k u_k + F_k w_k, \qquad (1)$$ where $x_k \in \mathbb{R}^n$ is the state vector, $u_k \in \mathbb{R}^m$ the applied input, $w_k \in \mathbb{R}^q$ the $l_2$ type of disturbance, and $A_k$, $B_k$, $F_k$ the state-dependent matrices of known structure. Note that the simplified notation for time-varying matrices $A_k$, $B_k$, etc. is used to denote the state-dependent matrices. The performance output function $z_k \in \mathbb{R}^p$ is generalized as follows: $$z_k = C_k x_k + D_k u_k + G_k w_k, \qquad (2)$$ where $C_k$, $D_k$, $G_k$ are state-dependent coefficient matrices of known structure. It is assumed that state feedback is available; otherwise, an estimated state variable can be obtained from a nonlinear state estimator. The nonlinear state feedback control input is given by $$u_k = K_k x_k. \qquad (3)$$ Consider the quadratic energy function $V_k = x_k^T P_k x_k$ for the difference inequality (5). Note that upon summation over k, Equation (5) yields the mixed performance index (6). Notice that $Q_k$ and $R_k$ are state-dependent counterparts of the weighting matrices in the traditional linear quadratic (H2) control approach and $\gamma^2$ is the H∞ bound. By properly specifying the values of the weighting matrices $Q_k$, $R_k$, $C_k$, $D_k$, mixed performance criteria can be used in nonlinear control design, which yields a mixed NLQR in combination with an H∞ performance index.
Main results
The following theorem summarizes the main results of the paper: Theorem 1: Given the system (1), performance output (2), and control input (3), the mixed performance index (6) can be achieved by using the control feedback where P k is obtained from the generalized SDRE: Proof: By applying system (1), performance output (2), control input (3), performance index (5) can be written as Equivalently, we have Therefore, we have where By applying the Schur complement (Boyd et al., 1994), we obtain which yields The minimum value of P k is achieved when the inequality above is satisfied as an equality. Since the iterative solution starts at P ∞ and runs backward in time and for P k+1 = P k convergence occurs, the difference equation becomes an algebraic equation (Dutka et al., 2005) as follows: By collecting terms, we have Equivalently, the equation can be simply written as where By completing the square in the controller gain K k , we have For Equation (18) to be equal to Equation (20), we must have Therefore, the optimal feedback gain When K k = K o k , the minimum P k is defined by the positive-definite solution of the following generalized SDRE: Equation (23) is the generalized discrete SDRE equation. By solving P k from Equation (23), the H 2 −H ∞ SDRE control can be achieved by Equation (22).
Remark 1:
As a special case, if there is no H∞ component in the performance index, i.e. the problem is that of nonlinear quadratic regulator control, then the following controller can be derived as a special case of the above results. By neglecting the noise term, the system equation becomes $$x_{k+1} = A_k x_k + B_k u_k.$$ The optimal feedback control gain is $$K_k^o = -(R_k + B_k^T P_k B_k)^{-1} B_k^T P_k A_k,$$ where $P_k$ is defined by the positive-definite solution of the following generalized SDRE: $$P_k = Q_k + A_k^T P_k A_k - A_k^T P_k B_k (R_k + B_k^T P_k B_k)^{-1} B_k^T P_k A_k. \qquad (26)$$ Therefore, the conventional discrete SDRE solution (Dutka et al., 2005) is derived as a special case of our results.
Remark 2:
The generalized SDRE (23) can be numerically difficult to solve. To facilitate the computation process, the following two results provide alternative numerical solutions to the generalized SDRE in Theorem 1. Method 1 obtains the solution by solving the difference SDRE (28) until the steady state is reached, instead of solving (23) directly. Method 2 provides a state-dependent linear matrix inequality approach.
Numerical method 1 (H 2 −H ∞ difference SDRE control)
Given the system (1), performance output (2), control input (3) and performance index (6), optimality can be achieved by using the control feedback (27), where $P_k$ is obtained as the steady-state solution of the difference SDRE equation (28). At time step k, the difference equation (28) is iterated starting with an arbitrary initial condition $P_{k,0} > 0$ until $P_{k,i}$ converges to $P_{k,i+1}$, for $i = 1, 2, 3, \ldots$. Hence, the solution to the generalized SDRE equation (23) can be found using this method. In practical applications, we can choose the solution $P_{k-1}$ from the previous time step as the starting value for the iterations to calculate $P_k$.
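To make Numerical Method 1 concrete, the sketch below iterates the Riccati difference recursion with the state-dependent matrices frozen at the current state, warm-starting from the previous solution. Since the displayed equations (27)-(28) are not reproduced above, the sketch implements only the H2 special case of Remark 1 (Equation (26)); the pendulum-like coefficients in A_of, the weights Q and R, and the gain convention u_k = K_k x_k are illustrative assumptions, not the paper's exact benchmark model.

```python
import numpy as np

def sdre_step(A, B, Q, R, P0, tol=1e-9, max_iter=500):
    """Iterate the Riccati difference equation with frozen A = A(x_k),
    B = B(x_k) until steady state (Numerical Method 1, H2 case of (26))."""
    P = P0
    for _ in range(max_iter):
        S = R + B.T @ P @ B
        K = -np.linalg.solve(S, B.T @ P @ A)      # K = -(R + B'PB)^{-1} B'PA
        P_next = Q + A.T @ P @ (A + B @ K)        # Riccati recursion
        if np.max(np.abs(P_next - P)) < tol:
            break
        P = P_next
    return P_next, K

def A_of(x, dt=0.01):
    """Hypothetical state-dependent coefficients of a pendulum-like system;
    np.sinc(x0/pi) equals sin(x0)/x0 and is well defined at x0 = 0."""
    return np.array([[1.0, dt], [9.81 * np.sinc(x[0] / np.pi) * dt, 1.0]])

B = np.array([[0.0], [0.01]])
Q, R = np.eye(2), np.array([[0.1]])

x, P = np.array([0.5, 0.0]), np.eye(2)
for k in range(200):                              # closed-loop simulation
    A = A_of(x)
    P, K = sdre_step(A, B, Q, R, P)               # warm start from previous P
    x = A @ x + B @ (K @ x)                       # no disturbance in this sketch
```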
Numerical method 2 (state-dependent LMI control)
Given the system equation (1), performance output (2), control input (3) and performance index (6), if there exist matrices $M_k = P_k^{-1} > 0$ and $Y_k$ for all $k \geq 0$ such that the following state-dependent LMI holds (Wang, Yaz, & Long, 2014a, 2014b), where the (1,2) block is given by $-\alpha M_k C_k^T D_k + 0.5\,\beta M_k C_k^T$, and $M_{k+1} \geq M_k$, where $\max \pi_k$ s.t. $M_k \geq \pi_k I$ (32), then inequality (5) is satisfied. The nonlinear feedback gain of the controller is given by $K_k = Y_k M_k^{-1}$. Proof: Inequality (10) is equivalent to the following inequality (34). By adding and subtracting the same term in Equation (34), the inequality (35) results. Therefore, subject to $P_{k+1} \leq P_k$, Equation (35) can be rewritten as (36). By pre-multiplying and post-multiplying the matrix with the block diagonal matrix diag$\{M_k, I, I\}$, where $M_k = P_k^{-1}$, the LMI form (41) follows. Hence, if the LMI (41) holds, inequality (5) is satisfied. The following initial conditions are assumed: $x_1 = 1$, $x_2 = 0$, $x_3 = \pi/4$ and $x_4 = 0$.
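The blocks of the LMI above did not survive extraction, but the Schur-complement step that the proof hinges on can still be illustrated. The sketch below checks numerically that, with M = P^{-1} and Y = K M, positive definiteness of the block matrix [[M, (AM + BY)^T], [AM + BY, M]] is equivalent to the Lyapunov decrease condition (A + BK)^T P (A + BK) - P < 0; the system matrices and the gain K are hypothetical placeholders.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[1.0, 0.01], [0.0981, 1.0]])        # placeholder frozen A(x_k)
B = np.array([[0.0], [0.01]])
K = np.array([[-300.0, -40.0]])                   # hypothetical stabilizing gain

Acl = A + B @ K                                   # closed-loop matrix
assert np.max(np.abs(np.linalg.eigvals(Acl))) < 1.0

# P > 0 solving Acl' P Acl - P = -I, so the Lyapunov condition holds.
P = solve_discrete_lyapunov(Acl.T, np.eye(2))

M = np.linalg.inv(P)                              # M = P^{-1}
Y = K @ M                                         # Y = K M
Z = A @ M + B @ Y                                 # equals Acl M
lmi = np.block([[M, Z.T], [Z, M]])
# By the Schur complement: lmi > 0  <=>  M - Acl M Acl' > 0
#                                   <=>  Acl' P Acl - P < 0.
print(np.linalg.eigvalsh(lmi).min() > 0)          # expected: True
```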
Simulation results for different design parameter values are compared in Figures 1-5: the classical SDRE or NLQR result (Dutka et al., 2005), the new H2−H∞ controller for a set of design parameter values computed by using the difference equation technique, the new controller for two different sets of parameter values computed by the SDLMI technique, and the traditional LQR control based on linearization. From these results, one can choose the controller that best suits the designer's expectations. Note that Figures 1, 3 and 4 show that the traditional LQR technique loses control of the state variables. Figure 5 shows that the lowest control magnitude is needed by the linearization-based LQR technique, at the expense of losing control of the state trajectory.
Conclusions
A novel H2−H∞ control of discrete-time nonlinear systems with the SDRE approach is presented in this paper. The optimal control solution can be obtained by solving generalized state-dependent Riccati equations or state-dependent LMIs. The inverted pendulum on a cart is used as an illustrative example. For future work, the mixed H2−H∞ SDRE control approach will be extended to nonlinear systems with non-affine structure.
Disclosure statement
No potential conflict of interest was reported by the authors. | 2019-04-22T13:05:44.437Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "50b0884c5f9d8b8c99d760682509e13104acfec6",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21642583.2017.1310635?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "86e849705ad3e7e6973ff23ea4ec780ab91f9ca2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
61321813 | pes2o/s2orc | v3-fos-license | A Comparison Study on Machine Learning Algorithms Utilized in P300-based BCI
This study addresses Brain-Computer Interface (BCI) systems meant to permit communication for those who are severely locked-in. The current study attempts to evaluate and compare the efficiency of different translating algorithms. The setup used in this study detects the elicited P300 evoked potential in response to six different stimuli. Performance is evaluated in terms of error rates, bit-rates and runtimes for four different translating algorithms; Bayesian Linear Discriminant Analysis (BLDA), Linear Discriminant Analysis (LDA), Perceptron Batch (PB), and nonlinear Support Vector Machines (SVMs) were used to train the classifier, whilst an N-fold cross-validation procedure was used to test each algorithm. A communication channel based on Electroencephalography (EEG) is made possible using various machine learning algorithms and advanced pattern recognition techniques. All algorithms converged to 100% accuracy for seven of the eight subjects. While all methods obtained fairly good results, BLDA and PB were superior in terms of runtimes, where the average runtimes for BLDA and PB were 13 ± 2 and 15.6 ± 6 seconds, respectively. In terms of bit-rates, BLDA obtained the highest average value (22 ± 12 bits/minute), where the average bit-rate for all subjects, all sessions, and all algorithms was 18.76 ± 10 bits/minute.
Introduction
The electrical activity of the brain, i.e., electroencephalography (EEG), originates mainly from the cerebral cortex. The amplitude of the EEG signal is commonly in the range of 0 to ±100 µV, and its frequency is in the range of 0 to 100 Hz. The four different EEG waves, namely alpha, beta, theta, and delta, are each characterized by their own unique properties. Although not observed separately, one property is usually dominant over the others depending on the state. One of the problems associated with the EEG frequency range is the signal's susceptibility to ambient noise. Noise sources such as the 50 Hz power-line interference and impedance fluctuations tend to complicate and hinder the acquisition of a strong EEG signal with a high signal-to-noise ratio (SNR), in addition to causing motion artifacts [1].
Since its appearance in 1923, EEG has been used mainly as a diagnostic tool for neurological disorders. Recently, there has been growing interest in using EEG as a control channel to help people suffering from a health condition apparently similar to coma in which patients are mute and totally paralyzed, except for eye movements, but stay conscious. This condition usually results from massive hemorrhage, thrombosis, or other damage, affecting upper part of brain-stem, which destroys almost all motor function, but leaves the higher mental functions intact. People having such a condition are identified as locked-in. Those people produce the same spatiotemporal activation patterns on intention to use an extremity as those observed in healthy individuals [2][3][4].
In order to extract useful information from the EEG data, pattern recognition algorithms may be employed. There are different techniques for pattern recognition necessitating the careful selection and design of an appropriate method to a specific problem. Most schemes are based on statistical probability evaluation [5], neural networks [6], support vector machines (SVMs) [7], or other similar techniques.
A functional and seamless human-machine interface will have a profound impact on those suffering from neurological disorders that make them unable to communicate or manipulate their surroundings.
While some people with limited handicaps may achieve this communication through physical contact with a machine-such as using a keyboard, manipulating a joystick, or even issuing voice commands, others are severely handicapped and unable to communicate through the normal neuromuscular channels, and may benefit from a special interface. A Brain Computer Interface (BCI) is a system that allows communication with the central nervous system (CNS) by translating brain signals into commands a machine is able to understand [6,[8][9][10]. Most BCIs are special types of Human Machine Interfaces (HMIs), which are characterized by the use of Electroencephalography (EEG) as the main channel of communication. The use of EEG for communication is an important feature of BCIs allowing for a communication pathway independent of patterns produced by motor activity, where the command signal is extracted directly from cortical brain activity and is independent of efferent neuromuscular channel activity.
There are currently several major categories of BCIs in use that are classified based on the type of neurophysiologic signal they utilize. These categories include, but are not limited to, Visual Evoked Potentials (VEPs), P300 elicitation, alpha and beta rhythm activity, slow cortical potentials (SCPs), and microelectrode cortical neuronal recordings [2,3,11]. The P300 waves are evoked potentials that are elicited in response to specific stimuli, while SCPs occupy the lowest frequency range of the EEG signal and are associated with cortical activation and deactivation. In the SCPs, negative shifts correspond to cortical activation and positive shifts correspond to cortical deactivation. The alpha (or Mu) rhythms and beta activities cover the frequency ranges 8-12 Hz and 12-30 Hz, respectively. These are thought to be activities of the sensory/motor cortex [12]. As for the microelectrode cortical neuronal recordings, it is an invasive technique involving the direct contact of microelectrodes with brain tissue. Most BCI systems aim to distinguish between different signals based on subject intention. A P300-based BCI system emits different commands based on the time at which the P300 is elicited, while SCP-based systems detect commands based on positive or negative voltage shifts [3,13]. Furthermore, P300 is an event-related potential that can be seen as a positive deflection of the normal EEG in response to stimuli after approximately 300 ms latency [14]. The BLDA algorithm was first introduced by MacKay [15] for Bayesian interpolation, and later implemented for P300 wave detection by Hoffmann et al. [16]. It exhibits viability for online BCI systems due to the recursive computation of the hyper-parameters.
Various techniques are applied to classify data and features. For example, LDA assumes linear separation between different states, and in order to ensure separation, a simple mathematical procedure is applied [17]. On the other hand, Principal Component Analysis (PCA), which is a statistical method that searches for components of high significance in representing the data while eliminating those of low contribution, can be seen as a search for directions that are efficient for representing the data [5]. Independent Component Analysis (ICA), sometimes referred to as "blind source separation", is a statistical procedure used to separate signals that are mixed linearly and randomly, assuming that these signals originate from independent sources [18][19][20]. BLDA is an iterative procedure, which aims to compute the posterior probability using hyper-parameters [15]. ICA and PCA were not investigated in this study due to the additional computational complexity introduced into the problem, which is contrary to our goal of providing a simple and efficient online EEG processing tool.
Sellers and Donchin [21] developed a P300-based BCI using a four choice paradigm (Yes, No, Pass, End), where classification was based on Stepwise Discriminant Analysis (SWDA). The aim of their study was to determine whether Amyotrophic Lateral Sclerosis (ALS) patients could use P300 BCIs as an alternative communication channel. Using the Berlin brain computer interface (BBCI) [2], data was collected based on stimuli that evoked readiness potentials, and then preprocessed using fast Fourier transform (FFT) filters. Subsequently, Fisher Linear Discriminant Analysis (FLDA) was trained to classify the data. The authors obtained good results, with an error approaching zero within 500 ms, and a bit rate of 37 bits/min for a spelling task.
In order to conduct a comparison of various algorithms, different electrode configurations, and the consequent bit rates, a P300 based BCI was developed by Hoffmann et al, providing an evaluation of two classification algorithms, specifically, FLDA and BLDA [16]. The study suggested that BLDA obtained a higher bit-rate and classification accuracy. Furthermore, the accuracy and speed (bit-rate) increased proportionally with the dimension of the data. After an extensive first session of supervised algorithm learning and feedback, a study based on the datasets provided by BCI competition 3 incorporated the use of adaptive linear discriminant analysis (ALDA) for classification of different motor imageries [22]. The study showed that ALDA outperformed LDA during supervised learning sessions with higher decoding power over time. A real-time independent BCI system was implemented in the Graz-BCI. The system aimed at distinguishing the different motor imageries based on Event-related De-synchronization (ERD) and Event-related Synchronization (ERS) of the Mu and beta rhythms [12].
The present work introduces an optimized P300-based BCI system.
Detecting the emergence of the P300 wave in a time sequence is not easy due to variability among subjects in amplitude, latency and duration. Thus instead of designing subject-based detection algorithms, this study utilized machine learning techniques in which a classifier is trained to detect target signals from a training set. It also attempts to evaluate the performance of four different translating algorithms (BLDA, LDA, PB, and Nonlinear SVM).
Materials and Methods
This study employed the same data set used in the investigation by Hoffmann et al. [16], which comprised four healthy subjects and four subjects with neurological deficits. The recorded EEG data were based on visual stimuli (TV, telephone, lamp, door, window, and a radio) that evoked the P300 component. Each subject recorded four sessions, one minute for each class for six different classes, giving a total of 24 minutes of recording. Subjects were asked to focus on a specific image for each run; while the sequence of stimuli was randomly presented. Several performance measures were computed to develop a comparison between the different methods. The ultimate objective is to determine which of the methods would be most suitable for accurate real-time communication. The study employs Bayesian Linear Discriminant Analysis (BLDA) to detect the emergence of the P300 wave in the time series. It utilizes a set of different algorithms, including Linear Discriminant Analysis (LDA) and perceptron neural network programming, in order to train a linear classifier. In addition, a nonlinear algorithm (nonlinear SVM) is also used to train a nonlinear classifier for the sake of performance comparison with the other linear algorithms.
The BCI system was designed for real-time analysis relying on prior training of the classifier. The online data processing was developed using Matlab and Simulink (Math Works, Inc., USA). During acquisition, data are processed online sample by sample. Recording starts at the initialization of stimuli. A timing function specifies the feature vector lengths and time points in reference to the start time of recording. Thus the predefined timing function, based on subject intention and flags, emitted by the different stimuli can produce the six different class labels online during the training phase of the classifier.
In the Bayesian framework, one seeks to estimate the posterior probabilities for each state, based on prior probabilities calculated from the class labels. The class corresponding to the highest posterior is selected. An indirect representation of the posterior probability is the weight vector (W), which is calculated to train the classifier (given in Equation (9)). In BLDA, the weight vector is computed recursively, in contrast to the direct calculation used in LDA. In contrast, Perceptron Batch (PB) and nonlinear SVM aim to separate classes with the largest possible margin. A possible drawback of both techniques is the need to predefine a maximal number of iterations. The four algorithms were compared in terms of accuracy, error, bit-rates, and computational complexity.
Classification error is defined as the ratio of erroneously emitted commands ($N_e$) to the total commands emitted ($N_t$) and is computed from the formula: $$Error = N_e / N_t. \qquad (1)$$ Accuracy, as such, is the ratio of correctly emitted commands ($N_c$) to the total commands emitted: $$Accuracy = N_c / N_t. \qquad (2)$$ When assessing the performance of communication systems, Information Transfer Rates (ITRs) are given in terms of bit-rates. There are several bit-rate definitions in the literature; among the first reported is that by Farwell and Donchin [14], defined as: $$B = V \cdot R, \qquad (3)$$ where V is the classification speed in symbols/minute and R is the information carried by one symbol (bits/symbol), defined as: $$R = \log_2 N, \qquad (4)$$ where N is the number of possible targets.
The second definition, based on Shannon's information theory for noisy channels, was introduced in Wolpaw et al. [3] and is given as: $$R = \log_2 N + p \log_2 p + (1 - p) \log_2\!\left(\frac{1-p}{N-1}\right), \qquad (5)$$ where p is the probability of a target being correctly classified. In this study, the definition in Equation (5) was chosen for computing the bit-rates because it takes accuracy into account, thus representing information transfer rates without assuming a faultless classifier; at 100% accuracy, it reduces to the definition in Equation (4).
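As a quick illustration, the snippet below evaluates Equation (5) and converts it to a bit-rate; the six-choice paradigm matches the setup above, while the accuracy and decision rate are made-up values.

```python
import math

def bits_per_symbol(N, p):
    """Wolpaw ITR per selection, Equation (5); reduces to log2(N) at p = 1."""
    if p >= 1.0:
        return math.log2(N)
    return (math.log2(N) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (N - 1)))

def bit_rate(N, p, V):
    """Bit-rate in bits/minute for V classifications per minute."""
    return V * bits_per_symbol(N, p)

print(bit_rate(N=6, p=0.9, V=5))   # ~9.4 bits/minute for the assumed values
```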
The principal objective of a P300-based algorithm is to detect target signals. In statistical terminology, the algorithm estimates the probability of a certain data set containing a P300 wave. This study compares the previously mentioned set of algorithms in terms of the above-mentioned parameters. As offline analysis is used to train and test the data, a parameter that indicates the computational complexity is required. To establish a comparison between the different methods, their runtimes were computed using the Matlab profiler; the lower the runtime, the more feasible the method for online implementation on a small digital signal processing (DSP) board. Results were obtained using the 4-fold cross-validation procedure. The setup implemented contained four sessions for each subject. As such, three labeled sessions were used to train the classifier and the fourth was used, unlabeled, to test it. At each run, a record of the number of correctly emitted commands was kept to compute the average error rate and the bit-rate corresponding to each subject. 4-fold cross-validation was used to obtain average values of errors and bit-rates (±) the standard deviation. The procedure was repeated for the different algorithms. Classification errors and bit rates were obtained for eight subjects. Four of the subjects (A, B, C, D) had neurological deficits, while the remainder (E, F, G, H) were healthy with no known neurological disorders. The data used in training and testing contained eight channels of EEG signals recorded from four midline electrodes (Cz, Pz, Fz, and Oz) and four parietal electrodes (P3, P4, P7, and P8). Each decision was emitted based on a probability comparison between six different datasets corresponding to the different visual stimuli that were elicited. At each correctly classified command, there are one true positive and five true negatives, and at each erroneously emitted command there are one false positive and five false negatives.
The sensitivity and the specificity were calculated, respectively, with the formulas given below [15]: $$Sensitivity = \frac{TP}{TP + FN}, \qquad Specificity = \frac{TN}{TN + FP},$$ where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively.
Pattern Recognition Stages
An important step when presenting the proposed system is the implementation of the pattern recognition stages in the communication channel between the human brain and the computer.
Preprocessing
For BCI applications, the recorded data are preprocessed to reduce noise, artifacts, and dimension prior to being fed to a machine learning algorithm. The data were preprocessed by six different preprocessing blocks. A high-order notch filter was used to eliminate power-line noise. A third-order Butterworth bandpass filter was used with lower and upper cutoff frequencies of 1 and 12 Hz, respectively. The cutoff frequencies were varied for each subject to identify the values that produced the best results. The data were then downsampled to 32 samples to reduce the dimension of the filtered data.
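A minimal version of this filtering chain could look as follows; the 2048 Hz sampling rate and the toy signal are assumptions for illustration, while the notch and band-pass parameters follow the description above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, resample

fs = 2048                                   # sampling rate (assumed)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # toy EEG

b_n, a_n = iirnotch(w0=50, Q=30, fs=fs)     # remove 50 Hz power-line noise
x = filtfilt(b_n, a_n, x)

b, a = butter(N=3, Wn=[1, 12], btype='bandpass', fs=fs)  # 1-12 Hz Butterworth
x = filtfilt(b, a, x)

x32 = resample(x, 32)                       # reduce each 1 s trial to 32 samples
```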
Stimuli were elicited every 400 ms, and due to the variable latency of the P300 component among subjects, extraction proceeded for one second after the onset of the stimulus, which resulted in 600 ms of overlap, as can be seen in Figure 1. Each trial was concatenated into a multidimensional array and sent to the next stage, where it was multiplied by a window function to emphasize the late signal content. In the following stage, trials were scaled to the [-1, 1] interval and normalized to have zero mean and unit variance according to Equation (8): $$\hat{x}_i = \frac{x_i - \mu}{\sigma}, \quad i = 1, \ldots, n, \qquad (8)$$ where $\mu$ is the mean value, $\sigma$ is the standard deviation, and n is the number of data points.
A whitening transform was applied to the extracted features, making the data covariance matrix proportional to the identity [5]. In order to obtain the coordinate transformation, the mean ($\mu$) was subtracted from the data as in Equation (9): $$\hat{x} = x - \mu. \qquad (9)$$ Then eigenvalue decomposition was applied to the data covariance matrix, as in Equation (10): $$C v_i = \lambda_i v_i, \qquad (10)$$ where $\lambda_i$ are the eigenvalues, $v_i$ the eigenvectors, and C the data covariance matrix defined in Equation (11): $$C = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T. \qquad (11)$$ A new whitened data vector X is obtained by performing the transformation shown in Equation (12): $$X = D^{-1/2} V^T \hat{x}, \qquad (12)$$ where V is the matrix of eigenvectors and D is the diagonal matrix of eigenvalues illustrated in (13): $$D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n). \qquad (13)$$ Machine learning algorithms for classification: When evaluating uncertainties in data, we often rely on probabilistic methods such as Bayes' theorem [5].
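Returning to the whitening transform, a minimal sketch following Equations (9)-(13) as reconstructed above (the synthetic data and the small regularization constant are assumptions):

```python
import numpy as np

def whiten(X):
    """Whiten feature vectors (rows of X) so the empirical covariance of
    the output is the identity, via eigendecomposition of the covariance."""
    mu = X.mean(axis=0)
    Xc = X - mu                                       # Eq. (9): subtract mean
    C = Xc.T @ Xc / len(X)                            # Eq. (11): covariance
    lam, V = np.linalg.eigh(C)                        # Eq. (10): eigendecomposition
    D_inv_sqrt = np.diag(1.0 / np.sqrt(lam + 1e-12))  # regularized D^{-1/2}
    return Xc @ V @ D_inv_sqrt                        # Eq. (12): whitening

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))  # correlated features
Xw = whiten(X)
print(np.allclose(np.cov(Xw.T, bias=True), np.eye(8), atol=1e-6))  # True
```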
The aim of any Bayesian algorithm is to approximate the probability of a state w given evidence x. Bayes' theorem is defined as in Equation (14): $$p(w|x) = \frac{p(x|w)\,p(w)}{p(x)}, \qquad (14)$$
where p(w) is the probability of occurrence of state w, p(x) is the probability of occurrence of event x, p(w|x) is the probability of occurrence of state w given the event x, and p(x|w) is the probability of occurrence of event x given the state w.
With the prior probabilities defined and the evidence approximated by a multivariate density function, the posterior probability is evaluated for each state, and the class/command corresponding to the largest probability is selected according to Equation (15): $$w^{*} = \arg\max_{w}\, p(w|x). \qquad (15)$$
Classification is based on distinguishing one feature from another. If the features are well extracted and represent an event that occurred in the time series, the classifier is set to distinguish between two or more different states/classes and select the most probable event although this process is sometimes contaminated by errors due to noise and artifacts in the EEG.
In this study, LDA, BLDA, and perceptron programming were used for linear classification, while a nonlinear sigmoid SVM was implemented to train a nonlinear classifier. The LDA model assumed is simple and robust, yet sensitive to artifacts and noise. The objective is to evaluate a linear hyperplane (w) with maximum margin (α) that separates two different classes (commands/states) (Figure 2).
Computation of the normal vector w that maximizes the criterion function outlined in Equation (17) gives the normal vector shown in Equation (18): $$w = S^{-1}(\mu_1 - \mu_2), \qquad (18)$$ where S is the within-class scatter matrix defined in Equation (19): $$S = \sum_{x \in \omega_1} (x - \mu_1)(x - \mu_1)^T + \sum_{x \in \omega_2} (x - \mu_2)(x - \mu_2)^T. \qquad (19)$$ Performance measures: The performance evaluation of the BCI algorithm was measured in terms of classification error (or classification accuracy), and bit-rates were used to assess the information transfer characteristics. Published performance data are summarized in Table 1. To study the feasibility of BCI algorithms for online applications, the runtimes for each algorithm were computed using a 2.6 GHz Intel dual-core processor. The runtime of each algorithm was scaled to the maximum runtime of all methods at a specific data size.
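For concreteness, a sketch of training the Fisher LDA classifier from Equations (18)-(19) above; the feature dimensions and data are synthetic, and the mid-point threshold is an assumption.

```python
import numpy as np

def fisher_lda(X1, X2):
    """Fisher LDA: w = S^{-1}(mu1 - mu2), with S the within-class scatter
    matrix, Equations (18)-(19)."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)
    w = np.linalg.solve(S, mu1 - mu2)
    b = -0.5 * w @ (mu1 + mu2)                   # threshold midway between means
    return w, b

rng = np.random.default_rng(1)
X_target = rng.normal(loc=1.0, size=(100, 4))    # trials containing a P300
X_nontarget = rng.normal(loc=0.0, size=(300, 4)) # non-target trials
w, b = fisher_lda(X_target, X_nontarget)
scores = X_target @ w + b                        # positive score -> target class
print((scores > 0).mean())                       # training sensitivity
```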
Results and Discussion
The classification error and bit-rate against time for the disabled and healthy subjects are shown in Figures 3 and 4. Table 2 shows the confusion matrix obtained as a result of the decision made based on probability comparison. Looking at Figures 3 and 4, one can notice the inverse relation between the bit-rate and accuracy; this suggests that the more trials are incorporated in the decision, the higher the accuracy and the lower the bit-rate. But since 100% accuracy is approached, it is assumed that the operating bit-rate for that accuracy is the one obtained at the same time interval. For example, subject A will operate a faultless classifier at 7.5 bits/minute and subject E at 12.5 bits/minute.
The maximum accuracy, the sensitivity, and the specificity, obtained for the different algorithms are presented in Table 3. It is fairly clear that, in terms of accuracy, all of the methods behave similarly. However, in terms of classification speed (bit-rate), variable results were obtained. The mean squared error (MSE) and the maximum bit-rate obtained for all subjects is presented in Table 4.
All of the results were averaged over the four sessions. The average bit-rates obtained with BLDA, LDA, PB, and nonlinear SVM for all subjects were 23 ± 13, 20 ± 13.6, 17.3 ± 5.6, and 14.6 ± 5.5 bits/minute, respectively. This suggests that BLDA outperformed all the other methods in terms of both speed and accuracy, since bit-rate and accuracy are directly proportional.
For offline analysis, all algorithms obtained fairly good results. However, from a developer's point of view, a system is best when trained online and operated on small portable hardware. Thus, non-complex and fast algorithms need to be developed. To assess the feasibility of each method for such a task, the runtime was computed against the data size, as shown in Figure 5.
It is obvious that as the data size increases, BLDA and Perceptron batch converge faster than the other two methods. The study employed 18 minutes of data for training and 6 minutes for testing. For this set, BLDA had the fastest runtime (14.5 seconds) and LDA had the slowest (60 seconds); this can be explained by the fact that LDA computes the inverse matrix directly to obtain the weight vector while in BLDA hyper-parameters are computed recursively to obtain the weight vector. Runtimes were scaled to the maximum argument of all methods to demonstrate how the runtimes vary as the data size increases ( Figure 6).
The runtimes in Figure 5 were averaged across four runs for each algorithm. From Figures 5 and 6, it can be seen that BLDA and PB converge faster and are less affected by the data size in comparison with the other two algorithms.
The four algorithms tested varied in runtimes as the data set size increased (Figures 5 and 6). However, BLDA and PB were not significantly affected by the sample size and exhibited robustness in the training phase. For example, with BLDA, when the data size increased from 6 to 12 minutes, the average runtime increased by only 3 seconds. On the other hand, using LDA the runtime increased almost 20 seconds. The average runtime for BLDA was 13 ± 2 seconds, and the average runtime of PB was 15.6 ± 6 seconds. For LDA and Nonlinear SVM the runtimes were in the range of 1-2 minutes for the 24 minutes of data set size. The classification speed (bit-rate) seemed to increase when data whitening was used in preprocessing. This was only valid for data with a low dimension (eight and four channels). With higher dimension data, whitening seemed to decrease the classification speed. In a BCI setup, it is more convenient to use a low number of channels. Table 5 shows the effect of data whitening on classification speed.
Several studies reported that the amplitude of the P300 increases proportionally with the number of choices [16,23]. On the other hand, as the number of choices is increased, it is fairly obvious that the probability of error increases. Nijboer et al. [24] reported a P300 speller system using 6×6 and 7×7 matrices (choices). The study included offline and online classifications, and reported higher accuracies for offline analysis. Theoretically, speed (bit-rate) increases with increased number of choices ( Figure 7). However, there is a practical limitation imposed by the fact that a high number of choices require higher dimensional features for training and testing.
The performance of a BCI system, as expected, depends on the machine learning algorithm it uses. Many algorithms have been developed and utilized in P300 detection. The question of which is better is never simple due to the performance variability observed among subjects. Generally, the method that requires minimal training data and needs less user intervention is better. Moreover, a better method takes less time to converge. Due to the rapid development of fast computers, multiple algorithms can be used in parallel. The methods used here all obtained high performance in terms of accuracy. However, two of them might be of significant importance in future BCI research: BLDA and PB, due to their low runtimes, which would make them practical for real-time applications, as mentioned in Table 6. Other methods include Hidden Markov Models (HMMs), ICA, and Wavelet Packet Transform (WPT). Obermaier et al. reported the use of HMMs for online classification of motor imageries [25]. Hung et al. [26] used ICA in pre-classification and reported an increase in accuracy. Limitations of these methods include the necessity of knowing the number of original sources for ICA, and choosing an appropriate wavelet type and the number of scales for WPT. These methods were not implemented in the presented P300-based BCI and need further research and testing.
Conclusions
People suffering from neuromuscular dysfunction may use a P300-based BCI to communicate with their environment quite successfully. This study showed that people suffering from neuromuscular disorders perform slightly lower than healthy subjects (Table 4); future development of this work needs to include a larger pool of subjects to validate this claim and to test whether the difference in performance is consistent between healthy and disabled subjects. All algorithms adopted in this study produced acceptable levels of performance, but two of the four algorithms (BLDA and PB) were superior in terms of minimal runtimes, as their runtimes were found to be much lower than the actual data length when acquiring the time vector online. As a result, implementing both BLDA and PB would provide the best choice. In general, all methods performed accurately but slowly. The reliance on the P300 wave in brain-computer communication defines the information transfer characteristics, since the P300 is related to time in latency and duration and is dependent on an external stimulus. Improvement of information transfer rates can be achieved either by increasing the number of choices (Figure 7), which is practically limited, or by increasing the number of commands that can be emitted in one minute. Further research on strategies is needed to develop high-speed online algorithms for enhanced user convenience.
"year": 2013,
"sha1": "3d397e49a7b1a8c80dfe63c4b978499f0be5a146",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/a-comparison-study-on-machine-learning-algorithms-utilized-in-p300-based-bci-2157-7420.1000126.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1bfc868fa67c7658a61a0c627e6691d076b0f39b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
239672649 | pes2o/s2orc | v3-fos-license | An Algorithm for Fast Multiplication of Kaluza Numbers
This paper presents a new algorithm for multiplying two Kaluza numbers. Performing this operation directly requires 1024 real multiplications and 992 real additions. We presented in a previous paper an effective algorithm that can compute the same result with only 512 real multiplications and 576 real additions. More effective solutions have not yet been proposed. Nevertheless, it turned out that an even more interesting solution could be found that would further reduce the computational complexity of this operation. In this article, we propose a new algorithm that allows one to calculate the product of two Kaluza numbers using only 192 multiplications and 384 additions of real numbers.
Introduction
The permanent development of the theory and practice of data processing, as well as the need to solve increasingly complex problems of computational intelligence, inspire the use of complex and advanced mathematical methods and formalisms to represent and process big multidimensional data arrays. A convenient formalism for representing big data arrays is the high-dimensional number system. For a long time, high-dimensional number systems have been used in physics and mathematics for modeling complex systems and physical phenomena. Today, hypercomplex numbers [1] are also used in various fields of data processing, including digital signal and image processing, machine graphics, telecommunications, and cryptography [2][3][4][5][6][7][8][9][10]. However, their use in brain-inspired computation and neural networks has been largely limited due to the lack of comprehensive and all-inclusive information processing and deep learning techniques. Although there have been a number of research articles addressing the use of quaternions and octonions, higher-dimensional numbers remain a largely open problem [11][12][13][14][15][16][17][18][19][20][21][22]. Recently, new articles appeared in open access that presented a sedenion-based neural network [23,24]. The expediency of using numerical systems of higher dimensions was also noted. Thus, the object of our research is hypercomplex-valued convolutional neural networks using 32-dimensional Kaluza numbers.
In advanced hypercomplex-valued convolutional neural networks, multiplying hypercomplex numbers is the most time-consuming arithmetic operation. The reason for this is that the addition of N-dimensional hypercomplex numbers requires N real additions, while the multiplication of these numbers requires N(N − 1) real additions and N² real multiplications. It is easy to see that increasing the dimension of the hypercomplex numbers increases the computational complexity of the multiplication. Therefore, reducing the computational complexity of the multiplication of hypercomplex numbers is an important scientific and engineering problem. The original algorithm for computing the product of Kaluza numbers was described in [25], but we found a more efficient solution. The purpose of this article is to present our new solution.
Preliminary Remarks
In all likelihood, the rules for constructing Kaluza numbers were first described in [26]. In article [25], based on these rules, a multiplication table for the imaginary units of the Kaluza numbers was constructed. A Kaluza number is defined as follows: $$d = d_0 + \sum_{n=1}^{N-1} d_n e_n,$$ where $N = 2^{m-1}$ and $\{d_n\}$ for n = 1, 2, . . . , 31 are real numbers, and $\{e_n\}$ for n = 1, 2, . . . , 31 are the imaginary units.
Imaginary units $e_1, e_2, \ldots, e_m$ are called principal, and the remaining imaginary units are expressed through them using the formula: $e_s = e_p e_q \cdots e_r$, where $1 \leq p < q < \cdots < r \leq m$.
All possible products of imaginary units are entirely determined by the established rules: $$e_p^2 = \epsilon_p; \qquad e_q e_p = \alpha_{pq}\, e_p e_q, \quad p < q; \qquad p, q = 1, 2, \ldots, m.$$ For Kaluza numbers [26], using the above rules, the results of all possible products of imaginary units can be summarized in the following tables [25]: Tables 1-4. For convenience of notation, we represent each element $e_i$ in the tables by its subscript i, i.e., we set $i = e_i$. Table 1. Multiplication rules of Kaluza numbers for $e_0, e_1, \ldots, e_{15}$ and $e_0, e_1, \ldots, e_{15}$ (elements $e_i$ denoted by their subscripts). [The rows of the tables covering $e_{16}, \ldots, e_{31}$ are garbled and omitted here.] Suppose we want to compute the product of two Kaluza numbers $$d^{(3)} = d^{(1)} d^{(2)},$$ where $d^{(1)} = a_0 + \sum_{n=1}^{31} a_n e_n$ and $d^{(2)} = b_0 + \sum_{n=1}^{31} b_n e_n$. The operation of the multiplication of Kaluza numbers can be represented more compactly in the form of the matrix-vector product (1). The direct computation of the matrix-vector product in Equation (1) requires 1024 real multiplications and 992 additions. We shall present an algorithm that reduces the computational complexity to 192 multiplications and 384 additions of real numbers.
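For reference, the direct product being accelerated can be organized around a multiplication table such as Tables 1-4: each entry specifies the sign and index of e_i e_j. The sketch below implements this generic table-driven product, which costs N² real multiplications as stated above; the toy table shown encodes the complex numbers (N = 2), not the Kaluza table itself.

```python
import numpy as np

def multiply(a, b, sign, idx):
    """Direct product of two N-dimensional hypercomplex numbers given a
    multiplication table: e_i * e_j = sign[i, j] * e_{idx[i, j]}."""
    N = len(a)
    c = np.zeros(N)
    for i in range(N):
        for j in range(N):
            c[idx[i, j]] += sign[i, j] * a[i] * b[j]   # N^2 real multiplications
    return c

# Toy check with the complex numbers (N = 2): e_1^2 = -1.
sign = np.array([[1, 1], [1, -1]])
idx = np.array([[0, 1], [1, 0]])
print(multiply(np.array([1.0, 2.0]), np.array([3.0, 4.0]), sign, idx))
# (1 + 2i)(3 + 4i) = -5 + 10i  ->  [-5. 10.]
```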
$B^{(1)}$, where: There is a possibility to use a method of factorization for the standardized matrices (6)-(8). This allows us to reduce the number of multiplications to 8²/2, using 8(8 + 1) additions, for each of the above matrices. Therefore, similarly to the previous case, we can write [27,28]: where $A_{N/2}$, $B_{N/2}$ are some matrices. Therefore, we can rewrite (6) in terms of $B_8^{(2-)}$, where: $B_8$.
Combining the partial decompositions in a single procedure, we can rewrite procedure (3) as follows, where $H_2$ is the order-2 Hadamard matrix, i.e. $$H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$ Introducing the corresponding notation in (10)-(13), we obtain: In order to simplify, we introduce the following notation for the elements of the matrix $B_8^{(2+)}$ (14): we obtain: Now, we introduce the following notation for the elements of the matrix $B_8^{(2-)}$ (15): we obtain: All of the above matrices have the same internal structure. We can permute rows and columns using the $\pi_r = (5\ 1\ 2\ 7\ 4\ 0\ 3\ 6)$ and $\pi_c = (5\ 1\ 2\ 6\ 4\ 0\ 3\ 7)$ permutation rules, respectively. We obtain the following form, where $\tilde{B}$ denotes the permuted matrices. We can use the multiplication procedure (9) and represent the above matrices in the corresponding factored form. Figure 1 shows a data flow diagram describing the new algorithm for the computation of the product of Kaluza numbers (17). In this paper, the data flow diagram is oriented from left to right. Straight lines in the figure denote data transfer operations. Points where lines converge denote summation. The dotted lines indicate the subtraction operation. We use regular lines without arrows on purpose, so as not to clutter the picture. The rectangles indicate matrix-vector multiplications, with the matrices inscribed inside the rectangles.
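The structural trick underlying the factorizations above can be illustrated on matrices of the form [[A, B], [B, A]], which the H_2 Hadamard split reduces to two half-size products on the sum and difference channels. This is only an illustration of the idea; the paper's actual matrices B_8^{(2+/-)} and permutations are not reproduced here.

```python
import numpy as np

def block_symmetric_apply(A, B, x):
    """Apply M = [[A, B], [B, A]] to x = [x1; x2] with two half-size
    matrix-vector products instead of four, via the H_2 Hadamard split."""
    n = A.shape[0]
    x1, x2 = x[:n], x[n:]
    s = (A + B) @ (x1 + x2)        # one product on the "sum" channel
    d = (A - B) @ (x1 - x2)        # one product on the "difference" channel
    return np.concatenate([(s + d) / 2, (s - d) / 2])

rng = np.random.default_rng(2)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
x = rng.normal(size=16)
M = np.block([[A, B], [B, A]])
print(np.allclose(M @ x, block_symmetric_apply(A, B, x)))   # True
```

When (A + B) and (A - B) are precomputed once, the multiplication count is halved relative to forming the full block matrix.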
Evaluation of Computational Complexity
We will now calculate how many multiplications and additions of real numbers are required for the implementation of the new algorithm and will compare this with the number of operations required both for the direct computation of the matrix-vector product in Equation (1) and for implementing our previous algorithm [25]. The number of real multiplications required using the new algorithm is 192. Thus, using the proposed algorithm, the number of real multiplications needed to calculate the Kaluza number product is significantly reduced. The number of real additions required using our algorithm is 384. We observe that the direct computation of the Kaluza number product requires 608 additions more than the proposed algorithm. Thus, our proposed algorithm saves 832 multiplications and 608 additions of real numbers compared with the direct method. The total number of arithmetic operations for the proposed algorithm is therefore approximately 71.4% less than that of the direct computation. The previously proposed algorithm [25] calculates the same result using 512 multiplications and 576 additions of real numbers. Thus, our proposed algorithm saves 62.5% of the multiplications and 33.3% of the additions of real numbers compared with our previous algorithm. Hence, the total number of arithmetic operations for the newly proposed algorithm is approximately 47% less than that of our previous algorithm.
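A quick check of the operation counts quoted above:

```latex
\begin{aligned}
&\text{direct: } 1024 + 992 = 2016, \qquad \text{proposed: } 192 + 384 = 576,\\
&\text{total reduction vs. direct: } 1 - 576/2016 \approx 71.4\%,\\
&\text{vs. [25]: } 1 - 192/512 = 62.5\% \text{ (mult.)}, \quad
  1 - 384/576 \approx 33.3\% \text{ (add.)}, \quad
  1 - 576/1088 \approx 47.1\% \text{ (total)}.
\end{aligned}
```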
Conclusions
We presented a new effective algorithm for calculating the product of two Kaluza numbers. The use of this algorithm reduces the computational complexity of multiplications of Kaluza numbers, thus reducing implementation complexity and leading to a high-speed resource-effective architecture suitable for parallel implementation on VLSI platforms. Additionally, we note that the total number of arithmetic operations in the new algorithm is less than the total number of operations in the compared algorithms. Therefore, the proposed algorithm is better than the compared algorithms, even in terms of its software implementation on a general-purpose computer.
The proposed algorithm can be used in metacognitive neural networks using Kaluza numbers for data representation and processing. The effect in this case is achieved by using non-commutative finite groups based on the properties of the hypercomplex algebra [24]. When using the Kaluza number, in this case, the rule for generating the elements of the group will be set, as well as the rule for performing the group operation of multiplication. Such a system can contain two components: a neural network based on Kaluza numbers, which represents a cognitive component, and a metacognitive component, which serves to self-regulate the learning algorithm. At each stage, the metacognitive component will decide how and when the learning takes place. The algorithm removes unnecessary samples and keeps only those that are used. This decision will be determined by the magnitude and 31 phases of the Kaluza number. However, these matters are beyond the scope of this article and require more detailed research. | 2021-10-21T16:28:56.796Z | 2021-09-03T00:00:00.000 | {
"year": 2021,
"sha1": "4cb19a7c803bf7e72f5fe40bb6afb023a447bb27",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/17/8203/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d9d0c7687f6bda29b51cc6c153264eebfa719712",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119308688 | pes2o/s2orc | v3-fos-license | A spiral interface with positive Alt-Caffarelli-Friedman limit at the origin
We give an example of a pair of nonnegative subharmonic functions with disjoint support for which the Alt-Caffarelli-Friedman monotonicity formula has strictly positive limit at the origin, and yet the interface between their supports lacks a (unique) tangent there. This clarifies a remark appearing in the literature (see \cite{cs05}) that the positivity of the limit of the ACF formula implies unique tangents; this is true under some additional assumptions, but false in general. In our example, blow-ups converge to the expected piecewise linear two-plane function along subsequences, but the limiting function depends on the subsequence due to the spiraling nature of the interface.
Introduction
The Alt-Caffarelli-Friedman monotonicity formula (hereafter denoted ACF formula) has been and continues to be a powerful tool in the study of free boundary problems. It was introduced in [4] in order to prove that the solutions to a two-phase Bernoulli free boundary problem are Lipschitz continuous. The formula was then adapted to treat more general two-phase problems, and a discussion of the formula, its proof, and its applications to two-phase free boundary problems may be found in [8]. The ACF formula has also been effective in studying obstacle-type problems, and applications of the formula to obstacle-type problems are found in [11]. Further applications include the study of segregation problems in [7]. While the most typical use of the formula is to prove the optimal regularity of solutions or flatness of the free boundary, it can also be used for other purposes, such as to show the separation of phases in free boundary problems (see [1][2][3]).
The key property of the ACF formula (1.1) is given in the following proposition. Proposition 1.1. The function $r \mapsto \Phi(r, u, \tilde{u})$ is nondecreasing in r; in particular, $\Phi(0+, u, \tilde{u}) := \lim_{r \to 0^+} \Phi(r, u, \tilde{u})$ is well defined.
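The displayed formula (1.1) is not reproduced above; for reference, the ACF functional for a pair u, ũ as in Proposition 1.1 is standardly given (up to a dimensional normalization, which we assume here) by

```latex
\Phi(r, u, \tilde{u}) \;=\; \frac{1}{r^{4}}
  \int_{B_r} \frac{|\nabla u|^{2}}{|x|^{n-2}}\,dx
  \int_{B_r} \frac{|\nabla \tilde{u}|^{2}}{|x|^{n-2}}\,dx .
\tag{1.1}
```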
Our paper is motivated by the following claim, which appears as Lemma 12.9 in [8].
As no proof of this Lemma 12.9 is provided in [8] (it is followed only by some general remarks), it is not entirely clear whether it is meant to be taken at face value. We note, for example, that if u is also assumed to satisfy a two-phase free boundary problem of the type treated in [8], then the claim is valid, but requires heavy use of the free boundary relation to prove. Claim 1.2, and in particular the question of whether it is true in the generality stated, drew the authors' interest when the second author was tempted to use it while working on certain eigenvalue optimization problems [10] but was unable to write down a proof. Typically, a monotonicity formula is applied together with other tools making explicit use of the free boundary relation in order to prove regularity of an interface; however, Claim 1.2 would imply that the ACF monotonicity formula, on its own, yields some regularity of the interface. This makes the claim very powerful and useful, especially in problems where the free boundary condition is difficult to exploit, such as the vector-valued free boundary problems arising from spectral optimization [9,10].
Unfortunately, it is also not true: the main result of this paper is to provide a counterexample to Claim 1.2. Theorem 1.3. For any dimension n ≥ 2, there exist two continuous subharmonic functions u, ũ ≥ 0, with u, ũ both harmonic in their respective positivity sets and u · ũ = 0. Furthermore, Φ(0+, u, ũ) > 0. However, ∂{u > 0} and ∂{ũ > 0} (which are given by a piecewise smooth, connected hypersurface when restricted to any annulus $B_1 \setminus B_r$) do not admit tangents (or approximate tangents) at the origin, nor do there exist numbers α, β > 0 and a change of coordinates such that $u + \tilde{u} = \alpha x_1^+ + \beta x_1^- + o(|x|)$. In the above, the boundary of a measurable set A is said to admit a tangent (plane) at the origin if there is a unit vector ν such that the corresponding convergence conditions hold. It seems that the notion of approximate tangent above (or another similar measure-theoretic notion) is the more meaningful one in this context. Indeed, there are simpler constructions which produce functions u, ũ as in Claim 1.2 for which ∂{u > 0} does not admit a tangent at 0 but does admit an approximate tangent.
If one only considers functions u for which ∂{u > 0} is, say, given by a 1-Lipschitz graph over some plane π r on every annulus B 2r \B r , these two notions of tangent plane are equivalent. This property holds for the example constructed in the proof of Theorem 1.3.
The functions u,ũ we construct in proving the theorem have ∂{u > 0} a spiral: while u +ũ looks more and more like α(x · ν) + + β(x · ν) − on progressively smaller balls B r , the choice of ν can not be made uniformly in r, and the optimal ν rotates (slowly) as r decreases. Some free boundary problems are known to exhibit spiraling patterns for the interface (see [6,12] for examples, although the spirals produced there have different properties from ours). We also remark that an example of nonunique tangents for an energy minimization problem is given in [13].
1.1. Further Questions. Before turning to the proof of Theorem 1.3 we would like to offer some discussion of the further questions raised by this theorem and speculate on what "optimal" results, both positive and negative, might look like.
A standard argument with the ACF formula shows that if u, ũ are as in Claim 1.2, then for every sequence r_k → 0, there is a subsequence r_{k_j} such that
$$ \frac{(u + \tilde u)(r_{k_j} x)}{r_{k_j}} \to \alpha (x \cdot \nu)^+ + \beta (x \cdot \nu)^- \quad \text{locally uniformly}, \qquad (1.2) $$
where α, β, ν depend on the subsequence. Let us refer to any such subsequence r_{k_j} as a blow-up subsequence. We are interested in whether or not these parameters may be chosen independent of the blow-up subsequence.
In the example constructed below, the functions u and ũ are rotations of one another around the origin; in particular, this means that for all of the blow-up subsequences, α = β = c Φ(0+, u, ũ)^{1/4} are the same, while ν depends on the particular subsequence.
This example gives one way for (1.2) to fail. There could, in principle, be another way: say that ∂{u > 0} = ∂{ũ > 0} is given by a C¹ hypersurface (including up to the origin, so that it admits a tangent there), and that u, ũ are as in Claim 1.2. Can one find a pair u, ũ like this for which (1.2) fails? This would mean that between the various blow-up subsequences, ν would remain fixed, while α and β would vary. Note that if the hypersurface is more regular near the origin (in particular, if it is a Lyapunov-Dini surface), then this is impossible.
Another set of questions is related to optimality in Theorem 1.3. To clarify the discussion, define, for each r, ν(r) to be the best approximating normal vector at scale r. It may be verified that ν(r) is uniquely determined from this relation and depends in a Lipschitz manner on r. The property of having an approximate tangent, then, can be reformulated as saying that ν(r) has a limit as r → 0, while Theorem 1.3 gives an example where no such limit exists. What restrictions on the change in ν(r), one may ask then, are implied by the conditions in Claim 1.2? We conjecture that under those conditions, one must have the bound (1.5); on the other hand, for any ν₀(r) satisfying (1.4) and (1.5), there is a pair of functions u, ũ as in Claim 1.2.
To explain the source of (1.5), let us point out that in Section 2, we construct a pair of functions u, ũ whose interface turns by an angle θ and for which Φ(0+, u, ũ)/Φ(∞, u, ũ) ≥ 1 − θ² (and this dependence on θ seems to be sharp up to constants). By gluing truncated and scaled versions of this construction, one might hope to attain functions u, ũ satisfying the hypotheses of Claim 1.2 with a prescribed rate of rotation for ν(r); this restriction is equivalent to (1.5) for such a construction. In the actual proof of Theorem 1.3, we are unable to perform the truncation and gluing steps uniformly in θ, and so do not obtain such a quantitative result.
Finally, over the past two decades enormous progress has been made in understanding the relationship between the behavior of positive harmonic functions with zero Dirichlet condition near the boundaries of domains and the geometric measure-theoretic properties of the boundary (we do not attempt to provide a summary here, but refer the reader to the introduction and references in [5]). We suggest that the questions above can be thought of as a continuation, or extension, of this program, with the goal of relating (finer) geometric properties of a boundary to the simultaneous behavior of positive harmonic functions on a domain and its complement, using the ACF formula as a crucial tool.
1.2. Outline of Proof. To prove Theorem 1.3 we will construct a subharmonic function u ≥ 0 in R² such that u is harmonic in its positivity set and u(0) = 0. Furthermore, ∂{u > 0} will be invariant under a rotation of π. Consequently, if ũ(z) := u(−z), then the pair u, ũ will satisfy the assumptions of the ACF formula in Proposition 1.1. Before explaining the construction of u and the outline of the paper, we first give two definitions.
We define the class of functions K as follows: u ∈ K if u is a nonnegative continuous function on B_1 with u(0) = 0 which is subharmonic, harmonic in its positivity set {u > 0}, and satisfies u(z) · u(−z) = 0. By working in the class K, we may consider a one-sided rescaled version of the ACF formula. If u ∈ K, then
$$ J(r, u) := \left( \frac{2}{\pi r^2} \int_{B_r} |\nabla u|^2 \, dx \right)^{1/2} $$
is monotonically nondecreasing in r, since J(r, u)⁴ = (2/π)² Φ(r, u(z), u(−z)). Furthermore, if u is C¹ up to ∂{u > 0} near the origin, then J(0+, u) = |∇u(0)|.
In order to prove Theorem 1.3 we first show in Section 2, working on unbounded domains, that it is possible to turn ∂{u > 0} so that its asymptotic behavior at infinity differs from its tangent near the origin by an angle of θ, while arranging that J(0+, u) ≥ (1 − θ²) J(∞, u) (for small θ). In Section 3 we transfer this result to a bounded domain. In Section 4 we inductively construct a sequence of functions in K and take a limit to obtain the u in Theorem 1.3. Heuristically, the value of J(0+, u) should be ∏ᵢ (1 − θᵢ²), and this is strictly positive if, say, θᵢ = i⁻¹. On successively smaller balls, the interface {u = 0} will have turned a total amount of ∑ᵢ θᵢ = ∑ᵢ i⁻¹ = ∞, which implies that the interface spirals towards the origin and therefore lacks a unique tangent there. We make these heuristic ideas rigorous, and then we show how the pair u, ũ also provides a counterexample in higher dimensions.
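To spell out the elementary fact behind this heuristic (stated here for the reader's convenience): for 0 < θᵢ < 1,
$$ \prod_{i=1}^{\infty} (1 - \theta_i^2) > 0 \iff \sum_{i=1}^{\infty} \theta_i^2 < \infty, $$
and for θᵢ = i⁻¹ one has ∑ θᵢ² = π²/6 < ∞ while ∑ θᵢ = ∞, so the limiting slope stays positive even though the total rotation of the interface diverges.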
2. Conformal Mapping
We utilize the Schwarz-Christoffel formula to obtain a conformal mapping. For a fixed angle 0 < θ < π/2, we map the upper half plane to the domain Ω_θ (see Figure 1) by the conformal mapping f_θ, whose derivative is given in (2.1).
Figure 1. Conformal Map
We translate f_θ by a constant z_0, so that the midpoint of the line segment in the image is the origin 0 + 0i. We define t_θ ∈ (−1, 1) ⊂ R by t_θ = f_θ⁻¹(0 + 0i). Clearly, t_θ → 0 as θ → 0. What is of importance is how quickly t_θ → 0. In order to determine this decay rate we use the following result.
Lemma 2.1. Let f, g > 0 be integrable functions on an interval I = (a, b). If f/g is an increasing function, then for any c ∈ (a, b),
$$ \frac{\int_a^c f \, dx}{\int_a^c g \, dx} \le \frac{\int_a^b f \, dx}{\int_a^b g \, dx}. $$
By the same argument, we have the reverse inequality for the ratio over (c, b), and so the conclusion follows.
We will also need the following lemma (Lemma 2.2).
Proof. We have a chain of estimates in which the second inequality is due to Lemma 2.1. Since x_1 is chosen so that (2.2) holds, the denominator in the above inequality is the same, and the desired bound follows.
The above two lemmas allow us to prove Lemma 2.3.
Lemma 2.3. Let f_θ be defined as in (2.1) and let t_θ = f_θ⁻¹(0 + 0i). Then there exists θ_0 > 0 such that 0 < t_θ ≤ 2θ/π as long as 0 < θ ≤ θ_0.
Proof. To determine the midpoint of a line segment it suffices to find the x-value. Consequently, we focus on the real part of the mapping f_θ for t ∈ (−1, 1). Thus, t_θ is the unique value in (−1, 1) at which the real part vanishes. Then t_θ ≤ ξ_θ, where ξ_θ is the unique value satisfying the corresponding relation with integration over (−1, 0). We also note that the relevant integrand ratio is an increasing function on (0, 1), so we may apply Lemma 2.2 and conclude that t_θ ≤ ξ_θ ≤ τ_θ. The integrals defining τ_θ have elementary antiderivatives. In order to show that τ_θ ≤ 2θ/π for small θ, we choose 2θ/π as the point of integration. By taking explicit antiderivatives and simplifying, it suffices to show (2.3) for small enough θ. The expression on the left of (2.3) evaluates to zero as θ → 0. If we take the derivative of the left side of (2.3) with respect to θ and let θ → 0, we obtain (1 + ln(1/2))/π > 0. Then (2.3) is true as long as 0 < θ ≤ θ_0 for θ_0 > 0 chosen small enough. Hence we conclude that t_θ ≤ τ_θ ≤ 2θ/π for any 0 < θ ≤ θ_0.
3. Bounded Domain
The aim of this section is to transfer the inequality in (2.4) to a harmonic function on a bounded domain. We approximate Ω_θ with domains Ω_{θ,M}; see Figure 2. The approximating conformal map f_{θ,M} has a Schwarz-Christoffel-type derivative with parameters z_1, z_2 ∈ R and 1 < z_1 < z_2. We again translate f_{θ,M} by a constant so that the domain is centered on the origin as in Figure 2. The points z_1, z_2 are chosen so that f_{θ,M}(z_2) = M + 0i. We point out that |f_{θ,M}′| → 1 as |z| → ∞. We define φ_{θ,M}(u, v) = y⁺, where f_{θ,M}(x + iy) = u + iv. For any θ ≤ θ_0, we fix an M that satisfies Lemma 3.1. We now transfer the decrease in energy to a finite domain.
Lemma 3.3. Let θ and φ_{θ,M} be as in Lemma 3.1, and let Ω_{θ,M} be defined as before. If we define w_R to be the solution of the associated boundary value problem on B_R ∩ Ω_{θ,M}, then w_R → φ_{θ,M} locally uniformly in Ω_{θ,M} and in C¹ in B_ρ ∩ Ω_{θ,M} for small enough ρ.
Proof. Using the rescaling φ_R(z) := φ_{θ,M}(Rz)/R, we have that φ_R → y⁺ in C¹ on (∂B_1)⁺. Thus, for any η > 0, there exists R_0 > 0 such that if R ≥ R_0, then the rescaled boundary data are η-close to y⁺. Rescaling back and applying the maximum principle, we obtain that, as R → ∞, w_R → w_∞ locally uniformly in Ω_{θ,M} and in C¹ in a neighborhood of the origin. Furthermore, we have (1 − η)w_∞ ≤ φ_{θ,M} ≤ (1 + η)w_∞. Since η can be taken to be arbitrarily small, we conclude that w_∞ = φ_{θ,M}.
We end this section by defining a θ-turn. If u ∈ K and for some ρ > 0 the set ∂{u > 0} ∩ B_ρ is a line segment with inward unit normal ν, then a θ-turn in B_ρ gives a new function v satisfying properties (i)–(iv). The idea of property (iv) is to shrink φ_{θ,M} on B_{2M} to B_ρ and give v the same positivity set; see Figure 3 for the case ν = i.
4. Construction of counterexample
As before we let θ_0 be as in Lemma 2.4. The next lemma shows how to apply a θ-turn to a function that is almost linear at the origin.
Lemma 4.1. If θ ≤ θ_0, then there exist r, ρ with s > r > ρ > 0 and a θ-turn in B_ρ such that the redefined function v satisfies properties (A)–(D).
Proof. We choose r < s small enough so that
$$ \| u(rx)/r - J(0+, u)\, y^+ \|_{C^1((\partial B_1)^+)} < \delta, \qquad (4.1) $$
and so that |u| < 2J(1, u) r. We now apply a θ-turn in B_ρ with 0 < ρ < r. As ρ → 0, we have that v → u uniformly away from the origin, so by choosing ρ small enough, v satisfies (B). We now let η > 0 be small, use a cut-off function, and obtain in the standard way a Caccioppoli inequality. Then as ρ → 0, we have that v → u in H¹(B_1 \ B_η) for any η > 0. We now use the monotonicity of J(r, v) to prove that v → u in H¹(B_1) as ρ → 0. In particular, ‖v‖_{H¹(B_1)} is bounded as ρ → 0, so that v ⇀ u weakly in H¹(B_1) as ρ → 0. Since η can be chosen arbitrarily small, we then have that ∇v → ∇u in L²(B_1), and thus conclude that v → u in H¹(B_1) as ρ → 0. Consequently, we may choose ρ even smaller so that properties (A) and (C) hold.
Proof of Theorem 1.3 in dimension n = 2. We now use Lemma 4.1 to construct a sequence u_k ∈ K with u_k → u. The pair u and ũ(z) := u(−z) will be a counterexample to Claim 1.2. The sequence u_k is constructed inductively as follows. We choose θ_k = 1/(k + N_0), where N_0 ∈ N is chosen large enough so that θ_k ≤ θ_0. We then let u_0 = y⁺ on B_1. By Lemma 4.1 there exist ρ_1 < r_1 such that if a θ_1-turn is applied in B_{ρ_1} to obtain u_1, then u_1 will satisfy properties (A)–(D). We now suppose that u_k has been constructed for some k ≥ 1. By rotating u_k it will satisfy assumption (1) of Lemma 4.1. Assumption (2) will also be satisfied because u_k satisfies (A) for r = r_k. By Lemma 4.1 there exist ρ_{k+1} < r_{k+1} with r_{k+1} < ρ_k such that applying a θ_{k+1}-turn to u_k yields a function u_{k+1} with the corresponding properties. From the same arguments involving the Caccioppoli inequality as in the proof of Lemma 4.1, there exists u such that u_k → u in H¹(B_1) and locally uniformly away from the origin. Then u is continuous away from the origin. From (i) we obtain that |u| ≤ Cr on B_r for 0 < r ≤ 1, so that u is continuous up to the origin, and u(0) = 0. Moreover, ∑_{k=1}^{∞} θ_k² = ∑_{k=1}^{∞} (k + N_0)^{−2} < ∞.
We now show that the pair u and ũ is also a counterexample in higher dimensions.
Proof of Theorem 1.3 in dimension n > 2. For u as in the proof for dimension 2, we let w_n(x_1, x_2, . . . , x_n) = u(x_1, x_2), and set w̃_n(x) := w_n(−x). Since in dimension n = 2 we have (1/r²) ∫_{B_r} |∇u|² ≥ C > 0, it follows that the corresponding weighted integrals are bounded below in dimension n, so that Φ(r, w_n, w̃_n) > 0. We have already shown that u + ũ cannot satisfy the conclusions in Claim 1.2; consequently, w_n + w̃_n also does not satisfy those conclusions. | 2018-09-13T20:43:42.000Z | 2018-01-05T00:00:00.000 | {
"year": 2018,
"sha1": "57d49db892e06daaecaf0777e1b9fcdcfd2c138c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.01940",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "57d49db892e06daaecaf0777e1b9fcdcfd2c138c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
235495640 | pes2o/s2orc | v3-fos-license | Geometric Deep Learning for the Assessment of Thrombosis Risk in the Left Atrial Appendage
The assessment of left atrial appendage (LAA) thrombogenesis has experienced major advances with the adoption of patient-specific computational fluid dynamics (CFD) simulations. Nonetheless, due to the vast computational resources and long execution times required by fluid dynamics solvers, there is an ever-growing body of work aiming to develop surrogate models of fluid flow simulations based on neural networks. The present study builds on this foundation by developing a deep learning (DL) framework capable of predicting the endothelial cell activation potential (ECAP), linked to the risk of thrombosis, solely from the patient-specific LAA geometry. To this end, we leveraged recent advancements in Geometric DL, which seamlessly extend the unparalleled potential of convolutional neural networks (CNN), to non-Euclidean data such as meshes. The model was trained with a dataset combining 202 synthetic and 54 real LAA, predicting the ECAP distributions instantaneously, with an average mean absolute error of 0.563. Moreover, the resulting framework manages to predict the anatomical features related to higher ECAP values even when trained exclusively on synthetic cases.
Introduction
Atrial fibrillation (AF) is the most common clinically significant arrhythmia, which can lead to irregular contraction and wall rigidity of the left atrium (LA). This often results in atrial blood stagnation promoting the formation of thrombi within the LA, thereby, increasing the risk of cerebrovascular accidents [14]. In fact, non-valvular AF is responsible for 15 to 20% of all cardioembolic ischemic strokes, 99% of which originate in the left atrial appendage (LAA) [3]. As a result, there have been several attempts at characterizing LA haemodynamics either through transesophageal echocardiography (TEE) or computational fluid dynamics (CFD). Yet, ultrasound imaging is quite ill-suited to characterize complex three-dimensional haemodynamics, while the latter suffers from tediously long computing times and demands huge computational resources [9].
In this regard, deep learning (DL) has made its way into fluid flow modelling, resulting in highly accurate surrogate models that can be evaluated with significantly less computational resources [7]. That being said, many of the most widespread DL models are not well adapted to non-Euclidean domains, such as graphs and meshes, in which medical data is often best represented [5]. As a response, a set of methods have emerged under the umbrella term Geometric DL, that have succeeded in generalizing models such as convolutional neural networks (CNN) to non-Euclidean data [2].
Hence, seeking to improve upon prior studies [11], we leveraged Geometric DL to develop a CFD surrogate capable of learning the complex relationship between the heterogeneous LAA geometry and the endothelial cell activation potential, a parameter linked to increased thrombosis risk. More specifically, we employed a spline-based spatial convolution operator, which enables extracting features from the underlying anatomy without the need for mesh correspondence [5]; i.e., it extends the properties that have made classical CNNs so successful (local connectivity, weight sharing and shift invariance) without the need to convert LAA meshes to a Euclidean representation. We show that our model not only is accurate, but also generalizes well from synthetic to real patient data. The pipeline of the study involved, at first, the generation of the ground truth data through in-silico CFD simulations of the entire LA, requiring prior assembly of the geometries as shown in Figure 1. Subsequently, the meshes derived from the simulations were converted to graph format, suitable for the training of the geometric neural network. Finally, the model was trained, seeking to learn the complex non-linear relationship between the LAA anatomy and the ECAP maps.
Data
The employed dataset consisted of 256 LAA, combining 202 synthetic and 54 real patient geometries. The synthetic geometries and their corresponding simulations were borrowed from a previous study [11]. More specifically, the synthetic dataset stems from a statistical shape model (SSM) based on 103 patient LAA surfaces [13]. All cases were reconstructed from computed tomography (CT) images provided by the Department of Radiology of Rigshospitalet, Copenhagen.
For the time being, we have considered only the geometry of the LAA, as incorporating the highly heterogeneous LA anatomy would qualitatively increase the inter-subject variability of the hemodynamic parameters. Thus, prior to the simulations, all LAA were attached to an oval approximation of the LA [6] to ensure that ECAP variability depended solely on individual anatomical differences of the appendage. Finally, since the employed framework does not require any sort of mesh correspondence, all the synthetic data were remeshed to ensure that the network could only learn from geometric features.
2.2 In-silico thrombosis risk index - ECAP
The endothelial cell activation potential (ECAP), defined by Di Achille et al. [4], was the parameter chosen to evaluate the risk of thrombosis in the LAA. Since the pathophysiology of thromboembolism in AF is based upon the formation of mural thrombi, the calculation of the ECAP relies on the haemodynamics in the proximity of the vessel wall; more precisely, it is computed as the ratio between the oscillatory shear index (OSI) and the time-averaged wall shear stress (TAWSS).
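Written out, with the quantities just defined, the index reads
$$ \mathrm{ECAP} = \frac{\mathrm{OSI}}{\mathrm{TAWSS}}, $$
and since the OSI is dimensionless while the TAWSS carries units of pressure, ECAP has units of inverse pressure (e.g. Pa⁻¹).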
High ECAP values result from low TAWSS and high OSI values, indicating the presence of low velocities and high flow complexity, which is associated with endothelial susceptibility and risk of thrombus formation. The ground-truth ECAP distributions were obtained through CFD simulations performed in Ansys Fluent 19.2 and automated through MATLAB R2018b (Academic license). The simulation setup followed the preceding study [11].
Geometric deep learning framework
The model was constructed by leveraging PyTorch Geometric (PyG), a Geometric DL extension of PyTorch. PyG offers a broad set of convolution and pooling operations that extend the capabilities of traditional CNNs to irregularly structured data such as graphs and manifolds. With this in mind, the mesh dataset resulting from the simulations had to be converted into individual graphs. Together with PyVista, we converted each mesh to a graph G = (V, E), with V = {1, ..., N} being the set of nodes and E the set of edges of the triangular faces. For each vertex, the curvature and surface normal vectors were computed, totaling 4 input feature channels. Among all the available graph CNN layers, we opted for SplineCNN [5] since, being a spatial method, it offers several advantages when dealing with meshes. In particular, it avoids the need to establish mesh correspondence. Additionally, defining the spatial relations between vertex features becomes trivial by employing pseudo-coordinates. In our use case, pseudo-coordinates were obtained by computing the relative distance in Cartesian coordinates between the vertices of each edge. During the training process, these edge attributes define the way in which the input features are aggregated in the neighborhood of a given node. Lastly, we also tested the residual and dense layers for graph neural networks developed by Li et al. [8], aiming to reduce vanishing gradients in deep layers.
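As an illustration of this conversion step, the following is a minimal sketch (not the authors' code; the file name is hypothetical, the per-vertex features follow the description above, and the PyVista calls assume an all-triangle surface mesh):

```python
import numpy as np
import torch
import pyvista as pv
from torch_geometric.data import Data

def mesh_to_graph(path: str) -> Data:
    mesh = pv.read(path)  # hypothetical LAA surface mesh file
    # Node features: scalar curvature + 3-component surface normals (4 channels).
    mesh = mesh.compute_normals(point_normals=True, cell_normals=False)
    curv = mesh.curvature(curv_type="mean")                   # shape (N,)
    normals = np.asarray(mesh.point_data["Normals"])          # shape (N, 3)
    x = torch.tensor(np.column_stack([curv, normals]), dtype=torch.float)

    # Undirected edges from the triangular faces ([3, i, j, k, 3, ...] layout).
    tri = mesh.faces.reshape(-1, 4)[:, 1:]
    edges = np.concatenate([tri[:, [0, 1]], tri[:, [1, 2]], tri[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    edges = np.concatenate([edges, edges[:, ::-1]])           # both directions
    edge_index = torch.tensor(edges.T.copy(), dtype=torch.long)

    # Pseudo-coordinates: relative Cartesian offsets between edge endpoints,
    # rescaled to [0, 1] as SplineConv expects.
    pos = torch.tensor(mesh.points, dtype=torch.float)
    rel = pos[edge_index[1]] - pos[edge_index[0]]
    rel = (rel - rel.min()) / (rel.max() - rel.min())
    return Data(x=x, edge_index=edge_index, edge_attr=rel, pos=pos)
```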
Experimental setup and hyperparameter tuning
The schematic representation of the model architecture is shown in Figure 2. A thorough grid search was carried out to fine-tune the model by iteratively swapping several hyperparameters, sequentially increasing model depth from 5 to 25 layers and including dense blocks of different sizes with a fixed random seed. The highest accuracy was obtained when employing 20 consecutive SplineConv hidden layers with 32 feature channels per layer. In addition, the ideal number of transition layers from input/output to the hidden layers was also tested. The inclusion of one transition layer at both ends, as observed in Figure 2, yielded the best performance. Moreover, several configurations of residual connections were evaluated, with the best results attained using dense blocks of depth = 4. Various pooling and U-Net-like models were tested, aiming to improve multiscale feature extraction, but so far to no avail.
Regarding the parameters of the SplineConv layer, a B-spline basis of degree 1 and a kernel size of k = 5 were chosen, following the suggestions of the authors [5]. Concerning general hyperparameters, the exponential linear unit (ELU) provided the best results among all activation functions, always coupled with batch normalization and a dropout of 0.1. In addition, the training loop was carried out over 300 epochs with a batch size of 16 and a learning rate of 0.001. Adam was employed as the optimizer, with a weight decay of 0.05 when training on synthetic data only. Finally, the L1 loss was chosen for regression.
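The description above pins down most of the architecture; a hedged sketch of what such a network could look like in PyTorch Geometric follows (layer count, channel width, ELU, batch normalization, dropout, loss and optimizer values are taken from the text, while the exact transition-layer and dense-block wiring of the original is simplified to plain residual connections here):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SplineConv

class ECAPNet(torch.nn.Module):
    """Node-level ECAP regression on LAA surface graphs (sketch)."""
    def __init__(self, in_ch=4, hidden=32, depth=20, dim=3, k=5):
        super().__init__()
        self.inp = SplineConv(in_ch, hidden, dim=dim, kernel_size=k, degree=1)
        self.convs = torch.nn.ModuleList(
            SplineConv(hidden, hidden, dim=dim, kernel_size=k, degree=1)
            for _ in range(depth))
        self.norms = torch.nn.ModuleList(
            torch.nn.BatchNorm1d(hidden) for _ in range(depth))
        self.out = SplineConv(hidden, 1, dim=dim, kernel_size=k, degree=1)

    def forward(self, data):
        x, ei, ea = data.x, data.edge_index, data.edge_attr
        x = F.elu(self.inp(x, ei, ea))
        for conv, bn in zip(self.convs, self.norms):
            h = F.dropout(F.elu(bn(conv(x, ei, ea))), p=0.1,
                          training=self.training)
            x = x + h  # skip connection standing in for the dense blocks
        return self.out(x, ei, ea).squeeze(-1)

model = ECAPNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             weight_decay=0.05)  # decay for synthetic-only runs
loss_fn = torch.nn.L1Loss()  # L1 regression loss, as stated in the text
```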
Given the limited availability of comparable models for similar tasks, the performance of the model was benchmarked against one of our earlier studies [11]. As opposed to the novel graph-based network, that study relied on conventional fully connected layers (FCN) and therefore required thorough preprocessing of the input meshes. Two separate experiments were completed. In the first, a 10-fold cross-validation was performed with the whole dataset, meaning that the model was given both synthetic and patient data during training and testing. In the second, we trained the model solely on synthetic data and evaluated its accuracy on the real cases to test its generalization capabilities.
Results
The accuracy results with the final model are given in Table 1 in terms of the mean absolute error (MAE), which indicates that the geometric DL network significantly outperforms the conventional fully connected network in both tasks. Furthermore, a small batch of 5 testing geometries from the first experiment is shown in Figure 3. Cases in rows 1-3 are derived from the SSM model while the remaining two represent real patient cases. Additional test subjects are provided in Appendix A.1 and A.2. As only areas of high ECAP values are said to be related to increased risk of thrombosis, a binary classification was performed with a positive condition of ECAP > 4, the 90th percentile of the distribution. Once again, the Geometric DL model outperformed its counterpart with a true positive rate of 73.1% against 67.5% for the FCN model.
Fig. 3. From left to right: in-silico index (endothelial cell activation potential, ECAP) ground-truth from fluid simulations; prediction obtained with the geometric deep learning model (Geo); and prediction from the fully connected network (FCN) [11].
The ECAP values are colored from low values (0, blue) to higher than 6 (red areas), the latter indicating a higher risk of thrombus formation. The FCN model struggles to identify the highlighted lobe in the circle.
Discussion
Careful inspection of the results in Table 1 indicates not only that the geometric DL model outperforms the conventional network but also that it has a higher generalisation potential. While the accuracy of the graph-based network decreases by just 9% when training solely on the synthetic data, the accuracy of the fully connected network falls by almost 30%. The drop in accuracy was to be expected, as the real geometries present far higher heterogeneity than their synthetic counterparts. In this sense, the inclusion of weight decay turned out to be crucial in avoiding over-fitting the model to the synthetic cases.
Our hypothesis, although difficult to ascertain due to the "black box" nature of neural networks, is that the graph CNN model is probably better able to exploit the anatomical features in the vicinity of each node and, consequently, is capable of predicting higher ECAP values in areas with fluctuating curvature and normal vectors, which reflect the lobes and cavities of the LAA where blood tends to stagnate. Therefore, even though the network was only provided with synthetic geometries during the training process, when tested on real cases it is able to recognise anatomical features such as bulges and gaps more proficiently, ultimately leading to improved accuracy. This is best exemplified in case 4 shown in Figure 3: the FCN network completely fails to recognise the bulge (encircled in the figure), which lies in a region where the synthetic population rarely shows high ECAP values, while the graph-based network shows moderate success.
In spite of the results, this study has several limitations that must be addressed before it can be of any use in a clinical setting. First, the choice of ECAP as an index of thrombosis risk may be debatable, as its validity in the LAA has not yet been demonstrated in any clinical study. Nonetheless, although the ECAP index was originally developed in carotid and abdominal aorta fluid models [4], the underlying mechanisms of thrombus formation are analogous to those in the LAA, which typically involve some degree of blood stagnation or re-circulation at low velocities that the ECAP should be able to reflect. In fact, it has already seen some use in clinical studies exploring device-related thrombus formation in LAA occlusion surgeries [10,1].
Secondly, the hemodynamic variability arising from the heterogeneous anatomy of the LA has been completely neglected for the sake of simplicity. Nonetheless, since the chosen deep learning framework does not involve mesh correspondence, it should be fairly trivial to include the complete LA anatomy. Moreover, the network should be capable of learning the ECAP fluctuations caused by factors such as pulmonary vein orientation [6].
Lastly, at the moment, the model is completely agnostic to the flow dynamics and boundary conditions that play a key role in the process of thrombogenesis. To address this challenge, we intend to capitalise on the rapid advances in the field of physics-informed neural networks, with examples such as the study by Pfaff et al. [12], enabling the full exploitation of 4D flow MRI and CFD data that may pave the way towards the prediction of the velocity vector field in the LA.
Conclusion
In the present study we have successfully leveraged recent advances in graph neural networks to instantaneously predict the ECAP mapping in the LAA, solely from its anatomical mesh, effectively skipping the need to run CFD simulations. Furthermore, we have significantly improved the results from our previous model with a framework that no longer requires mesh correspondence. These results could lay the foundation for real-time monitoring of LAA thrombosis risk in the future and open exciting avenues for future research in cardiological mesh data.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
A.2 Synthetic LAA results
Fig. 5. Predictions over synthetic LAA, belonging to the same cross-validation experiment as Figure 3. From left to right: in-silico index (endothelial cell activation potential, ECAP) ground-truth from fluid simulations; prediction obtained with the geometric deep learning model (Geo); and prediction from the fully connected network (FCN) [11]. The ECAP values are colored from low values (0, blue) to higher than 6 (red areas), the latter indicating a higher risk of thrombus formation. | 2021-06-22T13:24:36.031Z | 2022-10-19T00:00:00.000 | {
"year": 2022,
"sha1": "da0b99cff53907ee4692f048297924d78e3f1872",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "73934017ddb818bb023119150f42a3c551d2d2bf",
"s2fieldsofstudy": [
"Medicine",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
196616593 | pes2o/s2orc | v3-fos-license | Biohydrogen production from alkaline wastewater: The stoichiometric reactions, modeling, and electron equivalent
Graphical abstract
Introduction
In biological processes, H2 is produced via photo- and dark fermentation. In dark fermentation, the energy carrier is produced by anaerobic acidogenic bacteria during organic matter consumption [1]. When carbohydrates are used as the original electron source, the theoretical hydrogen yield is 4 or 2 mol H2 per mol of glucose via the acetate or butyrate pathway, respectively. When propionate is produced as a soluble fermentation end product, H2 is consumed through the conversion of acetate and H2 to propionate. In addition, ethanol and lactic acid are produced via a pathway without any H2 production [2].
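For reference, the classical dark-fermentation stoichiometries behind these theoretical yields are the standard acetate and butyrate reactions (textbook chemistry, not specific to this study):
$$ \mathrm{C_6H_{12}O_6} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{CH_3COOH} + 2\,\mathrm{CO_2} + 4\,\mathrm{H_2} $$
$$ \mathrm{C_6H_{12}O_6} \rightarrow \mathrm{CH_3CH_2CH_2COOH} + 2\,\mathrm{CO_2} + 2\,\mathrm{H_2} $$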
The application of wastewaters as an organic matter source shows great potential for biohydrogen production via biological processes [3]. Different studies have shown that the biohydrogen production rate is highly related to the wastewater characteristics [4][5][6][7][8].
A key wastewater characteristic influencing hydrogen production is the alkalinity content [9]. The effect of alkalinity on the methanogenic process is well known, and it is established that not only the alkalinity concentration but also the ratio of alkalinity to COD must be optimized. The reported ratio ranged from 0.8 to 1.6 g CaCO3/g COD, with a lower limit of 0.3 g CaCO3/g COD. The alkalinity required for biohydrogen processes is lower than for the methanogenic process, but has not been exactly established [10,11].
The initial alkalinity in the influent substrate has a strong effect on hydrogen-producing bacteria by affecting the major metabolites. A high alkaline condition is essential for good performance of the enzymatic system of hydrogenase-carrying bacteria. However, the alkaline content must be optimized because of the negative effect of osmotic pressure on the fermentation process: excessive alkalinity concentrations can lead to poisoning of hydrogen-producing bacteria and reduce the amount of H2 produced [10,11].
To date, almost no studies have examined the effect of alkalinity on fermentative biohydrogen production on the basis of stoichiometric reactions. In this study, we examined the effect of alkalinity on biohydrogen production during anaerobic sequencing batch reactor (ASBR) operation. In addition, the electron equivalent balances were determined and the predictive capability of the Gompertz model was examined. The details of the methods are described step by step below.
Experimental design
ASBR set-up. The cylindrical Plexiglas ASBR used in this study has been described elsewhere [12].
The anaerobic sludge was collected from the South wastewater treatment plant (Tehran, Iran) and used as the parent inoculum. As in our previous paper [8], the anaerobic sludge was sieved to eliminate debris and heat-pretreated at 95 °C for 45 min to inactivate methanogenic bacteria. When the sludge had cooled to ambient temperature, it was inoculated into the ASBR and the first cycle of operation was started by injection of synthetic wastewater.
Synthetic wastewater. The original electron donor in this study was glucose. The ASBR was operated at an organic loading rate (OLR) of 0.5 g COD/L·d and an influent chemical oxygen demand (COD) of 4.5 g/L. The essential macro- and micro-elements for microbial growth were also added to the synthetic substrate [13]. NaHCO3 was used as the alkalinity source at 1125 mg/L in the first stage and was gradually increased to 2225, 3750, and 4500 mg/L over the 120-day operation of the ASBR. These influent alkalinities correspond to 670, 1325, 2232, and 2678 mg/L as CaCO3. Each stage was continued until steady-state conditions were reached.
Volatile fatty acid (VFA) and alcohol analysis. The ASBR effluent was filtered through filter paper with a pore size of 0.45 μm (Whatman No. 42) and stored in glass containers in the freezer until analysis. The VFAs (acetic, propionic, butyric and valeric acid) were extracted and analyzed via a liquid-liquid extraction method and gas chromatography with flame ionization detection (GC-FID), based on Manni et al. [14], as follows. We added 2 mL of diethyl ether to each 2 mL thawed sample and shook for 30 s. The upper phase was transferred to another glass container containing 0.4 g anhydrous MgSO4 to adsorb any residual water in the extracted sample. After 10 min, the liquid was separated from the magnesium sulfate and transferred to another vial with a gas-tight cap. A GC syringe was used to inject 10 μL of the extracted sample into the GC-FID.
An Agilent 7890A GC with a Varian CP-Sil 5 CB column was used to determine the acid content of the extracts. The chromatographic program was as follows: helium at a flow rate of 1 mL/min (19.086 cm/s) was used as the carrier gas; the oven temperature was 70 °C (3 min), with a first ramp of 10 °C/min to 130 °C (0 min), a second ramp of 5 °C/min to 180 °C (5 min), and a post-run at 250 °C (1 min). Nitrogen was used as the makeup gas at a flow rate of 30 mL/min.
The extraction and quantification of solvents (methanol, ethanol and acetone) was done by pouring a 2 mL sample into a standard vial (10 mL) containing 1 g of NaCl, 70 μL of 1 g/L isobutanol solution and 200 μL of 2 M H2SO4 solution, and analyzed with a method derived from Adorno et al. [15]. The vials were incubated for 25 min at 100 °C (5 s mixing and 2 s idle). The chromatographic program was as follows: helium at a flow rate of 1.5 mL/min (26.686 cm/s) was used as the carrier gas; the oven temperature was 35 °C (0 min), with ramp 1 of 2 °C/min to 38 °C (0 min), ramp 2 of 10 °C/min to 75 °C (0 min), ramp 3 of 35 °C/min to 120 °C (1 min), ramp 4 of 10 °C/min to 170 °C (1 min), and a post-run at 250 °C (1 min). The temperature of the split/splitless injector was 250 °C.
Monitoring. Influent and effluent COD, pH, alkalinity, and carbohydrate were routinely measured by the closed reflux colorimetric method, a precalibrated glass-body pH probe (SCHOTT CG 824), the titration method, and the phenol-sulfuric acid method, respectively [16,17]. The H2 percentage in the headspace of the ASBR was determined by a hydrogen analyzer (COSMOS XP-3140, Japan).
Method evaluation
Biohydrogen production. The variation of biohydrogen production with influent alkalinity concentration is shown in Fig. 1. As depicted in Fig. 1, hydrogen production first increased with increasing influent alkalinity and then declined sharply. At the studied influent alkalinities, the average hydrogen production volumes were 57.91, 220.02, 204.65, and 92.51 mL/d, respectively. The highest volume of biohydrogen was produced at an initial alkalinity of 1325 mg CaCO3/L. This observation may be due to the effect of the hydrogen ion on the ATP level. The H+ ion is essential for regulating the ATP level, but when its concentration exceeds the optimum, severe environmental conditions occur and, under the most severe conditions, most ATP is consumed for cell neutralization, so H2 production decreases [18]. Geng et al. reported that as the amount of KHCO3 increased from 0 to 40 mM, biogas production increased, and when the alkalinity reached 60 mM, biogas production decreased [7]. Choi and Ahn reported that anaerobic bacteria produced the highest volume of hydrogen when the pH and alkalinity were 8.95 and 3.18 g CaCO3/L, respectively. At alkalinities higher than 4 g CaCO3/L, lactate-type fermentation bacteria became active, resulting in increasing propionate, decreasing butyrate, and finally cessation of hydrogen production [10]. Luo et al. demonstrated that an influent alkalinity of 6 g NaHCO3/L and an HRT of 24 h were the optimal conditions for hydrogen production, at a rate of 3215 mL H2/L/d [9].
Mohammadi et al. studied the alkalinity range of 200-2000 mg CaCO3/L and found that the maximum hydrogen yield (124.5 mmol H2/g COD) was obtained at an alkalinity of 1100 mg CaCO3/L with an initial COD concentration of 3000 mg/L [19]. Thus, the highest hydrogen yield was observed at an ALK/COD ratio of 0.37, and when the influent COD was increased at constant alkalinity, the hydrogen yield decreased. The ALK/COD ratio required for methanogenic processes varies from 0.11 to 0.30 g CaCO3/g COD, but a lower ALK/COD ratio was reported for hydrogenogenic processes by Valdez-Vazquez et al. [11]. This difference may be due to the different working conditions: Valdez-Vazquez's report concerned solid-substrate fermentation, not wastewater. In our study, the ALK/COD ratios examined were 0.15, 0.3, 0.5 and 0.6 at a constant OLR (0.5 g COD/L·d), and the corresponding calculated hydrogen yields were 0.15, 0.6, 0.5 and 0.24 mmol H2/g CODin.
The maximum H2 was produced at an ALK/COD ratio of 0.3. The difference between the hydrogen yields calculated in this study and those of Mohammadi et al. may be related to the operating conditions, including influent COD and pH, batch test conditions, parent inoculum and feed substrate [19].
COD removal efficiency. The effect of initial alkalinity on COD removal during ASBR operation is presented in Fig. 2. The average COD removal at the studied initial alkalinities was 18.13, 14.72, 10.46, and 17.36%, respectively.
The average COD conversion to VFA was 51.42, 65.8, 53.9, and 66.6%, accounting for 62.8, 67.2, 70.2, and 81.3% of the effluent COD, respectively (Fig. 3). In the hydrogenogenic phase, a significant portion of the carbon remains in the effluent as VFAs released by acidogenic bacteria [18]. Lee et al. reported that the maximum specific hydrogen production rate and the maximum carbohydrate degradation efficiency were observed simultaneously, and showed that the highest specific production yield of VFAs was 0.7 g COD/g sucrose [20].
The glucose conversion observed by Shida et al. was greater than 70% for all studied HRTs and reached up to 94% as the HRT decreased from 8 to 2 h [6]. Van Ginkel et al. reported that the COD removal during biohydrogen production from four food-processing wastewaters was 5-11.1%, similar to our study [3]. In addition, Sridevi et al. reported a higher COD removal efficiency of around 87.35% in a hybrid upflow anaerobic sludge blanket reactor [21]. This difference is presumably related to the lower OLR studied here. The maximum and minimum COD removal efficiencies reported by Mohammadi et al. were 58.3 and 39.6% at 1.1 and 0.2 g CaCO3/L, respectively [19].
SEP & solvent production. The variation of soluble end products (SEPs) during glucose fermentation by the thermally pretreated anaerobic sludge was monitored, as depicted in Fig. 4. The dominant SEP at all studied initial alkalinity concentrations was acetic acid, which accounted for more than 50% of total VFA. At an initial alkalinity of 2225 mg/L, acetic acid made up 70% of the SEPs, a higher percentage than at the other studied initial alkalinities. At an initial alkalinity of 1125 mg/L, valeric acid accounted for 5.8% of the SEPs, but its concentration decreased and it was not detected at the other studied alkalinities.
The highest volume of H2 was achieved with the highest proportions of acetic and butyric acid and the lowest proportions of propionic and valeric acid. The acetate and butyrate pathways are used by acidogenic bacteria for H2 production, whereas propionate is produced when the bacteria use an H2-consuming pathway [18]. With the accumulation of propionate in the biological reactor, hydrogen production stops [10]. As illustrated in Fig. 4, lower hydrogen production was obtained when higher propionate was measured. During the 120-day operation of the ASBR, the dominant SEP was acetic acid, followed by butyric and/or propionic acid. As shown in Fig. 4, methanol, ethanol and acetone were not detected at any of the studied initial alkalinities. The high proportion of VFAs among the SEPs demonstrates that the fermentation process in the studied ASBR was acidogenic rather than solventogenic. This finding is in line with Shida et al., but differs from Lin et al. and Geng et al. [5][6][7]. As reported by Geng et al., when monocultures of C. thermocellum were used for hydrogen production, high concentrations of ethanol and acetate were detected, and when C. thermopalmarium was introduced as a co-culture, the butyrate concentration increased. This confirms that changing the fermentative bacterial species and pathways changes the composition and amount of SEPs [6,7]. When anaerobic bacteria use the solventogenic pathway, reduced end products such as alcohols are formed, coinciding with the consumption of additional free electrons and low H2 yields [6,22].
Electron equivalents. We derived the stoichiometric reactions by converting the amounts of the electron sinks into electron equivalents (e⁻ eq). The fractions of electron sinks at different influent alkalinities in the ASBR are summarized in Table 1. The highest and lowest e⁻ eq of H2 occurred at initial alkalinities of 1325 and 670 mg/L, respectively. The highest H2 fraction coincided with a high e⁻ eq of acetate. As the H2 fraction of the glucose electrons decreased, the e⁻ eq fractions of acetate and butyrate decreased and the e⁻ eq fraction of propionate increased. A previous study reported that the highest conversion efficiency of the initial electrons to H2 was 15%; in fact, a high portion of the initial carbon and energy remained in the effluent [18].
Stoichiometric reactions. The stoichiometric reactions for glucose fermentation in the ASBR were calculated according to our previously published paper [8]. The stoichiometric reactions for all studied initial alkalinities are summarized in Table 2. As shown in Table 2, without cell synthesis and production of SEPs, conversion of each mol of glucose theoretically produces 12 mol of H2. This theoretical value decreases to 4 and 2 mol H2/mol glucose for the acetate and butyrate fermentation pathways, respectively. In this study, when the alkalinity was 670, 1325, 2232, and 2678 mg/L as CaCO3, the H2 production per mol of influent glucose was 0.19, 0.67, 0.47, and 0.26 mol, respectively. In other words, the maximum hydrogen production achieved was only about one eighteenth of the theoretical H2. This reduction can be related to the amount and composition of the intermediate fermentation products [23].
At the end of each stage, H2 production was monitored at 1 h intervals, and the solver function of Excel was then used to optimize Hmax, Rmax, and λ (lag time), as shown in Table 3.
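The model equation is not written out above; Hmax, Rmax and λ are the parameters of the modified Gompertz equation commonly used for cumulative hydrogen production, H(t) = Hmax · exp{−exp[(Rmax · e/Hmax)(λ − t) + 1]}. A minimal sketch of the same fit in Python (with synthetic placeholder data in place of the measured series, and scipy instead of the Excel solver) could read:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, h_max, r_max, lam):
    """Modified Gompertz: cumulative H2 volume (mL) as a function of time (h)."""
    return h_max * np.exp(-np.exp(r_max * np.e / h_max * (lam - t) + 1.0))

# Hypothetical hourly measurements of cumulative hydrogen production (mL).
t = np.arange(0.0, 24.0, 1.0)
h_obs = gompertz(t, 220.0, 25.0, 0.7) + np.random.normal(0.0, 2.0, t.size)

# Least-squares estimates of Hmax, Rmax and the lag time lambda.
(h_max, r_max, lam), _ = curve_fit(gompertz, t, h_obs, p0=[h_obs.max(), 10.0, 1.0])
print(f"Hmax = {h_max:.1f} mL, Rmax = {r_max:.1f} mL/h, lag = {lam:.2f} h")
```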
The experimental and predicted H2 results are shown in Fig. 5. R² was higher than 0.99 for all studied alkalinities, confirming good agreement between the experimental and predicted values. The lag phase estimated in this study was significantly shorter than in previously published studies: the lag phases reported by Gadhe et al., Rasdi et al., and Zhang et al. were 4.08, >3, and >21 h, respectively [25][26][27], whereas, as shown in Table 3, we observed a shorter lag phase of around 0.7 h. A similarly short lag phase (0.5 h) was observed in a continuously stirred anaerobic bioreactor in the study by Xing et al. [28]. This difference is presumably related to the reactor type, influent COD and solution pH, substrate type and operating conditions. As the reactor was operated continuously for many days, some of the gas was trapped in the sludge and released over time, resulting in the shorter lag phase.
Conclusion
This study examined the effect of alkalinity on fermentative biohydrogen production on the basis of stoichiometric reactions. In addition, the electron equivalent balances were determined and the predictive capability of the Gompertz model was examined. The following results were obtained.
The average hydrogen production at the studied alkalinities of 670, 1325, 2232, and 2678 mg/L as CaCO3 was 57.91, 220.02, 204.65, and 92.51 mL/d, respectively. The highest hydrogen yield (0.6 mmol H2/g CODin) was achieved at an ALK/COD ratio of 0.3, and the ALK/COD ratios required for methanogenic and hydrogenogenic processes were similar. The highest H2 fraction coincided with a high e⁻ eq of acetate. The highest and lowest e⁻ eq of H2 occurred at initial alkalinities of 1325 and 670 mg/L, respectively. According to the stoichiometric reactions, the maximum hydrogen production was only about one eighteenth of the theoretical H2. The lag phase estimated in this study was significantly shorter than in previously published studies, owing to the reactor type, influent COD, solution pH, substrate type and operating conditions. | 2019-07-16T14:31:21.846Z | 2019-06-20T00:00:00.000 | {
"year": 2019,
"sha1": "5d94a0ba8f56a08b4d89626079058dabe9c3f0de",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.mex.2019.06.013",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "45e31e88defbd970cc1a262811a3383596e30107",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
158369617 | pes2o/s2orc | v3-fos-license | Who is responsible for climate change adaptation?
The mixture of socio-economic classes, ethnicities, and cultures that characterizes many cosmopolitan urban areas can contribute to unequally perceived impacts of extreme weather events and, hence, need and responsibility for adaptation. Awareness of these differences is, as we argue, decisive for effective adaptation. This study explores the relationship between person-specific, socio-economic characteristics that are frequently associated with social vulnerability and the perception of current affectedness by extreme weather events, future impact severity as well as adaptation need and adaptation responsibility. We use a large online questionnaire survey from New York City studying two extreme weather events, heatwaves and heavy rainstorms. We find that previous harm is the most important factor across all tested models for both weather events. However, previous harm and affectedness do not well explain the perception of future impacts, whereas they correspond to views about adaptation responsibility; respondents who felt significantly more affected in the past perceive the community to be in charge of adaptation. Women (during both weather events) and the elderly (during heatwaves) state largest affectedness during past events, and see the community as being responsible for future adaptation. Hispanic and African American respondents, on the other hand, were identified to perceive adaptation to be more of an individual task—potentially related to previous experience with (a lack of) local government services in their areas. Our findings evoke equity questions, and can aid urban decision makers aiming to implement effective and just adaptation measures, targeting vulnerable socio-economic groups in New York City and potentially other cosmopolitan areas.
Introduction
Over the last 20 years, extreme weather events such as storms, extreme temperature events, and floods were the deadliest weather-related phenomena in the world [1]. These three weather-related events are also the deadliest in the United States [2], with heatwaves topping the list on the 30-year average, storms ranking first on the 10-year average, and floods having caused the most fatalities in 2017. Urban areas are particularly susceptible to weather-related hazards since they are densely populated, reliant on transportation and vulnerable to utility outages [3]. For example, New York City (NYC), the largest metropolis of the United States, faces the largest future health risks from increasing temperatures/heatwaves and coastal storms with flooding [4].
Impacts of heatwaves and coastal storms are usually stratified across the population, particularly in cities, which are characterized by a diversity of people in terms of socio-economic backgrounds, ethnicities, and cultures, with a strong relation to social vulnerability [5]. For example, heat-related risk is linked to both intrinsic person-specific factors (e.g. age, sex, ethnicity, disabilities, and medical status) and extrinsic socio-economic factors (e.g. socio-economic status, gender, education, living and working location and conditions) [6,7]. Romero-Lankao, Qin [8] found altogether 13 variables commonly used to examine vulnerability to temperature-related hazards, including hazard magnitude (i.e. temperature level), population density, age, income, gender, pre-existing medical conditions, minority status, education, poverty, acclimatization, and access to home amenities such as air conditioning and swimming pools. Similar factors are often assumed to influence the vulnerability to rainstorms, but with a stronger focus on locational factors, such as the presence of impermeable surfaces, the scarcity of green spaces, inadequate or clogged drainage systems, and the ill-advised development of housing on marshlands, flood plains and other natural buffers [7].
Climate change adaptation, 'the process of adjustment to actual or expected climate and its effects' [9, p 1758], is well underway in NYC [10]. Influenced, among other reasons, by experiences with hurricane Sandy and, to a lesser extent, hurricane Irene, recent adaptation planning has concentrated on the rehabilitation and stabilization of the waterfront. Both events demonstrated the devastating impacts a lack of preparedness for coastal storms can have in terms of human health and well-being as well as property damage. The city of New York is also slowly but increasingly acknowledging the potential for serious impacts of heat. It has recently implemented various heat preparedness actions and, for example, operates cooling shelters throughout the five boroughs during heat emergencies [11,12].
However, although heatwaves in particular pose a major future climate-related hazard in NYC, adaptation planning and policy action addressing heatwave risk is far less extensive than for heavy rainstorms. Local planning documents on heat adaptation are fewer, although climate projections suggest that heatwaves will approximately triple in frequency by the end of the century compared to current conditions [13]. Excess heat-related deaths due to heatwaves are expected to increase by 47%-95%, with a mean of 70%, for the NYC area from 1990 to 2050 [14]. Precipitation is expected to decrease overall for the North-Eastern region of the United States [15]. However, seasonal increases in winter precipitation may in some instances put a burden on areas that are already exposed to flooding and other rain-related hazards [16].
Moreover, it is documented that some adaptation measures already in place in NYC, e.g. cooling centers, are only used by a fraction of those in need, e.g. the vulnerable populations [11,12]. Although heatwaves pose a major risk to urban populations [17], particularly when air conditioning and other short-term remedies fail, they might be perceived as less of a risk because their impacts are subtle, private, and not structural [18]. We must conclude that the adaptation challenge is enormous [19][20][21], in particular with regard to the documented underutilization of existing adaptation measures.
Scholars argue that, in order to deliver effective adaptation, adaptation actors need to assume specific and clear roles [22,23]. The question of roles and responsibilities is crucial, in particular with regard to the protection of the most vulnerable, who may lack the means to protect themselves, evoking the question 'whether the protection of vulnerable individuals should be an individual or a collective responsibility' [24, p 1065]. Eisenack and Stecker [25] and Eisenack, Stecker [26] define three types of actors based on location factors: the exposure unit, the operator, and the receptor of adaptation. Mees, Driessen [22] see responsibilities as mainly divided by type of governance entity (public versus private), the most common distinction. According to that division and a study of European and North-American cities, local governments play the primary role in adaptation while private entities have a less pronounced role [22,27]. This reflects the widely held assumption in adaptation science that adaptation should take place at the local government level [28]. However, while such a division is seldom clearly defined in practice, it reflects a corresponding debate among citizens, as, e.g. found with respect to the responsibility for health care in the Netherlands [24]. And, while local governments may be in the driving seat in the stage of policy emergence [27, p 374], it is envisaged that 'with the maturation of the policy field and the expected acceleration of climate impacts ... local public authorities need to more actively engage the different private actors such as citizens, civil society and businesses' [27, p 374]. This would allow responsibilities to be shared and all of society's resources to be fully exploited: 'active involvement of all societal actors might overcome problems of inefficiency and raise the legitimacy of adaptation action' [22, p 305].
If private actors are to be more actively engaged in adaptation processes, one question is whether and how citizens see and perceive their role in adaptation. In particular, it has been argued that understanding perceptions of adaptation responsibility and roles may help explain the documented lack of use of provided adaptation measures in New York [11,12] and other cities [29]. Research in Australia shows that citizens may not view themselves as passive players in climate adaptation (results of a climate change engagement program showed that many people want to act and be engaged [21]), but that residents lack procedural knowledge of, or have diverging views on, how to adapt [30]. For example, in a coastal community in central Victoria, opinion among community members ranged between 'retreat is the only option' and 'there will not be much leaving' [30, p 350].
Such diverging views may be related to differential impacts across various socio-economic groups of residents and across different weather events [31], as local experiences of impacts play an important role in adaptation [22,23,30,[32][33][34][35]. Indeed, whereas a growing body of research in NYC focusses on quantifying mostly infrastructural, sectoral impacts of heat and coastal storms [36][37][38][39][40], the public perceptions of the related risks and vulnerabilities, as well as attitudes towards adaptation needs and responsibilities, are yet to be understood [41,42]. Views of stakeholders and perceptions of residents constitute a vitally important aspect of the effectiveness and the legitimacy of adaptation [22,30]. Therefore, we ask: (1) How were impacts of heatwaves and rainstorms perceived by different socio-economic groups in NYC in the recent past? (2) According to citizens' views, which sectors are most impacted and therefore most in need of adaptation in the future? (3) What is the perceived responsibility of citizens and of communities in adaptation?
To do so, we investigate experienced impacts and perceived future impact severity, adaptation needs and adaptation responsibility for heatwaves and heavy rainstorms in NYC, and how these factors are influenced by different levels of social vulnerability. NYC serves as a case study due to its diverse demographics and experiences with extreme weather in the recent past.
Data collection and processing
The main data source is an online questionnaire survey on the perception of impacts and adaptation responsibility of heatwaves and heavy rainstorms conducted in NYC. The term heavy rainstorm was used in reference to both hurricanes and Nor'easters, the most numerous weather conditions entailing storms with flooding in NYC (see above). We used the term heavy rainstorm in a generic way drawing on the perception of the respondents, instead of providing a scientific definition that would relate the hazard to an abstract, scientific concept.
The survey was conducted from 5 November to 8 December, 2013, and supported by the Center for Research on Environmental Decisions (CRED), Columbia University, protocol IRB-AAAK2162 (Y1M04). The implementation of the online questionnaire survey was done by Qualtrics Survey Providers, using their survey software and sample procedures [43].
The sampling frame consists of the five counties and boroughs of NYC: Bronx, Kings (Brooklyn), New York (Manhattan), Queens (Queens), and Richmond County (Staten Island), initially targeting 100 respondents from Staten Island (the maximum number of respondents Qualtrics could assure to generate there) and 200 respondents from the other boroughs.
The survey was conducted with a randomly selected sample, representative of the NYC adult population with regard to gender and age (supplementary material (SM) 1 is available online at stacks.iop.org/ERL/14/014010/mmedia). However, as in other online surveys, it is difficult to make an informed judgment about the response rate [44], as survey providers do not supply this information. The software registered more than 1200 attempts (complete and incomplete questionnaires), of which 935 were completed correctly, meaning that approximately 22% of respondents did not finish. After rigorous automated and manual quality control, the sample contained N=762 fully completed and valid responses. Automated quality control included checks of the IP address, a captcha code, a valid ZIP code and attention questions, as well as the requirement of completeness. Manual quality control comprised checking the understanding, truthfulness (sorting out respondents that put in a random selection of letters, such as 'asdddrftsfgg') and reliability of responses (via internal consistency, asking for similar aspects in two different questions). Automated and manual quality control reduce concerns about the quality of the online questionnaire data to a minimum.
Respondents were compensated according to Qualtrics policy and received US$4 per completed questionnaire. Completing the survey took about 30 minutes. The sample was drawn independently of any other sample drawn for previous surveys in the area. Participants had to provide informed consent. The questionnaire comprised a maximum of 68 open- and closed-ended, multiple- or single-choice questions (depending on previous answers, the questionnaire differed in length). Questions were clustered into groups/sub-groups, each providing indicators of either impacts (or impact interactions, not analyzed here), adaptation, or socio-economic characteristics of the respondents. Order effects were accounted for, i.e. answers to multiple-response questions were randomized and blocks of questions regarding extreme events (i.e. asking about heatwaves or heavy rainstorms) were shuffled. The questionnaire is provided as SM5. Table 1 provides an overview of variable dimensions, time horizons, variables, indicators and data types.
Data analysis
Tests for associations between the dependent variables, i.e. the impact and adaptation dimensions (#1, #2, #4-#7 in table 2), and the independent, socio-economic variables (#8 to #14 in table 2) included the following (a brief code sketch follows the list): (1) Linear regression (testing associations between a continuous dependent variable given two or more independent variables, assuming a normal probability distribution).
(2) Ordinal logistic regression (testing associations between an ordinal dependent variable given one or more independent variables assuming a multinomial probability distribution).
(3) Loglinear regression (testing associations between a dependent variable that consists of 'count data' given one or more independent variables assuming a Poisson probability distribution) [45].
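As a rough illustration, the three model families can be reproduced with standard statistical software. The following Python sketch uses statsmodels on synthetic data; all variable names, sample values and effect sizes are hypothetical placeholders, not values from the survey:

```python
# Minimal sketch of the three model families on synthetic data.
# All variable names, sizes and effects are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 762
df = pd.DataFrame({
    "age": rng.integers(18, 85, n),
    "female": rng.integers(0, 2, n),      # dummy-coded factor (0/1)
    "prev_harm": rng.integers(0, 2, n),   # previous harm over the last 10 years
})

# (1) Linear regression: continuous outcome, e.g. a perception score.
df["impact_score"] = 2 + 0.3 * df["prev_harm"] + rng.normal(0, 1, n)
ols = sm.OLS(df["impact_score"],
             sm.add_constant(df[["age", "female", "prev_harm"]])).fit()

# (2) Ordinal logistic regression: ordered outcome, e.g. affectedness on a 1-4 scale.
df["affected"] = pd.Categorical(rng.integers(1, 5, n), ordered=True)
ordinal = OrderedModel(df["affected"], df[["age", "female", "prev_harm"]],
                       distr="logit").fit(method="bfgs", disp=False)
exp_b = np.exp(ordinal.params)  # Exp(B); note that threshold parameters are included

# (3) Loglinear (Poisson) regression: count outcome, e.g. number of impacts mentioned.
df["n_impacts"] = rng.poisson(2, n)
poisson = sm.GLM(df["n_impacts"],
                 sm.add_constant(df[["age", "female", "prev_harm"]]),
                 family=sm.families.Poisson()).fit()
```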
Detailed descriptions of the definition and processing of the dependent and independent variables are provided in SM1. The independent predictors are either categorical variables (dichotomous or multinomial), entered as factors (gender, ethnicity, building conditions, income), or continuous (scale or interval), treated as covariates (age, household structure, social networks, previous harm). Factors are transformed into dummy variables (0/1) to allow an easier interpretation of the results. For example, income (household and personal income) is treated as categorical data with two categories, testing differences between low versus medium-high incomes. For personal income, 'low' is defined as income up to US$20,000/year and medium-high as income above US$20,000/year, while the respective cut-off for households is US$50,000/year. (These cut-offs are based on the poverty definitions of the American Community Survey (ACS, 2010) for a medium-sized household of 3-4 people; the mean number of household members in the sample is 3.6 persons/household, although the threshold lies somewhat below the official national one.) 'Previous harm/damage (last 10a)' was treated as a socio-economic characteristic and added as an independent covariate, as it relates to social vulnerability by decreasing a person's adaptive capacity in the future [47,48]. It is defined as 'damage to the property', 'lost income', 'health-related damage', and/or 'other harm' having occurred during a heatwave or heavy rainstorm over the last ten years.
The model as a whole was evaluated via an omnibus test, checking whether all the independent variables collectively improve the model over the intercept-only model (reported as the Likelihood Ratio Chi-square with p values). Tests of significance of individual regression coefficients are performed via the Wald chi-square statistic. We report the standardized coefficient Exp(B), the odds ratio, as well as the exact p values. Other test parameters, such as the Wald chi-square and confidence intervals, can be found in SM3. The analysis was carried out using SPSS Statistics 23.0.
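To make the reported statistics concrete, the following hedged sketch shows how the omnibus likelihood-ratio test, Wald-based p values and Exp(B) could be computed for one of the models fitted in the previous sketch; it illustrates the general procedure, not the paper's SPSS output:

```python
# Illustration of the omnibus likelihood-ratio test against the
# intercept-only model, plus Wald-based p values and Exp(B),
# continuing the Poisson model from the previous sketch.
from scipy import stats

null = sm.GLM(df["n_impacts"], np.ones((len(df), 1)),
              family=sm.families.Poisson()).fit()   # intercept-only model

lr_chi2 = 2 * (poisson.llf - null.llf)              # Likelihood Ratio Chi-square
df_diff = poisson.df_model - null.df_model
p_omnibus = stats.chi2.sf(lr_chi2, df_diff)

wald_p = poisson.pvalues                            # Wald tests per coefficient
exp_b = np.exp(poisson.params)                      # Exp(B): rate ratios here,
                                                    # odds ratios in logistic models
exp_ci = np.exp(poisson.conf_int())                 # confidence intervals on Exp(B)
```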
Results
Figure 1 shows descriptive statistics for indicators relating to past experiences. Overall, respondents feel they have been more affected by heatwaves than by heavy rainstorms, even though more respondents claimed harm/damages from heavy rainstorms. For example, 19% of respondents reported being very much affected by heat (1a) and 14% by its secondary impacts (1b), while these numbers are 12% and 10%, respectively, for heavy rainstorms. Harm/damages from heatwaves were mostly health-related (49% of heat damages; 1d). In contrast, heavy rainstorms caused more damage to property and resulted in more lost income (41% and 31% of damages, respectively; 1d).
Descriptive statistics for perception-related questions about the future are shown in figure 2. Respondents were slightly more worried about impacts of heavy rainstorms than of heatwaves (14% and 12%, respectively) in the next 20 years (2a), although most respondents were only somewhat worried about these (approximately 24%). This aligns with the perceived severity of future impacts. Personal and family impacts due to both heatwaves and heavy rainstorms in the future are perceived to be not very to somewhat severe (M=2.43 and M=2.49, respectively). The largest future impact of heatwaves was perceived to relate to plant and animal species (M=3.21); the largest impact of heavy rainstorms was perceived to affect NYC in general (M=3.22; 2b).
Regarding adaptation, most respondents believe that more individual effort is necessary to protect themselves against both heatwaves and heavy rainstorms (26% and 27% of respondents; 2c). Fewer think that much more individual adaptation is necessary (9% and 11% of respondents for heatwaves and rainstorms, respectively; 2c). Respondents perceive a higher need to individually prepare against rainstorms; accordingly, more respondents feel that they do enough when it comes to heatwave adaptation (13% of respondents feel they do enough, compared with 11% for heavy rainstorms; 2c). In comparison, the need for communities to invest in adaptation is generally regarded as somewhat to very important (2d). For heatwaves, respondents see the largest adaptation need in the electricity system (M=3.76), the water supply (M=3.72) and the health sector (M=3.70). The largest adaptation needs for heavy rainstorms relate to the drainage and sewer system (M=3.86), the electricity system (M=3.86), and the subway and rail systems (M=3.85; 2d).
Regression analyses (table 2; SM3 and SM4 for full results) reveal that experiences with and perceptions of heatwaves are related to previous harm, ethnicity, income, gender, and age. Previous harm is significantly associated with all impact and adaptation dimensions tested and has large effects on the dependent variables; it is the most potent predictor in this study.
Ethnicity is decisive for the number of heatwave impacts mentioned, the perceived severity of future impacts from heatwaves, and individual adaptation responsibility. For example, being of Hispanic descent makes it roughly 135% more likely to regard heatwave adaptation as (one unit) more of an individual task [Exp(B)=2.345] and 28% more likely to perceive future heatwave impacts as (one unit) more severe. Income levels are important for the perception of future impacts: respondents with larger household budgets are 12% less likely to perceive impacts as severe [Exp(B)=0.888]. Age significantly relates to affectedness by heatwaves and its secondary impacts, as well as to the adaptation responsibility of communities; all three dimensions increase with age, though the effects are small.
Gender plays a role in direct heatwave affectedness and adaptation responsibility. Females are 34% more likely than men to be more affected during heatwaves [Exp(B)=1.336] and 16% more likely to view the community as being (one unit) more important for adaptation [Exp(B)=1.161].
Regarding heavy rainstorms, a number of predictors are similar. For example, previous harm is again the most important factor, being significant across all impact and adaptation dimensions tested. The strongest relation again exists with regard to worry about the future [Exp(B)=2.041], though it is slightly weaker than for heatwaves.
As regards ethnicity, being of African American descent makes it 115% more likely to regard adaptation during heavy rainstorms as (one unit) more of an individual task [Exp(B)=2.154]. Similarly to heatwaves, the ethnic dimension is the second most potent and influential factor among all models tested for heavy rainstorms.
Household and personal income are also potent predictors for heavy rainstorms, particularly relating to the impact dimensions. Interestingly, as for heatwaves, a large household income decreases the likelihood of worrying about the future. Gender significantly influences more impact and adaptation dimensions for heavy rainstorms than for heatwaves: women are 38% more likely than men to be more affected during heavy rainstorms, and they view adaptation as being more of a community responsibility than men do [Exp(B)=1.112]; being a woman raises that likelihood by 11%.
The associations between past affectedness and the perception of future impacts and adaptation responsibility also reveal interesting patterns. Table 2 shows that affectedness does not explain the perception of future impacts well, but it does correspond to views about the role of communities in adaptation: respondents who felt significantly more and directly affected by past heatwave and heavy rainstorm events believe that the community should invest more in, and hence is responsible for, adaptation.
Discussion
The study was driven by three research questions, which will structure the discussion section.
(1) How were impacts of heatwaves and rainstorms perceived by different socio-economic groups in NYC in the recent past?
The analysis reveals that more people are very much affected by heatwaves than by rainstorms, but that rainstorms affect more people somewhat. Impacts of heatwaves should therefore not be underestimated; they are simply different in nature. Heatwaves cause more health-related damages, which apparently are perceived as a stronger effect than, e.g., the property damage and lost income seen during rainstorms. This is an important policy-relevant finding and shows that impacts should not only be measured in terms of structural damage, but also in terms of other outcomes such as health and well-being [49]. After all, heatwaves cause more fatalities in the United States than other climate hazards [50,51]. Gender, age, the number of friends in the building, and personal income significantly determine the strength of affectedness. Females perceive themselves to be significantly more affected than men during both weather events. Age is (only) significantly related to the impact of heatwaves, in line with other studies of heat risk in major US cities [52,53] and around the globe [6,7]. The number of friends is (only) significantly related to heavy rainstorms, and may show that the recent Hurricane Sandy affected residents in larger buildings and larger households (not statistically significant), though not necessarily families. A large personal income is also positively related to affectedness by (secondary) impacts of rainstorms. Previous research on the issue is not conclusive: a recent study on post-Hurricane Sandy recovery indicated that middle-income homeowners were most vulnerable to flooding [54], while other studies indicate that lower-income individuals and families experienced more substantial impacts as a result of Hurricane Sandy [40,[55][56][57][58], as well as Hurricane Katrina [59][60][61].
As a form of triangulation, we included previous harm, a slightly different but related indicator to affectedness. It is strongly related to past affectedness (as one would expect) and is the most important predictor for all the heatwave and heavy rainstorm dimensions. With that result, our study supports the finding that Hurricane Sandy probably impacted middle-income households more than others [54].
(2) According to citizens' views, which sectors are most impacted and therefore most in need of adaptation in the future? Respondents are slightly more worried about heavy rainstorms than heatwaves in the future, despite the contrary pattern in affectedness in the recent past. This may be influenced by the recent damages of Hurricane Sandy, which are vividly remembered and not yet overcome [55]. It also suggests that affectedness does not directly correspond to perceptions of future impacts. Such a finding may be explained by an optimism bias or valence effect, a cognitive bias that causes a person to believe that they are less at risk of experiencing a negative event compared to others [62,63], or by the gambler's fallacy [64], the mistaken belief that, if something happens more frequently than normal during some period, it will happen less frequently in the future. These and other cognitive biases have been frequently documented in perception studies on climate change [33,34]. Adaptation need, i.e. the severity of future impacts, is influenced by ethnicity, gender, income and the availability of A/C. African American and Hispanic respondents see a significantly larger adaptation need for heatwaves, while the effect is insignificant for rainstorms. Women indicated a significantly larger adaptation need and worry about the future for rainstorms. Interestingly, a large household income decreases the perceived severity of future impacts as well as worry about the future in our study, whereas a large personal income increases these aspects. The directional change in the relation might be an education effect or information bias, with more educated and informed people (usually having higher incomes) also being more worried.
(3) What is the responsibility of citizens and of communities in adaptation?
Overall, individual adaptation responsibility is perceived to be higher for heavy rainstorms and lower for heatwaves (figure 2(c)), contrary to stated previous harm and affectedness, and although one would assume that the means of personal adaptation are greater when it comes to heatwaves. Community education programs, particularly those explaining the life-threatening impacts heat can have as well as the substantial steps individuals can take to prevent them, could be beneficial for raising awareness [65]. Such programs can also increase the utilization of heat adaptation measures currently in place, such as cooling centers [11,12].
Ethnicity and previous harm significantly influence views on individual adaptation, while gender, age and previous harm are significantly related to views on community adaptation. Adaptation responsibility thus relates more to affectedness and previous harm than to the perceived future severity of heatwaves and rainstorms.
Respondents who state having been significantly more affected by heatwaves (the elderly and females) and heavy rainstorms (mostly females) see the community as in charge of adaptation, i.e. they do not place the responsibility with the individual. This might, for example, explain why cooling centers are insufficiently used (going there is an individual action) and reflect an adaptation need that is currently not met. In contrast, people of Hispanic descent (for heatwaves) and African Americans (for rainstorms) regard adaptation as more of an individual responsibility. Although these groups did not report being significantly (more) affected, they might be vulnerable. Martin [66] determined general social vulnerability factors in American cities through meta-analysis and found 'being of color' to be a particular driving force. Other studies also show that people of color are more at risk than other city dwellers because the housing they can afford tends to be located in environmentally riskier areas and to be of poorer quality [67], and because local governments overseeing such neighborhoods often fail to establish and maintain proper services [67]. As a consequence, people in such neighborhoods may choose to rely on themselves, an important finding. This differs from the results of other studies, e.g. in the UK and Ireland, where a lack of government support led to a form of helplessness among citizens and subsequently an unwillingness to take on personal responsibility for adaptation (in that case, flood protection) [23].
The latter findings are important for two reasons. First, it is known from the disaster literature that extreme events are likely to have the most devastating impacts on the already vulnerable [66,[68][69][70]; therefore, addressing the needs and improving the resilience of previously affected communities and subgroups is likely to be particularly beneficial in preventing impacts from repeated exposure to weather hazards [66]. Second, improving individual resilience to heat and rainstorms among already affected populations may be particularly effective due to their increased sense of adaptation responsibility, particularly as regards community but also individual adaptation. The positive relation between harm from previous disasters and adaptation has also been shown in other studies [29,32,35], while potential future risk does not seem to play a substantial role in adaptation [71].
Our study has limitations that relate, for example, to the use of online questionnaires. Online questionnaire surveys have been found to be less likely than other survey forms to reach the elderly population, racial or ethnic minorities, and the unmarried, less educated, or highly affluent [44]. In addition, females are often more likely to exhibit information-seeking behavior and to participate in questionnaires [72,73]. While our sample was not found to underrepresent the elderly population and had roughly the same distribution of males and females (SM2), it underrepresented the African American, Hispanic, Asian and Native American populations, similarly to other non-probability online surveys [44]. However, participants in our survey had a higher income compared to other online surveys [44]. It is possible that the participation of individuals with higher incomes is due to self-selection bias. Previous research has indicated that low-income individuals (below $30,000/year) can be less likely than higher-income individuals (above $30,000/year) to be aware of climate change [74]; thus, higher-income individuals may have been more interested in responding to our questionnaire. However, the data were not subsequently weighted for analysis. Moreover, we acknowledge that some of the indicators are highly place-specific, i.e. indicative of the NYC socio-economic environment.
Conclusion
This study investigated the relationship between experienced impacts and perceived future impact severity, adaptation needs and adaptation responsibility for heatwaves and heavy rainstorms in NYC, and how these are influenced by different levels of social vulnerability. With that, the study aims to support NYC authorities and individuals in climate change adaptation, in particular in increasing adaptation effectiveness and pursuing an equitable and just adaptation approach. The views of stakeholders and the perceptions of residents constitute vitally important aspects of the effectiveness and legitimacy of adaptation.
The findings show that working towards more equitable and just adaptation policies for heatwaves and rainstorms requires addressing different social groups and vulnerability markers. Overall, effective and just adaptation is not an easy task and should not be understood as a one-size-fits-all activity; context matters [23]. We show that previous harm strongly affects views on adaptation responsibility: women and the elderly, both groups significantly affected by previous events, see a greater responsibility of communities in adaptation. In contrast, Hispanics and African Americans perceive adaptation to be more of an individual responsibility, relying on themselves, potentially as a consequence of failing local government arrangements. Considering all findings and implications, we conclude that, in order for adaptation policies to be effective, they need to consider previous harm and differential social vulnerability, specific to the weather event. This makes it possible to harness an increased sense of adaptation responsibility among already affected populations, to prevent impacts from repeated exposure, and to arrive at more just designs and implementations of adaptation measures.
Though we believe that the presented findings are relevant for other urban agglomerations, in particular in the US, similar studies on the conditions of adaptation effectiveness in other political, cultural and social contexts constitute an important direction for future research. | 2019-05-20T13:06:29.333Z | 2019-01-17T00:00:00.000 | {
"year": 2019,
"sha1": "0a882545c6291b948710416f6918d4e3df1205b3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1748-9326/aaf07a",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "12627369300d5d084a9f11c7df42a3b702e910f7",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Physics",
"Geography"
]
} |
70209024 | pes2o/s2orc | v3-fos-license | Fault diagnosis and classification for photovoltaic arrays based on principal component analysis and support vector machine
In order to extract more representative features from the numerous characteristic parameters of photovoltaic (PV) arrays and to realize effective fault diagnosis and classification, a method based on principal component analysis (PCA) is proposed in this paper. First, the PV array dataset is processed by PCA, producing a transform matrix. Second, the processed data are classified by a support vector machine (SVM). Finally, a classification model is built. Two sets of data, collected from a PV simulation system and an actual PV array, are used to examine the method. The results show that the method is able to recognize four kinds of states accurately (normal, open circuit, short circuit and partial shadow). Consequently, faults of the PV array can be diagnosed and classified.
Introduction
Given the threat of the greenhouse effect and energy shortages, the importance of reducing fossil energy use has been recognized. To protect human beings from environmental pollution and meet the growing demand for energy around the world, solar energy has become a popular research direction among the various kinds of renewable energy. Nowadays, crystalline silicon is widely used to produce the main commercial solar cells, covering 90% of the market [1]; the rest of the market is occupied by thin-film solar cells, which have a lower manufacturing cost and higher cost efficiency [2]. With the development of PV device manufacturing technology and the reduction of cost, the installed capacity of PV systems reached 2017 GW in 2016. The rapid growth of PV systems mitigates the worldwide energy crisis, but the maintenance of PV systems is a challenge for technicians because of the harsh working conditions of PV arrays, such as ultraviolet radiation, extreme weather, shadowing and so on [3]. These working conditions can lead to numerous faults in the PV array. The common faults are classified as line-to-line (L-L) faults, open circuits, arc faults or partial shadow [4].
To prevent failures from affecting the operation of PV arrays, researchers have proposed various methods for fault diagnosis. Recently, methods based on machine learning and data mining have become the focus of this research area due to their ability to handle the nonlinear behavior of PV systems [5]. For example, methods based on wavelet packets have been introduced to detect faults [6], although it is hard to confirm which kind of fault has occurred. Furthermore, a six-layer detection algorithm has been used to detect different faults, with the result given by a fuzzy-logic classification system and an accuracy of up to 98.8% [7]. A method using a fractional-order color relation classifier is able to classify faults such as mismatch faults, bridge faults, open circuits and so on [8]. In our other work, a method based on clustering by fast search and find of density peaks (CFSFDP) was introduced to detect and classify different faults [9], and an optimized kernel extreme learning machine was used to improve the classification performance [10]. The output of a PV array contains many characteristic parameters, which change when a fault happens and remain changed as long as the fault persists. By analysing the output of the PV array, we suppose that different working states of the PV array have unique characteristics. In order to extract the most typical features from the numerous characteristic parameters of the PV array and realize effective fault diagnosis and classification, PCA is used in this paper to process the dataset measured from the PV array; PCA also reduces the dimensionality of the dataset, so the volume of data is decreased as well.
The remainder of this paper is arranged as follows: Section 2 introduces the process of PCA and briefly presents the characteristics of SVM. Section 3 expounds the fault diagnosis and classification method for PV arrays. The working conditions of the PV array, simulation results and experimental results are presented in Section 4. Finally, conclusions are drawn in Section 5.
Methodology
PCA is a linear transformation used to simplify datasets. By analysing a dataset composed of interrelated quantitative variables, the purpose of PCA is to extract the important information from the dataset and represent it as a set of new orthogonal variables, called the principal components [11]. PCA is implemented by projecting the dataset into a new coordinate system: the maximum variance of the input dataset falls on the first coordinate (called the first principal component), the second largest variance falls on the second coordinate (the second principal component), and so on. According to this principle, PCA is useful when we want to reduce the dimensionality of a dataset while preserving the characteristics that contribute most to its variance. After being processed by PCA, the dataset is sent to the SVM for classification.
Implementation of PCA
Assume there is an N×M matrix X which contains M characteristic parameters $x_i$, each with N measurements. The $x_i$ is defined as equation (1):

$$x_i = (x_{1i}, x_{2i}, \ldots, x_{Ni})^T, \quad i = 1, 2, \ldots, M \quad (1)$$

Moreover, the N×M matrix X is expressed as equation (2):

$$X = [x_1, x_2, \ldots, x_M] \quad (2)$$

To process the matrix X by PCA, the covariance matrix of X, denoted $C_X$, should be obtained. First, we remove the mean value from the matrix X and obtain the zero-mean matrix $\tilde{X}$ as equation (3):

$$\tilde{X} = [x_1 - \bar{x}_1, x_2 - \bar{x}_2, \ldots, x_M - \bar{x}_M] \quad (3)$$

where $\bar{x}_i$ is the mean value of $x_i$, calculated as equation (4):

$$\bar{x}_i = \frac{1}{N} \sum_{n=1}^{N} x_{ni}, \quad i = 1, 2, \ldots, M \quad (4)$$

Now the covariance matrix $C_X$ is calculated as equation (5):

$$C_X = \frac{1}{N-1} \tilde{X}^T \tilde{X} \quad (5)$$
The covariance matrix $C_X$ is an M×M matrix. Its principal diagonal elements are the variances of the individual characteristic parameters, while the off-diagonal elements are the covariances between any two characteristic parameters. The covariance reflects the noise and redundancy in the measurements [12].
After the covariance matrix $C_X$ has been calculated, eigendecomposition is performed on it. Assume there are a matrix Q and a matrix $\Lambda$; the relationship between $C_X$, Q and $\Lambda$ is shown as equation (6):

$$C_X = Q \Lambda Q^T \quad (6)$$

where the matrix Q is constituted by the eigenvectors of $C_X$, and $\Lambda$ is a diagonal matrix constituted by the eigenvalues of $C_X$. There is a one-to-one correspondence between eigenvectors and eigenvalues. Sorting the eigenvalues in descending order, we choose the first k eigenvalues (k ≤ M) and the eigenvectors corresponding to them. The M×k projection matrix P is built from these chosen eigenvectors; the method for selecting the value of k is introduced in section 3. The result of PCA is then expressed as equation (7):

$$A = XP \quad (7)$$
Furthermore, when k=M, the covariance matrix of the matrix A is identical to the matrix $\Lambda$. In general, the eigenvalues of the matrix $C_X$ reflect the variance of each characteristic parameter in matrix A.
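A minimal numerical sketch of equations (3)-(7) may help make the procedure concrete; the data here are random placeholders, and the centering convention is assumed:

```python
# Minimal numpy sketch of the PCA steps in equations (3)-(7);
# X is an N-by-M data matrix (N samples, M characteristic parameters).
import numpy as np

def pca_projection(X, k):
    X_centered = X - X.mean(axis=0)                  # eqs. (3)-(4): remove column means
    C = (X_centered.T @ X_centered) / (len(X) - 1)   # eq. (5): M-by-M covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)             # eq. (6): C = Q Lambda Q^T
    order = np.argsort(eigvals)[::-1]                # sort eigenvalues in descending order
    P = eigvecs[:, order[:k]]                        # M-by-k projection matrix
    return X_centered @ P, P                         # eq. (7): A = XP, plus P for reuse

# Example with placeholder data: 100 samples, 8 parameters, keep k = 7 components.
X = np.random.default_rng(1).normal(size=(100, 8))
A, P = pca_projection(X, k=7)
```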
Support Vector Machine (SVM)
SVM is a powerful and versatile machine learning method widely used in classification and regression. For nonlinear classification, the main idea is to map the input vectors non-linearly into a high-dimensional feature space, in which a linear decision surface is constructed. The high generalization ability of SVM is ensured by the special properties of the decision surface [13]. After classification has finished in the high-dimensional space, the result is mapped back into the original space.
Since PCA can only preprocess the dataset and improve its separability, a classification model is needed to actually classify the different working conditions of the PV array. SVM adopts structural risk minimization to achieve optimal generalization ability and avoid over-fitting by balancing the training error against maximizing the classification margin. It handles practical problems such as small samples and non-linearity well. Therefore, SVM is applied in this paper for this purpose, and the classification model is trained with the LIBSVM toolbox for MATLAB [14].
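As an illustrative stand-in for the LIBSVM/MATLAB training step, the following sketch uses scikit-learn's SVC (which wraps libsvm) on the PCA-projected features A from the previous sketch, with placeholder state labels:

```python
# Stand-in for the LIBSVM/MATLAB training step, using scikit-learn's SVC
# on the PCA-projected features A from the previous sketch; the state
# labels y are placeholders, not measured data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

y = np.random.default_rng(2).integers(0, 4, size=len(A))  # 4 working states

scaler = MinMaxScaler()          # the paper normalizes A before training
A_norm = scaler.fit_transform(A)

clf = SVC(kernel="rbf")          # the RBF kernel maps to a high-dimensional space
clf.fit(A_norm, y)
```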
Fault diagnosis and classification for PV arrays
To diagnose the failure of a PV array by PCA, the output data of the PV array should be collected from the very beginning. According to the dataset measured from the PV array, we find that the measurements of some characteristic parameters change considerably between different working states of the PV array (such as different faults), but vary only slightly within the same working state. Notably, some working states have similar characteristics: as shown in figure 1, the maximum power point current (IMPP) and maximum power point voltage (VMPP) of some states overlap and are hard to separate directly. PCA is then used to find the most expressive transformation, which further improves separability and robustness. In this case, seven characteristic parameters, IMPP, VMPP, ISC, VOC, PMPP, TC and G, are collected to build the original dataset, where IMPP is the current of the PV array at the maximum power point, VMPP is the voltage at the maximum power point, ISC is the short-circuit current, VOC is the open-circuit voltage, PMPP is the power at the maximum power point, TC is the temperature of the PV module and G is the irradiance. Moreover, ISC·VOC is added to the original dataset, which contains N samples, as an eighth characteristic parameter. The original dataset is then randomly divided into the training dataset, matrix X, and the testing dataset, matrix Y, on a proportional basis.
Referring to the process of PCA introduced in section 2.1, we normally take k < 8 and produce an M×k projection matrix P, where k is the number of eigenvalues and corresponding eigenvectors that are applied; the remaining eigenvectors and eigenvalues are removed. In theory, the value of k should be as small as possible to obtain a lower-dimensional result. However, some of the eigenvectors with small eigenvalues still contribute to classification and should be retained as well. Hence, to determine the value of k, the classification effect under different values of k is checked via SVM and the k with the highest classification accuracy is selected. It should be noted that PCA only projects the dataset and never changes the location of the data in the matrix. Hence we multiply the training matrix X and the testing matrix Y by the projection matrix P in turn, obtaining matrix A, expressed as equation (7) in section 2.1, and matrix B, expressed as equation (8): $B = YP$.
The matrix A is imported into the SVM to train a classification model after normalization. To verify the effectiveness of the classification model, it is then used to classify the matrix B, which has also been normalized. The flow chart of fault diagnosis and classification for the PV array is shown in figure 2.
In the above process, seven classification models and corresponding projection matrices are produced. We select the model and the corresponding projection matrix with the best classification effect. After obtaining the best classification model and projection matrix, a new dataset of the PV array can be processed directly by multiplying it with the projection matrix, without redoing the whole PCA procedure. This simplifies the operation and achieves the purpose of fault detection by means of the trained classification model.
[Figure 2, flow chart of the proposed method: start; collect the output of the PV array and randomly divide it into the training matrix X and the testing matrix Y on a proportional basis; process X by PCA to produce an M×k projection matrix P, where M is the number of characteristic parameters in X; compute A=XP and B=YP; normalize A and import it into the SVM to train a classification model; classify the normalized matrix B and record the classification accuracy; if k ≤ 8, set k=k+1 and repeat; otherwise end.]
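The loop of figure 2 can be sketched as follows; this is an assumed reconstruction in Python rather than the authors' MATLAB implementation, and X_train, y_train, X_test and y_test are placeholders for the measured PV data and state labels:

```python
# Assumed reconstruction of the figure 2 loop; the inputs are placeholders
# for the measured PV data and the corresponding working-state labels.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def select_best_k(X_train, y_train, X_test, y_test, max_k=8):
    best = (0.0, None, None, None)             # (accuracy, k, P, model)
    mean = X_train.mean(axis=0)
    C = np.cov(X_train, rowvar=False)          # covariance of the training data
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]          # eigenvalues in descending order
    for k in range(1, max_k + 1):
        P = eigvecs[:, order[:k]]              # M-by-k projection matrix
        A = (X_train - mean) @ P               # A = XP
        B = (X_test - mean) @ P                # B = YP
        scaler = MinMaxScaler().fit(A)
        model = SVC(kernel="rbf").fit(scaler.transform(A), y_train)
        acc = model.score(scaler.transform(B), y_test)
        if acc > best[0]:                      # keep the k with the best accuracy
            best = (acc, k, P, model)
    return best
```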
Simulation and experiment
The results of the simulation and experiment are presented in this section, along with the details of the simulation system and the experimental platform.
Simulation system and experimental platform
The PV array simulation model is built in MATLAB/Simulink for testing. The model has 5×10 PV modules: 10 modules are connected in series as a string, and 5 identical strings are connected in parallel to form the array. Under STC, the maximum output power of the simulated PV array is 2750 W, the open-circuit voltage is 217 V and the short-circuit current is 17 A. A 1.8 kW grid-connected photovoltaic system is used to test the performance of the proposed method under real working conditions, as shown in figure 3. This PV array consists of 18 PV modules: 6 modules are connected in series as a string, and 3 strings are connected in parallel to form the array. In addition, two separate PV modules are used as reference modules, one for collecting the open-circuit voltage and the other for collecting the short-circuit current. The parameters of the experimental photovoltaic array are shown in table 1.
Verification by simulation data
The daily working states simulated by the simulation system include: (1) normal, (2) one of the strings open-circuited (Open1), (3) one module in a string short-circuited (Short1), (4) one module in a string covered by shadow (Shade1). We mix data from the four states and define labels for each state so that they can be trained and classified by SVM. A certain amount of data is randomly selected from each state as the test data and mixed to form the test data matrix; the rest constitutes the training data matrix. According to table 2, the highest classification accuracy is 100% when k=7 or k=8; therefore, the value of k is selected as 7. The classification accuracy of the simulation proves the validity of the proposed approach. After being processed by PCA, the feature vector of the training data becomes 7-dimensional, which is illustrated in figure 4 using boxplots. Figure 4 has 7 sub-figures, each representing one dimension of the feature vector, and the horizontal label represents the working state of the PV array (1: normal, 2: Open1, 3: Short1, 4: Shade1). From figure 4, the following phenomena can be observed: for the second dimension, the data samples of the Open1 state are larger than those of the other states; for the fifth dimension, the data samples of each state are significantly different; for the sixth dimension, the data samples of the Short1 state are the largest. These dimensions therefore contribute substantially to the classification accuracy due to their outstanding separability.
Verification by experiments
To further test the performance of the proposed method, an experimental study is conducted. The daily working states and the specific operation are the same as in the simulation verification. Following the classification results enumerated in table 3, the classification accuracy reaches 100% when k=6, k=7 or k=8. Thus, the value of k is selected as 6 and the feature vector of the training data is 6-dimensional. The boxplots of the feature vector are shown in figure 5. Similar to the simulation, the horizontal label represents the working state of the PV array and the 6 sub-figures represent the 6 dimensions of the feature vector. For the first and second dimensions, the data samples of the Open1 state can be distinguished from the other states; for the third dimension, the data samples of the normal state are the largest; for the fifth dimension, the data samples of the Shade1 state are the smallest; and in the sixth dimension, the data samples of the Short1 state are the largest. The experimental results also prove the efficiency of the proposed model.
Conclusion
A fault diagnosis and classification method based on PCA and SVM is presented in this paper. The output data of the PV array are processed by PCA to improve their separability and to reduce the dimensionality of the dataset. SVM is then adopted to train a classification model. To determine the value of k, the classification effect of the model under different k is studied and the best k is chosen; if several models achieve the same classification accuracy, the model with the smallest k is selected as the best solution. The aim of this method is to obtain a projection matrix and a model with the best classification performance. The simulated and experimental results indicate that the proposed method can accurately classify four types of working states of the PV array, i.e. normal, open circuit, short circuit and partial shadow. | 2019-02-19T14:07:39.921Z | 2018-10-30T00:00:00.000 | {
"year": 2018,
"sha1": "36fc36e666803ce1868542f6fe1c2d7aa9acfeed",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/188/1/012089",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c6f5b72c2b067ac799e337e52b6b49ac0551dc74",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
243481240 | pes2o/s2orc | v3-fos-license | The Effect of Reproductive Health Education on Knowledge and Attitudes of Adolescent About Premarital Sex in Private Vocational School Surabaya
Introduction: Adolescence is a period of storm and stress; health problems in adolescents are related to risky behavior, namely smoking, drinking alcohol, abuse of illegal drugs and premarital sexual relations. The research objective was to determine the effect of reproductive health education on adolescent knowledge and attitudes about premarital sex. Methods: The research design was quasi-experimental. The study population totalled 356 students. The sampling technique was non-probability purposive sampling, yielding a sample of 188 respondents. Reproductive health education was implemented via video and leaflet. Data analysis used the Wilcoxon and Mann-Whitney tests with a significance level of 0.05. Results: In the video group, the knowledge level showed p=0.000 and the attitude p=0.000; the leaflet group showed a knowledge level of p=0.000 and an attitude of p=0.000. The difference between the two groups was tested with the Mann-Whitney test: there was no difference in the effect of the video and leaflet methods on knowledge (p=0.219) or attitudes (p=0.469). Conclusion: Leaflets are effective for health education because they can be read individually and cover topics about premarital sex in a more personal way. Schools should provide sexual education integrated with formal lessons, using a variety of methods to increase students' knowledge. Citation: Hastuti, P., Prahesti, Y., & Yunitasari, E. (2021). The Effect of Reproductive Health Education on Knowledge and Attitudes of Adolescent About Premarital Sex in Private Vocational School Surabaya. Pediomaternal Nurs. J., 7(2), 101-108. Doi: http://dx.doi.org/10.20473/pmnj.v7i2.27498. Article history: received June 14, 2021; revised August 16, 2021; accepted September 8, 2021; published September 15, 2021.
INTRODUCTION
Reproductive health means being physically, mentally and socially healthy, not only being free from diseases or defects related to the reproductive system, its functions and processes. The scope of reproductive health services includes maternal and child health, family planning, prevention and management of sexually transmitted infections including HIV/AIDS, adolescent reproductive health, prevention of abortion and so on (Kementrian Kesehatan RI., 2016). In some abortion cases, adolescents have had abortions more than once, or repeatedly, as if abortion were the only option when facing pregnancy. This reflects adolescents' lack of knowledge about healthy life skills, the risks of premarital sexual relations and the ability to refuse unwanted relationships (Kusumaningrum, 2009). Cases of pregnancy outside of marriage and immorality among students are indeed on the rise; in recent months, many mass media outlets such as newspapers, television and radio have reported on students who could not take the national examination (UN) because they were pregnant.
The demographic and health survey results, especially the Adolescent Reproductive Health component, showed that the largest proportion of first-time dating occurred at ages 15-17. About 33.3% of girls and 34.5% of boys aged 15-19 started dating before they were 15 years old. This age is worrying because such adolescents do not yet have adequate life skills and risk unhealthy dating behavior, including premarital sexual intercourse. The survey also recorded the percentage of premarital sex among adolescents aged 15-19: in 2007, 3.7% for boys and 1.3% for girls; in 2012, the corresponding figures were 4.5% for boys and 0.7% for girls (Kementrian Kesehatan RI., 2016). The survey found that premarital sexual relations occurred mostly out of curiosity (57.5% of men), because they "just happened" (38% of women), or because respondents were forced by a partner (12.6% of women). Throughout 2015, there was an increase in cases of unplanned pregnancy among East Java students, with 30 cases compared with 23 cases in 2014. The Head of the Data and Research Division of the East Java Child Protection Agency (LPA) stated that students experiencing unwanted pregnancies in the Surabaya area are between 12 and 18 years old (Rahmawati & Devy, 2018). The researchers were therefore interested in studying class X at a Private Vocational School in Surabaya. During the preliminary study, an interview with the counseling guidance teacher revealed that there had never been any reproductive health education about premarital sex for class X. The researchers also interviewed 20 respondents; 50% of the students were already dating and 50% were not.
Sexual education is still taboo to discuss, from parents with their children to teachers with their students. The barriers parents feel when providing sex education include a lack of knowledge of how to provide it and discomfort, as parents feel awkward discussing sexual problems openly (Siregar, 2014). Adolescents' lack of knowledge increases their curiosity, and they will automatically seek information about sex through magazines, the internet and social media. Social media is closely tied to teenagers' lives today: teenagers are free to access any site, including pornographic videos, and without parental attention this habit can create a desire to act out what they see. Premarital sexuality arises because many teenagers are already dating, with high curiosity and a willingness to experiment. Unhealthy dating behavior can negatively affect adolescents and lead to premarital sexual behavior. The impacts of premarital sex include pregnancy, which results in dropping out of school, an increase in abortion cases, and an increase in sexually transmitted diseases such as HIV/AIDS in Indonesia. Youth are the nation's next generation, expected to replace previous generations with better work quality and mental health; however, risky sexual behavior in several regions of Indonesia tends to be high.
The researchers hope that, following this research, the school can provide reproductive health and sexuality education through formal lessons so that students obtain correct knowledge about sexual education, control adolescent behavior, increase extracurricular activities in students' free time, and provide spiritual education or other spiritual activities according to students' beliefs. Parents are expected to better control and direct their sons and daughters towards positive activities and to be more open about sex education for their children. Based on this phenomenon, the researchers conducted a study on the effect of reproductive health education on the knowledge and attitudes of adolescents about premarital sexuality at a Private Vocational School, Surabaya, East Java, in 2019.
Study Design
This research was a quasi-experiment with a pretest-posttest two-group design without a control group. The independent variable in this study is reproductive health education, and the dependent variables are adolescents' knowledge and attitudes. Knowledge and attitudes were measured in both groups of students before treatment. One group then received health education using the video method and the other using the leaflet method; after both groups were treated, their knowledge and attitudes were measured again.
Population, Samples, and Sampling
The population was all 356 class X students at the Private Vocational School Surabaya. Using Slovin's formula, the sample size was determined as 94 respondents for the video treatment group and 94 for the leaflet treatment group. The sampling technique was non-probability purposive sampling of adolescent students at the school who met the requirements. Inclusion criterion: willing to be a respondent. Exclusion criteria: (a) students who were sick and could not attend the research; (b) respondents who withdrew during the research process.
Instrument
Data were collected using a questionnaire on the respondents' demographic data and an observation sheet recording the duration of the counseling. The knowledge questionnaire consists of 19 questions on a Guttman scale, covering four indicators: the definition of premarital sex (questions 1-4), factors that encourage premarital sex (questions 5-8), the impact of free sex (questions 9-14) and solutions to premarital sex (questions 15-19). Correct answers are scored 1 and incorrect answers 0. Attitudes were measured using a 4-point Likert-scale questionnaire containing statements of the respondent's perception of existing symptoms or problems. Positive statements are scored Strongly agree (SS) 4, Agree (S) 3, Disagree (TS) 2, Strongly disagree (STS) 1; negative statements are scored in reverse (SS 1, S 2, TS 3, STS 4). The knowledge variable is categorized as Good (>75%), Enough (>55%-75%) and Less (<56%); the attitude variable is categorized as Negative (10-25) and Positive (26-40).
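As a small illustration of the stated cut-offs, the categorization could be implemented as follows (the function names are hypothetical, not from the study):

```python
# Illustrative implementation of the stated cut-offs; function names
# are hypothetical, not from the study.
def knowledge_category(correct_answers, n_items=19):
    pct = 100 * correct_answers / n_items    # percentage of correct Guttman items
    if pct > 75:
        return "Good"
    elif pct > 55:
        return "Enough"
    return "Less"

def attitude_category(likert_sum):           # sum of 10 items scored 1-4 (range 10-40)
    return "Positive" if likert_sum >= 26 else "Negative"
```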
Procedure
The researchers underwent an ethical review before data collection and submitted a research permit application letter to the Principal of Senior High School Pawiyatan Surabaya. Students willing to become respondents were asked to sign an informed consent form. Respondents who agreed to participate were given a questionnaire to complete in full in order to measure their level of knowledge and attitudes before reproductive health education (pretest). After the pretest, the researchers provided health education about the dangers of premarital sexuality using different methods: experimental group 1 received a 30-minute video and group 2 received leaflets, with 94 respondents in each group, at the arranged place and time and following the compiled procedure. Respondents were then given another questionnaire (posttest) to measure their level of knowledge and attitudes after the education; the posttest was carried out immediately after the intervention.
Data Analysis
The researchers conducted univariate analysis with descriptive statistics, describing each variable under study separately in frequency tables, after which the data were processed further. Bivariate analysis used the Wilcoxon and Mann-Whitney statistical tests with a significance level of 0.05: if p > 0.05, the hypothesis is rejected, meaning there is no effect of reproductive health education on the knowledge and attitudes of adolescents at Senior High School Pawiyatan Surabaya.
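A hedged sketch of these tests in Python, on placeholder scores rather than the study's data, might look as follows:

```python
# Hedged sketch of the bivariate tests on placeholder scores,
# not the study's data.
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(3)
pre_video = rng.integers(8, 16, 94)               # placeholder pretest scores
post_video = pre_video + rng.integers(0, 5, 94)   # placeholder posttest scores
post_leaflet = rng.integers(10, 20, 94)

# Paired pre/post comparison within one group: Wilcoxon signed-rank test.
stat_w, p_w = wilcoxon(pre_video, post_video)

# Independent comparison between the video and leaflet groups: Mann-Whitney U.
stat_u, p_u = mannwhitneyu(post_video, post_leaflet)

significant = p_w < 0.05                          # significance level used in the study
```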
Ethical Clearance
Approval sheets were given before the research was carried out so that respondents knew the aims and objectives of the research and the impacts of data collection. Ethical approval was obtained from KEPK Stikes Hang Tuah Surabaya, PE/32/V/2019/KEPK/SHT. To maintain the confidentiality of respondents' identities, the researchers did not include subjects' names on the data collection sheets. The confidentiality of the information collected from the subjects is strictly guaranteed.
RESULTS
Respondent characteristics showed that the mean age was 15.17 in the video group and 15.14 in the leaflet group; most respondents in both groups were female (Table 1). Bivariate analysis showed significant differences between pretest and posttest for both knowledge and attitude (p<0.05) (Table 2, Table 3). However, there were no significant differences between the two groups (p>0.05) (Table 4).
The effect of reproductive health education using the video method on adolescent knowledge about premarital sex
In this study, at pretest the majority of adolescents had sufficient knowledge, namely 84 students (89.4%); 73 adolescents did not know that feeling attraction, dating, and holding hands with a boyfriend are premarital sexual behaviors. One student experienced a decrease in the level of knowledge, 64 adolescents experienced an increase, and 29 adolescents remained at the same level. The researchers assume that adolescents need reproductive health education about premarital sex to improve healthy life skills, awareness of the risks of sexual intercourse and the ability to refuse unwanted premarital sexual relations. After the treatment with the video method, adolescent knowledge increased: 68 students (72.3%) had good knowledge. The SKRRI 2012 survey showed that adolescents' information regarding reproductive health and symptoms of sexually transmitted diseases was insufficient: only 35.5% of female and 31.2% of male adolescents aged 15-19 knew that a woman can become pregnant after one act of sexual intercourse, while information about HIV was relatively more widely received, although only 9.9% of girls and 10.6% of boys had comprehensive knowledge about HIV/AIDS. This is supported by research results (Syihabudin, 2018) showing that health education using audio-visual methods is effective for improving the level of knowledge.
The researchers assume that the video method attracts adolescents' attention both visually and aurally, so that millennial adolescents can grasp the content more quickly. In terms of adolescent development, adolescents at this age are moving towards more mature cognition, with a change towards a formal operational mindset (Naedi, 2012). Research by Tuong et al. (2014), which examined the effectiveness of video interventions in changing health behavior, concluded that video interventions vary widely in their ability to modify health behavior depending on the target to be influenced; the video interventions in that review appeared effective for breast self-examination, cancer screening, self-care, and medication adherence.
The effect of reproductive health education using the video method on adolescent attitudes about premarital sex
In this study, the majority of adolescents' attitudes at pretest were bad: 59 students (62.8%). Two adolescents experienced a decline in attitudes, 53 experienced an improvement, and 39 remained the same. The researchers assume that the environment and peers influence students with bad attitudes; with new and interesting information such as this video method, adolescents begin to analyze their attitudes and initiate change. A good attitude about premarital sex is consistent with the tasks of adolescent development: as Hurlock states, at this age adolescents begin to acquire and understand a set of values and an ethical system as a guide to behavior.
A bad attitude is a tendency to stay away from, avoid, hate or dislike certain objects, shaped by factors including personal experience, culture, other people who are considered important, mass media, educational or religious institutions, and emotional factors within the individual (Kusumastuti, 2010). This study shows that adolescents need more guidance to direct their attitudes for the better, in line with the tasks of adolescent development: at that age, hormonal changes occur in reproductive function, bringing the ability to form hypotheses and deal with the changes of adolescence, and adolescents begin to strengthen self-control on a scale of values, principles and philosophy of life (Pratiwi, 2012).
The effect of reproductive health education using the leaflet method on adolescent knowledge about premarital sex
The data from this study show one adolescent with a decreased level of knowledge, 67 with an increased level and 26 with an unchanged level. Nine students (9.6%) had a good level of knowledge before reproductive health education using the leaflet method, rising to 76 students (80.9%) afterwards. Three students (3.2%) had a low level of knowledge, and only 51.1% of adolescents knew that frequently meeting one's lover can encourage premarital sexual behavior. The researchers assume that a high intensity of meetings with a lover raises curiosity and the desire to try new things seen in social information media; frequent meetings foster feelings of love, so that when a teenage girl receives a request from her boyfriend she may comply without much thought. This is supported by research (Hasibuan, 2013) on the factors influencing premarital sex among young girls at SMAN 1 Pagai Utara, which found a significant relationship between pressure from boyfriends and the incidence of premarital sex. According to Freeman et al. (2009), knowledge can be changed with persuasion strategies, namely providing information through health education delivered by various methods, one of which is leaflets. Leaflet media allow respondents to read the material more clearly because they are individual (Fatmawaty, 2017). The language used in the leaflets was everyday language, making it easier for teenagers to understand. Freeman et al. (2009) suggest that the use of leaflets, if optimized, would benefit most health practices; leaflets are used as an information medium when the information conveyed concerns a more intimate subject, as in this study, where leaflets help promote early detection relating to the genitalia. Leaflets that can be read individually and are designed in pocket size make it easy for readers to re-read, so new knowledge is gained more easily.
The effect of reproductive health education using the leaflet method on adolescent attitudes about premarital sex
The data obtained from this study show two adolescents with decreased attitudes, 58 with improved attitudes, and 34 with unchanged attitudes. Poor attitudes dominated the pretest, with 67 students (71.3%). The researchers assume that respondents' attitudes at pretest were poorer because of a lack of education about premarital sex from both the school and the health agencies. Another factor that may explain poor pretest attitudes is the environment: living near a localization (red-light) area, a so-called vulnerable area, is one factor influencing adolescents' attitudes. According to Azwar (2013), mass media, as a means of communication in its various forms, has a major influence in forming people's opinions and beliefs. In conveying information, the mass media carry messages containing suggestions that can direct one's opinion. New information about something provides a basis for new cognitive thinking from which attitudes form; if it is strong enough, it gives a sufficient basis for judgment, and a confident attitude results. Information in the leaflet improved the habit of accessing media related to reproductive health and fostered positive attitudes regarding premarital sex (Damanik et al., 2020). This is also supported by Barik et al. (2019), who suggest that this form of media is more effective when combined with other media such as videos, telephone interactions and games.
The difference in the effect of the video method and the leaflet method on adolescent knowledge about premarital sex
There is no significant difference between the video method and the leaflet method of reproductive health education in their effect on adolescents' knowledge. However, the mean score for the leaflet method, 98.09, is higher than that for the video method, 90.91, suggesting that the leaflet method is somewhat more effective. The researchers assume that reproductive health education delivered by video, using devices such as an LCD projector and sound, is more attractive because information arrives both visually and through audio. The drawback is that only one LCD screen was available: adolescents seated in groups or at the back had difficulty reading the screen or hearing the audio, since the speaker was at the front. In the leaflet group, by contrast, each respondent received their own leaflet, making it very easy to read and understand the material. Both methods have been found equally effective in increasing adolescent knowledge about the dangers of smoking, although leaflets were more effective than video; still, the video group also showed an average increase. Sexual health education materials are designed to be as accessible as possible, for example by distributing leaflets or displaying them where patients can read them; this is the most common way to distribute educational material, leaving the decision to seek complete information to the individual patient. As one practice nurse put it, "Well, we have TV to play videos, but please note that most health promotion videos cover all aspects except sexual health. Obviously, there are some people who will be offended, so we also have to be careful of anything big". In contrast, Prawesti et al. (2018) found that health education using video had a greater impact on the development of maternal health literacy than a standard intervention using a brochure. This may be because the material and the respondents were adult, and viewing the video made it easier for them to grasp the message conveyed.
The difference in the effect of the video method and the leaflet method on adolescent attitudes about premarital sex
There is no difference between the video method and the leaflet method of reproductive health education in their effect on adolescents' attitudes. The mean score for the leaflet method is 93.00, while that for the video method is 96.00. The researchers assume that adolescents are closely attached to media delivered through electronic devices and are more interested in anything instantaneous; the video method can therefore influence adolescents' attitudes about premarital sexuality. A scoping review by Condran et al. (2017) supports the use of social media in sexual health promotion interventions, especially for promoting environmental and individual behavior change. The most significant type of social media is YouTube, which can deliver health education through audio and visuals. Essentially, health promotion aims to help individuals and populations achieve positive health outcomes, often through action at the institutional, community and policy levels.
CONCLUSION
Reproductive health education using the video method affects the knowledge and attitudes of adolescents about premarital sexuality at Private Vocational School Surabaya, as does education using the leaflet method. The two methods do not differ in their influence on adolescents' knowledge and attitudes. It is suggested that the school provide sexual education integrated with formal lessons, provide spiritual education according to students' beliefs, and procure information media such as leaflets, for example wall magazines. | 2021-11-05T15:16:07.868Z | 2021-09-15T00:00:00.000 | {
"year": 2021,
"sha1": "914a563a7dadb2c624d10165cf0f67e4bdb39b67",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.20473/pmnj.v7i2.27498",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4dd17e6f224c0fcb14db592a94df05f738e6a9d1",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": []
} |
55906429 | pes2o/s2orc | v3-fos-license | Monogenic Phosphate Balance Disorders
The last decade has seen several of the dominant and recessive forms of hypo- and hyperphosphatemic bone disease receive their molecular explanation. This has led to new insight into the pathophysiology of hypo- and hyperphosphatemic bone disease, as well as the understanding of a bone-kidney axis which operates integrated with, and in parallel to, the classical parathyroid-kidney axis in the regulation of the phosphorus content of the body. In addition, it has led to the recognition of a Janus face of some of the involved genes, showing both hyper- and hypofunction, dependent on the nature of the mutation. In this book chapter, we will present an update on the emerging insight into monogenic hypo- and hyperphosphatemic disorders.
Introduction
The last decade has seen several of the dominant and recessive forms of hypo- and hyperphosphatemic bone disease receive their molecular explanation. This has led to new insight into the pathophysiology of hypo- and hyperphosphatemic bone disease, as well as the understanding of a bone-kidney axis which operates integrated with, and in parallel to, the classical parathyroid-kidney axis in the regulation of the phosphorus content of the body. In addition, it has led to the recognition of a Janus face of some of the involved genes, showing both hyper- and hypofunction, dependent on the nature of the mutation. In this book chapter, we will present an update on the emerging insight into monogenic hypo- and hyperphosphatemic disorders.
Genetic mechanisms and pathophysiology
Hypophosphatemia may lead to bone or dental disease resulting from decreased mineralization (calcification) of bone or dental matrix (osteoid). The simultaneous blood calcium level also influences the degree of mineralization. Hypophosphatemia leads to rickets in children and to osteomalacia in adults. In many of the hypophosphatemic conditions there is also impairment of renal activation of vitamin D, further aggravating disease. The mineralization of teeth can also be affected, and there are clinical forms where bone involvement is minimal and the dental disorders dominate. Hyperphosphatemia may lead to increased mineralization of both bone and non-bone tissues (ectopic calcification) due to an increase in the body content of phosphorus. This results in tumoral calcinosis with calcification of muscles, skin and vessels. The monogenic forms affect the renal handling of phosphorus by various mechanisms resulting from inactivation or activation of the involved genes. With the advancement of genetic insight and the subsequent possibility to study subjects with mutations and a wide range of phenotypes, a broader phenotypic pattern is recognized. Consequently, we suggest the more appropriate terms monogenic hypophosphatemia and monogenic hyperphosphatemia for these disorders, and that the specific disorders should be classified according to the affected gene, e.g. PHEX-hypophosphatemia and FGF23-hyperphosphatemia (Table 1). We will now provide an overview of the genes directly implicated in monogenic phosphate balance disorders. Please refer to textbooks for a discussion of genes indirectly affecting phosphate balance (i.e. genes leading to defective parathyroid gland development or disrupted PTH receptor function).
PHEX
The PHEX (phosphate-regulating endopeptidase homolog, X-linked; MIM *300550) gene consists of 22 exons (Sabbagh, Boileau et al. 2003) and was positionally cloned in 1995 (HYP Consortium 1995). The gene encodes a transmembrane protein belonging to the family of type II integral membrane zinc-dependent endopeptidases. It is expressed in a wide variety of tissues including the kidney, with higher expression in mature osteoblasts and odontoblasts. The substrate of the gene product is not known, but the pathogenesis seems to involve phosphate-regulating humoral factors, phosphatonins, of which fibroblast growth factor-23 (FGF23) is central (Jonsson, Zahradnik et al. 2003; Juppner 2007; Bastepe and Juppner 2008).
(See section 2.11 for a discussion of the physiological and pathophysiological mechanisms involved.) The protein is also believed to be involved in bone and dentin mineralization. Both the whole-body and bone-specific (osteocalcin-promoted inactivation) knockout mouse models of PHEX, as well as the spontaneous Hyp mouse model, display increased bone production, increased levels of serum FGF23, decreased kidney membrane NPT2 and osteomalacia (Yuan, Takaiwa et al. 2008). Cell studies indicate mechanistic defects both during protein processing in the endoplasmic reticulum and cell membrane (Sabbagh, Boileau et al. 2001) and as abrogated catalytic activity (Sabbagh, Boileau et al. 2003). There are several mutations associated with PHEX-hypophosphatemia (see the PHEX mutation database: http://www.phexdb.mcgill.ca/), and most of the mutations are located in the region encoding the extracellular domain, but there are also examples of pathological mutations in the 5'UTR (Dixon, Christie et al. 1998) and 3'UTR (Ichikawa, Traxler et al. 2008) of the gene.
Fig. 1. PHEX gene structure and the corresponding encoded regions. Adapted from (Sabbagh, Boileau et al. 2003).
There is no clear genotype-phenotype correlation (Holm, Nelson et al. 2001). Familial mutations (showing co-segregation with disease in a pedigree) slightly outnumber de novo (sporadic) mutations reported in the literature (Holm, Nelson et al. 2001). The penetrance is high, although there are examples of non-penetrance (Gaucher, Walrant-Debray et al. 2009). The expressivity varies (Brame, White et al. 2004).
DMP1
The DMP1 (dentin matrix acidic phosphoprotein 1; MIM *600980) gene consists of 6 exons on chromosome 4q21 and was first implicated in a phosphate balance disorder in 2006 (Lorenz-Depiereux, Bastepe et al. 2006). DMP1 is highly expressed in osteocytes and is a member of the 'SIBLING' (small integrin-binding ligand N-linked glycoprotein) family of non-collagenous extracellular matrix proteins involved in bone mineralization (Huq, Cross et al. 2005). The DMP1 knockout model displays rickets and osteomalacia with isolated renal phosphate wasting associated with elevated FGF23 levels and normocalciuria (Feng, Ward et al. 2006). In humans, homozygous or compound heterozygous mutations in DMP1 lead to hypophosphatemic rickets with elevated FGF23, isolated phosphate wasting, and no evidence of hypercalciuria. The exact relation between DMP1 and FGF23 levels is not known, but in vitro studies have shown that vitamin D increases the expression of both (Farrow, Davis et al. 2009). Only a few mutations have been reported in the literature (Feng, Ward et al. 2006; Lorenz-Depiereux, Bastepe et al. 2006; Farrow, Davis et al. 2009; Koshida, Yamaguchi et al. 2010; Makitie, Pereira et al. 2010; Turan, Aydin et al. 2010), making DMP1 mutations a rare cause of hypophosphatemic rickets (Gaucher, Walrant-Debray et al. 2009).
GALNT3
The O-glycosylation of serine and threonine residues on many glycoproteins depends on enzymatic catalysis of the reaction UDP-GalNAc + polypeptide-(Ser/Thr)-OH → GalNAc-alpha-O-Ser/Thr-polypeptide + UDP. GalNAcT3 is one of 24 members of the UDP-GalNAc:polypeptide N-acetylgalactosaminyltransferase protein family involved in this process. GalNAcT3 is encoded by the GALNT3 gene (MIM *601756) on chromosome 2q24-q31, which contains 10 exons. GalNAcT3 is thought to protect FGF23 from proteolysis (Kato, Jeanneau et al. 2006) by O-glycosylation, and a deactivating mutation in GALNT3 will thus lead to increased breakdown of FGF23. Mutations in GALNT3 were the first to be associated with familial tumoral calcinosis (FTC) (Topaz, Shurman et al. 2004), and are also seen in the closely related hyperphosphatemic hyperostosis syndrome (HHS). These are the only diseases known to be caused by mutations in the family of UDP-GalNAc:polypeptide N-acetylgalactosaminyltransferases. Although the process of O-glycosylation is important in many tissues, mutations in GALNT3 lead to a very restricted phenotype with hyperphosphatemia, periarticular calcifications and hyperostosis. This is thought to be explained by functional redundancy within this protein family. In addition to the effects on bone and renal phosphate handling caused by altered FGF23 metabolism, mutations in GALNT3 are also thought to have a direct effect on the process of ectopic calcification in extraosseous tissues (Chefetz and Sprecher 2009).
FGF23
The FGF23 gene (MIM *605380) on chromosome 12 is composed of 3 exons and encodes a member of the fibroblast growth factor family. The protein product, FGF23, acts via its receptor FGFR1 (fibroblast growth factor receptor 1, see 2.5), but depends on the co-receptor Klotho (α-Klotho) to exert its functions (see below). Furthermore, FGF23 belongs to the FGF19 family, whose two other members, FGF19 and FGF21 (also binding to FGFR1), depend on β-Klotho to exert their functions, illustrating the role of a co-receptor in ensuring tissue specificity and function (Kurosu, Choi et al. 2007). FGF23 exerts its physiological effects on the kidney by downregulating the CYP27B1 gene, leading to a loss of the compensatory increase in 1,25(OH)2 vitamin D levels, and by inducing endocytosis of the type IIa and IIc Na/phosphate (Pi) cotransporters (Npt2a and Npt2c) from the renal proximal tubular brush border membrane. Heterozygous activating mutations in the RXXR cleavage-site motif of exon 3 of FGF23 lead to stabilization and decreased degradation of FGF23. The clinical phenotype is autosomal dominant hypophosphatemic rickets (Econs, McEnery et al. 1997; 2000). Homozygous inactivating missense mutations in FGF23 lead to hyperphosphatemic familial tumoral calcinosis, due to decreased renal excretion of phosphate and increased renal 1α-hydroxylation of vitamin D (Benet-Pages, Orlik et al. 2005; Ichikawa, Baujat et al. 2010).
FGFR1
The fibroblast growth factor receptor 1 gene FGFR1 (MIM *136315), located on chromosome 8p11, encodes a member of the FGFR (1-4) family of receptor tyrosine kinases. FGFR1-3 are implicated in skeletal development, and various mutations in the corresponding genes are responsible for a number of skeletal dysplasia syndromes (Passos-Bueno, Wilcox et al. 1999). There are several subclasses of FGFRs, depending on the number of immunoglobulin-like loops and splicing differences in the third loop. FGFR1c combines with Klotho (KL) to form the functional receptor for FGF23 (Urakawa, Yamazaki et al. 2006). Mutations in FGFR1c lead to constitutive activation of the receptor and subsequent downregulation of the expression of the sodium-phosphate co-transporters NaPi-IIa and NaPi-IIc, as well as downregulation of the CYP27B1 gene, leading to a loss of the compensatory increase in 1,25(OH)2 vitamin D levels (Shimada, Hasegawa et al. 2004).
KL
Klotho (KL) (MIM *604824) is located on chromosome 13q12, comprises 5 exons, and encodes the protein Klotho (also known as α-Klotho), which in mice is considered a hormone with anti-aging properties (Kurosu, Yamamoto et al. 2005). KL knockout mice go through a rapid aging process and have decreased insulin secretion and increased insulin sensitivity (Kuro-o, Matsumura et al. 1997), while overexpression of KL prolongs the life span of mice (Kurosu, Yamamoto et al. 2005). In addition, Klotho has been associated with disturbances of phosphate metabolism, as it is an obligate co-receptor for the binding of FGF23 to FGFR1c. In humans, there are two KL transcripts: one encoding a membrane-bound protein and one encoding a secreted protein. Human KL is expressed mainly in the kidney, and the secreted variant seems to dominate (Matsumura, Aizawa et al. 1998). Recent findings from mouse studies suggest that Klotho has endocrine, paracrine and autocrine effects independent of FGF23 (Hu, Shi et al. 2010).
Inactivating mutations will lead to familial hyperphosphatemic tumoral calcinosis, similar to the phenotypes seen in GALNT3-hyperphosphatemia and FGF23-hyperphosphatemia (Ichikawa, Imel et al. 2007). There is also one report of an activating translocation of the KL gene, leading to hypophosphatemic rickets with a phenotype similar to PHEX-hypophosphatemia but with additional distinctive dysmorphic features of the head (Brownstein, Adler et al. 2008).
SLC34A1
The solute carrier 34 (SLC34) gene family includes the three genes SLC34A1, SLC34A2 and SLC34A3, all encoding sodium/phosphate cotransporters. SLC34A2 encodes the intestinal NaPi-IIb and will not be further discussed. SLC34A1 and SLC34A3 encode the two renal sodium/phosphate cotransporters; the latter is described in section 2.8. The SLC34A1 (MIM *182309) gene is expressed in the renal proximal tubule and encodes the type IIa Na/Pi cotransporter (NaPi-IIa), which plays a central role in renal phosphate handling in various animal models. The expression of NaPi-IIa in the brush border membrane is regulated at the post-translational level, by endocytosis and lysosomal degradation or microtubular recruitment (Tenenhouse 2005). Both PTH and FGF23 lead to increased endocytosis of NaPi-IIa, and thus decreased reabsorption of Pi from filtered urine, whereas hypophosphatemia and 1,25-dihydroxyvitamin D stimulate phosphate reabsorption (Tenenhouse 2005). There also seems to be a direct regulating effect of dietary Pi on Na/Pi cotransport in proximal tubules, and the existence of an intestinal-renal axis for phosphate regulation has been proposed [review: (Biber, Hernando et al. 2009)]. NaPi-IIa knockout mice have hypophosphatemia, phosphaturia, and elevated 1,25-dihydroxyvitamin D with resulting hypercalcemia, hypercalciuria and nephrocalcinosis/nephrolithiasis (Beck, Karaplis et al. 1998). This phenotype resembles hereditary hypophosphatemic rickets with hypercalciuria (HHRH) seen in humans, which, interestingly, is not caused by mutations in SLC34A1 but rather by mutations in SLC34A3 (NaPi-IIc) (see 2.8). In man, a few cases have been described of heterozygous mutations in SLC34A1 leading to a syndrome of hypophosphatemia, osteoporosis and nephrolithiasis (Prie, Huart et al. 2002).
SLC34A3
The human SLC34A3 (MIM *609826) gene consists of 13 exons on chromosome 9q34, and homozygous mutations in this gene lead to hereditary hypophosphatemic rickets with hypercalciuria (HHRH) (Bergwitz, Roslin et al. 2006; Lorenz-Depiereux, Benet-Pages et al. 2006). The phenotype of HHRH resembles that of NaPi-IIa knockout mice, but the patients also display rickets or osteomalacia. In animal models, the type IIc Na/Pi cotransporter (NaPi-IIc) has been shown to play a more minor role in proximal tubular phosphate reabsorption than NaPi-IIa. The opposite might be the case in man (Amatschek, Haller et al. 2010).
SLC9A3R1
The SLC9A3R1 (MIM *604990) gene on chromosome 17 encodes the protein NHERF1 (sodium/hydrogen exchanger regulatory factor 1), which plays a part in maintaining the cytoskeleton in polarized cells with microvilli, such as renal tubular cells. Three different mutations in SLC9A3R1 have recently been identified in 7 subjects with hypophosphatemia due to phosphaturia, nephrolithiasis and osteoporosis (Karim, Gerard et al. 2008).
ENPP1
The ENPP1 (ectonucleotide pyrophosphatase/phosphodiesterase 1; MIM *173335) gene on chromosome 6q22-q23 comprises 23 exons and encodes a type II transmembrane glycoprotein ectoenzyme responsible for the generation of inorganic pyrophosphate (PPi). PPi is an inhibitor of hydroxyapatite crystal growth and also suppresses chondrogenesis. In mice, ENPP1 is expressed in plasma cells, hepatocytes, renal tubules, salivary duct epithelium, epididymis, capillary endothelium in the brain, and chondrocytes (Harahap and Goding 1988). In man, ENPP1 has been shown to be expressed in liver, cartilage and bone, and is thought to regulate physiological mineralization processes and pathological chondrocalcinosis (Huang, Rosenbach et al. 1994). Homozygous mutations in ENPP1 are known to cause generalized arterial calcification of infancy (GACI) (Rutsch, Vaingankar et al. 2001; Rutsch, Ruf et al. 2003). Recently, homozygous mutations in ENPP1 have been shown to cause autosomal recessive hypophosphatemic rickets (Levy-Litan, Hershkovitz et al. 2010). In some families, identical mutations cause GACI in some family members and hypophosphatemic rickets in others (Lorenz-Depiereux, Schnabel et al. 2010). Prolonged survival in GACI has been observed in subjects who have simultaneously displayed renal phosphate loss (Rutsch, Boyer et al. 2008). Mutations in ENPP1 have also been associated with susceptibility to insulin resistance and obesity (Goldfine, Maddux et al. 2008).
An integrated model for the physiological and pathophysiological mechanisms in the renal phosphate regulation
Figure 3 shows the integrated physiological and pathophysiological mechanisms in renal phosphate regulation. The parathyroid-renal axis has been the traditional model explaining how PTH stimulates the renal tubular cells to phosphaturia as a negative feedback response to elevated phosphate levels (Figure 3A). In this model, PTH acts via its receptor to block the sodium-phosphate co-transporters NaPi-IIa and NaPi-IIc, encoded by the SLC34A1 and SLC34A3 genes, respectively. In addition, PTH stimulates the CYP27B1 gene, leading to a compensatory increase in 1,25(OH)2 vitamin D levels as a negative feedback response to reduced serum levels of 1,25(OH)2 vitamin D and calcium. There is, however, also a PTH-independent pathway in which hormonal substances from bone, phosphatonins, stimulate the renal tubular cells to phosphaturia in a negative feedback response to elevated serum phosphate and 1,25(OH)2 vitamin D levels. Recent emerging insight has laid the foundation for this model of a bone-kidney axis (Quarles 2003), where fibroblast growth factor 23 (FGF23) seems to be the central phosphatonin inhibiting phosphate reabsorption and hence inducing phosphaturia (Figure 3B). In contrast to PTH, FGF23 inhibits the CYP27B1 gene, leading to an absent compensatory increase in 1,25(OH)2 vitamin D levels, recognized by clinicians as inappropriately normal 1,25(OH)2 vitamin D levels. In the normal state, the PHEX and DMP1 gene products seem to inhibit FGF23 production, whereas the GALNT3 gene product seems to stimulate FGF23 production. By interfering with the bone-kidney axis, increased FGF23 levels seem to play a central role in the pathogenesis of PHEX-hypophosphatemia (Jonsson, Zahradnik et al. 2003) (Figure 3C) and potentially also DMP1-hypophosphatemia (Lorenz-Depiereux, Bastepe et al. 2006; Turan, Aydin et al. 2010) and FGF23-hypophosphatemia (Imel, Hui et al. 2007), but the mechanisms are still poorly known (Strom and Juppner 2008). It is also poorly understood how increased FGF23 levels in FGF23-hyperphosphatemia and GALNT3-hyperphosphatemia explain the opposite condition of hyperphosphatemia (Topaz, Shurman et al. 2004; Benet-Pages, Orlik et al. 2005). A current model postulates that mutations in PHEX lead to increased FGF23 production by cancelling the PHEX-mediated inhibition of FGF23 production (Figure 3C). Both the parathyroid-renal axis and the bone-kidney axis seem to be negative feedback loops in which serum phosphate levels raised above a biological set value lead to phosphaturia. The two axes differ with respect to 1,25(OH)2 vitamin D: whereas low 1,25(OH)2 vitamin D levels stimulating 1,25(OH)2 vitamin D activation is the major regulation in the parathyroid-renal axis, high 1,25(OH)2 vitamin D levels inhibiting 1,25(OH)2 vitamin D activation is the major regulation in the bone-kidney axis. Recent work also points to interactions between these feedback loops, where FGF23 inhibits PTH, whereas PTH possibly stimulates FGF23 (Figure 3D). Mutations in genes encoding the sodium-phosphate co-transporters, such as SLC34A1, SLC34A3 and SLC9A3R1, lead to increased phosphaturia, but since 1,25(OH)2 vitamin D activation is unaffected, there is a normal compensatory increase in 1,25(OH)2 vitamin D levels. Whether gene mutations lead to hypophosphatemia or hyperphosphatemia depends on the location of the gene product in the pathways outlined above and on whether the mutation activates or inactivates the affected gene.
Diagnostic considerations
The diagnosis of monogenic hypo- or hyperphosphatemia requires the demonstration of a disturbed phosphate balance in patients in whom acquired causes of phosphate disturbance have been excluded. A family history of rickets, kidney stones, soft tissue calcification, bone deformities or recurrent fractures, as well as an indication of a monogenic inheritance pattern, is usually found, unless the patient represents a sporadic case. In the case of hypophosphatemia, there is typically low plasma phosphate, low renal tubular reabsorption of phosphate (%TRP) and low tubular maximum for phosphate per glomerular filtration rate (TmP/GFR), raised alkaline phosphatase, normal PTH, and inappropriately normal 25(OH) and 1,25(OH)2 vitamin D levels. Moreover, the urinary calcium excretion is normal, whereas X-rays may demonstrate rickets or osteomalacia. FGF23 levels are typically high, either due to overproduction or under-catabolism, and in children with rickets the combined evaluation of FGF23 and PTH directs the differential diagnosis toward impaired phosphate homeostasis (high FGF23 and normal PTH) or altered metabolism of vitamin D, calcium or magnesium (low FGF23 and high PTH) (Alon 2010). In the case of hyperphosphatemia, there is usually high plasma phosphate, inappropriately normal %TRP and TmP/GFR, low or normal PTH and normal renal function. In some cases, the clinical picture and inheritance pattern will suggest a specific genetic diagnosis; in addition, blood FGF23 levels and hypercalciuria may differentiate between the genetic disorders of phosphate balance, although the clinical role of blood FGF23 levels is at present not fully elucidated.
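For readers who want to compute these indices, the sketch below implements %TRP and the TmP/GFR estimate from paired serum and urine samples. It is a minimal illustration, assuming all four concentrations are expressed in the same units (e.g., mmol/L) and using the published algebraic approximation of the Walton-Bijvoet nomogram (the 0.86 cut-off and the correction factor come from that literature, not from this chapter); variable names are ours.

def trp(serum_p, serum_cr, urine_p, urine_cr):
    # Fractional tubular reabsorption of phosphate (0..1);
    # all four concentrations must share consistent units.
    return 1.0 - (urine_p * serum_cr) / (serum_p * urine_cr)

def tmp_gfr(serum_p, serum_cr, urine_p, urine_cr):
    # Tubular maximum for phosphate per GFR, in the units of serum_p.
    # Below TRP = 0.86 the relation is linear; above it, the algebraic
    # correction of the Walton-Bijvoet nomogram is applied.
    frac = trp(serum_p, serum_cr, urine_p, urine_cr)
    if frac <= 0.86:
        return frac * serum_p
    return (0.3 * frac / (1.0 - 0.8 * frac)) * serum_p

# Hypothetical values (mmol/L) illustrating renal phosphate wasting:
print(trp(0.7, 0.08, 10.0, 8.0))      # ~0.86, low-normal fraction
print(tmp_gfr(0.7, 0.08, 10.0, 8.0))  # ~0.6 mmol/L, a low TmP/GFR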
PHEX
PHEX-hypophosphatemia (X-linked dominant) is usually a progressive disorder with a typical onset at the age when the child starts to walk. The most common clinical manifestations include genu varum, radiological rickets, short stature, bone pain, dental abscesses and calcification of tendons, ligaments and joint capsules, with boys more severely affected than girls and a wide variation between families (Econs, Samsa et al. 1994; Carpenter 1997; Bastepe and Juppner 2008). Some patients may even have craniosynostosis and spinal stenosis. Many patients suffer from long-lasting dental problems, particularly recurrent spontaneous dental abscesses that occur in the absence of a history of trauma or dental decay. Histological findings include high pulp horns, globular dentin, and defects of dentin and enamel. The primary teeth are most commonly affected, as the mineralization process starts in utero. Permanent teeth develop after birth, and adequate treatment improves their development in some cases (Batra, Tejani et al. 2006). In children with rickets, a low serum phosphorus level combined with high serum alkaline phosphatase and normal serum calcium is typical (Carpenter 1997). Urinary leakage of phosphate is demonstrated by low %TRP and TmP/GFR, whereas urinary calcium is normal. The PTH levels are usually normal or slightly elevated, even before the onset of therapy. The 25(OH) vitamin D level is normal, and there is no compensatory increase in 1,25(OH)2 vitamin D levels due to defective renal activation of vitamin D and, hence, no hypercalciuria. The FGF23 levels are increased (Jonsson, Zahradnik et al. 2003), and since the lower extremities are more severely affected than other parts of the skeleton, radiographs of the knees and ankles will demonstrate the extent of rickets. The diagnosis of PHEX-hypophosphatemia is confirmed by genetic analysis.
DMP1
DMP1-hypophosphatemia (autosomal recessive) is usually a progressive disorder with a typical onset at the age when the child starts to walk. The condition is rarer than PHEX-hypophosphatemia but is phenotypically quite similar to PHEX- and FGF23-hypophosphatemia. There is no compensatory increase in 1,25(OH)2 vitamin D levels due to defective renal activation of vitamin D and hence no hypercalciuria. The circulating levels of FGF23 are increased (Feng, Ward et al. 2006). The degree of skeletal abnormality varies between families (Makitie, Pereira et al. 2010). Some patients also have dental involvement, with hypomineralization, enlarged pulp chambers, and thinning of the dentin and enamel layers, which can cause dental abscesses and loss of teeth (Koshida, Yamaguchi et al. 2010; Turan, Aydin et al. 2010).
GALNT3
GALNT3-hyperphosphatemia is the result of biallelic mutation of the GALNT3 gene and leads to typical tumoral calcinosis (TC) (Topaz, Shurman et al. 2004) or the hyperostosis-hyperphosphatemia syndrome (HHS). There are several mutations in the GALNT3 gene, and the same mutation can lead to TC in some patients and HHS in others (Ichikawa, Baujat et al. 2010). TC is characterized by ectopic calcifications in soft tissues and around large joints, recognized clinically as palpable masses and/or on radiography. Calcifications may also be found in the retina and in blood vessels, and as testicular microlithiasis, and there might be dental abnormalities. HHS is characterized by hyperostosis of long bones, seen radiographically as cortical hyperostosis, diaphysitis and periosteal apposition. The biochemical findings in TC and HHS are similar, with elevated serum phosphate levels and increased or normal 1,25(OH)2 vitamin D levels. The levels of serum calcium and parathyroid hormone are normal. Some authors suggest that TC and HHS are clinical variants of the same disease (Ichikawa, Baujat et al. 2010).
FGF23
FGF23-hypophosphatemia (autosomal dominant) shows a variable age at onset of disease. The expression of disease varies: some children may have a fracture tendency without skeletal deformities, whereas others may have only temporary renal phosphate loss (Econs and McEnery 1997). Tooth abscesses and tooth loss also occur (Imel, Hui et al. 2007). FGF23-hyperphosphatemia (autosomal recessive) shows typical tumoral calcinosis or, more rarely, the hyperostosis-hyperphosphatemia syndrome (Benet-Pages, Orlik et al. 2005).
FGFR1
FGFR1-hypophosphatemia is characterized by osteoglophonic dysplasia and can be associated with hypophosphatemia (Farrow, Davis et al. 2006). Clinical features are skeletal abnormalities leading to dwarfism, and facial abnormalities similar to achondroplasia. There is often failure of tooth eruption, and mandibular malformations. Patients may also have various degrees of craniosynostosis (White, Cabral et al. 2005).
KL
To date, only one case of KL-hypophosphatemia has been described in the literature (Brownstein, Adler et al. 2008). A 1-year-old girl suffered from poor linear growth and increasing head size. She had clinical and radiological signs of rickets, hypophosphatemia, renal phosphate wasting and elevated levels of parathyroid hormone and alkaline phosphatase. A balanced translocation between chromosomes 9 and 13 was detected (t(9;13)(q21.13;q13.1)). This translocation had led to upregulation of KL transcription. After a few years she developed dysmorphic features of the face, as well as an Arnold-Chiari 1 malformation (Brownstein, Adler et al. 2008). Dental involvement has not been described. KL-hyperphosphatemia has also been described in only one report (Ichikawa, Imel et al. 2007). A 13-year-old girl presented with severe calcifications in soft tissues and in the vasculature, including the dura and the carotid arteries. In addition to hyperphosphatemia and hypercalcemia, she presented with hyperparathyroidism and elevated levels of FGF23. She had no signs of premature aging, which is seen in KL knockout mice. Dental involvement has not been described.
SLC34A1 and SLC34A3
In SLC34A1- and SLC34A3-hypophosphatemia, there is hypophosphatemic rickets with hypercalciuria without other tubular defects (Tieder, Modai et al. 1985). The inheritance pattern is autosomal recessive. Since renal activation of vitamin D is normal (in contrast to PHEX-hypophosphatemia and DMP1-hypophosphatemia), hypophosphatemia leads to a normal compensatory increase in 1,25(OH)2 vitamin D levels and increased absorption of calcium and phosphate from the gut.
SLC9A3R1
A total of 7 cases of SLC9A3R1-hypophosphatemia (hypophosphatemia with nephrolithiasis/osteoporosis) have been described to date (Karim, Gerard et al. 2008). All patients were adults and had nephrolithiasis and/or bone demineralization combined with hypophosphatemia and hyperphosphaturia. 1,25(OH)2 vitamin D levels were either elevated or in the upper normal range. Dental involvement has not been described.
ENPP1
ENPP1-hypophosphatemia (autosomal recessive) has a variable age at onset and a variable phenotype, including generalized arterial calcification of infancy (GACI). There also seems to be phenotypic variation within the same family among affected subjects carrying the same mutation. Whereas the classic presentation is severe arterial calcification leading to death in infancy, some patients have renal phosphate wasting and hypophosphatemia. This phosphate loss seems to attenuate the tendency to arterial calcification and is associated with prolonged survival (Lorenz-Depiereux, Schnabel et al. 2010).
Management principles
4.1 Hypophosphatemia
Hypophosphatemic rickets in childhood is usually treated with elemental phosphorus at doses preferably between 30 and 60 (up to 100) mg per kg bodyweight per 24 hours, usually divided into 4-6 doses, whereas the deficient 1,25(OH)2 vitamin D production is treated with active vitamin D, e.g. alfacalcidol or calcitriol, in doses of 20 to 70 ng per kg bodyweight per 24 hours, usually divided into 2 doses. It should, however, be emphasized that the dosage ranges for both phosphate and active vitamin D are wide, depending on the severity of the disease, compliance and the occurrence of complications. In SLC34A1- and SLC34A3-hypophosphatemia, activation of vitamin D is normal and, consequently, there is no need for treatment with vitamin D. It is important to adjust the drug doses individually and bear in mind that insufficient doses of elemental phosphorus and vitamin D may fail to prevent or correct skeletal deformities (rickets, osteomalacia) and can lead to growth retardation. On the other hand, excessive doses may lead to nephrocalcinosis (high phosphate doses), as well as hypercalciuria and hypercalcemia (high vitamin D doses). Secondary (and even tertiary) hyperparathyroidism is seen in patients with insufficient doses of vitamin D or excessive doses of phosphorus. We recommend aiming at normal levels of PTH, which in severe cases may be achieved by adding the calcimimetic drug cinacalcet to the treatment (Raeder, Bjerknes et al. 2008). Close monitoring is necessary to balance the effects of phosphorus supplementation and active vitamin D. Growth, serum calcium, phosphorus, alkaline phosphatase and PTH, as well as the urinary calcium/creatinine ratio, should be determined every 3-6 months, and X-rays of the ankles, knees and wrists should be taken yearly. Renal ultrasound should be obtained yearly to assess nephrocalcinosis. Supplementary treatment with growth hormone is currently not recommended for the growth retardation caused by hypophosphatemia (Huiming and Chaomin 2005), but may be warranted in selected cases. Corrective osteotomies are seldom necessary in childhood, and should always be deferred until the rickets has healed. Future therapeutic possibilities may include direct targeting of blood FGF23 levels.
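To make the dosing arithmetic above concrete, the following sketch turns the quoted ranges into a per-dose plan from body weight. The ranges are those given in the text; the mid-range defaults and function names are illustrative assumptions only, and the snippet is not a prescribing tool.

def phosphorus_plan(weight_kg, mg_per_kg=45.0, doses_per_day=5):
    # Elemental phosphorus: 30-60 (up to 100) mg/kg/24 h,
    # divided into 4-6 doses (ranges quoted in the text).
    daily_mg = mg_per_kg * weight_kg
    return daily_mg, daily_mg / doses_per_day

def active_vitamin_d_plan(weight_kg, ng_per_kg=40.0, doses_per_day=2):
    # Active vitamin D (alfacalcidol or calcitriol): 20-70 ng/kg/24 h,
    # usually divided into 2 doses.
    daily_ng = ng_per_kg * weight_kg
    return daily_ng, daily_ng / doses_per_day

daily_p, dose_p = phosphorus_plan(20.0)          # e.g. a 20 kg child
daily_d, dose_d = active_vitamin_d_plan(20.0)
print(f"phosphorus: {daily_p:.0f} mg/day, {dose_p:.0f} mg per dose")
print(f"active vitamin D: {daily_d:.0f} ng/day, {dose_d:.0f} ng per dose")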
Hyperphosphatemia
Patients with hyperphosphatemia due to monogenic phosphate balance disorders, i.e. GALNT3-hyperphosphatemia, FGF23-hyperphosphatemia and KL-hyperphosphatemia, develop ectopic and vascular calcifications. Combined use of intestinal phosphate binders and the carbonic anhydrase inhibitor acetazolamide has been reported to lower serum phosphorus levels and reduce tumoral masses in some patients (Garringer, Fisher et al. 2006). However, other reports suggest that neither medical nor surgical treatment is effective in controlling ectopic calcifications in these conditions (Carmichael, Bynum et al. 2009). Future therapeutic possibilities may include direct targeting of blood FGF23 levels.
Genetic diagnostics and predictive testing
Identification of a specific mutation has important therapeutic and prognostic implications and allows tailored follow-up, as outlined above. Distinction between PHEX-hypophosphatemia and DMP1-hypophosphatemia can be made clinically based on the inheritance pattern, but in some cases the inheritance pattern is ambiguous (Figure 4) and a genetic test will resolve this ambiguity.
Fig. 4. An example of an ambiguous inheritance pattern. Note that in this case, the pattern is compatible both with an X-linked disorder (i.e. PHEX-hypophosphatemia) and an AR disorder (i.e. DMP1-hypophosphatemia).
Monogenic phosphate balance disorders warrant genetic counseling because of the known inheritance patterns and the high penetrance. This also applies to novel gene variants, where it is necessary to establish evidence for causality based on co-segregation studies and prediction tools (such as PolyPhen, http://genetics.bwh.harvard.edu/pph/). Predictive genetic testing is less straightforward, and the legal regulations vary between countries. Communicating genetic information can be difficult, and it is important to take into account how well the individual understands genetics in general, the disorder itself, and the consequences of potentially diagnosing other family members. The basic fact that there is a 25% or 50% probability for a child to carry the family's mutation should be conveyed to the parents. In addition, the probability of developing the disorder in the presence of a mutation (i.e. the penetrance) is not always 100%. The variable, and in some cases unpredictable, age of onset of some of the disorders should also be discussed. As knowledge of the clinical spectrum of mutations increases, newly expected manifestations need to be discussed with the patient. A system of follow-up is required for children without a phenotype but with affected family members, where parents still request that their child be tested. This follow-up may include periodic testing for hypophosphatemia, with a frequency dependent on age and the suspected condition.
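The recurrence risks mentioned above follow directly from the Mendelian pattern of each disorder; the sketch below encodes the two textbook situations discussed in this chapter (two carrier parents for an autosomal recessive disorder; an affected mother for X-linked dominant PHEX-hypophosphatemia). Treating penetrance as a simple multiplier is a simplifying assumption made here.

def offspring_risk(pattern, penetrance=1.0):
    # "AR": both parents heterozygous carriers -> 25% of children
    #       inherit two mutant alleles.
    # "XLD": affected heterozygous mother -> 50% of children
    #        inherit the mutation.
    base = {"AR": 0.25, "XLD": 0.50}[pattern]
    return base * penetrance

print(offspring_risk("AR"))         # 0.25, e.g. DMP1-hypophosphatemia
print(offspring_risk("XLD", 0.9))   # 0.45, assuming 90% penetrance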
Research perspectives
We have established a national database of patients with hypophosphatemic bone disorders in order to study the phenotype-genotype correlation in this disease and to be able to explore novel pathophysiological pathways based on insight obtained from studies of families with no previously known genetic cause of monogenic hypophosphatemic bone disorder. We believe that a new classification of disease based on genetic etiology instead of clinical criteria may facilitate the finding of new phenotypes, since it will facilitate the study of unobserved phenotypes, both in the patients and in their presumably unaffected relatives carrying mutations. In addition, it is possible that emerging new treatment options may vary based on the genetic diagnosis, which warrants studies of associations between gene variants and therapeutic effects. Future studies of monogenic phosphate balance disorders will probably continue to include genome-wide studies of families with genetically unexplained phosphate balance disorders. Animal and cell studies will probably also continue to contribute to the understanding of disease mechanisms; in particular, the use of induced pluripotent stem (iPS) cells seems to be a promising new tool in mechanistic and therapeutic studies (Rosenzweig 2010), as does the use of small-molecule screens in the search for new therapeutic options in monogenic disease (Shaw, Blodgett et al. 2011).
Conclusion
As we have discussed in this book chapter, several of the dominant and recessive forms of hypo- and hyperphosphatemic bone disease have received their molecular explanation, leading to new insight into the pathophysiology of hypo- and hyperphosphatemic bone disease. The major advancement in pathophysiological understanding has come from the understanding of a bone-kidney axis, where the central bone phosphatonin FGF23 acts on FGFR1 receptors in the kidneys to promote phosphaturia, and from the understanding of all the factors converging on this axis. In fact, this axis ties together the known monogenic forms of renal phosphate disorders. In addition, the understanding of the genetics and pathophysiology of these disorders has led to the recognition of the two faces of some of the involved genes, showing both hyper- and hypofunction dependent on the nature of the mutation, which is in particular the case for mutations affecting the KL and FGF23 genes. We recommend the use of a genetic-oriented classification instead of the traditional disease-oriented classification, since we believe that this will facilitate a broader understanding of the phenotype of monogenic phosphate balance disorders. Whereas increased molecular understanding has led to more precise diagnosis, it has not yet led to new established treatment. We believe, however, that the molecular understanding will indeed facilitate the development of new treatment options, aided by powerful tools including iPS cells and small-molecule screens.
Fig. 3. Physiological and pathophysiological conditions in phosphate regulation. For the sake of clarity, only gene names are depicted, not the corresponding gene products. Adapted from (Bastepe and Juppner 2008; Strom and Juppner 2008). | 2018-12-07T06:15:49.471Z | 2011-11-30T00:00:00.000 | {
"year": 2011,
"sha1": "1e0537a2b4a130235d2d7f74351bf82fa4641e11",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/24149",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1e0537a2b4a130235d2d7f74351bf82fa4641e11",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
196831403 | pes2o/s2orc | v3-fos-license | Flat Semimodules & von Neumann Regular Semirings
Flat modules play an important role in the study of the category of modules over rings and in the characterization of some classes of rings. We study the e-flatness for semimodules introduced by the first author using his new notion of exact sequences of semimodules and its relationships with other notions of flatness for semimodules over semirings. We also prove that a subtractive semiring over which every right (left) semimodule is e-flat is a von Neumann regular semiring.
Introduction
Semirings are, roughly, rings not necessarily with subtraction. They generalize both rings and distributive bounded lattices and have, along with their semimodules many applications in Computer Science and Mathematics (e.g., [HW1998], [Gla2002], [LM2005]). Many applications can be found in Golan's book [Gol1999], which is our main reference on this topic.
A systematic study of semimodules over semirings was carried out by M. Takahashi in a series of papers between 1981 and 1990. However, he defined two main notions in a way that turned out to be not natural. Takahashi's tensor products [Tak1982b] did not satisfy the expected Universal Property. On the other hand, Takahashi's exact sequences of semimodules [Tak1981] were defined as if this category were exact, which is not the case (in general).
By the beginning of the 21st century, several researchers began to use a more natural notion of tensor products of semimodules (cf., [Kat2004]) with which the category of semimodules over a commutative semiring is monoidal rather than semimonoidal [Abu2013]. On the other hand, several notions of exact sequences were introduced (cf., [Pat2003]), each of which with advantages and disadvantages. One of the most recent notions is due to Abuhlail [Abu2014] and is based on an intensive study of the nature of the category of semimodules over a semiring.
In addition to the categorical notions of flat semimodules over a semiring, several other notions have been considered in the literature, e.g., the so-called m-flat semimodules [Alt2004] (called mono-flat in [Kat2004]). One reason for the interest in such notions is the phenomenon that a commutative semiring all of whose semimodules are flat is a von Neumann regular ring [Kat2004, Theorem 2.11]. Using a new notion of exact sequences of semimodules over a semiring, Abuhlail introduced ([Abu2014-SF]) a homological notion of exactly flat semimodules, which we call e-flat for short, requiring that an appropriate ⊗ functor preserves short exact sequences.
The paper is divided into three sections.
In Section 1, we collect the basic definitions, examples and preliminaries used in this paper. Among others, we include the definitions and basic properties of exact sequences as defined by Abuhlail [Abu2014].
In Section 2, we investigate the e-flat semimodules. A flat semimodule is one which is the direct colimit of finitely presented semimodules [Abu2014-SF]. It was proved by Abuhlail [Abu2014-SF, Theorem 3.6] that flat left S-semimodules are e-flat. We prove in Lemma 2.13 and Proposition 2.14 that the class of e-flat left S-semimodules is closed under retracts and direct sums.
In Section 3, we study von Neumann regular semirings. In Theorem 3.11, we show that if S is a (left and right) subtractive semiring over which every right semimodule is S-e-flat, then S is a von Neumann regular semiring. Conversely, we prove that if S is von Neumann regular, then every normally S-generated right S-semimodule is S-m-flat.
Preliminaries
In this section, we provide the basic definitions and preliminaries used in this work. Any notions that are not defined can be found in our main reference [Gol1999]. We refer to [Wis1991] for the foundations of Module and Ring Theory.
1.1. [Gol1999] A semiring is a quintuple (S, +, 0, ·, 1) such that (S, +, 0) is a commutative monoid, (S, ·, 1) is a monoid, multiplication distributes over addition from both sides, and 0 · s = 0 = s · 0 for every s ∈ S. If, moreover, the monoid (S, ·, 1) is commutative, then we say that S is a commutative semiring. We say that S is additively idempotent, if s + s = s for every s ∈ S.
• Every ring is a semiring.
• Let R be any ring. The set I = (Ideal(R), +, 0, ·, R) of (two-sided) ideals of R is a semiring.
• M n (S), the set of all n × n matrices over a semiring S, is a semiring.
• The log algebra (ℝ ∪ {−∞, ∞}, ⊕, ∞, +, 0) is a semiring, where x ⊕ y = −ln(e^(−x) + e^(−y)) (see the numerical sketch after 1.3 below).
1.3. [Gol1999] Let S and T be semirings. The categories S SM of left S-semimodules with arrows the S-linear maps, SM T of right T-semimodules with arrows the T-linear maps, and S SM T of (S, T)-bisemimodules are defined in the usual way (as for modules and bimodules over rings). We write L ≤ S M to indicate that L is an S-subsemimodule of the left (right) S-semimodule M.
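Returning to the log algebra in the example above: since it is the least familiar of these examples, a small numerical sketch may help. Assuming the definition of ⊕ as reconstructed above, the Python snippet below spot-checks associativity, distributivity and the neutral elements on sample values.

import math

INF = float("inf")

def oplus(x, y):
    # Addition of the log algebra: x (+) y = -ln(e^(-x) + e^(-y)).
    # INF is the zero element, since e^(-INF) = 0.
    if x == INF:
        return y
    if y == INF:
        return x
    m = min(x, y)  # factor out the dominant term for numerical stability
    return m - math.log(1.0 + math.exp(m - max(x, y)))

def otimes(x, y):
    # Multiplication of the log algebra is ordinary +, with identity 0.
    return x + y

a, b, c = 1.0, 2.5, -0.5
assert abs(oplus(a, oplus(b, c)) - oplus(oplus(a, b), c)) < 1e-12
assert abs(otimes(a, oplus(b, c)) - oplus(otimes(a, b), otimes(a, c))) < 1e-12
assert oplus(a, INF) == a and otimes(a, 0.0) == a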
Example 1.4. The category of Z⁺-semimodules is nothing but the category of commutative monoids.
Let M be a left S-semimodule.
(1) The subtractive closure of L ≤ S M is defined as L̄ = {m ∈ M | m + l = l′ for some l, l′ ∈ L}. We say that L is subtractive if L = L̄. The left S-semimodule M is a subtractive semimodule, if every S-subsemimodule L ≤ S M is subtractive.
(2) The set of cancellative elements of M is defined as K⁺(M) = {m ∈ M | m + m₁ = m + m₂ implies m₁ = m₂ for all m₁, m₂ ∈ M}. We say that M is a cancellative semimodule, if K⁺(M) = M.
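Both notions can be computed mechanically in a small finite example. The sketch below (our illustration, not from the paper) uses the Z⁺-semimodule M = {0, 1, 2, 3} with truncated addition min(a + b, 3); it computes the subtractive closure of L = {0, 3} and the set K⁺(M), showing that L is not subtractive and that M is far from cancellative.

from itertools import product

M = range(4)

def add(a, b):
    # truncated addition: a commutative monoid, i.e. a Z+-semimodule
    return min(a + b, 3)

def subtractive_closure(L):
    # L-bar = { m in M : m + l = l' for some l, l' in L }
    return {m for m in M if any(add(m, l) == l2 for l, l2 in product(L, L))}

def cancellative_elements():
    # K+(M) = { m in M : m + x = m + y implies x = y }
    return {m for m in M
            if all(x == y for x in M for y in M if add(m, x) == add(m, y))}

print(subtractive_closure({0, 3}))  # {0, 1, 2, 3}: L = {0, 3} is not subtractive
print(cancellative_elements())      # {0}: only 0 is cancellative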
Let f : M → N be an S-linear map of left S-semimodules. Following [Abu2014], we say that f is: k-normal, if whenever f(m) = f(m′) for some m, m′ ∈ M, we have m + k = m′ + k′ for some k, k′ ∈ Ker(f); i-normal, if f(M) = Im(f), where Im(f) := {n ∈ N | n + f(m) = f(m′) for some m, m′ ∈ M}; and normal, if f is both k-normal and i-normal.
Remark 1.11. Among others, Takahashi ([Tak1981]) and Golan [Gol1999] called k-normal (resp., i-normal, normal) S-linear maps k-regular (resp., i-regular, regular) morphisms. Our terminology is consistent with Category Theory noting that the normal epimorphisms are exactly the normal surjective S-linear maps, and the normal monomorphisms are exactly the normal injective S-linear maps (see [Abu2014]).
The following technical lemma is easy to prove.
Lemma 1.12. Let f : A → B and g : B → C be S-linear maps.
(1) Let g be injective.
(a) f is k-normal if and only if g • f is k-normal.
(c) Assume that f is k-normal. Then g is k-normal (normal) if and only if g • f is knormal (normal).
The proof of the following lemma is straightforward: (2) A morphism ϕ : L −→ M of left S-semimodules is normal (resp. k-normal, i-normal) if and only if id F ⊗ S ϕ : F ⊗ S L −→ F ⊗ S M is normal (resp. k-normal, i-normal) for every non-zero free right S-semimodule F.
(3) If P S is projective and ϕ : L → M is a normal (resp. k-normal, i-normal) morphism of left S-semimodules, then id P ⊗ S ϕ : P ⊗ S L → P ⊗ S M is normal (resp. k-normal, i-normal).
There are several notions of exactness for sequences of semimodules. In this paper, we use the relatively new notion introduced by Abuhlail:
We call a (possibly infinite) sequence of S-semimodules
chain complex if f j+1 • f j = 0 for every j; exact (resp., proper-exact, semi-exact, quasi-exact) if each partial sequence with three terms M j → M j+1 → M j+2 is exact (resp., proper-exact, semi-exact, quasi-exact). A short exact sequence (or a Takahashi extension [Tak1982b]) of S-semimodules is an exact sequence of the form 0 → L →^f M →^g N → 0. The following examples show some of the advantages of the new definition of exact sequences over the old ones:
(2) L ≃ Ker(g) and N ≃ M/f(L); (3) f is injective, f(L) = Ker(g), g is surjective and (k-)normal.
In this case, f and g are normal morphisms.
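Since the four variants of exactness are used repeatedly below without being restated, it may help to record them side by side for a sequence L →^f M →^g N; the formulations in this LaTeX fragment are our reading of [Abu2014], using the subtractive closure (bar) notation introduced above.

% Exactness notions for L --f--> M --g--> N, following [Abu2014]:
\begin{align*}
\text{proper-exact} &\iff f(L) = \operatorname{Ker}(g);\\
\text{semi-exact}   &\iff \overline{f(L)} = \operatorname{Ker}(g);\\
\text{exact}        &\iff f(L) = \operatorname{Ker}(g) \text{ and } g \text{ is } k\text{-normal};\\
\text{quasi-exact}  &\iff \overline{f(L)} = \operatorname{Ker}(g) \text{ and } g \text{ is } k\text{-normal}.
\end{align*}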
Remark 1.19. An S-linear map is a monomorphism if and only if it is injective. Every surjective S-linear map is an epimorphism. The converse is not true in general.
Lemma 1.20. Consider a commutative diagram of left S-semimodules with semi-exact rows A′ →^i A →^p A″ and B′ →^j B →^q B″ and vertical maps f : A′ → B′ and g : A → B, where p is normal, so that there is a unique S-linear map h : A″ → B″ with h • p = q • g.
(1) If, moreover, q is a normal epimorphism, f is surjective and g is injective (an isomorphism), then h is injective (an isomorphism).
(2) If, moreover, A and B are cancellative and j, f and h are injective, then g is injective.
Proof. Since p is normal, the existence and uniqueness of h follows directly from the Universal Property of Cokernels. However, we give an elementary proof that h is well-defined using diagram chasing. Let . It follows that Thus h is well defined and h • p = q • g by the definition of h. Clearly, h is unique.
(2) Suppose that g(a₁) = g(a₂) for some a₁, a₂ ∈ A. It follows that h(p(a₁)) = q(g(a₁)) = q(g(a₂)) = h(p(a₂)), whence p(a₁) = p(a₂) (h is injective, by assumption). Since the first row is semi-exact, there exist y₁, y₂ ∈ Im(i) = Ker(p) such that a₁ + y₁ = a₂ + y₂. Let w₁, w̃₁, w₂, w̃₂ ∈ A′ be such that y₁ + i(w₁) = i(w̃₁) and y₂ + i(w₂) = i(w̃₂). It follows that a₁ + i(w̃₁) + i(w₂) = a₁ + y₁ + i(w₁) + i(w₂) = a₂ + y₂ + i(w₂) + i(w₁) = a₂ + i(w̃₂) + i(w₁). Consequently, applying g and using g • i = j • f together with g(a₁) = g(a₂), we have (j • f)(w̃₁ + w₂) = (j • f)(w̃₂ + w₁). Since B is cancellative and both f and j are injective, we conclude that w̃₁ + w₂ = w̃₂ + w₁. Hence a₁ + i(w̃₁ + w₂) = a₂ + i(w̃₂ + w₁) = a₂ + i(w̃₁ + w₂).
Since A is cancellative, we conclude a₁ = a₂.
Proposition 1.22. Let (F : C → D, G : D → C) be an adjoint pair of covariant functors. (1) F preserves all colimits which turn out to exist in C. (2) G preserves all limits which turn out to exist in D.
Corollary 1.23. Let S, T be semirings and T F S a (T, S)-bisemimodule.
Proof. The proof can be obtained as a direct consequence of Proposition 1.22 and the fact that (F ⊗ S −, Hom T (F, −)) is an adjoint pair of covariant functors [KN2011].
Proposition 1.24. Let G be a (T, S)-bisemimodule, let L →^f M →^g N (6) be a sequence of left S-semimodules and consider the sequence of left T-semimodules G ⊗ S L → G ⊗ S M → G ⊗ S N. (7)
(2) If (6) is semi-exact and g is normal, then (7) is semi-exact and G ⊗ g is normal.
(3) If (6) is exact and G ⊗ S f is i-normal, then (7) is exact.
Proof. The following implications are obvious: (1) Assume that g is normal and consider the exact sequence of S-semimodules 0 → Ker(g) →^ι M →^g N → 0. Then N ≃ Coker(ι). By Corollary 1.23 (1), G ⊗ S − preserves cokernels and so G ⊗ g = coker(G ⊗ ι), whence normal.
(3) This follows directly from (2) and the assumption on G ⊗ f.
Flat Semimodules
The notion of exactly flat semimodules was introduced by Abuhlail [Abu2014-SF, 3.3] where it was called normally flat. The terminology e-flat was first used in [AIKN2018].
2.1. Let F S be a right S-semimodule. Following Abuhlail [Abu2014-SF], we say that F S is a flat right S-semimodule, if F is the directed colimit of finitely presented projective right S-semimodules.
Let 0 → L →^f M →^g N → 0 (8) be a short exact sequence of left S-semimodules. We say that F S is normally M-flat if the induced sequence 0 → F ⊗ S L → F ⊗ S M → F ⊗ S N → 0 (9)
of commutative monoids is exact. We say that F S is e-flat, iff the covariant functor F ⊗ S − : S SM −→ Z + SM preserves short exact sequences.
Remark 2.4. The prefix in "m-flat" stems from mono-flat semimodules introduced by Katsov [Kat2004], and is different from that of k-flat semimodules in the sense of Al-Thani [Alt2004], since the tensor product we adopt here is in the sense of Katsov which is different from that in the sense of Al-Thani (see [Abu2013] for more details).
Proposition 2.5. Let F be a right S-semimodule and M a left S-semimodule.
(1) F S is normally M-flat if and only if for every short exact sequence of the form (8), Sequence (9) is exact.
(2) F S is M-i-flat if and only if for every short exact sequence of the form (8), Sequence (9) is semi-exact, F ⊗ f is k-normal and F ⊗ g is normal.
(3) F S is M-m-flat if and only if for every semi-exact sequence of the form (8) in which f is k-normal and g is normal, Sequence (9) is semi-exact, F ⊗ f is k-normal and F ⊗ g is normal.
Proof.
(=⇒) Notice that L = Ker(g) is a subtractive S-subsemimodule of M. Since F S is M-i-flat (normally M-flat), we know that F ⊗ f is a (normal) monomorphism. It follows by Proposition 1.24 (2) (and (3)) that (9) is semi-exact (exact) and F ⊗ g is a normal epimorphism.
(⇐=) Let L ≤ S M be a subtractive S-subsemimodule. Then 0 → L →^ι M →^{π L} M/L → 0 (10) is a short exact sequence of left S-semimodules, where ι is the canonical injection and π L : M → M/L is the canonical projection. By our assumptions, the induced sequence of commutative monoids 0 → F ⊗ S L → F ⊗ S M → F ⊗ S (M/L) → 0 (11) is semi-exact (exact) and F ⊗ ι is k-normal, whence a (normal) monomorphism.
(3) (=⇒) Since F S is M-m-flat, we know that F ⊗ f is a monomorphism, whence k-normal. Moreover, it follows by Proposition 1.24 (2) that (9) is semi-exact and F ⊗ g is a normal epimorphism.
(⇐=) Let L ≤ S M be an S-subsemimodule. Then (10) is a semi-exact sequence of left Ssemimodules in which ι is k-normal and π L is normal. By our assumption, Sequence (11) is semi-exact and F ⊗ ι is k-normal, whence F ⊗ f is injective.
Proposition 2.6. Let L →^f M →^g N (12) be a sequence of left S-semimodules, F a right S-semimodule, and consider the sequence of commutative monoids F ⊗ S L → F ⊗ S M → F ⊗ S N. (13)
(1) If (12) is exact with g normal and F S is e-flat, then (13) is exact and F ⊗ g is normal.
(2) If (12) is exact with g normal and F S is i-flat, then (13) is semi-exact and F ⊗ g is k-normal.
(3) If (12) is exact and F S is m-flat, then (13) is semi-exact and F ⊗ g is k-normal.
Proof. By Corollary 1.18, we have a short exact sequence of left S-semimodules 0 → Ker(g) →^ι M →^π M/Ker(g) → 0, where ι and π are the canonical S-linear maps. The map g factors as g = ḡ • π through the canonical projection π. Applying the covariant functor F ⊗ S −, we get the sequence (16) and the corresponding commutative diagram.
(1) Let F S be e-flat and g = ḡ • π be normal. Then ḡ is a normal monomorphism by Lemma 1.12 (2-b), F ⊗ ḡ is a normal monomorphism and Sequence (16) is exact.
Step I: Since F ⊗ ḡ is injective and F ⊗ π is normal (by Proposition 1.24 (1)), it follows by Lemma 1.12 that F ⊗ g = (F ⊗ ḡ) • (F ⊗ π) is k-normal.
Step II: Since F S is e-flat, F ⊗ ḡ is a normal monomorphism. Moreover, since F ⊗ π is a normal epimorphism, it follows by Lemma 1.12 (1-c or 2-c) that F ⊗ g = (F ⊗ ḡ) • (F ⊗ π) is normal.
(3) Let F S be m-flat. Since g is k-normal, ḡ is injective, whence F ⊗ ḡ is a monomorphism and, as shown in (1), F ⊗ g is k-normal. As clarified in (2), im(F ⊗ f ) = Ker(F ⊗ g).
Theorem 2.7. Let M be a left S-semimodule. The following are equivalent for a right S-semimodule F :
(1) F S is normally M-flat;
(2) F S is M-e-flat;
(3) For every exact sequence of left S-semimodules (12) with g normal, the induced sequence of commutative monoids (13) is exact and F ⊗ g is normal.
Proof.
(3) For every exact sequence of left S-semimodules (12) with g normal, the induced sequence of commutative monoids (13) is semi-exact and F ⊗ g is k-normal.
Proof.
(1) ⇒ (3): This follows by Proposition 2.6 (2).
(2) For every semi-exact sequence of the form (8) in which f is k-normal and g is normal, Sequence (9) is semi-exact, F ⊗ f is k-normal and F ⊗ g is normal.
Corollary 2.10. Let S and T be semirings, F a (T, S)-bisemimodule and F ′ a right T -semimodule. If F S is e-flat (m-flat) and F ′ T is e-flat (m-flat), then (F ′ ⊗ T F) S is normally flat (m-flat).
Proof. Let F S be e-flat (m-flat) and F ′ T be e-flat (m-flat). By our assumptions and Proposition 2.5, the two functors F ′ ⊗ T − and F ⊗ S − preserve the corresponding exact sequences.
(2) Any retract of an i-flat (resp. e-flat, m-flat) right S-semimodule is i-flat (resp. e-flat, m-flat).
Proof. We only need to prove (1) for relative i-flatness (resp. relative e-flatness); the proof for relative m-flatness is similar.
Let M be a left S-semimodule, U ≤ S M a subtractive subsemimodule, F S an M-e-flat right S-semimodule and F a retract of F. Then there exist S-linear maps
Proposition 2.14. Let {F λ } Λ be a family of right S-semimodules.
we conclude that
(2) If F is M-m-flat (resp. M-i-flat), then F is N-m-flat (resp. N-i-flat).
Proof.
(1) Let F S be M-i-flat (resp. M-e-flat) and U ≤ S L be a subtractive S-subsemimodule.
Applying F ⊗ S − to Diagram (5) yields the following commutative diagram of commutative monoids.
(2) Assume that F S is M-m-flat (resp. M-i-flat), so that F ⊗ g ′ is injective. Since the rows are semi-exact, F ⊗ ι ′ is surjective and F ⊗ g is a normal epimorphism (by Proposition 1.24), it follows by Lemma 1.20 (1) that F ⊗ ι is injective.
Lemma 2.16. Let F be a right S-semimodule.
(1) Let M be a left S-semimodule. Then F is M-m-flat if and only if for every finitely generated S-subsemimodule and exact sequence
(2) Let L and N be cancellative left S-semimodules. If F is cancellative, L-m-flat and N-m-flat, then F is L ⊕ N-m-flat.
(3) Let {M λ } Λ be a collection of cancellative left S-semimodules. If F is cancellative and M λ -m-flat for every λ ∈ Λ, then F is ⊕ Λ M λ -m-flat.
Proof.
and consider K ≤ S M generated by {u 1 , · · · , u m , u ′ 1 , · · · , u ′ n }.
(2) Let U ≤ S L ⊕ N and consider the short exact sequence of cancellative left S-semimodules. Consider the pullback (P, λ ′ , ι ′ ) of λ : U ֒→ L ⊕ N and ι : L ֒→ L ⊕ N given by the diagram of cancellative S-semimodules. Applying F ⊗ S − to Diagram (20) yields the following diagram of cancellative commutative monoids, in which the second row is exact. By Proposition 1.24, the first row is semi-exact and F ⊗ π ′ is a normal epimorphism. Since F is L-m-flat and N-m-flat, both F ⊗ ι ′ and F ⊗ h are injective. It follows by Lemma 1.20 that F ⊗ λ is injective.
(3) In light of (1), we can assume that U is finitely generated, whence contained in the direct sum of finitely many of the M λ . So, we are done by (2).
Von Neumann Regular Semirings
In this section, we study the so-called von Neumann regular semirings that are not necessarily rings.
Definition 3.1. A semiring S is a von Neumann regular semiring if for every a ∈ S there exists some s ∈ S such that a = asa.
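A quick sanity check, using a standard example that is not taken from this text: every multiplicatively idempotent element is regular over itself, since

\[ a = a \cdot a \;\Longrightarrow\; a = a \cdot a \cdot a \quad (\text{take } s = a), \]

so, in particular, the Boolean semiring B = \{0, 1\} with 1 + 1 = 1 is a von Neumann regular semiring that is not a ring.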
Assuming all semimodules of a given commutative semiring S to be (mono-)flat forces the semiring to be a von Neumann regular ring (cf., [Kat2004, Theorem 2.11]). This suggests other notions of flatness, e.g. e-flatness and i-flatness.
Definition 3.2. [Gol1999, page 71] Let S be a semiring. We say that S is a left subtractive semiring (right subtractive semiring) if every left (right) ideal of S is subtractive. We say that S is a subtractive semiring if S is both left and right subtractive.
Homological Lemmata
The proofs of the following lemmata proceed by diagram chasing, a well-known tool, with appropriate modifications of the classical proofs, which can be found in standard books of Homological Algebra (cf., [Rot2009, Proposition 2.70, Corollary 3.59, Proposition 3.60]).
Definition 3.6. We say that a left S-semimodule M is normally S-generated, if there exists a normal epimorphism S (Λ) π −→ M −→ 0. We say that S S is a normal generator iff every left S-semimodule is normally S-generated.
Lemma 3.4. Let A be a right S-semimodule and consider for every left ideal I ≤ S S the canonical surjective map of commutative monoids θ I : A ⊗ S I −→ AI.
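On elements, θ I is presumably the usual evaluation map (its elementwise form is stated here as an assumption, consistent with Proposition 3.7 (2) below):

\[ \theta_I : A \otimes_S I \longrightarrow AI, \qquad \theta_I\Big(\sum_k a_k \otimes i_k\Big) = \sum_k a_k i_k , \]

which is surjective because AI is generated by the products a i with a ∈ A and i ∈ I.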
Proposition 3.7. Let S be a cancellative semiring and F a cancellative right S-semimodule. The following are equivalent: (1) F S is S-m-flat; (2) The canonical map θ I : F ⊗ S I −→ FI of commutative monoids is injective, whence an isomorphism, for every (finitely generated) left ideal I of S; (3) F S is N-m-flat for every normally S-generated left S-semimodule N.
Proof. The equivalence (1) ⇐⇒ (2) follows by Lemma 3.4 (without assuming that S is cancellative). The implication (3) ⇒ (1) is trivial. Assume (1). Let N be normally S-generated, so that there exists a normal epimorphism π : S (Λ) −→ N for some index set Λ. Consider the short exact sequence. Since F S is S-m-flat by (1), it follows by Lemma 2.16 that F S is S (Λ) -m-flat. Then F S is N-m-flat by Proposition 2.15.
The assumptions of the following result hold in particular when S is a ring, whence it recovers the classical result (e.g., [Wis1991, 12.6]).
Corollary 3.8. Let S be a cancellative semiring such that S S is a normal generator. A cancellative right S-semimodule F is m-flat if and only if F S is S-m-flat.
Lemma 3.9. Let F be an m-flat (S-i-flat) right S-semimodule and K ι ֒→ F a subtractive S-subsemimodule.
(1) If F/K is m-flat (S-i-flat) and KI ≤ S K is subtractive, then K ∩ FI = KI for every (subtractive) ideal I of S.
(2) If K ∩ FI = KI for every finitely generated left ideal I of S, then F is S-m-flat.
(3) If K ∩ FI = KI (and FI ≤ F is subtractive) for every subtractive ideal I of S, then F/K is S-i-flat (S-e-flat).
Proof. Consider the right S-semimodule A := F/K and recall, by Lemma 1.17 (7), that we have a short exact sequence of right S-semimodules Let I ≤ S S be an arbitrary (subtractive) left ideal. Applying − ⊗ S I to the exact sequence (23), it follows by Lemma 1.24 (3) that the following sequence of commutative monoids is semi-exact and ϕ ⊗ I is a normal epimorphism. Consider the following commutative diagram of commutative monoids with semi-exact rows.
Notice that θ F is injective, whence an isomorphism, since F S is S-m-flat (S-i-flat). Since ϕ ⊗ I and π are normal epimorphisms, θ K is surjective and θ F is injective, there exists by Lemma 1.20 a unique isomorphism γ : A ⊗ S I −→ FI/KI of commutative monoids that makes Diagram (24) commute. Since ϕ : F −→ A is surjective, ϕ(FI) = AI. Consider the restriction ϕ| FI : FI → AI and notice that Ker(ϕ| FI ) = FI ∩ K. Consider the induced map β; it is well defined, as it is defined on a generating set of AI. Since γ is an isomorphism, we conclude that σ is injective (whence an isomorphism) if and only if θ A is injective (an isomorphism).
Consider the commutative diagram
(1) Let A be S-m-flat (S-i-flat). In this case, θ A is an isomorphism for every (subtractive) left ideal I ≤ S S by Lemma 3.4 and it follows that σ is injective. In particular, (FI ∩ K)/KI = Ker(σ ) = 0. Since KI ≤ S K is subtractive (by assumption), we conclude that KI = FI ∩ K.
(2) If FI ∩ K = KI for any finitely generated ideal I of S, then σ is injective, whence θ A is injective. The result follows now by Lemma 3.4.
(3) The proof is similar to that of (2).
The proof of the following technical lemma is similar to that in the case of von Neumann regular rings (e.g. [Wis1991, 2.3, 3.10]).
Lemma 3.10. Let S be a von Neumann regular semiring and I ≤ S S a left ideal. The following are equivalent:
(1) S I is finitely generated; (2) S I is principal; (3) I = Se for some idempotent e ∈ S; (4) I ≤ ⊕ S (a direct summand); (5) S = I ⊕ Se ′ for some idempotent e ′ ∈ S.
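For orientation, the implication (2) ⇒ (3) admits the same one-line computation as in the ring case; the following sketch assumes that this is the intended argument. If I = Sa and a = asa for some s ∈ S, put e := sa; then

\[ e^2 = (sa)(sa) = s(asa) = sa = e, \qquad ae = a(sa) = asa = a, \]

so e is idempotent, e = sa ∈ Sa gives Se ⊆ Sa, and a = ae ∈ Se gives Sa ⊆ Se, whence I = Sa = Se.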
The assumption that all left S-semimodules of a (left and right) subtractive semiring are S-e-flat is sufficient for S to be a von Neumann regular semiring.
Theorem 3.11. Let S be a semiring.
(1) If S is subtractive and every right S-semimodule is S-e-flat, then S is a von Neumann regular semiring.
(2) If S is von Neumann regular, then every normally S-generated right S-semimodule is S-m-flat.
Proof.
(1) Let a ∈ S. By our assumption, S is right subtractive, whence the right S-semimodule K := aS is a subtractive right ideal of S and 0 −→ aS −→ S −→ S/aS → 0 is an exact sequence of right S-semimodules by Lemma 1.17 (7). Indeed, F := S S is S-e-flat. By our assumptions, the right S-semimodules aS and S/aS are both S-e-flat and so it follows, by Lemma 3.9, that for every subtractive left ideal I of S : aS ∩ I = aS ∩ SI = K ∩ FI = KI = (aS)I.
By our assumption, S is left subtractive and so the left ideal I := Sa ≤ S S is subtractive, whence aSa = (aS)(Sa) = aS ∩ Sa.
It follows that a ∈ aSa, i.e. there exists some s ∈ S such that a = asa.
(2) Let S be von Neumann regular. Let A be a normally S-generated right S-semimodule. Then there exists an exact sequence of right S-semimodules where F ≃ S (Λ) for some index set Λ, and K := Ker(π). Since F S is free, it is flat and in particular m-flat. Let I be a finitely generated left ideal of S. By Lemma 3.10, I = Se for some idempotent e of S. Since S is von Neumann regular, there exists some e ′ ∈ S such that e = ee ′ e. Let k = f e ∈ FI ∩ K for some k ∈ K and f ∈ F. Then k = f e = f (ee ′ e) = ( f e)(e ′ e) = (ke ′ )e ∈ KI.
The result follows now by Lemma 3.9.
Corollary 3.12. If S is a subtractive commutative semiring such that every S-semimodule is S-e-flat, then S is a von Neumann regular semiring.
In light of Theorem 3.11 and the fact that a commutative semiring over which all semimodules are flat is a von Neumann regular ring, we raise the following question: Question: Does the e-flatness of all right (left) semimodules characterize subtractive von Neumann regular semirings? | 2019-07-13T23:35:56.000Z | 2019-07-13T00:00:00.000 | {
"year": 2019,
"sha1": "7b3fe8e3f3552469198df114bf4ea02fc25b5e93",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7b3fe8e3f3552469198df114bf4ea02fc25b5e93",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
213651228 | pes2o/s2orc | v3-fos-license | Evaluating Soybean Meal Quality Using Near-Infrared Reflectance Spectroscopy
The objective of this study was to establish a range of soybean meal quality to evaluate the correlations between official analytical methods and near-infrared reflectance spectroscopy (NIRS). Crushed soybean white flakes (Mark Hershey Farms, Lebanon, PA) exposed to mechanical oil extraction, but not heat processing, were used in this experiment. Ground samples (500 g) were put into cotton bags and autoclaved at 262°F for 0, 5, 10, 15, 30, 45, and 60 min at 29 PSI. This was done to simulate varying degrees of heat processing. A total of 2 samples per treatment were autoclaved in 3 separate blocks. The duplicate samples were divided and analyzed using NIRS and official analytical analysis (wet chemistry). Crude protein (CP), total lysine (Lys), Lys:CP, available Lys, available Lys:total Lys, protein solubility in potassium hydroxide (KOH), trypsin inhibitor activity (TIA), urease activity index (UAI), individual amino acids (AA), and total AA were analyzed to determine the degree of processing using official analytical methods. The correlation coefficient (R) and coefficient of determination (r2) between NIRS and official analytical methods were established for CP, total Lys, available/reactive Lys, Lys:CP and available/reactive Lys:total Lys. Data were analyzed using the SAS (v. 9.4, SAS Institute Inc., Cary, NC) GLIMMIX procedure and the CORR procedure to determine the degree of association of NIRS and official analytical analysis. When measured using official analytical methods, CP, total AA, Ala, Asp, Glu, Gly, Iso, Leu, and Val decreased (linear, P < 0.05), whereas available/reactive Lys:total Lys, Lys:CP, available Lys, KOH, trypsin inhibitor, urease, Lys, and Cys decreased (quadratic, P < 0.05) with increasing exposure time to the autoclave. There was a positive correlation between official analytical and NIRS results for CP, Lys:CP, available Lys:total Lys, total AA, Ala, Cys, Lys, and a negative correlation for Thr. A linear model was best fit (P = 0.011, r2 = 0.489) to predict CP using NIRS. A quadratic model was best fit to use NIRS total Lys (P = 0.011, r2 = 0.969), reactive Lys (P = 0.001, r2 = 0.988), and their ratio (P = 0.001, r2 = 0.981) to predict official analytical results. In conclusion, increasing soybean autoclave exposure time decreased soybean meal quality as measured by crude protein, total Lys, Lys:CP, available Lys, available Lys:total Lys, KOH solubility, total AA, and additional AA. In addition, regression models were successful at using NIRS for Lys, reactive Lys, Lys:CP, and reactive Lys:total Lys to predict official analytical results.
Summary
The objective of this study was to establish a range of soybean meal quality to evaluate the correlations between official analytical methods and near-infrared reflectance spectroscopy (NIRS). Crushed soybean white flakes (Mark Hershey Farms, Lebanon, PA) exposed to mechanical oil extraction, but not heat processing, were used in this experiment. Ground samples (500 g) were put into cotton bags and autoclaved at 262°F for 0, 5, 10, 15, 30, 45, and 60 min at 29 PSI. This was done to simulate varying degrees of heat processing. A total of 2 samples per treatment were autoclaved in 3 separate blocks. The duplicate samples were divided and analyzed using NIRS and official analytical analysis (wet chemistry). Crude protein (CP), total lysine (Lys), Lys:CP, available Lys, available Lys:total Lys, protein solubility in potassium hydroxide (KOH), trypsin inhibitor activity (TIA), urease activity index (UAI), individual amino acids (AA), and total AA were analyzed to determine the degree of processing using official analytical methods. The correlation coefficient (R) and coefficient of determination (r 2 ) between NIRS and official analytical methods were established for CP, total Lys, available/reactive Lys, Lys:CP and available/reactive Lys:total Lys. Data were analyzed using the SAS (v. 9.4, SAS Institute Inc., Cary, NC) GLIMMIX procedure and the CORR procedure to determine the degree of association of NIRS and official analytical analysis. When measured using official analytical methods, CP, total AA, Ala, Asp, Glu, Gly, Iso, Leu, and Val decreased (linear, P < 0.05), whereas available/reactive Lys:total Lys, Lys:CP, available Lys, KOH, trypsin inhibitor, urease, Lys, and Cys decreased (quadratic, P < 0.05) with increasing exposure time to the autoclave. There was a positive correlation between official analytical and NIRS results for CP, Lys:CP, available Lys:total Lys, total AA, Ala, Cys, Lys, and a negative correlation for Thr. A linear model was best fit (P = 0.011, r 2 = 0.489) to predict CP using NIRS. A quadratic model was best fit to use NIRS total Lys (P = 0.011, r 2 = 0.969), reactive Lys (P = 0.001, r 2 = 0.988), and their ratio (P = 0.001, r 2 = 0.981) to predict official analytical results. In conclusion, increasing soybean autoclave exposure time decreased soybean meal quality as measured by crude protein, total Lys, Lys:CP, available Lys, available Lys:total Lys, KOH solubility, total AA, and additional AA. In addition, regression models were successful at using NIRS for Lys, reactive Lys, Lys:CP, and reactive Lys:total Lys to predict official analytical results.
Introduction
Soybeans are the most abundant oilseed worldwide and provide by-products including soybean meal and oil. After solvent extraction, soybean meal is further processed by heating to destroy antinutritional factors. However, there can be negative effects of over-processing, such as the Maillard browning reaction and amino acid (AA) digestibility loss. In the Maillard reaction, a free amino acid (most commonly Lys) will bind to a reducing sugar, therefore browning the soybean meal. Quality of soybean meal is of the utmost importance when considered for diet formulation in swine diets. Over-processing or under-processing soybean meal can impact the nutritional value, and ultimately animal performance. Near-infrared reflectance spectroscopy (NIRS) can serve as a tool for a nutritionist to save time and money, compared to official analytical analysis, to determine soybean meal quality for formulation. Therefore, the objective of this study was to create a gradient of soybean meal quality to evaluate the correlations between official analytical methods and NIRS when measuring the protein quality of soybean meal.
Procedures
A total of 900 kg of soybean white flakes were collected from a soybean crush plant (Mark Hershey Farms, Lebanon, PA). The soybean white flakes were defined as the soybean after oil extraction but prior to the heat processing step. Soybean white flakes were ground using a coffee grinder, and then two 1,000 g samples were autoclaved at 262.4°F for 0, 5, 10, 15, 30, 45, and 60 min. A total of 2 samples per treatment were autoclaved in 3 blocks to provide 3 replications per treatment and 6 observational units per treatment. Samples were split for official analytical methods and NIRS analysis. Treatments were randomized within block to ensure no effects of time or autoclave order. All treatments within block were run in the same week as the first sample of the block. Samples were collected from each treatment and analyzed for total AA, available Lys, UAI, TIA, and protein solubility in KOH.
The autoclave initiation consisted of a 15 min warm-up to bring the chamber temperature and pressure to 262.4°F and 29 PSI, respectively. For the sterilizing stage, the chamber temperature and pressure remained as required for the treatment hold time (0, 5, 10, 15, 30, 45, or 60 min at 29 PSI). The samples were cooled for 5 min at 230°F and 2 PSI before discharge from the chamber. Urease activity index was determined according to AACC International Method 22-90.01. Once treatments were analyzed, data were used to validate NIRS equations using a NIRS DS2500 (FOSS, Eden Prairie, MN). The estimates measured using NIRS were CP, total AA, and available lysine.
Statistical Analysis
Data were analyzed as a randomized complete block design using the GLIMMIX procedure in SAS v. 9.4 (SAS Institute Inc., Cary, NC) with soybean sample as the experimental unit, autoclave time as a fixed effect and period as a blocking factor. Orthogonal contrasts were used to evaluate means. The coefficients for the unequally spaced linear and quadratic contrasts were derived using the IML procedure in SAS. Least square means were calculated for each independent variable. Correlation analysis was performed to determine the degree of association between official analytical and NIRS results. Linear and/or quadratic regression was used to develop models for predicting official analytical total and available lysine, and Lys:CP using NIRS estimates. Results were considered significant if P ≤ 0.05 with tendencies set at 0.05 < P ≤ 0.10.
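The unequally spaced linear and quadratic contrast coefficients derived with the IML procedure can be reproduced with orthogonal polynomial coding; the Python sketch below assumes equal replication per autoclave level and is illustrative rather than the code actually used:

    import numpy as np

    # Unequally spaced autoclave exposure times (min) used as factor levels
    levels = np.array([0, 5, 10, 15, 30, 45, 60], dtype=float)

    # QR decomposition of [1, x, x^2] yields orthonormal polynomial contrasts
    X = np.vander(levels, 3, increasing=True)   # columns: intercept, x, x^2
    Q, _ = np.linalg.qr(X)
    linear, quadratic = Q[:, 1], Q[:, 2]        # contrast coefficient vectors

    # Sanity checks: each contrast sums to zero and the two are orthogonal
    assert abs(linear.sum()) < 1e-9 and abs(linear @ quadratic) < 1e-9
    print(np.round(linear, 3), np.round(quadratic, 3), sep="\n")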
Results and Discussion
Increasing soybean autoclave exposure time darkened the color of the soybean meal and created evidence of the Maillard browning reaction (Figure 1). Crude protein, total AA, Ala, Asp, Glu, Gly, Iso, Leu, and Val decreased (linear, P < 0.05) when measured with official analytical methods with increasing autoclave exposure time (Table 1). Increasing autoclave exposure time decreased (quadratic, P < 0.05) Lys:CP, available Lys, available Lys:total Lys, KOH, trypsin inhibitor, urease, Lys, and Cys when measured using official analytical methods.
In conclusion, increasing soybean autoclave exposure time decreased soybean meal quality as measured by KOH solubility, Lys, available Lys, Lys:CP, and available Lys:total Lys. These results created a model to estimate the relationship between NIRS and official analytical results. Regression models were successful at using NIRS Lys, available Lys, their ratio, and Lys:CP. This demonstrated the ability of NIRS to be used as a tool to determine over-processing of soybean meal.
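As an illustration of the regression step, a quadratic model of the kind reported (for example, Figure 6's y = -0.041x + 0.00006x2 + 65.36) can be fit by ordinary least squares; the paired values below are hypothetical placeholders, since the raw NIRS and wet-chemistry pairs are not reproduced here:

    import numpy as np

    # Hypothetical paired measurements (illustrative only, not study data)
    nirs = np.array([62.0, 58.5, 55.0, 50.0, 42.0, 35.0, 30.0])  # NIRS, %
    wet = np.array([64.0, 61.0, 58.0, 53.5, 45.0, 36.5, 30.5])   # official, %

    # Least-squares quadratic fit of the official value on the NIRS estimate
    b2, b1, b0 = np.polyfit(nirs, wet, deg=2)
    pred = np.polyval([b2, b1, b0], nirs)

    # Coefficient of determination, as reported alongside each model
    r2 = 1 - np.sum((wet - pred) ** 2) / np.sum((wet - wet.mean()) ** 2)
    print(f"y = {b1:.4f}x + {b2:.6f}x^2 + {b0:.2f}, r^2 = {r2:.3f}")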
Swine Day 2019
Brand names appearing in this publication are for product identification purposes only. No endorsement is intended, nor is criticism implied of similar products not mentioned. Persons using such products assume responsibility for their use in accordance with current label directions of the manufacturer.
Figure 1. Ground soy white flakes exposed to an autoclave at 262°F for 0, 5, 10, 15, 30, 45, and 60 min at 29 PSI, simulating the heat step in soybean meal processing to create a range of soybean meal quality. Soybean white flakes, after oil extraction but prior to heat processing, were ground using a blender, and 500 g samples were put into cotton bags to be autoclaved. A total of 2 samples per treatment were autoclaved in 3 blocks to provide 3 replications per treatment.
Figure 6. Quadratic regression analysis of near-infrared reflectance spectroscopy (NIRS) reactive Lys:total Lys compared with official analytical results (axes: available:total Lys, % vs. reactive:total Lys, %); fitted model y = -0.041x + 0.00006x 2 + 65.36, r 2 = 0.981, adjusted r 2 = 0.978. | 2019-11-14T17:09:53.983Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "6d883ce38437af7e12b9232d001b1b7a286b9f4b",
"oa_license": "CCBY",
"oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=7864&context=kaesrr",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4d5e0ba06c624102bb17d5878da2c5acfc998dc3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
3269037 | pes2o/s2orc | v3-fos-license | P-body proteins regulate transcriptional rewiring to promote DNA replication stress resistance
mRNA-processing (P-) bodies are cytoplasmic granules that form in eukaryotic cells in response to numerous stresses to serve as sites of degradation and storage of mRNAs. Functional P-bodies are critical for the DNA replication stress response in yeast, yet the repertoire of P-body targets and the mechanisms by which P-bodies promote replication stress resistance are unknown. In this study we identify the complete complement of mRNA targets of P-bodies during replication stress induced by hydroxyurea treatment. The key P-body protein Lsm1 controls the abundance of HHT1, ACF4, ARL3, TMA16, RRS1 and YOX1 mRNAs to prevent their toxic accumulation during replication stress. Accumulation of YOX1 mRNA causes aberrant downregulation of a network of genes critical for DNA replication stress resistance and leads to toxic acetaldehyde accumulation. Our data reveal the scope and the targets of regulation by P-body proteins during the DNA replication stress response.
This manuscript examines the functions of P-bodies in regulating cellular responses to replication stress. P-bodies are sites of mRNA decapping to decrease mRNA abundance. Stresses including replication stress induce P-body formation and function and P-body proteins are important for cell viability in response to these stresses. The authors utilized RNA sequencing and genetics to identify mRNAs that may be regulated in a P-body dependent manner to yield resistance to replication stress. Their analysis identified Yox1, a transcriptional repressor as one of several candidates and they validated Yox1 mRNA regulation by P-body processing as important for replication stress responses. Furthermore, they identified two gene targets of Yox1 regulation that contribute to these responses.
Overall, I found the data in the manuscript to be compelling and the conclusions interesting. While it is not surprising that Yox1 is involved in controlling gene expression in response to replication stress, the control of Yox1 by P-body dependent processing is novel and interesting. The only thing that I would have liked the authors to do was to demonstrate unequivocally that ALD6 and ICS2 are really direct targets of Yox1 important for stress responses. For example, it would be helpful to demonstrate direct binding of Yox1 protein to the putative Yox1 binding sites in the promoters of these genes. Also, monitoring the effects of deletion of these binding sites on gene expression and cell viability would be useful.
Reviewer #2 (Remarks to the Author)
This manuscript addresses the role of mRNA decay factors in the control of gene expression during DNA replication stress. The main point of the work is that Lsm1 and Pat1 play roles in shaping the transcriptome during HU stress, and that by keeping specific mRNAs expressed at lower levels, they contribute to successful DNA damage response. This conclusion is supported by: i) RNA-Seq of Wt and lsm1∆ strains identifying mRNAs that are overexpressed either normally or in HU, ii) showing that deletion of 6 of these overexpressed mRNAs can partially suppress the HU sensitivity of lsm1∆ or pat1∆ strains, iii) focusing on YOX1, a transcriptional repressor, as a key regulator whose over-expression in pat1∆ and lsm1∆ strains leads to HU sensitivity. They then go on and show the Yox1 mRNA can be localized to P-bodies, and that the Yox1 protein goes up a little bit in lsm1∆ with or without HU and shows increased nuclear localization. Finally, they show that overexpression of Yox1 via galactose induction, or deletion of the Lsm1 gene leads to changes in the expression of genes that can contribute to the sensitivity of the DNA damage response. However, taken together these observations are not striking because, a) broadly, a role for mRNA decay in regulating gene expression is well established, and b) an actual mechanistic role for any of the downstream targets in mediating HU-related stress (which might not be DNA replication stress; please see comment 5 below) is lacking in the manuscript. Hence, the insights on mRNA decay/P-bodies playing a role in gene expression during replicative stress are not sufficient to warrant publication in Nature Communications, unless the specific targets and regulatory loops identified in this manuscript are important to the DNA-damage community. Specific comments: 1. Expression of several genes increased as well as decreased in the RNA-seq dataset as a function of HU stress in Wt vs lsm1Δ. One confounding issue with such analyses is that if some mRNAs increase in abundance, they take up more sequence space; as a result, some mRNAs get underrepresented and register as reduced levels. Is the subset of decreased mRNA in these datasets controlled for this eventuality? 2. An additional limitation of the RNA-seq data is the lack of target validation with an alternative technique such as northern blotting or RT-PCR. Such an additional validation can strengthen the argument for changes in expression of the identified target genes.
3. On that note, the authors model a role for P-bodies in regulating the level of Yox1 mRNA, yet the actual levels of total Yox1 mRNA in lsm1∆ and pat1Δ have not been measured. 4. Are these changes truly because of mRNA decay? Steady-state levels in gene expression don't often change much even with a change in decay rates. Do deletions in other mRNA decay genes yield the same changes in levels of targets identified? 5. Are the effects observed in gene expression, genetic suppression etc., due to HU-related stress or DNA replication stress, specifically? Is the expression of the identified targets affected in a similar manner upon exposure to other agents that lead to DNA replication stress? 6. One concern is whether the effects in gene expression observed in lsm1Δ (or pat1∆) are due to P-bodies. It is my understanding that lsm1Δ and pat1∆ strains do not prevent P-body assembly. While this does not impact the key observations that these proteins can affect gene expression during DNA stress, it does affect the suggested role for P-bodies per se. 7. Figure 4: nRNA should be mRNA on Y-axis. 8. Why is td-tomato used as control in Figure 5a? Shouldn't the controls be done in +/-HU to show how that alteration affects the signal?
Reviewer #3 (Remarks to the Author) The manuscript Characterizes mRNAs associated with p-bodies caused by HU-induced replication stress. The transcriptional repressor Yox1 is identified as localizing in P-bodies, and accumulates in the nucleus of lsm1-mutant cells. In general, the manuscript makes great strides in characterizing the function of P-bodies. I think the manuscript is beautifully written and the figures look very clean and professional.
The subject matter seems to appeal to a specific readership because it combines DNA-replication, P-bodies, and hydroxyurea stress. Nevertheless, it is written in a very accessible way, and could appeal to a broad readership given that it may reveal some fundamental biology of P-bodies.
In summary, I would accept this manuscript. I point out a few places where the manuscript could be improved.
A few minor formatting issues such as two periods after the sentence: …exonuclease Xrn1, which together determine the decapping or degradation rate of mRNAs..
There is a super-script error in the sentence "Both RAD54 and RAD51 are induced during the DNA replication stress response or upon X-ray exposure 8; and our data,54. ".
Minor issues:
This particular sentence: "Amazingly, we could identify specific YOX1 targets whose de-repression is critical to avoid replication stress induced toxicity, ALD6 and ICS2." has two issues. First, "Amazingly" is a very strong word, and might be too strong for a publication. Would "Surprisingly" be more appropriate? Secondly, "replication stress induced" functions as an adjective, and should be hyphenated as "replication stress-induced toxicity".
Suggestions to the authors for improvement of readability: How are "fitness values" defined? In the methods I see that it is the ratio of colony size in HU vs no drug. Would it be more clear to briefly define fitness when first introduced in the Results on page 6, or at least mention that it is defined by colony size?
The section title "Suppressors of replication stress sensitivity of P-body mutants" seems grammatically strange to me.
Concerning the statement: "Alternatively, absence of LSM1 could stabilize transcriptional repressors, resulting indirectly in mRNA abundance decreases, as has been observed in cells lacking the 5'-3' RNA exonuclease Xrn1" This is an interesting hypothesis. I think it would strengthen this point to identify specific transcriptional repressors in the set of transcripts that increase in lsm1-mutants (other than Yox1) and mention here, although more detail is given in the discussion on this point. Along those lines, the statement in the manuscript about these differentially expressed genes does not state that the list of the 333 up-and 258 down-regulated genes in lsm1-mutants (in the absense of HU-stress) is also part of Table S2. As far as I can tell, Table S2 is only introduced in the ms in the context of HU-treatment. It would be helpful to reference this table when discussing these genes at the end of page 3.
Point-by-point responses:
Reviewer #1: This manuscript examines the functions of P-bodies in regulating cellular responses to replication stress. P-bodies are sites of mRNA decapping to decrease mRNA abundance. Stresses including replication stress induce P-body formation and function and P-body proteins are important for cell viability in response to these stresses. The authors utilized RNA sequencing and genetics to identify mRNAs that may be regulated in a P-body dependent manner to yield resistance to replication stress. Their analysis identified Yox1, a transcriptional repressor as one of several candidates and they validated Yox1 mRNA regulation by P-body processing as important for replication stress responses. Furthermore, they identified two gene targets of Yox1 regulation that contribute to these responses.
Overall, I found the data in the manuscript to be compelling and the conclusions interesting. While it is not surprising that Yox1 is involved in controlling gene expression in response to replication stress, the control of Yox1 by P-body dependent processing is novel and interesting. The only thing that I would have liked the authors to do was to demonstrate unequivocally that ALD6 and ICS2 are really direct targets of Yox1 important for stress responses. For example, it would be helpful to demonstrate direct binding of Yox1 protein to the putative Yox1 binding sites in the promoters of these genes. Also, monitoring the effects of deletion of these binding sites on gene expression and cell viability would be useful.
We agree that demonstrating a direct binding of Yox1 on ALD6 or ICS2 promoters would be of interest. However, it is a reasonable possibility that Yox1 could regulate ALD6 or ICS2 expression indirectly (for example by repressing an ALD6 or ICS2 transcriptional activator) without changing the conclusions of our study. Yox1 or Mcm1 binding sites are present in multiple copies in both promoters, but Yox1 binding has not been detected in high throughput studies. Since direct binding by Yox1 is not an essential component of our model, we have modified the manuscript to clarify that Yox1 regulation could be indirect: We identified binding sites for both Yox1 and its co-repressor Mcm1 in the 1000-bp promoter regions of ALD6 and ICS2 using YeTFaSCo 44 (Table S9), although it is also possible that both are indirect targets. (p.11)
Reviewer #2
Specific comments: 1. Expression of several genes increased as well as decreased in the RNA-seq dataset as a function of HU stress in Wt vs lsm1Δ. One confounding issue with such analyses is that if some mRNAs increase in abundance, they take up more sequence space; as a result, some mRNAs get underrepresented and register as reduced levels. Is the subset of decreased mRNA in these datasets controlled for this eventuality?
We agree that upregulation of hundreds of genes could take up more sequencing space and artificially decrease abundance of other transcripts. While this is a known limitation of normalization by RPKM, most current methods to identify differentially expressed genes from RNA-seq data (including Cuffdiff, which is what we used) apply more sophisticated normalization routines to overcome the limitation. It is true that different analysis methods can produce some differences in results, so we re-analyzed our RNA-seq data using two alternative RNA-Seq data analysis methods: EBSeq and edgeR. In particular, EBSeq uses a Bayesian statistics approach, which takes into account the compositional structure of RNA-seq data. edgeR uses an alternative normalization method as compared to cuffdiff (our initial analysis method). Using both methods, we were able to confirm our conclusions. The comparison of the three analysis methods has been added (Supplemental Table S3), and the text has been modified on p.3: Finally, to confirm that the differentially expressed genes that we identified were independent of the data analysis method used, we applied two different analyses of the RNA-Seq data to identify differentially expressed genes: EBSeq 23 and edgeR 24 . Between 34 and 79% of the genes identified in our initial analysis were also identified using EBSeq or edgeR, depending on the time point analyzed (Table S3). And on p.6: Independent reconstruction of the pat1∆ and lsm1∆ double mutants with each of the 11 genes resulted in validation of 6 putative target genes: ARL3, ACF4, HHT1, TMA16, RRS1 and YOX1. Increased mRNA abundance in HU for 5 of these transcripts was confirmed by two independent data analysis methods. ACF4 was confirmed by edgeR but not by EBSeq (Table S6).
We also note that the correlation between biological replicates in our RNA-Seq experiments was very high (R > 0.92, as mentioned in the Methods section).
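To illustrate the compositional issue raised in comment 1 and why such normalization routines mitigate it, here is a toy median-of-ratios calculation in the style of DESeq (edgeR's TMM is similar in spirit); the counts are invented for illustration and are not the study's data:

    import numpy as np

    # Toy counts: rows = genes, columns = two libraries.
    # Gene 0 is strongly induced in library 2; the others are unchanged.
    counts = np.array([[100, 1000],
                       [200,  210],
                       [300,  290],
                       [400,  405],
                       [500,  495]], dtype=float)

    # DESeq-style size factors: per-library median of each gene's ratio
    # to its geometric mean across libraries.
    geo_mean = np.exp(np.log(counts).mean(axis=1))
    size_factors = np.median(counts / geo_mean[:, None], axis=0)
    normalized = counts / size_factors

    # After normalization, the unchanged genes have similar values in both
    # libraries, so they no longer register as spuriously down-regulated.
    print(size_factors, normalized.round(1), sep="\n")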
2. An additional limitation of the RNA-seq data is the lack of target validation with an alternative technique such as northern blotting or RT-PCR. Such an additional validation can strengthen the argument for changes in expression of the identified target genes.
We addressed the reviewer's concern by validating YOX1 up-regulation and ALD6 downregulation in lsm1∆ cells using qRT-PCR. The data are presented in Supplemental Figures S4 and S5. 3. On that note, the authors model a role for P-bodies in regulating the level of Yox1 mRNA, yet the actual levels of total Yox1 mRNA in lsm1∆ and pat1Δ have not been measured.
We have now measured YOX1 mRNA in both lsm1∆ and pat1∆ by qRT-PCR ( Figures S4 and S5), in addition to the original measurement in lsm1∆ by RNA-seq. 4. Are these changes truly because of mRNA decay? Steady-state levels in gene expression don't often change much even with a change in decay rates. Do deletions in other mRNA decay genes yield the same changes in levels of targets identified?
We found that YOX1 mRNA increases in pat1∆ and xrn1∆ cells ( Figure S4 and text on p. 9 (xrn1∆:wildtype = 3.7 ± 1.8)). Pat1 is an mRNA decapping protein, and Xrn1 is the predominant 5' to 3' exoribonuclease, indicating a role for mRNA decay functions in the reduction in YOX1 mRNA abundance. 5. Are the effects observed in gene expression, genetic suppression etc., due to HU-related stress or DNA replication stress, specifically? Is the expression of the identified targets affected in a similar manner upon exposure to other agents that lead to DNA replication stress?
We tested whether the lsm1∆ differentially expressed genes in HU overlapped with genes whose expression is affected during DNA replication induced by treatment with MMS and found good overlap (as much as 53%, depending on the dataset) suggesting that the transcriptional program that we identified is likely a response to DNA replication stress in general and not only HUspecific. The text has been modified on p.4: The correlation with data obtained using a distinct replication stress agent, MMS, indicates that a substantial fraction of the transcriptional program that we identified is due to DNA replication stress (Fig. S2b,c). 6. One concern is whether the effects in gene expression observed in lsm1Δ (or pat1∆) are due to P-bodies. It is my understanding that lsm1Δ and pat1∆ strains do not prevent P-body assembly. While this does not impact the key observations that these proteins can affect gene expression during DNA stress, it does affect the suggested role for P-bodies per se.
Deletion of LSM1 induces Dcp1, Dcp2, Edc3, Xrn1 and Dhh1 foci formation (due to the accumulation of RNA in the cytoplasm) and reduces Pat1 foci formation (see Teixeira & Parker, 2007, Mol Biol Cell). Deletion of PAT1 prevents the formation of P-body granules for almost all core P-body proteins, including Lsm1 (see Teixeira & Parker, 2007, Mol Biol Cell). Given that YOX1 mRNA abundance increases in lsm1∆ and pat1∆ cells, we suggest that the regulation of Pbody formation is required for the regulation of YOX1 mRNA abundance.
This has been corrected.
8. Why is td-tomato used as control in Figure 5a? Shouldn't the controls be done in +/-HU to show how that alteration affects the signal?
Td-tomato was not used as a control in Fig. 5a. We used Hta2-mCherry as a nuclear marker to segment the nuclei in order to quantify nuclear and cytoplasmic Yox1-GFP. Both RFP and GFP channels are shown in both conditions (-/+ HU) in Fig. 5a.
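Operationally, the quantification described above reduces, per cell, to masked intensity averages: segment the nucleus from the Hta2-mCherry (RFP) channel, then compare mean nuclear and cytoplasmic Yox1-GFP. A schematic NumPy sketch follows; the array names, threshold and synthetic images are illustrative assumptions, not the analysis pipeline actually used:

    import numpy as np

    def nuc_cyt_ratio(gfp, rfp, cell_mask, rfp_threshold):
        """Mean nuclear / mean cytoplasmic GFP within one segmented cell."""
        nucleus = cell_mask & (rfp > rfp_threshold)  # mCherry marks the nucleus
        cytoplasm = cell_mask & ~nucleus
        return gfp[nucleus].mean() / gfp[cytoplasm].mean()

    # Tiny synthetic example with a GFP-enriched "nucleus" in the centre
    rng = np.random.default_rng(0)
    rfp = rng.uniform(0, 1, (20, 20)); rfp[8:12, 8:12] += 2.0
    gfp = rng.uniform(0, 1, (20, 20)); gfp[8:12, 8:12] += 1.0
    print(nuc_cyt_ratio(gfp, rfp, np.ones((20, 20), bool), rfp_threshold=1.5))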
Reviewer #3 (Remarks to the Author): The manuscript characterizes mRNAs associated with p-bodies caused by HU-induced replication stress. The transcriptional repressor Yox1 is identified as localizing in P-bodies, and accumulates in the nucleus of lsm1-mutant cells. In general, the manuscript makes great strides in characterizing the function of P-bodies. I think the manuscript is beautifully written and the figures look very clean and professional.
The subject matter seems to appeal to a specific readership because it combines DNAreplication, P-bodies, and hydroxyurea stress. Nevertheless, it is written in a very accessible way, and could appeal to a broad readership given that it may reveal some fundamental biology of Pbodies.
In summary, I would accept this manuscript. I point out a few places where the manuscript could be improved.
A few minor formatting issues such as two periods after the sentence: …exonuclease Xrn1, which together determine the decapping or degradation rate of mRNAs.. This has been corrected.
There is a super-script error in the sentence "Both RAD54 and RAD51 are induced during the DNA replication stress response or upon X-ray exposure 8; and our data,54. ". This has been corrected.
Minor issues:
This particular sentence: "Amazingly, we could identify specific YOX1 targets whose de-repression is critical to avoid replication stress induced toxicity, ALD6 and ICS2." has two issues. First, "Amazingly" is a very strong word, and might be too strong for a publication. Would "Surprisingly" be more appropriate? Secondly, "replication stress induced" functions as an adjective, and should be hyphenated as "replication stress-induced toxicity". The text has been corrected as suggested Suggestions to the authors for improvement of readability: How are "fitness values" defined? In the methods I see that it is the ratio of colony size in HU vs no drug. Would it be more clear to briefly define fitness when first introduced in the Results on page 6, or at least mention that it is defined by colony size?
This has been clarified in the main text. The manuscript now reads: We then assessed the fitness of every double mutant, by measuring and comparing colony size in the presence and absence of HU, in triplicate.
The section title "Suppressors of replication stress sensitivity of P-body mutants" seems grammatically strange to me.
The title has been changed as follows Suppressors of the replication stress sensitivity of P-body mutants Concerning the statement: "Alternatively, absence of LSM1 could stabilize transcriptional repressors, resulting indirectly in mRNA abundance decreases, as has been observed in cells lacking the 5'-3' RNA exonuclease Xrn1." This is an interesting hypothesis. I think it would strengthen this point to identify specific transcriptional repressors in the set of transcripts that increase in lsm1-mutants (other than Yox1) and mention here, although more detail is given in the discussion on this point.
We looked whether there were up-regulated repressors and whether their targets were enriched in the subset of lsm1∆ down-regulated genes at the same time points but did not find repressors that showed this pattern, with the exception of YOX1.
Along those lines, the statement in the manuscript about these differentially expressed genes does not state that the list of the 333 up-and 258 down-regulated genes in lsm1-mutants (in the absense of HU-stress) is also part of Table S2. As far as I can tell, Table S2 is only introduced in the ms in the context of HU-treatment. It would be helpful to reference this table when discussing these genes at the end of page 3.
We added an earlier reference to Table S2 as suggested.
Reviewers' Comments:
Reviewer #1: Remarks to the Author: If the authors were able to show that it was direct and map the responsible DNA element, those mechanistic insights would significantly strengthen the overall conclusions. However, I agree with the author's response that the model does not require direct binding and the results would still be of interest.
Reviewer #2: Remarks to the Author: The authors have presented several lines of evidence to solidify claims made in the initial version of this manuscript. Specifically, the additional data presented are sufficient to address technical concerns raised previously. There is one major issue that I still find problematic. Specifically, I disagree with the claim made by the authors regarding the role for "P-bodies" per se in regulating levels of transcripts, such as Yox1 in vivo. Measurable, yet insufficient effects of single gene deletions on abrogation of P-body assembly are well documented since the manuscript by Teixeira and Parker, 2007 (e.g., Buchan et al., 2008, JCB). Furthermore, the change in mRNA levels and P-body assembly are correlative, and not causative. As a result, a model for P-bodies in DNA replication stress response by "rewiring" transcriptome is not supported by the data, and most likely is incorrect. Certainly, the proteins found in P-bodies can have an effect, but whether it is P-body assembly per se has not been demonstrated. I recommend one of two things: A) I suggest that the authors change the title and the tone of the manuscript such that the title and tone do not overstate the observations, which implicates P-bodies in regulating the mRNA, or B) Examine how edc3∆ or edc3∆ lsm4∆c strains, which have very strong effects on P-bodies (Decker et al., 2007, JCB), affect this process. If they also have a strong effect, then I would be more convinced P-bodies per se are involved in the response.
"year": 2017,
"sha1": "1d77c1ebbffd3fde2790101356d6ceeac7e0d85d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-017-00632-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a875d1a8f5d0a2aad15efbb44da4402402c29cb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
211141471 | pes2o/s2orc | v3-fos-license | Next-generation sequencing of drug resistant Mycobacterium tuberculosis clinical isolates in low-incidence countries
Drug resistant tuberculosis (TB), especially multidrug (MDR) and extensively drug-resistant (XDR) TB, is still a serious problem in global TB control. Slovenia and North Macedonia are low-incidence countries with TB incidence rates of 5.4 and 10.4 in 2017, respectively. In both countries, the percentage of drug resistant TB is very low with sporadic cases of MDR-TB. However, global burden of drug-resistant TB continues to increase imposing huge impact on public health systems and strongly stimulating the detection of gene variants related with drug resistance in TB. Next-generation sequencing (NGS) can provide comprehensive analysis of gene variants linked to drug resistance in Mycobacterium tuberculosis. Therefore, the aim of our study was to examine the feasibility of a full-length gene analysis for the drug resistance related genes (inhA, katG, rpoB, embB) using Ion Torrent technology and to compare the NGS results with those obtained from conventional phenotypic drug susceptibility testing (DST) in TB isolates. Between 1996 and 2017, we retrospectively selected 56 TB strains from our National mycobacterial culture collection. Of those, 33 TB isolates from Slovenian patients were isolated from various clinical samples and subjected to phenotypic DST testing in Laboratory for Mycobacteria (University Clinic Golnik, Slovenia). The remaining 23 TB isolates were isolated from Macedonian patients and sent to our laboratory for assistance in phenotypic DST testing. TB strains included were either mono-, poly- or multidrug resistant. For control purposes, we also randomly selected five TB strains susceptible to first-line anti-TB drugs. High concordance between genetic (Ion Torrent technology) and standard phenotypic DST testing for isoniazid, rifampicin and ethambutol was observed, with percent of agreement of 77%, 93.4% and 93.3%, sensitivities of 68.2%, 100% and 100%, and specificities of 100%, 80% and 88.2%, respectively. In conclusion, the genotypic DST using Ion Torrent semiconductor NGS successfully predicted drug resistance with significant shortening of time needed to obtain the resistance profiles from several weeks to just a few days.
Introduction
Mycobacterium tuberculosis (MT), an obligate pathogen that causes tuberculosis (TB), is a highly transmissible agent with significant morbidity and mortality. Global strategies to treat and control TB are designed to accurately and rapidly diagnose, treat and reduce the transmission of TB. The increasing burden of multidrug resistant (MDR) and extensively drug-resistant (XDR) TB is a serious problem in global TB control. Slovenia and North Macedonia are countries with low incidence of TB cases with incidence rates 5.4 and 10.6 in 2017, respectively [1]. Moreover, in Slovenia in the period between 1996 and 2017 the average percentage of any drug resistance against isoniazid (INH), rifampicin (RIF), pyrazinamide (PZA), ethambutol (EMB) and streptomycin (SM) was 4.13%, dropping from 6.55% in 1996 to 1.89% in 2017. In the same period, 25 sporadic MDR-TB cases were observed, with the last two cases of MDR-TB noted in 2009 and in 2017. For North Macedonia, the average percentage of any drug resistance against INH, RIF, EMB and SM (they do not perform DST testing for PZA) was higher and accounted for 10.84%, dropping from 9.93% in 2001 to 5.81% in 2017. Between 1996 and 2017, 75 MDR-TB cases were observed in North Macedonia, with MDR-TB cases appearing each year [2,3,4].
Currently, the reference method for determining drug resistance in the clinical laboratory is culture-based drug susceptibility testing (DST), using either solid or liquid media. However, this method is labour-intensive, time-consuming (due to the slow growth of MT, it takes weeks to months to obtain DST results), technically challenging and requires handling viable and potentially infectious cultures of MT bacilli [1].
As an alternative to phenotypic DST, several commercially available molecular assays rapidly detect common mutations related to resistance to isoniazid, rifampicin and some second-line anti-TB drugs. GeneXpert MTB/RIF Ultra assay (Cepheid, Sunnyvale, CA, USA) is a real-time PCR-based assay that detects resistance directly from sputum samples. On the other hand, there are several line probe assays recommended by WHO, including GenoType MTBDRplus and MTBDRsl (Hain Lifescience, Nehren, Germany) and Nipro NTM + MDRTB II (Nipro Corporation, Osaka, Japan). However, the mechanisms of drug resistance are complex and not completely understood. Therefore, one of the main limitations of such molecular tests is that they evaluate only a limited number of mutations linked with drug resistance in TB [9].
In recent years, significant and continued progress in next-generation sequencing (NGS) made this technology a promising clinical tool in comprehensive analysis of gene variants linked to drug resistance in TB [9]. Besides known mutations, NGS facilitates the discovery of novel variants in the entire coding regions of several genes previously implicated in MDR and/or XDR-TB resistance [9].
In this study, we examined the feasibility of a full-length gene analysis for the drug resistance related genes using Ion Torrent technology and compared the results with those obtained from conventional phenotypic drug susceptibility testing (DST) in 61 TB isolates. In this short paper, we compare molecular results with phenotypic DST results for isoniazid (INH), rifampicin (RIF) and ethambutol (EMB), anti-TB drugs used in first-line treatment regimens.
Materials and methods
TB strains. Between 1996 and 2017, we retrospectively selected 33 TB strains that were isolated from various clinical samples of Slovenian patients and subjected to phenotypic DST testing according to routine procedures in Laboratory for Mycobacteria (University Clinic Golnik, Slovenia). Between 1999 and 2010, National Laboratory for Mycobacteria (Institute for Pulmonary Diseases and Tuberculosis Skopje, North Macedonia) sent to our laboratory 23 TB strains for assistance in phenotypic DST testing. TB strains included were either mono-, poly- or multidrug-resistant. For control purposes, we also randomly selected five TB strains susceptible to first-line anti-TB drugs. Detailed information about the type of resistance is presented in Table 1.
Phenotypic drug susceptibility (DST) testing. Phenotypic drug resistance to first-line drugs was determined using Bactec MGIT 960 System (BD) from pure culture of MT strains. Critical concentrations for anti-TB drugs tested were as follows: INH 0.1 μg/ml (high level 0.4 μg/ml), RIF 1 μg/ml and EMB 5 μg/ml (high level 7.5 μg/ml). DNA extraction. Mycobacterial genomic DNA was isolated from pure cultures of M. tuberculosis using previously described protocol [13]. Purified genomic DNA was stored at -20 °C in Slovenian National Mycobacterial DNA Collection until further analysis.
AmpliSeq Library preparation, sequencing, data analysis and interpretation. Nucleic acid quality and quantity were assessed using NanoDrop 2000 (Thermo Scientific) followed by agarose gel electrophoresis. All DNA samples were normalized to 10 ng in 15 μl of starting sample dilution. To identify gene variants associated with drug resistance in genomic DNA extracted from MT isolates, AmpliSeq libraries were generated using the AmpliSeq™ Kit for Chef DL8 and the Ion AmpliSeq TB Research Panel. This panel amplifies 109 amplicons in two highly multiplexed PCR reactions covering coding sequences of eight genes related to drug resistance (including inhA, katG, rpoB, and embB). NGS libraries were prepared automatically using the Ion Chef instrument. The automated protocol performs targeted amplification, digestion, ligation, and normalization on eight samples without any user intervention. Prepared libraries were then automatically clonally amplified, enriched and sequenced on two Ion 530 Chips using the Ion Chef and Ion S5 instruments. Signal processing, base calling and variant caller analysis were performed with the Torrent Suite software version 5.6 (all reagents, instruments and software Thermo Fisher Scientific). The sequencing data were analysed manually, comparing the determined variants with published data and data available in the Tuberculosis Drug Resistance Database [7]. Sequence of MTB H37Rv (NC_000962.3) was used as the reference sequence. The resistance genotyping profiles obtained with the manual approach were compared to the results of phenotypic DST testing. Sequence data are available in SRA NCBI database under BioProject accession number PRJNA551916.
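The manual interpretation step, in which called variants are compared against published resistance-associated mutations, is essentially a table lookup. A simplified Python sketch follows; the three catalogue entries are well-known resistance mutations, but the data structures and the variant list are illustrative placeholders rather than the panel's actual output format:

    # Hypothetical mini-catalogue of resistance-associated variants
    RESISTANCE = {
        ("rpoB", "S450L"): "rifampicin",
        ("katG", "S315T"): "isoniazid",
        ("embB", "M306V"): "ethambutol",
    }

    def predict_resistance(called_variants):
        """Map (gene, amino acid change) calls to predicted resistance."""
        drugs = {RESISTANCE[v] for v in called_variants if v in RESISTANCE}
        return sorted(drugs) or ["no catalogued resistance mutation"]

    # Example calls for one isolate (placeholder data)
    print(predict_resistance([("rpoB", "S450L"), ("embB", "M306V")]))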
Results and discussion
The NGS analysis was successfully used to predict drug resistance profiles. Percentage of agreement between both methods and corresponding sensitivities, specificities, positive predictive values and negative predictive values are presented in Table 2.
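The statistics in Table 2 follow from a 2 × 2 cross-tabulation of genotypic calls against phenotypic DST as the reference standard. A minimal Python sketch with hypothetical counts (chosen only to echo the isoniazid sensitivity, not copied from the actual table):

    def concordance_stats(tp, fp, fn, tn):
        """Agreement metrics for genotypic calls vs. the phenotypic reference."""
        total = tp + fp + fn + tn
        return {
            "agreement": (tp + tn) / total,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical counts: 15 concordant resistant calls, 7 missed, none false
    print(concordance_stats(tp=15, fp=0, fn=7, tn=39))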
The percent of agreement between molecular and phenotypic DST testing in our study was the lowest for INH and accounted for 77%, with sensitivity of 68.2% and specificity of 100%. The studies based on susceptibility testing so far have not demonstrated complete agreement between the phenotypic and genotypic assays [6,7,12,14,16]. Hazbon et al. [6] reported in their study on a large number of TB isolates that around 34% of phenotypically resistant isolates are not associated with any genotypic mutations in genes most commonly linked to INH resistance (katG, inhA, kasA, ahpC and ndh), suggesting that many genetic causes of INH resistance are yet to be discovered. Supporting these data, a recent systematic review [12] on INH resistance demonstrated that 84% of global phenotypic INH resistance is associated with mutations in katG, inhA and the ahpC-oxyR intergenic region, while the other 16% are probably linked to genetic changes in other genes.
Our study indicates a high percentage of agreement between both methods used for RIF resistance testing (93.4%), with high sensitivity (100%) and specificity (80%). This is concordant with published literature [5,10,13], which reported very good agreement between genotypic and phenotypic DST. Zaw et al. [15] showed that mutations in the rpoB gene (specifically mutations within the 81-bp RIF-resistance determining region; RRDR) are responsible for approximately 95% of all RIF resistance cases in TB strains.
Similarly, we also noticed a high percentage of agreement between phenotypic and genotypic DST results for EMB (93.3%), with high sensitivity (100%) and specificity (88.2%). These results are concordant with some of the published studies [9,10], which showed a large overlap in the estimated prevalence of EMB resistance by genetic sequencing and the estimated prevalence by phenotypic testing. However, several other studies observed discrepancies between the presence of common mutations at codon 306 of embB gene and phenotypic EMB resistance [5,8]. The authors of these studies also detected embB306 mutations in clinical M. tuberculosis isolates that are susceptible to EMB. One large study (with international TB isolates included) noted that almost half of the TB isolates with embB306 mutations were fully susceptible to EMB. Furthermore, they found a strong correlation between embB306 mutations and resistance to any antibiotic, suggesting that embB306 mutations were responsible for broad antibiotic resistance [5]. In our study, with a relatively low number of patients included, we observed 20 (20/60; 33.3%) TB isolates with embB306 mutations. Of those 20 MT isolates that harboured embB306 mutations, four (4/20; 20.0%) were phenotypically susceptible to EMB, while the remaining 16 (16/20; 80.0%) were phenotypically resistant to EMB.
Overall, this study describes the first utilization of Ion Torrent sequencing of full-length genes to characterize drug-resistant MT isolates. In conclusion, genotypic DST using Ion Torrent semiconductor NGS has the potential to provide useful information several weeks before phenotypic DST results. Therefore, genetic sequencing (NGS) seems to be a valuable tool for surveillance of drug resistance in TB. Before this takes place, there is a need to standardise the whole procedure, including DNA extraction, recording and reporting, and data interpretation.
"year": 2020,
"sha1": "adb6266a37f60a0525eaec75e0e01c01368b1705",
"oa_license": "CCBY",
"oa_url": "https://www.iimmun.ru/iimm/article/download/851/883",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "adb6266a37f60a0525eaec75e0e01c01368b1705",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
Doctors, Lawyers and Advance Care Planning: Time for Innovation to Work Together to Meet Client Needs.
Health organizations in Canada have invested considerable resources in strategies to improve knowledge and uptake of advance care planning (ACP). Yet barriers persist and many Canadians do not engage in the full range of ACP behaviours, including writing an advance directive and appointing a legally authorized decision-maker. Not engaging effectively in ACP disadvantages patients, their loved ones and their healthcare providers. This article advocates for greater collaboration between health and legal professionals to better support clients in ACP and presents a framework for action to build connections between these typically siloed professions.
The Canadian population is ageing, more people are living longer with chronic conditions and, importantly, many people say they want more control over their care, especially at the end of life. The recent report of the Advisory Panel on Healthcare Innovation (2015) urges more work to break down siloed professions and create person-centred teams. Doing so is necessary to find new ways to deal with the persistent inadequacies in healthcare systems, including in the delivery of chronic disease care, aged care and end-of-life care.
The call for change comes in well-researched reports, like that of the Advisory Panel, and also in personal stories, like Dr. Duncan Sinclair's essay (2015) on dignified care for the frail elderly and reflections on the deaths of two high-profile Canadian doctors, Dr. Donald Low and Dr. Larry Librach (Taylor and Martin 2014). Dr. Sinclair articulates his wishes -"respect for my continued dignity and personhood; staying in my home; no pain or suffering; and not being a burden to others" -that are described with remarkable consistency as what people want to prepare for a good death (Smith 2000). Dr. Sinclair also writes of his own sense of duty to "write those expectations down and put them on record" so others can meet their obligation "to follow my advance directive." Health organizations in Canada have invested considerable resources in strategies to improve knowledge of advance care planning (ACP) among health professionals and patients and to encourage people to think about and communicate their wishes for future healthcare (see, for example, the work of the National Advance Care Planning Task Group: www.advancecareplanning.ca/about-advance-care-planning/advance-care-planning-national-taskgroup). Despite these efforts, barriers persist: members of the public misunderstand ACP; professionals report they lack the time and confidence to broach ACP conversations with clients; and systems are inadequate to ensure plans are available when needed to guide healthcare decisions (Hagen et al. 2015; Lund et al. 2015). Many Canadians still do not engage in the full range of ACP behaviours, including writing an advance directive and appointing a substitute decision-maker to ensure their values, wishes and preferences are known (Teixeira et al. 2013).
Not engaging effectively in ACP disadvantages patients, their loved ones and their healthcare providers. Patients with an advance directive experience fewer medical interventions at the end of life, are less likely to be moved from their home or community care facility to a hospital and are less likely to die in a hospital (Lum et al. 2015). Substitute decision-makers often report a significant negative emotional burden (Wendler and Rid 2011), but this burden can be eased if the decision-maker is guided by the values and preferences expressed in an advance directive. A study of Canadian hospitals found alarmingly low rates of communication between healthcare providers and terminally ill patients about whether they had advance directives and about their wishes for care during their hospital admission. It was reported that "close to 70% of the physician orders concerning intensity of treatment (such as cardiopulmonary resuscitation and intubation) were discordant with current patient wishes. In any other area of medicine, this would be viewed as an egregious 'failure of communication' error" (Allison and Sudore 2013: 787).
A recent systematic review concluded that improvement in the uptake and effectiveness of ACP depends on the ability to "transform systemic processes across a range of institutional settings" (Lovell and Yates 2014: 1027). We agree and propose that one important systemic transformation is greater collaboration between health and legal professionals to better support their clients in ACP. As Dr. Sinclair and others observe, we need the "silos of our healthcare 'system' to work together in a boundary-free way" (Sinclair 2015) but we also need to recognize that older adults and people with chronic or terminal illnesses typically have intersecting medical and legal issues, and failing to address those issues in a coordinated way undermines their quality of life and care.
Three Reasons Why Health-Legal Collaboration Is Important
First, working within their professional silos, neither doctors nor lawyers are optimally effective in helping their clients with ACP. Uncertainties about the legal validity of advance directives and the authority of substitute decision-makers are barriers to doctors having ACP conversations with patients. Fears about liability for limiting care at the end of life are a further medico-legal obstacle. Lawyers also face challenges in helping their clients with ACP. A main criticism is that lawyers are too "transactional," helping clients prepare ACP documents, but not promoting the ongoing communication that is vital to ensuring the client's wishes are known and respected (Castillo et al. 2011). Physicians express frustration with directives that use vague phrases like "no heroic measures" and focus on the rarely encountered vegetative state, but do not provide guidance to inform the range of in-the-moment decisions needed in care at the end of life (Sudore and Fried 2010). Doctors encounter situations where decision-makers for an incompetent patient say they do not know what the patient would want (Shalowitz et al. 2006). Teams provide intensive medical interventions to sustain a patient's life only to be informed days or weeks later that a directive has been found that says the person would refuse these life-prolonging interventions.
Second, some patients are more likely to talk to a lawyer than a physician about ACP. A Saskatchewan survey found that nearly half of people who had a written care plan had sought help from a lawyer to prepare the document, while only 5% had consulted with a doctor (Goodridge et al. 2013). Similarly, patients at an Ontario family practice clinic were more likely to have discussed ACP with a lawyer than their family doctor (O'Sullivan et al. 2015). A national study of sick, elderly patients and their family members found that participants discussed their end-of-life-care wishes as often or more often with a lawyer than with a family doctor or medical specialist. These findings are not surprising when one considers that people seek help from lawyers to plan for their future in various ways such as writing a will and appointing someone to manage their finances. Planning for future healthcare is a logical topic for such discussions.
Third, each Canadian province and territory has its own legislation governing ACP (see Resource Library here: http://advancecareplanning.ca/resource-library/#resource-library|category:your-province-or-territory). Doing ACP right requires an accurate understanding of the rules and policies in effect in the jurisdiction where the patient lives and receives care.
Health-Legal Collaboration to Support Advance Care Planning: A Framework for Action
How can we break down the silos between doctors and lawyers to better support clients with ACP? We suggest a framework for interprofessional collaboration along a continuum that represents a gradually increasing degree of connection between health and legal professionals. Professionals can develop specific activities within this framework based on local needs and can move back and forth along the continuum. This framework advances the recommendation of other Canadian ACP researchers that "new forms of interprofessional collaboration should be considered to increase the interface between physicians and lawyers" (Goodridge et al. 2013: 4). We advocate that new approaches should be evaluated and findings disseminated through health and legal sector organizations to build a strong evidence base for collaborative practices.
Interventions to build professionals' skills and confidence in discussing ACP are typically implemented and evaluated in health settings; however, best practice approaches can be adapted for use by legal professionals, including resources such as conversation scripts, workbooks and training programs available on national and provincial websites (for example: www.advancecareplanning.ca/resource/acp-workbook/ and https://myhealth.alberta.ca/Alberta/Pages/advance-care-planning-resources.aspx). Organizations that produce ACP resources should disseminate them to the legal profession. Clients should receive common messages and information about ACP. For example, both health and legal professionals should promote ACP not as a one-time event but rather as a process of communication, and clients should be encouraged to share a care directive with key people who need to know their wishes.
Legal and health practitioners cooperate in interprofessional training
Continuing professional development events should bring legal and health professionals together for joint ACP training so they can learn from one another. Health professionals can increase their awareness of the law and lawyers can gain a better understanding of the practical realities of healthcare delivery. In Alberta, our research team recently delivered a continuing education event, Advance Care Planning: How Lawyers Can Help Their Clients. A palliative medicine specialist and a wills and estates lawyer shared their experiences of the challenges of doing effective ACP and suggested solutions and resources to an audience of Alberta legal professionals.
Legal and health practitioners collaborate in ACP clinics
Clinics would bring together lawyers and health professionals to lead ACP sessions for clients in community settings, aged care facilities and hospitals. This strategy can improve access to lawyers for people who are physically unable to attend law offices. Interprofessional clinics would facilitate the delivery of consistent messages and follow-up, and referral pathways can also be developed between legal and health organizations. Clinics can help identify clients who may need additional support, especially those with more complex situations, so they can access professional help before medical and legal crises develop.
Lawyers are integrated into healthcare settings and teams
The medical-legal partnership model (which is most developed in the US: http://medicallegalpartnership.org/) may be used to establish formal arrangements for lawyers to provide […] (2014: 184) and, indeed, high-quality evaluation data are crucial to sustain innovative models of collaborative service delivery beyond pilot projects. The Advisory Panel on Healthcare Innovation heard "laments about the pervasiveness of pilot projects in Canada" and noted the "failing … in the capacity of our healthcare systems to spread or scale up the best ideas from those projects" (2015: 27). Others have reflected on factors that support the spread of successful innovations to achieve integrated systems (Suter et al. 2009), especially collective work to engage and train key groups and shift cultures of practice (Zelmer 2015). Each increasing degree of connection in the health-legal collaboration framework presented here involves costs, benefits and a need to determine the cost-effectiveness of specific collaborative activities. Importantly, when using interprofessional approaches, members of each profession must meet their ethical duties to clients. These are not insurmountable barriers, however, as demonstrated by the success of medical-legal partnerships involving pro bono legal services (such as Pro Bono Law Ontario's Medical-Legal Partnerships for Children: www.pblo.org/volunteer/medical-legal-partnerships-children/). ACP requires more "interdisciplinary attention, conversations, health research and practice [and] joining up professions …" (Russell 2014). Just as researchers have asked health professionals about barriers and enablers to ACP, we need to find out similar information from lawyers. Our research team will soon report on a survey of lawyers in Alberta to find out more about their experiences with ACP, their perspectives on barriers and facilitators and the resources that would help them. To our knowledge, no such survey has been done elsewhere and the results will help stakeholders in health, legal and government sectors to understand better the role that lawyers play. The results will also provide an evidence base for strategies to advance the first two components of the collaboration framework, namely, how legal and health practitioners can use common best practices to assist clients and ways in which legal and health practitioners can cooperate in interprofessional training.
Healthcare providers and lawyers need not be estranged by different professional cultures and language. To realize the benefits of ACP, they ought to find a common ground in preparing people for serious illness and death, helping people communicate what is important to them and allowing them to guide their care even beyond a time when they can speak for themselves.
"year": 2016,
"sha1": "0456253c918e2dfbe20d518ff89ed5f7190d9a99",
"oa_license": "CCBYNC",
"oa_url": "https://www.longwoods.com/product/download/code/24944",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e61852524c88c8bc3efa0a01b4a7c42ea2a21f1",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
Machine Learning Classification of Time since BNT162b2 COVID-19 Vaccination Based on Array-Measured Antibody Activity
Vaccines trigger an immunological response that includes B and T cells, with B cells producing antibodies. SARS-CoV-2 immunity weakens over time after vaccination. Discovering key changes in antigen-reactive antibodies over time after vaccination could help improve vaccine efficiency. In this study, we collected data on blood antibody levels in a cohort of healthcare workers vaccinated for COVID-19 and obtained 73 antigens in samples from four groups according to the duration after vaccination, including 104 unvaccinated healthcare workers, 534 healthcare workers within 60 days after vaccination, 594 healthcare workers between 60 and 180 days after vaccination, and 141 healthcare workers over 180 days after vaccination. Our work was a reanalysis of the data originally collected at the University of California, Irvine. These data were obtained in Orange County, California, USA, with the collection process commencing in December 2020. The British variant (B.1.1.7), South African variant (B.1.351), and Brazilian/Japanese variant (P.1) were the most prevalent strains during the sampling period. An efficient machine learning-based framework containing four feature selection methods (least absolute shrinkage and selection operator, light gradient boosting machine, Monte Carlo feature selection, and maximum relevance minimum redundancy) and four classification algorithms (decision tree, k-nearest neighbor, random forest, and support vector machine) was designed to select essential antibodies against specific antigens. Several efficient classifiers with a weighted F1 value around 0.75 were constructed. The antigen microarray used for identifying antibody levels in the coronavirus features ten distinct SARS-CoV-2 antigens, comprising various segments of both nucleocapsid protein (NP) and spike protein (S). This study revealed that S1 + S2, S1.mFcTag, S1.HisTag, S1, S2, Spike.RBD.His.Bac, Spike.RBD.rFc, and S1.RBD.mFc were the most highly ranked among all features, where S1 and S2 are the subunits of Spike, and the suffixes represent the tagging information of different recombinant proteins. Meanwhile, classification rules were obtained from the optimal decision tree to explain quantitatively the roles of antigens in the classification. This study identified antibodies associated with decreased clinical immunity based on populations with different time spans after vaccination. These antibodies have important implications for maintaining long-term immunity to SARS-CoV-2.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the novel coronavirus strain causing Coronavirus Disease 2019 (COVID-19) [1]. On 11 March 2020, COVID-19 was finally classified as a pandemic by the World Health Organization (WHO) [2]. More than 6.3 million people have died from COVID-19 globally, according to the WHO, and more than 500 million cases have been confirmed. Additionally, more than 11 billion doses of vaccine have been distributed [3]. Fever, sore throat, dry cough, and pneumonia symptoms are among the clinical manifestations of COVID-19 [4]. During the span of this study, the Omicron variant was prevalent. The Omicron variant, which evolved from the Alpha variant, has increased infectivity compared to earlier variants [5]. Increased infectiousness and antibody evasion have been linked to the mutations in the SARS-CoV-2 spike protein [6].
Scientists have developed COVID-19 vaccines to combat the pandemic. To date, several types of vaccines against SARS-CoV-2 have been developed and widely used worldwide, such as the RNA-based type, non-replicating viral vector type, and protein-based type [7]. These and other common vaccines require one to three doses, depending on the type [7][8][9][10]. BNT162b2 contains mRNA encoding a full-length stable S glycoprotein that elicits dose-dependent SARS-CoV-2 neutralizing antibody titers [11]. Two doses of BNT162b2 exhibit approximately 95% protection against severe illness [9,[12][13][14][15]. As of early 2023, all vaccines have efficacy in reducing COVID-19 severe cases and death, while their efficiency in controlling viral infection and mild symptoms is not very satisfactory [9,10,16,17]. Vaccine coverage must be extended to all countries while maintaining and improving public health control mechanisms to control COVID-19 morbidity and mortality worldwide.
However, the efficacy of the BNT162b2 mRNA vaccine against SARS-CoV-2 decreases over time [11,18]. In addition, there have been reports of vaccine-induced protection waning progressively due to the emergence of new variants [19,20]. Whether the decline in vaccine protection is linked to a decrease in virus resistance remains unclear. Vaccines trigger a complicated immunological response that includes B and T cells, with B cells producing antibodies [18,21,22]. Spike (S), membrane (M), nucleocapsid (N), and envelope (E) are the four structural proteins encoded by SARS-CoV-2 [23][24][25]. Most of the antibodies generated by vaccination are directed against the S protein, specifically the receptor-binding domain (RBD) [7,26]. A recent study of antibody alterations following two doses of inactivated COVID-19 vaccine, separated into three groups based on immunization duration, revealed that the levels of antibodies (anti-Spike IgG) decrease with time [27]. While existing studies have begun to chart the territory of antibody profiles post-COVID-19 vaccination [28][29][30][31], the detailed interplay between antibody and vaccination remains incompletely revealed. More comprehensive research is urgently needed to pinpoint the most critical antibodies that neutralize the virus effectively and determine their duration in the human body. This knowledge is paramount for enhancing vaccine strategies, potentially developing superior treatments, and guiding public health policies regarding booster shots and containment measures, ultimately fortifying our fight against the pandemic.
In the current study, we investigated the influence of vaccines on antibody synthesis and monitored changes in antibody levels in the body over time following vaccination. Data on blood antibody levels in a cohort of volunteers vaccinated with COVID-19 vaccines were sourced from the Gene Expression Omnibus (GEO). The GEO data used for our analyses were originally measured using antigen microarrays [32]. The volunteers were examined for their reaction before receiving the mRNA vaccine (Pfizer or Moderna), shortly after receiving the first and second doses, and up to 6 months later. Vaccine-induced antibodies are mainly directed against the S1 and RBD domains of the S protein and to a lesser extent against the S2 domain. Antibody levels increased significantly 2 months after vaccination and began to decline after 6 months. Seventy-three antigens and 1373 volunteer records were involved in the study of Hosseinian et al. [32]. In the present study, 1373 samples were classified into four groups according to the time of vaccination: before vaccination, within 60 days of vaccination, 60-180 days after vaccination, and over 180 days after vaccination. Multiple machine learning methods were integrated to identify key antigen-reactive antibodies that changed after COVID-19 vaccination over time and to establish quantitative rules for accurate prediction. Several essential antigen-reactive antibodies and classification rules were obtained, some of which were extensively analyzed. The results of this study could serve as a basis for developing effective vaccines with long-lasting protection and elucidating the defense mechanisms of COVID-19 vaccines.
Data and Preprocessing
Individualized antibody reactivity levels for SARS-CoV-2 antigens induced by mRNA vaccines were quantified using a coronavirus antigen microarray (CoVAM) following the procedure described by Hosseinian et al. [32]. Data were sourced from the GEO database under accession number GSE199668. The samples were divided into four classes according to the time of vaccination: 104 unvaccinated healthcare workers, 534 healthcare workers within 60 days after vaccination, 594 healthcare workers between 60 and 180 days after vaccination, and 141 healthcare workers over 180 days after vaccination [32]. In terms of features, the CoVAM contained 10 SARS-CoV-2 antigens, including nucleocapsid protein (NP) and several varying fragments of the S protein, as well as 4 SARS, 3 MERS, 12 common CoV, 8 influenza, and 36 other antigens. In terms of feature naming, the virus name was placed at the beginning to distinguish between the different sources of antibodies, followed by the protein name and then the specific tag name. The feature names and their descriptions are provided in Table S1. The normalized fluorescence intensity was used to characterize the expression levels of antigen-reactive antibodies in blood. The above features and four classes comprised the classification problem. By investigating this problem, essential features can be obtained.
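For illustration, the following Python sketch shows one way the four time-span classes could be assigned from per-sample metadata; the file name and the 'days_since_vaccination' column are hypothetical placeholders, as the actual GSE199668 annotation fields may be named differently.

import pandas as pd

def assign_class(days):
    # Map days since vaccination to the four classes used in this study;
    # a missing value is interpreted as an unvaccinated sample.
    if pd.isna(days):
        return "unvaccinated"
    if days <= 60:
        return "0-60 days"
    if days <= 180:
        return "60-180 days"
    return ">180 days"

meta = pd.read_csv("gse199668_sample_metadata.csv")  # hypothetical file name
meta["time_class"] = meta["days_since_vaccination"].apply(assign_class)
print(meta["time_class"].value_counts())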
Feature Selection Methods
Several features were adopted to represent samples. Some of them were important for classifying samples into different classes, whereas others were not. In machine learning, the important features can be extracted by feature selection methods. To date, many such methods have been proposed, and it is a challenge to select the correct one for a given dataset. Generally, a single method can only output a part of the essential features, as each method has its limitations. In this study, we adopted four feature selection methods: least absolute shrinkage and selection operator (LASSO) [33,34], light gradient boosting machine (LightGBM) [35], Monte Carlo feature selection (MCFS) [36] and maximum relevance minimum redundancy (mRMR) [37]. These methods were designed following different principles, meaning that they can view the given dataset from different aspects. Thus, more essential features can be obtained by applying them to the same dataset. Their brief descriptions are as follows.
Least Absolute Shrinkage and Selection Operator. The LASSO is a statistical method used for regularization and feature selection [33,34]. This method reduces the regression coefficients of the redundant features to zero. The feature selection phase occurs after the reduction, where non-zero-valued features are sorted by the absolute value of their coefficients. This study used the LASSO program implemented in Scikit-learn [38], which was run with default parameters.
Light Gradient Boosting Machine. The LightGBM is a free and open-source distributed gradient boosting framework for machine learning that was created by Microsoft [35]. It performs regression and classification by transforming weak decision tree (DT) classifiers into strong learners. In addition to regression and classification, this method ranks features according to their importance, measured by the number of times they are picked up for building DTs. A high ranking is given to features that are used frequently. LightGBM was implemented through a Python module, which can be obtained at https://lightgbm.readthedocs.io/en/latest/ (accessed on 10 May 2020). This program was also performed under default parameters. Monte Carlo Feature Selection. The MCFS is a useful tool for selecting informative features according to their relative importance in building DTs [36,[39][40][41]. Subsets of features are randomly constructed many times. For each subset, some samples are randomly selected for training, and the others are used for testing. For instance, a DT is built based on two out of three of the samples that are randomly selected, and the rest is used for testing, which is also repeated many times. The relative importance (RI) of each feature can then be estimated by considering the number of times they are used to construct the DTs, the information gain of the features, and the weighted accuracy of the DTs. Finally, features can be sorted according to their RI scores. The MCFS program adopted in this study was retrieved from http://www.ipipan.eu/staff/m.draminski/mcfs.html (accessed on 4 June 2019). Additionally, it was executed using default parameters.
Maximum Relevance Minimum Redundancy. The mRMR is a classic and powerful feature selection method [37]. It measures the importance of features according to two aspects: (1) relevance to class variable, (2) redundancy to other features. The relevance and redundancy are all measured by mutual information (MI). Similar to the above methods, mRMR also generates a feature list to indicate the importance of features. At first, the list is empty. Then, a loop procedure is executed. In each round, one feature with maximum relevance to class variable and minimum redundancy to features in the current list is selected from all remaining features, which is appended to the current list. The loop procedure stops until all features have been put into the list. The mRMR program used in this study was obtained from http://home.penglab.com/proj/mRMR/ (accessed on 2 May 2018) and it was executed with the default settings.
The above four feature selection methods were applied to the dataset mentioned in Section 2.1, resulting in four feature lists, which were termed as LASSO, LightGBM, MCFS and mRMR feature lists.
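As an example of how one of these rankings can be reproduced, the sketch below ranks antigen features by the absolute value of LASSO coefficients with scikit-learn. Treating the class index as a numeric target is one plausible reading of running default-parameter Lasso on this problem; X, y and feature_names are assumed inputs, and the penalty strength may need adjusting in practice.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def lasso_rank(X, y, feature_names):
    # Standardize antigens so the L1 penalty treats them on a common scale.
    X_std = StandardScaler().fit_transform(X)
    model = Lasso().fit(X_std, y)  # default alpha = 1.0
    order = np.argsort(-np.abs(model.coef_))  # largest |coefficient| first
    # Features shrunk exactly to zero are dropped from the ranking.
    return [(feature_names[i], float(model.coef_[i]))
            for i in order if model.coef_[i] != 0]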
Incremental Feature Selection
Although the feature selection methods can sort features into lists, a gap remains for extracting essential features: it is not easy to determine how many top features should be selected. In view of this, incremental feature selection (IFS) was employed in this study [42]. It can find the optimal number of features for building the classifiers with the best performance [43][44][45]. In the present study, a step interval of one was applied to each given list in the IFS method. Under this setting, a series of feature subsets was constructed in the following manner: the first subset contained the first feature in the list, the second contained the top two features, and so on. A classifier was built for each feature subset based on one classification algorithm and samples encoded by the features in this subset. All classifiers were tested by tenfold cross-validation [46]. According to the evaluation results, the classifier providing the highest performance was selected. It was termed the optimal classifier, and the optimal feature set was defined as the corresponding feature subset.
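A minimal sketch of the IFS loop is given below, using an RF classifier and tenfold cross-validation; ranked_idx is the assumed list of feature indices taken from one of the four feature lists.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

def incremental_feature_selection(X, y, ranked_idx):
    clf = RandomForestClassifier(random_state=0)
    best_k, best_f1 = 0, -1.0
    for k in range(1, len(ranked_idx) + 1):
        X_sub = X[:, ranked_idx[:k]]                    # top-k ranked features
        pred = cross_val_predict(clf, X_sub, y, cv=10)  # tenfold cross-validation
        f1 = f1_score(y, pred, average="weighted")
        if f1 > best_f1:
            best_k, best_f1 = k, f1
    return best_k, best_f1  # size of the optimal feature set and its weighted F1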
Synthetic Minority Oversampling Technique
As mentioned in Section 2.1, there are significant differences in the size of the four classes. The classifier built on such datasets may generate bias. This should be tackled by using some advanced computational methods. Here, we selected the synthetic minority oversampling technique (SMOTE) [47][48][49]. The idea of this method is to generate synthetic samples for each minority class, thereby balancing the dataset. In detail, it randomly chooses a sample from one minority class and determines its k nearest neighbors in the same class. One of its neighbors is randomly selected and a synthetic sample is generated by the linear combination of the sample and its chosen neighbor. This newly generated sample is put into the minority class, thereby enlarging its size. This procedure can be performed several rounds until the minority class contains the same number of samples as the majority class. Herein, we used the SMOTE tool from https://github.com/scikit-learncontrib/imbalanced-learn (accessed on 24 March 2020) with default parameters.
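A minimal usage sketch with the imbalanced-learn implementation is shown below, using a toy dataset whose class proportions roughly mirror the four groups in this study; the generated data are purely illustrative. Note that in a cross-validation setting, oversampling should be applied only to the training folds to avoid information leakage.

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced four-class problem standing in for the antigen data.
X, y = make_classification(n_samples=1373, n_features=73, n_informative=10,
                           n_classes=4, weights=[0.08, 0.39, 0.43, 0.10],
                           random_state=0)

# Balance the four classes by synthesizing minority-class samples.
X_balanced, y_balanced = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_balanced))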
Classification Algorithms
In the IFS method, one classification algorithm should be employed for building classifiers. This study adopted four classification algorithms: DT [50], K-nearest neighbor (KNN) [51], support vector machine (SVM) [52], and random forest (RF) [53]. These algorithms have wide applications in tackling various medical and biological problems [54][55][56][57][58][59][60]. DT uses a tree-like model to build classifiers, which can be extended by maximizing Gini index or information gain in each tree node [50]. The KNN algorithm finds the nearest neighbors of a new sample and categorizes the new sample into one that is shared by most of its nearest neighbors [51]. The SVM can map samples into a high-dimensional space and finds a hyperplane that distinctly classifies samples in different classes. The test samples are then mapped into the same space and the category to which they belong are predicted based on which side of the hyperplane they fall [52]. A RF consists of a large number of individual DTs that operate as an ensemble [53]. Each decision tree in an RF generates class predictions on a test sample, and the class with the most votes is taken as the prediction result.
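For reference, the sketch below instantiates the four algorithms with scikit-learn defaults so that any of them can be swapped into the IFS loop above; no tuning beyond defaults is implied by the text.

from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# The four classification algorithms used in the IFS method.
classifiers = {
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),  # RBF kernel by default
    "RF": RandomForestClassifier(random_state=0),
}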
Performance Assessment
The weighted F1 is a widely used measurement in multi-class classification and was selected as the key measurement to assess the performance of the classifiers. To calculate it, the F1-measure of each class is computed first. It is defined as the harmonic mean of two other measurements, recall and precision (F1 = 2 × precision × recall / (precision + recall)), where recall is the proportion of correctly predicted positive samples among all positive samples and precision is the proportion of correctly predicted positive samples among all predicted positive samples. The weighted F1 is the weighted average of the F1-measure values over the different classes, where the weight for a class is defined as the proportion of samples in that class.
In addition, other measurements were also employed to give a full display of the performance of classifiers. The first one was Macro F1, which is another way to integrate the F1-measure values of different classes, which is defined as the mean of all F1-measure values. The second one was prediction accuracy (ACC) which is the most classic measurement to assess the performance of classifiers. It is defined as the ratio of the number of correctly predicted samples and the overall sample number. However, when the dataset is imbalanced, ACC is not accurate enough. Matthew correlation coefficients (MCC) [61] is a more balanced measurement than ACC. Two matrices are used to calculate MCC. One is to store the true class of each sample and the other one is to store the predicted class of each sample. MCC assesses the relationship between these two matrices.
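All four measurements are available in scikit-learn, as the brief sketch below shows; y_true and y_pred are assumed arrays of true and predicted class labels.

from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def evaluate(y_true, y_pred):
    # The four measurements reported for each optimal classifier.
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "MCC": matthews_corrcoef(y_true, y_pred),  # multi-class generalization
        "macro F1": f1_score(y_true, y_pred, average="macro"),
        "weighted F1": f1_score(y_true, y_pred, average="weighted"),
    }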
Extraction of Essential Features for Each Class
Based on the IFS method, some essential features can be obtained. However, it is not clear which class they are highly related to. In view of this, we reconstructed a dataset for each class and applied the above feature selection methods to it. For one class, one dataset was generated, in which samples in this class were considered as positive samples and other samples were regarded as negative samples. Then, LASSO, LightGBM, MCFS, and mRMR were adopted to investigate this dataset, resulting in four feature lists. From each list, the top 20 features were picked, thereby obtaining four feature subsets. By investigating the overlap of these feature subsets, some essential features that occurred in multiple subsets can be obtained, which were deemed to be highly related to the given class.
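The relabeling step can be written compactly as below; y is the assumed array of four-class labels, and the class name string is a placeholder for however the classes are encoded.

import numpy as np

def one_vs_rest_labels(y, target_class):
    # Samples of the target class become positives (1); all others negatives (0).
    return np.where(np.asarray(y) == target_class, 1, 0)

# Example: positives are healthcare workers over 180 days after vaccination.
# y_binary = one_vs_rest_labels(y, ">180 days")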
Results
In this study, a dataset on the antibody reactivity levels for SARS-CoV-2 antigens induced by mRNA vaccines was investigated. The overall computational framework is illustrated in Figure 1. The results in each step are presented in this section.
Figure 1. The overall computational framework. Antigen features were ranked in terms of feature importance by four feature selection algorithms, including LASSO, LightGBM, mRMR, and MCFS. This procedure generated four feature lists, which were fed into the IFS method. Efficient classifiers were set up, which used the optimal feature subset from each list. At the same time, classification rules were also built. The obtained optimal feature subsets were investigated to identify antigens recurring in multiple subsets. Lastly, a biological analysis was performed on the above-obtained antigens and classification rules.
Results of Feature Selection Methods
According to the framework, the four feature selection methods were used to rank the 73 antigens by the degree to which they contributed to the classification. These lists are provided in Table S2. For ease of description, they are called the LASSO, LightGBM, MCFS and mRMR feature lists.
IFS Results and Feature Intersection
As mentioned above, four feature lists were obtained. Each list was put into the IFS method one by one. DT, KNN, RF, and SVM were adopted in the IFS method. The performance of each classification algorithm under some top features in each list is listed in Table S3. Using the weighted F1 as the major measurement, we compared the performance of the classifiers using the same classification algorithm and feature list. Several IFS curves were generated by plotting the weighted F1 on the y-axis and the number of features on the x-axis, as shown in Figures 2 and 3.
For the LASSO feature list, Figure 2A shows the IFS curves based on the four classification algorithms. When the top 47, 73, 21 and 73 features in the list were used, DT, KNN, RF and SVM yielded their highest weighted F1 values of 0.702, 0.711, 0.735 and 0.733, respectively. Accordingly, the optimal DT, KNN, RF and SVM classifiers were built with the corresponding top features. Their detailed performance, including ACC, MCC, macro F1 and weighted F1, is provided in Table 1. Evidently, the optimal RF classifier was better than the other three optimal classifiers.

For the LightGBM feature list, the four curves obtained are illustrated in Figure 2B. From this figure, four optimal classifiers can be obtained, which adopted the top 40, 18, 31 and 35 features in the list. They generated weighted F1 values of 0.717, 0.744, 0.742 and 0.758. Table 1 also lists the performance of these optimal classifiers. Clearly, the optimal SVM classifier was slightly better than the other three optimal classifiers.
For the MCFS feature list, the IFS results on this list were summarized as four IFS curves, as shown in Figure 3A. It can be observed that the optimal DT/KNN/RF/SVM classifier adopted the top 17/20/23/41 features in this list. The detailed performance of these optimal classifiers is provided in Table 1. Evidently, the optimal SVM classifier was the best among four optimal classifiers, which produced a weighted F1 of 0.765.
As for the last mRMR feature list, Figure 3B displays the IFS curves on the four classification algorithms. The highest weighted F1 values for the classification algorithms were 0.728 (DT), 0.737 (KNN), 0.745 (RF) and 0.758 (SVM), respectively. This performance was obtained by using the top 14, 24, 26 and 30 features in the corresponding feature list. Thus, the optimal DT, KNN, RF and SVM classifiers can be set up using these features. Table 1 lists their detailed performance. The optimal SVM classifier yielded better performance than the other three optimal classifiers. According to the above results, we can identify the best classifier for each of the four feature lists. In detail, the best classifier on the LASSO feature list was the optimal RF classifier, whereas it was the optimal SVM classifier on the other three lists. We picked the optimal feature subsets for further investigation. A Venn diagram was plotted for these subsets, as illustrated in Figure 4. The intersection results of these optimal feature subsets are available in Table S4. Antigens appearing in several feature subsets were identified as important by multiple feature selection methods and can play important roles in differentiating healthcare workers at different time spans after vaccination. The biological significance of some antigens (features) is discussed in Section 4.
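The overlaps summarized in Figure 4 and Table S4 amount to simple set intersections; a sketch is given below, where the four sets are assumed to hold the antigen names in each best classifier's optimal feature subset.

from itertools import combinations

subsets = {"LASSO": lasso_set, "LightGBM": lgbm_set,
           "MCFS": mcfs_set, "mRMR": mrmr_set}  # assumed sets of antigen names

# Pairwise overlaps between optimal feature subsets.
for a, b in combinations(subsets, 2):
    print(a, "and", b, "share", len(subsets[a] & subsets[b]), "antigens")

# Antigens selected by all four methods.
print(set.intersection(*subsets.values()))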
Essential Features for Each Class
The essential features obtained above may not be highly related to one class. To extract the essential features for each class, four datasets corresponding to the four classes were constructed, as described in Section 2.7. Then, LASSO, LightGBM, MCFS and mRMR were applied to each dataset. Four feature lists were obtained. The top 20 features were selected for taking the intersection. A Venn diagram was drawn for each class, as illustrated in Figure 5. The specific antigen names are listed in Table S5. For the first class, namely, unvaccinated healthcare workers, antigens such as SARS.CoV.2.S1.RBD.mFc and SARS.CoV.S1.HisTag were identified by all four feature selection methods. For the second class, namely, healthcare workers within 60 days after vaccination, SARS.CoV.2.S1.mFcTag and HuIgM.0.30 were deemed to be important by all feature selection methods. For the third class, namely, healthcare workers between 60-180 days after vaccination, three features (SARS.CoV.2.S1.mFcTag, HuIgM.0.30, and SARS.CoV.2.S1.RBD.mFc) were identified to be essential. For the fourth class, namely, healthcare workers over 180 days after vaccination, MERS.CoV.S1.RBD.367.606.rFcTag, Flu.B_Mal/.HA1, and a-HuIgG_0.03 were screened out by all methods. The discussion on the importance and functionality of some features will be provided in detail in Section 4.
Classification Rules
It can be observed from Table 1 that the optimal DT classifier was generally inferior to the other three optimal classifiers on the same feature list. However, the DT classifier has a great merit not shared by the other three classifiers: it can provide a group of classification rules, which makes the classification procedure completely transparent. The optimal DT classifiers on the four feature lists adopted the top 47, 40, 17 and 14 features in the corresponding lists, respectively. All healthcare workers were represented by these features. Four trees were built by DT, from which four rule groups were established. These rules are provided in Table S6; the four groups contained 190, 183, 202, and 226 classification rules, respectively. Each rule is composed of antigen features and their associated fluorescence intensity thresholds, which explain how a feature's high or low fluorescence intensity influences the capacity to identify the classes of samples. A detailed discussion of some quantitative rules can be found in Section 4.
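Rules of this kind can be read directly off a fitted tree; the sketch below uses scikit-learn's export_text, where each root-to-leaf path corresponds to one rule over antigen fluorescence-intensity thresholds. X_opt, y and feature_names are assumed to hold the optimal feature subset, the class labels and the antigen names.

from sklearn.tree import DecisionTreeClassifier, export_text

tree = DecisionTreeClassifier(random_state=0).fit(X_opt, y)
# Each printed path from the root to a leaf is one IF-THEN classification rule.
print(export_text(tree, feature_names=list(feature_names)))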
Discussion
We identified a set of antigen-reactive antibodies as potential features that could reveal the effect of COVID-19 vaccines on anti-viral immune activation and reflect changes in antibody levels in the body over time after vaccination by using data on serum antibody levels in volunteers after receiving COVID-19 vaccines. This confirms the potential of such features to contribute to the development of effective vaccines with long-lasting protection. The serum antibody data we analyzed were detected by a coronavirus antigen microarray (CoVAM). The microarray approach has been extensively applied in SARS-CoV-2 research due to its excellent sensitivity and specificity [62][63][64]. Recently, this method was frequently employed for measuring antibody levels following mRNA vaccination [30,65]. Recent publications have found that some identified features, as well as the relevant quantification rules, are linked to vaccine-induced anti-viral immune activation and duration.
Key Features for Identifying the Effect of COVID-19 Vaccines on Antibody Production
Using these computational methods, we discovered a set of unique viral antigen-reactive antibodies selected by at least three methods. The antigens we analyzed are from epidemic coronaviruses, including SARS-CoV-2, SARS-CoV, MERS-CoV, common cold coronaviruses, and multiple subtypes of influenza. S1, S2, and RBD are components of SARS-CoV-2's spike protein, which the virus uses to infect cells. Moreover, 'tags' were attached to these proteins to make them easier to study. For example, 'mFcTag' is a piece from a mouse antibody, and 'HisTag' is a short chain of histidine residues, both used for tracking and purifying the protein. These top-specific antibodies are closely related to the components of various COVID-19 vaccines, suggesting the protective effect of these vaccines. In the present study, we analyzed 13 specific antibodies, listed in Table 2. In this section, we compared the changes in significant viral antigen-reactive antibodies in the serum of vaccinated and unvaccinated individuals. We also discussed the plausibility and cross-immunization of important antibodies (including non-SARS-CoV-2 antibodies) induced by COVID-19 vaccines. The top eight features identified were from SARS-CoV-2: S1 + S2, S1.mFcTag, S1.HisTag, S1, S2, Spike.RBD.His.Bac, Spike.RBD.rFc, and S1.RBD.mFc. The compositions of COVID-19 vaccines are listed in a recent paper comparing these vaccines [7]. The S protein of SARS-CoV-2 was chosen as a promising target by the majority of COVID-19 vaccines because blocking the interaction between the RBD of the S protein and human angiotensin-converting enzyme 2 (ACE2) is effective in preventing infection [66,67]. In addition, the RBD is part of the S protein's S1 subunit [68,69]. Suthar et al. highlighted that the S protein of SARS-CoV-2, particularly the RBD, stimulates the production of neutralizing antibodies (NAbs) [70]. Similarly, an animal study revealed that RBD-specific IgG accounts for half of the antibody responses induced by S proteins. As a result, given that popular COVID-19 vaccines such as BNT162b2 encode the S protein of SARS-CoV-2, they can stimulate the production of S protein (including S1 and S2 subunits) and RBD-specific antibodies.
SARS.CoV.S1.HisTag and SARS.CoV.S1.RBD.HisTag are top features from SARS-CoV. SARS-CoV and SARS-CoV-2 both belong to lineage B of the β-coronaviruses and share 79% of their gene sequences [71,72], and their S proteins share 76% amino acid identity [73]. SARS-CoV-2 and SARS-CoV share the same host cell receptor ACE2 and are structurally similar; thus, they may exhibit some degree of cross-immunity [67]. These data suggest the effectiveness of SARS-CoV-reactive antibodies against SARS-CoV-2. These results were further confirmed by Wec et al., who isolated several antibodies from a SARS survivor that neutralized coronaviruses such as SARS-CoV-2 [74]. Min et al. identified several monoclonal antibodies against the SARS-CoV S protein or RBD that are cross-immunoreactive with SARS-CoV-2 [26], which agrees with our predicted features.
MERS.CoV.S1.RBD.367.606.rFcTag from MERS-CoV was the next feature identified. MERS-CoV also belongs to the β-coronaviruses and shares a 50% sequence similarity to SARS-CoV-2 [71]; it is a coronavirus with a high lethality rate. The S protein of MERS-CoV and the RBD in it share some similarities to SARS-CoV-2, suggesting that the cross-immunity of the RBD-specific antibody to the S protein of MERS-CoV against SARS-CoV-2 is less than that of the SARS-CoV-specific antibody, but still exists. The last two identified features, hCoV.HKU1.NP and hCoV.229E.S1, are antigens from the β coronavirus hCoV-HKU1 and the α coronavirus hCoV-229E, respectively. Cross-immunization with SARS-CoV-2 is possible due to their close relationship. HCoVs are composed of proteins called spike (S), membrane (M), envelope (E), and nucleocapsid (N) [75]. In addition to the S protein, the N protein is an important antibody target [70,76], implying that hCoV.HKU1.NP-specific antibodies contribute to SARS-CoV-2 prevention. Although hCoV-229E is less closely related to SARS-CoV-2 than the other coronaviruses mentioned above, the potential preventive effect of its specific antibodies against COVID-19 cannot be ruled out. However, given that hCoV-HKU1 and hCoV-229E are common coronaviruses, the detection of these antibodies in the sera of volunteers may be attributed to their previous infection.
Research on pan-coronavirus vaccines has attracted increasing attention as a means to counter novel SARS-CoV-2 variants. Some studies reported that conserved regions on the inner surface of the RBD are potential targets for pan-coronavirus vaccines [77]. New studies of mRNA vaccines against a variety of the more common coronaviruses are underway [78]. In summary, positive serum reactivity to non-SARS-CoV-2 antigens could be due to the ability of certain antibodies induced by COVID-19 vaccines to act on other coronaviruses. Therefore, the non-SARS-CoV-2 antigens mentioned above can be seen as useful features.
Features Related to Time since Vaccination for Determining the Duration of Specific Antibodies after COVID-19 Vaccination
The essential antigen-reactive antibodies were identified using computational methods and divided into four classes based on vaccination time. The top features from each subclass were selected for discussion. Figure 6 shows the values of these top features in each of the four classes to visualize the changes in the antibodies that target specific antigens over time. Unlike the previous section, this section focuses on the changes in important antibodies at different periods after vaccination according to subclasses, including unvaccinated cases. As shown in Figure 6A, the first identified feature was SARS.CoV.2.S1 + S2. Based on the overall structure of the S protein of SARS-CoV-2 [80], the specificity of the SARS.CoV.2.S1 + S2-reactive antibodies was the lowest among the four selected features.

Figure 6. Fluorescence intensity distribution of top antigens in four subclasses. Box plots show trends of four important antigen-reactive antibodies according to each subclass assigned by time after vaccination: (A) S1 + S2, (B) S1.mFcTag, (C) S2, (D) Spike.RBD.His.Bac. Numbers on the abscissa represent the indices of the four classes. Classes 1-4 represent unvaccinated healthcare workers, healthcare workers within 60 days after vaccination, healthcare workers between 60 and 180 days after vaccination, and healthcare workers over 180 days after vaccination, respectively.

The S protein of SARS-CoV-2 is currently the antigen targeted by the majority of COVID-19 vaccines [7,11,16,27,79]. The top features we identified are contained in the S protein of SARS-CoV-2, and antibodies against them all change significantly over time after vaccination.
According to the changes in the value of each feature in class 1 (unvaccinated healthcare workers), SARS.CoV.2.S1 + S2 and SARS.CoV.2.S2 showed elevated levels, whereas SARS.CoV.2.S1.mFcTag and SARS.CoV.2.Spike.RBD.His.Bac were almost undetectable in serum. Thus, antibodies against the S2 subunit of the S protein were produced earlier after immunization and resulted in relevant specific protection. However, volunteers infected with SARS-CoV-2 before COVID-19 vaccination may also increase the levels of SARS.CoV.2.S1 + S2 and SARS.CoV.2.S2.
Comparison of the levels of the four features in class 2 (healthcare workers within 60 days after vaccination) revealed that SARS.CoV.2.S1.mFcTag showed the most significant increase, and the values were relatively concentrated within a month after vaccination. The values of SARS.CoV.2.S2 increased less significantly and were less consistent than those of SARS.CoV.2.S1.mFcTag. A study of healthcare workers found a 14-day boost in serum anti-S antibodies, followed by a significant drop in anti-S antibody levels until 42 days after vaccination [81]. Therefore, other antigens contained within the S protein of SARS-CoV-2 can also elicit elevated antibody levels within 42 days after vaccination, which agrees with the results of the present study.
Based on the trend from class 2 (healthcare workers within 60 days after vaccination) to class 4 (healthcare workers over 180 days after vaccination), the values of all features showed varying degrees of decline after 60 days. Among them, the values of SARS.CoV.2.Spike.RBD.His.Bac and SARS.CoV.2.S1.mFcTag declined slower than those of the other features and stimulated some stable antibodies that existed for a longer period. By contrast, the levels of SARS.CoV.2.S1 + S2 and SARS.CoV.2.S2 decreased more rapidly, suggesting that the S2 subunit is less ideal as an antibody target than the S1 subunit and RBD after COVID-19 vaccination. Similarly, previous studies reported that the antibodies identified in the serum following immunization are predominantly anti-S or anti-RBD antibodies [9,10,14] which appears to support this hypothesis.
The levels of features in class 4 (healthcare workers over 180 days after vaccination) were maintained at high levels, except for SARS.CoV.2.S, which was lower. This result indicates that the features found after COVID-19 immunization can persist for more than 6 months (180 days). The immunogenicity of mRNA-1273 lasts for at least 3 months [82], whereas that of BNT162b2 lasts for at least 2 months [12]. The varied compositions based on the type of vaccines can lead to variation in the duration of specific antibody presence. However, the four features identified imply that the S-protein and RBD-specific antibodies are present in the serum for long periods in general.
Rules for Quantitative Time after COVID-19 Vaccination and Antibody Levels
In addition to the qualitative features, a set of quantitative rules for accurate classification at the time after COVID-19 vaccination were established. All criteria were linked to specific antibody levels, and they were selected using at least two sorting methods. Some top features have been validated as having the ability to classify samples. In the present study, we selected the most typical rules for each time group for further discussion. Table 3 lists all of the rules, followed by a comprehensive analysis. Rule 0 applies four criteria to identify unvaccinated samples. The thresholds for SARS.CoV.2.S1.mFcTag and SARS.CoV.2.S1.HisTag are outlined in Table 3. The low levels of anti-S1 antibodies suggested by these values are consistent with the lack of vaccination. Studies indicate that even a single vaccine dose can trigger a robust anti-S1/2 antibody response in SARS-CoV-2-infected individuals [83], and that antibody responses are not immediate following a single vaccine dose [13], validating the accuracy of these criteria. The third criterion, SARS.CoV.2.S1.RBD.mFc, should be within the range set out in Table 1, typically low in unvaccinated individuals. Vaccination raises anti-RBD IgG levels in the body [84], so this range helps to distinguish vaccinated individuals. The final criterion is hCoV.OC43.HE, an antigen from a common coronavirus that causes similar symptoms to the common cold, whose threshold is listed in Table 3. If its serum level is above the threshold specified in Table 1, it suggests prior exposure to hCoV.OC43, or possibly transient vaccine-induced cross-reactive antibodies to other HCoVs [85]. Over time, vaccinations prompt the production of more precisely targeted antibodies [18], which further aids in excluding vaccinated individuals.
Rule 1 incorporates three criteria for identifying individuals 0 to 60 days post-vaccination. The first criterion is SARS.CoV.2.S1.mFcTag, which should not exceed the limit outlined in Table 3. High levels of anti-S/RBD antibodies are typically observed 8 weeks after mRNA-1273 or BNT162b2 vaccination [14], and given that most vaccines generate antibody responses against S proteins, including the S1 subunit, an increase in anti-S1 antibodies is expected post-vaccination. However, due to the finite antibody production by vaccines [86], a maximum value is set within this period [9]. The second and third criteria refer to SARS.CoV.2.S2 and SARS.CoV.2.S1 + S2. Their serum levels should exceed the thresholds specified in Table 3. As the S1 and S2 subunits are included in the S protein, changes in the level of S1 + S2-specific antibodies should have a strong correlation with anti-S antibodies. A recent study reported that the levels of anti-S antibodies in serum significantly increase 14 days after vaccination [81], supporting the high thresholds for SARS.CoV.2.S1 + S2 in this rule. Anti-S2 antibody levels also increase significantly post-vaccination [87], although their reactivity is generally lower than that of anti-S1 and anti-RBD responses [13]. These results confirm that the high value of SARS.CoV.2.S2 facilitates the differentiation, while the lower bound for SARS.CoV.2.S2 in Rule 1 can be lower than that for SARS.CoV.2.S1 + S2.
Rule 2 utilizes three criteria to identify individuals 60-180 days post-vaccination. The first two criteria, SARS.CoV.2.S1.mFcTag and SARS.CoV.2.S1.RBD.mFc, should have serum levels above the threshold set in Table 3 and, for SARS.CoV.2.S1.RBD.mFc, within the range specified there. The vaccine's protective capability is associated with antibody count, and research indicates that COVID-19 vaccine efficacy decreases from 1 to 6 months post-vaccination [19], suggesting a corresponding decline in antigen-reactive antibodies. Although no study has yet confirmed the range levels outlined in our rule, it is reasonable to predict that SARS.CoV.2.S1.mFcTag levels would be lower than in Rule 1, while SARS.CoV.2.S1.RBD.mFc levels would be higher than in Rule 0. The final criterion, SARS.CoV.S1.HisTag, stands out from the first two as it pertains to an antigen from SARS-CoV, not SARS-CoV-2. Given the substantial sequence similarity between SARS-CoV and SARS-CoV-2 [88], the existence of cross-reactive non-specific epitopes led us to include SARS.CoV.S1.HisTag as a criterion in Rule 2. Lv et al. reported that some SARS-CoV-2-infected individuals can create cross-reactive antibodies that bind to the RBD of SARS-CoV [89], implying that COVID-19 vaccination can stimulate similar cross-reactive antibodies in individuals.
The final rule (Rule 3), for people who have been vaccinated for more than 180 days, sets thresholds for SARS.CoV.2.S1.mFcTag and SARS.CoV.2.S1.RBD.mFc as set out in Table 3. These values are similar to Rule 2, probably because the vaccine-induced production of these antibodies drops to its lowest level after 180 days [90,91]. In contrast to Rule 2, this rule sets a cap on SARS.CoV.2.S1.mFcTag levels, indicating an overall decrease. This helps rule out those vaccinated for COVID-19 within the past 180 days. Similarly, higher predicted SARS.CoV.2.S1.mFcTag and SARS.CoV.2.S1.RBD.mFc levels in this rule indicate the vaccine stimulates lasting anti-S1/RBD antibodies, effectively distinguishing unvaccinated individuals.
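To make the structure of these rules concrete, the sketch below shows how such threshold criteria could be applied in sequence to classify a sample. The actual cutoffs live in Table 3 and are not reproduced in the text, so every numeric threshold here is a placeholder; the function illustrates the rule logic rather than the paper's implementation.

```python
# Sketch of applying the Rule 0-3 criteria in order. All numeric cutoffs are
# PLACEHOLDERS: the real thresholds are in Table 3 of the paper.
THRESH = {
    "s1_mfctag_low": 0.5, "s1_histag_low": 0.5,                 # Rule 0 (hypothetical)
    "rbd_mfc_lo": 1.0, "rbd_mfc_hi": 5.0, "oc43_he_min": 2.0,   # Rule 0 (hypothetical)
    "s1_mfctag_rule1_max": 30.0, "s2_min": 5.0, "s1s2_min": 8.0,
    "s1_mfctag_rule2_min": 10.0, "rbd_rule2_lo": 2.0, "rbd_rule2_hi": 20.0,
    "sars_cov_s1_min": 1.0,
}

def classify(sample: dict) -> str:
    """Apply Rules 0-3 in order; `sample` maps antigen names to serum levels."""
    t = THRESH
    # Rule 0: unvaccinated -- low anti-S1 on both probes, anti-RBD within a low
    # band, and hCoV.OC43.HE signal present (direction per the discussion above).
    if (sample["SARS.CoV.2.S1.mFcTag"] <= t["s1_mfctag_low"]
            and sample["SARS.CoV.2.S1.HisTag"] <= t["s1_histag_low"]
            and t["rbd_mfc_lo"] <= sample["SARS.CoV.2.S1.RBD.mFc"] <= t["rbd_mfc_hi"]
            and sample["hCoV.OC43.HE"] >= t["oc43_he_min"]):
        return "class 1: unvaccinated"
    # Rule 1: 0-60 days -- capped anti-S1, elevated anti-S2 and anti-S1+S2.
    if (sample["SARS.CoV.2.S1.mFcTag"] <= t["s1_mfctag_rule1_max"]
            and sample["SARS.CoV.2.S2"] >= t["s2_min"]
            and sample["SARS.CoV.2.S1 + S2"] >= t["s1s2_min"]):
        return "class 2: 0-60 days after vaccination"
    # Rule 2: 60-180 days -- anti-S1 above a floor, anti-RBD within a band,
    # plus a cross-reactive SARS-CoV S1 signal.
    if (sample["SARS.CoV.2.S1.mFcTag"] >= t["s1_mfctag_rule2_min"]
            and t["rbd_rule2_lo"] <= sample["SARS.CoV.2.S1.RBD.mFc"] <= t["rbd_rule2_hi"]
            and sample["SARS.CoV.S1.HisTag"] >= t["sars_cov_s1_min"]):
        return "class 3: 60-180 days after vaccination"
    # Rule 3: >180 days -- similar floors to Rule 2 but with a cap on anti-S1.
    return "class 4: >180 days after vaccination"
```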
Limitations of this Study
There are some limitations in this study. First, several machine learning algorithms, including feature selection and classification algorithms, were adopted. The selection of essential antigens relied heavily on the performance of the classification algorithms. It is known that an efficient classifier may not adopt two similar features: if two such features were both essential antigens, one would be omitted, i.e., some essential antigens may not be detected by our machine learning-based framework. Second, a major limitation of the microarray is its limited antibody coverage, meaning that only specific antibodies can be measured, according to the predefined set of antigens on the array surface. Further study is required to take more COVID-19-related antibodies into consideration. Finally, the main purpose of this study was to discover essential antigens that were highly related to the classification of healthcare workers or to one class, rather than to develop a machine learning classifier. Therefore, no train/test split was conducted on the dataset, and the accuracy metrics reported here have not been validated on an independent test set.
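Should a deployable classifier be the goal, the last limitation could be addressed with a held-out evaluation. A minimal sketch (with synthetic stand-in data, since the real antibody matrix is not reproduced here) might look as follows:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 192))   # stand-in for the antibody-level matrix
y = rng.integers(0, 4, size=200)  # stand-in for the four time-class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```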
Conclusions
Combining data on serum antibody levels in volunteers after COVID-19 vaccination with advanced machine learning methods, a set of antigen-reactive antibodies was extracted, which can reveal the effect of the vaccine on antiviral immune activation and reflect changes in antibody levels in the body over time after vaccination. In the computational framework, four efficient feature-selection algorithms, namely LASSO, LightGBM, MCFS, and mRMR, were used to rank the features according to their contributions to the classification. Then, through the IFS method, the optimal features for four classification algorithms (DT, KNN, RF, SVM) on each feature list were confirmed. Subsequently, the overlapping features were identified by taking the intersection of the optimal feature subsets corresponding to the four feature selection algorithms, such as SARS.CoV.2.S1.mFcTag, SARS.CoV.2.Spike.RBD.His.Bac, and SARS.CoV.2.S1 + S2. Meanwhile, we determined the specific features that were highly related to one class. In addition, classification rules were constructed, which can quantitatively explain the important roles of features in the classification. Our findings have the potential to improve vaccine efficacy assessment and enable personalized vaccination strategies, ultimately contributing to more effective public health measures against COVID-19 and similar viral outbreaks.
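As a concrete illustration of the intersection step just described, the snippet below mimics it on small made-up inputs: the feature names are taken from the text above, but the rankings and the IFS-optimal subset sizes are invented for demonstration (the real ones are in Tables S2-S4).

```python
# Each selector yields a ranked feature list; IFS picks an optimal prefix of
# each list; the overlapping features are those retained by all four methods.
ranked = {
    "LASSO":    ["SARS.CoV.2.S1.mFcTag", "SARS.CoV.2.Spike.RBD.His.Bac",
                 "SARS.CoV.2.S1 + S2", "hCoV.OC43.HE"],
    "LightGBM": ["SARS.CoV.2.Spike.RBD.His.Bac", "SARS.CoV.2.S1.mFcTag",
                 "SARS.CoV.2.S2", "SARS.CoV.2.S1 + S2"],
    "MCFS":     ["SARS.CoV.2.S1.mFcTag", "SARS.CoV.2.S1 + S2",
                 "SARS.CoV.2.Spike.RBD.His.Bac"],
    "mRMR":     ["SARS.CoV.2.S1 + S2", "SARS.CoV.2.S1.mFcTag",
                 "SARS.CoV.2.Spike.RBD.His.Bac", "SARS.CoV.S1.HisTag"],
}
best_k = {"LASSO": 3, "LightGBM": 4, "MCFS": 3, "mRMR": 3}  # hypothetical IFS sizes

optimal_subsets = [set(feats[:best_k[name]]) for name, feats in ranked.items()]
overlap = set.intersection(*optimal_subsets)
print(sorted(overlap))
# -> ['SARS.CoV.2.S1 + S2', 'SARS.CoV.2.S1.mFcTag', 'SARS.CoV.2.Spike.RBD.His.Bac']
```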
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/life13061304/s1, Table S1: Feature names and their descriptions; Table S2: Feature lists obtained by LASSO, LightGBM, MCFS, and mRMR methods; Table S3: IFS results with different classification algorithms on four feature lists; Table S4: Intersection results of the optimal feature subsets identified by LASSO, LightGBM, MCFS, and mRMR methods; Table S5: Results of the intersection of top 20 features identified by LASSO, LightGBM, MCFS, and mRMR methods for each class; Table S6: Classification rules generated by the optimal DT classifiers on different feature lists.
Data Availability Statement:
The data presented in this study are openly available in the Gene Expression Omnibus database, reference number [32].
"year": 2023,
"sha1": "00f36106ec0cbf4e240de7fd116ddc33bfbff058",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/life13061304",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f92030727bdb0edb5161770470109182dfb7345c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Modern Immunotherapy of Adult B-lineage Acute Lymphoblastic Leukemia with Monoclonal Antibodies and Chimeric Antigen Receptor Modified T Cells
The introduction of newer cytotoxic monoclonal antibodies and chimeric antigen receptor modified T cells is opening a new age in the management of B-lineage adult acute lymphoblastic leukemia. This therapeutic change must be very positively acknowledged because of the limits of intensive chemotherapy programs and allogeneic stem cell transplantation. In fact, with these traditional therapeutic tools the cure can be achieved in only 40-50% of the patients. The failure rates are particularly high in the elderly, in patients with post-induction persistence of minimal residual disease and especially in refractory/relapsed disease. The place of the novel immunotherapeutics in improving the outcome of adult patients with B-lineage acute lymphoblastic leukemia is reviewed.
Introduction. Adult acute lymphoblastic leukemia (ALL) is biologically heterogeneous and can be subdivided into several clinico-prognostic entities. 1 The primary distinction is between B-cell and T-cell precursor (BCP, TCP) ALL, and in the former group between Philadelphia chromosome/BCR-ABL positive (Ph+) and negative (Ph−) ALL. The overall outcome of adults with ALL is inferior to that of childhood ALL. Basically, survival is strictly related to a complete remission (CR) achieved early on, followed by effective consolidation/maintenance therapy in standard-risk (SR) patients and an allogeneic stem cell transplantation (SCT) in high-risk (HR) patients. 2,3 In adolescent and adult patients with Ph− ALL in an age range between 15-18 and 60-65 years, the CR rate is 90% and the overall survival (OS) rate is 40-50% at 3-5 years, with significant differences among age and risk groups. 4,5 In Ph+ ALL, results are suboptimal too, despite the improvement due to the introduction of tyrosine kinase inhibitors. 6 In Ph− ALL, better OS and disease-free survival (DFS) rates are increasingly reported using pediatric-inspired schedules, at least in patients aged up to 40-50 years. 7 The outcome is worse in patients older than 55 years, with smaller proportions of long-term survivors. 8 Moreover, during CR induction about 5% of the patients succumb to early complications, mainly infectious, and the risk of non-relapse mortality is still rather high after an allogeneic SCT (15% on average). Overall, the common perception is that treatment intensity cannot be increased any further beyond this point in adult patients without incurring unacceptably high rates of treatment-related toxicity and mortality. Instead, new alternative therapeutics should be developed with a view to reducing the toxicity burden as well as improving the antileukemic efficacy of available antileukemic programs. In addition, the relapse rate in adult ALL remains high, and salvage therapy is at present unsatisfactory, with an effective rescue rate of 10-20% in most studies.
The most recent therapeutic innovations are represented by newer monoclonal antibodies (MoAb) and chimeric antigen receptor (CAR) modified T cells. These new, highly selective weapons target specific ALL cell antigens and would exhibit an improved activity-versus-toxicity ratio compared to chemotherapy or transplantation. In addition, they could be used sequentially or in combination with either treatment modality, to potentiate the overall treatment efficacy. Thus far, MoAb-based therapy and CAR T cell therapy have been developed mainly for B-lineage but not T-lineage ALL. They have been utilized in all B-lineage subsets (BCP and mature B/Burkitt ALL; Ph− and Ph+ ALL), and demonstrated considerable activity in relapsed/refractory disease (R/R ALL). Therefore, they need to be exploited in untreated ALL, especially in high-risk subsets such as the elderly and patients with high post-induction levels of minimal residual disease (MRD). Here we review the evidence supporting the use of therapeutic MoAbs and CAR T cells in BCP ALL. Additional data can be found elsewhere. [9][10][11] Results from childhood studies will be reported whenever appropriate to illustrate specific points of interest.
Modern Immunotherapy with Monoclonal Antibodies.
The challenge of novel immunotherapeutics is to improve survival without increasing toxicity. With MoAbs, the different and manageable toxicity profile only occasionally overlaps with or worsens that associated with chemotherapy and SCT. For instance, mucositis and gastrointestinal toxicity, usually of high concern with intensive chemotherapy and SCT, are not typical of MoAb therapy. The apparent lack of cross-resistance with standard antileukemic drugs constitutes a further theoretical advantage. The third major issue is whether MoAb therapy might substitute, at least partially, for some intensive chemotherapy elements and/or SCT in patients in CR1. Prospective clinical trials should address this most important topic.
ALL cells express several membrane antigens. The ideal therapeutic target should be consistently expressed in every ALL subset, by all blast cells, at high intensity, be stable upon MoAb challenge, and play a crucial role in metabolic events. At present, no MoAb satisfies all these requirements, and target expression on 20% or more of the ALL cell population is considered enough to start a MoAb trial with some chance of success.
According to their structural characteristics and mechanism of action, MoAbs for ALL therapy belong to three major categories: naked antibodies, T-cell engaging bispecific single-chain (BiTE®) antibodies, and immunoconjugates/immunotoxins. The several trials launched with the most representative and therapeutically promising MoAbs, with or without associated chemotherapy, are summarized in Figure 1 (frontline studies) and Figure 2 (studies in R/R ALL) and detailed below.
Naked Antibodies.
Rituximab and Ofatumumab: anti-CD20 MoAbs. The CD20 receptor is the target of the chimeric monoclonal antibody rituximab. CD20 is expressed by approximately 40% of BCP ALL cases and virtually every case of mature B-ALL (Burkitt leukemia). The CD20 receptor functions as a calcium channel playing a role in cell cycle and differentiation. Rituximab works as a classical MoAb, reacting at one terminus (Fab/Fv) with the CD20 epitope on the cell membrane, while the other end (Fc) binds to complement and the Fc receptors of effector cells. The ensuing MoAb-target cell interaction activates complement-mediated cell lysis and/or antibody-dependent cellular cytotoxicity (ADCC). Importantly, CD20 expression in CD20+ ALL is upregulated by corticosteroids, which are commonly given in the prephase and continued for several days during induction therapy. 10,12 Little is known about rituximab activity as a single agent in ALL and, contrary to other MoAbs, experience in R/R ALL is very limited. One study indicated a response rate of 44% in 9 patients treated with a rituximab-chemotherapy combination. 13 Rituximab has instead been used in first-line phase II and III programs, and is used in Burkitt leukemia/lymphoma as an adjunct to aggressive rotational drug regimens.
The usual rituximab schedule in these studies was 375 mg/m² for four to eight doses, throughout the induction and consolidation blocks. A randomized trial in Burkitt lymphoma confirmed the usefulness of adding rituximab to intensive chemotherapy blocks, in both HIV-negative and HIV-positive patients. 14,15 Several other Burkitt leukemia/lymphoma regimens reported high response rates, with a curability rate consistently above 50%, most often between 70-80%, and close to 90-100% in fit patients younger than 55-60 years. [14][15][16][17][18][19][20][21] This means an average improvement of 20% or more over prior results obtained with similar chemotherapy regimens without rituximab, with no substantial difference in toxic side effects. Nowadays rituximab is part of the standard of care for Burkitt leukemia/lymphoma. Regarding rituximab in the frontline therapy of BCP ALL, there have been two randomized trials and two phase II trials in Ph− ALL, all evaluating its role in addition to induction and consolidation chemotherapy. In the GRAALL (France/Belgium/Switzerland) phase III trial, CD20+ BCP ALL patients (CD20 expression >20%) were randomized with a 2x2 design concurrently testing an augmented cyclophosphamide dose; whereas in the randomized MRC (United Kingdom) trial, all BCP ALL patients were randomized to assess the role of concomitant corticosteroid therapy in upregulating CD20 expression in CD20− patients. The results from these two controlled studies are not yet known and are awaited with interest. As to phase II trials, in the MD Anderson Hospital study 22 two sequential CD20+ BCP ALL patient cohorts receiving Hyper-CVAD chemotherapy with or without rituximab were analyzed. In patients aged 60 or less, the CR rate in the rituximab arm was 95% and 3-year survival 75% (n=68), compared with 47% without rituximab (n=46; P=0.003), with a proportional increase in MRD negativity evaluated by flow cytometry (81% vs 58%). A subsequent update showed for the rituximab-treated group a CR duration of 69% at 3 years with an OS of 71%. 23 In the small group of patients older than 60 (n=16), the CR rate was high (88%) but the OS was only 29%. The other first-line phase II trial was from the GMALL (Germany) group, with rituximab added to the 07/2003 chemotherapy schema. 24 This report compared 181 rituximab-treated patients with 82 pre-rituximab patients. In SR patients (n=196), the CR rate was 94% with rituximab and 91% without; however, the minimal residual disease (MRD) response, evaluated molecularly at week 16 (<10⁻⁴), and 5-year survival were both improved in the rituximab group, from 59% to 90% and from 57% to 71%, respectively. Similarly, in HR patients (n=67), the CR rate was 81% with rituximab and 88% without, while the MRD response and 5-year survival were improved from 40% to 64% and from 36% to 55%, respectively. Toxicities were comparable in the two cohorts. In summary, rituximab could improve the long-term outcome of patients with CD20+ BCP ALL and seems to enhance the MRD response to induction and early consolidation therapy. This issue arouses considerable interest, given the strict relationship between MRD and outcome in adult ALL and the dramatically worse outcome of MRD+ CD20+ ALL as opposed to MRD− CD20+ ALL. 25 Although the CD20 antigen is expressed in a relevant proportion of Ph+ ALL cases, there are no data on the therapeutic role of this MoAb in this subset. The most significant data relative to the use of rituximab in B-lineage ALL and Burkitt leukemia/lymphoma are summarized in Table 1.
Ofatumumab is another anti-CD20 MoAb, which binds to a different epitope of the CD20 molecule than rituximab, resulting in greater complement-dependent cytotoxicity. One study evaluated ofatumumab added to the Hyper-CVAD regimen as frontline therapy for adult patients with CD20+ ALL. 26 With this regimen, 22 of 23 evaluable patients achieved CR (95%) and were MRD-negative (by flow cytometry) after cycle 1. One-year remission and OS rates were both 91%.
Epratuzumab: anti-CD22 MoAb. Epratuzumab is a humanized MoAb targeting CD22. The CD22 antigen is a transmembrane sialoglycoprotein expressed specifically by B lymphoid cells. It is expressed on 100% of mature B-cell ALL and up to 90% of BCP ALL. 27 CD22 regulates B-cell activation and the interaction of B-cells with T-cells and antigen-presenting cells. Because of this, CD22 is a good therapeutic target in BCP ALL. CD22 is rapidly internalized after binding the MoAb, so that exposure to epratuzumab results in downregulation of B-cell activation and signaling, with proliferation inhibition. 28 In a phase I protocol of the Children's Oncology Group (COG) applied to children with R/R BCP ALL, 15 children received four doses of epratuzumab twice weekly for two weeks, then four weekly doses with a standard reinduction chemotherapy. MRD was evaluated by flow cytometry, and the absence of MRD was defined as complete molecular remission (CMR). At the end of the six-week reinduction therapy, nine patients were in CR, and seven of them were in CMR. Two patients had dose-limiting toxicity: one grade 4 seizure and one grade 3 transaminase elevation. A subsequent phase II trial (COG ADVL04P2) 28,29 enrolled 114 patients between 2-30 years of age in first relapse, comparing two different epratuzumab schedules in addition to traditional reinduction chemotherapy. The CR rate was comparable in the two study arms (epratuzumab weekly x 4 doses versus epratuzumab twice weekly x 8 doses: CR 65% vs. 66%) and not significantly higher than the historical control. The CMR rate was, however, higher in epratuzumab-treated patients (42%) than in historical controls.
The adult trial SWOG S0910 30 evaluated 32 R/R ALL patients treated with epratuzumab (4 weekly doses) in association with clofarabine and cytarabine. The CR rate was 45%, significantly higher than the 17% CR rate observed in a similar trial with clofarabine/cytarabine without epratuzumab. 31 Two other recent reports, available only in abstract form, concerned a phase I escalation study of 90-yttrium-labeled epratuzumab tetraxetan 32 and epratuzumab added to vincristine/dexamethasone in R/R ALL. 33 In the first study (n=17), 2 of 6 patients treated with a dose of 10 mCi/m² achieved CR. In the second trial, which included 26 elderly patients, four patients achieved CR, and one a CR with incomplete platelet recovery. These are promising results obtained in very poor-risk patient populations. Epratuzumab is well tolerated. The most common adverse events were myelosuppression and mild to moderate infusion reactions such as fever and nausea and, occasionally, seizures and transaminase elevation.
Alemtuzumab: anti-CD52 MoAb. Alemtuzumab is a genetically engineered humanized anti-CD52 MoAb. CD52 is a glycosylphosphatidylinositol-anchored membrane glycoprotein expressed by 70-80% of both BCP ALL and T-ALL, making it an attractive therapeutic target. Alemtuzumab has demonstrated significant activity in chronic lymphocytic leukemia but was not found effective as a single agent in acute myeloid leukemia and ALL.
In R/R ALL, alemtuzumab was tested in a small adult series of 6 patients (3 with Ph+ ALL) at the dose of 30 mg given by the subcutaneous route three times weekly for 4-12 weeks (no CR), and was also scarcely effective in a pediatric trial of 13 patients (one CR). 34,35 In untreated patients, alemtuzumab was administered as a single agent in a CALGB trial 36 after three intensive chemotherapy modules, in an attempt to lower post-remission MRD. In 11 evaluable patients, there was a 1-log median MRD reduction and a noteworthy DFS (median 53 months), but follow-up was provided only for 14 surviving patients. Of note, the use of alemtuzumab was associated with CMV infection in 8 of 24 patients and herpes virus infection in 5 patients.
For these reasons, alemtuzumab, albeit partially effective, is unlikely to be developed any further in ALL therapy. It causes a drastic reduction of lymphocytes, including CD4+ and CD8+ T cells, predisposing to opportunistic infections with CMV and other viruses and fungi. 36 Therefore, it requires careful patient monitoring with serial CMV DNA determinations for pre-emptive therapy, as well as adequate anti-infectious prophylaxis.
Immunotoxins and Immunoconjugate Antibodies
Inotuzumab Ozogamicin: anti-CD22 MoAb. Inotuzumab ozogamicin (IO) is an anti-CD22 MoAb conjugated to calicheamicin, a powerful enediyne cytotoxin. Calicheamicin, a natural product of Micromonospora echinospora, 37 is a potent cytotoxic agent enabling cell killing even in the presence of relatively few target sites. Although CD22 expression is required, the IO-related apoptotic effect is entirely mediated by calicheamicin and not by CD22 signaling. IO is rapidly internalized and delivers calicheamicin intracellularly. The toxin binds the minor groove of DNA, breaking double-stranded DNA in a sequence-specific manner.
Forty-nine patients with R/R ALL were treated in a phase I/II trial at the MD Anderson Hospital with single-agent IO. 38 Their median age was 36 years (range 6-80 years). All patients had greater than 50% CD22 expression on lymphoblasts, and the majority were heavily pretreated. A starting dose of 1.3 mg/m² was used, subsequently increased to 1.8 mg/m². The CR rate was 18% and another 39% of the patients had a CR with incomplete hematologic recovery (CRi), for an overall response rate of 57%. Among the 27 patients who achieved a hematological response, 17 (63%) attained an MRD remission (flow cytometry). Median response duration was six months, with a trend to improved survival for the 13 patients treated at first salvage. This study, updated to include 90 patients in total, confirmed the previous results (CR 19%, CRi 39%); furthermore, the non-hematological toxicity was reduced using the weekly schedule. 39 Thus, with IO a morphological CR was obtained in more than 50% of the subjects treated, in association with a complete MRD response in the majority of these cases. Most responses were short-lived without proceeding to transplantation (n=36); however, obtaining a CR with an associated MRD response, the absence of a complex karyotype or of t(4;11), t(9;22) or an abnormal chromosome 17, and a disease status at first salvage were predictive of an improved outcome, with survival of 42+ months. 40 A negative MRD was observed in 72% of the patients achieving CR/CRi. A new trial for R/R ALL incorporated IO into a reduced-intensity Hyper-CVAD regimen. 41 Of 35 patients treated, 18 (51%) entered CR, 6 (17%) CRi and 1 (3%) marrow CR, and 12 of them could proceed to allogeneic SCT. Median survival of responders was 14 months and was not reached in patients at first salvage. The outcome of IO-treated patients proceeding to allogeneic SCT was examined separately. 42 The study analyzed the outcome of 26 such patients, of whom 23 were in CR at the time of transplant (15 MRD-negative) and three were not. MRD-negative patients had the best outcome, with a 1-year survival of 42%. However, non-relapse mortality was high in relation to liver toxicity (40% at six months), with 5 deaths from veno-occlusive disease. These results could be improved by choosing less hepatotoxic conditioning regimens and concomitant drugs. In conclusion, in these single-center studies IO brought more patients with R/R ALL to allotransplantation (45%) than chemotherapy, but the salvage rate was affected by transplant-related toxicity, indicating the need for a careful design of all treatment components. An international phase III study comparing IO with standard reinduction therapy in R/R ALL is nearing conclusion.
In untreated patients, IO was added to mini-hyper-CVAD (dose reductions and no anthracycline) in elderly ALL. 43 Twenty-seven patients aged 60-79 years (median 69 years) were treated, and 25 (96%) entered CR, all with negative flow cytometry MRD. The 1-year survival was 81%, superior to the historical control group. Although the follow-up is short, these are outstanding induction results obtained in a high-risk patient population. Another US Intergroup trial is planned in patients aged 18-39 years, adding IO to the C10403 chemotherapy backbone.
On the toxicity side, IO is myelotoxic, as reflected by the high rates of CRi. Grade 3-4 non-hematologic adverse events included drug-related fever (18%) with hypotension, hyperbilirubinemia (4%) and transaminase elevation (1%). All the events except the increased bilirubin were reversible. A biopsy demonstrated liver fibrosis in two patients. Veno-occlusive disease of the liver was reported in 5/22 patients after allogeneic SCT. 39 However, 4 of these 5 patients had received a preparative regimen of clofarabine/thiotepa. Furthermore, two distinct reports suggest a benefit with respect to liver toxicity with weekly rather than single-dose IO administration. 39,44 The most significant data relative to the use of IO in B-lineage ALL are summarized in Table 2.
BL22 and CAT-8015: anti-CD22 MoAbs. Because the CD22 antigen is rapidly internalized, it is an attractive target for immunotoxins. 45 The first-generation immunotoxin BL22 demonstrated cytotoxicity in vitro and in vivo in a phase 1 trial. A decrease of leukemia blasts was observed in 16 of 23 ALL patients, but no CR was obtained. 46 Three of these patients developed neutralizing antibodies, 47 but no allergic reaction, vascular leak or hemolytic uremic syndrome occurred. A second-generation immunotoxin, CAT-8015, was subsequently developed, 45 trying to reduce non-specific toxicities, increase MoAb stability and improve activity. 46 In one small trial, 4 out of 9 treated patients achieved a CR. 10 Another phase I trial showed a CR in 4 out of 19 heavily pretreated children and young adults, plus one partial response and 8 hematological improvements. 47 Resistance due to low levels of DPH4 mRNA and target protein has been described. 48 Further analysis of the DPH4 gene promoter demonstrated hypermethylation in the resistant cells. This mechanism could be reversed by hypomethylating agents such as 5-azacytidine.
Combotox: dual anti-CD19/CD22 MoAb. Combotox is a combination of anti-CD19 and anti-CD22 deglycosylated ricin-A chain immunotoxins. 49 This treatment has the advantage of targeting two different antigens. In a pediatric trial, 3 of 17 R/R patients achieved a CR. 50 The dose-limiting toxicity was a vascular leak syndrome, caused by endothelial damage due to a unique amino acid motif in the ricin-A toxin. Preclinical studies in a murine ALL model demonstrated synergy with the sequential administration of Combotox and cytarabine.
SAR3419 and anti-B4-blocked ricin: anti-CD19 MoAbs. SAR3419 is an anti-CD19 humanized MoAb linked to a highly powerful tubulin inhibitor, maytansinoid DM4, eliciting ADCC. 51 SAR3419 is internalized and then routed to lysosomes, whereupon it is degraded to yield the active drug. In preclinical models, an extended duration of remission was documented when SAR3419 was administered after an induction regimen as maintenance therapy. 52 A phase II trial in R/R ALL is ongoing. Reversible corneal toxicity was described as the dose-limiting toxicity.
The anti-B4-blocked ricin MoAb was used in a frontline CALGB study in patients with CD19+ ALL instead of high-dose cytarabine consolidation, which was reserved for CD19-negative ALL patients. 53 Forty-six patients were treated. Although feasible, this treatment did not result in an improved outcome and/or MRD response compared to the other patients.
Blinatumomab: anti-CD3/CD19 construct. Blinatumomab is the first member of the novel class of BiTE® antibodies. It is a bispecific single-chain antibody construct which simultaneously reacts with CD19 and CD3 epitopes, activating CD3+ T cells and re-directing their cytotoxicity against CD19+ ALL cells. Activated T cells induce perforin-mediated death of the target cells. 54 CD19 is the most commonly expressed antigen in BCP ALL, with the highest density of expression and a slower internalization rate compared with CD22. Blinatumomab is given by continuous intravenous infusion at 9 µg/d on days 1-7 and 28 µg/d on days 8-28, using a portable infusion device. A two-week interval follows each cycle. Although blinatumomab is active at very low concentrations, the prolonged infusion is necessary to recruit and expand effector T-cells and achieve therapeutic efficacy in the bone marrow. 55 The first pilot trial was conducted in MRD+ ALL. 54,56 MRD+ ALL is a high-risk condition, recognized by the persistence of the molecular signal of the disease in remission marrows, usually at a level of 10⁻⁴ or greater after induction/early consolidation therapy, or by the reappearance of the MRD signal during follow-up. 57 MRD positivity heralds clinical relapse within a few weeks or months but, besides that, it is a more favorable setting than R/R ALL, because MRD+ patients still exhibit a good performance status and harbor a significantly lower disease burden. Blinatumomab was administered to 21 MRD+ patients as a four-week continuous infusion; the median patient age was 47 years and 7 patients had poor-risk cytogenetics (5 Ph+ and 2 with mixed lineage leukemia). Ten of 20 evaluable patients achieved a major MRD response <10⁻⁴, including 3 of 5 Ph+ ALL (60%). Most notably, 9 out of 11 patients with an MRD >10⁻² achieved a molecular remission, and 6 out of 11 not having a subsequent allogeneic SCT remained in CR after a median follow-up of 30 months, compared to 6 of 9 patients receiving an allogeneic SCT. Treatment toxicity consisted of an early cytokine release syndrome (pyrexia, chills), plus increased transaminases, albumin reduction, hypokalemia and an acute neurological syndrome (seizure, syncope, headache, somnolence) which was reversible in all cases. Owing to these encouraging results, a larger confirmatory phase II trial was performed (n=116), with 106 patients evaluable in an early report. 58 Rates of complete MRD response were 78% after one cycle and 80% after two cycles, with no difference across baseline age, line of treatment and MRD burden categories. Toxicity included pyrexia (88%), headache (38%), tremor/chills (29%/25%), and nausea/vomiting (22%). Serious adverse events occurred in 5% of the patients (including ataxia/aphasia/encephalopathy).
Blinatumomab was extensively used in Ph− R/R ALL. 59 In a first exploratory study, 17 of 25 evaluable patients achieved CR or CRi within two cycles of treatment. Median response duration was 7.1 months and median OS 9.7 months. Three patients relapsed with a CD19-negative clone. A larger confirmatory study was performed in 189 heavily pretreated, high-risk subjects, either primary refractory or in first relapse after a CR lasting <12 months, failing allogeneic SCT, or in subsequent relapse. 60 Forty-three percent of the patients achieved CR/CRi (79% after the first cycle). Among responsive patients with evaluable MRD data (n=73), 51 (70%) had a complete MRD response, and 9 reached an MRD <10⁻⁴. 61 Median DFS was 6.9 months in patients with an MRD response and 2.3 months in patients without an MRD response. Moreover, 40% of responders underwent an allogeneic SCT after blinatumomab only. 62 The rate of serious adverse events affecting the central nervous system was 2-3%. A phase III study comparing blinatumomab with standard "investigator choice" chemotherapy in R/R ALL is currently underway (study 311, n=400). Another smaller trial in R/R Ph+ ALL is near completion in the fall of 2014 (study 216, n=41). Of note, in blinatumomab studies some of the relapses occurred at extramedullary sites or were related to the expansion of a CD19− ALL clone.
Besides the several studies in MRD+ and R/R adult ALL, a randomized trial by the ECOG in patients 30-70 years of age (study 1910, n=360) will compare early consolidation therapy with or without blinatumomab in newly diagnosed Ph− BCP ALL.
The most significant data relative to the use of blinatumomab in B-lineage ALL are summarized in Table 3.
Modern Immunotherapy with CAR T Cells.
A breakthrough in cellular therapy for BCP ALL. Normal autologous or allogeneic T cells can be harvested from patients or normal donors, genetically modified to express a chimeric antigen receptor (CAR) recognizing specific targets on leukemic cells, then expanded and reinfused into the patient to exert antileukemic activity. A CAR consists of a single-chain variable antibody fragment highly specific to a tumor antigen, which is fused to the transmembrane domain and a T cell signaling moiety. 63 The resulting receptor, when expressed on the surface of a T cell, mediates binding of the target tumor antigen and activates a signal to the T cell, inducing target cell lysis. Second- and third-generation CAR T cells present a single-chain variable fragment that resides outside of the T cell membrane and is linked to stimulatory molecules inside the T cell. The general schema for the production of CAR T cells and their in vivo activity against CD19+ ALL cells is shown in Figure 3. This strategy was first shown to induce durable remissions in chronic lymphocytic leukemia. 64 Preliminary results of this approach used in two children with R/R ALL were published. 65 In one case there was a sustained remission. Other recent pre-clinical studies support additional genetic modifications to achieve optimal clinical efficacy. [66][67][68] Altogether, there is accumulating evidence pointing to the relevant activity of CD19-CAR T cells and CD22-CAR T cells in R/R ALL. These patients are usually prepared with immunosuppressive therapy (cyclophosphamide and fludarabine) before receiving the CAR T cell infusion. A breakthrough publication 69 demonstrated the potential of this treatment in 5 adult patients with R/R ALL (age range 23-66 years). At the time of CAR T cell therapy, 3 patients were refractory to salvage chemotherapy, and one was MRD+. After CAR T cell therapy, all were in clinical and molecular CR, and 4 out of the 5 patients could undergo an allogeneic SCT. Other reports soon followed with either CD19-CAR T or CD22-CAR T cells, expanding our knowledge about this innovative treatment method. [70][71][72][73] Two very recent publications reported the final results of prospective trials using CAR T cells obtained through different methodologies in 30 and 21 patients with relapsed ALL, respectively, including a few adult subjects. 74,75 In the first study, autologous CD19-CAR T cells induced a CR in 27 (90%) patients (of whom 2 had previously failed blinatumomab, and 15 had relapsed following allogeneic SCT). The event-free survival was 67% at 6 months, associated with persistence of CAR T cells (68%) and B-cell aplasia (73%). In the second trial, aimed at establishing the maximum tolerated dose of CAR T cells (defined as 1x10⁶/kg CAR T cells), the generation of CAR T cells was successful in 20 of 21 patients (90%). Treatment toxicity mediated by cytokine release was fully reversible, prolonged B-cell aplasia did not occur, and 14 patients achieved a CR (70%), including 6 of 6 with primary refractory ALL. Moreover, 12 patients achieved MRD-negative status, and 10 proceeded to allogeneic SCT. In both studies CAR T cells were detectable in the cerebrospinal fluid, clearing blast cells in some patients with meningeal leukemia. Presently, CAR T cell treatment remains experimental and available only at selected centers due to its technical complexity. It is however highly promising and must be developed further as a potential major step forward in the management of adult BCP ALL.
CAR T cells carry peculiar toxicities related to cell expansion/activation, resulting in a cytokine-release syndrome which is occasionally associated with cardiorespiratory failure requiring admission to the intensive care unit. The interleukin-6 inhibiting agent tocilizumab is effective in this setting. The degree to which this treatment causes a permanent B-cell depletion with severe hypogammaglobulinemia in long-term survivors is another critical point.
Figure 4. Overview of ALL cell targets and mechanisms of action of rituximab/ofatumumab, inotuzumab ozogamicin, blinatumomab and CAR T cells in adult BCP ALL.
Conclusions. Rituximab, IO, blinatumomab and CAR T cells can all contribute, through different mechanisms, to increasing the cure rate in adult B-lineage ALL (Figure 4). Notably, both the clinical effectiveness and the manageable toxicity profile demonstrated by single-agent IO and blinatumomab in hundreds of patients with R/R or MRD+ disease make them suitable for immediate evaluation in frontline therapy, with or without associated chemotherapy. The first clinical trials are ongoing. The expectations concern an overall, sound therapeutic advancement compared to current results, as well as a change in the indications for allogeneic SCT in CR1, at least in responsive patients previously defined as high-risk by the persistence of post-induction MRD. As regards CAR T cells, although their use in large-scale trials is still precluded by the complexity and cost of the procedure, they could soon become another powerful option to treat this illness, whenever required and beyond the new therapeutic standards set by MoAb/chemotherapy combinations.
"year": 2015,
"sha1": "0ad51790761308d7deda389a4c1708fea0e0f1e4",
"oa_license": "CCBY",
"oa_url": "https://www.mjhid.org/index.php/mjhid/article/download/2015.001/2360",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ad51790761308d7deda389a4c1708fea0e0f1e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
From Minkowski to de Sitter in Multifield No-Scale Models
We show the uniqueness of superpotentials leading to Minkowski vacua of single-field no-scale supergravity models, and the construction of dS/AdS solutions using pairs of these single-field Minkowski superpotentials. We then extend the construction to two- and multifield no-scale supergravity models, providing also a geometrical interpretation. We also consider scenarios with additional twisted or untwisted moduli fields, and discuss how inflationary models can be constructed in this framework.
1 Introduction
We inhabit a universe with small but non-vanishing vacuum energy that is increasingly well described by a de Sitter geometry that is almost Minkowski at sub-cosmological scales [1]. Moreover, it is popular to hypothesize that the early universe underwent a period of near-exponential expansion, called inflation [2], that might correspond to a near-de Sitter (dS) geometry. These observations motivate the construction of models that accommodate dS and Minkowski spaces, and may be used to explore transitions between them.
We expect that physics below the Planck scale is approximately supersymmetric [3,4], in which case the appropriate theoretical framework for studying such cosmological issues is supersymmetry [5], more specifically N = 1 supergravity in order to accommodate chiral matter fields and general relativity. Generic supergravity models are well known to possess anti-de Sitter (AdS) vacua and have effective potentials that are far from flat, the 'η-problem' [6]. However, there is one class of supergravity models that avoid these problems, namely no-scale supergravity [7][8][9], which can accommodate flat potentials that may have vanishing energy density, corresponding to Minkowski vacua, or have constant positive energy densities, corresponding to dS vacua [10,11].
Another reason for favouring no-scale supergravity is that it emerges as the natural framework for the low-energy effective field theory derived from strings [12]. This was first shown in the context of a simplified model of compactification with a single volume modulus, but this first example has been extended to multifield models, including compactifications with three complex Kähler moduli and a complex coupling modulus, as well as some number of complex structure moduli [13].
Several issues then arise within this broader theoretical context. How unique are noscale supergravity models with Minkowski or de Sitter solutions? What are the relationships between them? Can they be given simple geometrical interpretations? How may constructions with a single complex modulus field be generalized to two-or multifield supergravity models? Can the de Sitter models be used to construct inflationary models predicting perturbations that are consistent with observations, e.g., resembling the successful [14,15] predictions of the Starobinsky model [16] as in [17]? How may the universe evolve from a (near-)de Sitter inflationary state towards the (near-)Minkowski contemporary epoch with its (small) cosmological constant, a.k.a. dark energy?
Aspects of these questions have been addressed previously in a series of papers by subsets of the present authors. In [11], we constructed dS vacua in two- and multifield models as could occur in string compactifications, discussed the conditions for their stability, and gave examples with only integer powers of the chiral fields in the superpotential. There is a long history of no-scale supergravity models of inflation [18][19][20][21], but only recently has it been realized that simple forms of the superpotential can yield Starobinsky-like inflation [17,[22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37]. Indeed, there are several forms for the superpotential based on two chiral fields [22]. In [24] we presented a general discussion of two-field no-scale supergravity models of inflation yielding predictions similar to those of the Starobinsky model, using the non-compact SU(2,1)/SU(2)×U(1) symmetry to catalogue them in six equivalence classes. In [25] we constructed within this framework a specific minimal SU(2,1)/SU(2)×U(1) no-scale model that incorporates Starobinsky-like inflation, supersymmetry breaking and dark energy. This construction was generalized in [26] to inflationary models based on generalized no-scale structures with different values of the Kähler curvature R, as may occur if different numbers of complex moduli contribute to driving inflation.
In this paper we discuss the uniqueness of superpotentials leading to Minkowski, dS and AdS vacua of single-field no-scale supergravity models, and how pairs of Minkowski superpotentials can be used to construct dS/AdS solutions. Expanding on previous work which showed how this construction may be extended to two-and multifield no-scale supergravity models, we show how matter fields can be incorporated in a multifield construction of Minkowski, dS and AdS vacua. We also provide a geometrical visualization of the construction. We also mention how Starobinsky-like inflationary models can be constructed in this framework, and comment on the inclusion of additional twisted or untwisted moduli fields.
The structure of this paper is as follows. In Section 2 we first review the structure of no-scale supergravity and previous work within that framework. We then discuss the uniqueness of single-field monomial superpotentials leading to a Minkowski vacuum and how they can be combined in pairs to yield dS vacua. Section 3 shows how these constructions can be extended to multiple moduli, and introduces a geometrical interpretation. Section 4.1 then further extends these constructions to include untwisted matter fields, and Section 4.2 considers the case of twisted matter fields. This is followed in Section 5 by a discussion of inflationary models with either untwisted or twisted matter fields. Finally, our results are summarized in Section 6.
2 No-Scale Supergravity Framework
We first recall some general properties of no-scale supergravity models, which emerge naturally from generic string compactifications in the low-energy effective limit [12]. The simplest N = 1 no-scale supergravity models were first considered in [7,8] and are characterized by the following Kähler potential [10]:

K = −3 ln(T + T̄) ,  (1)

where T is a complex chiral field that can be identified as the volume modulus field, and T̄ is its conjugate. The minimal no-scale Kähler potential (1) describes a non-compact SU(1,1)/U(1) coset manifold, and its higher-dimensional generalizations [38] will be considered in the following sections. Furthermore, the Kähler curvature of a general Kähler manifold is given by the expression R_ij̄ ≡ ∂_i ∂_j̄ ln det K_mn̄, and the scalar curvature obeys the relation

R = K^ij̄ R_ij̄ ,  (2)

where K^ij̄ is the inverse Kähler metric. If we consider the maximally-symmetric SU(1,1)/U(1) coset space described by the generalized Kähler potential

K = −3α ln(T + T̄) ,  (3)

the scalar curvature is constant, R = 2/(3α).
To account for interactions, the Kähler potential is extended by including a superpotential W:

G ≡ K + ln W + ln W̄ ,  (4)

yielding the effective scalar potential

V = e^G ( ∂_i G K^ij̄ ∂_j̄ G − 3 ) ,  (5)

where the fields Φ^i are complex scalar fields, Φ̄^j̄ are their conjugate fields, ∂_i G ≡ ∂G/∂Φ^i, and K^ij̄ is the inverse Kähler metric. For more on N = 1 supergravity models, see [3].
2.1 Review of Earlier Work
As was shown in [10,23,32], one can combine cubic and constant superpotential terms to obtain a de Sitter vacuum solution. Choosing the superpotential

W = λ (T³ − 1) ,  (6)

together with the Kähler potential (1), and imposing the condition T = T̄, the effective scalar potential (5) yields a de Sitter vacuum solution V = 3λ²/2. However, the superpotential (6) leads to an unstable vacuum solution, since the mass-squared of the imaginary component of the scalar field is negative: m²_Im T = −2λ². As we discuss in more detail below, the problem of instabilities can be addressed by adding a quartic term to the Kähler potential [22,26,41].
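As a quick consistency check of these numbers (our own derivation, using the standard formulas (4)-(5) restricted to the real direction T = T̄ = t):

```latex
% K = -3\ln(T+\bar T),\; W = \lambda (T^3 - 1),\; T = \bar T = t:
\begin{align}
  D_T W &= W_T + K_T W
         = 3\lambda t^2 - \frac{3\lambda}{2t}\left(t^3 - 1\right)
         = \frac{3\lambda}{2t}\left(t^3 + 1\right), \\
  V &= e^{K}\!\left(K^{T\bar T}\,|D_T W|^2 - 3\,|W|^2\right)
     = \frac{3\lambda^2}{8t^3}\left[(t^3+1)^2 - (t^3-1)^2\right]
     = \frac{3\lambda^2}{2}\,.
\end{align}
```

The flat, positive value for every t is what makes this a de Sitter solution, while the instability resides entirely in the Im T direction.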
A detailed analysis of the general de Sitter vacua constructions for multi-moduli models was conducted in [11] and, for convenience, we recall some of the key results. The Minkowski vacuum solutions for a single complex chiral field T were found by considering the Kähler potential (3) with a monomial superpotential of the form

W = λ T^(n±) ,  (7)

where n± are the two possible solutions given by

n± = (3/2)(α ± √α) .  (8)

Along the real T direction, V = 0. The scalar mass-squared in the imaginary direction is

m²_Im T = 2^(2−3α) λ² ((α − 1)/α) T^(±3√α) ,  (9)

where the factor T^(±3√α) corresponds to the two possible solutions n± (8). As can be seen from (9), in order to obtain a stable Minkowski vacuum solution, the stability condition α ≥ 1 has to be satisfied. For cases where 0 < α < 1, quartic stabilization terms in the imaginary direction must be introduced in the Kähler potential (3).
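These statements are easy to verify symbolically. The following SymPy sketch (our own check, not code from the paper) confirms that the monomial superpotential (7) gives a vanishing potential along the real direction for any α > 0:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
lam, alpha = sp.symbols('lambda alpha', positive=True)
T, Tb = sp.symbols('T Tbar')      # treated as independent holomorphic variables

K = -3*alpha*sp.log(T + Tb)
n = sp.Rational(3, 2)*(alpha + sp.sqrt(alpha))   # n_+ branch of Eq. (8)
W, Wb = lam*T**n, lam*Tb**n

Kmet = sp.diff(K, T, Tb)                          # Kahler metric K_{T Tbar}
DW  = sp.diff(W, T) + sp.diff(K, T)*W             # Kahler-covariant derivative
DWb = sp.diff(Wb, Tb) + sp.diff(K, Tb)*Wb
V = sp.exp(K)*(DW*DWb/Kmet - 3*W*Wb)

print(sp.simplify(V.subs({T: t, Tb: t})))         # -> 0 (Minkowski, any alpha)
```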
As was shown in [10,11,26], de Sitter vacua solutions can be obtained from the Kähler potential (3) by choosing a superpotential of the form

W = λ₁ T^(n₋) − λ₂ T^(n₊) ,  (10)

where n± are given by (8). In this case, along the real T direction the effective scalar potential (5) becomes

V = 3 · 2^(2−3α) λ₁ λ₂ .  (11)

One of the most fascinating features of the de Sitter vacua construction (10) is that it is obtained by combining two distinct Minkowski vacuum solutions (7). In the next sections, we will show that there is a deeper connection between dS/AdS and Minkowski vacuum solutions and that this relation is not accidental.
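The constant value (11) can again be checked symbolically. Here is a self-contained SymPy sketch (our own check) for the representative choice α = 4, for which n₋ = 3 and n₊ = 9 are integers and 3·2^(2−3α) = 3/1024:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
lam1, lam2 = sp.symbols('lambda_1 lambda_2', positive=True)
T, Tb = sp.symbols('T Tbar')

K = -12*sp.log(T + Tb)                                    # alpha = 4
W  = lam1*T**3 - lam2*T**9                                # n_- = 3, n_+ = 9
Wb = lam1*Tb**3 - lam2*Tb**9

DW  = sp.diff(W, T) + sp.diff(K, T)*W
DWb = sp.diff(Wb, Tb) + sp.diff(K, Tb)*Wb
V = sp.exp(K)*(DW*DWb/sp.diff(K, T, Tb) - 3*W*Wb)

print(sp.simplify(V.subs({T: t, Tb: t})))                 # -> 3*lambda_1*lambda_2/1024
```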
While the scalar potential is flat in the real direction, the scalar mass-squared of the imaginary component is given by

m²_Im T = (2^(2−3α)/α) [ (α − 1)(λ₁² T^(−3√α) + λ₂² T^(3√α)) − 2(α + 1) λ₁ λ₂ ] .  (18)

For α > 0, in the absence of stabilization terms, there are always some field values for which the instability in the imaginary direction persists. The problem of instability can be remedied by modifying the Kähler potential (3) and introducing quartic stabilization terms in the imaginary direction [22,26,41]:

K = −3α ln ( T + T̄ + β (T − T̄)⁴ ) ,  (19)

with β > 0. The newly-introduced quartic stabilization term does not alter the potential in the real direction, while it stabilizes the mass of the imaginary component (18), which can be made non-negative for sufficiently large β.
2.2 Uniqueness of Vacua Solutions
By solving an inhomogeneous differential equation, we now show that the monomial Minkowski superpotential solutions (7) are the unique solutions that yield V = 0, while the combinations of two distinct Minkowski solutions (10) yield dS/AdS vacuum solutions.
We consider a general superpotential expression W(T), which is a function of the volume modulus T only, and solve the general homogeneous differential equation, which is equivalent to finding Minkowski vacuum solutions. As before, we assume that the VEV of the imaginary component vanishes, Im T = 0, so that T = T̄ and W(T) = W̄(T̄). Using the Kähler potential (3) and the effective scalar potential (5), we find

V = (2T)^(−3α) [ (4T²/3α) W′² − 4T W W′ + 3(α − 1) W² ] ,  (21)

where W ≡ W(T) and W′ ≡ dW(T)/dT. In order to find Minkowski vacuum solutions, we set Eq. (21) to zero:

(4T²/3α) W′² − 4T W W′ + 3(α − 1) W² = 0 .  (22)

Solving the homogeneous differential equation (22), we obtain two distinct Minkowski solutions:

W(T) = λᵢ T^(n±) ,  (23)

where λᵢ is an arbitrary constant. To find the dS/AdS vacuum solutions, we set the differential equation (21) equal to a constant and solve the following inhomogeneous equation:

(2T)^(−3α) [ (4T²/3α) W′² − 4T W W′ + 3(α − 1) W² ] = Λ ,  (24)

where Λ is an arbitrary constant. We look for a particular superpotential solution to the inhomogeneous equation (24) of the form

W(T) = λ₁ T^(n±) − λ₂ T^m .  (25)

Inserting the expression (25) into (24), we find that m = n∓ = (3/2)(α ∓ √α) yields a particular solution, and the general solution has the form

W(T) = λ₁ T^(n₋) − λ₂ T^(n₊) ,  (26)

where we have defined the constant Λ = 3 · 2^(2−3α) · λ₁ λ₂. Thus, we have constructed the unique combination of two Minkowski solutions that yields dS/AdS solutions.
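The key step is that, for a power-law ansatz, Eq. (22) divided by W² is a quadratic in the combination p = T W′/W. A two-line SymPy check (ours) confirms that its roots reproduce the exponents of Eq. (8):

```python
import sympy as sp

alpha, p = sp.symbols('alpha p', positive=True)
# Eq. (22) divided by W^2, with p = T*W'/W:
roots = sp.solve(sp.Eq(4*p**2/(3*alpha) - 4*p + 3*(alpha - 1), 0), p)
expected = [sp.Rational(3, 2)*(alpha - sp.sqrt(alpha)),
            sp.Rational(3, 2)*(alpha + sp.sqrt(alpha))]
assert all(any(sp.simplify(r - e) == 0 for e in expected) for r in roots)
print(roots)   # -> the two exponents n_± of Eq. (8)
```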
2.3 Generalized Solutions and Vacuum Stability
Before concluding this section, we introduce a formalism with which the construction of Minkowski-dS-AdS solutions can be generalized and applied to more complicated Kähler manifolds. Let us write

K = −3α ln V ,  (27)

where V is the argument inside the logarithm. For the simplest minimal no-scale SU(1,1)/U(1) supergravity case with a single volume modulus field T, we have V ≡ T + T̄. As before, and in all the cases that we consider, we assume that the VEV of the imaginary part of the complex field is fixed to zero, Im T = 0, which can always be achieved by introducing quartic stabilization terms in Eq. (27).
For the single-field case, the effective scalar potential (5) becomes

V = e^K ( K^(TT̄) |W_T + K_T W|² − 3 |W|² ) .  (28)

In the real direction, where T = T̄, we define

ξ ≡ V|_(T = T̄) ,  (29)

so that the argument inside the logarithm becomes ξ = 2T.
From our previous discussion, we already know which superpotential forms reduce to Minkowski solutions. We introduce the following notation, which will be used for all our Kähler coset manifolds [2]:

W(ξ) = λ ξ^(n±) ,  (30)

where, as usual, the two possible choices n± are given by Eq. (8). Note that, for this construction to work, we must impose the constraint ξ > 0 and the positive curvature condition α > 0, which are necessary features of the no-scale structure [3].
[2] Note that we are using a trick in our definition of the superpotential. Strictly speaking, ξ is defined by the argument of the log in K when all fields are taken as real. However, in the superpotential we are assuming that ξ is a function of (complex) superfields and ignore the restriction to real fields.
[3] Note also that the definition of λ here differs from that in Eq. (7) by a constant factor of 2^(n±).
With this redefinition, the scalar mass-squared in the imaginary field direction given in Eq. (9) becomes

m²_Im T = 4 λ² ((α − 1)/α) ξ^(±3√α) ,  (31)

where the sign depends on the choice of the Minkowski vacuum solution in (30). We will later show that the same Minkowski mass expression (31) holds for any Kähler potential form, and hence that the solution is stable when α ≥ 1. When 0 < α < 1, Minkowski vacuum solutions become unstable and we must introduce the quartic stabilization terms in the imaginary direction.
Similarly, dS/AdS vacuum solutions are constructed by combining two different Minkowski solutions (30),

W(ξ) = λ₁ ξ^(n₋) − λ₂ ξ^(n₊) ,  (32)
and we call such constructions Minkowski pairs. The dS/AdS vacuum solution (32) yields an effective scalar potential (5)

V = 12 λ₁ λ₂ ,  (33)

which allows three different types of vacua:
• de Sitter vacuum solutions when λ₁ and λ₂ are ≠ 0 and have the same sign.
• anti-de Sitter vacuum solutions when λ₁ and λ₂ are ≠ 0 and have opposite signs.
• Minkowski vacuum solutions when either λ₁ or λ₂ is set to zero.
The generalization of the scalar mass in the imaginary direction m²_Im T in equation (18) is given by

m²_Im T = (4/α) [ (α − 1)(λ₁² ξ^(−3√α) + λ₂² ξ^(3√α)) − 2(α + 1) λ₁ λ₂ ] ,  (34)

which should always be positive, m²_Im T ≥ 0, for stability in the imaginary field direction. Recalling that dS vacuum solutions are acquired when λ₁ and λ₂ have the same sign, we introduce the ratio coefficient γ = λ₁/λ₂, which must always be positive.
To visualize this condition, we plot in Fig. 1 the (α, γ) plane with T on the vertical axis, and the size of log(m²_Im T / 4λ₂²) indicated by color coding. The boundary of the colored region corresponds to the critical value m²_Im T / 4λ₂² = 0, and it indicates where the dS vacuum becomes unstable in the imaginary direction. Interestingly, the same general expression (34) holds also for more complicated forms of ξ. It is important to note that Fig. 1 shows two colored regions which are separated by a gap, indicating that the dS vacuum becomes unstable in the imaginary direction for certain values of T and α.
To understand the occurrence of the dS vacuum instability, we consider two specific cases with different values of α, where for illustrative purposes we choose λ 1 = λ 2 = 1, and we use the field parametrization T = (x + iy)/ √ 2. The effective scalar potential is plotted in the left panel of Fig. 2 for α = 1, which is characteristic of solutions with α ≤ 1. We see that dS vacuum solutions are always unstable in the imaginary field direction, so these solutions must be stabilized. In the right panel of Fig. 2 we show the scalar potential with α = 3, which is characteristic of solutions with α > 1. Here, we see that vacuum solutions might fall into an AdS vacuum, which corresponds to the gap region shown in Fig. 1. In both cases, the potential is completely flat along the line y = 0 corresponding to the dS solution up to the point where x = 0 (the potential is not defined at x ≤ 0).
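The instability pattern just described can be reproduced with a quick numerical scan (our own illustration) of the mass expression (34), normalized as m²_Im T / 4λ₂² and evaluated at γ = 1:

```python
import numpy as np

def m2_over_4lam2sq(alpha, gamma, T):
    """m^2_{Im T}/(4*lambda_2^2) from Eq. (34), with xi = 2T, gamma = lambda_1/lambda_2."""
    xi = 2.0 * T
    s = 3.0 * np.sqrt(alpha)
    return ((alpha - 1)/alpha)*(gamma**2 * xi**(-s) + xi**s) - 2*(alpha + 1)*gamma/alpha

for alpha in (1.0, 3.0):
    grid = np.linspace(0.1, 2.0, 200)
    bad = [T for T in grid if m2_over_4lam2sq(alpha, 1.0, T) < 0]
    if bad:
        print(f"alpha = {alpha}: m^2 < 0 for T in roughly [{min(bad):.2f}, {max(bad):.2f}]")
    else:
        print(f"alpha = {alpha}: stable over the scanned range")
```

For α = 1 the mass-squared is negative for every scanned T, while for α = 3 only an intermediate band of T values is unstable, which is precisely the gap between the two colored regions in Fig. 1.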
To address the stability issue, we consider the modified Kähler potential (19); comparing it to the general Kähler potential (27), we see that in the real direction the argument inside the logarithm remains unchanged, with ξ = 2T.
[Fig. 1: the mass-squared m²_{Im T} of Eq. (34) as a function of (α, γ, T), with the color coding on the right-hand side indicating log(m²_{Im T}/4λ₂²).]
The generalization of the mass-squared in Eq. (34) is given by Eq. (35), where it can readily be seen from the numerator of (35) that, by choosing a value of β that is large enough, we can always make the imaginary field direction stable⁴. We plot in Fig. 3 the unstable cases considered previously with α = 1 and α = 3, each of which has been stabilized with the choice β = 2. Once again, the potential along y = 0 is flat.
Multi-Moduli Models
Minkowski Vacuum for Two Moduli
Our next step is to extend this formulation to the two- and multi-moduli cases. As before, we first construct the general Minkowski vacuum solutions and then use Minkowski superpotential pairs to obtain dS/AdS solutions. We begin by considering the two-field Kähler potential (36), where for now we take V₁ = T₁ + T̄₁ and V₂ = T₂ + T̄₂. Along the real directions, T₁ = T̄₁ and T₂ = T̄₂, we adopt a compact notation and choose the ansatz (38) for the superpotential, which yields Minkowski vacuum solutions. Inserting the superpotential (38) into the expression (5) for the effective scalar potential and setting V = 0 to recover Minkowski vacua, we obtain the constraint (40) [11]. For ease of illustration, we introduce the parametrization (41), in terms of which the general expression (40) becomes Eq. (42). Solving the constraint (40) for n₁ and n₂ and parametrizing the result using (41), we arrive at Eq. (44), where the values of r₁ and r₂ are constrained by expression (42) and must satisfy the condition rᵢ ∈ [−1, 1]. It can already be seen from these equations that the circular parametrization (41) simplifies our expressions significantly, and it will be useful in establishing a geometric connection. We must also satisfy further positivity inequalities. We see from (44) that we can consider a total of four different sign combinations that yield V = 0. The corresponding expressions for the imaginary masses-squared show that stability in the imaginary direction is obtained when the condition αᵢ − rᵢ² ≥ 0 is satisfied. If we combine this inequality with the constraint (42), we obtain another stability condition in terms of the curvature parameters.
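Since the explicit forms of Eqs. (38)-(44) are not reproduced above, the following numerical sketch assumes the parametrization nᵢ = (3/2)(αᵢ + rᵢ√αᵢ) with r₁ = cos θ, r₂ = sin θ, one form consistent with the circular parametrization and the single-modulus limit, and verifies that the two-field scalar potential vanishes for every θ.

```python
import numpy as np

rng = np.random.default_rng(1)

def V_two_field(t1, t2, a1, a2, n1, n2, lam=1.0):
    """N=1 SUGRA potential on the real slice for K = -3 a1 log(2 t1) - 3 a2 log(2 t2),
    with the assumed ansatz W = lam (2 t1)^n1 (2 t2)^n2."""
    W = lam * (2 * t1) ** n1 * (2 * t2) ** n2
    DW1 = (2 * n1 - 3 * a1) / (2 * t1) * W        # D_{T1} W on the real slice
    DW2 = (2 * n2 - 3 * a2) / (2 * t2) * W
    eK = (2 * t1) ** (-3 * a1) * (2 * t2) ** (-3 * a2)
    Kinv1 = (2 * t1) ** 2 / (3 * a1)              # inverse Kahler metric entries
    Kinv2 = (2 * t2) ** 2 / (3 * a2)
    return eK * (Kinv1 * DW1 ** 2 + Kinv2 * DW2 ** 2 - 3 * W ** 2)

a1, a2 = 2.0, 5.0
for theta in rng.uniform(0, 2 * np.pi, 5):
    r1, r2 = np.cos(theta), np.sin(theta)         # point on the unit circle
    n1 = 1.5 * (a1 + r1 * np.sqrt(a1))            # assumed parametrization of the powers
    n2 = 1.5 * (a2 + r2 * np.sqrt(a2))
    print(f"theta={theta:5.2f}  V={V_two_field(1.3, 0.7, a1, a2, n1, n2):.2e}")
# V is ~0 up to floating-point rounding for every theta on the circle.
```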
Minkowski Pair Formulation for Two Moduli
Applying the same approach that we used for the case of a single modulus, we now show how to construct Minkowski pairs for the two-field case and recover dS/AdS vacuum solutions with V = 12 λ₁ λ₂ (as in (33)) along the direction where all fields are real. The general dS/AdS vacuum solutions for the two-field case are given by Eq. (48), where the powers n̄_{1,2} are defined in Eq. (49), with the expressions for n_{1,2} given by Eq. (44) and r̄ᵢ = −rᵢ. We note that the powers (49) describe the antipode of a point lying on the surface of a circle described by the coordinates (r₁, r₂), and we discuss the geometric interpretation of our models in the next Section.
The scalar masses recovered from the dS/AdS superpotential (48) have complicated expressions that we do not list here. However, we note that we can always modify the initial Kähler potential (36) by including higher-order (quartic) corrections in the imaginary direction, and these quartic terms easily remedy the stability problems [11]. If we compare the modified potential to the general two-field Kähler potential in Eq. (36), along the real directions T₁ = T̄₁ and T₂ = T̄₂ we recover ξ₁ = 2T₁ and ξ₂ = 2T₂. In the next Section we extend this formulation to the N-field case.
Minkowski Pair Formulation for Multiple Moduli
We now show how to generalize our formulation and construct the Minkowski pair superpotential for cases with N > 2 moduli. We first introduce the Kähler potential (51), where Vᵢ = Tᵢ + T̄ᵢ. Next, we impose the condition that all our fields are real, Tᵢ = T̄ᵢ, which leads to the corresponding real form of the Kähler potential. Minkowski vacuum solutions are obtained in the general N-field case with the choice of superpotential (53). Inserting the superpotential (53) into Eq. (5), we find the scalar potential (54), and it can be seen from Eq. (54) that, in order to obtain Minkowski vacuum solutions with V = 0, we must satisfy the constraint (55). Once again, we introduce the parametrization (56), and combining equations (55) and (56) we obtain Eq. (57). Therefore, Eq. (57) parametrizes the N-field Minkowski solutions as lying on the surface of an (N − 1)-sphere.
Solving Eq. (56) for nᵢ, we obtain expressions for the powers in which rᵢ ∈ [−1, 1] and αᵢ > 0. For the N-moduli case, the corresponding scalar masses-squared in the imaginary directions show that, to obtain a stable solution in the imaginary direction, we must satisfy the condition αᵢ − rᵢ² ≥ 0. If we use the constraint of the (N − 1)-sphere (57), we obtain a stability condition on the curvature parameters. Following the procedure described previously, we combine a pair of Minkowski solutions (53) into a dS/AdS superpotential, which again yields the familiar dS/AdS vacuum result V = 12 λ₁ λ₂.
It proves difficult to perform a detailed stability analysis for N-moduli models, because this would involve finding the eigenvalues of an N × N matrix. Nevertheless, one can always introduce higher-order corrections in the Kähler potential (51), as in Eq. (62), where the quartic terms stabilize the imaginary directions [11]. If we compare the multi-moduli Kähler potential (51) with (62), we see that along the real directions, Tᵢ = T̄ᵢ, we recover ξᵢ = 2Tᵢ.
Geometric Interpretation
We now discuss the geometric interpretation of this Minkowski pair formulation. From equations (55)-(57) it is clear that our parametrization describes Minkowski superpotential solutions (53) that lie on the surface of an (N − 1)-sphere embedded in Euclidean N-space. We first return to the two-moduli case, in which Eq. (57) reduces to (42), and all Minkowski solutions lie on a circle embedded in 2-dimensional space. We define the radius vector of points on the circle, r, through Eq. (63). As expected, equation (63) includes 4 possible sign combinations corresponding to the different quadrants of the circle. To construct a Minkowski pair superpotential that yields a dS/AdS vacuum solution, we must combine any chosen point on the circle with its antipodal point, given by the vector r̄ = −r. In this way, we can construct an infinite number of distinct Minkowski superpotential pairs by considering different point/antipode combinations lying on the surface of the circle.
[Fig. 4: the Minkowski pair construction on a circle; for any value of α > 0, Eq. (48) yields a dS or AdS solution.]
We can readily generalize this framework to the N-moduli case, in which we define the radius vector r to lie on the surface of an (N − 1)-sphere, with the antipodal vector given by r̄ = −r. As an illustration, we consider the three-field case, N = 3, in which the Minkowski solutions lie anywhere on the surface of the unit sphere. dS and AdS solutions can be obtained from any point on the sphere by combining it with its antipodal point, rᵢ → −rᵢ. In Fig. 5 we show an example where four different Minkowski vacuum solutions are combined into 2 distinct Minkowski pairs lying on the surface of a sphere.
We have seen how all Minkowski pair solutions lie on the surface of an (N − 1)-sphere of unit radius, and we recall the general expressions for the corresponding powers nᵢ and n̄ᵢ of ξ given earlier. We show in Fig. 6 two Minkowski pair solutions for these powers: one pair lies in the fourth and sixth octants of the sphere, while the blue dots represent a pair lying in the first and seventh octants of the sphere.
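Under the same assumed parametrization of the powers, the sketch below illustrates the geometric construction: it draws a random point on the unit (N − 1)-sphere, forms its antipode, and checks that both points satisfy the Minkowski constraint and that nᵢ + n̄ᵢ = 3αᵢ.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 3
alpha = rng.uniform(0.5, 4.0, N)          # curvature parameters alpha_i > 0

r = rng.normal(size=N)
r /= np.linalg.norm(r)                    # random point on the unit (N-1)-sphere
r_bar = -r                                # antipodal point

def powers(alpha, r):
    # Assumed parametrization n_i = (3/2)(alpha_i + r_i sqrt(alpha_i))
    return 1.5 * (alpha + r * np.sqrt(alpha))

n, n_bar = powers(alpha, r), powers(alpha, r_bar)

def minkowski_constraint(alpha, n):
    # Sum_i (2 n_i - 3 alpha_i)^2 / (9 alpha_i) = 1 on the sphere (assumed form of Eq. (57))
    return np.sum((2 * n - 3 * alpha) ** 2 / (9 * alpha))

print(minkowski_constraint(alpha, n), minkowski_constraint(alpha, n_bar))  # both -> 1.0
print(np.allclose(n + n_bar, 3 * alpha))  # True: antipodes satisfy n_i + nbar_i = 3 alpha_i
```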
Having established successfully a geometric connection between unique vacuum solutions, in the remaining sections we show that identical patterns emerge for Kähler potential forms with untwisted and twisted matter fields.
The Untwisted Case
In this Section, we extend our formulation to no-scale models with untwisted matter fields. We begin by considering the Kähler potential (67), which parametrizes a non-compact SU(2,1)/(SU(2)×U(1)) coset space, where α is a curvature parameter, T can be interpreted as a volume modulus, and φ is a matter field. Moreover, we impose the conditions T = T̄ and φ = φ̄ by fixing the VEVs of the imaginary components of the fields to zero, along the lines discussed above. Clearly Eq. (67) can be written in the form of Eq. (27), with V set equal to the argument of the log in (67), V = T + T̄ − φφ̄/3. Once again, when we restrict to real fields, in this case setting T = T̄ and φ = φ̄, we obtain the corresponding real form. We then consider the superpotential (69), which leads to the corresponding effective scalar potential. To obtain a Minkowski vacuum, V = 0, we solve the constraint for n and recover the familiar result given in Eq. (8) for the case with a single modulus. Using the superpotential in Eq. (69), we find the scalar masses-squared (71) for the imaginary components of the fields T and φ, whereas, as anticipated, the masses-squared of the real components are m²_{Re T} = 0 and m²_{Re φ} = 0. It can be seen from Eqs. (71) that stability in the imaginary directions for both fields requires the inequality α ≥ 1.
To construct the SU(2,1)/(SU(2)×U(1)) Minkowski pair formulation, we follow the previous discussion and use the same superpotential as in Eq. (32). Doing so, we recover the dS/AdS vacuum solutions given by Eq. (33). In this case, the mass-squared m²_{Im T} for the imaginary component of T is given by Eq. (73), together with the corresponding expression (74) for m²_{Im φ}⁴. We do not discuss here the stabilization of these components, but we can always include quartic stabilization terms in the Kähler potential (67), as discussed previously.
Having established the principles in the case of the SU(2,1)/(SU(2)×U(1)) Kähler potential with an untwisted matter field φ, we can generalize our formulation to no-scale models that parametrize a non-compact SU(N,1)/(SU(N)×U(1)) coset manifold. Following the same recipe considered in previous sections, we start with the Kähler potential (27) and define the argument inside the logarithm accordingly. Furthermore, we fix the VEVs of the imaginary fields to zero, so that T = T̄ and φⱼ = φ̄ⱼ. Using the same notation, the argument inside the logarithm in the Kähler potential takes the corresponding real form. With this definition of ξ, Minkowski vacuum solutions are found for the same choice of superpotential given in Eq. (69). The masses-squared of the imaginary components, m²_{Im T} and m²_{Im φⱼ}, are given by (71). At this point, it should not be surprising that by combining two distinct Minkowski solutions we can form a Minkowski superpotential pair given by Eq. (72). This dS/AdS superpotential yields identical scalar masses-squared for the imaginary components, with m²_{Im T} given by (73) and m²_{Im φⱼ} given by (74). Finally, we can also extend our formulation to more complicated Kähler potentials that take the form K = Σᵢ Kᵢ, where each Kᵢ is of no-scale type. We again assume that Tᵢ = T̄ᵢ and φᵢⱼ = φ̄ᵢⱼ, and we thus obtain a Minkowski pair superpotential which coincides with the multi-moduli case considered previously.
The Twisted Case
An analogous Minkowski pair formulation can also be considered in the case of twisted matter fields, for which we introduce the notation ϕ, with the corresponding Kähler potential (81). To this end, we first find a relatively simple superpotential form that yields Minkowski solutions. Combining this Ansatz with the effective scalar potential in Eq. (5), and setting T = T̄ and ϕ = φ̄ by fixing the VEVs of the imaginary components of the fields to zero, we obtain the scalar potential (83). From the form of the scalar potential, we see that it does not depend on Re ϕ. To obtain a Minkowski vacuum solution, we find the same solutions for n as in Eq. (8). This yields the scalar masses-squared (84) and (85) for the imaginary components, from which we can see that Im ϕ is always stable, and that Im T is stable when α ≥ 1.
Similarly, we can also consider a second Ansatz. If we combine this with Eq. (5) and set T = T̄ and ϕ = −φ̄, we obtain the same effective scalar potential (83), with the solutions for n given by Eq. (8). In this case, the scalar potential does not depend on Im ϕ, and the scalar masses-squared are again given by Eqs. (84) and (85)⁵. Therefore, there are two ways to construct Minkowski vacuum solutions with twisted matter fields, such that the potential does not depend on either the real or the imaginary component of ϕ.
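The explicit Ansätze are not legible above; one simple form consistent with all the stated properties (independence of Re ϕ, the same solutions (8) for n, and a stable Im ϕ) is W = λξⁿ e^{−ϕ²/2}, with e^{+ϕ²/2} for the second Ansatz. The sympy sketch below checks, for this assumed form, that the potential on the slice T = T̄, ϕ = φ̄ vanishes at n = n± independently of the real component of ϕ.

```python
import sympy as sp

t, x, alpha, lam = sp.symbols('t x alpha lam', positive=True)
T, Tb, p, pb = sp.symbols('T Tbar phi phibar')   # holomorphic/antiholomorphic treated independently

n = sp.Rational(3, 2) * (alpha + sp.sqrt(alpha))   # n_+; the n_- branch works identically
K = -3 * alpha * sp.log(T + Tb) + p * pb           # twisted field enters outside the log
W = lam * (2 * T) ** n * sp.exp(-p ** 2 / 2)       # assumed twisted Ansatz
Wb = lam * (2 * Tb) ** n * sp.exp(-pb ** 2 / 2)

DT = sp.diff(W, T) + sp.diff(K, T) * W             # Kahler-covariant derivatives
Dp = sp.diff(W, p) + sp.diff(K, p) * W
DTb = sp.diff(Wb, Tb) + sp.diff(K, Tb) * Wb
Dpb = sp.diff(Wb, pb) + sp.diff(K, pb) * Wb

V = sp.exp(K) * ((T + Tb) ** 2 / (3 * alpha) * DT * DTb + Dp * Dpb - 3 * W * Wb)
V_real = V.subs({T: t, Tb: t, p: x, pb: x})        # slice T = Tbar, phi = phibar
print(sp.simplify(V_real))                         # -> 0: Minkowski, with Re phi dropping out
```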
Next, we construct the dS/AdS superpotential (87) by combining two distinct Minkowski solutions, choosing a Minkowski pair construction that does not depend on Re ϕ; if we assume that T = T̄ and ϕ = φ̄, the effective scalar potential (5) is again given by Eq. (33). In the case of the superpotential (87), the scalar masses-squared of the imaginary field components are given by Eqs. (88) and (89). It is important to note that for de Sitter solutions, while m²_{Im ϕ} is always positive, m²_{Im T} is not, and may require quartic stabilization terms in the imaginary direction for the field T.
Analogously, one can also consider a second dS/AdS superpotential form where, after setting T = T̄ and ϕ = −φ̄, we obtain the dS/AdS scalar potential V = 12 λ₁ λ₂, with the scalar masses-squared given by (88) and (89).
This analysis with a single twisted matter field can be generalized to include multiple fields ϕⱼ, in which case all of the previous results hold after the simple substitution ϕ² → Σⱼ ϕⱼ². Another possible generalization is to consider Kähler potentials of the form K = Σᵢ Kᵢ + Σⱼ |ϕⱼ|², where each Kᵢ is of no-scale type, as in Eq. (92). As before, we assume that all fields are fixed to be real. In this case, with a superpotential of the form (94), where ωⱼ can take the value of either +1 or −1, we obtain a Minkowski solution V = 0 after setting Tᵢ = T̄ᵢ and ϕⱼ = ωⱼφ̄ⱼ. Similarly, we can obtain dS/AdS solutions V = 12 λ₁ λ₂ along the direction Tᵢ = T̄ᵢ, ϕⱼ = ωⱼφ̄ⱼ from the superpotential (95).
The Combined Case
We note finally that one can consider more complicated cases combining twisted and untwisted matter fields by following the principles discussed earlier in this Section. The only difference is that one needs to modify the Kähler potential in Eq. (92) and introduce untwisted matter fields φᵢₖ. If we assume that all our fields are fixed to be real, then Minkowski solutions are again given by the superpotential (94) and dS/AdS solutions by (95).
Inflation with an Untwisted Matter Field
We now indicate briefly how to construct inflationary models in this framework [25,26]. For simplicity, we use a non-compact SU(2,1)/(SU(2)×U(1)) Kähler potential (67), and we associate the matter field φ with the inflaton, setting α = 1 in Eq. (67). Next, we introduce a unified superpotential (99) that combines the Minkowski pair superpotential W_dS with an inflationary superpotential W_I = f(φ). We also require that supersymmetry is broken at the minimum through the Minkowski pair superpotential W_dS instead of the inflationary superpotential W_I; therefore, we impose the conditions f(0) = f′(0) = 0. Again, we assume that T = T̄ and φ = φ̄; the superpotential (99) then yields an effective scalar potential in which we can safely neglect the mixing terms between λ₂ and M, leading to a simple approximation. Supersymmetry is broken by an F-term, which is given by Eq. (102).
Inflation with a Twisted Matter Field
Following the same approach, we now show how to construct viable inflationary models with a twisted inflaton field. We use a non-compact SU(1,1)/U(1) × U(1) Kähler potential of the form (81), associate the matter field ϕ with the inflaton, and set α = 1 in Eq. (81). Next, we introduce the unified superpotential form (104)⁶, where the inflationary superpotential is given by W_I = M f(ϕ) e^{−ϕ²/2}. We again require supersymmetry to be broken through the Minkowski pair superpotential W_dS, and we impose the conditions that at the minimum f(0) = f′(0) = 0. The superpotential form (104) leads to an effective scalar potential in which, neglecting the mixing terms between λ₂ and M and fixing T = 1/2, we can make a simple approximation; supersymmetry breaking is characterized by the same expression given in Eq. (102). In order to construct a Starobinsky-like inflationary potential that is a function of the field ϕ, we use a canonical field redefinition and assume that ϕ = φ̄ = x/√2. We then introduce a suitable inflationary superpotential form and assume that ϕ = φ̄ = x/√2 and T = T̄ = 1/2, which yields the Starobinsky inflationary potential with a positive cosmological constant at the minimum; neglecting the mixing terms between λ₂ and M, the potential reduces to the Starobinsky form uplifted by the constant 12 λ₁ λ₂ (see the sketch below).
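Since the final expression is not reproduced above, the sketch below simply evaluates what the text describes: a Starobinsky plateau potential for the canonically normalized field x, uplifted by the dS vacuum energy 12λ₁λ₂ of the Minkowski pair sector. The (3/4)M² normalization is the conventional one for Starobinsky-like models and is our assumption here.

```python
import numpy as np

def V_inflaton(x, M=1e-5, lam1=1e-16, lam2=1e-16):
    """Starobinsky plateau potential with a small positive cosmological constant.

    Assumed form (Planck units): V(x) = 12 lam1 lam2
        + (3/4) M^2 (1 - exp(-sqrt(2/3) x))^2,
    where the constant term is the dS vacuum energy of the pair sector.
    """
    return 12 * lam1 * lam2 + 0.75 * M ** 2 * (1 - np.exp(-np.sqrt(2 / 3) * x)) ** 2

x = np.linspace(-2, 10, 5)
print(V_inflaton(x))           # plateau ~ (3/4) M^2 at large x, suitable for inflation
print(V_inflaton(0.0))         # minimum at x = 0: V = 12 lam1 lam2 > 0 (dark energy)
```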
Summary
We have exhibited in this paper the unique choice of superpotential leading to a Minkowski vacuum in a single-field no-scale supergravity model, and also shown how to construct dS/AdS solutions using pairs of these single-field Minkowski superpotentials. We have then extended these constructions to two-and multifield no-scale supergravity models, providing also a geometrical interpretation of the dS/AdS solutions in terms of combinations of superpotentials that are functions of fields at antipodal points on hyperspheres. As we have also shown, these constructions can be extended to scenarios with additional twisted or untwisted fields, and we have also discussed how Starobinsky-like inflationary models can be constructed in this framework.
The models described in this paper provide a general framework that is suitable for constructing unified supergravity cosmological models that include a primordial near-dS inflationary epoch that is consistent with CMB measurements, the transition to a lowenergy effective theory incorporating soft supersymmetry breaking at some scale below that of inflation, and a small present-day cosmological constant (dark energy). As such, this framework is suitable for constructing complete models of cosmology and particle physics below the Planck scale.
For the future, two general classes of issues stand out. One is the construction of specific models for sub-Planckian physics, which should address the incorporation of Standard Model (and possibly other) matter and Higgs degrees of freedom. Should these be described by twisted or untwisted fields, and how are they coupled to the inflaton? Specific answers to some of these issues have been proposed in [42], and more details are forthcoming [43].
Another set of issues concerns the interface with string theory. For example, although no-scale supergravity theories arise generically in the low-energy limits of string compactifications, many different non-compact coset manifolds may be realized. Which of these is to be preferred? Another set of questions concerns the specific forms of superpotential that are needed to obtain a Minkowski or dS vacuum. In this paper we have constructed them from a bottom-up approach, and demonstrated their uniqueness. How could one hope to obtain them in a top-down approach, starting from a specific string model?
This question is particularly acute in the case of dS vacuum solutions, since swampland conjectures [44] suggest that string theory may not possess such vacua. At the time of writing controversy still swirls about these conjectures, and in this paper we have taken the pragmatic approach of exploring what such solutions would look like. As such, our solutions may suggest avenues to explore in searching for them, or at least the obstacles to be overcome. The existence or otherwise of dS vacua in string theory is clearly a key issue for the future that lies beyond the scope of this paper. | 2019-07-22T03:52:42.000Z | 2019-07-22T00:00:00.000 | {
"year": 2019,
"sha1": "ed666474e479974b801a7aebed11f2183948229d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP10(2019)161.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "c7c5b2d393499b5ddfdf963e06e344ca326b97b0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119207254 | pes2o/s2orc | v3-fos-license | Equivalence between an optomechanical system and a Kerr medium
We study the optical bistability of an optomechanical system in which the position of a mechanical oscillator modulates the cavity frequency. The steady-state mean-field equation of the optical mode is identical to the one for a Kerr medium, and thus we expect it to have the same characteristic behavior with a lower, a middle, and an upper branch. However, the presence of position fluctuations of the mechanical resonator leads to a new feature: the upper branch will become unstable at sufficiently strong driving in certain parameter regimes. We identify the appropriate parameter regime for the upper branch to be stable, and we confirm, by numerical investigation of the quantum steady state, that the mechanical mode indeed acts as a Kerr nonlinearity for the optical mode in the low-temperature limit. This equivalence of the optomechanical system and the Kerr medium will be important for future applications of cavity optomechanics in quantum nonlinear optics and quantum information science.
I. INTRODUCTION
Photons are ideal carriers of quantum information [1]. They can propagate large distances in optical fibers before being absorbed, and their polarization has been used for quantum communication and quantum information applications. However, photons barely interact, and thus it is difficult to implement the quantum two-qubit gates needed for universal quantum computation [2]. This situation changes in an optical medium where the photons can inherit an effective interaction, often modeled as a Kerr nonlinearity. This is why so-called Kerr media are important for quantum technology based on photons [3][4][5][6].
In this paper, we will focus on the phenomenon of optical bistability, produced by the radiation pressure, and neglect other nonlinear effects such as the photothermal effect [20][21][22] or a mechanical Duffing nonlinearity. Under certain conditions and sufficiently strong driving there are two classically stable equilibrium positions for the mechanical oscillator and correspondingly for the optical cavity. Optical bistability in optomechanical systems has been discussed in the context of ponderomotive squeezing [23] and entanglement [24], and led to one of the first experimental observations of optomechanical coupling [25,26]. Optical bistability has also been discussed widely in the context of a Kerr medium [27,28]. This raises the question whether and in which way the optomechanical system and the Kerr medium in a cavity can be considered to be equivalent, see Fig. 1 that shows both of these systems schematically. In the following we will investigate in detail the similarities and differences between optical bistability in an optomechanical system and a Kerr medium.
The paper is organized as follows. In Sec. II we introduce the standard model of optomechanics, a cavity whose frequency is modulated by the position of a mechanical oscillator. We briefly introduce the steady-state mean-field equations of the system and the quantum Langevin description of quantum and thermal fluctuations for a linearized radiation-pressure interaction. In Sec. III we show that the mean-field equation for the optical mode is identical to the one for a Kerr medium, with a lower, a middle and an upper branch. In the optomechanical system, fluctuations of the mechanical mode change the picture. A study of the stability of the different mean-field solutions against fluctuations reveals a feature that is absent from the Kerr medium: the upper branch becomes unstable for certain parameters. We derive conditions on the parameters for this upper branch to remain stable. The stability requires the system to be in the resolved sideband regime with a mechanical quality factor that is not too large. In this case we expect the mechanical resonator to act as an effective Kerr medium for the optical mode, even in the quantum regime. This is confirmed in Sec. IV, where we compare the quantum steady states of both the optomechanical system and the Kerr medium, obtained from numerical solutions of the quantum master equations in the low-temperature limit. The optomechanical system exhibits the expected characteristic quantum signatures, proving that it can be regarded as an effective Kerr medium.
II. MODELS FOR THE OPTOMECHANICAL SYSTEM AND THE KERR MEDIUM
We first consider the standard model of optomechanics, where the resonance frequency of an optical cavity is modulated by the position of a mechanical resonator (dispersive coupling). A monochromatic coherent light field with frequency ω_d and amplitude ε drives the optical mode. The full Hamiltonian, accounting for driving and dissipation, is Ĥ = Ĥ₀ + Ĥ_d + Ĥ_κ + Ĥ_{γm}, where Ĥ₀ is given in the rotating frame of the driving (ℏ = 1) by Eq. (1). Here, â and b̂ are the bosonic operators for the optical and mechanical modes, Δ₀ = ω_d − ω_c is the detuning of the drive from the unperturbed cavity resonance frequency ω_c, and ω_m is the resonance frequency of the mechanical mode. The optomechanical coupling is g₀ = x_ZPF (∂ω_c/∂x), where x_ZPF is the zero-point fluctuation amplitude of the mechanical resonator, M its mass, and (∂ω_c/∂x) the derivative of the cavity frequency with respect to the resonator position x̂ = x_ZPF(b̂ + b̂†). The term Ĥ_κ describes the damping of the optical cavity at rate κ, and Ĥ_{γm} the damping of the mechanical resonator at rate γ_m. This leads to the definition of two important ratios, the sideband parameter ω_m/κ and the mechanical quality factor Q_m = ω_m/γ_m. Using the input-output formalism [28,29], the dissipative dynamics of the system is described by the quantum Langevin equations (QLEs) (2), where â_in(t) = ā_in + ξ̂(t) consists of a coherent driving amplitude ā_in = ε/√κ and a vacuum noise operator ξ̂ which satisfies ⟨ξ̂(t)ξ̂†(t′)⟩ = δ(t − t′) and ⟨ξ̂†(t)ξ̂(t′)⟩ = 0. Similarly, the noise operator η̂ describes coupling to a Markovian bath at temperature T, i.e., ⟨η̂(t)η̂†(t′)⟩ = (n_th + 1)δ(t − t′) and ⟨η̂†(t)η̂(t′)⟩ = n_th δ(t − t′). In the absence of any other coupling, the bath gives rise to a thermal state with mean occupation number n_th = [exp(ω_m/k_B T) − 1]⁻¹ for the mechanical oscillator. This treatment of the mechanical dissipation in the form of a QLE for the mechanical amplitude b̂, rather than for the displacement x̂, is correct as long as Q_m ≫ 1. The optical and mechanical field operators can be split into a coherent mean-field amplitude and fluctuations: â(t) = ā + d̂(t) and b̂(t) = b̄ + ĉ(t). Inserting these expressions into the QLEs (2), we obtain two coupled mean-field equations (MFEs) for the amplitudes ā and b̄, whose steady-state form is given in Eqs. (3). The coherent amplitude of the optical field ā corresponds to a mean cavity occupation n̄ = |ā|² and produces a static radiation-pressure force g₀n̄/x_ZPF on the resonator, displacing its equilibrium position by an amount x_ZPF(b̄ + b̄*). Proceeding this way, we eliminate the coherent drive from the QLEs for the operators ĉ and d̂, which describe thermal and quantum fluctuations around the mean-field values. For large optical mean-field amplitudes |ā| ≫ 1 and small coupling g₀ ≪ κ, ω_m, we can neglect nonlinear terms like d̂†d̂ or d̂ĉ in the QLEs. As a result, the optomechanical interaction g₀â†â(b̂ + b̂†) becomes bilinear, and we can write the linearized QLEs in matrix form (4), where the drift matrix A contains the new parameters: the enhanced optomechanical coupling g = g₀ā and the effective detuning Δ = Δ₀ + g₀(b̄ + b̄*) = Δ₀ + 2n̄g₀²/ω_m. The Kerr medium [27,28], to which we aim to compare the optomechanical system, is described by the Hamiltonian Ĥ = Ĥ_K + Ĥ_d + Ĥ_κ, where Ĥ_K, written in the rotating frame of the driving in Eq. (6a), contains the Kerr nonlinearity, and Ĥ_κ describes again the damping of the optical cavity at rate κ. The QLE for this optical mode â is Eq. (7), where the input operator â_in(t) is the same as for the optomechanical system. The steady-state equation for the mean-field amplitude ā is Eq. (8). Replacing Δ₀ by Δ₀ − g₀²/ω_m in Eq.
(8) yields the equation for the optical mean-field amplitude ā of the optomechanical system obtained from Eq. (3) by eliminating the mechanical mean-field amplitude b̄. This frequency shift of the detuning Δ₀ is consistent with the fact that Ĥ₀ and Ĥ_K are connected by the canonical (polaron) transformation Û = exp[(g₀/ω_m)(b̂ − b̂†)â†â]. Applying Û to the optomechanical Hamiltonian Ĥ₀, Eq. (1), we obtain ÛĤ₀Û† = Ĥ_K + ω_m b̂†b̂. In this frame, the optomechanical interaction is eliminated and the optical mode acquires a Kerr nonlinearity of the form of Eq. (6a) [8,9].
III. OPTICAL BISTABILITY IN THE SEMICLASSICAL REGIME
In the following, we will first show that the optomechanical system has MFEs with three solutions in a certain range of driving frequency and driving amplitude, just as the Kerr medium does. After discussing the characteristic behavior of the mean-field solutions in the regime of optical bistability, we study the stability of the mean-field solutions against fluctuations of both the optical and mechanical mode and point out the differences with the Kerr medium. Finally, we find parameters for which the optomechanical system is accurately described by an effective Kerr medium.
A. Bistability at the mean-field level
We briefly review the origin of bistability in the mean-field equations of the optomechanical system [23,26,30,31].
To simplify the notation, we define the dimensionless nonlinearity parameter χ, detuning y, and driving power z. Combining Eqs. (3a) and (3b), we obtain a third-order polynomial root equation p(χn̄) = 0 for the mean-field cavity occupation, Eq. (9). The MFE for the Kerr medium, Eq. (8), leads to the same equation for n̄, provided we replace y by y − χ in Eq. (9). Equation (9) indicates that the MFEs can have either one or three solutions, depending on the number of real roots of the polynomial. The three roots depend on the dimensionless detuning y and driving power z. Since the mean-field cavity occupation n̄ follows from p(χn̄) = 0, the nonlinearity parameter χ determines whether optical bistability occurs at small or large driving power and photon number.
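The definitions that make up Eq. (9) did not survive extraction; the sketch below therefore assumes the normalization p(λ) = 4λ³ − 4yλ² + (y² + 1/4)λ − z with λ = χn̄ and y = −Δ₀/κ, which is consistent with the relations that do survive (ϕ = arctan(4λ − 2y) below, and n_Δ = y/(2χ) in Sec. III B), and solves for the branches numerically.

```python
import numpy as np

def branches(y, z):
    """Real roots lambda = chi*nbar of the assumed cubic
    p(lambda) = 4 lambda^3 - 4 y lambda^2 + (y^2 + 1/4) lambda - z."""
    roots = np.roots([4.0, -4.0 * y, y ** 2 + 0.25, -z])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

y = 1.5                       # red-detuned drive; bistability requires y > sqrt(3)/2
lam_minus = (2 * y - np.sqrt(y ** 2 - 0.75)) / 6   # chi*n_- from p'(lambda) = 0
lam_plus = (2 * y + np.sqrt(y ** 2 - 0.75)) / 6    # chi*n_+
print("turning points chi*n_-, chi*n_+ :", lam_minus, lam_plus)

for z in (0.05, 0.20, 0.26, 0.40):
    print(f"z = {z:4.2f} -> chi*nbar =", branches(y, z))
# Three real roots (lower/middle/upper branch) appear only for intermediate z.
```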
The optical mean-field amplitude is ā = −e^{iϕ}√(λ/χ), where ϕ = arctan(4λ − 2y). If the detuning y and driving power z are such that the equation p(λ) = 0 has three real roots, then the smaller χ, the more distant in phase space are the different optical mean-field amplitudes ā. A similar observation can be made concerning the mechanical resonator: the equation p(λ) = 0 also holds for λ = √(χω_m/(4κ)) (b̄ + b̄*), where b̄ + b̄* is the equilibrium position of the mechanical resonator in units of x_ZPF. Therefore, the smaller χ and the sideband parameter ω_m/κ, the more distant are the different equilibrium positions.
We now examine some characteristic features of the MFEs, which occur both in an optomechanical system (3) and a Kerr medium (8). To this end, we find the conditions on the detuning y and the driving power z for the MFEs to have three solutions, and illustrate them with a few examples.
First we observe that the equation p(λ) = 0 can have three real roots only if the detuning y and the driving power z exceed threshold values ỹ and z̃ [23,32], given in Eq. (10). Therefore, optical bistability can only be found for red-detuned driving frequencies. In addition, the three roots are real only in the range of driving powers z₋ < z < z₊ specified by Eq. (11). The region in (y, z)-parameter space where Eqs. (10) and (11) are satisfied is shown in Fig. 2(c) with the labels II (blue) and III (purple). In this region the three mean-field occupations satisfy n̄₁ < n₋ < n̄₂ < n₊ < n̄₃, where n± are found from p′(χn±) = 0 and are given by Eq. (12). In the following, we refer to n̄₁, n̄₂, and n̄₃ as the lower, middle, and upper branches of the MFEs. In Fig. 2(a) we show the mean-field occupation χn̄ as a function of the driving power z for fixed detuning y. For increasing driving power z and a detuning above the threshold, y > ỹ, the three branches of the mean-field occupation n̄ form a characteristic S-shaped curve. The lower branch starts from the origin and ends at the turning point (z₊, n₋), where the middle branch starts. The upper branch starts from the second turning point (z₋, n₊), where the middle branch ends, and increases further.
In Fig. 2(b) we plot the mean-field occupation χn̄ as a function of the detuning y for fixed driving power z. The cavity line shape is approximately Lorentzian if the driving power is far below the threshold, z ≪ z̃ (not shown). For larger and larger z it becomes more and more asymmetric and tilts until, for z = z̃, it has an infinite slope at y = ỹ. For a driving power beyond this threshold, the cavity line shape has three branches in the range of detuning y determined by Eq. (11).
According to these considerations, the optomechanical system and the Kerr medium are equivalent at the level of the steady-state MFEs.
[Fig. 2 caption: In (a) and (b) we also show the critical mean-field occupation n_c (dash-dotted gray) obtained from the condition c₂ = 0. In (c) we summarize the behavior of the mean-field solution as a function of the parameters y and z. In regions II and III, between the curves z₋ and z₊, Eqs. (10) and (11) are satisfied and there are three distinct mean-field solutions; the middle branch is always unstable. In region II (blue) the lower and upper branches are stable. In region III (purple) the second stability criterion shows the upper branch to be unstable (c₂ < 0) and only the lower branch is stable. In regions I and IV the mean-field equations (MFEs) have only one solution. Below the z_c curve in region I (gray) this unique branch is stable, while in region IV (red) the second criterion again shows that this solution is unstable (c₂ < 0). The values of the detuning y and driving power z used in (a) and (b) are indicated by the orange and green dashed lines. Note that none of these features depends on the nonlinearity parameter χ, due to the appropriate scaling of the axes. The threshold detuning ỹ and driving power z̃ indicate the minimal values of y and z needed for the MFEs to have three solutions. The sideband parameter and mechanical quality factor chosen to show the influence of the second stability criterion c₂ > 0 are ω_m/κ = 10 and Q_m = 1000.]
Our next goal is to discuss the stability of
the different branches of the MFEs. The existence of three solutions to the MFEs indicates that the optomechanical system may be in a regime of bistability, with stable lower and upper branches, as well as an unstable middle branch. While for the Kerr medium this is always true [27], a stability analysis leads to different conclusions in the case of the optomechanical system. In addition, if the detuning y and driving power z lead to a unique solution for the mean-field cavity occupation n, this solution is always stable for the Kerr medium, but not necessarily so for the optomechanical system.
B. Stability analysis of the mean-field solutions
The upper and lower branches are always stable for the Kerr medium. To find the range of parameters where the optomechanical system reproduces this behavior, we analyze the stability of the different branches of the MFEs. The differences and similarities between the optomechanical system and the Kerr medium are summarized in Table I.
TABLE I. Stability of the different branches in an optomechanical system and a Kerr medium, determined from the QLEs (4) and (7). The critical mean-field occupation n_c is found from the stability criterion, Eq. (13b), and depends on the detuning y = −Δ₀/κ, the sideband parameter ω_m/κ, and the mechanical quality factor Q_m.
Branch  | Optomechanical system | Kerr medium
lower   | stable                | stable
middle  | unstable              | unstable
upper   | stable for n̄ < n_c    | stable
The difference between the two systems is explained by the parametric instability in the optomechanical system [33,34] that occurs at a mean-field occupation n̄ above some critical value n_c. Around such a mean-field solution, the linear dynamics of optical and mechanical fluctuations becomes unstable. This particular feature of the optomechanical system is illustrated in Fig. 2; it is absent for the Kerr medium.
In Figs. 2(a) and 2(b), we indicate the unstable segments of the branches where n̄ > n_c. In case the MFEs have three branches, this critical value for the mean-field occupation n_c systematically lies in the upper branch or in its extension to the region where there is only one branch.
In Fig. 2(a), for a fixed detuning above threshold y >ỹ, the upper branch is stable only in a finite segment near the second turning point n + at the beginning of the upper branch. The size of this stable segment diminishes as the detuning y increases, and shrinks to a single point in the limit of a far red-detuned driving frequency. The same effect is seen in Fig. 2(b). With increasing driving power z the stability in the upper branch is confined to a smaller and smaller segment near the maximum of the cavity line shape.
In Fig. 2(c), the regions in (y, z)-parameter space where the upper or only branch turns unstable are labeled by III and IV. These are the regions where the driving power z is larger than the critical value z c , found by solving the equation p (χn c ) = 0 for z, where p is given in Eq. (9). The range of detuning y or driving power z at which bistability is observed shrinks with increasing y or z.
We now characterize the regime leading to optical bistability in the optomechanical system, and therefore examine how the stability of the branches depends on the parameters. To this end, we apply the Routh-Hurwitz criterion to the linear QLEs (4). Two conditions have to be satisfied for a particular mean-field solution to be stable, c_{1,2} > 0, with c₁ and c₂ given in Eqs. (13) [35]. The identification of the parameter regime leading to c_{1,2} > 0 is done as follows. We replace |g|² and Δ by their n̄-dependent expressions in Eqs. (13), and express c_{1,2} as functions of the rescaled mean-field occupation χn̄, the detuning y, the sideband parameter ω_m/κ, and the mechanical quality factor Q_m = ω_m/γ_m.
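The coefficients c₁ and c₂ themselves are not legible above, but the criterion they encode (all eigenvalues of the drift matrix A lying in the left half of the complex plane) can be checked directly. The sketch below builds A in the basis (d̂, d̂†, ĉ, ĉ†) for a standard linearized optomechanical Hamiltonian; the sign conventions are our assumption and do not affect the stability verdict.

```python
import numpy as np

def drift_matrix(Delta, g, kappa, gamma_m, omega_m):
    """Drift matrix A of the linearized QLEs in the basis (d, d^dag, c, c^dag).

    Assumes the linearized Hamiltonian H = -Delta d^dag d + omega_m c^dag c
    + g (d + d^dag)(c + c^dag); signs may differ from the (lost) matrix in the
    source by conventions that do not affect the stability analysis.
    """
    return np.array([
        [1j * Delta - kappa / 2, 0, -1j * g, -1j * g],
        [0, -1j * Delta - kappa / 2, 1j * g, 1j * g],
        [-1j * g, -1j * g, -1j * omega_m - gamma_m / 2, 0],
        [1j * g, 1j * g, 0, 1j * omega_m - gamma_m / 2],
    ], dtype=complex)

def stable(Delta, g, kappa=1.0, gamma_m=1e-3, omega_m=10.0):
    # Routh-Hurwitz stability <=> every eigenvalue of A has negative real part
    return np.linalg.eigvals(drift_matrix(Delta, g, kappa, gamma_m, omega_m)).real.max() < 0

print(stable(Delta=-10.0, g=0.5))   # red sideband: optical damping, stable
print(stable(Delta=+10.0, g=0.5))   # blue sideband: anti-damping, parametrically unstable
```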
From the condition c₁ < 0 we conclude that the middle branch is unstable [23,30,31]. This follows from sgn(c₁) = sgn[(n₊ − n̄)(n₋ − n̄)], where n±, Eq. (12), are the values of the mean-field cavity occupation at the lower and upper limits of the middle branch. The physical interpretation of this condition is simple: in the middle branch, the modification of the mechanical frequency due to radiation pressure, also known as the optical spring effect, is such that the modified mechanical force is no longer a restoring force.
In the Kerr medium, the same stability condition, c₁ > 0, is found from the linear QLEs, obtained by substituting â = ā + d̂ in Eq. (7) and neglecting second- and third-order terms in d̂, d̂†. No other criteria are needed to establish the stability of the system, and therefore the lower and upper branches are always stable.
The condition c₂ = 0 is equivalent to the relaxation rate of the system going to zero [36]. In a stable system, this relaxation rate is the real part of the eigenvalue of A closest to zero. Above the critical mean-field occupation, n̄ > n_c, this relaxation rate becomes negative, c₂ < 0, and the branch turns unstable. If in addition n̄ is the only mean-field solution, the system is parametrically unstable. We find n_c by solving the equation c₂ = 0 for n̄, as a function of the detuning y, the sideband parameter ω_m/κ, and the mechanical quality factor Q_m.
It turns out that n_c always lies in the upper branch or in its extension to the region with only one branch. This can be seen as follows. Since the condition c₂ > 0 is automatically satisfied for negative effective detuning, Δ ≤ 0, we find a lower bound for the critical occupation, n_c ≥ n_Δ = y/(2χ).
In addition, the effective detuning Δ always turns positive in the upper branch, since n_Δ ≥ n₊. Thus the upper branch is only stable in the range n₊ < n̄ < n_c. This stable portion can be very small; e.g., in the extreme case −Δ₀ ≫ κ and γ_m = 0, we have n_c = n_Δ ≈ n₊.
[Fig. 3 caption: At n_c the mean-field solution n̄ leads to unstable linear dynamics for the optomechanical system. The cavity occupation n_Δ = y/(2χ) marks the point at which the effective detuning Δ becomes positive. We find n_c from the second stability criterion, Eq. (13b). The bare detuning is y = −Δ₀/κ = 1.5. Note that the ratio n_c/n_Δ does not depend on the nonlinearity parameter χ. The black cross indicates the parameters used in Fig. 4.]
In Fig. 3 we compare the critical mean-field cavity occupation n_c to the occupation n_Δ at which Δ changes sign. The ratio n_c/n_Δ is shown as a function of ω_m/κ and Q_m. If n_c/n_Δ is large, the upper branch is stable beyond the parameter range leading to bistability, n_c ≫ n₊, mimicking the behavior of the Kerr medium. On the contrary, if n_c/n_Δ ≈ 1, the upper branch turns unstable for Δ > 0 and is only stable on a finite segment near its beginning.
We can distinguish four parameter regimes which encompass most experimental situations.
Resolved sideband and large mechanical damping (Ia)
For extremely low cavity damping, ω_m > γ_m > κ, the critical occupation n_c is approximately given by Eq. (14). In the case of a fixed detuning satisfying y² ≪ 2Q_mω_m/κ, we have n_c ≫ n_Δ and the upper branch is stable on a considerable segment, extending up to driving powers z and mean-field occupations n̄ that are much larger than those needed for bistable MFEs, i.e., z_c ≫ z₊ and n_c ≫ n₊. We recall that z_c is found by solving the equation p(χn_c) = 0 for z, with p defined in Eq. (9). Therefore, the mean-field behavior of the optomechanical system is equivalent to the behavior of a Kerr medium in the regime of bistability. In Ref. [16], the optomechanical system was compared to the Kerr medium in terms of the full counting statistics of photons. Although the two systems can behave differently in some regimes of parameters, the authors demonstrate that the influence of the mechanical resonator reduces to an effective Kerr nonlinearity when γ_m ∼ κ, in particular with y = ω_m/κ.
Resolved sideband and small mechanical damping (Ib and IIa)
In the regime characterized by ω_m > κ > γ_m, the critical mean-field cavity occupation is found to be approximately given by Eq. (15). In this case, the parameter (ω_m/κ)³/Q_m plays an important role in characterizing the mean-field behavior. If Q_m > (ω_m/κ)³, we obtain n_c ≈ n_Δ for a detuning above the bistability threshold y > ỹ. In this case, the upper branch turns unstable if the effective detuning is positive, Δ > 0. In addition, this means that if the detuning is negative and large, such that −Δ₀ ≫ κ, the stable segment is small, as n_Δ ≈ n₊.
In the opposite limit, Q_m ≪ (ω_m/κ)³, we can have n_c ≫ n_Δ as in the previous case (γ_m > κ), provided the detuning y satisfies y² ≪ (ω_m/κ)³/Q_m. The same conclusions then apply, i.e., z_c ≫ z₊ and n_c ≫ n₊, and the mean-field behavior of the optomechanical system and the Kerr medium is equivalent in the parameter regime of bistability.
Using the exact expression for n_c, we see in Fig. 3 that the border between the region where the optomechanical system experiences a parametric instability as soon as Δ > 0 (black region) and the region where the system is still linearly stable for some positive effective detuning, n_c > n_Δ, is approximately given by y² = 2(ω_m/κ)³/Q_m. Above this line, an optomechanical system driven to the regime of bistability behaves like a Kerr medium, as described by Eqs. (6) and (7). This will be confirmed in the next section by obtaining the quantum steady state of both systems numerically and showing that the states of the optical mode are similar.
Unresolved sideband and small mechanical damping (IIb)
The critical occupation n_c can be approximated in the limit of a small sideband parameter ω_m/κ and a large enough mechanical quality factor, such that 1 > ω_m/κ > 1/Q_m. If the bare detuning Δ₀ is negative and exceeds the threshold value for possible bistability, y > ỹ, we obtain n_c ≈ n_Δ. The upper branch turns unstable as soon as the effective detuning Δ is positive, and for large bare red detuning, −Δ₀ ≫ κ, the upper branch is only stable on a small segment close to its beginning.
A simple interpretation of the critical mean-field occupation n_c in Eqs. (14) and (15) can be provided by considering the total mechanical damping γ_tot = γ_m + Γ_opt, where Γ_opt is the additional mechanical damping induced by coupling to the optical degree of freedom. In the weak-coupling limit of linearized optomechanics, i.e., g, γ_m < κ, this contribution is expressed in terms of the so-called optomechanical self-energy Σ(ω) and the optical susceptibility χ_c(ω) = [κ/2 − i(Δ + ω)]⁻¹ [11]. In this case, the condition n̄ = n_c coincides with γ_tot = 0 in both limits ω_m ≶ κ.
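As an illustration of this interpretation, the sketch below evaluates γ_tot using the standard weak-coupling expression for the optically induced damping from the review literature [11] (the exact formula in the source is not legible, so this particular form is our assumption) and locates the occupation at which γ_tot crosses zero; for the Fig. 4 parameters it lands well above n_Δ.

```python
import numpy as np

def gamma_opt(nbar, y=1.5, chi=0.08, kappa=1.0, omega_m=30.0):
    """Optically induced mechanical damping, standard weak-coupling expression.

    Gamma_opt = g^2 kappa [1/((kappa/2)^2 + (Delta + omega_m)^2)
                          - 1/((kappa/2)^2 + (Delta - omega_m)^2)],
    with g^2 = g0^2 * nbar, g0^2 = chi*kappa*omega_m (assumed) and the effective
    detuning Delta = kappa*(2*chi*nbar - y) used throughout this section.
    """
    g2 = chi * kappa * omega_m * nbar
    Delta = kappa * (2.0 * chi * nbar - y)
    lor = lambda d: 1.0 / ((kappa / 2.0) ** 2 + d ** 2)
    return g2 * kappa * (lor(Delta + omega_m) - lor(Delta - omega_m))

gamma_m = 30.0 / 300.0                       # omega_m / Q_m for the Fig. 4 parameters
n = np.linspace(1.0, 80.0, 4000)
gtot = gamma_m + gamma_opt(n)
crossing = n[gtot < 0]
print("n_Delta =", 1.5 / (2 * 0.08))                          # ~9.4: Delta changes sign
print("n_c     ~", crossing[0] if crossing.size else None)    # ~45: gamma_tot = 0
```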
Very small sideband parameter
In the regime where the sideband parameter is so small that ω_m/κ ≪ 1/Q_m, the situation is different. The upper branch is unconditionally stable as long as the detuning y is not too large, y < κ/(√32 Q_m ω_m). For larger values of y, an unstable segment of the upper branch develops, from the second turning point n₊ up to some value of the mean-field cavity occupation. The dynamical timescales of the two modes are different in this limit. The optical mode adiabatically follows the mechanical motion and produces an effective mechanical potential with two stable equilibrium positions. However, as we have seen in the previous paragraph, this picture holds only if Q_m is not too large compared to κ/ω_m.
In this parameter regime, early experiments with hertz-scale mechanical resonance frequencies enabled the first observations of optical bistability and the related hysteresis cycle, both in the optical [25] and the microwave domain [26].
In low-finesse cavities, the optical field can create several stable minima in the mechanical potential, a phenomenon sometimes referred to as multistability [30,31]. It has recently been observed with a torsion balance oscillator acting as the moving mirror [55]. This effect should not be confused with dynamical multistability [33], where mechanical limit-cycle orbits of stable amplitudes arise due to parametric instability.
IV. OPTICAL BISTABILITY IN THE QUANTUM REGIME
So far we have focused on the semiclassical regime, considering the mean-field solutions as well as the effect of fluctuations around them, and have identified the regime of parameters where the optomechanical system and the Kerr medium exhibit similar behavior. In the remainder, we want to confirm that the conclusions of this approach also hold in the quantum limit. To this end, we compare the quantum steady states of the optomechanical system and the Kerr medium, obtained from numerical solutions of the quantum master equations.
A. Quantum master equations description of dissipation
An alternative description of either the optomechanical system or the Kerr medium can be given in the form of quantum master equations, which describe the dynamics of their density operators ρ and ρ_K, respectively. This treatment is equivalent to the quantum Langevin description given by Eqs. (2) and (7). Instead of using input noise operators ξ̂ or η̂, dissipation is taken into account with Lindblad dissipative terms.
The quantum master equation for the optomechanical system is given in Eq. (16), where the dissipative terms have the standard form D_ô[ρ] = ôρô† − (1/2)(ô†ôρ + ρô†ô). In the same way, the quantum master equation for the equivalent Kerr medium is given in Eq. (17). The steady-state density operators are found from the numerical solutions of L[ρ] = 0 and L_K[ρ_K] = 0, respectively.
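A minimal QuTiP sketch of the comparison described in this section is given below. Since Eqs. (1), (6), and (16)-(18) are not legible above, the Hamiltonians, the parameter conversion χ = g₀²/(κω_m), y = −Δ₀/κ, z = χ(ε/κ)², and the Kerr parameters obtained from the polaron transformation are all assumptions consistent with the surviving text; truncation dimensions are kept modest for illustration.

```python
import numpy as np
from qutip import destroy, qeye, tensor, steadystate, fidelity, expect

# Dimensionless parameters (kappa = 1), close to Fig. 4 of the text
kappa, y, chi = 1.0, 1.5, 0.08
omega_m, Q_m, n_th = 30.0, 300.0, 0.0
gamma_m = omega_m / Q_m
g0 = np.sqrt(chi * kappa * omega_m)        # assumes chi = g0^2/(kappa*omega_m)
z = 0.26                                   # driving power near the branch crossover
eps = kappa * np.sqrt(z / chi)             # assumes z = chi*(eps/kappa)^2
Delta0 = -y * kappa

Nc, Nm = 30, 10                            # truncations; increase to check convergence
a = tensor(destroy(Nc), qeye(Nm))
b = tensor(qeye(Nc), destroy(Nm))

# Optomechanical system: H0 plus drive in the rotating frame (sign conventions assumed)
H_om = (-Delta0 * a.dag() * a + omega_m * b.dag() * b
        - g0 * a.dag() * a * (b + b.dag()) + eps * (a + a.dag()))
c_om = [np.sqrt(kappa) * a, np.sqrt(gamma_m * (n_th + 1)) * b]
if n_th > 0:
    c_om.append(np.sqrt(gamma_m * n_th) * b.dag())
rho = steadystate(H_om, c_om)
rho_opt = rho.ptrace(0)                    # reduced state of the optical mode

# Equivalent Kerr medium from the polaron transformation:
# U H0 U^dag = -(Delta0 + g0^2/omega_m) a^dag a - (g0^2/omega_m) a^dag a^dag a a + omega_m b^dag b.
# The drive does not commute with U, which is why the comparison is nontrivial.
aK = destroy(Nc)
U_kerr = g0 ** 2 / omega_m
H_K = (-(Delta0 + U_kerr) * aK.dag() * aK
       - U_kerr * aK.dag() * aK.dag() * aK * aK + eps * (aK + aK.dag()))
rho_K = steadystate(H_K, [np.sqrt(kappa) * aK])

print("<n> optomechanics:", expect(aK.dag() * aK, rho_opt))
print("<n> Kerr medium  :", expect(aK.dag() * aK, rho_K))
print("overlap F        :", fidelity(rho_opt, rho_K))
```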
B. Comparison of the quantum steady states
To corroborate the fact that the optomechanical system behaves like an effective Kerr medium, we compare the quantum steady states of both systems for parameters that lead to bistable behavior. To this end, we calculate the photon number ⟨â†â⟩, the cavity amplitude |⟨â⟩|², and the second-order correlation function g⁽²⁾(0) = ⟨â†â†ââ⟩/⟨â†â⟩², which describes fluctuations in the photon number. We also characterize the similarity between the optomechanical system and the Kerr medium with the help of the overlap F(ρ_opt, ρ_K) defined in Eq. (18), where ρ_opt is the reduced density matrix of the system, obtained by tracing out the mechanical degree of freedom from ρ. Finally, we investigate the Wigner distribution function of the optical mode, which reads W_opt(α) = (1/π²) ∫ d²λ Tr[ρ_opt e^{λ(â†−α*)−λ*(â−α)}].
[Fig. 4 caption: For comparison we also show ⟨â†â⟩ (black dashed line) and |⟨â⟩|² (black dash-dotted line) for the equivalent Kerr medium. For both systems, y = −Δ₀/κ = 1.5 and χ = 0.08. The parameters of the optomechanical system are ω_m/κ = 30, Q_m = 300 (indicated by the black cross in Fig. 3), and k_BT = 0 (dots) or k_BT = ω_m (crosses). Inset (b) shows the second-order correlation function g⁽²⁾(0) = ⟨â†â†ââ⟩/⟨â†â⟩² for the optomechanical system with k_BT = 0 (green solid line) as well as k_BT = ω_m (green dashed line) and for the Kerr medium (black dash-dotted line); the first and third curves are indistinguishable. Inset (c) shows the overlap F(ρ_opt, ρ_K), as defined in Eq. (18), between the density matrix of the pure Kerr medium ρ_K and the reduced density matrix of the optomechanical system ρ_opt, obtained by tracing out the mechanical degree of freedom from ρ. The temperatures chosen are k_BT = 0 (solid line) and k_BT = ω_m.]
The steady states of both systems are compared for a constant detuning above the bistability threshold, y > ỹ, and as a function of the driving power z. In this configuration the mean-field cavity occupation n̄ forms a characteristic S-shaped curve.
The results are presented in Fig. 4. In the upper panel, we show the mean-field cavity occupation n̄, the photon number ⟨â†â⟩, and the cavity amplitude |⟨â⟩|² for both the optomechanical system, with zero and finite temperature of the mechanical bath, as well as for the equivalent Kerr medium. The two insets show the second-order correlation g⁽²⁾(0) and the overlap F(ρ_opt, ρ_K). The lower panel of Fig. 4 shows the optical Wigner density function of the optomechanical system.
At low driving power, before entering the region of bistability, z < z₋, the state of the optical mode is rather well described by a coherent state in both systems, as ⟨â†â⟩ ≈ |⟨â⟩|² ≈ n̄.
In the range of driving power where two stable mean-field solutions exist, z₋ < z < z₊, the master equations (16) and (17) have unique quantum steady states. Thus, instead of any bistable behavior, a transition of ⟨â†â⟩ and |⟨â⟩|² from the lower to the upper branch occurs as the driving power z increases. Simultaneously, both systems show large fluctuations in the photon number, g⁽²⁾(0) > 1. Such behavior, in the regime where the MFEs lead to bistability, is well known from the Kerr medium [27].
In this regime, the Wigner function W_opt(α), shown in the lower part of Fig. 4, exhibits two separate lobes peaked at the mean-field amplitudes, α ≈ ā. This is another well-known feature of the Kerr medium [32,56] and shows how classical bistability persists in the quantum regime. The two lobes are distinguishable if the phase-space separation of the two stable mean-field amplitudes ā is larger than the fluctuations around them, which is satisfied here since χ ≪ 1. Since W_opt > 0 everywhere, the optical mode can be regarded as an incoherent statistical mixture of two states with different amplitudes and non-Gaussian fluctuations. As the driving power z increases from z₋ to z₊, the relative weights of the lobes continuously change from the lower branch to the upper one, describing the shift in probability for the system to be found in one or the other. This effect is robust to finite temperature of the mechanical environment.
The particular situation where the two stable branches are approximately equally likely (z ≈ 0.26 for k_BT = ω_m) would enable the observation of noise-induced switching between the branches [57,58] and constitute a clear signature of the nonlinear interaction between the optical and mechanical modes.
At higher driving power, z > z₊, when the MFEs have only one solution, both the optomechanical system and the Kerr medium exhibit sub-Poissonian statistics, g⁽²⁾(0) < 1. Photon blockade in optomechanical systems has already been predicted for χ > 1 [9]. In our case, photon blockade is not very pronounced: we chose χ ≪ 1 to have bistable mean-field solutions that are appreciably distant in phase space. For the parameters of Fig. 4, this effect is slightly suppressed even further due to the finite-temperature bath, n_th > 0.
At various points of the paper, we have already demonstrated that the optomechanical system can be regarded as an effective Kerr medium in the range of parameters that we have specified. In particular, in the present section we have shown numerically that both systems exhibit the same features. For example, the photon number and the second-order photon correlation function follow the same parameter dependence, the Wigner function has a two-lobe structure, and both systems show photon blockade. As a further strong confirmation of this equivalence, we compare the states ρ_opt and ρ_K of the optical field in both systems. As can be seen in inset (c) of Fig. 4, their overlap F is close to 1 even at a finite thermal occupation of the mechanical mode. All of these calculations clearly establish the equivalence of the optomechanical system and a Kerr medium in the appropriate parameter range.
V. CONCLUSION
The mean-field equations for the optical mode of a dispersively coupled optomechanical system agree with those of a Kerr medium, a paradigmatic quantum optics system whose nonlinearity induces optical bistability. This raises the question of whether and under which conditions the two systems can be considered to be equivalent. We have therefore compared the optical bistability in an optomechanical system and a Kerr medium. A stability analysis of the mean-field solutions reveals differences between the two systems: the upper branch of an optomechanical system can become unstable due to position fluctuations of the mechanical degree of freedom. We have identified the regime of parameters where the two systems are equivalent. Corroborating this semiclassical approach, we have shown that the (optical) quantum steady states of both systems, obtained numerically, show large overlap. Our results clarify when an optomechanical system can be used as a Kerr nonlinearity in applications of quantum optics and quantum information. | 2013-10-23T06:47:52.000Z | 2013-06-03T00:00:00.000 | {
"year": 2013,
"sha1": "bb3e0adfd34f82e1a642e2a7c873f607bdb6b1e0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1306.0415",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bb3e0adfd34f82e1a642e2a7c873f607bdb6b1e0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258379408 | pes2o/s2orc | v3-fos-license | Screening prognostic markers for hepatocellular carcinoma based on pyroptosis-related lncRNA pairs
Background Pyroptosis is closely related to cancer prognosis. In this study, we tried to construct an individualized prognostic risk model for hepatocellular carcinoma (HCC) based on within-sample relative expression orderings (REOs) of pyroptosis-related lncRNAs (PRlncRNAs). Methods RNA-seq data of 343 HCC samples derived from The Cancer Genome Atlas (TCGA) database were analyzed. PRlncRNAs were detected based on differentially expressed lncRNAs between sample groups clustered by 40 reported pyroptosis-related genes (PRGs). Univariate Cox regression was used to screen out prognosis-related PRlncRNA pairs. Then, based on REOs of prognosis-related PRlncRNA pairs, a risk model for HCC was constructed by combining LASSO and stepwise multivariate Cox regression analysis. Finally, a prognosis-related competing endogenous RNA (ceRNA) network was built based on information about lncRNA–miRNA–mRNA interactions derived from the miRNet and TargetScan databases. Results Hierarchical clustering of HCC patients according to the 40 PRGs identified two groups with a significant survival difference (Kaplan–Meier log-rank, p = 0.026). Between the two groups, 104 differentially expressed lncRNAs were identified (|log2(FC)|> 1 and FDR < 5%). Among them, 83 PRlncRNA pairs showed significant associations between their REOs within HCC samples and overall survival (Univariate Cox regression, p < 0.005). An optimal 11-PRlncRNA-pair prognostic risk model was constructed for HCC. The areas under the curves (AUCs) of time-dependent receiver operating characteristic (ROC) curves of the risk model for 1-, 3-, and 5-year survival were 0.737, 0.705, and 0.797 in the validation set, respectively. Gene Set Enrichment Analysis showed that inflammation-related interleukin signaling pathways were upregulated in the predicted high-risk group (p < 0.05). Tumor immune infiltration analysis revealed a higher abundance of regulatory T cells (Tregs) and M2 macrophages and a lower abundance of CD8 + T cells in the high-risk group, indicating that excessive pyroptosis might occur in high-risk patients. Finally, eleven lncRNA–miRNA–mRNA regulatory axes associated with pyroptosis were established. Conclusion Our risk model allowed us to determine the robustness of the REO-based PRlncRNA prognostic biomarkers in the stratification of HCC patients at high and low risk. The model is also helpful for understanding the molecular mechanisms between pyroptosis and HCC prognosis. High-risk patients may have excessive pyroptosis and thus be less sensitive to immune therapy. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-023-05299-9.
Keywords: Hepatocellular carcinoma, Pyroptosis, Long noncoding RNA, Prognosis, Relative expression ordering

Background

Hepatocellular carcinoma (HCC) is one of the most common human malignancies, with over 800,000 new cases and nearly 700,000 deaths worldwide each year [1]. HCC is highly heterogeneous and insidious; most patients are diagnosed at advanced stages with poor prognosis [2]. The overall median survival time of patients with advanced HCC is only 9 months, and the 5-year overall survival (OS) is only 10% [3]. Therefore, exploring molecular biomarkers associated with HCC prognosis has been a major focus of HCC research [4].
Pyroptosis is a form of programmed cell death related to inflammation, mediated by intracellular inflammasomes and gasdermins [5,6]. Appropriate induction of pyroptosis could trigger a moderate inflammatory reaction that might enhance innate immunity and generate an antitumor immune response [7,8]. In contrast, excessive pyroptosis might excite a hyperinflammatory response that disrupts immune homeostasis and promotes cancer progression [7,8]. Consequently, some researchers have attempted to improve the prognosis of tumor patients by regulating the activation of pyroptosis to produce antitumor immune responses [9]. For instance, migration and invasion of oral squamous cell carcinoma cells could be inhibited via pyroptosis activation by anthocyanidins [10]. High expression of gasdermin E induced by miltirone could be used to provoke pyroptosis in cancer cells [11]. NLRP3 (NLR Family Pyrin Domain Containing 3) inflammasomes could be utilized to mediate pyroptosis to suppress the growth and metastasis of HCC cells [12]. These findings suggest that pyroptosis is closely associated with cancer prognosis.
Long noncoding RNAs (lncRNAs) are DNA transcripts longer than 200 nucleotides that can regulate gene expression by interacting with proteins, DNA, or other RNAs [13]. It has been reported that lncRNAs are critical regulators of pyroptosis [14]. For example, lncRNA HOTTIP could inhibit pyroptosis and promote ovarian cancer cell proliferation by targeting the miR-148a-3p/AKT2 axis [15]. LncRNA MEG3 could inhibit the growth and metastasis of triple-negative breast cancer by activating pyroptosis via the NLRP3/caspase-1/GSDMD pathway [16]. These studies demonstrate that aberrant alterations of pyroptosis-related lncRNAs (PRlncRNAs) also impact cancer prognosis.
Prognosis-associated PRlncRNA biomarkers have been identified for multiple cancer types. For HCC, panels of seven and five prognosis-associated PRlncRNAs have been reported [17,18]. These PRlncRNA prognostic risk models show some efficacy in training and testing datasets. However, their risk thresholds are derived from the absolute expression levels of PRlncRNAs, which are often data-dependent and unstable, leading to difficulties when applied in clinical settings [19]. In recent years, researchers have found that the within-sample relative expression orderings (REOs) of genes are more robust than absolute expression levels across samples. Furthermore, REO-based molecular biomarkers can be easily applied to individual diagnosis, which is more suitable for the clinic [20][21][22].
In this study, we sought to identify REO-based prognosis-associated PRlncRNA biomarkers, construct an individualized prognostic risk model for HCC, and explore the molecular mechanisms linking pyroptosis and HCC prognosis.
Data collection and preprocessing
The RNA expression data of the 371 HCC and 50 adjacent normal tissue samples analyzed in this study were downloaded from The Cancer Genome Atlas (TCGA) database. The data were obtained using Illumina HiSeq 2000 RNA Sequencing technology, and a total of 60,483 RNAs were detected. The data were preprocessed as follows. First, samples with a survival time of less than 30 days or with missing survival time were removed. Second, expression values were normalized to Transcripts Per Million (TPM). Third, each examined RNA was annotated using the GENCODE database. Finally, mRNAs with count values less than 1 and lncRNAs with count values less than 0.5 in all samples were excluded.
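A minimal R sketch of this preprocessing, assuming illustrative objects not named in the paper: a raw count matrix counts and a TPM matrix expr_tpm (genes × samples, sample IDs as column names), a clinical table clin whose row names are sample IDs and which carries os_time (days) and os_status, and a GENCODE-derived named vector biotype:

```r
# Keep samples with a known survival time of at least 30 days
keep_s   <- !is.na(clin$os_time) & clin$os_time >= 30
clin     <- clin[keep_s, ]
counts   <- counts[, rownames(clin)]
expr_tpm <- expr_tpm[, rownames(clin)]

# Split RNAs into mRNAs and lncRNAs via the GENCODE biotype annotation
mrna   <- names(biotype)[biotype == "protein_coding"]
lncrna <- names(biotype)[biotype == "lncRNA"]

# Exclude mRNAs with counts < 1 and lncRNAs with counts < 0.5 in ALL samples
mrna     <- mrna[apply(counts[mrna, ], 1, max) >= 1]
lncrna   <- lncrna[apply(counts[lncrna, ], 1, max) >= 0.5]
expr_tpm <- expr_tpm[c(mrna, lncrna), ]
```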
For miRNA data downloaded from the TCGA database, sequencing was performed on an Illumina HiSeq miRNASeq platform. After deleting the miRNAs with a count value of 0 in more than 50% of the samples, 578 miRNAs were kept. The miRNA expression values were normalized to Reads Per Million (RPM).
Detection of prognosis-related lncRNA pairs
HCC samples were clustered with the ward linkage algorithm based on the expression of 40 pyroptosis-related genes (PRGs), which were derived from the pyroptosis pathway in the Molecular Signatures Database (MSigDB) (Additional file 1: Table S1). Then, lncRNAs differentially expressed between the two HCC groups were defined as pyroptosis-related lncRNAs. Any two pyroptosis-related lncRNAs can form a lncRNA pair. For a lncRNA pair (lncRNA_i | lncRNA_j), there are two possible REO statuses in a sample: lncRNA_i < lncRNA_j or lncRNA_i ≥ lncRNA_j. The prognosis-related lncRNA pairs were then identified by the following procedures.
(1) Combine the lncRNAs of interest two by two to form C(k,2) lncRNA pairs, where k is the number of pyroptosis-related lncRNAs.
(2) Construct the REO matrix, X, based on the lncRNA pairs for the training set: X_ij denotes the REO of the i-th lncRNA pair (lncRNA_i1 | lncRNA_i2) in the j-th sample, taking the value 1 if lncRNA_i1 < lncRNA_i2 and 0 if lncRNA_i1 ≥ lncRNA_i2.
(3) Remove lncRNA pairs for which the percentage of REOs equal to 1 was less than 20% or greater than 80%. This criterion ensured that an apparent reversal of the REOs of lncRNA pairs occurred in a sufficient number of HCC samples to facilitate the identification of sample subgroups with different prognoses, as lncRNA pairs with the same value (0 or 1) in more than 80% of samples were considered uninformative [21].
(4) For each remaining lncRNA pair, evaluate the correlation between its REO values and OS times by univariate Cox regression. If the Wald test p value was less than 0.005, the lncRNA pair was considered a prognosis-related lncRNA pair. A code sketch of these four steps is given below.
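A compact R sketch of steps (1)-(4), assuming lnc_expr is the PRlncRNA-by-sample TPM matrix of the training set and clin carries the survival columns (object names are illustrative):

```r
library(survival)

# (1) all C(k,2) pairs of PRlncRNAs
pairs <- t(combn(rownames(lnc_expr), 2))

# (2) REO matrix: X[i, j] = 1 if lncRNA_i1 < lncRNA_i2 in sample j, else 0
X <- t(apply(pairs, 1, function(p) as.integer(lnc_expr[p[1], ] < lnc_expr[p[2], ])))
rownames(X) <- paste0("pair_", seq_len(nrow(X)))  # syntactic IDs for later models

# (3) keep pairs whose REO equals 1 in 20-80% of samples
keep  <- rowMeans(X) >= 0.2 & rowMeans(X) <= 0.8
X     <- X[keep, ]
pairs <- pairs[keep, , drop = FALSE]

# (4) univariate Cox regression (Wald test) per pair; keep p < 0.005
pvals <- apply(X, 1, function(reo) {
  fit <- coxph(Surv(clin$os_time, clin$os_status) ~ reo)
  summary(fit)$coefficients[1, "Pr(>|z|)"]
})
X_prog <- X[pvals < 0.005, ]   # 83 pairs in the paper
```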
Construction and evaluation of prognostic lncRNA pair risk model
The LASSO regression and stepwise multivariate Cox regression algorithms were applied to select candidate PRlncRNA pairs as prognostic biomarkers. LASSO regression was adopted to choose the prognosis-related PRlncRNA pairs most predictive of OS. The lambda value corresponding to the smallest partial likelihood deviance was chosen as the optimal parameter after tenfold cross-validation [23,24]. Then, multivariate Cox regression analysis based on the Akaike information criterion (AIC) was used to determine the optimal model. The model with the lowest AIC value was considered the optimal prognostic risk model, with the corresponding PRlncRNA pairs as the eventual predictive risk biomarkers [25]. Time-dependent receiver operating characteristic (ROC) curves were applied to assess performance, and the Youden index determined the risk threshold. Multivariate Cox proportional hazards regression analysis was employed to evaluate independent prognostic factors associated with OS [26]. Covariates included the risk scores of the prognostic PRlncRNA pairs, gender, age, tumor stage, and grade.
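One plausible implementation of this two-stage selection with glmnet and MASS::stepAIC, continuing from the X_prog matrix above (the seed and exact settings are assumptions):

```r
library(glmnet); library(survival); library(MASS)

y <- cbind(time = clin$os_time, status = clin$os_status)

# LASSO-Cox with tenfold CV; lambda.min minimizes partial likelihood deviance
set.seed(1)
cvfit <- cv.glmnet(t(X_prog), y, family = "cox", nfolds = 10)
cf    <- coef(cvfit, s = "lambda.min")
sel   <- rownames(cf)[as.vector(cf != 0)]   # 26 pairs in the paper

# Stepwise multivariate Cox driven by AIC on the LASSO-selected pairs
df   <- data.frame(os_time = clin$os_time, os_status = clin$os_status, t(X_prog[sel, ]))
form <- as.formula(paste("Surv(os_time, os_status) ~", paste(sel, collapse = " + ")))
best <- stepAIC(coxph(form, data = df), direction = "both", trace = FALSE)

risk_score <- predict(best, type = "lp")    # per-patient risk score
```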
Enrichment analysis and estimation of immune cell infiltration
Functional enrichment analysis of differentially expressed genes between the high-and low-risk groups was completed by Gene Set Enrichment Analysis (GSEA) based on the Reactome database with annotation information from the MSigDB database (v7.5.1).
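One way this GSEA step could be run with clusterProfiler, pulling the Reactome collection from MSigDB via the msigdbr package (the collection identifiers, the deg data frame, and its columns are assumptions, not taken from the paper):

```r
library(clusterProfiler)
library(msigdbr)

# Reactome gene sets from MSigDB (collection C2, subcollection CP:REACTOME)
t2g <- msigdbr(species = "Homo sapiens", category = "C2",
               subcategory = "CP:REACTOME")[, c("gs_name", "gene_symbol")]

# Ranked list: named log2 fold changes (high- vs low-risk), sorted decreasing
gene_list <- sort(setNames(deg$log2FC, deg$symbol), decreasing = TRUE)

gsea_res <- GSEA(gene_list, TERM2GENE = t2g, pvalueCutoff = 0.05)
```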
The estimation of the absolute abundance of tumor-infiltrating immune cells in HCC samples was achieved by the CIBERSORT algorithm [27].
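A sketch of this step via the immunedeconv wrapper named in the Methods; CIBERSORT itself is distributed separately, so the file paths below are placeholders, and "cibersort_abs" (absolute mode) is our assumption for how absolute abundance was obtained:

```r
library(immunedeconv)

# Point the wrapper at the separately obtained CIBERSORT source and LM22 matrix
set_cibersort_binary("CIBERSORT.R")
set_cibersort_mat("LM22.txt")

# Input: non-log TPM matrix (genes x samples); absolute-mode scores returned
abund <- deconvolute(expr_tpm, method = "cibersort_abs")
```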
Construction of prognostic pyroptosis-related competing endogenous RNA (ceRNA) network
The regulatory relationships of lncRNAs, miRNAs, and mRNAs were obtained from the miRNet database and the TargetScan database [28,29]. The miRNet database was utilized to predict target miRNAs for lncRNAs, and the TargetScan database was used to predict target miRNAs for PRGs.
Statistical analysis
All statistical analyses were completed with R 4.1.0 software. Cluster analysis was performed with the ward linkage algorithm. Differential expression analysis was implemented with the "limma" package. Survival analysis and corresponding plotting were based on the "survival", "glmnet", "MASS" and "survminer" packages. ROC analysis and the determination of the risk threshold were completed with the "survivalROC" package. GSEA was based on the "clusterProfiler" package. Tumor immune infiltration analysis was implemented with the "immunedeconv" package. The Benjamini-Hochberg (BH) method was applied to control the false discovery rate (FDR). Unless otherwise specified, the statistical significance level was set uniformly at 0.05.
Pyroptosis-related lncRNAs
The workflow of this study is illustrated in Fig. 1. After data preprocessing, the TCGA HCC data set included expression measurements of 8477 lncRNAs and 17,596 mRNAs from 343 HCC samples. First, all samples were randomly divided into training (n = 240) and validation (n = 103) sets. No significant differences were observed in clinical features between the training and validation sets (p value > 0.05, Additional file 1: Table S2).
Hierarchical clustering was performed on the expression levels of the 40 PRGs in the training set. Samples were clustered into two groups, containing 35 and 205 samples, respectively (Fig. 2A). Kaplan-Meier survival analysis revealed a significant difference in survival between these two groups of patients (log-rank test, p value = 0.026; Fig. 2B), which suggested that the expression pattern of PRGs was associated with the prognosis of HCC patients.
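A minimal sketch of this clustering and survival comparison (prg_expr, the 40-PRG × 240-sample training matrix, is illustrative; "ward.D2" is one of the two ward variants in R's hclust and is our assumption):

```r
library(survival)

# Ward-linkage hierarchical clustering of training samples on the 40 PRGs
hc  <- hclust(dist(t(prg_expr)), method = "ward.D2")
grp <- cutree(hc, k = 2)   # two clusters: 35 and 205 samples reported

# Log-rank comparison of the two clusters (reported p = 0.026)
survdiff(Surv(clin$os_time, clin$os_status) ~ grp)
```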
Using the R package "limma", 104 lncRNAs that were differentially expressed between the two groups were identified at |log 2 (FC)|> 1 and FDR < 5%. These 104 lncRNAs were considered as PRlncRNAs for the following analysis.
Prognosis-related lncRNA pairs and risk model
The 104 PRlncRNAs were paired, and prognosis-related lncRNA pairs were identified based on the within-sample REOs in the training set (see Methods). The REOs of 83 PRlncRNA pairs were observed to be significantly associated with the OS time of HCC by univariate Cox regression analysis. To choose representative prognosis-related lncRNA pairs, we performed LASSO Cox regression analysis via tenfold cross-validation on these 83 PRlncRNA pairs, and 26 PRlncRNA pairs were selected at the smallest partial likelihood deviance (Fig. 3A, B). Then, stepwise multivariate Cox regression analysis was performed on the 26 PRlncRNA pairs to choose prognosis-related lncRNA pair biomarkers and construct the risk model. Finally, as shown in Fig. 3C, 11 PRlncRNA pairs involving 22 PRlncRNAs were selected at the smallest AIC value. The corresponding risk model is a weighted sum of the REO values of the 11 pairs (risk score = 0.5447 × VIM-... + ...; the full list of pairs and coefficients is given in Fig. 3C). The threshold for the high- and low-risk groups was determined by the point with the largest Youden index on the 5-year ROC curve of the training set (Youden index = 0.739, risk score = 0.025), with 105 and 135 patients classified as high- and low-risk samples, respectively. Among the 22 PRlncRNAs, 16 were differentially expressed between high- and low-risk patients. Comparing all HCC samples to adjacent normal samples, 14 of these 16 PRlncRNAs were differentially expressed.
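The threshold step can be reproduced with survivalROC, whose output exposes sensitivity (TP) and 1 - specificity (FP) at each candidate cut-off; the Youden index is then J = TP - FP (a sketch under the same illustrative names as above):

```r
library(survivalROC)

roc5 <- survivalROC(Stime = clin$os_time, status = clin$os_status,
                    marker = risk_score, predict.time = 5 * 365,
                    method = "KM")        # 5-year time-dependent ROC

J   <- roc5$TP - roc5$FP                  # Youden index per cut-off
thr <- roc5$cut.values[which.max(J)]      # 0.025 in the paper
risk_group <- ifelse(risk_score >= thr, "high", "low")
```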
Validation of prognostic lncRNA pair risk model
According to the risk threshold, the patients in the validation set were divided into a high-risk group (risk score ≥ 0.025) and a low-risk group (risk score < 0.025). Kaplan-Meier survival analysis showed that OS rates were significantly different between the high- and low-risk groups in both the training and validation sets (log-rank test: p value < 2 × 10 -16 and p value = 1.219 × 10 -5 ; Fig. 4A, B). The AUCs of time-dependent ROC curves showed that the prediction accuracies for 1-, 3- and 5-year survival were 0.855, 0.891, and 0.902 for the training set, and 0.737, 0.705, and 0.797 for the validation set, respectively (Fig. 4C, D). In addition, the results of multivariate Cox regression indicated that the risk model was an independent prognostic factor for patients with HCC (p value < 0.001, Fig. 4E, F). These results suggest that the risk score model based on lncRNA pairs can be an efficient tool for predicting the prognostic risk of HCC.
Analysis of immune-related characteristics of high-and low-risk groups
Among the 40 PRGs, 27 were differentially expressed between the high- and low-risk groups in the training set at FDR < 5%. Notably, 25 PRGs were significantly upregulated in the high-risk group. GSEA showed that inflammation-related interleukin-mediated signaling pathways, including the interleukin 4 and interleukin 13 signaling pathway (q-value = 0.021) and the signaling by interleukins pathway (q-value = 0.030), were significantly upregulated in the high-risk group [8]. Furthermore, as shown in Fig. 5, the abundance of regulatory T cells (Tregs) and M2 macrophages was significantly higher in the high-risk group, whereas the abundance of CD8 + T cells was significantly lower. These results suggested that the immune system might have overreacted in the high-risk group due to the upregulation of PRGs.
Establishment of prognostic pyroptosis-related ceRNA network
The target miRNAs of the 22 lncRNAs involved in the 11 PRlncRNA pair biomarkers in the risk score model were predicted by the miRNet database. Finally, 140 target miRNAs of 10 lncRNAs were obtained. The TargetScan database showed that 112 of the 140 miRNAs targeted 39 PRGs. By univariate Cox regression, three of the 112 target miRNAs and eight of the 39 target PRGs were significantly associated with the prognosis of HCC in our data. Moreover, five of the eight PRGs were targets of the three miRNAs. These prognosis-related lncRNAs, miRNAs, and PRGs formed eleven lncRNA-miRNA-mRNA regulatory axes (Fig. 6), involving four lncRNAs, three miRNAs, and five PRGs.
Discussion
In this study, we have constructed a prognostic risk model for HCC by survival analysis of PRlncRNA pairs based on the within-sample REOs of PRlncRNAs. The risk model showed good predictive performance in classifying the HCC patients into high- and low-risk groups in the validation set. Because it uses within-sample REOs of PRlncRNA pairs, our prognostic risk model is insensitive to systematic measurement bias and suitable for individualized clinical use, and it may help to stratify HCC patients at high risk of poor prognosis.
The 240 training samples were clustered by the 40 PRGs into two groups with significantly different prognoses, containing 35 cases with relatively poor survival and 205 cases with relatively good survival, respectively. Moreover, these 240 samples were divided by our prognostic prediction model into 105 high-risk cases and 135 low-risk cases. The two stratifications overlapped significantly (hypergeometric test, p = 3.45 × 10 -4 ): 25 of the 35 poor-prognosis samples fell into the 105 high-risk samples, and 125 of the 205 good-prognosis samples fell into the 135 low-risk samples. The residual discrepancy may be due to the different purposes of the two stratifications. Clustering on PRGs was intended to identify differential lncRNAs as PRlncRNAs, so the two clusters mainly reflect the similarity of expression patterns across all PRGs, whereas the high- and low-risk groups were stratified by the constructed prognostic model and should therefore be more closely associated with the prognosis of HCC.
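The reported overlap significance can be checked directly with the hypergeometric upper tail, treating the 35 poor-prognosis samples as draws from the 240 training samples, of which 105 are high-risk:

```r
# P(overlap >= 25) when drawing 35 samples from 240 containing 105 high-risk
phyper(25 - 1, m = 105, n = 240 - 105, k = 35, lower.tail = FALSE)
# ~3e-04, consistent with the reported p = 3.45e-04
```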
We found that more PRGs were upregulated in the high-risk group compared with the low-risk group, indicating stronger inflammatory responses in the high-risk group. Further evidence was provided by the pathway enrichment and immune infiltration analysis results. GSEA showed that interleukin-mediated inflammation-related signaling pathways were upregulated in the high-risk group. CIBERSORT-based analysis showed a higher abundance of Tregs and M2 macrophages and a lower abundance of CD8 + T cells in the high-risk group. It has been reported that interleukin 4 and interleukin 13 signaling could induce type 2 inflammatory processes [30]. If the type 2 inflammatory responses were out of control, M2 polarization of macrophages could be promoted, effectively suppressing the cytotoxicity of CD8 + T cells and NK cells [30,31]. Besides M2 macrophages, Tregs, whose function can be enhanced by chronic inflammation, can also efficiently inhibit the function of CD8 + T cells [8,32]. A relatively low abundance of CD8 + T cells has been reported to indirectly induce weaker cytotoxicity, while lower cytotoxicity might render tumors less sensitive to immunotherapy [33]. Therefore, we additionally analyzed the cytotoxicity-related genes (GZMA, GZMB, GZMK, PRF1) [34] and observed downregulation of these genes in the high-risk group compared with the low-risk group (Wilcoxon rank-sum test: p value < 0.05; p value < 0.05; p value < 0.05; p value < 0.001). We then applied the Immune Cell Abundance Identifier (ImmuCellAI) database to predict immunotherapeutic responses and found that patients in the high-risk group were likely to have lower scores and be less sensitive to immune checkpoint blockade therapy (Wilcoxon rank-sum test: p value = 0.010) [35]. Therefore, we inferred that excessive pyroptosis might have arisen in high-risk patients, reducing the number and activity of tumor-infiltrating lymphocytes and worsening tumor prognosis [36].
In clinical practice, owing to their simplicity and non-invasiveness, serum markers such as AFP, DCP, and AFP-L3 are often used to diagnose HCC and predict prognosis. Many scoring models have used the three markers for the diagnosis or prognosis of HCC. The GALAD score, consisting of age, sex, and the three markers, has been reported to have high predictive accuracy for early HCC in patients with nonalcoholic steatohepatitis (AUC = 0.96) [37] and also accurately classified patients with HCC in stage 0/A of Barcelona Clinic Liver Cancer (AUC = 0.9242) [38]. Studies have found that another scoring model, BALAD-2, consisting of these three markers combined with serum bilirubin and albumin, could stratify HCC patients into distinct prognostic groups [39]. Furthermore, in 2008, the combined biomarker Japan Integrated Staging was proposed to provide better survival predictions for HCC patients [40]. These studies illustrated the potential of these three markers in the diagnosis and prognosis prediction of HCC. However, in the TCGA data we used, only the serum levels of AFP and DCP were provided. Therefore, we could not directly compare the predictive efficacy of our model with BALAD-2. We compared the AFP and DCP levels between the high- and low-risk groups predicted by our model. Results showed that these two proteins were not significantly differentially expressed between the training set's high- and low-risk groups (Wilcoxon-Mann-Whitney test: p = 0.486 and p = 0.771, respectively). Elevated serum levels of AFP, AFP-L3, and DCP at baseline have been reported to be associated with a worse prognosis after resection of HCC [41]. We thus compared the corresponding mRNAs in the HCC patients, which showed that the mRNAs of the two markers were significantly up-regulated in the high-risk HCC samples compared to the normal controls (Wilcoxon-Mann-Whitney test: p < 0.05).
To further validate the predictive value of our prognostic PRlncRNA risk model, we compared its performance with three different prognostic models previously reported, which were also constructed based on PRlncRNAs using the same TCGA RNA sequencing data. Zhang et al. constructed a risk-scoring model of 5 PRlncRNAs, with 5-year AUCs of 0.688 for the training set and 0.714 for the validation set, respectively [42]. Liu et al. built a prognostic risk-scoring model using 5 PRlncRNAs, with 5-year AUCs of 0.707 for the training set and 0.642 for the validation set, respectively [18]. The 5-year AUCs for the 9-PRlncRNA model built by Zhang et al. were 0.812 for the training set and 0.722 for the validation set, respectively [43]. The predictive performance of our PRlncRNA prognostic model was higher than that of the three published models, with 5-year AUCs of 0.902 for the training set and 0.797 for the validation set, respectively. The prognostic risk model in our study was constructed based on the REOs of PRlncRNAs. Although it is technically simpler to detect serum levels of protein markers such as the commonly used AFP, analysis at the RNA level may provide additional information for understanding cancer mechanistically. Most lncRNAs involved in the lncRNA-miRNA-PRG regulatory axes have been reported to be prognostically relevant in various cancers, including HCC. For example, LINC01554-mediated glucose metabolism reprogramming could suppress the tumorigenicity of HCC through the downregulation of PKM2 expression and inhibition of the Akt/mTOR signaling pathway [44]. NRSN2-AS1 could promote ovarian carcinogenesis through the miR-744-5p/PRKX axis [45]. Upregulation of lncRNA FOXD2-AS1 expression could promote the progression of HCC by causing epigenetic silencing of DKK1 and activating the Wnt/β-catenin signaling pathway [46]. Thus, these eleven lncRNA-miRNA-PRG regulatory axes could be helpful for further understanding the relationship between lncRNAs and PRGs and deserve further investigation.
"year": 2023,
"sha1": "a5202b74ba90465b7921e8364cc5fdfd1310912c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "a5202b74ba90465b7921e8364cc5fdfd1310912c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
253559186 | pes2o/s2orc | v3-fos-license | Gateways and gatekeepers: Two factors that influence the use of performance and image enhancing drugs (PIEDs) among UK military veterans
Recent reports have identified that PIEDs use is rising within the Armed Forces, leading to concerns over health and concomitant operational risks. The aim of this study was to identify the roles of gateways and gatekeepers in PIEDs use among a cohort of UK military veterans. Semi-structured interviews were conducted with 14 ex-Service personnel. Interviews were transcribed verbatim and thematically analysed using NVivo12 software. Common themes were identified around the ways in which the veterans were introduced to PIEDs and how they accessed them. Gateways fell into two categories, Circumstances and Behaviour, including excessive gym use, the need to cope with the fitness demands of military service, overseas deployment, and previous experiences with nutritional and body-building supplements. Gatekeepers included friends, colleagues, and mentors, and their roles were captured in two categories, Procurement of PIEDs and Information Dissemination. Recommendations include the need for further research on the roles of gatekeepers and gateways as important pathways to PIEDs use. Additionally, there is a need to build on themes suggested by earlier researchers to identify social, cultural, and economic factors that underpin motives for PIEDs use in the uniformed services. These two recommendations would inform the design and evaluation of PIEDs-related interventions.
INTRODUCTION
Performance and image enhancing drugs (PIEDs) is a collective term that covers substances that affect human performance and that have been highlighted as problematic by many national and international governing bodies of sport (Maughan et al, 2018).
Key motivations for the use of PIEDs are to change body shape and appearance and enhance physical performance (Brennan, Wells & Van Hout, 2017;Piacentino et al, 2017). However, it is recognised that the use of PIEDs carries health risks (Piacentino et al, 2017) from infections to sudden death (Darke, Torok & Duflou, 2014;Hope et al, 2013;van Amsterdam, Opperhuizen & Hartgens, 2010).
Given the above, and the fact that drug use has been associated with criminal behaviour among military veterans (Schultz et al, 2015), it is important to gain further knowledge about the use of PIEDs by former military personnel. In particular, we need to identify what influences PIEDs use. Whyte et al's (2021a) recent review of the literature concerning the use of PIEDs in both serving and retired Armed Forces personnel highlighted that anabolic steroids (n=10 of 20 papers reviewed), weight loss supplements (n=10) and bodybuilding agents (n=7) were the most mentioned products. They identified that PIEDs were employed variously throughout military careers, with use increasing substantively when personnel were deployed compared with before or after operational tours (Lui et al, 2018;Paisley, 2015;Varney et al, 2017).
Several motivations for PIEDs use among military personnel were highlighted, amongst which image enhancement and coping with the physicality of active service were prevalent (Whyte et al, 2021a). Image enhancement was related to weight reduction, muscle development, negative self-image, and body dysmorphia (Campagna & Bowsher, 2016;Carol, 2013;Mattila et al, 2010). Keeping up with the physical demands of service was associated with expectations placed upon the Armed Services and the demand for optimal fitness and strength to carry out military duties (Boos et al, 2010;Jacobson et al, 2012;Herbst, McAslin & Kalapatapu, 2017). Bucher's (2012) investigation noted PIEDs use helped combatants to deal with long foot patrols. This study also noted psychological motives for taking PIEDs to cope with the stresses and strains of combat, holding their nerve, and preparing for the possibility of killing another human being.
Negative physical health outcomes among military users have been reported, including severe vascular, organ, muscle, and blood conditions (Mattila et al, 2010;Brazeau et al, 2015;Harris, Winn & Ableman, 2017;Liane & Magee, 2016;Magee et al, 2016;Young et al, 2012). Worries around negative mental health have also been reported following PIEDs use, including extreme aggression and behavioural change (e.g. Varney et al, 2017;Herbst, McCaslin & Kalapatapu, 2017). Whyte et al's (2021a) review suggested that PIEDs use often started in basic training or when overseas (Lui et al, 2018;Bucher, 2012). Additionally, a number of other factors were also found to be associated with taking PIEDs, including poor educational attainment, heavy alcohol intake, smoking cigarettes, and a history of high intensity physical training (Boos et al, 2010;Jacobson et al, 2012;Mattila et al, 2010). Those factors are viewed as important antecedents to commencing PIEDs use (van de Ven & Mulrooney, 2017). As such, they are fundamental "gateways" to use. A gateway is a global construct that is used to explain how contextual factor(s) or behaviour(s) influence future lifestyle choices (Wilson, 2020), in this case, PIEDs use.

The participants had served in combat zones. Most had enlisted at 18 years of age (mean = 18.6 years, range 18-20 yrs.), and had served in the Armed Forces for an average of 7.3 years (range 6 to 10 years). Participants had been retired for between one and seventeen years (mean = 6.28 years). The mean length of time of PIEDs use was 7.93 years; however, the range was large (1 to 22 years of PIEDs use), with ten users reporting that they took PIEDs during their time in the Forces. Inductive analysis resulted in data being placed in two conceptually different areas (termed General Dimensions [GD] in this study). The GDs were designated as GATEWAYS and GATEKEEPERS. GATEWAYS consisted of seven themes which on further analysis were compressed to two categories (Figure 1), while GATEKEEPERS comprised ten 1st order themes. These were further assigned to four 2nd order themes and ultimately to two categories (Figure 2).
GATEWAYS

Circumstances and Behaviour were the two categories in this GD.

Circumstances
This category comprised three themes: Gym User, Overseas Deployment and Work Demands.
Gym User: One of the least reported themes (seven participant responses) made direct reference to gym usage as a gateway. Comments related to the role of the gym in providing a motivational culture due to the perceived ethos of the gym and the people that use it: I went to that gym knowing that most of the members were serious lifters or body builders. Nobody was just a "gym bunny" to keep trim. There were always guys moving around and when they got to trust you, they give plans and help. I wanted to look like them and the culture was to work hard, work often, and take whatever you need.

Another factor, stated by 12 of the 14 participants as being important in their decision to take supplements and/or PIEDs, was the Army's historically brutal training regime. "Beasting" is a squaddie's [new enlistee] term for high-intensity, highly demanding, and energy-sapping drills used as a short-cut to fitness development or, at times, as a form of punishment. A former infantryman said: Beasting was the hardest part of it. I've no idea why it needed to be done as most of us were seriously motivated to do well anyway, or at least as well as we could and that's all that should be asked for. There were grown men in tears at times. That starts a culture of doing whatever you need to get through and for me that meant taking supplements and some pills. (Male, 26).

Behaviour

High-Intensity Strength-Based Physical Training: The final theme in this category reflected the role that intense physical activity had on military veterans. This form of physical training was commented on by almost all interviewees (n=13), highlighting that their desire to do more training at higher intensities was important in their PIEDs journey: I felt I wanted to do more and more and once I was taking the gear was able to go on for ever. [In] fact I increased my time in the gym from about 70 mins a night to nearer 3 hours but taking them also allowed me to use my time and keep lifting (Male, 26).
GATEKEEPERS
Analysis of data identified that gatekeepers consisted of three key groupings: friends or peers as gatekeepers (n=14); work colleagues (n=14); and leaders or mentors (n=10) ( Figure 2). Their influence was strong.
This GD consisted of two categories: Procurement of PIEDs and Information Dissemination.

Procurement of PIEDs
This category comprised two 2nd Order Themes: Supplier of PIEDs (consisting of two 1st Order Themes) and Facilitator (consisting of three 1st Order Themes).
Suppliers of PIEDs: Suppliers were categorised in two ways. They were either Direct (suppliers) or they worked as Intermediaries. This was reflected in the statements, with most of the respondents initially getting their supplies from someone directly connected with them, usually a peer or work colleague, but then often moving on to intermediaries through whom PIEDs could be ordered: My first lot of PIEDs were bought from one of the guys in the gym that I used. I worked out a bit with him and he told me they were part of his own supply and to try them out. After a bit [some time] of experimenting he told me who he used in Newcastle to get his stuff. That guy got his gear from suppliers in Manchester or Leeds, so I never went direct just through him. It gave me a feeling of security as he knew most of the guys and had been the dealer for ages.

Likewise, as they gained more experience of using PIEDs, our cohort also noted that they used the same people, or an extended network, to gather information about new drugs or trends.
We all talk to each other and when anything new is around we always ask around, in the gym and outside. A real network of knowledge, like a tree with branches everywhere [laughs]. (Male,38).
Quantity: Similar comments were made with respect to the Quantities of PIEDs that should be used, with information again being passed via gatekeepers: It's not quite hit or miss but it's not as though there are instructions with what I buy. The people I went to for info or listened to most were the lads I was working out with and who were using the same stuff, or who had used it in the past (Male, 33).
Quality: The quality of PIEDs cannot be guaranteed. Our respondents agreed that there was always some sort of risk attached to using PIEDs but acknowledged that they had to trust their gatekeepers and the information offered by them. This faith in a supply chain seemed the norm: The idea that I am injecting some dodgy gear is always there, but I do my research before I try anything new and take advice from the older and more experienced guys in the gym.

Method of Use: The 14 participants were specifically questioned about the manner in which they used PIEDs, with all responding that they injected their drugs. Eleven also indicated that oral ingestion was used at times. However, all referred to initial needs for assistance with injecting. This assistance was usually from peers as opposed to seeking clinical advice or instruction: My early experiences were by getting help from the lads in the gym. In fact, they injected it for me, showed me how to make it safe until I was ready to try for myself. Even then I had somebody watching me in case I made a mistake. They were as good as any nurse I've known which is just as well as I couldn't see me going to the surgery to ask them to inject me. (Male, 33).
Having been taught how to take their PIEDs, the veterans were given information on how best to manage their consumption for best effect, and with a view to ensuring the process was as safe as possible. "Stacking", "pyramiding", "plateauing" and "cycling" are all methods employed regularly by PIEDs users to manage their intake. Similarly, another respondent said: My first load were orals [steroids] but they were making me feel sick and some of the lads said they were too risky, so I changed to injections. I still take oral supplements even now but not the heavy stuff. (Male, 31).
DISCUSSION
This study identified two key influences associated with PIEDs use in our cohort of ex-Services personnel: GATEWAYS and GATEKEEPERS. This paper is not the first to identify their roles in multiple settings of substance misuse; however, there are a number of findings that are novel to PIEDs use compared with other areas of abuse (e.g. drugs, alcohol, abusive behaviour) and to military situations in particular. The "gateway hypothesis" has developed since the 1970s (Kandel, 2002). This proposes that acquaintance with what have been classed as "entry" substances such as alcohol, cigarettes, and cannabis reliably predicts deeper and more severe drug use.
GATEWAYS seem not to be linked explicitly to the psychological construct of motivation in the literature, yet the association seems unequivocal, with gateways being cited as (a) occupationally derived, and (b) culturally driven through the environment in which users are embedded, such as "body-building gyms" (Coomber-Moore, 2017). These develop the needs on which PIEDs use is cultivated and fit well with our themes of Work Demands and Gym User. The third theme, Overseas Deployment, is military specific (Lui et al, 2018;Paisley, 2015;Varney et al, 2017). Ten of our interviewees indicated that they sought to appear tough or mean, to discourage approaches or aggression from others. With many of the respondents working in the security industry, preliminary thoughts were that this was based on participants' post-Services roles. However, further analysis identified a relationship between Work Demands and Deployment. Deployment offers both an access route to PIEDs as well as a rationale for taking the drugs. Access seemed relatively easy when deployed overseas, as UK Forces meet allied personnel, and in our context, work demands involved patrolling hostile environments while on active duty: We were working and walking among locals not knowing whether you would be attacked by a hostile, so the bigger and meaner and tougher you looked the better it made you feel. Wouldn't have stopped an IED [improvised explosive device] but made me and some of the lads feel better…and anyhow, if not out on patrol, camp was boring, so you are actively encouraged to keep fit and the Yanks [American troops] showed us what to take and where they got it (Male, 44).
The data that contributed to the Behaviour category included the theme of Recreational Drugs Use: all of the veterans had previously taken recreational drugs. Despite this, they felt that there was no direct relationship between recreational drug taking and their PIEDs use, although their PIEDs consumption may indicate a broader acceptance of taking some drugs. Bandura's (2002) Theory of Moral Disengagement provides an explanation for this belief. His theory suggests that individuals accept unethical actions to justify other dubious behaviours. Thus, our participants saw no issues with taking recreational drugs, and concomitantly did not consider PIEDs use to be morally unjust (Boardley, Grix & Dewar, 2014).
Supplement Use involved different assumptions to those of recreational drugs use, the main difference being that participants recognised the links between taking legal supplementation and taking PIEDs. All participants stated they took nutritional supplements for training performance or body image benefits, and the next stage for them was using PIEDs (Herbst, McAslin & Kalapatapu, 2017;Jacobson et al, 2010), highlighting an issue that may be culturally specific to military environments. Further investigation is suggested to ensure that Armed Forces personnel are not (actively or passively) "encouraged" to look for support or help outside the boundaries of military norms.
GATEKEEPERS are controversial figures in much of the literature, particularly in medical texts, where general practitioners and primary care specialists control access to specialist services, diagnostic testing, and hospital visits or admissions, and, as such, are acting as gatekeepers (Greenfield, Foley & Majeed, 2016). Gatekeepers also tend to be holders of information, often viewed as experienced persons who can either hold back information or provide it to others with the added value of perceived wisdom (Metoyer-Duran, 1993).
In our study, gatekeepers had strong "helpful" roles, rather than acting as blockages. Our analysis identified that gatekeepers were composed of three distinct but often related groups of people: friends or peers, work colleagues, and mentors or leaders (e.g., physical training instructors; fitness leaders). All had a role to play in either the procurement of PIEDs or in providing information about substances, their use, and related medical issues, irrespective of whether the veterans began taking PIEDs as serving or non-serving personnel. This concurs substantively with Coomber & Moyle's (2014) and van de Ven's (2017) research which identified that peers, friends, or other, context specific, individuals (such as other gym users or associates of friends in gyms) are most commonly involved in the acquisition of PIEDs.
GATEKEEPERS as a dimension was derived from two categories, Procurement of PIEDs (from the 2nd Order Themes of Supplier and Facilitator) and Information Dissemination (from the 2nd Order Themes of Information About PIEDs and Medical and Health Issues).
With new users particularly, the Procurement of PIEDs necessitated that gatekeepers took on two distinct roles as either Facilitators or Suppliers. As stated earlier, Suppliers were denoted as being either Direct or Intermediary, reflecting the fact that some gatekeepers provided PIEDs to users and were viewed as the "go-to" person in their gyms, whereas intermediaries acted on their behalf, almost as allies or collaborators acting as "go-betweens" in the supply chain. Irrespective, all users put a great deal of faith in their suppliers. This "blind faith" corroborates the findings of van de Ven and Mulrooney (2017; 2020) in the Netherlands and Belgium, and in Australia, respectively, who found that users of PIEDs implicitly trusted their suppliers.
Facilitators were considered to have one of three distinct functions, namely Guide, Influencer, and Director, a novel attribution. Guides were deemed to be gatekeepers who suggested what PIEDs to take, when to take them, and how to take them. Influencers' roles generally preceded the decision to take PIEDs, but theirs was definitely an active role, the purpose of which was to persuasively encourage engagement with PIEDs. Andreasson & Johansson (2014) noted that these influential roles are similar to those undertaken with both recreational and performance and image enhancing drugs users in the general population. The Director was differentiated from the Guide in both focus and control insofar as a Director undertook their role once the decision to take PIEDs had been reached, informing the user of who to approach or where to go for their PIEDs. While the people who undertook the roles were at times entwined, their functions seemed to be quite discrete, which is a further novel finding of this study. Further work is needed to consider those roles and their relationships.
The final category reflected the GATEKEEPERS' role of Information Disseminator. Two distinct areas were developed: Information about PIEDs and Medical and Health Issues.
The former consisted of three themes: Substance Choice, Quantity (of PIEDs to be taken), and Quality (of PIEDs). The three areas were again discrete though closely aligned. What the results indicated is that gatekeepers, be they Facilitators or Suppliers, were trusted to ensure that the correct substances were being purchased for specific outcomes, that the users were taking them in appropriate quantities, and that the quality was "pure". Participants trusted that standards were sound, with products unlaced, free of toxins, and supplied in appropriate doses. Our veterans were unaware of whether their purchases were safe but simply trusted their supply chains. Their gatekeepers "led" them through the maze of what drug to take from among the many available. While there were recognised dangers of acquiring information, knowledge, and practices from non-clinical sources, there was a final thematic area that was drawn from the data, namely Medical and Health Issues.
Method of Use (including technique) is an important theme in health terms, recognising that most of the users employed intra-muscular injections to administer their PIEDs, and were taught injection protocols and techniques by friends, peers, or other users.
In terms of managing consumption, participants employed a number of key methods, and again they got the information from other users. These included relatively dangerous behaviours, such as "plateauing" in which doses are increased incrementally over a period of approximately two months with the aim of overcoming the body's natural adaptation to PIEDs.
Despite the potential for negative health outcomes, these methods were still employed by our participants, who gained their knowledge of how to use them from their gatekeepers. Also, in common with PIEDs users from the general population (see Tighe et al, 2017;Zahnow et al, 2018), general advice from sources such as body-building magazines and internet forums was used by our participants.
Problem Identification and Solving covered areas of concern and resultant strategies or support to deal with them. Within our cohort, information around medical issues was also garnered from the same sources, instead of accessing suitable medical personnel. The engagement with such sources for medical concerns is normal among PIEDs users (Andreasson and Johansson, 2014;Clement et al, 2012). Nonetheless, given the possible negative consequences on health, it is disquieting to realise that they are the principal options followed by PIEDs users when seeking information, advice, possible medical therapies or other interventions.
Responses to our questioning showed that only a minority of interviewees (N=5) accessed clinical support, highlighting the tendency of PIEDs users to avoid seeking medical opinion whenever possible. It seems that PIEDs users do not trust their Forces medical staff due to an awareness that medical staff are senior figures in the Armed Forces. A comment from one veteran supports this view: "Well you can't really, can you? They're part of the "brass" [senior staff]. They'll shop [inform on] you!" (Male, 26).
GATEKEEPERS were viewed by our participants as being the most important people in their journey of PIEDs use, using them to access the drugs, train them on how to use them, and as sources for knowledge, information, and contacts. In the context of this study, gatekeepers were fundamentally other gym users who introduced and then supported PIEDs users. This was especially so in the early days as users. It was noted that gatekeepers had multiple roles: My main man [for supply and information] is the guy that I knew in the Forces who was able to get gear from his mates in Liverpool. Anything that I need to know, I go to him. If he doesn't know, he finds out (Male, 44).
In spite of their experiences of negative health consequences, such as injection-site or blood infections, our participants followed the same paths as recreational drug addicts in mainstream society by continuing to use the same suppliers, products, and behaviours (Binswanger et al., 2012).
CONCLUSION
This work amongst former military personnel found similarities with other PIEDs studies of similar cohorts. There were also similarities with the results of PIEDs studies in wider populations. These included the manner in which users were first introduced to performance and image enhancing drugs as well as the gatekeeping roles of "significant others" in accessing information. Also noted are features of PIEDs use among Services personnel that require additional exploration. These include factors that might reflect adverse influences by colleagues, the "masculine" culture that is inherent within military life, the excessive demands on recruits and regular personnel alike during military physical training sessions, as well as the physical and psychological requirements of active service in foreign lands.
This paper addresses the issues of gateways and gatekeepers in PIEDs use among a small cohort of former military personnel. As such, it is the first paper that has specifically considered the two areas and attempted to map them, albeit independently of each other. Given the apparent importance of both GATEKEEPERS and GATEWAYS to PIEDs use, further knowledge must be gained. As such, there are two main recommendations that fall out of this work. The first is a call for further research of these topics to build and test a model that identifies where the interactions sit between gatekeepers of varying backgrounds, the roles they undertake, and the relationships with gateways. Secondly, in an effort to inform treatment options and initiatives to promote harm reduction, van den Ven & Mulrooney (2017) argued that the design of interventions to counter PIEDs use should follow a holistic evaluation of social, economic, and cultural factors that play a part in the decision to take drugs, as well as the environments and people that facilitate the practice. We support this recommendation as it is of particular significance to the Armed Forces, where the values and intense training seem to foster a culture in which the rewards of PIEDs use outweigh the risks.
While much of this study reinforces concepts and practices from other areas of substance abuse or antisocial behaviour, our findings have also identified a number of key issues that have thus far been unrepresented in the literature surrounding the use of PIEDs. The fact that this work has been conducted with a small cohort of ex-military personnel means that it cannot be considered representative of PIEDs users generally. This makes the topic one of significance for future research with larger samples in competitive, vocational, and recreational settings. However, there are issues that are very specific to this present cohort, which makes this study an important addition to the literature around the use of PIEDs in the Armed Forces. It supports and reinforces the need for greater knowledge.
SUPPORTING AGENCIES
This project was funded by the Forces in Mind Trust, London, United Kingdom.
DISCLOSURE STATEMENT
No potential conflict of interest was reported by the authors. | 2022-11-17T16:03:23.239Z | 2022-11-15T00:00:00.000 | {
"year": 2022,
"sha1": "b6a573fd454e9c013ef0b7fe618ee1db39c56e88",
"oa_license": "CCBYNCSA",
"oa_url": "https://sjsp.aearedo.es/index.php/sjsp/article/download/gateways-gatekeepers-drugs-pieds-uk-military-veterans/41",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8677a79871748889a0e884681b76c7fb528330af",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": []
} |
5748355 | pes2o/s2orc | v3-fos-license | Electrostatic Spray Deposition-Based Manganese Oxide Films—From Pseudocapacitive Charge Storage Materials to Three-Dimensional Microelectrode Integrands
In this study, porous manganese oxide (MnOx) thin films were synthesized via electrostatic spray deposition (ESD) and evaluated as pseudocapacitive electrode materials in neutral aqueous media. Very interestingly, the gravimetric specific capacitance of the ESD-based electrodes underwent a marked enhancement upon electrochemical cycling, from 72 F∙g−1 to 225 F∙g−1, with a concomitant improvement in kinetics and conductivity. The change in capacitance and resistivity is attributed to a partial electrochemical phase transformation from the spinel-type hausmannite Mn3O4 to the conducting layered birnessite MnO2. Furthermore, the films were able to retain 88.4% of the maximal capacitance after 1000 cycles. Upon verifying the viability of the manganese oxide films for pseudocapacitive applications, the thin films were integrated onto carbon micro-pillars created via carbon microelectromechanical systems (C-MEMS) for examining their application as potential microelectrode candidates. In a symmetric two-electrode cell setup, the MnOx/C-MEMS microelectrodes were able to deliver specific capacitances as high as 0.055 F∙cm−2 and stack capacitances as high as 7.4 F·cm−3, with maximal stack energy and power densities of 0.51 mWh·cm−3 and 28.3 mW·cm−3, respectively. The excellent areal capacitance of the MnOx-MEs is attributed to the pseudocapacitive MnOx as well as the three-dimensional architectural framework provided by the carbon micro-pillars.
Introduction
The technological advancement toward small-scale and portable devices has resulted in an increased demand for micro-power systems. For instance, the recent boom in the development of implantable medical devices, wireless sensors, smart cards, microelectromechanical systems (MEMS) and personal electronics has resulted in the need for reliable miniaturized energy storage devices. At present, the majority of the micro-devices rely on batteries to provide the required energy and power. Despite the commercial availability of thin-film or "microbatteries", their relatively poor power-handling ability and limited lifetime hinder their applicability to systems that require high current spikes [1,2]. As an alternative to batteries, energy harvesters hold significant promise for sustainable environments; however, the currently existing energy harvester systems require an energy storage device in tandem [1]. Electrochemical capacitors or supercapacitors, on the other hand, are electrochemical energy storage systems that possess higher power densities than batteries along with superior lifetime. Conventional supercapacitors, however, are too bulky for small-scale applications and their fabrication methods are not compatible with the currently existing Integrated Circuit (IC) technology. Therefore, of immediate need is downsizing supercapacitors with compatible microelectronic fabrication techniques, so that they can be placed directly on the chip. Such devices, also referred to as micro-supercapacitors (MSCs), generally possess negligible active material masses and, therefore, their performance metrics are typically normalized by the area of the system. The typical areal energy and power densities delivered by MSCs range from µWh-mWh·cm −2 and µW-mW·cm −2 [1,[3][4][5]. Volumetric/stack normalization is also popular for reporting MSC performance, since it provides insight into intrinsic material properties, as well as device architecture.
In the past decade, carbon microelectromechanical systems (C-MEMS) technique has emerged as a potential technique to successfully fabricate carbon-based current collectors and electrodes for on-chip energy storage and bio-sensing applications [6][7][8][9][10][11][12][13]. The C-MEMS process is a microfabrication technique that essentially involves the pyrolysis of a patterned photoresist into carbon structures. C-MEMS offers the feasibility of creating 3D micro-pillars, which offsets the limitation of the small footprint area required for miniaturized systems. The idea of using 3D C-MEMS-based microstructures as current collectors for the integration of other capacitive materials was first demonstrated by Chen et al. [11], where they grew carbon nanotubes (CNTs) on the surface of the C-MEMS structures. The CNT/C-MEMS structures were reported to exhibit specific capacitance as high as 20 times that of the bare C-MEMS structures. Other reports have documented the use of C-MEMS-based MSCs including electrochemically activated C-MEMS [12], and polypyrrole (PPY) decorated C-MEMS structures [13]. Apart from CNT and PPY, manganese oxides offer great promise as active materials, owing to their high theoretical specific capacitance, environmental benignity, large abundance, and low cost [14][15][16][17]. Of all polymorphs, birnessite MnO 2 , in particular, is very well suited for energy storage applications, given the simultaneous utilization of the double-layer capacitance as well as the Mn 3+ /Mn 4+ redox couple [17]. Its layered structure exhibits edge-sharing MnO 6 octahedra in the sheets, and the facile in-and-out cation motion from the interlayer region allows for partial electrolyte ion intercalation into the lamellar region [17]. One of the recently proposed synthetic routes to obtaining birnessite MnO 2 (layered) is the electrochemical phase transition from hausmannite Mn 3 O 4 (spinel) [18][19][20]. The synergy between the hausmannite and birnessite phases has been documented to yield superior current response as opposed to the pristine MnO 2 or Mn 3 O 4 phases [21,22]. Furthermore, as noted by Kim et al. [23], the spontaneous transition of layered to spinel manganese oxides is one of the critical factors that compromises with their structural stability, and therefore, the opposite transition from spinel to layered phases could be advantageous for enhancing the cycle life of layered birnessite materials.
In this work, porous manganese oxide films were synthesized via electrostatic spray deposition (ESD) and evaluated as pseudocapacitive materials and as active materials for C-MEMS integration. ESD is an electrohydrodynamic spraying technique, which essentially involves the disintegration of a precursor solution into an aerosol spray upon the application of a high voltage between the feeding source and a grounded preheated substrate. The ability to tailor the film morphology by fine-tuning deposition parameters, without the need for vacuum, is what makes ESD an attractive and cost-effective thin-film synthesis method [24][25][26][27][28][29]. The ESD-derived manganese oxide films were able to deliver specific capacitances as high as 225 F·g −1 from 72 F·g −1 upon electrochemical cycling in neutral aqueous media. The enhancement in capacitance was ascribed to a partial phase transformation from hausmannite Mn 3 O 4 (spinel) to birnessite MnO 2 (layered). In addition to the enhancement in charge storage capacity, a concomitant improvement in kinetics and resistivity was observed upon cycling. Given the coexistence of the two phases, the films are referred to as MnO x henceforth. Several reports have documented the use of ESD-based manganese oxide films but their direct applicability to microsystems has not been explored [30,31]. Given the potent advantages of thin films for microsystems, the ESD-based MnO x were integrated onto the C-MEMS-generated 3D carbon micro-pillars and evaluated as MSC systems. In a two-electrode configuration, the MnO x /C-MEMS microelectrodes (MEs) were able to deliver specific capacitance as high as 0.055 F·cm −2 , much higher than other microsystems [32][33][34]; the maximal volumetric energy and power densities delivered by the MnO x /MEs were 0.51 mWh·cm −3 (1.84 J·cm −3 ) and 28 mW·cm −3 (28.3 mJ·s −1 ·cm −3 ), respectively. The excellent areal capacitance and the relatively high stack energy density of the MnO x -MEs are attributed to the pseudocapacitive MnO x as well as the three-dimensional architectural framework provided by the micro-pillars.
Crystallographic, Spectroscopic and Microstructural Studies on the As-Prepared and Cycled MnO x Films
The XRD pattern of the as-deposited manganese oxide powders is depicted in Figure 1a. The pattern of the cycled powders (Figure 2a), on the other hand, did not exhibit any sharp peaks, indicating the phase to be mostly amorphous. The broad peak centered at 18.64° is indexed as the (002) plane of the birnessite MnO 2 phase, whereas the two faint peaks located at 36.7° and 65.7° are identified as the (006) and (119) planes, respectively (JCPDS Card Number: 00-018-0802) [35]. The Fourier Transform Infrared (FTIR) spectrum of the as-synthesized MnO x films between frequencies of 500-4000 cm −1 is depicted in Figure 1b. In the high-frequency region, the broad peak centered at 3345 cm −1 is assigned to -OH stretching vibrations [36], whereas the peak located at around 1624 cm −1 is attributed to -OH bending vibrations [36]. In the lower frequency region, the absorption peaks located at 529 cm −1 and 602 cm −1 are ascribed to Mn-O stretching modes in the octahedral and tetrahedral sites, respectively [20,[36][37][38]. The FTIR spectrum of the cycled MnO x film (after 1000 cycles) is shown in Figure 2b. It should be noted that while the majority of the peaks signaling -OH bending and stretching are still visible, the peak at 602 cm −1 , which signals the presence of the tetrahedral Mn-O stretch, disappears, indicating a loss of order in the crystal structure as compared to the as-synthesized MnO x films. However, the peak at 525 cm −1 is still present, indicating that the cycled films comprise predominantly Mn-O groups from the octahedral layers, which is also indicative of the birnessite phase. The microstructure of the as-deposited manganese oxide sample is shown in Figure 1c; the microstructure was mostly porous with a reticular network-like morphology. The microstructure of the electrochemically cycled manganese oxide (after 1000 cycles) is shown in Figure 2c. As evident, there is a dramatic change in the morphology as compared to the as-deposited films; as opposed to the previous reticular structure, the post-cycling structure is predominantly "layered" platelet-like, which is reminiscent of birnessitic MnO 2 [19,20].
Electrochemical Characterization of the MnO x Films
The cyclic voltammograms (CV) of the as-synthesized manganese oxide films between potentials of −0.1 V and 0.9 V (vs. Ag/AgCl) at a sweep rate of 5 mV·s −1 are shown in Figure 3a. As evident for the 2nd cycle, the CV has no discernible redox peaks. Upon cycling, however, anodic and cathodic peaks gradually emerge and grow in intensity around 0.63 V and 0.34 V, respectively; these are attributed to oxidation and reduction within the Mn 3+ /Mn 4+ redox couple, as per the Pourbaix diagram [21,30,39]. It is worth noting that the area under the CV curve increases substantially upon cycling, which implies that there is an enhancement in capacitance upon electrochemical cycling. The gravimetric specific capacitances (C s ) at the 2nd, 10th, 20th, 50th, 100th, 200th and 500th cycles were approximated as 56, 102, 129, 156, 162, 163 and 160 F·g −1 , respectively, using Equation (1), where C s is the specific capacitance, m is the mass of the electrode, s is the scan rate, ∆V is the voltage window, I is the current, and V is the voltage.
C s = [1/(m · s · ∆V)] ∫ I dV (1)

Figure 3b depicts the galvanostatic charge-discharge (GCD) curves of the MnO x films at a current density of 0.5 A·g −1 . As evident, there is a significant enhancement in capacitance from the 2nd cycle, accompanied by a marked decrease in the voltage drop in the subsequent cycles. At the 2nd, 10th, 20th, 50th, 100th, 200th and 500th cycles, the specific capacitance was approximated as 72, 139, 172, 197, 214, 225, 223 and 215 F·g −1 , respectively. In order to investigate the resistivity changes upon subsequent cycling of the sample, Electrochemical Impedance Spectroscopy (EIS) studies were carried out between frequencies of 100,000 Hz and 0.01 Hz. A typical Nyquist curve comprised a depressed semicircle in the high-frequency region followed by a relatively linear slope at lower frequencies.
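For concreteness, a minimal numerical sketch of Equation (1) is given below: it integrates a CV trace over the voltage sweep and normalizes by mass, scan rate and voltage window. The current profile is a synthetic placeholder, not the measured data.

```python
# Minimal sketch of Equation (1): C_s = (1 / (m * s * dV)) * integral(I dV).
# The current profile below is a hypothetical placeholder, not the measured CV.
import numpy as np

m = 1.0e-3                                 # electrode mass in g (~1 mg yield reported)
s = 5.0e-3                                 # scan rate in V/s (5 mV/s)
V = np.linspace(-0.1, 0.9, 500)            # potential vs. Ag/AgCl, in V
I = 1.0e-3 * (0.8 + 0.4 * np.sin(3 * V))   # hypothetical anodic current, in A

dV = V.max() - V.min()                     # voltage window (1.0 V)
# Trapezoidal integration of |I| over the sweep, in A*V.
integral = np.sum(0.5 * (np.abs(I[1:]) + np.abs(I[:-1])) * np.diff(V))
C_s = integral / (m * s * dV)              # specific capacitance in F/g

print(f"C_s = {C_s:.0f} F/g")
```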
The diameter of the semicircular region is associated with the charge-transfer resistance of the system at the electrode-electrolyte interface, and equivalent circuit analyses were done in order to verify the effect of cycling on the resistance of the system. The equivalent circuit for the Nyquist plots is depicted in the inset of Figure 3c. R s , R ct , W, C dl and C p stand for solution resistance, charge-transfer resistance, Warburg impedance, double-layer capacitance, and pseudocapacitance, respectively [40]. The values of R ct and R s at different cycles are tabulated in Table 1. As evident, the charge-transfer resistance decreases rapidly for the first 200 cycles, which can be ascribed to the transformation of the relatively insulating Mn 3 O 4 to the more conducting birnessite MnO 2 , and is consistent with previous reports [19]. The slight increase in the resistance after the 200th cycle can be attributed to resistance changes in the predominantly active birnessite phase upon cycling. The long-term cycling of the MnO x films is shown in Figure 3d. The specific capacitance of the electrode steadily increases from ca. 72 F·g −1 at the second cycle to a maximal capacitance of 225 F·g −1 at the 200th cycle. It should be noted that even the starting capacitance of the MnO x electrodes was superior to that of previously ESD-synthesized β-MnO 2 films, which yielded a low gravimetric capacitance of 13 F·g −1 [41]. At the 1000th cycle, the capacitance dropped to 199 F·g −1 , a capacitive drop of approximately 11.6% relative to the maximal capacitance reached by the system. GCD curves at different current densities and CVs at different scan rates were also recorded; the specific capacitances were approximated as 136, 110, 77, 47 and 30 F·g −1 at scan rates of 5, 10, 20, 50 and 100 mV·s −1 , respectively. As evident, the capacitance of the MnO x electrodes decreases with increasing scan rate, in addition to a clear deviation from the relatively rectangular capacitive shape. This is expected, since the rapid electrolyte ion flux at higher scan rates limits the diffusion-controlled charge storage processes at the MnO x electrode surface, resulting in lower utilization of active charge storage sites, thereby limiting the charge storage [42].
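Returning to the equivalent-circuit analysis above: the exact wiring of the five elements appears only in the inset of Figure 3c, so the Randles-type topology assumed in the sketch below (R s in series with C dl in parallel with R ct plus Warburg, followed by a series C p) and all parameter values are illustrative assumptions, not the fitted circuit.

```python
# Illustrative impedance model using the elements named in the text (R_s, R_ct,
# W, C_dl, C_p). The topology (R_s in series with [C_dl parallel (R_ct + W)]
# and with C_p) and the parameter values are assumptions for illustration only.
import numpy as np

def warburg(omega, sigma):
    """Semi-infinite Warburg impedance: Z_W = sigma * omega**-0.5 * (1 - 1j)."""
    return sigma * omega**-0.5 * (1.0 - 1.0j)

def z_model(omega, Rs, Rct, sigma, Cdl, Cp):
    z_faradaic = Rct + warburg(omega, sigma)             # charge transfer + diffusion
    z_parallel = 1.0 / (1j * omega * Cdl + 1.0 / z_faradaic)
    return Rs + z_parallel + 1.0 / (1j * omega * Cp)     # series pseudocapacitance

f = np.logspace(5, -2, 200)        # 100 kHz down to 0.01 Hz, as reported in the text
Z = z_model(2 * np.pi * f, Rs=2.0, Rct=15.0, sigma=5.0, Cdl=1e-4, Cp=0.05)
# Nyquist coordinates would be (Z.real, -Z.imag); the semicircle diameter tracks
# R_ct, which Table 1 shows shrinking over the first 200 cycles.
print(round(Z.real[0], 2), round(-Z.imag[-1], 1))
```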
Fabrication and Electrochemical Characterization of the MnO x -MEs
Figure 4a shows the typical fabrication process of the MnO x -MEs, and the detailed explanation is given in Section 3.3. The typical MnO x -ME architecture is shown in Figure 4b. A typical micro-pillar had an average height of ~75 µm and a width of ~35 µm (adjusted for tilt angle). Figure 4c shows a magnified view of the side of the manganese oxide-encrusted micro-pillars; as evident, the films were reticular with a very porous and consistent network-like morphology, analogous to the ones synthesized previously. The thickness of the manganese oxide film was approximately 0.6-0.8 µm. For stack energy and power density calculations, the energy and power were normalized by the volume of the electrodes, for which the height of the posts was multiplied by the footprint of the device (1 cm 2 ), equating to a volume of v = 1 cm 2 × 0.0075 cm = 0.0075 cm 3 . In order to evaluate the electrochemical performance of the MnO x -MEs, two identical MnO x -MEs were used in a two-electrode symmetric cell setup and the cell was analyzed between a voltage window of 0 and 0.7 V in 1 M Na 2 SO 4 . For the symmetrical cell system, one of the MnO x -MEs functioned as the working electrode, whereas the other served as the counter electrode. The typical cycling behavior of the MnO x -MEs is shown in Figure 4d. As evident, the cell capacitance increased from ≈14.7 mF·cm −2 to ≈19.8 mF·cm −2 between the 2nd and the 500th cycle. The enhancement in capacitance can be attributed to the electrochemical activation of the MnO x films upon cycling. GCD curves at different current densities are shown as the inset; the predominantly triangular shape of the charge-discharge curves indicates the capacitive nature of the microelectrodes. Figure 4e shows the rate-handling capability of the system, which was able to deliver geometric capacitances (normalized by footprint area) as high as 55 mF·cm −2 at a current rate of 0.05 mA·cm −2 , while still maintaining a capacitance of 22.5 mF·cm −2 at a current density of 0.5 mA·cm −2 . The maximal stack capacitance (normalized by volume) achieved was 7.44 F·cm −3 , resulting in a maximal stack energy density of 0.51 mWh·cm −3 (1.84 J·cm −3 ), whereas the maximal power density achievable was approximated as 28.3 mW·cm −3 (28.3 mJ·s −1 ·cm −3 ), as shown in the Ragone chart in Figure 4f. The range of energy density achievable by the system was 0.21-0.51 mWh·cm −3 , for a power density range of 28.3-1.1 mW·cm −3 . It should be noted that the areal capacitance delivered by the MnO x -MEs was fairly high as compared to other microsystems reported in the literature [32][33][34], which is ascribed to the three-dimensional carbon micro-pillar framework that provides a much larger surface area for active material integration. Despite the excellent areal capacitance and the relatively high energy density of the MnO x -MEs, their low power density needs to be addressed. Enhancement in the power handling is expected with the use of hybrid MnO x structures containing a combination of pseudocapacitive MnO x and conducting double-layer nanostructured carbons, as well as by constructing on-chip inter-digitated systems.
Designing inter-digitated systems with the anode and cathode on the same chip reduces ion-transport resistance [1], as a result of which the kinetics of the system can be enhanced. Devising such hybrid structures and inter-digitated systems to further enhance the power of MnO x -MEs is a subject of future work.
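As a quick cross-check of the stack metrics quoted above, E = ½CV² with the reported capacitance and voltage window reproduces the stated energy density; all numbers in the short sketch below are taken from the text.

```python
# Cross-check of the reported stack metrics: E = 1/2 * C * V^2 with
# C_stack = 7.44 F/cm^3 and a 0.7 V window should reproduce ~1.84 J/cm^3
# (0.51 mWh/cm^3); small deviations come from rounding of C_stack.
C_stack = 7.44                        # maximal stack capacitance, F/cm^3
V_cell = 0.7                          # cell voltage window, V

E_joule = 0.5 * C_stack * V_cell**2   # volumetric energy density, J/cm^3
E_mwh = E_joule / 3.6                 # 1 mWh = 3.6 J
print(f"{E_joule:.2f} J/cm^3 = {E_mwh:.2f} mWh/cm^3")   # ~1.82 J/cm^3, ~0.51 mWh/cm^3

# Electrode volume normalization as described: 1 cm^2 footprint x 75-um posts.
volume_cm3 = 1.0 * 0.0075
print(f"normalization volume = {volume_cm3} cm^3")
```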
Manganese Oxide Electrode Synthesis
Manganese (II) acetate tetrahydrate (Mn(CH 3 COO) 2 ·4H 2 O, Alfa Aesar, Ward Hill, MA, USA) was first dissolved in 1,2-propanediol (Sigma Aldrich, St. Louis, MO, USA) at a concentration of 2.4 mg·mL −1 with constant stirring for 30 min and directly used as the precursor solution to deposit the MnO x films. For the ESD setup, the feeding rate was kept at 3 mL·h −1 and the voltage between the needle and substrate was kept between 6 and 8 kV. The distance between the needle and the substrate was approximately 4 cm and the deposition was carried out at 300 °C on stainless steel substrates. The typical gravimetric mass yield was 1 ± 0.05 mg per sample.
Structural and Material Characterization
Spectroscopic studies were carried out using FTIR spectroscopy in order to study the effect of cycling on the surface chemistry of the MnO x films, using a JASCO FTIR 4100 equipped with an Attenuated Total Reflectance (ATR) accessory. For crystallinity studies, the powders were scratched off from the stainless steel substrates before and after cycling. The crystallinity of the as-deposited and cycled manganese oxide films was studied using a Siemens 5000D X-ray Diffractometer with Cu Kα radiation (Siemens, Munich, Germany). Further crystallographic studies on the as-deposited and cycled powders were carried out using a Philips CM-200 200 keV Transmission Electron Microscope (TEM) (FEI Philips, Hillsboro, OR, USA). The morphology of the as-deposited and cycled films, as well as the MnO x /C-MEMS microelectrodes, was studied using a scanning electron microscope (JEOL SEM 6330F, Peabody, MA, USA) in the secondary electron imaging (SEI) mode.
MnO x /C-MEMS Fabrication
The experimental setup and details of the C-MEMS process used in this work have been reported previously [3][4][5][6]. A schematic illustration of MnO x -ME fabrication is shown in Figure 4a. In brief, the C-MEMS-based 3D micro-pillars were prepared by a two-step photolithography process followed by a pyrolysis step. In the first photolithography step, a two-dimensional square (10 mm side) pattern was created using NANO™ SU-8 25 (Microchem, Westborough, MA, USA) as the current collector. The photoresist film was spun-coated onto a silicon oxide wafer (4", (1 0 0)-oriented, n-type) at 500 rpm for 12 s and at 3000 rpm for 30 s using a Headway Research photoresist spinner, followed by soft baking at 65 °C for 3 min and hard baking at 95 °C for 7 min on a leveled hotplate. The baked photoresist was thereafter patterned with a UV exposure dose of 300 mJ·cm −2 using an OAI (800) Mask Aligner. After the exposure process, a post-exposure bake was conducted at 65 °C for 1 min and 95 °C for 5 min on a hotplate. The second photolithography step comprised building the cylindrical micro-pillar arrays using NANO™ SU-8 100 (Microchem, Westborough, MA, USA) on the patterned current collector. SU-8 100 was spun-coated at 500 rpm for 12 s and at 1500 rpm for 30 s using a Headway Research photoresist spinner. The spun-coated photoresist was then soft baked at 65 °C for 10 min on a leveled hot plate and hard baked at 95 °C for 45 min in an oven. The exposure was done using a UV exposure dose of 700 mJ·cm −2 using an OAI (800) Mask Aligner, following which a post-exposure bake was performed at 65 °C for 3 min and 95 °C for 10 min in an oven. Afterward, the sample was developed with NANO™ SU-8 developer (Microchem, Westborough, MA, USA) for 15-20 min to wash away any remaining unexposed photoresist, followed by isopropanol rinsing and nitrogen drying. Finally, the resultant SU-8 structures were pyrolyzed at 900 °C for 1 h in a Lindberg alumina-tube furnace with a continuous flow of argon at a ramp of 5 °C/min. After the carbonization process, the carbon samples were allowed to cool to room temperature naturally and directly used as substrates for MnO x integration using ESD as described in Section 3.1.
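For readability, the full recipe (ESD parameters from Section 3.1 plus the two-step lithography and pyrolysis above) can be collected into a single structured record. The schema below is purely illustrative; all values are transcribed from the text.

```python
# The C-MEMS + ESD recipe from Sections 3.1 and 3.3 as a structured record.
# All values are transcribed from the text; the dictionary layout is illustrative.
process = {
    "esd_deposition": {
        "precursor": "Mn(CH3COO)2.4H2O in 1,2-propanediol, 2.4 mg/mL",
        "feed_rate_mL_per_h": 3, "voltage_kV": (6, 8),
        "needle_substrate_distance_cm": 4, "substrate_temp_C": 300,
    },
    "lithography_su8_25_current_collector": {
        "spin_rpm_s": [(500, 12), (3000, 30)],
        "soft_bake": ("65 C", "3 min"), "hard_bake": ("95 C", "7 min"),
        "exposure_mJ_per_cm2": 300,
        "post_exposure_bake": [("65 C", "1 min"), ("95 C", "5 min")],
    },
    "lithography_su8_100_micropillars": {
        "spin_rpm_s": [(500, 12), (1500, 30)],
        "soft_bake": ("65 C", "10 min"), "hard_bake": ("95 C", "45 min"),
        "exposure_mJ_per_cm2": 700,
        "post_exposure_bake": [("65 C", "3 min"), ("95 C", "10 min")],
        "develop_min": (15, 20),
    },
    "pyrolysis": {"temp_C": 900, "hold_h": 1, "ramp_C_per_min": 5, "atmosphere": "Ar"},
}
print(process["pyrolysis"])
```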
Electrochemical Characterization
The electrochemical characterization on the manganese oxide films deposited on stainless steel substrates as well as carbon micro-pillars was carried out using a Bio-logic Versatile Multichannel Potentiostat (VMP3). For half-cell studies, the MnO x films were used as the working electrode, a platinum wire served as the counter electrode, while an Ag/AgCl electrode served as the reference electrode; the MnO x films were tested between potentials of −0.1 V and 0.9 V (vs. Ag/AgCl). A neutral aqueous electrolyte of 1.0 M Na 2 SO 4 was used for the cell assembly. For the MnO x /C-MEMS characterization, two of the manganese oxide-encrusted carbon micro-pillars were used in a symmetric configuration in 1.0 M Na 2 SO 4 aqueous electrolyte for a cell potential of 0-0.7 V.
Conclusions
In this paper, manganese oxide films were synthesized using electrostatic spray deposition (ESD) and characterized as pseudocapacitive materials for electrochemical capacitor applications in neutral aqueous media. The initial phase synthesized was the relatively insulating hausmannite Mn 3 O 4 , which partially transformed into the conducting birnessite MnO 2 upon electrochemical cycling, resulting in an enhanced gravimetric capacitance from 72 F·g −1 to 225 F·g −1 . Furthermore, MnO x -MEs were created by combining the bottom-up ESD approach with the top-down C-MEMS approach. In a two-electrode setup, the MnO x -MEs were able to deliver geometric specific capacitances as high as 0.055 F·cm −2 , and maximal volumetric energy and power densities of 0.51 mWh·cm −3 and 28.3 mW·cm −3 , respectively. The excellent areal capacitance and the high stack energy density of the MnO x -MEs are attributed to the pseudocapacitive MnO x as well as the three-dimensional architectural framework of the micro-pillars.
The feasibility of using ESD to produce thin-film manganese oxide electrodes with high gravimetric capacitance, as well as 3D microelectrodes with high areal capacitance, was therefore established.
Three-Dimensional Volumetric Measurement of Endolymphatic Hydrops in Meniere's Disease
Objective: We used volumetric three-dimensional (3D) analysis to quantitatively evaluate the extent of endolymphatic hydrops (EH) in the entire inner ear. We tested for correlations between the planimetric and volumetric measurements, to identify their advantages and disadvantages. Methods: HYDROPS2-Mi2 EH images were acquired for 32 ears (16 patients): 16 ipsilateral ears of MD patients (MD ears) and 16 contralateral ears. Imaging was performed on a 3-T MR unit with a 32-channel phased-array coil, using the HYDROPS2-Mi2 sequence after the contrast agent had been given time to fill the perilymphatic space. We calculated the EH% [(endolymph)/(endolymph+perilymph)] ratio and analyzed the entire inner ear in terms of the volumetric EH% value, but only single cochlear and vestibular slices were subjected to planimetric EH% evaluation. The EH% values were compared between MD ears and non-MD ears, to evaluate the diagnostic accuracy of the two methods. Results: The volumetric EH% was significantly higher for MD vestibules (50.76 ± 13.78%) than non-MD vestibules (39.50 ± 8.99%). The planimetric EH% was also significantly higher for MD vestibules (61.98 ± 20.65%) than non-MD vestibules (37.22 ± 12.95%). The vestibular and cochlear volumetric EH% values correlated significantly with the planimetric EH% values of the MD ear. Conclusion: Volumetric and planimetric EH measurements facilitate diagnosis of MD ears compared to non-MD ears. Both methods seem to be reliable and consistent; the measurements were significantly correlated in this study. However, the planimetric EH% overestimates the extent of vestibular hydrops by 26.26%. Also, planimetric data may not correlate with volumetric data for non-MD cochleae with normal EH% values.
INTRODUCTION
Meniere's disease (MD) is characterized by repeated spells of vertigo accompanied by low-frequency hearing loss, hearing fluctuation, ear fullness, and tinnitus (1). In 2015, the Classification Committee of the Barany Society established guidelines for the diagnosis of MD. An MD patient should exhibit (A) two or more spontaneous episodes of vertigo (lasting 20 min to 12 h); (B) audiometrically documented, low-to medium frequency, sensorineural hearing loss in one ear; and (C) fluctuating aural symptoms (hearing, tinnitus, or fullness) in the affected ear (2). This standardized definition served as an important milestone for clinicians and researchers. However, all of the aforementioned criteria are subjective, or based on subjective hearing tests. As no criterion is objective, the diagnosis may sometimes be controversial or unclear. Classically, endolymphatic hydrops (EH) has been regarded as objective histopathological evidence of MD (3). However, histopathology can be performed only postmortem: it is not possible to evaluate a patient who is currently suffering from recurrent vertigo attacks. The time gap between the development of active MD and postmortem evaluation limits our understanding of how the disorder progresses. An objective diagnostic parameter would be very useful, especially when considering (invasive) intratympanic gentamicin injection, labyrinthectomy, or vestibular neurectomy.
Recently, magnetic resonance imaging (MRI) of EH has become possible (4,5). The Nagoya group, among others, separated the perilymphatic and endolymphatic spaces in MR images. At least three different MRI techniques have been reported: (1) subtraction of two sequences with different inversion times; (2) turbo spin-echo inversion recovery via real reconstruction; and (3) three-dimensional (3D) fluid-attenuated inversion recovery (FLAIR) (6). All three techniques objectively imaged EH in MD patients (6)(7)(8)(9). EH imaging studies are promising, but it remains unclear how to objectively grade the extent of hydrops. Several grading systems have been developed to objectively classify the extent of EH; these vary by the imaging techniques used and the goals of the analysis. Most systems evaluate the relative size (planimetric ratio) of the endolymphatic area (mm 2 ) in one or two slices of two-dimensional (2D) MR images (10)(11)(12)(13). Slices including the mid-modiolar cochlear section and lower axial vestibule are typically analyzed. However, these approaches evaluate only a small proportion of the inner ear. As the inner ear has a complex 3D shape, and as some endolymphatic organs are not aligned along the axial plane, it may not be optimal to evaluate only one or two (supposedly representative) axial MRI slices. Some pioneering studies (14)(15)(16) sought to evaluate the relative 3D size (i.e., the volumetric ratio) of the endolymphatic volume (in µL) of the entire inner ear. However, these studies did not quantitatively compare the volumetric and planimetric EH ratios; a semi-quantitative approach was taken (16) but other studies (14,15) lacked planimetric controls. Here, we explored the characteristics and advantages/disadvantages of volumetric EH measurements by directly and quantitatively comparing the volumetric and planimetric data.
Patients
Thirty-two ears were imaged in patients clinically diagnosed with definite (n = 11) or probable (n = 5) MD according to the 2015 criteria of the Classification Committee of the Barany Society (2). Patients with conditions that might affect MRI or hearing were excluded, as were those with a history of seizures, organic brain damage, or implantation of cardiac pacemakers, cochlear implants, or intraocular ferromagnetic materials. EH imaging was performed when MD was inactive, i.e., when no severe attack of dizziness had occurred within the prior month and hearing had been stable for at least 2 months. The gender ratio (M:F = 7:9), average age (47.3 ± 8.1 years) and duration of illness (53.6 ± 63.6 months) were similar to those of previous reports (17,18). More detailed demographic data are listed in Tables 1, 2.
MRI
Four hours after injection of contrast agent (Magnevist; Bayer Ltd., Leverkusen, Germany), MRI scans were performed using a 3-T MR unit (3-T Magnetom Tim Trio; Siemens Medical Solutions, Erlangen, Germany) with a 32-channel phased-array coil (4). All MRI protocols were those of the Nagoya group (19).
Endolymphatic Hydrops Analysis
We analyzed the HYDROPS2-Mi2 EH images of 32 ears (16 ipsi-lesional MD ears and 16 contralateral non-MD ears). The perilymph and endolymph were clearly demarcated in all 32 ears. We used a threshold technique based on the signal intensity of HYDROPS2-Mi2 images to quantitatively analyze the endo- and peri-lymphatic space volumes, which were segmented as negative (< −1) and positive (>5) threshold signal intensities, respectively, on manually drawn regions of interest (ROIs) of the cochlea and vestibule evident on MR cisternographs. Although the signal intensities of bony structures in ROIs are set to zero in HYDROPS2-Mi2 images, any remaining bony structures within the cochleae and vestibules that were of non-zero intensity were removed. We used cutoff values of −1 and 5 to minimize volume overestimations near the boundaries of the cochlear and vestibular systems. For volumetric analysis, all MR images (10-15 slices) covering the vestibule (Figure 1A) and cochlea (Figure 1B) were 3D-stacked. The absolute volumes (in µL) of the endolymph and perilymph were compared between MD and non-MD ears. The quantitative volumetric EH% [endolymph volume (µL)/(endolymph + perilymph volume (µL))] was calculated automatically by the software. For conventional planimetric analysis, two representative cross-sectional MR images were analyzed by drawing cochlear and vestibular ROIs (Figures 1C,D) using the method of (4). For the vestibular ROI, the lowest slice wherein over 240° of the lateral semicircular canal ring was apparent was selected (Figure 1C). For the cochlear ROI, the slice exhibiting the largest cochlear modiolus was selected (Figure 1D). The absolute areas (in mm 2 ) of the endolymph and perilymph were compared between MD and non-MD ears. The quantitative planimetric EH% [endolymph area (mm 2 )/(endolymph + perilymph area (mm 2 ))] was calculated. MD and non-MD ears were compared to determine if the volumetric and planimetric analyses identified the pathological side. Also, the volumetric and planimetric EH% values were compared within each subject.
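A minimal sketch of the thresholding step described above follows. Only the thresholds (< −1 for endolymph, > 5 for perilymph) and the voxel size come from the text; the image array, ROI mask and function name are hypothetical placeholders.

```python
# Minimal sketch of the threshold-based EH% computation: within a manually
# drawn ROI, voxels with HYDROPS2-Mi2 intensity < -1 count as endolymph and
# voxels with intensity > 5 as perilymph (intermediate intensities are excluded
# to limit boundary overestimation). Image and ROI are hypothetical.
import numpy as np

VOXEL_UL = 0.47 * 0.47 * 1.00   # voxel volume in mm^3; 1 mm^3 = 1 uL

def eh_percent(intensity, roi_mask):
    """Volumetric EH% = 100 * endolymph / (endolymph + perilymph) inside the ROI."""
    vals = intensity[roi_mask]
    endo_uL = np.count_nonzero(vals < -1) * VOXEL_UL
    peri_uL = np.count_nonzero(vals > 5) * VOXEL_UL
    return 100.0 * endo_uL / (endo_uL + peri_uL), endo_uL, peri_uL

img = np.random.normal(loc=3.0, scale=4.0, size=(12, 64, 64))  # fake slice stack
roi = np.ones(img.shape, dtype=bool)                           # fake all-in ROI
eh, endo, peri = eh_percent(img, roi)
print(f"EH% = {eh:.1f} (endolymph {endo:.1f} uL, perilymph {peri:.1f} uL)")
```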
Statistical Analysis
Continuous variables are expressed as means ± standard deviations (SDs). All statistical analyses were performed using SPSS software (ver. 25.0; SPSS Inc., Chicago, IL, USA). The Wilcoxon test was used to compare the volumetric and planimetric data. Also, the Mann-Whitney U test was used to compare the EH% between definite and probable MD patients. Correlations were derived using the Spearman method. A p-value < 0.05 was considered statistically significant.
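As an illustration, the same tests are available in SciPy; the sketch below mirrors the analysis plan with hypothetical numbers (the study itself used SPSS 25).

```python
# Sketch of the tests named above using SciPy equivalents of the SPSS
# procedures; all arrays are hypothetical EH% values, not the study data.
import numpy as np
from scipy import stats

volumetric = np.array([50.8, 47.2, 55.1, 49.0, 52.3])    # paired, within-subject
planimetric = np.array([62.0, 58.4, 70.2, 60.1, 66.5])

stat_w, p_wilcoxon = stats.wilcoxon(volumetric, planimetric)   # paired comparison
rho, p_spearman = stats.spearmanr(volumetric, planimetric)     # correlation

definite = np.array([51.8, 49.5, 53.0, 50.2])             # independent groups
probable = np.array([48.5, 42.4, 46.1])
stat_u, p_mwu = stats.mannwhitneyu(definite, probable, alternative="two-sided")

print(f"Wilcoxon p={p_wilcoxon:.3f}, Spearman rho={rho:.2f} (p={p_spearman:.3f}), "
      f"Mann-Whitney p={p_mwu:.3f}")
```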
Correlations Between Volumetric and Planimetric Data
In the MD cochlea, the volumes of the endolymph (Rs = 0.668, p = 0.005) and perilymph (Rs = 0.774, p < 0.001) correlated significantly with the planimetric measurements (Figures 4A,B). The volumetric EH% also significantly correlated with the planimetric EH% (Rs = 0.753, p = 0.001, Figure 4C, volumetric EH% = planimetric EH% * 1.013). However, weak correlations were seen for non-MD cochleae (endolymph, Rs = 0.447, p = 0.083; perilymph, Rs = 0.356, p = 0.176; and EH%, Rs = 0.371, p = 0.158). That is, the volumetric and planimetric data were not correlated in the non-MD cochleae (Figures 4D-F).

FIGURE 2 | Comparison of volumetric (µL) and planimetric (mm 2 ) measurements between MD and non-MD ears. The endolymph volume and area were significantly larger in MD than non-MD ears (A,B,D,E). In contrast, the perilymph volume and area were significantly lower in MD than non-MD ears. Thus, the volumetric and planimetric EH% values (the percentages of endolymphatic hydrops) were significantly greater in MD ears than non-MD ears (C,F). The EH% difference between MD and non-MD ears was more pronounced on planimetric than volumetric measurement.

Figures 5A,B compare the endolymphatic hydrops percentage (EH%) between the two groups. The volumetric vestibular EH% was similar between definite MD (51.79 ± 6.97%) and probable MD (48.50 ± 24.11%; p = 0.913, Figure 5A). The volumetric cochlear EH% was also similar between the definite MD (49.54 ± 13.71%) and probable MD (42.36 ± 22.07%; p = 0.377, Figure 5B) groups.
Quantitative Comparison of the Volumetric and Planimetric EH Ratios
Single vestibular and cochlear slices adequately represented the 3D EH status of the entire inner ear. The volumetric data correlated significantly with the conventional planimetric data (Figures 3C, 4C) of MD ears, which exhibited significantly higher EH% values than did non-MD ears (Figures 2C,F). EH imaging objectively distinguished the pathological side both volumetrically (accuracy, 75-94%) and planimetrically (accuracy, 68-81%). Given the need for complicated post-processing of volumetric data, this is encouraging. Although underestimation of EH volume may be of scientific concern, conventional planimetric measurements suffice in everyday practice. This is the first report to describe significant correlations between quantitative volumetric and planimetric analyses.
FIGURE 3 | Correlations between volumetric (µL) and planimetric (mm 2 ) measurements of the vestibule. In the MD vestibule, the endolymph and perilymph values correlated significantly with the planimetric measurements (A,B). The volumetric EH% value (the percentage of endolymphatic hydrops) correlated significantly with the planimetric EH% value (C). The planimetric EH% values were greater (data points below the solid diagonal line) in most subjects (C). Regression modeling revealed that the planimetric measurement overestimated the EH% by 26.26% (volumetric EH% = planimetric EH% * 0.792). Good correlations were also evident for the non-MD vestibule: the volumetric and planimetric endolymph, perilymph, and EH% values were significantly correlated (D-F).

Certain differences between the two methods require consideration. First, the planimetric EH% overestimated the extent of vestibular hydrops by 26.26%. As shown in Figure 1C, planimetric measurements tend to inflate the EH% values of MD, but not non-MD, ears. Possible overestimation of hydrops volume on planimetric analysis was mentioned in a previous report (16). The single MRI slice that is planimetrically analyzed includes the anatomical location where hydrops is most pronounced; thus, this single slice captures an extreme rather than an average. This may be an advantage of planimetric analysis: it becomes easier to distinguish MD and non-MD ears (Figures 2C,F). However, such increased sensitivity may decrease the specificity of MRI-based EH diagnosis. Also, it is important not to misinterpret the planimetric EH% value. For example, a planimetric EH of 89.40% (in subject 2 of Figure 2C) does not mean that 89.40% of the inner ear volume is filled with endolymph (the volumetric value was only 50.17%). However, the regression formula (volumetric EH% = planimetric EH% * 0.792) allows the volumetric EH% of the entire inner ear to be simply estimated from a planimetric EH% value derived from one MRI slice.
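That conversion is a single multiplication; a trivial helper (hypothetical name) makes the caveats explicit.

```python
# The regression relation quoted above as a one-line converter. The slope
# 0.792 comes from this cohort's vestibular data and is not a general
# constant; individual ears can deviate from the regression line, as the
# subject-2 example shows (planimetric 89.40% but measured volumetric 50.17%).
def vestibular_volumetric_eh(planimetric_eh: float) -> float:
    return 0.792 * planimetric_eh

print(vestibular_volumetric_eh(89.40))  # regression estimate ~70.8, not 50.17
```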
Second, the vestibular correlations were better than the cochlear correlations. As shown in Figure 4F, the EH% values of the volumetric and planimetric analyses were not correlated in non-MD cochleae, probably because the cochlear duct is a spiraled narrow tube, whereas the vestibule has a simple 3D structure. Endolymph and perilymph imaging and demarcation are more challenging in the cochlea. Also, the cochlea seems to be less affected by hydrops because the tight surrounding structures (including the bony spiral lamina) restrict cochlear expansion. In particular, we found no correlations between parameters in the non-MD (normal) cochlea (Figure 4F), because those data show little spread. Compared to that of the vestibule, EH imaging data of the cochlea may be difficult to interpret clinically. Similarly, other studies found that cochlear EH was not well-correlated with clinical findings, such as the hearing threshold (20).
Many studies have sought to use EH imaging to diagnose MD (4,10,18,21). Most studies reported more EH in affected ears, but the simplistic nature of the analyses created a great deal of controversy and some tension (6,22). We found that the conventional planimetric method of the Nagoya group was reliable and consistent, being simple while still reflecting the volumetric EH of the entire inner ear. The EH% results did not differ by measurement type (volumetric or planimetric analysis).
MD Diagnosis Issue Using EH Imaging
EH imaging aids objective MD diagnosis (4). However, there are certain underlying issues. First, the resolution of MRI is relatively low (voxel size, 0.47 × 0.47 × 1.00 mm 3 ) (23) compared to the size of the inner ear (180-300 mm 3 ) (14,16,24). Thus, it may be difficult to accurately calculate the hydrops volume, regardless of the post-processing or analysis methods used. The boundary between endolymph and perilymph may be blurred; the curvature of the cochlea or the canals may not be smooth; and the volume of the inner ear may vary depending on the imaging technique used. Second, EH may not be a pathognomonic sign of MD. Some authors have suggested that hydrops may be common to various inner ear disorders, including vestibular neuritis and vestibular migraine (20,25). Hydropic ear disease (HED), which encompasses a wider spectrum of EH, including clinical variants and primary and secondary MD, may be a more appropriate diagnosis (26). Third, the long waiting and imaging times can be problematic. It takes 4 h for the contrast agent to fill the perilymphatic space and the HYDROPS2-Mi2 sequence requires at least 30 min of MRI (4).

FIGURE 4 | Correlations between volumetric (µL) and planimetric (mm 2 ) measurements of the cochlea. In the MD cochlea, the endolymph and perilymph volumes correlated significantly with the planimetric measurements (A,B). The volumetric EH% also correlated significantly with the planimetric EH% (C). However, the correlation was weak for non-MD cochleae; there were no correlations among the endolymph volumes, perilymph volumes, or EH% values of volumetric and planimetric measurements (D-F).
Limitations
Our work had certain limitations. First, the number of subjects was small; more subjects would strengthen our conclusions. Also, we lacked a control group (normal subjects with no inner ear symptoms). However, as our aim was to compare the volumetric and planimetric EH% values within subjects, these limitations do not undermine our conclusions. Not all subjects were diagnosed with definite MD using the criteria of the Barany Society. As in other EH imaging studies (25,27,28), a few subjects fulfilled the diagnostic criteria of probable MD.
CONCLUSION
EH volumetric and planimetric measurements facilitate objective distinction of MD from non-MD ears. Both methods are reliable and consistent; the measurements correlate significantly. Although conventional planimetric analysis considers only one or two MRI slices, it nonetheless reflected the volumetric EH of the entire inner ear in this study. But it should be noted that the conventional planimetric measurement overestimated vestibular hydrops by 26.26%. Also, planimetric and volumetric cochlear data were not correlated in subjects with normal EH% values.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by institutional review board at Seoul National University Hospital (1212-081-451 and 1806-047-950). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
MP, JL, and SO contributed to the design conception of the study, collected, and analyzed the data. T-SN, IS, and M-WS contributed to data analysis and the writing of the manuscript. J-HK and M-WS contributed to the design conception of the study and data analysis. All authors contributed to the article and approved the submitted version.
Paths for Improving Bevacizumab Available in 2018: The ADZT Regimen for Better Glioblastoma Treatment
During glioblastoma treatment, the pharmaceutical monoclonal antibody to vascular endothelial growth factor A, bevacizumab, has improved the quality of life and delayed progression for several months, but has not (or only marginally) prolonged overall survival. In 2017, several dramatic research papers appeared that are crucial to our understanding of glioblastoma vis-a-vis the mode of action of bevacizumab. As a consequence of these papers, a new, potentially more effective treatment protocol can be built around bevacizumab. This is the ADZT regimen, where four old drugs are added to bevacizumab. These four drugs are apremilast, marketed to treat psoriasis, dapsone, marketed to treat Hansen's disease, zonisamide, marketed to treat seizures, and telmisartan, marketed to treat hypertension. The ancillary attributes of each of these drugs have been shown to augment bevacizumab. This paper details the research data supporting this contention. Phase three testing of ADZT addition to bevacizumab is required to establish safety and effectiveness before general use.
Introduction
This paper presents the physiological basis of the ADZT regimen, a new proposed augmentation strategy to improve the effects of bevacizumab (Avastin™) during the treatment of primary glioblastoma. Bevacizumab is a monoclonal pharmaceutical antibody directed against vascular endothelial growth factor A (VEGF). The Food and Drug Administration (FDA) and European Medicines Agency (EMA) initially approved it to treat some forms of macular degeneration, and it is now also approved for, and commonly used during, glioblastoma treatment after resection, radiation and temozolomide. Initial clinical studies of glioblastoma showed that bevacizumab delayed progression from ~7 to ~10 months, but did not impact overall survival (~16 months) [1]. Others found similar results [2]. Newer bevacizumab regimens with 100 mg/m 2 /day cycles of temozolomide and newer studies of lower bevacizumab doses have indicated some survival benefits [3]. Glioblastomas have been an unusually treatment-refractory cancer, justifying our exploration of unproven but low-risk regimens like ADZT.
Crucial papers appeared in 2017 on the physiology of bevacizumab, each giving new data, and each independently converging on potential improvements to bevacizumab treatment. By using four older drugs that are available now (mid-2018), we might be able to exploit these new findings.
Apremilast
Introduced to clinical practice in 2014, apremilast is a 461 Da, selective phosphodiesterase (PDE) 4 inhibitor. There are over a dozen currently recognized isoforms of PDE. The problem with some past studies of pan-PDE inhibitors like pentoxifylline was that different PDE isoforms act on substrates whose downstream intracellular effects can oppose one another. Since PDE4's predominant intracellular role is to catalyze the hydrolysis of cyclic adenosine monophosphate (cAMP) to AMP, apremilast results in increased intracellular cAMP. cAMP itself is synthesized from ATP, mediated by adenylate cyclase. Multiple pro-inflammatory cytokines are partially inversely controlled by intracellular cAMP levels [7,8]. As intracellular cAMP decreases, synthesis and release of tumor necrosis factor alpha (TNF-α), interleukin-2 (IL-2), IL-8, and interferon-gamma tend to increase [7,8].
In accordance with these theoretical considerations, IL-6, IL-8, monocyte chemoattractant protein 1 (MCP-1), and TNF-α were reduced in people being treated with 20 mg of apremilast twice daily for psoriasis or psoriatic arthritis [8,9]. Since these cytokines have also been shown to participate in glioblastoma growth facilitation, we might expect benefits from apremilast on this basis alone.
Apremilast is generally well tolerated, with mild nausea, diarrhea and headache being the most common side effects. Discontinuation due to side effects was 5% with placebo and 7% with apremilast [12]. Another PDE4 specific inhibitor, rolipram, was investigated in the 1980s as an antidepressant; however, development stopped due to excessive nausea [18]. Rolipram inhibited the growth of A172 and U87MG glioblastoma cell lines through a PDE4 mediated pathway [19].
In March 2017, Ramezani et al. published a crucial paper for our next step in improving glioblastoma treatment by improving the effectiveness of bevacizumab [20]. They showed that adding a PDE4 inhibitor, rolipram, to bevacizumab enhanced in vitro cytotoxicity and reduced free VEGF in the culture medium, compared to bevacizumab alone. This finding makes sense in the larger general context of pro-inflammatory cytokine release.
Understanding that VEGF levels might also be inversely related to intracellular cAMP opens several exciting augmentation pathways by which we might make bevacizumab more effective in treating glioblastoma. Alternatively, the diminished free VEGF after rolipram could be a secondary effect of the previously established reduction of TNF-alpha, IL-8 and other cytokines by PDE4 inhibition.
Therefore, PDE4 inhibitors have evidence of (a) augmentation of bevacizumab effects, and independent of that, (b) anti-glioma growth effects and (c) lower synthesis of inflammation-related cytokines secondary to increased intracellular cAMP.
Apremilast would be a low-risk addition to bevacizumab.
Bevacizumab
Introduced clinically in 2004, bevacizumab is commonly called an anti-angiogenic agent, but it should be more accurately termed what it simply and literally is: a monoclonal humanized antibody to soluble VEGF. Beyond direct vessel effects, bevacizumab strongly suppressed glioblastoma cell expression of 130 kDa platelet endothelial cell adhesion molecules and slightly reduced proliferation, but upregulated matrix metalloproteinase-2 production [20]. Further, bevacizumab is cytotoxic (in vitro at least) to VEGF-synthesizing glioblastoma cells by binding to the outer cell membrane-bound VEGF [21].
When a glioblastoma progresses while on bevacizumab, survival is under half a year [22][23][24]. Performance status and quality of life usually improve with bevacizumab; however, overall survival does not. ADZT aims to address this discrepancy.
Distorted, flawed vessels are common in glioblastomas. The pruning of these pathologic vessels occurs during bevacizumab treatment, with a consequential reduction of tumor-related brain tissue edema [24]. However, an interesting paradox occurs here: vessel density and vessel morphological and functional abnormality decrease under bevacizumab treatment, yet hypoxia seems to increase [25].
Dapsone
Introduced in the mid-1940s, dapsone is a 248 Da sulfone antibiotic still in wide use. In addition to antibacterial activity in treating Hansen's disease and pulmonary tuberculosis, dapsone has anti-protozoal effects and is currently used in the treatment of Plasmodia infections. Unrelated to antibiotic activity, dapsone has found some utility in treating neutrophilic dermatoses like bullous pemphigoid, dermatitis herpetiformis and others, including the neutrophilic rash caused by epidermal growth factor receptor inhibiting drugs [26]. In a series of five papers, my colleagues and I have amply documented the rationale for using dapsone to deprive the tumors of neutrophil-delivered VEGF during the treatment of glioblastoma [27][28][29][30][31].
As predicted in 2015 and in 2016 [27,28], dapsone was shown in 2017 to ameliorate the rash mediated by anti-epidermal growth factor receptor agents [32,33], a rash driven by VEGF-containing neutrophils drawn to rash areas by IL-8 during erlotinib or cetuximab treatment, but countered by dapsone. We therefore expect dapsone to augment bevacizumab by reducing neutrophil-borne VEGF in glioblastomas.
Dapsone has some in vitro anti-glioma activity on its own [34].
Zonisamide
Introduced in 1993, zonisamide is a 212 Da anti-seizure drug with carbonic anhydrase (CA) inhibitory activity that also blocks voltage-sensitive Na + channels and T-type Ca ++ channels [35,36]. There are a dozen CA isoforms. In vitro, the carbonic anhydrase IX (CA IX) Ki of zonisamide is 5.1 nM [37]. Zonisamide, unique among anticonvulsants, also inhibits monoamine oxidase [38].
Carbonic anhydrase catalyzes the reversible hydration of carbon dioxide to bicarbonate and a proton (H 2 O + CO 2 ↔ HCO 3 − + H + ). Of the many isoforms of CA active in cancer physiology, CA IX is particularly prominent, including in glioblastomas [39][40][41][42]. CA IX resides on the exterior of the outer cell membrane. The resulting bicarbonate ion is imported by various pumps such as the Na + /HCO 3 − cotransporter, raising intracellular pH but lowering extracellular pH, as the proton remains extracellular. This is one of the primary mechanisms generating the abnormal extracellular acidic milieu of cancers, and specifically of glioblastomas. Concordant with the above mechanism of cancer-related extracellular acidification, topiramate, an anti-seizure and CA IX-inhibiting drug similar to zonisamide, increased intracellular glioblastoma pH [43].
An immunohistochemistry study of grade II, III and IV glioma biopsy tissue by Yoo et al. found strong CA IX expression in 21%, 33% and 79% of cases, respectively [41]. The degree of CA IX expression inversely correlated with survival in this and in other similar studies [41,44]. Higher CA IX expression facilitates more vigorous growth of glioblastoma cell lines in vitro and in a xenograft glioblastoma model [45]. In this xenograft model, inhibiting CA IX converted non-response to bevacizumab into growth inhibition by bevacizumab [45].
Remarkably, CA IX was absent in normal brain but expressed in 100% of human glioblastomas [39]. In that study, Proescholdt et al. found that patients with high CA IX expression had a median survival of 15 months, compared to 34 months in patients with low tumor CA IX expression [39]. In a similar study, Cetin et al. found a median survival of 18 months in low CA IX-expressing glioblastomas compared to 9 months with high expression [46]. These findings alone would seem to justify a clinical trial of already-marketed and well-tested CA IX inhibitors like acetazolamide, topiramate, and zonisamide.
In light of that inverse correlation, converting a glioblastoma with high CA IX expression into a tumor with poor CA IX function by means of zonisamide may well prolong survival.
Both an experimental CA IX inhibitor and temozolomide individually inhibited the growth of human glioblastomas that were xenografted in nude mice. The effect was synergistic when used together [47]. The FDA approved pan-CA inhibitor acetazolamide augmented temozolomide cytotoxicity to glioma cells in vitro [48].
Acetazolamide reduces the production of cerebrospinal fluid (CSF) and is used clinically for this purpose [49], thus forming another potential benefit for the use of CA IX inhibitors like zonisamide during glioblastoma treatment, in addition to the potential augmentation of bevacizumab.
Dexamethasone use tends to worsen prognosis in glioblastomas [50] but must be used to decrease elevated CSF pressure during the course of glioblastomas. Since we expect dapsone and zonisamide will lower the need for steroids, we might see an overall survival increase on that account as well.
Acetazolamide is a sulfonamide pan-CA inhibitor that has had continuous clinical use since the 1950s with demonstrable preclinical anti-glioma activity [51,52]. However, to date, there have been no clinical trials in human glioblastomas, other than to treat plateau waves [53], as far as I was able to determine. Acetazolamide is currently used clinically to treat mountain sickness, elevated intraocular pressure and pseudotumor cerebri syndrome [54,55]. Acetazolamide could be substituted for zonisamide in an ADZT-type regimen.
The use of CA IX inhibition with zonisamide (or acetazolamide) would be a realization of Koltai's "repurposed drug combinations targeting this vulnerable side (i.e., decreased extracellular pH and need to export increased intracellular protons) of cancer development" [56].
Telmisartan
Telmisartan is an angiotensin receptor blocking drug (ARB) with several unique features that recommend its use in glioblastomas, particularly in combination with bevacizumab. ARBs, like angiotensin converting enzyme (ACE) inhibitors, are marketed for a variety of indications, prominently hypertension. Telmisartan is uniquely lipophilic, has a tighter affinity for the angiotensin 2 type 1 receptor, and happens to act as a partial agonist of peroxisome proliferator-activated receptor-γ (PPAR-γ) as well [62,63]. All of these attributes would be useful during the treatment of glioblastomas, particularly in co-administration with bevacizumab.
In 2017, Levin et al. suggested adding an ARB or ACE inhibitor to bevacizumab based on their retrospective glioblastoma study showing an overall survival of ~25 months in those receiving low-dose bevacizumab plus an ARB or ACE inhibitor, compared to ~14 months for those receiving only low-dose bevacizumab [64]. They notably found that those receiving <3.6 mg/kg per week of bevacizumab did better than those getting ≥3.6 mg/kg per week. This inverse dose-response relationship is currently (as of 2018) unexplained and is a huge hint in our efforts to understand how bevacizumab works.
Additionally in 2017, Menter et al. found similar but slightly different results in non-squamous, non-small cell lung cancer treated with carboplatin and paclitaxel with or without bevacizumab.
Bevacizumab prolonged survival, as did an ACE inhibitor or ARB, but the survival increase from adding an ACE inhibitor/ARB to bevacizumab did not reach statistical significance for an additive effect [65].
A potential added benefit of adding telmisartan is that it is also a PPAR-γ agonist and PPAR-γ agonism has significant glioblastoma growth inhibiting effects [66].
In metastatic colon cancer, those receiving bevacizumab with an ARB had longer progression-free survival (eight versus six months) and longer overall survival (26 versus 16 months) than those receiving bevacizumab only [67].
Limitations and Conclusions
This paper outlined past human, in vitro, and animal studies showing evidence of the physiologic pathways by which the glioblastoma inhibiting effects of bevacizumab might be enhanced. The four drugs of ADZT engage these enhancing pathways. The ADZT drugs-apremilast, dapsone, zonisamide, and telmisartan-are low risk drugs when used individually, are inexpensive, and are generally well known and available to physicians worldwide.
Below is a list of the potential or expected benefits of the ADZT regimen during the course of glioblastomas, as documented in the preceding sections:
• Apremilast: increased intracellular cAMP with reduced synthesis of the pro-inflammatory cytokines (TNF-α, IL-6, IL-8, MCP-1) that facilitate glioblastoma growth, plus enhancement of bevacizumab's anti-VEGF cytotoxicity, as demonstrated for the related PDE4 inhibitor rolipram.
• Dapsone: deprivation of the tumor of neutrophil-delivered VEGF, complementing bevacizumab's neutralization of soluble VEGF.
• Zonisamide: CA IX inhibition, countering the extracellular acidification that supports glioblastoma growth, with a potential reduction of CSF production and thereby of steroid requirements.
• Telmisartan: ARB-mediated survival benefit when combined with bevacizumab, plus PPAR-γ agonism with documented glioblastoma growth-inhibiting effects.
There are several factors that may limit ADZT effectiveness; hence the need for empirical clinical evidence of these benefits, established in a phase three trial. A main potential limitation of the ADZT regimen is the uncertain degree to which apremilast penetrates the blood-brain barrier. Dapsone, zonisamide and telmisartan are known to penetrate well.
The cancer research literature is replete with data on inflammation as both cancer causing and cancer sustaining, as well as data on inflammation as a cancer cell destruction element and a defense against cancer. A one-year phase three trial of apremilast in plaque psoriasis found no evidence of immune system dysfunction [68]. Apremilast treatment of psoriatic arthritis reduced the plasma inflammation markers IL-8, TNF-α, IL-6, macrophage inflammatory protein 1β (MIP-1β), monocyte chemoattractant protein 1 (MCP-1), and ferritin [69]. Notably, all of these markers are precisely the inflammatory markers documented as being among the core pathophysiologic drivers of glioblastoma growth [70]. This, plus the demonstrated enhancement of bevacizumab's anti-VEGF function and the synergistic decrease in glioblastoma cell viability by adding rolipram to bevacizumab [20], and the inherent documented anti-glioblastoma effects of rolipram alone [71], warrant the risks of a phase three trial of adding apremilast to bevacizumab during primary glioblastoma treatment as part of the ADZT regimen. Unfortunately, there is no adequate murine model to test ADZT.
The ADZT regimen follows other efforts to improve the anti-glioblastoma effects of bevacizumab. Adding the chemokine receptor CXCR4 inhibitor plerixafor, for example, did not improve survival over bevacizumab alone but did provide some clues as to the resistance or circumvention pathways around the anti-VEGF effects of bevacizumab: plerixafor plus bevacizumab increased CXCL12 (SDF-1) [72]. As seen in many cancer chemotherapies, the exposure of glioblastomas to bevacizumab engages tumor-growth-enhancing compensatory pathways like CXCL12 in addition to the intended growth inhibition. The ADZT regimen was designed to enhance bevacizumab-mediated growth inhibition by blocking several of these circumvention pathways.
Funding: This work was carried out under the aegis of the IIAIGC Study Center, Burlington, VT, USA. There was no further specific funding. This article does not contain any studies with human participants or animals performed by any of the authors.
Conflicts of Interest:
The author declares no conflict of interest.
Neutrality May Matter: Sentiment Analysis in Reviews of Airbnb, Booking, and Couchsurfing in Brazil and USA
Information and communications technologies have enabled the rise of the phenomenon named sharing economy, which represents activities between people, coordinated by online platforms, to obtain, provide, or share access to goods and services. In hosting services of the sharing economy, it is common to have personal contact between the host and guest, and this may affect users' decision to write negative reviews, as negative reviews can damage the services offered. To evaluate this issue, we collected reviews from two sharing economy platforms, Airbnb and Couchsurfing, and from one platform that works mostly with hotels (traditional economy), Booking.com, for some cities in Brazil and the USA. Through a sentiment analysis, we found that reviews in the sharing economy tend to be considerably more positive than those in the traditional economy. This can represent a problem in those systems, as an experiment with volunteers performed in this study suggests. In addition, we discuss how to exploit the results obtained to help improve users' decision making.
These are some of the possible reasons inducing positive evaluations of hosting services in the sharing economy, but, indeed, others could also be playing a role.
Based on those points, an important issue to be investigated is: do hosting reviews tend to be less negative in the sharing economy? Understanding this issue is crucial, as users' opinions are typically taken into account in decision-making [23,26,37,15]. A correct understanding of what reviews really mean can help in the construction of recommendation systems and rankings of offered services, which can help users make better choices.
Our contributions to evaluate this issue can be summarized as follows:
- We collected reviews from two sharing economy platforms, Airbnb and Couchsurfing, and from a representative of the traditional economy, Booking, a popular Web service for finding hotel accommodations. We consider accommodations offered in three Brazilian cities and three cities in the United States;
- In possession of these reviews, we perform sentiment analysis on the shared texts. We find that reviews in the sharing economy tend to be more positive than those in the traditional economy. Besides, we present some key features of these comments, which reinforce the insights observed;
- We performed a study with volunteers to evaluate how the observed phenomenon affects the user decision-making process. Our results suggest that the classification of establishments at Airbnb made by users could be affected by the lack of negative evaluations. We also discuss how to exploit the results obtained to assist decision-making when choosing accommodation in the sharing economy.
The remainder of the paper is organized as follows. Section 2 presents related works. Section 3 describes the platforms evaluated, and the databases studied. Section 4 discusses the concept of sentiment analysis and how we accomplish this task. Section 5 discusses the results obtained regarding the sentiment polarity in the analyzed systems, as well as some of the main characteristics related to the content and other factors. Section 6 presents the results obtained with an experiment performed with volunteers to validate our results. Section 7 proposes a new way of evaluating hosting on the sharing economy. Finally, Section 8 concludes the study and presents future work.
Related Work
Recent research efforts have attempted to characterize and understand to what extent human language is biased towards positivity or negativity [16]. While there is controversy on this topic [22,17], there are spaces on the Web full of negativity. A key example is the comment section of online newspapers, which is likely to attract negative comments independently of the content of the news [38].
An online space in which opinions have become critical for the development of valuable tools is product reviews. Indeed, Pang et al. [37] observe that ratings and opinions of other users in products are increasingly important in consumer choice. Not surprisingly, many efforts were dedicated to proposing a review summary and comparisons out of a large dataset of reviews [25,30,29].
More recently, a new wave of efforts has attempted to extract opinions out of social media data, using mostly Twitter as a data source. Applications vary widely, from inferring political polls [33], and inferring stock marketing fluctuations based on Twitter reactions [7], to the extraction of urban perceptions [41]. In common, most of these efforts rely on methodologies that exploit one out of the many existing methods for sentiment analysis to infer opinions [39,4,5].
Despite the undeniable advances that existing efforts have made in this space, all those efforts are devoted to exploring a large set of opinions that users express freely towards products, politicians, or companies in the case of stock marketing. Our work explores opinions in a novel environment, in which the target of the review is a service or a product that involves some level of personal relationship, such as renting someone's place for a short period.
There are only a few efforts in the literature related to these kinds of environments. Particularly, Fradkin et al. [19] studied reviews and ratings on Airbnb websites and found that, on average, 72% of users write at least one review of the place they stayed. The authors also found that 94% of star ratings ranged from 4 to 5 (the scale goes from 1 to 5), which is important to show that, in fact, considering only the stars of one location for decision making may not be the best strategy. This is consistent with the results of Zervas et al. [49], reporting the average rating on Airbnb to be 4.7. To get a better understanding of possible reasons for this phenomenon, Bulchand-Gidumal and Melián-González [11] performed a study using surveys and interviews to investigate whether guests faithfully convey their experiences on Airbnb. They found that a significant part of guests did not tell the whole truth when they evaluated, or avoided reviewing when the experience was not positive. Some of the most important reasons for these behaviors include avoiding harming the host. Researchers have also studied a similar problem in the context of restaurants [21].
Fu et al. [20] analyzed reviews on real estate, aiming to develop methods to classify properties according to market value. This could provide decision support for real estate buyers and, thus, can play a strategic role in the real estate market. Oliveira and Bermejo [34] applied sentiment analysis to social media data in the context of social and political management. This is important because the traditional media landscape has changed considerably; in the past, traditional media was predominant (e.g., newspapers, magazines, and TV), and, recently, it has been complemented or replaced by online interactive social communication [43]. Leung [27] applies sentiment analysis to product reviews, showing that the measured sentiment can be used to support a product or give directions for improvements.
More related to the context of sentiment analysis in online hosting services, Duan et al. [18] decompose user reviews into five dimensions, aiming to measure the service quality of a hotel. The authors used those dimensions in econometric models to study their effect in influencing users' behavior regarding content generation and evaluation. Bridges and Vásquez [10] inspected the language used in 400 Airbnb reviews, finding that 93% of them were categorically positive. Tian et al. [47] analyzed reviews of three- to five-star hotels in China. They found that the star rating (given in the review) correlates well with the sentiment scores for both the title and the full content of the review. Mankad et al. [31] studied reviews of hotels located in Russia and found, among other things, that negative reviews (negative sentiment) have a more significant downward impact than positive reviews.
To the best of our knowledge, our work differs from all previous related work because our focus is to study whether reviews on online hosting platforms on the sharing economy tend to be less negative than their competitors in the traditional economy. Also, we examined some of the key features of the comments we analyzed, performed user experiments to reinforce our findings, and discussed the implications for designing new features for sharing economy platforms.
Data and Platforms Studied
This section is divided into two parts: Section 3.1 presents the platforms studied, whereas Section 3.2 describes the data collected from them.
Platforms Considered
We considered three online hosting platforms in this study. Booking is a traditional representative where hotels are the most common accommodations. On this platform, monetary payment is always demanded, and negotiation of the services is typically done by employees, that is, without personal contact with the owner of the business. Besides, we consider a platform that does not charge for the accommodations: CouchSurfing. On this platform, the negotiation is usually done by the owner of the lodging, and personal contact between guest and host is high. Finally, we consider an intermediary representative: Airbnb. On Airbnb, payment for the lodging is required, but personal contact between host and guest tends to be high, and the accommodation is often shared with the host.
Airbnb
Airbnb, founded in 2008, is a platform for private accommodation rentals around the world. Present in 190 countries and more than 34 thousand cities, it currently has more than 2 million accommodations [2]. Airbnb economically empowers millions of people worldwide to open and capitalize on their spaces, becoming entrepreneurs of the hosting area. This platform has helped many travelers to save money, as it might be cheaper than hotels. Also, differentiated and more personal experiences are other factors that might help explain the success of this platform.
CouchSurfing
CouchSurfing (CS), created in 2004, is an online hosting platform, similar to Airbnb. It has served more than 11 million travelers in over 150,000 cities around the world [13]. The differential of this service is that the accommodations are free and, typically, the personal contact between the host and the guest is higher than on Airbnb and Booking. While on Airbnb users can share accommodation with hosts, in CS this happens practically every time.
Numerous features are available, such as detailed personal profiles, an identity verification system, a personal certification system, and a personal referral system, to increase security and trust among members. Perhaps because it does not involve monetary values, the user profile has great value on this platform. Candidates with a poorly rated or dubious profile may find it more difficult to find accommodation than users with a flawless profile.
Booking
Booking, founded in 1996, is now one of the largest online hosting companies in the world. It has more than 15,000 employees in 198 offices in 70 countries around the world. This hosting service aims to connect travelers to various accommodation options, from small family-run inns to large 5-star hotels. Booking also offers options for private homes, similar to Airbnb. Despite the growth of this type of alternative, the majority of available accommodations, about 68.5%, are hotel rooms. Of the remainder, 8.5% are temporary rental options ("Vacation Rental Rooms", where several Airbnb style options are available), and 23% are unique hosting options ("Unique Categories of Places to Stay"), where you can find unusual accommodations [8].
This company represents one of the most popular hosting sites in the traditional economy. Every day, more than 1,550,000 accommodations are booked through its platform [9].
Dataset Description
We collected reviews from the systems considered (Airbnb, Couchsurfing, and Booking) for the cities of Curitiba, Rio de Janeiro, and São Paulo, Brazil, and the cities of Boston, Las Vegas and New York, in the United States. We chose these cities because they are popular with tourists and offer different types of attractions, possibly attracting different tourist profiles. For example, Curitiba is the third most visited city by foreigners for business tourism in Brazil [48]. For each service, we first look for accommodations in the selected cities. We then collect all reviews made by users on all available hosting options. With this, our dataset is composed of an establishment identifier, a user identifier for the reviewer, and the review text itself. If the platform allows feedback responses, we also collect these responses; however, they were discarded in the analyses performed. Figure 1 illustrates a review made on Booking. In this figure, the host responded to a review. Also, in this example, the word "ameiii" ("loooved it" in English) does not formally exist in the Portuguese language; however, people are free to use this type of construction, and it is not uncommon in texts shared on social media. The sentiment analysis tool chosen for this study understands and handles such cases. In the example, the word "amei" was considered boosted, and this is reflected in the final sentiment evaluation of the review.
Between October 2016 and March 2017, we collected reviews from the platforms studied. Reviews can appear in several languages; however, we concentrate on comments written in English and Portuguese. After this filtering, we consider 648,030 reviews from Booking, 115,760 from Airbnb, and 8,589 from CouchSurfing. Table 1 summarizes the data collected for each platform.
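The paper does not state how the language filtering was implemented; as a minimal sketch, one way to keep only English and Portuguese comments is an off-the-shelf detector such as the langdetect library (the function and its inputs below are illustrative assumptions):

```python
# Hypothetical sketch of the language filtering step using langdetect.
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make detection deterministic across runs

def filter_by_language(reviews, keep=("en", "pt")):
    """Return only the reviews whose detected language is in `keep`."""
    kept = []
    for text in reviews:
        try:
            if detect(text) in keep:
                kept.append(text)
        except Exception:  # empty or undecidable text
            continue
    return kept
```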
Sentiment Analysis
Sentiment analysis aims to extract opinions, sentiment, and emotions in different communication channels, mainly in the textual format [32,4], but also in other formats, such as in images [12,35]. Particularly, the identification of sentiment in texts has become an important tool for the analysis of social media data, enabling several new services [28,4]. For example, companies can get users' opinions on the acceptance of a new product.

Table 2. Examples of comments and the associated strength of the sentiment polarity. Mentioned names have been replaced by X, Y and Z to preserve users' privacy.

Sentiment polarity | Review
-4 | Staff was extremely rude! ridiculously over priced, harsh and unwilling to assist, overall just not good.
-3 | Very dirty and run down, tv remote coated in thick dust and the staff were so rude and unwelcoming.
-2 | I am disappointed. They've checked in us into a dirty room, even the towels were not fresh and clean. So be careful and try to aware of staying there.
-1 | Run down hotel in desperate need of renewing
0 | As shown in the pictures. Rooms could be bigger.
1 | Location was good. And the facilities were on point.
2 | Stayed with X the first couple of days when I came to Rio. She is a great hostess, relaxed and very helpful. Recommended.
3 | My second time in this apartment, now 5 days, again one of the best places I've rented, really great host Y and Z !!!
4 | Absolutely loved staying in Z's place in Copacabana!! Needless to say, the views are amazing. Great location - walking distance to many bars and restaurants. It's also close to the train. I would highly recommend it!
Current methods for detecting sentiment in sentences can be divided into two groups: those based on machine learning and those based on lexical methods. Methods based on machine learning generally rely on a labeled database to train classifiers [36], which can be considered a disadvantage due to the cost of obtaining labeled data. On the other hand, lexical methods use lists or dictionaries of words associated with specific sentiments. The efficiency of lexical methods is directly linked to the vocabulary used for the various contexts that exist. Hybrid approaches are also possible.
Several tools offer support for sentiment analysis, each one with particular characteristics. Abbasi et al. [1] and Ribeiro et al. [39,5] benchmarked several of these tools and demonstrated that the tool SentiStrength [46] achieves high precision when analyzing texts from social media, including reviews and other types of comments.
SentiStrength uses a lexical dictionary labeled by humans that has been enhanced by machine learning. SentiStrength classifies the sentiment of the content analyzed as positive or negative on a scale of −4 (strongly negative) to +4 (strongly positive), with 0 indicating neutral sentiment. This tool was chosen in this study because it presents the best results in sentiment analysis for texts representing reviews in Web systems [39,1]. Table 2 illustrates examples of comments from our dataset and the polarity strength of the associated sentiment computed using the SentiStrength tool.
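SentiStrength itself is a Java tool; purely as a rough illustration of its −4..+4 scale and of the boosting of elongated words like "ameiii", the toy scorer below uses a tiny made-up lexicon and a simplified aggregation rule (both are assumptions, not SentiStrength's actual algorithm):

```python
# Toy lexicon-based scorer illustrating a SentiStrength-style -4..+4 scale.
import re

LEXICON = {"rude": -3, "dirty": -2, "disappointed": -2,
           "good": 1, "great": 2, "loved": 3, "amazing": 3}

def polarity(text):
    pos, neg = 1, -1  # SentiStrength-style running positive/negative maxima
    for raw in re.findall(r"[a-z']+", text.lower()):
        word = re.sub(r"(.)\1{2,}", r"\1", raw)   # "loooved" -> "loved"
        score = LEXICON.get(word, 0)
        if score and word != raw:
            score += 1 if score > 0 else -1       # boost elongated words
        pos, neg = max(pos, score), min(neg, score)
    # Collapse the (pos, neg) pair to one -4..+4 value; this aggregation
    # is an assumption about how the paper's figures report a single score.
    return max(-4, min(4, pos + neg))

print(polarity("She is a great hostess and I loooved it!"))  # 3
```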
Sentiments in Reviews
In this section, we present the analysis related to sentiment observed in the collected reviews. Figure 2 shows the distribution of sentiments for all platforms, Airbnb, Booking, and Couchsurfing (CS). The data considered are aggregated; that is, they contain all cities without separation. The X axis represents the sentiment polarity for a particular review, and the Y axis represents the percentage of reviews assigned to each polarity.
Aggregate Assessment
As we can see, the platforms from the sharing economy present more comments with positive polarity than the platform from the traditional economy. The result presented in Figure 2 suggests that the type of economy can affect consumer sentiment in a review. Fradkin et al. [19] and Bulchand-Gidumal and Melián-González [11] present some reasons that may explain the greater positivism or the lack of negativity in a review made on Airbnb. Among them: personal interaction between host and guest often tends to occur; fear of receiving negative feedback from the host concerning the comment made; and fear of harming the host with a poor rating that may discourage other potential guests.
Since Couchsurfing is a business with Airbnb-like relationships, these same motives could also be used to explain the results found. Perhaps because it is free, some of these motives may be even stronger, helping to explain the lower percentage of negative comments compared to Airbnb, mainly the fear of receiving negative feedback from the host. In Couchsurfing, the user profile is very important for getting accommodation. Users may be motivated not to write unfavorable evaluations of their hosts to avoid receiving a poor rating from them. Also, in [19], the authors note that on Airbnb, when consumers are not satisfied with their experience, they tend not to write a comment at all rather than evaluate the place negatively. This could also be explained by the above points.
At Booking, as the hosts are companies, typically hotels and often large properties, this sense of closeness between host and guest may not occur at the same frequency as in the other services analyzed. This might help explain the greater negativity in the comments when compared to Airbnb and Couchsurfing.
Evaluation by Cities
To check whether the city or country of the venue evaluated has any influence, we analyze each city separately. Figure 3 shows the distribution of sentiments for all platforms, considering all studied cities separately. The X axis represents the sentiment polarity for a particular review, and the Y axis represents the percentage of reviews assigned to each polarity.
With the help of this figure, it is possible to note that the country, or even the city alone, does not appear to be a relevant influence factor concerning the polarity of the reviews on the platforms. That is, a higher positivity regarding sharing economy accommodations is also seen when we separate the results by cities of the two countries analyzed. It is also possible to observe that the tendency for this disaggregated analysis is very similar to that found for the aggregate analysis, including the lower percentage of negative reviews seen in CouchSurfing compared to Airbnb.
Our results indicate that the presence of negative reviews is much smaller for online hosting services of the sharing economy (≈ 3% of all comments: 4% for Airbnb, and 2% for Couchsurfing) compared to what is observed for the traditional economy (≈ 17% of all comments). This may hinder users' perception of the quality of a particular location. As negative evaluations tend to be scarcer in reviews in the sharing economy, neutral opinions can become more important. It is as if the polarity scale began near neutral; that is, neutral reviews represent the most negative opinions expressed by users. This suggests that neutral evaluations should be taken into account at the time of choosing accommodation. These evaluations can perhaps make a difference in the classification and decision making when selecting a place to stay.
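As a sketch of how the per-platform polarity distributions behind Figures 2 and 3 can be computed, assuming the data is available as (platform, polarity) pairs (the data layout is an assumption):

```python
# Per-platform distribution of review polarity, in percent.
from collections import Counter

def polarity_distribution(reviews):
    """reviews: iterable of (platform, polarity) pairs."""
    counts = {}
    for platform, pol in reviews:
        counts.setdefault(platform, Counter())[pol] += 1
    dist = {}
    for platform, c in counts.items():
        total = sum(c.values())
        dist[platform] = {pol: 100.0 * n / total for pol, n in sorted(c.items())}
    return dist

dist = polarity_distribution([("airbnb", 3), ("airbnb", 2), ("booking", -2)])
print(dist["booking"])  # {-2: 100.0}
```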
Sentiments by Number of Comments
One question that may arise at this point is whether the popularity of an establishment can influence the sentiment expressed by users on the analyzed platforms. As a proxy for popularity, we consider the number of comments that a given establishment received. Figure 4 shows the relationship between the number of comments of a venue and the sentiment polarity observed in its reviews. The accommodations were grouped according to the number of reviews: up to 9 reviews (unpopular), from 10 to 99 reviews (reasonably popular), and over 100 reviews (very popular). Please note that no CouchSurfing accommodation received more than 100 comments.
The result presented in Figure 4 suggests that the popularity of an establishment does not have a significant influence on user opinion, since the results for the different venue groups did not vary considerably. This indicates that the results observed in Section 5.2 are not affected by venue popularity.
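A minimal sketch of this popularity bucketing, assuming a pandas DataFrame with one row per review and columns venue_id and polarity (the column names are assumptions):

```python
# Bucket venues by review count into <10, 10-99, and >=100 reviews,
# then tabulate the polarity distribution within each bucket.
import pandas as pd

def polarity_by_popularity(df):
    n_reviews = df.groupby("venue_id")["polarity"].transform("size")
    popularity = pd.cut(
        n_reviews, bins=[0, 9, 99, float("inf")],
        labels=["unpopular", "reasonably popular", "very popular"])
    # % of reviews at each polarity value, per popularity class
    return pd.crosstab(popularity, df["polarity"], normalize="index") * 100
```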
Sentiment in Reviews on Homestays at Booking
In this section, we study whether the phenomenon observed (higher positivity in reviews of platforms of the sharing economy) is dependent on the platform. For that, we collected all the accommodations available in the category "Home Stays" announced by Booking for all studied cities. This new collection was necessary because our previous dataset does not have information on the type of accommodation. Table 3 summarizes the collected dataset. As expected, homestays at Booking are less available than hotel rooms. As we can see in the table, some accommodations did not have any review; several of them were recently added to the platform. We only considered accommodations with at least one review. For Las Vegas, only one accommodation met this criterion; however, it has a reasonable number of reviews: 33. In our dataset, São Paulo is the most popular city for this type of accommodation, having 27 accommodations with reviews and, in total, 670 reviews. All data were collected between 5 and 10 January 2019. We applied the same methodology presented earlier to evaluate the sentiment of the reviews in this new dataset. Figure 5 shows the distribution of sentiment polarity for all cities. The X axis represents the sentiment polarity for a particular review, and the Y axis represents the number of reviews assigned to each polarity.
As we can observe with the help of this figure, the results are similar to those presented in Figure 3, i.e., very few negative reviews were found, and most of them are neutral or positive. We found that negative comments represent 4.9% of the total, similar to the proportion observed for Airbnb and Couchsurfing. This result also reinforces the hypothesis that the proximity between host and guest favors this sort of phenomenon.
Content of Negative Comments
In this section, we investigate the topics most addressed by users in negative comments. We focus on negative comments because we hypothesize that, in addition to being rarer, they tend not to be very informative on hosting platforms of the sharing economy. To do this analysis, we use Latent Dirichlet Allocation (LDA), a popular technique for modeling topics in textual content [6].

Table 4. Ten latent topics from negative comments written in English shared on Booking and Airbnb (words are stemmed).

Topics for Booking:
Topic 1: hotel, stay, park, never, will, locat, lot, better, money, place
Topic 2: room, night, door, nois, sleep, noisi, air, window, work, cold
Topic 3: check, get, time, elev, wait, hour, peopl, long, took, one
Topic 4: staff, rude, desk, front, servic, one, help, guest, recept, custom
Topic 5: day, call, told, back, ask, said, got, card, arriv, first
Topic 6: charg, book, price, wifi, pay, use, hotel, fee, extra, paid
Topic 7: breakfast, poor, terribl, expens, coffe, pool, servic, food, area, bad
Topic 8: like, didnt, just, realli, even, dont, look, can, one, everyth
Topic 9: room, bed, small, smell, smoke, uncomfort, view, chang, bad, move
Topic 10: bathroom, dirti, clean, shower, old, need, water, floor, carpet

Topical modeling is a method for unsupervised classification of similar documents. In our context, documents are reviews, containing different words. Particularly, LDA treats each document as a mixture of topics. This allows documents to "overlap" each other in terms of content, rather than being separated into distinct groups, in order to mirror the typical use of natural language. For example, in a two-topic model, we could say that Review 1 is 85% topic X and 15% topic Y, while Review 2 is 40% topic X and 60% topic Y. In addition, LDA treats each topic as a mixture of words. Thus, consider a two-topic model for accommodation containing, for example, a topic for "room" and another for "breakfast". The most common words in the topic "room" may be "bed", "sheet", and "noise", whereas the topic "breakfast" can be better represented by words like "food", "coffee" and "juice". It is important to note that words can be shared between topics; a word like "environment" may be expressive in both topics [6,42].
Before identifying topics, we pre-process the reviews. We removed URLs, special characters, unnecessary blank spaces, punctuation, numbers, and stopwords, and performed stemming. After these steps, we identified 10 topics for negative reviews (polarity −4 to −1) on Airbnb and Booking. Table 4 presents the ten words that best describe each of the topics. We note that all topics for Booking are negative. For example, Topic 2 is related to complaints regarding the room, and Topic 4 is more related to the staff. However, when analyzing the topics for Airbnb, we can identify several topics that suggest positive sentiments, all marked in bold and with "**" in the table. For example, Topic 4 for Airbnb appears to be related to the accommodation in general, indicating that users approved of the stay. The results for reviews in Portuguese follow a pattern similar to that presented for English: all topics for Booking are negative, and we can find several topics for Airbnb suggesting positivity. These results are presented in Appendix A.
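The paper does not name the LDA implementation it used; a minimal sketch of this pipeline with scikit-learn (stemming omitted and preprocessing simplified) could look like this:

```python
# Extract 10 LDA topics from pre-processed negative reviews and report
# the top words per topic, mirroring Table 4 in spirit.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_words_per_topic(negative_reviews, n_topics=10, n_words=10):
    vec = CountVectorizer(stop_words="english", lowercase=True)
    X = vec.fit_transform(negative_reviews)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in comp.argsort()[-n_words:][::-1]]
            for comp in lda.components_]
```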
Experiment with Volunteers
The analysis of reviews about venues before decision making is a task that is commonly performed by users of online hosting systems, such as those studied in this work [18]. With that, based on the results of Section 5, one question arises: based on reviews only, can users accurately rank accommodations of online hosting services of the sharing economy?
To evaluate this question, we recruited 30 volunteers with diverse profiles. In this group, we have representatives of various age groups, from adolescents to adults, with different levels of education, from incomplete higher education to postgraduate studies in progress.
Between February and April 2018, these volunteers were asked to respond to a questionnaire. In this questionnaire, volunteers needed to evaluate a particular venue only from the reviews made by other users. For this, we first chose four accommodations listed on Airbnb. These accommodations were selected with the aid of the mean sentiment polarity, as presented above. Taking into account the average polarity for each Airbnb establishment, the database of venues was divided into quartiles according to polarity. After this step, one venue was chosen randomly from each quartile. For each quartile, we assign the following labels, according to the polarity interval they represent: Q1 (quartile 1); Q2 (quartile 2); Q3 (quartile 3); and Q4 (quartile 4). Note that Q1 represents the best values and Q4 the worst.
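A minimal sketch of this quartile-based selection, assuming a pandas DataFrame with one row per review (the column names and the random draw are illustrative):

```python
# Split venues into quartiles by mean review polarity and draw one
# venue per quartile, as in the volunteer experiment described above.
import pandas as pd

def pick_one_per_quartile(df, seed=0):
    """df: one row per review, with columns venue_id and polarity."""
    means = df.groupby("venue_id")["polarity"].mean()
    # qcut orders bins from lowest to highest polarity; the paper's Q1
    # is the best quartile, hence the reversed labels here.
    quartile = pd.qcut(means, 4, labels=["Q4", "Q3", "Q2", "Q1"])
    return {q: means[quartile == q].sample(1, random_state=seed).index[0]
            for q in ["Q1", "Q2", "Q3", "Q4"]}
```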
We collected all available reviews for each selected venue. After a short text that serves to contextualize the volunteer, they should read the reviews of the venue and then classify it, assigning a score on a scale from 0 to 5, where 0 is bad and 5 is excellent. In the questionnaire, the order of presentation of the venues was scrambled. This was done to reduce any bias that the order of venues might introduce in the respondents' perception. Table 5 summarizes the 30 responses provided by the volunteers.
The results of the experiment with users show that all venues were considered good or very good by the majority of users. For example, the venue representing class Q4 had a good/average evaluation in the view of the volunteers, despite being in the quartile of venues with the lowest mean sentiment polarity.
This corroborates the previously observed result that, in the sharing economy, which includes Airbnb, reviews tend to be more positive, which may hinder human assessment. This suggests that venues with a neutral average polarity (i.e., around zero) may not be good accommodation options; it is as if neutral is a negative polarity in this case. It is important to note that the representative venue of class Q3 has an average grade higher than the representative venue of class Q2. This experiment reinforces the suggestion that perceiving quality through reviews on Airbnb can be difficult.
Implications for the Design of New Functionalities
After analyzing the results presented in Sections 5 and 6, it is necessary to reflect on their possible implications. The greater positivity of Airbnb and Couchsurfing can be detrimental to consumers; after all, bad hosts may not be evident. Bad experiences that are not shared end up imposing difficulty in choosing accommodation. This fact can lead the user to disregard a good place because of the ambiguity of sentiments present in the comments.
In some hosting services, such as Airbnb, users, in addition to writing reviews about their experience with the service obtained, can give a star rating, which can range from 1 (lowest) to 5 (highest). However, as noted in an earlier study, 94% of stars given by users on Airbnb range from 4.5 to 5 [19]. This means the star rating alone is not a sufficient metric for properly analyzing the quality of a place.
Our results suggest that it could be strategic to consider the polarity of reviews to help users in better decision making on hosting services of the sharing economy. Therefore, in this work, we exemplify a new way of evaluating these types of venues, taking into account the average polarity and the number of reviews of a given venue. For this, we suggest a score computed from P ∈ [−4, 4], the mean polarity of the venue's reviews, and C ≥ 0, the number of reviews for the venue.
Thus, a higher score value tends to be attributed to venues with a more positive average sentiment in the reviews. The score also takes into account the number of evaluations performed: the more opinions, the better. By taking into account the number of reviews, it is possible to distinguish whether venues have enough reviews for a reliable evaluation.
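The exact equation does not appear in this text; purely as an illustration, the function below has the two properties described above — it grows with the mean polarity P and with the review count C — but its functional form is an assumption, not the authors' formula:

```python
# Illustrative (assumed) scoring function with the stated monotonicity:
# higher mean polarity P in [-4, 4] and more reviews C >= 0 both raise it.
import math

def score(P, C):
    return (P + 4.0) * math.log1p(C)

# e.g. a venue with mean polarity 2.0 and 200 reviews:
print(round(score(2.0, 200), 2))  # 31.82
```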
For each Airbnb venue collected, a score was assigned using this approach. Appendix B shows some examples of the score for this analysis, helping to understand how each variable impacts it. For the venues studied in Section 6 (shown in Table 5), the score would rank the locations as follows: 1st Q1 (score = 51.07); 2nd Q2 (score = 40.66); 3rd Q3 (score = 34.91); 4th Q4 (score = 14.04).
We believe score, or some variation in this direction, can be useful, for example, to design a new venue ranking. Just as there are already rankings from the lowest to the highest price, a ranking could be created according to the calculated score. This may perhaps help users make better decisions when choosing a place to stay. A qualitative assessment with users to see if this approach can help improve the user experience is outside the scope of this work; however, it is important to be carried out.
Conclusion and Future Work
By evaluating reviews on two hosting platforms of the sharing economy and one of the traditional economy, we find evidence that reviews tend to be more positive on platforms of the sharing economy. This corroborates the hypothesis that this phenomenon happens due to the personal contact that occurs between the host and the guest on those services. This result can be detrimental to consumers, as bad hosts may not be evident. More importantly, our findings suggest that reviews on different platforms might require different interpretations, especially for algorithmic decision-making approaches that use review datasets in the learning phase. We hope our quantitative analysis and observations may inspire new approaches able to account for this perceived bias towards positivity in hosting platforms.
As future work, we intend to study the relationship between the sentiment in reviews and other attributes, such as gender and time. In addition, we want to conduct a qualitative assessment to investigate whether new ways to rank venues exploring the insights obtained in this study, such as the approach presented, can bring a better experience for users.
"year": 2020,
"sha1": "737887d1b3e5835141ef897909048796c6eab651",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2005.06591",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "737887d1b3e5835141ef897909048796c6eab651",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business",
"Computer Science"
]
} |
Gel formulations of Merremia mammosa (Lour.) accelerated wound healing of the wound in diabetic rats' foot ulcers (Journal of Traditional and Complementary Medicine)
Background and aim: The treatment of diabetic ulcers is difficult because of defective blood vessels and frequent co-occurrence of bacterial infections. In a previous study, we found a water fraction of Merremia mammosa (Lour.) (Mm(Lour.)) had beneficial effects on wound healing in diabetic rats. This study aimed to evaluate the influence of different gelling agents added to Mm(Lour.) water fraction gel on wound healing treatment in diabetic rats. Experimental procedure: Diabetic Wistar rats were divided into the following five groups: 1. positive control (Neomycin Sulfate 0.5% and Placenta Extract 10%), 2. negative control (distilled water), and 10% water fraction of Mm(Lour.) extract in 3. HPMC, 4. Carbopol, and 5. CMC Na gelling agents. The wound was made by the Morton method and treatment applied every other day for 25 days, then the wound healing process was observed. Data were analysed using appropriate statistical tools. Results: Histopathology observation, VEGF expression and hydroxyproline levels showed a significant acceleration of wound healing in all treatment groups compared to the negative control group. This study showed all of the Mm(Lour.) gel formulations could restore the delayed healing process of wounds in diabetic rats and were equally effective in accelerating wound healing. CMC Na was the most preferable because it did not irritate. Conclusion: The results suggest that Mm(Lour.) water fraction in a CMC Na gelling agent provides an option to be developed as a topical drug for diabetic wound healing treatment, shown by enhancement of collagen synthesis and angiogenesis.
Introduction
Wound healing involves complex processes including the inflammatory phase, granulation, and tissue remodeling. 1 This process is triggered by growth factors and cytokines, and complications may occur, influenced by many factors such as Diabetes mellitus (DM). 2 The high costs incurred treating wounds in DM patients, the risk of amputation, 3 the fact that topical drugs of choice do not contain comprehensive activities 4,5 and the difficulty of handling diabetic wounds 6,7 require the development of effective drugs that come from local Indonesian resources.
Indonesia has about 40,000 endemic plants (typical of the region) and 7,000 of them are considered medicinal in some way by local cultures. However, there are only about 200 medicinal plants that have been studied extensively. Merremia mammosa (Lour.) (Mm (Lour.)) is locally used as an antidiabetic, antibacterial and anti-inflammatory therapy. 8–10 Our previous studies examining the extract and extract fractions of Mm (Lour.) showed that each could accelerate wound closure in diabetic wound healing and increase the density of collagen in diabetic wounds. 11–13 The effective dose of a water extract fraction of Mm (Lour.) was 50 mg/625 mm² of wounded area. 12 Modern treatment protocols for wounds advise maintaining moist environmental conditions on the wound itself, 14 which could not be achieved by simply applying a water fraction. Therefore, this study aimed to develop a gel preparation of the native Indonesian plant Mm (Lour.) for the treatment of diabetic wounds, to maintain a moist environment for a longer period of time. Gel formulations are also more easily spread, greaseless, and water-soluble. 15 However, in a topical preparation, drug release from the matrix is an important parameter for bioavailability and efficacy. Thus, the choice of gelling agent can influence drug release in topical gel preparations. 16,17 In this study, we used a gel containing neomycin and placenta extract as a positive control. Topical gels with neomycin and placenta extract have been shown to have a regenerating effect with antibacterial, anti-inflammatory, antioxidant, and proangiogenesis activities. For comparison, Mm (Lour.) fraction gels were prepared using cellulose-derivative gelling agents, i.e., hydroxypropylmethylcellulose (HPMC) and carboxymethylcellulose sodium (CMC Na), as well as a synthetic gelling agent (Carbopol). These gelling agents have high stability and compatibility, low toxicity, and increase skin contact time; thereby, they can increase the effectiveness of gel usage. 18

Plants were labelled and deposited in the Herbarium Jemberiense, Biology Department, Mathematics and Natural Science Faculty, University of Jember (84/HB/7/2017). Extraction and fractionation of the plant tissues were carried out based on protocols from previous research. 11,12 Briefly, a total of 1 kg of simplicia powder was extracted by ultrasound using 70% ethanol solvent for 1 h. Then, it was filtered with a Buchner funnel to obtain the filtrate. The residue was re-extracted once. The resulting filtrate was concentrated with a rotary evaporator until a thick ethanol extract was obtained.
Chemicals and reagents
The ethanol extract was added to water in a 1:2 ratio and stirred until homogeneous. This water fraction was subsequently subjected to successive partitioning using n-hexane and ethyl acetate in a ratio of 2:3. This process was repeated three times, and then the resulting solution was freeze-dried until a viscous water fraction was obtained. The water fraction was standardized before being used for formulation. The parameters of standardization were organoleptics, drying shrinkage, thin layer chromatography profile and total flavonoid content. The total flavonoid content of the Mm (Lour.) water fraction, determined using the AlCl3 method, was 0.17 ± 0.009% w/w. 11
Formulation and physical properties testing of gels
Mm (Lour.) gels were prepared by incorporating different gelling agents (HPMC, Carbopol or CMC Na) into the most potent dose and extract fraction of Mm (Lour.) according to our previous study (a 10% water fraction). 11,12 The formulation was as follows: 10% water fraction of Mm (Lour.) ethanol extract, 1.5% gelling agent, 0.5% triethanolamine, 20% propylene glycol, and 68% distilled water. The gel-making process began by dispersing the gelling agent in hot water, stirring until homogeneous, and then adding TEA slowly until a gel mass was formed. The water fraction was mixed with propylene glycol until homogeneous. Then it was mixed into the gelling agent and stirred again until homogeneous. The remaining distilled water was added slowly until homogeneous. The physical properties of the gel formulations were tested, including organoleptics, pH, viscosity and spreadability.
Diabetic induction and wound excision
This study used a post-test only control group design. Early adulthood male Wistar rats weighing between 200 and 250 g were kept in individual cages with standard feed and water ad libitum. Fifty rats were divided into five groups (n = 10 per group), which consisted of positive control (Neomycin Sulfate 0.5% and Placenta Extract 10%), negative control (distilled water) and 10% water fraction of Mm (Lour.) extract in each 1.5% gelling agent (i.e., HPMC, Carbopol, CMC Na). Diabetes induction was carried out using STZ solution in 0.05 M (pH 4.5) citrate buffer at a dose of 40 mg/kg body weight (BW), given to rats intraperitoneally. Random blood glucose level examinations were carried out 24 h before and after induction and each week during the experiment to monitor the diabetic condition of the rats. 19 Wound excision was carried out in rats that had random blood glucose (GDA) levels ≥ 200 mg/dL. Ketamine at 80 mg/kg BW and xylazine at 10 mL/kg BW were injected intramuscularly as anesthesia. Excision was done on the rat backs 1 cm from the left side of the vertebral column by the Morton method. 20 A 2 × 2 cm area of skin was excised from the epidermal layer to the subcutaneous layer as well as the connective tissue below it. Wound healing rate was calculated according to Heidari et al. 21 This study followed the standard of ethics of Health Law research number 23/1992 and obtained ethical approval number 1175/H25.1.11/KE/2017 from the Faculty of Medicine, University of Jember.
Wound healing parameters
Wound healing processes were observed at days 3, 10, and 25 after excision (n = 3, n = 4, and n = 3, respectively, from each treatment group of n = 10), following the wound healing phases. At days 3, 10, and 25, the observed rats were sacrificed via cervical dislocation for further tests. Excisions were performed to obtain tissue for histopathological examination with HE staining and, at day 10 only, for VEGF immunohistochemical examination. Histopathological observation with HE staining was performed according to methods listed in a previous study. 12 VEGF examination on day 10 groups was performed using an immunohistochemical kit (BIOSSUSA), with paraffin blocks cut 4 μm thick and then deparaffinized and rehydrated. Then, 0.5% endogenous peroxidase block was applied for 30 min, and the slides were placed in a decloaking chamber at 110 °C with Diva solution added and blocked with 5% normal horse serum for 30 min. Immunostaining used the indirect method with polyclonal rabbit primary antibody at a standard 1:100 dilution incubated for 60 min, followed by incubation with Universal Link secondary antibody for 30 min. Trekavidin HRP labels were added and incubated for 30 min before counterstaining with HE. VEGF expression was assessed using an Olympus CX21LED microscope at 400× magnification on five visual fields with the help of ImageJ software. Assessment of VEGF expression was carried out quantitatively: a histology score was calculated by multiplying the percentage of browned areas by the brown intensity and averaging over each field of view. 22,23 Connective tissues were observed at day 10 after excision using histopathological examination with modified Masson's Trichrome (MT) staining specific to collagen, which stains collagen fibers a bright blue color, 13,24 and by measuring hydroxyproline levels. Observation of collagen density was carried out at 400× magnification in 6 fields of view, and pictures were taken using an Olympus DP21 series microscope. The percentage of collagen density was measured using ImageJ software according to a previous study. 13 Hydroxyproline level measurement followed the General Hydroxyproline Assay Kit® protocol. A standard curve to measure the concentration of hydroxyproline was made by measuring standards at concentrations of 0, 7.5, 15, 30, 60, 120, and 240 ng/mL by absorbance at 450 nm wavelength.
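As a sketch of the standard-curve step, a linear fit of absorbance against the standard concentrations can be inverted to read off sample concentrations (the absorbance values below are invented for illustration; only the concentration series comes from the protocol above):

```python
# Fit a linear standard curve (absorbance vs. concentration) and
# invert it to estimate hydroxyproline concentration of a sample.
import numpy as np

conc = np.array([0, 7.5, 15, 30, 60, 120, 240])               # ng/mL
absorbance = np.array([0.05, 0.09, 0.14, 0.25, 0.46, 0.88, 1.70])  # made up

slope, intercept = np.polyfit(conc, absorbance, 1)

def hydroxyproline_ng_per_ml(sample_abs):
    return (sample_abs - intercept) / slope

print(round(hydroxyproline_ng_per_ml(0.50), 1))  # estimated ng/mL
```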
Safety test
A skin irritation test based on OECD guideline 404 using rabbits (three animals) was carried out for the safety test. A 2 × 2 cm area of hair on the back of each rabbit was carefully shaved 24 h before treatment. A total of 0.5 g of each gel was applied to the 2 × 2 cm area, which was then covered with gauze and tape. After 4 h, the gels were washed off with water, and the appearance of erythema and oedema was observed at 1, 24, 48, and 72 h after the gels were removed. Erythema and oedema were assessed via a scoring method ranking severity on a 0–4 scale.
Statistical analysis
Statistical tests were performed with a one-way ANOVA test or Kruskal-Wallis test, depending on the normality of the data distribution, and were followed by post hoc Least Significant Difference (LSD) or Mann-Whitney tests as appropriate. SPSS v15 software was used for analysis. P < 0.05 indicated statistical significance.
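A minimal sketch of this decision rule with SciPy (note SciPy has no LSD helper, so pairwise t-tests stand in for the LSD post hoc test here; this is an approximation of, not a reproduction of, the SPSS procedure used):

```python
# Choose ANOVA+post hoc or Kruskal-Wallis+post hoc based on normality.
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: dict mapping group name -> list of measurements."""
    normal = all(stats.shapiro(v).pvalue > alpha for v in groups.values())
    omnibus = stats.f_oneway if normal else stats.kruskal
    p_omnibus = omnibus(*groups.values()).pvalue
    pairwise = {}
    if p_omnibus < alpha:  # only run post hoc tests if the omnibus test hits
        post = stats.ttest_ind if normal else stats.mannwhitneyu
        for a, b in combinations(groups, 2):
            pairwise[(a, b)] = post(groups[a], groups[b]).pvalue
    return p_omnibus, pairwise
```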
Wound healing parameters
The percentage of reduction in wound size showed a difference for every gelling agent when compared with the negative control, although it was not statistically significant (Fig. 1A). The study showed a tendency that, among the three gelling agents, HPMC and CMC Na had similar healing rates, consistent with high bioavailability, while Carbopol had the lowest. This result showed that, although not significant, different gelling agents produce a tendency toward different release rates that may affect the topical drug's potency.
Photomicrograph observation of HE staining in each healing phase showed that there were no observable differences yet on day 3 after wound excision. On days 10 and 25, Carbopol had a slower wound healing rate compared to the other gelling agents, as seen by evaluating the expression of angiogenesis, macrophages, fibroblasts and collagen as well as macroscopic appearance. The other gelling agents were similar to the positive control healing phase, as described by an optimum level of VEGF expression and no macrophages (Figs. 2 and 3). VEGF expression data were normally distributed with equal variances, so the analysis used a one-way ANOVA test, showing a significant treatment effect with p = 0.017. The data were then tested with a post hoc LSD test, which showed that the negative control (C(−)) differed significantly from the positive control (C(+)), T1, T2, and T3 (p = 0.023; p = 0.005; p = 0.017; p = 0.002, respectively). C(+) did not differ significantly from T1, T2, and T3. There was also no significant difference among T1, T2 and T3 (p > 0.05).
Collagen density results by modified MT staining in diabetic wounds of rats can be seen in Figs. 1C and 4. The one-way ANOVA test result showed p = 0.001, which means that there were significant differences in collagen density among the groups. Then, an LSD test was conducted to find the differences between each group, and significant results are shown in Fig. 1C. Hydroxyproline measurement in skin tissue represented the collagen index. Kruskal-Wallis test results showed p = 0.033, which means that there were significant differences in hydroxyproline level among the groups. The results of the Mann-Whitney test are shown in Fig. 1D.
Gel physical properties
The organoleptic examination was done to assess the properties of each gel. Several physical properties, such as odor and taste, are shown in Table 1.
Safety test
The skin irritation test to assess the safety of the gels showed that only the rabbit treated with the CMC Na gelling agent did not experience any erythema or oedema during the test (Table 2).
Discussion
The diabetic animal model was STZ-injected Wistar rats; the injection created a hyperglycemic condition characterized by random blood glucose levels of more than 200 mg/dL during the experiment. Wound healing in diabetic rats as an experimental model was expected to resemble wound conditions in diabetes patients. Diabetic wounds show significant fibroblast dysfunction and disrupted maturation of epithelium and granulation tissue, resulting in decreased synthesis of collagen acting as an extracellular matrix, with hydroxyproline and VEGF levels also low. 25 An ANOVA test of blood glucose levels was carried out for all groups of animals, and the result was p > 0.05 (data not shown), showing there were no significant differences in blood glucose levels among the groups. This result indicated that differences in blood glucose, which served as a confounding variable affecting wound healing in each Wistar rat, could be ignored.
Group C(−), with distilled water treatment as the negative control, showed lower wound healing parameters, with significant differences (p < 0.05) when compared to C(+), T1, T2, and T3 (Figs. 1B, 1C, 1D). Distilled water would not be expected to improve diabetic wound healing and would only function as a wound cleanser. 26 The transition from the inflammatory phase to the proliferative phase was hampered. This was consistent with our previous studies 11–13 of diabetic wounds, which showed that wound healing in the negative control took longer than in the positive control and treatment groups, as assessed via several wound parameters. This was also similar to the result of the Ackermann et al. study on diabetic wounds. 27 Improved wound healing was observed in the treatment groups compared to the negative control group. The time needed to achieve 50% closure of the wound decreased from 8.3 ± 0.6 days (n = 6) in the negative control group to 5.7 ± 0.4 days (n = 18) in the Mm (Lour.) gel groups (Fig. 1A), although not statistically significantly, probably due to high variation in the negative control data, as shown by the higher standard error. This was perhaps because of individual non-treatment responses in the diabetic condition related to various factors. 28 When we examined photomicrographs of HE staining, the negative control group at day 25 still showed a low density of extracellular matrix, a small number of fibroblasts and prominent macrophages, which indicated incomplete healing, in contrast to the other groups (Fig. 2). This result was consistent with our previous study on Mm (Lour.) extract fractions. [11][12][13] Therefore, the findings in this study support the presumption that the gel formulation of Mm (Lour.) water fraction possibly restored the delayed process of diabetic wound healing. This result might be attributed to the activities of merremoside and mammoside, a group of resin glycoside compounds mainly found in Mm (Lour.), 10 and also to flavonoid activity. Based on the standardization result, the water fraction of Mm (Lour.) contained flavonoids (0.17 ± 0.009% w/w).
The treatment of wounds requires the combined effects of antibiotics, anti-inflammatory agents, astringents, and antipyretics. 29 Resin glycosides, as antibacterial agents, act by inhibiting efflux pumps on bacterial membranes so that resistant bacteria become more sensitive. 30 The anti-inflammatory activity of resin glycosides can inhibit the COX-1 and COX-2 enzymes that are overproduced in DM, 31 while the flavonoids also contained in Mm (Lour.) 11 inhibit bacterial cell wall synthesis and damage cell walls directly. 32 The anti-inflammatory activity of flavonoids takes the form of activation of M2 macrophages secreting various growth factors such as platelet-derived growth factor (PDGF), transforming growth factor β (TGF-β), and fibroblast growth factor (FGF). 33 These mechanisms can reduce the risk of infection in diabetic wounds with decreased immunity. 34 The growth factors induced by the flavonoids of Mm (Lour.) accelerate the transition from the inflammatory phase to the proliferation phase by stimulating migration, proliferation, and activity of fibroblasts in collagen synthesis. When there is an increase in collagen synthesis activity by fibroblasts, the synthesis of hydroxyproline, which acts as a base material for collagen, increases as well, resulting in increased hydroxyproline levels. 35,36 This is consistent with the results of this study, where collagen density, hydroxyproline and VEGF in all Mm (Lour.) gel groups were significantly higher than in the negative control group (Figs. 1B, 1C, 1D).
Flavonoids have antioxidant activity, binding free radicals and preventing oxidative reactions by increasing the activity of the superoxide dismutase (SOD) enzyme. 12 This enzyme reduces reactive oxygen species (ROS) in the form of the superoxide anion (O2−) to H2O2, which then undergoes catalysis by the catalase enzyme to neutral H2O. If the level of ROS exceeds the amount of antioxidants, as in DM conditions, oxidative stress will occur and increase the risk of lipid peroxidation (the release of electrons in the lipid layer of cell membranes that can damage membrane stability). 37 The decrease in ROS caused by flavonoids prevents tissue damage during the inflammatory phase, so that the transition to the proliferative phase is faster and tissue hydroxyproline levels and VEGF expression are higher. Juneja et al. 38 reported that flavonoids from Boerhavia diffusa leaf extracts improved wound healing by enhancing fibroblast growth and collagen fibrils, similar to our result.
Making gel formulations of Mm (Lour.) aims to maintain skin moisture, increase the penetration of active substances into wounds, and protect skin from the external environment. A humid environment can improve retention of growth factors such as TGF-β, PDGF, and FGF, so that the proliferation response of fibroblasts and extracellular matrix synthesis increase. Moist conditions also increase autolytic debridement. 39 Furthermore, the gel also helps the drug penetrate the skin by making the stratum corneum more tenuous, so that the drug can reach the dermis layer and affect the fibroblast cells, macrophages and other immune cells located there. 40 The C(+) group, containing neomycin and placenta extract, showed higher levels of hydroxyproline, differing significantly from the C(−) group (p = 0.034), but there was no significant difference when compared to the T1 (p = 0.513), T2 (p = 0.480), and T3 (p = 0.480) groups. This likely happened because all these groups have comprehensive activity on diabetic wound healing, which includes anti-inflammatory, antioxidant, antibacterial, and collagen synthesis-stimulating effects. Neomycin is a broad-spectrum antibiotic in the aminoglycoside group that is commonly used topically for infections of the skin and mucous membranes. 41 The activity of placenta extract in wound healing is as an anti-inflammatory, antioxidant, and stimulant of collagen synthesis. 42 Aside from their activities, the positive control and treatment groups were in the form of gels suitable for wound therapy.
The statistical analysis showed a significant difference in collagen density between the HPMC group and the Carbopol group, where the average percentage of collagen density in the HPMC group (52.71%) was higher than in the Carbopol group (38.59%). Meanwhile, the comparison of collagen density between the HPMC group and the CMC Na group did not show any significant difference. On the other hand, there was a significant difference in collagen density between the CMC Na group and the Carbopol group, where the average percentage of collagen density in the CMC Na group (54.88%) was higher than in the Carbopol group (38.59%). As the other parameters showed no significant difference among treatment groups, it can be concluded that there were no differences in the effect of the gelling agents (HPMC, Carbopol, and CMC Na) on the activity of the Mm (Lour.) water fraction gel in diabetic wound healing.
The effects of Mm (Lour.) gel in the T1 (HPMC) and T3 (CMC-Na) groups were mostly similar, likely because HPMC and CMC-Na are both cellulose-derivative polymer gelling agents with good viscosity and swelling properties. Tas et al. 43 compared HPMC, CMC-Na, and methylcellulose as gelling agents for the active ingredient chlorpheniramine maleate and showed that HPMC had a higher drug release rate than CMC-Na. This was probably why, at day 10, the HPMC group showed a tendency to heal fastest, whereas by day 25 the HPMC and CMC-Na groups had healed similarly.
The analysis of differences in almost all wound healing parameters in groups T1, T2, and T3 showed non-significant differences. These data demonstrated that the gelling agent played no significant role in the wound healing-enhancing effect of Mm (Lour.) gels. Based on the physical properties and safety test results, CMC Na gel, with pH 6.48, the highest viscosity (200 dPa·s), the widest spreadability (7.5), and no erythema or oedema on the skin, appeared to be the better option as a gelling agent. Our study showed promising results for further development toward clinical use, since the product is safer (as a natural product) and its activity can still be enhanced by purification as well as combination treatment. Further experiments, such as clinical trial studies, are needed to continue the development of the Mm (Lour.) water fraction gel as a topical drug.
Conclusion
The gel formulation of Mm (Lour.) water fraction possibly restored the delayed process of wound healing in the diabetic rat model. It can be concluded that there were significant differences in diabetic wound healing between the negative control group and every other group. There was no significant difference between the positive control group and Mm (Lour.) water fraction gel in HPMC, Carbopol, or Na-CMC gelling agents. There was also no significant difference of wound healing between the different gelling agent, groups: HPMC and Carbopol; Carbopol and Na-CMC; Na-CMC and HPMC. However, Mm (Lour.) water fraction gel with CMC Na base showed a tendency to be the best in accelerating healing rate, meets the gel property requirements and was the only base that showed no erythema or oedema during the safety test. Therefore, the safest and suggested gel formulation to be developed as a topical drug is Mm (Lour.) water fraction gel with CMC Na gelling agent. | 2019-12-12T10:14:48.093Z | 2019-12-09T00:00:00.000 | {
"year": 2019,
"sha1": "f50a65f828aff9e63d182c607db2a5b56e372d5e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jtcme.2019.12.002",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "09d05b60dcc247dcf4ca20be6eaaeb09d62872c3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Small-data global existence of solutions for the Pitaevskii model of superfluidity
We investigate a micro-scale model of superfluidity derived by Pitaevskii (1959 Sov. Phys. JETP 8 282–7) to describe the interacting dynamics between the superfluid and normal fluid phases of Helium-4. The model involves the nonlinear Schrödinger equation (NLS) and the Navier–Stokes equations, coupled to each other via a bidirectional nonlinear relaxation mechanism. Depending on the nature of the nonlinearity in the NLS, we prove global/almost-global existence of solutions to this system in $\mathbb{T}^2$ — strong in the wavefunction and velocity, and weak in the density.
1. Introduction
Superfluids constitute a phase of matter that is achieved when certain substances are isobarically cooled, resulting in Bose-Einstein condensation. That Helium-4 (and also its isotope Helium-3) undergoes such a quantum mechanical phase transition was first experimentally discovered [Kap38, AM38] over 80 years ago and has been the subject of intense inquiry ever since. Despite this, a single theory that describes the phenomenon continues to elude us.
The general picture is that at non-zero temperatures, there is a mixture of two interacting phases: the normal fluid and the superfluid [PL11, Vin04, Vin06, BSS14, BDV01, BLR14]. It is important to note that this is not like classical multiphase flow, where one can define a clear boundary between the two phases. Instead, some atoms are in the normal fluid phase and some are in the superfluid phase, with both fluids occupying the entire volume. The normal fluid is well modeled by the Navier-Stokes equations (NSE), while the description of the superfluid varies by the length scale that we are interested in (see [BBP14, Jay22] for a discussion). Briefly, the superfluid is described by the NSE at large scales [Hol01], a vortex model at intermediate scales [Sch78, Sch85, Sch88], and the nonlinear Schrödinger equation (NLS) at small scales [Kha69, Car96]. The macro-scale, NSE-based description is a current topic of numerical research [VSBP19, RBL09, SRL11], and has also been rigorously analyzed [JT21]. In this paper, we use the micro-scale, NLS-based model by Pitaevskii [Pit59], which has previously been considered in [JT22a, JT22b].
A missing piece of the physics puzzle here is the nature of the interaction mechanism. It is known that the interaction between the fluids is dissipative/retarding. Pitaevskii thus derived a micro-scale model that intertwines the NLS (for the superfluid) and the NSE (for the normal fluid). The coupling is nonlinear and bidirectional, and transfers mass, momentum, and energy between the two fluids. For the combined system of both phases, the model respects the conservation of total mass and total momentum, while the total energy decreases in accordance with the dissipation.
The NLS, in its most popular form, is fundamentally a dispersive partial differential equation with a cubic nonlinearity that models systems with low-energy wave interactions, such as dipolar quantum gases [CMS08, Soh11]. The well-posedness issues of the NLS have been tackled in many situations [CKS+], and its scattering solutions [Tao06, Dod16] have been of particular interest. The NLS can also be recast as a system of compressible Euler equations (referred to as quantum hydrodynamics, or QHD) with an additional quantum pressure term [CDS12]. This system is a special case of the more general Korteweg models, which have been subject to much mathematical analysis. Hattori and Li [HL94] showed that the 2D QHD equations are locally well-posed for high-regularity data, and improved this to global well-posedness in the case of small data [HL96]. Jüngel [JMR02] established local strong solutions to the QHD-Poisson system, formed by including a potential governed by the Poisson equation. The same model possesses local-in-time classical solutions in 1D when the data is highly regular [JL04]. For initial conditions close to a stationary state, the solutions are global-in-time and converge exponentially fast to the stationary state. Blow-up criteria have also been derived for QHD [WG20, WG21]. While the discussion so far has focused on strong solutions, there has also been rising interest in the weak formulation of QHD-like models. Antonelli and Marcati [AM09, AM12, AM15] introduced the novel fractional step method in the pursuit of finite-energy global weak solutions. The idea was to revert (from QHD) to the NLS, which was easier to solve, and account for collision-induced momentum transfer via periodic updates to the wavefunction. In this process, the occurrence of quantum vortices could also be characterized by imposing irrotationality of the velocity field (away from vacuum regions). Using special test functions that permit better control of the quantum pressure term, Jüngel [Jün10] proved that the viscous QHD system admits weak solutions in 2D. For small values of viscosity, these solutions were global in time. The proof utilized a redefinition of the velocity that converts the hyperbolic continuity equation into a parabolic one, a technique that was pioneered by Bresch and Desjardins [BD04] for Korteweg systems in general. Vasseur and Yu [VY16b] expanded Jüngel's result to a wider class of test functions while adding some physically motivated drag terms. Various forms of damping have appeared in the literature, primarily serving two different roles: (i) as an approximating scheme for both the compressible Navier-Stokes equations with degenerate viscosities [LX15, VY16a] as well as Korteweg-type systems [AS17, ACLS20, AS22], and (ii) as a means of proving global existence [Cha20] or relaxation to a steady state [BGLVV22, SYZ22]. Most works involving Korteweg systems use the notion of κ-entropy that was first demonstrated in [BDZ15]. Furthermore, even questions of non-uniqueness (and weak-strong uniqueness) of weak solutions have been addressed for the QHD-Poisson system with linear drag, using convex integration [DFM15].
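As background for the QHD reformulation just mentioned, the following is the standard Madelung computation, written here for the cubic NLS and away from vacuum (a sketch for orientation only; it is not part of this paper's argument):

% Madelung transform: \psi = \sqrt{\rho_s}\, e^{iS}, with v := \nabla S. For
%   i\,\partial_t \psi = -\tfrac12 \Delta\psi + \mu|\psi|^2\psi,
% separating modulus and phase gives the QHD system
\begin{align*}
  &\partial_t \rho_s + \nabla\cdot(\rho_s v) = 0,\\
  &\partial_t v + (v\cdot\nabla)v + \mu\nabla\rho_s
     = \frac12\,\nabla\!\left(\frac{\Delta\sqrt{\rho_s}}{\sqrt{\rho_s}}\right),
\end{align*}
% i.e., compressible Euler equations with pressure law p(\rho_s) = (\mu/2)\rho_s^2,
% plus the third-order "quantum pressure" term on the right-hand side.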
It is only at absolute zero temperature that superfluids can be well approximated by the use of the NLS alone. For temperatures above zero and below about 2.17 K, we have a mixture of both fluids. In this article, we consider Pitaevskii's model [Pit59], which couples the NLS and the NSE. The model was initially derived for a fully compressible normal fluid. While compressible fluids are more realistic in some scenarios, they are also much more challenging to both rigorously analyze and numerically simulate. [Fei04, Lio96a] contain several classical results on the compressible NSE. On the other hand, the incompressible NSE (no density equation) is arguably the most studied nonlinear partial differential equation in mathematics (see [Tem77, MB02, RRS16] for classical results). In this article, we approximate the normal fluid to be incompressible, but the density persists, varying from point to point in the flow domain. What results is an incompressible, inhomogeneous flow: the compressible NSE appended with the condition of divergence-free velocity. This model of fluids was first investigated by Kazhikov for local weak solutions when the initial density is bounded from below [Kaz74], and vacuum states were allowed in an improvement by Kim [Kim87]. Further advances for weak solutions were made by Simon [Sim90], who in particular analyzed their continuity at t = 0, and also proved the existence of global solutions in a less regular space. Meanwhile, Ladyzhenskaya and Solonnikov [LS78] presented the case for strong solutions: with the density bounded from below, it is possible to construct local (global) unique solutions in 3D (2D). Furthermore, if the data is small enough, one obtains global-in-time unique solutions. Results in the same spirit were proven by Danchin for small perturbations from the stationary state in critical Besov spaces [Dan03]. He further established the inviscid limit of the incompressible inhomogeneous NSE in subcritical spaces [Dan06]. The local existence theorem by Ladyzhenskaya and Solonnikov was shown to be valid for non-negative densities as long as the initial data satisfied a compatibility criterion [CK03]. This work by Choe and Kim has since spurred on several other results that utilize such compatibility conditions on the initial data.
Given the immense interest in the NLS and NSE, the rigorous study of a coupled system should be a natural next step. Indeed, one such two-fluid model of superfluidity was analyzed by Antonelli and Marcati in [AM15]. The superfluid was described by the NLS, and the normal fluid by the compressible NSE. This is similar to the system considered in this article, save for two key differences. Firstly, their model did not permit any mass transfer between the two fluids (which allows for global-in-time solutions). As we shall discuss, this is the biggest roadblock in Pitaevskii's model and essentially defines the strategy used. Secondly, the momentum transfer in their model is unidirectional and linear, affecting only the superfluid phase (as opposed to the bidirectional and nonlinear nature of the coupling in this work).
Thanks to the retarding interactions between the two phases, the NLS acquires a dissipative flavor, which renders it parabolic. This lets us extract dissipative contributions to the energy estimates. To analyze the momentum equation of the NSE, we work with initial velocity in $H^1_d$. This yields appropriate regularity for the velocity, in order to adequately control the relaxation mechanism, which contains quadratic terms in the velocity. Parting ways from [Kim87], we begin with an initial density field that is bounded from below. This is necessary since the continuity equation is unusual and is not a homogeneous transport equation. Our primary goal is to avoid the occurrence of zero or negative densities at any time. To this end, we must limit the effect of inhomogeneity, i.e., of the relaxation mechanism that allows for mass and momentum transfer between the two fluids. As a serendipitous by-product of this non-zero density field, we also obtain control of $\partial_t u$, which allows the use of compactness arguments to actually obtain strong continuity in time of the velocity field.
The crux of this work is to derive a priori estimates and carefully extract coercive terms that allow for norms to decay, while avoiding any derivatives on the density of the normal fluid. To engineer this decay, we include a linear drag term for the NSE. Additionally, we also present results for any polynomial-type nonlinearity in the NLS. We now mention the notation used in the article before describing the model and stating the results.
1.1. Notation. We denote by $H^s(\mathbb{T}^2)$ the completion of $C^\infty(\mathbb{T}^2)$ under the Sobolev norm $H^s$, while we use $\dot H^s(\mathbb{T}^2)$ when referring to the homogeneous Sobolev spaces. For a 2D vector-valued function $u \equiv (u_1, u_2)$, we write $H^s_d(\mathbb{T}^2)$ for the completion of smooth, divergence-free vector fields under the $H^s$ norm. The $L^2$ inner product, denoted by $\langle\cdot,\cdot\rangle$, is sesquilinear (the first argument is complex conjugated, indicated by an overbar) to accommodate the complex nature of the Schrödinger equation, i.e., $\langle\psi,\varphi\rangle := \int_{\mathbb{T}^2} \bar\psi\varphi\, dx$. Since the velocity and density are real-valued functions, we ignore the complex conjugation when they constitute the first argument of the inner product.
We use the subscript x to denote Banach spaces that are defined over $\mathbb{T}^2$. For instance, $L^p_x := L^p(\mathbb{T}^2)$ and $H^s_{d,x} := H^s_d(\mathbb{T}^2)$. For spaces/norms over time, the subscript t denotes the time interval in consideration, such as $L^p_t := L^p_{[0,t]}$. The Bochner spaces $L^p(0,T;X)$ and $C([0,T];X)$ have their usual meanings, as $L^p$ and continuous maps (respectively) from [0, T] to a Banach space X.
We also use the notation $X \lesssim Y$ and $X \gtrsim Y$ to imply that there exists a positive constant C such that $X \le CY$ and $CX \ge Y$, respectively. When appropriate, the dependence of the constant on various parameters shall be denoted using a subscript. Throughout the article, C is used to denote a (possibly large) constant that depends on the system parameters listed in (2.4), while κ and ε are used to represent (small) positive numbers. The values of C, κ, and ε can vary across the different steps of the calculations.
1.2. Organization of the paper. In Section 2, we present and discuss the mathematical model, along with statements of the main results. Several a priori estimates, at increasing levels of regularity, are derived in Section 3. The construction of the semi-Galerkin scheme and the renormalization of the density are discussed in Section 4.
2. Mathematical model and main results
The superfluid phase is described by a complex wavefunction, whose dynamics are governed by the nonlinear Schrödinger equation (NLS), while the normal fluid is modeled using the compressible Navier-Stokes equations (NSE). In all generality, the full set of equations can be found in [Pit59, Section 2]. In what follows, we use a slightly simplified and modified version of the equations, arrived at by making the following assumptions.
(1) We consider a general power-law nonlinearity for the NLS. This is done by choosing the internal energy density of the system to be $\frac{2\mu}{p+2}|\psi|^{p+2}$, for $1 \le p < \infty$ (see Remark 2.5). We also assume that the internal energy is independent of the density of the normal fluid.
(2) We work in the limit of a divergence-free normal fluid velocity. This means that the pressure is a Lagrange multiplier, rendering the equations of state and entropy unnecessary. Note that, due to the nature of the coupling between the two phases, the density of the normal fluid is not simply transported.
(3) A linear drag term has been included in the momentum equation to account for the lack of coercive estimates for the velocity.
(4) Planck's constant (h) and the mass of the Helium atom (m) have both been set to unity for simplicity.
We now state the equations used in this paper, labeled (NLS), (NSE), (CON), and (DIV). Here, ψ is the wavefunction describing the superfluid phase, while ρ, u, and q are the density, velocity, and pressure (respectively) of the normal fluid. The normal fluid has viscosity ν and drag coefficient α, while μ (a positive constant) is the strength of the scattering interactions within the superfluid. This scattering nonlinearity has an exponent $p \in [1, \infty)$. Finally, λ is a positive constant that indicates the coupling strength between the two phases. The coupling is denoted by the nonlinear operator B.
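For orientation, the following sketch records the structure of the system as it can be read off from the surrounding discussion: the form of B below is inferred from Remark 2.8 and footnote 2 (its linear part $B_L = B - \mu|\psi|^p$ is the relative kinetic energy operator), and the source term in (CON) is the quantity $\Psi = 2\lambda\,\mathrm{Re}(\bar\psi B\psi)$ used in Section 4.5. The momentum exchange in (NSE) is abbreviated as $\lambda G(\psi,u)$ rather than written out, since its exact form is an assumption of this sketch (by Remark 2.2, two of its terms are perfect gradients and are absorbed into the pressure q).

% Schematic form of the Pitaevskii system analyzed in this paper:
\begin{align*}
  & i\,\partial_t\psi \;=\; -\tfrac12\Delta\psi + \mu|\psi|^p\psi - i\lambda\,B\psi,
    && \text{(NLS)}\\
  & \partial_t(\rho u) + \nabla\cdot(\rho\, u\otimes u) - \nu\Delta u + \alpha u + \nabla q
      \;=\; \lambda\, G(\psi,u),
    && \text{(NSE)}\\
  & \partial_t\rho + u\cdot\nabla\rho \;=\; 2\lambda\,\mathrm{Re}\big(\bar\psi\,B\psi\big),
    && \text{(CON)}\\
  & \nabla\cdot u \;=\; 0,
    && \text{(DIV)}
\end{align*}
% with the coupling operator
%   B\psi := \tfrac12(-i\nabla - u)^2\psi + \mu|\psi|^p\psi,
% so that B_L := B - \mu|\psi|^p = \tfrac12(-i\nabla - u)^2 is its linear part.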
The Schrödinger equation dictates the evolution of the wavefunction, generated via the action of the Hamiltonian (roughly, the energy) of the system. The coupling B resembles the relative kinetic energy² between the two phases. This is evident upon recalling that the quantum mechanical momentum operator (in the position basis) is $-ih\nabla$. The purpose of this coupling is to allow for mass/momentum transfer between the two phases as a means of relaxation or dissipation.
These equations are supplemented with the initial conditions $\psi(0) = \psi_0$, $u(0) = u_0$, and $\rho(0) = \rho_0$. (INI) We use periodic boundary conditions, i.e., we are working on the two-dimensional torus $[0,1]^2$.
2.1. Weak solutions and the existence theorems. Having stated the model, the notion of weak solutions to (NLS), (NSE), (CON), and (DIV) (with initial conditions (INI) and periodic boundary conditions), henceforth referred to as the Pitaevskii model, is as follows.
Definition 2.1 (Weak solutions). For a given time T > 0, a triplet (ψ, u, ρ) is called a weak solution to the Pitaevskii model if it satisfies the weak formulations (2.1)-(2.3) of the governing equations, tested against smooth (and, in the case of (NSE), divergence-free) test functions.
²There is also the nonlinear wavefunction term, so that the relaxation to equilibrium also depends on the potential energy of the superfluid.
Remark 2.2. We note that the last two terms in (NSE) are gradients, just like the pressure term, and thus vanish in the definition of the weak solution (since the test function is divergence-free). Henceforth, we absorb these two gradient terms into the pressure, relabeling the new pressure as q.
We are now ready to state our main results.
Theorem 2.3 (Global existence). Fix any $p \in [1, 4)$, and let the initial data satisfy $0 < m_i \le \rho_0 \le M_i$ a.e. in $\mathbb{T}^2$. Then, there exists a global weak solution (ψ, u, ρ) to the Pitaevskii model such that the density is bounded between $m_f \in (0, m_i)$ and $M_f := M_i + m_i - m_f$, provided the initial data satisfy a smallness criterion. The solution has the regularity of Definition 2.1, with $\rho \in C([0,T]; L^r_x)$ for $1 \le r < \infty$. Additionally, the solution also satisfies the energy equality (2.8). For the case of higher-order nonlinearities, i.e., when $p \ge 4$, we obtain "almost global" existence.
Theorem 2.4 (Almost global existence). In the case of p = 4, the solution to the Pitaevskii model has the same regularity properties as in Theorem 2.3, except that its existence is only guaranteed on $[0, T]$ with $T \sim e^{\varepsilon^{-4}}$, where ε is the size of the (sufficiently small) initial data.
For p > 4, the existence time scales polynomially with the size of the data, as $T \sim \varepsilon^{-\frac{p}{p-4}}$. In both cases, these solutions also satisfy the energy equality on [0, T].
While deriving the a priori estimates, we have to distinguish between the cases $1 \le p < 2$, $p = 2$, $2 < p < 4$, $p = 4$, and $p > 4$. This is due to the poor control we have on the superfluid mass. Given that we are on $\mathbb{T}^2$, and our equations do not preserve functions with vanishing mean, the $L^2$ norm becomes the limiting factor even in the decay of higher norms. In the case of the wavefunction, this corresponds to the mass of the superfluid. Similarly, for the velocity, we do not get coercive estimates from the viscosity term alone, at least at the level of the kinetic energy estimate. Thus, we introduce a linear drag term.
Remark 2.5. Since the self-interaction term in (NLS) involves a discontinuity due to the complex magnitude, evaluating the $H^2$ norm as in (3.51) requires $p \ge 1$. In particular, points of superfluid vacuum (ψ = 0) may lead to problems. As an illustration, consider $D^2(|f|^p f)$ for a real-valued function f, which can be regularized as $D^2\big((f^2+\varepsilon)^{\frac p2} f\big)$. Upon differentiation, the most problematic term is $(f^2+\varepsilon)^{\frac p2 - 2} f^3 (Df)^2$. To be able to handle this term in the limit $\varepsilon \to 0$, at the points where f = 0, we require that $2(\tfrac p2 - 2) + 3 = p - 1 \ge 0$. This argument can be easily extended to a complex-valued function.
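To make the remark concrete, the computation it alludes to can be written out as follows (a sketch for a real-valued f; only the relevant terms are tracked):

% Differentiating the regularization (f^2+\varepsilon)^{p/2} f twice:
\begin{align*}
  D\big[(f^2+\varepsilon)^{\frac p2} f\big]
    &= (f^2+\varepsilon)^{\frac p2} Df + p f^2 (f^2+\varepsilon)^{\frac p2 - 1} Df,\\
  D^2\big[(f^2+\varepsilon)^{\frac p2} f\big]
    &= (f^2+\varepsilon)^{\frac p2} D^2 f + p f^2 (f^2+\varepsilon)^{\frac p2 - 1} D^2 f\\
    &\quad + 3p f (f^2+\varepsilon)^{\frac p2 - 1} (Df)^2
           + p(p-2) f^3 (f^2+\varepsilon)^{\frac p2 - 2} (Df)^2 .
\end{align*}
% As \varepsilon \to 0, the last term behaves like |f|^{2(p/2-2)+3}(Df)^2
% = |f|^{p-1}(Df)^2 near zeros of f, which stays bounded precisely when p \ge 1.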
Remark 2.6. The regularity of the solutions seems to suggest that the wavefunction and velocity are strong solutions. Indeed this is true, as they are strongly continuous in their respective topologies. On the other hand, the density is truly a weak solution, and is the reason for referring to the triplet as a weak solution. This low regularity of the density influences the nature of the calculations that are employed.
The proofs of both Theorems 2.3 and 2.4 follow from detailed a priori estimates, and a semi-Galerkin scheme to construct the solutions. The a priori estimates only differ slightly for various ranges of the values of p, as will be illustrated. The general approach to the problem is motivated by that of [Kim87], but we do not allow the density to vanish anywhere. This is because the presence of u in the nonlinear coupling means we are required to control it in $L^\infty(\mathbb{T}^2)$ to prevent the formation of vacuum (and regions of negative density). Beginning from the usual mass and energy estimates, we derive a hierarchy of several energies for the wavefunction and velocity.
2.2. Significance of the results. The holy grail of superfluid modeling is to find a unified description that works at all length scales, and rigorous validation of any proposed models is crucial to this process. The thrust of this paper is the analysis of Pitaevskii's description of superfluidity, the most important feature of which is to characterize the mass transfer between the two fluids. In the course of proving the main theorems, we quantify the conversion of superfluid into normal fluid (Lemma 3.1), confirming the interaction-induced relaxation mechanism. We establish the validity of the model in the limit $t \to \infty$ even as the superfluid mass decreases (polynomially) quickly. The transition in the behavior of the solutions, from global to almost global, as the self-interactions are increased in strength, is in accordance with the decreasing mass decay. However, the threshold p = 4 still begs for a physical explanation. Of the assumptions underlying our theorems, relaxing the demands of small data and positive normal fluid density would be important future advancements in the context of the Pitaevskii model.
The rigorous analysis of superfluid models is a fairly new topic, and we expect this work to pave the way for further results in this direction. Some questions of interest, particularly of consequence to physicists and engineers, are the issues of stability and compressibility. For example, in [Pit59], Pitaevskii investigated the propagation of sound waves in superfluid Helium by studying the case when the superfluid has only small density gradients. It has to be noted that his derivation of the model accounted for the contributions to the internal energy of the system from both fluids. Thus, by utilizing appropriate self-interactions (for instance, non-local potentials, or including the normal fluid density), it would be important to test the model against experimental findings. A mathematical guarantee of the existence of solutions to the Pitaevskii model is essential to complement the efforts to numerically simulate such complicated systems [BSZ+23]. It is worth mentioning that a better understanding of superfluidity could be revolutionary for most modern experiments in physics (including the Large Hadron Collider [Leb94, RM18]), and also for the fields of quantum computing [HDT21], gravitational wave astronomy [SDLPS17], and dark matter [vKEE+23]. All of these use helium as a cryogen, often as a superfluid-normal fluid mixture, due to the superfluid's excellent thermal conductivity [Vin04].
2.3. The strategy. The nonlinear coupling terms in (NLS) and (NSE) may be the most obvious differences between this model and other standard fluid dynamics models, but the source term in (CON) is the most troublesome. The backbone of our approach towards proving global existence is ensuring a positive lower bound for the density at all times. This involves a meticulous handling of the a priori estimates so as to obtain coercive terms that lead to global-in-time bounds. Throughout the calculations, we ensure that the density norms are only in Lebesgue spaces: ρ is not smooth enough to be differentiated (even weakly). Before we outline the strategy, we discuss some properties of the coupling operator B. Henceforth, we refer to the linear (in ψ) part of B as $B_L$, so that
$$B\psi = B_L\psi + \mu|\psi|^p\psi. \qquad (2.9)$$
Lemma 2.7. For divergence-free u, the coupling satisfies $\int_{\mathbb{T}^2}\mathrm{Re}(\bar\psi B\psi)\,dx \ge \mu\|\psi\|^{p+2}_{L^{p+2}_x} \ge 0$, along with a companion lower bound in which the cross terms are controlled.
Proof. Both calculations follow using integration by parts.
(1) By (2.9) and the incompressibility of u, we obtain the first claim. (2) Similarly, for the second: in the last inequality, we used Hölder's and Young's inequalities to cancel the third term against the first two. □
Remark 2.8. Given that B provides a relaxation mechanism, it is tempting to treat it, or at least its linear part $B_L$, as a dissipative second-order elliptic operator whose eigenfunctions can be used as a basis for the semi-Galerkin scheme. Even though $B_L$ is symmetric and has a non-negative real part, this cannot work, since it has time-dependent coefficients, and so its eigenvalues and eigenfunctions also depend on time. Moreover, $B_L$ does not have a spectral gap at 0: its eigenvalues are not known to be bounded from below by a positive number.
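It may help to record what $B_L$ looks like when expanded. The identities below are a sketch assuming $B_L = \tfrac12(-i\nabla - u)^2$ (the "relative kinetic energy" reading of the coupling) and $\nabla\cdot u = 0$; they are consistent with the symmetry and non-negativity invoked in Remark 2.8.

% Expanding the relative kinetic-energy operator with \nabla\cdot u = 0:
\begin{align*}
  B_L\psi \;=\; \tfrac12(-i\nabla - u)^2\psi
          \;=\; -\tfrac12\Delta\psi \;+\; i\,u\cdot\nabla\psi \;+\; \tfrac12|u|^2\psi .
\end{align*}
% Symmetry: -\tfrac12\Delta and multiplication by \tfrac12|u|^2 are self-adjoint,
% and i\,u\cdot\nabla is self-adjoint because u\cdot\nabla is skew-adjoint for
% divergence-free u. Non-negativity:
%   \langle\psi, B_L\psi\rangle = \tfrac12\|(-i\nabla - u)\psi\|_{L^2_x}^2 \ge 0,
% with no positive lower bound in general (no spectral gap at 0).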
Thus, integrating (CON) over $\mathbb{T}^2$, the advective term vanishes, and using Lemma 2.7 we have
$$\frac{d}{dt}\int_{\mathbb{T}^2} \rho\, dx = 2\lambda \int_{\mathbb{T}^2} \mathrm{Re}(\bar\psi B\psi)\, dx \ge 0.$$
This implies that the overall mass of the normal fluid does not decrease with time. Put differently, the coupling causes superfluid to be converted into normal fluid, on average. However, the RHS of (CON) need not be non-negative pointwise in $\mathbb{T}^2$, so it is not inconceivable that the density of the normal fluid may locally vanish, or even take negative values! To prevent physically unrealistic density fields, and because our estimates require a strictly positive density, we fix a positive lower bound for ρ. Based on this, we define our existence time T, so that ρ does not drop below the lower bound until time T. Our goal is to show that this lower bound can be maintained for arbitrarily long, provided we begin from sufficiently small data.
Definition 2.9 (Existence time). Start with an initial density field $0 < m_i \le \rho_0 \le M_i$. Given $0 < m_f < m_i$, we define the existence time $T^*$ for the solution as the largest time up to which $\rho \ge m_f$ throughout $\mathbb{T}^2$. (2.11)
A formal solution to the continuity equation can be written using the method of characteristics. Let $X_\alpha(t)$ be the characteristic starting at $\alpha \in \mathbb{T}^2$; to wit, the characteristic solves the differential equation (2.12), where u is the velocity of the normal fluid, and along such characteristics the density evolves according to (2.13). From (2.11) and (2.13), it is clear that a sufficient condition to ensure that the density is bounded from below by $m_f$ is that $2\lambda \int_0^T \|\mathrm{Re}(\bar\psi B\psi)(t)\|_{L^\infty_x}\, dt$ remain below $m_i - m_f$ for all $T \le T^*$. (2.14) This can in turn be ensured through the sufficiency (2.15). So, we are looking to show that (2.15), actually a stronger version of it, holds irrespective of T, so that we can conclude that the density is always greater than $m_f$. This is achieved by selecting small enough data, and allows us to deduce the global existence of solutions. Since Bψ involves a second-order derivative, its $L^\infty_x$ boundedness leads us to high-regularity spaces. The momentum equation (NSE) is used to estimate $\|u\|_{L^2_t H^2_x}$ and $\|u\|_{L^2_t H^1_x}$, which are useful in handling parts of $\|B\psi\|_{L^\infty_x}$. As a by-product of these calculations, we are also able to bound $\partial_t u$, which plays a part in the compactness arguments for the strong time-continuity of u. The Schrödinger equation (NLS) is used to derive increasingly higher-order a priori estimates of ψ. In all these calculations, we work with a density that is only in $L^\infty_x$.
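To fix ideas, (2.12)-(2.13) can be written out explicitly. This is a sketch consistent with (CON) and $\nabla\cdot u = 0$; the constant matches the source term $2\lambda\,\mathrm{Re}(\bar\psi B\psi)$.

% Characteristics of the (incompressible) transport part of (CON):
\begin{align*}
  \frac{d}{dt} X_\alpha(t) &= u\big(X_\alpha(t), t\big), \qquad X_\alpha(0) = \alpha,
  && (2.12)\\
  \rho\big(X_\alpha(t), t\big) &= \rho_0(\alpha)
     + 2\lambda \int_0^t \mathrm{Re}\big(\bar\psi B\psi\big)\big(X_\alpha(s), s\big)\, ds.
  && (2.13)
\end{align*}
% Hence \rho \ge m_i - 2\lambda \int_0^T \|\mathrm{Re}(\bar\psi B\psi)(t)\|_{L^\infty_x}\, dt,
% and the sufficient condition (2.14) asks this integral to be at most m_i - m_f.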
3. A priori estimates
Throughout this section, we derive the required a priori estimates, using formal calculations. We assume that the wavefunction and velocity are smooth functions and that the density is bounded from below by $m_f > 0$ in [0, T]. Here, T is any time less than the local existence time $T^*$, and is extended to global existence in Section 3.5.
3.1. Mass estimate.
Lemma 3.1 (Decay of the superfluid mass). The superfluid mass $\|\psi(t)\|^2_{L^2_x}$ decays algebraically in time, as in (3.4).
Proof. Multiplying (NLS) by $\bar\psi$, taking the real part, and integrating over $\mathbb{T}^2$ gives (3.1). The Laplacian term on the RHS of (NLS) vanishes using integration by parts. By Lemma 2.7, the second term in (3.1) is bounded from below by the $L^{p+2}_x$ norm, so we get (3.2). Since we are in a domain of unit volume, Hölder's inequality leads to the closed differential inequality (3.3) for $\|\psi\|^2_{L^2_x}$. It is now easy to conclude that the mass of the superfluid (using the quantum mechanical interpretation of the wavefunction) decays algebraically in time, namely as in (3.4), where the relevant constant is the initial mass $\|\psi_0\|^2_{L^2_x}$ of the superfluid. □
3.2. Energy estimate. In this subsection (Section 3.2), we derive the governing equations for the energy E(t), built from the kinetic energy of the normal fluid together with the gradient and internal (self-interaction) energies of the wavefunction. In Section 3.3, we work with a higher-order energy X(t), combining it with E(t) in Section 3.3.3. We begin by acting with the gradient operator on (NLS), multiplying by $\nabla\bar\psi$, and taking the real part. Integrating over $\mathbb{T}^2$, we observe that the first term on the RHS vanishes upon integration by parts, due to the periodic boundary conditions. The second term on the RHS is similarly integrated by parts, yielding (3.6). Now, we rewrite the first term on the RHS by expressing the Laplacian in terms of the operator B, giving us a dissipative contribution to the energy estimate, namely a term involving $|\psi|^p\,\mathrm{Re}(\bar\psi B\psi)$. (3.7) We also have to account for the potential (self-interaction) energy of the wavefunction. To obtain this, we multiply (NLS) by $2\bar\psi$ and take the real part; multiplying the resulting equation by $\mu|\psi|^p$ and integrating over $\mathbb{T}^2$ produces the time derivative of $\frac{2\mu}{p+2}\|\psi\|^{p+2}_{L^{p+2}_x}$. The terms on the RHS are canceled once we include the energy of the normal fluid. We first rewrite (NSE) in the non-conservative form, and apply the Leray projector (see Remark 2.2) to get (NSE'). Here, P is the Leray projector, which maps a Hilbert space onto its divergence-free subspace, thus removing any purely gradient terms. We also apply the Leray projector to (NSE) itself, to obtain (NSE-L). Taking the inner product of both (NSE') and (NSE-L) with u, using incompressibility, and adding them, we arrive at the energy equation (3.10) for the normal fluid. Therefore, by adding (3.9) and (3.10), we obtain the energy equation (3.11). Thus, the energy is bounded from above as in (3.12), with $E_0$ denoting the initial energy of the system, defined in (3.13). Next, we wish to show that the energy actually decays algebraically in time, under a certain smallness condition on the initial data. First, note (3.14), where we used an argument similar to the one from the proof of Lemma 2.7 to get the last inequality. We then use (2.9) to arrive at (3.15). We bound the first term on the RHS using Hölder's inequality and Gagliardo-Nirenberg (GN) interpolation, as in (3.16). For the second term in (3.15), we interpolate the $L^3_x$ norm, while also applying the Hölder, Poincaré, and Young inequalities, as well as the GN interpolation inequality, to get (3.17). For sufficiently small values of κ and $E_0$, the RHS of (3.17) can be absorbed into the LHS of (3.15). We also use the Poincaré inequality to convert the last term on the LHS of (3.15) into a coercive term for the internal energy $\frac{2\mu}{p+2}\|\psi\|^{p+2}_{L^{p+2}_x}$ in E(t). To this end, we observe (3.18); in the last inequality there, we interpolated between the $L^{p+2}_x$ and $L^2_x$ norms, which may be done when p > 2.
By choosing κ sufficiently small, we can absorb the second term on the RHS into the LHS. For $p \le 2$, we can simply replace $\|\psi\|^{p+2}_{L^{p/2+1}_x}$ by $\|\psi\|^{p+2}_{L^2_x}$, since we are on a finite-size domain; thus, irrespective of the value of p, (3.15) acquires the required coercive terms on the LHS. While we have the required coercive terms on the LHS, we cannot yet obtain a decay estimate for E(t), since the second term on the RHS is out of reach using E only. In order to control it, we set up an analogous inequality for a higher-order energy.
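Before moving on to higher-order energies, it is worth recording the elementary ODE comparison behind the decay (3.4). Writing $M(t) := \|\psi(t)\|^2_{L^2_x}$, the chain (3.1)-(3.3) reduces to the following (a sketch; the constant $c$ collects $\lambda$, $\mu$, and the Hölder constant of the unit-volume torus):

% Mass decay from the differential inequality (3.3):
\begin{align*}
  \frac{d}{dt} M(t) \;\le\; -\,c\, M(t)^{1+\frac p2}
  \quad\Longrightarrow\quad
  M(t) \;\le\; M(0)\,\Big(1 + \tfrac{cp}{2}\, M(0)^{\frac p2}\, t\Big)^{-\frac 2p},
\end{align*}
% obtained by integrating d(M^{-p/2})/dt \ge cp/2. In particular,
% M(t) \lesssim (1+t)^{-2/p}, the algebraic rate used repeatedly in Section 3.5.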
3.3. Higher-order energy estimate. In this subsection, we obtain further bounds for ψ and u, this time with one more derivative than the energy E.
3.3.1. The Schrödinger equation. Once again, the first term on the RHS of (NLS) vanishes due to the boundary conditions. We now estimate the terms on the RHS of (3.20). For the first term, integration by parts produces a dissipative term for ψ. For the term $I_4$, we again integrate by parts, followed by Hölder's inequality.
Thus, (3.20) becomes (3.21). The first of these terms is bounded as in (3.22), using the Poincaré and GN interpolation inequalities; we have also applied Young's inequality to extract dissipative terms in the last step. We again use κ to denote a small number whose value shall be fixed later on, and $C_\kappa$ is a constant whose value depends on κ and the system parameters.
Similarly, for the second term on the RHS of (3.21), we obtain (3.23). Finally, we apply the Sobolev embedding and Poincaré inequalities to bound $I_7$, which leads to (3.24). Combining all these inequalities into (3.21) results in (3.25), where we have absorbed $\kappa\|D^3\psi\|^2_{L^2_x}$ into the LHS with a sufficiently small κ.
3.3.2. The Navier-Stokes equations. We shall now derive a higher-order estimate for the velocity field, which shall be combined with (3.25). Starting with (NSE'), we first multiply it by $\partial_t u$ and integrate over the domain to obtain (3.26), whose RHS we now control. For the first term, we use the GN interpolation and Poincaré inequalities, and Young's inequality then lets us extract the required dissipative term. In the second integral in (3.26), the Bψ term is handled via the GN interpolation and Young's inequalities; in the third integral, the term Bψ is handled just like in $I_9$. Finally, the last term in (3.26) produces the contribution (3.27), involving $\mathrm{Re}(\bar\psi B\psi)|u|^2$. We estimate the second term on the RHS of (3.27) using the Hölder and GN interpolation inequalities, and the third term similarly. Substituting the above estimates into (3.26), we arrive at (3.28), where $C_\kappa$ depends on κ and the system parameters.
So far, we have obtained equations for $\|\nabla u\|_{L^2_x}$ and $\|\Delta\psi\|_{L^2_x}$, while including the higher-order dissipation corresponding to the wavefunction, $\|\nabla(B\psi)\|^2_{L^2_x}$. What remains is to consider the higher-order velocity dissipation $\|\Delta u\|^2_{L^2_x}$. To this end, we multiply (NSE') by $-\theta\Delta u$, with θ to be determined, and integrate over the domain. This gives (3.29); its first integral produces $\|\Delta u\|^2_{L^2_x}$ with a small coefficient, so it can be absorbed into the LHS. The second integral is manipulated just as $I_8$. The bound for the integral $I_{14}$ follows from the GN interpolation, Poincaré, and Young inequalities, and the remaining integrals in (3.29) are handled in a similar manner. Thus, (3.29) becomes (3.30). We now add (3.25), (3.28), and (3.30), and observe that the last three terms on the RHS are the same as $I_5$, $I_6$, and $I_7$ in (3.21); we bound them just as in (3.22)-(3.24). Choosing θ sufficiently small, and subsequently κ also small enough, we absorb the $\|\nabla(B\psi)\|^2_{L^2_x}$ and $\|\Delta u\|^2_{L^2_x}$ terms on the RHS into the corresponding terms on the LHS. Finally, what remains is (3.31), where the leftover κ-terms have been absorbed with an appropriate choice of κ. This is the higher-order energy estimate.
3.3.3. The Grönwall inequality step. Having derived the equations for the higher-order norms of u and ψ, while accounting for the relevant dissipative terms, the goal now is to use a Grönwall-type argument.
Lemma 3.2 (Algebraic decay rate for energies). The sum of the energy E(t) and the higher-order energy X(t), built from $\|\Delta\psi(t)\|^2_{L^2_x}$ and the corresponding first-order velocity norm, decays algebraically in time.
Proof. We rewrite (3.31), after updating θ, κ, $E_0$, and $S_0$ to be sufficiently small, as (3.32), where $Q_1(X+E)$ is a strictly super-linear polynomial, while $Q_2(X+E)$ contains both linear and super-linear terms. To arrive at (3.32), we have also expanded the Sobolev norms of the velocity and of the wavefunction. Next, we add (3.11) and (3.32) to end up with (3.33). We use the Poincaré inequality to rewrite Y in order to get decaying norms; indeed, the dissipation dominates X. Additionally, we also use the analysis in (3.14)-(3.17), which in turn can be downgraded to $\|\nabla\psi\|^2_{L^2_x}$ using the Poincaré inequality. One can also control the $\|B\psi\|^2_{L^2_x}$ term on the RHS of (3.35) by means of the estimates (3.16) and (3.17) and the GN inequality. After all of the above manipulations, (3.35) reads as (3.36), where β depends on the system parameters, and the polynomials $Q_1$ and $\tilde Q_2$ are strictly super-linear. The first term on the RHS results from the estimates in (3.18); as for the second term on the RHS, we note that it can be absorbed into the LHS by tweaking $S_0$.
For notational convenience, we write Z := X + E and use $Q := Q_1 + \tilde Q_2$ to denote the strictly super-linear polynomial on the RHS of (3.36), leaving us with (3.37). The Duhamel solution for Z(t) obeys (3.38). We set the size of the initial data as $Z(0) =: Z_0 \le \varepsilon$. We use a bootstrap argument to show that $Z(t) \le 3\varepsilon$ for $t \in [0, T]$: specifically, we prove that the hypothesis $Z(t) \le 4\varepsilon$ for $t \le t_1$ leads to the stronger conclusion $Z(t) \le 3\varepsilon$ for $t \le t_1$, where $t_1 \in [0, T]$. To this end, we estimate each integral on the RHS of (3.38). The first integral is split into two parts to take advantage of the exponential decay factor, the point being the exponential decay of the first piece compared to the algebraic decay of the second. The second integral in (3.38) is more straightforward. Now we choose ε small enough (call it $\varepsilon_0$) so that the RHS is at most ε; similarly, the contribution from the first integral term in (3.38) is made less than ε for all $S_0 \le \varepsilon_0 < 1$ small enough. This completes the bootstrap argument, and we can see that indeed $Z(t) \le 3\varepsilon$. For $\varepsilon_0$ sufficiently small, the linear dissipation in (3.37) dominates the nonlinearities, and we may write the equation as (3.40), whose solution, following (3.37)-(3.39), obeys (3.41). Returning to (3.35), we now absorb the last term on the RHS into the LHS, which is possible for small enough data, since $\sup_{0\le t\le T} Z(t) \le 3\varepsilon_0$. Furthermore, in the regime of small data, the super-linear polynomial $Q_1$ can be dominated by the linear term on the RHS, which leaves us with (3.42). Employing the bound for Z from (3.41) in the RHS of (3.42) and integrating over [0, T], we estimate the dissipation as in (3.43); the last inequality there holds because $S_0 < 1$. Thus, we can achieve small values for the RHS of (3.43) by selecting appropriate $Z_0$ and $S_0$.
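The shape of the Grönwall/bootstrap step can be summarized as follows (a sketch; $S(t)$ stands for the forcing produced by the already-decaying lower-order quantities, and the displayed Duhamel bound is the abstract form of (3.38)):

% Abstract form of the dissipative inequality (3.37) and its Duhamel bound:
\begin{align*}
  \frac{d}{dt} Z + \beta Z \;\le\; Q(Z) + S(t)
  \quad\Longrightarrow\quad
  Z(t) \;\le\; e^{-\beta t} Z_0
   + \int_0^t e^{-\beta (t-s)} \big[\, Q\big(Z(s)\big) + S(s) \,\big]\, ds .
\end{align*}
% Bootstrap: assume Z \le 4\varepsilon on [0, t_1]. Since Q is strictly
% super-linear, Q(Z) \le C\varepsilon\, Z there, so for \varepsilon small the
% integral terms total at most 2\varepsilon, improving the bound to
% Z \le 3\varepsilon on [0, t_1]; by continuity the bound propagates to [0, T].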
Another useful estimate for the dissipative terms results from integrating (3.42) over the time interval [t, 2t], where $t \ge 1$. This gives (3.44), a time-decaying bound on the dissipation, which is necessary to obtain a sharp control of the dynamics at large times. □
3.4. The highest-order a priori estimate for ψ. From the previous analysis, we have obtained $B\psi \in L^2_{[0,T]} H^1_x$. However, as pointed out in the discussion following Definition 2.9, we seek control of Bψ in $L^1_{[0,T]} L^\infty_x$. To this end, we would like to obtain an even higher-order a priori estimate, only for ψ.
Lemma 3.3 (Algebraic decay rate for the highest-order norm of ψ). For $S_0$, $E_0$, and $Z_0$ small enough, and with $s = \frac54$, the homogeneous Sobolev norm $\|\psi(t)\|_{\dot H^{2s}_x}$ decays algebraically in time. Moreover, if the initial data are sufficiently small, the higher-order dissipation $\|\psi\|_{L^2_{[0,T]}\dot H^{2s+1}_x}$ may be made as small as required, independent of the time T.
The resulting control is precisely what we need in order to handle the term $|u|^2\psi$ in the coupling, using the a priori estimates up to this point.
Proof. With $s = \frac54$, we apply $(-\Delta)^s$ to (NLS) to get (3.45). Just as in Sections 3.2 and 3.3.1, we multiply by $(-\Delta)^s\bar\psi$ and integrate the real part over $\mathbb{T}^2$; as a result, the second term on the LHS yields the dissipative contribution. Using the self-adjointness of the Laplacian, we conclude that the first term on the RHS of (3.45) vanishes. For the second term on the RHS of (3.45), we obtain (3.46); in the last expression of (3.46), we expand the operator B and use Hölder's inequality to arrive at (3.47). Rewriting the LHS in terms of the homogeneous Sobolev norms and the RHS in terms of the usual Sobolev norms, we have (3.48). Since $2s - 1 = \frac32$, the algebra property of Sobolev norms is applicable. Using this, (3.4), and (3.41), we estimate the RHS of (3.48). The first term requires interpolation and yields (3.49), where we have retained only the terms that decay the slowest; in arriving at the last inequality in (3.49), we use the fact that $Z_0 < 1$. For the second term on the RHS of (3.48), we similarly obtain (3.50). While the $H^{3/2}_x$ norm could have been interpolated between $H^1_x$ and $H^2_x$, this does not provide an improved estimate, since both of the resulting velocity norms are bounded by (3.43). In the last term of (3.48), in view of Remark 2.5, we obtain (3.51), where the penultimate inequality is obtained using (3.4) and (3.41). Therefore, (3.48) becomes (3.52). With the Poincaré inequality, we replace the dissipative term on the LHS by a coercive term for $W(t) := \|\psi(t)\|^2_{\dot H^{2s}_x}$. We employ calculations similar to (3.39) to estimate the integrals, i.e., splitting them over $[0, \frac t2]$ and $[\frac t2, t]$. We also use (3.43) to simplify the exponential factors outside the integrals. In all, we end up with (3.53) for all $t \in [0, T]$. We simplify further by making use of (3.43) and (3.44), leading to (3.55). We use (3.55) in (3.52) and integrate over [0, T] to obtain the final dissipative estimate (3.56). This shows that, with small enough data, one can achieve an arbitrarily small value (independent of T) for this highest-order dissipation.
Similarly to (3.44), it is possible to also get a time-decaying estimate by integrating (3.52) over the time interval [t, 2t] for $t \ge 1$. This leads to (3.57), where we have used (3.44) and (3.55), and retained only the slowest-decaying terms. □ The high-norm control in (3.56) and (3.57) is important because these inequalities can be translated into the desired bounds (on two fewer derivatives) for Bψ. Indeed, in (3.58), for the last three terms, we replaced the homogeneous Sobolev norms by the larger inhomogeneous norms. Combining the analysis in (3.49)-(3.51) with (3.4), (3.43), (3.56), and (3.58), we get the sought-after dissipation bound (3.59). The estimates in (3.59) and (3.60) are used to ensure that the density remains bounded from below.
3.5. Ensuring positive density. We now have all the a priori estimates needed to return to (2.15). For it to hold true, a sufficient condition is (3.61). Depending on the value of p, we now divide the analysis into several cases: $1 \le p < 2$, $p = 2$, $2 < p < 4$, $p = 4$, and $p > 4$.
3.5.1. The case $1 \le p < 2$. Owing to the Poincaré inequality and (3.43), we have (3.62), and this bound holds for all $p \ge 1$. For the first term of (3.61), we integrate (3.4), yielding (3.63), since $\frac2p > 1$. From (3.59), (3.62), and (3.63), we conclude that the condition in (3.61) can be achieved if the size of the initial data is sufficiently small. Thus, the density satisfies $m_f \le \rho \le M_i + m_i - m_f$ for all T > 0, as long as the initial data are small enough.
For $p \ge 2$, the time integral of the superfluid mass $\|\psi(t)\|^2_{L^2_x}$ cannot be bounded uniformly in [0, T]. This is where the decaying estimates in (3.44) and (3.60) prove to be useful.
3.5.2. The case p = 2. We split the time integral in (3.61) over the ranges $0 \le t \le 1$ (short time) and $t \ge 1$ (long time). We start with the long-time estimate of the LHS of (3.61) with p = 2. For the first term, we have (3.64); using the Poincaré inequality and (3.44) gives (3.65). This leads us to (3.66), which is the long-time contribution (independent of N) of the constraint in (3.61); it can be made as small as required with an appropriate choice of $W_0 + Z_0 + S_0$. Finally, we verify the short-time control as well. The superfluid mass bound in (3.4) yields (3.67); similarly, using (3.43), we get (3.68). From (3.59), (3.67), and (3.68), we obtain (3.69), which can be made small enough to satisfy (3.61). This lets us conclude that the density is bounded from below uniformly in time for the case p = 2. Thus, we have the necessary global bound.
3.5.3. The case $2 < p < 4$. We begin, once again, with the long-time analysis, i.e., for $t \ge 1$. From (3.4), we have (3.70). Using the Poincaré inequality and (3.44), we obtain (3.71), and combining the two gives (3.72). Once again, the slowest-decaying term is the dominant one. Therefore, we have (3.73); the sum converges (uniformly in N) because p < 4. Hence, we obtain good long-time control of the LHS of (3.61) for $2 < p < 4$.
What remains is to check that we also maintain short-time control. To this end, we have (3.74) from (3.4), and (3.75) from (3.43), which is the short-time control we are seeking. This implies global solutions, since the density is bounded from below uniformly in time.
3.5.4. The case $p \ge 4$. The arguments for short-time control in Section 3.5.3 remain valid even for $p \ge 4$. However, the long-time estimates break down: specifically, the geometric series in (3.73) diverges, logarithmically in T for p = 4 and polynomially in T for p > 4.
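The divergence just described, and the resulting scaling of the existence time, can be made transparent by the following dyadic bookkeeping. This is a sketch: it uses only the algebraic mass decay $\|\psi(t)\|^2_{L^2_x} \lesssim (1+t)^{-2/p}$ from (3.4), and the exponent 4/p below reflects the quadratic way in which the decaying wavefunction norms enter the relevant long-time terms, which is an assumption made for illustration.

% Dyadic decomposition of the long-time integral over [1, T], with T ~ 2^N:
\begin{align*}
  \sum_{k=0}^{N-1} \int_{2^k}^{2^{k+1}} (1+t)^{-\frac4p}\, dt
  \;\lesssim\; \sum_{k=0}^{N-1} 2^{k\left(1-\frac4p\right)}
  \;=\;
  \begin{cases}
    O(1) & \text{if } p < 4 \text{ (convergent geometric series)},\\
    O(N) = O(\log T) & \text{if } p = 4,\\
    O\big(T^{\frac{p-4}{p}}\big) & \text{if } p > 4 .
  \end{cases}
\end{align*}
% Since these long-time terms carry a prefactor of the size of the data, say
% \varepsilon, keeping the p > 4 total below a fixed budget requires
% \varepsilon\, T^{(p-4)/p} \lesssim 1, i.e. T \sim \varepsilon^{-p/(p-4)},
% which is the scaling recorded in Theorem 2.4.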
(3.76) Therefore, in this scenario, global-in-time estimates elude us, due to the logarithmic/polynomial dependence on T. We can, however, guarantee almost global existence of solutions. Given a set of system parameters, we can ensure that $\rho \ge m_f$ for any finite time T > 0, as long as we start from small enough initial data (depending on T). In other words, if the size of the data is ε, then we have $T \sim e^{\varepsilon^{-4}}$ for p = 4 and $T \sim \varepsilon^{-\frac{p}{p-4}}$ for p > 4. This is the scaling expressed in Theorem 2.4.
4. Existence of weak solutions (Proof of Theorems 2.3 and 2.4)
Having derived the required a priori estimates, we now establish the existence of a weak solution for a truncated form of the governing equations, and then pass to the limit.
4.1. Constructing the semi-Galerkin scheme. The finite-dimensional wavefunction and velocity are constructed using eigenfunctions of the Laplacian and the Leray-projected Laplacian, respectively.
4.1.1. The approximate wavefunction. Consider the negative Laplacian $-\Delta$ on the torus $\mathbb{T}^2$, with the domain $D(-\Delta) = H^2$. It has a discrete set of non-negative and non-decreasing eigenvalues $\{\beta_j\}$, and the corresponding eigenfunctions $\{b_j\} \subset C^\infty(\mathbb{T}^2)$ can be chosen to be orthonormal in $L^2_x$ and orthogonal in $H^1_x$. We define the approximate wavefunction as
$$\psi^N(x,t) := \sum_{k=0}^N d^N_k(t)\, b_k(x), \qquad (4.1)$$
for $N \in \mathbb{N}\cup\{0\}$ and $d^N_k(t) \in \mathbb{C}$.
4.1.2. The approximate velocity. We consider the Leray-projected Laplacian (or Stokes operator) $A = -\mathbb{P}\Delta$ with the domain $D(A) = L^2_d \cap H^2$ (see [RRS16, Chapter 2], for instance). The Stokes operator (like the Laplacian) has a discrete set of non-negative and non-decreasing eigenvalues $\{\alpha_j\}$, and the corresponding divergence-free, vector-valued eigenfunctions $\{a_j\} \subset C^\infty(\mathbb{T}^2)$ can be chosen to be orthonormal in $L^2_{d,x}$ and orthogonal in $H^1_x$. We define the approximate velocity as
$$u^N(x,t) := \sum_{k=0}^N c^N_k(t)\, a_k(x), \qquad (4.2)$$
for $N \in \mathbb{N}\cup\{0\}$ and $c^N_k(t) \in \mathbb{R}$.
4.2. The initial conditions. 4.2.1. The initial wavefunction and initial velocity. We begin by defining $P_N$ (respectively, $Q_N$) to be the projections onto the space spanned by the first N + 1 eigenfunctions of A (respectively, $-\Delta$). Then, we truncate the initial conditions for the velocity and wavefunction accordingly. Before passing to the limit, it is necessary to establish that the truncated initial conditions converge to the actual ones in the relevant norms.
Lemma 4.1 (The projections $Q_N$ and $P_N$ are convergent). If $\psi \in H^r_x$ and $u \in H^s_{d,x}$ for any $r, s \ge 0$, then (1) $Q_N\psi \to \psi$ in $H^r_x$, and (2) $P_N u \to u$ in $H^s_{d,x}$. The proof utilizes the equivalence of norms between Sobolev spaces and fractional powers of the negative Laplacian/Stokes operator (see Theorem 2.27 in [RRS16]). Given the regularity of $\psi_0$ and $u_0$, we deduce the convergence of the approximate initial conditions by applying Lemma 4.1.
4.3.1. The continuity equation. Having described the (approximate) initial conditions and the semi-Galerkin scheme, we now establish the existence of solutions to the "approximate" equations, starting with the continuity equation, given by (4.4). Just as in (2.15), a constraint of the same form fixes the local existence time $T_N$; since the norms in (4.5) are bounded by the size of the initial data, the time $T_N$ is independent of N. Hence, we use T to denote the time of existence, with T arbitrarily large for $1 \le p < 4$ and T bounded for $p \ge 4$ (as specified in Theorem 2.4). We now establish the analogs of Lemmas 2.2 and 2.3 from [Kim87]. These constitute the existence of a unique solution to (4.4), and a Picard iteration scheme for the same, respectively.
Lemma 4.2. Given approximate fields $(u^N, \psi^N)$ as above, the approximate continuity equation (4.4) admits a unique solution $\rho^N \in C^0_{[0,T]} C^0_x$.
Proof. Consider the evolution equation for the characteristics of the flow,
$$\frac{d}{dt} x^N(t, y^N) = u^N\big(x^N(t, y^N), t\big), \qquad x^N(0) = y^N \in \mathbb{T}^2. \qquad (4.6)$$
Since $u^N \in C^0_{[0,T]} C^1_x$, there exists a unique solution $x^N(t, y^N) \in C^1_{[0,T]} C^1_x$; in particular, $x^N$ is well-defined. Owing to the incompressibility of the flow $u^N$, the Jacobian of the flow map has unit determinant, allowing us to conclude that the characteristics are $C^1$ diffeomorphisms and are therefore invertible. We now write the solution to (4.4) along characteristics as (4.7). That (4.7) uniquely solves (4.4) can be verified using the "inverse characteristics" y(t, x): the required identity holds for any $\tau \in \mathbb{R}$, where the last equality is due to Euler's chain rule. □
Now, we consider a convergent sequence of velocities and wavefunctions that belong to the finite-dimensional subspaces spanned by the truncated Galerkin scheme. Given such a convergent sequence, we show that the sequence of density fields satisfying (4.4) is also convergent, and this shall be used to complete a contraction mapping argument below.
Lemma 4.3. Let $(u^N_n, \psi^N_n)$ converge, in the sense above, to $(u^N, \psi^N)$, and denote by $\rho^N_n \in C^0_{[0,T]} C^0_x$ the unique solution to the corresponding system (4.4). Then $\rho^N_n$ converges in $C^0_{[0,T]} C^0_x$ to $\rho^N$, where $\rho^N$ solves (4.4).
Proof. We begin by defining $\Psi^N_n := 2\lambda\,\mathrm{Re}(\bar\psi^N_n B^N\psi^N_n)$ and comparing the solutions along their characteristics. Consider the map $y \mapsto x^N_n(t, y)$ and define its inverse $y^N_n(t, x)$ (this is just the inverse of the characteristic, i.e., the flow run in reverse). Since the flow is incompressible, the matrix $\partial y^N_n/\partial x$ is invertible; moreover, as in the proof of the previous lemma, $\partial_t y^N_n = -u^N_n \cdot \nabla_x y^N_n$, so the derivatives of $y^N_n$ with respect to both space and time are bounded uniformly in n, t, and x. Thus, by the Arzelà-Ascoli theorem, we can extract a subsequence of $y^N_n$ that converges, and, just as in (4.8), the corresponding solutions along characteristics converge accordingly. Given the convergence of $y^N_n$, and because $\rho^N_0 \in C^1_x$, the first term on the RHS of the comparison vanishes. The second and third terms vanish on account of the following argument: $\Psi^N_n$ has its highest-order term of the form $\bar\psi^N_n \Delta\psi^N_n$ (a second derivative), and so the assumed convergence of $\psi^N_n$ ensures that everything is controlled in $C^0_x$, uniformly in n. □
4.3.2. The Navier-Stokes equation. We now consider an "approximate momentum equation", composed of the approximate wavefunction and velocity fields defined by (4.1) and (4.2), respectively; namely, (4.11). Recall that the incompressibility condition is built in, because the eigenfunction basis used to construct the velocity fields is divergence-free. Now, taking the $L^2$ inner product of (4.11) with $a_j(x)$ for $0 \le j \le N$, we arrive at a system of equations for the coefficients describing the time dependence of the approximate velocity fields, as in (4.12). Since we have both lower and upper bounds on the density in the chosen interval of time, we can use Lemma 2.5 in [Kim87] to show that the matrix $R^N(t)$ is invertible. Therefore, we arrive at (4.13), which is the desired evolution equation (written vectorially) for the coefficients $c^N_j(t)$.
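In coordinates, the projected momentum equation has the following structure (a sketch: $F^N_j$ collects the projections of the advection, viscosity, drag, and coupling terms, and writing $R^N$ as the density-weighted Gram matrix is suggested by, not quoted from, the invertibility argument above):

% Semi-Galerkin ODE system for the velocity coefficients c^N(t):
\begin{align*}
  R^N_{jk}(t) &:= \big\langle \rho^N(t)\, a_k,\; a_j \big\rangle, \qquad 0 \le j, k \le N,\\
  R^N(t)\,\dot{c}^N(t) &= F^N\big(c^N(t), d^N(t), \rho^N(t)\big)
  \;\;\Longrightarrow\;\;
  \dot{c}^N(t) = \big(R^N(t)\big)^{-1} F^N .
\end{align*}
% Since m_f \le \rho^N \le M_f, the matrix R^N is uniformly positive definite
% (\xi^T R^N \xi = \int \rho^N |\sum_k \xi_k a_k|^2 \ge m_f |\xi|^2 by the
% L^2-orthonormality of {a_j}), hence invertible, and the ODE system admits a
% unique continuous solution by the Picard/contraction argument.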
4.3.3. The nonlinear Schrödinger equation. As in the previous section, we derive an evolution equation for the coefficients of the approximate wavefunction, by considering an "approximate NLS", namely (4.14). Recall that $B_L = B - \mu|\psi|^p$, i.e., the linear (in ψ) part of the coupling operator. Performing an $L^2$ inner product with $b_j(x)$, we get (4.15). Written vectorially, the evolution equation for the coefficients $d^N_j(t)$ becomes (4.16).
4.3.4. Fixed point argument for the coefficients. For a fixed N, a standard contraction mapping argument shows that (4.13) and (4.16) have unique solutions that are continuous in [0, T]. For a pair $(u^N_n, \psi^N_n)$, equivalently $(c^N_n, d^N_n)$, using Lemma 4.2 we can find a solution $\rho^N_n$. Owing to the smoothness (in space) of the eigenfunctions used in the approximate velocity and wavefunction, performing an iteration on the triplet $(c^N_n, d^N_n, \rho^N_n)$ and using Lemma 4.3, we conclude that the sequence $\rho^N_n$ converges to $\rho^N \in C^0_{[0,T]} C^0_x$.
4.4. Compactness arguments. We now extract convergent subsequences from the a priori estimates in Section 3. Beginning with the density, we know that $\rho^N$ is bounded in $L^\infty_t L^\infty_x$; moreover, from (4.4), we obtain the bound
(4.18). The second inequality is due to the (compact) embedding $L^2_x \subset H^{-1}_x$ for $\mathbb{T}^2$. All the terms in the last line are finite (uniformly in N) by virtue of the a priori estimates. Therefore, using the Aubin-Lions-Simon lemma, we conclude the strong convergence of a subsequence of the density, as in (4.19). Consider a relabeled subsequence $\rho^N$ that strongly converges to ρ in $C([0,T]; H^{-1}_x)$, so that (4.1) and (4.2) are also appropriately relabeled. For a.e. $s, t \in [0,T]$ and any $\omega \in H^1_x$, the pairing $\langle\rho^N(t), \omega\rangle_{H^{-1}\times H^1}$ is uniformly continuous on [0, T], uniformly in N, due to (4.18). Due to the embedding $H^1_x \subset L^r_x$ for all $1 \le r < \infty$, we conclude, using the Arzelà-Ascoli theorem, that $\rho^N$ is relatively compact in $C_w([0,T]; L^r_x)$. We move on to the velocity. Based on the a priori estimates, we extract a subsequence of $u^N$ that converges weakly to u. Applying the Lions-Magenes lemma (see [Tem77, Chapter 3]), we deduce that $u \in C([0,T]; H^1_{d,x})$. Based on the $L^\infty_t L^\infty_x$ bound on the density and the above strong convergences, it is easy to see that $\rho^N u^N$ and $\rho^N u^N \otimes u^N$ converge in $C([0,T]; L^2_x)$ to ρu and ρu ⊗ u, respectively. Next, we consider the wavefunction. Again, we extract a subsequence that converges weakly in $L^\infty_{[0,T]} H^{5/2}_x \cap L^2_{[0,T]} H^{7/2}_x$. From this and (NLS), we have $\partial_t\psi \in L^2_{[0,T]} H^{3/2}_x$; thus, the Lions-Magenes lemma yields $\psi \in C([0,T]; H^{5/2}_x)$. Additionally, we also have $B^N\psi^N \to B\psi$ in $C^0_t L^2_x$, due to the regularity of u and ψ. As for the initial conditions, by construction itself (Section 4.2.2), we have $\rho^N_0 \to \rho_0$ in $L^r_x$ for $1 \le r < \infty$; also, Lemma 4.1 states that $\psi^N_0$ and $u^N_0$ converge to $\psi_0$ and $u_0$ in $H^{5/2}_x$ and $H^1_{d,x}$, respectively. For the momentum, we have (4.20), where $\frac1r + \frac1{r'} = \frac12$; using the embedding $H^1_x \subset L^{r'}_x$ to handle the velocity in the first term of the RHS, it is easy to see that the initial momentum converges in the $L^2_x$ norm. The approximate solutions $(\psi^N, u^N, \rho^N)$ are smooth enough to satisfy (2.1)-(2.3). The aforementioned compactness results allow us to pass to the limit $N \to \infty$ and arrive at the weak solutions (ψ, u, ρ).
4.5. Renormalizing the density. At this point, we know that $\rho^N \overset{*}{\rightharpoonup} \rho$ in $L^\infty_t L^\infty_x$. We wish to use the technique of renormalization to extend this to $\rho^N \to \rho$ in $C^0_t L^r_x$, for $1 \le r < \infty$. To achieve this, we will adapt a classical argument (see, for instance, Theorem 2.4 in [Lio96b]). We begin by defining a sequence of unit-mass mollifiers $\zeta_h(x) = \frac{1}{h^2}\zeta\big(\frac{x}{h}\big)$, where h will eventually be taken to 0. Next, for a given weak solution $\rho \in L^\infty_t L^\infty_x$, we mollify (CON) to obtain (4.21), where $g_h := g * \zeta_h$, $\Psi := 2\lambda\,\mathrm{Re}(\bar\psi B\psi)$, and $R_h := u\cdot\nabla\rho_h - (u\cdot\nabla\rho)_h$ is a commutator. We multiply this by $\eta'(\rho_h)$, for a $C^1$ function $\eta: \mathbb{R} \to \mathbb{R}$, which yields (4.22). The Sobolev embedding $H^2_x \subset W^{1,r_1}_x$ for any $r_1 \in [1,\infty)$ implies that $u \in L^2_t W^{1,r_1}_x$. From Lemma 2.3 in [Lio96b], we note that $R_h$ vanishes in $L^2_t L^{r_1}_x$ (and also in $L^\infty_t L^2_x$) as $h \to 0$, by choosing $r_1 > 2$. Similarly, $\Psi_h$ converges to Ψ in $C^0_t L^2_x$. Finally, note that $\eta'(\rho_h)$ is uniformly continuous, since ρ (and $\rho_h$) take values in a compact subset of $\mathbb{R}$. Therefore, using a test function σ, we may pass to the limit $h \to 0$ in (4.22). In other words, if ρ is a weak solution of the continuity equation, then η(ρ) solves (in a weak sense)
$$\partial_t\eta(\rho) + u\cdot\nabla\eta(\rho) = \eta'(\rho)\Psi. \qquad (4.23)$$
This is the renormalized continuity equation. Taking the difference of (4.21) for $h_1, h_2 > 0$, we write the analog of (4.22) for $\eta(\rho_{h_1} - \rho_{h_2})$, with $\eta(x) = x^{2n}$, where $n \in \mathbb{N}$.
Integrating over $\mathbb{T}^2$ leads to (4.24). Since $\psi \in L^\infty_t H^{5/2}_x$, it follows from the Sobolev embedding and Hölder's inequalities that $\Psi = 2\lambda\,\mathrm{Re}(\bar\psi B\psi) \in L^1_t L^{r_1}_x$ for any $r_1 \in [1,\infty)$. Between this, the commutator estimate in Lemma 2.3 of [Lio96b], and the boundedness of $\rho_0$, we find that all of the terms on the RHS of (4.24) vanish as $h_1, h_2 \to 0$, giving us a Cauchy sequence in $C([0,T]; L^{2n}_x)$. Hence, $\rho_h$ converges to ρ in $C([0,T]; L^{2n}_x)$. We have, so far, proved that our "original approximations" $\rho^N$ of the continuity equation converge in $C_w([0,T]; L^r_x)$ to ρ, and that ρ also belongs to $C([0,T]; L^{2n}_x)$. To achieve what we set out to prove, i.e., that $\rho^N$ converges strongly in $C([0,T]; L^r_x)$ to ρ, it remains to show that the $L^r_x$ norms are continuous in time. It is sufficient to illustrate this for r = 2 (or n = 1), in order to deduce it for the other values of r. Explicitly, if there is a sequence of times $t_N \to t$, then we need $\rho^N(t_N)$ to converge in $L^2_x$ to ρ(t). Returning to (4.4), we look at its renormalized version with $\eta(x) = x^2$, and integrate over $\mathbb{T}^2$ (and then from 0 to $t_N$) to get (4.25). Since we know that $\rho \in C([0,T]; L^2_x)$, we can do the same calculation with (CON), except over the time interval from 0 to t. Subtracting the last two equations and taking the limit $N \to \infty$, the first terms on the RHS cancel (recall the construction of $\rho^N_0$ from Section 4.2.2). Thanks to the uniform boundedness of $\bar\psi^N B^N\psi^N$ in $L^1_{[0,T]} H^{3/2}_x$, we can use the strong convergence in (4.19) to handle the first remaining term on the RHS; the second and third terms follow from simple Hölder inequalities and the strong convergence of $\psi^N$ and of $B^N\psi^N$; finally, the last term is integrable on [0, T], so as $t_N \to t$, it vanishes. In summary, $\rho^N(t_N) \to \rho(t)$ in $L^2_x$, which, along with the weak-in-time continuity deduced earlier, implies strong convergence of $\rho^N$ to ρ in $C^0_t L^{2n}_x$ for all $n \in \mathbb{N}$. Interpolating between Lebesgue norms extends this result to $C^0_t L^r_x$ for all $r \in [1,\infty)$.
4.6. The energy equality. The smooth approximations to the weak solutions satisfy an energy equation, given by (2.8) with $(\psi, u, \rho)$ replaced by $(\psi^N, u^N, \rho^N)$, for a.e. $t \in [0, T]$. From our choice of the initial conditions and their approximations (see Section 4.2), we can ensure that, as $N \to \infty$, the RHS converges to the initial energy $E_0$ defined in (3.13); indeed, for the first term, this follows as in (4.27). Moreover, based on the results of Section 4.4, we can conclude that all the terms on the LHS of (4.26) converge strongly to the corresponding terms with the approximate solutions replaced by the weak solution. The first term on the LHS can be dealt with in the same way as the first term on the RHS in (4.27), by simply including a $\sup_t$ outside the absolute values. □ This completes the construction of the solutions. Together with the global/almost-global estimates from Section 3, we can conclude the results of Theorems 2.3 and 2.4.
| 2023-05-23T01:16:20.470Z | 2023-05-21T00:00:00.000 | {
"year": 2024,
"sha1": "c3f1f062deec1acada2abba769b9653018db703c",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1361-6544/ad3cae/pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "238443de2cc756276ad76a7ed65da9703a1b5a8f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
3610705 | pes2o/s2orc | v3-fos-license | Gravity and neuronal adaptation, in vitro and in vivo—from neuronal cells up to neuromuscular responses: a first model
For decades it has been shown that acute changes in gravity have an effect on the neuronal systems of humans and animals on different levels, from the molecular level to the whole nervous system. The functional properties and gravity-dependent adaptations of these system levels have been investigated with no or barely any interconnection. This review summarizes the gravity-dependent adaptation processes in human and animal organisms, from the in vitro cellular level with its biophysical properties to the in vivo motor responses and underlying sensorimotor functions of human subjects. Subsequently, a first model for the short-term adaptation of neuronal transmission is presented and discussed, integrating the responses of the different levels of organization to changes in gravity.
Introduction
Of the four fundamental interactions (strong interaction, weak interaction, electromagnetic force, and gravity), gravity is the weakest. Nevertheless, gravity is responsible for … (Bloomberg et al. 1999; Homick and Reschke 1977; Paloski et al. 1993). Life is based on these sensorimotor competencies.
Decades of space research made gravity-induced changes in the NS apparent, and since the first manned space missions, the effect of microgravity on humans has been investigated, as various effects on astronauts and cosmonauts have been observed. With an emphasis on weightlessness and our astronomical neighbors Mars and the moon (Margaria and Cavagna 1964;Spudis 1992), authors found directly related health effects, among others a persistent modulation in the sensory (Paloski et al. 1993;Reschke et al. 1986) and motor system (Blottner and Salanova 2015) and the resulting structural loss of muscle (Di Prampero and Narici 2003) and bone mass (Loomer 2001). In addition, there are modulations in the neuromuscular system underlying those health-related changes that open up a lot of questions on how gravity, and the absence of it, influences the NS. These questions led to numerous experiments to investigate the effect of varying gravity conditions on the different levels of organization, from the molecular and cellular level up to the whole NS and the interconnection with movement control and mobility. The functional properties of these levels were thoroughly investigated, however, with barely any interconnection.
The aim of this review is to give an overview of the acute gravity-dependent adaptations of the NS of humans and animals from the molecular level up to the sensorimotor systems, and to present and discuss a first model of neuronal short-term adaptation that takes into account the findings on the different levels of organization. This has been done on the basis of in vitro and in vivo studies executed in varying gravity environments. Consequences and prospects for space missions and countermeasure applications were integrated. Regarding the gravity-dependency on a functional level of human organisms, besides direct motor responses of single nerves, the monosynaptic reflex arc (Crone et al. 1990; Zehr 2002) has been selected for a functional description, with focus on afferent and efferent pathways. Even though many ongoing experiments are focusing on the human brain (e.g., the NEUROMapping program from NASA), the brain and its sub-compartments have been excluded from the analysis, as the interpretation of the different studies is quite challenging and should be addressed separately.
Literature search
We performed a computerized systematic literature search in PubMed and Web of Knowledge from January 1950 up to February 2016. Keywords were included in our final Boolean search strategy as follows: 'space' OR 'parabolic flight' OR 'rocket' OR 'drop tower' AND 'neuro' OR 'neuron' OR 'ion channel' OR 'action potential' OR 'sensorimotor' OR 'reflex' OR 'latency' OR 'neuromuscular'. The search was limited to English and German languages, to cell and human studies, and to full-text original articles, books, and conference abstracts. We scanned each article's reference list in an effort to identify additional suitable studies for inclusion in the database.
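Read literally, the keyword chain above mixes OR and AND without explicit grouping; the sketch below shows how such a strategy is presumably assembled, with one platform group AND one topic group (the parenthesization is an assumption, not stated in the source):

```python
# Sketch of the review's Boolean search string (grouping is assumed).
platform_terms = ["space", "parabolic flight", "rocket", "drop tower"]
topic_terms = ["neuro", "neuron", "ion channel", "action potential",
               "sensorimotor", "reflex", "latency", "neuromuscular"]

query = "({}) AND ({})".format(
    " OR ".join(f'"{t}"' for t in platform_terms),
    " OR ".join(f'"{t}"' for t in topic_terms),
)
print(query)  # paste into the PubMed / Web of Knowledge search field
```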
Selection criteria
To be eligible for inclusion, studies had to meet the following criteria: experiments had to be executed in real microgravity conditions in either space missions (MIR, ISS, or Shuttle), parabolic flights, sounding rockets, or drop tower. Studies were excluded if experiments were performed in simulation studies (random positioning machine, clinostat, bed rest, immobilization, water immersion, and partial weight bearing) under the influence of gravitational acceleration due to confounding side effects.
Human life science studies had to meet the following criteria: (1) a controlled study design related to (2) neuromuscular effects, executed in (3) healthy participants with an age range of 18-70 years.
Coding of studies
Each study was coded for the following variables for cell physiology: setting (parabolic flight, sounding rocket, space flight), gravity conditions (hypo, normal, hyper), neuronal properties (action potential, resting potential), ion channels (open state, closed state, conductivity), biophysical properties, membrane properties.
The following variables were selected for human life science studies: type of study (cross-sectional, longitudinal), setting (parabolic flight, space flight), gravity conditions (hypo, normal, hyper), nerve (sensory, motor, sensory-motor interconnection).
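As a data structure, the coding variables above map naturally onto two record types; a minimal sketch (field and type names are illustrative, not from the source):

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class CellStudyCode:
    setting: Literal["parabolic flight", "sounding rocket", "space flight"]
    gravity: Literal["hypo", "normal", "hyper"]
    neuronal_property: str    # e.g. "action potential", "resting potential"
    ion_channel_state: str    # e.g. "open state", "closed state", "conductivity"
    biophysical_notes: str = ""   # biophysical / membrane properties, free text

@dataclass
class HumanStudyCode:
    study_type: Literal["cross-sectional", "longitudinal"]
    setting: Literal["parabolic flight", "space flight"]
    gravity: Literal["hypo", "normal", "hyper"]
    nerve: Literal["sensory", "motor", "sensory-motor interconnection"]
```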
In vitro experiments
Due to their complexity, most experiments have been performed on short-term gravity research platforms such as parabolic flight missions or drop towers.
A summarizing table of the used literature is given at the end of the in vitro chapter (Table 1).
Subcellular parameters

Ion channel parameters
Up to now, all experiments investigating ion channel parameters like open and closed state probability have been performed with ion channels or pore forming peptides that were reconstituted into artificial planar lipid bilayers.
It was shown that a porin channel from Escherichia coli has a clear gravity dependence (Goldermann and Hanke 2001). Under microgravity conditions, the mean open state is significantly decreased; at increased gravity conditions, the mean open state is increased. This effect is also fully reversible. The conductance of this porin channel was not affected significantly. A second model system used is alamethicin, a pore-forming peptide from Trichoderma viride. Similar to the E. coli porins, the activity of alamethicin is increased towards higher gravity (>1 g) and is decreased towards microgravity (Klinke et al. 2000; Wiedemann et al. 2003).
Membrane parameters
Biological cell membranes are complex structures and are mainly composed of lipids and proteins (Pollard and Earnshaw 2008). In neurons, the functional changes that modify the membrane potential are usually attributed to the integrated membrane proteins, the ion channels and ion pumps. Nevertheless, it is well established that parameters of the lipid matrix directly modify the function of proteins (Lee 2004). For the sensorimotor system, for example, it has been shown that the closed state probability of nicotinic acetylcholine receptor channels increases towards an increased membrane viscosity (Zanello et al. 1996).
As single neuronal cells do not have a specific gravity-sensing structure, a logical experiment is to monitor membrane properties under conditions of variable gravity. Experiments with an adapted 96-well plate reader have been performed, and it was shown that membrane viscosity clearly shows a gravity dependence. Under microgravity conditions, membrane viscosity is significantly decreased (the membrane becomes more fluid); under conditions of 1.8 g, the viscosity is significantly increased (the fluidity is decreased). Membrane viscosity of artificial asolectin vesicles and of human SH-SY5Y cells has been investigated, and both samples show a similar gravity dependence, but in a different distinctness (Sieber et al. 2014). It is assumed that the cytoskeleton or lipid composition might explain the difference in the gravity-dependent changes of membrane viscosity, but this has to be verified in future experiments. This finding potentially has a huge impact on cellular experiments, as this effect might be a basic mechanism of how single cells detect changes in gravity, without having dedicated sensory structures.

[Table 1 fragment: pore frequency, artificial planar bilayer, 0 g < 1 g < 6 g (Wiedemann et al.); membrane potential, fluorescence intensity, SF-21 cells (insect), 0 g > 1 g > 1.8 g]
Cellular parameters
The electrophysiological properties of various cell types have been investigated with different methods.
It was shown that the resting potential of human neuronal cells is slightly depolarized by 3 mV under microgravity and slightly hyperpolarized under hypergravity conditions (Kohn 2012).
A similar depolarization under microgravity was observed in SF-21 cells (Wiedemann et al. 2011). Electrophysiological experiments with oocytes from Xenopus laevis also show a significant decrease in transmembrane current at a holding potential of −100 mV during microgravity and show a trend of increased transmembrane currents at hypergravity (Schaffhauser et al. 2011).
The changes in electrophysiological properties are very fast and reversible: they change within milliseconds as soon as gravity changes and return to normal when gravity returns to 1 g.
Action potentials
Two parameters of action potentials (AP) were analyzed. In spontaneously spiking leech neurons, it was shown that the rate of action potentials is increased under microgravity (Meissner and Hanke 2005).
To monitor the propagation velocity of action potentials, intact earthworms, isolated earthworm axons, and isolated rat axons have been used. All three systems show a similar decrease (with varying degrees of significance) in AP velocity under microgravity and an increase in AP velocity at hypergravity. Similar to the cellular and subcellular level, the changes are very fast and reversible.
In vivo experiments
Based on the knowledge that molecular and cellular properties in neurons are modulated by gravity, complex life science studies were conducted to describe gravity-induced neuroplasticity in humans, using micro- and hypergravity research platforms in parabolic flight campaigns or during long-term space missions with durations of 10 days up to 1.5 years. Stimulation techniques such as peripheral nerve stimulation (PNS) have been applied in order to gather a deeper understanding of microgravity-induced deconditioning in motor control (Crone et al. 1990; Zehr 2002). In those methodological approaches, neurons, axons, or cell bodies are depolarized, and muscle membrane potentials serve for the interpretation of output signals. The tibialis posterior nerve and the soleus muscle have been used as a model to describe overall adaptation to micro-, hypo-, or hypergravity in most experiments. Changes in the characteristics of neuromuscular responses, displayed as H-reflexes, have been described according to their attributes related to timing and shaping (Ritzmann et al. 2016): stimulation threshold, amplitude, neuromuscular latency, and inter-peak interval. A summarizing table of the used literature is given at the end of this chapter (Table 2).
Threshold
Changes in the threshold level to depolarize an axon or nervous cell body describe the responsiveness of a nerve to the input stimulus. Threshold data exist for short-term micro- and hypergravity. Higher stimulation currents were necessary for PNS to depolarize axons of efferent and afferent neurons in gravity conditions equal to the moon and Mars, corresponding to 0.16 and 0.38 g, respectively. In hypergravity, smaller stimulation currents were necessary to depolarize the axons (Ritzmann et al. 2016). Thus, in microgravity the threshold is increased; in hypergravity the threshold is decreased.
Amplitude
The amplitude describes the output signal after peripheral nerve stimulation. Gravity dependency has been reported in cross-sectional study designs with neuroplastic changes for amplitudes of H-reflexes and stretch reflexes (Ritzmann et al. 2015;Sato et al. 2001;Miyoshi et al. 2003;Nomura et al. 2001;Ohira et al. 2002;Kramer et al. 2013). Independently of stimulation methodology, the peak-to-peak amplitudes and integrals increased when acutely exposed to hypergravity in parabolic flight maneuvers (Ritzmann et al. 2015;Miyoshi et al. 2003).
For reduced-gravity conditions, study results are equivocal: lunar and Martian gravity studies revealed a gradual decrease in peak-to-peak amplitudes of Hmax with decreasing gravitation (Ritzmann et al. 2016). However, microgravity caused either an increase in H-reflex amplitude (Miyoshi et al. 2003; Nomura et al. 2001; Ohira et al. 2002) or revealed no changes (Ritzmann et al. 2015; Kramer et al. 2013). An ISS experiment executed by Watt (2003) documented a decline of H-reflexes in weightlessness. These adaptations persisted during 5 months of weightlessness and, upon returning to earth, recovered within days. Threshold adaptations most probably caused the inhomogeneous findings observable in H-reflex amplitudes, due to differences in methodology (Ritzmann et al. 2016). As M-wave and H-reflex amplitudes depend on the stimulation threshold, the increase in H-reflex amplitudes should be interpreted on the basis of threshold declines in microgravity when H-reflexes are recorded with a constant stimulation intensity (Miyoshi et al. 2003; Nomura et al. 2001; Ohira et al. 2002). While H/M recruitment curves are independent of the stimulation threshold (Ritzmann et al. 2015; Kramer et al. 2013), gravity-induced changes in H-reflexes elicited submaximally with a constant stimulation intensity result rather from threshold shifts than from gravity changes (Miyoshi et al. 2003; Nomura et al. 2001; Ohira et al. 2002).
Neuromuscular latency
Neuromuscular latency describes the axonal and/or nerve conduction velocity until a muscle response is observable in the electromyogram. Various experiments investigated the latency of the H-reflex and M-wave in the soleus muscle in settings of short-term (Ohira et al. 2002; Ritzmann et al. 2016) and long-term (Ruegg et al. 2003) varying gravity, with equivocal findings: with gradually decreasing gravity from hyper- to earth to Martian to lunar gravity, Ritzmann et al. demonstrated in eight subjects an increase in latencies of H-reflexes, while M-wave latencies likewise showed a strong tendency to increase towards microgravity (Ritzmann et al. 2016). In contrast, exposure to micro- or hypergravity showed no short-term effects on H-reflex and M-wave latencies in experiments executed by Ohira et al. (2002). The authors did not state the sample size.
Inter-peak-interval (IPI)
The IPI between the negative and positive maxima of the biphasic amplitude describes the conduction velocity along the muscle fibers via the motor endplates at the neuromuscular junction, where the nerve interconnects with the muscle. Short-term experiments executed in parabolic flights revealed that IPIs significantly increase for the biphasic m. soleus Mmax and Hmax with decreasing gravitation from hyper- to earth to Martian to lunar gravity conditions (Ritzmann et al. 2016).
A first model of neuronal short-term adaptation to microgravity
The proposed model aims to integrate the results from the cellular level up to the neuromuscular interface. To exclude possible adaptation processes, it only contains data from short-term experiments.
Molecular level
On the molecular level, gravity has an effect on both the membrane and the integrated functional membrane proteins, including ion channels. Under microgravity conditions, the membrane viscosity is decreased (the fluidity is increased). This changed membrane viscosity decreases the open-state probability of ion channels (Fig. 1). At hypergravity, these effects are reversed: membrane viscosity increases and the open-state probability increases.
Non-space related biophysical experiments clearly show that ion channel properties are dependent on membrane parameters such as lateral pressure. For alamethicin, it is known that the open state of the pore clearly depends on the lateral pressure of the membrane (Hanke and Schluhe 1993): with increased pressure, the activity increases. For other ion channels it was also shown that ion channel parameters are affected by changes in lateral membrane pressure; e.g., the closed-state probability of nicotinic acetylcholine receptor channels increases towards increased membrane viscosity (Zanello et al. 1996).
Resting potential
The resting potential of single cells is depolarized several millivolts under microgravity and hyperpolarized under hypergravity. With a slightly increased resting potential, the threshold to trigger an action potential (AP) can be reached more easily (Fig. 2). In spontaneously spiking neurons, this gravity-dependent effect was found: the rate of APs is increased in microgravity.

Fig. 1 A model of the biophysical gravity dependence of cell membranes and the incorporated ion channels. With the onset of microgravity, the membrane viscosity is decreased and the open-state probability of ion channels is decreased
Propagation of action potentials
In isolated single axons as well as in living animals and in human test subjects, the effect of microgravity can clearly be seen: the propagation speed of APs decreases under microgravity and increases under hypergravity.
In humans, the properties of neuromuscular reflexes are affected by microgravity. The latencies are increased, which can be interpreted as a decreased conduction speed. The peak-to-peak amplitude of the H-reflex is decreased under reduced gravity (with heterogeneous data at real microgravity), and a higher stimulus has to be given to get the same Hmax as in 1 g. The stimulation and recording method cannot be compared directly to single-cell patch-clamp experiments, but the effect might be explained by a decreased propagation velocity along the axon in microgravity compared to 1-g conditions: fewer action potentials per unit time stimulate muscle contraction and therefore Hmax is decreased. This interpretation is supported by the increase in IPI under microgravity, which indicates a decreased signal speed at the neuromuscular junction. All these findings are also reversed under hypergravity.
The previously described effects can be summarized as a gravity-dependent decrease in neuronal conduction velocity (or as an increase in electrical and chemical time constants) under reduced gravity with an increase under hypergravity.
At first glance, it might look like an inconsistency that the rate of action potentials is increased in microgravity while, at the same time, the propagation velocity of APs is decreased. In 1977, Matsumoto and Tasaki (1977) found a mathematical equation to calculate the speed of conduction in unmyelinated nerve fibers, which can also be used to estimate the speed in myelinated fibers. With this equation, the apparent inconsistency can be resolved:

v_axon ≈ √(d / (8 · ρ · C² · R*)),

where v_axon is the conduction velocity, C is the membrane capacity, d is the diameter of the nerve, R* is the resistance of the membrane, and ρ is the axoplasmic resistance. According to the proposed model, the resting potential is increased due to a reduced open-state probability of the ion channels; therefore the resistance of the membrane (R*) is increased. If the membrane capacity (C), the diameter of the axon (d), and the axoplasmic resistance (ρ) are treated as constant in varying gravity, the increased resistance of the membrane leads to a decreased conduction velocity (v_axon).
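A minimal numerical sketch of this relation, with illustrative parameter values (the numbers below are placeholders for demonstration, not measured data); it reproduces the model's qualitative claim that a higher membrane resistance R* lowers v_axon:

```python
import math

def conduction_velocity(d, rho, C, R_star):
    # Matsumoto-Tasaki estimate: v ≈ sqrt(d / (8 * rho * C^2 * R*)),
    # with consistent units (cm, ohm*cm, F/cm^2, ohm*cm^2 -> cm/s).
    return math.sqrt(d / (8.0 * rho * C ** 2 * R_star))

# Illustrative (hypothetical) values for an unmyelinated fiber:
d = 50e-4     # fiber diameter, cm (50 µm)
rho = 100.0   # axoplasmic resistivity, ohm*cm
C = 1e-6      # membrane capacitance, F/cm^2
R0 = 1000.0   # membrane resistance at 1 g, ohm*cm^2

v_1g = conduction_velocity(d, rho, C, R0)
v_ug = conduction_velocity(d, rho, C, 1.2 * R0)  # microgravity: R* assumed +20 %
print(f"v(1 g) = {v_1g:.1f} cm/s")
print(f"v(µg)  = {v_ug:.1f} cm/s (slower, since R* increased)")
```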
With this proposed model (Fig. 3), the short-term reaction of the sensorimotor system can be explained without any inconsistencies from the single neuronal cell up to the neuromuscular level. But of course, it opens up a lot of questions and open points, which will be discussed subsequently.
Discussion
The aim of this review was to sum up and interconnect relevant publications about the adaptation of neuronal processes from the molecular and (sub-)cellular level up to the complex neuromuscular system. Many separate in vitro and in vivo experiments on the different levels of the NS have been performed, each with a discrete result. Until now, no effort has been made to integrate these findings into a working model and/or to illustrate possible unresolved discrepancies, aiming for a better understanding of neuronal adaptation to variable gravity conditions and for a "roadmap" for future experiments. It is also an appeal for a more interdisciplinary approach to new experiments and for uniting the results of previously acquired data, serving a better comprehension of the gravity-induced challenges on organisms during prolonged manned space missions, and helping to face them.
The presented short-term model interconnects results of separately working life science disciplines, and these interconnections are based on assumptions which have to be verified in future experiments. They are discussed as follows: on the cellular level, it is not clear whether membrane viscosity and the open-state probability of ion channels are the only gravity-sensitive parameters. For instance, the cytoskeleton of different cells is also affected by changes in gravity and is therefore an additional sensor for g-load (Li et al. 2008), but the possible effect of a changed cytoskeleton on membrane fluidity has not been investigated in detail yet. Until now, a model system (based on artificial membrane vesicles) and neuronal cells have been investigated separately, and the authors showed a gravity-induced difference, which might be due to the absence of a cytoskeleton in the artificial vesicles (Sieber et al. 2014). It might also be possible that the lipid composition plays a role. The artificial vesicles were made of asolectin, but the lipid composition of real cell membranes is more heterogeneous, depending on the cell type.

Fig. 2 The extended model of the cellular gravity dependence of a single neuronal cell. Due to the changed membrane viscosity and the changed open-state probability, the cell depolarizes several mV. This leads to a decreased potential difference between the resting potential and the AP threshold, therefore action potentials can be triggered more easily
Two models for ion channels have been used to show the clear gravity dependence of the open-state probability of ion channels (Goldermann and Hanke 2001; Klinke et al. 2000), but there are no single-channel data for real (neuronal) ion channels. Although whole-cell recordings from several research groups indicate that there is a gravity dependence of ion channels (Goldermann and Hanke 2001; Klinke et al. 2000; Schaffhauser et al. 2011; Richard et al. 2012), it is still unclear from the literature whether all ion channel families, e.g., the ion channel families relevant for AP generation, react similarly to changes in gravity. For the proposed model, this is assumed, but it still has to be investigated much more systematically. This has to be done with single-channel electrophysiology. Despite the challenge of doing this in microgravity, outside the ground-based laboratory, there are several promising approaches already indicating ion channel sensitivity to gravity (Wiedemann et al. 2011; Schaffhauser et al. 2011; Richard et al. 2012). In addition, the open-state probability is not the only relevant parameter. With regard to the completeness of nerve conduction characteristics, the conductivity of the ion channels, for example, has to be investigated, as there are publications that indicate a dependence on gravity (Schaffhauser et al. 2011; Richard et al. 2012).
A detailed analysis of these parameters in the future would significantly help in understanding the gravity dependence of cellular electrophysiology and, ultimately, multicellular communication as in the neuromuscular system, transferred to complex sensorimotor function. Based on the existing literature database, it is evident that the molecular and cellular changes in response to gravity mentioned above affect the sensorimotor system with regard to human movement.
Immediate adaptations are reported in short-term experiments as well as in long-term investigations executed on the ISS or pre-/post-space flight, respectively. Analyses of motor and sensory responses regarding their timing and shaping (Ritzmann et al. 2015, 2016; Sato et al. 2001; Miyoshi et al. 2003; Nomura et al. 2001; Ohira et al. 2002; Kramer et al. 2013; Davey et al. 2004) demonstrate that NS function for muscle activation is changed, and these changes most probably rely on molecular dysfunction: when axons conduct APs more slowly, the motor response and muscle contraction are consequently delayed (Ritzmann et al. 2016). Regarding human space flight, this is known to be a limitation for a safe return to earth as well as for stopovers on other planets: for practical issues such as the movement precision and control required for fall prevention or force generation, the cellular changes impact space mission safety (Blottner and Salanova 2015). Likewise, smaller neuromuscular responses, as demonstrated by changes in reflex and motor response amplitude, are associated with a reduction of muscle force (Aagaard 2003).

Fig. 3 The final model from subcellular to multicellular level. Due to the changed membrane viscosity and the changed open-state probability, the cell depolarizes and the threshold to generate action potentials is reached more easily, but the AP velocity of the axons and the transmission speed at synapses in the motoric end plate are decreased, which seems to have a bigger impact than the reduced AP threshold
Based on gravity-induced changes in frequency originating at the subcellular level, this is also of considerable relevance: a reduced muscle response concomitant with a slowed-down reaction negatively impacts motor control in daily relevant activities, such as gait, posture control, or fine motor tasks (Layne et al. 2001; Bloomberg et al. 1999; Paloski et al. 1993; Mulavara et al. 2010).
This is exactly where security debates and countermeasure development move into focus: as reported in many space experiments, astronauts suffer from motor dysfunction associated with neuromuscular degradations and a performance decline after their return to earth (Blottner and Salanova 2015;Mulavara et al. 2010;Hargens et al. 2012;Clark and Bacal 2008).
Thereby, in cohorts of astronauts and cosmonauts with stays in space of 10-241 days, a sustained increase of amplitudes (Reschke et al. 1986; Kozlovskaya et al. 1981; Grigoriev and Yegorov 1990; Baker et al. 1976), neuromuscular latency (Davey et al. 2004; Ruegg et al. 2003), and IPIs (Ruegg et al. 2003), concomitant with decreased PNS thresholds (Kozlovskaya et al. 1981; Grigoriev and Yegorov 1990) for H-reflexes, stretch reflexes, and vibration reflexes, could be demonstrated after returning to earth. Importantly, adaptations persisted beyond weightlessness for up to 2 weeks of earth life after space. As astronauts suffer from sensorimotor impairments associated with gravity-dependent changes in the nervous system, which limit the duration of space stays (Edgerton et al. 2001), this concern is a major issue for the space agencies. Achievement of critical tasks under variable gravitation conditions depends on sensorimotor function (Edgerton et al. 2001). Such functions are crucial for a safe space flight and return to earth.
However, there are limitations to the model that need to be considered: in vivo studies revealed contradictory results concerning the amplitude and latency of reflexes. Due to different methodologies, the outcomes are hardly comparable, and a conclusive statement integrated into our working model is still speculative. Furthermore, the model does not take into account possible changes in nerve geometry or in electrophysiological properties such as axoplasmic resistance and other electrical parameters. Up to now, no experiments have been performed focusing on this point, although it might be possible to investigate it with single cells, nerve fibers, or tissue samples of animals and humans. Furthermore, the model only takes into account the changes of the electrophysiological component of neuromuscular latency. Of course, the sensorimotor reflex system has more components than that. Surely the electrochemical coupling in the neuronal synapses and the motoric end plate must be taken into account, as the findings on the IPIs indicate that this chemical component is also affected by gravity (Ritzmann et al. 2016). Experiments have to be designed that focus on receptor-ligand interactions under varying gravity conditions to clarify the possible gravity dependence of this process, as it might have a huge impact on neuronal and neuromuscular communication.
Besides neuroplasticity of the gravity-induced cell physiology, modulations in neural excitation could also be causal for the in vivo experimental outcomes. While cell physiology involves molecular adaptation based on electrochemical changes of the neural cell body or axon (Goldermann and Hanke 2001; Sieber et al. 2014; Kohn 2012), the excitability of reflexes and motor responses relies on a non-persisting phenomenon caused by spontaneous and task- or environment-specific inhibition or facilitation of neuronal pathways (Crone et al. 1990; Zehr 2002; Aagaard 2003).
An interlink of the observed gravity effects on the above-mentioned subcellular structures and the resulting nerve's level of depolarization with the timing of reflexes and motor responses is nevertheless apparent: the less fluid the membrane and the lower the open-state probability of the ion channels, the slower the action potentials will be transmitted via the axon. Hence, the reflex latency, the duration, and the inter-peak intervals, as they occurred in reduced gravitation below normal gravity, are longer. Consequently, the overall slowing in timing in micro-, lunar, and Martian gravity compared to earth and hypergravity most probably relies on gravity-induced cellular changes in neurons. Nevertheless, for gravity-dependent threshold changes and adaptations in amplitude, the underlying origins are less clear. Besides gravitational cell physiology as illustrated in the model, modulated excitations such as pre- and postsynaptic inhibition or facilitation should be taken into account (Ritzmann et al. 2015; Zehr 2002; Kohn 2012). Modulated proprioceptive sensory feedback and related central changes in motor commands descending from brain structures may have caused an inhibition of spinal reflexes in microgravity and a facilitation in hypergravity (Ritzmann et al. 2015; Davey et al. 2004). Vestibular, visual, and somatosensory input is altered in varying gravity (Layne et al. 2001; Homick and Reschke 1977; Paloski et al. 1993; Bloomberg et al. 1997), reflected by a highly reduced vestibulo-somatosensory feedback concomitant with a predominance of vision in microgravity conditions (Layne et al. 2001; Paloski et al. 1993; Bloomberg et al. 1997). This may also have a large impact on motor commands and the inhibition or facilitation of Ia afferent pathways. For a more conclusive statement to clarify the origin of the timing and shaping of reflex adaptations, further experiments are mandatory.
Conclusions
The prospect of a sustainable overview and proper understanding of the gravity-dependency of the NS requires a number of new investigations. The lack of knowledge can be reduced through interdisciplinarity, including studies of the gravitational influence on the cytoskeleton and on the conductivity of the ion channels of nervous cells, as well as experiments on the motoric end plate. Moreover, novel approaches including the brain and peripheral circuitries, using electrophysiology with an emphasis on long-term adaptations, have the potential to further clarify whether excitability changes are gravity-dependent and influence motor control.
In contrast to in vivo data, there is basically no data for cellular long-term adaptation processes as the technical and biological requirements for cellular long-term experiments in microgravity are challenging. Nevertheless, this topic should be addressed in the future to be able to extend the short-term adaptation model with the long-term adaptation mechanisms. | 2018-03-03T14:20:51.892Z | 2017-06-27T00:00:00.000 | {
"year": 2017,
"sha1": "c0e4e797e2dd346819b87c49d5feeab02cf082f0",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00249-017-1233-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a22d66d6a8e814172b859ec26f33f04de1a9b68",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
138612426 | pes2o/s2orc | v3-fos-license | Cracks width-corrosion rate correlation on the durability of reinforced concrete in a very high aggressiveness tropical marine environment
The aim of this investigation was to evaluate the correlation between crack width and apparent corrosion rate in reinforced concrete specimens exposed for more than six years to a tropical marine environment at the natural test site La Voz, Venezuela. Six specimens from the DURACON Project (prismatic, 15x15x60 cm, with 0.65 w/c ratio) were monitored, each specimen having six reinforcing steel bars placed at three different depths (two each at 15, 20, and 30 mm) for electrochemical tests (corrosion potential and corrosion rate). An empirical correlation between surface crack propagation rate and iCORR was established, which may help estimate iCORR indirectly if values of maximum surface crack widths due to reinforcement corrosion are obtained over at least a one-year period of monitoring.
INTRODUCTION
During the last 20 years, the term concrete durability has been used more frequently among members of the scientific community worldwide. In some developed countries, such as the United States of America, Spain, France, the United Kingdom, and Japan, durability has been addressed as a very important subject, attracting seven-figure investments for research in this area. Reinforced concrete structure deterioration due to rebar corrosion has increased, with cracks on the concrete cover surface as a consequence. Many investigations so far have been performed based on the study of durability during the initiation period. However, very few have focused on performance during the residual life. Some studies related to the residual life stage of concrete structures have been made in which accelerated corrosion was performed by applying a constant anodic current to the rebars (Tachibana et al., 1990; Huang and Yang, 1997; Rodriguez et al., 1997; Almusallam et al., 1997; Cabrera, 1996).
After applying such anodic currents to the rebar in a short period of time, reduction of the structural capacity was correlated with corrosion parameters such as gravimetric metal loss and corrosion-induced concrete cracking (Almusallam, 1997; Mangat and Elgarf, 1999; Torres-Acosta, 1999). Torres-Acosta (1999) and Torres-Acosta and Martínez-Madrid (2003) have conducted several studies related to this subject, but also under natural conditions (Torres-Acosta and Castro-Borges, 2013; Cabrera-Madrid et al., 2014). In a previous investigation (Torres-Acosta and Martínez-Madrid, 2003), they reported results on residual life degradation parameters, using reinforced concrete slabs (0.42 w/c ratio, chloride contamination during mixing to accelerate rebar corrosion) and no anodic current application. At the end of the experimentation, corrosion-induced crack position, width, and length were measured, and correlations with the cross-section mass loss were also performed. Based on their experimental results, empirical relationships between average rebar radius loss (xAVG) divided by the original rebar radius (r0) and load capacity were established. As an example, a 10% radius loss might result in a 50% load capacity loss in reinforced concrete beam elements. They also developed an empirical relationship between crack width, WC, and the ratio xAVG/r0. Apparently, when the corrosion rate is small (12-60 µm y-1), cracks appear and grow in length and width faster than in accelerated corrosion tests. Finally, the last empirical correlation obtained included xAVG and maximum pit depth (PITMAX), giving a factor of seven times: PITMAX ~ 7·xAVG (Torres-Acosta and Martínez-Madrid, 2003). Subsequently, in 2003, Vidal et al. studied crack width and rebar diameter loss due to corrosion in reinforced concrete beams (0.5 w/c, 35 g l-1 NaCl contamination). They discussed that the reinforcement corrosion obtained in that investigation is closer to what is observed in natural conditions (with respect to the distribution of corrosion, types of corrosion, and oxides produced) than that obtained by impressed current or addition of calcium chloride in concrete. They developed a new model relating crack width vs. rebar cross-section loss and observed that the rebar cross-section loss seems to be independent of the rebar diameter and the concrete cover/rebar diameter ratio, except when evaluated in the period of crack initiation.
In 2007, Torres-Acosta et al. reported an empirical correlation between rebar corrosion rate and crack width, using reinforced concrete beams (0.6 w/c, contaminated with NaCl: 1 wt% Cl- on cement basis) subjected to a bending stress. The beams were sprayed twice a week, over a central area 25 cm long, with saline solution (3.5 wt% Cl-) in order to accelerate rebar corrosion in this area. They concluded that in a process of natural corrosion, cracks generated by the expansion of corrosion products develop more slowly (in width and length) than those generated by accelerated corrosion. The results obtained showed that for a corrosion radius loss (xAVG/r0) from 4% to 10%, cracks were produced with a maximum width (CWMAX) of 0.1 mm and 1 mm, respectively. The trend obtained in this study was similar to that obtained in previous research with natural corrosion (Torres-Acosta and Castro-Borges, 2013; Cabrera-Madrid et al., 2014). The present work shows the results of one of the DURACON project's natural exposure test sites (La Voz, Venezuela). It was located in a coastal marine environment of high aggressiveness, where some of the reinforced concrete prisms (0.65 w/c ratio concrete) in this project presented surface corrosion-induced cracks, and an empirical correlation was obtained between maximum crack width and corrosion rate (iCORR), expressed as average rebar radius loss (xAVG/r0), from natural corrosion data.
Prismatic specimens
In this investigation, reinforced concrete prismatic specimens from the DURACON project (Troconis de Rincón et al., 2007) were used. These were installed at one of the project's natural exposure sites, La Voz (Venezuela), classified as a marine environment of very high aggressiveness (>C5 according to ISO 9223:2012). Figure 1 shows a schematic diagram of the prismatic specimens under evaluation. Concrete prisms of 15x15x30 cm (0.65 w/c ratio) were reinforced with six rebars (9.5 mm in diameter) placed at different concrete depths (two each at 15, 20 and 30 mm). Three of them were placed at the windward face and the other three on the leeward face. The ends of each bar were protected with epoxy coating to avoid oxygen differential and crevice corrosion, leaving a central portion of 15 cm length uncovered. Figure 2 shows the specimen supports installed at the La Voz test station.
2.2 Environmental assessment
Climatic and environmental parameters were assessed according to the methodology established by the ISO 9223 standard for determining environmental aggressivity at the test station. Parameters such as relative humidity (RH), time of wetness (TOW), wind speed and direction, rainfall amount, daily temperature, chloride concentration, CO2 concentration, and sulfur compounds concentration were measured during the experimental period. It is important to mention that there are currently no regulations to identify the aggressiveness of the environment for reinforced concrete structures; therefore, the ISO standard for metallic materials was used as a first approach.
2.4 Cracks survey
Rebar corrosion-induced concrete surface cracks, on the windward face as well as on the leeward face, were monitored by careful visual examination using a (nonstandard) 15 cm x 30 cm grid to report the length and location of each corrosion crack. Crack widths were measured using a crack comparator card. Thereby, an overview crack map was recorded, showing the length, location, and width of all cracks in all specimens. Experimental data were fitted linearly and compared with data obtained by other authors with natural and accelerated corrosion techniques. In order to assess the rebar cross-section loss, an estimate was made using the area under the iCORR vs. time plot. This value was then correlated with the maximum crack width (MCW) corresponding to each of the rebars of the specimens tested. Rebar mass loss estimates were calculated using Faraday's law (Equation (1)):

ΔWf = 55.85 · ∫I dt / (n · F)   (1)

where ΔWf is the faradaic mass loss (g); 55.85 g/mol is the atomic weight of Fe; ∫I dt is the area under the iCORR vs. time curve; n is the valence number for iron (+2); and F is Faraday's constant (96,500 C/mol). This value is then used to estimate the average radius loss due to corrosion (xAVG), calculated using Equation (2):

xAVG = ΔWf / (ρ · π · D · l)   (2)

where ρ is the iron density (g/cm3), D the rebar diameter (mm), and l the length of the rebar (mm). At the end of the experimentation, the concrete specimens were demolished and the steel rebars were retrieved to determine the real cross-section loss based on average pit depth estimates.
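A minimal numerical sketch of Equations (1) and (2) (the example exposure values are hypothetical; the bar diameter and exposed length echo the specimen description above):

```python
import math

F = 96500.0    # Faraday constant, C/mol
M_FE = 55.85   # atomic weight of iron, g/mol
N_VAL = 2      # valence number for iron
RHO_FE = 7.87  # iron density, g/cm^3

def faradaic_mass_loss(charge_C):
    # Eq. (1): mass loss in grams from the integrated corrosion current (C).
    return M_FE * charge_C / (N_VAL * F)

def avg_radius_loss_um(mass_g, d_cm, l_cm):
    # Eq. (2): average radius loss (µm), mass spread over the bar's lateral area.
    return mass_g / (RHO_FE * math.pi * d_cm * l_cm) * 1e4

# Hypothetical example: i_CORR = 1 µA/cm^2 sustained for one year on a
# 0.95 cm diameter bar with 15 cm of exposed length.
area_cm2 = math.pi * 0.95 * 15.0
charge_C = 1e-6 * area_cm2 * 365 * 24 * 3600   # A x s = C
mass = faradaic_mass_loss(charge_C)
print(f"mass loss = {mass:.3f} g")
print(f"x_AVG     = {avg_radius_loss_um(mass, 0.95, 15.0):.1f} µm")
```

The result, roughly 11.6 µm of average section loss per year at 1 µA/cm², matches the conversion commonly used in corrosion practice.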
3.1 Environmental assessment
Figure 3 shows the weather parameters of the natural test site monitored during the exposure time. The rain-drought periods typical of tropical environments are clearly observable. There was only one short period of high rain precipitation, at the end of year 2006, as a result of the weather phenomenon caused by hurricane Ivan, which passed through the Lesser Antilles and the Caribbean Sea. Regarding monthly average temperature, it varied only 3 °C during the entire evaluation period (six years). The minimum value was 26.7 °C (March 2003 and February 2009), while the maximum value was about 30 °C (October 2004 and September 2008). The small variations observed for this parameter show climatic stability at this test station and in the geographic region itself.
The highest monthly average relative humidity (RH) value of the whole evaluation period was observed in August 2004, at 84%. This coincides with the highest rain precipitation value for that year.
Regarding wind speed, in general it varied in a range between 17 and 24 km/h, with large variations when sudden changes occurred in the microclimate, such as the phenomena explained above, where the wind speed increased substantially. Chloride and sulfate estimates for this atmosphere and the time of wetness during the 6-year exposure time are shown in Table 1. Based on the monitoring of the parameters in Table 1, a very high atmospheric corrosivity, according to ISO 9223, was corroborated for the first three years at this test station. For the 4th and 5th years, a decrease in corrosivity was noticed, possibly due to the high rainfall from the storms and hurricanes that occurred in those years; however, the atmosphere remained highly corrosive. The time of wetness (TOW) was also estimated from weather parameters such as temperature and RH using the ISO procedure (see Table 1).
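ISO 9223 defines TOW, in essence, as the number of hours per year with relative humidity above 80% at an air temperature above 0 °C; a minimal sketch of that estimate from hourly records (the sample readings are hypothetical):

```python
def time_of_wetness(hourly_records):
    # ISO 9223-style TOW estimate: count hours with RH > 80 % and T > 0 °C.
    # `hourly_records` is an iterable of (temperature_C, relative_humidity_pct).
    return sum(1 for t, rh in hourly_records if t > 0.0 and rh > 80.0)

# Hypothetical tropical-station sample: warm and humid most of the time.
sample = [(28.0, 85.0), (29.5, 78.0), (27.0, 90.0), (30.0, 82.0)]
print(time_of_wetness(sample), "wet hours out of", len(sample))
```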
Figures 4 and 5 show the results obtained from the electrochemical monitoring, corrosion potential and corrosion rate vs. time, for the 15-mm and 30-mm concrete depth rebars, respectively. These figures clearly show the time at which the bars began to depassivate (ECORR more negative than -250 mV vs. Cu/CuSO4 and iCORR greater than 0.1 µA/cm2, respectively), coinciding with the first change in slope of the accumulated corrosion rate vs. time curve. In addition, these figures show that the 30-mm depth rebars stayed passive for a longer time than the 15-mm depth rebars, but the propagation rate for the first set was higher than for the second set. This might be because the winds at the La Voz natural test site did not show a preferential trade wind direction (North-East in this case), but rather slashing winds, which also allow the ingress and diffusion of chloride ions through the prism's bottom face. This was the top cast face and the most porous one, and the closest to the deeper bars (30-mm depth rebars), thus giving this unusual performance.
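The depassivation criteria quoted above translate directly into a two-condition test; a minimal sketch (the monthly readings are hypothetical):

```python
def is_depassivated(e_corr_mV, i_corr_uA_cm2):
    # Criteria used above: E_CORR more negative than -250 mV vs. Cu/CuSO4
    # and i_CORR above 0.1 µA/cm^2 indicate active (depassivated) rebar.
    return e_corr_mV < -250.0 and i_corr_uA_cm2 > 0.1

# Hypothetical monthly readings: (E_CORR in mV vs. Cu/CuSO4, i_CORR in µA/cm^2)
readings = [(-120, 0.03), (-210, 0.08), (-310, 0.25), (-345, 0.40)]
for month, (e, i) in enumerate(readings, start=1):
    state = "ACTIVE" if is_depassivated(e, i) else "passive"
    print(f"month {month}: E = {e} mV, i = {i} µA/cm² -> {state}")
```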
Crack width and corrosion rate correlation
Figure 6 shows the state in which one of the three representative specimens was found after a 6-year exposure period at the La Voz natural test site. This figure shows a photograph of the specimen's windward face together with a schematic representation of the surface fissure/crack survey for that specimen. Figure 7 shows that with increasing loss of cross-section area of the rebars (estimated from the ∫iCORR dt data and Equations (1) and (2)), the surface MCW also increases (at the specimen's windward face).
The effect of the concrete cover on crack initiation and propagation (on the windward face) was also demonstrated in Figure 7: crack widths were bigger at rebars with smaller concrete cover for the same rebar cross-section loss. This might be due to the concurrent effect of the high relative humidity prevailing in the area (>80%) together with a high chloride ion content, which maintains moisture inside the bulk concrete, such that the chloride ions (129-684 mg m-2 d-1) can diffuse easily, promoting reinforcement corrosion.
Figure 7. Average maximum crack width of concrete in relation to the rebar cross-section loss, at the natural test site La Voz, w/c ratio 0.65, windward face

It is also important to mention that there is a direct relationship between the MCW and the rebar's xAVG/r0. The goodness of the correlation is high for the first years of exposure (MCW < 0.3 mm), while for MCWs wider than 0.3 mm the data were disperse, causing the correlation to decrease. Additionally, the mean MCW increases with very little loss of material due to corrosion. Also, when MCWs are too wide, voids were created, which interfered with the iCORR measurement using the described field corrosimeter instrument. This raises some doubts over the last year's iCORR data, which were used to obtain xAVG/r0. Thus, it was necessary to discard the latest iCORR measurements and determine the correlation between MCW and xAVG/r0 using the earlier years' data.
A more representative correlation between MCW and xAVG/r0 is presented in Figure 8, where the last years' biased iCORR measurements of both bars were removed, thus obtaining better correlations (R2 ~ 0.9722 and 0.9038 for the 15-mm and 30-mm depth rebars, respectively). Figure 9 shows a typical crack survey and a photograph of a representative 0.65 w/c concrete specimen's leeward face after 6 years of exposure. As observed from these results, wider cracks were found than at the windward face. This might be because the leeward face remains wet for longer periods of time, which favors the transport of the environment's characteristic aggressive agents and their spread into the bulk concrete, as compared to the windward face, which was in continuous contact with hot, high-speed wind that could dry out the concrete's internal moisture. This performance is also observed in Figures 4 and 5, where the corrosion rates of the leeward face rebars show small increments at the end of the exposure period.
Figure 9. Left, general map of cracks; right, photo of specimen 6, w/c ratio 0.65, leeward face

Similarly to the windward face, the leeward face data did not show a good correlation between the MCW and the rebar's xAVG/r0 when cracks were wider than 0.5 mm, and the last year's data points were also biased. Thus, in Figure 10, the more representative relationship between MCW and xAVG/r0 was obtained by removing the last year's data for the two rebars, which significantly improves the correlation (R2 ~ 0.9397 and 0.9843 for the 15-mm and 30-mm depth rebars, respectively).
Figure 10. Representative behavior (last year's data removed) of the average maximum crack width in relation to the loss of cross-section area of the bar, at the test station La Voz, w/c ratio 0.65, leeward face

On the other hand, the effect of concrete cover on crack propagation was unexpectedly the opposite of the windward performance: the largest crack widths were found for the 30-mm depth rebar. This might be because (as explained in Section 3.2) the winds at the La Voz natural test site did not show a preferential trade wind direction (North-East in this case), but rather slashing winds, which also allow the ingress and diffusion of chloride ions through the prism's bottom face. This was the top cast face, the most porous one, and the closest to the deeper bars (30-mm depth rebar), thus giving this unusual performance. Figure 11 shows, as a comparison, a compilation of the MCWavr (average MCW) and xavr/r0 data obtained in this investigation together with data from a previous investigation (Torres-Acosta and Martínez-Madrid, 2003) under natural and accelerated corrosion conditions. Accelerated corrosion data are plotted in Figure 11 using unfilled symbols, whereas the natural corrosion data symbols are either black-filled, for previous investigations, or blue, pink, and orange for this investigation's data.
Figure 11. Data compilation of average maximum crack width in relation to the loss of cross-section area of the bar for different authors and test conditions (Torres-Acosta and Martínez-Madrid, 2003)

It is observed that in the case of accelerated corrosion methods, the data follow a good trend and lie close to each other (Torres-Acosta and Martínez-Madrid, 2003). There is also a difference between accelerated corrosion data from reinforced concrete (Δ, x, ◊, ○, ӿ, □ symbols) and prestressed concrete (+ symbol) elements when general corrosion was obtained: wider cracks were observed in reinforced concrete elements than in prestressed concrete elements. If the corrosion is localized in a small area of the strand (- symbol) in prestressed concrete elements instead of general corrosion (+ symbol), the crack width trend was similar to that obtained in reinforced concrete elements. Therefore, if the entire prestressed strand (or wire) is corroded, crack propagation apparently is mitigated by the compressive state of stresses in the concrete, but if the prestressed strand (or wire) corrodes only over a short portion of its length, crack propagation follows the trend of reinforced concrete elements. On the other hand, natural corrosion data presented a more disperse performance than accelerated corrosion data, as seen from the colored symbols. In general, the natural corrosion data follow a trend similar to the accelerated corrosion data, but with higher crack widening (a higher crack width vs. x/r0 slope). The higher crack propagation rate in natural corrosion tests may indicate that crack repair might need to be done earlier than suggested by accelerated corrosion tests. This performance must be checked by collecting a larger data set from the literature and from the remaining DURACON project outcomes. Data from this investigation follow a well-defined trend: less corrosion-induced material loss is required for cracks to appear at the concrete element surface. In natural conditions, as for the present investigation's specimens, the concrete is affected by the ingress of aggressive agents such as chloride ions, which produce a localized rupture of the passive film until corrosion products are formed in an amount sufficient to crack the concrete, which depends on concrete quality (internal porosity). This cracking process in low-quality concrete may require a smaller amount of corrosion products for crack formation and propagation (Torres-Acosta and Castro-Borges, 2013; Torres-Acosta et al., 2007). However, compared with previous investigations with natural corrosion specimens exposed for periods between 3 and 6 years (■, •, ♦ symbols), there is a difference of approximately 10 times in the amount of mass needed to produce the same crack width.
It is important to note that the data from this investigation were obtained from electrochemical mass loss determinations, mainly linear polarization resistance (also known as Rp). If corrosion were uniform, the faradaic metal loss might be twice the estimated gravimetric metal loss, but if rebar corrosion is localized (i.e., pitting corrosion), the faradaic metal loss could be up to ten times the gravimetric metal loss (González et al., 1995). All rebar radius loss data in Figure 11 were estimated by the gravimetric procedure, except the data from Hernández et al., 2016 (green points) and the present investigation. The actual rebar loss estimates in these two investigations also have the particularity of being performed on highly porous concrete (w/c ratio > 0.65), for which lower mechanical strength and easier crack formation are also expected. A similar concrete type was used by Hernández et al., 2016, to fabricate beams, some of which were loaded while being exposed to a chloride rinse at the center of the beam elements to produce corrosion without applying anodic currents. As seen in Figure 11, the data from the loaded beams (Hernández et al., 2016) separate from all the natural and accelerated data towards lower radius loss for the same MCWavr opening. This performance might be due not only to possible differences between gravimetric and faradaic mass loss, but also to the tensile stresses from the applied flexure loading, which may increase the crack opening propagation rate. In the same reference, some other beams were unloaded; their MCWavr vs. x/r0 data follow a trend similar to that of the present investigation, where the tested concrete prisms remained unloaded during the experimentation.
Empirical correlation between reinforcement corrosion rate and surface crack propagation rate
Figure 12 shows the crack width propagation vs. time of exposure. As observed in this figure, there is no correlation between rebar depth and crack propagation for these specimens located at the La Voz, Venezuela, test site. Two of the cracks at each rebar depth behaved within the same range of maximum crack widths (between 0.05 and 0.3 mm), and only one such crack showed a wider maximum (about 0.4 mm and above). The regression lines for each crack propagation are also shown in Figure 12, with goodness of fit above 0.8. The slope of each regression line is considered in this investigation as the surface crack propagation rate (SCPR, in mm/month).
Based on the data available to date, an empirical correlation between SCPR and iCORR results was established, as shown in Figure 13. As observed in this figure, there is no apparent difference between the correlations for 15-mm and 30-mm cover. Pending further experimental data from the other w/c ratio concrete prisms at the La Voz, Venezuela, test site and from the other prisms with active corrosion, the data indicate that once surface cracks appear at the concrete element, the rate of widening is directly proportional to the iCORR of the rebar, which in turn produces the expansive oxides responsible for such cracks. This empirical correlation will help establish an indirect estimate of the corrosion rate of the reinforcing steel when those in charge of maintaining the corroded structure do not have access to test equipment for determining such electrochemical values, and only a crack width survey is performed over a period of at least one year (12 months).
Figure 13.Empirical correlation between SCPR and iCORR, 0.65 w/c ratio concrete prisms, La Voz, Venezuela, natural test site
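The SCPR is simply the slope of a least-squares line fitted to a crack-width survey; a minimal sketch (the survey values are hypothetical):

```python
import numpy as np

def surface_crack_propagation_rate(months, widths_mm):
    # SCPR (mm/month): slope of a linear fit of max crack width vs. exposure time.
    slope, _intercept = np.polyfit(months, widths_mm, deg=1)
    return slope

# Hypothetical one-year survey of one crack, measured every three months
# with a crack comparator card:
months = np.array([0, 3, 6, 9, 12])
widths = np.array([0.05, 0.10, 0.14, 0.20, 0.24])   # mm
print(f"SCPR = {surface_crack_propagation_rate(months, widths):.4f} mm/month")
```

With the Figure 13 calibration (its fitted coefficients are not tabulated in the text), such an SCPR value could then be mapped to an indirect iCORR estimate.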
CONCLUSIONS

1. An excellent correlation between average maximum crack width (MCWAVER) and corrosion-induced radius loss (xAVG/r0) was found (rebars with 15 mm and 30 mm concrete cover, at windward and leeward faces, for 0.65 w/c ratio specimens), which can be used to predict the rebar section loss for a given crack width.
2. The MCWAVG vs. x/r0 trend slope for natural corrosion data was higher than that obtained from accelerated corrosion data. This might shorten the time to rehabilitation of corroded concrete elements in naturally exposed structures in marine environments.
3. An empirical correlation between surface crack propagation rate (SCPR) and iCORR was established for the 0.65 w/c ratio prisms exposed at the La Voz, Venezuela, test site, which can help estimate iCORR indirectly if MCWAVR values of the corroding element are obtained over a period of at least one year.
ACKNOWLEDGMENTS

The authors would like to thank CYTED and the Universidad del Zulia for funding this research, and all the people who helped with the corrosion and crack survey monitoring over such a long period of time; this would not have been possible without them. Thanks also to Dr. Douglas Linares for helping with the translation of the paper.
Figure 1. Schematic diagram of the rebar configuration in the concrete specimen

Figure 2. Test station in marine environment (La Voz)

Figure 6. Specimen 6 (w/c ratio 0.65, windward face) surface crack survey (left) and photographic evidence of such distress (right)

Figure 8. Representative behavior (last year's data removed) of the average maximum crack width of concrete in relation to the rebar cross-section loss, at the natural test site La Voz, w/c ratio 0.65, windward face
Table 1. Aggressive agents and time of wetness (TOW) at the test station La Voz
Corrosion induced by chloride ion attack was favored because the high relative humidity facilitates the transport of aggressive agents in the atmosphere, enhanced by the high temperature, which accelerates localized corrosion of the bars.
Figure 3. Behavior of meteorological parameters at test station La Voz (wind speed in km/h, temperature in °C, relative humidity in %, and rain in mm vs. time in months) | 2019-04-29T13:08:46.602Z | 2015-05-12T00:00:00.000 | {
"year": 2015,
"sha1": "a74352b98ed3b8046885e2bfae50a7516869eff9",
"oa_license": "CCBY",
"oa_url": "https://revistaalconpat.org/index.php/RA/article/download/321/404",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a74352b98ed3b8046885e2bfae50a7516869eff9",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
250934973 | pes2o/s2orc | v3-fos-license | The Effect of GLP-1 Agonist Treatment On Subclinical Atherosclerosis (GLP-1 Agonist Tedavisinin Subklinik Ateroskleroz Üzerine Etkisi)
Introduction: Although GLP-1 agonists have been shown to reduce cardiovascular events, their effect on the progression of subclinical atherosclerosis is not clear. In this respect, this study was planned to evaluate cardiovascular risk markers in obese and diabetic patients receiving exenatide therapy. Materials and Methods: This retrospective study included 56 patients with Type 2 Diabetes Mellitus (DM) with a body mass index (BMI) >35. Demographic, anthropometric and clinical characteristics before and after six months of treatment with exenatide were screened. The cardiovascular risk markers Atherogenic Index of Plasma (AIP), uric acid, carotid intima media thickness (CIMT), HbA1c, fasting blood glucose (FBS) and postprandial blood glucose levels were evaluated. Results: Eleven of the fifty-six patients discontinued exenatide, due to side effects and other reasons, and 45 patients (35 females, 10 males; age 50 ± 9.5 years) completed the study. AIP, HbA1c, uric acid, fasting plasma glucose, postprandial glucose, waist circumference, hip circumference, body mass index (BMI), total cholesterol, and triglyceride levels improved with exenatide treatment. However, no change was detected in CIMT, blood pressure, spot urine albumin/creatinine ratio, LDL, or HDL levels. Conclusion: Glycemic parameters and AIP and uric acid levels, which are biochemical predictors of subclinical atherosclerosis, improved with GLP-1 agonist exenatide treatment. However, no change was observed in CIMT measurements. These findings can be interpreted as indicating that exenatide therapy can slow down the progression of subclinical atherosclerosis but has no effect on existing atherosclerotic plaque.
Conclusion: Improvement in AIP and uric acid levels, which are biochemical predictors of subclinical atherosclerosis, was achieved with exenatide treatment. However, no change was observed in CIMT measurements. These results can be interpreted as indicating that GLP-1 agonist therapy, whose favorable effects have been shown in cardiovascular studies, may slow the progression of subclinical atherosclerosis but has no effect on existing atherosclerotic plaque. The Atherogenic Index of Plasma (AIP), serum uric acid and carotid intima media thickness (CIMT) are some of these cardiovascular markers. AIP has been shown to be a strong predictor of atherosclerosis and coronary heart disease risk (1,2); it reflects the relationship between non-atherogenic and atherogenic lipoprotein levels and correlates with lipoprotein particle size (2,3). Uric acid is one of the leading indicators of cardiovascular diseases (4). High uric acid levels are considered an independent risk factor for atherosclerosis and cardiovascular events (5). CIMT has recently been used as a non-invasive indicator of the development of atherosclerosis, and studies have reported that increased CIMT is a strong predictor of the risk of stroke, myocardial infarction, and cardiovascular death (6). When planning DM treatment, weight control and cardioprotective efficiency should be targeted in addition to optimal glycemic control. In this respect, glucagon-like peptide-1 (GLP-1) agonist therapies targeting multiple cardiovascular risk factors simultaneously can be considered a good option (7,8). GLP-1 agonists have been shown to have beneficial effects on inflammation, endothelial function, and cardiovascular disease (9-11). Although it has been claimed that at least part of the cardioprotective effect of GLP-1 agonist treatments is due to slowing the progression of atherosclerosis, this has not been conclusively demonstrated in clinical studies (12-17). Exenatide is a short-acting GLP-1 analog that increases glucose-dependent insulin secretion. In this respect, this real-life study was planned to evaluate the effects of exenatide treatment on cardiovascular markers as well as metabolic control.
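For readers reproducing the marker, AIP is conventionally computed as log10 of the triglyceride-to-HDL-cholesterol molar ratio (the definition commonly attributed to Dobiášová and Frohlich). The paper does not restate the formula, so the sketch below assumes that standard definition together with the standard mg/dL-to-mmol/L conversion factors; the example values are hypothetical.

```python
import math

def aip(tg_mmol_l, hdl_mmol_l):
    """Atherogenic Index of Plasma: log10(TG / HDL-C), both in mmol/L."""
    return math.log10(tg_mmol_l / hdl_mmol_l)

def to_mmol_l(value_mg_dl, lipid):
    """Convert mg/dL lab values to mmol/L with standard factors."""
    return value_mg_dl / {"tg": 88.57, "hdl": 38.67}[lipid]

# Example: TG 180 mg/dL, HDL 45 mg/dL -> AIP ~ 0.24, elevated by common cut-offs
print(round(aip(to_mmol_l(180, "tg"), to_mmol_l(45, "hdl")), 3))
```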
Materials and Methods
Patients: This retrospective study was conducted with 56 diabetic patients for whom exenatide treatment was initiated. The study was approved by the Ankara Dışkapı Yıldırım Beyazıt Training and Research Hospital Ethics Committee (date: 27/11/2017, number: 43/27). The study was conducted in accordance with the ethical principles of the Declaration of Helsinki. The patients included were over the age of 18 years, were taking at least two antidiabetic treatment agents, and had a body mass index (BMI) over 35. A diabetic diet and regular exercise were recommended to all patients. The patients were monitored by a dietitian every three months. Exenatide at a dose of 5 μg twice a day was started for all patients, and after one month, the dose was increased to 10 μg twice daily. Four (7.2%) patients were excluded from the study as they did not attend follow-up visits. A total of seven patients discontinued exenatide treatment: one due to drug eruption (1.8%), one due to abdominal pain (1.8%), one due to angioedema (1.8%), one due to diarrhea (1.8%), one due to fatigue (1.8%), and two (3.6%) due to high blood glucose/need for insulin therapy/cost (Figure 1).
Figure 1. Study flow diagram
Clinical and Biochemical Measurements: The clinical and laboratory characteristics of the patients were scanned. Demographic data, comorbidities, complications and antidiabetic drugs used were recorded. The weight, height, and BMI of all patients were measured before the exenatide treatment and after six months, and biochemical examinations were likewise performed before treatment and after six months.
Results
In total, 45 obese diabetic patients for whom exenatide was initiated were evaluated in the study (Table 3). A significant correlation was found between uric acid levels and AIP both before and after treatment (pre-treatment r: 0.405, p: 0.006; post-treatment r: 0.349, p: 0.020). A significant decrease was detected in AIP values with treatment. CIMT values were found to be higher in patients with less uric acid level reduction (r: -0.553, p < 0.001). The decrease in AIP was correlated with the decrease in HbA1c, but not with weight loss (r: 0.303, p: 0.043; r: 0.267, p: 0.076, respectively).
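To illustrate the paired before/after design used above, the sketch below runs a paired comparison of AIP and a baseline correlation with uric acid on synthetic data. The paper does not state which paired test was applied (t-test vs. Wilcoxon), so the paired t-test here is an assumption, and all values are fabricated stand-ins rather than the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 45                                           # patients completing follow-up
aip_pre = rng.normal(0.45, 0.15, n)              # synthetic baseline AIP
aip_post = aip_pre - rng.normal(0.08, 0.10, n)   # synthetic 6-month AIP
uric_pre = 4.5 + 2.5 * aip_pre + rng.normal(0, 1, n)  # synthetic uric acid, mg/dL

t, p = stats.ttest_rel(aip_pre, aip_post)        # paired pre/post comparison
print(f"paired t = {t:.2f}, p = {p:.4f}")

r, p_r = stats.pearsonr(uric_pre, aip_pre)       # baseline uric acid vs. AIP
print(f"Pearson r = {r:.3f}, p = {p_r:.4f}")
```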
Discussion
To assess whether GLP-1 agonists slow the progression of atherosclerosis in DM, this real-life study was conducted by evaluating the six-month progression of patients who received exenatide. Improvements were detected in anthropometric measurements. It was observed that the patients lost an average of 8 kg, and parallel to this, improvements were observed in glycemic parameters, uric acid and AIP levels. No statistically significant difference was found between CIMT values before and after exenatide treatment. Previous studies have suggested that GLP-1 agonist therapy may protect against atherosclerosis and cardiovascular disease through its effects on risk factors such as hyperglycemia, obesity, hypertension, and dyslipidemia (7,8). In clinical studies conducted in recent years, neutral or minimal effects on plasma lipids and blood pressure have been detected with GLP-1 agonist treatment (9,19-21). In the current study, although there was an improvement in triglyceride and total cholesterol levels, there was no change in blood pressure. This supports the thesis that the cardioprotective effect of GLP-1 agonist treatment is due to the action of multiple mechanisms (22). In recent years, studies have been published that have failed to show evidence of slowing of the progression of atherosclerosis, alteration of plaque composition, or reduction of CIMT, despite improvement in the glycemic profile and lipid parameters (23,24). There are also studies in the literature in which CIMT has been shown to decrease with GLP-1 agonist treatment (12,13).
The main difference between this study and the studies that found a decrease in CIMT is that the pre-treatment CIMT levels were higher in the other studies. Similar to CIMT, other non-invasive imaging methods are available for the evaluation of atherosclerosis and endothelial dysfunction. Flow-mediated vasodilation of the brachial artery and magnetic resonance imaging of the carotid artery wall may provide an appropriate assessment (12,25,26). Magnetic resonance imaging, in particular, has been shown to be more consistently associated with CVD, particularly strokes, compared with CIMT (27). Since this was a retrospective real-life study, magnetic resonance imaging and flow-mediated vasodilatation of the brachial artery were not evaluated. A more sensitive evaluation could be made with such imaging modalities in prospectively designed studies with large numbers of participants. It has been reported that AIP plays a predictive role for atherosclerosis and can be used to evaluate cardiovascular risk factors and predict acute coronary events (2). In the current study, AIP decreased significantly with exenatide treatment. This is the first study in the literature to have evaluated the level of AIP with exenatide treatment. The AIP reduction was determined to be correlated with the HbA1c reduction, but not with weight loss, and a correlation was also found between AIP and uric acid levels. This is in line with other previous studies (28). In addition, although some studies have shown no change in uric acid levels with exenatide treatment, the results of the current study showed that uric acid levels decreased with exenatide treatment (29). This may have been due to the higher baseline uric acid levels in this study. This study has some limitations. Some data may have been overlooked because the study was performed by scanning the data of patients followed up in routine outpatient clinic conditions. In addition, the high rate of patients excluded from the study may also have affected the results. The absence of a control group can also be said to be a limitation. Finally, due to the nature of the study, a cause-effect relationship could not be established. Nevertheless, this study confirms the need for larger prospective studies to determine the mechanisms underlying the relationship between serum uric acid, AIP, and atherosclerosis.
Conclusion
In conclusion, glycemic parameters, AIP, and uric acid levels, which are biochemical predictors of subclinical atherosclerosis, improved with exenatide treatment. With treatment, patients lost weight and their BMI decreased. However, no change was observed in CIMT measurements. These results can be interpreted as indicating that GLP-1 agonist therapy, the efficacy of which has been shown in cardiovascular outcome studies, can slow down the progression of subclinical atherosclerosis but, according to the CIMT measurements, has no effect on existing atherosclerotic plaque. Further studies with more effective imaging modalities than CIMT could be performed to confirm this thesis. | 2022-07-22T15:02:56.203Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "d77bd92c2e4a33e3e9ab6a269aea37ae65d8764c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5505/vtd.2022.09815",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "62d4b75cf5bbab6c4ce81a495f45ea4f61133afb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
132928777 | pes2o/s2orc | v3-fos-license | Climate and topographic controls on snow cover dynamics in the Hindu Kush Himalaya
Snow governs the interaction between atmospheric and land surface processes in high mountains, and is also a source of fresh water. It is thus important to both climate scientists and local communities. However, our understanding of snow cover dynamics in space and time is limited across the Hindu Kush Himalaya (HKH) region, which is known to be a climatically sensitive region. We used MODIS snow cover area (SCA) data (2003–2012), APHRODITE temperature data (2000–2007), and monthly long-term in-situ river discharge data of the Gandaki (1968–2010), Koshi (1977–2010) and Manas (1987–2004) basins to analyse variations among four basins, thereby gaining insights into short-term SCA and temperature trends, long-term discharge trends, and regional variability. Strong correlations were observed among SCA, temperature and discharge, highlighting the strong nexus between them. Temporal and spatial snow cover variability across the basins is strongly coupled with the variability of two weather systems, the Western Disturbances (WD) and the Indian Monsoon System (IMS), and is strongly influenced by topography. The manifestation of this variability in downstream discharge can have repercussions for water-based sectors such as hydropower and agriculture, as the low-flow seasons are seen to be affected. This study adds to our knowledge of snow fall and melt dynamics in the HKH region, and of intra-annual snow melt contributions to downstream discharges. The study is limited by the short span of data, and it is desirable to perform a similar study using data representing a much longer time span.
Introduction
The 'Himalaya', a term coined by the ancient pilgrims of India meaning 'abode of snow', owes its name to snow (Saraf and Choudhury, 2006). In recent years, the relevance of snow has become more scientific than aesthetic due to its role in climate change and as a source of fresh water, which has a direct effect on economic development and social wellbeing (Cao and Liu, 2005). The albedo of snow (clean, fresh, dry snow) is as high as 0.9 in some parts of the light spectrum, meaning that 90% of the incident radiation is reflected back to the atmosphere (Munneke et al., 2009). The low thermal conductivity and high albedo of snow insulate the Earth's surface from incoming solar energy (Weller and Holmgren, 1974), thereby strongly affecting climate change (Vavrus, 2007). The role of snow in energy exchange and climate change has resulted in its recognition as an essential climate variable (ECV) by the Global Climate Observing System (GCOS). Snow exhibits a close negative relationship with atmospheric temperatures (Brown, 2000; Hosaka et al., 2005) and thus is often used as a proxy indicator of climate change (Robinson, 1987; Kropacek et al., 2010). Because the relationship between air temperature and precipitation affects the occurrence of snow fall (Bednorz, 2004), and both temperature and precipitation are affected by climate change, which is pronounced at higher elevations, the changing climate is altering temporal and spatial patterns of snow fall. Changes in snow fall patterns are manifested in changes in snow cover area (SCA) and snow depth, and in shifts in snow accumulation and the timing of melting on the ground. These changes will have serious consequences in downstream areas because both water balance and peak runoff in cold regions are strongly affected by snow accumulation in drainage basins (Pomeroy, 2002). Changes in water balance and peak runoff will have implications downstream in terms of access to water resources for drinking, irrigation, hydro-power and hydro-based industries.
The Hindu Kush Himalaya (HKH) region is the source of water in 10 major drainage basins, which support a population of more than 1.3 billion (Jianchu et al., 2007) who are dependent on glacial and snow melt to support life and livelihoods. A country like Bhutan, one of the smallest economies in the world, has its economic development tied to hydropower resources, with 17.61% of its GDP being derived from them (NEC, 2012). Snow cover and associated changes have a direct bearing on rangeland productivity (Buus-Hinkler and Tamstorf, 2006; Shang et al., 2012; Paudel and Andersen, 2013), and these changes threaten the livelihoods of mountain residents, many of whom are nomadic. Rangeland covers approximately 60% of the HKH region and supports many communities in the high mountains, where livelihoods are derived from pastoral production (Sharma et al., 2007).
Research on snow cover in the HKH region generally indicates a decrease in SCA (Immerzeel et al., 2009; Shrestha and Joshi, 2009; Gurung et al., 2011b; Gurung et al., 2011a; Maskey et al., 2011). On the contrary, other studies (Tahir et al., 2015) have reported an increase in SCA in the western Himalaya and the Karakoram area. These differences indicate that snow cover variability is high, as it is affected by micro-climates, and research on snow therefore needs to be performed at an appropriate scale to capture micro-climatic effects. Although much is at stake, our understanding of the spatial variability of snow accumulation and melt (with altitude, and from east to west) is limited by a lack of research at an appropriate scale in the HKH region. The bulk of these studies have a regional to sub-regional focus (Zhang et al., 2004; Dahe et al., 2006; Li et al., 2008; Zhang et al., 2010; Gurung et al., 2011b; Maskey et al., 2011; Jin et al., 2015; Singh et al., 2014), and catchment-level research is even more sparse (Jain et al., 2009; Kulkarni et al., 2010; Sharma et al., 2012).
This paper attempts to explain snow fall and melt patterns based on SCA variations in time and space at the catchment level across four comparable basins (the Jhelum, Gandaki, Koshi and Manas basins) spread across the Himalaya range from west to east. An attempt was also made to compare SCA with temperature and discharge to shed light on the temperature-snow-discharge nexus.
Study sites
The study sites consisted of four comparable transboundary basins spread across the Himalaya range (Figure 1); the sites were selected to represent microclimatic variations across the range. From west to east, these basins are the Jhelum basin, which is part of the greater Indus basin; the Gandaki and Koshi basins, which are parts of the greater Ganges basin; and the Manas basin, which is part of the greater Brahmaputra basin. The Jhelum basin, located in the Western Himalaya, is influenced by westerlies, whereas the other three basins in the Central (Gandaki and Koshi) and Eastern (Manas) Himalaya are influenced by the Indian Summer Monsoon (ISM). These basins, listed in descending order of size, are the Koshi (88,605 sq. km), Jhelum (50,858 sq. km), Gandaki (44,665 sq. km) and Manas (29,638 sq. km) basins (Figure 2). Mean elevations in descending order are 4408 m asl in the Koshi basin, 4065 m asl in the Gandaki basin, 3756 m asl in the Manas basin and 3077 m asl in the Jhelum basin (Figure 2). Hypsometric integral (HI) values, which range from 0.48 (Jhelum) to 0.50 (Koshi, Gandaki, Manas), indicate that all four basins are at a mature stage (Singh et al., 2008; Ramu and Mahalingam, 2012).
Snow data
In situ snow stations are sparse in the Himalaya in general, and more so on the southern flank. Even where available, the station data represent a point, are often not representative of a large area, and thus are not appropriate for basinwide snow analysis. Alternatively, remote sensing provides continuous (spatially and temporally) snow information useful for spatio-temporal variability analysis at various geographic levels. One remote-sensing-derived dataset that has become almost a de facto standard for snow cover research and has been widely used (Immerzeel et al., 2009; Shrestha and Joshi, 2009; Gurung et al., 2011b; Gurung et al., 2011a; Maskey et al., 2011) is the moderate resolution imaging spectroradiometer (MODIS) snow product. There is a suite of MODIS snow products consisting of a sequence of products beginning with the 500-m-resolution swath product (Hall et al., 2002; Hall and Riggs, 2007), which is made available for public use by the National Snow and Ice Data Center (NSIDC) Distributed Active Archive Center (DAAC). Daily snow products produced by the MODIS sensors onboard two satellites, Terra (MOD) and Aqua (MYD), have been available since February 2000 and July 2002, respectively. The MODIS snow algorithm has been described by Hall and Riggs (2007). The accuracy of MODIS products reported by many researchers based on comparisons with in situ data (Klein and Bernett, 2003; Simic et al., 2004; Parajka and Bloeschl, 2008; Wang et al., 2008) is as high as 94-95%, although the accuracy is low (<39%) where the snow depth is less than 4 cm. MODIS snow products have been found suitable and used for such basinwide analyses (Barman and Bhattacharjya, 2015).
In this study, binary snow information from the Level 3 MODIS snow products obtained by Terra (MOD10A1) and Aqua (MYD10A1) during overpasses in the morning (approximately 0445 GMT) and afternoon (approximately 0745 GMT), respectively, and available at daily temporal resolution, was used. The MOD10A1 and MYD10A1 snow products are tile-gridded products in the sinusoidal projection and measure approximately 1200 x 1200 km (10° x 10°) (Riggs et al., 2006). The daily MODIS snow products, available in hierarchical data format (HDF), were first re-projected into the Lambert equal-area projection system prior to conversion to a GIS-friendly format (GeoTIFF) using the MODIS reprojection tool (MRT). Instead of using the already available daily MODIS snow products, which are limited by cloud pixels, daily products were generated using moving 8-day composites. Cloud pixels were thereby replaced by information from their corresponding cloud-free pixels, which resulted in a cloud-filtered daily snow product.
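The cloud-filtering step described above can be sketched as follows: each cloud pixel in a daily binary snow map is replaced by the nearest cloud-free observation within a trailing 8-day window. The class coding (0 = no snow, 1 = snow, 2 = cloud) is a simplification for illustration only; the real MOD10A1 product uses different coded values, and the actual compositing may also draw on forward-looking days.

```python
import numpy as np

NO_SNOW, SNOW, CLOUD = 0, 1, 2   # illustrative coding, not the MOD10A1 codes

def cloud_filter(stack):
    """Replace cloud pixels with the nearest earlier cloud-free observation
    within an 8-day window; stack has shape (days, rows, cols)."""
    filled = stack.copy()
    for d in range(stack.shape[0]):
        cloudy = filled[d] == CLOUD
        for back in range(1, 8):             # look back up to 7 days
            if d - back < 0 or not cloudy.any():
                break
            donor = stack[d - back]
            usable = cloudy & (donor != CLOUD)
            filled[d][usable] = donor[usable]
            cloudy = filled[d] == CLOUD
    return filled

# Tiny demo: day 1 is partly cloudy and inherits values from day 0
stack = np.array([[[SNOW, NO_SNOW], [NO_SNOW, SNOW]],
                  [[CLOUD, CLOUD], [NO_SNOW, SNOW]],
                  [[CLOUD, NO_SNOW], [SNOW, SNOW]]])
print(cloud_filter(stack)[1])   # day 1 clouds filled from day 0
```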
Temperature
To develop a spatially explicit representation of temperatures for comparison with the SCA in the four basins, the Asian Precipitation - Highly Resolved Observational Data Integration Towards Evaluation (APHRODITE) daily temperature dataset of 2000 to 2007 was used. The APHRODITE gridded daily mean temperature data (product version V1204R1) at a 0.25° spatial resolution, available in netCDF (nc) format, were first converted to GeoTIFF format. The gridded daily temperature raster file was clipped based on basin boundaries (shape files) and re-projected to create a Lambert equal-area projection. The gridded temperature datasets were resampled to a 500-m resolution using the cubic convolution technique to make them consistent with the snow cover data. Temperature statistics were extracted in table format for the basins as a whole and for 1000-m elevation zones.
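A minimal sketch of the zonal extraction described above: once the temperature grid and the DEM are co-registered at 500 m, the mean temperature per 1000-m elevation zone reduces to masked averaging. The grids below are synthetic (a simple lapse-rate toy field), not APHRODITE data.

```python
import numpy as np

def zonal_mean_by_elevation(temp_grid, dem_grid, band_m=1000.0):
    """Mean of a co-registered temperature grid per elevation band."""
    bands = np.floor(dem_grid / band_m).astype(int)
    return {(b * band_m, (b + 1) * band_m): float(np.nanmean(temp_grid[bands == b]))
            for b in np.unique(bands)}

rng = np.random.default_rng(1)
dem = rng.uniform(500, 7500, (40, 40))                    # synthetic DEM, m asl
temp = 25.0 - 0.0065 * dem + rng.normal(0, 1, dem.shape)  # toy lapse-rate field

for (lo, hi), t in sorted(zonal_mean_by_elevation(temp, dem).items()):
    print(f"{lo:.0f}-{hi:.0f} m: {t:.1f} degC")
```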
Discharge
River discharge data from three downstream stations corresponding to the Gandaki (Narayan Ghat station along the Narayani river), Koshi (Chatara-Kothu station along the Sapta Koshi) and Manas (Autsho station along the Kuri Chu) basins (Figure 1) were used. These data were from stations operated by the Department of Hydrology and Meteorology (DHM) of Nepal and the Department of Hydro-Met Services (DHMS) of Bhutan. The stations capture 70%, 61% and 10% of the surface flow of the Gandaki, Koshi and Manas basins, respectively. Although these stations are not located at the drainage basin outlets, they were the best alternative available for the analysis.
Topographic
Topography is an important factor in snow cover distribution, and this paper studies the variations in SCA and temperature between basins and topographic zones. We used a widely adopted, freely available topographical dataset, the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM), available at a 90-m spatial resolution. This DEM was first re-projected to create a Lambert equal-area projection and then resampled at a 500-m spatial resolution to make it consistent with the other data layers. Delineation of the drainage basins was performed using the 90-m-resolution SRTM DEM.
Snow cover area analysis
We analysed trends spanning 10 years (2003-2012), referred to as the short-term SCA trend, for the entire basins, in each 1000-m elevation belt, and for different slope aspects. Linear regression is one of the most widely adopted approaches (Pu et al., 2007; Wang et al., 2008; Immerzeel et al., 2009; Gurung et al., 2011a), and SCA trends were analysed using linear regressions at the 95% confidence level. As a measure of the statistical significance of the observed trends, we used the regression p-value. Linear regression was performed on the average SCA% in the 10 individual years to evaluate the short-term trend. Because topography has a strong effect on snow accumulation and melting (Jain et al., 2009), similar regressions were developed for each 1000-m elevation belt and for the various slope aspects (N, NE, E, SE, S, SW, W, NW) to characterize the spatial variability. The spatial resolution (500 m) of the MODIS snow product was found to be adequate to perform elevation- and aspect-based snow cover analysis (Table S3, Supporting information). Because it is important to analyse seasonal dynamics, average SCA% statistics were generated for four seasons (winter, i.e. December-March; spring, i.e. April-May; summer, i.e. June-August; and autumn, i.e. September-November), following the schema adopted by Immerzeel et al. (2009) and Maskey et al. (2011). Because intra-annual variability is important for water users such as the hydropower sector and the farming community to understand the changes and adapt, average monthly SCAs spanning the 10-year period were plotted, and variations within individual months were captured by standard deviations (SDs) using the 10 sets of monthly average SCAs.
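The trend test itself is a plain least-squares fit of annual mean SCA% on year, judged by the regression p-value at the 95% confidence level and repeated per basin, elevation belt, and aspect. The sketch below applies this to one synthetic series, not the MODIS-derived values.

```python
import numpy as np
from scipy import stats

years = np.arange(2003, 2013)                  # the ten-year MODIS record
rng = np.random.default_rng(7)
sca_percent = 14.0 - 0.12 * (years - 2003) + rng.normal(0, 0.8, years.size)

fit = stats.linregress(years, sca_percent)     # one series: a basin/belt/aspect
print(f"slope = {fit.slope:.3f} %/yr, p = {fit.pvalue:.3f}")
print("significant at 95% level" if fit.pvalue < 0.05 else "not significant")
```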
Temperature analysis
The daily temperature statistics extracted from the resampled (500 m) daily gridded APHRODITE temperatures spanning 2000 to 2007 were averaged over various time periods (annual, monthly and seasonal) and geographic areas (entire basins, elevation zones and aspects). Linear regression at the 95% confidence level was used to analyse short-term temperature trends (annual, seasonal and monthly). Correlations between basinwide daily temperatures and SCA, and in individual elevation belts, were analysed using a nonparametric measure of association, Kendall's tau-b correlation (τb). Similar to Spearman's (ρ) and Pearson's (r) product-moment correlations, it measures the relationship between two variables.
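For the association test, a sketch on synthetic daily series is given below; scipy's kendalltau computes the tau-b variant by default, which matches the statistic used here. The temperature and SCA series are fabricated to show the expected inverse relation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
doy = np.arange(365)
temp = 10 * np.sin(2 * np.pi * (doy - 200) / 365) + rng.normal(0, 2, doy.size)
sca = 40 - 2.5 * temp + rng.normal(0, 5, doy.size)   # inverse relation, synthetic

tau, p = stats.kendalltau(temp, sca)   # scipy returns the tau-b variant by default
print(f"Kendall tau-b = {tau:.2f}, p = {p:.3g}")
```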
Inter-annual cyclicity of SCA
The Jhelum basin, which lies at a higher latitude (Figure 1), has the highest average decadal SCA (6402.53 sq. km) despite its lesser relief, and the Manas basin, with an average SCA of 3999.60 sq. km, has the least (Table 1). In terms of SCA as a percentage of total basin area, the Manas (13.49%) and Koshi (5.79%) basins are the highest and lowest, respectively. The short-term SCA trend in these basins is largely negative and statistically not significant except in the Manas basin (Table 2). A similar observation has been reported from this region by other studies and has been attributed to the high degree of SCA variability and the shorter temporal span of the data (Immerzeel et al., 2009). A positive SCA trend in the Western Himalaya has also been reported by others (Singh et al., 2014; Tahir et al., 2015), possibly due to an increase in winter precipitation as a result of stronger westerly circulation (Archer and Fowler, 2004; Hewitt, 2005). A similar positive SCA trend was also reported from western China from 1951 to 1997 (Dahe et al., 2006). The observed rate of SCA decline is, however, not consistent, a case of high spatial variability in terms of the snow response to climate change across the Himalaya. It is observed to be highest in the Manas basin and is inferred to be comparable in the Gandaki and Koshi basins (Table 2). The observed tendency of SCA decline is found to be consistent across elevations (Table S1). The observed decreasing SCA trend is statistically significant between 4000-7000 m in the Manas basin and between 6000-7000 m in the Gandaki basin. A similar short-term SCA trend analysis based on aspect also indicated a decreasing tendency in all aspects except in the Jhelum basin, statistically significant in the case of the Manas basin (Table S2). The observed short-term SCA trend is consistent with the observed short-term temperature trend. The inter-annual temperature in all the basins indicates an increasing trend, which is statistically significant except for the Jhelum basin. This is in agreement with the positive long-term temperature trend in the HKH region and vicinity reported based on the CRU TS 2.1 dataset (Immerzeel et al., 2009). The observed temperature increase is found across all elevations and aspects (Tables S1 and S2). However, the Manas basin has the highest rate of short-term temperature increase, while the rate is comparable for the Koshi and Gandaki basins, indicating a case of spatial variability. The Jhelum basin indicates an increasing tendency at the lowest rate amongst the four basins. Intra-annual variability, by way of seasonal trend analysis, shows an increasing trend for all seasons except autumn temperature (Table 2).
Considering the positive annual temperature trends in the Gandaki and Koshi basins, in conjunction with the observed negative inter-annual SCA trend in the Manas basin, the region in general has experienced a decade of decreasing snow cover. This observation is consistent with trends from this region reported earlier from Nepal (Ojha, 2009; Shrestha and Joshi, 2009) and the upper Indus basin (Immerzeel et al., 2009).
Intra-annual variation in SCA
Intra-annual variability, as manifested by monthly and seasonal SCA variation, shows conspicuous differences between the Jhelum basin and the other three basins (Figure 3), a reflection of their different weather systems. The Jhelum basin is affected by a western system known as Western Disturbances (WD), originating over the Mediterranean and Black Sea area (Hatwar et al., 2005). The WD is the dominant winter system, which results in heavy snow fall in the Western Himalaya. The other three basins receive snow mostly from the ISM during summer and partly also from WD during winter. This characteristic of the Nepalese Himalaya, being fed by both summer and winter snowfall, is why its glaciers were referred to as 'summer accumulation type' glaciers (Ageta and Higuchi, 1984). Despite the different weather systems, all four basins experience maximum snow fall during the winter months, peaking either in February (Jhelum, Gandaki and Manas basins) or March (Koshi basin). Similarly, the month with the lowest SCA falls in summer: August (Jhelum basin), July (Gandaki and Manas basins) and June (Koshi basin). Figure 4 shows the intra-annual variability of SCA using 8-day average SCA% in each 1000-m elevation belt. The SCA% increases with elevation, and there is a marked increase above 6000 m asl, which is very prominent in the Gandaki, Koshi and Manas basins. The intra-annual variation is strong below 6000 m asl, which has also been observed in Nepal (Maskey et al., 2011). Above 7000 m asl, the SCA% is much greater in summer than in winter in all four basins. This pattern has also been reported in Nepal and has been attributed to greater cloud cover in summer than in winter (Maskey et al., 2011). The SDs of the monthly SCAs represent the degree of variation for each individual month during the 10-year period. The SD plot (Figure 3) shows that the variation is greater during the winter months, consistent with reports from Nepal (Maskey et al., 2011) and the Loess Plateau, China (Jin et al., 2015). The SD is greatest in February in the Gandaki, Koshi and Manas basins and greatest in December in the Jhelum basin, a case of spatial variability.
Relationship between SCA and temperature
The inverse correlation between SCA and temperature is well established (Bednorz, 2004; Hosaka et al., 2005; Gurung et al., 2011b; Maskey et al., 2011; Barman and Bhattacharjya, 2015), whereas the spatial variability across elevations is not well understood. Table 3 shows the correlation (τb) between the daily average temperature and daily SCA of the individual basins during the period between 2000 and 2007. The correlation is negative in all four basins and is statistically significant at the 0.01 confidence level. The strongest correlation is observed in the Jhelum basin (τb = −0.62), and the weakest correlation is observed in the Koshi basin (τb = −0.46). Spatial variability is explicit, with stronger negative correlations in the lower-elevation belts and weaker, positive correlations in the higher-elevation belts. The 7000 m asl elevation marks the transition from negative to positive correlation, which has also been reported in Nepal. This pattern may be due to temperatures being well below the critical value above 7000 m asl, so that small changes in temperature do not lead to perceptible changes in SCA (Barman and Bhattacharjya, 2015). The other possible reason could be ablation at higher elevation due to wind erosion and sublimation.
Relationship between SCA and topography
Topography has a major effect on weather and climate in the Himalaya, and elevation and aspect therefore play important roles in the SCA distribution (Jain et al., 2009; She et al., 2015). We analysed the elevation- and aspect-wise SCA distribution to shed light on topographic controls on the SCA distribution. Figure 5 presents a radar chart showing the distribution of SCA% based on aspect. The Jhelum and Manas basins receive maximum snow fall on their west- and east-facing slopes, whereas north- and south-facing slopes receive the maximum snow fall in the Gandaki and Koshi basins. This pattern is observed during all seasons. A strong correlation between elevation and SCA% is observed, as indicated by a high coefficient of determination (R²) in all four basins: Jhelum (0.96), Gandaki (0.92), Koshi (0.83) and Manas (0.84) (Figure S2). Variations in SCA% with every 100-m increase in elevation were calculated and were found to be 1.9% in the Gandaki basin, 1.6% in the Koshi and Jhelum basins, and 1.5% in the Manas basin. Figure 6 is a hypsography curve showing yearly and seasonal SCA across every 100-m elevation belt. SCA is maximum in winter in all the basins across all elevation belts. The difference in seasonal SCA is very prominent below 6000 m in the Gandaki, Koshi and Manas basins, which confirms the observation shown in Figure 4. In the Jhelum basin, this critical elevation is much lower: 4700 m asl.
Relationship between SCA and stream flow
Snow melt is particularly important for sustaining river flows during the winter and spring months, when glacial melt is impeded by lower temperatures. Snow melt, and thus SCA, is therefore important for future water security, particularly during the dry winter months (Stewart, 2009; Immerzeel et al., 2010). Studies have indicated a strong correlation between SCA and downstream discharge (Yang et al., 2009; Delbart et al., 2015). The correlation between average SCA at different elevation belts and average discharge was analysed for the Gandaki basin, where the station captures the variability of snow melt over 70% of the basin's area. The analysis (Table 4) showed a stronger positive correlation at lower elevations and vice versa, which may indicate a greater contribution to river discharge from snow at lower elevations. The elevation at which the correlation between SCA and discharge transitions from positive to negative varies across seasons, indicating contributions of snow cover from different elevations in different seasons. The contribution of snow melt to river discharge is confined to elevations below 5000 and 6000 m in winter and spring, respectively. Due to warmer temperatures during summer, the contribution from snow comes from all elevations, but with decreasing significance with elevation.
Discharge trend
The long-term discharge trend was also analysed using monthly average, monthly maximum and monthly minimum discharge for the Gandaki (Narayan Ghat station) and Koshi (Chatara-Kothu station) basins, and monthly average discharge for the Manas (Autsho station) basin. The spans of data used were different: 1968-2010 for the Gandaki basin, 1977-2010 for the Koshi basin and 1987-2004 for the Manas basin. None of the long-term discharge trends was statistically significant, and the analysis was thus not conclusive. However, the tendency is largely one of decline (Smadja et al., 2015). The average monthly discharge plot (Figure S5) shows the highest average discharge in August in all three basins, while the lowest average discharge occurs in January in the Manas basin and in March in the Gandaki and Koshi basins.
Conclusions
Based on the analysis of MODIS snow cover data from 2003 to 2012, the Himalayan region has experienced a decline of SCA, a trend found to be consistent across all elevation belts and aspects. The statistically significant negative correlation between SCA and temperature indicates that this trend is partly a result of increasing temperatures. However, there are spatial and temporal variations, which are important to understand in order to develop region-specific adaptation interventions. The east-west variability between the Jhelum basin in the Western Himalaya and the other three basins is due to the difference in weather systems. The Jhelum basin is predominantly dependent on winter snow fall delivered by WD and is characterized by a high degree of inter-annual and intra-annual variability compared to the other three basins. Topography has strong effects on the snow distribution and snow melt processes, as indicated by the high coefficient of determination (R²) between SCA and elevation. The correlation between SCA and temperature is inverse and stronger at lower elevations. Topography also influences intra-annual and seasonal SCA variability, with stronger variability observed below 6000 m asl. This may introduce greater variability (less stability) into winter and spring (low flow) season discharge, as the snow melt contribution to discharge is confined to snow from elevations below 5000 and 6000 m, respectively. It is also to be noted that the month-wise long-term discharge trend indicates decline during the low-flow months.
Table 5. Long-term monthly trend (linear regression) and standard deviation of discharge from stations in the Gandaki (1968-2010), Koshi (1977-2010) and Manas (1987-2004) basins
This study indicates that it is critical to understand the nexus between climate, snow and water because future water security issues will have many inter-related adverse consequences, threatening the very existence of human civilization. There is a need for better research into the impacts of climate change in alpine environments using long-term data derived from remote sensing and in situ stations. There is a general lack of high-elevation hydro-climatic stations in the Himalaya region, and it is imperative to fill this gap if we are to unravel the complex climate-snow-water nexus and develop adaptations. The findings from this study will help advance our understanding of these alpine processes and complexities and lead to better strategies for water resource management.
The views and interpretations expressed in this paper are those of the author(s). They are not necessarily those of ICIMOD and do not imply any opinion regarding the legal status of any country, territory, city or area, the delineation of its frontiers or boundaries, or the endorsement of any product.
This study was partially supported by core funds of ICIMOD, which received contributions from the governments of Afghanistan, Australia, Austria, Bangladesh, Bhutan, China, India, Myanmar, Nepal, Norway, Pakistan, Switzerland, and the United Kingdom. | 2019-04-26T14:25:21.051Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "a3904315227c2b484f90512d50747ff1905c0715",
"oa_license": "CCBY",
"oa_url": "https://rmets.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/joc.4961",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "ecab5dc9ef40528a8af7add7e269175c916a5726",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
243344834 | pes2o/s2orc | v3-fos-license | Delivery service satisfaction and associated factors among mothers who gave birth at West Shewa Zone public hospitals, Ethiopia
Objective: This study assessed mothers' satisfaction with delivery services and its associated factors at public hospitals found in West Shewa Zone, Ethiopia, from March 1 to April 15, 2018. Result: This study finding showed that the overall satisfaction level of mothers with the delivery service was 82.1%. Those mothers who planned their pregnancy were 4.93 times more satisfied with the delivery service than those who did not plan it (AOR: 4.93; 95% CI: 2.172-11.208). The odds of satisfaction for women who had pain management were 1.56 times higher than for those who did not. Moreover, gestational age at birth for pre-term and full term [(AOR: 0.027; 95% CI: 0.003-0.254), (AOR: 0.067; 95% CI: 0.011-0.401)], means of transportation (use of ambulance) (AOR: 3.785; 95% CI: 1.24-11.51) and stay in hospital (AOR: 0.10, 95% CI: 0.01-0.93) were the significant predictors of mothers' satisfaction with the delivery service in the study area. Therefore, notable attention should be given to those factors, as they may influence the future utilization of the service.
Introduction
Sub-Saharan African (SSA) countries are characterized by high maternal mortality ratios and the lowest coverage of births attended by skilled health care providers [1]. Ethiopia is one of the Sub-Saharan African countries with a high maternal mortality ratio of 412 per 100,000 live births and 19,000 maternal deaths annually [2]. This is due to delays in seeking health service, in reaching the health facility, and in receiving timely and effective intervention after reaching the health facility [3].
Worldwide, 63.1% of births were attended by a skilled health care provider. Almost all births were attended by skilled health care professionals in developed countries. In Africa and Asia, only 46.5% and 60.8%, respectively, of women gave birth with the help of a skilled attendant [4]. In Ethiopia, only about 28% of deliveries were attended by skilled birth attendants and only about 26% of deliveries were institutional. On the contrary, home delivery was 73%, which is still high. Similarly, in the Oromia region (Ethiopia), home delivery accounts for 81%, while institutional deliveries were only about 19% [1,5,6].
Low utilization of delivery services at health facilities is mainly related to maternal satisfaction with delivery services [7,8]. This is why the WHO emphasizes ensuring client satisfaction as a means of secondary prevention of maternal mortality, since satisfied women are more likely to adhere to health care providers' recommendations and to utilize services [9].
Maternal satisfaction with hospital care during delivery plays a role in the utilization of maternal health services [14]. It improves client friendliness and the cultural sensitivity of institution-based delivery and postpartum care [15]. Women who were satisfied with delivery care have better self-esteem and confidence, are faster in establishing a maternal-neonatal bond, and are more likely to breastfeed their infant [16,17].
Studies have shown that women who were dissatisfied with their delivery experiences were more prone to develop a fear of delivery and postnatal depressive symptoms, and to face difficulties in breastfeeding and in caring for themselves and their newborn [18,19]. Furthermore, dissatisfaction may also result in poorer postnatal psychological adjustment, a higher rate of future abortions, preference for a caesarean section, more negative feelings towards the infant, and breast-feeding problems [20].
In Ethiopia, few studies have been conducted; moreover, all were quantitative, which prevents mothers from expressing their deepest feelings about the delivery service they were provided. Therefore, using both quantitative and qualitative methods, this study aimed to assess the current status of mothers' satisfaction with delivery services and its associated factors in the study area.
Sample size determination and sampling procedures
All randomly selected mothers who gave birth in the selected hospitals during the study period were included in the study. The single population proportion formula was used to get a final sample size of 390, at a 95% confidence level, 5% margin of error, 10% non-response rate, a design effect of 1.5, and a 19% level of satisfaction taken from a previous study [17]. Moreover, for the in-depth interviews, ten participants (6 mothers who gave birth and 4 family members) were selected purposively from the selected hospitals.
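The stated sample size can be reconstructed step by step: n0 = Z²p(1−p)/d² with p = 0.19, d = 0.05 and Z = 1.96 gives about 237, which is then inflated by the design effect of 1.5 and the 10% non-response allowance. The sketch below reproduces that arithmetic; it yields 391, reported in the paper as a final sample of 390 after rounding.

```python
import math

def proportion_sample_size(p, d=0.05, z=1.96, deff=1.0, non_response=0.0):
    n0 = z ** 2 * p * (1 - p) / d ** 2        # single population proportion
    return math.ceil(n0 * deff * (1 + non_response))

# p = 0.19 from the prior study, 5% margin, design effect 1.5, 10% non-response
print(proportion_sample_size(0.19, deff=1.5, non_response=0.10))   # -> 391
```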
Data collection methods
For the quantitative part: Data were collected using a pre-tested, interviewer-administered structured questionnaire. The questionnaire was adapted by reviewing related studies and presented using a 5-point Likert scale (1 = very dissatisfied, 2 = dissatisfied, 3 = neutral, 4 = satisfied and 5 = very satisfied).
Questions were prepared originally in English and then translated to the local language for easy administration, then translated back to English to maintain the quality and consistency of the information. The overall level of mothers' satisfaction was calculated from 20 satisfaction items, which showed good internal consistency (Cronbach's alpha = 0.76).
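The internal-consistency measure for the 20-item quantitative scale is the standard Cronbach's alpha, α = (k/(k−1))(1 − Σ item variances / total-score variance). The sketch below computes it on synthetic Likert responses; the 0.76 reported above comes from the real questionnaire, so the synthetic value will differ.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores (1-5)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(5)
latent = rng.normal(0, 1, (390, 1))                  # shared satisfaction factor
items = np.clip(np.round(3 + latent + rng.normal(0, 2, (390, 20))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")        # synthetic; paper: 0.76
```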
For the qualitative part: In-depth interviews were conducted with mothers who gave birth at the selected hospitals and with their family members or supporters during the study period. However, mothers who participated in the quantitative study were excluded from participating in the interviews.
Data analysis procedures
Quantitative data analysis: Collected data were cleaned, coded and entered into Epi-data version 3.1, then exported to SPSS version 20 for Windows for further analysis. Bivariate and multivariable logistic regression was performed. Independent variables with a p-value < 0.2 in the bivariate logistic regression were included in the multivariable logistic regression. Variables were considered statistically significant if the p-value was < 0.05 in the multivariable logistic regression.
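The two-step screening strategy described above can be sketched with statsmodels: bivariate logistic fits first, carrying forward predictors with p < 0.2, then a multivariable model whose exponentiated coefficients give the adjusted odds ratios (AORs). The data below are synthetic, and SPSS (used in the study) is the actual tool; this is only an illustration of the procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 390
X = rng.normal(size=(n, 4))                       # four candidate predictors
prob = 1 / (1 + np.exp(-(0.4 + 1.2 * X[:, 0] + 0.6 * X[:, 1])))
y = rng.binomial(1, prob)                         # synthetic satisfied / not

keep = []                                         # step 1: bivariate, p < 0.2
for j in range(X.shape[1]):
    res = sm.Logit(y, sm.add_constant(X[:, j])).fit(disp=False)
    if res.pvalues[1] < 0.2:
        keep.append(j)

final = sm.Logit(y, sm.add_constant(X[:, keep])).fit(disp=False)  # step 2
print("retained predictors:", keep)
print("AORs:", np.exp(final.params[1:]))          # significant if p < 0.05
```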
Qualitative data analysis: Collected data were transcribed and translated exactly from the local language to English. Manual transcription and thematic content analysis were conducted. The identified themes were then triangulated with the quantitative part to determine differences and similarities in the perspectives on mothers' satisfaction with the delivery service.
Results
Socio-demographic characteristics of the respondents
A total of 390 mothers participated in this study, making a 100% response rate. The mean age of the respondents was 26.92 ± 5.10 (SD) years. About 322 (82.6%) of the deliveries were planned, about 123 (59.9%) of the mothers came to the hospitals using an ambulance, and 297 (76.3%) of them waited for a care provider for <15 minutes (Table 1).
However, the finding of this study is higher than those of studies done in western Nepal, Northern Jordan, Kenya, Felege Hiwot Referral Hospital (Ethiopia) and St. Paul's Hospital (Ethiopia), which were 67.8%, 64%, 56%, 74.9% and 19%, respectively [8,10-12,18]. This variation may be due to differences in the background of the study participants, the satisfaction measurement tools used, the study periods, and the methods of analysis. On the other hand, this finding is lower than those of studies conducted in west India (86%) and Kenya (96%) [27,28]. This discrepancy may be due to cultural differences and differences in the quality of service provided in this study area.
This study revealed that variables such as gestational age at birth, duration of stay, mode of transportation (use of a free ambulance), fetal outcome, status of pregnancy, and pain management were predictors of delivery service satisfaction among the study participants. This finding is consistent with other studies conducted in different parts of Ethiopia [10,12,28,29]. Likewise, mothers who gave birth at pre-term and full term were less satisfied than those who gave birth at post term [(AOR: 0.027, 95% CI: 0.003-0.25) and (AOR: 0.067, 95% CI: 0.01-0.40), respectively]. However, this finding was in contrast with the finding from a Nepal study [10]. The reason may be differences in the expectations of mothers and cultural differences.
Additionally, duration of stay in hospital was another predictor of delivery service satisfaction. Mothers who stayed in hospital for 48-72 hours were less satisfied compared to those who stayed >72 hours (AOR: 0.107, 95% CI: 0.01-0.93). This finding was in contrast with studies conducted in Nepal and at St Paul's hospital [10,12]. The possible reason could be mothers' perception of getting better care through a longer stay in hospital.
The other important predictor of delivery service satisfaction was the means of transportation. Mothers who came to hospital during their labor by free ambulance were 3.785 times more likely to be satisfied than those who used other means of transportation, such as by cart or being carried on foot by shoulder (AOR: 3.785, 95% CI: 1.24-11.51). This is supported by the qualitative study: "…The existence of the ambulance was unforgettable for its contribution in my arrival very quickly…". Nevertheless, in another study, means of transportation was not statistically associated with maternal satisfaction with delivery service [24]. The possible reason may be differences in the topography of the study areas, the accessibility of hospitals, and the awareness of study participants about the use of ambulances.
Regarding the status of the mothers' pregnancy, mothers who planned their pregnancy were 4.93 times more satisfied with delivery care than those who did not plan it (AOR: 4.93, 95% CI: 2.17-11.20). This finding is similar to those of studies conducted in Kenya, Amhara region referral hospitals and Jimma zone, Ethiopia [11,29,30]. This may be because a planned pregnancy may be associated with better psychological preparation for the delivery service.
Lastly, pain management was significantly associated with delivery service satisfaction. Mothers who received medication for pain management were more likely to be satisfied compared to those who did not (AOR: 4.782; 95% CI: 1.98-10.52). This result is in line with studies conducted elsewhere [25,29,31], and is also supported by the qualitative result, which states: "When they were stitching up the episiotomy site they gave me anti-pain and I felt much better." Similarly, another mother reported: "I cannot tolerate its pain and I shouted to them if anything is there which can reduce it. Then they gave me one injection and I became better." In this study, variables such as income, residence, ethnicity, ANC follow-up and educational status of the respondents were not significantly associated with delivery service satisfaction, consistent with previous studies [8,12,25,27].
Limitation of the study
The nature of the study does not allow establishing a causal relationship between the different independent variables and the dependent variable.
The purpose of the study was explained to the study participants, and written informed consent was obtained prior to participation in the study. Privacy and confidentiality of the collected information were ensured.
Consent to publish
Not applicable
Availability of data and materials
All data that support the findings of this study are available from the corresponding author upon request. | 2019-09-17T03:00:07.114Z | 2019-09-08T00:00:00.000 | {
"year": 2019,
"sha1": "cb9b5a068173635fdfe6ed1df59ef0f362dd61c7",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-4844/v1.pdf?c=1631841601000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bf243ecfd8c6f63378bf14b18268f0ff91d27760",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
253857850 | pes2o/s2orc | v3-fos-license | A case of repeated focal motor seizures as expression of an inflammatory cerebral process with suspected dysimmune etiology
Highlights
• The detection of a specific neuronal antibody is not mandatory to diagnose autoimmune encephalitis.
• If the suspicion of autoimmune encephalitis is high, an immune therapy (intravenous steroids/immunoglobulins) or plasma exchange should be promptly started to obtain a better outcome.
• High levels of angiotensin-converting enzyme in serum and cerebrospinal fluid have been associated with cerebral inflammatory processes other than systemic sarcoidosis.
Introduction
Autoimmune encephalitis (AE) includes a heterogeneous group of immune-mediated inflammatory disorders of the brain whose estimated incidence is 1.2/100,000 person-years (2006-2015) [1]. Overall, the prevalence of AE is about 13.7/100,000 persons, varying according to the specific neuronal antibody (NA): MOG (1.9/100,000), GAD65 (1.9/100,000), LGI1 (0.7/100,000), NMDAR (0.6/100,000) and so on [1]. Nevertheless, the prevalence and incidence of this condition may be underestimated, considering that the number of new NAs and related syndromes is increasing over time [2]. Clinicians should immediately start immune therapies (intravenous steroids/immunoglobulins) or plasma exchange when AE is highly suspected, even when the antibody is not detected or its identification is ongoing [3], because better outcomes have been associated with earlier immunotherapy [4,5]. We present the case of a highly suspected AE based on the clinical presentation, brain magnetic resonance imaging (MRI) and cerebrospinal fluid (CSF) findings, with complete recovery after high-dose immunotherapy, even though the initial NA detection in serum was not confirmed later.
Case report
Our patient was a 19-year-old woman without a familial history of epilepsy, febrile seizures or sudden death. She had a forceps delivery, but early development was normal. The patient was not taking any medication at home. She came to our attention due to three episodes of rhythmic and involuntary movements of the right lower face (orbicularis oris and buccinator) with tonic deviation of the tongue to the right side, with preserved awareness. Each episode continued for several minutes, resolving only after intravenous administration of delorazepam. Two weeks before, the patient had started to experience psychiatric symptoms (anxiety and irrational fear) and sleep-related disorders (insomnia). After a negative initial CT scan, she was hospitalized for further evaluation. Video-EEG monitoring recorded focal motor seizures, as previously described, with spike and polyspike discharges within theta activity starting from the left temporal regions and spreading to the frontal ones bilaterally (Fig. 1). Brain MRI with contrast enhancement (c.e.) was unremarkable. During the hospitalization, seizures repeated in clusters and became more frequent and longer; after each cluster, a full recovery of the neurological state was observed. Due to the inefficacy of intravenous diazepam (0.15 mg/kg) and midazolam (10 mg), intravenous lacosamide (200 mg) and levetiracetam (40 mg/kg) were sequentially administered, without response. After four days, brain MRI with c.e. was repeated, showing T2/FLAIR (fluid attenuated inversion recovery) hyperintense areas of altered signal without c.e. in the left thalamus and the homolateral cortical and sub-cortical fronto-parietal regions (Fig. 2). CSF analysis revealed lymphocytic pleocytosis (21 cells/mm³) and four CSF-restricted oligoclonal bands (OCBs). Serum and CSF virological screening (CMV, EBV, HSV1, HSV2, HHV6, HHV7, HHV8, VZV) was not suggestive of active cerebral infection. Surface (NMDA, AMPAR, DPPX, IgLON5, LGI1, CASPR2) and intracellular antibodies (anti-Hu, Yo, Ma1-2, CRMP-5, amphiphysin, Ri, GAD65, SOX1, TR, Zic4, CV) were negative both in CSF and serum, even though weakly positive GluR3 antibodies were identified at low dilution (1:10). To exclude a paraneoplastic etiology of the disorder, serum tumor markers and a total body CT scan were performed, without pathological findings. Blood tests for autoimmune diseases and thyroid function were in range. An unusually high titer of angiotensin-converting enzyme (ACE) was detected in serum (12,672 pg/ml; normal values: 0-2,600) but was not confirmed in CSF. Neuropsychological assessment demonstrated frontal dysfunction and impairments in working memory, visual and spatial memory, and verbal fluency. Intravenous administration of valproate (20 mg/kg) and phenytoin (15 mg/kg) was attempted to stop the recurring seizures, without clinical modifications. Intravenous human immunoglobulins (0.4 g/kg/day for 5 consecutive days) were ineffective too.
Eventually, after high-dose intravenous methylprednisolone (1 g/day for 5 days, repeated after one week at the same dosage upon seizure recurrence), the focal motor seizures stopped, with progressive normalization of the neuroradiological imaging. After hospital discharge, the patient did not experience further seizures; the EEG normalized, and serum ACE and GluR3 antibodies were negative. Antiseizure therapy consisted of phenytoin 300 mg/day and levetiracetam 3000 mg/day.
Discussion
Recognizing AE can be challenging due to the clinical heterogeneity among patients. In some cases, a cerebral autoimmune disorder can be suspected based on specific demographics (age, gender) or clinical clues (movement disorders, such as the faciobrachial dystonic seizures characteristic of LGI1 encephalitis), and the diagnostic hypothesis can be confirmed by NA positivity [5]. In adults, the most frequent NAs detected in serum and CSF are NMDAR (24.6%), GAD65 (21.5%) and LGI1 (20.5%), with a different distribution according to age and sex [6].
Moreover, a patient may present combined positivity of neuronal and non-neuronal antibodies, making the diagnosis of AE even more difficult. In fact, some authors have demonstrated the presence of high titers of antinuclear antibodies (ANA) in >20% of patients positive for anti-Hu or anti-Yo antibodies [7]. Thus, the coexistence of neuronal and non-neuronal antibodies is possible, considering the high frequency of non-neuronal antibodies in the general population (i.e., prevalence of ANA: 13.8% [8]; prevalence of anti-thyroglobulin antibodies: 5% to 20% [9]).
However, a specific NA is not detected in up to 50% of cases [10]. Nevertheless, possible AE can still be diagnosed on the basis of suggestive clinical manifestations and alterations on ancillary tests. The classical presentation of AE includes subacute onset (<3 months) of altered cognition, sleep disturbances and psychiatric manifestations; seizures are common in the early stages. Steriade et al. [11] identified some clinical features of seizures secondary to AE: they are usually resistant to treatment, with high frequency and perisylvian semiology (i.e., facial clonic seizures), and early occurrence of status epilepticus can be seen. Our patient had no history of epilepsy and presented with seizures two weeks after new-onset cognitive and behavioral dysfunction. If the suspicion of AE is high, other possible causes must be ruled out, such as metabolic and toxic disorders and infectious diseases. Brain MRI and CSF analyses are usually the first exams of the diagnostic work-up. According to the neuroradiological criteria of Graus et al. [5], definite AE can be diagnosed only in the presence of bilateral limbic encephalitis; however, T2/FLAIR hyperintense multifocal areas suggestive of brain inflammation can support the diagnosis of probable AE. In some cases, brain MRI can be normal. The brain MRI of our patient showed T2 hyperintense areas of altered signal in the cerebral regions corresponding to the seizures' semiology.
CSF analysis (leukocyte count, total protein, presence of OCBs) can be helpful in the diagnostic process, and further useful information can be obtained if broad viral studies, cytology and the NA panel are performed in CSF as well. Lymphocytic pleocytosis, OCBs and a negative viral panel are usually expected, as identified in this case, but unremarkable findings do not exclude the diagnosis [12]. Recently, investigators have reported that the subtype of AE can influence CSF parameters such as the leukocyte count, the total protein level and the presence of OCBs [13]. AEs with NAs against NMDAR, GABABR, AMPAR or DPPX more often present inflammatory changes in all three parameters, as opposed to other subtypes of AE (CASPR2, LGI1, GABAA and GluR), in which CSF abnormalities are less frequent. Nevertheless, AEs in the latter group can rarely show positive OCBs as well as pleocytosis [13]. EEG helps to identify subclinical ictal discharges or to monitor drug response in patients with seizures. Abnormalities are common in AE, but a normal EEG is not unusual [11].
The presence of autoantibodies is not mandatory, but their identification supports the diagnosis. We found weak positivity for GluR3 at low serum dilution. NAs detected only in serum are a laboratory finding reported in previous works [14,15]. The association between GluR3 antibodies and different types of epileptic disorders has been demonstrated previously [16], albeit with low specificity and sensitivity [17,18].
However, it has to be underlined that the risk of false-positive diagnoses exists, especially in two situations: 1) when CSF analysis is replaced by NA testing in serum or cell-based assays (CBA); 2) when the clinical picture does not fit the NA positivity [5]. In the case of NA positivity in serum/CBA, CSF analysis is strongly suggested due to its higher sensitivity and specificity, as in NMDAR encephalitis, in which the concentration of NA in CSF correlates better with the clinical course [19,20]. In the case of discrepancy between laboratory and clinical findings, sample retesting or the use of confirmatory tests (i.e., brain immunohistochemistry or cultured neurons) is needed [5]. On the other hand, high levels of NAs can be identified in CSF without an underlying autoimmune brain disorder, as in the case of the detection of GAD65 antibodies in the absence of a suggestive clinical phenotype [21].
A peculiar laboratory finding in our patient is the high titer of ACE in serum. ACE levels have been analyzed in serum and CSF of patients affected by different inflammatory neurological conditions, including viral encephalitis, in which ACE levels in CSF were higher compared with healthy controls [22]. The elevation of ACE in CSF was lower once treatment had been started [23]. This may apply to our case, in which ACE was measured in both serum and CSF while immunotherapy was ongoing, explaining why ACE in CSF was normal.
Given the existence of paraneoplastic AE, cancer screening should be performed. Small cell lung cancer is the most frequent tumor associated with paraneoplastic syndromes, followed by thymoma, ovarian cancer and teratoma [24]. The CT scan and tumor markers were negative in our case.
Finally, in addition to the pathological findings described above, the lack of response to antiseizure medications and the complete recovery after corticosteroids strengthened the diagnosis of probable autoimmune encephalitis in our patient.
It should be underscored that our report has the limitation of being a single case, and its findings should be interpreted with caution.
Conclusion
Diagnostic criteria for definite AE were not met in this patient, given the lack of bilateral involvement of the medial temporal lobes. However, considering the subacute clinical presentation with cognitive and behavioral involvement, the new-onset focal seizures, the CSF alterations and the brain MRI features, a diagnosis of possible AE was made independently of NA detection in serum and CSF. Even though the presence of a positive NA is not strictly required [5], a second evaluation of NAs was unrevealing in our patient. When confirmatory results are initially absent to support a clinical diagnosis of AE, repeating NA testing may provide further support. Overall, our case illustrates the importance of clinical judgement in making a diagnosis of AE.
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: GB has received speaker's or consultancy fees from Eisai, Angelini Pharma and UCB Pharma. MT has served on scientific Advisory Boards for Biogen, Novartis, Roche, Merck, and Genzyme; has received speaker honoraria from Biogen Idec, Merck, Roche, Teva, Sanofi-Genzyme, and Novartis; and has received research grants for her Institution from Biogen Idec, Merck, Roche, and Novartis. ALN has received speaker's or consultancy fees from Eisai, Mylan, Sanofi, Bial, GW, Arvelle Therapeutics, Angelini Pharma and UCB Pharma.
The remaining authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-11-25T17:12:40.030Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "e2ed41211ecc76e41f07f7b21fa9d6c1455fc6c2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ebr.2022.100576",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70ee4369f4c33ef48099def3333ad0da0aa432eb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239616810 | pes2o/s2orc | v3-fos-license | TNF-Mediated Inhibition of Classical Swine Fever Virus Replication Is IRF1-, NF-κB- and JAK/STAT Signaling-Dependent
The sera from pigs infected with virulent classical swine fever virus (CSFV) contain substantial amounts of tumor necrosis factor (TNF), a prototype proinflammatory cytokine with pleiotropic activities. TNF limits the replication of CSFV in cell culture. In order to investigate the signaling involved in the antiviral activity of TNF, we employed small-molecule inhibitors to interfere specifically with JAK/STAT and NF-κB signaling pathways in near-to-primary endothelial PEDSV.15 cells. In addition, we knocked out selected factors of the interferon (IFN) induction and signaling pathways using CRISPR/Cas9. We found that the anti-CSFV effect of TNF was sensitive to JAK/STAT inhibitors, suggesting that TNF induces IFN signaling. Accordingly, we observed that the antiviral effect of TNF was dependent on intact type I IFN signaling as PEDSV.15 cells with the disrupted type I IFN receptor lost their capacity to limit the replication of CSFV after TNF treatment. Consequently, we examined whether TNF activates the type I IFN induction pathway. With genetically modified PEDSV.15 cells deficient in functional interferon regulatory factor 1 or 3 (IRF1 or IRF3), we observed that the anti-CSFV activity exhibited by TNF was dependent on IRF1, whereas IRF3 was dispensable. This was distinct from the lipopolysaccharide (LPS)-driven antiviral effect that relied on both IRF1 and IRF3. In agreement with the requirement of IRF1 to induce TNF- and LPS-mediated antiviral effects, intact IRF1 was also essential for TNF- and LPS-mediated induction of IFN-β mRNA, while the activation of NF-κB was not dependent on IRF1. Nevertheless, NF-κB activation was essential for the TNF-mediated antiviral effect. Finally, we observed that CSFV failed to counteract the TNF-mediated induction of the IFN-β mRNA in PEDSV.15 cells, suggesting that CSFV does not interfere with IRF1-dependent signaling. In summary, we report that the proinflammatory cytokine TNF limits the replication of CSFV in PEDSV.15 cells by specific induction of an IRF1-dependent antiviral type I IFN response.
Introduction
The first line of protection of host cells from invading viruses is mediated by the innate immune system. By sensing unique pathogen-associated molecular patterns, conserved cellular pattern recognition receptors initiate multiple intracellular signaling cascades involving interferon regulatory factors (IRF) that culminate in the transcriptional activation and secretion of type I interferons (IFN-α and IFN-β) and type III IFN (IFN-λ) [1]. Specific interactions of IFNs with cellular type I (IFNAR1) and type III IFN receptors subsequently activate Janus kinase (JAK)- and signal transducer and activator of transcription (STAT)-dependent signaling in neighboring cells. Consequently, the IFN-mediated JAK/STAT signaling leads to the expression of a multitude of IFN-stimulated genes that synergistically orchestrate cellular antiviral defense [2].
Classical swine fever virus (CSFV) causes a highly contagious hemorrhagic fever in pigs [3]. CSFV is a non-cytopathogenic pestivirus of the Flaviviridae family, and the enveloped virion harbors a single-stranded positive-sense RNA genome [4]. Like most viruses, CSFV is highly susceptible to the antiviral actions mediated by type I and type III IFN and has evolved potent strategies to interfere with the cellular antiviral defense [3,5-7]. In most cells except plasmacytoid dendritic cells (pDC), CSFV interferes with type I IFN induction by means of the viral Npro protein, which interacts with IRF3 and induces its proteasomal degradation [7,8]. IRF3 is a key transcription factor of the type I IFN induction cascade triggered by DNA and RNA viruses and is targeted by many viral and bacterial pathogens [9]. Despite Npro-mediated IRF3 degradation, CSFV induces potent IFN-α and proinflammatory host responses in vivo involving pDC, conventional DC and monocytic cells (reviewed in [7]). Among the proinflammatory cytokines, tumor necrosis factor (TNF) represents a key cytokine promoting pleiotropic cellular effects, such as apoptosis, proliferation, survival or differentiation [10]. TNF activates nuclear factor κB (NF-κB) and mitogen-activated protein kinase signaling pathways [11]. Notably, pigs infected with virulent CSFV produce high levels of TNF [12-14], and TNF was reported to inhibit the replication of CSFV in porcine cells [15,16]. The antiviral effect of TNF was reduced in p65-silenced PK-15 cells, indicating that TNF inhibits CSFV replication via the NF-κB signaling pathway [16]. Interestingly, porcine reproductive and respiratory syndrome virus (PRRSV) infection leads to TNF secretion that in turn inhibits the replication of a subsequent CSFV C-strain infection, which may explain CSFV vaccination failures caused by PRRSV infection in the field [15].
Studies conducted with primary macrophages and murine microvascular endothelial cells revealed that TNF induces IRF1-dependent IFN-β responses [17-19]. Like IRF3, IRF1 also binds and activates the IFN-β promoter [20]. Furthermore, IRF1 is critical for the TNF-driven type I IFN response in rheumatoid fibroblast-like synoviocytes [21]. Interestingly, CSFV infection or dsRNA stimulation of PK-15 cells upregulates IRF1 mRNA [22,23]. Overexpression of IRF1 in PK-15 cells triggers antiviral responses against different porcine viruses, although IRF1 is dispensable for IFN-β induction by RNA viruses [23]. Finally, a recent study showed that CSFV Npro antagonizes IRF1-mediated type III IFN production by downregulating IRF1 expression and inhibiting its nuclear translocation in a porcine intestinal epithelial cell line [24]. Altogether, the data described above show that antiviral TNF signaling involves NF-κB and IRF1 and that the anti-CSFV activity of TNF relies on type I IFN responses in an IRF1- and/or IRF3-dependent manner, but formal proof of a direct link between these signaling elements in the context of CSFV is still missing. In order to explore this in more detail, we aimed at deciphering the cellular signaling pathways underlying TNF-driven anti-CSFV responses using pharmacological and genetic targeting of selected cellular signaling factors. For this, we used the immortalized near-to-primary porcine aortic endothelial cell line PEDSV.15 [25], which we found to be highly sensitive to the antiviral action triggered by physiological levels of TNF, including porcine (pTNF) and murine TNF (mTNF), as opposed to the common porcine kidney cell lines PK-15 and SK-6. For quantitative virological readouts, we employed a firefly luciferase-expressing CSFV (CSFV-luc). With inhibitory drugs, CRISPR/Cas9-mediated gene knockout and anti-TNF antibodies, we demonstrate that TNF limits the replication of CSFV by activating JAK/STAT signaling in an IRF1-, NF-κB- and IFNAR1-dependent way, independently of IRF3.
Viruses
The bicistronic CSFV-luc was derived from a full-length cDNA construct obtained by replacing the Npro-C gene cassette in the pA187-1 cDNA backbone [27] with the corresponding Npro-Luc-IRES-C gene cassette from the bicistronic pA187-Npro-Luc-IRES-C-delErns replicon construct [28] using standard PCR-mediated cloning. The CSFV-luc and the virulent CSFV vEy-37 [29] were rescued from cDNA as described elsewhere [30] and propagated in PEDSV.15 cells. Viral titers were determined by endpoint dilution in PEDSV.15 cells and expressed as 50% tissue culture infectious dose (TCID50)/mL. CSFV E2 was detected in infected cell monolayers by immunoperoxidase staining with the HC/TC-26 monoclonal anti-E2 hybridoma supernatant [31] as described elsewhere [32].
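The titration above reports 50% endpoints, but the estimator is not stated in the text; a minimal sketch of the Spearman-Kärber method, a common choice for such endpoint-dilution series, follows (all well counts are hypothetical):

def tcid50_karber(first_log10_dilution, log10_step, positive_wells, wells_per_dilution):
    # Spearman-Karber estimate of the 50% endpoint from an endpoint-dilution
    # series; assumes the series spans from ~100% positive wells down to 0%,
    # ordered from the least to the most dilute level.
    s = sum(p / wells_per_dilution for p in positive_wells)
    log10_endpoint = first_log10_dilution - log10_step * (s - 0.5)
    return -log10_endpoint  # log10(TCID50) per inoculum volume

# Example: 10-fold series starting at 10^-1, 8 wells per dilution
print(tcid50_karber(-1.0, 1.0, [8, 8, 8, 8, 4, 0, 0, 0], 8))  # -> 5.0

To convert to TCID50/mL, the result is scaled by the inoculum volume used per well.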
Antiviral TNF Assay and JAK/STAT Compound Library Screening
The PEDSV.15 cells seeded in 96-well plates (3 × 10^4 cells/100 µL/well) were treated with small-molecule compounds of the JAK/STAT Compound Library (Targetmol, Wellesley Hills, MA, USA, cat. No. L3700) at two concentrations (0.5 µM and 5 µM) for approximately one hour prior to stimulation with either LPS (100 ng/mL), pTNF (5 ng/mL) or the medium. After a stimulation period of six hours, the cells were infected with CSFV-luc at a multiplicity of infection (MOI) of 0.1 TCID50/cell, and after 22 h of cultivation, the cell extracts were assayed for firefly luciferase activity (Firefly Luciferase Assay Kit 2.0, Biotium, Fremont, CA, USA) using a Centro LB 960 luminometer (Berthold Technologies, Bad Wildbad, Germany). Average relative luminescence units (RLU) with standard deviations from triplicate values were calculated. Data from cytotoxic or antiviral compounds (RLU below 50% of non-stimulated infected cultures) were eliminated from the analysis.
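The screening readout reduces to simple arithmetic: triplicate RLU means and standard deviations, plus the 50%-of-control exclusion rule stated above. A minimal sketch of this analysis follows; the well values and condition names are hypothetical, and only the 50% cutoff comes from the text:

import statistics

# Hypothetical RLU triplicates; only the 50% exclusion rule comes from the text
rlu = {
    "infected_untreated": [52000, 49000, 51000],
    "compoundA_0.5uM":    [20000, 18000, 19000],
    "compoundB_5uM":      [14000, 11000, 12500],
}

baseline = statistics.mean(rlu["infected_untreated"])

for condition, values in rlu.items():
    mean_rlu = statistics.mean(values)
    sd = statistics.stdev(values)
    # Compounds whose RLU falls below 50% of the non-stimulated infected
    # control are treated as cytotoxic/antiviral and excluded from analysis
    excluded = mean_rlu < 0.5 * baseline
    print(f"{condition}: {mean_rlu:.0f} +/- {sd:.0f} RLU"
          + ("  [excluded]" if excluded else ""))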
Table 2. Target sequences of the gRNAs used for CRISPR/Cas9 editing; the PAM sequences are underlined.
Table 3. Oligonucleotides for the amplification of edited genomic regions.
Western Blot Analyses
The cells were lysed with a denaturing lysis buffer composed of 62.5 mM Tris HCl (pH 6.8), 2% sodium dodecyl sulfate (SDS), 10% glycerol and 0.05% bromophenol blue. The proteins were separated using 4-12% gradient SDS-polyacrylamide gel electrophoresis under nonreducing conditions (ExpressPlus, GenScript, Piscataway, NJ, USA) and analyzed by means of Western blotting using PVDF transfer membranes (Immobilon-FL, Merck Millipore, Burlington, MA, USA) and an Odyssey Infrared Imaging System (LI-COR Biosciences, Bad Homburg, Germany). Porcine IRF3 and viral Npro proteins were detected using the rabbit anti-IRF3 and anti-Npro sera as described previously [8,37]. Using the mouse monoclonal Anti-β-Actin Antibody C4 (Santa Cruz Biotechnology, Dallas, TX, USA), β-actin was detected as the loading control.
TNF Inhibits CSFV Replication in Porcine PEDSV.15 Cells and MDM, but Not in the PK-15 and SK-6 Cell Lines
CSFV-infected pigs show elevated serum TNF, and TNF was shown to inhibit CSFV replication in cell lines [15,16]. In order to characterize the antiviral activity of TNF against CSFV more extensively, we quantified the effect of TNF of different origins on the replication of CSFV expressing a firefly luciferase reporter (CSFV-luc) in primary porcine cells versus permanent cell lines (Figure 1). The near-to-primary endothelial cell line PEDSV.15 [25] responded to mTNF with a significant reduction of CSFV-mediated luciferase activity 20 h after infection, which was not observed in the PK-15 and SK-6 cells, two permanent porcine cell lines used commonly to propagate CSFV (Figure 1a). The PEDSV.15 cells stimulated for six hours with increasing concentrations of mTNF, from 0.4 ng/mL to 10 ng/mL, displayed a dose-dependent reduction of CSFV replication, as determined by CSFV-mediated luciferase activity (Figure 1b) and by titration of infectious virus from cell culture supernatants (Figure 1c). Notably, the TNF treatment did not affect the viability of PEDSV.15 cells at 20 h or three days post-treatment (Figure 1d). Time-of-addition experiments revealed the highest mTNF-mediated inhibition of CSFV infection after six hours of treatment (Figure 1e). Prolonged overnight TNF treatment of the PEDSV.15 cells did not result in an enhanced antiviral state (Figure 1f). TNF pre-stimulation of MDM also interfered with CSFV (Figure 1g), although not as strongly as in the PEDSV.15 cells (Figure 1b), without affecting the viability of the cells (Figure 1h). This suggests differences in TNF responsiveness of MDM versus PEDSV.15 cells. In order to examine the specificity of mTNF and pTNF for triggering the observed antiviral effects, we employed adalimumab (Humira, Abbott Laboratories), a neutralizing human anti-hTNF monoclonal antibody [38]. TNF treatment in the presence of increasing concentrations of adalimumab specifically reduced the antiviral effect of TNF but not of lipopolysaccharide (LPS), known to induce a TLR4-dependent antiviral type I IFN response (Figure 1i). This was observed with both mTNF and pTNF. Notably, mTNF neutralization blocked the antiviral TNF activity completely and specifically. Altogether, these data demonstrate that TNF inhibits CSFV replication in primary porcine cells but not in PK-15 and SK-6 cells.
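Dose-response data of this kind are typically summarized by fitting a four-parameter logistic curve and reporting the half-maximal effective concentration; the sketch below illustrates this with made-up luciferase values spanning the 0.4-10 ng/mL mTNF range used above (the paper itself does not report an EC50):

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical CSFV-luc signal (fraction of untreated) across the mTNF range
dose = np.array([0.4, 0.8, 1.6, 3.2, 6.4, 10.0])       # ng/mL mTNF
signal = np.array([0.90, 0.74, 0.55, 0.35, 0.20, 0.13])

def four_pl(x, bottom, top, ec50, hill):
    # Four-parameter logistic model commonly used for dose-response fits
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

(bottom, top, ec50, hill), _ = curve_fit(four_pl, dose, signal,
                                         p0=[0.0, 1.0, 2.0, 1.0], maxfev=5000)
print(f"Estimated EC50 ~ {ec50:.1f} ng/mL (Hill slope {hill:.2f})")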
The Anti-CSFV Activity of TNF Involves JAK/STAT Signaling
The JAK/STAT pathway is the key element of the signaling cascade engaged in response to type I IFN [39]. In order to explore whether this pathway is also involved in the antiviral action of TNF, we targeted JAK/STAT signaling with small-molecule inhibitors. Strikingly, the antiviral effects of pTNF and mTNF were sensitive to the JAK inhibitor ruxolitinib (Figure 2a; see Supplementary Figure S1c,d for drug concentrations of 0.5 µM and 5 µM, respectively). JAK/STAT signaling is typically triggered by type I IFNs. Therefore, we assessed whether TNF induces the IFN-β mRNA in PEDSV.15 cells. As expected, elevated IFN-β mRNA levels were detected four hours after pTNF stimulation (Figure 2e). The pTNF-mediated upregulation of the IFN-β mRNA was independent of the JAK inhibitor ruxolitinib, suggesting that pTNF elicits a direct induction of the IFN-β promoter. Despite several attempts, we failed to detect bioactive type I interferon in cell culture supernatants of LPS- and pTNF-stimulated PEDSV.15 cells using a firefly luciferase-based MX-promoter assay or a sensitive VSV-luc-based assay. By applying a transient firefly luciferase reporter gene assay for NF-κB-dependent promoter activity (NF-κB-RE), we observed JAK/STAT-independent activation of NF-κB in pTNF- and LPS-stimulated PEDSV.15 cells (Figure 2f). As expected, IFN-β stimulation did not induce the NF-κB response element. TNF-mediated NF-κB activation was sensitive to TPCA-1, a selective small-molecule inhibitor of IκB kinase 2 known to inhibit NF-κB nuclear localization. Ruxolitinib, on the contrary, did not inhibit the pTNF- and LPS-mediated NF-κB-dependent promoter activation, indicating that this activation was JAK/STAT-independent. The discrepancy between the JAK/STAT-dependent antiviral activity and the JAK/STAT-independent activation of an NF-κB-dependent promoter suggests that pTNF-mediated induction of NF-κB-dependent pathways is not sufficient to trigger the antiviral effect. In conclusion, we observed that in porcine PEDSV.15 cells, pTNF stimulates a JAK/STAT-dependent antiviral response, induces IFN-β mRNA and activates JAK/STAT-independent NF-κB signaling.
The Anti-CSFV Activity of TNF Requires the Type I IFN Receptor, While IRF3 Is Dispensable
In order to explore the roles of the type I IFN receptor and of IRF3 in antiviral IFN-β, LPS and pTNF signaling, we generated IFNAR1- and IRF3-knockout (KO) PEDSV.15 cell lines (IFNAR1-KO and IRF3-KO, respectively) by introducing small genetic deletions within early exons using CRISPR/Cas9 gene editing (Figure 3). We determined the respective genotypes after editing and clonal expansion using PCR combined with Sanger DNA sequencing. We identified two IFNAR1-KO clones, #5 and #23, carrying deletions within the IFNAR1 loci (Figure 3a) consisting of heterozygous open reading frame disruptions. One allele from each clone encodes an mRNA with an internal deletion after the first 72 codons leading to a frameshift mutation. The other alleles have an in-frame deletion leading to a 39 amino acid (aa) deletion after aa position 72. This resulted in functional disruption of IFNAR1, since the two clones (#5 and #23) lost the capacity to establish antiviral states upon IFN-β, LPS and pTNF stimulation, contrary to the parent PEDSV.15 cells (Figure 3b). One PEDSV.15 clone with two intact wild-type (WT) IFNAR1 loci, called IFNAR1-WT#4, served as the Cas9-exposed negative control and responded to all three stimuli similarly to the parent PEDSV.15 cells (Figure 3b). Collectively, these data confirm that IFNAR1 is necessary for the antiviral activity triggered by IFN-β and LPS and demonstrate the requirement of the type I IFN receptor for the antiviral signaling induced by pTNF.
For IRF3, we identified three knockout PEDSV.15 clones with identical out-of-frame homozygous deletions of 190 nucleotides within the IRF3 open reading frame at the deduced mRNA level, leading to a frameshift mutation after the first 38 codons. The IRF3-KO clones #4 and #16 (Figure 3c) served for functional analyses (Figure 3d). During isolation of the IRF3-KO clones, we did not obtain any unedited Cas9-exposed negative control. However, since we performed the stimulations in parallel with the IFNAR1-KO clones, the unedited IFNAR1-WT#4 cells (Figure 3b) served as the Cas9-exposed negative control for the IRF3-KO cells. As expected, IRF3-KO cells maintained their capacity to respond to IFNAR1-dependent IFN-β stimulation (Figure 3d). Importantly, the disruption of IRF3 revealed a fundamental difference between LPS- and pTNF-triggered antiviral innate immune responses. While the LPS-mediated antiviral state was mostly abolished in IRF3-KO cells, IRF3 was completely dispensable for the pTNF-mediated anti-CSFV activity (Figure 3d). This highlights the mechanistic differences in the initiation of innate immune responses between pTNF- and LPS-triggered signaling.
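The genotype calls in the two paragraphs above come down to whether a deletion length is a multiple of three: if so, the protein loses a block of codons in frame; otherwise the reading frame shifts. A trivial sketch of this classification (the numeric examples mirror the deletions described above):

def classify_deletion(deleted_nt: int) -> str:
    # A deletion preserving the reading frame removes whole codons;
    # any other length shifts the frame downstream of the cut site
    if deleted_nt % 3 == 0:
        return f"in-frame deletion of {deleted_nt // 3} codons"
    return "out-of-frame deletion (frameshift)"

print(classify_deletion(117))  # 117 nt -> 39-codon in-frame deletion (IFNAR1 alleles)
print(classify_deletion(190))  # 190 nt -> frameshift (IRF3 alleles)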
The Anti-CSFV Activities of LPS and TNF Are IRF1-Dependent
Besides IRF3, IRF1 can also trigger type I IFN induction [20]. Therefore, we generated functional IRF1-KO PEDSV.15 cells using CRISPR/Cas9 to test whether pTNF-mediated antiviral responses require IRF1. We obtained two IRF1-KO clones, IRF1-KO#2 and IRF1-KO#12, that carried homozygous genomic deletions.

Figure 4 legend: (b) Two independent knockout clones (#2 and #12) and the Cas9-exposed negative control (IRF1-WT#1) with intact IRF1 were stimulated with pTNF, LPS, IFN-β or the medium for seven hours, followed by infection with CSFV-luc at an MOI of 0.1 TCID50/cell for 22 h before the cell lysates were processed for firefly luciferase measurement. (c,d) The effect of two hours of stimulation with LPS (c) or pTNF (d) on the expression of IFN-β mRNA normalized to 18S ribosomal RNA was assessed in the PEDSV.15 and IRF1-KO#2 cells. The data in (b) represent the means and standard deviations of six independent experimental replicates; significant differences compared with the medium (p < 0.05) were calculated with one-way ANOVA and post hoc tests (p-values indicated; ns, nonsignificant). The data in (c,d) represent the means and standard deviations of three independent experimental replicates; significant differences compared with the medium (p < 0.05) were calculated with the unpaired, two-tailed Student's t-test (p-values indicated; ns, nonsignificant).
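The mRNA quantification above normalizes IFN-β to 18S rRNA; a standard way to express such data is the 2^-ΔΔCt method, with the test applied to ΔCt values. The sketch below assumes that approach, and all Ct values in it are invented for illustration; only the 18S normalization and the unpaired two-tailed t-test come from the text:

import numpy as np
from scipy import stats

# Hypothetical qPCR Ct values (triplicates) for IFN-beta and 18S rRNA
ct_ifnb = {"medium": np.array([30.1, 30.4, 30.2]),
           "pTNF":   np.array([26.0, 26.3, 25.8])}
ct_18s  = {"medium": np.array([12.0, 12.1, 11.9]),
           "pTNF":   np.array([12.1, 12.0, 12.2])}

# Normalize to 18S (delta-Ct), then express relative to medium (delta-delta-Ct)
dct = {k: ct_ifnb[k] - ct_18s[k] for k in ct_ifnb}
fold = 2.0 ** -(dct["pTNF"] - dct["medium"].mean())
print(f"IFN-beta fold induction: {fold.mean():.1f} +/- {fold.std(ddof=1):.1f}")

# Unpaired, two-tailed Student's t-test on delta-Ct values
t, p = stats.ttest_ind(dct["pTNF"], dct["medium"])
print(f"t = {t:.2f}, p = {p:.4f}")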
The Anti-CSFV Activity of TNF Is NF-κB-Dependent, but NF-κB Can Function Independently of IRF1
The induction of type I IFN depends on NF-κB, as opposed to the downstream IFN-β signaling [40]. Therefore, we tested the role of NF-κB in mTNF- and pTNF-mediated anti-CSFV activity using the NF-κB inhibitor TPCA-1 and included LPS, p(I:C) and IFN-β as control stimulations (Figure 5a). TPCA-1 prevented the antiviral actions of mTNF, pTNF, LPS and p(I:C), but not of IFN-β. These results confirm that NF-κB signaling is required for the induction of antiviral activity but is dispensable for downstream IFN-β signaling. Accordingly, the JAK inhibitor ruxolitinib efficiently blocked the mTNF-, pTNF-, LPS-, p(I:C)- and IFN-β-driven antiviral effects, as expected. In parallel, we measured the activation of NF-κB after stimulation of the PEDSV.15 cells (Figure 5b). Treatment with mTNF, pTNF, LPS and p(I:C), but not with IFN-β, resulted in NF-κB activation, which was JAK/STAT-independent (see also Figure 2f). In contrast, mTNF-, pTNF-, LPS- and p(I:C)-mediated NF-κB activation was sensitive to TPCA-1, demonstrating the specificity of the drug. As demonstrated previously, IRF1-KO cells were unable to induce antiviral actions triggered by TNF and LPS (Figures 4b and 5c), which was also the case for the p(I:C) trigger (Figure 5c). Interestingly, despite impaired induction of an antiviral state in IRF1-KO cells, we noted intact mTNF-, pTNF-, LPS- and p(I:C)-mediated NF-κB responses, implying that IRF1 is not involved in the activation of NF-κB-dependent signaling (Figure 5d). In summary, we observed that TNF-, LPS- and p(I:C)-mediated activation of NF-κB is required for the establishment of antiviral activity against CSFV, but that NF-κB-dependent signals can function independently of IRF1.

Figure 5 legend: The cells in (a,c) were infected with CSFV-luc seven hours after stimulation. Firefly luciferase (a,c) or firefly and Renilla dual-luciferase activities (b,d) were measured 24 h after infection or six hours after stimulation, respectively. The NF-κB-dependent promoter activity is plotted as fold induction compared to the medium. The data represent the means and standard deviations of at least three independent experimental replicates. Significant differences compared with the medium (p < 0.05) were calculated with the unpaired, two-tailed Student's t-test (p-values indicated; ns, nonsignificant).
CSFV Infection Does Not Interfere with TNF- and LPS-Mediated IFN-β mRNA Induction in PEDSV.15 Cells
CSFV antagonizes the induction of type I IFN by means of Npro through IRF3 targeting [8]. In addition, a recent report showed that Npro inhibits the expression and nuclear translocation of IRF1, thereby suppressing the production of type III IFN [24]. Therefore, we hypothesized that CSFV may antagonize the TNF-induced IRF1-dependent and the LPS-induced IRF1-/IRF3-dependent IFN-β mRNA induction in PEDSV.15 cells. In order to address this, we infected the PEDSV.15, IRF1-KO#2 and IRF3-KO#4 cells with the virulent CSFV strain vEy-37 for three days prior to stimulation with pTNF, LPS or p(I:C) and measured the induction of IFN-β mRNA in comparison with the stimulated mock-infected cells (Figure 6a,b). Mock- and CSFV-infected PEDSV.15 cells had comparable IFN-β mRNA levels after pTNF or LPS stimulation (Figure 6a,b). This was different with p(I:C), where pre-infected PEDSV.15 cells had significantly lower levels of IFN-β mRNA than mock-infected cells. Similarly, the p(I:C)-mediated induction of IFN-β mRNA was sensitive to CSFV in IRF1-KO#2 cells (Figure 6a). As expected (see Figure 4c,d), the IRF1-KO#2 cells did not respond with IFN-β mRNA to pTNF or LPS stimulation (Figure 6a). With IRF3-KO#4 cells (Figure 6b), CSFV pre-infection had no significant effect on the pTNF-, LPS- and p(I:C)-mediated induction of IFN-β mRNA, suggesting that CSFV is unable to interfere with IRF1-dependent antiviral signaling. At the time of LPS or pTNF treatment, all the infected cells were positive for the virus antigen, as shown by immunostaining of the E2 protein (Supplementary Figure S2). Unfortunately, we were unable to detect endogenous IRF1 protein by Western blot analysis. However, as expected from our previous studies with PK-15 cells [8], the IRF3 protein was degraded in the CSFV-infected PEDSV.15 and IRF1-KO#2 cells (Figure 6c), which is consistent with reduced or absent IFN-β mRNA induction upon p(I:C) stimulation (Figure 6a,b). Notably, IRF3 was not detected in IRF3-KO#4 cells, confirming successful genome editing leading to IRF3 protein knockout. Collectively, these findings indicate that CSFV lacks countermeasures to interfere with IRF1-dependent TNF- and LPS-mediated type I IFN induction in PEDSV.15 cells.
Discussion
The induction of high levels of proinflammatory cytokines including TNF is a hallmark of severe and hemorrhagic CSF following infection with highly pathogenic CSFV [7]. Several independent studies report TNF secretion peaking at 100-500 pg/mL in serum 4-5 days after infection of pigs with CSFV [12-14]. CSFV-infected alveolar macrophages can also secrete up to 1 ng/mL TNF at 16 h after infection [41]. TNF was reported to inhibit CSFV replication in porcine PK-15 cells [16] and may underlie the vaccination failure with live-attenuated CSFV in PRRSV-infected pigs [15]. Therefore, this study aimed at dissecting the intracellular signaling cascade required for the anti-CSFV activity of TNF.
First, we observed that different cell types responded differently to TNF. While TNF induced an antiviral state in the endothelial cell line PEDSV.15 and in porcine MDM, TNF did not inhibit CSFV replication in the SK-6 and PK-15 cells (Figure 1a). This was unexpected given the TNF-mediated inhibition of CSFV replication in PK-15 cells reported earlier [16]. Differences in the steady-state levels of rate-limiting factors such as IRF1 may explain this discrepancy. A different degree of dedifferentiation in general may be a reason for the difference between the PEDSV.15 and MDM cultures and the permanent cell lines used commonly for the propagation of CSFV. PEDSV.15 cells are immortalized porcine aortic endothelial cells that maintained most morphological and functional properties of primary endothelial cells and were therefore proposed to serve as a prototypical alternative to normal endothelial cells [25]. The MDM were primary cells prepared from porcine blood (see Materials and Methods). These results emphasize the importance of cell type-dependent differences in cellular responses to infection. Further investigation may include selected approaches such as comparison of IRF1 levels and activity, as well as high-throughput differential transcriptomic and proteomic analyses.
Next, we dissected the TNF-IRF1-IFN-β signaling axis in the context of a CSFV infection of PEDSV.15 cells. More specifically, we showed that TNF, similarly to LPS, induces the expression of IFN-β transcripts by activating IRF1 and NF-κB independently. Thereby, TNF triggers type I IFN receptor-dependent JAK/STAT signaling leading to decreased CSFV replication. Synergistic antiviral effects between TNF and IFNs are known to enhance antiviral responses (reviewed in [18]). Although we cannot rule out such cooperative effects with TNF stimulations, we were able to efficiently blunt the antiviral effects of mTNF and pTNF with an anti-TNF neutralizing antibody (Figure 1i), which confirms the specific activity exhibited by the TNF formulations. The antiviral effects mediated by TNF and LPS were similarly sensitive to individual compounds of a JAK/STAT inhibitor library (Figure 2c,d). Importantly, the downstream antiviral effects of TNF were independent of the type III IFN antiviral pathway in the PEDSV.15 cells since there was a strict IFNAR1 requirement (Figure 3b). This may be different in other cell types such as intestinal epithelial cells that induce type III IFNs in an IRF1-dependent manner [24]. In the PEDSV.15 cells, we demonstrated that the TNF-triggered induction of IFN-β transcripts and the resulting antiviral effect rely on intact IRF1 (Figure 4b,d), whereas IRF3 was dispensable (Figure 3d). IRF1 is also required for adequate LPS-mediated induction of IFN-β mRNA and subsequent antiviral activity (Figure 4b,c). However, in contrast to TNF that solely depends on IRF1, LPS also requires functional IRF3 to establish its antiviral effect. The latter is consistent with the well-established MyD88-independent LPS/TLR4 signal transduction pathway (reviewed in [42]). Previous studies conducted with primary macrophages, murine microvascular endothelial cells and rheumatoid fibroblast-like synoviocytes described TNF-mediated IRF1-dependent type I IFN responses [17,19,21], but this study shows for the first time inhibition of CSFV replication through this axis.
Besides IRF1, NF-κB is also required for the TNF-mediated antiviral effect on CSFV (Figure 5). Li et al. showed that TNF interferes with CSFV replication via the NF-κB signaling pathway, as the antiviral TNF effect was lost in p65-silenced PK-15 cells [16]. Our observation that the antiviral effect of TNF is strictly IRF1- and IFNAR1-dependent (Figures 3 and 4) and that NF-κB activation is independent of functional IRF1 (Figure 5) demonstrates clearly that induction of the type I IFN pathway is required besides NF-κB activation for interference with CSFV replication.
Interestingly, CSFV did not interfere with TNF- or LPS-triggered IFN-β mRNA induction (Figure 6), which both depend on IRF1 (Figure 4c,d). However, as expected from our previous studies with PK-15 cells [8], CSFV prevented p(I:C)-mediated IFN-β mRNA induction in the PEDSV.15 and IRF1-KO#2 cells, which was consistent with the absence of IRF3 when Npro was expressed (Figure 6). These results are not surprising considering that CSFV targets IRF3 specifically for proteasomal degradation by means of Npro [8], while the TNF-induced anti-CSFV activity we observed in the PEDSV.15 cells was completely independent of IRF3 (Figure 3d). However, the lack of interference of CSFV with the IRF1-dependent pathway described here is in apparent contradiction with recent data showing that Npro of CSFV inhibits IRF1 expression and nuclear translocation in porcine intestinal epithelial IPEC-J2 cells, thereby suppressing type III IFN production [24]. This latter finding may, however, be a specific feature of IFN-λ-producing cells. The fact that TNF exhibits substantial anti-CSFV activity in certain cell types and that TNF is secreted in response to CSFV infection in pigs may imply that CSFV evolved as-yet-unidentified means of antagonizing antiviral signaling triggered by TNF in vivo. This is a matter of ongoing and future studies.
Conclusions
Several reports altogether suggest that the antiviral activity of TNF involves NF-κB- and IRF1-dependent signaling and type I IFN responses. In order to test this formally for CSFV, we targeted NF-κB, IRF1, IRF3 and IFNAR1-dependent JAK/STAT signaling pharmacologically or by CRISPR/Cas9-mediated gene knockout. The anti-CSFV activity of porcine and murine TNF was inhibited by antibody-mediated TNF neutralization and by NF-κB and JAK/STAT inhibitors, and was abrogated completely in the IRF1 and IFNAR1 gene knockout cells but not in the IRF3 gene knockout cells. IRF1 gene knockout prevented TNF- and LPS-mediated IFN-β mRNA induction. Interestingly, CSFV did not counteract TNF- or LPS-mediated IFN-β mRNA induction. This is consistent with CSFV targeting IRF3 for proteasomal degradation [8] but is in apparent contradiction with the CSFV-mediated inhibition of IRF1-dependent signaling reported recently [24]. The latter may be restricted to specific cell types that induce type III IFN. Whether CSFV selectively inhibits the antiviral activity of TNF through IRF1 targeting in mucosal cells still needs to be explored. Nevertheless, the findings of this study contribute to a better understanding of CSF immunopathogenesis and of the virus-host interaction of CSFV. More generally, this knowledge is valuable for the development of antiviral and immunoprophylactic interventions. | 2021-10-15T15:27:01.519Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "7bd3b9f4d83e9e70aa6d799ce5d460114e18c82e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/13/10/2017/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab2a6c94a4efd9989736dbeb420d85874fe86092",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
205296874 | pes2o/s2orc | v3-fos-license | Inducible Priming Phosphorylation Promotes Ligand-independent Degradation of the IFNAR1 Chain of Type I Interferon Receptor*
Phosphorylation-dependent ubiquitination and ensuing down-regulation and lysosomal degradation of the interferon α/β receptor chain 1 (IFNAR1) of the receptor for Type I interferons play important roles in limiting the cellular responses to these cytokines. These events could be stimulated either by the ligands (in a Janus kinase-dependent manner) or by unfolded protein response (UPR) inducers including viral infection (in a manner dependent on the activity of pancreatic endoplasmic reticulum kinase). Both ligand-dependent and -independent pathways converge on phosphorylation of Ser535 within the IFNAR1 degron leading to recruitment of β-Trcp E3 ubiquitin ligase and concomitant ubiquitination and degradation. Casein kinase 1α (CK1α) was shown to directly phosphorylate Ser535 within the ligand-independent pathway. Yet given the constitutive activity of CK1α, it remained unclear how this pathway is stimulated by UPR. Here we report that induction of UPR promotes the phosphorylation of a proximal residue, Ser532, in a pancreatic endoplasmic reticulum kinase-dependent manner. This serine serves as a priming site that promotes subsequent phosphorylation of IFNAR1 within its degron by CK1α. These events play an important role in regulating ubiquitination and degradation of IFNAR1 as well as the extent of Type I interferon signaling.
Ligand-induced down-regulation of cell surface receptors represents a major mode of limiting the extent of signaling (1). For example, Type I interferons (including IFNα and IFNβ), the cytokines that play a paramount role in anti-viral defense (2) and elicit potent anti-proliferative effects (3), stimulate down-regulation of the cell surface levels of their receptor (4,5). This receptor consists of IFNAR1 and IFNAR2 chains and functions via activation of the associated Janus kinases (Tyk2 and Jak1), leading to activating tyrosine phosphorylation of the signal transducer and activator of transcription proteins (STAT1 and STAT2). In turn, these STAT proteins govern transcription of IFN-stimulated genes whose products mediate anti-viral, anti-proliferative, and immunomodulatory functions (reviewed in Refs. 6 and 7). Early studies reported that IFNAR1 is rapidly down-regulated and degraded upon internalization in response to IFNα (8,9).
Mechanisms of ligand-induced degradation of IFNAR1 rely on IFNα/β-stimulated, Tyk2 catalytic activity-dependent phosphorylation of this receptor chain on Ser535/539 within a specific phospho-degron (10-12). This phosphorylation enables the recognition of IFNAR1 by the β-Trcp2/HOS F-box protein, followed by the recruitment of the SCFβ-Trcp E3 ubiquitin ligase (10,11). This ligase facilitates polyubiquitination of IFNAR1 on a specific cluster of lysines. Through a yet-to-be-identified mechanism, this site-specific ubiquitination results in the exposure of a previously masked linear endocytic motif that enables the recruitment of the AP2 complex and ensuing internalization of IFNAR1 and of the entire Type I IFN receptor (13,14). Ubiquitination of IFNAR1 was also shown to stimulate post-internalization sorting of this chain to the lysosomes for efficient proteolysis (13).
Intriguingly, the mechanisms of ligand-induced receptor down-regulation can also be utilized by other, unrelated stimuli. As a result, a cell might be rendered refractory to a particular ligand even before this ligand has had a chance to initiate signaling. For example, we have recently described a basal ligand- and JAK-independent mechanism of Ser535 phosphorylation, ubiquitination, and degradation of IFNAR1. This relatively low-efficacy mechanism contributes to the ability of cells to avoid the anti-proliferative effects of high levels of IFNAR1 expression (15). Interestingly, this mechanism can be robustly induced by some ligand-independent stimuli, including the products of tobacco smoking (16) and the inducers of unfolded protein responses (UPRs), such as treatment with thapsigargin (TG) or forced overexpression of the receptor (15,17). Importantly, UPR is known to be induced by the rapid synthesis of viral proteins during infection with diverse viruses (18,19). Indeed, we have recently demonstrated that infection with vesicular stomatitis virus (VSV) or hepatitis C virus (HCV) promotes Ser535 phosphorylation-dependent ubiquitination and down-regulation of IFNAR1 in a manner that does not require JAK activity but relies on activation of pancreatic endoplasmic reticulum kinase (PERK) by UPR. Serendipitously for these viruses, these events enabled them to decrease the efficacy of IFNα/β signaling and to use this mechanism (along with numerous virus-specific means previously described in Refs. 20 and 21) to avoid the IFN-induced anti-viral defenses (17). Given that IFNα is used for the treatment of chronic viral infections including hepatitis C (22,23), identification of the signaling pathways that mediate UPR-stimulated IFNAR1 degradation is of obvious importance.
Casein kinase 1α (CK1α) was purified and characterized as a bona fide Ser535 IFNAR1 kinase that functions within the ligand-independent pathway but is dispensable for IFNα-induced Ser535 phosphorylation (24). Although TG- or virus-induced Ser535 phosphorylation and down-regulation of IFNAR1 required CK1α activity, this kinase activity per se was not increased by UPR (24), suggesting that UPR signaling abets CK1α via another mechanism. Here we report that UPR stimulates phosphorylation of yet another serine residue proximal to the phospho-degron of IFNAR1 that increases the efficacy of IFNAR1 degron phosphorylation by CK1α. We further demonstrate that this priming site plays an important role in ligand- and JAK-independent regulation of IFNAR1 ubiquitination and degradation as well as in the regulation of the extent of cellular responses to IFNα/β.
EXPERIMENTAL PROCEDURES
Plasmids and Reagents-TG, cycloheximide, and methylamine HCl were purchased from Sigma. The human pCDNA3-FLAG-IFNAR1 mammalian expression construct and the retroviral pBABE-puro-based construct for expression of FLAG-tagged mouse IFNAR1, as well as the GST-IFNAR1 bacterial expression vector, were described previously (10). Mutants lacking the priming sites (Ser532 in human IFNAR1 and Ser523 in mouse IFNAR1) were generated by site-directed mutagenesis. The sequence of the mutants was confirmed by dideoxy sequencing. The construct for expression of human Myc-tagged CK1α (a kind gift from J. Wade Harper, Harvard University, Cambridge, MA) was described previously (25). The vector for expression of FLAG-STAT1 was kindly provided by J. Darnell (Rockefeller University, New York, NY). The HA-tagged Leishmania CK1 (L-CK1) pEF-BOS-based expression vector (wild type or kinase-dead K40R mutant) was described elsewhere (24). pLKO.1-puro (Sigma) vector-based small hairpin RNA constructs targeted against PERK or an irrelevant control were described previously (17). The construct for bacterial expression of GST-CK1α (described in Ref. 26) was a kind gift from Jiandong Chen (H. Lee Moffitt Cancer Center, Tampa, FL). Constructs for bacterial expression of constitutively active PERK (ΔN-PERK, described in Ref. 27), as well as for mammalian expression of wild type or catalytically inactive PERK (K618R) (27), were previously described. Human IFNα (Roche Applied Science) and murine IFNβ (PBL) were purchased.
Cell Culture, Treatment, and Viral Infection-All of the cell lines were maintained in Dulbecco's modified Eagle's medium supplemented with 10% (v/v) fetal bovine serum (Hyclone) and various selection antibiotics where indicated. Human HeLa and 293T cells were obtained from ATCC. Mouse embryo fibroblasts from IFNAR1−/− mice and their wild type counterparts were kindly provided by S. Hemmi (Institute for Molecular Biology, Zürich, Switzerland). To obtain reconstituted cells expressing wild type or mutant IFNAR1, these cells were transduced with pBabe-Puro-based mIFNAR1 constructs and selected in puromycin for 2 weeks before analysis. Tyk2-null 11,1 cells reconstituted with catalytically inactive Tyk2 (KR cells) (12) were a generous gift of S. Pellegrini (Pasteur Institute, Paris, France). PERK-deficient mouse embryo fibroblasts were a generous gift from D. Ron (New York University, New York, NY). Huh7 and derivative cells that express a complete HCV genome were a kind gift from R. Aldalbe (University of Navarra, Pamplona, Spain). These cells (described in detail in Refs. 17 and 28) were cultured in the presence of 500 µg/ml of G418. Transfection of 293T cells, HeLa cells, and Huh7 cells was carried out with Lipofectamine Plus reagent (Invitrogen) according to the manufacturer's recommendations. VSV (Indiana serotype, a gift from R. Harty, University of Pennsylvania, Philadelphia, PA) was propagated in HeLa cells. For infection, the cells were inoculated with VSV at a multiplicity of infection of 0.1-0.2 for 1 h, washed, and incubated with fresh medium as indicated.
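The inoculum for an infection at a given multiplicity of infection follows from MOI × cell number / stock titer; the sketch below works through this arithmetic with a hypothetical stock titer and cell count (only the MOI range above comes from the text):

def inoculum_volume_ml(moi, cells, titer_per_ml):
    # Volume of virus stock delivering the requested infectious doses per cell
    return moi * cells / titer_per_ml

# Hypothetical example: 1e6 cells at MOI 0.2 with a 5e7 PFU/mL VSV stock
vol_ml = inoculum_volume_ml(0.2, 1e6, 5e7)
print(f"{vol_ml * 1000:.0f} µL of stock")  # -> 4 µL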
In Vitro Kinase Assay-Kinase assays were carried out as described in detail elsewhere (24). Briefly, 2 µg of substrate (bacterially expressed and purified GST-IFNAR1, wild type or S532A mutant) was incubated with 4 µg of lysate (from untreated or thapsigargin-treated cells) that had been cleared of CK1α (by immunodepletion) and 0.25 µg of bacterially produced GST-CK1α (where indicated) in kinase buffer (25 mM Tris-HCl, pH 7.4, 10 mM MgCl2, 1 mM NaF, 1 mM NaVO3) and ATP (1 mM). Where indicated, 100 µg of bacterially produced ΔN-PERK or undepleted lysates from 293T cells were used as a source of kinase activity. Radiolabel was provided as [γ-32P]ATP (1 µCi; Amersham Biosciences). The reactions were carried out at 30°C for 30 min with shaking at 600 rpm on a tabletop incubator. The products were analyzed either by immunoblotting with phospho-specific antibodies or by autoradiography.
RESULTS
We sought to investigate how inducers of UPR promote phosphorylation-dependent ubiquitination and degradation of IFNAR1. Previous studies demonstrated that these signals feed into the ligand-independent pathway (15,17) that utilizes CK1α, which directly phosphorylates Ser535 within the degron of IFNAR1 (24). Given that the constitutively high activity of CK1α was not further stimulated in cells treated with UPR inducers, yet lysates from these cells augmented the ability of CK1α to phosphorylate Ser535 in vitro (24), we proposed that UPR signaling may lead to an additional post-translational modification of IFNAR1 that improves its phosphorylation by CK1α on Ser535. Indeed, a large body of literature suggests that priming phosphorylation of a substrate at a Ser/Thr residue in the n − 3 position may greatly increase its phosphorylation by various casein kinase 1 species (30-37). Analysis of the primary sequence of IFNAR1 showed that a highly conserved Ser residue (Ser532 in humans and Ser523 in mice) is located at this position and may act as a priming phosphorylation site (Fig. 1A).
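The n − 3 relationship is straightforward to check computationally: given a candidate CK1 target serine, look for a Ser/Thr three residues upstream. A toy sketch follows; the peptide sequence is invented for illustration and is not the actual IFNAR1 sequence:

def ck1_candidate_sites(seq, offset=1):
    # Positions (numbered from `offset`) of Ser/Thr residues that have a
    # Ser/Thr three residues upstream -- the pS/pT-x-x-S/T arrangement
    # favored by casein kinase 1
    return [i + offset for i in range(3, len(seq))
            if seq[i] in "ST" and seq[i - 3] in "ST"]

# Toy peptide numbered from residue 529; a Ser at 532 primes a Ser at 535
peptide = "ASDSKESLLDQA"  # illustrative sequence only
print(ck1_candidate_sites(peptide, offset=529))  # -> [535]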
Ligand-independent IFNAR1 phosphorylation, ubiquitination, and degradation are readily observed in cells that overexpress this receptor (15,17). We compared the stability of wild type FLAG-IFNAR1 expressed in 293T cells with that of its mutant counterpart lacking Ser532 using a cycloheximide chase assay. In this assay, the levels of protein become indicative of its proteolytic turnover because they are assessed under conditions in which protein synthesis in cells is inhibited for various times. Replacement of Ser at the putative priming site within IFNAR1 with Ala yielded a receptor chain that displayed a noticeably longer half-life (Fig. 1B). Furthermore, substitution of this serine residue with a phospho-mimicking Asp produced an IFNAR1 mutant protein that underwent a more robust turnover than that of its wild type counterpart (Fig. 1B). This result indicates that priming phosphorylation might be important for regulating the rate of IFNAR1 proteolytic turnover.
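Half-lives from a cycloheximide chase like this one are typically extracted by fitting first-order decay, N(t) = N0·e^(−kt), giving t1/2 = ln 2/k. A minimal sketch with invented band intensities (the paper reports no numeric values):

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical quantified band intensities from a cycloheximide chase
hours = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
wt    = np.array([1.00, 0.70, 0.48, 0.24, 0.12])   # wild type IFNAR1
s532a = np.array([1.00, 0.88, 0.76, 0.58, 0.45])   # S532A mutant, slower turnover

def decay(t, n0, k):
    # First-order decay under translation arrest
    return n0 * np.exp(-k * t)

for name, y in [("WT", wt), ("S532A", s532a)]:
    (n0, k), _ = curve_fit(decay, hours, y, p0=[1.0, 0.3])
    print(f"{name}: t1/2 = {np.log(2) / k:.1f} h")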
We next determined whether the priming site contributes to CK1-mediated phosphorylation of the IFNAR1 degron on Ser535. In line with our previous observations (15,17,24), forced expression of wild type FLAG-IFNAR1 in 293T cells allowed us to observe a basal level of Ser535 phosphorylation, and co-expression of Myc-tagged CK1α further increased this phosphorylation. Under these conditions, Ser535 phosphorylation was not found in the mutant IFNAR1 S532A (Fig. 2A), although phosphorylation of another proximal serine mutant, S529A, remained unaffected (data not shown). This result could be explained neither by differences in the levels of Myc-CK1α expression (Fig. 2A, bottom panel) nor by the possibility that mutation of Ser532 might alter the recognition of the Ser(P)535-specific epitope by the antibody, because Ser535 phosphorylation of the IFNAR1 S532A mutant was still observed in the cells treated with IFNα. These observations suggest that the priming site is indispensable for ligand-independent IFNAR1 degron phosphorylation but not when phosphorylation is induced by IFNα.

FIGURE 1. The conserved putative priming site (Ser532 in human IFNAR1) is denoted by an asterisk. B, degradation of FLAG-IFNAR1 (wild type (WT) or S532A mutant) overexpressed in 293T cells was analyzed by cycloheximide (CHX, 2 mM) chase for the indicated times followed by immunoblotting using anti-FLAG antibody. The levels of β-actin were also analyzed as a loading control.

FIGURE 2. Priming phosphorylation is required for the ligand-independent phosphorylation of the IFNAR1 degron. A, degron phosphorylation of FLAG-IFNAR1 (wild type (WT) or S532A mutant) co-expressed in 293T cells with Myc-tagged human CK1α or empty vector (Vec) and treated or not with IFNα (1000 IU/ml for 30 min, as indicated) was analyzed by FLAG immunoprecipitation (IP) followed by immunoblotting using the indicated antibodies. The levels of Myc-CK1α in whole cell lysates were also determined. B, FLAG-IFNAR1 (wild type or S532A mutant) was co-expressed in 293T cells with HA-tagged Leishmania CK1 (HA-L-CK1, wild type or kinase dead (KD)) and purified by FLAG immunoprecipitation. Phosphorylation of the IFNAR1 degron and the levels of IFNAR1 were analyzed by immunoblotting using the indicated antibodies. The levels of HA-L-CK1 in whole cell lysates (WCL) were also determined. C, characterization of the anti-Ser(P)532 antibody. FLAG-IFNAR1 proteins (wild type, S535A, or S532A mutants) were expressed in 293T cells, immunopurified, and analyzed using the indicated antibodies. Vec, reactions from cells transfected with empty vector (pCDNA3). D, 293T cells were left untreated (UN) or were treated with TG (1 µM for 30 min) and harvested. Lysates from these cells were twice immunodepleted with antibodies against CK1α, and the CK1α-free supernatants (4 µg) were used alone (lanes 2 and 3) or together with 0.5 µg of bacterially produced recombinant GST-CK1α (lanes 1 and 4-9) for in vitro phosphorylation of GST-IFNAR1 (wild type, lanes 1-6, or S532A mutant, lanes 7-9) in the presence of ATP (except in lane 1) at 30°C for 30 min, as indicated. Phosphorylation of GST-IFNAR1 on Ser532 and Ser535, the levels of GST-IFNAR1 (using anti-GST antibody), and the levels of CK1α were analyzed by immunoblotting.
Similar to human CK1α, the Leishmania L-CK1 was also shown to promote phosphorylation of IFNAR1 on Ser535 upon expression in human or mouse cells (24). In line with this report, expression of wild type HA-tagged L-CK1, but not of a kinase-dead mutant of L-CK1, stimulated Ser535 phosphorylation of co-expressed FLAG-IFNAR1 WT (Fig. 2B). However, phosphorylation of Ser535 in the S532A mutant was not observed under these conditions (Fig. 2B). Together these data further indicate that priming phosphorylation might be required for ligand-independent CK1-mediated phosphorylation of the IFNAR1 degron.
To determine whether the putative priming site is phosphorylated in cells, we generated a polyclonal anti-Ser(P)532 antibody (see "Experimental Procedures"). FLAG-IFNAR1 proteins expressed in 293T cells were immunopurified and analyzed by immunoblotting using this antibody as well as the previously characterized anti-Ser(P)535 antibody (11). The latter antibody recognized wild type receptor but neither the S535A mutant nor the S532A mutant, whereas the S532D mutant exhibited an increased phosphorylation on Ser535 (Fig. 2C). This result is consistent with data shown in Fig. 2A. Importantly, the anti-Ser(P)532 antibody recognized both wild type and the S535A mutant (but not the S532A or S532D priming site mutants; Fig. 2C), indicating that overexpressed IFNAR1 undergoes phosphorylation on the putative priming site in cells.
We next tested whether this priming phosphorylation is directly mediated by CK1α or by another kinase that is induced by UPR. Incubation of recombinant CK1α with wild type GST-IFNAR1 substrate and ATP in vitro resulted in phosphorylation of Ser535 but not of Ser532 (Fig. 2D, lane 4). This result confirms the previously published suggestion that CK1α is a direct kinase for the IFNAR1 degron residue Ser535 (24) but also indicates that phosphorylation of the putative priming site might be mediated by another kinase. Indeed, phosphorylation of Ser532 was detected in CK1α-depleted lysates from cells treated with TG, an inducer of UPR. Moreover, the extent of this phosphorylation was not changed when recombinant CK1α was added to this reaction (Fig. 2D, compare lanes 3 and 6).
Importantly, a combination of CK1α and lysates from TG-treated cells increased the efficacy of phosphorylation of Ser535 in a manner that depended on the integrity of Ser532, as seen from the reaction using the GST-IFNAR1 S532A mutant (lane 6 versus lane 9). These results suggest that TG treatment induces activity of an unknown (yet different from CK1α) protein kinase that phosphorylates IFNAR1 on Ser532. Furthermore, this phosphorylation increases the efficacy of CK1α-mediated phosphorylation of Ser535 within the degron of IFNAR1, suggesting that Ser532 represents a bona fide priming site.
We next sought to investigate whether phosphorylation of the priming site may occur within the context of endogenous IFNAR1 in cells where UPR is induced. Treatment of HeLa cells with TG or infection of these cells with VSV led to phosphorylation of endogenous IFNAR1 on both Ser532 and Ser535 (Fig. 3A). In line with previously published results (11-13), treatment of cells with IFNα stimulated Ser535 phosphorylation. However, priming phosphorylation on Ser532 in response to the ligand was not efficient (Fig. 3A). This result, together with ligand-induced Ser535 phosphorylation of the IFNAR1 S532A mutant (Fig. 2A), suggests that IFNα-induced signaling is capable of promoting IFNAR1 degron phosphorylation in a manner that does not require priming phosphorylation. Furthermore, in human KR cells (which harbor catalytically inactive Tyk2 and were shown not to support IFNα-induced IFNAR1 phosphorylation, ubiquitination, and degradation) (12,15), the phosphorylation of the priming Ser532 site and of the degron Ser535 in response to TG was also detected (Fig. 3B). Collectively, these results suggest that phosphorylation of the priming site occurs in a ligand- and Tyk2-independent manner and is dispensable for the ligand-induced pathway.
We have previously reported that induction of UPR promotes ubiquitination and degradation of endogenous or exogenously expressed wild type IFNAR1 in human cells (17). Here we sought to determine the role of phosphorylation of the priming site in UPR-induced ubiquitination of IFNAR1. Treatment of cells with TG noticeably increased the extent of ubiquitination of wild type IFNAR1 but not of the S532A mutant (Fig. 3C). Furthermore, this mutant was less sensitive to a decrease in the levels of IFNAR1 induced by TG (Fig. 3D). These data suggest that the priming phosphorylation of IFNAR1 plays an important role in the ubiquitination and down-regulation of IFNAR1 in response to UPR induction.
UPR stimulates Ser535 phosphorylation of IFNAR1 and accelerates ubiquitination and degradation of this receptor in a manner that relies on PERK activity (17). We next sought to investigate whether PERK is required for phosphorylation of the priming site within IFNAR1. Transfection of HeLa cells with small hairpin RNA targeted against PERK led to a partial knockdown of this kinase, as evident from its decreased level and decreased phosphorylation of its known substrate eIF2α in cells treated with TG (Fig. 4A, lower set of panels). Under these conditions, the efficacy of TG-induced phosphorylation of IFNAR1 on the priming Ser532 was also decreased (Fig. 4A, upper set of panels). This result suggests that PERK is required for UPR-induced priming phosphorylation. Consistent with this suggestion, TG-induced phosphorylation of mouse IFNAR1 on Ser523 (analogous to Ser532 in the human receptor) was not observed in mouse embryo fibroblasts from PERK knockout animals (Fig. 4B). Given that PERK plays an important role in UPR-induced ubiquitination and degradation (17) and these events also depend on the priming site of IFNAR1 (Fig. 3, C and D), the findings that PERK regulates Ser532 phosphorylation also indicate that this kinase might function upstream of the phosphorylation of the priming site.
Expression of wild type but not catalytically inactive PERK mutant led to a noticeable down-regulation of endogenous IFNAR1 (Fig. 4C), suggesting that kinase activity of PERK is required for ligand-independent IFNAR1 degradation. We next sought to determine whether PERK may serve as a direct kinase for the priming site. Incubation of recombinant active PERK with GST-IFNAR1 and ATP in an in vitro kinase assay similar to the one shown in Fig. 2D did not yield any phosphorylation of the substrate on Ser532 that would be detectable by immunoblotting using the anti-Ser(P)532 antibody (data not shown). Furthermore, when this reaction was carried out in the presence of radiolabeled ATP, we could not detect the incorporation of phosphate into GST-IFNAR1 (Fig. 4D). Having excluded the possibilities that the integrity of the substrate might be somehow compromised (by analyzing protein load using Coomassie staining and demonstrating that this very substrate was efficiently phosphorylated by the whole cell lysate) or that the kinase was inactive (given efficient autophosphorylation and phosphorylation of contaminants denoted by asterisks in Fig. 4D), we conclude that PERK is not capable of directly phosphorylating IFNAR1. This result, together with data from Fig. 2D demonstrating induction of a Ser532 kinase in cells treated with TG, also suggests that UPR stimulates a PERK-dependent activation of another serine kinase that functions as a direct kinase for priming phosphorylation. Alternatively, PERK activity might negatively regulate a hypothetical Ser532 phosphatase.
UPR induced by some viruses including VSV and HCV was shown not only to down-regulate IFNAR1 but also to inhibit the extent of IFNα/β signaling, providing these viruses with the means to evade the control from the Type I IFN system (17). We next sought to determine whether phosphorylation of the priming site is important for attenuation of cellular responses to IFN. In line with previously reported data (17), expression of the HCV genome in human Huh7 hepatoma cells noticeably down-regulated the level of endogenous IFNAR1 (Fig. 5A, top panel). When loading was normalized to yield comparable amounts of IFNAR1 in the immunoprecipitation reaction, we also observed that HCV induced phosphorylation on both Ser535 and Ser532 (Fig. 5A, lower set of panels). This result suggests that cells expressing the HCV genome display an increased priming phosphorylation of IFNAR1 that may lead to down-regulation of the receptor.
The response of these cells to IFNα was markedly attenuated (17). We further sought to determine whether this inhibition could be rescued by expression of IFNAR1 deficient in Ser532 phosphorylation. Because of limited transfection efficacy in Huh7 cells, we have co-expressed FLAG-tagged STAT1 with FLAG-tagged IFNAR1 proteins and then analyzed STAT1 phosphorylation and levels in FLAG immunoprecipitation reactions. This analysis revealed a decreased phosphorylation of FLAG-STAT1 (Fig. 5B) likely caused by a decreased level of IFNAR1 (as shown in Fig. 5A). Co-expression of a FLAG-IFNAR1 S532A mutant that is insensitive to HCV-induced priming phosphorylation restored the efficacy of IFNα-induced FLAG-STAT1 phosphorylation. However, equal amounts of vector for expression of wild type FLAG-IFNAR1 failed to reverse the HCV-mediated inhibition, most likely because of the fact that wild type IFNAR1 is susceptible to ligand-independent ubiquitination and degradation (as seen in Fig. 3, C and D) and, as a result, is expressed at levels markedly lower than that of the priming site phosphorylation-deficient mutant (Fig. 5B, bottom panel).

FIGURE 4 (legend fragment; beginning truncated in the source). A, . . . , and endogenous IFNAR1 proteins were immunopurified (IP) and analyzed for phosphorylation on the priming site and for total levels by immunoblotting using the indicated antibodies. Phosphorylation of a known PERK substrate eIF2α (as well as its total levels) and the levels of PERK itself were also determined in whole cell lysates (WCL). B, mouse embryo fibroblasts from wild type or PERK knock-out animals were treated with TG as indicated. The levels and priming phosphorylation of endogenous murine IFNAR1 on Ser523 (analogue of human Ser532) were analyzed by immunoblotting using the indicated antibodies. The phosphorylation and levels of eIF2α and the levels of PERK in whole cell lysates were also determined. C, levels of endogenous IFNAR1 in 293T cells transfected with wild type or the catalytically deficient mutant (K618A) of PERK were analyzed by immunoprecipitation followed by immunoblotting using an anti-IFNAR1 antibody. The levels of PERK, phosphorylated PERK, and levels of eIF2α were also examined. D, whole cell extracts (WCE) from 293T cells or recombinant bacterially produced constitutively active ΔN-PERK were incubated alone or with GST-IFNAR1 in the presence of radiolabeled [γ-32P]ATP as indicated. Resulting phosphorylation of GST-IFNAR1 or contaminants and autophosphorylation of PERK were determined by SDS-PAGE followed by Coomassie staining and autoradiography. The positions of PERK, GST-IFNAR1, and some irrelevant contaminants (denoted by asterisks) are indicated. Vec, empty vector.
These data indicate that priming phosphorylation of IFNAR1 may regulate IFNα/β signaling. To further explore this possibility, we reconstituted mouse embryo fibroblasts from IFNAR1 knock-out mice with either wild type murine IFNAR1 or its priming site Ser523 mutant and compared the ability of murine IFN to induce an anti-viral state in these cells. Cells that express the priming site mutant exhibited a noticeably higher innate resistance to VSV infection (as judged from lower levels of expression of VSV-M protein in the absence of exogenous IFN; Fig. 5C). Furthermore, these cells required at least a five times lower dose of exogenous IFN than cells expressing the wild type receptor to mount a comparable defense against VSV (compare VSV-M levels at a dose of 50 IU/ml in wild type cells versus 10 IU/ml in S523A cells in Fig. 5C). These data together indicate that priming phosphorylation of IFNAR1 contributes to the regulation of the cellular responses to Type I IFN.
DISCUSSION
Ligand-independent phosphorylation of Ser535 within the degron of IFNAR1 is a central event in ubiquitination and ensuing down-regulation of this receptor chain in response to UPR stimuli including viral infection (17). Identification of CK1α, a constitutively active enzyme refractory to further stimulation by UPR, as a kinase responsible for this phosphorylation (24) posed a question of how this phosphorylation is induced by UPR. Here we propose that UPR signaling activates a yet unknown kinase activity that phosphorylates IFNAR1 on Ser532, located proximal to the degron. Phosphorylation of Ser532 in turn functions as a priming event that facilitates CK1α-mediated phosphorylation of the IFNAR1 degron followed by IFNAR1 ubiquitination and degradation. In support of this hypothesis, we show that TG treatment of cells induces a Ser532 kinase that is different from CK1α and that cooperates with CK1α in phosphorylating the IFNAR1 degron in a manner that is dependent on the integrity of Ser532 (Fig. 2D). Our findings further demonstrate that phosphorylation of the priming site indeed occurs in cells where UPR is induced by IFNAR1 overexpression (Fig. 2C), by treatment of cells with TG or infection with VSV (Fig. 3A), or by expression of the HCV genome (Fig. 5A). This phosphorylation requires activity of PERK, which by itself is incapable of phosphorylating IFNAR1 (Fig. 4), suggesting the function of another kinase in this process. The data shown in Figs. 1B and 3 (C and D) suggest that basal and TG-induced ubiquitination and degradation of IFNAR1 depend on the priming site, which is also implicated in the regulation of the extent of cellular responses to IFNα/β (Fig. 5, B and C). These findings suggest that priming phosphorylation of IFNAR1 plays an important role in IFNAR1 proteolysis stimulated by UPR.
In addition, the data presented here showed that priming Ser532 phosphorylation is not efficiently induced by IFNα (Fig. 3A) and can occur in cells that lack catalytically active Tyk2 (Fig. 3B). Moreover, an IFNAR1 mutant lacking the priming site remained sensitive to IFNα-inducible phosphorylation of the IFNAR1 degron (Fig. 2A). These data further contribute to the characterization of ligand- and JAK-dependent and -independent pathways (Fig. 5D). Both pathways converge at the level of phosphorylation of the IFNAR1 degron, which is followed by recruitment of β-Trcp, ubiquitination, endocytosis, and degradation. However, the ligand-inducible pathways require JAK activity but need neither CK1α (24) nor the integrity of the priming site (this study). By contrast, the latter site and CK1α activity are crucial for ligand-independent pathways that can be induced by UPR stimuli in the absence of exogenously added IFNα/β and in cells lacking activity of Tyk2 (15, 17, 24).

FIGURE 5. Priming phosphorylation of IFNAR1 contributes to regulation of the extent of IFNα/β signaling. A, control (Con) human Huh7 cells and those expressing the HCV replicon were analyzed for IFNAR1 levels by immunoprecipitation-immunoblotting (upper panel). The lower three panels depict the experiments where gel loading was normalized to achieve comparable levels of immunopurified IFNAR1 in each lane. Phosphorylation of IFNAR1 on Ser532 and Ser535 was determined by immunoblotting using the indicated antibodies. B, control human Huh7 cells and those expressing the HCV replicon were transfected with FLAG-tagged STAT1 along with empty vector (Vec) or FLAG-IFNAR1 (wild type (WT) or S532A mutant) and were untreated or treated with IFNα (60 IU/ml for 30 min) as indicated. Lysates of these cells were immunoprecipitated (IP) using anti-FLAG antibody and analyzed by immunoblotting using antibodies against phospho-STAT1, total STAT1, and IFNAR1. C, mouse embryo fibroblasts from IFNAR1-null animals were reconstituted with murine FLAG-IFNAR1 (wild type or S523A mutant, which is a mouse analogue of the human S532A mutant). The cells were treated with the indicated doses of murine IFN for 1 h, incubated for 8 h in fresh medium, and then infected with VSV (multiplicity of infection of 0.1). Expression of VSV-M protein was analyzed 16 h later by immunoblotting. Levels of β-actin were also determined (lower panel). D, model for ligand-dependent and ligand-independent ubiquitination and degradation of IFNAR1. Both pathways converge at the level of degron phosphorylation (Ser(P)535, pS535). Signaling induced by IFN and dependent on the activity of Tyk2 does not require either CK1α (24) or priming phosphorylation (this study). The ligand-independent pathway initiated by inducers of UPR does not need either ligand or Tyk2 activity but requires CK1α (26) and PERK-dependent priming phosphorylation (this study).
The overall mechanism of utilizing inducible priming phosphorylation for subsequent degron phosphorylation in IFNAR1 is reminiscent of a similar regulation that has been described in other substrates of the SCF β-Trcp E3 ubiquitin ligase including β-catenin (38-40), Cdc25a (41-44), and Gli/Ci (45-48). Priming phosphorylation of a substrate at proximal Ser/Thr residues in the n − 3 position has been extensively demonstrated to promote subsequent phosphorylation of this substrate by CK1, sometimes on several consecutive and properly spaced phospho-acceptors (30-37). In the case of IFNAR1, the priming effect seems to be limited to just one specific site (Ser532). Despite the presence of another putative site (Ser529, seen in IFNAR1 of many species but not in chicken; Fig. 1A), mutation of this site did not affect phosphorylation of the IFNAR1 degron on Ser535. 3 Another prominent characteristic of priming phosphorylation in IFNAR1 is that this post-translational modification is inducible and seems to underlie the mechanism by which degron phosphorylation, ubiquitination, and degradation can be promoted by UPR signaling in a PERK-dependent manner.
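The n − 3 spacing at the heart of this priming logic can be made concrete in a few lines of code. The sketch below is ours, not the authors': the peptide and the function name are invented for illustration, and the only facts taken from the text are the pS/pT-X-X-S/T consensus and the observation that Ser535 lies exactly three residues C-terminal to the Ser532 priming site (535 = 532 + 3).

```python
# Hypothetical helper (not from the paper): list Ser/Thr residues that
# satisfy the CK1 consensus pS/pT-X-X-S/T, i.e. acceptors whose n-3
# residue is already phosphorylated. Positions are 1-based, matching
# the Ser-532/Ser-535 numbering used for human IFNAR1.

def primed_ck1_acceptors(seq, phospho_positions):
    """Return 1-based Ser/Thr positions exactly three residues
    C-terminal to an already-phosphorylated Ser/Thr."""
    return [p + 3 for p in phospho_positions
            if p + 3 <= len(seq) and seq[p + 2] in "ST"]

toy = "AKSPQTSLE"                      # invented peptide: Ser-3, Thr-6
print(primed_ck1_acceptors(toy, [3]))  # -> [6]: Thr-6 is primed by pSer-3
```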
Because PERK itself is not capable of directly phosphorylating IFNAR1, the identity of the priming kinase that acts downstream of PERK to phosphorylate Ser532 in response to UPR (Fig. 5D, Kinase Y) remains to be determined. Our previously reported data that CK1α activity is not induced by UPR (24) and the in vitro data presented here (Fig. 2D) rule out the possibility that CK1α itself may function as the priming kinase. In that sense, IFNAR1 is dissimilar from another substrate of β-Trcp, β-catenin, whose degron is phosphorylated by the glycogen synthase kinase 3 upon priming phosphorylation of distal phosphoacceptors by CK1α (40). Future studies aimed at identification and characterization of the priming kinase of IFNAR1 are important for designing the means for inhibition of the ligand-independent pathway and, accordingly, preventing the ability of viruses to decrease the therapeutic effects of Type I IFNs. | 2018-04-03T04:28:26.732Z | 2009-11-30T00:00:00.000 | {
"year": 2009,
"sha1": "9d2335c34f9da0ea3b6b9ceccb80f07e9f08ebd2",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/article/S0021925819637269/pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "6d82e7c038bddd79c3aa4d3a21aef9f5101c09b2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15888793 | pes2o/s2orc | v3-fos-license | Small RNA class transition from siRNA/piRNA to miRNA during pre-implantation mouse development
Recent studies showed that small interfering RNAs (siRNAs) and Piwi-interacting RNA (piRNA) in mammalian germ cells play important roles in retrotransposon silencing and gametogenesis. However, subsequent contribution of those small RNAs to early mammalian development remains poorly understood. We investigated the expression profiles of small RNAs in mouse metaphase II oocytes, 8–16-cell stage embryos, blastocysts and the pluripotent inner cell mass (ICM) using high-throughput pyrosequencing. Here, we show that during pre-implantation development a major small RNA class changes from retrotransposon-derived small RNAs containing siRNAs and piRNAs to zygotically synthesized microRNAs (miRNAs). Some siRNAs and piRNAs are transiently upregulated and directed against specific retrotransposon classes. We also identified miRNA expression profiles characteristic of the ICM and trophectoderm (TE) cells. Taken together, our current study reveals a major reprogramming of functional small RNAs during early mouse development from oocyte to blastocyst.
Recent studies have shown that siRNAs and piRNAs are expressed in mammalian germ cells and play important roles in retrotransposon silencing and gametogenesis (13,14,18). Many siRNAs, like piRNAs, appear to be derived from repetitive sequences including retrotransposons. The contribution of such small RNAs (including miRNAs) to mammalian gametogenesis is further validated by failures of gametogenesis in mice carrying loss-of-function of the Piwi family genes and Dicer, associated, respectively, with piRNA (15-17) and siRNA/miRNA production (11,12).
In early development of pre-implantation mammalian embryos, the first transition from maternal to embryonic (zygotic) programs takes place as early as the two-cell stage (19), and qualitative and quantitative changes in gene expression occur over the subsequent development. At the blastocyst stage, when the embryo is composed of two distinct cell populations, the inner cell mass (ICM) and trophectoderm (TE), marked differences in gene expression between them can be detected (20,21). Although small non-coding RNAs play key roles in gametogenesis (13,14,18), little is known about their subsequent contribution to early mammalian development. In the present study, we investigated the expression of small RNAs in mouse unfertilized (metaphase II: MII) oocytes, 8-16-cell stage embryos and blastocysts, as well as pluripotent ICMs, by high-throughput pyrosequencing. While a recent study presented the expression profile of known miRNAs in early mouse development (12), our current study reveals comprehensive profiles of small RNAs, including previously uncharacterized small RNAs, in pre-implantation embryos. The data thus demonstrate a drastic change in the expression of small RNAs associated with the transition from oocyte to embryo during mammalian development.
Collection and culture of unfertilized eggs and embryos
Female ICR mice (5-8 weeks old) were superovulated via intraperitoneal injection of 7-10 i.u. of pregnant mare serum gonadotropin (PMSG) and human chorionic gonadotropin (hCG) at 48 h intervals. The female mice were then mated with male ICR mice and inspected for vaginal plugs the next day. Unfertilized eggs (metaphase II eggs) were also collected from female mice without mating at 16-20 h post hCG, and subjected to treatment with hyaluronidase (300 U/ml in M2 medium). Fertilized embryos were collected from plug-positive female mice at the expected embryonic age as hours post-hCG: 8-16-cell stage embryo, 64-70 h; blastocyst, 88-94 h. Embryos were cultured in KSOM-AA medium containing 4 mg/ml BSA in a 5% CO2 humidified chamber (22).
Isolation of ICM and TE
Immunosurgery for isolation of ICM was carried out as described previously (23,24). Briefly, blastocysts were placed in acidic Tyrode's solution (pH 2.5) to remove the zona pellucida and rinsed 3 times with M2 medium (Sigma). Zona-free embryos were incubated in anti-mouse antiserum (1:20 in M16 medium, Rockland) at 37°C for 10 min in a 5% CO2 humidified chamber. The embryos were then washed 3 times in M16 medium and incubated in guinea pig complement (1:20 with M16 medium, MP Biomedicals, LLC) for 30 min at 37°C in a 5% CO2 humidified chamber. After incubation and washing 3 times in M16 medium, ICM was isolated from the embryos by gentle pipetting with a glass micropipette.
Microsurgery was carried out to isolate TE populations. Briefly, zona-free blastocysts were placed in a drop of M2 medium on a plastic Petri dish, and the drop covered with liquid paraffin. Excess medium was slowly removed using a glass micropipette so that the blastocyst could be fixed on the Petri dish in a position suitable for dissection (25). The blastocysts fixed onto the dishes were equatorially cleaved using a 30-G needle under a ZEISS stereomicroscope (Stemi 2000-C). Mural TE fragments were then collected via attachment to the tip of a 30-G needle.
Construction of small RNA libraries from pre-implantation mouse embryos

Small RNA libraries were constructed based on previous protocols (26,27). Total RNA was extracted from 1470 unfertilized eggs, 960 embryos at the 8-16-cell stage, 438 blastocysts and 405 ICMs using TRIzol reagent (Invitrogen), and subjected to size fractionation using flashPAGE (Ambion) according to the manufacturer's instructions. Approximately 17-40-nt RNA fragments were collected and subjected to ligation with 5 mM of the 3′-adaptor RNA oligonucleotide (Linker-1), which is 5′-adenylated and 3′-blocked with a dideoxy-C base (IDT, Supplementary Table S1), by T4 RNA ligase (Amersham) without ATP at 37°C for 1 h. The RNAs were then purified by polyacrylamide gel electrophoresis, eluted from gels in elution buffer (0.5 M ammonium acetate, 1 mM EDTA and 0.1% SDS), collected by ethanol precipitation and dissolved in H2O. The collected RNAs were further ligated to the 5′-adaptor RNA oligonucleotide (Supplementary Table S1) by T4 RNA ligase in the presence of ATP at 37°C for 1 h, and used as templates to synthesize first-strand complementary DNAs (cDNAs) using SuperScript II reverse transcriptase (Invitrogen) with the 3′ PCR Oligo (Supplementary Table S1) complementary to Linker-1 (the 3′-adaptor oligonucleotide) according to the manufacturer's instructions. The resultant cDNAs were subjected to amplification by PCR using the ABI GeneAmp PCR system 9700 (Applied Biosystems). Thermal cycling was carried out as follows. In the first PCR, the template and the 5′ PCR Oligo and 3′ PCR Oligo primers were heat-denatured at 96°C for 1 min, followed by 20 cycles of amplification at 95°C for 10 s, 50°C for 1 min and 72°C for 20 s. The second PCR was carried out using the first PCR product and the 2nd and 3rd PCR-F and -R primers for 8-10 cycles of the thermal cycling profile of the first PCR. The third PCR was carried out using the second PCR product with eight cycles of the thermal cycling profile of the second PCR. The PCR products were purified by polyacrylamide gel electrophoresis, and collected and dissolved in TE (pH 8.0) as described above after every PCR amplification.
High-throughput sequencing analysis and annotation of small RNAs
Sequence determination of the cDNAs prepared from small RNAs was carried out using the 454 pyrosequencing technology (Roche). The obtained sequence data were mapped to the mouse genome using Blastn (ftp://ftp.ncbi.nih.gov/blast), and the sequences that perfectly matched the mouse genome were selected for further analyses. Annotation of the sequences was performed as described previously (13). Briefly, to identify small RNAs corresponding to various repeats such as rRNA, tRNA, retrotransposons and DNA transposons, genomic positions of the repeats were retrieved from the University of California, Santa Cruz (UCSC) website (http://hgdownload.cse.ucsc.edu/downloads.html) and compared with the genomic positions of small RNAs. If the genomic position of a particular small RNA overlapped with any repeats by at least 15 nt, this small RNA was considered to be repeat-derived. Repeat names were retrieved for all positions to which a small RNA was mapped, and if multiple repeat names were retrieved, the class (such as LTR/MaLR or rRNA) and subclass (such as IAP), where applicable, were determined according to the majority of positions. If the top two repeats had the same number of positions, the class or subclass was not determined. To identify small RNAs corresponding to tRNAs, rRNAs, snRNAs, snoRNAs, scRNAs, miRNAs, piRNAs (previously identified in adult and neonate mouse testes and growing unfertilized eggs) and mRNAs based on sequence similarity, the sequences of these RNAs were extracted from the flat files (sequence and annotation files) of GenBank (ftp://ftp.ncbi.nih.gov/genbank/) and sequences downloaded from the following databases: tRNAs, Genomic tRNA Database (http://lowelab.ucsc.edu/GtRNAdb); snoRNAs, snoRNA database (http://www-snorna.biotoul.fr) and RNA database (http://jsm-research.imb.uq.edu.au/rnadb); piRNAs, RNA database (http://jsm-research.imb.uq.edu.au/rnadb) and the Gene Expression Omnibus (GEO) database (accession number: GSE7414); miRNAs, miRBase (http://microrna.sanger.ac.uk/sequences); mRNAs, Refseq Genes (ftp://ftp.ncbi.nih.gov/refseq) and Ensembl Genes (http://www.ensembl.org). Blastn searches were then performed using the small RNA sequences determined in this study as queries and the downloaded sequences as a database. After the annotation of small RNA clones, small RNA clusters were identified and characterized using in-house programs based on our previous study (13).
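The overlap-and-majority rule just described lends itself to a compact sketch. The code below is ours, not the authors' in-house program: the data structures are assumed (plain interval lists; a production pipeline over the UCSC repeat tracks would use an indexed structure such as an interval tree), but the decision logic follows the text: an alignment counts as repeat-derived if it overlaps a repeat by at least 15 nt, the class is taken by majority vote over all mapped positions, and a tie between the top two classes leaves the call undetermined.

```python
from collections import Counter

def overlap(a_start, a_end, b_start, b_end):
    """Length of overlap between two half-open genomic intervals."""
    return max(0, min(a_end, b_end) - max(a_start, b_start))

def annotate_small_rna(alignments, repeats, min_overlap=15):
    """alignments: list of (chrom, start, end) for one small RNA;
    repeats: dict chrom -> list of (start, end, repeat_class).
    Returns the majority repeat class, 'undetermined' on a tie, or
    None if no alignment overlaps a repeat by >= min_overlap nt."""
    votes = Counter()
    for chrom, start, end in alignments:
        for r_start, r_end, r_class in repeats.get(chrom, []):
            if overlap(start, end, r_start, r_end) >= min_overlap:
                votes[r_class] += 1
                break  # simplification: one vote per mapped position
    if not votes:
        return None
    top = votes.most_common(2)
    if len(top) == 2 and top[0][1] == top[1][1]:
        return "undetermined"  # top two classes tied
    return top[0][0]

# invented toy intervals: one L1 vote and one ERVL vote -> tie
repeats = {"chr1": [(100, 400, "LINE/L1"), (1000, 1300, "LTR/ERVL")]}
print(annotate_small_rna([("chr1", 110, 132), ("chr1", 1005, 1027)],
                         repeats))  # -> 'undetermined'
```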
We also carried out sequence determination of small RNA cDNA libraries by means of conventional DNA sequencing. The cDNAs prepared from 1470 MII oocytes and 438 blastocysts as described above were subjected to 20 cycles of amplification followed by eight cycles of the second-amplification by PCR, and the resultant PCR products were cloned into the pCR4-TOPO vector (Invitrogen) according to the manufacturer's instructions. Approximately 600 and 700 colonies derived from the MII oocyte and blastocyst libraries, respectively, were examined by the ABI 3730 xl DNA Analyzer (Applied Biosystems) followed by sequence annotation as described above.
Synthetic oligonucleotides
DNA oligonucleotides and siTrio siRNA duplexes were obtained from Invitrogen and B-Bridge, respectively. DNA and RNA oligonucleotide sequences synthesized in this study are presented in Supplementary Table S1. The siTrio negative control (B-Bridge) was also used as a non-silencing siRNA.
Amanitin treatment
Alpha-amanitin (SIGMA-Aldrich) was used to inhibit RNA polymerase II. Thirty one-cell stage embryos (28 h post-hCG) were cultured in the presence or absence of 24 μg/ml alpha-amanitin (28), and collected after 16 h of incubation; the treated embryos appeared to develop normally to the two-cell stage. When total RNA was prepared from the embryos, in vitro synthesized DsRed mRNA was added as an external control for assessment of RNA preparation. The isolated total RNAs were subjected to reverse transcription followed by Q-PCR analysis.
In vitro transcription
To construct GFP fusion genes, the MuERVL-Mm retrotransposon and β-actin sequences were amplified from cDNAs prepared, as templates, from two-cell stage embryos and blastocysts, respectively. The LINE-1 sequence in pd2EGFP-L1 (6) was also amplified by PCR. The resultant PCR products were digested with XbaI and NotI, and inserted into the phMGFP vector (Promega) treated with the same restriction enzymes. The PCR primer sets for amplification of the templates for synthesis of sense- and antisense-strand RNAs are presented in Supplementary Table S1. We also constructed a template plasmid encoding the DsRed gene via replacement of the SmaI-NotI fragment carrying GFP in the phMGFP vector with the SmaI-NotI fragment carrying DsRed, isolated from the pDsRed-Monomer-N1 vector (Clontech). The constructed plasmids were digested with NotI and used as templates in RNA synthesis. In vitro transcription was carried out using the mMessage mMachine T7 Ultra kit (Ambion) according to the manufacturer's instructions.
Electroporation
Electroporation was carried out according to previous reports (29,30). Approximately 20-30 fertilized eggs (one-cell stage) and 8-16-cell stage embryos were subjected to electroporation in 30 μl of HBS buffer [20 mM HEPES, pH 7.0-7.6 (SIGMA-ALDRICH), 150 mM NaCl] containing 2 μg of tetramethylrhodamine-labeled dextran (3000 MW) (Molecular Probes) or 2-4 μg of GFP-fusion gene mRNA together with 1 μg of the DsRed mRNA as a control. Three sets of four electric pulses (21 V for fertilized eggs and 28-30 V for 8-16-cell stage embryos; duration, 1 ms; interval, 99 ms) were delivered using an electronic pulse generator (model CUY-21, BEX, Tokyo) with electrodes having a gap of 1 mm (BEX, Tokyo), with 1-min intervals and polarity changes between the sets of pulses. Delivery of rhodamine-labeled dextran into the embryos was examined with a ZEISS fluorescence microscope (Axiovert 40 CFL). Embryos electroporated in the presence of the reporter mRNAs were incubated in KSOM-AA medium at 37°C in a 5% CO2 humidified chamber. Approximately 30 min after electroporation, 5-10 live embryos were collected for isolation of total RNA and the remaining samples were further incubated. At 18 h after electroporation (corresponding to the two-cell and early blastocyst stages), total RNA was isolated from 5 to 10 embryos. In the case of suppression of Dicer, two-cell stage embryos were electroporated in 30 μl of HBS buffer containing 10 μl of 100 μM siTrio siRNAs against Dicer (Supplementary Table S1) under the same conditions as for fertilized eggs. The electroporated embryos were then cultured in KSOM-AA medium as described above. The levels of suppression of Dicer were determined by Q-PCR.
Gene expression analyses under Dicer knockdown
More than 100 two-cell stage embryos were subjected to electroporation with the siRNAs against Dicer as described above. Three days after electroporation, total RNA was extracted from 71 electroporated embryos, which had developed to blastocysts, and examined with the Affymetrix GeneChip® Mouse Genome 430 2.0 Array (Affymetrix), followed by analyses using the GeneChip Operating Software Program ver. 1.4 (Affymetrix) with default parameters (considered significance: P < 0.005) according to the manufacturer's instructions. Genes presenting possibly significant changes in their expression were further examined by Q-PCR. Total RNA was extracted from 15-20 Dicer-knockdown embryos and subjected to cDNA synthesis as described above, and Q-PCR was carried out using the primers indicated in Supplementary Table S1.
Small RNAs present in pre-implantation mouse embryos
To investigate small RNAs in pre-implantation mouse embryos, we collected 960 embryos at the 8-16-cell stage [2.5 days postcoitum (dpc)], 438 blastocysts (3.5 dpc) and also 1470 unfertilized eggs (MII oocytes). Small RNAs were extracted from the samples and examined by 454 pyrosequencing followed by sequence annotation. We obtained 204080, 227157 and 244650 small RNA sequences that completely matched the mouse genome from the MII oocytes, 8-16-cell embryos and blastocysts, respectively. The annotated small RNA sequences revealed a marked increase in the known miRNA population and a marked decrease in small RNAs corresponding to retrotransposon sequences over the course of development (Figure 1A). The size distribution of the small RNAs revealed two peaks, one at 22 nt and the other at 27-30 nt (Figure 1B-D). The most prominent peak, at 22 nt, was consistently present at all stages, but its major constituent changed from retrotransposon- and mRNA-derived small RNAs (MII oocyte and 8-16-cell embryo) to miRNAs (blastocyst). The other peak, at 27-30 nt, was observed in MII oocytes and 8-16-cell embryos but hardly in blastocysts, and it was enriched in retrotransposon sequences. The data described above appear to be reproducible, because similar results have been obtained from independent experiments by means of conventional DNA sequencing procedures (Supplementary Figure S1).
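The class-resolved size distributions summarized above come down to a simple tabulation. The sketch below is ours (the read tuples are invented; the class labels merely echo those named in the text) and shows how annotated reads would be binned by length within each class, so that the 22-nt and 27-30-nt peaks and their changing composition can be read off per stage.

```python
from collections import defaultdict

def length_profile(annotated_reads):
    """annotated_reads: iterable of (sequence, annotation_class).
    Returns dict: annotation_class -> {read_length: count}."""
    profile = defaultdict(lambda: defaultdict(int))
    for seq, cls in annotated_reads:
        profile[cls][len(seq)] += 1
    return profile

# two invented reads, one per peak:
reads = [("ACGU" * 5 + "AC", "miRNA"),        # 22 nt
         ("ACGU" * 7, "retrotransposon")]     # 28 nt
for cls, counts in length_profile(reads).items():
    print(cls, dict(counts))
```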
Many of the 22-nt peak small RNAs other than miRNAs, and many of the 27-30-nt small RNAs, were mapped to the previously identified oocyte siRNA and piRNA clusters, respectively (13,14) (Supplementary Table S2), and new small RNA clusters were also detected. However, the number of small RNA clusters progressively decreased over the course of development (Supplementary Table S2).
Small RNAs derived from retrotransposons
The cloning frequency of the retrotransposon-derived small RNAs was decreased as a whole over the course of development: the LINE-1 (L1)-derived small RNAs, which constitute a major fraction in MII oocytes and contain both putative siRNAs and piRNAs, obey this rule (Supplementary Figure S2A and B). However, the small RNAs derived from some retrotransposons exhibited unique expression patterns. For example, the frequency of small RNAs derived from a member of the LTR/ERVL family, MERVL-Mm, showed a transient increase at the 8-16-cell stage (Supplementary Figure S2A and C) and the increased small RNAs were putative siRNAs.
The frequency of small RNA derived from SINE/B1 was also initially low but increased at the 8-16-cell stage and stayed at a similar level thereafter (Supplementary Figure S2A). Interestingly, the SINE/B1 small RNAs in MII oocytes comprise only putative siRNAs, but the small RNAs increased at the 8-16-cell and blastocyst stages appear to contain both putative siRNAs and piRNAs (Supplementary Figure S2B). The expression profiles of the Piwi family genes indicated that Mili, whose protein can be associated with piRNAs ranging from 25 to 27 nt (3), was transiently expressed at the eight-cell stage (Supplementary Figure S3), suggesting the possibility that the SINE/B1 piRNAs may be associated with the transiently expressed Mili in the embryos.
Small RNAs derived from retrotransposons mediate gene silencing
We investigated whether the retrotransposon-derived small RNAs play an active role in gene silencing. A previous study demonstrated that exogenously introduced target RNAs containing the L1 sequence were specifically degraded in oocytes by an RNAi-dependent mechanism (6). To examine whether a similar degradation of L1-containing RNAs occurs in pre-implantation embryos, in vitro synthesized GFP RNAs carrying the L1 sequence were introduced into fertilized one-cell and 8-16-cell embryos (Supplementary Figure S4A), and degradation of the target RNAs was monitored by real-time PCR (Q-PCR). As a result, the RNAs carrying the L1 sequence were specifically degraded at both the one-cell and 8-16-cell stages (Supplementary Figure S4B), suggesting that a silencing mechanism similar to that in oocytes may also operate in pre-implantation embryos.
When GFP RNAs carrying the MERVL-Mm sequences ( Figure 2A) were introduced into one-cell and 8-16-cell embryos, the level of the GFP RNAs carrying the MERVL-Mm sense-strand sequence was markedly decreased in the 8-16-cell embryos, and slightly reduced in the one-cell embryos ( Figure 2B and C). The GFP RNAs with the MERVL-Mm antisense-strand sequence was unaffected and slightly affected in one-cell and 8-16-cell stage embryos, respectively. Thus, our data indicate a stage-specific silencing of MERVL-Mm in early developing mouse embryos, and the silencing is likely dependent upon the level of the siRNAs derived from the MERVL-Mm retrotransposon itself (Figure 2A and Supplementary Figure S2A and C). In addition, Dicer-knockdown embryos ( Figure 3A) showed a marked increase in the level of the MERVL-Mm transcript, consistently suggesting the involvement of RNAi in the MERVL-Mm retrotransposon silencing [ref. (31), Figure 3B].
Expression of miRNAs in mouse MII oocytes and pre-implantation embryos
As a basis for analyzing the expression of miRNAs during early mouse development, an appropriate control(s) is necessary in order to normalize the levels of miRNAs. We thus examined the expression levels of snoRNA135 and the miR-16 and -200c miRNAs, whose cloning frequencies were essentially unchanged in our small RNA libraries, by means of Q-PCR with equal amounts (11 ng) of total RNAs prepared from MII oocytes, 8-16-cell embryos and blastocysts. As a result, similar expression levels of each of the examined small RNAs were detected among the samples, suggesting that miR-16, miR-200c and snoRNA135 are suitable controls (Supplementary Figure S5A).
The expression of the miRNAs belonging to the let-7 family and miR-290 and -467 clusters during early mouse development was investigated by Q-PCR and normalized to the level of the controls (miR-16, miR-200c or snoRNA135). As shown in Supplementary Figure S5B, similar expression profiles of the miRNAs normalized to any of the controls were detected and the profiles were compatible with the data of the cloning frequencies.
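The normalization arithmetic is not spelled out in the text; the conventional 2^-ΔΔCt relative-quantification formula, sketched below with invented Ct values, performs the kind of computation described: the target miRNA signal is first scaled by a stably expressed control such as miR-16 or snoRNA135, then expressed relative to a reference sample set to 1.

```python
def relative_level(ct_target, ct_control, ct_target_ref, ct_control_ref):
    """2^-ddCt: target level in a sample, normalized to a control RNA
    and expressed relative to a reference sample (reference = 1)."""
    d_ct_sample = ct_target - ct_control
    d_ct_ref = ct_target_ref - ct_control_ref
    return 2.0 ** -(d_ct_sample - d_ct_ref)

# e.g. a let-7 miRNA in blastocyst versus MII oocyte (the reference),
# normalized to miR-16; all Ct values below are made up:
print(relative_level(ct_target=26.0, ct_control=22.0,
                     ct_target_ref=24.0, ct_control_ref=22.0))
# -> 0.25, i.e. a fourfold lower level than in the reference sample
```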
To investigate miRNAs during the transition from maternal to embryonic (zygotic) programs in early development, we treated zygotes (one-cell stage embryos) with alpha-amanitin, an RNA polymerase II inhibitor, for 16 h and examined by Q-PCR the expression of the miRNAs belonging to the let-7 family (let-7b and -7g), which were present in MII oocytes and markedly decreased during pre-implantation development, and to the miR-290 cluster (miR-292 and -294), which were barely present in MII oocytes and significantly increased toward the 8-cell and blastocyst stages (see Supplementary Table S3). The alpha-amanitin-treated embryos, which developed during the 16-h culture into the two-cell stage, showed lower levels of miR-292 and -294 expression compared with control embryos untreated with alpha-amanitin, in contrast to the let-7s and miR-16, a ubiquitous miRNA, which showed little difference in their levels (Supplementary Figure S6). The data suggest that, while maternal miRNAs reportedly decrease in two-cell stage embryos (12), the zygotic expression of miRNAs most likely initiates as early as the two-cell stage.
Small RNAs present in ICM
The blastocyst is composed of two distinct cell populations: the ICM and TE. The ICM cells exhibit pluripotency and generate the embryo proper, and the TE cells participate in the formation of the placenta after implantation. To investigate small RNAs in the ICM, we collected 405 ICMs from blastocysts by immunosurgery (Supplementary Figure S5C) (23) and prepared small RNAs. High-throughput sequencing analysis as described above was carried out and we obtained 195401 small RNA sequences that matched the mouse genome ( Figure 1A and E). As shown in Figure 1E, miRNAs were the most predominant small RNA class of the ICM as in the blastocyst profile and the members of the miR-290 cluster appeared to be expressed abundantly. The piRNA peak was hardly detected ( Figure 1E) and 22-nt putative siRNAs corresponding to retrotransposons and other sequences occupied a small fraction. Numerous rRNA derivatives were detected, but they were likely to be degradation products resulting from the immunosurgery. This is because they showed a broad distribution in the small size range (<21 nt) ( Figure 1E) and were not so abundant in the blastocyst profile ( Figure 1A and D).
Asymmetries in miRNA expressions between ICM and TE
Based on the data of each miRNA in the ICM and blastocyst, we estimated miRNAs exhibiting asymmetrical expression between the ICM and TE (Supplementary Table S3). To verify such an asymmetric distribution of miRNAs, we collected TE cells from blastocysts by microsurgery (Supplementary Figure S5D), and their total RNAs were examined by Q-PCR for the expression of Oct3/4, Nanog (ICM markers) and Cdx2 (TE marker) (20,24,32,33), in comparison with those from the blastocyst as a control. The ratio of the expression level of either Oct3/4 or Nanog to that of Cdx2 is substantially smaller in the TE sample than in the blastocyst (BL) sample (Supplementary Figure S5E), thus validating our method for collection of the TE sample. Using those RNA samples, we investigated the levels of miR-99b and miR-210 by Q-PCR followed by normalization to the level of miR-200c as a control. The results indicated that the relative amount of either miR-99b or miR-210 in the TE was significantly larger than that in the BL, suggesting predominant expression of these miRNAs in the TE cells (Supplementary Figure S5F).

FIGURE 3 (legend; beginning truncated in the source). (A) . . . against Dicer (siDicer) or non-silencing control siRNAs (siCont.) were introduced into two-cell stage embryos (1.5 dpc) by electroporation, and total RNA was extracted from the electroporated embryos (2.5, 3.5 and 4.5 dpc). The expression level of Dicer was examined by Q-PCR and normalized to that of Gapdh examined as a control. The resultant level of Dicer in the presence of siDicer was further normalized to that in the presence of siControl (siCont.) as 1 (n = 3; error bars represent SEM). The data indicate that the level of Dicer mRNA in the presence of siDicer remains low until 3.5 dpc and recovers toward the normal level thereafter, suggesting that Dicer knockdown at the RNA level lasts until about 3.5 dpc. (B) MERVL-Mm and miR-99b expression under Dicer knockdown. The expression of MERVL-Mm, which appears to be regulated by endogenous siRNAs, and of miR-99b, which appears to be a zygotic miRNA, was examined by Q-PCR using the same samples as in A. A marked increase and decrease in the levels of MERVL-Mm and miR-99b, respectively, were detected at 4.5 dpc; that is, the effect of Dicer knockdown on the expression of MERVL-Mm and the miRNA became evident on the third day after the introduction of siDicer, when the Dicer mRNA level had recovered from RNAi suppression. These observations suggest that there is a time lag between changes in Dicer mRNA and protein levels, and we therefore examined the effect of Dicer knockdown on the stability of transcripts of interest on the third day after siDicer was introduced (4.5 dpc). (C) Influence of Dicer knockdown on gene expression. Electroporation with the siRNAs against Dicer (siDicer) and siControl (siCont.), and preparation of total RNA, were carried out as in B. The expression levels of the indicated genes were examined by Q-PCR and normalized to those of Gapdh as a control. The resultant expression levels in the presence of siDicer were further normalized to those in the presence of siControl as 1. Data are averages of at least three independent experiments [error bars represent SEM; *P < 0.05 (t-test)].
We further selected the top twenty miRNAs in each lineage (the ICM and TE groups) and examined their cloning frequencies in MII oocytes, 8-16-cell embryos and blastocysts ( Figure 4B and C). As a result, while most of the miRNAs in the TE group exhibited a gradual increase in expression over the course of development, many of the miRNAs in the ICM group, which included five miR-290 cluster members, showed an increase at the 8-16-cell stage followed by a smaller change between the 8-16-cell and blastocyst stages. Accordingly, the data suggest distinct differential expressions of miRNAs between the ICM and TE cell lineages.
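Because the TE itself was not sequenced, the ICM/TE asymmetry of Supplementary Table S3 has to be inferred from the ICM and whole-blastocyst libraries. The exact estimator is not given in the text; one straightforward reading, sketched below with invented read counts (only the library sizes are the ones reported above), compares each miRNA's per-library cloning frequency in the ICM against the blastocyst, with ratios above 1 suggesting ICM enrichment and ratios below 1 suggesting that the signal comes mostly from the TE.

```python
def enrichment(icm_counts, bl_counts, icm_total, bl_total):
    """Return dict: miRNA -> ICM/blastocyst ratio of per-library
    cloning frequencies (>1 suggests ICM bias, <1 suggests TE bias)."""
    ratios = {}
    for mir in set(icm_counts) | set(bl_counts):
        f_icm = icm_counts.get(mir, 0) / icm_total
        f_bl = bl_counts.get(mir, 0) / bl_total
        if f_bl > 0:
            ratios[mir] = f_icm / f_bl
    return ratios

# invented counts; library sizes from the text (195401 ICM reads,
# 244650 blastocyst reads):
print(enrichment({"miR-294": 800, "miR-99b": 20},
                 {"miR-294": 600, "miR-99b": 250},
                 icm_total=195401, bl_total=244650))
# -> miR-294 ~1.7 (ICM-biased), miR-99b ~0.1 (TE-biased)
```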
Dicer-knockdown in pre-implantation development
We wished to examine potential involvement of small RNAs in pre-implantation development, and performed suppression of Dicer, a key enzyme for production of both siRNAs and miRNAs in gene silencing, by means of electroporation using siRNAs against Dicer (Figure 3). Expression profiles in pre-implantation mouse embryos with Dicer knockdown indicated a marked increase in the level of MERVL-Mm and also a reduction in the levels of several genes including miR-99b (Figure 3B). Interestingly, Q-PCR confirmed the significant reduction of the expression of N-myc (Figure 3C).

FIGURE 4 (legend fragment; beginning truncated in the source). . . . (Supplementary Table S3) and plotted for three stages from MII oocyte to blastocyst. The members belonging to the miR-290 cluster are indicated in green.
Maternal and zygotic siRNAs and piRNAs in pre-implantation mouse embryos
Endogenous siRNAs and piRNAs in mammalian germ cells play essential roles in retrotransposon silencing and gametogenesis (13,14,18). Our current study has investigated the subsequent contribution of such small RNAs to early development of pre-implantation mouse embryos. Our data revealed that some siRNAs and piRNAs derived from retrotransposons are transiently (zygotically) upregulated and likely directed against specific retrotransposons, although siRNAs and piRNAs as a whole are markedly decreased over the course of development (Figure 1, Supplementary Figure S2 and Table S2).
The transient increase in the MERVL-Mm small RNAs represents a new production, and most of the small RNAs are putative siRNAs. Since previous studies indicated that sense and antisense MERVL-Mm transcripts were abundantly expressed at the two-cell stage, but not at other stages (34,35), and since the MERVL-Mm transcript was significantly increased in Dicer-knockdown mouse embryos ( Figure 3B) (31), our present evidence strongly suggests that endogenous siRNAs are zygotically produced from the MERVL-Mm transcripts following fertilization and work as mediators in RNAi-dependent silencing directed against the MERVL-Mm transcript itself. As a result, the level of MERVL-Mm is reduced after the two-cell stage. Thus, an autonomous suppression of MERVL-Mm via RNAi appears to occur.
The transiently increased piRNAs and siRNAs, which are derived from SINE/B1, together with the increased Mili at the same stage might also participate in autonomous suppression of the SINE/B1 retrotransposon.
In addition to zygotically produced siRNAs and piRNAs, maternally derived small RNAs may also contribute to early development. For example, it is noteworthy that maternally derived L1 small RNAs appear to function until at least the 8-16-cell stage, although these small RNAs are markedly decreased following fertilization (Supplementary Figure S2A and B). Given that genome-wide reprogramming including protamine/histone exchange and DNA demethylation takes place in early mammalian development (36-38), maternally and zygotically produced siRNAs and piRNAs presumably participate in the defense against harmful retrotransposons activated during the reprogramming, which in somatic cells are presumably silenced by chromatin modifications such as DNA methylation.
Expression of miRNAs in pre-implantation mouse embryos
MiRNAs play essential roles in gene regulation during early development (7-9). Various mouse miRNAs are synthesized after fertilization and the miRNA complexity rapidly increases (Figures 1 and 4A, Supplementary Table S3). An increase in miRNAs of the miR-290 cluster, which are specifically expressed in embryonic stem (ES) cells (39-41), is particularly remarkable (Figure 4A), as previously described (12). In the ICM, miRNA is also the most predominant small RNA class, as in the blastocyst profile, and the members of the miR-290 cluster are expressed abundantly. Based on the expression ratio of each miRNA in the ICM and blastocyst, we estimated miRNAs exhibiting asymmetrical distribution between the ICM and TE (Supplementary Table S3), and the biased presence of some of the miRNAs was verified using dissected TE samples (Supplementary Figure S5F). The data suggest that differences in miRNA expression occur between the ICM and TE cell lineages, which may contribute to specialization of cells in the two cell lineages, and to maintenance of stemness specifically in the ICM cell lineage. To elucidate such contributions, more extensive studies need to be carried out.
Potential involvement of small RNAs in pre-implantation development
It should be noted that the expression of N-myc is significantly decreased in Dicer-knockdown embryos ( Figure 3C), although the N-myc gene regulation involving functional small RNAs produced by Dicer remains to be investigated. N-myc is reported to be involved in maintenance of stemness in ES cells (42,43), and its expression is reduced after differentiation of the cells (44,45). N-myc is also predominantly expressed in the ICM cells at the blastocyst stage (46). As for the association of Dicer with pluripotency, Dicer-deficient mice are lethal at embryonic day 7.5, lacking pluripotent cells (47). Since our Dicer knockdown was carried out in pre-implantation embryos, the gene silencing involving Dicer may contribute to the maintenance of stemness even before implantation, and N-myc might play a key role in such maintenance of stemness. Taken together, the data suggest the possibility that the gene silencing may participate in the maintenance and differentiation of pluripotent cells in addition to the suppression of retrotransposons over the course of pre-and post-implantation development.
Transition of a major small RNA class from siRNA/piRNA to miRNA during pre-implantation development

Together with the findings of recent studies (3,5,6,12-14), our current study has revealed that the transition of a major small RNA class from siRNA/piRNA to miRNA takes place during pre-implantation development (Figure 5). The data further suggest that the zygotic expression of miRNAs, as the beginning of the transition, presumably starts as early as the two-cell stage. Given that miRNA is a major mediator of gene regulation, and predominantly present in mammalian somatic cells including pluripotent ES cells (40,48,49), it is conceivable that early mammalian embryos shape the somatic-type small RNA modality and gene regulation involving miRNAs prior to their implantation. | 2014-10-01T00:00:00.000Z | 2010-04-12T00:00:00.000 | {
"year": 2010,
"sha1": "183fa49113f0b743f8cbab90dbfdd294cb704edb",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/38/15/5141/16765259/gkq229.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "183fa49113f0b743f8cbab90dbfdd294cb704edb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
244611968 | pes2o/s2orc | v3-fos-license | “Eurowhite” Conceit, “Dirty White” Ressentiment: “Race” in Europe
This paper offers tools to rethink global critical insights on “race” in the contemporary structural transformation of European identity politics from the perspectives of postcolonial global historical sociologies. “Race” regimes rest on the following background assumptions: (1) The claim that humankind consists of a finite number of disjunct (non-overlapping) “groups,” “populations” or, in the extreme, “races”; (2) The presumption that it is valid to arrange those “groups,” “populations” or “races” in a system of moral super- and subordination; (3) The contention that the resulting moral hierarchy forms a single constant, irrespective of socio-historical contexts, criteria, or purposes of comparison; (4) Insistence that a single, ahistorical/decontextualized hierarchy can be mapped on to body shape, skin pigmentation or other epiphenomenal “features” of “groups,” “populations,” or “races,” such that (5) “Whiteness” is always already at the top, and “Blackness” is always already at the bottom, of that hierarchy. This paper focuses on the workings of “Whiteness” as a moral-geopolitical superiority claim, whose defining element is an ahistorical/decontextualized claim, indeed demand, for unconditional global privilege. “Whiteness” is an unfounded, un-found-able — hence eminently unstable and contested — identity category. It is a relational category whose core is fixed as a constant, inaugurating the “White” subject’s relations (“superiority”) to its constitutive outside. I introduce two conceptual innovations: “eurowhiteness” — the result of an internal structuring of the category of “Whiteness” whose purpose is separating out an even more exalted, even more superior “cultural”-“racial” distinction within the universe of “Whiteness” — and “dirty whiteness” — to capture the epistemic position of quantitatively undervalued positions within the moral quasi-community of “White” claims for global privilege.
A few years ago, in Budapest, I attended a presentation on the conditions of European Roma communities, given by a scholar from a Scandinavian university. Later I asked the author why he consistently avoided use of the terms "race"/racism/racist, given that the material he presented could be seen as a textbook illustration of those as I understood them. He replied, "We don't have "race" in Europe. That is an American concept." To be sure, the first part of his response is almost too easy to contradict:2 Viewers of practically any European soccer game will notice the "endless barrage" (Greig 2020) of the "monkey sounds" made by fans to bully players racialized as non-'White'3 (Bentson 2019; Greig 2020; Harris 2019). This custom is exceedingly difficult to read as anything but a crass expression of a popular culture of biological racism - so much so that a group of "hard core" fans of a leading Italian team have written an open letter to a newly recruited, international superstar, who had expressed apprehension about their chants, admitting to the practice but rejecting the label "racism," explaining that, in Italy, there is no racism, and that they only do the monkey sounds to "mess up" the opposing team ("Inter Fans . . ." 2019). All that rhymes closely with the well-known charges of racism in the institutional structures and of the underrepresentation of minorities in international sports (e.g., Bradbury 2013), including European soccer. Dehumanization of players on the field is so widespread that two clubs, both in northern Germany (Haasen 2018; Negley 2020), have chosen to place "antiracism"/"antinazism" in the center of their identity marketing strategy. Not to be left too far behind the soccer clubs, the European Commission has recently raised (European Commission 2019; Gachet n.d.; Mijatovic 2021) what mainstream liberal political practice labels as "the problem of racism," signifying, at a minimum, a combination of a political unease with, and an inability to repress, the issue.
Image 1. Tableau Natural Selection of Skin Color (Sélection naturelle de la couleur de la peau), Demography Exhibit, Musée de l'Homme, Paris, mid-1990s. Photo © József Böröcz.
I took the photograph presented in Image 1 in the mid-1990s. It is a snapshot of a tableau near the entrance to the then intendedly permanent (since removed) human demography exhibit, titled "Natural Selection of Skin Color,"[4] in the Musée de l'Homme, the "reinvented" (Grognet 2015; Lebovics and Boëtsch 2018) historical and "biological" (Lebovics and Boëtsch 2018) anthropology museum, clearly a shrine of French colonial science,[5] in Paris. The image included in the tableau performs at least three acts of symbolic violence. In each, the violence occurs in drastically reducing the complexity of reality.
• It squeezes minute empirical variations in human skin pigmentation into eight disjunct categories of decreasing darkness (see the body-less, stylized human heads at the bottom left of the image, presented as a legend of sorts). In the language of an introductory sociological methods course, the creators of this tableau impose a fixed, ordinal scale on a non-hierarchical, fluid reality.[6] Such scales, indeed the very thought of scaling human "populations," explicitly deny what was considered, by the mid-1990s, the scientifically valid, unitary portrayal of humankind.
• The map assigns a single "skin color" to mark the empirical range of skin tonalities of people all over Planet Earth. Representing a range of distributions with a single value is a truly unprofessional slip-up, at least if viewed, again, by late-20th-century standards of scholarly representation. At this point, dispassionate observers develop a feeling that the tableau in question may be a relic from an earlier period of west European "colonial science." Indeed, even a cursory investigation into the past of "racial" categorizations in global space reveals that the tableau of the mid-1990s bears striking resemblance[7] to European images of the world produced under the "racial doctrine" (Taylor 1988) of the Fascist era.
• It confines the "location" of the lightest "skin" color (let us call it Pinkness), portrayed, in an astonishing assertion of semiotic power, as more transparent than any of the others, making it, perforce, the unmarked point of reference for all other "skin colors" on this map, to a well-defined, contiguous geopolitical space.

[4] "La sélection naturelle de la couleur de la peau."
[5] Readers might be familiar with the story of Sarah Baartman, a young Khoisan woman removed from her homeland in colonial South Africa only to serve as a disrobed and violently objectified "freak show" exhibit in Britain and France (Maseko 1998). She died, at 26, in 1816. The Musée de l'Homme, which "owned" her remains, was at the center of related accusations (Qureshi 2004; Scully and Crais 2008) of engaging in acts of disgraceful violence against her by being in possession of, and frequently exhibiting, parts of her preserved body, until post-Apartheid South Africa successfully claimed her body for a proper funeral in 2002 (see, e.g., Saartjie Baartman . . . 2002).
[6] The ordinal scale differs, we may recall, from a nominal scale in that it forces a ranking order on the taxonomy it offers, so that "f(x) means any monotonic increasing function" (Stevens 1946: 678, Table I).
[7] I do not have the space here to reconstruct the specific intellectual-political pathways through which the "Racial Doctrine" imagery of humankind, predominant in the period of the 1920s through 1940s in Italy, might have found its way to the "Demography" exhibit at the Musée de l'Homme. Nor do I have information about the producers of the mid-1990s exhibit. So my argument stops at stating the striking isomorphy between the two projects.
The realm of Pinkness, characterized, according to the creators of the tableau, by the presence of people with exceptionally low levels of pigmentation in the epidermis (see Image 2), extends from the southern border of France north-northwestward, to include the British Isles, Iceland, and much, but not the northern littoral, of the Scandinavian peninsula (placing the Sámi communities in the Arctic in the darker-than-Pink category), to the northern Urals. Then it makes a sharp turn southwest, cuts through much of north-central Europe, to reach an imaginary line separating the northern and southern regions of Italy, eventually ending the tour on the Mediterranean coastline of France. In this schema, Pinkness is an utterly exceptional feature of humankind. It only occurs as a separate and unique blot on the planet. The area it occupies includes France, containing the spot inside the museum where the visitor of the Musée de l'Homme stands while, presumably, absorbing the message of the exhibit.
The creators of this image thought it unimportant to explain just what the purpose of this exercise might be, unless it be the "scientific" inauguration, naturalization and, hence, normalization of superficial difference in individual levels of epidermic pigmentation, seen as a variable that points at other, presumably "deeper," moral, socio-cultural meanings. Unable to find a scholarly answer to the question of why we should care about all this, especially in the context of a social science exhibit, I see it befitting to use this epidermic-reductionist, biocultural image of the world as an illustration of what the creators of the exhibit seem to have considered the underlying "truth" of human existence: a "scientific" rendition of "race difference." I take this skin-color-coded map of the world as an iconic representation of the quasi-scientific practice of the biocultural splitting (Zerubavel 1996) of humankind, created in order to advance a rudimentary biocultural "map" of the human universe. More accurately, what we see are the results of a veritable splitting spree, resulting in eight putatively disjunct categories of humanity. The splitting spree to which the unity of humankind was subjected also involved, perforce, a simultaneous lumping (Zerubavel 1996) spree, stuffing all empirical variation within each of the eight categories of skin tonality into single "colors." Split-and-lumped to conform to the needs of pseudo-scientific categorization, we now have a hierarchy of putative human difference where the violence that produces it is hidden behind the pseudo-objective reference to "skin color."

Image 3. "The Various Skin Types." Source: Focus Fit und Gesund, 2021, 1: 32.
Simple and pseudo-objective as it is, it would be difficult to overstate the influence of the symbolic violence of splitting-lumping on the minds of mainstream west European scientists, scholars and, more broadly, intellectuals, as they occupy their exalted positions and execute their pattern-producing power in the world of ideas. With the mapping of "White" identity on western Europe and the insistence on its distinction from all Others in global space, the European subject has prepared him- or herself to navigate, whether in thought or physically, the seas and continents of the world outside western Europe.
The skin-color-coded image of a human world divided into eight "populations" thus carries a profound meaning in global moral geopolitics. The placement of this tableau at the entrance of the "human demography" exhibit at the Musée de l'Homme, in Paris / France / western Europe, clearly foregrounds it as an orienting device, a chart suggesting a simple, definite structure, a veritable world model. The creators of the tableau turned a metaphor, an epidermically color-coded categorization of humankind, around, making the implicit message of ostensibly immutable difference explicit and visually graspable. The effect is immediate and visceral.
The presence of such imageries can be established with relative ease in west European public cultures. Take, for instance, a recent piece of conventional dermatological advice offered to the readers of a special edition, titled Fit and Healthy, of the popular German magazine Focus (Liebich 2021), concerning proper protection of the skin from the harmful rays of the Sun. In it (see Image 3), the audience is treated to a taxonomy of skin colors (illustrated, again, with bodyless images of human heads, just as in the tableau in Images 1 and 2 above), referred to as "The Various Skin Types." To be noted, here we only have four categories in the ordinal scale of epidermic pigmentation.
To state what might be evident by now, the taxonomy of human skin presented in Focus Fit and Healthy matter-of-factly leaves out a majority of humankind, including some who are residents, let alone citizens, and in a legal, historical, artistic or emotional sense part (Goertz 1997; Plumly 2007; Schilling 2015) of the much-thematized "Volk" of Germany. Obviously, considerable segments of humankind are ignored here. Arguably, the authors and editors of Focus Fit and Healthy offer an even more partial view of humanity, so that their symbolic violence of simplification-by-exclusion goes beyond that coded in the "scholarly" typology presented in the tableau in the Musée de l'Homme.
"Eurowhite" Conceit, "Dirty White" Ressentiment
To be precise, Focus Fit and Healthy designates four "skin types": "very light," "light," "light brown," and "brown."[18] An important clue (Ginzburg 1989) lies in which parts of the empirical range of human skin tonalities are omitted. Needless to say, it is those parts farthest from the Pink extreme. The focus (pardon the pun) is, thus, on "lighter" skin as the "skin" genus, an explicitly scandalous claim that is, hence, implicitly normalized.[19] All this converges on a world model wherein the west European "White"-identified subject (1) creates a hierarchy of all people, (2) places itself at the top of that hierarchy, and (3) propagates the model as objective truth in which (4) all that, including, most important, its self-placement at the top of the global human hierarchy, is fully transparent. Consequently, (5) that "Whiteness" can be read, at will, as a synecdochic reference to the "humanness" genus, serving as a master plan for all Others.[20] From a geopolitical perspective, cognitive regimes based on "race" categorization emerged as moral / ideological / emotional instruments that guide and govern west European actors' practices regarding Other humans, in- and outside western Europe, as tools for making and maintaining "difference" (Griffin and Braidotti 2002). As moral tools, they have eased feelings of dissonance between "European" claims of west European "Goodness" (Böröcz 2006; Burton 2007; Dzenovska 2013) on the one hand, and the genocidal practice inherent to colonial capital accumulation on the other. They worked to allay west European anxieties at the time of the completion of global colonial expansion, inculcating, or, as with Fanon, "epidermalizing" (Sardar 2008:xiii; Irizarry and Raible 2014), a deeply biocultural pattern of an inferiority complex into a vast majority of humankind, a psychological state the colonizer used deliberately to promote the cause of colonial oppression, plunder and genocide (Fanon 2008 [1952]). "Racial" ideologies centered on "Whiteness" have inaugurated concentric gradations of putatively decreasing humanity, roughly proportionate to distance from western Europe, say, from the Trocadéro, the square at the entrance of the Musée de l'Homme. As an emotional device, "race" cognition has provided ways for west European subjects to experience feelings of relative relaxation and happiness, even in the colonial context, even in situations where they witnessed abject suffering by Others, even as that agony was directly caused by their very own acts committed from their subject positions as agents of colonialism. "Race" cognition has been a key tool in centuries of colonial oppression, normalizing a preposterous self-exception by, and in favor of, the colonizer operating in a world marred by a devastating pattern of inferiorization projected on the world by the "White"-identified subject, only to obfuscate the issue of the perpetrators' colonial accountability through what Fanon (2008 [1952]) called "the racial redistribution of guilt."
[18] "Sehr hell," "hell," "hellbraun" and "braun."
[19] Of course, the skin care advice in Focus Fit and Healthy is not the only instance in which the lighter-skin-as-skin-"genus" model appears in west European public culture. That violent, synecdochic representation of human epidermal variation, skewed in the "lighter" direction as it is, rhymes well, for instance, with the French debates over the national soccer team being "too black," first thematized by the extreme right, eventually burning through almost the entirety of the political spectrum, including a politician in the Socialist Party (Beaumont 2007; Thompson 2015), only to subside, for now, after two decades of bitter rows (Robins-Early and Clavel 2018).
[20] Gurminder Bhambra (2017) calls this feature, as it crops up in the social sciences, "methodological whiteness."
"the racial redistribution of guilt."21Therein lie the moral implications of the cognitive practice of the colonial difference (Chatterjee 1993;Mignolo 2002) i.e., the process in which "White"-identified Pink west European subjects impersonated the abstract moral object of "Europe" and defined it "in sharp distinction to the colonized world" (Berger 2017:17, quoting Kaelble 2013), creating a "systematic racial division of labor" (Quijano 2000:535)."Race" regimes rest, then, on the following interlocking background assumptions: 1.The claim that humankind consists of a finite number of disjunct (non-overlapping), internally homogenous "groups," "populations" or, in the extreme, "races."2. The presumption that it is "scientifically" valid to arrange, analyze, and fix those "groups," "populations" or "races" in a system of moral super-and subordination.3. The contention that the resulting moral hierarchies converge on a single constant, irrespective of socio-historical contexts, criteria, or purposes of comparison.4. Insistence that that ahistorical/decontextualized hierarchy can be mapped on to epidermic pigmentation, body shape, or other epiphenomenal "features" of "groups," "populations," or "races," such that 5. "Whiteness"-a moral category pseudo-empirically tied to mis-operationalized, and anchored, in the first (and most often the only) instance as, low epidermic pigment levels, as in Pinkness above-is always already at the top, while its putative "opposite," "Blackness" is always already at the bottom of that hierarchy.
Each of those assumptions is, of course, an obvious fallacy, an imprudent misconception, and a brutal lie. In my view, a combination of those five assumptions is what racism is.
It has often been pointed out that the biocultural superiority claim of "Whiteness" also comes with demands for, and comparatively easy access to, privileges of all kinds. I suggest that we might gain considerable theoretical advantage from reversing the logic of that argument. Instead of viewing "Whiteness" as an already existing superiority claim to which certain secondary privileges are "also" attached, I suggest we regard it as a social process in which the claim for privilege is constitutive and primary. "Whiteness," in this conceptualization, would be an epidermalized moral-geopolitical superiority claim whose defining, core element is an ahistorical/decontextualized moral-geopolitical demand for unconditional, collective global privilege. Quijano's (2000) colonial "racial" division of labor hence converts, with the demise of many formal organizational structures of the colonial system, into a global "racial" division of privilege. "Whiteness" has been a key cognitive mechanism that helped that conversion take place. "Whiteness" helps us understand the magnitude of the global privileges showered upon the west European occupants of the erstwhile colonizer "White" subject category, as it contextualizes the persistence, indeed expanded reproduction, of "racial" cognition, generations after the demise of colonialism.
"Whiteness" is an unfounded, un-earned, and un-deserve-able-hence eminently unstable and forever contested-identity category, not only because constructing hierarchies of superiority/inferiority among human populations is the charlatan pseudoscience that is, but also, crucially, because the attractiveness of the superiority claim ensconced in "Whiteness" is deeply ensconced in the largely unequal global distribution of social, economic, political, moral, esthetic, psychological, etc. privileges, at least in the form of the racialized capitalist world economy of the longue dur ee.
"Whiteness" can be defined, then, working with Ziauddin Sardar's reading (2008:xiii) of Fanon, as the obverse epidermalization (see also Stephens 2016) of the an inferiority complex manufactured in the colonial relationship and projected on the colonized by the colonizer, deriving a sense of un-proven, un-prove-able superiority in a world after the collapse of the colonial system, performed as a tool for global privilege making."Whiteness" is ritual recitation of a contrived and insincere, hence insecure and unsustainable, compensatory mechanism, a veritable superiority complex on part of the west European subject who still "lacks"-to make a playful reference to Kant's famous definition of the Enlightenment here-"the determination and courage" to see him/herself for what s/he is, without inferiorizing the Other.In Enrique Dussel's formulation, modernity-a product of the European Enlightenment-is: in fact, a European phenomenon, but one constituted in a dialectical relation with a non-European alterity that is its ultimate content.Modernity appears when Europe affirms itself as the "center" of a World History that it inaugurates; the "periphery" that sur-rounds this center is consequently part of its self-definition (1993:65).
The "White"-identified subject produces his/her self-image by racializing as non-"White," i.e., inferiorizing the non-west-European as Other-always already in the context of his/her pursuit of the global privileges.What Habermas (1992) called "the unfinished project of the Enlightenment" cannot, even in this, elementary sense, possibly be depicted as a process completed, let alone left behind.When it comes to the un-earned privilege claims of "Whiteness," it has hardly even been begun. 22Whiteness" is a relational concept whose core is fixed as a constant, inaugurating the "White" subject's relations (a putative "superiority") to its constitutive outside."Whiteness" is, I emphasize, a moral-geopolitical category.At its core, it has nothing to do with skin tonality.The frequent justificatory reference to the empirical fact of Pinkness-i.e., to repeat, the fact of individual epidermic tonality of the globally privileged collective subject is, ostensibly, on the lighter-hued end of the empirical spectrum of human skin coloration, the condition of low pigmentation-as alleged "evidence" supporting the privilege claims lodged in "Whiteness" is an impertinent ruse, a ham fisted yet effective excuse that allows users of the "Whiteness" scheme to establish a quasi-objective, pseudo-scientific foothold on which its practitioners mis-justify the core of the idea, their unsubstantiated, indefensible claim of global privilege.
The identity schema of "White" entitlement to a wide range of global privileges "based" on pseudo-objective, epidermic "criteria" is undoubtedly present in west European public and academic cultures. What sets apart west European social practices of "Whiteness" from Other instances of racialization is that it is available for borrowing, almost like books in an open-shelf public library in western Europe, especially for "White"-identified Pink west European subjects who possess a membership card in the form of a west Schengen passport.
It is in the context of "White" privilege claims, and of the "racially" coded identity imaginaries, including that of "Whiteness," that are used to substantiate them, that the intellectual and official discourses of the reverse-synecdochic representation of the European Union as "Europe" have been embedded.[23] The creation of the supra-state public authority called the European Union assumed, invoked, re-articulated, and preserved the subject position of "Whiteness" in three moral-geopolitical relations.
First, it established what has become the effective physical, moral, and social closure of the political/physical space occupied by west European societies, staving off people racialized as non-"White" (White 2019:387), rendering them epidermically ineligible for the privileges that accrue on the inside of the territory of European integration. The physical exclusion of non-"White," non-west-European subjects (White 2019:387) takes place through supra-state legal means, via the European Union's shared visa regulations,[24] and through a murky reference to the requirement of the never meaningfully defined "European identity" as a legal precondition for any non-EU-member state[25] to be allowed to file a membership request in the European Union. All that is taking place in a context in which, as we have seen, the semantic fields of west "Europeanness," Pink skin tonality and "Whiteness" overlap to a considerable degree, particularly if we define "Whiteness," as I have proposed above, as a set of global privilege claims. In that sense, the institutional arrangement of the European Union, especially its shared border policing and foreigner/migration "management" systems, functions as a quasi-state organization created with the purpose of preventing access to the territory of western Europe, defined, hence, as a "White" space, by members of Other societies, racialized as non-"White." Some exceptions and special arrangements, having to do with the member states' legacies as colonizing powers, spot demand by west Schengen capital for labor, and a trickle of refugee flows, add variations to this scheme, but the underlying idea is this.

[23] That process was long in the making, as suggested by the fact that the EU's original six founding member states included most major west European colonial powers. The United Kingdom was left out of the group of the EU's founders because the status of its recent / still existing colonial holdings could not be reconciled with the EU's principle of supra-state sovereignty. (It is reasonable to assume that some of the historical legacies of that condition have something to do with Brexit as well.)
[24] Max Andruski argues that, as a result of EU regulations (I add: regulations that are firmly rooted in colonial practices of "Whiteness" by erstwhile colonial powers, today's member states of the European Union), "certain [non-European born] bodies with European ancestry and phenotype" (2010: 358) practice frequent "motility" out of and into South Africa, so that "the complex regulation of transnational (and internal) migration on the part of states as well as supra-state networks works as part of an ensemble out of which race emerges" (2010: 361). As a result, "pre-histories of colonial movement [...] thus hang spectrally, and materially, over the present" (2010: 358).
[25] The European Union's official-legal terminology refers to non-EU-member states as "third countries," a reference in which the numeral "third" is perfectly meaningless unless viewed against the background of the idea of the "third world," a residual category widely used in the context of the tripartite cognitive splitting-lumping of humankind during the cold war.
"Eurowhite" Conceit, "Dirty White" Ressentiment Second, it created a field of everyday, popular conversation concerning the particulars-ostensibly, some loftier, superior features-of "proper" west European "Whiteness" as contrasted to other claims to "Whiteness," e.g., the "White"-settler varieties that had defined the erstwhile-colonized context.
Third, it thematized a host of tensions, uncertainties, incongruities, paradoxes and impossibilities as the moral-geopolitical distinction of western "Whiteness" was eagerly, and with some unease, mapped on the political cartography of Europe, foregrounding, with much intensity, the supposedly unalterable "eastern" borders of west European "Whiteness." Arguably those moral-geopolitical borders, as illustrated by the map of comparative "Europeanness" adopted from a French geography textbook (see Image 4), had been drawn long before the mid-1950s, when the predecessor to today's European Union was established. There is convincing evidence that something very similar to the "rule of European difference," defined elsewhere[27] (Böröcz 2006), had already been present at the dawn of capitalist modernity. Such splitting and lumping of the territory of the European continent characterized the approach of the philosophes of the west European Enlightenment as they discovered, possessed, and gave (often openly condescending, if not hostile) meanings to the peoples and societies of the lands east of today's Germany, Austria,[28] and northern Italy. The legacies of 19th-century European Orientalism, cognitive structures that further emphasized and reinforced the putative moral differences between the "west" and "the rest" of Europe, are undeniable (Boatcă 2006) in the histories of the societies that have found themselves on the outside of the so-defined lands of west European "Whiteness." The establishment of the European Union and the unexpected collapse of the political-geographical separation between the eastern more-than-half of the continent and the territories where west European "Whiteness" flourished raised the volume of the conversation concerning the "center of gravity" of proper "Whiteness" and the outside borders, or, in the words of Joschka Fischer, the then Foreign Minister of Germany, the "finality" (Fischer 2000), of west European integration to unprecedented levels. The stakes, almost everyone feels, are enormous,[29] and the attendant anxieties and risks were distributed very unevenly. For, drawing the boundaries of the European Union, a geopolitical organization devoted to maintaining the global flows that ensure the exceptional privileges accruing inside it, has dramatic implications for the abilities of societies to partake in, or be barred from access to, those global privileges. The question of the imaginary boundary separating the land of west European "Whiteness" from its eastern neighbors goes to the heart of the "modern" identity constructions of the societies east of the Germany-Austria-Italy line, which have measured themselves, ever so desperately, against the etalon of a set of idealized images of French society/culture (Böröcz 2006; Melegh 2006), at least since the mid-19th to early 20th centuries. The very existence of such struggles over "properly" European, read here: properly "White," practices lends support to Manuela Boatcă's (2013) suggestion of "replacing the notion of a single Europe producing multiple modernities by the one of multiple Europes with different and unequal roles in shaping the hegemonic definition of modernity and in ensuring its propagation." Little surprise that the epidermalization of inferiority is also palpably present (Lazarević Radak 2015) beyond the "eastern" borders of "Eurowhiteness." Both relational references have created identity discourses that carve out a special, putatively trans-historical place for the west European subject, self-racialized as "White," in a field of moral geopolitics. In conclusion, I propose a way to make explicit the two key identity practices that have implicitly emerged in regulating these fields of identity. The first one, I will call it "eurowhiteness," encapsulates the idea of a self-racialization that is imagined as a pristine, un-tainted "White" subjecthood. It distinguishes itself from identity locations racialized as non-"White," as well as distancing itself from presumably less immaculate, either diasporic or "eastern," varieties of "Whiteness." Its counterpoint, I will call it "dirty whiteness," embodies a demand for acceptance as properly "White," despite the absence of any apparent willingness on the part of occupants of the "eurowhite" subject position to accept it as such.

[27] The rule of European difference involved the launching of three interlinked cognitive operations: [...] insistence that, within Europe, goodness is distributed unevenly; [...] the claim that the uneven distribution of goodness maps on the west-east, north-south, and/or northwest-southeast axes, or the west-centric core-periphery structure, of the continent of Europe, so that locations in the eastern, southern, south-western, and/or simply peripheral parts are marred by the insufficiency, absence, or opposite of goodness; and, finally, [...] the key conclusion of the entire exercise in terms of geopolitical identities: the suggestion that goodness, an essentially "European" quality, is found in its highest empirical density in western (northern, north-western, or west-central) Europe.
[28] Even the name of Austria, in the original German, Österreich (literally: "Eastern Empire / Realm / Land in the East"), thematizes its status as an "eastern" borderland.
[29] Roberto Dainotto (2007: 2) describes the feeling of being European and outside the borders of "eurowhiteness" from an Italian subject position, along a north-south divide, instead of the west-east boundary described above. (Italy had not been included in the Schengen system until the mid-1990s.) The anxiety we felt at that initial exclusion is hard to describe. As Giuseppe Turani used to write on the pages of the daily La Repubblica, we badly wanted "to become like all others . . . to become a European country, not so Mediterranean, not so pizza-and-mandolin, not so defective" [...]. And how could we possibly overcome our parochial, let alone "defective," identities if we were denied the "promised disappearance of physical borders" that alone granted "an enhanced meaning of Europe" as a cultural identity [...]?
The "dirty white" subject position-conceived as a reaction to what it experiences as a conceited "eurowhite" condescension, or even insult, imposed on the societies of a vast land on the eastern parts of Europe-could have produced, at least hypothetically, a reaction that would challenge the racialization of humanity on principled grounds.Several political ideologies quite well known in the "dirty white" societies-from socialist internationalism through an all-encompassing view of humanity based on a political identification with the legacies of the Non-Aligned Movement through various green, liberal or even, to a limited extent, the "liberation theology" component in Roman Catholicism-could have served as a basis for such a reaction.The end of the period of state socialism and the opening of the European Union for the movement of all "factors of production," including labor, resulting in a steep increase in the proportion of east European subjects who had gained experience in working in western Schengen-Land, to a considerable extent working alongside co-workers who had a long experience in being racialized as non-"White," could have been expected to raise a popular consciousness of anti-racism among east European subjects racialized as "dirty white."Had the "eurowhite" condescension to the eastern parts of Europe happened suddenly, as part of a historical event-say, in the immediate aftermath of the collapse of state socialism in 1989-1991-perhaps it would have been possible for such a reaction to emerge.
However, that was not the case. The "eurowhite" putdown of eastern Europe has had its own longue durée history. So have east European reactions to it.
As a result, by and large the opposite happened. Instead of encouraging a search for possibilities of global anti-racist solidarity, the intellectual elites, the extended informal networks that had monopolized access to political power, and the populations at large resorted to re-warming their intellectual and political traditions from as early as the 19th century. East European "dirty white" subjects proceeded to assert their demands for being accepted as "eurowhite" (with the attendant privileges, of course) at an ever-increasing volume. Their demands were met with silence in the "eurowhite" context. Open acknowledgment of the strategy of demanding inclusion in "eurowhite" subjecthood on account of a series of dermatological and geographical accidents (a combination of the presence of Pinkness and a physical location just outside the realm of "eurowhiteness") is a taboo, as that would be tantamount to admitting to the "embarrassment" of being racialized as less-than-"eurowhite." Similarly, the ever more loudly repeated demands for acceptance are so protected on the "inside" that their critiques, let alone alternative conceptualizations, are, for all intents and purposes, forbidden. Tied up in this game, they keep producing cultural practices that the "eurowhite" subject position interprets as ever dirtier "dirty white." As a result of the repeated demands and the staunch silence they receive, the discursive soundscape of erstwhile state socialist, soon-would-be-EU-member societies soon began sounding like an illustration of Max Scheler's (2015) century-old idea he called "ressentiment": Ressentiment is a self-poisoning of the mind [...] a lasting mental attitude, caused by the systematic repression of certain emotions and affects which, as such, are normal components of human nature. [...] The emotions and affects primarily concerned are revenge, hatred, malice, envy, the impulse to detract, and spite. [...] [R]essentiment is [...] chiefly confined to those who serve and are dominated at the moment, who fruitlessly resent the sting of authority. [...] If an ill-treated servant can vent his spleen in the antechamber, he will remain free from the inner venom of ressentiment, but it will engulf him if he must hide his feelings and keep his negative and hostile emotions to himself (Scheler 2015:4-6).
As far as I can see, it is not true that "there is no 'race' in Europe." Nor is "race" an "American concept." The problem is that it is essentially almost impossible to talk about it openly in contemporary Europe. The almost-five-centuries-long period of genocidal practices in the colonial context, culminating in the simultaneous emergence of Fascism and Nazism, followed by a several-generations-long cleansing of the mind of the west European subject about explicit racial aggression, makes it very difficult for west European subjects to address "race" openly, even in a scholarly-analytical fashion. It is too embarrassing to admit the continued existence of "race," especially for a collective mindset that is, otherwise, fully committed to a self-image as possessor of the greatest cultural / civilizational achievements of humankind. Arguably, the very existence of "eurowhiteness" is a key reason why it is so difficult to talk openly about "race" in Europe today.
The inability to confront explicit "race" references is clearly demonstrated, for instance, in situations where west European "eurowhite" subjects experience surprise and a mild sense of distaste at being asked a direct question about their own "race," e.g., in official questionnaires on arrival in the United States. That repulsion is usually followed by the same west European subjects (who, as we should recall, supposedly have no "race") seamlessly categorizing themselves as "White," instead of refusing to answer the question, choosing another category, lodging a complaint, etc. A similar reaction is clearly detected in the responses of "eurowhite" commentators to news of racist police brutality, the "race"-discriminatory penal system, or the forms of resistance to systemic racism such as the Black Lives Matter movement or the acts defacing symbols of colonial-racist violence in the United States. The "eurowhite" subject sighs and marks his/her distance from those expressions of "race" as being too crass. Instead of "taking the knee" together with the young people protesting systemic violence, members of the "eurowhite" intelligentsia suddenly discover the hidden "aesthetic value" of the statues of colonial "White" statesmen, even if the persons they commemorate had been genocidal murderers and their presence in public space is an affront.
The "eurowhite" discursive strategy is, by and large, all about trying to forget "race" into oblivion.In that sense, it resembles the attitude of the small town in Germany, depicted in Nasty Girl, a film by Michael Verhoeven (1990), based on real events after World War II, whose citizens united in a conspiracy of silence against a girl who discovers and wishes to discuss their shared nazi past.That European silence about "race" is practiced from a position of global power, and its main consequence is forestalling any possibility of openly questioning "Whiteness" as a system of global privileges.
My sense, hence, is not that "there is no race" in Europe. Nor can we read "race," a practice brought to the Americas, as well as just about everywhere else in the world, by European colonizers, as an "American concept." If we relegate "race" to the US context, we expressly deny the west European origins and the centrality of western Europe in the history of five centuries of colonialism; those five centuries which have produced the splendor and grandeur displayed in just about any west European city, filled their grandiose museums, and provided the value transfer that has served as the material infrastructure for the persistence of "eurowhite" privilege claims to this day.
If we view the silence about "race" and the practice of "eurowhiteness" as excuses for global privilege claims, as I have proposed above, we might be able to notice that they are difference-making devices developed as part of the colonization of the rest of the world by west European subjects who racialized themselves and everyone else to suit the cognitive needs of an extremely complex global colonial world. "Whiteness" is, hence, a conceptual instrument that refused to vanish after the collapse of the colonial system. Instead, it became a partly explicit, partly implicit, partly formal, partly informal marker of difference all over the world, very much including Europe, in the service of providing cognitive scaffolding for regulating the moral geopolitics of access to privileges.
Image 2. Enlarged part of the tableau Natural Selection of Skin Color (Sélection naturelle de la couleur de la peau), Demography Exhibit, Musée de l'Homme, Paris, mid-1990s. Photo © József Böröcz.
"year": 2021,
"sha1": "8f667e807d4c6b0748b5b3b755ad05a8303a5c78",
"oa_license": "CCBY",
"oa_url": "http://unipub.lib.uni-corvinus.hu/6990/1/Borocz_Wiley.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "6e3a91acb86aa6cf14e3a3844619e26191874b07",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Reviewing quantum dots for single-photon emission at 1.55 μm: a quantitative comparison of materials
In this work, we present a review of quantum dot (QD) material systems that allow us to obtain light emission in the telecom C-band at 1.55 µm. These epitaxial semiconductor nanostructures are of great technological interest for the development of devices for the generation of on-demand quanta of light for long-haul communication applications. The material systems considered are InAs QDs grown on InP, metamorphic InAs/InGaAs QDs grown on GaAs, InAs/GaSb QDs grown on Si, and InAsN QDs grown on GaAs. In order to provide a quantitative comparison of the different material systems, we carried out numerical simulations based on the envelope function approximation to calculate the strain-dependent energy band profiles and the associated confined energy levels. We have also derived the eigenfunctions and the optical matrix elements for confined states of the systems. From the results of the simulations, some general conclusions on the strengths and weaknesses of each QD material system have been drawn, along with useful indications for the optimization of structural engineering aiming at single-photon emission in the telecom C-band.
Introduction
The generation and manipulation of single photons is a keystone for the development of quantum photonics devices [1][2][3]. In particular, for quantum-safe long-haul communication systems, reliable on-demand sources of quantum light in the telecommunication C-band at 1.55 µm are urgently needed [4].
Epitaxial self-assembled quantum dots (QDs) can be considered as the most investigated semiconductor nanostructures for the generation of quantum light and, after convincing success in the generation of quantum light in the 1.0 µm range [1,5,6], research efforts have been recently focused on obtaining quantum photonic sources in the telecom range.
One of the weaknesses of QDs is the low operating temperature required for single-photon emission, due to a reduction of the emission coherence caused by phonon-related linewidth broadening. Other structures, such as carbon nanotubes (CNTs) and GaN nanostructures, have been demonstrated to emit single photons up to room temperature. However, only As-based QDs have purities in excess of 99% together with high brightness of the emitted light: CNTs suffer from low brightness and poor stability [3], while GaN structures cannot emit light in the infrared region, being limited to the 280-600 nm range [7]. Moreover, epitaxial QDs provide various advantages for the development of single-photon devices, such as the possibility to control their position during growth and their easy integrability with optical cavities or advanced photonic structures for the enhancement and control of the emitted quantum light.
As the amount of research on this topic is currently increasing, at this stage it can be useful to compare different QD material systems from a theoretical point of view, to ascertain the peculiarities and the intrinsic advantages and disadvantages of specific systems. This in-depth knowledge will combine with other elements to achieve a complete assessment of the different options available; many other criteria should also be considered when choosing a material system: growth requirements, fabrication technology, costs and technology transfer considerations.
Nevertheless, a review of the general theoretical scenario of the available material systems is indeed useful to understand what performance can be expected, in particular from the point of view of single-photon emission.
To this aim, we performed numerical simulations by means of the simulation software tool TiberCAD [13], to calculate the energy level system for QD structures that can give 1.55 µm emission at 10 K and to derive some relevant figures for single-photon emission:
• Intrinsic spontaneous power density, determined by the oscillator strength, which gives an indication of the emission intensity from QDs.
• Confined ground and excited levels for electrons and holes: the energy level system has relevant effects on carrier kinetics (due to thermalization processes) and on optical pumping (resonant or quasi-resonant).
• Energy distance between QD levels and states that can act as channels for thermal escape of carriers (wetting layers, WLs, or confining layers, CLs): this is a fundamental parameter for light emission at temperatures higher than 10 K. As evidenced in the literature [14,15], under low excitation levels this energy barrier (given by the total barrier heights for electrons and holes) corresponds to the activation energy for thermal quenching of the emission (a minimal quenching model is sketched after this list). It should be noted that this is valid for QD ensemble emission: it has very recently been discussed how, for a single QD, this barrier energy might correspond to the separation of single-particle confined levels [16].
• Localization of carrier wavefunctions and type of quantum confinement (type I or type II), which has a direct effect on exciton and multi-exciton binding energy.
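As a point of reference for the thermal-quenching discussion above, the sketch below evaluates the standard Arrhenius expression for the temperature dependence of integrated QD-ensemble emission, I(T) = I0 / (1 + C exp(-Ea/kBT)). The activation energy Ea and the prefactor C used here are illustrative placeholders, not values computed in this work.

```python
import numpy as np

KB = 8.617333262e-5  # Boltzmann constant (eV/K)

def pl_intensity(T, Ea=0.30, C=1e6, I0=1.0):
    """Arrhenius model of thermally quenched emission intensity.

    T  : temperature (K)
    Ea : activation energy (eV) -- illustrative placeholder
    C  : quenching prefactor    -- illustrative placeholder
    """
    return I0 / (1.0 + C * np.exp(-Ea / (KB * T)))

for T in (10, 77, 150, 300):
    print(f"T = {T:3d} K -> I/I0 = {pl_intensity(T):.3f}")
```

With a barrier of a few hundred meV the emission is essentially unquenched at 10 K and only degrades at elevated temperatures, which is why the barrier height is such a useful figure of merit.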
Although the calculation of fine structure splitting (FSS) goes beyond the scope of this work, we will report values from the available literature to give some indications on the possibility of having entangled photon emission from the different materials considered here.
TiberCAD simulation tool
In this study, we used the TiberCAD multi-scale simulation tool, which has proved effective in the numerical study of several semiconductor low-dimensional nanostructures [13, 17-19]. TiberCAD offers a modeling environment where several different physical models may be coupled in an integrated simulation, even on different scales, ranging from the continuous to the atomistic level. Analysis and optimization of electronic and optoelectronic devices may be performed at all the relevant length scales, including linking and self-consistent coupling of different models. Continuous models based on the finite element method, such as drift-diffusion transport, the k·p multiband quantum electronic model, heat flow, strain and piezoelectricity, may be combined with atomistic methods, such as empirical tight binding, applied to material structures generated in user-defined device regions.
Numerical methods
A QD system is generally composed of a nanometer-scale region of a lower-gap semiconductor surrounded by a larger-gap material: this confining potential causes the confinement of electrons and holes in the conduction and valence band, respectively. As the confining potential is effective along all three spatial dimensions, the quantum confinement results in atomic-like discrete energy levels. Consequently, optical transitions between single confined carriers yield single-photon emission. A calculation of the emission energy therefore depends on the quantum energy levels and essentially consists of solving the Schrödinger equation for electrons and holes in the conduction and valence bands: the energy levels and carrier wavefunctions depend on system parameters such as material composition, QD size and shape, and strain effects due to lattice mismatch.
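To make the single-particle picture concrete, the following minimal sketch solves a one-dimensional finite-difference Schrödinger equation for a finite square well, a crude stand-in for the growth-axis confinement of a QD. All parameter values (well width, barrier height, effective mass) are illustrative assumptions, not inputs or outputs of the TiberCAD model.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant (J*s)
M0   = 9.1093837015e-31  # electron rest mass (kg)
EV   = 1.602176634e-19   # electron volt (J)

def confined_levels(width_nm=7.0, depth_ev=0.3, m_eff=0.04, n_levels=4):
    """Bound-state energies (eV) of a 1D finite square well."""
    L, N = 60e-9, 800                 # simulation box (m) and grid points
    z = np.linspace(-L / 2, L / 2, N)
    dz = z[1] - z[0]
    V = np.where(np.abs(z) < width_nm * 1e-9 / 2, 0.0, depth_ev * EV)
    t = HBAR**2 / (2 * m_eff * M0 * dz**2)   # hopping term of the stencil
    # Hamiltonian: -hbar^2/(2m) d^2/dz^2 + V(z), 3-point finite differences
    H = (np.diag(V + 2 * t)
         - np.diag(np.full(N - 1, t), 1)
         - np.diag(np.full(N - 1, t), -1))
    E = np.linalg.eigvalsh(H)[:n_levels] / EV
    return E[E < depth_ev]            # keep only states below the barrier

print(confined_levels())              # ground and excited electron levels
```

A real QD requires the full 3D problem with position-dependent masses and strain, as done by TiberCAD, but the qualitative behavior (deeper or wider wells hold more, and more widely spaced, levels) is already visible in one dimension.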
In TiberCAD, the calculation of lattice mismatch-induced strain is based on the linear elasticity theory of solids [20], assuming pseudomorphic interfaces between different materials. This approach is computationally favorable and allows results to be easily included in a k·p model [21].
Thus, in our simulations, firstly the strain and deformation fields due to the lattice materials are found through minimization of the elastic energy, then the conduction and valence band edges, along with the effective masses, are obtained from bulk k·p calculations, including the local corrections due to strain. Strain-dependent energy bands for all materials of the quantum system are then calculated by solving the Poisson equation for the system at equilibrium.
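As an illustration of the strain step, the sketch below estimates the biaxial strain of a pseudomorphic layer from the lattice mismatch and the resulting first-order hydrostatic shift of the band gap via a deformation potential. The material constants are rough textbook values for InAs on InP, quoted only to show the mechanics of the calculation; TiberCAD performs the full 3D elastic-energy minimization instead.

```python
# Back-of-the-envelope strain and band-gap shift for a pseudomorphic layer,
# following linear elasticity (biaxial strain) and a hydrostatic deformation
# potential. Rough textbook parameters for InAs grown on InP (assumption).
a_sub, a_layer = 5.8687, 6.0583      # lattice constants (Angstrom)
C11, C12 = 832.9, 452.6              # elastic constants of InAs (GPa)
a_gap = -6.0                         # gap deformation potential (eV)

eps_par  = (a_sub - a_layer) / a_layer        # in-plane strain
eps_perp = -2 * (C12 / C11) * eps_par         # out-of-plane response
eps_hydro = 2 * eps_par + eps_perp            # hydrostatic component

dEg = a_gap * eps_hydro                       # first-order gap shift (eV)
print(f"in-plane strain {eps_par:.3%}, gap shift {dEg:+.3f} eV")
```

For the compressively strained case the gap widens by roughly 0.1-0.2 eV, which is one reason why reducing the QD/buffer mismatch (as in the metamorphic approach discussed later) redshifts the emission.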
Finally, the Hamiltonian of the system is built following the eight-band k·p theory [21] in the framework of the envelope function approximation. By solving the eigenvalue problems resulting from this model we obtain the energy spectrum, that is, the eigenenergies and eigenfunctions of the system, and the optical matrix elements. Further information on the calculations of these systems with TiberCAD can also be found in [22].
For each studied QD system, we designed a 3D model of a single undoped QD alongside its associated WL; data on QD composition, size and shape were taken from literature reports and details are given in the sections dedicated to each specific material system.
We must stress the relevance of particular material parameters such as the band offset between QD and CL materials, which were found to have a strong influence on the confining potential. In this work, the best available data in the literature have been considered, taking into consideration also the bowing effect in the case of alloys [23]. Similarly, QD properties such as the size, shape and composition have an important impact on the results. For this reason, uncertainties in the precision of these values determine the range of confidence of the model calculated results; it has been shown in previous publications [22] that, for the case of metamorphic InAs/InGaAs QDs, the range of variation of values can be set at ± 20 meV.
As output values of the simulation, we considered (i) strain-dependent energy band values, (ii) eigenenergies of the confined levels for electrons and holes, (iii) 3D probability densities for the calculated states, as derived from the eigenfunctions, and (iv) the optical spectrum of spontaneous power density for optical transitions in the 0.75-0.90 eV range (1.65-1.38 µm) at 10 K, coming from the calculation of the dipole matrix elements for the energy states. As the calculation of the emitted power does not take into consideration external parameters such as non-radiative recombination mechanisms and carrier pumping processes, it should be considered as a semi-qualitative indication to compare only the effects due to different oscillator strengths in different material systems.
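The sketch below shows how such a spontaneous-emission spectrum can be assembled from a list of transition energies and relative oscillator strengths using Lorentzian broadening, and how photon energy converts to wavelength (0.800 eV corresponds to about 1.55 µm). The line energies, strengths and linewidth here are illustrative placeholders, not TiberCAD results.

```python
import numpy as np

HC_EV_UM = 1.23984  # h*c in eV*um, so lambda(um) = HC_EV_UM / E(eV)

def spectrum(energies_ev, strengths, gamma_ev=2e-3,
             e_grid=np.linspace(0.75, 0.90, 3000)):
    """Sum of Lorentzian lines on an energy grid (arbitrary units)."""
    s = np.zeros_like(e_grid)
    for e0, f in zip(energies_ev, strengths):
        s += f * (gamma_ev / 2)**2 / ((e_grid - e0)**2 + (gamma_ev / 2)**2)
    return e_grid, s

# Hypothetical ground-state (QD0) and excited-state (QD1) transitions
E, S = spectrum([0.800, 0.857], [1.0, 0.4])
print(f"QD0 emission wavelength: {HC_EV_UM / 0.800:.3f} um")  # ~1.550 um
```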
The calculated QD systems are those reported as able to emit at 1.55 µm at 10 K: (i) metamorphic InAs/InGaAs QDs grown on GaAs, (ii) InAs QDs grown on InP substrates, (iii) InAs/GaSb QDs grown on GaAs, and (iv) InAsN/GaAs QDs grown on GaAs. It is worth mentioning that in all the above cases, single-photon emission has been demonstrated in recent years, except for the diluted nitride InAs QDs where issues related to material quality have hampered this achievement.
For each system, we will present the main results of the calculation in a figure that summarizes the essential features, while at the end of the paper we have included a table to give a quantitative general picture of the different system parameters.
InP-based QDs
QDs grown on InP have historically been the most studied ones for emission at 1.55 µm, due to the favorable strain situation that allows a lower energy gap for QDs, compared to GaAs-based QDs. Indeed, single-photon emission from such QDs has been demonstrated since 2007 [11,24], and InP-based QDs have been shown to have an intrinsic FSS lower by an order of magnitude than their GaAs-based counterparts [25]. In addition, InP is the substrate of choice for commercial telecom lasers; therefore the technological transfer towards mass production of single-photon devices based on this material is expected to be viable.
Many works have been published in recent years with various QD designs and features. As the aim of this work is to give a general scenario of different QD systems, we have chosen to simulate two structures that can be considered archetypical for this material system: (A) the basic InAs/InP one, consisting of an InAs QD of truncated conical shape with a ratio between lower and upper diameters equal to 3; the QD has a diameter of 20 nm and a height of 4 nm and is embedded in InP layers, following the results of [26] and [25]; (B) a more complex design that considers an In0.53Ga0.23Al0.24As layer (lattice-matched to InP) embedding larger In0.8Ga0.1Al0.1As QDs with diameters of 22 nm and heights of 12 nm, as described in [27]. It is important to note that in this case the shape is more conical, with a smaller upper diameter of 2 nm.
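A quick geometric comparison of the two cited structures helps make the later discussion of level spacing concrete: treating both QDs as truncated cones, structure B encloses roughly three times the volume of structure A, consistent with its denser confined-level spectrum. The formula is standard solid geometry; the dimensions are those quoted above.

```python
from math import pi

def truncated_cone_volume(d_lower_nm, d_upper_nm, h_nm):
    """Volume (nm^3) of a truncated cone: V = pi*h/12 * (D^2 + D*d + d^2)."""
    D, d = d_lower_nm, d_upper_nm
    return pi * h_nm / 12 * (D**2 + D * d + d**2)

v_a = truncated_cone_volume(20, 20 / 3, 4)   # structure A: InAs/InP
v_b = truncated_cone_volume(22, 2, 12)       # structure B: InGaAlAs/InP
print(f"A: {v_a:.0f} nm^3, B: {v_b:.0f} nm^3, ratio B/A = {v_b / v_a:.1f}")
```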
It is clear that the calculated properties are given as an indication of the range that can be covered, as other InP-based QDs could have different values of relevant parameters. Nevertheless, it still seems useful to have a general overview of the figures-of-merit for this system, in particular as a benchmark for less studied material systems.
The results of our simulations agree with reports that indicate that both types of structures can emit in the C-window, but the consistent differences in QD morphology and composition have an impact on the properties of the quantum energy system. In particular, for the InAs/InP system a photoluminescence (PL) peak emission around 0.780 eV at low temperature was measured [26], agreeing with an expected value of 0.806 eV based on the result of the simulations. For the system in case B the calculated ground levels for electrons and heavy holes determine an emission energy of 0.808 eV, in substantial agreement with a PL experimental value in the range of 0.820 eV [27]. As shown in figures 1 and 2, both structures allow emission in the 1.55 µm range, although the larger QDs have a slightly lower oscillation strength, possibly due to the smaller overlap between electron and hole wavefunctions.
The very different band alignments shown in figures 1(a) and 2(a) have a direct effect on the confined energy level configuration: in the case of the smaller InAs/InP QDs, the ground states are well separated from the excited states (by 57 meV for electrons and by 26 meV for holes), while in the larger InAlGaAs QD this energy separation is reduced to 17 meV for electrons and 4 meV for holes.
The different configurations of these quantum systems affect some physical processes that involve confined carriers: on one hand, photogenerated carriers have different paths to relax from CL states to confined QD levels, and on the other, closer energy levels might allow direct generation of photocarriers in the excited levels.
Such differences in the physics of the quantum systems could result in very different performances of single-photon devices from the point of view of (i) device dynamics, due to different thermalization processes that can influence carrier kinetics [28], (ii) quasi-resonant and non-resonant pumping configurations, and (iii) separation of excitons and multi-excitons coming from different confined states. As can be observed in figures 1(b) and 2(b), the expected optical spectrum is rather different, aside from the QD0 line: for smaller QDs the emission from excited states is much more distant from that from ground states.
The last point concerns the distance in energy from confined ground states to WL states that are known to be the most effective escaping channel for thermally activated carriers: the calculation of WL states for the two structures resulted in a WL level at 1.146 eV for case A and at 1.044 eV for case B. These values agree well with the reported experimental PL emission energy for the two systems: 1.160 eV for case A [26] and 1.030 eV for case B [29].
This means that the energy separation for small InAs/InP QDs is 312 meV while it is 214 meV for larger InAlGaAs QDs, with considerable effects on the thermal quenching of emission. This also holds when the energy separation between confined levels is considered as the effective energy barrier for emission quenching of single QDs, since the shell spacing is larger for smaller InAs/InP QDs.
It should be noted that InAs/InAlGaAs/InP quantum dashes (which are rather elongated nanostructures) have recently been shown to emit single photons up to 80 K [10]: our results suggest that, to further increase the operating temperature, a configuration with higher band discontinuities would be advisable.
Metamorphic InAs/InGaAs QDs
InAs QDs grown on metamorphic InGaAs buffers have been studied for almost 15 years as a useful design to redshift the QD emission towards 1.3-1.55 µm for structures grown on GaAs substrates [30-32]: this is obtained thanks to the reduced mismatch between QDs and CLs [33,34].
More recently, the use of InAs QDs grown on metamorphic InGaAs buffers has gained considerable interest for the development of single-photon sources in the C-band, with notable success [8,9,16,35]. A dedicated review was published very recently, discussing how this design is very valuable from a technological point of view, thanks to the use of a GaAs substrate, and how single and entangled photons can be obtained [36].
Indeed, there is a technological push towards the use of this material rather than InP, as substrates of the latter material are more expensive and provide inferior heat sinking compared to GaAs ones. Moreover, growth on GaAs allows for the fabrication of the AlAs/GaAs Bragg stacks needed for devices with vertical cavities emitting photons from the surface. This is not easily achievable with InP, as a lattice-matched material with the required refractive index difference is lacking [37].
Moreover, in-depth studies of electrical properties of metamorphic QDs concluded that the defect density in these strain-relaxed nanostructures is comparable to that of standard InAs/GaAs QDs, thus making the fabrication of devices with good performance feasible [38,39].
In this work, we considered a metamorphic QD structure engineered for emission at 1.55 µm at 10 K, as derived from a previous study [22], that considers an InxGa1-xAs metamorphic buffer with x = 0.50, an InzGa1-zAs QD with z = 0.75 and an InyGa1-yAs upper CL with y = 0.55. The QD has a truncated conical shape with d = 20 nm and h = 7 nm: such values were extracted from experimental data published previously; agreement between calculated values of QD emission and PL emission provided a confident validation of the model [40,41].
An important feature to note is that in these nanostructures the degree of confinement is considerably low, to the point that electrons or heavy holes may not be confined in the QD at all, leading to type-II quantum-confined systems [22]. This change in the nature of the physical quantum system has a direct effect on the physics of confined carriers and on their recombination properties, as weak confinement reduces the probability of radiative recombination and, hence, the emission efficiency of quantum light. In addition, smaller confinement energies result in a lower population of the ground states as the temperature is increased, which matters whenever single-photon operation at higher temperatures, requiring a large ground-level population, is the goal.
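As a rough illustration of this last point, the toy two-level Boltzmann estimate below (ours, not the paper's model) shows how thermal excitation depletes the ground state; the level spacings used are the 12 meV (electrons) and 4 meV (holes) quoted later in this section for the metamorphic system.

```python
# A toy two-level Boltzmann estimate (an illustration, not the paper's model) of
# how weak confinement depopulates the ground state as temperature rises.
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def ground_state_fraction(delta_e_ev: float, t_kelvin: float) -> float:
    """Occupation of the lower of two non-degenerate levels in thermal equilibrium."""
    return 1.0 / (1.0 + math.exp(-delta_e_ev / (KB * t_kelvin)))

for t in (10, 77, 300):
    pe = ground_state_fraction(0.012, t)  # electrons, 12 meV spacing
    ph = ground_state_fraction(0.004, t)  # holes, 4 meV spacing
    print(f"T = {t:3d} K: electron ground fraction {pe:.2f}, hole ground fraction {ph:.2f}")
```

With such small spacings the hole ground state already loses roughly a third of its population at 77 K in this toy picture, hinting at why deeper confinement is desirable for higher-temperature operation.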
The results of the calculations yield an emission energy for electron-heavy hole recombination from the ground states of 0.806 eV; this value agrees to within 20 meV with the PL emission of 0.810 eV reported for metamorphic QDs [16].
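As a quick sanity check of our own, both energies correspond to wavelengths inside the telecom C-band:

```python
# Converting emission energy to wavelength via lambda [um] = 1.23984 / E [eV].
HC = 1.23984  # hc in eV*um

for label, e_ev in [("calculated", 0.806), ("measured PL [16]", 0.810)]:
    print(f"{label}: {e_ev} eV -> {HC / e_ev:.3f} um")
# Both fall in the telecom C-band (roughly 1.530-1.565 um).
```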
In figure 3 the results of the simulation are presented. The most noticeable features (figure 3(a)) are the very low band discontinuities between the QD and the InGaAs CLs, which result in very weak confinement, in particular for heavy holes. Indeed, only two confined states for electrons and holes were found when solving the eigenvalue equation; hence, the simulated optical spectrum of figure 3(b) consists of one strong QD0 emission line, while the emission from excited states (QD1) is almost totally covered by the emission from states in the InGaAs CLs.
To highlight the fact that in this system the control of composition is of paramount importance, we carried out a simulation where the In concentration y of the upper CL was increased to 0.60: the result was that the electron was no longer confined in the QD, resulting in a type-II quantum system. See the supporting information (available online at stacks.iop.org/JPMATER/3/042005/mmedia) for the calculated probability densities. This result confirms the indications of [22], where it was concluded that with x = 0.50 the values of y at which a type-I system can be obtained are limited to 0.40 < y < 0.60. It has been previously discussed that in metamorphic QDs with high x the WL state might not be present [42,43]; therefore, in this particular case, there are reasons to identify the higher-energy levels, where thermally activated carriers might escape, with the InyGa1-yAs CL states. Indeed, the present calculation shows that the second excited states for electrons and holes are located mostly in the capping layer, as shown in the supporting information.
The value of the calculated energy gap of In0.55Ga0.45As, equal to 0.825 eV, is confirmed by the experimental PL emission energy of 0.837 eV reported in [44]. Moreover, it should be mentioned that WL calculations carried out with the same model for lower In compositions in the CLs were validated by similar PL characterization data [45].
Another important consequence of the low degree of confinement is that the energy distance between the ground QD states and the CL levels is 19.3 meV, a value at which thermal escape of confined carriers competes very strongly with radiative recombination as the temperature is raised. For this reason, for higher-temperature operation, advanced designs that increase carrier confinement are needed, such as the use of InAl(Ga)As barriers [41].
It should be noted that single-photon emission at 77 K from metamorphic QDs was reported very recently [16]: there it was argued that the activation energy for a single QD corresponds to the shell spacing, resulting in much lower values than for the ensemble. For a QD ensemble, the activation energy for thermal escape of confined carriers is instead determined by the energy difference between the QD ground levels and the states that can act as escape channels (WL or CL); this energy barrier is given by the sum of the energy differences for electrons and heavy holes.
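The sketch below (ours) contrasts the two notions using the numbers quoted in this section for the metamorphic system. The split of the 19.3 meV total between electrons and heavy holes is hypothetical, since the text gives only the sum.

```python
# Ensemble vs single-QD activation energies (values in meV).
# The 13.0/6.3 split of the 19.3 meV total is hypothetical: the text gives only the sum.

def ensemble_activation_energy(de_electron_mev: float, de_hole_mev: float) -> float:
    """Ensemble barrier = sum of electron and heavy-hole distances to the escape channel."""
    return de_electron_mev + de_hole_mev

print(ensemble_activation_energy(13.0, 6.3))  # -> 19.3 meV, the QD-to-CL distance quoted above

# Single-QD picture of [16]: the barrier is the shell spacing of the relevant carrier,
# i.e. the 12 meV (electrons) and 4 meV (holes) spacings quoted below -- much smaller.
print(12.0, 4.0)
```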
From the point of view of emission efficiency, the oscillator strength of the ground state is very similar to that of the InP-based QDs, making these structures an attractive design for low-temperature single-photon emission, with the technological advantage of growth on GaAs substrates; for example, it is possible to fabricate lattice-matched distributed Bragg reflectors based on high-refractive-index-contrast materials such as AlAs and GaAs.
The presence of excited levels 12 meV (electrons) and 4 meV (holes) above the ground states also allows for the use of quasi-resonant optical pumping schemes.
InAs-GaSb QDs
The method of capping InAs QDs with GaSb layers to redshift their emission towards the telecom C-window has been known for some years, and the results are convincing [46,47]. The peculiar band alignment results in type-II confinement of carriers: thus, a strongly reduced emission efficiency might have discouraged research into the possibility of having quantum light from such nanostructures. However, recently, single QD emission from InAs/GaSb nanostructures grown on silicon was observed, a first step towards the realization of single-photon sources based on this material system [12]. Therefore, despite the expected reduction of recombination efficiency due to the spatial separation between confined electrons and holes, we included InAs/GaSb QDs in this overview of material systems for single-photon sources.
For this calculation we relied on the parameters provided in [12], with the QD having a truncated conical shape with a ratio of 3 between the base and top diameters; base diameters of 23 nm are reported. It is known that the GaAsSb cap layer usually has a non-uniform composition and that the vertical InAs profile is not easy to determine accurately, making the values of QD height and composition somewhat arbitrary [12,48]. As these calculations should be considered a tool to indicate general properties and provide guidelines, we considered a height of 8 nm and a QD consisting of an In0.90Ga0.10As alloy (an average composition taken from published experimental data), as such values resulted in QD emission at 1.55 µm. The cap layer was 6 nm thick and composed of GaAs0.74Sb0.26, in agreement with [12].
The results of the calculation predict an emission energy from ground-state recombination at low temperature of 0.792 eV, while the experimental PL characterization showed a broad emission band centered at 0.820 eV [12]. This slight disagreement can be attributed to the larger uncertainties in the sizes and compositions of these nanostructures.
In figures 4(a) and (b) the band profiles and calculated energy levels along the vertical and horizontal axes are presented: due to strain effects and the staggered band alignment, the minimum of the valence-band potential for holes occurs outside the QD, causing holes to be localized in the GaSb capping layer at the sides of the QD, as shown in figure 4(e). This spatial separation reduces the spontaneous power density of the QD0 emission by about three orders of magnitude compared with the type-I structures described above (see the spectrum in figure 4(c)).
On the other hand, there is a sizeable separation of 52 meV between the electron ground and excited levels, which yields well-separated QD emission lines, as shown in figure 4(c).
In this system the calculated InAs WL energy level is 1.103 eV; the GaSb layer, which has a bandgap of 1.090 eV, can thus be considered the dominant escape channel for thermally activated carriers, and the energy barrier for thermal quenching of the confined-carrier emission is therefore 300 meV.
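The quoted barrier follows directly from these numbers, as the short check below (ours) shows:

```python
# Consistency check: with the GaSb layer (bandgap 1.090 eV) as the dominant escape
# channel and ground-state emission at 0.792 eV, the ~300 meV barrier follows directly.
e_gasb_gap = 1.090   # eV, escape channel (lower of the two candidate channels)
e_wl_inas = 1.103    # eV, calculated InAs WL level (higher, so not the limiting channel)
e_qd0 = 0.792        # eV, calculated ground-state emission

barrier_ev = min(e_gasb_gap, e_wl_inas) - e_qd0
print(f"thermal-escape barrier ~ {barrier_ev * 1e3:.0f} meV")  # ~298 meV, i.e. ~300 meV
```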
InAsN-GaAs QDs
The incorporation of N into InAs QDs was proposed almost 20 years ago as a method to redshift their emission substantially towards 1.55 µm, exploiting the very large bandgap bowing of dilute III-As-N alloys [49,50]. However, the first encouraging results were later hampered by the discovery that even a very small amount of nitrogen strongly reduces the QD emission efficiency, owing to an increased defect density in the nanostructure. Most probably because of this issue, no report of single-photon measurements in the C-band from In(Ga)AsN QDs has yet been published. Nevertheless, research work is still devoted to this challenging material; hence we have included it in these theoretical calculations, in the hope that a breakthrough will soon solve the defect-related problem. For this case, we relied on the results of papers reporting 1.55 µm emission from In(Ga)AsN QDs [51,52], with a nitrogen composition of about 4% taken as an average of the available experimental data.
In these works, precise values of the sizes and compositions of the QDs were not reported; therefore, for the sake of argument, we considered InAsN QDs of truncated conical shape with standard sizes of 20 nm diameter and 6 nm height. The WL state was calculated under the hypothesis of a 1.6 ML (monolayer) layer of the same InAsN material as the QDs. Material parameters for the dilute-N InAsN alloy were calculated on the basis of the most reliable values for the large bowing parameters [23].
The calculated QD energy levels indicate an expected emission energy of 0.786 eV, while the PL emission energy recorded to date for 4% N is 0.810 eV [51]; it should be noted, however, that the actual nitrogen incorporation could be subject to large uncertainties.
The results of the calculations, shown in figure 5, indicate that the holes in the ground states are localized at the side of the QD, reducing their spatial overlap with the electron wavefunctions. This increased separation between the confined particles lowers the recombination probability, which reduces the intrinsic spontaneous power density by almost an order of magnitude compared with InAs/InP QDs or metamorphic QDs, as shown in the optical spectrum of figure 5(b). Thanks to the large band discontinuities with the GaAs CLs, many confined states were found, so many QD emission lines are present in the spectrum. Moreover, a calculation of the expected WL emission gave a value of 1.396 eV, bringing the energy separation between the QD and WL levels to a very high value of 600 meV, the largest of all the systems considered here. This value is very similar to the emission energy of InAsN/GaAs quantum wells, reported to be 1.370 eV for an N composition of 4.4% [53].
Conclusions
To provide the reader with a quick view and to summarize the results, we present in table 1 the calculated quantities relevant for single-photon emission, as discussed in the Introduction:
• spontaneous power density for QD emission at 10 K, in W/eV;
• difference in energy between ground- and excited-state levels for electrons (Ee0 - Ee1) and holes (Eh0 - Eh1), in meV;
• energy distance between QD levels and the channels for thermal escape of carriers, in meV.
We have also included in the table, where available from the literature, reported values of the FSS in µeV, of the g(2)(0) second-order autocorrelation function, and of the zero-phonon line (ZPL) width at low temperature.
For entangled photon emission, the FSS value should be lower than the ZPL emission linewidth. To the best of the authors' knowledge, no measurements or calculations of the FSS for InAs/GaAsSb QDs or InAsN/GaAs QDs have been published to date.
By looking at table 1, some general conclusions on the strengths and weaknesses of the different systems can be drawn: the two systems with the largest oscillator strength and, thus, the highest emission efficiency are InAs/InP and metamorphic InAs/InGaAs, while the Sb-based system, having type-II confinement, has the lowest.
The metamorphic system and the InAs/InAlGaAs system (grown on InP) have the most closely spaced confined levels, with potentially useful consequences for quasi-resonant pumping but possible limitations for higher-temperature operation. This is of particular relevance for the metamorphic InAs/InGaAs system, where the confinement potential is very low, to the point of risking a type-II confinement configuration. Conversely, the largest energy barrier that carriers have to overcome to be captured by bulk or WL states is that of the InAsN QDs.
Until now, FSS has been measured only for InP-based QDs and for metamorphic InAs/InGaAs ones with results hinting at possible entangled photon emission, particularly if techniques to reduce the FSS below the emission linewidth are used, such as strain engineering via piezoelectric actuators [55,56].
From a technological point of view, it is evident that, at this stage, InP-based QDs allow us to obtain the most relevant results and that the research is more advanced in this material system. On the other hand, systems based on GaAs or Si are more interesting from an industrial point of view as these are the most preferred semiconductor platforms for electronic and photonic devices. For GaAs QD systems, however, it should be kept in mind that some design/growth issues still need to be addressed: (i) the degree of confinement for metamorphic QDs needs to be increased, in particular for higher-temperature quantum light emission, (ii) it would be advisable to increase the emission efficiency of type-II GaSb QDs, and (iii) more research effort is needed to obtain good material quality of InAsN QDs.
In any case, considering this review of available QD materials, it can be argued that epitaxial semiconductor QDs represent the best option to obtain an efficient and reliable source of quantum light in the telecom C-band, a fundamental element for the future development of quantum-based light communication systems. | 2020-10-19T18:08:50.057Z | 2020-09-30T00:00:00.000 | {
"year": 2020,
"sha1": "75c5a8c11e0ed846b6e4e0cd3de122db3e5e42b1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2515-7639/abbd36",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ee5992bc2996d518878dd8600ca5b74f29f047be",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
249971164 | pes2o/s2orc | v3-fos-license | Synthesis and in vitro α-glucosidase and cholinesterases inhibitory actions of water-soluble metallophthalocyanines bearing ({6-[3-(diethylamino)phenoxy]hexyl}oxy groups
In this paper, we have prepared peripherally tetra-({6-[3-(diethylamino)phenoxy]hexyl}oxy)-substituted cobalt(II), copper(II) and manganese(III) phthalocyanines (3, 4, 5) and their water-soluble derivatives (3a, 4a, 5a). The in vitro α-glucosidase and cholinesterase inhibitory actions of the water-soluble 3a, 4a and 5a were then examined using spectrophotometric methods. According to the IC50 values, 4a had the highest inhibitory effect on α-glucosidase among the tested compounds, and 4a and 5a had 40-fold stronger inhibitory effects than the positive control. Against the cholinesterases, the compounds showed inhibitory actions significantly stronger than that of galantamine, which was used as a positive control. According to the SI value, 3a inhibited the acetylcholinesterase enzyme selectively. In kinetic studies, 4a was a mixed inhibitor of α-glucosidase, 3a was a competitive inhibitor of AChE, and 4a was a mixed inhibitor of BuChE. The therapeutic potential of these compounds has been demonstrated in vitro, but these data should be supported by further studies.
Experimental
The used materials, equipment, and in vitro inhibition assay on α-glucosidase, AChE, and BuChE are given in supplementary information.
Inhibition study of α-glucosidase
In this study, the in vitro anti-α-glucosidase effects of the compounds 3a, 4a and 5a were investigated by spectrophotometric methods, with acarbose used as a positive control. The results are tabulated as IC50 values in Table 1. All of the compounds showed stronger inhibitory effects on α-glucosidase than acarbose (IC50 = 60.28 ± 3.42 µM). 4a had the best inhibitory action among the tested compounds, with an IC50 value of 1.36 ± 0.01 µM, and 4a and 5a were about 40 times more inhibitory than acarbose. In the literature, Güzel et al. investigated the inhibitory effects of peripherally furan-2-ylmethoxy-substituted copper and manganese phthalocyanines on α-glucosidase [26]; the IC50 values of those compounds were 911.20 µM and 695.37 µM, respectively. According to the IC50 values, the peripherally tetra-({6-[3-(diethylamino)phenoxy]hexyl}oxy)-substituted cobalt(II), copper(II) and manganese(III) phthalocyanines studied here thus display a much stronger inhibitory effect on α-glucosidase than the furan-2-ylmethoxy-substituted compounds.
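For context, the sketch below (a generic illustration, not the authors' protocol) shows how IC50 values of the kind reported in Table 1 are typically extracted: a four-parameter logistic (Hill) curve is fitted to percent-activity versus inhibitor concentration. The data points are synthetic.

```python
# Generic IC50 extraction: fit a four-parameter logistic (Hill) curve to
# percent enzyme activity vs inhibitor concentration. Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])         # uM (hypothetical)
activity = np.array([95.0, 85.0, 62.0, 30.0, 12.0, 5.0])  # % activity (synthetic)

popt, _ = curve_fit(four_pl, conc, activity, p0=[0.0, 100.0, 1.0, 1.0])
print(f"fitted IC50 = {popt[2]:.2f} uM, Hill slope = {popt[3]:.2f}")
```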
In this work, Lineweaver-Burk and Dixon plots were used to evaluate the inhibition type and inhibition constant (Ki) of 4a, which had the strongest inhibitory action on α-glucosidase. The results are given in Table 2 and Figure 4. With increasing substrate and inhibitor concentrations, the Vmax (maximum rate) decreased and the Km increased, indicating that the compound inhibits the enzyme via mixed inhibition.
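The behaviour described above follows from the standard mixed-inhibition rate law; the sketch below (textbook enzyme kinetics, not code from the paper, with hypothetical parameter values) shows how the apparent Vmax decreases while the apparent Km increases when Ki < Ki'.

```python
# Standard mixed-inhibition kinetics underlying the Lineweaver-Burk / Dixon analysis.

def mixed_inhibition_rate(s, i, vmax, km, ki, ki_prime):
    """v = Vmax*[S] / ( Km*(1 + [I]/Ki) + [S]*(1 + [I]/Ki') )."""
    return vmax * s / (km * (1 + i / ki) + s * (1 + i / ki_prime))

def apparent_parameters(vmax, km, i, ki, ki_prime):
    """Apparent Vmax and Km at inhibitor concentration [I]."""
    alpha, alpha_p = 1 + i / ki, 1 + i / ki_prime
    return vmax / alpha_p, km * alpha / alpha_p

# Hypothetical parameters for illustration only:
vmax_app, km_app = apparent_parameters(vmax=100.0, km=2.0, i=1.0, ki=0.5, ki_prime=2.0)
print(f"apparent Vmax = {vmax_app:.1f}, apparent Km = {km_app:.2f}")  # Vmax down, Km up
```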
Inhibition studies of AChE and BuChE
The in vitro anti-ChE actions of all synthesized compounds were investigated to determine their therapeutic potential in AD, with galantamine used as a positive control. The results, shown in Table 3, indicate that the compounds have higher inhibition efficiency on AChE and BuChE than galantamine (IC50 = 30.20 ± 0.58 µM for AChE and 52.04 ± 0.55 µM for BuChE). The IC50 values of 3a, 4a and 5a on AChE were 0.65 ± 0.01 µM, 1.08 ± 0.03 µM and 1.35 ± 0.01 µM, respectively.
In the literature, Arslan reported novel peripherally tetra-chalcone-substituted metal-free, manganese, cobalt and copper phthalocyanines and their inhibitory effects against AChE [34]. Those compounds had weaker inhibitory effects than neostigmine (IC50 = 0.136 ± 0.011 µM), which was used as a positive control; by comparison, 3a showed an inhibitory action on AChE about 46 times stronger than that of galantamine [35]. In our previous study, the ChE inhibitory effects of peripherally or nonperipherally tetra-[4-(9H-carbazol-9-yl)phenoxy]-substituted cobalt and manganese phthalocyanines were investigated, and the IC50 values of those compounds ranged from 7.39 ± 0.25 µM to 58.02 ± 4.94 µM [36]. According to the IC50 values, the compounds used in the present study are therefore more effective than those of our previous study [36].
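As a quick check of our own, the 46-fold figure and the selectivity index reported with Table 3 follow directly from the quoted IC50 values:

```python
# Arithmetic behind two numbers quoted in this section (values taken from the text):
ic50_gal_ache = 30.20   # uM, galantamine on AChE
ic50_3a_ache = 0.65     # uM, 3a on AChE
ic50_3a_buche = 3.06    # uM, 3a on BuChE (Table 3)

print(f"fold improvement of 3a vs galantamine on AChE: {ic50_gal_ache / ic50_3a_ache:.0f}x")  # ~46x
print(f"SI(3a) = IC50(BuChE)/IC50(AChE) = {ic50_3a_buche / ic50_3a_ache:.2f}")  # ~4.71, vs reported 4.70
```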
In the kinetic analysis, 3a showed competitive inhibition of AChE, with Vmax unchanged and Km increased, according to the Lineweaver-Burk plot (Figure 5a). On the other hand, 4a was a mixed inhibitor of BuChE, as both the Vmax and Km values changed (Figure 6a). The Ki values were determined as 0.51 ± 0.04 µM for 3a and 0.05 ± 0.01 µM for 4a (Figures 5b and 6b).
Conclusion
In this work, we have synthesized and characterized peripherally tetra-({6-[3-(diethylamino)phenoxy]hexyl}oxy)-substituted metallophthalocyanines (3, 4, 5) and their water-soluble derivatives (3a, 4a, 5a), and we have reported the α-glucosidase and ChE inhibitory actions of 3a, 4a and 5a as measured by spectrophotometric methods. All compounds had significant inhibitory properties against α-glucosidase and the ChEs. According to the IC50 values, 4a had the highest inhibitory effect on α-glucosidase among the tested compounds, and 4a and 5a showed 40-fold stronger inhibitory effects than acarbose. In the ChE studies, the compounds had significantly stronger inhibitory actions than galantamine (p < 0.0001), and 3a inhibited the AChE enzyme selectively according to the SI value. In kinetic studies, 4a was a mixed inhibitor of α-glucosidase, 3a was a competitive inhibitor of AChE, and 4a was a mixed inhibitor of BuChE. Although these in vitro data indicate that the compounds have therapeutic potential against DM and AD, they should be supported by further studies.
Acknowledgment
This study was not supported by any organization.
Table 1. IC50 values of the compounds on α-glucosidase.
Table 2. Kinetic parameters of the compounds on AChE, BuChE, and α-glucosidase.
Table 3. IC50 and SI values of the compounds on AChE and BuChE.
In addition, the compounds displayed BuChE inhibitory properties significantly stronger than that of galantamine (p < 0.0001). 4a had the best BuChE inhibitory action, with an IC50 value of 0.29 ± 0.01 µM, followed by 3a with an IC50 value of 3.06 ± 0.02 µM. According to the SI (selectivity index, IC50(BuChE)/IC50(AChE)) values, 3a inhibited AChE selectively (SI = 4.70). The difference in the results is thought to be due to the metal effect: Frasco et al. investigated the inhibitory effects of different metals (copper, nickel, zinc, cadmium and mercury) on AChE and showed that copper, zinc, cadmium and mercury inhibited the enzyme [34]. | 2022-06-24T15:09:51.892Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "0d8737f23e3330fdb30843f12b70e597173e1256",
"oa_license": "CCBY",
"oa_url": "https://journals.tubitak.gov.tr/cgi/viewcontent.cgi?article=3368&context=chem",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "980b6be00c8022f7044fae0ba308cd1efc040913",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234848915 | pes2o/s2orc | v3-fos-license | Productivity of firms using relief policies during the COVID-19 crisis
This study, based on an original survey of Japanese firms, analyzes the productivity of firms that used relief policy measures during the COVID-19 pandemic. The productivity of firms using these relief measures was lower than that of non-user firms even prior to the pandemic, suggesting that it is largely inefficient firms that have been seriously affected. The result cautions against excessive and prolonged relief policies.
Introduction
The COVID-19 pandemic has had a serious impact on the global economy. Many countries have adopted emergency measures to mitigate the impact of the pandemic on business activity; Japan is no exception. To relieve firms that are affected seriously, the Japanese government enacted various emergency measures, such as financial assistance from governmental financial agencies, the Subsidy Program for Sustaining Businesses (hereafter ''sustainability subsidy''), and the Employment Adjustment Assistance Subsidy (hereafter ''employment subsidy'').
Financial assistance programs offering low- or zero-interest loans target small- and medium-sized firms experiencing pandemic-related sales declines. The sustainability subsidy began in May 2020, delivering a maximum of two million yen to small- and medium-sized firms with a drop in sales of more than 50%. The employment subsidy is a measure that supports firms' efforts to maintain employment and has been in place since long before the COVID-19 crisis. However, the subsidization rate was raised significantly in April 2020. Specifically, for small- and medium-sized firms with sales that declined by more than 5%, the subsidization rate is set to 100% of the maximum. Even for large firms, the maximum subsidy rate is raised to 75%. If a firm that could have survived goes bankrupt or goes out of business voluntarily because of a temporary shock, the sunk investments will be lost. For this reason, policies that mitigate the impacts of temporary shocks can be justified. However, it is necessary to acknowledge the risk that such policies may weaken the resource reallocation mechanism and have a negative impact on medium- to long-term economic growth potential.
The cleansing effect of recessions, that is, the increased productivity that arises from the exit of unproductive firms from the market during recessions, has been pointed out in the literature (e.g., Caballero and Hammour, 1994). Generally, industry-level or economy-wide productivity growth can be broken down into the within-effect and the reallocation effect. Empirical studies found stronger reallocation effects during recessions (e.g., Baily et al., 2001;Foster et al., 2001;Disney et al., 2003;Carreira and Teixeira, 2008).
However, recent studies indicate weak reallocation effects after the Great Recession. Foster et al. (2016), for example, found that the intensity and productivity enhancing effects of reallocation were small in the Great Recession. Using a sample of manufacturing firms in European countries, Landini (2020) indicated that the market selection mechanism based on productivity differentials was weak during the Great Recession. The negative impacts of surviving inefficient firms on the overall economy are referred to as a problem of ''Zombie'' firms (e.g., Caballero et al., 2008;Kwon et al., 2015;Imai, 2016). Malfunction of the financial market -banks continuing to keep credit flowing to otherwise insolvent borrowers -is often cited as a primary cause of weak reallocation mechanisms during the long stagnation in Japan. McGowan et al. (2018) applied the methodology of Caballero et al. (2008) to nine OECD countries, indicating that the prevalence of Zombie firms has risen since the mid-2000s, and the increasing survival of these unproductive firms congests markets and constrains the growth of more productive firms.
The lesson learned from these studies is that firm relief measures during the recent COVID-19 crisis might serve to suppress the function of cleansing or reallocation effects and exert a negative impact on the medium-to long-term productivity of the economy. Barrero et al. (2020) analyzed the reallocation of employment and sales under the COVID-19 pandemic in the U.S. and cautioned against the excessive use of policies that inhibit resource reallocation.
Against this background, this study analyzes the productivity of firms that used relief policies during the COVID-19 pandemic. The result indicates that the productivity of firms that benefited from these relief measures tended to be lower than that of non-user firms prior to the COVID-19 crisis. The policy implication is that relief measures under the recent COVID-19 crisis should be temporary and that such policies should be modified to enable the smooth reallocation of resources.
Survey design and method of analysis
The data used in this study are from the ''Survey of Corporate Management and Economic Policy'' (SCMEP). The SCMEP is an original firm survey conducted by the Research Institute of Economy, Trade and Industry from late August to early September 2020. The survey questionnaire was sent to 2498 Japanese firms that had responded to the previous SCMEP in early 2019. As the sample of the SCMEP was selected from the Basic Survey of Japanese Business Structure and Activities (BSJBSA, conducted by the Ministry of Economy, Trade and Industry), the firms chosen to take part in the SCMEP had at least 50 employees, capital of at least 30 million yen, and belonged to manufacturing, wholesale, retail, and service industries. The number of firms that responded to the current SCMEP is 1579 (a response rate of about 63%).
The question on the use of relief policy measures was: ''Which of the following policies that have been introduced due to COVID-19 has your firm used or would like to use in the future?'' The specific policies included were (1) financial assistance from governmental financial agencies, (2) the sustainability subsidy, and (3) the employment subsidy.
The firm characteristics available from the SCMEP are limited, but more information can be obtained by linking the data with the BSJBSA. In this study, we calculate labor productivity (LP) and total factor productivity (TFP) for fiscal year 2018 (i.e., April 2018-March 2019) from the BSJBSA and analyze the relationship between the use of relief policies and productivity prior to the COVID-19 crisis. In addition, the three-digit industry classification is taken from the BSJBSA. 2 We calculate firms' LP as the firms' value-added divided by the total hours worked and express the value in logarithmic form. We calculate TFP as a cost-share-based index number using valueadded, the book value of capital, total hours, and the cost shares of capital and labor. The index number is the relative productivity level compared with a hypothetical representative firm of the same three-digit industry. Using the data set, we compare the productivity of policy users and non-users before the onset of the COVID-19 crisis.
2 Three-digit industry classification is the finest breakdown in the BSJBSA.
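As an illustration of the productivity measures described above, the sketch below is our reconstruction with hypothetical column names; it assumes constant returns to scale (so the capital share is one minus the labor share) and computes log LP and a cost-share-based TFP index relative to the three-digit-industry representative firm. The authors' exact construction may differ.

```python
# A common cost-share-based TFP index consistent with the description above
# (relative to a hypothetical representative firm of the same 3-digit industry).
# Column names ("ind3", "value_added", "hours", "capital", "labor_share") are illustrative.
import numpy as np
import pandas as pd

def tfp_index(df: pd.DataFrame) -> pd.Series:
    """ln TFP_i = (lnVA_i - lnVA_bar) - s_L*(lnL_i - lnL_bar) - s_K*(lnK_i - lnK_bar),
    where bars are industry means and s_L, s_K = 1 - s_L are average cost shares."""
    g = df.groupby("ind3")
    dev = lambda col: np.log(df[col]) - g[col].transform(lambda x: np.log(x).mean())
    s_l = g["labor_share"].transform("mean")  # industry-average labor cost share
    return dev("value_added") - s_l * dev("hours") - (1 - s_l) * dev("capital")

# Usage (hypothetical firm-level data frame with BSJBSA-style variables):
# df["tfp"] = tfp_index(df)
# df["lp"] = np.log(df["value_added"] / df["hours"])  # labor productivity in logs
```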
Results
The percentages of firms that used relief policies are (1) financial assistance (25.0%), (2) the sustainability subsidy (19.3%), and (3) the employment subsidy (44.1%). By firm size, the percentages of policy users are higher among small- and medium-sized firms (i.e., firms capitalized at 100 million yen or less) than among large firms across all policies. As many policies are designed to place importance on small- and medium-sized firms, this is a natural result.
Columns (1) and (2) of Table 1 present the mean productivity of firms that used relief policies relative to those that did not, together with t-test results. For all three policies, the productivity of firms using relief policies is lower, and the differences are statistically significant at the 1% level irrespective of the productivity measure. Quantitatively, the TFP of firms using financial assistance, the sustainability subsidy, and the employment subsidy is 18.9, 11.9, and 12.4 log points lower, respectively. In short, the productivity of firms that used relief policies was lower than that of non-users even before the onset of the COVID-19 crisis. Columns (3) and (4) indicate the difference in mean firm size: firms using relief policies are generally smaller, with the exception of the number of employees for the employment subsidy. Table 2 reports the OLS regression coefficients for policy users, with firm size and three-digit industry controlled. The coefficients for the use of relief policies are all negative and statistically significant at the 1% level. In the case of LP, the absolute sizes of the coefficients are slightly smaller than the figures presented in Table 1, but they are almost unchanged in the case of TFP. Overall, these results indicate that the various support measures may have bailed out not only firms whose business performance deteriorated suddenly due to the COVID-19 pandemic but also firms that had low productivity prior to it.
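The sketch below is our reconstruction (with hypothetical column names) of the two comparisons reported in Tables 1 and 2: a difference-in-means t-test, then an OLS regression of productivity on a policy-user dummy with firm size and three-digit-industry controls.

```python
# Difference-in-means and OLS comparisons of the kind reported in Tables 1 and 2.
# Column names ("tfp", "lnemp", "ind3", policy dummies) are illustrative.
from scipy import stats
import statsmodels.formula.api as smf

def compare_users(df, outcome: str, user_flag: str):
    """Welch t-test of the outcome between policy users and non-users."""
    users = df.loc[df[user_flag] == 1, outcome]
    nonusers = df.loc[df[user_flag] == 0, outcome]
    t, p = stats.ttest_ind(users, nonusers, equal_var=False)
    print(f"{outcome}: mean gap = {users.mean() - nonusers.mean():.3f}, p = {p:.4f}")

def regress_users(df, outcome: str, user_flag: str):
    """OLS with firm size and 3-digit industry controls, robust standard errors."""
    model = smf.ols(f"{outcome} ~ {user_flag} + lnemp + C(ind3)", data=df)
    return model.fit(cov_type="HC1")

# Usage (hypothetical):
# compare_users(df, "tfp", "employment_subsidy")
# print(regress_users(df, "tfp", "employment_subsidy").params["employment_subsidy"])
```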
Conclusion
Using data from a survey of Japanese firms, this study indicates that the productivity of firms that use relief policies is lower before the onset of the COVID-19 pandemic than non-user firms. Temporary relief policies to support affected firms can be justified, but the results of this study caution against the potential negative side effects of excessive or overly prolonged relief policies from the viewpoint of long-term productivity of the economy. As it will take some time to end the COVID-19 pandemic and the industrial structure after the crisis will undoubtedly be different from that before the pandemic, a gradual downsizing of the relief policies and restructuring policy measures toward supporting growing sectors are desirable approaches.
While this study presents unique evidence on Japanese firms' use of relief policies during the COVID-19 pandemic, we observe the productivity distribution of firms only before the COVID-19 crisis. Evaluating the ex post performance of firms that used relief policies and the productivity dynamics of the economy is left for future research.
Table 1. Mean productivity and size of firms using relief policies.
Table 2. Regression results on the productivity of firms using relief policies.
Notes: OLS estimations with robust standard errors in parentheses. *** p < 0.01. Both LP and TFP, expressed in logarithms, are for fiscal year 2018. Firm size is the number of employees (expressed in logarithm).
"year": 2021,
"sha1": "7711c852b169d93cf5b0e9ae85351365f6a2ec0e",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.econlet.2021.109869",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b711f316a8191dc7351726bdf2fcd22d0cda5cf8",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |