A novel tumor 4-driver gene signature for the prognosis of hepatocellular carcinoma
Background: Hepatocellular carcinoma (HCC), the main type of liver cancer, is the second most lethal tumor worldwide, with a 5-year survival rate of only 18%. Driver genes facilitate cancer cell growth and spread in the tumor microenvironment. Here, a comprehensive driver gene signature for the prognosis of HCC was developed.
Methods: HCC driver genes were analyzed comprehensively to develop a better prognostic signature. The dataset of HCC patients included mRNA sequencing data and clinical information from the TCGA, ICGC, and Guangxi Medical University Cancer Hospital cohorts. First, LASSO regression was performed on differentially expressed driver genes in the TCGA cohort to develop a prognostic signature. The robustness of the signature was then assessed using survival and time-dependent ROC curves. Independent predictors were determined using univariate and multivariate Cox regression analyses, and stepwise multivariate Cox regression was employed to identify significant variables for the construction of a nomogram predicting survival rates. Functional analyses were performed by Spearman correlation, enrichment analysis (GO, KEGG, and GSEA), and immune analysis (ssGSEA and xCell).
Results: A 4-driver gene signature (CLTC, DNMT3A, GMPS, and NRAS) was successfully constructed and showed excellent predictive efficiency in all three cohorts. The nomogram, which integrated clinical information and the risk score, showed high predictive accuracy for the 1-, 3-, and 5-year prognoses of HCC patients. Enrichment analysis revealed that the driver genes were involved in regulating oncogenic processes, including the cell cycle and metabolic pathways, associated with the progression of HCC. ssGSEA and xCell showed differences in immune infiltration and the immune microenvironment between the two risk groups.
Conclusion: The 4-driver gene signature is closely associated with survival prediction in HCC and is expected to provide new insights into targeted therapy for HCC patients.
Introduction
Primary liver cancer is the sixth most diagnosed cancer and the third most common cause of cancer mortality worldwide as of 2020, leading to more than 1.3 million deaths yearly [1]. Its survival rate is only 18%, being the lowest of all cancer types [2]. HCC accounts for approximately 90% of primary liver cancers [3]. The survival rate of HCC patients remains low after surgical resection and treatment because of the relapse that follows [4].
Thus, it is important to continuously pursue advances in screening, diagnosis, and treatment strategies to improve the prognosis of this malignancy [5]. Early diagnosis is a promising strategy to reduce HCC mortality, as it can greatly contribute to detecting HCC at an early stage and applying effective treatments [6]. However, there is still no reliable method for the early diagnosis and accurate prognosis prediction of HCC [7]. Therefore, predicting the prognosis of patients with HCC is another way to improve survival, because it can help clinicians choose more suitable treatment options for patients [8].
Nevertheless, prognostic prediction in HCC patients is highly complex [9]. On the one hand, the complex etiological landscape adds to the difficulty created by the molecular and biological heterogeneity of cancer cells [10]. On the other hand, accurate prognostication of HCC patients requires a comprehensive understanding of the entire natural history of the disease, coupled with the identification of optimal survival predictors at each stage. Regrettably, these data are only partially attainable [11].
Specific biomarkers for auxiliary examination are tremendously helpful for the prognostic evaluation of HCC patients in clinical practice [12]. In recent years, many biomarkers for the prognosis of HCC have been reported. For example, CTNNB1, which is related to metabolic reprogramming, can serve as a prognostic biomarker for HCC [13]. CDT1 is an essential factor in the initiation of DNA replication, playing a pivotal role in regulating eukaryotic cell cycle progression and replication; its expression in HCC was associated with potential diagnostic and prognostic significance [14]. APEX1 has also been shown to be a potential diagnostic and prognostic biomarker in HCC through the analysis of multiple databases [15]. Despite the recent noteworthy improvements in the development of new biomarkers for HCC, biomarkers able to predict prognosis or to identify subgroups of patients who would benefit from clinical management are still lacking [16,17]. Therefore, it is urgent to identify specific new biomarkers to evaluate the prognosis of HCC and support its clinical management.
In some recent studies, driver genes have been found to serve as important genomic biomarkers for diagnosis and prognosis [18,19]. Various studies have shown that driver genes play a crucial role in tumorigenesis, including in HCC, lung adenocarcinoma, bladder cancer, prostate cancer, and other malignancies [20][21][22][23]. Cancer driver genes are a set of mutated genes that can drive tumorigenesis [24]. Driver mutations occurring in driver genes are causally implicated in oncogenesis; they confer a growth advantage on cancer cells and affect the homeostatic regulation of a set of key cellular functions [25,26]. Through clonal expansion, these mutations become present in all subsequent cancer cells [27]. Although researchers have confirmed the presence of driver genes in HCC by bioinformatics, the prognostic significance of driver genes as a whole in liver cancer remains unclear [26,28].
In this study, the overall survival (OS) rate of HCC patients was systematically associated with driver genes, and a signature was established. The robustness of the signature was externally validated by the ICGC cohort and Guangxi Medical University Cancer Hospital cohort. The signature was constructed to help clarify the underlying mechanism of HCC and accurately predict the prognosis for patients with HCC. At the same time, the association between the prognostic signature and the immune cell infiltration of HCC was explored, which makes a valuable contribution to treatment strategies.
RNA-seq data collection from public databases
The transcriptome data and related clinical details of HCC were downloaded from the TCGA-LIHC project of The Cancer Genome Atlas (TCGA; v31.0) via the Genomic Data Commons (GDC) Data Portal (https://portal.gdc.cancer.gov). Additional mRNA expression data and clinical information on 230 HCC samples, the ICGC-LIRI-JP cohort, were obtained from the International Cancer Genome Consortium (ICGC) portal (http://dcc.icgc.org). Simple nucleotide variation (SNV) data were collected with the R package "TCGAbiolinks" [29].
Clinical information collection from Guangxi Medical University Cancer Hospital
We collected paired HCC and normal tissues from 116 HCC patients after surgical procedures at Guangxi Medical University Cancer Hospital in recent years. Transcriptome expression data for each sample were obtained by tissue sequencing. In addition to the tissue samples, comprehensive clinical and pathological data were obtained from patients, encompassing age, gender, tumor number and size, Barcelona Clinic Liver Cancer (BCLC) stage, Edmondson grade, recurrence status, and survival outcome. All experimental procedures adhered to relevant guidelines and regulations.
Construction of the HCC driver gene set
To construct a comprehensive set of HCC driver genes, driver genes were extracted from the IntOGen platform (cancer driver mutations in hepatic cancer) and a previous study [26,28].
Identification of differentially expressed driver genes
The expression of 78 driver genes in cancer and normal tissues was analyzed by the Wilcoxon test in the TCGA database (n = 424). Meanwhile, the degree of differential expression of each driver gene was presented in box plots. The driver genes with significant differential expression were selected as candidate driver genes.
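As a minimal sketch of this step (the object names `expr`, a genes × samples matrix, and the logical indicator `is_tumor` are assumptions, not the authors' actual code), the per-gene Wilcoxon test could be run as:

```r
# Hedged sketch of the per-gene Wilcoxon rank-sum test between tumor and
# normal samples; 'expr' and 'is_tumor' are assumed object names.
de_wilcox <- function(expr, is_tumor) {
  p <- apply(expr, 1, function(g) wilcox.test(g[is_tumor], g[!is_tumor])$p.value)
  data.frame(gene = rownames(expr), p.value = p)
}
# Genes with p < 0.05 would be kept as candidate driver genes, stratified
# into p < 0.001, 0.001-0.01 and 0.01-0.05 bands as in the Results.
```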
Development of the 4-driver gene signature by LASSO Cox regression analysis
Survival-associated candidate driver genes were identified by univariate Cox regression analysis of OS, and P values were adjusted using Benjamini and Hochberg (BH) correction.
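A hedged sketch of this univariate screen, assuming the "survival" package (the package choice for this step is not stated in the text) and hypothetical objects `expr`, `os_time`, and `os_event`:

```r
# Univariate Cox screen with BH-adjusted p-values (assumed object names).
library(survival)
uni <- t(sapply(rownames(expr), function(g) {
  s <- summary(coxph(Surv(os_time, os_event) ~ expr[g, ]))
  c(HR = s$conf.int[1, "exp(coef)"], p = s$coefficients[1, "Pr(>|z|)"])
}))
uni <- as.data.frame(uni)
uni$p.adj <- p.adjust(uni$p, method = "BH")  # Benjamini-Hochberg correction
```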
Thereafter, least absolute shrinkage and selection operator (LASSO) Cox regression was used to select prognosis-related driver genes through the R package "glmnet", with the normalized expression data considered independent variables and the corresponding OS time and patient survival status (dead or alive) viewed as response variables. The penalty parameter (λ) of the prognostic signature was determined using tenfold cross-validation following the minimum criteria. The normalized expression value of each prognostic driver gene was combined with its LASSO Cox regression coefficient to calculate the risk score for each patient: risk score = Σ_i (Exp_i × β_i), where Exp_i represents the expression value of each driver gene in the signature and β_i denotes its regression coefficient obtained through Cox regression analysis. The signature was then evaluated in the TCGA cohort: Kaplan-Meier (K-M) survival analysis was carried out using the "ggsurvplot" function of the R "survminer" package, and time-dependent receiver operating characteristic (ROC) curves were generated with the "survivalROC" package, allowing the prognostic efficacy of the signature to be estimated.
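Since "glmnet" is named above, the LASSO step and the risk-score formula could be sketched as follows; the object `candidates` (genes passing the univariate screen) is an assumed name:

```r
# LASSO Cox with tenfold cross-validation, as described in the text.
library(glmnet)
x <- t(expr[candidates, ])                             # samples x genes
y <- cbind(time = os_time, status = os_event)          # Cox response
cvfit <- cv.glmnet(x, y, family = "cox", nfolds = 10)  # tenfold CV
beta  <- coef(cvfit, s = "lambda.min")                 # minimum criteria
keep  <- rownames(beta)[as.vector(beta != 0)]          # genes kept by the penalty
risk_score <- as.vector(x[, keep] %*% beta[keep, ])    # sum_i Exp_i * beta_i
group <- ifelse(risk_score > median(risk_score), "high", "low")
```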
Independent validation of the 4-driver gene signature
To exclude the effects of differences between data samples and clinical variations, 230 HCC samples from the ICGC cohort and 116 HCC patients undergoing long-term follow-up in the Guangxi Medical University Cancer Hospital cohort were used as validation cohorts. To evaluate the robustness of the signature, the two independent validation cohorts were analyzed through K-M survival curves and time-dependent ROC curves. K-M survival curves were compared between the high- and low-risk groups to assess survival rates. The area under the curve (AUC) of the time-dependent ROC curve was used to estimate the prediction efficiency of the signature.
Construction and validation of a predictive nomogram
Nomograms are widely utilized to predict the prognoses of HCC patients, primarily because they condense statistical prediction models into a single numerical estimate of the probability of overall survival tailored to an individual patient's profile. Stepwise multivariate Cox regression analysis was used to select the nomogram variables from the clinical information and the risk score. Calibration plots and the C-index, obtained through the relevant R package and its nomogram function, were utilized to assess the predictive accuracy of the nomogram. The consistency of the model can be assessed by the degree of overlap between the calibration curve and the reference line.
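A hedged sketch with the "rms" package (the text names only a nomogram function; `rms::nomogram` is one common implementation, and `dat` with the risk score, T stage, age, and OS follow-up is an assumed data frame):

```r
# Nomogram and calibration sketch; variable names are assumptions.
library(rms)
dd <- datadist(dat); options(datadist = "dd")
fit <- cph(Surv(os_time, os_event) ~ risk_score + T_stage + age, data = dat,
           x = TRUE, y = TRUE, surv = TRUE, time.inc = 3 * 365)
surv <- Survival(fit)
nom <- nomogram(fit,
                fun = list(function(x) surv(1 * 365, x),
                           function(x) surv(3 * 365, x),
                           function(x) surv(5 * 365, x)),
                funlabel = c("1-year OS", "3-year OS", "5-year OS"))
plot(nom)
cal <- calibrate(fit, u = 3 * 365, cmethod = "KM", m = 50, B = 200)
plot(cal)  # proximity to the 45-degree reference line reflects consistency
```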
Differentially expressed gene analysis in high- and low-risk groups
HCC patients were divided into high- and low-risk groups by the median risk score, after which "DESeq2" was used to identify differentially expressed genes (DEGs) between the two clusters in the TCGA cohort. Genes with |log2FoldChange| > 1.5 and adjusted P value < 0.05 were considered statistically significant DEGs.
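Since "DESeq2" is named, this step could be sketched as below; `counts` (raw count matrix) and `risk_group` (median-split labels) are assumed object names:

```r
# DEG call between risk groups with DESeq2, using the stated thresholds.
library(DESeq2)
dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData  = data.frame(risk = factor(risk_group)),
                              design   = ~ risk)
dds  <- DESeq(dds)
res  <- results(dds, contrast = c("risk", "high", "low"))
degs <- subset(as.data.frame(res), abs(log2FoldChange) > 1.5 & padj < 0.05)
```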
Functional analysis based on the prognostic signature
Spearman correlation analysis was used to compare the correlations among the expression of the 4 driver genes; the obtained coefficient represents the degree of correlation between the linear changes of two continuous variables. Enrichment analysis was performed to find important biological pathways among the DEGs between the high- and low-risk groups, involving the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO). In addition, gene set enrichment analysis (GSEA) was carried out with the "clusterProfiler" package to determine molecular pathways and consistent differences between the groups.
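A sketch of these analyses; "clusterProfiler" is named in the text, while the input objects (`expr4`, the 4-gene expression matrix; `deg_entrez`, Entrez IDs of the DEGs; `ranked_fc`, a named log2FC vector sorted in decreasing order) are assumptions:

```r
# Spearman correlations plus GO/KEGG over-representation and GSEA.
library(clusterProfiler)
library(org.Hs.eg.db)
rho   <- cor(t(expr4), method = "spearman")      # 4 x 4 gene correlations
ego   <- enrichGO(deg_entrez, OrgDb = org.Hs.eg.db, ont = "BP",
                  pAdjustMethod = "BH", pvalueCutoff = 0.05)
ekegg <- enrichKEGG(deg_entrez, organism = "hsa", pvalueCutoff = 0.05)
gsea  <- gseGO(geneList = ranked_fc, OrgDb = org.Hs.eg.db, ont = "ALL")
```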
Single nucleotide variant analysis and immune infiltration assessment
SNV data, stored in the Mutation Annotation Format (MAF) in the TCGA database, were used to depict the mutation burden landscape of HCC patients as waterfall plots with the R package "maftools". The ESTIMATE algorithm was utilized to compute the immune score, stromal score, ESTIMATE score, and tumor purity based on the relative proportions of immune and stromal cells. Single-sample GSEA (ssGSEA) and xCell were utilized to analyze the immune landscape and to calculate the activity of immune cells and immune infiltration between the high- and low-risk groups. The marker genes of the 28 immune cell types were obtained from previous studies via the MSigDB database (https://www.gsea-msigdb.org/gsea/msigdb/). P < 0.05 was considered statistically significant.
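A sketch of the mutation and ssGSEA steps; "maftools" and ssGSEA are named in the text, while the MAF file name, the `immune_sets_28` gene-set list, and the classic GSVA interface are assumptions:

```r
# Waterfall plot of the mutation landscape per risk group (file name illustrative).
library(maftools)
maf_high <- read.maf(maf = "LIHC_high_risk.maf")
oncoplot(maf = maf_high, top = 20)   # top 20 mutated genes, as in the Results

# ssGSEA scores for the 28 immune signatures (classic GSVA call assumed).
library(GSVA)
ssgsea_scores <- gsva(expr, immune_sets_28, method = "ssgsea")
```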
Statistical analysis
All statistical analyses were conducted in R (4.1.2), and the corresponding figures were generated. Box plots were analyzed with the Wilcoxon test. Spearman's coefficient was utilized to compute the correlations among the four driver genes. Univariate and stepwise multivariate Cox analyses were used to evaluate the survival status of patients in the TCGA cohort and the clinical characteristics affecting it. Survival curves were constructed by the log-rank test using the K-M method. The ssGSEA scores were subjected to the Mann-Whitney test, and adjusted p-values were calculated using the BH method. All hypothesis tests were two-sided, and a p value < 0.05 was considered significant.
Identification of DEGs of driver genes in HCC
A gene set consisting of 78 HCC driver genes was constructed (Supplementary Table 1). Based on the TCGA database, the driver gene mRNA expression between normal and tumor samples of HCC was analyzed using Wilcoxon tests. The analysis showed that there were 69 differentially expressed driver genes: 56 driver genes (p < 0.001, Fig. 1A), 10 driver genes (0.001 < p < 0.01, Fig. 1B), and 3 driver genes (0.01 < p < 0.05, Fig. 1C). Additionally, 9 driver genes showed no differential expression (Fig. 1D).
Development of a prognostic-related risk signature
Among the 56 (p < 0.001) differentially expressed driver genes, 7 candidate driver genes were significantly related to the OS rate of HCC patients in the TCGA cohort by univariate Cox analysis (using a cutoff threshold of Cox P < 0.001) (Fig. 2A). These 7 genes were then used to build a prognostic signature by LASSO Cox regression analysis, with the penalty parameter tuned via 10-fold cross-validation to further narrow the candidate driver genes. A 4-driver gene signature involving NRAS, DNMT3A, CLTC, and GMPS was thus identified. The risk score of each HCC patient in the TCGA cohort (n = 231) was calculated using the following formula: risk score = 0.0317 × exp(NRAS) + 0.0028 × exp(CLTC) + 0.0426 × exp(DNMT3A) + 0.1256 × exp(GMPS) (Fig. 2B). The patients in the TCGA cohort were divided into high-risk (n = 115) and low-risk (n = 116) groups according to the median risk score (Fig. 2C). The heatmap showed high expression of DNMT3A, GMPS, NRAS, and CLTC in the high-risk group, with the opposite pattern in the low-risk group (Fig. 2D). At the same time, the K-M survival curves indicated that the high-risk group had a significantly poorer OS rate than the low-risk group (p < 0.0001) (Fig. 2E). In addition, time-dependent ROC curves were used to evaluate the prediction efficiency of the 4-driver gene signature. The areas under the ROC curve (AUCs) for 1-, 3-, and 5-year survival were 0.75, 0.71, and 0.70, respectively (Fig. 2F). This supports the accurate prediction of OS in HCC patients by the 4-driver gene signature.
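The evaluation just described could be sketched as follows; "survminer" and "survivalROC" are named in the Methods, and the 365-day year used for the 1/3/5-year horizons is an assumption:

```r
# K-M comparison of the risk groups and time-dependent ROC at 1 year.
library(survival); library(survminer); library(survivalROC)
df  <- data.frame(os_time, os_event, group)
fit <- survfit(Surv(os_time, os_event) ~ group, data = df)
ggsurvplot(fit, data = df, pval = TRUE)            # log-rank p on the K-M curves
roc1 <- survivalROC(Stime = os_time, status = os_event, marker = risk_score,
                    predict.time = 365, method = "KM")
roc1$AUC                                           # reported: 0.75 / 0.71 / 0.70
```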
Further analysis of the K-M survival curves in the TCGA cohort showed that the high-expression groups of CLTC, DNMT3A, GMPS, and NRAS had worse OS than the corresponding low-expression groups (Fig. 3A, B, C, and D). The 4-driver gene signature (Fig. 2E) distinguished the survival probability of HCC patients more significantly than each of the 4 genes individually (Fig. 3A, B, C, and D).
Validation of the 4-driver gene signature in different independent cohorts
The median risk scores of the two independent cohorts (Supplementary Table 2) were calculated using the same formula as in the TCGA cohort. According to the median risk score, these patients were also classified into high- or low-risk groups (Fig. 4A and B). In both the ICGC and Guangxi Medical University Cancer Hospital cohorts, the high-risk group had worse survival outcomes, as illustrated by the K-M survival curves (p < 0.0001 and p = 0.0078, respectively) (Fig. 4C, E).
Construction and validation of a nomogram integrating independent predictive factors
Univariate and multivariate Cox analyses showed that the risk score could be an independent prognostic factor for HCC (Fig. 5A).
The variables for the nomogram were chosen using stepwise Cox regression based on the risk score and the clinical information of 231 HCC samples from TCGA. Ultimately, risk score (P < 0.01, HR = 1.905), T stage (P < 0.001, HR = 2.218), and age (P = 0.58, HR = 1.735) were included in the nomogram for prognosis prediction (Fig. 5B).
A nomogram incorporating these predictive factors was developed to quantify the probability of survival in HCC patients (Fig. 5C). To understand the performance of the model, its discrimination and calibration were evaluated. The time-dependent ROC curves for the 1-, 3-, and 5-year prognoses were analyzed, and the AUCs were 0.80, 0.78, and 0.76, respectively, in the TCGA cohort (Fig. 5D). The model thus showed better discriminative ability than the 4-driver gene signature alone (1-year AUC: 0.80 vs. 0.75; 3-year AUC: 0.78 vs. 0.71; 5-year AUC: 0.76 vs. 0.70). The nomogram's calibration curves closely followed the 45° lines (Fig. 5E), and a C-index of 0.73 indicated high consistency between predicted and actual outcomes.
Functional analysis of subgroups in the TCGA cohort
First, the relationships among the 4 driver genes were investigated, and positive correlations were observed among their expression levels (Fig. 6A).
Then, the volcano plot displayed 727 DEGs between the high- and low-risk groups with thresholds of p < 0.05 and |log2(fold change)| > 1.5 (Fig. 6B). The biological characteristics of the DEGs were further investigated through function and pathway annotations. KEGG analysis showed that the key pathways correlated with tyrosine metabolism, cholesterol metabolism, protein digestion and absorption, and sphingolipid metabolism (P < 0.05) (Fig. 6C). GO analysis indicated that these DEGs were mainly involved in the regulation of cell cycle processes and lipid catabolic processes (Fig. 6D), suggesting that the differences in survival outcomes between subgroups may be related to the patients' metabolic status. GSEA implied that the high-risk group was enriched in pathways such as DNA-binding transcription factor activity, double-stranded DNA binding, and other cell cycle processes (Fig. 6E). Interestingly, in line with the GO enrichment results, the low-risk group was enriched in metabolic processes such as cellular amino acid catabolic processes and fatty acid metabolic processes (Fig. 6F).
Mutation landscape and differences in immune cell infiltration
Single nucleotide variants for the two risk groups were visualized by waterfall plots (Fig. 7A and B), showing the top 20 mutated genes in the high- and low-risk groups from the SNV data of the TCGA cohort. Missense mutations were the most common somatic mutation type. TP53 had a high mutation rate and more abundant mutation forms in the high-risk group, while CTNNB1 had a high mutation rate in the low-risk group.
ssGSEA and xCell were used to determine the level of immune cell infiltration in the TCGA and Guangxi Medical University Cancer Hospital cohorts, with the median risk score adopted to classify HCC patients into high- and low-risk groups. ssGSEA results showed significant differences in activated B cells, activated CD4 T cells, activated CD8 T cells, CD56dim natural killer cells, central memory CD4 T cells, effector memory CD8 T cells, eosinophils, neutrophils, and type 1 T helper cells in the TCGA and Guangxi Medical University Cancer Hospital cohorts (all P < 0.05, Fig. 8A, C). Interestingly, the xCell results indicated evident differences in CD4+ Tem cells, hepatocytes, HSCs, lymphatic endothelial cells, M2 macrophages, melanocytes, MSCs, microvascular endothelial cells, and smooth muscle cells in the two cohorts (all P < 0.05, Fig. 8B, D).
Discussion
To date, it is generally acknowledged that mutational cancer driver genes confer selective advantages on cells relative to the surrounding environment [30]. Different driver gene mutations bring heterogeneity to tumors [31]. Cooperation among driver genes leads to a hepatocellular immune landscape with unique histology. Combinations of expression levels and specific changes in driver genes can shape tumor phenotypes and result in intratumor heterogeneity [32]. HCC is highly heterogeneous, which makes its diagnosis and prognosis difficult. However, proper molecular biomarkers can assist in predicting the prognosis of HCC patients and improve clinical decisions. Hence, there is no doubt that driver genes, as key factors in tumorigenesis and development, can be considered markers for personalized tumor therapy [33].
The 78 driver genes of HCC were comprehensively analyzed, and 7 differentially expressed driver genes significantly associated with the survival of HCC patients were identified by univariate Cox regression analysis. LASSO Cox regression analysis was then used to construct a 4-driver gene signature (CLTC, DNMT3A, GMPS, and NRAS). Notably, the 4-driver gene signature showed excellent predictive efficacy in both the public databases and the Guangxi Medical University Cancer Hospital cohort. Meanwhile, univariate and multivariate Cox regression analyses showed that the risk score was an independent predictor. Stepwise multivariate Cox regression analysis established T stage, age, and risk score as nomogram variables for HCC patients in the TCGA cohort. The nomogram model combining the risk score and clinical features can provide a more accurate prediction of HCC prognosis. Compared with a recently constructed prognostic signature based on HCC driver genes [34], this study specifically used only HCC driver genes to construct the prognostic model and incorporated fewer detection targets, thus reducing the clinical detection burden and equipping the model with higher specificity.
This study focused on differentially expressed driver genes in HCC that were associated with OS in HCC patients. As a result, four driver genes were selected to develop the prognostic signature. CLTC encodes the clathrin heavy chain (CHC), which interacts with ATG16L1 and is involved in the autophagic process of endocytosis and degradation of cytoplasmic contents during the transport of various macromolecules [35]. Clathrin can inhibit apoptosis by impairing NOX4 upregulation and ROS production induced by TGF-β, and high expression of TGF-β1 and CLTC is related to worse prognosis and lower OS in HCC patients [36]. Additionally, alterations in CHC prolong mitosis, leading to destabilization of mitotic fibers, defective chromosome congression to the metaphase plate, and sustained activation of the spindle checkpoint [37]. The DNMT3A (DNA methyltransferase 3 alpha) gene plays an essential role in encoding enzymes that catalyze DNA methylation [38]. Research has confirmed that upregulation of the DNMT3A gene promotes cancer development. During HCC tumorigenesis, upregulation of the DNMT3A mRNA level is an early event that contributes to the progression of HCC and can predict patient survival [39,40]. GMPS (guanine monophosphate synthase) is an important bifunctional dual-domain enzyme. According to previous findings, when it enters the nucleus, GMPS forms a complex with ubiquitin-specific protease 7 (USP7), which can promote the degradation of the p53 protein by deubiquitinating the negative p53 regulatory protein MDM2 [41,42]. This leads to a drop in the intracellular level of p53 and helps tumors evade attack. Meanwhile, GMPS plays a key role in infection, pathogenicity, and axonal transmission [43,44]. NRAS belongs to the RAS superfamily of proteins, whose physiological functions regulate cell proliferation, differentiation, and survival [45]. Whereas NRAS normally acts by transiently promoting MAPK and PI3K signaling under physiological conditions, mutant NRAS activates MAPK signaling constitutively, leading to dysregulated cell cycle and proliferation signaling that induces tumor formation [46,47].
Immunotherapy is a promising strategy for treating cancers by harnessing the cytotoxic potential of the human immune system [48]. Immune checkpoint blockade (ICB) immunotherapy based on the programmed death-1/ligand-1 (PD-1/PD-L1) axis, which can augment tumor-directed T-cell responses, has reshaped oncology [49]. Some immune checkpoint inhibitors (ICIs), such as atezolizumab, pembrolizumab, and nivolumab, have become first-line systemic therapy drugs for some advanced cancers, significantly increasing long-term survival, whereas other tumors do not respond to ICI monotherapy [50,51]. The expression of PD-L1 has been used to forecast the response to ICB in some cancers [48,52]. However, the predictive role of PD-L1 expression in HCC patients receiving immunotherapy remains controversial [51]. There is significant heterogeneity in PD-L1 expression, and only approximately one-tenth of tumor cells express PD-L1 [53,54]. Therefore, it is necessary to find new biomarkers to identify a priori the patients who will respond to ICI treatment. Studies in different cancers have revealed a connection between the tumor immune-infiltrating cell profile and the response to immune checkpoint therapy [55,56]. In this study, activated B cells, activated CD4 T cells, and activated CD8 T cells showed marked differences between the low-risk and high-risk groups. A previous study showed that B-cell markers were the most differentially expressed genes in the tumors of patients who responded to ICB versus nonresponding patients, with significantly higher expression of B-cell-related genes in responders [57]. The presence of polyclonal CD8 T cells in the tumor is also related to effective anti-PD-1 immunotherapy [58]. Different CD8 T-cell populations can serve as relevant biomarkers of HCC outcomes in immunotherapy studies [59]. ICB therapy can increase the percentage of CD4 T cells [60]. These results imply that grouping based on the risk score may help screen patients who are likely to benefit from immunotherapy.
Admittedly, the current study also has certain limitations. First, since median cutoff values and retrospective data were used in each cohort, more prospective data are needed for accurate validation. Second, considering that the association between the prognostic risk score and immune cell infiltration was based on estimated tumor characteristics, this association may be incomplete; it therefore remains to be experimentally resolved. Third, the signature was not combined with widely used HCC markers, such as AFP, which limits further improvement of its prognostic accuracy.
Despite these deficiencies, the prognostic signature showed good predictive performance in all three cohorts. With the advent of the era of precision medicine, clinical trials can be designed using a risk score based on the 4-driver gene signature to select patients who are most likely to develop a poor prognosis. This provides potential translational value for the clinical management of HCC patients, and novel or more intensive postoperative therapies can be developed in the future. Consequently, a stronger focus on precision medicine and molecular markers is of great value for future tumor treatment.
Conclusion
A 4-driver gene signature was successfully built to predict prognosis. The signature proved to be of significant predictive value in the ICGC and Guangxi Medical University Cancer Hospital cohorts. This indicates that driver genes play a considerable role in the development of HCC, inspiring a novel research direction for future targeted therapy of liver cancer.
Author contribution
Houtian Guo, Fei Lu, Meiqi Huang, Rongqi Lu: Performed the experiments; Analyzed and interpreted the data; Contributed analysis tools and data; Wrote the paper.
Xuejing Li: Analyzed and interpreted the data; Wrote the paper. Jianhui Yuan: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed analysis tools and data; Wrote the paper.
Feng Wang: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed analysis tools and data; Wrote the paper.
Data availability statement
Data included in article/supp. Material/referenced in article.
Ethical approval
The TCGA and ICGC cohort data that we analyzed are publicly available in the TCGA and ICGC databases, and all processes followed the relevant guidelines and policies of the present study. Informed consent was obtained from each patient in the Guangxi Medical University Cancer Hospital cohort, and the study was approved by the Ethics Committee of Guangxi Medical University Cancer Hospital (Ethical Number: 20200137).
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Itchy Capillary Malformations: Unusual Appearance of Meyerson Phenomenon, a Case Series
Meyerson phenomenon, also known as "halo-eczema," has been widely described over melanocytic and non-melanocytic lesions. However, its appearance over vascular anomalies is rarely observed and can lead to diagnostic errors. A case series of five patients aged between four months and two years is reported. These patients developed solitary erythematous, pruritic, scaly patches that were initially diagnosed and treated as fungal infections. Owing to the lack of response to treatment, they were referred to the pediatric dermatology practice, where the diagnosis of Meyerson phenomenon over capillary malformations was made. Topical treatment with corticosteroids led to improvement in all cases. Although Meyerson phenomenon developing over vascular anomalies is a rare condition, it is important for pediatricians and dermatologists to consider it in the differential diagnosis when treating a patient with skin lesions. Recognizing this phenomenon will prevent diagnostic and therapeutic errors.
Introduction
In 1971, Meyerson first described two patients who presented with erythema, pruritus, and desquamation over pre-existing melanocytic nevi, whose lesions improved after treatment with topical corticosteroids [1]. Since then, this phenomenon has been known as the "Meyerson phenomenon" or "halo-eczema" and has been described in a variety of pigmented and non-pigmented lesions. Most of the cases reported in children have been associated with congenital and acquired melanocytic nevi [2][3][4][5]. In adult patients, this phenomenon has also been described in nevi and melanoma [6]. However, only a few cases over vascular anomalies have been reported [7][8][9][10]. As the appearance of this phenomenon over vascular anomalies is rarely observed in pediatric and general dermatology consultations, initial diagnostic errors are common. Therefore, recognizing its clinical characteristics is important for pediatricians and dermatologists in order to prevent diagnostic and therapeutic mistakes.
Case Presentation
A case series of five patients who presented with pruritic erythematous lesions which developed over pre-existing cutaneous vascular anomalies is reported. Patients were three males and two females, aged between four months and three years. All of them were healthy and had no relevant medical history, including the lack of criteria for atopic dermatitis. Patients had been diagnosed as having capillary malformations on different skin locations. In all the cases, the skin lesions were located over these vascular anomalies.
The overview of the patients' characteristics and the locations of the capillary malformations can be seen in Table 1. All the patients were tested for fungal infection with fungal cultures, all of which were negative. Because of the lack of a diagnosis, they were referred to the pediatric dermatology unit. Patients did not identify any association with triggering events or the application of topical products; only in the case of patient number 5 did the lesions develop after a laser treatment. Physical examination showed poorly defined erythematous-scaling patches in all cases, located on the cheek, right hemifacial skin, nape of the neck, and gluteal skin (see Figure 1).
Due to these clinical characteristics and their appearance over pre-existing cutaneous anomalies, the diagnosis of Meyerson phenomenon over vascular anomalies was made. Low-potency hydrocortisone-based topical corticosteroids were prescribed once a day for one week. Complete resolution of the condition was achieved in four cases. Only patient number 3 presented with recurrent eczema when the treatment was finished. In this case, a maintenance treatment with topical corticosteroids twice a week for four weeks was started, and complete improvement was finally achieved.
Discussion
Meyerson phenomenon or "halo-eczema" developing over vascular anomalies is rarely recognized in pediatric and dermatologic consultations, and there are few reports Due to these clinical characteristics and their appearance over pre-existing cutaneous anomalies, the diagnosis of Meyerson phenomenon over vascular anomalies was made. Low-potency hydrocortisone-based topical corticosteroids were prescribed once a day for one week. Complete resolution of the condition was achieved in four cases. Only patient number 3 presented with recurrent eczema when the treatment was finished. In this case, a maintenance treatment with topical corticosteroids twice a week for four weeks was started, and complete improvement was finally achieved.
Meyerson phenomenon or "halo-eczema" developing over vascular anomalies is rarely recognized in pediatric and dermatologic consultations, and there are few reports concerning this issue [7][8][9][10]. It consists of pruritic, erythematous and scaling patches developing over pre-existing cutaneous lesions. Although the etiology is unknown, there are multiple pathogenic theories to explain the appearance of this phenomenon in vascular anomalies. On the one hand, as the exact mechanism regarding how this phenomenon is associated with the underlying disease is still not clear, its potential actual influence on the common vascular anomalies classifications, such as the International Society for the Study of Vascular Anomalies (ISSVA), remains to be established. On the other hand, the appearance of eczema coexisting with a skin lesion without pathogenic relationship to the lesion itself could be misdiagnosed as Meyerson phenomenon.
Atopic dermatitis itself might be responsible for the development of eczema over any skin lesion; vascular stasis and sensitization to self-antigens present in capillary malformations might also trigger an inflammatory eczematous reaction [9]. As reported in the case of patient number 5, laser treatment might be a triggering event of Meyerson phenomenon, which parents might erroneously interpret as a poor response to laser treatment. However, laser treatment has also been reported as a possible therapy that might improve the eczematous lesions over capillary malformations [8]. An overview of possible pathogenic pathways can be seen in Figure 2.
When assessing a patient, pediatricians and dermatologists should recognize that the appearance of pruritic eczematous patches over any type of pre-existing cutaneous lesion is very suggestive of Meyerson phenomenon. These clinical characteristics and the good response to topical corticosteroid treatment are sufficient to make the diagnosis, so skin biopsies are not routinely necessary. In case of diagnostic doubt, a skin biopsy can be performed. Histopathology usually shows acanthosis with marked spongiosis in the epidermis and perivascular lymphocytic infiltration in the dermis [7], which are features usually observed in eczematous disorders.
Although spontaneous resolution of Meyerson phenomenon is possible, symptomatic treatment is usually chosen. It is based on topical low-potency corticosteroids or calcineurin inhibitors, improving in most cases. The good response to topical treatment is a typical characteristic which supports the clinical diagnosis.
Conclusions
In conclusion, although Meyerson phenomenon over vascular anomalies is a rare condition, it is important for pediatricians and dermatologists to consider it in the differential diagnosis when treating a patient with skin disorders. Pruritus, appearance over pre-existing lesions, erythematous-scaling patches, and a good response to topical corticosteroids are the clinical keys to the diagnosis. Recognizing this phenomenon will prevent diagnostic and therapeutic mistakes.
Informed Consent Statement: Informed consent was obtained from all subjects or legal representatives involved in the study.
"year": 2021,
"sha1": "bf9f630a559c4dab021bfaaca776327b683ebf8e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2036-7503/13/1/19/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf9f630a559c4dab021bfaaca776327b683ebf8e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Exploring TOF capabilities of PET detector blocks based on large monolithic crystals and analog SiPMs
Monolithic scintillators are increasingly used in PET instrumentation due to their advantages in terms of accurate position estimation of the impinging gamma rays, both planar and in depth of interaction, their increased efficiency, and their expected timing capabilities. Such timing performance has been studied when those blocks are coupled to digital photosensors, showing excellent timing resolution. In this work we study the timing behaviour of detectors composed of monolithic crystals and analog SiPMs read out by an ASIC. The scintillation light spreads across the crystal towards the photosensors, resulting in a high number of fired SiPMs and ASIC channels. This has been studied in relation to the Coincidence Time Resolution (CTR). We have used LYSO monolithic blocks with dimensions of 50 × 50 × 15 mm³ coupled to SiPM arrays (8 × 8 elements with 6 × 6 mm² area each), which compose detectors suitable for clinical applications. While a CTR as good as 186 ps FWHM was achieved for a pair of 3 × 3 × 5 mm³ LYSO crystals, when using the monolithic block and the SiPM arrays a raw CTR over 1 ns was observed. An optimal timestamp assignment was studied, as well as compensation methods for the time-skew and time-walk errors. This work describes all steps followed to improve the CTR. Eventually, an average detector time resolution of 497 ps FWHM was measured for the whole thick monolithic block, improving to 380 ps FWHM for a central volume of interest near the photosensors. The timing dependency on the photon depth of interaction and planar position is also included.
Introduction
Since the development of the first Positron Emission Tomography (PET) scanners back in the 1980s, several efforts have been devoted to providing an accurate timing determination of the 511 keV annihilation photons [1][2][3]. This information, typically known as Time-Of-Flight (TOF), directly improves the contrast of the reconstructed medical images [4]. Unarguably, continuous research in this field and the potential achievement of a so-called Coincidence Time Resolution (CTR) below 100 ps Full-Width-at-Half-Maximum (FWHM) will lead to a technological revolution in both clinical and pre-clinical PET practice [5].
In order to achieve an excellent time resolution in PET instrumentation, several factors need to be considered, such as an efficient photosensor exhibiting a fast rise time, high quantum efficiency (QE), and relatively high gain [6]. A photosensor with these characteristics, very often used in gamma-ray detectors, is the photomultiplier tube (PMT). PMTs have been used in several applications, demonstrating their suitability as essential components of TOF-PET detectors [6,7]. An alternative photosensor device is the silicon photomultiplier (SiPM) [8,9]. Recent works show that SiPMs are gaining ground over PMTs in gamma-ray detectors due to their compact size, their compatibility with magnetic fields, and a high photodetection efficiency (PDE). Briefly, the operating principle of SiPMs is based on summing the outputs of all internal single-photon avalanche diodes (SPADs); this inherently introduces some uncertainty in the generation of the event timestamp. An alternative approach to the SiPM was introduced by Philips Digital Photon Counting (Aachen, Germany) with the so-called digital silicon photomultipliers (dSiPMs). In their architecture, each cell is composed of an independent SPAD and its readout electronics, and is capable of detecting exactly one photon. Detailed descriptions of their working principle and characteristics can be found in [10,11].
Another key element in the performance of a detector block is the scintillation crystal. Recent advances in this area have enabled the development of TOF-PET systems [12][13][14]. A scintillator crystal suitable for TOF-PET detectors, besides high stopping power, must also exhibit a high initial photon intensity [4]. This characteristic can be achieved by an adequate light output and a short decay time.
Nowadays, several crystal types and compositions suitable for TOF are available [15]. There are mainly two types of scintillator configurations used in gamma-ray detectors, namely pixelated crystals and monolithic blocks. Both types are briefly described below, while emphasis in the present work is given to the second type.
When aiming to achieve a very good timing resolution, the approach claimed to be most efficient is to use arrays of pixelated crystals with pixel dimensions that match the active area of the photosensor element, dubbed one-to-one coupling. An example of this approach can be seen in Fig. 1 left. In this configuration, after some internal reflections inside the crystal pixel, the generated optical photons eventually exit and are collected mainly by a single photosensor element, with no significant losses to neighboring photosensors. This allows one to collect a high number of visible photons within a short time frame. The main degradation observed in this case is some delay of the optical photons in reaching the photosensors, characterized by the light transfer efficiency (LTE) and the light transfer time spread (LTTS) [6,16]. Moreover, the detector spatial resolution is limited to the pixel size. The deterioration of the CTR increases for longer light paths, meaning thicker scintillators; however, this difference does not exceed a few tens of picoseconds. An alternative detector configuration, aimed at improving the detector block spatial resolution, makes use of crystal arrays with pixel sizes smaller than the photosensor elements, implying scintillation light sharing among a few photosensors [17]. Optical lightguides are employed to avoid the accumulation of events in a single photosensor. This approach tends to degrade the CTR due to the spread of the optical photons among neighboring photosensor elements.
Detector block configurations which make use of monolithic crystals provide some advantages when compared to pixelated crystals and are therefore good candidates for PET applications [18][19][20][21]. The crystal thickness and geometry, as well as the treatments of the walls, vary depending on the application.
In monolithic blocks the scintillation photons are emitted isotropically, travelling in straight lines in all directions, in contrast to the pixelated crystal case, in which the optical photons are trapped inside the pixel, bouncing off the walls until they reach a photosensor. The light spread in the monolithic block permits an accurate position decoding of the gamma-ray impact, making it a convenient choice for a high intrinsic detector spatial resolution [22]. In addition to the position, a monolithic scintillator could ideally show a better timing performance than a pixelated one, because the generated optical photons do not suffer from the aforementioned internal reflections inside the crystal pixel, which introduce time delays. However, the wide spread of the scintillation light does not facilitate the collection of a high number of photons at each single photosensor element in a very short time, which is mandatory for good TOF performance. The poor collection of optical photons and the resulting low Signal-To-Noise Ratio (SNR) per channel lead to noise and false signal triggering. In order to reach a good CTR, it is critical to use high-performance readout electronics especially sensitive to the first photoelectrons. Ideal candidates for this purpose are the aforementioned dSiPMs, but also novel Application Specific Integrated Circuits (ASICs) specifically designed with low-noise electronics.
A few works have been published showing that dSiPMs can successfully be combined with monolithic blocks to provide accurate TOF information, even below 200 ps FWHM [17,23]. Due to their operating principle, these photosensors can be sensitive to the very first photoelectrons while keeping the noise level very low. Typically, this is achieved by operating them at low temperatures of −20 °C and by their capability to disable microcells with higher levels of noise.
In this work, we explore the limits, in terms of timing resolution, of large and thick continuous crystals read by analog SiPMs and ASICs. Emphasis has been given to analyzing the contribution of all photosensors involved in each generated scintillation distribution, aiming at a better understanding of the light distribution shape and its relevance to the timing information. Evaluation results as well as methods to improve the CTR are presented and discussed, aiming to shed light on the limits of the timing resolution of this kind of detector configuration.
ASIC readout
We selected an ASIC to read, digitize, and process all photosensor signals. All photosensors were read out individually, avoiding reduction schemes that introduce noise or additional delays in the time paths of the signals. The ASIC used throughout the measurements was the TOFPET2 (PETsys, Portugal). This particular chip can read up to 64 channels and, for each of them, includes charge-integration Analog-To-Digital Converters (ADCs) and Time-To-Digital Converters (TDCs) with 30 ps binning. Inside the ASIC, the incoming signal is evaluated by two analog circuit schemes before it is accepted as a valid gamma signal. The first one is related to the timing of the signal and is composed of two discriminators. The first discriminator, namely vth_t1, uses a very low voltage threshold, typically corresponding to a few photoelectrons, and is designed to start the process. The output of this discriminator is fed into an AND gate after a programmable delay. The output of the second discriminator (vth_t2) is fed to the same AND gate; vth_t2 is set to a higher voltage threshold in order to discard dark counts without introducing any dead time to the system. The output of the AND gate results in a trigger signal which generates the timestamp using a 200 MHz clock. The second circuit scheme is based on a discriminator (vth_e) designed to discard pulses with relatively low amplitude and operates as the energy threshold. Only when the three thresholds are met is a gamma-ray event considered valid. Further information about the ASIC and the DAQ system can be found in references [24,25], among others.
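As an illustrative simplification of this validation logic (the real ASIC operates on analog pulses, and the programmable t1 delay is omitted here; the function and its inputs are hypothetical, not the chip's actual firmware):

```r
# Accept a hit only if the digitized pulse crosses all three thresholds;
# the timestamp is taken from the low-threshold (vth_t1) crossing.
validate_hit <- function(pulse, t, vth_t1, vth_t2, vth_e) {
  i1 <- which(pulse > vth_t1)[1]     # low threshold: starts the process
  i2 <- which(pulse > vth_t2)[1]     # higher threshold: rejects dark counts
  if (is.na(i1) || is.na(i2)) return(NA)  # AND condition not met
  if (max(pulse) < vth_e) return(NA)      # energy threshold not met
  t[i1]                              # timestamp at the vth_t1 crossing
}
```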
SiPM photosensors
Two types of SiPM photosensors were used. A pair of SiPMs with 3 × 3 mm² active area (PA3325 model, KETEK, Germany), biased at 31 V, was tested with small pixelated crystals. Other experiments were carried out using two 8 × 8 SiPM arrays with 6 × 6 mm² active area per element (ON Semiconductor, J-series model). The wide total active area of these arrays makes them good candidates for integration in clinical TOF-PET systems, especially in combination with large monolithic crystals [26]. The arrays have an active area coverage of 92%, permitting the collection of a high number of scintillation photons and, thus, improving the SNR. The larger capacitance of the 6 mm SiPMs is not expected to significantly influence the CTR when combined with monolithic blocks, as the expected uncertainty due to the light spread is likely larger [27]. These SiPM arrays were operated at two bias voltages, 29 and 30.5 V, depending on the experiment.
Detector set-ups
Two types of experiments were designed. First, pixelated crystals in one-to-one coupling were tested, with the aim of characterizing the performance of the ASIC and the whole DAQ system. A coincidence measurement was carried out using the PA3325 SiPM sensors coupled to Teflon-wrapped LYSO crystals of 3 × 3 × 5 mm³. Thereafter, experiments were carried out with two photosensor elements from the J-series arrays and two LYSO crystal pixels of 6 × 6 × 15 mm³ covered with Enhanced Specular Reflector (ESR). For both experiments, the ASIC discriminators were set to their default values, namely vth_t1 = 20, vth_t2 = 20, and vth_e = 15.
The monolithic LYSO crystals had dimensions of 50 × 50 × 15 mm³, matching the SiPM array dimensions. These crystals were treated with black paint on the four lateral walls in order to avoid undesired internal reflections, which typically degrade the spatial resolution. A retroreflector layer was added to the entrance face (Fig. 2 top-left). This particular optical element bounces the light back towards the emission point, improving the light collection at the photosensors while preserving the light distribution [20].
For the evaluation of the monolithic blocks, we first studied one detector block against a reference detector composed of an individual LYSO pixel of 6 × 6 × 15 mm³ coupled to one photosensor element of an identical SiPM array (Fig. 2 bottom-right). This approach provides an optimal characterization of the performance of the monolithic crystal, as the reference exhibits minimal timing uncertainty. The two detectors were configured independently in terms of SiPM bias and thresholds. For the reference pixel-based detector, we used the same configuration as in the initial one-to-one coupling experiments (29 V and default thresholds). The detector with the monolithic block, however, was set to 30.5 V, and lower thresholds were used, since we observed a lower collection of photons per channel. In particular, the voltage discriminators vth_t1, vth_t2, and vth_e were set to 4, 8, and 8 DAQ units, respectively, meaning that the timestamp is generated at the first 1-3 photoelectrons. This set-up was also used during the calibration procedure designed to compensate for the uncertainties in the timestamps introduced by the time-skew and time-walk errors, as well as by the SiPM energy non-linearity. The time-walk refers to the dependency of the timing determination of a signal on its charge amplitude, while the time-skew refers to the timing error introduced by the different time paths among ASIC channels; see Section 3.4 for further details.
Afterwards, two identical detectors, both based on monolithic blocks, were tested in coincidence. It should be noted that for the experiments using the SiPM photosensor arrays, custom printed circuit boards (PCBs) were developed as an interface between the DAQ boards and the SiPM arrays (see Fig. 2 top-right).
All measurements were carried out in a stable temperature environment (±0.5 °C) in the range of 7 to 19 °C, depending on the experiment. Small temperature drifts may affect the results, and special attention was therefore paid to this aspect. Moreover, the whole assembly was placed inside a light-tight box. A ²²Na source (1 mm in diameter, 475 kBq) was used for all experiments. All results mentioned below have been obtained after applying an energy window of about 30% (350-650 keV) around the 511 keV photopeak.
Analysis of the monolithic detector
A simple center-of-gravity calculation was applied to estimate the XY planar coordinates of each recorded gamma-ray event. Regarding the calculation of the Z coordinate, here referred to as the depth of interaction (DOI), for each gamma-ray impact in the monolithic block we summed the energies collected in every row and column of the 8 × 8 SiPM array. Thereafter, the Z coordinate was determined using the estimator described in [20], namely the ratio of the impact energy to the highest SiPM row (or column) signal. The energy of each event is simply the sum of all fired channels.
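To make the positioning procedure concrete, the following is a minimal sketch of the center-of-gravity and DOI estimation for a single event; the 8 × 8 array layout is taken from the text, while the exact normalization of the DOI estimator is an assumption based on the ratio method of [20].

```python
import numpy as np

def estimate_impact(energies):
    """Estimate (x, y, z) of one gamma event from an 8 x 8 SiPM energy map.

    Minimal sketch of the positioning described in the text; the exact
    normalization of the DOI estimator is an assumption.
    """
    n = energies.shape[0]                       # 8 for the 8 x 8 array
    idx = np.arange(n)
    total = energies.sum()                      # event energy: sum of all fired channels
    col_sums = energies.sum(axis=0)             # energy collected per column
    row_sums = energies.sum(axis=1)             # energy collected per row
    x = (col_sums * idx).sum() / total          # center of gravity, X
    y = (row_sums * idx).sum() / total          # center of gravity, Y
    # DOI estimator: ratio of the impact energy to the highest row/column
    # signal; a wide light spread (shallow DOI) lowers the peak value
    z = total / max(col_sums.max(), row_sums.max())
    return x, y, z
```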
For optimum timing determination, we investigated an offline positioning filter: an event is considered valid only if the fired channels belong to adjacent SiPMs (a maximum of 8, therefore). In this way, false triggering due to SiPM dark counts can be rejected.
Timing linearity tests
The timing linearity of the system was studied with the two detectors at a fixed distance, while the ²²Na source was moved across the field of view between them. This experiment was carried out using the two monolithic blocks. We recorded the centroids of the timing distributions and compared the measured centroids with the expected values.
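For reference, the expected centroids follow from a standard geometric relation (not written out explicitly in the text): moving the source by Δx from the midpoint toward one detector shortens one photon path and lengthens the other by the same amount, so

```latex
% Expected shift of the coincidence-time centroid for a source displacement \Delta x:
\Delta t_{\mathrm{expected}} = \frac{2\,\Delta x}{c}
                             \approx 6.7~\mathrm{ps\,mm^{-1}} \times \Delta x
```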
Pixelated crystals: one-to-one coupling
The experiments with the KETEK PA3325 SiPMs and the small crystal pixels showed a CTR of 186 ps FWHM using the default ASIC thresholds. Both detectors showed an energy resolution near 10.8% after correction for the SiPM saturation. Fig. 3 top shows both the energy spectrum and the CTR histogram. The measurement was carried out at 19 °C. The tests were repeated using the 6 × 6 mm² photosensors and LYSO pixels of 6 × 6 × 15 mm³. Despite the larger active area of the photosensors, which might introduce signal jitter due to the larger capacitance, and the 15 mm thickness of the LYSO pixels, a timing resolution of 330 ps FWHM was obtained (Fig. 3 bottom). For this set-up, the energy resolution was found to be 13.7%, again after applying an energy calibration.
Monolithic blocks, light sharing
The small-size source was placed right in front of the reference detector and, therefore, the whole area of the monolithic crystal was irradiated during the coincidence measurements. An energy profile of all events in the monolithic crystal is shown in Fig. 4 top-left. Events within the photopeak (30-48 ADC units) were selected for data analysis. Three different regions of interest (ROIs), at the corner, middle and center of the detector block, were selected by applying a position filter, as depicted in Fig. 4 bottom-left. Moreover, for each ROI, the DOI distribution of events was obtained, allowing us to further split the data into three DOI regions (about 5 mm each) depending on the gamma-ray impact Z coordinate. They are named DOI1 for events at the crystal entrance, DOI2 for events occurring in the middle of the scintillator, and DOI3 for events impinging at the bottom crystal layer (see Fig. 4 top-right). Therefore, an estimation of the average number of channels that crossed the threshold and, hence, of the SNR per channel could be obtained for each gamma-ray impact. As seen in Fig. 4 bottom-right, we observed that, independently of the XY position, a larger spread of the scintillation light was found for events at the upper crystal layers (DOI1). For impacts impinging deeper in the monolithic crystal, e.g. DOI2 and DOI3, we observe a slightly decreased number of channels fired, but one that is still high, suggesting a poor SNR per ASIC channel.
A significant dependency of the number of fired channels on the gamma-ray impact position is observed. The nearer an event occurs to the edge of the crystal, the more it suffers from light truncation, as a high number of scintillation photons are absorbed by the black-painted walls. This fact explains the decreased number of channels fired for ROI1 and ROI2. It should be noticed that these distributions are in general directly related to the dimensions and thickness of the crystal block, as well as to the crystal treatment and photosensor geometry.
We have shown that the SNR generated per photosensor element strongly depends on the position of each particular event. Since an average of 25 channels are fired for each gamma-ray event, a poor SNR in the ASIC channels is expected. Gamma-ray impacts near the crystal entrance (DOI1), which is the most probable scenario, will fire many photosensors but with a reduced number of collected scintillation photons per photosensor. This limits the basic TOF requirements, namely a short and sharp rise time of the signals [12]. On the contrary, events near the photosensor (DOI3) show a narrower light spread, permitting a faster and more efficient collection of optical photons. We sorted all impacts based on their timestamps and used this information to fill the histograms shown in Fig. 5 top. Earliest hit 0 (X-axis of the histogram) means that the first timestamp also collected the maximum number of optical photons, whereas, for instance, the hit labelled 10 means that the 10th impact collected the highest energy for that gamma-ray event. For gamma-ray impacts near the photosensor (DOI3), the channels collecting the highest amount of energy also correspond to the fastest ones (first hits); that is, we observe the hits with the highest energy being the earliest collected. However, impacts at the crystal entrance exhibit a wider distribution of hit energies and times. This was found to be directly related to the timing resolution.
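The hit-ordering analysis behind Fig. 5 top can be sketched as follows; the hit values are illustrative, and "hit 0" corresponds to the case where the earliest timestamp also carries the maximum energy.

```python
import numpy as np

# Sort one event's hits by timestamp and locate the hit carrying the
# maximum energy. Hit values (timestamp in ps, energy in ADC units)
# are illustrative.
hits = [(1002.5, 3.1), (1001.8, 7.9), (1003.0, 1.2)]
hits_sorted = sorted(hits, key=lambda h: h[0])        # order by arrival time
energies = np.array([e for _, e in hits_sorted])
rank_of_max_energy_hit = int(np.argmax(energies))     # 0 here: the earliest hit
# also collected the most photons, the typical DOI3 behaviour
```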
Also interesting is the analysis of the energy of the earliest triggered channel (earliest recorded timestamp), which complements the previously described behavior. By averaging the energies of the eight earliest hits over all events, it was clearly shown that, for deep DOIs, the first recorded hit has a much higher energy than the later recorded ones, while at the upper layer DOI1 the energies of all 8 first hits are comparable (Fig. 5 bottom). It should be noted that these plots were obtained for the whole scintillator volume, without using the previously described position filter. No significant variations are expected in these distributions for independent ROIs.
The variations in the spread of the scintillation light with the DOI of each gamma event led us to explore the optimal method of event timestamp assignment [23]. When so many hits occur for each event, it is critical to study whether the optimal time resolution is obtained using the first recorded timestamp of each event, or whether an alternative approach is needed.
Monolithic detectors, time analysis
When using the monolithic crystal and the reference pixel, the assembly was placed in a stable temperature environment of 7 °C, minimizing dark count rates and increasing the photon detection efficiency (PDE) of the photosensors. Coincidence measurements were carried out with the ²²Na source attached to the reference detector, and data for the whole scintillator volume were obtained.
We first obtained the timing resolution using the timestamp of the channel with the highest energy, resulting in 1.41 ns FWHM. Alternatively, we sorted the data based on the timestamps and used the earliest one recorded for the timing distribution. By plotting the difference of the timestamps, we observed an additional satellite peak centered at 5000 ps, see Fig. 6 top. The satellite peak is directly related to the overvoltage of the SiPMs as well as to the value of the vth_t1 discriminator. A detailed analysis of this effect can be found in [28].
We applied a timing filter window, accepting only events whose first several recorded hits fall within a given time frame. In particular, six hits were chosen as the optimum number of hits within this window. This filter improved the CTR and removed the satellite peak from the timing distribution plots, showing that this effect was a result of false triggering (Fig. 6 bottom). Table 1 summarizes the measured CTR for different filter timing windows. As can be seen, narrower time windows significantly improve the CTR but also reduce the statistics. Therefore, a window of 2 ns was selected and applied to all following measurements. This filter improved the measured time resolution to 996 ps FWHM.
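A minimal sketch of this timing filter is given below, with the values from the text (first six hits, 2 ns window); the function name and interface are illustrative.

```python
# Accept an event only if its earliest n_hits timestamps fall within
# a time window; otherwise treat it as a falsely triggered event.
def passes_timing_filter(timestamps_ps, n_hits=6, window_ps=2000.0):
    ts = sorted(timestamps_ps)[:n_hits]      # earliest n_hits timestamps
    return (ts[-1] - ts[0]) <= window_ps     # reject if they spread too much
```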
Some authors have shown a significant CTR improvement when, instead of the timestamp of the first hit, the timestamps of secondary hits are considered together with a low threshold at the level of the first photo-electron [29,30]. Fig. 7 shows the same behavior experimentally. When using the timestamp of the fourth recorded hit, the time resolution improved. Herein, using this approach and the fourth arriving timestamp, we were able to improve the CTR from 996 ps (RAW data) to 883 ps FWHM.
The timing resolution measured for this set-up is still influenced by the time-skew among the ASIC channels. Moreover, the time-walk also affects the CTR due to the poor collection of photons per photosensor element: a slower rise time is observed as a consequence of the scintillation light sharing effect.
Time-skew and time-walk calibration
The reference detector with the single LYSO pixel was placed at a distance of 25 cm from the monolithic detector and measured in coincidence mode. The ²²Na point source was attached to the reference detector, aiming again to irradiate the whole volume of the crystal block, and about 10⁶ events were recorded. Considering that the source, as well as the distance between the detectors, remained constant during the experiment, the mean values of the timing distributions of all timestamp differences between the channels of the monolithic block and the reference one should ideally be constant, independently of the energy collected.
Initially, aiming to obtain an estimation of the time-skew error (and not for calibration purposes), we selected events which occurred at the bottom of the crystal block and whose earliest recorded impacts contain a relatively high number of photons (8 ADC units). This filter was applied in order to consider only timestamps that are less influenced by noise. The aforementioned Gaussian mean values were obtained for the 64 channel pairs; these represent the time-skew of the 64 ASIC channels. Fig. 8 depicts the time offsets for all ASIC channels in this assembly. The introduced error can be as large as 1 ns when all channels are considered for the CTR estimation.
In the following we describe the studies carried out regarding the time-walk influence. Fig. 9 left shows the timestamp differences for a single pair of channels as a function of the energy of the first hit recorded in the monolithic block, before any calibration. Even when considering a single channel, the timing resolution is strongly affected for low-energy impacts (see the range 0 to 10 in arbitrary units), confirming the time-walk effect.
The 2D histograms containing the time differences as a function of the energy were generated for each channel of the monolithic detector (a total of 64). They were then fitted using a parabolic function, and the fitting parameters were stored in a look-up table (LUT). A parabolic function was used as it agrees well with the data behavior. Applying this method to all 64 channels of the monolithic-based detector, besides partially correcting the time-walk, also accounted for the time-skew errors, as all channel distributions were centered at zero (see Fig. 9 right). After correcting each recorded timestamp, an improvement of the CTR was observed for all channels, with an average value of 851 ps FWHM when using the earliest recorded timestamp.
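The calibration step can be sketched as follows, assuming per-channel arrays of measured timestamp differences and first-hit energies are available; the parabolic fit and the LUT are as described in the text, while numpy's generic polynomial fit stands in for whatever fitting routine was actually used.

```python
import numpy as np

# Per ASIC channel, fit the timestamp difference dt against the first-hit
# energy with a parabola and store the coefficients in a look-up table.
def build_lut(energy, dt, n_channels=64):
    return {ch: np.polyfit(energy[ch], dt[ch], deg=2) for ch in range(n_channels)}

def correct_timestamp(t, e, ch, lut):
    # subtracting the fitted offset re-centers every channel's distribution
    # at zero, compensating time-skew and (partially) time-walk
    return t - np.polyval(lut[ch], e)
```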
The arrival times of secondary hits were again studied in detail, after applying all timestamp corrections. Fig. 10 shows the CTR when later recorded timestamps are used. A slight improvement was observed when the second hit was used (black squares). However, we also investigated averaging the timestamps t_i of the few first hits, rather than considering only one. We tested both a simple average of timestamps, t_SA = (1/n) Σ_{i=1}^{n} t_i, and an energy-weighted average, t_EA = (Σ_{i=1}^{n} E_i t_i) / (Σ_{i=1}^{n} E_i), where E_i is the energy collected by the ith hit. Up to eight timestamps (n ≤ 8) were considered for both methods, see also Fig. 10, blue triangles (t_SA) and red circles (t_EA), respectively. Slightly better values were provided by the energy-weighted average when the six earliest recorded timestamps were used, reaching about 580 ps FWHM. These values were obtained for the whole scintillator volume. Finally, after enabling the position filter mentioned in Section 2.4, an additional improvement in the CTR was found, from 580 ps to 550 ps FWHM. Since the contribution of the reference detector was estimated at 235 ps FWHM (330/√2 ps), the resulting time resolution for the monolithic-based detector was found to be 497 ps FWHM.
CTR dependency with XY and Z position
The three ROIs presented in Section 3.2 were selected for an independent and detailed analysis of the CTR performance as a function of the X, Y and Z position of the gamma-ray event. In all three ROIs, when RAW timestamps are considered, the influence of the time-walk and the poor SNR significantly affects the time resolution. This is especially observed for impacts at the crystal entrance layer (DOI1), as depicted in Fig. 11 with black squares. The same dependency of the CTR on the DOI layer is also found when only one corrected timestamp is used, but with some CTR improvement, as expected. However, when additional timestamps (six of them) are averaged using the energy weighting method, the CTR is highly improved (green squares) and, most importantly, its dependency on the DOI layer significantly decreases. Moreover, it is worth highlighting that, for events occurring near the crystal corner (ROI1) and at the bottom crystal layer (DOI3), the timestamp averaging method provides very similar results to using just the earliest corrected timestamp. This behaviour can be expected since, as already shown in Fig. 5, most of the collected scintillation photons arrive in the first hit, and the scintillation light absorption by the black-painted laterals limits the light spread. Besides this, no significant variations in the timing behaviour were found among the ROIs; the best values were obtained for ROI3 and DOI3, resulting in a CTR of 440 ps FWHM (371 ps FWHM when subtracting the contribution of the reference detector).
Experiments with two monolithic blocks
The two monolithic blocks were independently calibrated using the approach described above with a reference single-pixel detector. They were then measured in coincidence by placing the source between the two detectors. Fig. 12 top shows the CTR values when averaging up to 8 timestamps (energy-weighted and simple averages). The best timing resolution, 660 ps FWHM, was achieved when using the six earliest timestamps weighted by energy. These data include all impacts in the whole scintillation volume.
In order to validate the timing results, the linearity of the measured Gaussian centroids of the timing distributions was evaluated. In Fig. 12 bottom, the centroids obtained using just one corrected timestamp, as well as using the energy-weighted average of 6 timestamps, are plotted against the theoretical ones. The results show that when using the first timestamp (black squares) a regression coefficient of 0.97 was obtained, while with the averaging method this improved to 0.99 (cyan squares).
Discussion
The TOFPET2 ASIC is capable of resolving, with high accuracy, the gamma impacts in terms of timing and energy resolution. The experiments carried out with the small LYSO pixels (5 mm thick) exhibited state-of-the-art CTR values of 186 ps FWHM using commercially available electronics. We faced hardly any difficulty reaching this timing, even at a set-up temperature of 19 °C. However, when using thicker crystal pixels (15 mm) and larger photosensors of 6 mm size, a deterioration to 330 ps was observed, as expected.
The use of a single-pixel reference detector in the experiments with the monolithic crystals permitted a better understanding of their timing performance, since the small crystal pixel minimizes the error it introduces in the CTR determination.
An energy filter was applied to all measurements, selecting events within the 511 keV photopeak. The energy resolution of the monolithic-based detector was found to be near 30% when considering all events within the whole crystal volume. However, this improves significantly when selecting small ROIs; for instance, for a ROI at the crystal center, an energy resolution of 17% was determined. The poor resolution obtained when considering all events, independently of their position, is caused by the truncation of the scintillation light at the crystal edges. Moreover, the very low thresholds might produce some small deterioration in the energy resolution.
A parameter of significant importance for the timing resolution of the monolithic block was found to be the timestamp assignment. For comparison, in the case of pixelated crystals with one-to-one coupling, the best CTR results are obtained using the timestamp of the hit with the highest energy, if more than one photosensor fires (not shown in this work). However, this is no longer the case for monolithic-based detectors. If an event occurs near the photosensor, without producing significant light sharing, the optimal approach is to assign the timestamp of the first recorded hit, which is also the hit with the highest number of collected photons (energy). However, for events occurring near the entrance region of the crystal, this approach cannot be applied, due to the low probability of collecting enough photo-electrons in just one channel.
When there is intense scintillation light sharing, very low thresholds are needed, making the electronics sensitive to the very first photo-electrons. However, as shown in this work, such low thresholds may result in the acceptance of falsely triggered events, introducing uncertainty in the timing distribution. The timing window filter allowed us to discard noisy events and to improve the CTR. Although this filter significantly reduced the acquired statistics, we expect most of the rejected events to be the result of false triggering. Moreover, operating at even lower temperatures would further reduce the dark count rate and, thus, the number of discarded events. It should also be noted that the improvement seen when applying this filter is not related to the time-skew error, since the improvement is also seen when only single pairs of channels are considered for the timing distribution, a case in which the time-skew has no effect.
It was observed that secondary hits provide better results in terms of timing; in some cases, an improvement of more than 100 ps FWHM was observed. This behaviour has been studied in depth elsewhere and is directly related to the photon order and photon-counting statistics [30]. As previous works have shown, as the optical photon index (i.e. hit number) increases, the time interval between successively detected photons is reduced. This means the spread of the probability distribution of the first photon's detection time is significantly larger than that of the detection times of secondary photons. Therefore, it is hypothesized that the improvement seen in the experimental results stems from the timing generation probability of the photons and from order statistics theory, since no relation with the energy of each hit was found (as shown in Fig. 5 bottom).
A calibration procedure was carried out to correct each timestamp. We generated 2D plots of the time differences for each channel pair as a function of the impact energy. Here, we studied the time given by the timestamp of the first recorded hit, of the first 8 hits, or of all hits; however, no significant differences were found among them, so the first hit was used in these plots. Instead of applying fits to the 2D plots (time difference vs. energy) for each channel, another method was studied previously [26]: projections onto the time-difference axis were made in small energy steps, and the centroids of the Gaussian-like profiles were used as timestamp offsets. However, since lower timing thresholds were used here, the fitting approach described in this work gave slightly more accurate results.
Even after the calibration, the effects of the time-walk as well as of false triggering may still be present. We expect some uncertainty in the generated timestamps, especially when the triggered channel did not collect a significant amount of scintillation photons. However, the method of averaging several timestamps, in particular when weighting them by their collected energy, showed a significant improvement of the CTR, as it minimized the contribution of the noisy timestamps. This was also verified when treating the CTR independently for different planar and DOI regions. As shown, the timestamp averaging method led both to a partial compensation of the dependency of the CTR on the light spread, and to more accurate CTR results than a single-timestamp approach. The only exception was observed for events occurring near the crystal edge and at the DOI layer near the photosensors, where we measured similar CTRs with the averaging method and with a single timestamp; this can be explained by the limited light spread in that region of the crystal. Moreover, by analysing the CTR as a function of the DOI, we were able to study the timing performance while avoiding the uncertainty introduced by the propagation speed of light in the crystal.
The CTR values obtained for the assembly of two monolithic blocks were in accordance with the CTR recorded for the monolithic block in coincidence with the single-pixel detector. We estimated combined statistical and systematic error bars of about 20-30 ps FWHM. Notice that the custom PCB developed to interface the J-Series photosensors with the ASIC readout might introduce some additional noise to the photosensor signals, due to the signal time-paths and the higher capacitance. All methods used and described in the present manuscript were validated by the linearity of the Gaussian centroids of three spatially separated measurements.
Conclusions
We have evaluated the TOFPET2 ASIC showing its capability to achieve sub-200 ps FWHM time resolution using crystal pixels.
A thick and wide monolithic block was selected to be tested and explored in terms of timing resolution. The volume of the selected scintillation block posed several challenges for the determination of an accurate impact time resolution. The light sharing effect, and the resulting poor SNR per ASIC channel, is related to the size of the monolithic block. In addition, the selected treatment (black lateral paint and a retroreflector layer at the entrance) on the one hand enhances the determination of the impact coordinates, but on the other hand significantly degrades the timing resolution due to the absorption of scintillation light at the lateral walls. We are aware that these components somewhat constrained the achieved performance, and that better absolute values could be obtained using smaller monolithic blocks with white or reflective paint, as well as photosensor arrays with smaller SiPM area. However, the analysis shown in this work is still useful for understanding the overall limits of, and corrections to be applied to, monolithic blocks read out by analog SiPMs and ASICs. In this work we placed special focus on the time-walk and time-skew corrections.
The time-skew can be addressed through the independent processing of channel pairs, but in the case of the monolithic block, the presence of time-walk uncertainties creates additional difficulties when aiming for an accurate calibration. Nonetheless, the calibration method described in this work provides good results. The time-skew was successfully corrected, permitting the exploitation of the timing information in future reconstruction processes. In addition, the time-walk has also been partially compensated, which permits and motivates follow-up research towards the development of TOF-PET detectors using other types and treatments of monolithic blocks.
Summarizing, RAW timing resolutions were found to be well above 1 ns for a large 50 × 50 × 15 mm³ LYSO block when tested in coincidence against a reference pixel-based detector. Techniques to discard a fraction of noisy events and decrease the time uncertainty were applied, reaching a significant improvement in CTR, to 550 ps FWHM for the whole scintillation volume, without subtracting the reference detector contribution estimated at about 230 ps FWHM. As shown in the analysis of the dependency of the CTR on the event position, an improved timing resolution of 440 ps FWHM can be achieved for events at the center of the crystal and at deep DOI layers (again without subtracting the reference detector contribution). When two identical detectors were tested, CTR values of 660 ps FWHM were found. This timing resolution clearly does not permit the use of timing information in the lines of response of small or organ-dedicated systems [31], but it will permit the reduction of noise as well as the improvement of the SNR in the reconstructed images. Moreover, recent pilot studies in our lab have shown that these results improve by up to a factor of 2 if smaller crystals (1 × 1 in.), Teflon wrapped and coupled to 8 × 8 arrays of 3 × 3 mm² SiPMs, are used.
Fig. 1. Representation of the scintillation light distribution for one gamma event inside a pixelated crystal (left) and a monolithic block (right).
Fig. 3. Top, energy spectrum (after energy calibration) of one detector and the time distribution obtained with 3 mm SiPMs and LYSO crystals of 3 × 3 × 5 mm³. Bottom, energy spectrum (after calibration) of one detector and the time distribution obtained with 6 mm SiPMs and LYSO crystals of 6 × 6 × 15 mm³.
Fig. 4. Top-left, energy spectrum of the whole monolithic-based detector before calibration; the black line shows a fit to the distribution using a Gaussian profile plus a line. Top-right, DOI distribution of the events recorded at the center of the monolithic crystal (ROI3). Bottom-left, flood map of events, showing the three ROIs selected for analysis. Bottom-right, average number of channels fired per event as a function of the DOI, for the three ROIs.
Fig. 5. Top, histograms showing which of the hits collected the highest amount of energy, for the three DOI regions and the whole scintillator. Bottom, average energy of each hit for all events recorded at the three DOI layers (no position filter).
Fig. 6. Top, timing distribution of the measurement between the monolithic block and the reference detector without applying filtering windows. Bottom, timing distribution when applying a 2 ns window to the first six impacts.
Fig. 7. Experimental results showing the CTR measured as a function of the number of earliest timestamps used, for different filtering windows.
Fig. 8. Dispersion of the Gaussian centroids of the time differences between the channels of the monolithic detector and the reference one (time-skew error).
Fig. 9. Time differences of one channel as a function of the energy of the first hit, before (left) and after (right) the calibration. The color map is in logarithmic scale.
Fig. 10. CTR when using later recorded timestamps and averages of the earliest corrected timestamps: simple average (blue triangles) and energy-weighted average (red circles).
Fig. 11. Timing resolution (CTR) as a function of the DOI layer for the three ROIs, when using RAW timestamps, the earliest corrected timestamp, and an energy-weighted average of the 6 earliest corrected timestamps.
Fig. 12. Top, timing resolution of the coincidence measurement between the two monolithic detectors using the simple timestamp average (red circles) and the energy-weighted timestamp average (black squares). Bottom, measured centroids as a function of the theoretically expected centroids, using the earliest corrected timestamp (black squares) and an energy-weighted average of the 6 earliest timestamps (cyan squares).
Table 1
Table representing the CTR values as well as the statistics of the total events accepted, for different filtering windows applied to the first 6 hits.
Generalized Adaptive Network Coded Cooperation (GANCC): A Unified Framework for Network Coding and Channel Coding
This paper considers distributed coding for multi-source single-sink data collection wireless networks. A unified framework for network coding and channel coding, termed "generalized adaptive network coded cooperation" (GANCC), is proposed. Key ingredients of GANCC include: matching code graphs with the dynamic network graphs on-the-fly, and integrating channel coding with network coding through circulant low-density parity-check codes. Several code constructing methods and several families of sparse-graph codes are proposed, and information theoretical analysis is performed. It is shown that GANCC is simple to operate, adaptive in real time, distributed in nature, and capable of providing remarkable coding gains even with a very limited number of cooperating users.
We treat channel coding as an integral part of network coding, thereby reducing this issue to a minimum. Following the notion developed in [7] that network codes are essentially generalizations of source codes and channel codes, we refer to the new protocol as the generalized adaptive network coded cooperation (GANCC) protocol. GANCC makes clever use of circulant sparse-graph codes, and subsumes ANCC as a degenerated case. Further, while ANCC has a network codeword length on the order of O(m), the effective network codeword length of GANCC is on the order of O(Nm), where m is the number of cooperating users and N is the packet length for each user. Note that the long effective code length of GANCC is achievable regardless of whether channel codes exist in each packet. Hence, GANCC requires significantly fewer cooperating users than ANCC to attain a similar (network) coding gain. Code constructing algorithms are discussed to optimize the actual code graph of these codes using locally available information: the column weight concentration (CWC) algorithm provides a simple method to construct codes with balanced protection, and the distributed progressive edge growth (DPEG) algorithm improves the girth, and hence the code performance, at the cost of a larger complexity. The efficiency of GANCC using the proposed codes is further analyzed using density evolution. Finally, the realistic code performance is verified and benchmarked by computer simulations.
The remainder of this paper is organized as follows. Section II introduces the system model. Section III briefly reviews ANCC, and Section IV details the key idea and the general framework of GANCC. Section V discusses different code designs and code ensembles. Section VI conducts theoretical analysis. Finally, concluding remarks are provided in Section VIII.
II. SYSTEM MODEL
The model of interest comprises m terminals communicating wirelessly with a common destination, each terminal having a single antenna and using binary phase-shift keying (BPSK) modulation. In the first phase (broadcast phase), each terminal takes turns to broadcast its data packet, thereafter referred to as a source-packet. In the second phase (relay phase), each terminal takes turns to help forward others' data, thereafter referred to as a relay-packet or parity-packet. The destination listens through both transmit phases, and combines all its reception to recover the source-packets of all the terminals.
We assume that all the communication channels used in this paper are spatially independent, with fading coefficient α and channel noise Z. The fading coefficient α is modeled as a zero-mean, independent, circularly symmetric complex Gaussian random variable with unit variance, whose magnitude |α| is Rayleigh distributed. We assume that the fading coefficient α is always known to the receivers but not to the transmitters. The channel noise Z captures the additive channel noise and interference, and is modeled as a complex Gaussian random variable with zero mean and variance N_0. For completeness, we consider both block fading (very slow fading) and independent and identically distributed (IID) fading (very fast fading). In the block fading scenario, the fading coefficient α remains constant during one round of user cooperation, and changes independently from one round to another. In the IID fading scenario, the fading coefficient changes independently from bit to bit.
III. OVERVIEW OF ADAPTIVE NETWORK CODED COOPERATION
The ANCC protocol proposed in [8][9] operates as follows.
Consider an m-to-one data-collection network. In the first phase, each of the m terminals airs a source-packet of length N in its designated time slot. A terminal that is not transmitting listens, decodes what it hears, and collects the successfully decoded packets in its retrieval-set. Due to channel fading and other impairments, a terminal may not be able to retrieve all the packets.
In the second phase, each terminal randomly selects a small number of packets from its retrieval-set, computes their check-sum (i.e. XORs these binary vectors bit-by-bit) to form a length-N relay-packet, and forwards it to the destination. Thus, by the end of the second phase, the m terminals have transmitted, through user cooperation, a (2m, m) network code in the form of a distributed, random, systematic LDPC code [8,9]. The source-packets transmitted in the first phase form the systematic symbols of the network code, and the relay-packets transmitted in the second phase constitute the parity symbols.
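A minimal sketch of the relay step, with illustrative packet contents: each relay XORs a random subset of the source-packets it successfully decoded to form its relay-packet.

```python
import numpy as np

# Each relay XORs a random subset of the length-N packets in its
# retrieval-set. Packet contents and subset size are illustrative.
rng = np.random.default_rng(0)
N = 8
retrieval_set = [rng.integers(0, 2, N, dtype=np.uint8) for _ in range(4)]

chosen = rng.choice(len(retrieval_set), size=2, replace=False)
relay_packet = np.zeros(N, dtype=np.uint8)
for k in chosen:
    relay_packet ^= retrieval_set[k]         # bit-by-bit check-sum (XOR)
```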
To help illustrate, consider a simple example of m = 5 users. Assume that for a particular round of cooperation, the inter-user channels form an instantaneous network topology as shown in Fig. 1, where a directed link represents a quality connection that will hold, say, until at least the end of this cooperation round. For simplicity, the destination is not shown in the figure. Suppose that the packets marked in bold font are selected (randomly) by each terminal to compute check-sums. This results in an LT-LDPC network code whose parity check matrix is given in (1), with the right (lower-triangular) part corresponding to the parity symbols. Due to the random and on-the-fly construction of the code, a small bit-map field needs to be included in each relay-packet, so that the destination knows how the checks are formed and can replicate the code graph and perform message-passing decoding accordingly. Since a different network code is constructed and transmitted in each new round of cooperation, the overall system performance represents the ensemble average rather than that of any individual code. Further, such a scheme requires the destination to be equipped with an adaptive decoder architecture, which can be implemented, for example, using software-defined radio (SDR).
The network code in (1) takes the form of an LT-LDPC code. This is because the users take turns (time-division) to transmit, so that each one can continue to listen to relay-packets until its own turn. In the case when users transmit simultaneously through frequency-division or code-division, each user will collect only source-packets, and the resulting network code becomes an LDGM code, in which the right part of the parity check matrix is an identity matrix instead of a lower-triangular matrix. Since weight-1 columns degrade the performance, LDGM codes in general exhibit a higher error floor and a slightly worse waterfall region than LT-LDPC codes [8,9].
Finally, depending on the quality of the user-destination channels or the residual power supply, a terminal may choose to relay a single time or multiple times, each time using a different relay-packet, or not to relay at all. This means that the network codes here need not be of fixed length or fixed rate. Further, if there exists a simple feedback mechanism from the destination, then the users can keep generating and transmitting relay-packets until the destination stops them. The resulting network code thus becomes a rateless sparse-graph code.
IV. GENERALIZED ADAPTIVE NETWORK CODED COOPERATION (GANCC)
A. An Illustrating Example
The ANCC protocol does not consider or exploit the channel code that may well exist in each source-packet. Since the network code length depends solely on the number of users m, it takes a large number of cooperating users to achieve a good network coding gain.
The associated delay and management overhead can be costly. Further, when a large cluster of co-located users is not available (e.g. in a mobile network or a small-scale network), the network code length may not be long enough to provide a desirable coding gain.
The proposed GANCC protocol provides a remedy to this problem by integrating the channel codes and the network code in one single codeword, increasing the effective code length from 2m to 2mN, where N is the length of each packet. The beauty of GANCC is that the channel codes now constitute an integral part of the network code, rather than being loosely connected to it via serial concatenation.
To best illustrate this, consider an extreme case where each source-packet contains only N uncoded bits with no explicit channel coding. For simplicity, we consider the same 5-user example discussed in the previous section. The LDPC network code of ANCC, whose parity check matrix H_ANCC is given in (1), is rather weak due to the short block size (and the existence of length-4 cycles). The lack of channel coding in each packet further eliminates the possibility of iteratively decoding the network code and the channel code to improve performance. GANCC remarkably changes the situation by the simple operation of interleaving. For each terminal, after selecting the packets from its retrieval-set, instead of computing their check-sums bit-by-bit in the original bit order, it interleaves these length-N bit-streams, each using a different scrambling pattern, before adding them together to compute parities. The resulting parity check matrix of this joint network and channel code, H_GANCC, is illustrated in Fig. 3, where π_{i,j} is a permutation of an identity matrix whose row permutation pattern determines how user i scrambles user j's bit-stream. Mathematically, H_GANCC is constructed by substituting each entry of H_ANCC in (1) with an N × N square matrix, where "0"s are replaced by null matrices, and "1"s are replaced by independent permutation matrices, except for the "1"s on the right diagonal, which are replaced by identity matrices (trivial permutations). In the degenerated case where all the permutation matrices are the identity matrix, GANCC reduces to ANCC.
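The construction of H_GANCC from H_ANCC can be sketched as a block "lifting" operation; the code below uses random permutations for illustration, whereas the construction in the text keeps identity blocks on the right diagonal and, in practice, uses circulant matrices.

```python
import numpy as np

def lift(H_ancc, N, rng):
    """Expand a binary base matrix into H_GANCC (sketch).

    "1" entries become independent N x N permutation matrices and "0"
    entries become null blocks; an exact replica of the text's construction
    would use identity blocks on the right diagonal.
    """
    rows, cols = H_ancc.shape
    H = np.zeros((rows * N, cols * N), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            if H_ancc[i, j]:
                perm = rng.permutation(N)
                H[i*N:(i+1)*N, j*N:(j+1)*N] = np.eye(N, dtype=np.uint8)[perm]
    return H
```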
These permutation matrices, or interleavers, are critical to the performance of GANCC. First, interleaving integrates the bit-streams of all the users into one big network code, inter-connecting previously unrelated bits so that they can provide inference about one another. Since the effective code length becomes O(mN), where N typically ranges from a few hundred to a few thousand in practical systems, the system obviates the need for many terminals to cooperate, making GANCC more practical. Second, by permuting each bit-stream with a different pattern, and thereby breaking the length-4 cycles that may previously have existed in H_ANCC, interleaving reduces the chance of short cycles. In the example, H_ANCC in (1) contains several length-4 cycles, but the corresponding H_GANCC in Fig. 3 has a much smaller fraction of length-4 cycles, if any.
In GANCC, each terminal i needs to generate or store a set of (random) interleavers π_{i,j}, j = 1, 2, ..., m, whose knowledge must be revealed to the common destination. This consumes a large storage space for all parties involved and/or a good amount of signaling overhead. To alleviate this burden, algebraic interleavers can be used in lieu of random interleavers [13]. An algebraic interleaver is one whose scrambling pattern can be generated on-the-fly using an often-recursive formula with a couple of seeding parameters. Through the proper choice of formula and parameters, an algebraic interleaver can be made to behave much like a random interleaver, but requires significantly less storage [13].
In the context of GANCC, solutions even simpler than algebraic interleavers are possible by exploiting quasi-cyclic LDPC codes, or circulant LDPC codes [14]. Several studies have shown that circulant matrices/interleavers can be used to construct good LDPC codes with simple encoding/decoding implementations. Thus, instead of using random or algebraic permutation matrices, we replace the "1" entries in H_ANCC with N × N circulant matrices, such as the one shown below (N = 4, offset p = 1):

0 1 0 0
0 0 1 0
0 0 0 1
1 0 0 0

Since each row is the right cyclic shift of the previous row, a single parameter, the position of the non-zero entry in the first row (termed the offset and denoted as p), determines a circulant matrix. In practice, it is possible to make p a simple function of the terminals' indexes. GANCC thus requires very little additional complexity compared to ANCC, but produces a random LDPC code whose code length is several orders of magnitude larger. Notice that H_GANCC in general has a lower density than H_ANCC. In the example, H_ANCC in (1) is rather dense, whereas its counterpart H_GANCC in Fig. 2(A) appears to have just the right density. In practice, a delicate balance must be struck when choosing the check degrees, since a heavy density breaks the message-passing decoding, while excessive sparsity leads to uselessly weak codes.
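With circulant interleavers, a relay's entire interleave-and-XOR step reduces to cyclic shifts, as sketched below; the packet contents and offsets are illustrative.

```python
import numpy as np

# A circulant permutation with offset p is just a cyclic shift, so the
# relay's parity-stream is the XOR of cyclically shifted packets.
N = 1000
rng = np.random.default_rng(1)
packets = {j: rng.integers(0, 2, N, dtype=np.uint8) for j in (0, 2, 3)}
p = {0: 17, 2: 311, 3: 590}                  # per-user offsets (assumed)

parity = np.zeros(N, dtype=np.uint8)
for j, pkt in packets.items():
    parity ^= np.roll(pkt, p[j])             # circulant permutation = cyclic shift
```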
To demonstrate the advantage of GANCC over ANCC and the efficiency of circulant interleavers, we simulate the 5-user example discussed previously, where each user transmits an uncoded data-packet of length N = 1000 in the first phase and relays a parity-packet of length N = 1000 in the second phase. We evaluate three cases: ANCC with (10, 5) LT-LDPC codes, and GANCC with (10000, 5000) LT-LDPC codes based on random permutation matrices and circulant matrices, respectively. We plot both the bit error rate (BER), averaged over all the bits in all the data-packets, and the packet error rate (PER), averaged over all the users, versus the user-destination signal-to-noise ratio (SNR) in Fig. 4. Since the network topology changes from time to time, resulting in a different network graph and hence a different network code each time, the curves represent the ensemble-average performance rather than that of a single code. We observe that random permutation matrices and circulant matrices make no performance difference in GANCC (the curves overlap), and both significantly outperform ANCC, by 7 dB in BER and by 11 dB in PER. For fairness, we have considered block Rayleigh fading channels, such that there are only m = 5 different channel realizations in each codeword regardless of the code length. Hence, the impressive gains achieved by GANCC are due only to the interleaving and to the larger code length and richer decoding context that come with it. Foreseeably, when the channels become fast fading, the large code lengths in GANCC will bring additional time diversity and, consequently, an even larger performance advantage over ANCC.
B. The General Framework
In general, the source-packet from each terminal is channel coded. Assume all the packets consume the same bandwidth of N bits. Let H_1, H_2, ..., H_m be the parity check matrices of the channel codes, (N, K_1), (N, K_2), ..., (N, K_m), used in the source-packets, respectively, where K_i is the raw data size for user i.
Irrespective of whether or not the source-packets are channel coded, each relay performs the same procedure as discussed before: after collecting a retrieval-set, it (randomly) selects a few packets from the retrieval-set, interleaves them using a different pattern for each, computes the parity-stream of these interleaved bit-streams, and forwards it to the destination.
Viewed from the destination, the combination of all the source-packets and relay-packets forms one big network code whose parity check matrix consists of 2mN columns, corresponding to Σ_i K_i raw data bits, and 2mN − Σ_i K_i rows, corresponding to Σ_i (N − K_i) "channel-checks" and mN "network-checks". In the 5-user example, the parity check matrix of the unified network-channel code, H_GANCC, takes the general form in Fig. 3, where π_{i,j} is the (circulant) permutation matrix with which user i interleaves user j's data, and H_i is the parity check matrix of the channel code used in user i's source-packet.
The unified channel-network coding model depicted in Fig. 3 is general. It holds regardless of whether none, some, or all source-packets are channel coded and by what channel codes.
When user i does not employ a channel code, H_i reduces to an identity matrix, which is effectively non-existent from the encoding and decoding perspective.
Three decoding strategies are available for GANCC. The optimal decoder treats H_GANCC as one integrative code and performs joint channel-network decoding. This becomes practical when all the channel codes involved can be individually decoded by the message-passing algorithm, in which case so can the entire channel-network code. Alternatively, a two-level decoding architecture can be employed, where the network code, specified by the lower mN rows of H_GANCC in Fig. 3, is first decoded using the message-passing algorithm, and its soft (probabilistic) outputs are subsequently passed to the individual channel codes for channel decoding. If complexity permits and if the channel decoders produce soft outputs, these soft outputs may be iterated back to the network code for successive refinement, enabling an iterative network-channel decoding architecture.
Note that sequential and iterative network-channel decoding also apply to ANCC, but a truly integrative joint channel-network decoding is possible only with GANCC.
V. CODE DESIGNS
A. Circulant Matrices
Let p_{i,j} be the offset of the circulant matrix that terminal i uses to scramble terminal j's data. Well-chosen p_{i,j}'s are not only storage-efficient, but also ensure a girth of at least 6 for the resulting circulant LDPC code.
Theorem: An (N, s, t) QC-LDPC code has girth ≥ 6 if and only if, for any two row indexes 0 ≤ i_1 < i_2 < s and any two column indexes 0 ≤ j_1 < j_2 < t in the base matrix, p_{i_1,j_1} − p_{i_1,j_2} + p_{i_2,j_2} − p_{i_2,j_1} ≢ 0 (mod N).
Remark:
The proof is not difficult and is therefore omitted. The offset values of the circulant submatrices need not all be different to eliminate length-4 cycles; rather, it is the separation between the offset values that matters. In a conventional QC-LDPC code, every entry in the base code is substituted with a circulant submatrix. In the proposed GANCC protocol, because of channel outage and the deliberate de-selection of certain packets at each relay, some entries are replaced by zero submatrices. This results in sparser QC-LDPC codes, whose matrix sub-divisions are larger than the column/row weights and which generally outperform their denser counterparts [16].
In GANCC, the submatrix size N far exceeds the number of sub-divisions m, making it easy to select p_{i,j}'s that are storage-efficient and achieve girth ≥ 6 at the same time. One possibility, for example, is the array-code style assignment p_{i,j} = i · j (mod N), which satisfies the above condition whenever N is prime.
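The theorem above translates directly into a simple offset check; the following sketch tests a base offset matrix for length-4 cycles (the use of None to mark a zero block is an assumption of this illustration).

```python
from itertools import combinations

def girth_at_least_6(P, N):
    """Check the length-4-cycle condition of the theorem above.

    P[i][j] is the circulant offset of block (i, j), or None for a zero block.
    """
    s, t = len(P), len(P[0])
    for i1, i2 in combinations(range(s), 2):
        for j1, j2 in combinations(range(t), 2):
            offs = (P[i1][j1], P[i1][j2], P[i2][j1], P[i2][j2])
            if any(o is None for o in offs):   # zero blocks cannot close a 4-cycle
                continue
            # p_{i1,j1} - p_{i1,j2} + p_{i2,j2} - p_{i2,j1} = 0 (mod N) => 4-cycle
            if (offs[0] - offs[1] + offs[3] - offs[2]) % N == 0:
                return False
    return True
```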
B. The Base LDPC Code
Now consider the base code in GANCC. Irregular LDPC codes can outperform regular ones, but the degree profile must be designed to match the physical channel, which, in the case of GANCC, is an m-cyclic channel comprising m independently faded segments. Not only is it difficult to optimize the degree profile for such a channel, but the lack of central control also makes it impossible to construct a code with the target column and row degree profiles.
An LDGM or LT-LDPC code is regular if the systematic part of its parity check matrix has uniform column weight (hereafter referred to as the degree) and near-uniform row weight. For a regular LT-LDPC code, the column weight of the lower-triangular part should also decrease proportionally, to preserve a uniform density. Our design below focuses on the edge connections of each individual code (rather than the degree profile of the code ensemble).
Design I: Column Weight Concentration Algorithm
When each relay randomly selects packets from its retrieval-set, it is highly likely that the source-packets receive unequal protection, with some over-protected and others under-protected. This should be differentiated from irregular LDPC codes, whose degree profile is carefully designed to optimize a "wave" phenomenon. Here, the insufficiently protected source-packets are vulnerable to errors and will likely degrade the overall system performance.
The column weight concentration algorithm aims at making the column weights as uniform as possible. The idea is simple and easy to implement: as each user listens to and decodes packets, it also keeps track of the number of checks each packet participates in; when its turn comes, it selects from its retrieval-set the packets that are least protected to form a check. Due to possible inter-user outage, each user has only a partial view of the node degrees of the network code. The resulting code may not be exactly regular, but this simple mechanism effectively eliminates the majority of null-weight or weight-one columns in the H matrix, which are the most harmful to the code performance. We note that the network codes simulated in Figures 4 and 5 are constructed distributedly using this CWC algorithm; otherwise, the performance of individual network codes would vary significantly (especially since the network size is small), with some instances yielding rather disappointing performance.
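A minimal sketch of the CWC selection rule follows; the data structures (a per-packet check counter and a check size d) are illustrative, since the text does not fix an interface.

```python
# The relay tracks how many checks each overheard packet already
# participates in, and XORs the least-protected ones into its new check.
def cwc_select(retrieval_set, check_counts, d):
    """Return the d least-protected packet ids and update the counters."""
    chosen = sorted(retrieval_set, key=lambda pkt: check_counts[pkt])[:d]
    for pkt in chosen:
        check_counts[pkt] += 1        # this relay's check now protects them
    return chosen
```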
Design II: Distributed Progressive Edge Growth Algorithm
For a given degree profile, the progressive edge growth (PEG) algorithm [15] builds a Tanner graph by connecting variable nodes and check nodes edge by edge, such that each added edge has minimal impact on the girth of the graph. An effective tool for maximizing the girth, the PEG algorithm has produced some of the best known codes at short lengths [15].
The PEG algorithm applied to the base LDPC code here differs somewhat from the original proposal [15]. First, the algorithm is run in a distributed rather than centralized manner. A terminal continues to listen and collect information on the growth of the graph structure until it has fulfilled its turn of message forwarding. Each terminal independently constructs a subgraph, to the level of its interest, based on the locally available information, in order to determine how to add edges (i.e. form the parity check). Due to possible channel outage, the graph envisioned by each terminal may differ slightly from the true code graph. Second, because of the lack of global knowledge and central control, the terminals have no knowledge of the resultant degree of any of the variable nodes. Hence the graphs cannot be expanded from the variable nodes as described in [15]; they instead expand from the check nodes. The general idea remains similar, and the process, which is dual to that in [15], is summarized below.
for each check node c_j, in the order of relaying:
    for each edge k = 0, 1, ..., d_{c_j} − 1 to be added to c_j:
        if k = 0:
            add the edge (c_j, v_i), where v_i is a variable node having the lowest estimated variable degree under the current graph setting;
        else:
            expand a tree from c_j up to depth l under the current graph setting, such that the complement set N̄^l_{c_j} ≠ ∅ but N̄^{l+1}_{c_j} = ∅, or the cardinality of N^l_{c_j} stops increasing but is less than the total number of variable nodes;
            add the edge (c_j, v_i), where v_i is one variable node picked from the set N̄^l_{c_j} having the lowest estimated variable node degree;
        end
    end
end
C. Extended Circulant LDGM Codes
As discussed in Section III, in some scenarios the resulting network code may take the form of an LDGM code. Due to the large number of weight-1 columns, whose outbound reliability information never gets updated, (circulant) LDGM codes produce a higher error floor as well as a worse waterfall region than LT-LDPC codes. To alleviate the negative impact of the weight-1 columns, we propose the extended circulant low-density generator-matrix (EC-LDGM) codes.
The only difference between the new EC-LDGM code and the circulant LDGM code discussed before is an additional differential encoding process. Upon its turn to relay, a terminal first performs the same process as before: it selects data-packets, circularly shifts these bit-streams, and XORs them bit-by-bit to obtain the bit-stream of the parity-packet. Then, instead of sending this bit-stream {x_i} as is, the relay sends the differentially encoded version {y_i}: y_0 = x_0, and y_i = x_i ⊕ x_{i−1} for i = 1, 2, ..., N − 1. As illustrated in Fig. 2(C), this is reflected in the parity check matrix by the right diagonal blocks now having zigzag patterns instead of a single line on the main diagonal.
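The differential encoding step is a one-line operation on the parity-stream, sketched below with an illustrative input.

```python
import numpy as np

# EC-LDGM differential encoding of the parity-stream x:
# y_0 = x_0 and y_i = x_i XOR x_{i-1} for i >= 1.
def differential_encode(x):
    y = x.copy()
    y[1:] ^= x[:-1]
    return y

x = np.array([1, 0, 1, 1, 0], dtype=np.uint8)
y = differential_encode(x)            # -> [1, 1, 1, 0, 1]
```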
VI. PERFORMANCE ANALYSIS
To predict the performance of GANCC using LDGM, LT-LDPC and EC-LDGM codes, we conduct theoretical analysis using density evolution [18]. To make the analysis tractable and reflective (primarily) of the role of network coding, we assume the source-packets have no channel coding.
A. The General Formulation
Consider a block fading scenario. Let the number of spatially independent channels m be fixed to some finite value, and let the size of each uncoded data-packet N increase without bound. The overall network code then has length 2Nm → ∞. Each codeword is transmitted over m independent Gaussian channel realizations with noise variances σ²(t), t = 1, 2, ..., m.
In density evolution, it is an accepted practice to assume that the log-likelihood ratios (LLRs) extracted from the individual Gaussian channels, as well as those exchanged between the different types of variable and check nodes, all follow Gaussian densities. This Gaussian approximation was initially proposed as a convenient, largely pragmatic tool for the iterative analysis of LDPC codes [19]; recent research has provided solid statistical support for its accuracy [17]. Combined with a symmetry condition, the Gaussian approximation leads to a particularly useful result: the variance of the Gaussian LLRs equals twice their mean value [19]. Hence, the evolution of the LLR messages and the corresponding error probabilities can be characterized by a single parameter, the mean of the LLR messages.
Let u_0(q), q = 1, 2, ..., m, be the LLR messages extracted from the qth Gaussian channel, where u_0(q) ∼ N(µ_{u0}(q), 2µ_{u0}(q)) and µ_{u0}(q) = 2/σ²(q). Let u_0 be the average of the u_0(q)'s, averaged over the distribution of the edges in the code graph that are associated with u_0(q). Let u^{(l)} and v^{(l)} be the LLR messages passed from variable nodes to check nodes, and from check nodes to variable nodes, in the lth iteration, respectively. Further, let µ_{u0}, µ^{(l)}_u and µ^{(l)}_v be the respective means of u_0, u^{(l)} and v^{(l)}. Since the codeword experiences different Gaussian channel realizations and since the network code is an irregular sparse-graph code, the messages generated at each full or half decoding iteration are in fact Gaussian mixtures. Following the convention of density evolution, we assume that the messages passed across the edges from one type of node to the other are independent and follow the same Gaussian distribution.
Let d_v and d_c be the maximum degrees of the variable nodes and check nodes. Let λ(x) = Σ_{i=1}^{d_v} λ_i x^{i−1} and ρ(x) = Σ_{j=1}^{d_c} ρ_j x^{j−1} be the edge-perspective degree profiles of the variable nodes and check nodes, respectively, where λ_i and ρ_j are the percentages of edges connecting to variable nodes of degree i and to check nodes of degree j. Following the standard density evolution procedure for irregular sparse-graph codes [19], we get

µ^{(l)}_u(i) = µ_{u0} + (i − 1) µ^{(l−1)}_v, (3)

µ^{(l)}_v = Σ_{j=1}^{d_c} ρ_j Ψ^{−1}( [ Σ_{i=1}^{d_v} λ_i Ψ(µ^{(l)}_u(i)) ]^{j−1} ), (4)

where Ψ(µ_x) = E[tanh(x/2)] and x is a Gaussian variable with mean µ_x and variance 2µ_x. To simplify computation, one may use the approximations Ψ(µ_x) ≈ 1 − exp(−0.4527 µ_x^{0.86} + 0.0218) [19] or Ψ(µ_x) ≈ 1 − exp(−0.432 µ_x^{0.88}) [17], where the former works for 0 < µ_x ≤ 10 and the latter works for the entire region µ_x > 0.
At the end of the $l$th iteration, the total LLR messages associated with the variable nodes having degree $i$ and transmitted through the $q$th channel have a mean value $\mu^{(l)}(i, q)$, equal to the channel LLR mean plus the means of the $i$ incoming check-to-variable messages. The error probability associated with degree-$i$ variable nodes on the $q$th channel is $p_e^{*}(i, q) \approx Q\big(\sqrt{\mu^{(l)}(i, q)/2}\big)$. Averaging over all the $m$ channel realizations and all the variable nodes, we get the error probability of this LDPC code ensemble, where $\xi'_i$ is the percentage of systematic variable nodes (from the node perspective) having degree $i$, and $\sum_{i=2}^{d_v} \xi'_i = 1$. Finally, the average error probability on block fading channels is the expectation over all the cooperation rounds, each round associated with a set of channel realizations $(\sigma^2(1), \sigma^2(2), \ldots, \sigma^2(m))$ drawn from their respective Rayleigh fading distributions. Remark: The error rate formulated here estimates the ensemble-average performance of $(2mN, mN)$ LDPC codes ($N \to \infty$) over the so-called $m$-cyclic block fading channels. The joint network-channel codes we designed for GANCC are circulant LDPC codes, which constitute a subset of the general code ensemble. Here we take the results for the general ensemble to approximate those of the circulant subensemble. Although not entirely accurate, the difference will be very small, since (i) circulant LDPC codes are shown to perform on par with randomly generated LDPC codes of the same degree profile, and (ii) at very large block sizes, a concentration rule holds such that almost all the codes in the ensemble perform very close to the ensemble-average performance.
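The averaging step can be sketched as follows. This is a hedged illustration: the data-structure choices and function names are assumptions, while the formula $Q(\sqrt{\mu/2})$, the node-perspective weights $\xi'_i$, and the uniform averaging over the $m$ channels follow the text.

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ensemble_error_prob(mu_total: dict, xi_prime: dict, m: int) -> float:
    """
    mu_total[(i, q)]: total LLR mean at degree-i systematic nodes on channel q;
    xi_prime[i]: node-perspective fraction of systematic nodes of degree i.
    Averages Q(sqrt(mu/2)) over the degrees and the m channel realizations.
    """
    return sum(
        xi_prime[i] * q_func(math.sqrt(mu_total[(i, q)] / 2.0)) / m
        for (i, q) in mu_total
    )
```

Averaging this quantity once more over random draws of $(\sigma^2(1), \ldots, \sigma^2(m))$ from the Rayleigh fading distribution would yield the block-fading average described above.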
We now evaluate the three classes of circulant codes proposed for GANCC: LDGM codes, LT-LDPC codes, and EC-LDGM codes.
B. LDGM Codes
Circulant LDGM codes, shown in Fig. 2(B), are a special class of irregular LDPC codes whose analysis fits the general model discussed before. Consider the ensemble of rate-1/2 degree-$D$ LDGM codes. The check nodes have uniform degree $D+1$, and the variable nodes have degrees $D$ and $1$, corresponding to the systematic bits and the parity check bits, respectively.
Hence the variable and check degree profiles are $\lambda(x) = \frac{1}{D+1} + \frac{D}{D+1}\,x^{D-1}$ and $\rho(x) = x^{D}$. Since all the systematic variable nodes have degree $D$, the degree distribution $\xi'(x)$ becomes $\xi'(x) = x^{D}$, i.e., $\xi'_D = 1$. Gathering (3)–(7) and inserting these degree profiles ((8), (9) and (10)), we get the ensemble-average BER of the LDGM codes.
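Assuming only the node degrees stated above (systematic nodes of degree $D$, parity nodes of degree $1$, checks of uniform degree $D+1$), the edge-perspective profiles can be tabulated mechanically. The short sketch below illustrates that bookkeeping; it is our derivation from the stated structure, not a reproduction of equations (8)–(10).

```python
def ldgm_degree_profiles(D: int):
    """
    Edge-perspective profiles for the rate-1/2 degree-D LDGM ensemble:
    mN systematic nodes of degree D, mN parity nodes of degree 1,
    checks of uniform degree D+1.  Assumes D >= 2.
    """
    # Total edges per mN-node pair: mN*D (systematic) + mN*1 (parity).
    lambda_ = {1: 1.0 / (D + 1), D: D / (D + 1)}  # edge fractions by variable degree
    rho = {D + 1: 1.0}                            # all checks have degree D+1
    xi_prime = {D: 1.0}                           # all systematic nodes have degree D
    return lambda_, rho, xi_prime
```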
C. LT-LDPC Codes
Following the same argument, (circulant) LT-LDPC codes, shown in Fig. 2(A), can be evaluated by inserting the appropriate degree profiles into (3)–(7). For rate-1/2 circulant LT-LDPC codes with degree $D$ and a balanced density in the parity check matrix, the corresponding (ideal) degree profile follows directly.
D. EC-LDGM Codes
In circulant LDGM and LT-LDPC codes, each $N \times N$ block has only one nonzero entry per row, so the bits participating in any one check experience different channel fades. In an EC-LDGM code, as shown in Fig. 2(C), the two parity bits coming from the same block experience the same channel fade, and the message exchange between these parity bits is confined to the same block.
Thus, one needs to track separately the LLR density of the $mN$ systematic variable nodes (collectively denoted $R_s$) and that of the $mN$ parity variable nodes of the network code.
Consider the parity checks contributed by the $t$th user. Let $\mu_{u_s}(t)$ and $\mu_{u_p}(t)$ be the respective mean LLR values passed from the systematic variable nodes and the parity variable nodes to the check nodes at the $t$th user, and let $\mu_{v_s}(t)$ and $\mu_{v_p}(t)$ be the respective mean LLR values passed from the check nodes to the systematic and the parity variable nodes at the $t$th user. As the decoding process proceeds from the $(l-1)$th iteration to the $l$th iteration, these LLR means evolve according to the update equations (14)–(17), where $\bar{\mu}_{u_s}^{(l)}(j)$ is the average LLR mean, averaged over all the checks from all the $m$ users. Gathering (14), (15), (16) and (17) yields the overall recursion, where $\lambda_s(i)$ is the number of edges connecting to systematic variable nodes of degree $i$ divided by the number of edges connecting to all systematic variable nodes.
Finally, the error probability at the end of the $l$th decoding iteration is estimated similarly to that of the circulant LDGM and circulant LT-LDPC codes, where $\mu_{v_s}^{(l)}(i, t)$ is the mean LLR value computed by the degree-$i$ systematic variable nodes associated with the $t$th user.
VII. EXPERIMENTAL RESULTS
We conduct extensive simulations to verify the efficiency of the proposed GANCC framework and the different code construction methods. Fig. 6 compares the ensemble-average performance of the circulant LT-LDPC network-channel codes constructed using the CWC algorithm and the DPEG algorithm. We evaluate $m = 5$ and $m = 10$ terminals on block Rayleigh fading channels. All the packets (symbols) have 1000 raw data bits without channel coding. Whereas the CWC algorithm is simpler, the DPEG algorithm performs better, providing about 1 dB of additional gain at a BER of $10^{-5}$ in both cases.
We also simulate the different code ensembles, circulant LDGM codes, extended circulant LDGM (EC-LDGM) codes, and circulant LT-LDPC codes, and compare them against the theoretical results obtained by the density evolution method. To get long codes, here we extend each source-packet to $N = 5000$ uncoded bits, but keep the number of users the same: $m = 5, 10$. The results, plotted in Fig. 7 and Fig. 8 respectively, show that the simulations match the theoretical analysis fairly well, with a gap of no more than 0.5 dB between them. As expected, the LDGM ensemble performs the worst among the three code ensembles; the EC-LDGM ensemble performs about 2 dB better, and the LT-LDPC ensemble an additional 1-2 dB better.
Whereas the slow fading case is where user cooperation becomes most useful, for completeness we also examine the same simulation setup in an IID fading scenario. The performance of the codes constructed using CWC is plotted in Fig. 9. It should be noted that since the Gaussian approximation is less accurate on IID fading channels, the analytical results provided here are less reliable. This may explain the relatively large gap of about 2-3 dB between the simulation and the analytical results; it is reassuring, however, that both exhibit the same qualitative trends. First, the circulant LDGM ensemble always performs worst, due to the large number of harmful weight-1 columns in the code. Second, the circulant LT-LDPC ensemble has a rather obvious error floor in its BER curves, due to the many weight-1 columns in the last one or few circulant blocks of the parity check matrix. Third, the EC-LDGM ensemble shows only waterfall behavior in the BER region of interest, and thus becomes the best-performing code ensemble in the IID fading scenario.
VIII. CONCLUSIONS
We have investigated distributed and adaptive wireless user cooperation in a multiple-sender single-destination scenario. Unlike other approaches that use fixed network coding schemes and therefore rely on the ideal assumption of no link outage, the adaptive network coded cooperation (ANCC) protocol developed in [8] cleverly constructs adaptive sparse-graph network codes to match the constantly changing network topology. Since ANCC alone is best suited for large networks, this paper extends ANCC to generalized ANCC, or GANCC, by integrating adaptive network coding with channel coding in the framework of circulant LDPC codes. Through code design, theoretical analysis and computer simulations, we show that GANCC achieves impressive coding performance even when there are only a limited number of cooperating terminals.
"year": 2011,
"sha1": "a20c170ec183643b984ad76eb4a0f4523a5d821e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1002.3629.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b98a4e08c84095b23ec114efe45d9ee0cd110e0a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Didactic Sequence: A Dialectic Mechanism for Language Teaching and Learning
In pre- and in-service mother and additional language courses in Brazil, the concept of the didactic sequence has been largely studied as a theoretical and methodological procedure for language teaching and learning, since it focuses on the work with oral and written texts from a genre-based perspective. This text aims to extend this concept by taking into account its dialectic nature. To do so, the article reviews the definition, theoretical choices, characteristics, and modular structure of a didactic sequence, comparing this procedure to the concept of a dialectic methodology of knowledge construction in the classroom, and presents a brief analysis of a didactic sequence plan.
Introduction
In pre- and in-service mother and additional language courses in Brazil, the concept of the didactic sequence (DOLZ; SCHNEUWLY, 2004; DOLZ; NOVERRAZ; SCHNEUWLY, 2004; SCHNEUWLY; DOLZ, 1999) has been largely studied as a theoretical and methodological procedure for mother and additional language teaching and learning (for instance, MACHADO, 1997; 2004; CRISTOVÃO et al., 2006; CRISTOVÃO, 2008 and followers), since it focuses on the work with oral and written texts from a genre-based perspective. The Didactic Sequence (DS) is also regarded as an instrument of teacher development (DENARDI, 2009), because it can contribute to the construction of the teacher's knowledge base (RICHARDS, 1998), thus developing the teacher's contextual, linguistic, and pedagogical knowledge dimensions.
Taking into account the importance of planning a DS to be applied in language classrooms for the enhancement of language teaching and learning processes, this text aims to review and extend the concept of the DS by viewing it as a dialectic mechanism for teaching and learning. To do so, the article reviews the definition, theoretical choices, characteristics, and structure of a DS; compares this procedure to the concept of a dialectic methodology of knowledge construction in the classroom (VASCONCELLOS, 2002); and presents, as an example, a brief analysis of a DS plan for the genre advice letter, built by a group of public school teachers and guided by the author of the present text in an English teacher education course in Southwest Paraná in 2007. Although the DS was planned in 2007, it was chosen to be briefly analyzed here because of its context of production and authenticity.
Reviewing the concept of the Didactic Sequence
According to the researchers from the group of Didactics of Languages at UNIGE-Geneva/Switzerland (DOLZ; SCHNEUWLY, 2004; DOLZ; NOVERRAZ; SCHNEUWLY, 2004), who follow the methodological and theoretical perspective of Sociodiscursive Interactionism (SDI) (BRONCKART, 2003), a DS refers to a set of planned classroom activities that aims to construct oral and written knowledge and focuses on a specific genre. Dolz and Schneuwly (2004, p. 51) define a didactic sequence as: "…a sequence of teaching modules, conjointly organized to improve a given language practice. Didactic sequences establish the first relationship between a project of appropriation of a language practice, and the instruments that facilitate this appropriation. From this perspective, they intend to confront learners with the historically constructed language practices, namely, textual genres, so that learners are given an opportunity to reconstruct these practices, and consequently, appropriate them" [authors' emphasis, my translation].
Thus, since a DS consists of the planning of units and activities on genre-based teaching, mother and additional language teachers may take it as an important theoretical-methodological instrument for English language teaching and learning.
For the aforementioned authors, the learners' reconstruction of "historically constructed language practices" [my translation] or "genres" (p. 51) occurs through three interrelated factors, which I present as follows.
Before mentioning the three main factors, it is important to point out that language practices are socially and historically developed, and are materialized in "forms of genres". Therefore, genres can be seen as instruments of mediation for the teaching of textuality, that is, micro and macro text structures. Based on this view and metaphorically speaking, genre can be used as a (mega) instrument (DOLZ; SCHNEUWLY, 2004, p. 52) to act in language classroom situations. Corroborating this view, Machado (2004) states that genre development is an essential tool for socialization, since it makes it possible for individuals to take part in different human language activities.
The first factor refers to the Characteristics of Language Practices, or, in Bakhtin's (2004) perspective, the Characteristics of the Discursive Genre, comprised of thematic content (theme), verbal style (the speaker/writer's point of view and values, for example), and composition (the textual structure that belongs to a specific genre). Dolz and Schneuwly (2004) state that this is the basis on which to organize the activities required for genre teaching.
The second, Language Capacities, refers to the abilities learners should have to understand or produce a specific genre in a specific situation of interaction. These abilities refer to the action, discursive, discursive-linguistic, and signification capacities. I will review them all in subsection 1.1 below.
Finally, the third, Teaching Strategies, refers to some teachers' interventionist strategies, which are used in educational contexts to foster both the comprehension of a certain genre and the comprehension of the communicative situation in which the genre takes part, as well as the strategies to be used in order to improve genre production by the students.
Moreover, Dolz, Noverraz and Schneuwly (2004) highlight some important theoretical choices that should be taken into account when the procedure of the DS is put into practice:
1. pedagogical choices: represent a possibility of formative evaluation; motivate students to produce oral and written texts; open space for students to construct knowledge about the language capacities and genres by means of different activities and instruments.
2. psychological choices: the procedure of a DS aims to enhance students' text production, focusing on the representation of a communicative situation, content knowledge, and text organization, and to make them aware of their own language behavior. To accomplish this, different language instruments and activities are provided to students, such as rules for rewriting a text and for checking text content knowledge, and specific elements/forms for writing argumentations, etc.
3. linguistic choices: linguistic elements are used to guide students to produce texts and discourses, which, in turn, are the object of the procedure of a DS in terms of the teaching of listening, reading, speaking, and writing.
4. general finalities: some main objectives of DSs include: a) to prepare students to communicate, orally and in writing, in different situations, by means of efficient instruments; b) to make students aware of their own language, or an additional language, and of how they can act with language, by means of formative and summative evaluations; c) to make students build a representation of written and oral activities in complex communicative situations as a result of a classroom task.
Language capacities as an interrelated language system
From an SDI point of view, language capacities are defined as a set of operations that allows the accomplishment of a specific language action. Language, in turn, is seen by Vygotsky (1986) as a tool that mediates knowledge construction; that is, spontaneous knowledge is turned into scientific knowledge by means of language. Dolz and Schneuwly (2004) distinguish three types of language capacities that learners should have to understand or produce a specific genre in a particular situation of interaction, which are described below: a) action capacity refers to the genre itself. According to Bronckart (2003, p. 99), language action can be understood at both the sociological and psychological levels. At the first level, it is possible to analyze the parameters of the context in which the action takes place, whereas at the second, it is possible to identify the values the agent/writer attributes to the elements of the text's context of production, as well as to its content.
In other words, action capacity involves the understanding of the relation between thematic content and the text's context of production, as well as the writer's intentions or purposes. Thus, action capacity constitutes and is determined by: a) the parameters of the context of production and reading: the physical and socio-subjective aspects of these contexts (interlocutors, place, and time determiners); b) the content of the language action: what is said; and c) the purpose of the communicative language action: the reasons to say what is to be said, and the expectations of the saying;
b) discursive capacity refers to the way the language action is organized, involving the understanding of the textual macro-structure, the types of discourse, and the types of sequence used during the language action;
c) discursive-linguistic capacity involves the understanding of an adequate use of discursive-linguistic units, or any linguistic or semantic language resource available in a specific language system, which contributes to making the language action real in terms of style and form. More specifically, in relation to textual analysis, the elements of the discursive-linguistic language capacity refer to textual and enunciative mechanisms (for example, cohesive devices and modalizers, among others) used to create the whole meaning of the text.
In a recent study, Cristovão and Stutz (2011, p. 22) reviewed the categories of analysis of each language capacity, as well as coined a fourth language capacity, the signification capacity. As its name implies, its main objective is to extract and to "build sense (of the text) based on representations and/or knowledge about social practices" [my translation and addition in parentheses]. Therefore, such capacity comprehends the relation between the text and the different contexts of social practices, that is: "ideological, historical, sociocultural, economic, etc." [my translation]. In other words, by means of the signification capacity, the student can understand the relationship between texts and the authors' forms of being, thinking, acting, and feeling, as well as construct a point of view concerning these relationships (p. 23). Consequently, by means of work conducted with this capacity, the teacher can guide the student towards developing criticism of the local and global events, situations, etc. that surround him/her.
Still in relation to language capacities, Cristovão (2007) compares the way language capacities function in a text to a set of toothed wheels working together in a system of activities within a system of genres. This comparison makes it possible to understand the intrinsic system of language capacities, that is, one capacity contributes to the functioning of the other and vice versa. However, it is important to highlight that, for the specific purposes of teaching and text analysis, language capacities and their constitutive elements can first be split and discretely analyzed, and should subsequently be observed as a whole, since in an empirical language situation they are intrinsically related and linked.
In short, a DS aimed at the teaching of genres can be seen as an interventionist classroom practice in which teachers can use specific strategies in order to guide students to understand complex communicative events and activities. To accomplish this, teachers have to decompose the events/activities and work, one by one, on the problems students face when they study a specific genre. This set of activities, in turn, guides the teachers' work towards the object of knowledge: the genre. Moreover, through the application of DSs in language classrooms, it is possible to develop the learners' linguistic and discursive-linguistic capacities to work with texts that belong to particular genres (CRISTOVÃO, 2002). In addition, by means of the action and signification capacities, students can develop a critical understanding of things, situations, and events, as well as expand their world knowledge.
As seen above, the construction or reconstruction of knowledge about genres can occur through the teaching of genre characteristics using a DS procedure, which takes into account students' language capacities and difficulties concerning the genre to be studied. In this regard, Dolz, Noverraz and Schneuwly (2004) emphasize that the purpose of a DS lies in promoting students' access to new and difficult genres in order to master language practices. Therefore, school work should be done with genres that students do not master or have insufficient knowledge about; with genres of difficult access for the majority of students; and with public genres. In other words, the authors seem to refer to the apparently common genres that circulate in specific spheres of communication, for example, in the school context: work with summaries, oral exposition, debate, etc.
With those purposes in mind, the DS procedure was first designed specifically for the teaching of writing. However, as already stated, a DS corresponds to a set of organized school activities to teach oral and written text genres. A DS for writing consists of a modular structure. This structure starts with the presentation of a communicative situation, followed by the initial production; the modules, constituted by rewritings and additional information/ideas; and the final production. The initial production functions as a diagnosis of the learners' capacities and necessities related to the studied genre. That is, from the first production, teachers can gain access to what students already know about the genre and then build or propose activities so that they can develop new knowledge. Thus, the first production is the starting point of formative evaluation. During the process, teachers can build, or co-construct together with the students, checklists, which can be seen as dialogic mechanisms teachers use to guide students to master the genre they are learning, or as a tool teachers and students themselves can use to evaluate text production. At the end of the writing task, teachers can compare the first and last productions to check the learners' language development in relation to the four language capacities and their knowledge of the specific genre as a whole.
In the following section, I will report on an analysis carried out by comparing each step of a DS to the concept of a dialectic methodology of knowledge construction in the classroom (VASCONCELLOS, 2002).
By comparing these three phases of knowledge construction in the classroom (mobilization, knowledge construction, and synthesis) to the modular structure of a DS, it is possible to say that the first step of a DS refers to the presentation of a communicative situation. It serves both to show students that a writing project will be carried out in the classroom, which will result in a final text, and to prepare them to write the first version of a specific genre, which will be retaken in the modules. This first step of a DS can be related to the phase of mobilization for knowledge construction in a dialectic classroom methodology (VASCONCELLOS, 2002), because it helps students to link the content to be learned with their language practices by making the content meaningful and by challenging them to build a representation of the communicative situation of the activity. In other words, the presentation explicitly confronts students with a problem of communication by challenging them to act in a situation in which they should produce an oral or written text about something meaningful for them.
The initial production can be characterized as a first practice with the object of study, that is, what is called praxis in a dialectic methodology. This first practice requires students to act on the object of study (a text that belongs to a specific genre) in order to make knowledge articulation possible within the students' social practices. As a result, a DS allows students and the teacher to formulate a diagnosis of the students' capacities and difficulties with the specific genre. In particular, it allows the teacher to initiate the process of formative evaluation. In other words, the teacher can have a clear view of the students' knowledge about the language, thematic content and genre organization, and then decide from what point to start the teaching process and how far to deepen it. Thus, knowledge is constructed from what dialectic methodology calls a critical view of reality, since the initial production makes it possible to have a diagnosis of the students' capacities and necessities and their determinations in relation to the genre being studied. In other words, the process departs from the students' syncretism about the object of study towards the construction of a qualitatively higher level of knowledge; that is, it installs a movement of rupture with the syncretic view of knowledge the students have, as well as a movement of continuity with the new elements/knowledge explicitly explained by the teacher to be applied in the students' written or oral productions. This movement of knowledge construction starts in the initial production, but it should be accomplished in all modules of a DS.
The problems observed through the analysis of the initial oral or written productions are worked on one by one in the modules of a DS. Different activities, new readings and research about the thematic content, grammar exercises, rewritings, text analysis related to the language capacities, and checklists with the elements that constitute the specific genre are some of the pedagogical instruments used in the modules. In relation to text analysis, texts are also analyzed in terms of contexts of production and textuality, which involves the process of knowledge construction from contextual relations (historicism), in which the physical and socio-subjective parameters of text construction are analyzed and confronted with the current socio-historical situation. Another aspect to be compared between the procedure of a DS and the dialectic methodology refers to the movement from the complex to the simple parts in the beginning, and then from the simple parts, by means of the study of language capacities, back to the complex: the final text of a DS. This dynamic movement represents the rupture with the view students held before the study of the genre in favor of the knowledge they could build during the process, giving continuity, or enabling them to study and get to know other genres. The final text represents the synthesis of the process, as seen below.
As the final stage of a DS, the final production provides students with the opportunity to use the knowledge constructed during the modules. This construction presupposes that students can make important relations within the complexity of the content knowledge/language capacities they were exposed to during the whole development of the DS. In this perspective, the whole and the parts are articulated by means of the internal connections of the elements that determine the final text. Therefore, it represents the synthesis of the whole process. Moreover, the final production allows teachers and students to observe the students' development/improvement in text production, and thus serves as an instrument for summative evaluation. In turn, the entire process of the students' text production can be observed and checked as a process of writing development, which constitutes formative evaluation.
In the next section, as an illustration, I will present a brief analysis of a DS plan for the "advice letter" genre.
Analysis of a DS plan applied to the Advice Letter genre
Before starting the analysis, it is important to mention something about the DS plan's context of production. The DS plan chosen to be briefly analyzed here was built by a group of Basic Education public school teachers in a 60 hour/class course of ongoing education in a Southwestern town in Paraná in 2007. In the course, 10 participant teachers were taught about Didactic Transposition (DT), which in turn involves the processes of building a Didactic Model (DM) and a Didactic Sequence (DS).
The teachers were provided with some theoretical explanations about the whole process of DT and, later on, with Didactic Models of some genres to study, such as: biography, autobiography, comic strips, letter to the editor, advice letter, fairy tale, etc. They were asked to organize themselves in small groups of 3 or 4 and to choose one of those genres to study. The teachers organized themselves into 3 groups: 1 group chose the advice letter genre, while the 2 remaining groups chose the fairy tale genre. With the chosen DMs in hand, the participants could build their DS plans to be taught to their students in real English classes.
Due to space constraints, as well as our focus in this article (to discuss the DS as a dialectic mechanism of knowledge construction in the classroom), I will only present part of the analysis of a DS plan for the writing of an advice letter: the DS structure analysis. It attempts to show the interrelation/articulation between the activities related to the language capacities and the dialectic process of knowledge construction in the classroom.
The DS plan for the textual written production of an advice letter is addressed to 14 to 17-year-old students who attend regular school in Brazil. It proposes that the activity of writing advice letters should be carried out by means of a first production, rewritings and a final production, in which, in between the tasks, students can be guided, by means of the teacher's and peers' orientations on the letters and the use of checklists, to construct knowledge of writing by means of the genre. This can be observed in the tables below, in which excerpts related to objectives, as well as classroom procedures and activities, were extracted from the plan (APPENDIX 1) to be analyzed.
For the purpose of analysis, the DS plan is fragmented and shown in Tables 1, 2, 3 and 4 below. All tables present the same introduction, that is: the theme, genre, and general objective of the DS. However, each table presents different procedures and descriptions of activities related to the steps/modules it represents in the DS. Table 1 presents excerpts from the steps of theme presentation, situation presentation and first production. (A complete analysis of this DS plan can be found in the doctoral dissertation "Flying together towards EFL teacher development as language learners through genre writing" (DENARDI, 2009, Chapter VIII). Besides providing a description of the context and the process of construction of the DS by a group of public school teachers, that analysis includes aspects related to: a) the adequacy of the thematic content to the objectives, activities and methodological procedures of the DS; b) the adequacy of the elements to be taught related to the three language capacities proposed in the didactic model (CRISTOVÃO et al., 2006); and c) the adequacy to ten guidelines for the teaching of writing in EFL classes proposed in Chapter III of the same dissertation. The complete DS plan is given in APPENDIX 1.) General Objective: To ask for a piece of advice and obtain answers.
Specific objective: To give students the opportunity to perceive that everybody has problems and conflicts in his/her life and that through the advice letter they can receive help for their own problems and conflicts.
Modular Structure / Specific Procedures and Description of Activities. Theme Presentation and Warm-up activity: 1- The students will watch and discuss the first scenes of the film "Message in a Bottle" (Mandoki, 1999), which refers to the production of a personal letter that was afterwards published in a newspaper and had a great repercussion in its local community. Oral questions about the experience of receiving letters, as well as questions related to the content/purpose and the context of production of the letter from the film, will be asked to the students to guide the oral discussion.
Situation Presentation: 1- The students will be told that they are going to produce an advice letter, so they should think about some doubts or problems they have, break into pairs, and discuss them. 2- The students will also be required to write and address the letter to the teacher, who will answer it and give the students some advice.
First Production: 1- In pairs, the students will be asked to write a letter of advice. 2- On finishing the task, the students should hand in their first versions to the teacher. 3- At home, the teacher can observe the letters in terms of the students' limitations and capacities related to the three language capacities.
As seen from Table 1, when the teacher presents students with the theme and the communicative situation of producing an advice letter, s/he involves the students and mobilizes their previous knowledge about the theme and the object of study, as well as their language capacities. Thus, the teacher performs the first step of dialectic knowledge construction in the class, that is: knowledge mobilization.
In turn, in the first production, the praxis, the teacher requires students to act on the object of study, the writing of an advice letter, expecting to obtain a diagnosis of the students' capacities and difficulties related to the elements or aspects of the language capacities that belong to the advice letter genre. This leads to the process of formative evaluation. As already explained in this article, the process of knowledge construction departs from the students' syncretism about the object of study to produce a qualitatively higher level of knowledge, in a movement from rupture to continuity of knowledge construction, which starts in the initial production and should be accomplished in all modules of a DS.
Table 2 below shows the description of the work with the action capacity and the capacity of signification.
Specific Objective
To give students the opportunity to perceive that everyone has problems and conflicts in his/her life and that through the advice letter they can receive help for their own problems and conflicts.
Modular Structure Specific Procedures and Description of Activities
Module 1: Working on Action Capacity and Capacity of Signification: 1- The authentic advice letter below will be brought by the teacher to the class as a reference text.
Dear Betty, I have a crush on this popular guy in my school. He's so cute and funny. OK, so he gave me his phone number on the last day of school (he saw that a friend of mine wrote that I liked him). But the problem is that 1) I'm not really popular; 2) He has a girlfriend; 3) I never had classes with him, so he doesn't really know me; and 4) I'm scared of calling him; I think he'll make fun of me or something. Please, can you help me? What should I do? ~Lexy
Source: http://www.beatboxbetty.com/dearbetty/dearbetty/dearbetty.htm. Access on: October 25th, 2007.
2- Together with the students, the teacher will discuss the thematic content, the production context, other related contexts (family, social, for example) and the purpose of the reference letter. 3- While analyzing the reference letter (Lexy's), the teacher and students together will elaborate a list (the first checklist) of elements related to action capacity for students to use when writing up their own letters. The list could contain the addressee's name (the teacher's name), the exposition of the problem, personal conflict or doubts, the request for help, and the names of the authors at the end of the letter (see the analysis in APPENDIX 2). 4- The teacher will give the students back their first versions of the letter and they will rewrite them by means of the collaboratively built checklist. 5- On finishing the analysis, the students will hand the new versions of their letters to the teacher. 6- At home, the teacher will again take notes in his/her notebook about the students' progress.
Table 2 shows the teacher's intention to use an authentic advice letter (Lexy's letter), taken from an internet site, as a reference text in order to construct knowledge related to the action capacity and the capacity of signification present in advice letters. Next, students will review the first versions of their letters, guided by the use of the checklist they constructed collaboratively.
As seen, in the work with the action capacity, the teacher continues the process of constructing knowledge by having students rewrite their advice letters in a real communicative language action. Or, in the participants' words: "Letters will be written with a real purpose, in a social context that is authentic for the students and addressed to the 'advisor', that means to the teacher, who will read and answer the letters giving advice" (Participants' report).
Passing to Table 3, there is a description of the work with the global plan and the discursive capacity. 2- The students and teacher will collaboratively build a checklist of the moves of the advice letter genre, as shown in procedure number 1 above. 3- The teacher will hand students another checklist (a list of words and expressions regularly present in advice letters: Dear…/ I'm…/ Can you help me?/ What should I do? etc.), so that they can revise and rewrite their texts again. 4- On finishing the analysis, which will be followed by the text rewritings, the students will hand the new versions of their letters to the teacher again. 5- The teacher will again take notes in his/her notebook about the students' progress.
As seen in Table 3, the teacher will explore the discursive capacity through reading comprehension of the reference advice letter and the students' letters (the students' letters could be written in Portuguese or in English, depending on the students' knowledge of and feelings in relation to the English language), and through the building of a checklist with the phases/rhetorical moves of an advice letter. This discursive analysis provides students with an understanding of the types of discourse present in the advice letter genre, which are interactive and interactive report, and of the structure of the predominant types of sequence, which are descriptive and explicative, although it is not necessary to explicitly mention these terms to the students.
Here, again, the knowledge construction about language and the advice letter genre continues to constitute a moment of analysis of the object of study, i.e., the analysis phase in the dialectic theory of knowledge construction in the classroom.
Next, Table 4 presents the description of procedures and activities related to the discursive-linguistic capacity. Specific objective: To make students observe and learn specific vocabulary and some grammar points in the context of an authentic published advice letter. Module 3: Working on the Discursive-linguistic Capacity: 1- The teacher will show students, or ask them to search for, specific vocabulary and expressions in the reference letter (Dear…/ I'm…/ she's…/ scared of…/ call…/ Can you help me?/ What should I do?). 2- Using the reference text again (Lexy's letter), the teacher will show students the structure of a statement of the problem: subject pronoun (I, he, she, they) + is/have/has + a problem (he's not popular at school / he's already got a girlfriend / she's afraid of calling him), as well as the syntactic structures of an advice requirement with the modal verbs 'should' and 'can': Should + I/he/she/they + do? (What should I do?); Can + you/he/she/they + help + me/them/us (Can you help me?). In addition, the teacher can teach other grammar content, such as subjective and objective pronouns, for example. 3- The teacher will hand the students the lyrics of the song "Mr. Postman" (The Carpenters, 1975), since it evokes the activity of writing or receiving a letter. The lyrics of Mr. Postman show how a person anxiously waits for a postal worker, hoping to receive a letter from his/her girl/boyfriend. By pointing to some sentences and syntactic structures in the song, and through sentences extracted from the students' own text productions, the teacher will assign students some grammar exercises and provide explanations on verbs in the present and past tenses, modal verbs (should, can), pronominal anaphora, etc. 4- The students will revise their letters by means of the checklists already built in the classroom or provided by the teacher, or even by using a grammar book. Afterwards, they will rewrite the letters for the third time. 5- On finishing the rewritings, the students will hand in the third versions of their letters to the teacher. 6- The teacher will again take notes in his/her notebook about the students' progress, as well as assign a concept (grade) to each student's work.
As seen in Table 4, in the discursive-linguistic capacity the teacher will explore vocabulary and grammar structures. Vocabulary will be dealt with by means of building a new checklist with expressions related to greetings and to the structure of an advice requirement, as can be seen in APPENDIX 2. In turn, cohesion and pronominal anaphora will be explored by means of the lyrics of "Mr. Postman" and examples taken from the students' own productions. It is also possible to focus the study on modalization (the verbs "can" and "should") by observing samples extracted from the reference letter and the students' own texts.
Once again, the knowledge construction about language continues to constitute a moment of analysis of the object of study (the advice letter genre), alternating the movements of rupture and continuity of a dialectic activity of knowledge construction.
The third version of the students' advice letters will then be taken as the result of the process of text production of an advice letter, as well as the synthesis of knowledge construction in a dialectic theory of knowledge construction in the classroom.
In sum, we can state, from the objectives and activities of the DS plan, that the language capacities are adequately related. It is also possible to say that the orientations and activities of one language capacity/plan can be connected to the others; that is, they can be taught in an articulated way, as recommended by some language didactics scholars (e.g., CRISTOVÃO, 2007).
Moreover, to guide students to write their advice letters, the teachers developed a teaching plan, which is a DS plan, with activities to explore the content, the structure and the linguistic elements that constitute the advice letter genre. Therefore, by means of what is established in the plan, students can receive clear orientation to: a) select the type of discourse related to the advice letter genre ('the interactive report type of discourse'); b) build knowledge about the thematic content to be developed in their texts; c) have a global view of the genre; d) build knowledge about the types of sequence: descriptive and explicative; e) connect words and sentences; f) organize the thematic progression of the text; g) use/learn specific words and the modal verbs 'can' and 'should', among others, as well as conjunctions, personal pronouns, adjectives, etc.
Finally, it can be said that the continuous process of writing, revising, redoing or rewriting, together with the teachers' and peers' feedback, which can be observed in the analyzed DS plan, can greatly contribute to students' knowledge and development in the task of writing effective texts. In other words, by means of the application of a DS in English language classrooms, students can mobilize, construct, and produce a synthesis of knowledge about specific thematic content, language and genres.
Some final remarks
This article aimed to review and extend the concept of a DS, viewing it as a dialectic mechanism for language teaching and learning. To accomplish this, I reviewed the definition, theoretical choices, characteristics and structure of a didactic sequence, and compared the methodological procedure to the concept of a dialectic methodology of knowledge construction in the classroom (VASCONCELLOS, 2002). In addition, I presented, as an example, a brief analysis of a DS plan for the advice letter genre, which was built by three in-service teachers in a teacher education course in Southwest Paraná (Brazil) in 2007.
In particular, concerning the comparison of the DS modular structure to the dialectic method of building knowledge in the classroom, it can be said that: a) the first phases of a DS (the situation presentation and first production) can be associated with the movement of knowledge mobilization; b) the teaching modules can be compared to the movement of knowledge construction in the dialectic method; and c) the final production can be related to the movement of synthesis of knowledge, or constructed knowledge.
It is also important to mention that, by constructing a DS applied to the advice letter genre (involving the language capacities: action, discursive, discursive-linguistic, and signification), the teachers were able to recognize the values and ideologies that exist in the authentic texts that circulate in society, as well as the discursive practices and social relations that constitute human interaction. In other words, through the planning of DSs, teachers can co-construct educational, social and linguistic types of knowledge, since in the task of planning they must analyze texts and their contexts of production, and then understand their roles in the school and in the society in which they live. Therefore, by means of this understanding, teachers can help students learn and understand the aspects shown in this study, as well as contribute to their development as citizens, since they will be guided to think critically about themes, contexts, the author's point of view, etc., that is, to make sense of the text.
In conclusion, we assume that the procedure of a DS, as a dialectic methodology of knowledge construction in the classroom, can challenge, motivate, and guide the teacher's and students' oral and written language knowledge. From a teacher's perspective, developing and applying a DS plan can be understood as a theoretical-methodological and reflexive mechanism for teacher education, since it guides teachers' actions and makes them reflect on the different aspects that constitute their linguistic and pedagogical practice, such as context, social aspects, ethics, linguistics, etc. From a student's perspective, on the other hand, students can build language practices (reading, writing, listening and speaking) and produce intelligible English, thus communicating and acting in society appropriately and, consequently, successfully.
APPENDIX 01
Didactic sequence plan: genre advice letter. 1st production: The teacher (…) will ask students to produce a first written version of their advice letters. In a box whose decoration alludes to letter writing, the learners will place their advice letters written in Portuguese (…). As the teacher picks up the letters from the box, he/she will analyze the learners' letters one by one to check their background knowledge related to this subject.
Activities:
The learners, in groups of three, will analyze some advice letters written in Portuguese (…). The teacher will guide the learners in order to make them identify the elements of the context of production (situation of production, author, addressee, objective, content, social space of production, historical moment of production, medium of circulation) and the moves of an advice letter (initial greeting, problem, request for advice, pseudonym), in Portuguese at first.
The teacher and the learners will build a controlling list with the phases of advice letter writing, considering specifically the language action plan and the discursive plan.
The teacher and students will translate into English some expressions and sentences that express the greetings, and some pseudonyms. The teacher will also explain the structure of a statement of the problem, the structure of an advice requirement, etc.
2nd production: The teacher will give the learners back their first versions and explain that they are going to read the first version, but they do not have to correct it. Instead, they are supposed to produce a second written version of their advice letter, now in English. They will do this with the use of the two controlling lists provided by the teacher: the list of expressions and the list of the moves of an advice letter.
Activities:
To improve the students' textual (coherence/cohesion) and grammatical knowledge (verb tenses, pronouns), the teacher will work with the song "Mr. Postman" (The Carpenters, 1975). The students will break into pairs and will be given the lyrics with the verses of the song in jumbled order. While they listen to the song, they are supposed to put it in order by numbering the verses.
The teacher will provide the translation of the song, but the students will have to match the song verses with their correct meaning.
THEME: ASKING FOR ADVICE. DS Plan: Advice Letter Genre. General Objective: to ask for a piece of advice and obtain answers. Specific Objective: To provide the students with an understanding of the advice letter's internal structure through the study of the global textual plan. 1- The teacher will guide students to analyze the reference advice letter, as well as the students' letters, in terms of internal organization (initial greeting, problem, advice question, pseudonym, as shown in APPENDIX 2).
TABLE 1
Description of procedures and activities related to the presentation of the theme, genre, situation and first production
"year": 2017,
"sha1": "9399e2cc91f6b48cba07b92565625e885dfbf6a1",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/rbla/v17n1/1984-6398-rbla-17-01-00163.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9dfccf00eb90e07f9264e5b32d9457ebbf7ecc4a",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
Categorized Soviet Citizens in the Context of the Policy of Fighting Venereal Disease in Soviet Latvia from Khrushchev to Gorbachev (1955–1985)
In the Soviet period, the incidence of venereal diseases (VDs) was ideologically interpreted by the authorities as a deficiency inherent in capitalist countries. It was stated that VDs would be eliminated in the Soviet Union at the earliest possible date. Compared to the independent Republic of Latvia (1918–1940), during the Soviet period, despite a larger population, the number of registered VD-infected patients was significantly lower. In such a historical context, a strange ideological obsession could be observed in the activities pursued by the Soviet authorities when fighting VDs. The notion of VDs as a remnant of the capitalist system determined the treatment of Soviet citizens infected with VDs as marginalised populations. In order to achieve the goal of becoming a country where VDs were eliminated, the Soviet Union practised a drastic dispensarisation system in the treatment of VDs, which was focused only on certain groups of the population that were identified in the documents of internal use.
1 The author's work on this article constitutes a part of the University of Latvia project No. ZD2015/AZ85.
2 The term "sexually transmitted disease" has now replaced "venereal disease". "'Venereal disease' was used from the eighteenth to the late twentieth centuries to refer mainly to syphilis and gonorrhea." See: Pamela Cox, "Compulsion, Voluntarism, and Venereal Disease: Governing Sexual Health in England after the Contagious Diseases Acts", Journal of British Studies 46, no. 1 (January 2007): 91–115.
Acta medico-historica Rigensia (2019) XII: 92–122, doi: 10.25143/amhr.2019.XII.04
In the 1950s the ideologues of the Soviet Union promoted the idea that socialist society does not have VDs and that VDs exist only as long as there are remnants of capitalism in the consciousness of certain individuals. Any success, including in the field of health, in the SU was seen as a consequence of the implementation of the decisions taken by the Communist Party of the SU (CPSU), articulated in the formula that, thanks to the measures adopted by the CPSU and the Soviet Government, as well as the assistance provided by the health care authorities, VD morbidity began to decrease sharply from 1947. The success was explained by the claim that in socialist society the social and economic circumstances that facilitate the incidence and spread of VDs had been eliminated. Similarly, in 1955 it was announced that "the decisions of the XIX Congress of the Communist Party of the Soviet Union in the field of health are a guarantee that venereal diseases will be eliminated." The Central Committee (CC) of the CPSU and the Council of Ministers (CM) of the SU, in a resolution of 1960, ordered the Ministry of Health Care (MHC) to eliminate a number of infectious diseases, including syphilis, in the coming years. From the late 1950s, the VDs fighting policy was influenced by the ideology promoting Communist puritanism, as well as by the inclusion of the Moral Code of the Builder of Communism in the programme of the CPSU in 1961. Thus, the regulation of VDs fighting measures was activated from above. The scheme was as follows: reacting to the statistics on VD morbidity, first a resolution (postanovlenie) of the CM and/or a decree (dekret) of the Presidium of the Supreme Soviet (PSS) of the SU was issued in Moscow, and this was then followed by a resolution of the CM and/or a decree of the PSS of the LSSR. The decrees of the CM on fighting VDs were marked as "secret, not to be published". The abovementioned authorities issued such decisions every few years without significantly changing their titles. This was a practice used in the SU: addressing specific social challenges by issuing special decisions. Meanwhile, the MHC of the LSSR reported annually to the MHC of the SU on the results achieved in implementing the latest decision of the CM, decree of the PSS, or order (prikaz) of the MHC of the SU on fighting VDs in the previous year.
The policy documents on fighting VDs can be divided into two levels. In the early 1960s a distinction in language was established between the documents of the two levels: the documents of the CC of the CPSU/CPSL and the CM of the SU/LSSR as the documents of the highest level, and the secondary-level documents, namely, the documents of internal use in ministries/authorities. The language of the highest-level documents had a general form of expression, while in the documents of internal use in ministries/authorities a more direct language was used. In the context of the VDs fighting, Soviet citizens were categorised in the documents of the secondary level. In the VDs fighting policy documents of the highest level, a social group under the label "prostitutes" was never mentioned, whereas in the documents of the secondary level the concept of a prostitute was used at least from the mid-1960s.
In the 1920s–1930s, approximately 10 thousand people infected with VDs were registered annually (Ineta Lipša, Seksualitāte un sociālā kontrole Latvijā, 1914–1939 [Sexuality and Social Control in Latvia, 1914–1939]), while in Soviet Latvia, despite intensive migration from other republics of the Soviet Union, approximately one thousand Soviet citizens infected with VDs were registered in 1959; the index was the highest in 1973, when around six thousand became infected. The statement that VDs exist as long as the remnants of capitalism have not been eliminated in people's consciousness ideologically distinguished between the people who had contracted VDs and the rest of the Soviet citizens, where the VD patients were regarded as somebody who discredited the socialist regime and, thus, deserved to be held in low regard. The new aspect was only the ideological component that was added to the negative attitude towards the individuals who had contracted VDs, because before World War II the attitude was similarly negative in Latvia. This was determined by historical circumstances, whereby the VD policy and practice in the 19th century, when Latvia was incorporated into the Russian Empire, as well as in the period of independent Latvia, was reduced to the fighting of prostitution. After World War II in occupied Soviet Latvia, prostitutes were not defined as a special risk group among VD patients in the official VDs fighting policy of the 1940s. Soviet ideologues argued that prostitution was created by the capitalist system, so the economic basis of prostitution had been lost in the socialist SU and it did not exist. Due to this ideological framework, it was not possible to base the Soviet VDs fighting policy on the pre-war notions. Therefore, using the terminology of the fighting of the "antisocial-parasitic lifestyle", prostitution as one of the subgroups was euphemised under the label "frivolous women" (zhenshchiny legkogo povedeniia). However, the fighting mechanism could not be directed at the selling of sex; instead, it could be directed at those individuals who did not have paid employment and/or lived without a declared address. If a seller of sex worked in an official workplace and had a shelter, the law was not violated in the event of selling sex.
In the 1950s the VDs fighting policy and practice came to include other marginalised groups of society. First, persons leading an antisocial and parasitic lifestyle: this category of citizens had been established in the SU already before World War II, and it was set apart by the 1951, 1957 and 1961 Decrees. 10 By the early 1950s, in the allied occupation zones of Germany as well, "persons who transgressed the legal boundaries of social life and public health" were stigmatised and categorised as 'antisocial' and were confined in correctional institutions, 11 thus continuing the practice of pre-war Germany. Historian Markus Wahl states that "from the post-war East German perspective, the term 'antisocial' describes the extreme forms of deviance that were pathologized and stigmatized by authorities and society". 12 They were also viewed as supposedly 'promiscuous persons'. From the early 1960s in the LSSR, the social control of the group labelled "persons leading an antisocial and parasitic lifestyle" was enforced with medical control, and in the context of the VDs fighting policy the word "antisocial" was replaced with the word "immoral" in the group's label. Within the group, the category of "frivolous women" was set apart.
In order to analyse how Soviet citizens were categorised in the context of the VDs fighting policy in Soviet Latvia from 1955 to 1985, first, the system of Soviet institutions that implemented the fighting of VDs will be characterised: the system of dispensarisation (dispanserizatsiya), which was introduced in Latvia after World War II by the authorities of the Soviet occupation for the purposes of fighting VDs, corresponding to the practice that already existed in the SU. Next, it will be analysed how in the 1950s special risk groups (persons with an antisocial and parasitic lifestyle, male homosexuals and prostitutes) were introduced in the secondary-level documents of the VDs fighting practice and how the list of target groups was detailed and amended. Analysing the most important VDs fighting policy documents, the 1964 and 1971 instructions, it will be investigated how, as a result of inter-departmental discussions between several authorities, the VDs fighting practice became gender (female) specific instead of gender neutral, and how the practice was institutionalised using a special operative militsiya group, with law enforcement and medical authorities simultaneously maintaining a special filing system of prostitutes, male homosexuals and keepers of dens (pritony) (as allegedly VDs infected persons). As regards the development of the VDs fighting practice in the 1970s-1980s, it will be argued that in this period the focus was narrowed down to individuals with an immoral-parasitic lifestyle, with a special regime hospital being established in 1973 for the coercive treatment of seven subgroups of the respective target group (alcoholics, idlers (tuneiadtsy), male homosexuals, "frivolous women", pimps, occupants of dens, as well as disseminators of VDs as a separate subgroup). At the same time, law enforcement authorities tried to extend the targeted range of citizens, focusing on the prevention of promiscuous behaviour and the regulation of sexual behaviour among the broadest masses of citizens through VDs prevention, popularising sexual knowledge not only in press publications from the 1960s but, from the 1970s, also on radio and TV.
Implementation of the Dispensarisation System
In the 1940s, after the occupation of Latvia, the fighting of VDs was the responsibility of the MHC. The MIA became involved in the fighting of VDs in the early 1960s. By 1951, the SU had managed to reduce the incidence of the active forms of syphilis 32.8-fold, using a dispensarisation system. 13 After World War II, a new type of medical institution, the dispensary, was also introduced in occupied Latvia. These were dermatological-venereological dispensaries in urban areas, whereas in rural areas there were dermatological-venereological consulting rooms that operated at the regional hospitals and were under the control of dispensaries at the republican or city level. The dispensaries were located in Riga and the biggest cities: Rēzekne, Daugavpils, Liepāja, Ventspils, and for some time in Jelgava, too. There were two dispensaries in Riga: the city dispensary 14 and the republican dispensary 15 . An in-patient hospital (statsionar) with men's and women's units was available only at the city dispensary. In the mid-1950s the SU health care leaders, taking into account the positive indicators in the VDs fighting statistics, considerably reduced the number of dermatological-venereological institutions in the period from 1955 to 1962. 16 In 1956 the LSSR republican dermatological-venereological dispensary was closed down (it was re-opened in 1973).
Dzidra Branta, a dermatovenereologist who had worked at the dermatological-venereological dispensary in Riga since the early 1950s, has stated in her memoirs that "in social terms the system was effective, but it was cruel towards the individual who had entered the system". She writes that Professor Anatoly Kartamishev, 17 who was the Head of the Dermatological and Venereal Disease Department at the Central Institute of Physician Qualification (1953-1970), often came to Riga from Moscow to provide consultations at the dermatological-venereological dispensary of Riga. According to Branta, he once said that the system of dispensaries was a real "mincer", namely, that the welfare of society was established at the expense of individual rights. Branta describes the system as follows: "All patients to be hospitalised at dispensaries were registered in special outpatient filing forms (outpatient medical case histories) and a sophisticated reporting form. The latter were sent to the special services (disinfection and statistics institutions). In the dispensarisation system the key goal was to discover and eliminate the source of infection. In fact, it meant that information had to be obtained from the patient to find out the focus of infection - the individual from whom he had contracted the disease - as well as contact persons, namely, those individuals whom the patient could have infected. All these individuals were identified, examined or eventually had to undergo a preventive treatment course. Examination and treatment were compulsory; this was stipulated by law. The patient had no choice. Special nurses and physicians' assistants were engaged in identifying the focus of infection and contact persons. The physicians - the health visitors' staff - were also engaged. If the patient avoided the examination or treatment, militsiya was involved and the patient was taken to the dispensary by force. There were also threats to make the information about the infection public at the workplace or in the family. Treatment took place at the in-patient hospital. The windows of the wards where venereal disease patients were treated had iron bars." 18 The physicians were verbally abused by the administration of dispensaries and the representatives of the MHC if they failed to find out the foci of infection and contact persons. Branta writes that sometimes she felt like a human torturer when she had to find out the contact persons: "In the late 1950s, when syphilis was spreading among homosexual men, I frequently experienced unpleasant moments when I tried to identify the focus of infection and contact persons of a homosexual patient. Unthinkable stories were provided, which gave rise not to compassion and pity, but to laughter and fun. Sometimes there were tears. I remember many occasions when we both cried: me because of anger, the patient - because of pity. Often I felt compassionate, when I felt not as a physician, but as a human torturer." 19 Identification of the focus of infection was the core axis around which the VDs dispensarisation policy revolved. Therefore, in the course of treatment individuals had to name all their sexual partners. For male homosexuals, this meant not only confessing to the crime of sodomy, but also providing the names of other "criminal suspects", because male sodomy was a criminal offence in the SU. 20 There had been no period of decriminalisation of male sodomy in Latvia, as there was in Soviet Russia from 1917 to 1934. By August 1, 1933 the punishment for male sodomy in the Republic of Latvia was imprisonment for a period of no less than three months, afterwards imprisonment without a specific duration, which, in practice, usually meant imprisonment for up to three months. In 1940, the Soviet occupation introduced more repressive policies. Until 1961, in accordance with Article 154a of the Criminal Code of the Russian Soviet Federative Socialist Republic, the punishment for the same "offence" was imprisonment for a period of three to five years. The Latvian SSR Criminal Code, which came into force on April 1, 1961, in a way reduced the possible punishment: Article 124, Paragraph 1 did not impose a minimum sentence, but it did determine that a man guilty of male same-sex intercourse should be sentenced to imprisonment for up to five years.
Beginning of the Formulation of Marginalised Populations
When analysing the morbidity rate of syphilis in the LSSR from 1960 to 1965, the MHC concluded that the prevalence of women was insignificant in the latent forms of syphilis, whereas among the individuals infected with active forms "men prevail, mostly because of homosexuals". 21 Thus, male homosexuals attracted the attention of the HC authorities, since they were defined as a large source of the dissemination of syphilis in its active form among men. As a result, the Soviet Latvian VDs service faced increasing recognition of the existence of male homosexuality, which can be compared to Great Britain during the 1950s, 22 where, after prolonged parliamentary and public debate, the Wolfenden Report led to legislation partially decriminalising homosexual activity in 1967. Meanwhile, in the LSSR the topic was not discussed, since public debate was not possible in the Soviet system.
In the late 1950s the HC authorities added the group of homosexual men to the risk groups in the secondary-level documents of the VDs fighting policy (male homosexuals were set apart as an individual group in the statistics of syphilis morbidity in 1956). In the HC statistics of the LSSR, the total number of registered individuals who had contracted the active forms of syphilis in 1956-1965 was specified with the note "including among homosexuals" (out of the 194 registered infected men, 83, or 43%, were homosexuals). 23 Around the same time, male homosexuals were identified as a target group in the complex plans of the HC institutions to fight VDs. The ministry prepared such plans every year; however, the 1961-1962 plans are the first in which a certain social group was identified as dangerous, namely, women who led an "immoral way of life in terms of sex life and who are malevolent sources of infection". 24 The 1964 plan added the group of male homosexuals, stipulating that it was necessary to "establish a regular exchange of information with the militsiya and public prosecution structures in order to examine the homosexuals who have contracted venereal diseases". 25 In 1962, the subcategory of the immoral-parasitic lifestyle group was included in the VDs treatment practice, as another unit was opened at the in-patient hospital of the dermatological-venereological dispensary of the city of Riga: the so-called regime in-patient hospital, which was intended for persons leading an "immoral-parasitic lifestyle". 26 Previously, Soviet citizens infected with VDs had been treated together. The regime in-patient hospital was guarded by militsiya. When the external 24/7 militsiya security was removed from the regime in-patient hospital for three years starting from 1968, the MHC demanded its restoration, because without the militsiya guards the patients engaged in fights, consumed alcohol, used force against the medical staff, ignored the requirements, behaved like hooligans and cursed in uncensored language. The patients sawed through the iron bars of the windows and fled the treatment institution. 27 Physician Dzidra Branta also recalls the behaviour of this group. A memorable event involving the chief physician of the dispensary took place in the 1950s-1960s, when, during a night duty, the prostitutes being treated in the women's unit started a riot.
"As far as I remember, the frustration was caused by the poorquality catering. The patients had learned that the chief physician was on the night duty. They attacked Cipkins and tried to 'rape' him. Cipkins was saved by the militsiya officer, who was on duty and who was informed by the dermatological patients. The staff of dispensary laughed furtively about this event for a long time." 28 In general, in the early 1960s as a result of the ideological pressure (to eliminate syphilis in the SU), healthcare employees were mobilised to incorporate in the VDs fighting policy documents the socially marginal groups, which had been identified previously for other purposes, into the Soviet population (individuals who avoid socially useful work) and included homosexuals. 65 incidence cases of syphilis and 984 incidence cases of gonorrhoea registered, but in 1963 -105 and 3350 respectively. 30 ) In the 1960s in the LSSR, the highest morbidity level was registered in 1964. 31 The MHC of the LSSR was of the opinion that the reason was the comparatively quickly increased "clandestine prostitution", which was facilitated by the "withdrawal of the passport regime", which for the persons with "immoral-parasitic lifestyle", provided an opportunity to live without a declared address and to lead the antisocial lifestyle without hindrance. 32 The 1963 Order of the MHC of the SU on "Measures to be Taken to Eliminate Syphilis Morbidity and to Decrease Gonorrhoea Morbidity" stipulated the cooperation of the Ministries of HC and IA, the Prosecution Office, as well as other offices (resory) in order to bring down new cases of VDs. Additionally, protection measures were introduced that were supposed to include society through the action of collective organisations, such as Comrades' courts, trade unions, volunteer militias, the Komsomol, the Communist party etc. The Decree ordered the use of options provided in the 1961 Decree for fighting "antisocial and parasitic lifestyle" in fighting VDs. The 1963 Order required the MHC of the LSSR to draft a VDs fighting plan attracting the representatives of other authorities. The MHC of the LSSR in the 1964 plan named 4 target groups: persons with "immoral lifestyle in terms of sex life", persons, who avoided socially useful work and spread VDs, the VDs infected male homosexuals, as well as addresses of the apartments, which were defined as "lechery dens". 33 Thus the 1964 Resolution of the CM of the LSSR was pushed by the 1963 Order of the MHC of the SU. 34 The republican MHC informed the CM of the LSSR that the measures to fight VDs had exceeded the framework of the medical authorities and stated that there had to be not only therapeutic, but also educational and compulsory activities. 35 The 1964 Draft Resolution was discussed by the MHC, MIA and the Legal Affairs Commission (LAC) at the CM of the LSSR. The LAC recommended targeting those individuals as well, who according to the verdict of law enforcement authorities led an immoral-parasitic lifestyle. 36 Whereas the Minister of the MHC Vilhelms Kaņeps 37 had intended to cover a wider range of population, indicating certain locations in the social geography of Riga. Such an approach was not a new one. Already in 1949, the MHC, referring to the statistics of the first six months of 1949, marked certain locations in the social space of Riga, whose visitors should be medically controlled. 
Of the 982 registered infected individuals, 870 indicated where they believed they had contracted the infection: on the street (17.4%), at dance parties (17.4%), at acquaintances' homes (15.5%), in the park (14.2%), at a family evening (11%), at the partner's home (9.4%), at their own home (8%), or in transport during a business trip (7%). 38 These data were not used in the VDs fighting policy. However, when discussing the 1964 Draft Resolution, the Minister of the MHC, Vilhelms Kaņeps, drew attention to the social geography of Riga. He argued that the spread of VDs was affected by the sale of alcohol in restaurants. The Minister pointed out that 83% of the infected people had contracted the infection from random individuals while under the influence of alcohol. He claimed that many employees of hotels, restaurants and taxis were engaged in pandering to bring the respective individuals together with the "women of immoral behaviour". 39 The CM of the LSSR rejected the following proposals by the Minister: limiting alcohol sales in restaurants; asking the trust management of hotels to ensure that the rooms were not used for a "lecherous lifestyle"; asking the Ministry of Road Transport and Highways to prevent taxi drivers from engaging in pandering; and asking the Republican Council of Resort Administration of Latvia to provide systematic "anti-venereal propaganda" among holidaymakers. However, the MIA named three citizen groups - "frivolous women", keepers of lechery dens and pimps 40 - thereby merging the VDs fighting policy with the prostitution fighting policy without using the term prostitution in the definition, whereas the MHC proposals were not gender specific. Still, in 1965 the MHC tried to prove that the VDs-infected or "individuals, who lead immoral lifestyle, usually are inveterate drunkards (zlostnye pianitsy)"; therefore, the 1964 Decree of the PSS of the LSSR on the enforced treatment of inveterate drunkards (alcoholics) and rehabilitation in a medical-labour re-educational institution (lechebno trudovoi profilaktorii) should be applied. 41 However, the LAC at the CM of the LSSR rejected this idea. Medical employees relied on the existing (unofficial, patriarchal) ideas about family values, which prohibited a woman (unlike a man) from using her sexuality outside marriage and punished women for the arbitrary use of sexuality while encouraging men. From the 1960s, the VDs fighting practice was especially focused on women whose behaviour was viewed by law enforcement authorities as frivolous, without taking into account their male partners. At the same time, the Soviet official discourse stated that socialist society, unlike capitalist society, was characterised by the principle of gender equality in all policies.
The Secondary Level: Instructions and Filing Systems
The highest-level documentation had to correspond to the values declared in the SU official discourse, for example, gender equality; therefore, the citizen groups were not mentioned in the 1964 resolution. However, in the 1964 and 1971 instructions on the arrangements for the coerced medical examination of individuals who were sources of the spread of VDs, broad categories of citizens were named. In the internal documentation of the MHC and MIA in the 1950s-1960s - mostly in the VDs fighting plans, reports and proposals on draft resolutions of the CM of the LSSR - the citizen groups were named in detail.
The VDs fighting policy in the Soviet Union was not unified. For example, the regulations of the RSFSR and the Estonian SSR contained a legal norm stipulating the coerced examination and treatment of individuals who had contracted VDs, whereas such norms were not introduced in the regulations of the Latvian SSR until 1964. Responding to the 1963 Order of the MHC of the SU, in 1964 the MHC of the LSSR asked the CM of the LSSR to pass a resolution providing it with the rights for the coercive medical examination and treatment of supposedly infected people, taking the 1946 Resolution of the CM of the Estonian SSR 42 and the 1927 Resolution of the Council of People's Commissars of the RSFSR 43 as examples. 44 In 1964, the MIA of the LSSR together with the MHC elaborated and adopted the instruction on the procedure by which individuals who were sources of VDs dissemination had to be coercively examined and treated.
The 1964 instruction identified four target groups. It contained no gendered groups - neither male homosexuals (proposed by the MHC) nor "frivolous women" (proposed by the MIA) - but rather individuals who had contracted VDs and avoided treatment; unquestionable and probable sources of infection; and the contact persons of infected individuals who avoided medical examination and about whom the medical authorities had facts indicating potential infection with VDs. The VDs treatment system was harsh in a psychological sense too. The instruction stipulated that, regarding people who had contracted VDs through casual affairs (sluchainye polovye sviazi), the CP, the Young Communist League and the Trade Union organisations had to be informed at their places of training and work, and on certain occasions the Party and Soviet authorities at the city (regional) level.
The formulation "individuals, who engage in sexual intercourse with the aim to get material benefit" latently pointed to the prostitutes without actually using the term itself. Furthermore, the formulation of the group including individuals, who "lead disorderly sexual lives" (vedushchie besporiadochnuiu polovuiu zhizn') 45 "disorderly sexual life" would be explained. Consequently, I have not succeeded in confirming or rejecting the hypothesis that this norm could have been intended to provide an opportunity to medically control the buyers of the sex services.
In the period up to 1971, when a new instruction was adopted, the medical authorities formulated their intentions more accurately. In its 1965 report, the MHC identified the female group of "persons, who lead disorderly sexual lives" as the most important target group. The author of the report concluded that the coercive examination of the "women, who lead disorderly sexual lives", brought to hospital by the militsiya, was a justifiable and efficient tool for decreasing gonorrhoea infection. 46 He connected the respective women with the citizen category of individuals with an immoral, parasitic lifestyle, who mostly did not have a declared address or did not live at the declared address. He argued that cooperation with the militsiya was very important to ensure the surveillance of this category. He based his argument on the data: in order to find one case of gonorrhoea, 850 women had to be examined in compulsory medical examinations, 105 women in the obstetric-gynaecological institutions, but only 6 among the women brought in by the militsiya. Such an attitude was not unique to the SU. In Scotland the system was also focused on females ("habitual prostitutes"), who were traced and persuaded to follow a treatment plan, even though attempts to introduce a bill providing for the coercive examination and treatment of individuals suspected of suffering from VDs failed there in 1968. 47 In 1965 the MHC recommended subjecting individuals (the gender was not indicated) who behaved immorally in public space (in the evenings, on the streets, etc.) to medical examination. 48 Most likely, the recommendation was aimed at a wider audience of the population, because the law enforcement authorities dealt with the "frivolous women" without any ceremony. The people's druzhina, unofficial militsiya employees (vneshtatnye sotrudniki militsii), as well as the operative units of the Communist Youth, had to raid the locations where the "frivolous women" could gather (near the sea ports, railway stations, hotels, restaurants, parks and other recreational places of the proletariat) in order "to take these women out and carry out preventive discussions with them" (po iz'iatiiu etikh zhenshchin s posleduiushchim profilaktirovaniem). Besides, the law enforcement authorities could send for medical examination those citizens whom they regarded as people with an immoral and parasitic lifestyle.
The LAC, which evaluated the submitted legislative proposals, recommended rejecting the MHC proposal, explaining that such broad, essentially uncontrolled rights to subject individuals to sexual health examination would not contribute to the strengthening of Soviet law and would offend those citizens who were unduly examined.
The 1964 Resolution entitled the Ministry of the IA to cooperate with the HC authorities in identifying and transferring for coercive medical examination those individuals who could justifiably be assumed to be infected with VDs. 49 The LAC found that individuals with a parasitic lifestyle could be subjected to coercive medical examination. 50 The VDs policy of the LSSR can be characterised as a contact tracing system launched by the governmental institutions through the coercive medical examination of marginalised populations, supplemented by the filing systems of law enforcement and medical agencies. Such a policy was in stark contrast with that of Great Britain where, as historian David Evans has argued, "from the late 1940s, the key VDs control mechanism promoted by the Ministry was the tracing of sexual contacts through the voluntary cooperation of patients, supplemented by professional contact tracers" and "the fundamental principles of policy were open access, treatment free at the point of service, confidentiality and non-coercion". 51 As already mentioned, several proposals submitted by the Minister of the MHC, Vilhelms Kaņeps, including the limiting of alcohol sales in restaurants, were rejected in the debates on the draft decree; however, his proposal to establish an operative militsiya group for fighting VDs in the militsiya of Riga became the cornerstone of the VDs fighting practice. It was envisaged neither by the 1964 resolution nor by the instruction; however, in 1965 the law enforcement authorities institutionalised the fighting of VDs by setting up a two-member operative militsiya group at the Criminal Investigation Department of the Administration of IA of the Riga Executive Committee (EC). 52 One employee of the operative group registered data on women who were claimed to be leading an immoral lifestyle, whereas the other gathered information on male homosexuals. The group closely collaborated with the Riga City Dispensary. Both law enforcement and medical authorities established and maintained the respective filing systems (kartoteki). In 1965 all male homosexuals known to the Administration of IA of the Riga EC were medically examined in order to identify individuals who had contracted syphilis. The very possibility that one could be forced to tolerate medical manipulations could be perceived as humiliation. Moreover, most of the detained individuals (not only male homosexuals) turned out not to be infectious. For example, of the men taken for medical examination under constraint, only 5% turned out to be VDs positive in 1968; in 1969 it was 28%, and in 1970 it was 34%. 53 The 1969 filing system of the Riga militsiya contained data on 1,639 women "who were engaged in prostitution", on 481 male homosexuals and on 89 apartments "where men and women, who were engaged in alcohol consumption, lechery and brawls, systematically gathered". 54 Overall, the authorities were not interested in the fact that most of the male homosexuals and women subjected to forced medical examination were not infected. Apparently, the authorities were of the opinion that the offence caused to these categorised citizens would not give rise to any problems.
Despite the fact that the VDs fighting policy was regulated by the previously mentioned resolution and instruction of the CM of the LSSR, in 1966 the authorities started to work on a PSS of the LSSR draft order on the coercive treatment and re-education by labour of VDs-infected persons with an immoral lifestyle. It seems the draft order could have been elaborated with the aim of informing society, because the resolutions of the CM and the instructions were confidential, whereas the PSS orders were made public. The Head of the LAC pointed out that the concept of an immoral lifestyle in the context of fighting VDs only addressed "sexual debauchery" (I have not succeeded in finding a definition of sexual debauchery in normative acts). 55 The document prepared by him indicates that the HC authorities wanted to establish a special medical labour re-educational institution (lechebno trudovoi profilaktorii) for the coercive treatment of VDs-infected people. The key idea was to isolate certain citizen groups from the rest of society. However, it turned out that such institutions could be established under the supervision of the MIA, as opposed to the MHC. 56 Therefore, a few years later, in 1971, the MHC suggested organising within the MIA system a medical labour re-educational institution for those who had contracted VDs infections and led an immoral-parasitic lifestyle. 57 At the same time, the medical authorities started to use the concept of "clandestine prostitution". It was argued that prostitutes operating secretly could not be held criminally liable, because Article 112 of the Criminal Code provided punishment only for deliberately infecting someone with VDs. Therefore, the MHC asked for an order to be issued on the coercive treatment of the respective category of citizens (VDs-infected people with an immoral-parasitic lifestyle) and their labour re-education in the medical labour re-educational institutions of the MIA for a period of no less than 6 months, namely, up to the moment when those infected with gonorrhoea were removed from the register (uchet) and those infected with syphilis had completed one or two treatment courses. 58 According to the MHC, only social isolation would prevent this category of citizens from infecting other people. The existing regime in-patient hospitals attached to the dispensaries did not provide this, because after 2-3 weeks of treatment the patients were discharged in order to continue ambulatory treatment; however, patients of the respective category did not do so. In 1971 the PSS of the LSSR issued an Order "On the Amendments and Additions to Article 112 in the Criminal Code of the LSSR", which included liability for infecting somebody with VDs or for the malicious avoidance of VDs treatment. 59 Responding to the Decree of the PSS of the SU issued in 1971, 60 a new instruction was approved, in which the MIA and MHC added a fifth group to the existing four: individuals who had contracted VDs and avoided the control of medical examination. Two of the existing groups were also specified. It was stated that sexual intercourse with the goal of material benefit was to be defined as prostitution, and the individuals who "lead disorderly sexual life" were gendered as "frivolous women", 61 thus releasing "frivolous" men from the intervention of medical authorities in their private lives solely because of their gender.
Overall, the 1971 instruction drew attention to the selling of sex (implicitly leaving the purchasing of sex unsanctioned), while the other type of "disorderly sexual life" was attributed only to women.
Both the ministries of HC and IA planned to improve VDs eradication by introducing new instruments into the mechanism of surveillance. In 1971, the MIA had a plan to keep a photo album of "frivolous women", male homosexuals and occupants of dens at the law enforcement authorities of the municipal and regional districts, "in order to ensure operative identification of the sources of infection of VDs". 62 The MIA anticipated that the MHC should use the latest scientific discoveries in the medical examinations of male homosexuals. 63 Meanwhile, the MHC recommended introducing weekly raids according to the Leningrad pattern to deliver those suspected of male homosexuality for medical examination. 64
The Special Regime In-patient Hospital: Social Isolation
From 1969 up to 1973 the morbidity rates of both syphilis and gonorrhoea in the LSSR increased each year. 65 The 1973 resolution of the CM of the LSSR does not mention any specific groups. 66 However, the 1973 complex VDs fighting plan prepared by the Ministries of IA and HC, which supplemented the resolution, mentions in its first paragraph 7 subgroups under a group defined as persons with an immoral and parasitic lifestyle: alcoholics, idlers (tuneiadtsy), male homosexuals, "frivolous women", pimps, occupants of dens, as well as disseminators of VDs as a separate subgroup. 67 The separation of disseminators of VDs seemingly indicates that the remaining six groups were regarded not as disseminators of VDs, but as populations who, favouring "sexual debauchery" and a "disorderly sexual life", deserved to be forcibly undressed against their will and subjected to medical manipulations.
In 1973 Latvia took first place in the SU in terms of syphilis morbidity. 68 As a result, two new medical institutions for the treatment of VDs were founded. The republican dispensary that had been closed down in 1956 was re-opened. Its ambulatory care and in-patient hospital, with two departments of venereal and dermatological diseases for men and women, were located in the former tuberculosis hospital in Pērnavas Street, Riga. 69 Meanwhile, the recommendations provided by the medical authorities in 1966 and 1971 regarding the establishment of a medical-labour re-educational institution were implemented in 1973 in a different form. In Riga, at Maskavas Street 241, a venereological hospital of the City of Riga was founded. It was an institution that implemented a special regime for the forced treatment and control of people who had contracted VDs. Physician Dzidra Branta writes that it was intended for "socially un-adapted individuals - without a declared address (vagabonds), prostitutes (gradually the term was applied), patients, who refused the treatment, etc." 70 The territory of the hospital was marked off by a stone wall and provided with internal and external alarm systems, as well as spotlights for the illumination of the territory. A special regulation governed the order to be followed at the hospital. 71 A militsiya regiment of the Administration of IA of the Riga EC provided 24/7 security: 28 militsiya men were on duty at 4 militsiya posts. Additionally, 4 criminal investigation officers were in charge of the re-education of those persons who maliciously avoided VDs treatment or medical control. In 1979, 1,074 patients were delivered to the special regime hospital in Riga for gross violations of the hospital regime in other dispensaries and medical prevention institutions.
Focus Extension: Towards Promiscuous Behaviour Generally
After the 1974 resolution of the CM of the SU, the CM of the LSSR demanded that the MHC be more proactive in finding individuals who had contracted VDs in order to ensure their medical treatment. 73 While discussing the 1974 republican draft resolution, the MIA of the LSSR objected to viewing people with an "immoral and parasitic lifestyle" as the main source of VDs, because a study carried out together with the MHC had proved otherwise. 74 The MIA tried to achieve a provision that by the mid-1960s was already a reality in Great Britain, where "the increased incidence of VDs was seen in official circles as 'primarily a reflection of sexual promiscuity in the population'". 75 The Minister of IA (1972-1978), Jānis Brolišs, defended the idea of dismissing the category of individuals with "an immoral and parasitic lifestyle" as the main source of VDs, even at the meeting at the CM in early 1974. He emphasised that the spread of VDs occurred not only through casual sexual contacts.
"For example, 75 % of all diagnosed patients who contracted syphilis were infected not from casual sexual encounters but wives were infected by husbands, husbands -by wives, 32 individuals contracted infection from acquaintances and 32 individuals refused to give names of the persons from whom they contracted the disease." 76 The Minister of IA Jānis Brolišs argued that law enforcement authorities fail to fight VDs, because there was no such law that would hold a detained person liable for providing information regarding the individual, who was the source of infection. The Deputy Head of the CM (1967)(1968)(1969)(1970)(1971)(1972)(1973)(1974)(1975)(1976)(1977)(1978)(1979)(1980)(1981)(1982)(1983)(1984)(1985) Viktors Krūmiņš even recommended to extend the places, where the catching campaigns of citizens should be implemented.
"We need to make the next step now and we should include the housing management institutions, all hotels, the most evil public catering places. All these individuals are known. The comrade, who is engaged in this, said that the waiters and bartenders know all these hustlers and easy life hunters by names. It is also necessary to discuss this subject very seriously with the individuals, who are responsible for these issues in the public catering field." 77 Finally, it was stated that exactly category of people with an immoral and parasitic lifestyle "have always been and still are the main distributors of VDs regardless of whether their number has decreased and regardless of the fact that along with the increased incidence of VDs other paths (encounters) appear that spread venereal diseases as it is currently happening". 78 Brolišs, the minister of IA, for his part, considered that focusing on individuals with "an immoral and parasitic lifestyle" alone could not solve the problem of VDs in the republic. The idea of extending the range of people subject to medical control was already expressed at the meeting of the MHC council. At the meeting of the MHC council on 20 September 1973 the Liepāja dermatological and venereal disease specialist suggested to extend the range of people to be subjected to medical control, "wassermanning" all the individuals, who were isolated in the medical sobering stations, namely, to carry out the syphilis diagnostic tests -the blood test, by testing Wassermann reaction. Already in the letter on April 1973 Nikolajevskis had suggested it to the Chief of the Liepāja medical sobering station. The Chief of the sobering station even offered to examine absolutely all the people, who were detained by the Department of Interior of the Executive Committee of the Liepāja People's Council of Deputies, that is, around seven thousand individuals. 79 However, even the Deputy Minister of the Ministry of Interior (1969)(1970)(1971)(1972)(1973)(1974)(1975)(1976)(1977)(1978)(1979)(1980)(1981)(1982)(1983)(1984) Anrijs Kavalieris (1933-2016) saw the idea as too drastic, saying that "as regards the question of 'wassermanization' of 100 % individuals taken to the sobering station, I am not ready to answer, whether it would be legally and politically correct." 80 All in all, he concluded that with "our contingents", that is, marginalised populations supervised by the law enforcement authorities, "we will not solve this problem in the republic". 81 From 1980 on at the latest, no categorized citizens were mentioned in the VDs fighting plans (the plans for 1974-1979 have not been found), although the medical and militsiya control over the groups mentioned in the 1973 plan was continued. Nevertheless, the central explanatory concepts informing medical and media discourses over the prophylaxis of VDs were to become 'sexual promiscuity' and 'permissiveness'.
Thus, the authorities extended the range of categorised citizens, applying compulsory medical control, including sexual health checks, to various groups of so-called rank-and-file citizens. Regular health checks were compulsory for employees working in the food and public catering industries. Physicians recorded their health condition in a special document called the sanitary booklet (sanitarnaia knizhka). In the early 1970s venereal health had to be checked every 3 months. In 1973 the CM of the LSSR elaborated a draft resolution stipulating that henceforth the health checks had to be carried out once a month. 82 The CM grounded the necessity for such changes on the statistics of the incidence of VDs. In 1973, syphilis morbidity had increased 2.5-fold in comparison with 1972. The analysis of the employment data of the infected individuals provided evidence that 8.5% worked in the respective industries. Morbidity among students of higher education establishments and comprehensive schools was considered high. 83 In 1973, to promote VDs prevention and provide sexual education to young people, the CM of the LSSR asked for mass information channels to be involved: the television of Latvia, radio, and republican periodicals. In 1973, altogether 12 radio conversations took place, 9 articles on these topics were published in the republican newspapers and 5 articles in magazines. 84 Two public addresses were held on the television of Latvia, after which the management of television and radio broadcasting forbade the MHC to present public speeches on television. The Deputy Minister of the LSSR MHC, Gunārs Orleāns, 85 argued that, taking into account the popularity of television, the broadcast shows on VDs prevention had to be brought back, because morbidity in rural areas had increased. The medical authorities demanded that the CM of the LSSR Committee of Television and Radio Broadcasting comply unconditionally (neuklonnogo vypolneniia) with the request to show popular science films and physician-specialist conversations on television each month. 86 At the meeting at the CM in early 1974, the Deputy Head of the State Television and Radio Broadcasting Committee (1972-1986), Jāzeps Barkāns, had to explain the Committee's refusal to broadcast television recordings on the topic of VDs prevention. He said: "It must be noted that we watched the films, which were recorded for broadcasting on television. And it must be noted that these are not [suitable] for the screen, taking into account the audiences that watch them, namely, women and children. We have to find other films, perhaps we should invite physicians to television and discuss other medical questions." The Head of the CM (1970-1988), Jurijs Rubenis, pointed out that the language used to talk about VDs prevention should be direct, so that the audiences could understand it. He provided his arguments in the following way: "We are not telling you, comrade Barkāns, what and how you should be doing. We want that the television would be actively engaged in this work. But what kind of forms and methods - whether it would be a film or staged drama, play or tragedy - these are your rights, that's to be decided by you. But our duty - to remind you once again that you have to do a big job in the prevention of these diseases. [..]
But so far you have not focused on this job, and it is necessary to speak [about it] not on the side note or cautiously, but so that one can understand what you are talking about. And you have to do it. If this film is not suitable - don't show it, make another one, that is your business." 87 Consequently, in the early 1970s the VDs prevention discourse was invented in the mass media of Soviet Latvia. The VDs fighting measures were activated once again in 1977. A resolution of the CM of the SU stipulated that additional measures to fight VDs had to be taken and the efficiency of the interdepartmental commissions increased. Within a month, the corresponding resolution of the CM of the LSSR was issued. 88 In 1980, the MHC of the LSSR stated that, in the practice of fighting VDs, the "system of formal contacts" worked out through the experience of collaboration between the authorities of health care, internal affairs and prosecution was operating successfully. 89 The 1985 Order (prikaz) of the MIA of the SU terminated, by May 1985, the operative militsiya group for fighting VDs at the Criminal Investigation Department of the Administration of IA of the Riga EC, which had been working since 1965. 90 Still, there are no facts to argue that the Order had anything to do with Gorbachev's reform policies. The MIA of the LSSR obeyed the order but suggested to the CC of the CPL that it be cancelled. Thereafter, the VDs fighting policy disappeared from the agenda of the authorities. At the end of 1986, the joint ad hoc committee which had been established to fight VDs in Riga also ceased its work. In 1987, prostitution was legalised in the LSSR.
"year": 2019,
"sha1": "d7e7b064b50ca9d68818275fb6e8249d01a15de4",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.rsu.lv/jspui/bitstream/123456789/1082/1/Acta%20medico-historica%20Rigensia%20(2019)%20XII_92-122.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "17565f61f5b9744953261598f8f9140c68833282",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Adoption of Good Agricultural Practice (VietGAP) in the Lychee Industry in Vietnam
Bac Giang province, Vietnam has potential for growing lychee. However, the price of lychee is low and unstable. Consumers are seeking higher quality lychee by accessing international standards and control systems. In order to improve the quality of lychee, Good Agricultural Practice (VietGAP) was implemented in Bac Giang province. This paper discusses the status of the lychee industry and evaluates the extent of and factors affecting the adoption of VietGAP in the lychee industry in the province. Results showed that the volume of lychee production of the province grew from 29,027 tons to 213,000 tons from 1997 to 2011. The prices of lychee differed across districts and depended on the lychee class and season. Fifty-six percent of VietGAP farmers were considered high adopters, applying 8 or more of the prescribed practices; the rest were considered low adopters. The logit model showed that farm size, net profit, accessibility to VietGAP information, and membership in a lychee farmers' group significantly influenced the probability of high adoption of VietGAP. The study recommended improving the VietGAP program implementation; providing greater access to capital and better equipment and tools for lychee production; and encouraging membership in lychee farmers' groups.
INTRODUCTION
Vietnamese fruit production has shown rapid growth, since income per hectare from growing fruits is 4 to 8 times greater than from growing rice [1]. Lychee is one of the main fruits in Vietnam, alongside pomelo, dragon fruit, star apple, mango, durian, rambutan, longan and watermelon. Vietnam is one of the world's top five lychee producing countries, after China, India, Taiwan, and Thailand [2], and contributes significantly to the world supply of lychee products. Two provinces, namely Bac Giang and Hai Duong, are the main lychee producing areas in Vietnam.
With lychee production of around 213,000 tons in 2011, Bac Giang is the largest lychee producing province. The province exported 69,565 tons of lychee to China and other countries such as Taiwan and Russia in 2011. The total export value was 890 billion VND (1 USD = 20,000 VND, the exchange rate in 2011), which indicates that Bac Giang has strong potential for growing and exporting lychee.
However, the price of lychee is very low and unstable, and exporters are the ones who have the power to control the price [3]. China is the main importer of Vietnamese lychee, and the preferences of Chinese consumers, particularly those in urban areas, have changed since China joined the WTO. Chinese consumers are seeking higher quality, clean and attractively packaged products. Similar trends are occurring in the Vietnamese domestic market as national wealth grows. China and other developing nations have responded to this need for assurances by accessing international standards and leveraging domestic governmental standards and control systems [4]. Traders are willing to pay a higher price for lychee of better quality and appearance [3]. In order to improve fruit quality, promote consumer safety, and increase the production of high value export products, the Ministry of Agriculture and Rural Development (MARD) introduced Good Agricultural Practice (VietGAP), which was applied in the lychee industry of Bac Giang province as a pilot area. VietGAP consists of 12 practices covering four components: food safety; environmental management; workers' health, safety and welfare; and production quality [4]. This paper discusses the status of the lychee industry and evaluates the extent of and factors affecting the adoption of VietGAP in the lychee industry in Bac Giang province, Vietnam.
Sources of Data
The study area covered two major lychee districts in Bac Giang province, Vietnam, namely Luc Ngan and Luc Nam. Luc Ngan has 52% of the total lychee planted area in Bac Giang province and adopted VietGAP. Luc Nam has approximately 20% of the lychee planted area in the province and did not adopt VietGAP. Both primary and secondary data were used in this study. Primary data were collected through a survey and personal interviews of lychee farmers. Key informant interviews of the leaders of groups such as cooperatives and of other agencies/organizations in the area were also conducted to gather relevant information. Secondary data were sourced from various government agencies and the internet.
Stratified random sampling was applied in choosing the farm samples. The number of respondents in Luc Ngan (43 farmers) and Luc Nam (57 farmers) was determined using the formula [5]:

ni = Ni Z² Pi(1 − Pi) / [d²(Ni − 1) + Z² Pi(1 − Pi)]

where ni is the sample size in Luc Ngan and Luc Nam; Ni is the total number of farms in Luc Ngan (37,826 farms) and Luc Nam (44,373 farms); d is the maximum error deemed acceptable (a 10% error was used for the calculation of the sample size in this study); Z is the normal variable (with 10% error, Z is equal to 1.645); and the proportion Pi is the percentage of lychee producers in Luc Ngan (80%) and Luc Nam (70%).
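As a quick consistency check, the formula can be evaluated directly. The short Python sketch below assumes the finite-population-corrected sample-size formula as reconstructed above (the original equation image was not available); the farm counts and producer proportions are those given in the text.

```python
def stratum_sample_size(N, P, d=0.10, Z=1.645):
    """Finite-population-corrected sample size for one stratum."""
    num = N * Z**2 * P * (1 - P)
    den = d**2 * (N - 1) + Z**2 * P * (1 - P)
    return num / den

# Farm counts and producer proportions as given in the text.
strata = {"Luc Ngan": (37826, 0.80), "Luc Nam": (44373, 0.70)}
for district, (N, P) in strata.items():
    n = stratum_sample_size(N, P)
    # Rounding to the nearest integer reproduces the paper's 43 and 57.
    print(f"{district}: n = {n:.2f} -> {round(n)}")
```

The computed values (approximately 43.2 and 56.8) round to the 43 and 57 respondents reported above, which supports this reconstruction.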
Analytical Tools
Descriptive analysis was done with the aid of tables and figures to analyze the status of the lychee industry in Bac Giang province, the adoption of VietGAP in the lychee industry, the characteristics of VietGAP farmers and non-VietGAP farmers, and other relevant information.
Various models have been applied in the economic literature to determine the factors affecting the extent of adoption of innovations in agriculture. Research on rice production in the Philippines employed a logit model and Poisson estimators to show the effect of socioeconomic, institutional and environmental factors on the adoption of certified seeds; the survey consisted of 3,164 samples [6]. Other studies used linear regression analysis to identify the factors affecting vegetable growers' intensity of adoption of IPM practices [7] and to determine the extent of organic vegetable farming at the farm household level [8]. These studies did not consider specific adoption groups such as high adopters and low adopters. In this study, the binary logit model was applied to determine the factors affecting the extent of adoption of VietGAP in the lychee industry. In the logit model, the extent of adoption of VietGAP (the dependent variable) takes 2 values: P = 1 if the lychee farmer has high adoption of VietGAP and P = 0 if the lychee farmer has low adoption of VietGAP. Since VietGAP has 12 practices, farmers who applied 8 practices or more (above 67%) were classified as high adopters of VietGAP; those who adopted fewer than 8 practices (less than 67%) were low adopters of VietGAP.
The linear form of the logit function is: Ln[Pi/(1 − Pi)] = α + βi Xi + εi, where i denotes individual i and εi is the error term. The parameters were estimated by the maximum likelihood technique. The marginal effect of Xi on Pi was measured by taking the partial derivative of Pi with respect to Xi, which for the logit model is ∂Pi/∂Xi = βi Pi(1 − Pi). In the logit model, the marginal effect represents the change in probability resulting from a unit change in Xi, ceteris paribus.
The logit empirical model is as follows:

Ln[Pi/(1 − Pi)] = α0 + α1 Age + α2 Edu + α3 Fsize + α4 Netprofit + β1 Info + β2 Member + εi

where Pi is the probability of high adoption of VietGAP; α0 is the intercept; α1, α2, α3, and α4 are the coefficients of the explanatory variables Age, Edu, Fsize, and Netprofit, respectively; β1 is the coefficient of the dummy variable for accessibility to relevant information on VietGAP (Info); β2 is the coefficient of the dummy variable for membership in the lychee farmers' group (Member); and εi is the error term. The independent variables are explained in Table 1.
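For readers who want to reproduce this kind of estimation, the sketch below shows how such a binary logit with marginal effects could be fitted using Python's statsmodels. It is a minimal illustration, not the authors' actual code: the file name lychee_survey.csv and the column names (n_practices, Age, Edu, Fsize, Netprofit, Info, Member) are hypothetical stand-ins for the survey variables described above.

```python
# pip install pandas statsmodels
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey file; column names are illustrative stand-ins.
df = pd.read_csv("lychee_survey.csv")

# High adopter = applied 8 or more of the 12 VietGAP practices.
df["HighAdopt"] = (df["n_practices"] >= 8).astype(int)

X = sm.add_constant(df[["Age", "Edu", "Fsize", "Netprofit", "Info", "Member"]])
logit = sm.Logit(df["HighAdopt"], X).fit()  # maximum likelihood estimation
print(logit.summary())

# Average marginal effects, i.e. beta_i * P_i * (1 - P_i) averaged over the sample.
print(logit.get_margeff(at="overall").summary())
```

Here get_margeff(at="overall") averages the per-observation effects βi Pi(1 − Pi), matching the marginal-effect definition given above.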
Production, planted area and yield
Lychee is one of the most demanding crops among tropical and subtropical fruit trees in terms of climatic conditions. In Asia, China is the leading lychee-producing country in terms of volume of production. Its production covers 580,000 ha with an output of 1,558,400 tons, over 66% of which has been developed in the past 10 years [9]. India is the second largest lychee producer, with around 500,000 tons of lychee produced. Following India is Taiwan, the third largest lychee producer, where about 100,000 tons of lychees are produced annually (Fig. 1).
Thailand produced 85,000 tons and is the fourth largest lychee producer [2,10]. Vietnam is the fifth largest producer, where annual production is estimated at about 50,000 tons from 35,352 ha of planted area [11]. In Vietnam, the climate of the northern region, where winter is short, dry and somewhat cold and summer is long and hot with high rainfall and humidity, is quite suitable for the growth of lychee [11]. In the northern region, Hai Duong is considered the original lychee province, from which lychee spread throughout the region. Bac Giang now has the biggest commercial lychee cultivation. As the biggest lychee producer in Vietnam, its planted area, production, and yield of lychee by district in 2011 are shown in Table 3.
As shown in Table 3, Luc Ngan district had the largest lychee planted area with 18,500 ha, followed by Luc Nam with 6,140 ha; their shares in planted area are 52% and 17%, respectively. The total lychee production of Bac Giang province in 2011 was 206,861 tons, of which 58% and 17% were contributed by Luc Ngan and Luc Nam, respectively.
The yield of lychee differed across districts of Bac Giang province in 2011. Two districts, namely Luc Ngan and Luc Nam, had the highest lychee yields per hectare, at 6.5 tons and 5.82 tons, respectively. The average lychee yield of Bac Giang province was 5.77 tons per hectare.
Prices
There were two main lychee varieties in Bac Giang province, namely the pre-ripened lychee and the Thieu lychee. The price of pre-ripened lychee was greater than the price of Thieu lychee during the main season (Table 4).
Additionally, the price of lychee declined from the beginning to the end of the season for both varieties. This can be explained by the law of supply and demand: when the supply of lychee is greater than the demand during the main season, the price of lychee decreases. The difference between the average prices of the pre-ripened lychee and the Thieu lychee was more than 3,000 VND. The prices in Luc Ngan district were not very different from those of other districts; even the average price of pre-ripened lychee was almost the same as in other districts, around 15,000 VND.
However, the price of Thieu lychee was very high in Luc Ngan district, with an average of 12,000 VND, compared to the price of lychee in other districts, which was only 5,000 to 7,000 VND. Fig. 2 shows that there were also differences in the prices of lychee across districts of Bac Giang province at the end of the season. The lowest price was in Yen Dung and Viet Yen districts, at 2,000 VND per kg of lychee. On the other hand, Luc Ngan was not only able to maintain a high price for lychee but was also able to keep the price stable from the beginning to the end of the season. This is because Luc Ngan has been applying VietGAP's cultivation techniques in lychee production since 2008. According to Lan (2010), lychee produced with the application of VietGAP not only has a good appearance and better uniformity but is also free of chemical residues [3]. This is the reason why lychee collectors from China and even from domestic markets always choose Luc Ngan as the first priority in sourcing lychee.
Marketing channels
The value chain of fresh lychee in Thanh Ha district, Hai Duong province indicates that around 2% of the lychee produced in this area was sold in Chinese markets, 80% went to the southern provinces and Cambodian markets, and 18% went to northern provinces such as Hai Duong, Hai Phong, and Hanoi [12]. However, markets and market segmentation differed between Thanh Ha district and Bac Giang province. In Luc Ngan district, Bac Giang province, 48% of fresh lychees were consumed domestically and 52% were sold to China [3]. The product of this export channel is first-class lychee, which has better quality and appearance than lychee produced in other places, and its price is usually higher than those of other lychee types by around 3,000 VND. Outside traders from Lao Cai province play an important role in the operation of this channel: they come to the trading centers and directly participate in the chain, selecting and buying lychee from farm households or local collectors.
The total volume of lychee exported from Bac Giang province reached 69,565 tons in 2011 [13]. This amount was equal to 32.6% of the total lychee production in Bac Giang, equivalent to a total export value of 890 billion VND. Ha Khau port and Kim Thanh port in Lao Cai province are the major exit ports for lychee export; the volume exported through these ports was around 38,230 tons, while the export volume through Tan Thanh port in Lang Son province was around 31,335 tons. However, the domestic markets also play a very important role: around 91,935 tons, or 43.2% of total lychee production in Bac Giang, were sold in fruit stores and local markets, and around 51,546 tons, or 24.2% of total production, were sold in supermarkets.
Socio-Economic Characteristics of Lychee Farmers
Table 5 compares the characteristics of the selected farmer-respondents from Luc Ngan and Luc Nam districts. The farmer-respondents were relatively homogeneous in terms of demographic characteristics. The mean age of the non-VietGAP respondents was 47.12 years, while it was 46.67 years for VietGAP farmers; the difference in age between the two groups was not significant. In terms of educational attainment, VietGAP farmers spent more years in school than the non-VietGAP farmers, the difference being significant at the 10% probability level. The average lychee farm size was 0.6 hectare in Luc Ngan district and 0.51 hectare in Luc Nam district, but the difference was not significant. However, there was a significant difference at the 1% level in the density of lychee trees per hectare between Luc Ngan and Luc Nam districts. The lychee yields were 13,638 kg per hectare and 12,550 kg per hectare in Luc Ngan and Luc Nam districts, respectively, the difference being significant at the 1% probability level. Lychee is the main crop in Bac Giang province and contributes substantially to farmers' income: the average income contributions of lychee production were approximately 80% and 55% of total household income in Luc Ngan and Luc Nam districts, respectively.
Production, Postharvest, and Marketing Practices in Lychee
Practices in the lychee industry differed significantly between VietGAP farmers and non-VietGAP farmers. In lychee production, the VietGAP farmers grafted more pre-ripened variety trees to reduce the price risk in the peak season, whereas the non-VietGAP farmers said the grafted branches were not easy to care for; thus, only a small number of non-VietGAP farmers planted this variety. Tree densities also differed between the two groups. According to the VietGAP farmers, a lower tree density let them focus on creating the canopy and pruning, which is why they pruned after harvest and spent more man-days on pruning. The two groups also differed in fertilizer and pesticide application practices: in the timing, amount, and method of application, in record keeping, and in the use of protective clothing. Details are shown in Table 6. Table 7 compares harvest, postharvest, and marketing practices for lychee between non-VietGAP and VietGAP farmers. A significant difference concerned the factors affecting farmers' harvesting decisions: the non-VietGAP farmers decided when to harvest based on market demand, whereas the VietGAP farmers decided based on fruit quality. Additionally, the VietGAP farmers used canvas to cover the orchard land during harvest to keep the picked fruits clean. They had contracts with supermarkets, and the lychee collectors classified lychee from VietGAP farmers as first-class lychee.
Extent of Adoption of VietGAP
Forty-three respondents who applied VietGAP practices in lychee production in Luc Ngan district were interviewed. Table 8 shows that all of the respondents satisfied the soil and water requirements of VietGAP. Harvesting and storage and transport were the second most adopted practices, with around 95% of total adopters. Chemical pesticide application was also highly adopted, at 93% of total adopters, followed by waste management and treatment and fertilizer application, with around 91% and 88% of total adopters, respectively.
Adoption of variety selection was lower, with 31 adopters or 72% of total adopters, since the farmers who followed the practice of grafting pre-ripened lychee said it was quite difficult to adopt.

Table 6. Production practices for lychee of non-VietGAP and VietGAP farmers, Bac Giang province, Vietnam, 2011

Pruning (non-VietGAP farmers): Pruned after harvest, once a year. However, the farmers usually visited the orchard and started pruning diseased branches immediately once they found signs of disease. Spent 31.09 man-days on pruning.

Fertilizer application (non-VietGAP farmers): Fertilized in early October. Fertilizing decisions were based on the farmers' own experience (50%), the practice of their neighbors (42%), or instructions from the fertilizer seller and extension officers (8%). Applied 1,850 kg composted manure, 50 kg nitrogen, 300 kg phosphorus, and 100 kg potassium per hectare. Although the farmers knew that applying more potassium would increase lychee productivity, they applied only 0.3 kg per tree; according to them, higher lychee productivity would mean lower lychee prices. None of the respondents recorded the details of fertilizer application, application rate, method of application, or price of fertilizer used.

Fertilizer application (VietGAP farmers): Fertilized in early August (after pruning). Fertilizing decisions were based on their own experience combined with guidelines from the VietGAP program (85%); 15% of the VietGAP farmers decided confidently on the volume of fertilizer and pesticide applied in their lychee orchards. Applied 1,500 kg composted manure, 85.43 kg nitrogen, 547.5 kg phosphorus, and 285 kg potassium per hectare. Around 88% of VietGAP respondents recorded their fertilizer application methods; records included fertilizer prices, application times, and other details worth noting for the information of the extension officers.

Pesticide application (non-VietGAP farmers): Applied once a year. Consulted their neighbors and chemical sellers on how to apply pesticides. Average volume of chemical application on lychee: 45 liters. Used 30.7 man-days of labor for pesticide application. Did not keep records of pesticide application. The farmers used face covers but did not usually use gloves or hats, since, they said, the lychee trees in Luc Nam are shorter than those in other districts.

Pesticide application (VietGAP farmers): Applied once a year. Consulted and adhered to the VietGAP guidelines in applying chemical pesticides. Average volume of chemical application on lychee: 42.72 liters. Aside from pesticides similar to those applied in Luc Nam district, farmers also sprayed Ethrel several times to support the pre-ripened variety. Used an average of 43.22 man-days for chemical application. Recorded all details related to this practice. Face covers, hats, and gloves were commonly used, but the farmers did not use protective glasses during pesticide spraying.

Source: Primary data, 2011
Pruning, and documentation and recording, had the same proportion of adopters at 42%. Not all of the respondents were confident in their pruning experience, and some did not practice pruning immediately after harvest. The farmers took note of their practices, but some did not keep regular records. Complaint and complaint resolution, and safety for workers, were the least adopted practices, with only 28% and 23% of total adopters, respectively. The farmers reported their complaints via extension workers and the lychee group, although the number of reports was small. On workers' safety, the VietGAP farmers understood the adverse effects of chemical pesticides on themselves and their community, which is why they ensured that harvesting took place at least 15 days after chemical application. Nonetheless, they ignored the harmful effects of chemicals on their eyes, which is why none of them wore protective glasses.
Remarkably, no farmer adopted the internal audit requirement; according to them, this practice was not important in their lychee production. Table 9 shows the extent of VietGAP adoption in lychee by the range of practices adopted. Among the 43 VietGAP adopters, 24 (56% of total respondents) applied 8 practices or more, while 19 farmers (44%) adopted fewer than 8 practices.
All respondents said they benefited when the extension workers and lychee group members supported them in practicing the VietGAP recommendations; this is why all of them applied at least 6 practices. Ten was the highest number of VietGAP practices adopted in the survey area, applied by two of the 43 respondents. They believed that lychee is a crop with potential.

Table 7. Harvest, postharvest, and marketing practices for the lychee industry in Bac Giang province, Vietnam, 2011

Non-VietGAP farmers: Harvested lychee as soon as possible, or whenever they found there was demand from the market. The farmers ignored the safe period after chemical application and focused more on profit; high demand for lychee fruit would always convince them to harvest. This decision also had the advantage of avoiding the risk of lower prices during the peak season. Traditional tools such as the sickle, bamboo frame, bamboo chair, and bamboo basket were still being used for the lychee harvest. Lychee was sorted by shape and weight; the average weight of lychee bundles was around 1 to 2 kg. The producers brought the sorted fresh lychee to the collection station near their villages by motorbike.

VietGAP farmers: The color and size of the lychee were the most important factors affecting the farmers' harvesting decisions; based on their experience, they knew when to harvest lychee of good quality or at the right ripening level. The farmers still used simple equipment to pick lychee, such as the sickle, bamboo frame, bamboo chair, and bamboo basket, but they also used canvas to cover the orchard land during harvest. The canvas, provided with VietGAP support, kept the picked fruits clean. Lychee was then sorted by color, shape, and weight; the average weight of lychee bundles was around 1 to 2 kg. Twenty percent of VietGAP producers (9 respondents) had contracts with some supermarkets in Hanoi.
Factors Affecting Adoption of VietGAP
Binary logit analysis was done to determine the factors influencing the extent of adoption of VietGAP in the lychee industry. The dependent variable was defined as the rank in terms of extent of adoption, based on the number of practices used. A likelihood ratio test was made to measure the goodness of fit of the model; the likelihood ratio was found to be highly significant, implying that the rate of adoption of VietGAP in the lychee industry was explained by the explanatory variables included in the model. The results of the logit estimation are shown in Table 10.
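For readers who wish to reproduce this kind of estimation, the sketch below shows the workflow (logit fit, likelihood-ratio test, marginal effects) on synthetic data using Python's statsmodels. The variables and all numbers are illustrative stand-ins for the survey variables, not the actual dataset.

```python
# Minimal sketch of the binary logit workflow described above, fitted on
# synthetic data (the actual survey data are not reproduced here).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 43  # number of VietGAP respondents in the survey

# Hypothetical regressors mirroring the paper's significant variables.
farm_size = rng.normal(0.6, 0.2, n)      # ha
net_profit = rng.normal(50, 15, n)       # net profit in 2010 (scaled units)
info_access = rng.integers(0, 2, n)      # access to VietGAP information (0/1)
group_member = rng.integers(0, 2, n)     # lychee farmers' group member (0/1)
X = sm.add_constant(np.column_stack([farm_size, net_profit, info_access, group_member]))

# Hypothetical outcome: 1 = high adoption (8 or more practices), 0 = otherwise.
lin = -2.0 - 1.2 * farm_size + 0.02 * net_profit + 0.7 * info_access + 0.6 * group_member
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

model = sm.Logit(y, X).fit(disp=False)

# Likelihood-ratio test against the intercept-only model: the goodness-of-fit
# test reported in the paper.
print("LR statistic:", 2 * (model.llf - model.llnull), "p-value:", model.llr_pvalue)

# Average marginal effects: the change in P(high adoption) per unit change in
# each regressor, which is how Table 10 is interpreted.
print(model.get_margeff(at="overall").summary())
```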
Among the explanatory variables considered, farm size, net profit in the 2010 operation, accessibility to information, and membership in a lychee farmers' group significantly influenced the extent of VietGAP adoption in the lychee industry of Bac Giang province.
The results of the marginal effect estimation, also shown in Table 10, indicate that a 1-hectare increase in farm size will decrease the probability of high adoption of VietGAP by 1.21. Farmers who have bigger farms are less likely to adopt VietGAP. The reason for this significant effect is that VietGAP involves strict implementation of practices, which requires farmers to add more capital and employ more laborers in their lychee farming operations. A 1,000-VND increase in the net profit of lychee production in the previous season (2010) would increase the probability of high adoption of VietGAP by 0.02. Lychee farmers who had more income or net profit from the previous lychee season were more inclined to adopt VietGAP: with higher income, they have more capital to finance the required farming operations for the next season. If farmers have access to relevant information on VietGAP, the probability of high adoption of VietGAP increases by 0.73. The positive effect of access to VietGAP information suggests that the VietGAP program should open more training for farmers and extension officers; information is very important for the farmers, since they still have difficulties in applying VietGAP. Similarly, membership in a lychee farmers' group increases the probability of adoption by 0.6. A farmer is more likely to adopt VietGAP if he is a member of a lychee farmers' group; this could be because lychee group members were priority participants in related trainings and workshops where they shared experiences on lychee production. The age and educational attainment of the household head were found to have a positive, although not significant, influence on the extent of adoption of VietGAP in the lychee industry.
CONCLUSION
The study applied the binary logit model to determine the factors affecting the extent of adoption of VietGAP in the lychee industry in Bac Giang, Vietnam. The factors that significantly influenced the extent of VietGAP adoption in the lychee industry are farm size, net profit in the previous lychee season, accessibility to information, and membership in a lychee farmers' group. In order to encourage lychee farmers to adopt the full set of practices, the study recommends: 1) improving the implementation of the VietGAP program; 2) providing more training for farmers and extension workers, processing support, an insurance fund, greater access to capital, and better equipment and tools for lychee production; and 3) encouraging membership in lychee farmers' groups.
ACKNOWLEDGMENTS
This work was funded by the Southeast Asian Regional Center for Graduate Study and Research in Agriculture (SEARCA). The authors acknowledge Luu Van Duy, Vu Thi Mai Lien, Vu Thi Hai, Bui Van Quang, and Dang Thi Be for their assistance with this study. The authors thank anonymous referees for their insightful comments. | 2019-08-19T08:09:33.809Z | 2016-01-10T00:00:00.000 | {
"year": 2016,
"sha1": "2f9ff5c1b977476698449a7fb9cda1383d120c1f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.9734/ajaees/2016/19948",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9a0422494581a9bb687851b1fa14cf2f03cd8c70",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
119243017 | pes2o/s2orc | v3-fos-license | Cooper pairs in the Borromean nuclei $^6$He and $^{11}$Li using continuum single particle level density
A Borromean nucleus is a bound three-body system which is pairwise unbound because none of the two-body subsystem interactions are strong enough to bind them in pairs. As a consequence, the single-particle spectrum of a neutron in the core of a Borromean nucleus is purely continuum, similarly to the spectrum of a free neutron, but two valence neutrons are bound up in such a core. Most of the usual approaches do not use the true continuum to solve the three-body problem but use a discrete basis, like, for example, wave functions in a finite box. In this paper the proper continuum is used to solve the pairing Hamiltonian in the continuum spectrum of energy by using the single particle level density devoid of the free gas. It is shown that the density defined in this way modulates the pairing in the continuum. The partial-wave occupation probabilities for the Borromean nuclei $^6$He and $^{11}$Li are calculated as a function of the pairing strength. While at the threshold strength the $(s_{1/2})^2$ and $(p_{3/2})^2$ configurations are equally important in $^6$He, the $(s_{1/2})^2$ configuration is the main one in $^{11}$Li. For very small strength the $(s_{1/2})^2$ configuration becomes the dominant one in both Borromean nuclei. At the physical strength, the calculated wave function amplitudes show a good agreement with other methods and experimental data, which indicates that this simple model grasps the essence of the pairing in the continuum.
Introduction
A Borromean nucleus [1,2] is a bound three-body system in which any pair subsystem is unbound. This is so because neither the bare neutron-neutron interaction nor the core-neutron interaction is strong enough to keep any pair subsystem together. As a consequence, the single-particle spectrum is exclusively continuum. The nuclei 6 He and 11 Li have these characteristics; hence they are both Borromean nuclei, formed by a core plus two neutrons [3,4]. The properties of these nuclei have been studied in the two-body framework [5] as well as in the three-body framework [6][7][8][9][10] with effective interactions. More elaborate formalisms, such as the Faddeev equations [1,11] and ab initio calculations [12][13][14], have also been used to scrutinize the properties of these exotic nuclei.
The pairing is a fundamental part of the residual interaction [15,5,16]. It is particularly important in Borromean nuclei, since the core-neutron system is unbound while the same core with two neutrons is bound. Even though the origin of the pairing is unknown, at least two different models provide possible mechanisms for the enhancement of the pairing in the 11 Li nucleus. The first one is through the interplay between the pairing and collective vibration [17]; this interpretation is consistent [18] with the experimental cross section of Ref. [19]. The second mechanism is provided by the tensor force [20], which explains the observed Coulomb breakup strength and the charge radius of Ref. [21]. The simplest pairing interaction is given by the constant pairing [15]. It will be shown that this effective interaction, simple as it is, in conjunction with the single particle density, captures the essence of the correlations of the two neutrons in the Borromean nuclei 6 He and 11 Li. This paper studies the neutron-neutron pairing correlations in the core of the Borromean nuclei 6 He and 11 Li. Due to the Borromean character of these two nuclei, correlations between continuum states are the only ones present. Usually these correlations are incorporated through quasibound states obtained by putting the system in a large spherical box. In Refs. [8,10] the scattering wave functions are used in order to consider the continuum explicitly; in this work, instead, the continuum energy spectrum is handled using the continuum single particle level density (CSPLD). This density is defined as the difference between the mean-field and free densities [22][23][24]. The use of the CSPLD was implemented earlier in many-body calculations in the Bardeen-Cooper-Schrieffer and Richardson solutions of the pairing interaction in Refs. [25,26].
The paper is organized as follows. In section 2 the partial wave probability amplitudes in terms of the CSPLD are derived. In section 3.1 the single particle representation is defined, while the binding energy and partial wave amplitudes as a function of the strength are calculated in section 3.2. The conclusions are given in section 4. The paper contains an appendix (Appendix A) which gives some details about the CSPLD which modulates the pairing interaction in the continuum.
Formalism
The goal of this section is to express the partial wave probability in terms of the partial wave single particle level density. We find it clearer to formulate the problem by starting with continuum discretized wave functions obtained by putting the system in a spherical box (what we call the box representation) [27]. After the equations have been obtained, we take the formal limit of the size of the spherical box to infinity. We get a dispersion equation similar to that of Eq. (4) in the two-electron system of Ref. [28], which includes the continuum single particle level density.
Let us consider the Borromean nucleus as a three-body system formed by an inert core plus two valence neutrons. The Hamiltonian which governs the system reads
$$H = h(1) + h(2) + V, \qquad (1)$$
where h is the single-particle Hamiltonian (see Eq. (17) in sect. 3.1) and V is the residual interaction between the two valence neutrons. The discrete eigenvalues of h are labeled by $\{a, m_a\} = \{n_a, l_a, j_a, m_a\}$ [29],
$$h(r)\, \psi_{a m_a}(r) = \varepsilon_a\, \psi_{a m_a}(r), \qquad (2)$$
with $\varepsilon_a > 0$ for all a.
The eigenfunctions of h are used as the single particle representation to build the antisymmetrized and normalized two-neutron basis $|a, b; JM\rangle$. This basis is used to expand the normalized two-neutron wave function $|\Psi_{JM}\rangle$ in terms of the unknown amplitudes $X^J_{ab}$ [29],
$$|\Psi_{JM}\rangle = \sum_{a \le b} X^J_{ab}\, |a, b; JM\rangle, \qquad (3)$$
with the normalization condition
$$\sum_{a \le b} (X^J_{ab})^2 = 1. \qquad (4)$$
From the Schrödinger equation $H |\Psi_{JM}\rangle = E_J |\Psi_{JM}\rangle$ we get the following eigenvalue equation for the three-body problem in the shell model framework,
$$(\varepsilon_a + \varepsilon_b - E_J)\, X^J_{ab} + \sum_{c \le d} \langle a, b; JM | V | c, d; JM \rangle\, X^J_{cd} = 0. \qquad (5)$$
We are going to consider as particle-particle effective interaction the constant pairing with matrix elements (m.e.) given by [15]
$$\langle a, b; JM | V | c, d; JM \rangle = -G\, \delta_{J0}\, \delta_{ab}\, \delta_{cd}, \qquad (6)$$
complemented with a partial wave cutoff $l_{max}$ and an energy cutoff $\varepsilon_{max}$ which will be specified in Section 3.1.
From the secular equation (5) and the interaction m.e. (6) we get the dispersion relation
$$\sum_{nlj} \frac{1}{2\varepsilon_{nlj} - E} = \frac{1}{G}, \qquad (7)$$
which gives the J = 0 correlated two-neutron energies. This expression shows that every state in the discretized continuum, no matter whether it represents a resonant or a continuum state [30], contributes with the same strength to the particle-particle correlation. This is a nonphysical feature, since the expectation is that states in resonant configurations will have a greater probability to interact with each other, and with greater strength, than states in continuum configurations. We will see below how this attribute of the constant pairing in the discretized continuum is modified in the continuum representation using the single particle level density.
From the secular equation (5) and the normalization condition Eq. (4) we get the two-particle wave function amplitudes
$$X_{nlj} = \frac{N}{2\varepsilon_{nlj} - E_0}, \qquad (8)$$
with N the normalization coefficient. Then, the two-particle ground state reads
$$|\Psi\rangle = \sum_{nlj} X_{nlj}\, |nlj, nlj; 00\rangle. \qquad (9)$$
We define the partial wave amplitude by summing up the contributions of all positive energy states for each partial wave (l, j),
$$X_{lj} = \sum_n X_{nlj}, \qquad (10)$$
where the coefficient N is fixed by the normalization condition $\sum_{lj} X^2_{lj} = 1$, and $E_0$ is obtained by solving the dispersion relation (7).
In Appendix A we show that what makes sense in the limit R → ∞ of the size of the box is not $\lim_{R\to\infty} \sum_n f(\varepsilon_{nlj})$ but rather $\lim_{R\to\infty} [\sum_n f(\varepsilon_{nlj}) - \sum_n f(\varepsilon^{(0)}_{nlj})]$, i.e., the difference between the correlated and the uncorrelated magnitudes [22,24]. This is a practical way to get rid of the density due to the free nucleons. A subtraction prescription like this one was proposed by Bonche et al. [31,23,24] for the calculation of the contribution of unbound states in the nuclear Hartree-Fock framework at finite temperature. The probability amplitude $X_{lj}$ in the continuum representation reads
$$X_{lj} = N \int_0^{\varepsilon_{max}} \frac{g_{lj}(\varepsilon)}{2\varepsilon - E_0}\, d\varepsilon, \qquad (11)$$
with
$$g_{lj}(\varepsilon) = \frac{1}{\pi} \frac{d\delta_{lj}}{d\varepsilon} \qquad (12)$$
and $\delta_{lj}(\varepsilon)$ the partial wave phase shift (Appendix A).
The probability amplitude $X_{lj}$ might be negative for some partial wave if $g_{lj}(\varepsilon)$ takes negative values, but the partial probability $X^2_{lj} = (X_{lj})^2$ is a positive magnitude. The value of N is defined such that the normalization
$$\sum_{lj}^{l_{max}} X^2_{lj} = 1 \qquad (13)$$
is fulfilled. Notice that if we had instead defined the partial wave probability as $X^2_{lj} = \sum_n (X_{nlj})^2$, then, when the limit of the size of the box is taken to infinity, it could happen that $X^2_{lj} \propto \int \frac{g_{lj}(\varepsilon)}{(2\varepsilon - E_0)^2}\, d\varepsilon$ might be negative if $g_{lj}(\varepsilon)$ takes mainly negative values in the interval $(0, \varepsilon_{max})$.
By taking the box limit of Eq. (7) we get the following dispersion relation,
$$\sum_{lj}^{l_{max}} \int_0^{\varepsilon_{max}} \frac{g_{lj}(\varepsilon)}{2\varepsilon - E}\, d\varepsilon = \frac{1}{G}. \qquad (14)$$
This expression differs physically from (7) not only because the limit of an infinite box has been taken, but mainly because the density without the free fermion gas is used (see the Appendix and Ref. [22]). Now it is clear that resonant and non-resonant continuum states will not make the same contribution. In the applications it will be shown how the density affects the partial wave occupation probability. We will find that $g_{lj}(\varepsilon)$ is intense around a resonance and small everywhere else. Then, the above expression can be interpreted as saying that the CSPLD modulates the pairing interaction in the continuum.
In terms of the CSPLD $g(\varepsilon)$ we get a dispersion equation similar to that of Eq. (4) in the Cooper system [28],
$$\int_0^{\varepsilon_{max}} \frac{g(\varepsilon)}{2\varepsilon - E}\, d\varepsilon = \frac{1}{G}, \qquad (15)$$
with
$$g(\varepsilon) = \sum_{lj}^{l_{max}} g_{lj}(\varepsilon) \qquad (16)$$
(Appendix A).
Equations (15) and (13) give the energy and the probability occupation, respectively, for the two neutrons in Borromean nuclei interacting by a constant pairing force in the continuum representation through the partial wave CSPLD.
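To illustrate how a dispersion relation of the form of Eq. (15) is solved in practice, the sketch below finds the correlated pair energy for a toy Lorentzian density standing in for the CSPLD of Fig. 1. The density shape, grid, and strength value are invented for illustration only.

```python
# Toy solution of a dispersion relation of the form of Eq. (15),
#   int_0^{eps_max} g(eps) / (2*eps - E) d eps = 1/G,
# for a bound pair energy E < 0, with a Lorentzian toy density in place of
# the actual CSPLD of Fig. 1.
import numpy as np
from scipy.integrate import simpson
from scipy.optimize import brentq

eps_max = 9.0                               # MeV, energy cutoff
eps = np.linspace(1e-4, eps_max, 4000)

e_res, gamma = 0.8, 0.6                     # invented resonance position and width (MeV)
g = (gamma / (2 * np.pi)) / ((eps - e_res) ** 2 + (gamma / 2) ** 2)

def dispersion(E, G):
    """Zero of 1/G - int g(eps)/(2 eps - E) d eps locates the pair energy."""
    return 1.0 / G - simpson(g / (2 * eps - E), x=eps)

G = 1.4                                     # MeV, illustrative pairing strength
E0 = brentq(dispersion, -20.0, -1e-8, args=(G,))
print(f"correlated two-neutron energy E0 = {E0:.3f} MeV")

# Unnormalized partial-wave amplitude, cf. Eq. (11), for this single toy wave:
X = simpson(g / (2 * eps - E0), x=eps)
print(f"unnormalized amplitude X = {X:.3f}")
```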
Single particle representation
The Woods-Saxon (WS) mean field arranges the neutron 0p 1/2 state below the 1s 1/2 state; this order is experimentally found in the nucleus 5 He but not in 10 Li. The ground state of 10 Li is the 1/2 + state, which corresponds to the 1s 1/2 state in the shell model picture. In order to reproduce the experimental order in the nucleus 10 Li, it is usual to use parity-dependent parameters for the WS [6,7]. Alternatively, we add to the WS a deep Gaussian potential [32] which produces the same effect. The Gaussian parameters are chosen in such a way that it strongly (mildly) affects the s (p) state. In this way, the same mean field can be used for odd and even states to describe the neutron states in 10 Li.
The single particle Hamiltonian which determines the representation through the eigenfunctions of Eq. (2) is
$$h(\vec r) = -\frac{\hbar^2}{2\mu} \nabla^2 + V_c(r) + V_{so}(r)\, \vec l \cdot \vec s, \qquad (17)$$
where $\vec r$ denotes the coordinate of the nucleon and µ the reduced mass of the core-neutron system. The central and spin-orbit potentials, in terms of $r = |\vec r|$, are given by the expressions (18)-(20), with $R = r_0 A^{1/3}$.
The mean-field parameters in (18)-(20) are adjusted using the code Gamow [33] in order to reproduce as well as possible the low-lying levels of the nuclei 5 He and 10 Li. Table 1 gives the values of these parameters. In Table 2 we compare the calculated [33] and experimental energies. The real and imaginary parts of the complex energies of 5 He are very similar to the experimental resonant parameters. The ground state energy of 10 Li is real and negative, but this nucleus is unbound: it is an antibound state [34] with wave number $k_0 = -i\, 0.033$ fm$^{-1}$. An antibound state is an unbound single particle state with negative real energy which would be bound if the mean field were a bit stronger [35]. The p 1/2 energy in 10 Li was fitted to the average of the two known experimental values.
The energy cutoff $\varepsilon_{max}$ is defined by using the expression
$$\varepsilon_{max} = \frac{\hbar^2}{m} \left( \frac{4}{\pi\, r_{nn}} \right)^2, \qquad (21)$$
which relates its value to the effective range $r_{nn} = 2.75$ fm [38] obtained for the three-dimensional delta interaction in Ref. [6]. This gives 8.884 MeV using $mc^2 = 939.57$ MeV and $\hbar c = 197.327$ MeV fm. In our calculations we use $\varepsilon_{max} = 9$ MeV.
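The quoted value is easy to verify; the one-line computation below, assuming Eq. (21) has the standard effective-range form written above and using the constants given in the text, reproduces the 8.884 MeV figure.

```python
# Check of the energy cutoff, Eq. (21): eps_max = (hbar^2 / m) * (4 / (pi * r_nn))^2.
import math

hbar_c = 197.327   # MeV fm
mc2 = 939.57       # MeV, nucleon rest energy
r_nn = 2.75        # fm, neutron-neutron effective range

eps_max = (hbar_c ** 2 / mc2) * (4.0 / (math.pi * r_nn)) ** 2
print(f"eps_max = {eps_max:.3f} MeV")  # 8.884 MeV, as quoted in the text
```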
Using the mean field which reproduces the low-lying levels of Table 2, we calculate the neutron partial wave phase shifts $\delta_{lj}(\varepsilon)$ as a function of the energy (up to the energy cutoff $\varepsilon_{max}$) using the code ANTI [39,40], and then we calculate each partial density with Eq. (12). The CSPLD $g(\varepsilon)$ of Eq. (16), shown in Fig. 1, is calculated by summing the partial wave densities up to the angular momentum cutoff $l_{max} = 4$. Partial waves with larger angular momentum only mildly modify the density for energies $\varepsilon < \varepsilon_{max}$.
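In practice, once the phase shifts are tabulated (here by the code ANTI), the partial densities of Eq. (12) are just numerical derivatives. The sketch below illustrates this with an invented Breit-Wigner-like phase shift standing in for the ANTI output.

```python
# Partial-wave density g_lj(eps) = (1/pi) d(delta_lj)/d(eps) of Eq. (12),
# computed by numerical differentiation of tabulated phase shifts.  A
# Breit-Wigner-like phase shift stands in for the output of the code ANTI.
import numpy as np

eps = np.linspace(1e-3, 9.0, 1000)         # MeV
e_res, gamma = 0.8, 0.6                    # invented resonance parameters (MeV)
delta = np.arctan((gamma / 2) / (e_res - eps))
delta[eps > e_res] += np.pi                # keep the phase shift continuous through the resonance

g_lj = np.gradient(delta, eps) / np.pi     # Eq. (12)
print(f"g_lj peaks at eps = {eps[np.argmax(g_lj)]:.2f} MeV")  # near the resonance energy
```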
The complex energy poles of the S-matrix manifest themselves on the real energy axis by shaping the CSPLD. We found two resonances below 9 MeV for the 5 He nucleus (see Table 2). While the $p_{3/2}$ resonance appears as a bump centered around 800 keV in Fig. 1, there is no signal of the $p_{1/2}$ resonance because of its large width (~5 MeV). The fingerprint of the ground state 1/2+ of 10 Li appears as a very sharp structure (labeled $s_{1/2}$ in Fig. 1; the (l, j) labels in the figure indicate the main partial-wave contribution) in the density, very close to the continuum threshold. This is consistent with
the properties of the scattering wave functions $u_{lj}(k, r)$ at low energy [41] in the presence of a bound or antibound state with energy $\varepsilon_0 \propto k_0^2$ close to the threshold; in 10 Li, $|k_0| = 0.033$ fm$^{-1}$. The density has another sharp structure around 200 keV corresponding to the first excited state (the 1/2- state). The last structure observed in the 10 Li spectrum is due to the 5/2+ resonance around 4.4 MeV.
Notice that the scattering wave functions themselves are not used in the formulation, but only the CSPLD, i.e. the information on the structure of the continuum is coded in $g(\varepsilon)$ as defined in Eqs. (12) and (16). The single particle representation is formed by $N_{He} = 174$ and $N_{Li} = 289$ discretized real energy states for the Helium and Lithium systems, respectively. These numbers of mesh points are chosen so as to make the results stable. The positions of the mesh points are selected following the structure of the CSPLD, i.e. states around the resonant energies are favored. The narrower the resonance, the more mesh points are needed to describe the density smoothly; this explains why more states are used to describe the Lithium system than the Helium one up to the same energy cutoff.
Results
The ground-state energies of the 6 He and 11 Li nuclei as functions of the pairing strength G are calculated and shown in Fig. 2. They were obtained from Eq. (15) using Gauss-Legendre quadrature for the integration. The experimental ground state energies $E_{Exp} = -0.973$ MeV [3] and $E_{Exp} = -0.369$ MeV [42] for 6 He and 11 Li, respectively, are marked by dotted horizontal lines. Let us call the physical strength $G_{ph}$ the value of G for which the experimental energy is reproduced. For the 6 He system we get $G^{He}_{ph}(9\,\mathrm{MeV}) = 1.427$ MeV, while for 11 Li we get $G^{Li}_{ph}(9\,\mathrm{MeV}) = 0.553$ MeV.
Taking a different energy cutoff $\varepsilon_{max}$ should only renormalize the pairing strength and not change the calculated properties of the system. In order to test this statement, we calculate the wave function amplitudes for a second model space with $\varepsilon_{max} = 18$ MeV (using 90 extra mesh points to describe the density in the interval $\varepsilon = (9, 18)$ MeV). The evolution of the ground state energy for this second model space is also shown in Fig. 2. For 6 He, the value of the physical strength for the second model space is $G^{He}_{ph}(18\,\mathrm{MeV}) = 1.507$ MeV. This figure is larger than for the smaller model space. The usual trend is that the strength decreases as the energy cutoff increases; the inversion in our model is due to the small negative values of the tail of the 5 He density (see Fig. 1). The value of the physical strength for 11 Li is $G^{Li}_{ph}(18\,\mathrm{MeV}) = 0.542$ MeV. This figure is very similar to the one for the first model space ($\varepsilon_{max} = 9$ MeV), indicating that the Lithium system is less sensitive to the energy cutoff than the Helium one, probably due to the proximity of $G^{Li}_{ph}$ to the continuum threshold.
The components l = 0, 1, and 2 of the occupation probabilities for the two different model spaces (energy cutoffs) are given in Table 3. Since the results compare well with each other, for the purpose of comparison with other methods we adopt the model space with $\varepsilon_{max} = 9$ MeV obtained from Eq. (21).

Table 3. Occupation probabilities $X^2_{lj}$ (l = 0, 1, 2) for 6 He and 11 Li for the two model spaces at the physical strengths $G_{ph}$.

Table 4. Percentage probabilities $X^2_{lj}$ (%) for the main partial-wave components of the ground state wave function of 6 He from different models. The meaning of the abbreviations is: DDCI: density-dependent contact interaction; CI(p): contact interaction within the p model space; CxS: complex scaling; CI(psd): contact interaction within the psd model space.

The occupation probabilities of the ground-state wave function of 6 He are compared with other methods in Table 4. The calculated $(p_{3/2})^2$ contribution is not far from the one calculated using a density-dependent contact interaction with box basis functions in Refs. [6,7]. Our best agreement is with the result obtained using a contact-delta interaction in the basis of the continuum scattering s, p and d wave functions of Ref. [10]. Notice that the previous work [8], using only the p wave, gives a much bigger value for the $(p_{3/2})^2$ configuration. The simultaneous comparison of the $(p_{3/2})^2$ and $(p_{1/2})^2$ configurations shows a good agreement with Ref. [9], which uses Gaussian basis functions in the complex scaling framework. A remarkable difference with all other methods is the big contribution of $(s_{1/2})^2$ in our model.
It is experimentally well established that the two main configurations of the two neutrons in the Borromean nucleus 11 Li are $(s_{1/2})^2$ and $(p_{1/2})^2$; Ref. [43] shows that the contribution of the second configuration is (51 ± 6)%. In Table 5 we compare our result with that of the cluster model of Ref. [1], the density-dependent contact interaction of Refs. [6,7], and that of the Random Phase Approximation (RPA) of Ref. [17].

Table 5. Comparison of the percentage probabilities $X^2_j$ (%) of the configurations for the ground state wave function of the nucleus 11 Li. The figures for the 'Cluster' model are taken from the average of the results of Ref. [1].

In general we observe a good agreement with all these methods. In particular, the calculated $X^2_s$ contribution is in between the results of the three-body and the collective models, while $X^2_p$ seems to better agree with the result of Ref. [6].
As a last study of the properties of these two Borromean nuclei, we analyze how the ratio between $X^2_s$ and $X^2_p$ changes as the pairing strength is artificially decreased. Figure 3 shows the result for the 6 He nucleus. At the physical strength, the ground state wave function is dominated by the $(p_{3/2})^2$ configuration. As the strength is decreased, the p contribution reduces its value while the s one increases, and the system becomes unbound (the energy changes to a positive value, see Fig. 3). Figure 4 shows the evolution of the two main components of the wave function in 11 Li. At the physical strength, both the $(s_{1/2})^2$ and $(p_{1/2})^2$ configurations are sizable in the 11 Li nucleus. As the strength is decreased, the s configuration becomes more and more important, being the dominant one at the threshold strength $G_{th} \simeq 0.005$ MeV.
A common feature of the Lithium and Helium nuclei is that a small pairing strength favors the $(s)^2$ configuration to the detriment of the $(p)^2$ one in both Borromean nuclei. A difference between them is that, at the threshold strength, the wave function of the Lithium is almost 100% $(s_{1/2})^2$, while the two neutrons in the Helium share their strength between the $(s_{1/2})^2$ and $(p_{3/2})^2$ configurations. Figures 3 and 4 show that both Borromean nuclei 6 He and 11 Li are unbound until the pairing force creates enough correlations to bind all three bodies together. For the three-body system n-n-9 Li this transition occurs very close to the continuum threshold; hence a very small correlation between the two neutrons around the 9 Li core is enough to bind the three-body system. This behavior of the two neutrons around the 9 Li core resembles very much the behavior of the electron Cooper pair [28,17], with the difference that in our finite system the threshold strength is not zero. The small value of $G_{th}$ may be due to the presence of the antibound state close to the threshold in the n-9 Li system. The antibound state may also affect other observables, like for example the dipole transition [44].
Conclusions
The ground state energy and wave function of the Borromean nuclei 6 He and 11 Li have been studied as a function of the pairing strength using the single particle level density. The model consists of a three-body system with two valence neutrons outside the inert cores 4 He and 9 Li. The neutrons lie in the continuum of their respective mean fields and are correlated through a constant pairing interaction modulated by the continuum single particle level density. The single particle representation was defined using the continuum single particle level density obtained by the subtraction method [22], while the cutoff energy was set using the neutron-neutron effective range [6]. In order to compare with other formalisms and experimental data, the pairing strength was fixed using the ground state energies of 6 He and 11 Li. Good agreement with other methods was found for both nuclei. Finally, even though the $(s_{1/2})^2$ configuration becomes dominant as the strength is artificially decreased in both Borromean systems, a seemingly unique feature of the continuum s states in the 11 Li system appears due to the presence of the near-threshold antibound state: an extremely small (although finite) strength is enough to bind the two neutrons around the 9 Li core. This simple model shows that the essence of the pairing in the continuum is captured through the continuum single particle level density.
Acknowledgements
This work was supported by the National Council of Research PIP-0625 (CONICET, Argentina).
A Partial wave single-particle level density

In this appendix we give details about the continuum single-particle level density (CSPLD) which is used in this work for the constant pairing interaction in the continuum energy representation. This density is derived from the box representation and is expressed in terms of the derivative of the partial wave phase shift. We closely follow the considerations made by Beth and Uhlenbeck for the calculation of the second virial coefficient [22].
A partial-wave scattering state is characterized by the angular momentum l, the total angular momentum j and the continuum wave number k. This generalized eigenfunction of the single-particle Hamiltonian with continuum eigenvalue $\varepsilon = \frac{\hbar^2}{2\mu} k^2$ (where µ is the reduced mass) is characterized asymptotically by the phase shift $\delta_{lj}(k)$ [45],
$$u_{lj}(k, r) \to \sin\left(kr - \frac{l\pi}{2} + \delta_{lj}(k)\right), \quad r \to \infty. \qquad (A.1)$$
This asymptotic behavior, together with the condition that for a given partial wave the phase shift tends to zero as k → ∞, determines $\delta_{lj}(k)$ within a multiple of π. An increase of the orbital angular momentum makes the single-particle mean field less important, and for this reason it makes sense to use an orbital angular momentum cutoff $l_{max}$.
One can discretize the continuum scattering energy ε by putting the system into a large spherical box with radius R. The box boundary condition $u_{lj}(k, R) = 0$ then forces the continuous spectrum to take the discrete values $\varepsilon_{nlj} = \frac{\hbar^2}{2\mu} k^2_{nlj}$. The parameter n denotes the number of nodes (counting the one at r = 0) of the function $u_{nlj}(r) = u_{lj}(k_{nlj}, r)$ in the interval [0, R). The relation between the number of nodes and the phase shift $\delta_{lj}$ can be obtained through the asymptotic expression (A.1) and the boundary condition, giving
$$k_{nlj} R - \frac{l\pi}{2} + \delta_{lj}(k_{nlj}) = n_{lj}\, \pi. \qquad (A.2)$$
If for fixed {l, j} one orders the states $\varepsilon_{nlj}$ according to the number of nodes of $u_{nlj}$, then $n_{lj}$ gives the number of levels (without counting the degeneracy) between the bottom of the single particle potential and the energy $\varepsilon_{nlj}$ [22]. In the limit of the box going to infinity the spectrum $\varepsilon_{nlj}$ becomes continuous, and a magnitude like $\sum_n f(k_n)$ changes to [45]
$$\lim_{R\to\infty} \sum_n f(k_{nlj}) = \int f(k)\, \frac{dn_{lj}}{dk}\, dk, \qquad (A.3)$$
with $\frac{dn_{lj}}{dk} = \lim_{\Delta k \to 0} \frac{\Delta n_{lj}}{\Delta k}$, where $\Delta n_{lj} = n_{lj}(k + \Delta k) - n_{lj}(k)$ gives the contribution of all states for which the wave number lies between k and k + Δk. Using the expression (A.2) we get
$$\frac{dn_{lj}}{dk} = \frac{1}{\pi}\left( R + \frac{d\delta_{lj}}{dk} \right). \qquad (A.4)$$
The summation in (A.3) includes negative-energy bound states and positive-energy discretized continuum states. Single particle energies in the core of Borromean systems are exclusively positive. Then, in the limit R → ∞ we would have
$$\lim_{R\to\infty} \sum_{n,\, \varepsilon_{nlj} > 0} f_{lj}(k_{nlj}) = \int_0^\infty f_{lj}(\varepsilon)\, g^{(total)}_{lj}(\varepsilon)\, d\varepsilon, \qquad (A.5)$$
where we introduce the total partial wave energy density
$$g^{(total)}_{lj}(\varepsilon) = \lim_{R\to\infty} \left[ \frac{R}{\pi\hbar} \sqrt{\frac{\mu}{2\varepsilon}} + \frac{1}{\pi} \frac{d\delta_{lj}}{d\varepsilon} \right]. \qquad (A.6)$$
The first term, which diverges with the size of the box, corresponds to the density of the free nucleon. This can be seen by carrying out an analogous analysis when the nuclear mean field is zero; in that case one obtains, in the same limit, only the first (divergent) term. Since the correlated and uncorrelated sums have the same divergence as a function of the box radius, we use the following recipe in the limit of an infinite box (transition to the continuum) [22] for a fixed partial wave (lj),
$$\lim_{R\to\infty} \left[ \sum_n f(\varepsilon_{nlj}) - \sum_n f(\varepsilon^{(0)}_{nlj}) \right] = \int_0^\infty f(\varepsilon)\, g_{lj}(\varepsilon)\, d\varepsilon, \qquad (A.9)$$
where $g_{lj}$ is the partial wave single particle level density with the free nucleon density subtracted,
$$g_{lj}(\varepsilon) = \frac{1}{\pi} \frac{d\delta_{lj}}{d\varepsilon}, \qquad (A.10)$$
i.e., the density so defined is the change in the density of single particle states at the energy ε due to the interaction [46]. With the usual convention $\lim_{\varepsilon\to\infty} \delta_{lj}(\varepsilon) = 0$, the phase shift at zero energy is determined by the Levinson theorem as $\delta_{lj}(0) = n_{lj}\, \pi$ [47]. The partial density $g_{lj}(\varepsilon)$ may be either positive or negative depending on the sign of the derivative of the phase shift. For example, if for a specific {lj} there are no resonances and $n_{lj} \neq 0$, the phase shift will decrease monotonically from $n_{lj}\pi$ to zero [47] and the partial CSPLD will be negative for all values of the energy up to infinity. The drawback of this "density" possibly being negative is compensated by the fact that it gives the structure of the continuum, i.e. for a resonant partial wave $g_{lj}(\varepsilon)$ is positive around the resonant energy and its amplitude is much bigger than for non-resonant partial waves. | 2017-01-27T16:23:00.000Z | 2017-01-27T00:00:00.000 | {
"year": 2017,
"sha1": "eb067b77d55e398b479697159213b6b3de5f8a83",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.08099",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "eb067b77d55e398b479697159213b6b3de5f8a83",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
54701425 | pes2o/s2orc | v3-fos-license | Unexpected hypersurfaces and where to find them
In a recent paper by Cook, et al., which introduced the concept of unexpected plane curves, the focus was on understanding the geometry of the curves themselves. Here we expand the definition to hypersurfaces of any dimension and, using constructions which appeal to algebra, geometry, representation theory and computation, we obtain a coarse but complete classification of unexpected hypersurfaces. In particular, we determine each $(n,d,m)$ for which there is some finite set of points $Z\subset\mathbb P^n$ with an unexpected hypersurface of degree $d$ in $\mathbb P^n$ having a general point $P$ of multiplicity $m$. Our constructions also give new insight into the interesting question of where to look for such $Z$. Recent work of Di Marca, Malara and Oneto \cite{DMO} and of Bauer, Malara, Szemberg and Szpond \cite{BMSS} give new results and examples in $\mathbb P^2$ and $\mathbb P^3$. We obtain our main results using a new construction of unexpected hypersurfaces involving cones. This method applies in $\mathbb P^n$ for $n \geq 3$ and gives a broad range of examples, which we link to certain failures of the Weak Lefschetz Property. We also give constructions using root systems, both in $\mathbb P^2$ and $\mathbb P^n$ for $n \geq 3$. Finally, we explain an observation of \cite{BMSS}, showing that the unexpected curves of \cite{CHMN} are in some sense dual to their tangent cones at their singular point.
Introduction
Let K be an algebraically closed field of characteristic 0. Let $R = K[\mathbb P^n] = K[x_0, \dots, x_n]$ be the homogeneous coordinate ring of n-dimensional projective space. Consider distinct general points $P_1, \dots, P_r \in \mathbb P^n$ and positive integer multiplicities $m_1, \dots, m_r$. The fat point scheme $X = m_1 P_1 + \cdots + m_r P_r$ is the scheme defined by the homogeneous ideal $I_X = \bigcap_i I_{P_i}^{m_i} \subset R$, where $I_{P_i}$ is the ideal generated by all forms that vanish at $P_i$. Given a homogeneous ideal $I \subseteq R$, we denote by $[I]_d$ the K-vector space spanned by homogeneous forms in I of degree d.
In [CHMN, Problem 1.4] the following problem was posed:

Problem 1.1. Characterize and then classify all quadruples (n, Z, m, j) where $Z = c_1 Q_1 + \cdots + c_s Q_s$ for distinct points $Q_i \in \mathbb P^n$, $m = (m_1, \dots, m_r)$ and $X = m_1 P_1 + \cdots + m_r P_r$ for general points $P_i \in \mathbb P^n$, such that X fails to impose the expected number of conditions on $[I_Z]_j$.

In fact, it makes sense to pose the same problem for any subscheme Z of P n, but at this early stage of study, the focus of most research up to now (as was the case in [CHMN]) has been on the case that $(n, Z, m, j) = (2, Z, (m_1), m_1 + 1)$, where Z consists of a finite set of reduced points and $X = m_1 P$. Indeed, the case of greatest interest for us is still when Z is a finite set of reduced points, but now in P n more generally, and we obtain some of our results when Z is a finite set of points by starting with a reduced variety V of higher dimension and picking suitable points Z on V imposing independent conditions on hypersurfaces of degree d, for which the same unexpected hypersurface arises. Formalizing this idea, we say that a subvariety $Z \subset \mathbb P^n$ admits an unexpected hypersurface with respect to X of degree d if
$$\dim [I_{Z+X}]_d > \max\left(0,\ \dim [I_Z]_d - \sum_{i=1}^r \binom{m_i - 1 + n}{n}\right).$$
That is, Z admits an unexpected hypersurface with respect to X of degree d if the conditions imposed by X on forms vanishing on Z of degree d are not independent. Our main focus will be when X = mP is a single fat point with P general, in which case we will also sometimes say that Z admits an unexpected hypersurface with a general point P of multiplicity m. When Z = ∅, it is a long-standing open problem to characterize for which multiplicity vectors $m = (m_1, \dots, m_r)$ and degrees d there occur unexpected hypersurfaces. A conjectural characterization in the case of n = 2 is the content of the SHGH Conjecture [Se, Ha, G, Hi], for example. See also [LU] for a conjecture for $\mathbb P^3$.
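In concrete terms, checking unexpectedness amounts to comparing an actual dimension with the count in the displayed inequality. The small helper below is a sketch of that bookkeeping (the function name and interface are ours, not from [CHMN]); it uses the fact that a point of multiplicity m in P^n imposes at most C(m-1+n, n) conditions.

```python
# Expected dimension of [I_{Z+X}]_d for X = m_1 P_1 + ... + m_r P_r: a point
# of multiplicity m in P^n imposes C(m-1+n, n) conditions, so unexpectedness
# means the actual dimension strictly exceeds this value.
from math import comb

def expected_dim(dim_IZ_d: int, mults: list[int], n: int) -> int:
    conditions = sum(comb(m - 1 + n, n) for m in mults)
    return max(0, dim_IZ_d - conditions)

# Example: a single general point of multiplicity 3 in P^3 against a
# 10-dimensional space of forms.
print(expected_dim(10, [3], 3))  # 10 - C(5,3) = 0
```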
As a means for approaching this problem, and motivated by an example in [DIV], the recent paper [CHMN] gave a careful analysis for the case where Z is a reduced set of points in P 2 , X is supported on a single point P , and the multiplicity of the unexpected curve at P is one less than the degree of the unexpected curve. Extending results in [DIV] and [FV], [CHMN] studied unexpected curves in this context in P 2 in connection to line arrangements and Lefschetz problems. Given an arrangement of lines in P 2 , the results of [CHMN] provide a means for determining whether the reduced scheme Z of points dual to the lines admits an unexpected curve, but it is still very unclear which line arrangements to look at.
One of the best results in this regard is that of [DMO], completely characterizing the supersolvable line arrangements A such that the points Z A dual to A admit an unexpected curve. Both supersolvable and non-supersolvable line arrangements were studied in [CHMN], and the latter can also give rise to unexpected curves, but it is not clear which ones do.
Moreover, apart from the examples of [BMSS], we are unaware of examples in P n for n > 2 in the literature. It is a natural and interesting next step to work to understand the range of examples of unexpected hypersurfaces that can occur with an imposed singularity at a single general point in P n , both in dimension 2 and in higher dimensions, and to find structural connections between the geometry of a reduced finite set of points Z in P n for n ≥ 2 and the existence of such an unexpected hypersurface. In §2 we show that a large class of examples is related to Z lying on projective cones over codimension 2 subvarieties. It ends with an application to the question of when ideals generated by powers of linear forms fail the Weak Lefschetz Property. In §3 we show that root systems can give rise to real point sets Z admitting unexpected hypersurfaces. There were already indications in [CHMN] (see also [I]) that hyperplane arrangements related to reflection groups sometimes give rise to unexpected hypersurfaces (see what was called the Fermat, Klein and Wiman arrangements in [CHMN]; these all come from complex reflection groups). We reinforce these indications here by finding additional examples coming from root systems of real reflection groups. These have the advantage of providing obvious candidates in higher dimension too. However, not all root systems seem to give rise to unexpected hypersurfaces; it would be an interesting project to understand what is special about those that do. In §4 we present initial results regarding a still mysterious duality between unexpected hypersurfaces having a single imposed general singular point P , and their tangent cone at P , first observed in [BMSS]. Finally, in §5 we present some open questions arising from our work.
Whereas [CHMN] studied unexpected plane curves of degree m+1 having a single imposed singular point of multiplicity m, an especially interesting aspect of the current paper is to relax the restriction that the multiplicity be one less than the degree. This greatly expands the universe of examples, as the following theorem shows; this is one of the main consequences of our work in this paper. (It is one of several interesting consequences of the careful analysis of cones over subvarieties of P n of codimension two that we give in §2.) Theorem 1.2. Denote by d the degree of an unexpected hypersurface of some finite set of points Z ⊂ P n and by m its multiplicity at a general point P in P n . It is still very unclear what kinds of unexpected hypersurfaces can occur for each d and m. One goal of this paper was to suggest new venues for where to find them. The title of this paper should not, however, be taken to mean that we have found all unexpected hypersurfaces (a title for a possible future paper could be "Unexpected hypersurfaces and where else to find them"). In addition, in contrast to what [CHMN] was able to do in P 2 , there are not yet good tools in higher dimension for rigorously verifying unexpectedness. In particular, we are able to give rigorous verifications of unexpectedness for the new examples coming from root systems only in some of the cases where we suspect that they occur.
Notation. For any subvariety (or subscheme) $V \subseteq \mathbb P^n$ we write $I_V \subseteq R$ for the saturated ideal of V and $\mathcal I_V$ for the sheaf on $\mathbb P^n$ corresponding to $I_V$. For any integer function $h : \mathbb Z_{\ge 0} \to \mathbb Z$ the first difference $\Delta h$ is the backward difference $\Delta h(t) = h(t) - h(t-1)$, where we make the convention $h(-1) = 0$ (so $\Delta h(0) = h(0)$).
Cones
In this section we give a method for constructing examples of varieties Z (not necessarily points) with unexpected hypersurfaces. Although by far the more interesting question is the problem of understanding the unexpected hypersurfaces arising from a finite set of points, one can also begin by asking whether a reduced, non-degenerate curve in P 3 admits unexpected surfaces. We obtain the somewhat surprising fact that they always do! Using Bézout's theorem we then translate this back to finite sets of points. We also extend this idea to P n . Finally, we find a connection to the well-studied question of when an ideal generated by powers of linear forms has the Weak Lefschetz Property, extending results of [DIV], where a connection between cones and the WLP was first noticed.
Our method involves cones. By a cone with vertex P we mean a scheme X such that for every point Q in X the line joining P and Q is in X. In particular, by Bézout, every hypersurface of degree d with a point of multiplicity d at a point P is a cone with vertex P .
It is not hard to show that a plane curve of degree d in P 3 does not admit an unexpected hypersurface with a point of multiplicity d -instead, a point of multiplicity d imposes the expected number of conditions on hypersurfaces of degree d containing the plane curve (use the fact that a plane curve in P 3 is a complete intersection, and the known Hilbert function for complete intersections). For non-degenerate curves the situation is very different, as we now show.
Proposition 2.1. Let C be a reduced, equidimensional, non-degenerate curve of degree d in P 3 (C may be reducible, disconnected, and/or singular but note that d ≥ 2 since C is non-degenerate, with C being two skew lines if d = 2). Let P ∈ P 3 be a general point. Then the cone S P = S P (C) over C with vertex P is an unexpected surface of degree d for C with multiplicity d at P . It is the unique unexpected surface of this degree and multiplicity.
Proof. We first check uniqueness. Let F be a form of degree d defining a surface containing C, with multiplicity d at P. Let λ be the line through P and any point Q of C. Then by Bézout, F must vanish on all of λ. Thus the surface defined by F is precisely $S_P$.
We now check unexpectedness. Let D be a smooth plane curve of degree d.
Claim 1: The arithmetic genus $g_C$ of C is strictly less than that of D, which is $g_D = \binom{d-1}{2}$. This argument is classical. Much of it is given in [Har] (when C is irreducible) and in [Mi], Proposition 1.4.2, so no claim is made to originality; we include it here just for the reader's convenience. Let Γ be a general hyperplane section of C by a hyperplane H defined by a general linear form L. Let $I_{\Gamma|H}$ be the saturated ideal of Γ in H. Let $h_C(t) = \dim [R/I_C]_t$; this is the Hilbert function of C. On the other hand, for any integer t we have the exact sequence
$$0 \to [I_C]_{t-1} \xrightarrow{\ \times L\ } [I_C]_t \to [I_{\Gamma|H}]_t \to K_t \to 0$$
(where $K_t$ is just the cokernel). Then, after adding and subtracting some binomial coefficients and setting $\bar R = R/L$, we obtain
$$\Delta h_C(t) = h_\Gamma(t) + \dim K_t.$$
So, since $\dim K_t \ge 0$, we obtain $\Delta h_C(t) \ge h_\Gamma(t)$. Now replace C by D, and replace Γ by the hyperplane section of D, which is a set of d collinear points, say A. We have, similarly, $\Delta h_D(t) = h_A(t) = \min(t+1, d)$. It is clear that for any t ≥ 1 we have $h_\Gamma(t) \ge h_A(t)$, with strict inequality for t = 1 since Γ spans H, so from $g = \sum_{t \ge 1} (d - \Delta h(t))$ we obtain $g_C < g_D$. This completes the proof of Claim 1.

Now, by [GLP], Remark (1) (p. 497), we have $H^1(\mathcal I_C(d)) = H^2(\mathcal I_C(d)) = 0$, and we also have $H^1(\mathcal I_D(d)) = H^2(\mathcal I_D(d)) = 0$. Consider the exact sequence
$$0 \to H^0(\mathcal I_C(d)) \to H^0(\mathcal O_{\mathbb P^3}(d)) \to H^0(\mathcal O_C(d)) \to H^1(\mathcal I_C(d)) = 0,$$
and similarly for D. Since $h^0(\mathcal O_C(d)) = d^2 + 1 - g_C$, this gives
$$\dim [I_C]_d = \binom{d+3}{3} - (d^2 + 1) + g_C,$$
so thanks to Claim 1 we have $\dim [I_C]_d < \dim [I_D]_d$; call this Claim 2. Then, thanks to Claim 2, to check that $S_P$ is an unexpected surface it is enough to note that
$$\dim [I_C]_d - \binom{d+2}{3} = g_C - g_D + 1 \le 0,$$
using the value of $g_D$ mentioned above, while the cone $S_P$ shows that the actual dimension of the space of degree-d forms containing C with multiplicity d at P is at least 1.
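The bookkeeping in the last step can be checked numerically. The sketch below assumes the formula $\dim [I_C]_d = \binom{d+3}{3} - (d^2+1) + g_C$ reconstructed above and confirms that the count equals 1 exactly when $g_C = g_D$ and drops to 0 or below as soon as $g_C < g_D$.

```python
# Numerical check of the dimension count in the proof of Proposition 2.1,
# using dim [I_C]_d = C(d+3,3) - (d^2 + 1) + g_C.
from math import comb

def dim_IC_d(d: int, g: int) -> int:
    return comb(d + 3, 3) - (d * d + 1) + g

for d in range(2, 8):
    g_D = comb(d - 1, 2)    # genus of a smooth plane curve of degree d
    conds = comb(d + 2, 3)  # conditions from a point of multiplicity d in P^3
    # For g_C = g_D the expected count is exactly 1; for g_C < g_D it is <= 0,
    # while the cone S_P shows the actual dimension is at least 1.
    assert dim_IC_d(d, g_D) - conds == 1
    assert dim_IC_d(d, g_D - 1) - conds == 0
print("counts verified for d = 2..7")
```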
Remark 2.2. The same argument that was used to show uniqueness in the last result also shows that if C is a curve of degree d in P 3 then there does not exist a surface of degree e ≤ d − 1 containing C with a singularity of multiplicity e at a general point, since Bézout would force $S_P$ to be a component of such a surface. This means that $\dim [I_C]_e \le \binom{(e-1)+3}{3}$ for all 1 ≤ e ≤ d (where the statement for degree d is given in the proof). In fact, the same argument works for a subvariety V of P n of codimension two and degree d, to show that $\dim [I_V]_e \le \binom{(e-1)+n}{n}$ for 1 ≤ e ≤ d − 1. This statement also extends to the case e = d, and the argument is contained in the proof of Proposition 2.4.
We will give several corollaries to Proposition 2.1. The first is to extend it to subvarieties of codimension two in P n . Lemma 2.3. Let V be a reduced, equidimensional, non-degenerate subvariety of P n (n ≥ 4) of codimension 2 (not necessarily irreducible). Let H be a general hyperplane and let W = V ∩ H. Then W is non-degenerate in H = P n−1 .
Proof. We have the exact sequence
$$0 \to \mathcal I_V \to \mathcal I_V(1) \to \mathcal I_{W|H}(1) \to 0,$$
and hence, taking cohomology,
$$0 \to [I_V]_0 \to [I_V]_1 \to [I_{W|H}]_1 \to H^1(\mathcal I_V) \to \cdots.$$
We want to show that the third vector space in this exact sequence is zero. But $[I_V]_1 = 0$ since V is non-degenerate. On the other hand, we claim that $H^1(\mathcal I_V(0)) = 0$; this will complete the proof. But we also have the exact sequence
$$0 \to H^0(\mathcal I_V) \to H^0(\mathcal O_{\mathbb P^n}) \to H^0(\mathcal O_V) \to H^1(\mathcal I_V) \to H^1(\mathcal O_{\mathbb P^n}) = 0.$$
The first term is clearly zero. The second has dimension 1. The third has dimension 1, since V is connected (being of codimension 2 and equidimensional) and reduced (by hypothesis). Thus the claim follows.
Proposition 2.4. Let V be a reduced, equidimensional, non-degenerate subvariety of P n (n ≥ 3) of codimension 2 and degree d (V may be reducible and/or singular but note that d ≥ 2 since V is non-degenerate, with V being two codimension 2 linear spaces if d = 2). Let P ∈ P n be a general point. Then the cone S P over V with vertex P is an unexpected hypersurface for V of degree d and multiplicity d at P . It is the unique unexpected hypersurface of this degree and multiplicity.
Proof. The proof is by induction on n. The initial case is Proposition 2.1, so we can assume n ≥ 4. Let H be a general hyperplane through P and let W = V ∩ H. Since P is general, we can assume that H is general as well. By Lemma 2.3, W is non-degenerate in H = P n−1 , and it is also reduced and equidimensional. Let T P be the cone in H over W with vertex P . Thus by induction, T P is the unique hypersurface of degree d containing W with multiplicity d at P , and it is unexpected.
Consider the exact sequence obtained by restricting to H,
$$0 \to [I_V]_{d-1} \to [I_V]_d \to [I_{W|H}]_d.$$
Since $\dim [I_{W|H}]_d$ is given by the inductive hypothesis, and Remark 2.2 gives $\dim [I_V]_{d-1} \le \binom{(d-2)+n}{n}$, the claimed dimension count follows. Uniqueness follows in the same way as it did for Proposition 2.1.
The next few corollaries have analogs in higher projective space, but the statements are a bit cleaner for curves in P 3 .
Corollary 2.5. Let $C$ be a reduced, equidimensional, non-degenerate curve of degree $d$ in $\mathbb{P}^3$ ($C$ may be reducible and/or singular). Let $P \in \mathbb{P}^3$ be a general point. Let $Z \subset C$ be any set of points on $C$ such that $[I_C]_d = [I_Z]_d$. Then the cone $S_P$ over $C$ with vertex $P$ is an unexpected surface of degree $d$ for $Z$ with multiplicity $d$ at $P$. It is the unique unexpected surface of this degree and multiplicity. In particular, we may choose $Z$ to impose independent conditions on forms of degree $d$.
Proof. It is immediate from the hypothesis that $[I_C]_d = [I_Z]_d$, so the result follows from Proposition 2.1.

Corollary 2.6. Let $C$ be a smooth, irreducible, non-degenerate curve of degree $d \geq 3$ in $\mathbb{P}^3$. Let $P \in \mathbb{P}^3$ be a general point. Let $Z \subset C$ be any set of at least $d^2 + 1$ points on $C$ (general or not). Then the cone $S_P$ over $C$ with vertex $P$ is an unexpected surface of degree $d$ for $Z$ with multiplicity $d$ at $P$. It is the unique unexpected surface of this degree and multiplicity.
Example 2.7. Let $C$ be a twisted cubic curve in $\mathbb{P}^3$. Then $d = 3$ and $g_C = 0$. Let $Z$ be a set of 10 points on $C$, so $[I_C]_3 = [I_Z]_3$ has dimension 10. In this case $\dim [I_Z]_3 - \binom{2+3}{3} = 10 - 10 = 0$, so we do not expect a hypersurface of degree 3 with multiplicity 3 at a general point containing the 10 points of $Z$. But in fact there is such an unexpected hypersurface, given by the cone over $C$ with vertex at a general point.
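This example is small enough to check directly by the matrix-rank method described in §3. The following Macaulay2 sketch is our illustration, not part of the original text: the 10 parameter values and the rational point standing in for the general point $P$ are arbitrary choices. (A random rational point suffices here because the cone over $C$ exists for every vertex off the curve, so the actual dimension is at least 1 for any such $P$; uniqueness holds only for general $P$.)

R = QQ[x_0..x_3];
f = t -> {1, t, t^2, t^3};                 -- affine parameterization of the twisted cubic
Pts = for i from 1 to 10 list f(i/1);      -- 10 distinct points on C, coordinates in QQ
Md = flatten entries basis(3, R);          -- the 20 monomials of degree 3
Mm = flatten entries basis(2, R);          -- degree-2 monomials: conditions for a triple point
P = matrix{{3/1, 1, 4, 1}};                -- a random rational point standing in for a general P
Q1 = matrix for p in Pts list for mo in Md list sub(mo, matrix{p});
Q2 = matrix for mu in Mm list for mo in Md list sub(diff(mu, mo), P);
N = Q1 || Q2;
print(#Md - rank Q1 - rank Q2, #Md - rank N);  -- expected dimension 0, actual dimension 1

The printed pair should be (0, 1): expected dimension 0, actual dimension 1, spanned by the cone over $C$ with vertex $P$.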
Remark 2.8. Corollary 2.5, Corollary 2.6, Corollary 2.14 and Corollary 2.20 all deal with the situation that we begin with a set of points lying on a variety $C$ of codimension two in $\mathbb{P}^n$, and have enough points so that $[I_C]_d = [I_Z]_d$. In fact this assumption can be relaxed, although the statement becomes a little bit less transparent, so we retained this assumption. But notice that the fact that $C$ already admits an unexpected hypersurface of degree $d$ means that we only need a set of $\binom{d+n}{n} - \binom{d-1+n}{n}$ points on $C$ that impose independent conditions on forms of degree $d$, and this number can be much smaller than the number forced by the condition $[I_C]_d = [I_Z]_d$. For example, say $C$ is a general smooth rational curve in $\mathbb{P}^3$ of degree 6. The Hilbert function of $C$ is given by the sequence 1, 4, 10, 19, 25, 31, 37, ..., so the assumption that $[I_C]_6 = [I_Z]_6$ means we need $Z$ to have at least 37 points of $C$. Instead, suppose that $Z$ is a sufficiently general set of $\binom{6+3}{3} - \binom{5+3}{3} = 28$ points on $C$ (see the computation following Corollary 2.9). Then the Hilbert function of $Z$ is given by the sequence 1, 4, 10, 19, 25, 28, 28, ..., and we still do not expect a hypersurface of degree 6 with a point of multiplicity 6 to contain $Z$, but we know that the cone over $C$ is such a hypersurface. Notice that in this case we not only have $[I_C]_6 \neq [I_Z]_6$ but even $[I_C]_6 \subsetneq [I_Z]_6$.

Corollary 2.9. Let $C$ be a non-degenerate union of $d$ lines in $\mathbb{P}^3$. Let $P \in \mathbb{P}^3$ be a general point. Let $Z \subset C$ be a set of $d(d+1)$ points on $C$ chosen by taking $d+1$ general points on each line. Then the cone $S_P$ over $C$ with vertex $P$ is an unexpected surface of degree $d$ for $Z$ with multiplicity $d$ at $P$. It is the unique unexpected surface of this degree and multiplicity.
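To spell out the binomial count used in Remark 2.8 (our arithmetic, added for convenience):
$$\binom{6+3}{3} - \binom{5+3}{3} = \binom{9}{3} - \binom{8}{3} = 84 - 56 = 28,$$
whereas the condition $[I_C]_6 = [I_Z]_6$ would require $84 - \dim[I_C]_6 = 84 - 47 = 37$ points, matching the Hilbert function value $h_C(6) = 37$ quoted in the remark.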
Remark 2.10. On the other hand, it is not the case that all sets of points in $\mathbb{P}^3$ (or any other projective space) admit an unexpected surface (resp. hypersurface) of some sort. Indeed, suppose $Z$ is a general set of points in $\mathbb{P}^3$ and let us ask if there is any degree and multiplicity at a general point for which $Z$ admits an unexpected surface. By considering the conditions imposed first by the general multiple point and then by the general points of $Z$, we see that we must always get the expected number of conditions.
What is interesting is that in [CHMN] Corollary 6.8 it was shown that a set of points in linear general position in P 2 does not admit an unexpected curve of degree d and multiplicity d − 1 at a general point. Example 2.7 already shows that this does not extend to a set of points in linear general position in P 3 , if we weaken the condition on the multiplicity to allow multiplicity d. We do not know if the precise result from [CHMN] continues to hold in higher dimensional projective spaces.
Question 2.11. Let Z be a non-degenerate set of points in linear general position in P n , n ≥ 3. Is it true that there does not exist an unexpected hypersurface of any degree d and multiplicity d − 1 at a general point?
We next extend the cone construction in two different ways. First, we point out that Proposition 2.1 extends to surfaces in P 3 of higher degree and higher multiplicity. At the end of this section we will apply this result to show the failure of the Weak Lefschetz Property for certain ideals of powers of linear forms in four variables.
Corollary 2.12. Let C be a reduced, equidimensional, non-degenerate curve of degree d ≥ 2 in P 3 (C may be reducible and/or singular). Let P ∈ P 3 be a general point. Let k ≥ d be a positive integer. Then C admits an unexpected surface of degree k with multiplicity k at P .
Proof. Modifying the calculation above, we know the expected dimension for surfaces of degree $k$ containing $C$ with a point of multiplicity $k$ at $P$. On the other hand, we have a unique surface $S_P$ of degree $d$ with a singularity of multiplicity $d$ at the general point $P$, so by multiplying the form defining $S_P$ by an element of $[I_P^{k-d}]_{k-d}$ we always obtain a surface of degree $k$ with multiplicity $k$ at $P$. Thus the actual dimension is at least $\dim [I_P^{k-d}]_{k-d} > 0$. Combining, it is enough to show that this actual dimension exceeds the expected one; a calculation shows that this is equivalent to the inequality $g_C < \binom{d-1}{2}$, which we showed in Claim 1 of Proposition 2.1.
Remark 2.13. Although we do not state them explicitly, we get the analogous corollaries for "sufficiently many" points on C that we got for Proposition 2.1, but now in higher degree.
The key is to assume (directly or by a condition on the number of points) that $[I_C]_k = [I_Z]_k$. We now give a different extension of the cone construction, allowing us to find unexpected hypersurfaces where the multiplicity is strictly less than the degree. It falls short by 1 of answering Question 2.11.
Corollary 2.14. Let $V$ be a reduced, equidimensional, non-degenerate subvariety of codimension two and degree $d$ in $\mathbb{P}^n$, $n \geq 3$. Let $S$ be a hypersurface of degree $e \geq 1$ not containing any component of $V$, and let $Z$ be a set of points with $[I_Z]_{d+e} = [I_{V \cup S}]_{d+e}$. Let $P$ be a general point in $\mathbb{P}^n$. Then $Z$ admits a unique unexpected hypersurface of degree $d+e$ with multiplicity $d$ at $P$. In particular, if $V$ is irreducible and $e \geq 2$ then we can take $Z$ to be points in linear general position which impose independent conditions on forms of degree $d+e$.
Proof. Let $F$ be the form defining $S$. Then multiplication by $F$ carries $[I_{V+dP}]_d$ into $[I_{Z+dP}]_{d+e}$, and $\dim [I_{V+dP}]_d = 1$ exceeds the expected dimension, as we saw in the proof of Proposition 2.4. Thus $S_P \cup S$ is an unexpected hypersurface of degree $d+e$ with a singular point of multiplicity $d$ at $P$.
With the above results we can now give a complete answer to the following natural question. For which $d$ and $m$ does there exist a set of points $Z$ in $\mathbb{P}^2$ (resp. $\mathbb{P}^n$) such that $Z$ admits an unexpected curve (resp. hypersurface) of degree $d$ and multiplicity $m$ at a general point?

Proof. For (i), we first note that there is an unexpected curve of degree $m+1$ and multiplicity $m$ at a general point for each $m \geq 3$. Indeed,
• $m \leq 2$. It was shown by Akesseh [A] that this occurs only in characteristic 2. (See also [FGST] for a different proof that shows that it does not occur in characteristic zero.)
• $m = 3$. This comes from the dual of the $B_3$ arrangement [DIV].
• $m = 4$. Consider the supersolvable line arrangement of 12 lines given in [DMO]; Theorem 3.7 of [DMO] applies, and in particular the dual points give a reduced set of 12 points which admit an unexpected curve with $d = 5$ and $m = 4$.
• $m = 5$. This follows from [CHMN] Proposition 6.15, taking $k = 2$.
• $m \geq 6$. This comes from the points dual to the Fermat arrangement, by [CHMN] Proposition 6.15, taking $t \geq 5$.

It is clear that a set of points $Z$ in $\mathbb{P}^2$ cannot admit an unexpected curve whose degree and multiplicity at a general point $P$ are equal (unlike what we have seen in $\mathbb{P}^3$). Indeed, in this situation the line joining $P$ to any point of $Z$ must be a component of such an unexpected curve, so the unexpected curve is a cone with vertex $P$ over the points of $Z$. Let $d = \deg Z = |Z|$ be the degree of this curve. Then $Z$ imposes independent conditions on curves of degree $d$, so in fact the cone is not unexpected. Thus $m < d$.
With this preparation, we use the same argument as was used to prove Corollary 2.14. Let Z 0 be a set of points admitting an unexpected curve of degree m + 1 and multiplicity m at a general point P and let A be a plane curve of degree d − m − 1 not passing through any point of Z 0 . Since P is general, it does not lie on A. Then choosing sufficiently many points on A gives a unique unexpected curve of degree d and multiplicity m at P . Part (ii) follows from Proposition 2.4 and Corollary 2.14.
We end this section with an application. There is an interesting interpretation of these results in terms of the Strong and Weak Lefschetz Properties. We first recall the definitions. For these definitions we maintain the assumptions on the polynomial ring R, but in fact all we need is that K be an infinite field.
Definition 2.16. Let $R/I$ be an artinian $K$-algebra and let $L$ be a general linear form. Then we say that $R/I$ satisfies the Weak Lefschetz Property (WLP) in degree $i$ if $\times L : [R/I]_i \to [R/I]_{i+1}$ has maximal rank, and we say that $R/I$ satisfies the Strong Lefschetz Property (SLP) in degree $i$ with range $k$ if $\times L^k : [R/I]_i \to [R/I]_{i+k}$ has maximal rank. We say that $R/I$ satisfies WLP (resp. SLP) if it does so for all $i$ (resp. for all $i$ and $k$).
Thus SLP failing in degree i with range k means that ×L k : [R/I] i → [R/I] i+k does not have maximal rank, and WLP failing in degree i is the same as SLP failing in degree i with range 1. The result [DIV,Theorem 5.1] and the remarks that follow the theorem connect the failure of SLP in degree i with range k to the occurrence of a form on P n of degree d = i + k with a general point of multiplicity i+1. In the specific case of n = k = 2, [CHMN,Theorem 7.5] and [DI,Proposition 3.18] establish connections between the occurrence of failures of SLP and existence of unexpected curves (or, in the case of [DI], something equivalent to the existence of an unexpected curve). Reformulating these results in the case of unexpected hypersurfaces gives the following.
Proposition 2.17. Let $L_1, \ldots, L_r$ be distinct linear forms on $\mathbb{P}^n$, and let $Z$ be the set of points in $\mathbb{P}^n$ dual to the hyperplanes defined by the $L_i$. Fix integers $d \geq m > 1$. Then the following are equivalent:
(a) $Z$ has an unexpected hypersurface of degree $d$ with a general point $P$ of multiplicity $m$;
(b) $R/(L_1^d, \ldots, L_r^d)$ fails the SLP in degree $m-1$ with range $d-m+1$.

Proof. Let $L$ be a general linear form and let $P$ be the general point of $\mathbb{P}^n$ dual to $L$. By Macaulay duality [EI] and exactness,
$$\dim [I_{Z+mP}]_d = \dim \operatorname{coker}\big({\times}L^{d-m+1} : [R/(L_1^d, \ldots, L_r^d)]_{m-1} \to [R/(L_1^d, \ldots, L_r^d)]_d\big).$$
Since (a) holds if and only if $\dim [I_{Z+mP}]_d > \max\big(0, \dim [I_Z]_d - \binom{m-1+n}{n}\big)$, and since $\dim [R/(L_1^d, \ldots, L_r^d)]_d = \dim [I_Z]_d$ and $\dim [R/(L_1^d, \ldots, L_r^d)]_{m-1} = \binom{m-1+n}{n}$ (again by Macaulay duality, using $m-1 < d$), the right-hand side is exactly the expected dimension of the cokernel, and the equivalence of (a) and (b) follows.
Note that a set of points in $\mathbb{P}^n$ dual to a set of general linear forms is, in particular, in linear general position. For a set of points in $\mathbb{P}^2$ in linear general position (i.e., no three on a line), [CHMN] shows that there is no unexpected curve of any degree, hence the corresponding ideals of powers of linear forms do not fail SLP in range 2. This is in contrast to the case for $n > 2$. Indeed, we now give a result showing failure of WLP for arbitrarily many linear forms in four variables whose dual points are in linear general position (but not general), followed in Corollary 2.20 by a similar but weaker result for forms in any number of variables $\geq 4$. (For $\mathbb{P}^n$ with $n > 2$, a few papers have studied the question of WLP for ideals generated by powers of general linear forms, but most such results have focused on a small number of linear forms (e.g. [HSS], [MMN], [SS]).)

Corollary 2.18. Let $C$ be a reduced, irreducible, non-degenerate curve of degree $d \geq 3$ in $\mathbb{P}^3$. Let $k \geq d$ be a positive integer. Let $Z$ be any set of $m \geq dk + 1$ points of $C$. Let $L_1, \ldots, L_m$ be the linear forms dual to the points of $Z$. In particular, $L_1, \ldots, L_m$ can be chosen so that no four vanish on a point (i.e. the points of $Z$ are in linear general position). Then $R/(L_1^k, \ldots, L_m^k)$ fails the WLP in degree $k-1$.

Proof. The statement about linear general position is immediate since $C$ is reduced, irreducible and non-degenerate. From Corollary 2.12 we know that $C$ has an unexpected surface of degree $k$ with a general point $P$ of multiplicity $k$. Since $|Z| \geq dk + 1$, Bézout gives $[I_Z]_k = [I_C]_k$, hence $Z$ has an unexpected surface of degree $k$ with multiplicity $k$ at $P$, so $R/(L_1^k, \ldots, L_m^k)$ fails the WLP in degree $k-1$ by Proposition 2.17.
Example 2.19. Let Z consist of a set of 31 points on a twisted cubic C. Let L 1 , . . . , L 31 be the linear forms dual to these points. Then for each 3 ≤ k ≤ 10, the algebra R/(L k 1 , . . . , L k 31 ) fails the WLP from degree k − 1 to degree k.
We remark that this does not mean that these are the only powers that fail WLP or that the given degrees are the only places where it fails. Experimentally with CoCoA [CoCoA] we have considered the case of 31 points on the twisted cubic as above, but allowed different k. When k = 2 the algebra has WLP. When 3 ≤ k ≤ 10 it fails from degree k − 1 to degree k as claimed, but also in certain other degrees (depending on k). For k ≥ 11 it still fails in several degrees, but now it does not fail from degree k − 1 to k.
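For readers who want to reproduce such experiments, here is a minimal Macaulay2 sketch (ours, not from the original text) along the lines of the CoCoA computation, for $k = 3$. The choice of points $t = 1, \ldots, 31$ on the twisted cubic is an arbitrary assumption; any 31 distinct points work. The cokernel of $\times L : A_2 \to A_3$ is $(A/LA)_3$, so WLP fails in degree 2 exactly when the first printed number exceeds the second.

R = QQ[x_0..x_3];
pts = for t from 1 to 31 list {1, t/1, t^2/1, t^3/1};   -- 31 points on the twisted cubic
I = ideal for p in pts list (sum(4, j -> p#j * R_j))^3; -- cubes of the dual linear forms
A = R/I;
L = sub(random(1, R), A);                               -- a random (hence general) linear form
h2 = hilbertFunction(2, A); h3 = hilbertFunction(3, A);
c = hilbertFunction(3, A/ideal L);                      -- dim coker(xL : A_2 -> A_3)
print(c, max(0, h3 - h2));                              -- failure of WLP: c > max(0, h3 - h2)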
The following is a slightly weaker analog of Corollary 2.18 for P n .
Corollary 2.20. Let $n \geq 4$ and $k \geq 3$. Let $f(n,k)$ be the value in degree $k$ of the Hilbert function of the degree-$k$, codimension-two subvariety $V \subset \mathbb{P}^n$ constructed in the proof, and choose any integer $N \geq f(n, k)$. Then there exist linear forms $L_1, \ldots, L_N \in K[x_0, \ldots, x_n] = R$ satisfying
• no $n+1$ of the linear forms have a common zero, and
• $R/(L_1^k, \ldots, L_N^k)$ fails the WLP from degree $k-1$ to degree $k$.

Proof. We recall that we can always find an irreducible, non-degenerate subvariety $V$ of codimension 2 and degree $k$ in $\mathbb{P}^n$. In fact, if $k$ is even we take $V$ as an intersection of a general form of degree $k/2$ and a general quadric. If $k$ is odd we take $V$ as a general arithmetically Cohen-Macaulay subscheme with a minimal free resolution of the appropriate form (obtained, for example, by linking from a linear space of codimension 2). Using this resolution, or the Koszul resolution if $k$ is even, we obtain the Hilbert function of $V$, and in particular $f(n,k)$. Since $V$ is irreducible, it makes sense to speak of a general set of points on $V$. Let $Z$ be a general set of $N$ points on $V$. From the generality of $Z$ we have $[I_Z]_k = [I_V]_k$. By Proposition 2.4 we then have that the cone over $V$ with vertex at a general point $P$ is an unexpected hypersurface of degree $k$ and multiplicity $k$ at $P$. Then the argument in the proof of Corollary 2.18 can be extended to our situation to show that the multiplication from degree $k-1$ to degree $k$ by a general linear form has an unexpectedly large cokernel, i.e. maximal rank does not hold. Since $Z$ is general on $V$ and $V$ is irreducible and non-degenerate, $Z$ is a set of points in linear general position, and this implies the condition that no $n+1$ of the linear forms have a common zero.
The above results all focus on failure of the WLP. We can also give a result about failure of the SLP with ranges bigger than 1.
As a result of Corollary 2.21 (i) we also recover the fact that such an algebra R/I must have the WLP (this corresponds to the excluded case d = m). This is a special case of the main result of [SS].
3. Root system examples
The construction used in §2 to get an unexpected hypersurface of degree d with a general point of multiplicity m for a locus Z in P n for n > 2, is based on [I Z ] d having a positive dimensional base locus. There are, as we shall see, examples of finite point sets Z with an unexpected hypersurface with d = m and n > 2 such that the base locus of [I Z ] d is 0 dimensional. Thus our construction in §2 is not the end of the story, since unexpected hypersurfaces can arise in other ways. The question is where else can one look to find them?
In this section we find new habitats where unexpected hypersurfaces lurk, both for $n = 2$ and $n > 2$, at least some of which for $n > 2$ have the property that the base locus of $[I_Z]_d$ is 0-dimensional.

We first became aware of a set of points $Z$ admitting an unexpected curve from an example of [DIV]. The lines dual to $Z$ are shown in Figure 1. This example is interesting for a number of reasons. It is a simplicial real arrangement. (This means that the lines divide the real projective plane into triangles.) It is extremal. (If $t_k$ denotes the number of points where exactly $k$ lines meet, an inequality of Melchior [Me] for real arrangements of $d > 2$ lines with $t_d = 0$ states that $t_2 \geq 3 + \sum_{k>2}(k-3)t_k$. For $B_3$ we have $t_2 = 6$, $t_3 = 4$ and $t_4 = 3$, so equality holds.) It is free. (Meaning that if $F$ is the product of the linear forms defining the lines dual to the points of $Z$, then there are no second syzygies for the Jacobian ideal $J_F = (F_x, F_y, F_z)$; i.e., the syzygy bundle for $J_F$ is free.) Its unexpected curve has minimal degree in characteristic 0 (no unexpected curve in characteristic 0 has degree 3 or less [A, FGST]). It is a line arrangement coming from a root system. (The lines in $\mathbb{P}^2_{\mathbb{R}}$ correspond to the 2-dimensional vector subspaces in $\mathbb{R}^3$ orthogonal to the roots of the $B_3$ root system, under the bijective correspondence between lines in $\mathbb{P}^2_{\mathbb{R}}$ and planes through the origin in $\mathbb{R}^3$.) And it is a supersolvable arrangement.
A line arrangement in the projective plane is supersolvable if there is a so-called modular point (i.e., a point $P$ where two or more of the lines meet such that if $Q$ is any other point where two or more of the lines meet, then the line through $P$ and $Q$ is a line in the arrangement). Thus a supersolvable line arrangement includes the cone over its crossing points with vertex at any modular point. The multiplicity of a point with respect to a line arrangement is just the number of lines in the arrangement containing the point. When a line arrangement is supersolvable, every point of maximum multiplicity is modular (but not every modular point need have maximum multiplicity) [AT] (only the arXiv version includes the proof). For the line arrangement $B_3$, shown in Figure 1, the center point is modular and indeed has maximum multiplicity (no other crossing point has multiplicity more than 4). The result of [DMO] says the point scheme $Z_{\mathcal{A}}$ dual to the lines of a supersolvable line arrangement $\mathcal{A}$ has an unexpected curve of degree $m_{\mathcal{A}}$ with respect to $X = (m_{\mathcal{A}} - 1)P$ for a general point $P$ if and only if $2m_{\mathcal{A}} < d_{\mathcal{A}}$, where $d_{\mathcal{A}}$ is the number of lines in the arrangement and $m_{\mathcal{A}}$ is the maximum multiplicity of a point for $\mathcal{A}$, and in this case the unexpected curve is unique. Since $m_{B_3} = 4$ and $d_{B_3} = 9$, it follows that $Z_{B_3}$ has a unique unexpected curve of degree 4.
The roots of $B_3$ can be defined as the integer vector solutions $(a, b, c) \in \mathbb{R}^3$ to $1 \leq a^2 + b^2 + c^2 \leq 2$. Geometrically, given a unit cube aligned with the coordinate axes of $\mathbb{R}^3$ and whose center is at the origin, these are the vectors pointing from the origin to the center of each face and to the midpoint of each edge. The roots in pairs correspond to points in the projectivization $\mathbb{P}^2_{\mathbb{R}}$ of $\mathbb{R}^3$. Thus the 18 roots give the 9 points of $Z_{B_3}$, and the lines of the line arrangement $B_3$ are just the projectivizations of the planes normal to the roots; these lines are the projective duals of the points of $Z_{B_3}$.
Given the interesting behavior of B 3 , it is natural to look at other arrangements with similar properties. As noted above, [DMO] has done this for the case of supersolvable arrangements. Here we check what happens for arrangements coming from other root systems A in R n , not only for n = 3 but also for n > 3. The set-up then is: a root system A gives a finite set of vectors of R n for some n. Each root gives a point in P n−1 and the set of these points for the given root system A gives the point set we denote by Z A . The codimension 1 linear subspaces normal to the roots define the hyperplanes of the arrangement corresponding to A which we also refer to by A.
In principle, given a finite set of points $Z \subset \mathbb{P}^n$ and a general point $P = [a_0 : \cdots : a_n]$, to find an unexpected hypersurface for $Z + mP$ computationally one takes $P$ to be the generic point $P = [1 : \frac{a_1}{a_0} : \cdots : \frac{a_n}{a_0}]$ and works as usual in the homogeneous coordinate ring $S = K(\frac{a_1}{a_0}, \ldots, \frac{a_n}{a_0})[x_0, \ldots, x_n]$. (We will however abuse notation and typically use $[a_0 : \cdots : a_n]$ to denote $P$ and refer to it as a general point.) However, it can be convenient and more efficient to work in the bi-graded ring $R = K[a_0, \ldots, a_n][x_0, \ldots, x_n]$. So we mention here a few clarifying but elementary remarks.
All of our rings of interest are contained in the field $K(a_0, \ldots, a_n, x_0, \ldots, x_n)$, which is the field of fractions of a UFD. Thus given a form $F \in S$ of degree $d$ which is not in $K$, there is (up to scalars in $K$) a unique factorization $F = BG/H$, where $B$ and $H$ are relatively prime forms in $K[a_0, \ldots, a_n]$ and $G \in R$ is bi-homogeneous with no factors of bi-degree $(a, 0)$ with $a > 0$; its bi-degree is $(t, d)$, where $t = \deg(H) - \deg(B)$. We denote $G$ by $F^*$. Given any bi-homogeneous element $G \in R$ of bi-degree $(t, d)$, we denote $G/a_0^t$ by $G^\bullet$. Note that $G^\bullet \in S$ is homogeneous of degree $d$. It need not be true that $(G^\bullet)^* = G$ (for example, $(a_1^\bullet)^* = (a_1/a_0)^* = 1$) nor that $(F^*)^\bullet = F$ (e.g., $((a_1x_1/a_0)^*)^\bullet = x_1^\bullet = x_1$), but we do have the following useful lemma.
Lemma 3.1. Let $S$ and $R$ be as above. Let $P$ be the point $[1 : \frac{a_1}{a_0} : \cdots : \frac{a_n}{a_0}] \in \mathbb{P}^n_{K(\frac{a_1}{a_0}, \ldots, \frac{a_n}{a_0})}$. For any $m \geq 1$, let $I_{mP}$ denote the ideal $(I_P)^m$ in $S$, and let $J_{mP}$ denote the ideal $(J_P)^m$ in $R$, where $J_P = (\{a_ix_j - a_jx_i : i \neq j\})$ (hence $(J_P)^m$ is generated by elements of bi-degree $(m, m)$). Now let $d > 0$ and $t \geq 0$, let $F \in S$ be a nonzero form of degree $d$, and let $G \in R$ be a nonzero, nonconstant bi-homogeneous form of bi-degree $(t, d)$.
(a) As ideals in
Proof. (c) If $G^\bullet$ is not irreducible, then $G^\bullet = AB$ where both $A$ and $B$ have positive degree $d_A$ and $d_B$, so we have $G = \alpha A^*\beta B^*$ where $A^*$ and $B^*$ have bi-degrees $(a, d_A)$ and $(b, d_B)$, and $\alpha$ and $\beta$ have bi-degrees $(t_\alpha, 0)$ and $(t_\beta, 0)$. Since $R$ is a UFD, the denominators in the right hand side of the expression $G = \alpha A^*\beta B^*$ must cancel with factors of the numerators, and this does not affect the values of $d_A$ or $d_B$, so we may assume that $\alpha$, $\beta$, $A^*$, $B^*$ all are in $R$ and that $A^*$ and $B^*$ have bi-degrees $(a', d_A)$ and $(b', d_B)$, where the simplification might have changed $a$ and $b$ but will have left $d_A$ and $d_B$ unchanged. Since $d_A$ and $d_B$ are both positive, $G$ has nonunit factors, so $G$ is not irreducible. Now note that irreducibility of $G^\bullet$ implies that of $(G^\bullet)^*$.

(d) Let $(s, d)$ be the bi-degree of $F^*$; since $d$ is the degree of $(F^*)^\bullet$ and since $(F^*)^\bullet \in I_{mP}$, we have $d \geq m$. Now let
$$D\Big(\tfrac{a_1}{a_0}, \ldots, \tfrac{a_n}{a_0}, \tfrac{x_1}{x_0}, \ldots, \tfrac{x_n}{x_0}\Big) = F^*\Big(\tfrac{a_0}{a_0}, \ldots, \tfrac{a_n}{a_0}, \tfrac{x_0}{x_0}, \tfrac{x_1}{x_0}, \ldots, \tfrac{x_n}{x_0}\Big) = \frac{F^*(a_0, \ldots, a_n, x_0, \ldots, x_n)}{a_0^s x_0^d},$$
so as a rational function $D$ has bi-degree $(0, 0)$, but as a polynomial in $K(\frac{a_1}{a_0}, \ldots, \frac{a_n}{a_0})[\frac{x_1}{x_0}, \ldots, \frac{x_n}{x_0}]$, $D$ has degree $\delta \leq d$. Moreover, since $x_0 \notin I_P$, we have $D \in I_{mQ}$ where $Q$ is the point $Q = (\frac{a_1}{a_0}, \ldots, \frac{a_n}{a_0})$ in the affine open subset $K(\frac{a_1}{a_0}, \ldots, \frac{a_n}{a_0})^n$ away from $x_0 = 0$. Now translate $Q$ to the origin; i.e., consider
$$H\Big(\tfrac{a_1}{a_0}, \ldots, \tfrac{a_n}{a_0}, \tfrac{x_1}{x_0}, \ldots, \tfrac{x_n}{x_0}\Big) = D\Big(\tfrac{a_1}{a_0}, \ldots, \tfrac{a_n}{a_0}, \tfrac{x_1}{x_0} + \tfrac{a_1}{a_0}, \ldots, \tfrac{x_n}{x_0} + \tfrac{a_n}{a_0}\Big);$$
clearing denominators, $F^*(a_0, \ldots, a_n, a_0x_0, a_0x_1 + a_1x_0, \ldots, a_0x_n + a_nx_0) \in R$ is a bi-homogeneous polynomial of bi-degree $(s + d, d)$.

Since $H$ has multiplicity $m$ at the origin (with respect to the variables $\frac{x_1}{x_0}, \ldots, \frac{x_n}{x_0}$), $H$ is a sum of terms where each term consists of a monomial in $\frac{x_1}{x_0}, \ldots, \frac{x_n}{x_0}$ of degree at least $m$ and at most $d$, times a polynomial in the variables $\frac{a_1}{a_0}, \ldots, \frac{a_n}{a_0}$ of degree at most $s + d$. Thus each term is of the form
$$c\Big(\tfrac{a_1}{a_0}, \ldots, \tfrac{a_n}{a_0}\Big)\Big(\tfrac{x_1}{x_0}\Big)^{i_1} \cdots \Big(\tfrac{x_n}{x_0}\Big)^{i_n},$$
where $m \leq \sum_j i_j \leq d$ and $c$ has degree at most $s + d$; thus multiplying by $a_0^{s+d}x_0^d$ clears the denominators. Translating back we recover $D$, but each term becomes
$$c\Big(\tfrac{a_1}{a_0}, \ldots, \tfrac{a_n}{a_0}\Big)\Big(\tfrac{x_1}{x_0} - \tfrac{a_1}{a_0}\Big)^{i_1} \cdots \Big(\tfrac{x_n}{x_0} - \tfrac{a_n}{a_0}\Big)^{i_n} = c\Big(\tfrac{a_1}{a_0}, \ldots, \tfrac{a_n}{a_0}\Big)\Big(\tfrac{a_0x_1 - a_1x_0}{a_0x_0}\Big)^{i_1} \cdots \Big(\tfrac{a_0x_n - a_nx_0}{a_0x_0}\Big)^{i_n},$$
and multiplying by $a_0^{2d+s}x_0^d$ we obtain $C(a_0, \ldots, a_n)(a_0x_1 - a_1x_0)^{i_1} \cdots (a_0x_n - a_nx_0)^{i_n}$, where $C$ is homogeneous in the variables $a_i$ of degree $s + d$, and $(a_0x_1 - a_1x_0)^{i_1} \cdots (a_0x_n - a_nx_0)^{i_n} \in J_{mP}$.
Thus $a_0^{2d+s}x_0^d D = a_0^{2d}F^* \in J_{mP}$. By [BV, Corollary 7.10], since $J_P$ is the ideal generated by the maximal minors of a $2 \times (n+1)$ matrix of indeterminates, namely the two sets of variables, the powers $J_{mP}$ are primary for $J_P$. Since no power of $a_0$ is in $J_{mP}$, we see that $F^* \in J_{mP}$.
(e) If $G \in J_{mP}$, then $G = \sum_s H_sM_s$ where each $M_s$ is a product of $m$ forms of the shape $a_ix_j - a_jx_i$ and each $H_s$ is a bi-homogeneous form of bi-degree $(t - m, d - m)$. So $G^\bullet$ is obtained by dividing $G$ by $a_0^t$, hence we get $G^\bullet = \sum_s H_s^\bullet M_s^\bullet$, and each $M_s^\bullet$ is a product of $m$ forms of the shape $(a_ix_j - a_jx_i)/a_0$, each of which is in $I_P$. Thus $G^\bullet \in I_{mP}$.
We used Macaulay2 [M2] to find and verify the occurrence of unexpected hypersurfaces for various root systems. It seems likely that the root system B n+1 , which gives a point set Z B n+1 in P n , gives rise to an unexpected hypersurface for each n ≥ 2, but our verifications of unexpectedness are computational and so are limited to a few smaller values of n. We do not have a general proof.
Given a finite set of points Z ⊂ P n we use the following script to check whether Z has an unexpected hypersurface of degree d vanishing on mP for a general point P = [a 0 : · · · : a n ]. Assuming that the computation of rank given by Macaulay2 is reliable, running the script on an example gives a rigorous proof of whether the example is unexpected or not.
The idea of the script is to construct the matrix $N$ expressing the conditions imposed on all forms $F$ of degree $d$ by the points of $Z$, together with the conditions imposed for $F$ to vanish to order $m$ at a point $P = [a_0 : \cdots : a_n]$ with indeterminate coordinates $a_i$, represented in the scripts with new variables. Thus if we enumerate the monomials of degree $d$ in the $n+1$ variables $x_i$ as $b_1, \ldots, b_{\binom{n+d}{n}}$, then $N$ will be a matrix with $|Z| + \binom{n+m-1}{n}$ rows and $\binom{n+d}{n}$ columns. A form $F = \sum_i c_ib_i$ vanishes on $Z$ and on $mP$ if and only if $Nc = 0$, where $c$ is the coefficient vector $c = (c_1, \ldots, c_{\binom{n+d}{n}})^T$. We can regard $N$ as consisting of two matrices, $Q_1$ and $Q_2$, where $Q_1$ comprises the top $|Z|$ rows of $N$ and gives the conditions imposed by the points of $Z$, and $Q_2$ comprises the bottom $\binom{n+m-1}{n}$ rows of $N$, giving the conditions imposed by the fat point $mP$. Thus the entries of $Q_1$ are scalars in the ground field, but the entries of $Q_2$ are the order $m-1$ partials of the monomials $b_i$ evaluated at $P$. Then $Z$ has unexpected hypersurfaces of degree $d$ with a general point $P$ of multiplicity $m$ exactly when
$$\dim \ker N > \max\Big(0, \binom{n+d}{n} - \operatorname{rank}(Q_1) - \operatorname{rank}(Q_2)\Big),$$
and in this case the coefficient vectors $c$ of the unexpected hypersurfaces are precisely the nontrivial elements of $\ker N$. The script which does this is given below. The section marked CODE BLOCK is where one puts in the list of the points of $Z$ (or the code needed to generate the list; see the examples). For now we exhibit code which works when the points of $Z$ are defined over the rationals. Subtleties arise when coordinates in an extension field are needed; we discuss that later.
Apart from output indicating the current status of the computation, the script output indicates exactly when it finds an unexpected hypersurface. An output of the form (n, d, m, edim, adim) means that there is a hypersurface for $Z$ in $\mathbb{P}^n$ of degree $d$ with a generic point of multiplicity $m$ (that is, a point whose coordinates are variables); the vector space of unexpected forms has expected dimension edim and actual dimension adim (where edim can be negative if the number of conditions imposed by $mP$ is greater than the dimension of $[I_Z]_d$). If for a given $n$, $d$, $m$ there is no output, then $Z$ has no unexpected hypersurface in $\mathbb{P}^n$ of degree $d$ with a general point $P$ of multiplicity $m$. We will refer to the script below as the universal script.

-- A will contain the list of rows for transpose of matrix Q1
apply(Md,i->(N={};for s from 0 to #Pts-1 do N=N|{sub(i,matrix{Pts_s})};A=A|{N}));
D={}; -- D will contain the list of rows for transpose of matrix Q2
apply(Md,i->(N={};for s from 0 to #Mm-1 do N=N|{diff(Mm_s,i)};D=D|{N}));
Q1=transpose matrix A; -- Q1 is defined over R
Q2=transpose matrix D; -- Q2 is defined over R
M={}; for i from 0 to n do M=M|{a_i}; -- M is the coord vector for generic point
Q1S=sub(Q1,S); -- Q1 is now defined to be over S
Q2S=sub(Q2,matrix{M}); -- Swap x variables for a variables
N=Q1S||Q2S;
expdim=#Md -(rank Q1S) -(rank Q2S);
actdim=#Md-(rank N);
if actdim > expdim and actdim > 0 then print {"n=",n,"d=",d,"m=",m,"edim=",expdim,"adim=",actdim}}}}

One can recover the actual unexpected forms by putting in a line to print out the kernel of $N$. When the actual unexpected forms themselves are not needed, the script can be made more efficient. Note that the line

Q2S=sub(Q2,matrix{M});

merely substitutes the variables $a_i$ in for the variables $x_i$ in the order $m-1$ partials. This doesn't affect the rank. Also, the rank of the matrix over $S$ after this substitution is the same as the rank of the matrix over $R$ before the substitution. Thus if existence of and numerical data for unexpectedness is all that is needed, then the line

S=frac(QQ[a_0..a_n]);

can be deleted and the lines

M={}; for i from 0 to n do M=M|{a_i}; -- M is the coord vector for generic point
Q1S=sub(Q1,S); -- Q1 is now defined to be over S
Q2S=sub(Q2,matrix{M}); -- Swap x variables for a variables
N=Q1S||Q2S;
expdim=#Md -(rank Q1S) -(rank Q2S);
actdim=#Md-(rank N);
if actdim > expdim and actdim > 0 then print {"n=",n,"d=",d,"m=",m,"edim=",expdim,"adim=",actdim}}}}

can be changed to

N=Q1||Q2;
expdim=#Md -(rank Q1) -(rank Q2);
actdim=#Md-(rank N);
if actdim > expdim and actdim > 0 then print {"n=",n,"d=",d,"m=",m,"edim=",expdim,"adim=",actdim}}}}

It's possible the script would run faster by evaluating the matrix $Q_2$ of partials at a random point (with coordinates in the rationals or even in the integers) rather than at a generic point. To do so, replace the line

apply(Md,i->(N={};for s from 0 to #Mm-1 do N=N|{diff(Mm_s,i)};D=D|{N}));

with
apply(Md,i->(N={};for s from 0 to #Mm-1 do N=N|{sub(diff(Mm_s,i),matrix G)};D=D|{N}));

By semicontinuity, if the script indicates that there is no unexpected hypersurface for a given $Z$, $n$, $d$ and $m$ (by not outputting anything), then there is indeed no such unexpected hypersurface. But output claiming an unexpected hypersurface can't be relied on, since the random point might have been unlucky.

Table 1. Unexpected hypersurfaces arising from the root system $B_{n+1}$.
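To make the CODE BLOCK concrete, here is one possible block (our illustration, not from the original text) entering the 9 points of $Z_{B_3}$ from Figure 1; the variable name Pts matches the universal script above.

Pts = {{1,0,0},{0,1,0},{0,0,1},{1,1,0},{1,-1,0},{1,0,1},{1,0,-1},{0,1,1},{0,1,-1}};
-- one representative for each of the 9 pairs of B3 roots: 3 face centers, 6 edge midpoints

Running the universal script on this block with $n = 2$ should reproduce the output (2, 4, 3, 0, 1) discussed below.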
We now check the usual root systems for occurrence of unexpected hypersurfaces.
3.1. The root system $A_{n+1}$. The roots for $A_{n+1}$ are the $(n+1)(n+2)$ integer vectors in $\mathbb{R}^{n+2}$ having one entry of 1, one of $-1$ and the rest 0. We project these into $\mathbb{R}^{n+1}$ by dropping the last coordinate. Projectivizing then gives a set $Z \subset \mathbb{P}^n$ of $\binom{n+2}{2}$ points. No unexpected hypersurfaces turned up for $2 \leq n \leq 6$, $2 \leq d \leq 6$, $2 \leq m \leq d$.
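A CODE BLOCK generating these points might look as follows (our sketch; it keeps one representative per pair of opposite roots by taking $i < j$, and the value n = 3 is just an example):

n = 3; -- ambient P^n
Pts = flatten for i from 0 to n+1 list for j from i+1 to n+1 list
    drop(for k from 0 to n+1 list (if k == i then 1 else if k == j then -1 else 0), -1);
-- #Pts should equal binomial(n+2, 2), here 10 points in P^3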
3.2. The root system $B_{n+1}$. The case (2, 4, 3, 0, 1) comes from the arrangement $B_3$ shown in Figure 1. Its unique unexpected curve was shown to be unexpected by other methods in [CHMN].
In the case of $B_4$ we get a previously unknown unique unexpected hypersurface. It has degree 4 with a general point $[a_0 : a_1 : a_2 : a_3]$ of multiplicity 4. Thus it is a cone at the point $[a_0 : a_1 : a_2 : a_3]$. In this case $Z_{B_4}$ is the set of 16 points coming from the roots, but the base locus of $[I_{Z_{B_4}}]_4$ is 0-dimensional. (To find this base locus, expand the unexpected surface $F(a, x)$ in $a = [a_0 : a_1 : a_2 : a_3]$; i.e., write $F(a, x)$ as a polynomial in the $a_i$ with coefficients which are polynomials in the $x_j$. Take the ideal generated by these coefficient polynomials in the $x_j$. They define the locus of points at which all of the unexpected surfaces vanish as the point $a$ moves around.)

3.3. The root system $C_{n+1}$. Since $Z_{C_{n+1}} = Z_{B_{n+1}}$, this case is covered by $B_{n+1}$.
The root system E 7 is the set of 126 elements of E 8 such that the first two coordinates are equal. Thus the CODE BLOCK for E 7 is obtained from that for E 8 by filtering out the cases where the first two coordinates are equal and then dropping the first coordinate to get a 7 element vector. We checked 2 ≤ n ≤ 6, 2 ≤ d ≤ 6 and 2 ≤ m ≤ d. The only case (n, d, m, edim, adim) that arose was (6, 4, 4, 63, 64).
The root system E 6 is the set of 72 elements of E 8 such that the first three coordinates are equal. The CODE BLOCK in this case is obtained from that for E 7 by filtering out the cases where the first two coordinates are equal and then dropping the first coordinate to get a 6 element vector. We checked 2 ≤ n ≤ 6, 2 ≤ d ≤ 6 and 2 ≤ m ≤ d but did not find any unexpected hypersurfaces.
4. BMSS Duality
A very interesting observation was made in [BMSS]. In the case of the unexpected quartic coming from the B 3 line arrangement, [BMSS] observed that F * (a, x) has bi-degree (3, 4) and for each x it defines three lines in the a variables, and moreover that these three lines meet at the point x. In fact, given a form defining an unexpected variety of degree d with a general point P of multiplicity m, one has by Lemma 3.1(d) that the bi-homogeneous form F * (a, x) ∈ R = K[a 0 , . . . , a n ][x 0 , . . . , x n ] has bi-degree (t, d) for some t ≥ m. Thus one can regard F * (a, x) as defining a family of hypersurfaces in the x variables, parameterized by a (these are the unexpected hypersurfaces), but one can also regard F * (a, x) as defining a more mysterious family of hypersurfaces in the a variables, parameterized by x.
In the case of the B 3 unexpected quartic F * (a, x) with a general triple point, it follows from Lemma 3.1(d) that F * has multiplicity at least 3 in the a variables at the point a = x and hence has bi-degree (s, 4) with s ≥ 3. We see below why in fact s = 3; thus, given the point of multiplicity at least 3 at a = x, it must have multiplicity exactly 3, and for a general choice of x, F * (a, x) splits as a product of three forms linear in the a variables, meeting at a = x.
In this section we will show that this phenomenon occurs in a range of cases, and that for these cases F * (x, a) defines the lines tangent to the branches of the curve F * (a, x) (see Figure 3). We will also study a similar duality for the hypersurfaces defined by our cone construction from §2.
Let $W \subset \mathbb{A}^n$ be a hypersurface in affine space defined by a reduced polynomial $F$. We first recall what the tangent cone is for $W$ at a point $P \in W$. Write $F$ as a sum $F = F_0 + F_1 + \cdots$ of polynomials $F_i \in I(P)^i$, where each $F_i$ is homogeneous of degree $i$ in coordinates centered at the point $P$. Let $j$ be the least index such that $F_j \neq 0$. Then $F_j = 0$ defines the tangent cone to $W$ at $P$. In characteristic 0, $F = F_0 + F_1 + \cdots$ is just a Taylor expansion of $F$ at $P$; the tangent cone is the term $F_j$ obtained by differentiating to order $j$, where $j$ is the multiplicity of $W$ at $P$.
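As a small illustration (ours, not part of the original text), take the nodal cubic in $\mathbb{A}^2$ at $P$ the origin:
$$F = y^2 - x^2 - x^3 = \underbrace{(y^2 - x^2)}_{F_2} + \underbrace{(-x^3)}_{F_3}.$$
Here $F_0 = F_1 = 0$, so $j = 2$: the curve has multiplicity 2 at the origin, and its tangent cone is $y^2 - x^2 = (y - x)(y + x) = 0$, the pair of lines tangent to the two branches. This branch-tangent behavior is exactly the kind exhibited by $F^*(x, a)$ in Figure 3.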
One can also work projectively. Let $W \subset \mathbb{P}^n$ now be a hypersurface in projective space defined by a reduced form $H$ of degree $d$. For simplicity, assume the characteristic of $K$ is 0. Given two polynomials $F$ and $G$, let $F \cdot G$ denote the action of $F$ on $G$ by differentiation. To compute the tangent cone of $W$ at a point $P$ of multiplicity $m$, let $\mu_j$ be an enumeration of the monomials of degree $m$. Let $c_j = \mu_j \cdot \mu_j$ be the factorial expression obtained by differentiating $\mu_j$ against itself (this is needed for the Taylor expansion). Then the tangent cone of $W$ at $P$ is defined by the degree $m$ form
$$(4.1)\qquad H_P = \sum_j \frac{(\mu_j \cdot H)(P)}{c_j}\,\mu_j.$$

Example 4.1. As an example we consider the tangent cones for the cone construction of §2. Assume $Z$ is a finite set of points in $\mathbb{P}^n_K$ and that the space $V \subset [S]_d$ of forms vanishing on the points has the property that there is (up to multiplication by a scalar) a unique form $F \in V$ with a point of multiplicity $d$ at a general point $P = [a_0 : \cdots : a_n] \in \mathbb{P}^n$. Applying an idea similar to that of Remark 2.8, remove points if necessary so that we are left with a subset $\{P_1, \ldots, P_r\}$ of $Z$ of $r = \binom{d+n}{n} - \binom{d-1+n}{n} - 1$ points. (Now $F$ is not unexpected.) Then $F^* \in R = K[a_0, \ldots, a_n][x_0, \ldots, x_n]$ and by Lemma 3.1(d) $F^*$ is bi-homogeneous of bi-degree $(\delta, d)$ with $\delta \geq d$.
Let $M_j$ be an enumeration of the monomials in $T = K[x_0, \ldots, x_n]$ of degree $d$. Let $m_i$ be an enumeration of the monomials in $T$ of degree $d-1$. Let $t = \binom{n+d}{n}$ and $s = \binom{n+d-1}{n}$. Let $\Gamma$ be the matrix whose top $r$ rows are the values $M_j(P_i)$ of the monomials $M_j$ at the points $P_i$, and whose next (and bottom) $s$ rows are the values $(m_i \cdot M_j)(P)$, where as above the dot indicates the action of $T$ on $T$ by partial differentiation. Note that the entries of $\Gamma$ are all in $K[a_0, \ldots, a_n]$. Elements in the kernel of $\Gamma$ are coefficient vectors for forms vanishing at the points $P_i$ and having a point of multiplicity $d$ at $P = a$. The assumption that there are $r = t - s - 1$ points $P_i$ and that there is (up to multiplication by scalars) a unique form vanishing on the points with a point of multiplicity $d$ at $P$ means that $\Gamma$ is a $(t-1) \times t$ matrix whose rank at a general point $P$ is $t-1$. Since the entries of $\Gamma$ are of degree 0 or 1 in the $a_i$, we can divide the $s$ rows having degree 1 entries by $a_0$ and obtain a row equivalent matrix $\Gamma'$ with entries in the field $\mathbb{F} = K(\frac{a_1}{a_0}, \ldots, \frac{a_n}{a_0})$. The kernel of $\Gamma'$ has dimension 1, and for any nonzero vector $v = (v_1, \ldots, v_t)$ in the kernel, we can take $F$ to be $F = \sum_i v_iM_i$. Note that the coefficient vectors of $(F^*)^\bullet$ (and hence of $F^*$) also are in the kernel, and so can be used in place of $F$; thus all have the same tangent cone for a general point $P = a$. Each entry of $v$ is in $\mathbb{F}$, so computing the tangent cone via (4.1) returns the same form up to scalar; i.e., $F$ is its own tangent cone, which for a general point $P = a$ is thus defined by $F(a, x) = 0$ (or equivalently $F^*(a, x) = 0$). We can also work over $R$. Let $\Gamma^*$ be the matrix obtained by appending $(M_1, \ldots, M_t)$ as a row at the bottom of $\Gamma$, and let $\Gamma'^*$ be the matrix obtained by appending $(M_1, \ldots, M_t)$ as a row at the bottom of $\Gamma'$. Then $G' = \det \Gamma'^*$ is a multiple of $F$ by a scalar in $\mathbb{F}$, and we have $a_0^sG' = G = \det \Gamma^* \in R$. Since $G' = G/a_0^s$ is a scalar multiple of $F$ by a scalar in $\mathbb{F}$, it follows that $G^* = F^*$. Moreover, we have $G = C(a)G^*$ where $C(a)$ is a polynomial describing for which points $a$ the matrix $\Gamma$ has less than full rank. (Suppose that we choose a point $a = Q_1$ such that there is a point $x = Q_2$ for which $G^*(Q_1, Q_2) \neq 0$. Then $C(Q_1) = 0$ if and only if $\det \Gamma^* = G(Q_1, Q_2) = 0$, which occurs if and only if the maximal minors of $\Gamma$ all vanish; i.e., $\Gamma$ has rank less than $t-1$.) As before, $G$ is its own tangent cone, but it is still not clear what $F^*$ is or how $F^*(a, x)$ is related to $F^*(x, a)$. Now assume that there is an irreducible variety $W$ of degree $d$ and codimension 2 such that for a general point $P = a$ the locus $F(a, x) = 0$ is precisely the union of all lines through $P$ and a point of $W$, and that $F(a, x)$ is irreducible (and hence so is $F^*(a, x)$, by Lemma 3.1). Then we have $F^*(a, x) = \pm F^*(x, a)$. Here's why. For a general point $P' = [a_0' : \cdots : a_n']$ of $F(a, x) = 0$ (and hence of $F^*(a, x) = 0$), $P'$ is on the cone through $W$ having vertex $P$, so $P'$ is on the line through $P$ and a point $w \in W$. But then $P$ is on the line through $P'$ and $w$, so $P$ is on the cone $F(a', x) = 0$ with vertex $P'$ (hence on $F^*(a', x) = 0$), so $F^*(a', a) = 0$. I.e., $F^*(a', a) = 0$ if and only if $F^*(a, a') = 0$. Thus the loci $F^*(a, x) = 0$ and $F^*(x, a) = 0$ intersect in a nonempty open subset of $F(a, x) = 0$. Since $F^*(a, x)$ and $F^*(x, a)$ are irreducible, we have $F^*(a, x) = cF^*(x, a)$ for some scalar $c$. But swapping variables again gives $F^*(x, a) = cF^*(a, x)$, hence $F^*(a, x) = c^2F^*(a, x)$, so $c = \pm 1$.
Thus in this case, without resorting to Lemma 3.1(d), we see F * (a, x) has bi-degree (d, d) and that the tangent cone to F * (a, x) at a general point P = a is defined by F * (x, a) (since here the tangent cone is defined by F * (a, x) but F * (a, x) and F * (x, a) define the same locus).
Question 4.2. Is it always true for an unexpected hypersurface F (a, x) of degree d with a general point of multiplicity d that F * (a, x) = ±F * (x, a)?
We now consider how F * (a, x) and F * (x, a) are related in the case of unexpected curves in the plane having degree m + 1 and a general point P of multiplicity m.
Newton's identities (also known as the Newton-Girard formulas) relate the power sums $p_k = x_1^k + \cdots + x_n^k$ to the elementary symmetric polynomials $e_k$ as follows: $e_1 = p_1$, $2e_2 = e_1p_1 - p_2$, $3e_3 = e_2p_1 - e_1p_2 + p_3$, etc., and in general
$$ne_n = \sum_{j=1}^{n}(-1)^{j-1}e_{n-j}p_j.$$
Let $f : K^n \to K^n$ be the map $f(x_1, \ldots, x_n) = (e_1, \ldots, e_n)$ and let $df = (\partial e_i/\partial x_j)$ be the matrix for the mapping on tangent spaces.

Lemma 4.3. The matrix $df$ is row equivalent to the Vandermonde matrix $(x_j^{i-1})$; in particular, $\det(df)$ is a nonzero scalar multiple of $\prod_{i<j}(x_i - x_j)$.
Proof. The proof is to show, by applying row operations involving multiplying rows by $-1$ and adding multiples of one row to another, that one obtains the Vandermonde matrix. Row $n$ of the matrix $df$ is the gradient of $e_n$, namely $\nabla(e_n)$. Using Newton's identity we can rewrite this as
$$\frac{1}{n}\nabla\Big(\sum_{j=1}^{n}(-1)^{j-1}e_{n-j}p_j\Big) = \frac{1}{n}\sum_{j=1}^{n}(-1)^{j-1}\big(\nabla(e_{n-j})p_j + e_{n-j}\nabla(p_j)\big).$$
Using row operations, we can (up to row equivalence) clear out the terms $\nabla(e_{n-j})p_j$, since these are multiples of rows $\nabla(e_i)$ higher up in the matrix. We can do the same now for row $n-1$, and then row $n-2$, etc.
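A quick Macaulay2 sanity check of Newton's identity and of the Jacobian-Vandermonde relation for $n = 3$ (our sketch, not part of the original text; the exact sign convention in the last assertion is what we verified, not a claim from the paper):

R = QQ[x_1..x_3];
e = i -> sum(subsets(gens R, i), s -> product s);   -- elementary symmetric polynomials
p = k -> sum(gens R, v -> v^k);                     -- power sums
assert(3*e(3) == e(2)*p(1) - e(1)*p(2) + p(3));     -- Newton's identity for n = 3
df = matrix for i from 1 to 3 list for j from 1 to 3 list diff(R_(j-1), e(i));
assert(det df == product(subsets(gens R, 2), s -> s#0 - s#1));  -- Vandermonde determinant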
Let Z ⊂ P 2 be a finite set of points admitting an unexpected curve C of degree m + 1 with a general point P = [a 0 : a 1 : a 2 ] of multiplicity m. Let F ∈ S be the form defining C over the field K( a 1 a 0 , a 2 a 0 ). Examples suggest that F * is bi-homogeneous of bi-degree (m, m + 1), but our proof is restricted to the case that the line arrangement dual to Z is free. A forthcoming independent result of W. Trok [Tr] also obtains a similar result on bi-degree in the free case.
Theorem 4.4. Let Z ⊂ P 2 be a finite set of points admitting an irreducible unexpected curve C = C P of degree m + 1 with a general point P = [a 0 : a 1 : a 2 ] of multiplicity m.
(a) The curve C P is unique.
(b) Let $F \in S$ be the form defining $C$ over the field $K(\frac{a_1}{a_0}, \frac{a_2}{a_0})$. Assume that the lines dual to the points of $Z$ comprise a free line arrangement. Then $F^*(a, x)$ is bi-homogeneous of bi-degree $(m, m+1)$. Furthermore, viewing $F^*(a, x) \in K[x_0, x_1, x_2][a_0, a_1, a_2]$, $F^*(a, x)$ has multiplicity $m$ in the $a$ variables at the general point $[x_0 : x_1 : x_2]$ (briefly, we will say $F^*(a, x)$ has a point of multiplicity $m$ in the $a$ variables at $a = x$).
(c) Assume that F ∈ R = K[a 0 , a 1 , a 2 ][x 0 , x 1 , x 2 ] is any bi-homogeneous form of bi-degree (m, m + 1) such that F (a, x) is reduced and irreducible for a general point a = P and has multiplicity m both in the a variables at a = x and in the x variables at x = a. Then F P (a, x) = (−1) m F (x, a) is the tangent cone at x = P to the curve F (P, x) = 0 for a = P , where F P is defined in (4.1).
Proof. (a) Since $C$ is irreducible, [CHMN] shows that $C$ is unique.

(b) By Lemma 3.1(d) we know that $F^*$ is bi-homogeneous of bi-degree $(m', m+1)$ for some $m' \geq m$ and that $F^*$ has multiplicity at least $m$ in the $a$ variables at $a = x$. The conclusion follows from examining a parameterization given in [CHMN] to conclude that $m' \leq m$. Let $\Lambda$ be the product of the forms defining the lines dual to the points of $Z$. It is not hard to check that there are no unexpected curves for $Z$ with $|Z| < 3$, so after a change of coordinates if need be, we may assume $x_0x_1$ divides $\Lambda$. Let $(s_0, s_1, s_2)$ be a syzygy of minimal degree (hence homogeneous of degree $m$; see [CHMN]) for the ideal $(\Lambda_{x_0}, \Lambda_{x_1}, \Lambda_{x_2})$, where $\Lambda_{x_i}$ denotes the partial with respect to $x_i$; thus
$$s_0\Lambda_{x_0} + s_1\Lambda_{x_1} + s_2\Lambda_{x_2} = 0.$$
Define $\phi$ formally to be the vector whose components are given by the cross product $\phi = (\phi_0, \phi_1, \phi_2) = (s_0, s_1, s_2) \times (x_0, x_1, x_2)$. Now let $\ell$ be the general line $a_0x_0 + a_1x_1 + a_2x_2 = 0$. If $a_2 \neq 0$, we can parameterize $\ell$ by $(ta_2, -a_2s, a_1s - a_0t)$, where $(t, s)$ are projective coordinates on $\mathbb{P}^1$, and by [CHMN] $\phi$ defines a birational map from $\ell$ to $C$.
Plugging $(x_0, x_1, x_2) = (ta_2, -a_2s, a_1s - a_0t)$ into $\phi(x_0, x_1, x_2)$, we see from the definition of the cross product that $a_2$ divides $\phi_2(ta_2, -a_2s, a_1s - a_0t)$, and since $x_i \mid s_i$ for $i = 0, 1$, we get that $x_i$ divides $\phi_i$ for $i = 0, 1$, and thus after the substitution $(x_0, x_1, x_2) = (ta_2, -a_2s, a_1s - a_0t)$ that $a_2$ divides $\phi_i$ for $i = 0, 1, 2$. Let $\psi_i = \phi_i/a_2$. Then $\psi_i$ is bi-homogeneous of degree $m-1$ in the $a$ variables and degree $m$ in the variables $s$ and $t$.
We also get a parameterization of $C$ using the slopes of lines through the point $P = a$. This induces an isomorphism from $E_P$ to $C$, where $E_P$ is the exceptional curve of the blow-up at $P$. But $\psi = (\psi_0, \psi_1, \psi_2)$ induces a birational map to $C$. Composing gives an isomorphism to $E_P$, which is therefore linear. Thus after a change of coordinates fixing $P$ and $x_2 = 0$, we may assume that the parameterization of $C$ given by $\psi$ is the same as that given by slopes of lines through $P$. In particular, consider the line through the point $(a_0/a_2, a_1/a_2, 1)$ with slope $t/s$; in affine coordinates it is
$$\frac{x_1}{x_2} - \frac{a_1}{a_2} = \frac{t}{s}\Big(\frac{x_0}{x_2} - \frac{a_0}{a_2}\Big),$$
which we can rewrite as $(x_1a_2 - a_1x_2)s = t(x_0a_2 - a_0x_2)$. Setting $x_2 = 0$ gives the coordinates $a_2t = x_1$, $sa_2 = x_0$ as the parameterization on $E_P$, and we can assume that the parameterization of $C$ given by mapping the point $(t, s)$ on $E_P$ to the point $(x_0 : x_1 : x_2)$ where the line $(x_1a_2 - a_1x_2)s = t(x_0a_2 - a_0x_2)$ meets $C$ (away from $P$) is the same point of $C$ as that given by $\psi(ta_2, -a_2s, a_1s - a_0t)$.
on $a_0 = 0$ are these same points $[0 : h_i : 1]$. Thus translating back we see $F(a, x)$ vanishes on the lines through $[x_0 : x_1 : x_2]$ and $[0 : h_i : 1]$, so the forms defining these lines divide $F(a, x)$. Specifically, we have $G(a) = b_0(a_1 - h_1a_2) \cdots (a_1 - h_ma_2)$, so $F(a, x)$ factors accordingly. Consider a point $x = a$ where $b_0x_0 \neq 0$. Then the branches of $F(a, x) = 0$ in a neighborhood of $x = a$ are defined by the vanishing of the factors $\lambda_i(x_0, x_1, x_2)$. The tangent line to the branch $\lambda_i(x) = 0$ at $x = a$ is defined by $\big((\nabla\lambda_i)|_{x=a}\big) \cdot (x_0, x_1, x_2) = 0$.
So we see that the tangent cone to $F(a, x) = 0$ at a point $P = a$ such that $b_0x_0 \neq 0$ is defined by the vanishing of
$$F_P(a, x) = b_0(a)\,a_0\prod_{i=1}^{m}\det\begin{pmatrix} 0 & h_i(a) & 1 \\ x_0 & x_1 & x_2 \\ a_0 & a_1 & a_2 \end{pmatrix} = (-1)^m b_0(a)\,a_0\prod_{i=1}^{m}\det\begin{pmatrix} 0 & h_i(a) & 1 \\ a_0 & a_1 & a_2 \\ x_0 & x_1 & x_2 \end{pmatrix} = (-1)^m F(x, a).$$
We can now extend our result to unexpected curves that are unique but not necessarily irreducible.
Corollary 4.5. Let $Z \subset \mathbb{P}^2$ be a finite set of points admitting a unique unexpected curve $C$ of degree $m+1$ with a general point $P = [a_0 : a_1 : a_2]$ of multiplicity $m$. Let $F(a, x) \in S$ be the form defining $C$ over the field $K(\frac{a_1}{a_0}, \frac{a_2}{a_0})$ and assume $F^*$ is bi-homogeneous of bi-degree $(m, m+1)$. Then $(F^*)_P(a, x) = (-1)^mF^*(x, a)$ defines the tangent cone to $C$ at $P$.

Figure 3. The unexpected $B_3$ quartic curve $C$ defined by $F(a, x) = 0$ (graphed as a solid line) and the graph of $F(x, a) = 0$ (dashed lines) for $P = (a_0, a_1, a_2) = (-6, -5, 4)$.
Proof. It is shown in [CHMN] that $C = C' \cup \Lambda_1 \cup \cdots \cup \Lambda_r$, where $C'$ is unexpected for a subset $Z'$ of $Z$ and has degree $m + 1 - r$ with a general point $P$ of multiplicity $m - r$, where $r = |Z| - |Z'|$ and each $\Lambda_i$ is the line from $P$ to $p_i$, where $p_1, \ldots, p_r$ are the points of $Z$ not in $Z'$. Thus $F = GL_1 \cdots L_r$, where $G$ is the form defining $C'$ and $L_i$ is the bi-linear form defining $\Lambda_i$. So we have $F^* = G^*L_1^* \cdots L_r^*$, but $L_i^* = L_i$ and $L_i(a, x) = -L_i(x, a)$, so $F^*(a, x) = (G^*L_1 \cdots L_r)(a, x) = (-1)^m(G^*L_1 \cdots L_r)(x, a)$. The tangent cone for $C$ is the tangent cone for $C'$ union the lines $\Lambda_i$, so we also see that $(-1)^mG^*(x, a)L_1^*(x, a) \cdots L_r^*(x, a) = (-1)^mF^*(x, a)$ defines the tangent cone for $C$.
5. Open Problems
In this short section we list some open problems stemming from this work.
1. Suppose $Z$ is a finite set of points which admits an unexpected curve of degree $d = m + 1$ having a general point $P$ of multiplicity $m$. If the arrangement of lines dual to $Z$ is not free, to what extent does BMSS duality still hold?

2. To what extent does BMSS duality hold in higher dimensions? For example, it holds for $B_4$ and $F_4$ with $d = m = 4$. In these cases the unexpected surfaces are defined by a form $F(a, x)$ of bi-degree $(4, 4)$ and the form for the tangent cone at $P$ is $F(x, a)$. It also holds for $D_4$ with $d = m = 3$. In this case the unexpected surfaces are defined by a form $F(a, x)$ of bi-degree $(3, 3)$ and the form for the tangent cone at $P$ is $-F(x, a)$.

3. Given an unexpected variety for a finite point set $Z$ having a general point $P$ of multiplicity $m$ and degree $d$, let $B_Z(P)$ be the base locus of $[I_{Z+mP}]_d$ and let $B_Z = \cap_P B_Z(P)$, which we can refer to as the base locus associated to $Z$. What can be said about this associated base locus? For example, what is its dimension? If it is 0-dimensional, when is it strictly larger than $Z$?

4. What is special about the root systems having unexpected hypersurfaces? For example, why do the systems $A_{n+1}$ not seem to have any?

5. For the systems $B_{n+1}$, computational runs suggest there might be unexpected hypersurfaces with $d = m = 4$ for all $n \geq 3$ and with $d = m = 3$ for all $n \geq 5$. How can one prove this? And why only $3 \leq m \leq 4$?

6. Let $Z$ be a non-degenerate set of points in linear general position in $\mathbb{P}^n$, $n \geq 3$. Is it true that there does not exist an unexpected hypersurface of any degree $d$ and multiplicity $d - 1$ at a general point?

7. Is there a class of finite sets of points in $\mathbb{P}^n$ for $n \geq 3$ (or respectively a condition on $(d, m)$) for which the syzygy bundle plays a similar role in the study of unexpected hypersurfaces to that which it plays when $n = 2$ and $m = d - 1$, or is that purely a phenomenon for the plane?

8. Let $Z$ be a set of points in $\mathbb{P}^2$ admitting a (unique) unexpected curve of degree $m+1$ with a general point of multiplicity $m$. Then $Z + mP$ does not impose independent conditions on forms of degree $m+1$, but for a suitable subset $Z'$ of $2m+2$ of the points of $Z$, $Z' + mP$ does impose independent conditions, and the curve is still unique (but it is no longer unexpected for $Z'$). So suppose we consider more generally sets $Z$ of $2m+2$ points such that there is a unique irreducible curve of degree $m+1$ containing $Z$ and having a general point of multiplicity $m$ (but not necessarily unexpected). How does the bi-degree of $F^*$ depend on $Z$? Is there a connection between this bi-degree and the question of whether $Z$ extends to a set of points for which the curve is unexpected?
"year": 2018,
"sha1": "b438fa1e0e226f14bf6400184ccfb8382d0319c1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.10626",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b438fa1e0e226f14bf6400184ccfb8382d0319c1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
A Statistical Analysis for the Neutrinoless Double-Beta Decay Matrix Element of 48Ca
Neutrinoless double beta decay ($0\nu\beta\beta$) nuclear matrix elements (NME) are the object of many theoretical calculation methods, and are very important for the analysis and guidance of a large number of experimental efforts. However, there are large discrepancies between the NME values provided by different methods. In this paper we propose a statistical analysis of the $^{48}$Ca $0\nu\beta\beta$ NME using the interacting shell model, emphasizing the range of the NME probable values and its correlations with observables that can be obtained from the existing nuclear data. Based on this statistical analysis with three independent effective Hamiltonians, we propose a common probability distribution function for the $0\nu\beta\beta$ NME, which has a range of (0.45 - 0.95) at 90\% confidence level and a mean value of 0.68.
I. INTRODUCTION
The study of the double-beta decay (DBD) is currently a hot research topic since it is viewed as one of the most promising approaches to clarify important, as yet unknown, properties of neutrinos and to explore physics beyond the Standard Model (SM) [1,2]. Two scenarios are possible for this process to occur: i) two-neutrino double-beta (2νββ) transitions (with emission of two electrons/positrons and two anti-neutrinos/neutrinos), which conserve the lepton number and are allowed by the SM and ii) double-beta decay transitions without emission of neutrinos (0νββ), which violate the lepton number conservation and are only allowed by theories beyond SM (BSM).
Neutrinoless DBD has not been experimentally detected so far, but its measurement would provide important information about lepton number violating (LNV) processes; neutrino properties (the absolute neutrino mass scale and mass hierarchy, the neutrino nature as a Dirac or Majorana fermion, the number of neutrino flavours); CP and Lorentz symmetry violation; the constraining of different BSM mechanisms that may contribute to this decay mode, etc. The most commonly investigated mechanism is the exchange of light left-handed (LH) Majorana neutrinos between two nucleons, but once a LNV operator is introduced in the Lagrangian, several other mechanisms are also allowed, such as the exchange of light and heavy neutrinos in left-right symmetric models, the exchange of supersymmetric particles, DBD with the emission of Majorons, etc.
The DBD half-life equations can be expressed, in a good approximation, as a product of some factors. The 2νββ half-life is a product of a phase space factor (PSF), which depends on the atomic charge and energy released in the decay, and a nuclear matrix element (NME) related to the nuclear structure of the parent and daughter nuclei. The 0νββ half-life, besides the PSF and NME factors, also contains a LNV factor, related to the particular BSM mechanism that may contribute to the decay. If several mechanisms are considered, the inverse half-life can be written as a sum of all the individual contributions and their interference terms [2][3][4][5][6][7][8]. Using the experimental limits of the 0νββ decay half-lives and the theoretical values of PSF and NME, one can constrain the LNV parameters and the associated BSM scenarios, usually under the assumption that only one mechanism contributes at one time.
There is currently significant progress in the DBD experiments (in terms of the amount of source material, decreasing background, and improvement in the detection techniques), leading to the expectation that the next generation of experiments will be able to cover the entire region of the neutrino inverted mass hierarchy [9]. Concurrently, the progress of the theoretical methods now provides us with accurate PSF values for all the double-beta decay modes and transitions [10][11][12]. Thus, at present, the main remaining source of uncertainty in the DBD calculations is the NME evaluation.
There are several nuclear structure methods for the NME calculation, the most used being: shell model methods [13-23], pnQRPA methods [24-30], IBA methods [31,32], the Energy Density Functional method [33], PHFB [34], the Coupled-Cluster method (CC) [35], the in-medium generator coordinate method (IM-GCM) [36], and the valence-space in-medium similarity renormalization group method (VS-IMSRG) [37]. Each of these methods has its strengths and weaknesses, widely discussed over time in the literature, and the current situation is that there are still significant differences between NME values calculated with different methods, and sometimes even between NME values calculated with the same method (see for example the review [9]). For the 2νββ decay the NME are products of two Gamow-Teller (GT) transition amplitudes, and most of the nuclear methods overestimate them in comparison with experiment. This drawback is often treated by introducing a quenching factor that multiplies the GT operator and reduces its strength; this is equivalent to using a quenched axial-vector coupling constant instead of its bare value $g_A = 1.27$.
For the 0νββ decay the NME calculation is more complicated since, besides the GT transitions, other transitions may contribute as well. Also, the NME values calculated by different methods may differ by factors of 3-4 for most relevant isotopes, and up to 7-8 in the case of 48Ca (see e.g. Fig. 5 of Ref. [9], and Refs. [35,36]). Uncertainties in the NME values are further amplified when predicting half-lives, since the NME enter at the power of two in the lifetime formulas. In addition, there is no measured lifetime for this decay mode to compare with, and these uncertainties in the NME computation affect the interpretation of the DBD data and the planning of the performance of the DBD experiments.
The shell model-based methods have some advantages, such as the inclusion of all correlations between nucleons around the Fermi surface, the preservation of all symmetries of the nuclear many-body problem, and the use of widely tested nucleon-nucleon (NN) interactions. For different mass regions of nuclei, one uses several different effective NN Hamiltonians that are appropriate for the corresponding model spaces. These effective Hamiltonians are usually obtained by starting with a theoretical Brueckner G-matrix Hamiltonian that is further fine-tuned to describe the experimental energy levels of a large number of nuclei that can be investigated in the corresponding model spaces. These effective Hamiltonians are described by a small number of single-particle energies and a finite number of two-body matrix elements. As a by-product, the wave functions produced by these Hamiltonians can be used to describe and predict observables, such as electromagnetic transition probabilities, Gamow-Teller transition probabilities, nucleon occupation probabilities, spectroscopic factors, etc., using relatively simple changes of the transition operators in terms of effective charges and quenching factors. These effective charges and quenching factors are calibrated to the existing data. For the 0νββ NME such calibrations are not yet possible due to the lack of data. However, the different existing effective Hamiltonians for the nuclei involved in a given 0νββ decay produce a relatively narrow range of NME values. In addition, some recent ab initio methods, such as IM-SRG [36,37], build on the modern advances in the shell model by providing ab initio derived effective Hamiltonians and effective transition operators, and they can provide some guidance for calibrating the shell model 0νββ NME.
It would thus be interesting to study the robustness of the 0νββ NME to small changes of the parameters of different effective shell model Hamiltonians, and to examine how the NME changes are correlated with other observables. In this work, we propose a statistical analysis of the 0νββ NME of 48 Ca calculated with the interacting shell model using three independent effective Hamiltonians (FPD6, GXPF1A, KB3G), emphasizing the range of the probable NME values and their correlations with several observables that can be compared to existing nuclear data. Based on this statistical analysis we propose a common probability distribution function for the 0νββ NME. We apply our analysis to 48 Ca, which is the lightest DBD isotope and thus more accessible to ab-initio calculations. We only consider in this work the standard light LH neutrino exchange mass mechanism, which is most likely to contribute to the 0νββ decay process.
The paper is organized as follows. In Section II the calculation methods of the observables and the statistical model are presented. Then, in Section III we present the results and a discussion of their relevance, and in Section IV we end with conclusions and outlook. Finally, we include an Appendix with a short presentation of the Gram-Charlier A series that we use in our statistical model.
Table I. Experimental data, experimental errors, and the values calculated using the 3 effective Hamiltonians for the observables analyzed. The data for the occupation probabilities (*) are not available and were replaced with the GXPF1A results, with errors (#) of 5% of the highest nucleon-species occupation.
II. THE STATISTICAL MODEL
We plan to investigate the effect of small, random variation of the shell model effective Hamiltonian on the neutrinoless double beta decay NME of 48 Ca, and the NME correlations with other calculated observables, such as 2 + energies, B(E2)↑ values, 2νββ matrix elements, Gamow-Teller transition probabilities, neutron and proton occupation probabilities, etc.
To achieve that goal we selected a number of often used effective Hamiltonians describing nuclei around 48 Ca in the fp-shell (0f 7/2 , 0f 5/2 , 1p 3/2 , and 1p 1/2 orbitals for both protons and neutrons), and added small random contributions to their two-body matrix elements (TBME). For this project we only considered the FPD6 Hamiltonian [42], the KB3G Hamiltonian [15], and the GXPF1A Hamiltonian [43,44] as starting effective Hamiltonians. In order to maintain the magicity of 48 Ca we decided to keep the single particle (s.p.) energies in the perturbed effective Hamiltonians the same as in the starting Hamiltonians.
One important decision to be made about the random contributions to the starting Hamiltonians is the choice of their maximum amplitude (range). In this work we were guided by the analysis of the USDA/USDB effective Hamiltonians [45], where one starts with an underlying G-matrix and modifies linear combinations of two-body matrix elements in a fine-tuning procedure [45] until the root mean square (RMS) deviation of the calculated energies vs the experimental ones shows some signs of convergence. In this fine-tuning process one would not want to change the TBME too much from the original G-matrix values, because the over-fitted TBME could result in unitary changes of the s.p. wave functions that may produce slightly better energies, but incorrect observables. For USDA, for example, the RMS deviation of the TBME was about 300 keV, while a small improvement in the overall energies given by USDB resulted in an additional change of 100 keV in the RMS deviation of the TBME. This analysis suggests that an additional RMS of about 100 keV would not dramatically change the quality of the TBME in the sd-shell, and we extended this choice to the fp-shell. An analysis of the TBME for all three starting Hamiltonians listed above indicates that a ±10% range for the random contributions would suffice.
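As an illustration of the sampling step, the following minimal Python sketch draws the randomly perturbed copies of the TBME; it assumes the matrix elements are available as a flat numerical array, reads the ±10% range as a perturbation proportional to each matrix element (an assumption on our part), and all function names are illustrative.

import numpy as np

def sample_perturbed_tbme(tbme, amplitude=0.10, n_samples=20000, seed=0):
    """Draw random perturbed copies of a set of two-body matrix elements.

    Each TBME v is replaced by v * (1 + u), with u uniform in
    [-amplitude, +amplitude].  Single-particle energies are left
    untouched, as described in the text.
    """
    rng = np.random.default_rng(seed)
    tbme = np.asarray(tbme, dtype=float)
    u = rng.uniform(-amplitude, amplitude, size=(n_samples, tbme.size))
    return tbme * (1.0 + u)

# Illustrative use with a dummy set of three matrix elements (in MeV):
print(sample_perturbed_tbme([-1.2, 0.4, -0.8], n_samples=3))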
In the analysis we included as observables the 0νββ NME, the 2νββ NME, the Gamow-Teller probability to reach the first 1 + state in 48 Sc from the ground states (g.s.) of the parent, 48 Ca, and of the daughter, 48 Ti, the energies of the 2 + , 4 + and 6 + states of the parent and daughter, the B(E2)↑ transition probabilities to the first 2 + state of parent and daughter, the neutron occupation probabilities of the pf states of the parent, and neutron and proton occupation probabilities of the pf states of the daughter nucleus. The experimental occupation probabilities for the nuclei relevant for the 48 Ca 0νββ decay are not available, but we include synthetic (calculated) values in the analysis because the corresponding occupation probabilities are available for other nuclei of interest for 0νββ decay [46][47][48], and it might be interesting to see if they have any correlations with the 0νββ NME. All in all, there are 24 observables included in our statistical analysis.
The main goals are: (i) for each starting effective Hamiltonian, find correlations between the 0νββ NME and the other observables that are accessible experimentally; (ii) find theoretical ranges for each observable; (iii) establish the shape of the distributions for each observable and starting Hamiltonian; (iv) use this information to find the weights of the contributions from different starting Hamiltonians to the "optimal" distribution of the 0νββ NME; (v) find an "optimal" value of the 0νββ NME and its predicted probable range (theoretical error). One should mention that similar studies for other observables were recently proposed [49].
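For goal (iv), one concrete reading of the likelihood weighting invoked below is sketched here (Section III quotes χ² values of 7.9, 4.8, and 7.3 for FPD6, GXPF1A, and KB3G); the helper name is ours.

import numpy as np

def hamiltonian_weights(chi2_values):
    # Normalized likelihood-style weights W_H proportional to exp(-chi^2 / 2).
    w = np.exp(-0.5 * np.asarray(chi2_values, dtype=float))
    return w / w.sum()

# chi^2 values quoted later in the text for FPD6, GXPF1A, and KB3G:
print(hamiltonian_weights([7.9, 4.8, 7.3]))  # GXPF1A receives the largest weight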
The 0νββ NME is related to the half-life of the respective process [17] by

$$\left[ T^{0\nu}_{1/2} \right]^{-1} = G^{0\nu}\, g_A^4\, \left| M^{0\nu} \right|^2 \left\langle \eta_l \right\rangle^2 ,$$

where G 0ν and M 0ν are the PSF and nuclear matrix elements for the 0ν decay, g A is the axial vector coupling constant, and ⟨η l⟩ ≡ ⟨m ββ⟩/(m e c²) is a BSM parameter associated with the light neutrino exchange mechanism. Here we only consider the contribution from the light LH neutrino exchange mechanism, which is likely to contribute to the 0νββ decay. The methodology of calculating the 0νββ NME, M 0ν, within the shell model was extensively described elsewhere [17,18,23] and it will not be repeated here. Suffice it to say that it includes a short-range correlation function that can be viewed as an effective modification of the bare operator (see below). The 2νββ NME is related to the half-life of the respective process [16] by

$$\left[ T^{2\nu}_{1/2} \right]^{-1} = G^{2\nu}\, g_A^4\, \left| M^{2\nu} \right|^2 .$$

Here, G 2ν is the appropriate PSF, and M 2ν can be calculated with

$$M^{2\nu} = \sum_k \frac{\langle 0^+_f \| \sigma \tau^- \| 1^+_k \rangle \, \langle 1^+_k \| \sigma \tau^- \| 0^+_i \rangle}{E_k + E_0} ,$$

where the summation is over the 1 + k states in 48 Sc and E 0 is the energy shift fixed by the Q-value of the decay. Often, the shell model calculations of the 0νββ NME are described as using the "bare" transition operator. This characterization is unfortunate, because the transition operator (see e.g. Eqs. (7-12) of Ref. [17]) contains the bare operator from the underlying theory of 0νββ decay modified by a phenomenological effective short-range correlation function, 1 + f(r), which quenches the 0νββ NME. Therefore, the short-range modification of the bare operator acts as an effective operator. In practice the parameters of an effective operator need to be calibrated to the data. Given that the short-range correlator has a radial dependence, its calibration has so far been done only relative to some ab-initio results. The standard Miller-Spencer short-range correlator [17,50] produces the highest quenching of the NME, while the CD-Bonn parameterization of the short-range correlator [51] produces little to no quenching. A direct renormalization of the 0νββ NME by a similarity renormalization group (SRG) evolution of the NME of the bare operator from 200 to 10 major harmonic oscillator shells using CD-Bonn two-body wave functions indicates that using a phenomenological CD-Bonn parametrization of the short-range correlator is a reasonable approach [52]. More recent ab-initio calculations of the 0νββ NME using the N3LO Hamiltonian provide more quenched values, more consistent with the shell model results based on the Miller-Spencer parametrization of the short-range correlator. In an effort to calibrate the effective operator used in shell model calculations to the latest ab-initio results, we used the Miller-Spencer correlator in this study.
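As a numerical illustration of how the NME uncertainty propagates to the half-life (the NME enters squared), the sketch below evaluates the light-neutrino-exchange formula above; the phase-space factor is an order-of-magnitude placeholder, not a value taken from this paper.

def halflife_0vbb(G0v, M0v, m_bb_eV, gA=1.27):
    # [T_{1/2}]^{-1} = G0v * gA^4 * |M0v|^2 * <eta_l>^2, with
    # <eta_l> = m_bb / (m_e c^2); G0v in 1/yr, result in years.
    eta = m_bb_eV / 0.511e6          # m_e c^2 = 0.511 MeV
    return 1.0 / (G0v * gA**4 * M0v**2 * eta**2)

# Placeholder inputs: G0v ~ 2.5e-14 / yr, M0v = 0.68, m_bb = 50 meV.
t1 = halflife_0vbb(2.5e-14, 0.68, 0.05)
t2 = halflife_0vbb(2.5e-14, 2 * 0.68, 0.05)   # doubling the NME...
print(f"{t1:.2e} yr, ratio = {t1 / t2:.1f}")  # ...shortens T1/2 fourfold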
The other observables used in this study, including the excited state energies, the GT strengths to the first 1 + state in 48 Sc, the B(E2)↑ to the first 2 + state in the parent and daughter, as well as the s.p. occupation probabilities, are calculated in the standard way. Here we use in all cases the same effective charges (e p = 1.5 and e n = 0.5) for the B(E2)↑, and the same quenching factor (q = 0.74) for the GT strengths and M 2ν .
III. RESULTS
The experimental data used in this study, listed in Table I and in the legends of the rightmost column in Tables IV/V -VIII/IX, are taken from Ref. [40] (excitation energies of the 2 + , 4 + and 6 + states of 48 Ca and 48 Ti, in MeV), Ref. [38] (2νββ NME, in MeV −1 ), Ref. [39] (B(E2)↑, in e 2 b 2 ), and Ref. [41] for the GT transition probabilities to the first excited 1 + state in 48 Sc. The experimental errors for the excitation energies are very small, and for the calculation of the χ 2 we use the typical theoretical RMS value of 150 keV [43]. The experimental occupation probabilities are not available, and we took as reference the GXPF1A results, assuming a uniform error that we chose to be 5% of the highest occupation for each nucleon species in the fp-shell. In the tables, Occ (Nf7) designates the neutron occupation probability of the f 7/2 s.p. orbital, Occ (Pf3) designates the proton occupation probability of the p 3/2 s.p. orbital, etc.
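A minimal sketch of the χ² evaluation used here, comparing calculated and experimental observables with their per-observable errors; the numbers in the example are illustrative and are not the entries of Table I.

import numpy as np

def chi2(calc, exp, sigma):
    # chi^2 = sum_i ((calc_i - exp_i) / sigma_i)^2 over the observables.
    calc, exp, sigma = map(np.asarray, (calc, exp, sigma))
    return float(np.sum(((calc - exp) / sigma) ** 2))

# Toy example with three observables; for excitation energies the text
# replaces the tiny experimental errors with the theoretical RMS of 0.15 MeV:
print(chi2(calc=[3.90, 1.05, 0.095], exp=[3.83, 0.98, 0.102],
           sigma=[0.15, 0.15, 0.010]))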
Tables IV/V -VIII/IX show the main results of this study. The leftmost column lists the 24 observables discussed in Section II, including the 0νββ NME. The middle column shows the scatter plots of the correlation of each variable with the 0νββ NME, and the last column shows the distribution of each observable when the random term is added to the respective effective Hamiltonian. The legends in the correlation column show the standard Pearson correlator R, and in the last column the legends include the mean, standard deviation, and skewness (normalized 3rd moment) of the distributions, as well as the results for the starting interactions (FPD6, GXPF1A, and KB3G) and the experimental values when available. Tables IV/V present the results for the FPD6 effective Hamiltonian, Tables VI/VII show the results for the GXPF1A effective Hamiltonian, and Tables VIII/IX present the results for the KB3G effective Hamiltonian. For each starting effective Hamiltonian we use 20,000 random Hamiltonians produced by the procedure described in Section II.
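The per-sample statistics quoted in the legends (the Pearson correlator R, the mean, standard deviation, and skewness) can be reproduced in a few lines; the two synthetic samples below merely stand in for the 20,000-point distributions of two observables.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nme0 = rng.normal(0.70, 0.08, 20000)                      # stand-in 0vbb NME
nme2 = 0.01 + 0.07 * nme0 + rng.normal(0, 0.005, 20000)   # correlated stand-in

r, _ = stats.pearsonr(nme0, nme2)     # Pearson correlator R
mean, std = nme0.mean(), nme0.std(ddof=1)
skew = stats.skew(nme0)               # normalized 3rd moment
print(f"R = {r:.2f}  mean = {mean:.3f}  std = {std:.3f}  skew = {skew:.3f}")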
The results in Tables IV/V -VIII/IX indicate strong correlations between the 0νββ NME and the 2νββ NME. Alternative approaches to obtaining these NME, e.g. QRPA calculations, calibrate parts of their nuclear Hamiltonian, such as the isoscalar particle-particle interaction g pp , to describe the experimental value of the 2νββ NME and to approximately restore the isospin symmetry, thus inducing a correlation with the 2νββ NME. In the shell model approach, the Hamiltonian remains unchanged, and all symmetries are enforced. Therefore, we conclude that the strong correlations between the 0νββ NME and the 2νββ NME are genuine.
Interestingly, the correlations between the 0νββ NME and the Gamow-Teller (GT) transition probabilities to the first 1 + state in 48 Sc are much reduced. One explanation of this phenomenon is based on the fact that the distributions of the GT strength from the parent and daughter (see the last column in Tables IV/V -VIII/IX) are asymmetric in opposite directions, thus diminishing the correlation effects. A quick look at the full correlation matrix in Table II shows that the GT strengths to the first 1 + state in 48 Sc from 48 Ca and 48 Ti are anti-correlated, with a correlation coefficient of about -0.5.
Other observables that have relatively high (anti)correlations with the 0νββ NME are the energies of the 2 + , 4 + and 6 + states in 48 Ti, and the neutron occupation probabilities in 48 Ca. Overall, the correlator R with the 2νββ NME is around 0.9, the ones with the energies of the 2 + , 4 + and 6 + states in 48 Ti are about 0.77, and the correlator with the 0f 5/2 occupation probability is about 0.6, while the occupation probability of the 0f 7/2 is anti-correlated with the 0νββ NME, R ≈ −0.6.
Additional interesting information can be extracted from the full correlation matrix for all 24 observables. Tables II -III show the full correlation matrix evaluated for the FPD6 starting Hamiltonian. It is interesting to analyze which other observables are correlated with those that are directly correlated with the 0νββ NME. We already discussed the correlations between the GT strengths and the 2νββ NME. In addition, one can observe that the 2 + , 4 + and 6 + states in 48 Ti are correlated with the neutron f and p state occupancies in 48 Ca, which in turn are correlated with some of the neutron state occupancies in 48 Ti. Also, some of the neutron occupation probabilities in 48 Ti are correlated with the B(E2)↑ values. These observations highlight the importance of a reliable experimental investigation of the occupation probabilities for these nuclei.
It would be interesting to extract some information about the possible range and mean value of the 0νββ NME based on this statistical analysis. First, it is clear that the values of all observables are quite stable to reasonably small changes of the effective Hamiltonian. No hints of any wild departure from the main values that would indicate some phase transition are found. This seems to be a consequence of the preservation of the nuclear many-body symmetries in the shell model. One can further try using the distributions of all available effective Hamiltonians to draw conclusions on some optimal value for the 0νββ NME and its range (error). One direct approach would be to superpose the distributions of the NME produced in Tables IV/V -VIII/IX with some weighting factors W H ,

$$\mathrm{PDF}(x) = \sum_H W_H \, \mathrm{PDF}_H(x) ,$$

where x is the value of the 0νββ NME. The normalized weights W H can be inferred using, for example, the likelihood probability ∝ exp(−χ 2 /2), or replacing the bare χ 2 with its individual contributions weighted by the corresponding correlators R. Based on the data we show in Table II, we get the following χ 2 for each starting effective Hamiltonian: 7.9 for FPD6, 4.8 for GXPF1A, and 7.3 for KB3G. Unfortunately, there is no experimental data for the occupation probabilities, which seem to correlate directly and indirectly with the 0νββ NME. Therefore, here we present the results of a "democratic" approach in which all W H are 0.33. Fig. 1 shows the probability distribution functions (PDF) for the three starting effective Hamiltonians and their weighted sum. To calculate each PDF we use the Gram-Charlier A series expansion [53] (see the Appendix for details), based on the first four normalized moments of the distributions presented in Tables IV/V -VIII/IX. Based on the results of our statistical analysis summarized in Fig. 1 (see the "weighted sum" curve) one can infer that with 90% confidence the 0νββ NME lies in the range between 0.45 and 0.95, with a mean value of about 0.68.
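A minimal sketch of this final step, assuming the Gram-Charlier A series is truncated at the fourth moment (the excess-kurtosis term defaults to zero here); the moment triples below are placeholders rather than the values read off Tables IV/V -VIII/IX, so the printed interval will not reproduce the quoted [0.45, 0.95].

import numpy as np

def gram_charlier_pdf(x, mu, sigma, skew, ex_kurt=0.0):
    # Gram-Charlier A series built from the first four normalized moments.
    z = (x - mu) / sigma
    phi = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
    he3 = z**3 - 3.0 * z                  # Hermite polynomial He3
    he4 = z**4 - 6.0 * z**2 + 3.0         # Hermite polynomial He4
    return phi * (1.0 + skew / 6.0 * he3 + ex_kurt / 24.0 * he4)

x = np.linspace(0.2, 1.3, 2201)
dx = x[1] - x[0]
moments = [(0.70, 0.09, 0.25), (0.66, 0.08, 0.10), (0.68, 0.10, 0.30)]
weights = [1.0 / 3.0] * 3                 # the "democratic" choice

pdf = sum(w * gram_charlier_pdf(x, *m) for w, m in zip(weights, moments))
pdf = np.clip(pdf, 0.0, None)
pdf /= pdf.sum() * dx                     # renormalize the weighted sum

cdf = np.cumsum(pdf) * dx
lo, hi = x[np.searchsorted(cdf, 0.05)], x[np.searchsorted(cdf, 0.95)]
mean = (x * pdf).sum() * dx
print(f"90% interval: [{lo:.2f}, {hi:.2f}], mean = {mean:.2f}")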
IV. CONCLUSION AND OUTLOOK
In conclusion, we developed a statistical model for analyzing the distribution of the 0νββ NME of 48 Ca using the interacting shell model in the fp-shell model space.
In the analysis we started from three widely used effective Hamiltonians for the lower part of the fp-shell, FPD6, GXPF1A and KB3G, to which we added random contributions to the TBME of ±10%. Using sample sizes of 20,000 points we analyzed, for each starting effective Hamiltonian: (i) the correlations between the 0νββ NME and the other observables that are accessible experimentally; (ii) the theoretical ranges for each observable; (iii) the shape of the distributions for each observable and starting Hamiltonian; (iv) the weighted contributions from different starting Hamiltonians to the "optimal" distribution of the 0νββ NME; (v) an "optimal" value of the 0νββ NME and its predicted probable range (theoretical error).
We found that the 0νββ NME correlates strongly with the 2νββ NME, but much less with the Gamow-Teller strengths to the first 1 + state in 48 Sc. We also found that the 0νββ NME exhibits reasonably strong correlations with the energies of the 2 + , 4 + and 6 + states in 48 Ti, and with the neutron occupation probabilities in 48 Ca. We also found that there are additional correlations between observables, such as between the energies of the 2 + , 4 + and 6 + states in 48 Ti and the neutron occupation probabilities, as well as between the B(E2)↑ values in 48 Ti and the proton and neutron occupation probabilities, which can indirectly influence the 0νββ NME. Therefore, we conclude that reliable experimental values of the occupation probabilities in 48 Ti and 48 Ca would be useful for this analysis, and potentially helpful in reducing the uncertainties of the 0νββ NME. | 2022-03-22T07:13:10.602Z | 2022-03-20T00:00:00.000 | {
"year": 2022,
"sha1": "6d7ef3ba0a6414a4ecd6546d76d5b084d8b00e6b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6d7ef3ba0a6414a4ecd6546d76d5b084d8b00e6b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119122569 | pes2o/s2orc | v3-fos-license | Increasing singlet fraction with entanglement swapping
We consider entanglement swapping for certain mixed states. We assume that the initial states have the same singlet fraction and show that the final state can have a singlet fraction greater than that of the initial states. We also consider two quantum teleportations and show that entanglement swapping can increase teleportation fidelity. Finally, we show how this effect can be demonstrated with linear optics.
Maximally entangled states are crucial for quantum information processing. If two parties share a pair of qubits in a maximally entangled state then they can perform quantum teleportation. However, usually the parties share a nonmaximally entangled state ̺. One can define the singlet fraction F of the state ̺ as the maximal overlap of the state ̺ with a maximally entangled state, i.e.,

$$F(\varrho) = \max_{|\Psi\rangle} \langle \Psi | \varrho | \Psi \rangle ,$$

where the maximum is taken over all maximally entangled states |Ψ⟩. If one performs quantum teleportation with the state ̺ preprocessed by local unitary operations [1], then the optimal teleportation fidelity is

$$f = \frac{2F + 1}{3} .$$

If F > 1/2 then the parties can perform quantum teleportation with the average fidelity of the teleported qubit exceeding the classical limit 2/3. However, it is well known that there are two-qubit entangled states which have F < 1/2. The Horodecki family has proved that one can increase the singlet fraction of any two-qubit entangled state above 1/2 by non-trace-preserving local operations and classical communication (LOCC) [2]. Verstraete and Verschelde have proved that one can do it even with trace-preserving LOCC [3]. Moreover, they have found how to obtain optimal teleportation with any two-qubit state, i.e., how to find an LOCC protocol which gives the highest average fidelity of the teleported qubit. Let us now consider a string of nodes connected by nonmaximally entangled pairs of qubits. The first set of entangled pairs is distributed between nodes A and B, the second one is distributed between B and C, and so on. In order to perform quantum teleportation from the first to the last node one first distills entanglement between A and B, then between B and C, and so on [4]. Next, one performs entanglement swapping [5,6] at each node, which creates entanglement between the first and the last node. Notice that usually many entangled pairs are needed between two nodes in order to distill entanglement. Finally, one performs quantum teleportation between the first and the last node. This strategy is a crucial ingredient of quantum repeaters [7]. However, for pure nonmaximally entangled states there exists another strategy. Instead of distilling a maximally entangled state between each pair of neighboring nodes and then performing entanglement swapping, one first performs entanglement swapping and then distills entanglement between the first and the last node [8,9,10,11,12]. If each pair of neighboring nodes is connected by a single nonmaximally entangled pure state, this strategy gives a higher probability of obtaining a maximally entangled state between the first and the last node.
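Both quantities just introduced are easy to evaluate numerically. The sketch below computes the overlap with the four standard Bell states — only a lower bound on F, since the definition maximizes over all maximally entangled states — together with the fidelity formula f = (2F + 1)/3; the example state is ours, not one from this paper.

import numpy as np

# The four Bell states in the computational basis |00>, |01>, |10>, |11>:
BELL = np.array([[1, 0, 0, 1],    # |Phi+>
                 [1, 0, 0, -1],   # |Phi->
                 [0, 1, 1, 0],    # |Psi+>
                 [0, 1, -1, 0]],  # |Psi-> (the singlet)
                dtype=complex) / np.sqrt(2)

def bell_fraction(rho):
    # Max overlap with the four Bell states (a lower bound on F).
    return max(float(np.real(v.conj() @ rho @ v)) for v in BELL)

def teleportation_fidelity(F):
    # Optimal teleportation fidelity f = (2F + 1) / 3.
    return (2.0 * F + 1.0) / 3.0

# Example: mixture of |Phi+> with the orthogonal product state |01>:
p = 0.6
rho = p * np.outer(BELL[0], BELL[0].conj()) + (1 - p) * np.diag([0, 1, 0, 0.0j])
F = bell_fraction(rho)
print(f"F = {F:.2f}, f = {teleportation_fidelity(F):.3f}")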
In this paper we consider entanglement swapping for certain mixed states. We assume that we have two pairs of nonmaximally entangled states with the same singlet fraction. We perform entanglement swapping and show that for certain initial states the singlet fraction of the final state is greater than the singlet fraction of the initial states. In particular, we show that the singlet fraction of the initial states can be smaller than 1/2 while the singlet fraction of the final state can be greater than 1/2. Thus the initial pairs of qubits are not useful for quantum teleportation (if one does not first increase their singlet fractions), while the final pair of qubits is useful for quantum teleportation. This effect does not occur for all mixed states. For example, it was shown that if one performs entanglement swapping with mixtures of two Bell states or with Werner states, then the final state always has a singlet fraction smaller than that of the initial states [13,14]. We will also consider maximization of the fidelity of quantum teleportation. We show that the fidelity of two teleportations is below 2/3 if we maximize the fidelity of each teleportation independently. However, if we first perform entanglement swapping and then teleport a qubit, then the fidelity of quantum teleportation is above 2/3. Before we present mixed states for which one can increase the singlet fraction with entanglement swapping, we discuss for which states one cannot do that. In Refs. [15,16] it was proved that parties who use LOCC cannot increase the singlet fraction of a single copy of an entangled Bell-diagonal state (e.g. Werner states or mixtures of two Bell states). We assume that two parties (Alice and Bob) share such a state. Let one party (Bob) prepare in his laboratory an additional arbitrary entangled state. We stress that this additional state is held by Bob and is not shared by Alice and Bob. Next, Bob performs a Bell measurement on a particle from the state which he shares with Alice and on the first particle from his additional entangled state. It is clear that the state of Alice's particle and the second particle from Bob's additional entangled state cannot have a singlet fraction greater than the singlet fraction of the original state, because Bob performed only local operations. We conclude that one cannot increase the singlet fraction of entangled Bell-diagonal states with entanglement swapping. Now, we present mixed states for which one can increase the singlet fraction with entanglement swapping. We assume that Alice and Bob as well as Bob and Charlie share a pair of qubits in a mixed entangled state: Alice and Bob share a state ρ_AB and Bob and Charlie share a state ρ_BC. Both states are entangled for p ≠ 0 and 0 < a < 1, and have the same singlet fraction. These states are mixtures of a nonmaximally entangled state and an orthogonal product state, and can be obtained by sending one qubit from an entangled pair of qubits through an amplitude damping channel. Let Bob measure his qubits from the two mixed states, which he shares with Alice and Charlie, in the Bell basis

$$|\Psi^{\pm}\rangle = \frac{1}{\sqrt{2}}\left(|01\rangle \pm |10\rangle\right), \qquad |\Phi^{\pm}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle \pm |11\rangle\right).$$

If Bob obtains |Ψ+⟩ or |Ψ−⟩ as the result of his measurement, then Alice's and Charlie's qubits are found in a certain state (after a local unitary operation), where N = 2p²a(1 − a) + 2p(1 − p)a is the probability of obtaining |Ψ+⟩ or |Ψ−⟩ as the result of Bob's measurement.
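The swapping step itself can be simulated with small matrices. The sketch below projects Bob's two qubits onto a Bell state and, as a consistency check of the Bell-diagonal claim above, verifies that swapping two Werner states lowers the Bell-state singlet fraction; all helper names are ours.

import numpy as np

BELL = np.array([[1, 0, 0, 1], [1, 0, 0, -1],
                 [0, 1, 1, 0], [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2)

def bell_fraction(rho):
    return max(float(np.real(v.conj() @ rho @ v)) for v in BELL)

def swap_outcome(rho1, rho2, k):
    # State of Alice's and Charlie's qubits (and the outcome probability)
    # after Bob projects his two qubits onto BELL[k].  Qubit order is
    # A, B1 (first pair) and B2, C (second pair).
    rho = np.kron(rho1, rho2)                                   # order A B1 B2 C
    M = np.kron(I2, np.kron(BELL[k].conj().reshape(1, 4), I2))  # <Bell_k| on B1 B2
    out = M @ rho @ M.conj().T
    prob = float(np.real(np.trace(out)))
    return out / prob, prob

def werner(F):
    # Werner state with singlet fraction F.
    q = (4.0 * F - 1.0) / 3.0
    return q * np.outer(BELL[3], BELL[3].conj()) + (1.0 - q) * np.eye(4) / 4.0

rho_ac, prob = swap_outcome(werner(0.8), werner(0.8), 3)
print(f"F_in = 0.80 -> F_out = {bell_fraction(rho_ac):.3f} (prob {prob:.2f})")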
The singlet fraction of this state can be evaluated directly as a function of a and p. In Fig. 1 we present how the singlet fraction of the initial states and the singlet fraction of the final state depend on the parameter a for p = 0.75 when Bob obtains |Ψ+⟩ or |Ψ−⟩ as the result of his measurement. For a < 0.0285955 the initial states have a singlet fraction smaller than 1/2, and for a < 0.666667 the final state has a singlet fraction greater than 1/2. Hence, for a < 0.666667, if Alice teleports a qubit to Charlie with the final state, then the fidelity of the teleported qubit will exceed the classical limit. For a > 0.666667 the final state is still entangled, because it is a mixture of a maximally entangled state and an orthogonal product state, but one first has to increase its singlet fraction in order to perform quantum teleportation and exceed the classical limit.
If Bob obtains |Φ+⟩ or |Φ−⟩ as the result of his measurement, then Alice's and Charlie's qubits are found in a different state (after a local unitary operation), obtained with probability 1 − N. The singlet fraction of this state can again be evaluated directly. In Fig. 2 we present how the singlet fraction of the initial states and the singlet fraction of the final state depend on the parameter a for p = 0.75 when Bob obtains |Φ+⟩ or |Φ−⟩ as the result of his measurement. For a < 0.333333 the final state has a singlet fraction smaller than 1/2 and, moreover, it is separable. So far we have shown how one can increase the singlet fraction above 1/2 with entanglement swapping probabilistically, i.e., by non-trace-preserving operations, when the initial states have a singlet fraction below 1/2. Now we show how one can increase the singlet fraction above 1/2 with entanglement swapping deterministically, i.e., by trace-preserving operations. We use a similar idea as in Ref. [3]. Let Bob measure his qubits in the Bell basis. We suppose that if Bob obtains |Ψ±⟩ as the result of his measurement (which happens with probability N), then Alice and Charlie have a state with a singlet fraction greater than 1/2. Let the maximum in Eq. (1) for Alice's and Charlie's state be obtained for the maximally entangled state |Ψ−⟩. If Bob obtains |Φ±⟩ as the result of his measurement (which happens with probability 1 − N) and Alice and Charlie have a state with a singlet fraction smaller than 1/2, then they prepare the product state |01⟩. Hence, on average the singlet fraction is greater than 1/2. Let us now assume that Alice wants to send quantum information to Charlie. We consider two strategies. The first strategy is as follows. Alice and Bob deterministically increase the singlet fraction of their entangled state and Alice teleports a qubit to Bob. Next, Bob and Charlie deterministically increase the singlet fraction of their entangled state and Bob teleports the qubit (which he received from Alice) to Charlie. We assume that the parties try to optimize each teleportation independently. It was shown [3] that in the optimal trace-preserving LOCC protocol maximizing the singlet fraction for a state ρ, one party applies a local filter and classically communicates the result to the other party. If the filtering succeeds, the parties do nothing. Otherwise they prepare a pure product state. One can find the optimal filter A* and the fidelity F* by solving a semidefinite program in an operator X that is constrained to be of rank 1; the filter A is obtained from X through a simple relation [3]. In order to solve the semidefinite program we observe that the state ρ^Γ_AB (and the state ρ^Γ_BC) has the following symmetries [3]: it is invariant under transposition and under the local operations σ_z ⊗ σ_z and diag[1, i] ⊗ diag[i, 1]. The operator X has to have the same symmetries and hence is parametrized by six real parameters x_1, ..., x_6. Moreover, since X is of rank one, x_3 and x_4 have to be equal to 0. The maximization can now easily be done, and we obtain the maximal singlet fraction and local filter in two regimes, depending on whether √a √(1 − ap)/(1 − p) < 1 or ≥ 1. The final state of Alice and Bob (as well as of Bob and Charlie), after they apply the trace-preserving operation to the initial state, is given in Eq. (21) (Eq. (22)). Now, Alice teleports a qubit to Bob with the entangled state of Eq. (21), and then Bob teleports the qubit to Charlie with the entangled state of Eq. (22).
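The filtering idea can be illustrated without solving the semidefinite program: the brute-force scan below applies a one-parameter local filter diag(1, t) to one qubit of a state of the same general flavor as those considered here and records the best achievable Bell-state overlap and the corresponding success probability. This is only a sketch of Procrustean-style filtering on our own example state, not the optimal protocol of Ref. [3].

import numpy as np

BELL = np.array([[1, 0, 0, 1], [1, 0, 0, -1],
                 [0, 1, 1, 0], [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)

def bell_fraction(rho):
    return max(float(np.real(v.conj() @ rho @ v)) for v in BELL)

def apply_filter(rho, t):
    # Local filter diag(1, t) on the second qubit; returns the normalized
    # filtered state and the success probability of the filtering.
    A = np.kron(np.eye(2), np.diag([1.0, t])).astype(complex)
    out = A @ rho @ A.conj().T
    prob = float(np.real(np.trace(out)))
    return out / prob, prob

# Example input: sqrt(a)|00> + sqrt(1-a)|11> mixed with |01> noise:
a, p = 0.1, 0.75
psi = np.zeros(4, complex)
psi[0], psi[3] = np.sqrt(a), np.sqrt(1 - a)
rho = p * np.outer(psi, psi.conj()) + (1 - p) * np.diag([0, 1, 0, 0.0j])

print(f"unfiltered F = {bell_fraction(rho):.3f}")
best_F, best_t = max((bell_fraction(apply_filter(rho, t)[0]), t)
                     for t in np.linspace(0.02, 1.0, 99))
print(f"filtered F = {best_F:.3f} at t = {best_t:.2f}, "
      f"success prob = {apply_filter(rho, best_t)[1]:.2f}")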
The second strategy is as follows. Bob performs entanglement swapping. If after Bob's measurement Alice and Charlie have a state with a singlet fraction greater than 1/2, then they transform it with local unitary operations in such a way that the maximal overlap of the state with a maximally entangled state is obtained for the state |Ψ−⟩. If after Bob's measurement they have a state with a singlet fraction smaller than 1/2, then they prepare the product state |01⟩. Note that this is not the optimal strategy, because if the singlet fraction is below 1/2 and Alice's and Charlie's state is entangled, then they can increase its singlet fraction above 1/2. Hence, for the initial states of Eqs. (3) and (4) and p = 0.75, the final state of Alice and Charlie and its singlet fraction take different forms in the two regimes 1) a ≤ 0.333333 and 2) 0.333333 < a < 0.666667. In Fig. 3 we present how the teleportation fidelity for the two strategies depends on the parameter a for p = 0.75. One can see that if Bob first performs entanglement swapping, then Alice can teleport a qubit to Charlie with a teleportation fidelity greater than 2/3 for 0 < a < 1. On the other hand, if the parties increase the singlet fractions of the two states independently, then for a < 0.211325 and a > 0.788675 the teleportation fidelity does not exceed the classical limit. For 0.333333 < a < 0.666667 the two strategies give the same teleportation fidelity. This is connected to the fact that for 0.333333 < a < 0.666667 the final state resulting from entanglement swapping has a singlet fraction greater than 1/2 for each result of Bob's measurement, and the two strategies are equivalent. If a < 0.333333 or a > 0.666667 and Bob performs entanglement swapping, then the final state has a singlet fraction greater than 1/2 for two results of Bob's measurement and smaller than 1/2 for the other two results. In such a case Alice and Charlie replace it with a product state, which has a singlet fraction equal to 1/2. The effect which we described can be demonstrated experimentally with linear optics, single-photon sources, and photodetectors discriminating the number of photons. First one prepares two entangled states, written in a notation where |i⟩_a stands for i photons in mode a, and similarly for the other modes. Beam splitters with transmission coefficients T = pa/(1 − p(1 − a)) are placed in modes b_1 and b_2 (see Fig. 3). They realize amplitude damping channels and produce the states of Eqs. (2) and (3). The photons from modes b_3 and b_4 are not detected. The modes b_1 and b_2 are the incoming modes of a beam splitter with transmission coefficient T = 0.5. After this beam splitter there are photodetectors which discriminate the number of photons. If only one of the two photodetectors detects one photon, then the result of the measurement is |Ψ+⟩_{b_1 b_2} or |Ψ−⟩_{b_1 b_2}, and for appropriately chosen parameters a and p one increases the singlet fraction.
In conclusion, we have shown that entanglement swapping can increase the singlet fraction. For some states the singlet fraction of the initial states can be smaller than 1/2 while the singlet fraction of the final state is greater than 1/2. We have also considered the usefulness of entanglement swapping in quantum teleportation. Finally, we have shown how this effect can be demonstrated experimentally. Our results may have applications in quantum networks. | 2008-08-26T11:56:55.000Z | 2008-05-07T00:00:00.000 | {
"year": 2008,
"sha1": "bbee248f6123651d5c5b82239623c990d52647ea",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0805.1044",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bbee248f6123651d5c5b82239623c990d52647ea",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
13140213 | pes2o/s2orc | v3-fos-license | Akt Protein Kinase Inhibits Rac1-GTP Binding through Phosphorylation at Serine 71 of Rac1*
A putative Akt kinase phosphorylation site (64ydRIRplSYp73) was found in Rac1/CDC42 and Rho family proteins (RhoA, RhoB, RhoC, and RhoG). Phosphorylation of Rac1 by Akt kinase was assayed with recombinant Rac1 protein and the fluorescein-labeled Rac1 peptide. It was shown that the Rac1 peptide and the recombinant protein were phosphorylated by the activated recombinant Akt kinase and the lysate of SK-MEL28 cells, a human melanoma cell line. The phosphorylation of Rac1 inhibited its GTP-binding activity without any significant change in GTPase activity. Both the GTP-binding and GTPase activities of Rac1 S71A protein (with the serine residue to be phosphorylated replaced with alanine) were abolished regardless of the treatment of Akt kinase. Akt kinase activity and Rac1 peptide phosphorylation were down-regulated by the treatment of SK-MEL28 cells with wortmannin or LY294002 (a phosphoinositide 3-kinase inhibitor), but JNK/SAPK kinase activity was up-regulated. Thus, the results suggest that Akt kinase of the phosphoinositide 3-kinase signal transduction pathway phosphorylates serine 71 of Rac1 as one of its authentic substrates and modulates the Rac1 signal transduction pathway through phosphorylation.
Rac1 and CDC42 belong to a subgroup of the Rho GTPase family that binds to and hydrolyzes GTP (1,2). Several studies have shown that GTP-bound Rac1 is active and plays an important role in the control of cell shape, adhesion, movement, endocytosis, secretion, and growth while regulating various aspects of actin cytoskeleton organization (3,4). GTP binding to Rac1 is controlled by more than 15 GTP/GDP exchange factors (GEF) 1 and approximately 10 GTPase-activating proteins, depending on the external or internal signals (1,5,6). The role of Rac1 in the signal transduction pathway has also been intensively characterized (1)(2)(3)(4)(5)(6). It was reported that activated Rac1 binds to and activates PAK65 protein kinase (7,8). Activated PAK65 can stimulate MEKK1, which in turn phosphorylates and activates SEK/JNK kinase (9,10). The active SEK/JNK kinase phosphorylates JNK/SAPK, which in turn binds to and phosphorylates the N-terminal region of c-Jun (11). Rac1 also activates MEKK3, which in turn phosphorylates MKK3, which mediates the p38 signal transduction pathway (1,12). Thus, these results revealed that Rac1 plays a pivotal role in the signal transduction pathways that regulate multiple cellular functions.
The relationship between the Rac1 and PI3K signal pathways is still unclear. It was suggested that the activation of CDC42 or Rac1 disrupts the normal polarization of mammary epithelial cells in a collagen matrix and promotes motility and invasion while activating PI3K (13). Other data revealed that PI3K and Rac1 form a complex and that Akt kinase and Rac1 are on separate pathways downstream of PI3K (14). It was also reported that p85 (the regulatory subunit of PI3K) binds to GTP-bound Rac1/CDC42 and regulates Rac1/CDC42 downstream function (15,16). Thus, these studies indicated that PI3K is a downstream effector of Rac1. However, most studies suggested that PI3K functions upstream of Rac1 (2,6,8). In the T-cell signal pathway, it was reported that PI3K regulates Rac1 function (17). Other data also revealed that lipid products of PI3K interact with Rac1 to stimulate GDP dissociation (18) and that direct PI3K activation is sufficient to disrupt epithelial polarization and to induce cell motility and invasion, indicating that PI3K inhibition disrupts actin structures (19). Thus, although it is unclear whether PI3K is upstream of the Rac1 signal transduction pathway or not, it seems that the PI3K signal pathway cross-talks with Rac1.
To address how the PI3K and Rac1 signal transduction pathways are connected to each other, we investigated the relationship between Rac1 and Akt kinase instead of PI3K, since Akt kinase has been characterized as the primary signal transducer of the PI3K signal pathway (20,21). Akt kinase in the PI3K signal pathway has been reported to deliver a survival signal while protecting cells from the apoptotic cell death induced by growth factor withdrawal (21). Moreover, the specific amino acid sequence (xxRxRxx(S/T)xx, with the hydrophobic amino acid underlined) that can be phosphorylated by Akt kinase was characterized from all known Akt kinase substrate proteins (20 -25). Thus, the investigation of whether a protein contains the sequence xxRxRxx(S/T)xx may help to determine whether the protein is a possible Akt kinase substrate. From Rac1, CDC42, RhoA, RhoB, RhoC, and RhoG proteins, the specific sequence ( 64 ydRIRplSYp 73 ) that could be phosphorylated by Akt kinase was identified (26 -31). The putative Akt kinase phosphorylation site (serine 71) is located between the effector protein-binding domain and the GTP-binding domain (32). Thus, this finding led us to determine whether Akt kinase phosphorylates Rac1 as one of its substrate proteins and whether Rac1 phosphorylation by Akt kinase is one of the signal cross-talks between the Rac1 and PI3K signal transduction pathways. To demonstrate this, we performed an Akt kinase assay with recombinant Rac1 protein and the fluorescein-conjugated Rac1 peptide ( 64 ydRIRplSYp 73 ) and observed the phosphorylation of both recombinant Rac1 protein and the fluorescein-conjugated Rac1 peptide. To characterize how the phosphorylation by Akt kinase modulates Rac1 function, we compared Rac1 GTPase activity and its GTP binding with and without Akt kinase treatment. Thus, we observed that Rac1-GTP binding was significantly inhibited by Akt kinase phosphorylation, without a change in GTPase activity. With the Rac1 S71A mutant (with serine replaced with alanine, the mutant was not phosphorylated by Akt kinase), we observed that both its GTPase activity and GTP binding were abolished. In addition, we investigated whether Rac1 phosphorylation and JNK/SAPK activity are also modulated by Akt kinase activity by treatment with wortmannin or LY294002 (an Akt kinase inhibitor) in the SK-MEL28 cell line (20 -23). Furthermore, we observed the down-regulation of both Rac1 phosphorylation and Akt kinase activity and the activation of JNK/SAPK by treatment with wortmannin or LY294002. Therefore, our observations strongly suggest that Akt kinase phosphorylates Rac1 protein as its target protein, resulting in the inhibition of Rac1-GTP binding.
EXPERIMENTAL PROCEDURES
Cell Culture-SK-MEL28 cells (a human melanoma cell line) were purchased from American Type Culture Collection (Manassas, VA). Media and supplements were obtained from Life Technologies, Inc. The cell line was maintained in Dulbecco's modified essential medium containing 10% heat-inactivated (30 min at 56°C) fetal bovine serum, 100 units/ml potassium penicillin, 100 μg/ml streptomycin, 2 mM glutamine, and 20 mM sodium bicarbonate. The cells were incubated at 5% CO 2 and 95% humidity in a 37°C chamber. The growth medium was changed every 3 days.
Akt Protein Kinase Assay-The Akt kinase assay was performed following the protocol provided by Promega (Madison, WI) with the PepTag nonradioactive protein kinase C assay system, except for the substrate peptides (24). For Akt kinase substrates, the fluorescein-conjugated (the amino terminus of the peptide was conjugated with fluorescein isothiocyanate) IRS-1 ( 30 RKRSRKESYS 39 ) and Rac1 ( 64 ydRIRplSYp 73 ) oligopeptides were purchased from Peptron Co. (Daejun, Korea) (25-31). 5 μg of fluorescein-conjugated oligopeptide was incubated with 10 μl of differentially treated cell lysates or the activated Akt kinase in 20 μl of protein kinase reaction mixture (20 mM HEPES, pH 7.2, 10 mM MgCl 2 , 10 mM MnCl 2 , 1 mM dithiothreitol, 0.2 mM EGTA, 20 μM ATP, 1 μg of phosphatidylserine, and protein kinase activator) at 30°C for 30 min. The reactions were stopped by heating at 95°C for 10 min. The phosphorylated peptide was separated on 0.8% agarose gel at 100 V for 15 min. The phosphorylated products, which gained one more negative charge, migrated to the anode. After the gel was photographed on a transilluminator, the phosphorylated peptide band was sliced out. The absorbance at 488 nm of the phosphorylated substrate was measured with the spectrophotometer following the protocol provided by Promega.
Activation of the Recombinant GST-Akt Kinase Protein-Akt kinase-agarose beads and alkaline protein phosphatase were purchased from Upstate Biotechnology, Inc. (Lake Placid, NY) and activated following the protocol described in our previous report (24). SK-MEL28 cells (10 7 ) grown under serum-free conditions for 24 h were lysed with radioimmune precipitation assay lysate buffer (50 mM Tris, pH 7.4, 150 mM NaCl, 1% Nonidet P-40, 0.5% sodium deoxycholate, 0.1% SDS, 1 mM sodium orthovanadate, 100 μg/ml phenylmethylsulfonyl fluoride, and 1 μg/ml aprotinin). The cell lysate was precleaned with GST-agarose beads. Akt kinase-agarose beads (20 μg) were incubated with 50 μl of precleaned cell lysate and protein kinase assay buffer (20 mM HEPES, pH 7.2, 10 mM MgCl 2 , 10 mM MnCl 2 , 1 mM DTT, 0.2 mM EGTA, 20 μM ATP, 5 μg phosphatidylserine, protein kinase activator) in a final volume of 100 μl for 1 h at 30°C. The beads were precipitated and washed three times with excess cell lysis buffer. The final pellet was used for the Akt kinase assay.
Expression and Purification of Recombinant Proteins-GST-wild-type Rac1 and GST-Rac1 S71A fusion proteins were expressed in Escherichia coli and purified on glutathione-Sepharose beads (Amersham Pharmacia Biotech) according to the manufacturer's instructions. The recombinant GTPases were released from the beads by cleavage with human thrombin, and thrombin was removed by adding 10 ml of p-aminobenzamidine-agarose beads for 30 min at 4°C. Purified proteins were dialyzed against 15 mM Tris, pH 7.5, 150 mM NaCl, 5 mM MgCl 2 , and 0.1 mM DTT and concentrated by ultrafiltration with a Centricon-10 (Amicon, Inc.). Active protein concentrations were determined by a filter binding assay using [ 3 H]GTP as described (32). We performed Akt kinase phosphorylation and GTPase assays with these recombinant Rac1 proteins.
GTP Binding Assay-The assays monitoring the binding of GTP were performed as described previously (26,33,34). The phosphorylated or unphosphorylated protein (35 pmol) was incubated with [γ-32 P]GTP (4 μCi, 20 Ci/mmol; NEN Life Science Products) in 100 μl of binding buffer containing 50 mM HEPES, pH 7.6, 0.2 mg/ml bovine serum albumin, and 0.5 mM EDTA at 15°C for the indicated times. The reaction was stopped by the addition of 1 ml of ice-cold wash buffer (50 mM HEPES, pH 7.6, 150 mM NaCl, and 10 mM MgCl 2 ). The samples were then applied to a nitrocellulose membrane that had been rinsed with 3 ml of wash buffer. The filters were immediately washed twice with 3 ml of wash buffer and soaked in 0.2 ml of 0.1 M KOH for 30 min, and the radioactivity was counted by liquid scintillation spectrometry.
GTPase Assay-Rac1 GTPase was assayed as described (33)(34)(35). The phosphorylated or unphosphorylated protein (350 pmol) was loaded with 400 pmol of [γ-32 P]GTP in 100 μl of binding buffer at 15°C for 15 min. The reaction was initiated by adding MgSO 4 to 10 mM in 200 μl of GTPase buffer (50 mM HEPES, pH 7.6, 1 mM DTT, and 5 mM EDTA). Reaction mixtures were then incubated at 15°C for the indicated times. Aliquots (50 μl) were removed at the indicated times and mixed with 750 μl of 5% (w/v) Norit A in 50 mM NaH 2 PO 4 . The mixture was centrifuged at 2000 rpm for 5 min, and 400 μl of supernatant containing 32 P i was counted by liquid scintillation spectrometry.
Site-directed Mutagenesis of Rac1 S71A-The Rac1 point mutation with serine 71 replaced with alanine was introduced with the Chameleon double-stranded, site-directed mutagenesis kit (Stratagene) according to the manufacturer's instructions. The Rac1 S71A mutation was confirmed by DNA sequencing.
Rac1 and Rac1 S71A Expression Vector Transfection and Purification-Hexahistidine-tagged Rac1, Rac1 S71A, and dominant-positive Akt kinase were each cloned in the pUSEamp(+) mammalian expression vector (Upstate Biotechnology, Inc.). They were transfected or cotransfected in SK-MEL28 cells following the Lipofectin transfection method (Life Technologies, Inc.). Transfected cells (2 × 10 7 ) were lysed in radioimmune precipitation assay lysate buffer, and Ni 2+ -agarose beads (20 μg) were incubated with 500 μl of precleaned cell lysate. The beads were precipitated and washed three times with excess cell lysis buffer. The final pellet was used for the GTPase and GTP binding assays as described above.
RESULTS
Phosphorylation of Rac1 by Akt Kinase-It has been shown that Akt kinase recognizes a specific peptide sequence as its substrate (20 -25, 36 -38). In our previous report, we also showed that human telomerase is activated by Akt kinase phosphorylation (24). Inspecting the amino acid sequences of all known Akt kinase substrates (H2B, BAD, 6-phosphofructo-2-kinase, IRS-1, glycogen synthase kinase-3, caspase-9, and human telomerase reverse transcriptase), we found that the serine residue lies within stretches of homologous amino acids (Table I). The arginine residues at positions −5 and −3 and the hydrophobic amino acid at position +2 are conserved relative to the serine/threonine residues that are probably phosphorylated in these proteins. Therefore, the peptide xxRxRxx(S/T)xx (the hydrophobic amino acid is underlined) seems to be conserved in Akt kinase substrates (20 -25, 36 -38). We inspected possible Akt kinase substrate protein sequences with the Akt kinase substrate sequence specificity (xxRxRxx(S/T)xx). Interestingly, we identified the putative Akt kinase phosphorylation site ( 64 ydRIRplSYp 73 ) in Rac1 and all known Rho family proteins (Table I) (26 -31). This finding led us to investigate whether Akt kinase phosphorylates Rac1, which has been intensively studied as an important regulatory protein of actin organization, motility, invasion, and signal transduction (1)(2)(3)(4)(5)(6).
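Such an inspection is easy to automate. The sketch below scans a protein sequence for the consensus with a regular expression; the exact hydrophobic set allowed at position +2 is our assumption (Pro is included so that the Rac1 site quoted in the text matches).

import re

# Arg at -5 and -3 relative to the phospho-acceptor S/T, and a
# hydrophobic residue at +2 (the residue set here is an assumption):
AKT_MOTIF = re.compile(r'(?=(..R.R..[ST].[AVLIMFWYP]))')

def find_akt_sites(seq):
    # Returns (0-based position of the S/T, matched 10-residue window).
    seq = seq.upper()
    return [(m.start(1) + 7, m.group(1)) for m in AKT_MOTIF.finditer(seq)]

# Rac1 residues 64-73 quoted in the text; S71 is the acceptor:
print(find_akt_sites("YDRIRPLSYP"))   # -> [(7, 'YDRIRPLSYP')]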
To demonstrate whether Akt kinase phosphorylates Rac1, we performed an Akt kinase assay with the activated Akt kinase and Rac1 protein in vitro. As shown in Fig. 1A, recombinant Rac1 protein was phosphorylated by Akt kinase. Recombinant Rac1 S71A protein was produced by replacing Ser-71 (to be phosphorylated) with alanine, and the Rac1 S71A mutant was not phosphorylated by Akt kinase. As shown in Fig. 1A and by other data (37,38), Akt kinase was also phosphorylated by itself. Thus, it seems that Akt1 and Akt2 are self-phosphorylated at the intrinsic phosphorylation consensus sequence ( 65 teRpRpnTFi 74 ) in their amino acid sequences (39 -41). However, the function of Akt kinase self-phosphorylation remains to be determined. To confirm Akt kinase protein phosphorylation, the reaction mixture was treated with alkaline phosphatase, and the phosphorylated protein bands disappeared (Fig. 1A).
To further determine that Ser-71 ( 64 ydRIRplSYp 73 ) in Rac1 is phosphorylated by Akt kinase, the fluorescein-labeled Rac1 peptide ( 64 ydRIRplSYp 73 ) was incubated with Akt kinase, which was expressed in E. coli and activated with human melanoma cell lysate. The amount of phosphorylated Rac1 peptide increased depending on the activated Akt kinase concentration (Fig. 1B) and the reaction time (Fig. 1C). For the control, the recombinant GST-Akt kinase protein, which has no kinase activity, was also included, but it gave no detectable amount of phosphorylated product (Fig. 1, B and C). Taken together, these observations strongly demonstrate that Akt kinase phosphorylates Ser-71 of Rac1 as one of its substrate proteins.
Akt Kinase Phosphorylation Inhibits Rac1-GTP Binding, but Not Rac1 GTPase Activity, in Vitro and in Vivo-GTP-bound Rac1 is active and controls cell shape, adhesion, movement, endocytosis, secretion, and downstream JNK/SAPK activity (1)(2)(3)(4)(5)(6)(7)(8). Thus, we investigated whether Rac1 GTPase activity or its GTP binding was changed with or without Akt kinase pretreatment, to determine whether the phosphorylation by Akt kinase affects Rac1 function. In the experiments with recombinant Rac1 or Rac1 S71A expressed in E. coli, we observed that Rac1 GTPase activity was unchanged regardless of Akt kinase pretreatment (Fig. 2). To confirm this experiment, we used recombinant Rac1 S71A. The GTPase activity of the Rac1 S71A mutant was abolished regardless of Akt kinase pretreatment (Fig. 2). It seems that the lack of GTPase activity in the Rac1 S71A mutant is similar to that of the Rac1/CDC42 Q61L mutant (15). These results suggest that the phosphorylation by Akt kinase does not inhibit Rac1 GTPase activity in vitro. We also measured Rac1/CDC42 protein phosphorylation by Western blotting (data not shown).
It was confirmed by the cotransfection experiment that Rac1 phosphorylation by Akt kinase inhibits Rac1-GTP binding in vivo. When Rac1 was cotransfected with dominant-positive Akt kinase in SK-MEL28 cells, the GTP-binding activity of Rac1 was reduced (Fig. 4B), but its GTPase activity was not likely to be affected (Fig. 4A). The GTPase (Fig. 4A) and GTP-binding (Fig. 4B) activities of Rac1 S71A were abolished and not affected by the cotransfection of Akt kinase, consistent with in vitro assay results (Figs. 2 and 3). Taken together, the results suggest that Rac1 phosphorylation by Akt kinase inhibits its GTP binding without affecting its GTPase activity in vitro and in vivo.
Drugs Affecting Akt Kinase Activity Also Regulate Rac1 Phosphorylation and JNK/SAPK Activity-To further demonstrate that Akt kinase activity regulates Rac1 phosphorylation in vivo, wortmannin and LY294002 (an Akt kinase inhibitor) were used to modulate Akt kinase activity. Because it has been reported that wortmannin and LY294002 are PI3K/Akt kinase pathway inhibitors that result in specific Akt kinase inactivation (36 -38), we assayed Rac1/CDC42 phosphorylation in the SK-MEL28 cell line with wortmannin (0, 50, and 100 nM for a 2-h treatment) or LY294002 (0, 5, and 10 μM). To monitor any change in Akt kinase activity upon wortmannin or LY294002 treatment, the fluorescein-conjugated IRS-1 peptide was used as a substrate to monitor the activity of Akt kinase (25). As shown in Fig. 5A (upper panel), wortmannin and LY294002 inhibited Akt kinase in a dose-dependent manner.
To determine whether Rac1 phosphorylation is also modulated along with a change in Akt kinase activity, we assayed Rac1/CDC42 phosphorylation in the human melanoma cell line with wortmannin (0, 50, and 100 nM for a 2-h treatment) or LY294002 (0, 5, and 10 μM). Instead of the fluorescein-conjugated IRS-1 peptide, the fluorescein-labeled Rac1 peptide ( 64 ydRIRplSYp 73 ) was used as an Akt kinase substrate. As Akt kinase activity was inhibited by wortmannin or LY294002, Rac1 phosphorylation was also inhibited in a dose-dependent manner (Fig. 5A, lower panel). Thus, the results suggest that Akt kinase activity and Rac1/CDC42 phosphorylation are positively correlated in the human melanoma cell line.
It is known that the GTP-bound form of Rac1 is active and regulates its downstream biological functions. Recently, several reports revealed the inhibition of JNK/SAPK kinase by wortmannin, indicating that PI3K also regulates JNK/SAPK kinase (43,44). It was thus necessary to determine how the phosphorylation of Rac1/CDC42 by Akt kinase would affect its downstream JNK/SAPK kinase. The JNK/SAPK activity was measured with the nonradioactive JNK/SAPK assay kit after modulating Akt kinase with wortmannin or LY294002 (20 -23). Rac1/CDC42 phosphorylation and its GTPase activity changes were also measured in the same samples. As shown in Fig. 5B, JNK/SAPK was activated by wortmannin and LY294002 in a dose-dependent manner. These results suggest that JNK/SAPK activity is negatively correlated with Akt kinase activity and Rac1 phosphorylation. Therefore, the inhibition of Akt kinase activity and Rac1 phosphorylation by wortmannin and LY294002 seems to enhance Rac1/CDC42-GTP binding, which in turn up-regulates JNK/SAPK activity. Together, our data indicate that Akt kinase phosphorylates serine 71 of Rac1 and inhibits Rac1-GTP binding, resulting in the activation of JNK/SAPK.
DISCUSSION
The small GTP-binding protein Rac1/CDC42 plays a pivotal role in the regulation of diverse physiological events, including reorganization of the actin cytoskeleton, cell cycle progression, and transformation (1)(2)(3)(4)(5)(6)(7)(8). However, the relationship between the PI3K and Rac1 signal pathways seems to be ambiguous. Several reports have revealed that the GTP-binding protein Rac1 mediates some of the biological effects of PI3K and have led to the suggestion that Rac1 may be a common mediator of a variety of responses mediated by PI3K (13)(14)(15)(16). In contrast, most reports revealed that Rac1 and RhoA are downstream targets of PI3K (2,3,(17)(18)(19). Thus, the ambiguous relationship between the PI3K and Rac1 signal pathways still remains to be resolved.
To gain more insight into the relationship between these two signal pathways, we set out to determine whether Akt kinase phosphorylates Rho family GTPases (including RhoA, RhoB, RhoC, RhoG, and Rac1/CDC42) directly, instead of acting through other kinases. Upon inspecting the amino acid sequences of Rac1/CDC42 family proteins, we identified the specific sequence ( 64 ydRIRplSYp 73 ) that is possibly phosphorylated by Akt kinase in all known Rac1/CDC42 family proteins (Table I) (26 -30). Thus, we performed the Akt kinase assay with recombinant Rac1 protein and the Rac1 peptide ( 64 ydRIRplSYp 73 ) and demonstrated that these are phosphorylated by Akt kinase (Fig. 1). To characterize the major effect of phosphorylation of Rac1 by Akt kinase, we demonstrated that Rac1-GTP binding was inhibited by Akt kinase phosphorylation (Fig. 3) without a change in GTPase activity (Fig. 2). Moreover, we confirmed these observations with the Rac1 S71A mutant, which is not phosphorylated by Akt kinase. The mutant's GTPase and GTP-binding activities were abolished regardless of Akt kinase phosphorylation (Figs. 2 and 3). Therefore, our data demonstrate that Akt kinase mediates/modulates Rac1-GTP binding through phosphorylation of Rac1 serine 71. We assume that the inhibition of Rac1-GTP binding by Akt kinase phosphorylation is partially due to a reduction of its affinity for GTP, resulting from the electrostatic charge increase in Rac1 after Akt kinase phosphorylation. However, it still remains unclear why Akt kinase phosphorylation inhibited Rac1-GTP binding (Fig. 3) without a change in GTPase activity (Fig. 2).
Because active Rac1 (the GTP-bound form) controls its downstream JNK/SAPK kinase activity, we performed the JNK/SAPK kinase activity assay to determine whether the change in Akt kinase activity induced by wortmannin or LY294002 also modulates JNK/SAPK kinase activity. Consistent with other observations (44 -47), we observed that wortmannin or LY294002 activates JNK/SAPK kinase activity (Fig. 5B). Thus, our results demonstrate that Akt kinase modulates Rac1-GTP binding through phosphorylation and, in turn, controls JNK/SAPK kinase activity, indicating that these two signal pathways cross-talk through phosphorylation. Moreover, our data support the idea that Akt kinase is upstream of Rac1. However, we cannot rule out the other possibility that Akt kinase is downstream of Rac1 and that the phosphorylation by Akt kinase is one of the feedback regulation mechanisms.
Akt kinase is involved in signaling for a multitude of important cellular events. The activation or inactivation of Akt kinase in the cell is one of the critical regulatory points for delivering either a survival or an apoptotic signal. Thus, the upstream signal transduction pathway through which Akt kinase is regulated in the cell is an area of intense research interest (20 -25, 36 -38). To understand how Akt kinase function in the PI3K pathway contributes to cell proliferation/survival, the identification of the substrate proteins that are phosphorylated by Akt kinase, and the characterization of how Akt kinase phosphorylation modulates their function (either activation or inhibition), also seem to be important.
Until now, several important regulatory proteins, including BAD, 6-phosphofructo-2-kinase, glycogen synthase kinase-3, IRS-1, caspase-9, and human telomerase reverse transcriptase, have been characterized as substrates of Akt kinase (Table I). The Akt kinase substrate consensus sequence was proposed from these protein sequences. The serine/threonine residue to be phosphorylated in these proteins is within the consensus amino acid stretches in which the arginine residues at positions −5 and −3 are conserved and in which position +2 relative to the serine/threonine residue is a hydrophobic amino acid (xxRxRxx(S/T)xx). With the Akt kinase substrate consensus sequence (xxRxRxx(S/T)xx), we identified possible Akt kinase phosphorylation sequences in Dbl ( 619 khRiRedSYi 628 ), Ras-guanine nucleotide release factor ( 738 pvRaRklSLt 747 ), p115 Rho-GEF ( 772 kpRpRpsSTr 781 ), limulus clotting factor C, and CDC24 ( 121 tiReRpsSAi 130 and 181 nmRnRtlSVe 190 ) (34, 35, 48 -52). The function of these regulatory factors was reported to control the GTP/GDP exchange of Rac1 GTPase. The Dbl-like GEF Lbc oncoprotein specifically activates the small GTP-binding protein Rac1 in mammalian fibroblasts to induce transformation and actin stress fiber formation (51). Another Dbl-related molecule, CDC24, stimulates guanine nucleotide exchange of the GTPase CDC42 family to elicit effects on both gene induction and actin-based cytoskeleton change in yeast (Saccharomyces cerevisiae) (52). p115 Rho-GEF, the coupling factor between G protein and Rho GTPase, also stimulates guanine nucleotide exchange of the GTPase family (34,35,51). Thus, the proteins that contain Akt kinase phosphorylation sites are supposed to regulate/modulate Rac1 family protein function. The finding that Akt kinase phosphorylation sequences are present in these proteins indicates that Akt kinase may also modulate the Rac1 signal pathway indirectly through the phosphorylation of Rac1 GTPase regulatory proteins, together with direct Rac1 phosphorylation (Figs. 1-3). However, it remains to be demonstrated that these Rac1 GTPase regulatory proteins are phosphorylated by Akt kinase and that the phosphorylation of a Rac1 regulatory protein by Akt kinase also affects Rac1 GTPase or GTP-binding activity.
Fig. 5 legend: the fluorescein-conjugated IRS-1 (see Table I) and Rac1 ( 64 ydRIRplSYp 73 ) peptides were used to monitor Akt kinase activity and Rac1 phosphorylation, respectively. Akt kinase activity (A, upper panel) and Rac1 peptide phosphorylation (lower panel) were down-regulated, whereas JNK/SAPK activity was up-regulated (B) by treatment with wortmannin or LY294002 in a dose-dependent manner. The negative control was a heat-treated (65°C, 10 min) SK-MEL28 cell lysate.
Recently, the Akt kinase family proteins (Akt1, Akt2, and Akt3) have been characterized (39-41). Because of their amino acid sequence homology, the specific functions of each Akt kinase seem to be redundant. It may be that Akt2 and Akt3 are also kinases responsible for Rac1 phosphorylation. However, it remains to be determined whether Akt2 and Akt3 have the same substrate specificity and which Akt kinase has a preference for a particular kind of Rho family protein.
Since the Akt kinase substrate specificity (xxRxRxx(S/T)xx) is conserved among Akt kinase substrates, inspection of these consensus amino acid sequences may help to determine whether other proteins are possible Akt kinase substrates (Table I). Upon inspection for the Akt kinase consensus sequence, we noticed two sites (159-EPRSRHLSVS-168 and 330-DPRGRLRSAD-339) in human MEKK3, which mediates the p38/RK signal transduction pathway (53), and one site (73-IERLRTHSIE-82) in human JNK-activating kinase (11). Even though the relationship between Akt kinase and MEKK3 or JNK-activating kinase is presently unknown, Akt kinase phosphorylation sites in these proteins may contribute to the signal cross-talk between two different signal pathways to protect the cell from apoptosis or to promote cell proliferation. The presence of Akt kinase phosphorylation sites in both human JNK-activating kinase and Rac1 may also provide a clue to explain why the drugs modulating PI3K (wortmannin and LY294002) also affect JNK/SAPK activity (Fig. 5B) (44-47). We are investigating whether the phosphatidylinositol 3-kinase/Akt pathway antagonizes/modulates the p38 mitogen-activated protein kinase or JNK-activating kinase pathway through Akt kinase phosphorylation.
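Such consensus-sequence inspection is straightforward to automate. The short sketch below is our own illustration (the function name and the choice to report, rather than filter on, the +2 residue are assumptions, not part of the original analysis); it scans a protein sequence for the core R-x-R-x-x-(S/T) pattern and returns each candidate phospho-acceptor position with its xxRxRxx(S/T)xx window:

```python
import re

# Core of the Akt consensus described above: Arg at -5 and -3 relative to
# the phospho-acceptor Ser/Thr.  Overlapping matches are found via lookahead.
CORE = re.compile(r"(?=(R.R..[ST]))")

def scan_akt_sites(seq):
    """Return (1-based S/T position, xxRxRxx(S/T)xx window) for each hit."""
    seq = seq.upper()
    hits = []
    for m in CORE.finditer(seq):
        st = m.start() + 5                   # index of the S/T residue
        window = seq[max(0, st - 7):st + 3]  # the 10-residue consensus window
        hits.append((st + 1, window))
    return hits

# Example: the Rac1 fragment containing the serine-71 site (64-ydRIRplSYp-73);
# position 8 of the fragment corresponds to Ser-71 of full-length Rac1.
print(scan_akt_sites("YDRIRPLSYP"))  # -> [(8, 'YDRIRPLSYP')]
```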
In summary, by identifying Akt kinase substrate proteins and characterizing the functional role of Akt kinase phosphorylation, we appear to have uncovered significant clues to understanding how Akt kinase functions to protect cells from apoptosis or to promote cell proliferation. The phosphorylation site of Akt kinase substrates is highly conserved as a consensus sequence (xxRxRxx(S/T)xx); thus, the inspection of protein sequences for this consensus may help to identify Akt kinase substrate proteins. With this information, we have demonstrated that Akt kinase inhibits Rac1-GTP binding, which plays an important role in controlling cell shape and morphology, through phosphorylation of serine 71 of Rac1, identifying Rac1 as one of the Akt kinase target proteins. | 2018-04-03T00:36:20.978Z | 2000-01-07T00:00:00.000 | {
"year": 2000,
"sha1": "e97b8bc1c808f4c2570c3ebf45f61a74ac7d0eac",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/275/1/423.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "e3e5b8423bcda1045507894d7072b7dbb8b2dfa7",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
254105348 | pes2o/s2orc | v3-fos-license | Influences of dark energy and dark matter on gravitational time advancement
The effect of dark matter/energy on the gravitational time advancement (negative effective time delay) has been investigated considering a few dark energy/matter models including cosmological constant. It is found that dark energy gives only a (positive) gravitational time delay, irrespective of the position of the observer, whereas a pure Schwarzschild geometry leads to a gravitational time advancement when the observer is situated at a relatively stronger gravitational field point in the light trajectory. Consequently, there will be no time advancement effect at all at radial distances where the gravitational field due to dark energy is stronger than the gravitational field of the Schwarzschild geometry.
Introduction
The discovery of the acceleration of the universe's expansion [1][2][3][4][5] has led to the inclusion of a new component in the energy-momentum tensor of the universe having a negative pressure, the so-called dark energy component. On the other hand, data from rotation curve surveys [6] and a few other observations [7,8] require a dominating component of matter in galaxies that is non-luminous, or dark. Several other observations, including cosmic microwave background (CMB) measurements [9][10][11][12], baryon acoustic oscillations (BAO) [13][14][15], and lensing in clusters [16,17], support the existence of dark energy as well as the presence of a dark matter halo surrounding the Galactic disc. Consequently, on large distance scales, astrophysical and cosmological phenomena are governed mainly by dark matter and dark energy.
The simplest candidate for dark energy is the cosmological constant (Λ): a constant energy density with equation-of-state parameter w = −1. The ΛCDM model, where CDM refers to cold dark matter, is in accordance with all the existing cosmological observations [18,19], such as the cosmic microwave background (CMB) anisotropies, the large-scale structure, the scale of the baryonic acoustic oscillation in the matter power spectrum, and the luminosity distance of the type Ia supernovae; but it has a big theoretical problem: the size of Λ (∼10^−52 m^−2) is many orders of magnitude below the expected vacuum energy density in the standard model of particle physics [20]. Hence many other theoretical explanations for the dark energy have been proposed in the literature, in which the parameter w evolves with time or is different from −1, such as the quintessence [21][22][23], k-essence [24][25][26][27], phantom field [28,29], and Chaplygin gas [30,31] models. There are also proposals for a modification of general relativity, which include scalar-tensor theories [32], f(R) gravity models [33], conformal gravity models [34,35], and massive gravity theories [36] including Dvali-Gabadadze-Porrati (DGP) braneworld gravity [37,38] models, which lead to late-time accelerated expansion without invoking any dark energy.
Like dark energy, there are also several candidates for dark matter [39] such as WIMPs, axions, sterile neutrinos etc. There are proposals for the modifications at the fundamental theoretical level as well, which include MOND [40][41][42][43], that suggest modifications in Newtonian dynamics. The evidence of the presence of non-baryonic dark matter from the CMB data, however, questions the MOND-like schemes. The conformal gravitational theory [34,35], which is based on Weyl symmetry, also can explain flat rotation curves of galaxies without the need of dark matter.
Dark energy/matter is likely to affect gravitational phenomena on all distance scales, including local scales. Several investigations have so far been made to estimate the influence of dark energy (mainly through the cosmological constant Λ) on different local gravitational phenomena, which include the three classical observables: the perihelion shift of planets [44,45], gravitational bending of light [45][46][47][48][49], and gravitational time delay (or Shapiro time delay) [45,50,51]. Due to the tiny value of Λ, the influence of dark energy has been found to be very small, not detectable by the ongoing experiments. Among the local gravitational phenomena, the effect of Λ is found to be largest in the case of the perihelion precession of planets, and the observations of the perihelion precession of Mercury put an upper bound of Λ ≤ 10^−42 m^−2 [52]. On the other hand, analyses of the perihelion precession of Mercury, Earth, and Mars also lead to an upper bound of 3 × 10^−19 g/cm^3 for the dark matter density (ρ_dm) [53], whereas the rotation curve data imply that ρ_dm in the Milky Way at the location of the solar system is ρ_dm = 0.5 × 10^−24 g/cm^3 [54].
In this work we examine the influence of dark energy and dark matter on gravitational time advancement. The gravitational time advancement effect takes place when the observer is situated in a stronger gravitational field than the field encountered by the photon while traversing a certain path [55]. We found that dark energy and dark matter do affect the gravitational time advancement and, though the magnitude of the effect is small, it induces an interesting observational consequence, at least in principle.
The organization of the paper is as follows. In the next section we discuss briefly the gravitational time advancement effect. The influence of dark energy and dark matter on gravitational time advancement are evaluated in Sect. 3. The results are discussed and finally we conclude in Sect. 4.
Gravitational time advancement
The gravitational time delay is one of the classical solar system tests of general relativity. The general perception regarding the gravitational time delay is that, due to the influence of a gravitating object, the average global speed of light is reduced from its special-relativistic value c_0, and hence the signal always suffers an additional time delay. But depending upon the position of the observer, the delay can as well be negative, which has been called a gravitational time advancement [55]. To exemplify the effect, let us consider light propagating in a gravitational field between two points A and B. Assuming the standard Schwarzschild geometry, i.e.

ds^2 = (1 − 2μ/r) c_0^2 dt^2 − (1 − 2μ/r)^(−1) dr^2 − r^2 (dθ^2 + sin^2 θ dφ^2),   (1)
the total coordinate time required for the round-trip journey between the points A and B (or from B to A and back), to first order in μ = GM/c_0^2, is given by [53]

Δt = (2/c_0)[√(r_A^2 − r_o^2) + √(r_B^2 − r_o^2)] + (4μ/c_0) ln[(r_A + √(r_A^2 − r_o^2))(r_B + √(r_B^2 − r_o^2))/r_o^2] + (2μ/c_0)[√((r_A − r_o)/(r_A + r_o)) + √((r_B − r_o)/(r_B + r_o))],   (2)

where r_A and r_B are the radial coordinates of the points A and B, respectively, and r_o is the closest distance to the gravitating object in the photon path. Suppose the point A is located in a much weaker gravitational field due to the mass M than the point B, i.e., r_A >> r_B. The proper time between transmission and reception of the signal measured by the observer at the point A is then

Δτ_A = √(1 − 2μ/r_A) Δt ≈ (1 − μ/r_A) Δt.   (3)

In this expression the first term on the right-hand side, after expansion, is the usual special-relativistic time of travel, and the remaining terms are general-relativistic corrections. As a result, the observed time will be longer than the time between transmission and reception in the absence of a gravitating object, which is the well-known gravitational time delay. Now let us consider the case in which the observer is at the point B instead of the point A. In that case the proper time between transmission and reception of the signal measured by the observer will be [55]

Δτ_B = √(1 − 2μ/r_B) Δt ≈ (1 − μ/r_B) Δt.   (4)

Due to the last term on the right-hand side of this expression, which dominates among the general-relativistic correction terms, the time between transmission and reception will fall below the usual special-relativistic time of travel when the distance between A and B exceeds a certain value. This effect is known as the gravitational time advancement (negative time delay), which arises because clocks run differently at different positions in the gravitational field.
Influence of dark energy/matter on gravitational time advancement
In the presence of dark energy the exterior space-time of a spherically symmetric mass distribution is no longer described by the Schwarzschild geometry, but by some modification of the Schwarzschild metric. For instance if dark energy is the cosmological constant, the exterior static spacetime will be the Schwarzschild-de Sitter (SDS) space-time.
Here we shall consider a general static spherically symmetric metric of the form given by Eqs. (5)-(7), where a and Λ are constants. Different choices of n and a lead to different models of dark energy.
Case 1: With n = 1/2, a = 2, and Λ = ±√(GM/r_c^2), the model represents the gravitational field of a spherically symmetric matter distribution on the background of an accelerating universe in Dvali-Gabadadze-Porrati (DGP) braneworld gravity, provided only the leading terms are considered [56]. Here r_c is the crossover scale beyond which gravity becomes five-dimensional.
Case 2: For the choice n = 1, a = 1, and negative Λ, the model well describes the gravitational potential due to a central matter distribution plus dark matter [34,35,57].

Case 3: For a suitable choice of the parameters, with Λ determined by the graviton mass m_g, the model corresponds to the non-perturbative solution of a massive gravity theory (an alternative description of the accelerating expansion of the universe) [58], where m_g is the mass of the graviton.
Case 4: When a = 1, n = 2, and m = μ, the above metric describes the Schwarzschild-de Sitter (SDS) or Kottler space-time, which is the exterior space-time due to a static spherically symmetric mass distribution in the presence of the cosmological constant [59].
General trajectory
Now let us suppose that a light beam is moving between two points A and B in the gravitational field of Eqs. (5)-(7). The expression for the coordinate time required for light rays to traverse the distance from r_o to r, where r_o is the closest distance from the gravitating object over the trajectory, can be obtained from the geodesic equations (Eqs. (8) and (9)). For a general power index n of Λ in Eq. (5), the integral can only be expressed in terms of hypergeometric functions and is thereby not very useful. However, for n = 1 and n = 2, the integral can be written in a handy form, particularly when higher-order terms in M and Λ are ignored. The extra coordinate time delay (δt_1) induced by the dark sector terms in Eq. (8) can then be written in closed form for n = 1 with negative Λ, for n = 2, and for general n (n ≠ 1) when r_A >> r_o and r_B >> r_o. From these, the proper time between the transmission and the reception of the signal measured by the observer at the point B follows for n = 1. Usually, for observing a time advancement effect, r_o = r_B; further, for describing a flat rotation curve, a has been chosen as 1, under which conditions the expression simplifies, and it simplifies further when r_A >> r_B. Similar expressions follow for n = 2 and for general n. Unless the Λ effect dominates over the pure Schwarzschild effect, the net time delay will be negative in all the above cases, resulting in time advancement.
Small distance travel
Let us suppose a light beam is moving from a point B (R, θ, φ) on the Earth's surface, where the radius of the Earth is denoted by R_E, to a nearby point with coordinates C (R + ΔR, θ, φ), and is reflected back to the transmitter position B. The light signal travels a null curve of space-time, satisfying ds^2 = 0. The proper distance between the points B and C, the coordinate time interval for transmitting a light signal from B to C and back, and the proper time between transmission and reception measured by the observer at B then follow from the metric. In deriving these expressions, higher-order terms in Λ and in m (m^2, m^3, m^2 ΔR^2/R^2, and terms of higher order in m) have been neglected.
Discussion and conclusion
Dark energy has a significantly different kind of influence on gravitational time advancement than pure Schwarzschild geometry. The time advancement effect is entirely due to the pure Schwarzschild geometry, while dark energy leads only to a time delay effect, which means the gravitational time advancement effect will be reduced in the presence of dark energy. When Λ r_A^2 > 2μ/r_B, there is no time advancement at all. So in principle the time advancement effect should be able to identify dark matter clearly.
In contrast, the conformal theory description of a flat rotation curve suggests a large time advancement effect. The fitting of galactic rotation curves suggests Λ/3 = −(5.42 × 10^−42 M/M_⊙ + 3.06 × 10^−30) cm^−1 [60]. Therefore, in our galaxy, the dark matter potential should start to dominate over the luminous matter contribution (the pure Schwarzschild part) at distances larger than about 30 kpc. Hence at distances beyond ∼30 kpc the time advancement effect will be quite large. The experimental realization of a measurement of the gravitational time advancement effect at such distances is a challenging issue.
Here it is worthwhile to mention that the gravitational time advancement effect has not been experimentally verified yet, but it should not be very difficult to test. This is because the magnitude of the time advancement effect is reasonably large. In fact, gravitational time advancement is a much stronger effect than gravitational time delay when large distances are involved. However, time delay has the advantage of probing stronger gravity. In the solar system tests of gravitation, time delay measurements mainly rely on the passage of radiation grazing the Sun, and thereby the solar gravitational potential at the surface of the Sun comes into play. In such a situation the time delay is about 240 µs, whereas the total special-relativistic travel time between the Earth and the Sun is about 1000 s, which means the gravitational time delay is about a 2 × 10^−7 part of the total travel time. For testing gravitational time advancement from the Earth or its surroundings, on the other hand, the solar gravitational potential at the position of the Earth is applicable, and when light propagates from the Earth to, say, Pluto and back, the time advancement will be about 1 ms over a total propagation time of 50,000 s; i.e., here the time advancement is about a 0.2 × 10^−7 part of the total travel time, which is just one order smaller than the time delay caused by the Sun and hence is detectable. Note that the above estimates need to be corrected by taking into account the variations in round-trip travel time due to the orbital motion of the target relative to the Earth, using radar-ranging or similar data. Since gravity cannot be switched off, one does not have access to a special-relativistic propagation of a photon against which the time delay could be measured. Therefore, the variation of the time delay is measured as a function of distance to verify the radial profile of Eq. (3). A similar check can be made for the time advancement.
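As a rough, order-of-magnitude check of the two estimates above, the following sketch evaluates the standard first-order round-trip Shapiro delay for a Sun-grazing ray between two bodies at about 1 AU, and the advancement accumulated by an Earth clock (running slow by a fraction ≈ μ/r_E relative to coordinate time) over an Earth–Pluto round trip. The exact prefactors depend on which first-order terms are retained, so agreement to within a factor of ~2 is all that is claimed:

```python
import math

G, c = 6.674e-11, 2.998e8          # SI units
M_sun, AU, R_sun = 1.989e30, 1.496e11, 6.96e8
mu = G * M_sun / c**2              # mass parameter of the text, ~1.48 km

# 1) Round-trip Shapiro delay for a ray grazing the Sun (r_o = R_sun)
#    between two bodies at roughly 1 AU on opposite sides of the Sun.
r_A = r_B = AU
delay = (4 * G * M_sun / c**3) * (math.log(4 * r_A * r_B / R_sun**2) + 1)
print(f"grazing-ray round-trip delay ~ {delay * 1e6:.0f} microseconds")  # ~260 us

# 2) Time advancement for radar ranging from Earth to Pluto and back:
#    the Earth clock lags coordinate time by the fraction mu/r_E,
#    accumulated over the whole round trip (Pluto taken near aphelion,
#    ~49 AU, so the round trip is close to the 50,000 s quoted above).
r_pluto = 49 * AU
T_round = 2 * (r_pluto - AU) / c
advance = (mu / AU) * T_round
print(f"round trip ~ {T_round:.0f} s, advancement ~ {advance * 1e3:.2f} ms")
```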
Future missions, such as the Beyond Einstein Advanced Coherent Optical Network (BEACON) [61] or the GRACE Follow-On (GRACE-FO) mission [62], will probe the gravitational field of the Earth with unprecedented accuracy. The BEACON mission will employ four small spacecraft equipped with laser transceivers, placed in a circular Earth orbit at a radius of 80,000 km. All six distances between the spacecraft will be measured to high accuracy (∼0.1 nm), and one diagonal laser trajectory will pass very close to the Earth and thereby pick up the gravitational time delay effect. If the distance between the spacecraft and the Earth is also measured by an Earth-bound observer and compared with the distances measured by the spacecraft, the time advancement effect may be revealed from the measurements. The GRACE-FO mission, which is scheduled for launch in 2017, will be equipped with a laser-ranging interferometer and is expected to provide range measurements with an accuracy of 1 nm. At such a level of accuracy, general-relativistic effects may become significant [63]. It is, therefore, important to examine whether the time advancement can have any significant effect on the observables of GRACE-FO.
To probe dark matter through its influence on the gravitational time advancement properly, one is required to observe a time advancement (delay) effect at distances of ∼30 kpc or beyond. For probing dark energy, observations must be made at even larger distances. This is currently not feasible: at present, observations can be made only from the Earth or from its neighborhood via a satellite/space station. So the strategies to be developed for observing the time advancement/delay effect at such distances may have to rely on indirect means. This would be a very challenging task.
For small-distance travel, the time advancement is a second-order effect, unlike long-distance travel, where the time advancement occurs at first order. However, since the time advancement effect is cumulative in nature, if a light beam is allowed to travel, say, from the Earth's surface radially upwards to a nearby point and back a large number of times, the light beam should acquire a time advancement of reasonable magnitude when observed from the Earth's surface, and this should be measurable.
In summary, we investigate the influence of dark matter/energy on gravitational time advancement. We obtain analytical expressions for the time advancement to first order in M and Λ, where Λ is the parameter describing the strength of the dark matter/energy. From our results it is found that dark energy leads to a gravitational time delay only, whereas a pure Schwarzschild metric gives both a time delay and a time advancement (negative effective time delay), depending on the position of the observer.
The present finding suggests that in principle the measurements of gravitational time advancement at large distances can verify the dark matter and a few dark energy models or put an upper limit on the dark matter/energy parameter. | 2022-12-01T15:27:47.710Z | 2015-10-01T00:00:00.000 | {
"year": 2015,
"sha1": "792f6ec89faadbcb1a588c1b850136adecc6bd50",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-015-3719-8.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "792f6ec89faadbcb1a588c1b850136adecc6bd50",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
73418821 | pes2o/s2orc | v3-fos-license | Iodine status of euthyroid adults: A cross‐sectional, multicenter study
Background Iodine, an essential nutrient, is the most important trace element in thyroid hormone synthesis and maintenance of thyroid function. This study investigated the iodine nutrition status in healthy Chinese adults and assessed the relationship between urinary iodine concentration (UIC) and thyroid hormone levels. Methods A cross-sectional, multicenter study was conducted between October 2017 and January 2018, with 1017 adults recruited from five cities in China. All subjects underwent thyroid ultrasonography, and only those with normal results were included in the study. UICs were measured by inductively coupled plasma mass spectroscopy and adjusted using urine creatinine levels. Thyroid hormone levels were measured using an automated immunoassay analyzer. Results The median UIC and adjusted UIC were 134.0 µg/L and 114.2 µg/g, respectively. UIC was not significantly different between males and females (P = 0.737). However, the adjusted UIC was significantly different between sexes (P < 0.001). The median UIC was higher than 100 µg/L. According to the World Health Organization criterion (100 µg/L), the total prevalence of iodine deficiency was 33.1% (n = 271). The prevalence rates of iodine deficiency in our study were 33.2% and 32.9% in males and females, respectively, with no differences between sexes or among cities (P > 0.05). Serum thyroid-stimulating hormone (TSH) levels increased as UIC increased. The Kruskal-Wallis test showed no significant differences in free triiodothyronine, free thyroxine, or TSH across different levels of UIC (all P > 0.05). Conclusions Chinese adults with normal thyroid structure have relatively sufficient iodine levels.
3. Took thyroid medication in the past 15 days; 4. Was a hospital inpatient or seriously ill during the previous 4 weeks; 5. Surgery in the past 6 months; 6. Female participants, pregnant, breastfeeding, or within 1 year after childbirth; 7. On a high-iodine diet or consumed seafood including kelp, sea fish, crab, shrimp, and shellfish in the past 3 days.
A total of 1017 apparently healthy participants aged 18-82 years were enrolled in this study. This study was approved by the Ethics Committee of the Institute of Peking Union Medical College Hospital. All participants studied were informed in writing of the intended use of their samples, and each participant provided written consent.
| Data collection and physical examination
Data including demographic characteristics and medical history were collected from a representative sample of the study population via a standard questionnaire. Body weight was measured on a calibrated beam scale, and height was measured in triplicate. Body mass index (BMI) was calculated as body weight divided by the square of the height (kg/m²). Blood pressure (BP) was measured three times after the participant had rested quietly for at least 10 minutes, and the average of the three measurements was used. Current smoking status was classified as a self-reported response of "yes" to the question "Do you smoke now?" We also evaluated the UIC distribution between iodized-salt and non-iodized-salt intake among 693 subjects who self-reported a response to the question "Do you consume iodized salt during breakfast, lunch, or dinner?" All participants underwent a thyroid ultrasonography examination performed by trained technicians.
| Laboratory measurement
All subjects were advised to have a bland diet before blood testing. Following overnight fasting, blood was drawn from the antecubital vein of the arm. Spot urine samples were also collected. Blood specimens were centrifuged at 3000 rpm for 10 minutes. All samples were sent to the laboratory and stored at −80°C until tested.
Calibration and quality controls (Lyphochek ® Control) were performed before the analyses to monitor instrument precision.
Measurements were performed according to the standard operating procedure. Instrument calibration and preventive maintenance were performed annually. We also participated in External Quality Assessments by the National Center for Clinical Laboratories and the College of American Pathologists to guarantee the accuracy and reliability of results. UIC was measured by inductively coupled plasma mass spectroscopy. Urine creatinine was measured using a Beckman AU 2700 Automatic Biochemical Analyzer. Serum lipoproteins, including total cholesterol (TC) and triglycerides (TG), and fasting blood glucose (FBG) were measured. Thyroid hormones including free triiodothyronine (FT3), FT4, and TSH were measured using a Beckman DXI 800 chemiluminescent immunoassay. The reference ranges for FT3, FT4, and TSH were 2.5-3.9 pg/mL, 0.61-1.12 ng/dL, and 0.38-5.33 mIU/L, respectively. The precision of the FT3, FT4, and TSH measurements was assessed according to the Clinical and Laboratory Standards Institute EP15-A2 protocol. We previously used this method for measuring UIC. The results revealed that the inter-run coefficients of variation (CVs) and total CVs for urine iodine were 3.5%-6.7% and 3.9%-6.7%, respectively. The intra- and interassay coefficients of variation for FT3, FT4, and TSH were 5.4%-8% and 5.8%-7.6%, 4.1%-7.6% and 1.7%-6.9%, and 2.4%-3.9% and 2.2%-3.7%, respectively, all of which met clinical requirements.
All laboratories participating in the survey followed the same internal quality control program that was standardized by the Peking Union Medical College Hospital.
| Iodine status
The iodine status of subjects was assessed by median UIC based on the World Health Organization (WHO) recommendations. According to the WHO epidemiologic criteria for iodine nutrition, a population's median UIC of <100, 100-199, 200-299, and ≥300 µg/L represents insufficient, adequate, above-requirements, and excessive iodine intake, respectively. In this study, the UIC of enrolled subjects was classified as <100, 100-299, or ≥300 µg/L. Furthermore, the prevalence of iodine deficiency was defined as the proportion of subjects with a UIC <100 µg/L. The UIC values were adjusted using urine creatinine.9 Urine iodine can be expressed relative to creatinine excretion (µg iodine/g creatinine), also called the adjusted urine iodine concentration (Equation 1).9 For data analysis, normally distributed data were presented as mean and standard deviation (SD), while skewed data were expressed as median (percentiles). Categorical variables were presented as number (percentile). Group differences of normally distributed values were compared using the t test or one-way ANOVA, and skewed data were compared using the Mann-Whitney U or Kruskal-Wallis test. Group differences of categorical variables were compared using the chi-square test. P < 0.05 was defined as statistically significant.
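For reference, the creatinine adjustment referred to as Equation 1 is the standard ratio of the spot urinary iodine concentration to the urinary creatinine concentration, i.e., adjusted UIC (µg/g) = UIC (µg/L) / urinary creatinine (g/L). A minimal sketch with hypothetical example values is shown below (the function name and the numbers are illustrative only):

```python
def adjusted_uic(uic_ug_per_l, creatinine_g_per_l):
    """Creatinine-adjusted urinary iodine, in ug iodine per g creatinine."""
    return uic_ug_per_l / creatinine_g_per_l

# Hypothetical spot sample: UIC of 134.0 ug/L with urinary creatinine 1.2 g/L.
print(adjusted_uic(134.0, 1.2))  # ~111.7 ug/g
```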
| Characteristics of participants
A total of 1017 adults from five cities were recruited for this study. Subsequently, 198 subjects lacking complete information or urine iodine measurements were excluded. Ultimately, 819 subjects with complete information and UIC and urine creatinine measurements who met the inclusion criteria were used in the final analysis. Baseline characteristics of the study subjects are shown in Table 1. Among the 819 subjects, the average age, BMI, systolic blood pressure (SBP), and diastolic BP (DBP) were 41.3 ± 13.2 years, 23.3 ± 3.6 kg/m², 122 mm Hg, and 76 mm Hg, respectively. One-way ANOVA showed significant differences in age, BMI, SBP, DBP, FBG, TG, TC, FT3, FT4, and TSH among the different cities (P < 0.001).
| Iodine status of the population
The median UIC and adjusted UIC were 134.0 µg/L and 114.2 µg/g, respectively.
| UIC and thyroid function
The median concentrations of FT3, FT4, and TSH in the subjects were 3.36 pg/mL, 0.91 ng/dL, and 1.95 mIU/L, respectively. Table 3 shows the changes in serum FT3, FT4, and TSH concentrations according to UIC level.
| Relation of UIC and other indicators
Among the enrolled subjects, 712 answered the question "Are you smoking now?", and the median UIC of these subjects was 137.8 µg/L. The correlations between UIC and other indicators, assessed using Spearman correlation analysis, are shown in Table 4. There was a statistically significant negative correlation between UIC and age, with UIC decreasing as age increased. The relationship between UIC and adjusted UIC showed a positive correlation (P < 0.05).
| DISCUSSION
This cross-sectional study includes the latest survey to date examining the iodine status of Chinese adults and the association between UIC and thyroid hormone levels.

[Table 2: Iodine status of the study population in China, classified by the World Health Organization criteria and age group (years); columns: <100 µg/L N (%), 100-299 µg/L N (%), ≥300 µg/L N (%).]

However, most studies used only UIC to estimate the iodine status of a population. 3,14 This study demonstrated that the median UIC varied with age but not with geographic location. A recent study with subjects aged 20 years and older also reported that the median UIC decreased with age, supporting our data. 12 We also found that the median adjusted UIC varied by age and geographic location, although we did not find consistent patterns between UIC and adjusted UIC.
The iodine nutrition status of the Chinese population has been suggested to be sufficient in several studies. 3 The iodine nutritional status in the adult population of Shandong province was reported to have a median UIC of 248.5 µg/L. 15 In that study, the population had iodine levels above requirements.
The UIC in our study was lower than in that previous study, perhaps because UIC is affected by many factors, such as the place of residence (inland or seashore), eating habits, and economic development. However, the overall UIC showed the iodine levels to be sufficient in China. In this study, the subgroups with a higher UIC were associated with a higher median serum TSH, but without statistical significance, and a relationship between UIC and FT3 or FT4 levels was not evident. Due to the large interindividual variation in the ability of the thyroid to adapt, thyroid hormones, including FT3, FT4, and TSH, are not considered sensitive indicators of the population iodine status. 10 Evidence suggests that levels of thyroid hormones remain within the normal range in mild iodine deficiency, and fall outside the normal ranges only in cases of severe iodine deficiency. 1 Several studies have reported on the relationship between UIC and thyroid function. 5,14,16 A Korean study reported that serum TSH and FT4 levels changed significantly with UIC. 12 The prevalence of clinical hypothyroidism, subclinical hypothyroidism, and positive thyroid antibodies, assessed with UIC, was significantly higher in individuals with more-than-adequate iodine intake than in individuals with adequate iodine intake. 14 Although no statistical significance was observed between UIC and thyroid hormones in our study, it is important to control iodine nutrition intake.
This study demonstrated that the median UIC in smokers was higher than in nonsmokers, but without statistical significance. Kang et al reported that active smokers had significantly lower iodine levels than passive smokers and nonsmokers. 17 Regardless of smoking status, both groups were associated with decreasing serum TSH levels, which might be related to lower urinary iodine levels. 17 It remains unclear whether smoking decreases urinary iodine levels.
A strength of this study is that it is the latest study to report an association between UIC and thyroid hormones in a Chinese population whose thyroid ultrasonography results were normal. Additionally, we used urine creatinine-adjusted UIC to evaluate the iodine status, ensuring an appropriate evaluation of iodine nutritional status. This study still has several limitations. An important limitation is the lack of information on iodine intake via medications or other sources. Lastly, we used UIC to assess the iodine status of a population. Spot urine sample UIC has been well documented as a suitable indicator for the assessment of a population's iodine status; therefore, it is currently the most suitable indicator for a population-based study.
In conclusion, the iodine status of apparently healthy Chinese adults was found to be sufficient. However, salt iodization is still necessary to prevent iodine deficiency.
ACKNOWLEDGMENTS
The authors express their sincere gratitude to all participants and workers who contributed to this study. Funding support for this study was provided by the National Natural Science Foundation of China (81702060).
CONFLICT OF INTEREST

The authors have no conflicts of interest.
AUTHOR CONTRIBUTIONS
DCW, SLY, HLL, SWX, QC, and LQ performed the experiments.
DCW, SLY, YCY, and XQC analyzed the data. DCW, SLY, and HLL wrote the article. DCW, HLL, LQ, and YCY revised the article. DW and SY contributed equally to this article. All the authors have accepted responsibility for the entire contents of this article and approved its submission. | 2019-03-08T14:13:00.869Z | 2019-02-08T00:00:00.000 | {
"year": 2019,
"sha1": "eeff62ce2e7b049fdb40be2812dc8fa7ed49371e",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcla.22837",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba80933cf689d5f69452ab93039bb3a11cfb502a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237848619 | pes2o/s2orc | v3-fos-license | Improving Diagnostic Coincidence Rate of Graves’ Disease by Ultrasound Examination with Clinical Symptoms
Clinically, diffuse thyroid diseases include hyperthyroidism, subclinical hyperthyroidism, hypothyroidism, subclinical hypothyroidism, and Hashimoto's thyroiditis. Hyperthyroidism not only has the highest incidence rate (about 85% of cases are Graves' disease) but also has a great impact on the entire body; it has therefore drawn attention in ultrasound research. The ultrasound imaging of diffuse thyroid disease mainly shows diffuse changes in thyroid parenchymal echogenicity and increased blood flow. Over the years, research on the diagnosis of hyperthyroidism by ultrasound has made considerable progress. However, some shortcomings still exist in the application of current protocols and criteria in clinical practice.
Some investigators have studied only the sonographic characteristics of hyperthyroidism (such as thyroid volume, blood supply, and blood flow velocity of the thyroid artery) [1][2][3], while others have studied the differential diagnosis between hyperthyroidism and one of the above diffuse diseases [4,5]. Some research has examined the influence of only one factor on the diagnosis of hyperthyroidism, such as the heart rate (HR) or the blood flow velocity of the thyroid artery [6]. Only three studies have investigated the influence of two factors on the diagnosis of hyperthyroidism, i.e., the HR and the peak flow velocity of the superior thyroid artery (STA) [7][8][9]. Most studies had shortcomings in their research methods: (1) the selection of cases was not rigorous, for example, newly diagnosed patients were not distinguished from those who had received treatment; (2) the precise measurement site on the thyroid artery was not specified; sometimes the measurement site was selected randomly, or only the lower segment of the STA close to the gland was selected; (3) the studies were retrospective in design, and no blind control was used.
In fact, thyroid volume, STA peak velocity, and HR are factors that can be obtained directly by ultrasound. Until now, the diagnosis of hyperthyroidism by ultrasound has been limited to ultrasound itself, without input from clinical symptoms. However, given the advantages of ultrasound, imaging doctors can communicate fully with patients during the entire examination, and it is easy to obtain a reliable medical history. Therefore, clinical symptoms could be included as a study factor. The typical clinical symptoms of hyperthyroidism include excessive sweating, heat intolerance, increased bowel movements, tremor, nervousness, agitation, rapid heart rate, weight loss, fatigue, decreased concentration, irregular and scant menstrual flow, and proximal limb muscle atrophy. However, a small number of patients present with apathetic hyperthyroidism, which, contrary to the typical symptoms of general hyperthyroidism, is characterized by emotional indifference, reduced excitability, daze, lethargy, depression, emaciation, fatigue, a haggard face, dry or sweaty skin, and lower eyelid edema, and can also be complicated by anemia, gastric disease, hypertension, hyperlipidemia, high blood viscosity, and immune disorders. The clinical manifestations of hyperthyroidism are very complex, and careful patient consultation is necessary. In this study, we analyzed the combination of thyroid volume, STA peak velocity, blood supply grade, and HR on ultrasound imaging together with clinical symptoms in patients with diffuse echogenic changes of the thyroid gland, using a double-blind comparison against the clinical diagnosis (including biochemical tests) to explore whether the diagnostic coincidence rate of hyperthyroidism could be improved beyond that reported in previous studies.
Patients
A total of 179 patients, aged from 9 to 82 years, were enrolled, with an average age of 37.9±15.2 years, including 47 males and 132 females. Inclusion criteria: (1) untreated diffuse thyroid lesions; (2) patients with no more than two masses in each thyroid lobe, each with a diameter ≤1.0 cm; (3) patients with an interval between laboratory examination and ultrasonic examination of no more than 3 days. The laboratory examination included routine bloods and seven items of thyroid function (TT3: total triiodothyronine; TT4: total thyroid hormone; FT3: free triiodothyronine; FT4: free thyroxine; TSH: hypersensitivity thyroid stimulating hormone; TPO: thyroid peroxidase antibody; TGAb: thyroglobulin antibody). Exclusion criteria: (1) neonatal hyperthyroidism; (2) autonomous hyperfunctional thyroid adenoma; (3) drug-induced hyperthyroidism; (4) secondary hyperthyroidism; (5) allogenic hyperthyroidism; (6) postoperative hyperthyroidism; (7) history of radioiodine treatment for hyperthyroidism; (8) patients in whom the largest diameter of a single thyroid tumor was ≥1.5 cm (because relatively large tumors affect hemodynamics); (9) patients with more than three masses in each thyroid lobe and tumor diameter >1.0 cm; (10) patients convalescing from serious disease or destructive thyroiditis (including subacute thyroiditis and postpartum thyroiditis); (11) untreated primary adrenal dysfunction; (12) patients with malignant tumor lesions in any body system; (13) patients who received cardiovascular drug intervention before the color Doppler ultrasound examination; (14) patients with STA anatomical variations (such as an artery too small or a starting point hard to find).
Ultrasound evaluation
High-resolution ultrasound systems, including Aixplorer (Supersonic Imaging, France) and Aloka a-10 (Japan) machines, with transducer frequencies of 4-15 MHz, were utilized in this study.
Before the ultrasound examination, we instructed every patient to rest for 10 min and relax to avoid inaccurate measurements of HR or blood flow velocity. The thyroid gland was scanned in the transverse, sagittal, and oblique sections by experienced ultrasound practitioners. The size of the thyroid was measured on grayscale imaging. The color Doppler velocity scale was uniformly lowered to 4.0-4.5 cm/s during the grading of thyroid blood supply. Blood flow grading of the thyroid was based on a previous study [10]: Grade 0, no increase in blood supply; Grade I, blood flow increased in a star-spot pattern, with the thyroid background echo in the color window still dominated by a large area of glandular parenchyma; Grade II, blood flow increased in a bulky dot-and-branch pattern, with thyroid glandular parenchyma and blood flow signal occupying roughly equal areas (50:50) of the color window; Grade III, known as the "fire-sea sign", with the thyroid dominantly covered by a diffuse blood flow signal, making it almost impossible to show the glandular parenchyma. STA peak velocity was measured according to our previous study [6]; that is, sampling and measurement were conducted at the brightest part of the color blood flow image of the STA on both sides. An STA peak velocity ≥70 cm/s was considered increased [11]. HR values were obtained from pulsed Doppler ultrasound.
In this study, we evaluated the diagnostic value of the following six features to differentiate hyperthyroidism from non-hyperthyroidism: STA peak velocity (X1), heart rate (X2), thyroid volume (X3), thyroid blood supply (X4), clinical symptoms (X5), and echogenicity (X6). The clinical diagnosis was used as the reference standard. A double-blind design was used, in which neither the sonographer nor the patient was aware of the laboratory results or clinical diagnosis before the procedure.
Statistical Analysis
We refer to the method used in this paper as the study group and to the method described in the literature, which diagnoses hyperthyroidism by combining heart rate and the maximum flow velocity of the STA, as the control group, and we compared the two. Statistical analyses were performed with SPSS 21.0. Logistic regression was used to build a model differentiating hyperthyroidism from non-hyperthyroidism. A P value < 0.05 was considered statistically significant. The statistically significant features were included in this model. The model's performance was evaluated by cross-validation, including sensitivity, specificity, accuracy, positive predictive value, and negative predictive value. Receiver operating characteristic (ROC) curves of the multivariate models were drawn for the study group and the control group to assess the prediction performance of the logistic regression models. The Z test was used to compare the areas under the ROC curves between the two groups.
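A minimal sketch of this analysis pipeline is shown below, assuming the data sit in a pandas DataFrame with the X1-X6 columns defined above and a binary label column (named "hyper" here for illustration only). The Hanley-McNeil standard error is used for the AUC comparison; it treats the two AUCs as independent and therefore only approximates the paired comparison performed in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def cv_auc(df, features, label="hyper", cv=5):
    """Cross-validated AUC of a logistic regression model on given features."""
    model = LogisticRegression(max_iter=1000)
    prob = cross_val_predict(model, df[features], df[label],
                             cv=cv, method="predict_proba")[:, 1]
    return roc_auc_score(df[label], prob)

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Hanley-McNeil standard error of an AUC estimate."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    return var ** 0.5

# Study model: all six features; control model: STA peak velocity (X1) and
# HR (X2) only, mirroring the earlier literature.  With 119 hyperthyroid and
# 60 non-hyperthyroid patients:
# auc_s = cv_auc(df, ["X1", "X2", "X3", "X4", "X5", "X6"])
# auc_c = cv_auc(df, ["X1", "X2"])
# z = (auc_s - auc_c) / np.sqrt(hanley_mcneil_se(auc_s, 119, 60)**2 +
#                               hanley_mcneil_se(auc_c, 119, 60)**2)
```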
Results

According to the patients' STA peak velocity, HR, and corresponding clinical symptoms, the 119 patients with hyperthyroidism were classified as shown in Table 2. Type A was characterized by both high peak velocity and fast HR, Type B by high peak velocity and slow heart rate, Type C by normal peak velocity and fast HR, and Type D by normal peak velocity and normal HR. Ultrasound imaging of all types is shown in Figure 1.
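The grouping can be expressed compactly as below (a sketch only: the 70 cm/s peak-velocity cutoff comes from the Methods, whereas the heart-rate cutoff is an assumed placeholder, since the study's exact HR criterion is not restated here):

```python
def classify_type(sta_peak_cm_s, heart_rate_bpm, pv_cutoff=70, hr_cutoff=100):
    """Assign the Type A-D group from STA peak velocity and heart rate."""
    high_pv = sta_peak_cm_s >= pv_cutoff   # "high" velocity per the Methods cutoff
    fast_hr = heart_rate_bpm >= hr_cutoff  # hr_cutoff is illustrative only
    if high_pv and fast_hr:
        return "A"   # high peak velocity and fast HR
    if high_pv:
        return "B"   # high peak velocity, slow/normal HR
    if fast_hr:
        return "C"   # normal peak velocity, fast HR
    return "D"       # normal peak velocity and normal HR

print(classify_type(95, 110))  # -> "A"
```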
To explore which parameters were closely related to the diagnosis of hyperthyroidism, the analysis considered "hyperthyroidism or not" as the dependent variable (y = 0, 1) and thyroid volume, STA peak velocity, blood supply grade, HR, clinical symptoms, and echogenicity as covariates in logistic regression analysis (Table 3).
In the summary of the logistic regression analysis, the adjusted R-squared of the goodness-of-fit test was 0.919. The sonographic parameters and clinical symptoms are detailed in Table 2. Age, thyroid volume, and thyroid blood supply (Grade I, Grade II) were not significantly different between hyperthyroidism and non-hyperthyroidism (P > 0.05). Among these parameters, STA peak velocity, HR, thyroid blood supply (Grade III), echogenicity, and clinical symptoms differed significantly between hyperthyroidism and non-hyperthyroidism; this was especially true for clinical symptoms, for which the regression coefficient (B) was 6.617, the P (Sig.) value was significant at the 0.001 level, and the odds ratio (OR) was 747.822. Clinical symptoms had the most positive influence on the determination of hyperthyroidism.
After cross-validation, the ROC curve was obtained (Fig. 2). The area under the ROC curve (AUC) in the study group for differentiating hyperthyroidism from non-hyperthyroidism with this model was 0.993; the sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were 89.92%, 96.67%, 92.18%, 98.17%, and 82.80%, respectively. The AUC in the control group was 0.899, with sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of 68.91%, 96.67%, 78.21%, 97.62%, and 61.05%, respectively. Using the Z test to compare the AUCs of the study and control groups, a significant difference in diagnostic coincidence rate was found between the two groups (Z = 3.868, P < 0.001).
Discussion
Hyperthyroidism is a medical condition that results from an excess of thyroid hormone in the blood. Thyroid hormones control most metabolic processes in the body. In hyperthyroidism, these processes are often sped up, causing symptoms such as increased excitability and hypermetabolism of the nervous, circulatory, and digestive systems. However, some middle-aged and elderly patients show apathetic symptoms [12,13], contrary to typical hyperthyroidism, such as lethargy, anorexia, and apathy. By means of magnetic resonance imaging, Zhi et al. [14] pointed out that abnormal spontaneous brain activity occurs mainly in the default mode network (DMN), implying that hyperthyroid patients have a neuropathological substrate for the associated emotional and cognitive dysfunction, although they did not specify what that substrate was.
Due to the variety of clinical manifestations of hyperthyroidism, the consultation should be as detailed as possible; for example, the acceleration of HR caused by the increased excitability of hyperthyroidism should be carefully identified during consultation. Thyroid enlargement, rich blood supply, and high STA velocity can all be measured by ultrasound in patients with hyperthyroidism. These ultrasound features can also be found in subclinical hyperthyroidism, subclinical hypothyroidism, and Hashimoto's thyroiditis, which increases the difficulty of diagnosis and differential diagnosis. In the current literature on ultrasound examination, peak velocity is the most studied factor [6,15,16], followed by thyroid volume and HR. In the majority of studies [11][12][13], thyroid volume, peak velocity, and HR were each considered as a single independent factor in exploring their relationship with hyperthyroidism. These factors were statistically different from the control groups, but overlapping data between groups made it difficult to give an exact diagnosis and resulted in a high false-positive rate. In the literature, only three studies [7][8][9] were found to investigate dual factors, all of which reported the application of the product of STA peak velocity and HR to diagnose hyperthyroidism; however, in clinical practice, this was no better than the use of a single factor. In this study, the patients were divided into four types according to the ultrasonic parameters of peak velocity and heart rate, and compared with clinical symptoms (Table 2). All cases in Type A had high peak velocity and fast heart rate, and 97.2% (70/72) had hyperthyroidism symptoms. Most patients with types B, C, and D had normal thyroid volume, fast or slow heart rate, and high or low peak velocity; however, most of them had hyperthyroidism symptoms, and the symptoms of types C and D were mostly apathetic, with sympathetic excitation being rare. We can also see from Table 2 that, for patients with hyperthyroidism, clinical symptoms are the important determinant regardless of the type of ultrasound features.
Our current study suggested that neither patient age (P = 0.825) nor thyroid volume (P_x3 = 0.894) was correlated with the diagnosis of hyperthyroidism (Table 3). In general, thyroid blood supply grade had no statistical significance (P_x4 = 0.174): both Grade I (P = 0.998) and Grade II (P = 0.263) were non-significant, while only Grade III (P = 0.041) was statistically significant, though its P value was close to 0.05, indicating that increased blood supply in the glands is a secondary reference factor for the diagnosis of hyperthyroidism. Increased blood flow of Grade II and Grade III in the thyroid parenchyma was more common in patients with types A and B, and increased thyroid volume in these two types accounted for a larger proportion of the overall patients. Vita et al. [17] showed that thyroid vascularization correlates directly with thyroid volume and that larger thyroids tend to be more vascularized. Uchida et al. [18] also assessed the frequency and the sonographic and laboratory characteristics of Graves' disease with intrathyroid hypovascularity, and they found that thyroid volume and thyrotropin receptor antibody levels were significantly lower in patients with Graves' disease and a hypovascular thyroid pattern than in those with a hypervascular pattern. The other four factors in Table 3 were statistically significant, with clinical symptoms (P = 0.001), STA peak velocity (P = 0.005), HR (P = 0.009), and echogenicity (P = 0.015) ranked in order of their contribution to the diagnosis of hyperthyroidism. Clinical symptoms were clearly superior to any other parameter obtained by ultrasound. Goichot et al. [19] argued that positive diagnosis of Graves' disease after biological confirmation of thyrotoxicosis does not always require complementary etiological examinations if the clinical presentation is unambiguous, notably including extra-thyroid signs.
As shown in Figure 1 and Table 4, the area under the ROC curve (AUC) indicates that the method used in this study was significantly better than those described in previous studies (P = 0.001).
All of these statistical results further indicate that, during the examination of diffuse thyroid disease, in addition to the accurate acquisition of relevant parameters by ultrasound, it is even more important to pay attention to the medical history; clinical symptoms are the best factor for improving the diagnostic coincidence rate of hyperthyroidism.
The sonographic manifestations of hyperthyroidism, subclinical hyperthyroidism, subclinical hypothyroidism, and Hashimoto's thyroiditis are similar to one another; all can manifest as increased thyroid volume, increased blood supply, and increased STA peak velocity. In the differential diagnosis, hyperthyroidism can be distinguished initially from the others with the information provided by clinical symptoms.
The presence of hyperthyroidism symptoms is a highly specific factor for the diagnosis of hyperthyroidism. Before laboratory examination results are obtained, the presence of hyperthyroidism symptoms is the only reliable diagnostic factor. In our study, there were still five patients with hyperthyroidism who were not asked about their specific symptoms, which may be related to the fact that hyperthyroidism involves many organs and systems, that the individual clinical manifestations of hyperthyroidism differ, or that, when hyperthyroidism is mild, patients may not experience any symptoms.
One might ask: since clinical symptoms are so accurate in diagnosing hyperthyroidism, is ultrasound still needed? The answer is yes. First, ultrasound image structure quantification can differentiate normal from pathological thyroid parenchyma in patients with diffuse autoimmune thyroid disease, including hyperthyroidism [20]. Second, hyperthyroidism can be distinguished from other autoimmune thyroid diseases by clinical symptoms and laboratory examinations. Ultimately, treatment options are determined by the ultrasound evaluation of the presence and malignancy of nodules in the thyroid [21]. Thus, ultrasound imaging plays a key role in the diagnosis and treatment of hyperthyroidism in clinical settings.
There are some shortcomings to this study. Because there were few cases of subclinical hyperthyroidism and subclinical hypothyroidism, the differential diagnosis of hyperthyroidism, subclinical hyperthyroidism, subclinical hypothyroidism, and Hashimoto's thyroiditis could not be analyzed. Further research with a larger sample size is still needed.
Conclusion
For patients in whom diffuse echogenicity changes of the thyroid gland are found on ultrasound, further combining STA peak velocity, HR, and thyroid blood supply with clinical symptoms can significantly improve the diagnostic coincidence rate of hyperthyroidism. | 2021-08-27T16:41:51.913Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "295e473c7669913b7e76524b23dba7dc03fa11c0",
"oa_license": "CCBY",
"oa_url": "http://www.journaladvancedultrasound.com:81/CN/article/downloadArticleFile.do?attachType=PDF&id=171",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "37ef73c5dafc94b1fceb0f920b4e3e5930af3953",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52195301 | pes2o/s2orc | v3-fos-license | Effects of exercise on myokine gene expression in horse skeletal muscles
Objective To examine the regulatory effects of exercise on myokine expression in horse skeletal muscle cells, we compared the expression of several myokine genes (interleukin 6 [IL-6], IL-8, chemokine [C-X-C motif] ligand 2 [CXCL2], and chemokine [C-C motif] ligand 4 [CCL4]) after a single bout of exercise in horses. Furthermore, to establish in vitro systems for the validation of exercise effects, we cultured horse skeletal muscle cells and confirmed the expression of these genes after treatment with hydrogen peroxide. Methods The mRNA expression of IL-6, IL-8, CXCL2, and CCL4 after exercise in skeletal muscle tissue was confirmed using quantitative reverse transcription polymerase chain reaction (qRT-PCR). We then extracted horse muscle cells from the skeletal muscle tissue of a neonatal Thoroughbred. Myokine expression after hydrogen peroxide treatment was confirmed using qRT-PCR in horse skeletal muscle cells. Results IL-6, IL-8, CXCL2, and CCL4 expression in Thoroughbred and Jeju horse skeletal muscles significantly increased after exercise. We stably maintained horse skeletal muscle cells in culture and confirmed the expression of the myogenic marker, myoblast determination protein (MyoD). Moreover, myokine expression was validated using hydrogen peroxide (H2O2)-treated horse skeletal muscle cells. The patterns of myokine expression in muscle cells were found to be similar to those observed in skeletal muscle tissue. Conclusion We confirmed that several myokines involved in inflammation were induced by exercise in horse skeletal muscle tissue. In addition, we successfully cultured horse skeletal muscle cells and established an in vitro system to validate associated gene expression and function. This study will provide a valuable system for studying the function of exercise-related genes in the future.
INTRODUCTION
The horse is a valuable model animal for studying the effects of exercise because it is the most exercise-adaptive animal among livestock. The Thoroughbred is one of the most famous horse breeds in the horse racing industry, having been specially bred for speed, endurance, and strength since the early 1700s. Through selection for these characteristics, the Thoroughbred has become a racing horse [1]. The exercise characteristics of Thoroughbreds serve as a biological model in the field of exercise physiology and have helped identify the molecular mechanisms of the adaptive responses to exercise [2].
To date, a number of genes related to exercise have been screened through high-throughput analyses such as RNA sequencing [3] or microarrays [4]. A number of differentially expressed genes from genomic studies have been studied for functions in the stress response [2,5] and validated in horse muscles [6-8]. Exercise-induced stress is considered one of the major stimuli for the adaptation to and improvement of the physical performance of racing horses. Exercise induces endoplasmic reticulum, oxidative, and inflammatory stresses in muscle. After muscles contract during exercise, skeletal muscles produce a group of cytokines called myokines (interleukin 6 [IL-6], interleukin 8 [IL-8], chemokine [C-C motif] ligand 4 [CCL4], and chemokine [C-X-C motif] ligand 2 [CXCL2]), which function in metabolism, insulin action, and inflammatory responses [9-13]. Among these, IL-6 is the first myokine to be secreted into circulation [14], and it acts as both a pro-inflammatory and an anti-inflammatory myokine. IL-6 is locally expressed in skeletal muscle without muscle damage, and its plasma concentration dramatically increases following exercise [10]. To date, IL-6 is known to induce a variety of effects, including glucose and fat metabolism [15,16] and anti-inflammatory functions during exercise [17]. IL-8 is a chemokine that functions mainly in the chemotaxis of neutrophils. It is also expressed in skeletal muscle after exercise [11], and it has been hypothesized to play a role in angiogenesis within skeletal muscles [18]. After endurance racing in horses, CXCL2, also called macrophage inflammatory protein 2-α (MIP2-α), is upregulated in peripheral blood mononuclear cells [4]. In humans, CXCL2 is significantly induced in exercising legs [19]. CCL4 is a chemoattractant for a variety of immune-related cells [20]. It is highly expressed in skeletal muscles in response to pathological situations [21], and the concentration of CCL4 increases following exercise [12]. It induces myoblast proliferation via G protein-coupled receptors, extracellular signal-regulated kinases 1/2, and the mitogen-activated protein kinase pathway, and it may be involved in wound healing after muscle injury [22,23].
In this study, we examined the gene expression of myokines, including IL-6, IL-8, CXCL2, and CCL4, in Thoroughbred and Jeju horse skeletal muscles before and after a single bout of exercise. Furthermore, we tested myokine expression in primary muscle cells derived from Thoroughbred skeletal muscles in response to hydrogen peroxide (H2O2) treatment, which mimics oxidative stress in vitro. The results of this study could be valuable for the establishment of strategies that manage exercise-induced muscle damage in the equine industry.
Study animals
Six horses were used in this study, and they were divided into two groups: Thoroughbred and Jeju horses. The Pusan National University Institutional Animal Care and Use Committee approved the study design (Approval Number: PNU-2015-0864).
Tissue sampling
Two stallions, one Thoroughbred mare, and three Jeju mares (aged 5 to 10 years and weighing 500 to 700 kg) were used to obtain skeletal muscle samples before and after exercise. Exercise involved trotting at 13 km/h for 30 min and cantering through lunging and long-reining (circular bridge lunging). Skeletal muscle samples were collected from the triceps brachii of the right leg.
Primary horse muscle cell culture
A skeletal muscle tissue biopsy was performed on the leg of a neonatal Thoroughbred. Horse skeletal muscle cells were maintained and sub-passaged in Medium 199 (Gibco, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (FBS; Invitrogen, Carlsbad, CA, USA), 2% donor equine serum (DES; Hyclone, Carlsbad, CA, USA), and 1% antibiotic-antimycotic (ABAM; Invitrogen, USA). Medium 199 supplemented with 0.5% FBS and 1% ABAM was used as the differentiation medium. Horse skeletal muscle cells were cultured in a humidified atmosphere with 5% CO2 at 37°C. At approximately 70% to 80% confluence, cells were treated with 1 mM H2O2 (Junsei, Tokyo, Japan) and cultured for 6 h. Cells were gently washed twice with phosphate-buffered saline and were then harvested using 0.05% trypsin-ethylenediaminetetraacetic acid (Welgene, Gyeongsan, Korea) to extract total RNA.
RNA extraction and cDNA synthesis
Horse skeletal muscle tissue (~50 to 100 mg) was crushed using a mortar, and the ground muscle tissue was then dissolved in 1 mL TRIzol (Invitrogen, Karlsruhe, Germany). Next, 200 μL of chloroform was added, the mixture was shaken for 10 s, maintained at 4°C for 5 min, and centrifuged at 4°C for 15 min to separate the phases. The supernatant was transferred to a new tube, mixed with an equal volume of isopropanol, and maintained at 4°C for 15 min to precipitate the RNA. The isopropanol was removed via centrifugation at 4°C for 15 min, and the pellet was then washed with 85% ethanol and dissolved in RNase-free water. The purity of the extracted RNA was confirmed by measuring absorbance at 230 nm and 260 nm using a spectrophotometer (ND-100, Nanodrop Technologies Inc., Wilmington, DE, USA), and only RNA samples with purity (optical density ratio) measurements greater than 1.8 were selected and stored at -70°C until the experiment was carried out.
RT-PCR and real time-qPCR
NCBI (http://www.ncbi.nlm.nih.gov) and the Ensembl Genome Browser (www.ensembl.org) were utilized to retrieve gene sequence information. The primers for amplification of myokine and myogenic marker mRNA (Table 1) were designed using PRIMER3 software (http://bioinfo.ut.ee/primer3-0.4.0/). Reverse transcription PCR (RT-PCR) and real-time qPCR reactions were carried out in a 25 μL reaction volume using a C1000 Thermal Cycler (Bio-Rad, Hercules, CA, USA) to measure the relative expression of target genes. The solution was prepared as follows: 2 μL diluted cDNA (50 ng/μL) was added to 14 μL SYBR green master mix (Bio-Rad, USA) and 1 μL each of 5 pmol/μL diluted forward and reverse primers. The conditions used for the real-time qPCR were as follows: initial denaturation at 94°C for 10 min, followed by 40 cycles of denaturation at 94°C for 10 s, annealing at 60°C for 10 s, and extension at 72°C for 30 s. All measurements were carried out in triplicate for each specimen, and the 2^-ΔΔCt method was used to determine relative gene expression. The relative expression of target genes was normalized to glyceraldehyde-3-phosphate dehydrogenase (GAPDH).
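For readers unfamiliar with the relative quantification step, the short Python sketch below illustrates the 2^-ΔΔCt calculation from mean triplicate Ct values. The gene names and Ct numbers are hypothetical illustrations, not data from the study.

```python
import numpy as np

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    # ddCt = (Ct_target - Ct_reference)_treated - (Ct_target - Ct_reference)_control
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_treated - d_ct_control))

# Hypothetical triplicate Ct values for IL-6, normalized to GAPDH
ct_il6_post, ct_gapdh_post = np.mean([22.1, 22.3, 22.0]), np.mean([18.5, 18.6, 18.4])
ct_il6_pre, ct_gapdh_pre = np.mean([25.9, 26.1, 26.0]), np.mean([18.4, 18.5, 18.5])

fold = relative_expression(ct_il6_post, ct_gapdh_post, ct_il6_pre, ct_gapdh_pre)
print(f"IL-6 fold change after exercise: {fold:.1f}x")
```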
MTT assay
Cell viability was assayed by measuring the blue formazan that was metabolized from 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) by mitochondrial dehydrogenase. Horse muscle cells were re-suspended in the medium one day before H2O2 (Junsei, Japan) treatment at a density of 2×10^5 cells per well in 24-well culture plates. Liquid medium was replaced with fresh medium containing dimethyl sulfoxide (DMSO) as a control. Horse muscle cells were incubated with various concentrations of H2O2. MTT (5 mg/mL) was added to each well and incubated for 4 h at 37°C. The formazan product formed was dissolved by adding 200 μL DMSO to each well, and the absorbance was measured at 570 nm on an Ultra Multifunctional Microplate Reader (TECAN, Durham, NC, USA). All measurements were performed in triplicate and repeated at least three times.
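As a rough illustration of how viability is typically derived from 570 nm readings in an assay like this, the following Python sketch computes percent viability relative to control wells; all absorbance values are hypothetical, and the blank-subtraction step is a common convention rather than a detail stated in the study.

```python
import numpy as np

def percent_viability(a570_treated, a570_control, a570_blank=0.0):
    # Formazan absorbance at 570 nm is taken as proportional to viable cell number
    treated = np.asarray(a570_treated, dtype=float) - a570_blank
    control = np.mean(a570_control) - a570_blank
    return 100.0 * treated / control

control_wells = [0.82, 0.85, 0.80]   # hypothetical untreated (control) readings
h2o2_1mM = [0.61, 0.58, 0.63]        # hypothetical 1 mM H2O2-treated readings
print(percent_viability(h2o2_1mM, control_wells).round(1))
```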
Statistical analysis
Means and standard deviations of gene expression were calculated using Microsoft Excel. Statistical significance (* p<0.05, ** p<0.01, or *** p<0.001) was assessed using an analysis of variance, followed by an unpaired sample t-test, using the GraphPad Prism 5 program (San Diego, CA, USA).
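A minimal Python equivalent of this significance testing, assuming scipy is available; the study itself used Prism, and the expression values below are hypothetical.

```python
from scipy import stats

# Hypothetical relative expression values (fold change) before and after exercise
pre = [1.02, 0.95, 1.10, 0.98, 1.05, 0.90]
post = [3.4, 2.9, 4.1, 3.8, 3.1, 3.6]

t, p = stats.ttest_ind(post, pre)  # unpaired sample t-test, as in the study
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"t = {t:.2f}, p = {p:.2g} ({stars})")
```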
Expression of exercise-related myokines before and after exercise in horses
To examine the expression patterns of horse myokines including IL-6, IL-8, CXCL2, and CCL4 after exercise, mRNA levels were quantified in muscle tissues from Thoroughbred horses following one bout of trotting on the treadmill. Expression of myokines was significantly increased after exercise in Thoroughbreds (*** p<0.001, Figure 1A). Similar expression patterns were observed in the muscles of Korean native Jeju horses, demonstrating that the regulation of myokine expression by exercise is conserved in horses (Figure 1B).
Culture and validation of primary skeletal muscle cells of horse
We isolated and cultured muscle cells from the skeletal muscle tissue of a neonatal Thoroughbred to establish a reliable system for in vitro studies in horses. Horse muscle cells were stably maintained even after passage 23 in Medium 199 supplemented with 10% FBS and 2% DES. It is interesting to note that horse muscle cells were larger than those of the mouse muscle cell line C2C12. To test whether horse primary myoblast cells possess the capacity to differentiate into myotube cells, horse muscle cells at 80% confluence were cultured in Medium 199 supplemented with 2% FBS for 12 days. During myogenic differentiation, myoblasts fused into multinucleated fibers (Figure 2A). Subsequently, we conducted RT-PCR for MyoD (a myogenic marker) expression to confirm the origin of the cells. The muscle-specific transcription factor MyoD was specifically expressed in both skeletal muscle tissues and cultured cells, but it was not expressed in the other tissues (Figure 2B). RT-PCR analysis for myogenic markers including paired box 7 (PAX7), myogenic factor 5 (Myf5), myogenin (MyoG), four and a half LIM domains 1 (FHL1), and nuclear factor of activated T cells 1 (NFATc1) confirmed that horse muscle cells maintained myogenic features in vitro (Figure 2C). Cell viability after H2O2 treatment was assessed relative to control (Figure 3A). To maximize oxidative stress, horse muscle cells were exposed to 1 mM H2O2 for 6 h. There were no morphological changes in H2O2-treated horse muscle cells (Figure 3B). The effects of oxidative stress on the expression of the myokine genes were then evaluated by qRT-PCR. Oxidative stress caused by H2O2 treatment increased the expression of IL-6 (** p<0.01), IL-8 (* p<0.05), and CXCL2 (* p<0.05) mRNA, but CCL4 expression was not significantly altered (Figure 3C).
DISCUSSION
A variety of cytokines are secreted from muscle cells after exercise [19,24], and this creates a milieu for muscle recovery and inflammatory responses. Although the horse is a representative model for exercise, the expression of exercise-induced cytokines has been poorly studied in horses. Several studies have investigated cytokine expression following exercise [24], and the effects of exercise-induced cytokines in muscles have been observed in previous studies, which found that exercise greatly reduces the risk of chronic inflammatory diseases [25]. Physical activity induces reactive oxygen species (ROS), which initiate signaling cascades and exert various functions [26]. In addition to inducing the expression of oxidative stress-related genes, ROS are closely related to the induction of inflammation [26]. Inflammatory cytokines promote the involvement of immune cells such as macrophages or neutrophils [27]. It is assumed that exercise-induced inflammatory responses are required for regenerative and adaptive processes in skeletal muscle. In this study, we examined the expression of myokines after exercise in horses, and IL-6, IL-8, CXCL2, and CCL4 increased after exercise in Thoroughbred and Jeju horses (Figure 1). It has previously been shown that exercise-induced IL-6 has specific roles in fat metabolism and the immune system [16,17]. Although the exact functions of IL-8, CXCL2, and CCL4 in exercise remain uncertain, their increased expression after exercise in muscle tissue is similar among species [11,12,19]. Therefore, these results indicate that exercise-related myokines may play a role in exercise regardless of species and breed.
In this study, we established a culture system for horse muscle cells derived from the skeletal muscle tissue of a neonatal Thoroughbred (Figure 2A). After 23 culture passages in vitro, we investigated whether these cells possessed myogenic features by examining myogenic markers using RT-PCR. PAX7 is regarded as an important gene for the specification of myogenic satellite cells [28]. Though PAX7 was expressed only weakly in both horse skeletal muscle tissue and horse muscle cells, it is reasonable to assume that PAX7 may play a role in the specification of myogenic cell lineages in horses. In addition, the other myoblast markers, Myf5, MyoD, MyoG, FHL1, and NFATc1, were expressed in horse muscle cells, indicating that the cell population possesses the myogenic features observed in other mammals; however, further investigation is required (Figure 2C). Because the cells were derived from neonatal skeletal muscle tissue, they may contain various myogenic cell types. Further study is needed to determine the exact state of these horse skeletal muscle-derived cells. Finally, we validated the possibility of a suitable in vitro system for studying horse muscle physiology and exercise-induced muscle disease using in vitro cultured horse muscle cells (Figure 3). The horse muscle cells developed in this study will provide an important system for the functional study of exercise-related genes.
In conclusion, exercise can induce the expression of cytokines, which play an important role in muscle regeneration and anti-inflammation in horse skeletal muscle tissue after exercise. Additionally, we established a horse skeletal muscle cell culture, which will be a valuable tool for investigating the expression and function of exercise-related genes.
"year": 2018,
"sha1": "0729a833a70b8df4bf185f718bc7e69feec836bb",
"oa_license": "CCBY",
"oa_url": "https://www.animbiosci.org/upload/pdf/ajas-18-0375.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0729a833a70b8df4bf185f718bc7e69feec836bb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Negative Posttraumatic Cognitions Color the Pathway from Event Centrality to Posttraumatic Stress Disorder Symptoms
The centrality of an event indicates the extent to which it becomes a core part of identity and life story. Event centrality (EC) has been shown to have a strong relationship with PTSD symptoms, which seems to be indirectly influenced by negative posttraumatic cognitions (PTC). However, research on this potential mediation and its causal links, particularly with clinical samples, is limited and essential to derive treatment implications. Pre- and posttreatment data from 103 day-unit patients with PTSD were examined using mediation analyses and structural equation modeling. Negative PTC mediated the relationship between EC and PTSD symptoms, partially pretreatment and completely posttreatment. Within the extended longitudinal analyses, the causal directions of the mediation pathways were not adequately interpretable due to unexpected suppression effects. The results suggest that EC may only have an indirect effect on PTSD symptoms through negative PTC. Thus, decreasing negative PTC which are connected to centralized events might be a key element for PTSD treatment. Thereby, transforming the cognitions' valence into more positive and constructive forms could be crucial, rather than mere decentralization. Although suppression effects limited causal inferences, they do not contradict the mediation and further indicate potential interactional terms and a transformation of EC.
Even though exposure to a traumatic event is very likely across countries, at over 70.3%, the conditional risk for posttraumatic stress disorder (PTSD) in the aftermath of trauma is only 4.0% cross-nationally (Liu et al., 2017). This low probability has brought etiological factors for PTSD into the focus of research interest. A strong link has been established to posttraumatic cognitions (PTC), which are assumed to play a crucial role in etiological models of PTSD: particularly the views of individuals with strong beliefs about a safe world and high competences can be shattered by traumatic events, since assimilation or over-accommodation of world- and self-schemes may lead to dysfunctional thoughts and consequently yield high vulnerability for PTSD (Foa & Rothbaum, 1998). Similarly, Ehlers and Clark (2000) proposed in their cognitive model that in the aftermath of trauma a disturbance of autobiographical memory and negative appraisals of the trauma and/or its consequences, such as "My personality has changed for the worse" or "Nowhere is safe" (p. 322), lead to an ongoing perceived threat that can result in PTSD symptoms. Therefore, treatments such as cognitive therapy for PTSD (Ehlers et al., 2005) or cognitive processing therapy (Resick et al., 2017) aim to alter PTC. A recent review of treatment studies showed that the reduction of PTSD symptoms was often preceded by a decrease in negative PTC. In a commentary on 11 studies on appraisals and PTSD, McNally and Woud (2019) address the question of whether the relationship between trauma and PTSD is mediated by appraisals and evaluate methodological limitations for inferences. The authors outline that appraisals are found to be a correlate and predictor of PTSD, but studies examining them as a causal risk factor are rare. McNally and Woud (2019) reflect on innovations in this research field and point out that replicating previous results in clinical samples as well as causal research is important, particularly to derive therapeutic interventions.
Another facet of cognitions associated with PTSD symptomatology is the perceived centrality of a traumatic event, which arises if the event (1) is seen as a reference point for everyday inferences as well as (2) a turning point in the life story and (3) becomes a core component of personal identity (Berntsen & Rubin, 2006). Berntsen and Rubin (2006, 2007) argue that these three processes can emerge due to the high accessibility of traumatic memories through their emotional impact and distinctiveness. Thus, they assume that enhanced integration may be responsible for maladaptive cognitions and behavioral responses such as an overestimation of further traumatic events and avoidance of perceived threats. In a systematic review of 92 studies, which mainly included student samples (Gehrt et al., 2018), a robust positive relationship between event centrality (EC) and PTSD symptoms was found. Beyond mainly cross-sectional evidence, only a few studies have investigated longitudinal effects so far; these suggest that EC precedes and predicts PTSD symptoms (Blix et al., 2016; Boals & Ruggero, 2016; Boelen, 2012; Grau et al., 2020), while one recent study indicates the reversed temporal order (Glad et al., 2019). In line with the picture of multiple studies included in the review by Gehrt et al. (2018), previous research predominantly indicates that EC affects PTSD symptom severity.
Concurrently, the review by Gehrt et al. (2018) showed that EC correlated strongest with posttraumatic growth (PTG), which is described as a positive change after highly adverse life events and includes an increase in life appreciation, significant relationships, and perceived personal strength, as well as a shift in priorities and the development of an existential or spiritual life (Tedeschi & Calhoun, 2004). Tedeschi and Calhoun (2004) note that PTG "may be accompanied by a reduction in distress" (p. 13) but also point out that PTG cannot be directly equated with wellbeing or less distress.
The ambiguity of EC correlates stimulated more research. Boals and Schuettler (2011) found that EC predicted both PTG and PTSD symptoms. They concluded that the way of coping (problem-focused vs. avoidant) and the valence of interpretations (positive vs. negative perspective) determine the course of mental health outcomes. Barton et al. (2013) came to the same conclusion, suggesting that negative PTC come along with PTSD symptoms and inhibit PTG. In line with these findings, Groleau et al. (2013) concluded that negative or positive outcomes after traumatic experiences might depend on the perceived valence of the centrality appraisal, for the worse or better. Accordingly, an examination with a valence-modified version of an EC measure showed that central-negative appraisals were positively related to PTSD symptoms and negatively to PTG, whereas central-positive appraisals showed the inverse relationships (Teale Sapach et al., 2018). Broadbridge (2018) used a similar approach, adding valence-modified items to the measurement, and also found that centralizing events negatively predicted PTSD symptomatology much more strongly than centralizing them positively.
The findings suggest that EC may not be connected directly to PTSD symptoms but indirectly through centrality-related, negative cognitions. Their potential impact has not been systematically investigated, but some studies have included examinations of mediational relationships: Using an undergraduate sample, Lancaster et al. (2011) found support for their hypothesis that EC mediated the relationship between PTC and PTSD symptoms and also found a good fit for a model with PTC as a mediator, concluding that both mediators are independent predictors. Two studies showed that violated core beliefs mediated the relationship between EC and PTSD symptoms in undergraduates (George et al., 2016) and internally displaced older adults (Chukwuorji et al., 2019). Vermeulen et al. (2019) were able to experimentally reduce students' centrality appraisals of distressing events and found no reduction of PTSD symptoms relative to a placebo control training, but demonstrated that PTC and rumination had a mediating effect. The authors concluded that "there might be no direct causal relation between appraisals of event centrality and symptoms of PTSD" (p. 223) and considered that EC does not encompass relevant appraisals based on PTSD models (e.g. by Ehlers & Clark, 2000). These mediational studies support the idea that negative PTC carry an indirect effect of EC on PTSD symptoms, but more research is needed to confirm this assumption.
In sum, numerous studies have shown a strong association of EC and PTSD symptoms, but questions remain concerning the influence of negative PTC and the causal directions of the relationships. These are the core targets of this study. We aimed to study the associations of the three variables by examining a clinical sample that had experienced a traumatic event and using a longitudinal design. This approach substantially broadens research in a field where so far predominantly subclinical student samples, who refer to their most stressful event, have been investigated cross-sectionally.
First, we hypothesized that negative PTC mediate the relationship between EC and PTSD symptoms pre- and posttreatment. We also expected to find these mediations when the cross-lagged paths and stabilities between the two points of time are included. Second, we expected to find evidence for causal directions of the mediational pathways through longitudinal analyses, such that EC influences PTC and that both influence PTSD symptoms. Thus, comparing the three pairs of cross-lagged paths, we hypothesized that the regressions implying the mediational direction are larger than the regressions vice versa.
Treatment
Patients voluntarily reached out to the trauma-focused day-unit program at St. Hedwig Hospital of the Psychiatric University Hospital Charité in Berlin. The department provides cognitive behavioral therapy to patients suffering from posttraumatic disorders. The program ran from 8:00 am to 3:30 pm on weekdays over an average of 7.94 treatment weeks (SD = 1.45) for the included patients. It comprised four sessions of individual therapy per week and daily group therapy, conducted by mental health professionals (psychologists, psychiatrists, and nurses) who received supervision multiple times per week. The composition of the group varied throughout.
The treatment followed the cognitive processing therapy (CPT) manual by Resick et al. (2017). In addition, behavioral experiments were conducted, and avoidance as well as safety-seeking behavior was monitored (Ehlers et al., 2005). Within the individual therapy, half of the sessions were conducted under the guidance of one main therapist. Patients were encouraged to write an Impact Statement, which encompasses their beliefs about the reasons for the occurrence of the traumatic event and its influence on their beliefs about self, others, and the world, particularly in terms of safety, trust, control, power, esteem, and intimacy. Thereby, dysfunctional beliefs (stuck points) were identified and challenged through Socratic dialogue with the help of the professional and worksheets. Besides, patients wrote a narrative of their most distressing trauma and repeatedly read it. If present and necessary, other traumatic events could be addressed after working on the index trauma. The standard program was individually adapted to the therapy goals of the patients. The specific topics were consolidated with the help of a co-therapist within the other half of the individual sessions. In addition, group therapy addressed more general and commonly arising topics. It comprised psychoeducation, thought and behavioral analyses, challenging irrational thoughts, planning and evaluating behavioral experiments, and monitoring avoidance as well as safety-seeking behavior. The use of various worksheets was thereby practiced.
Before admission, a preliminary session was conducted by a trained and experienced clinician (psychologist or psychiatrist) to evaluate the indication for the treatment using a semi-structured clinical interview based on the diagnostic criteria of the ICD-10 (WHO, 1992). This study included only patients who fulfilled the clinically assessed PTSD diagnosis (F43.1). To assure sufficient posttraumatic stress symptom severity, the criterion of the Posttraumatic Diagnostic Scale (PDS; Foa et al., 1997) additionally had to be fulfilled (at least one re-experiencing, three avoidance, and two hyperarousal symptoms had to be rated 1 or higher and be present over at least four weeks, and at least one impairment had to exist).
Patients
From November 2014 to January 2020, 145 of 304 admitted patients completed the set of pre- (T1) and posttreatment (T2) assessments. Of these, 109 patients fulfilled the clinically assessed PTSD diagnosis; however, six of them did not fulfill the PDS criterion. Consequently, 103 patients (73 females; 70.9%) were included in this study.
Measures
Data acquisition was part of the routine clinical monitoring at the day-unit. The self-rating measures of this study were collected in German on a tablet during the week of admission and an average of 55.82 days (SD = 10.31) later during the week of discharge. The Posttraumatic Diagnostic Scale (PDS) was assessed only at admission because of its diagnostic value.
Event Centrality
The Centrality of Event Scale (CES; Berntsen & Rubin, 2006; German version: Conen et al., 2021) measures the extent to which an event becomes central to an individual, referring to their identity and life story. An event is seen as central if it becomes a (1) reference point, (2) turning point, and (3) central component of personal identity. The authors introduced a questionnaire with 20 items (α = 0.94) and a shortened version (α = 0.88) with seven items of the long version (no. 3, 6, 10, 12, 16, 17, and 18). Both scales are rated on a 5-point Likert scale, ranging from 1 (totally disagree) to 5 (totally agree). In this study, the short version was used, which demonstrated acceptable internal consistency at admission (α = 0.78) and good internal consistency at discharge (α = 0.88).
Posttraumatic Cognitions
The Posttraumatic Cognitions Inventory (PTCI; Foa et al., 1999; German version: Ehlers & Boos, 2000) was used to assess trauma-related thoughts and beliefs. It consists of three factors: negative cognitions about self (21 items), negative cognitions about the world (7 items), and self-blame (5 items). Rated on a 7-point Likert scale from 1 (totally disagree) to 7 (totally agree), high scale scores indicate stronger endorsement of negative cognitions. All factors as well as the total score have shown excellent internal consistency (α = 0.88-0.97; Foa et al., 1999). Internal consistency in this sample proved excellent at admission (α = 0.94) and discharge (α = 0.98).
Posttraumatic Stress Symptoms
The Posttraumatic Diagnostic Scale (PDS; Foa et al., 1997; German version: Ehlers et al., 1996) measures the presence of posttraumatic stress disorder (PTSD) and the severity of its symptoms within the past month. The four parts of the questionnaire are allocated to the criteria of PTSD according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; APA, 1994): Parts 1 and 2 assess trauma history and characteristics of the traumatic events (types, index event, time of occurrence, circumstances, and reactions). Part 3 measures the PTSD symptom clusters (intrusions, avoidance, hyperarousal) and their frequency with 17 items, ranging from 0 (not at all or only one time) to 3 (five or more times a week/almost always) on a 4-point Likert scale. Two further questions assess the duration and onset of symptoms. In Part 4, impairments as a result of the PTSD symptoms are assessed. The total symptom severity score (Part 3) has demonstrated excellent internal consistency (α = 0.92; Foa et al., 1997), while it was acceptable (α = 0.76) in the present sample.
Comparable to Part 3 of the PDS, the Davidson Trauma Scale (DTS; Davidson et al., 1997) measures the presence of PTSD symptoms according to DSM-IV (APA, 1994). Consisting of 17 items, both the frequency and severity of the PTSD symptom clusters are rated on a 5-point Likert scale, ranging from not at all (0) to every day (4) and from not at all distressing (0) to extremely distressing (4). The DTS has shown good internal consistency (α = 0.99) and is sensitive to treatment effects (Davidson et al., 1997). In the present study, internal consistencies were excellent at admission (α = 0.92) and discharge (α = 0.96).
Statistical Analyses
Analyses were conducted using R version 3.6.3 (R Core Team, 2020) and are presented in Appendix A. For descriptive statistics, one-sample Kolmogorov-Smirnov tests were conducted to test for normal distribution, paired t-tests to investigate mean changes over time, and Pearson correlations to examine the relationships between the variables. Cohen's d was calculated as the effect size (Cohen, 1988).
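A minimal Python sketch of this descriptive workflow, assuming scipy is available; the study used R, and the simulated scores below are hypothetical rather than patient data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t1 = rng.normal(70, 15, 103)                    # simulated admission scores
t2 = t1 - rng.normal(12, 14, 103)               # simulated discharge scores

ks = stats.kstest(stats.zscore(t1), "norm")     # one-sample KS normality test
t_stat, p = stats.ttest_rel(t1, t2)             # paired t-test for change over time
r, _ = stats.pearsonr(t1, t2)                   # stability correlation
d = np.mean(t1 - t2) / np.std(t1 - t2, ddof=1)  # Cohen's d for paired means
print(f"KS p = {ks.pvalue:.2f}, t(102) = {t_stat:.2f}, p = {p:.3g}, r = {r:.2f}, d = {d:.2f}")
```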
Mediations and the structural equation model (SEM) were estimated with the lavaan package (Rosseel, 2012).
To test the mediation model regarding the first hypothesis, the presence of indirect effect of negative PTC on the relationship between EC and PTSD symptoms was estimated separately as well as simultaneously for both points of time within one SEM using the Maximum-Likelihood method.
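To make the indirect-effect logic concrete, here is a simplified percentile-bootstrap sketch in Python. The study itself estimated the mediation within lavaan SEMs, so this OLS-based version is only an illustration, and all values in the usage example are simulated.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=1):
    """Percentile bootstrap CI for the indirect effect a*b of x -> m -> y."""
    rng = np.random.default_rng(seed)
    n, ab = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                  # slope of m ~ x
        X = np.column_stack([np.ones(n), xs, ms])     # design matrix for y ~ x + m
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]  # coefficient of m
        ab[i] = a * b
    return np.percentile(ab, [2.5, 97.5])

# Simulated standardized scores: EC (x), negative PTC (m), PTSD symptoms (y)
rng = np.random.default_rng(0)
x = rng.standard_normal(103)
m = 0.4 * x + rng.standard_normal(103)
y = 0.4 * m + rng.standard_normal(103)
lo, hi = bootstrap_indirect(x, m, y)
print(f"95% CI for a*b: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 supports mediation
```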
To examine the second hypothesis regarding the causal direction of the mediational pathways, such that EC influences PTC and that both influence PTSD symptoms, the three pairs of cross-lagged paths between the variables at each point of time were analyzed within the SEM by comparing the sizes of their regression weights. We tested whether the regressions of (1) PTSD symptoms at discharge on EC at admission, (2) PTC at discharge on EC at admission, and (3) PTSD symptoms at discharge on PTC at admission were larger than the respective regressions vice versa. Two cross-lagged paths were considered to be different if a model with equality constraints on the paths fit the data significantly worse than the original unconstrained model. Models were compared using likelihood ratio tests, with significant χ² values indicating differences between the two cross-lagged paths.
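The model-comparison step reduces to a chi-square difference test, sketched below in Python with hypothetical fit statistics (the degrees of freedom and chi-square values are illustrative, not the study's).

```python
from scipy.stats import chi2

# Hypothetical model fit: unconstrained vs. equality-constrained cross-lagged paths
chisq_free, df_free = 41.2, 10
chisq_equal, df_equal = 50.9, 11

chisq_diff = chisq_equal - chisq_free
p = chi2.sf(chisq_diff, df_equal - df_free)  # likelihood ratio (chi-square difference) test
print(f"chi2_diff({df_equal - df_free}) = {chisq_diff:.2f}, p = {p:.3f}")
```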
To analyze the power of the regression coefficients, the mixedDesign() function (Schad et al., 2020) was used: First, data for 1010 simulations were generated with the means, standard deviations, and correlations from the specified model. Then, coefficients were extracted, rows with missing values were excluded, and 1000 rows were randomly extracted. Last, the significant results were counted to estimate the power of the indirect effects and cross-lagged paths.
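A simplified Monte Carlo analogue of this power analysis in Python; the effect size, sample size, and simulation count are illustrative assumptions rather than the study's exact specification, which used the mixedDesign() approach in R.

```python
import numpy as np
from scipy import stats

def simulate_power(n=103, beta=0.25, n_sims=1000, alpha=0.05, seed=2):
    """Share of simulated datasets in which the assumed effect is significant."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        y = beta * x + np.sqrt(1 - beta**2) * rng.standard_normal(n)
        _, p = stats.pearsonr(x, y)   # equivalent to testing a standardized slope
        hits += p < alpha
    return hits / n_sims

print(f"Estimated power: {simulate_power():.1%}")
```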
Descriptive Statistics
The assumption of normally distributed scores was violated only for CES at discharge (p < 0.05). An analysis of changes from admission to discharge showed significant reductions (p < 0.001) for CES, t(102) = 5.71, PTCI, t(102) = 9.47, and DTS, t(102) = 8.57. Effect sizes of the mean differences were medium for the CES (d = 0.56) and large for the PTCI (d = 0.93) and DTS (d = 0.84). The descriptive statistics and correlations are summarized in Table 1. All correlations were positive and significant at p < 0.001. Cross-sectionally, correlations were of medium (r = 0.36-0.44) and large (r = 0.54-0.79) magnitudes at admission and discharge, respectively (Cohen, 1988).
Cross-sectional Mediation
According to the first hypothesis, negative PTC mediated the effect of EC on PTSD symptoms at admission (indirect effect: z = 2.55, p < 0.05) and discharge (indirect effect: z = 6.26, p < 0.001). A partial mediation occurred at admission (direct effect: z = 3.64, p < 0.001), whereas it was complete at discharge (direct effect: z = 0.95, p = 0.343). Figure 1 presents the separate mediation models pre- and posttreatment with standardized regression coefficients.
Structural Equation Modeling
Figure 2 displays the SEM integrating the two mediation models of Fig. 1 along with standardized regression coefficients. Within this model, indirect effects of EC on PTSD symptoms via negative PTC were tested simultaneously and occurred at admission (z = 2.55, p < 0.05) and discharge (z = 5.92, p < 0.001) in line with the first hypothesis. Again, partial mediation occurred at admission (direct effect: z = 3.64, p < 0.001), whereas a complete mediation occurred at discharge (direct effect: z = − 0.63, p = 0.529). The analysis of power revealed high values for the indirect effects at admission (84.3%) and discharge (100.0%).
Stabilities (s11, s22, s33) of the questionnaires between the two points of time were throughout positive and significant. The power of the cross-lagged paths (clp) analogous to the hypothesized direction of the mediation paths was high (clp12 = 89.3%, clp23 = 97.3%) and small (clp13 = 33.6%), while the power of the cross-lagged paths inverse to the hypothesized mediation pathways was medium (clp31 = 55.4%) and small (clp21 = 37.4%, clp32 = 40.7%). Thus, the probability of the analysis detecting a real effect of the path coefficients as statistically significant was adequate for half of the clps, with power above 50%. Besides this descriptive information, interpreting the comparisons of clp coefficients was still appropriate, since our second hypothesis focused on the coefficient sizes. As we hypothesized that EC influences PTC and that both influence PTSD symptoms, we expected larger coefficient sizes for these clp directions than for the reversed ones. Thus, analyses were performed comparing the standardized regression weights of the three paired cross-lagged paths. The results showed patterns contrary to the expectations: First, the standardized regression of PTSD symptoms (T2) on EC (T1) was smaller (clp13 = 0.09) than vice versa (clp31 = 0.20), χ²diff = 1.60, but the difference did not reach statistical significance, p = 0.206. Second, the regression of PTC (T2) on EC (T1) was significantly smaller (clp12 = −0.23) than the reverse regression (clp21 = 0.16), χ²diff = 9.74, p < 0.01. Lastly, the estimated coefficient of the regression of PTSD symptoms (T2) on PTC (T1) was also significantly smaller (clp23 = −0.27) than vice versa (clp32 = 0.12), χ²diff = 8.20, p < 0.01. Overall, the expected larger regression coefficients of paths implying the mediational directions compared to paths of the reversed direction did not appear within the longitudinal analyses of this model.
Regarding the comparison of the cross-lagged paths, two negative regressions occurred (clp12 and clp23), indicating unexpected suppression effects: EC at admission correlated more highly with PTC at admission (r = 0.36) than at discharge (r = 0.22), implying that EC suppressed variance in PTC at admission, which may have led to a high partial correlation of PTC at admission and discharge (r = 0.61). Similarly, PTC at admission correlated more highly with PTSD symptoms at admission (r = 0.42) than at discharge (r = 0.38), indicating suppressed variance which may have led to a high partial correlation of PTSD symptoms at admission and discharge (r = 0.53). Thus, the subtractive contributions of EC as well as of PTC appear to be reflected in the negative cross-lagged regression weights.
Discussion
This study cross-sectionally and longitudinally investigated the relationship between EC and PTSD symptoms and the possible indirect effect of negative PTC on their relationship. To our knowledge, this is the first longitudinal study on this topic including a clinical sample. So far, previous research has indicated that EC and negative PTC precede PTSD symptoms (Blix et al., 2016; Boelen, 2012; Brown et al., 2019; Grau et al., 2020) and that centralizing events negatively results in higher symptomatology (Broadbridge, 2018; Teale Sapach et al., 2018). In addition, related mediational effects were found within student samples cross-sectionally (George et al., 2016; Lancaster et al., 2011) and longitudinally (Vermeulen et al., 2019), as well as in a sample of internally displaced older adults cross-sectionally (Chukwuorji et al., 2019). Therefore, we first hypothesized that negative PTC mediate the relationship between EC and PTSD symptoms in a clinical sample pre- and posttreatment. Cross-sectionally, the mediation was partial at admission and even complete at discharge, strengthening the evidence for the indirect effect. Within a longitudinal SEM including the cross-lagged paths and stabilities between the two points of time, we found the same mediational pattern. These results extend the findings of the separately tested mediations in support of the hypothesis, since the stabilities from admission to discharge, even though they were comparably small but still positive as expected, were included within the tested model.
Concerning the second hypothesis, we expected to find extending evidence for the causal directions of the mediational pathways, such that EC influences PTC and that both influence PTSD symptoms. To investigate each direction of the paths, we compared the paired cross-lagged paths within the longitudinal design and expected that the regression coefficients of paths implying the mediational directions would be larger than those of the respective paths in the reversed direction. Against our assumptions, none of the three comparisons showed the expected pattern: The regression weight of PTSD symptoms at discharge on EC at admission was smaller than vice versa, but the comparison was not significant, indicating no difference between the regressions. The regression of PTC at discharge on EC at admission as well as the regression of PTSD symptoms at discharge on PTC at admission were significantly smaller than the respective regressions vice versa. However, two negative coefficients within the cross-lagged paths led to the assumption of unexpected suppression effects, which generally restrict comparative interpretations. Thus, designating the cross-lagged paths as larger or smaller as mentioned above was considered inappropriate. Since causal inferences were therefore limited, the results cannot speak for or against reversed directions of the mediational pathways. Thereby, they do not contradict our previous findings.
Overall, the first part of our results regarding the mediation pre- and posttreatment implies that reducing, in particular, the negative PTC which are connected to centralized events might be essential for PTSD interventions. At the same time, this would suggest that reducing EC within trauma-focused interventions may not be directly relevant for achieving a reduction of PTSD symptoms. This would stand in contrast to the original assumption of a direct influence (Berntsen & Rubin, 2006) and to recent suggestions (Boals et al., 2020) pleading for interventions which aim to specifically reduce EC. We assume that such interventions may only have an effect on PTSD symptoms if they implicitly target connected, negative PTC. As mentioned above, the valence of centrality-related cognitions is not included in the Centrality of Event Scale (Berntsen & Rubin, 2006). For example, one item asks to what extent the event has colored the way of thinking and feeling about other experiences, but it is not defined how, and the "color" remains unclear. Assimilation or over-accommodation processes in the aftermath of trauma, developing appraisals such as "The world is dangerous and an unsafe place", "I am weak", or "I cannot trust anyone", may lead to an ongoing perceived threat and avoidance behavior (Ehlers & Clark, 2000). In contrast, outlasting thoughts such as "I might not be able to control everything, but I can still control a lot" or "I can overcome challenges in life; surviving has made me a stronger person" might lead to a less negative, anxious "color" and even enhance PTG (Tedeschi & Calhoun, 2004). Either way, comprising a negative or a positive valence, it seems conceivable that the event can be seen as and remain central in terms of an identity component, reference point, and turning point based on the concept of Berntsen and Rubin (2006). Our descriptive results indicate that the reduction of EC over time was comparably smaller, with a medium effect size, than the reductions of negative PTC and PTSD symptoms, which showed large effect sizes. Also, the low stability coefficient of EC from admission to discharge may indicate a shift in its meaning over time, which could reflect a transition of the latent construct from a negative to a more positive centralization. These aspects may underline that, instead of decentralization, reducing the negative valence of connected cognitions might be more essential. As we derive this suggestion from our results, we could assume that less negative PTC imply more positive PTC, which are related to fewer PTSD symptoms. However, we did not specifically include a positive facet of PTC and can only speculate on the influence of this concept. With regard to results of studies suggesting that having events centralized positively leads to good mental health outcomes (e.g. Broadbridge, 2018; Groleau et al., 2013; Teale Sapach et al., 2018), focusing on the reduction of negative appraisals and a shift to positive appraisals of centralized events could lead to better outcomes. This would underline the importance of decreasing negative PTC on the one hand and enhancing positive PTC on the other as targets of PTSD treatments. In a clinical context, this could mean utilizing cognitive techniques to reduce dysfunctional negative appraisals which have occurred in the aftermath of a trauma.
In addition, individuals could be encouraged to develop positive appraisals, for example, being a strong person who is able to cope with a traumatic event, or even to find positive effects such as strengthened relationships or changed priorities. Further research on positive PTC and their meanings would therefore be necessary to support such inferences. Also, an additional orientation toward growth instead of only focusing on deficits may be a helpful tool, since PTG can be associated with less distress (Tedeschi & Calhoun, 2004). However, debates about whether PTG is rather related to negative mental health outcomes should be considered when investigating this topic (e.g. Engelhard et al., 2015; Frazier et al., 2009).
Regarding the longitudinal results of our second part, the unexpected suppression effects restricted causal inferences as described above but nevertheless permit other valuable inferences: They could indicate interactional terms which were not represented within our model. For instance, EC may implicitly include negatively or positively "colored" cognitions. Conversely, negative PTC might depend on the presence of EC and even strengthen it. Studies using valence-modified versions of the CES (Broadbridge, 2018; Teale Sapach et al., 2018), showing that higher PTSD symptom severity was present in the case of a "negative" centralization, underscore such a potential interaction. Similarly, negative PTC and PTSD symptomatology are also closely connected. For example, thinking that the world is dangerous or that one is a weak person can certainly increase hypervigilance or avoidance behavior, which in turn can maintain dysfunctional thoughts. The latest revision of the DSM underscores this idea by including negative thoughts and feelings in the PTSD criteria (DSM-5; APA, 2013). As they are thereby seen as a symptom and an integral part of the disorder, our model assuming PTC to have a mediational effect on PTSD symptoms is not necessarily suitable. In this light, investigating further cognitive aspects or processes leading to the symptom clusters of PTSD within extended structural models might be more appropriate.
Another aspect concerning the longitudinal analyses is that, in line with previous research (for an overview see Gehrt et al., 2018), the ratio of correlation sizes was similar to ours at admission but not at discharge, since it changed from pre- to posttest. For instance, while the correlation between EC and PTSD symptoms was the highest at admission, it was the lowest correlation at discharge. Again, this could indicate a shift in the meaning of EC, possibly from a negative to a positive centralization. In case of a transition of a latent construct such as EC, lagged analyses may not be appropriate from a post-hoc perspective.
One methodological aspect of the longitudinal analysis is that it could be argued that the suppressor constellation absorbed a lot of error variance and might also have helped the mediation pattern to become more clearly visible, resulting in the change from a partial mediation at admission to a complete mediation at discharge. However, extended post-hoc analyses comparing models excluding all or only the negative cross-lagged paths did not support this explanation, since the mediation patterns were still present (see Appendix B).
We see three notable strengths of this work which jointly stand out in comparison to previous studies. First, we investigated a clinical sample with PTSD suffering from substantial posttraumatic symptomatology. Also, the patients had experienced traumatic events of a diverse range, extending generalizability. Second, even though parts of our analyses regarding the comparability of the cross-lagged paths were restricted due to suppression effects, the longitudinal analyses gave more insights regarding potential interactional terms and a possible transformation of the latent EC construct. Third, our methodological approach was valuable overall due to the conduct of power analyses, SEM, and the consideration of possible suppression effects, which are often neglected.
At the same time, the suppression effects represent a methodological limitation of this study. They can often restrict SEMs, as in the present case, and are also difficult to replicate (e.g. Ghiselli, 1972). Another methodological constraint is that even though indirect effects can be statistically confirmed, we cannot rule out alternative mediators and reversed pathways; thus, the direction of the causality is not definable (Fiedler et al., 2011). Furthermore, even if the cross-lagged paths had been interpretable and had given indications of temporal orders, more than two points of time are generally necessary for inferences about causal directions in cross-lagged panels. Lastly, self-reported measures can generally cause bias.
In conclusion, our results indicate that reducing negative PTC of centralized events should be a core target for the treatment of PTSD. Thereby, reframing, in terms of a transformation of the cognitions' "color" from negative (maladaptive) toward positive (constructive) appraisals, could play a more important role than mere decentralization. Thus, the use of valence-modified questionnaires as proposed by Broadbridge (2018) or Teale Sapach et al. (2018) might bring more distinct insights when assessing improvements during therapy: Besides a reduction in negative PTC, a shift to more positive appraisals might be essential.
Future research is necessary to confirm the mediational relationship between EC and PTSD symptoms via negative PTC. Thereby, potential interaction terms should be addressed. Future studies could also test with latent constructs whether there is a change in the meaning of EC reflected in different weights of measured variables at preand posttest. Ideally, further examinations should generally include more than two assessments to conclude possible causal influences. The inclusion of other associations that may impact the relationship of the variables, specifically the positive facet of PTC and PTG, could be complemented in order to get additional insights.
Funding Open Access funding enabled and organized by Projekt DEAL. This study was not funded by any grants.
Data Availability R analyses are available in the Appendix. The data are not publicly available due to containing information that could comprise the privacy of participants.
Conflict of interest
The authors declare that they have no conflicts of interest.
Ethical Approval Data collection and analysis was approved by the ethical commission of the Psychiatric University Hospital Charité.
Informed Consent
Informed consent was obtained from all participants of this study, including the publication of the data.
Animal Rights This article does not contain any studies with animals performed by any of the authors.
Hidden Realities of Infant Feeding: Systematic Review of Qualitative Findings from Parents
A growing global conversation regarding the realities and challenges that parents experience today is ever-present. To understand recent parents' attitudes, beliefs, and perceptions regarding infant feeding, we sought to systematically identify and synthesize original qualitative research findings. Following the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) framework, electronic databases were searched with a priori terms applied to title/abstract fields and limited to studies published in English from 2015 to 2019, inclusive. Study quality assessment was conducted using the Critical Appraisal Skills Programme (CASP) checklist, and thematic analyses were performed. Of 73 studies meeting the inclusion criteria, four major themes emerged: (1) Breastfeeding is best for an infant; (2) Distinct attitudes, beliefs, and perceptions of mothers who breastfeed, and of those who could not or chose not to breastfeed, are evident; (3) Infant feeding behaviors are influenced by the socio-cultural environment of the family; and (4) Parents' expectations of education and support from health care providers addressing personal infant feeding choices are not always met. This systematic review, guided by constructs within behavioral models and theories, provides updated findings to help inform the development of nutrition education curricula and public policy programs. Results can be applied within scaled-up nutrition and behavioral education interventions that support parents during infant feeding.
Introduction
Nutrition during the first 1000 days, spanning from conception to age 24 months, has critical influence on the immediate and long-term physical and cognitive development of infants. The period from birth through the first 12 months characterizes a unique time when parents or caregivers make essentially all feeding decisions about what and how their infant is offered food [1]. Although the definition of a modern family is changing [2], parents are currently described as the main caregivers of children in the home [3] and infant feeding is a large component of that care that encompasses the social, cultural, and economic structure of a parent's life [4].
Significant progress with improved infant feeding and nutrition has been realized through nutrition education efforts, yet childhood growth faltering, as evidenced by the number of children at both the lower and upper percentiles of the World Health Organization growth standards, remains a significant public health concern across the globe [5]. Breastfeeding rates are below global targets [6], particularly in high-income countries [7], and assessment of parental complementary feeding behaviors has identified room for improvement in all regions studied [8-10]. Understanding the current modifiable determinants influencing today's parents' feeding choices and behaviors is essential for providing support and education.
Education strategies likely to benefit parents are guided by a theory of health behavior, and evidence indicates that utilization of behavioral models and theories for nutrition education interventions improves effectiveness [11,12]. Within the often applied Social-cognitive Theory, Theory of Planned Behavior, and Health Belief Model, infant feeding constructs (concepts) include parental feeding attitudes, beliefs, perceptions, social norms, environmental constraints, as well as skills and knowledge [13]. Understanding the underlying psychosocial drivers, or "hidden realities" related to infant feeding behaviors of parents provides insights for developing, improving, and scaling nutrition education interventions. Ethnographic and qualitative research methods are well suited to capture these social-cognitive constructs [14,15].
Meta-synthesis of qualitative studies and systematic qualitative reviews have previously contributed to an understanding of parent's perspectives on infant and child dietary patterns unique to low-and middle-income countries [16][17][18]. Feeding experiences of migrant and refugee women in Australia have been assessed by qualitative synthesis of publications through 2014 [19], and men's views, perceptions, and experiences with infant feeding have been recently summarized [20,21]. In addition, via a systematic qualitative review of studies through 2014 [22], knowledge has been expanded related to factors that influence parent's timing, choices, and process of transitioning their infant's diet to family foods. Including a majority of studies published through 2015, parent's experiences and perceptions of complementary food and feeding recommendations have also been reviewed [23]. Meta-ethnographic and systematic qualitative reviews have specifically addressed mother's experiences with breastfeeding [24][25][26], yet references within these reviews may not reflect current social-cognitive constructs associated with infant feeding of modern parents, as reviews have not included studies published after 2015. As such, an update to previous research syntheses is needed to investigate if there are new developments within more current literature.
The aim of this study was to provide a current and comprehensive synthesis of original qualitative literature findings related to parents' attitudes, beliefs, and perceptions regarding infant feeding and to identify factors that influence parents' infant feeding decisions. As only studies published between 2015 and 2019 are included, our results provide a new assessment of the most recent parent perspectives on infant feeding.
Materials and Methods
Guidelines from the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement [27] were followed within this qualitative review synthesis. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) flowchart was utilized for reporting the different phases of searching, screening and identifying studies for inclusion in the qualitative synthesis [28].
Search Strategy and Study Selection
A pilot literature search strategy with terms of "infant, feeding, perception, attitude, and belief" was conducted in March 2019 to provide an initial overview of the literature and to help inform the final search strategy. The search terms and process utilized in the final search strategy included: infant* AND parent* OR mother* OR father* OR caregiver*; AND feeding* OR "feeding behavior*" OR "infant feeding" OR breastfeed* OR breast feed* OR bottlefeed* OR "bottle feed*" OR formula* OR "infant formula" OR "baby formula" OR wean* OR "complementary feeding" OR "baby food*"; AND perception* OR attitude* OR belief* OR perspective* OR view* OR emotion* OR influence* OR feel* OR view*; AND qualitative OR "qualitative study" OR "qualitative analysis" OR "qualitative interview" OR "qualitative research" OR ethnograph* OR "thematic analysis" OR "focus group*" OR interview*. The search strategy was applied to electronic scientific databases of Medline, PsycInfo, and Cochrane Database of Systematic Reviews, with limits on year (2015-2019, inclusive) and English language. Reference lists of recently published studies were hand searched for additional potential inclusions.
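For illustration only, the short sketch below assembles these term groups into a single Boolean query string (Python). Treating each semicolon-delimited group as OR-joined terms, with the groups joined by AND, is our reading of the strategy above; the variable names are ours, and the snippet is not part of the original protocol.

```python
# Assemble the review's search strategy into one Boolean query string.
# Term groups are transcribed from the text above; OR within groups and
# AND between groups is our interpretation of the stated strategy.
population = ["infant*"]
caregivers = ["parent*", "mother*", "father*", "caregiver*"]
feeding = ["feeding*", '"feeding behavior*"', '"infant feeding"',
           "breastfeed*", '"breast feed*"', "bottlefeed*", '"bottle feed*"',
           "formula*", '"infant formula"', '"baby formula"', "wean*",
           '"complementary feeding"', '"baby food*"']
constructs = ["perception*", "attitude*", "belief*", "perspective*",
              "view*", "emotion*", "influence*", "feel*"]
design = ["qualitative", '"qualitative study"', '"qualitative analysis"',
          '"qualitative interview"', '"qualitative research"',
          "ethnograph*", '"thematic analysis"', '"focus group*"',
          "interview*"]

def or_block(terms):
    """Join one term group as a parenthesized OR expression."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(g) for g in
                     [population, caregivers, feeding, constructs, design])
print(query)
```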
Studies were required to have enrolled a parent or primary caregiver of an infant up to 1 year of age and to have a focus on infant feeding. Included studies were required to have utilized qualitative data analyses; if mixed methods were reported in an individual study, findings from the qualitative components were included. Any discrepancies with study inclusion were discussed by authors and resolved by consensus. Excluded studies were those that enrolled preterm infants or infants with morbidities, or that enrolled only adolescent-age mothers, pregnant women, HIV+ mothers, or women with a history of infertility. Studies that addressed only baby-led weaning, or father-specific studies published before 2018, were also excluded, as recent reviews have included such findings.
Study Quality Assessment and Data Reporting
Studies were evaluated for quality and internal validity using the Critical Appraisal Skills Programme (CASP) tool for qualitative research [29]. Completeness of reporting and potential for bias were addressed within the tool, as well as the appropriateness of the study design, methods, data collection, and analysis methods used. Since CASP does not use assessment scores, we adopted a 3-point rating system similar to others [16,17,19,26,30,31]. For each checklist item, studies were scored with 2 points if a CASP criterion was met, 1 point if unable to determine, and 0 points if the standard was not met. Any disagreements in quality appraisal were resolved by author discussion. As there is no consensus about which, if any, quality criteria should be applied to qualitative research synthesis [31], and due to the risk of losing new insights [22,23], quality was not used as an exclusion criterion in the current review.
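As a minimal illustration of this 3-point rating, the sketch below (Python) sums per-item points into a study-level score out of 20; the example ratings are hypothetical placeholders, not data from the review.

```python
# Score one study on the 10-item CASP checklist using the 3-point system
# described above: met = 2, unable to determine = 1, not met = 0 (max 20).
CASP_POINTS = {"met": 2, "unclear": 1, "not_met": 0}

def casp_score(ratings):
    """Sum points across the 10 checklist items."""
    assert len(ratings) == 10
    return sum(CASP_POINTS[r] for r in ratings)

example_ratings = ["met"] * 8 + ["unclear", "not_met"]  # hypothetical study
print(casp_score(example_ratings), "of 20 points")      # -> 17 of 20 points
```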
Thematic synthesis [32], as utilized in other qualitative reviews [16][17][18]22,23], was the qualitative evidence synthesis method employed. This approach is designed to identify new themes and concepts while maintaining the conclusions of the individual primary studies. The process included becoming familiar with the data by open-minded reading of each study, line-by-line extraction and coding of study findings, and organization into first-order descriptive themes, sub-themes, and higher-order major themes. After coding the results of selected studies, any disparities were addressed by author discussion.
Results
The literature search identified 901 unique papers published between 2015 and 2019 that potentially brought insight to parents' attitudes, beliefs, perceptions, and influencers regarding infant feeding behaviors. Following title and abstract screening against inclusion criteria, full texts of 119 papers were coded and assessed for eligibility. After removing 46 studies not meeting inclusion criteria, 73 original qualitative studies served as the base for this review. Details of the studies screened, included, and excluded are shown in Figure 1.
More than half (55%) of the studies were published in the past 3 years, with yearly counts of 2019 (n = 17), 2018 (n = 11), 2017 (n = 12), 2016 (n = 18), and 2015 (n = 15). A majority (82%) of the studies were conducted with parents from North America (n = 28), Europe (n = 10), the United Kingdom (n = 10), and Australia (n = 12). A limited number of studies included parents from Asia (n = 8) or Africa (n = 5). Focus groups (n = 28) and interviews (n = 45) were most often utilized as methods of data collection, and some studies included a variety of methods. Studies were dominated by experiences with milk feeding (n = 55), and the majority enrolled only mothers (n = 52). Details of the 73 studies included in the systematic literature review synthesis are identified in the Supplementary Materials.
Study quality assessment ratings were generally moderate-to-high, and all study scores met at least 16 of 20 points. Of the studies with the lowest quality rankings (n = 7), most met at least 8 of the 10 criteria within the CASP [29], with exceptions in the categories of incomplete description of the research approach and disclosure of the relationship between researcher and participant. Aside from publication year, no differences were identified between studies of moderate quality (e.g., 16 points), which were published prior to 2017, and those of higher quality (20 points).
Ethical standards were addressed in all of the studies, and no significant failings within methods or analyses were detected within any of the included studies.
Thematic analyses identified four major themes: (1) breastfeeding is best for an infant; (2) distinct attitudes, beliefs, and perceptions of mothers who breastfeed, and those who could not or chose not to breastfeed, are evident; (3) infant feeding behaviors are influenced by the socio-cultural environment of the family; and (4) parents' expectations of education and support addressing personal infant feeding choices from health care providers are not always met.
Breastfeeding Is Best for an Infant
Parents perceived that "breastfeeding is the best way to feed infants," whether or not they had personal breastfeeding experience. This finding was consistent for parents from studies within all geographical regions, with the exception of two studies in which some mothers reported that breastfeeding (BF), in general, was not acceptable for infant feeding [59] or that colostrum was not considered appropriate [60]. Results from several studies indicated that parents believed that "breastfeeding is the natural way to feed infants" [34,47,56,61], or "the normal way to feed" [62], and that BF is the healthier option for infant milk feeding [41,55,63].
Distinct Attitudes, Beliefs, and Perceptions of Mothers Who Breastfeed, and Those Who Could Not or Chose Not to Breastfeed, Are Evident
Positive attitudes, beliefs, and perceptions toward BF were frequently reported [40,57,64], such as "breastfeeding creates happiness" [45] and "breastfeeding was a satisfying experience" [65], yet few studies identified positive descriptors from women who chose not to BF [64]. Overall, studies more often reported negative terminology for these constructs as parents described their feeding experiences.
Studies that included mothers who BF reported "negative feelings of judgement from others" [35,54,[66][67][68], and several identified "stigma, shame, and personal embarrassment" about feeding in public [39,69,70], which may have contributed to their reported sense of isolation [38,67]. Some women described shame as experienced and internalized through exposure of their body [67] or a negative body image [59]. Feelings of guilt for not finding BF easy [38], for taking time out of work to BF [66], or for continuing to BF although finding the practice aversive [71] were reported. Some mothers felt overwhelmed, anxious, and frustrated with the intensity and unpredictability of breastfeeding [37] and found that "breastfeeding was demanding, not as easy as it should seem, and required perseverance" [72]. Developing resilience to judgement, and recognizing that "everyone has something to say about breastfeeding" [35,66,73], were coping skills that mothers reportedly used to help maintain their BF goals.
One study identified mothers who intended not to BF due to being fearful of the practice, or due to a perception that their behaviors were incompatible with BF, and these mothers were comfortable with their choice [64]. However, the majority of studies with mothers who could not, or elected not to, BF, particularly for the duration they intended, reported "feelings of shame, guilt, or stigma" [33,34,65,67,[73][74][75][76]. The idealism of "striving to be a good mother" via BF [36,42,55,58,67,68,75,77] created conflict, with potential negative influences on a woman's self-perception of what it means to be a "good mother". These findings highlight the divide between perceptions of infant feeding idealism and the reality experienced by many parents.
In studies that included mothers who exclusively BF, or who BF for longer durations than typical in their culture, these mothers had high internal perceptions of their confidence and determination in their BF decision, despite some challenges in reaching their goals [35,64,78]. Mothers described individual (e.g., determination, self-efficacy for BF) and interpersonal (e.g., social support) coping resources as facilitators of BF maintenance [33,34,53,73]. Social support, "particularly enlisting a female relative, friend, or partner, was important for BF continuation" [39,53,56,72,79,80], and one study [81] identified social media as a maternally perceived facilitator of BF duration and maternal support.
Other studies described that mothers differed in their BF practices depending on whether their attitudes and beliefs were infant-centered (more likely to BF) or maternal-centered (less likely to BF) [48,50,54]. Some mothers perceived that their diet may be nutritionally inadequate to support BF [46,54] or believed that "exclusive BF provided insufficient nourishment for their infant," which led to the early introduction of complementary feeding [43,80,82]. Perceived insufficient breastmilk production was most frequently reported as an influencing factor for cessation of BF [40,42,46,48,[54][55][56][83][84][85][86][87].
Infant Feeding Behaviors Are Influenced by the Socio-Cultural Environment of the Family
Although a father's role in parenting may be changing, studies within this review primarily recruited mothers, and identified mothers as the primary managers of infant and young child feeding [80,88]. Fathers deferred infant feeding decisions to the mother, valued BF, and believed it to be healthy and natural for babies. As some fathers had seen their partners struggle with BF, they acknowledged that BF was more difficult than they had perceived [62], and some viewed BF as a potentially harmful practice for mothers [61]. Studies that included co-parents reported parental agreement that BF affected the relationship with their infant in different ways, with parents negotiating, adapting to, and accepting different feeding roles [47]. Involvement in feeding over the first few years was described in terms "including ongoing discussions and collaborations around co-parenting related to feeding" [88,89].
Studies that included other family members of the mother [55,69,70,80,84,89,90] identified that infant feeding attitudes, beliefs, and perceptions may be "generationally passed down" and potentially impact infant feeding beliefs and behaviors [59]. This finding was evident in the reported influence of family elders and grandmothers [43,87,89]. Overall, a mother's immediate family reportedly had a strong influence on her infant feeding decisions and behaviors [49,84,85]. Advice from family was often contradictory to nutrition-based feeding guidelines, and to show respect to family members, some mothers incorporated family advice instead of recommended practices [91].
Family, tradition, and culture (social norms within the parent's environment) shaped parental infant feeding beliefs and perceptions about when to begin complementary feeding and what first foods to offer. In the studies in this review addressing the introduction of solid or semi-solid foods, "beliefs, values, and perceived norms" were a central influence on complementary feeding practices [43,44,49,51,55,59,85,88,89,[91][92][93][94][95][96][97][98][99], which brought challenges to immigrant mothers raising children apart from their culture of origin [100,101]. Parents perceived that "everyone gives you advice" [102], and complementary feeding was viewed as a natural progression with the goal of enjoyment of food and development of an expansive palate [95]. Consideration of infants' own preferences [93], as well as responsiveness to family needs and wants [92], were determinants of food choices. "Cost, location, and access to fresh and traditional foods" [85,93,96] were priorities. Some parents reported dissatisfaction with the "one size fits most" approach of infant feeding guidance, as "every child is different" [97,102], and reported relying on their own instincts or cultural familiarity when deciding what and how to feed their infant.
Parents' Expectations of Education and Support Addressing Personal Infant Feeding Choices from Health Care Providers Are Not Always Met
Parents desired professional and individualized instruction regarding infant feeding that was in keeping with their attitudes, beliefs, culture, and feeding decision from various sources [83], including physicians, pediatric nurses, lactation consultants, and professionals working in health care centers or public nutrition programs, described collectively here as health care providers (HCP). In contrast, studies illuminated that many parents found "infant feeding advice, support, and education from their HCP inadequate, missing completely, inconsistent or contradictory" [36][37][38]41,42,44,46,[49][50][51]53,55,57,63,72,74,76,83,97,98,103,104]. As identified within some studies, while it is important to promote and maintain BF, it is also necessary to ensure that the care, education, and needs of parents and their infants who are not BF are met [74,76], without stigmatizing parents who do not BF [68]. Some parents expressed distrust of the feeding information and recommendations provided by HCP and looked to family or peers for more culturally sensitive and practical infant feeding advice [41,100].
A need for strategies and support that "address parents' personal, cultural, and ideological constraints with infant feeding" was identified within several studies [33,51,56,67,74]. Additionally, a desire for expanded infant nutrition education that included parents' wider community, such as family members, rather than only mothers, was identified within some studies [67,79,80,105]. Role models and support groups were noted as important by parents, but perceived as inadequate [38,72,103].
Discussion
At the individual or parent level, nutrition education focuses on building a person's capacities for the adoption or change of nutrition-related behaviors conducive to health and wellness. Previous research has suggested that infant feeding is likely to be predicted by socio-cognitive variables [11][12][13], and within this qualitative review we examined mediators and cognitive constructs that potentially influence parents' infant feeding behavior by identifying their infant feeding attitudes, beliefs, and perceptions. Findings are directly applicable within a nutrition education theoretical framework aimed at improving parental infant feeding behaviors for better health and nutrition of infants and young children.
Results from this review identified that parents predominately agree that breastfeeding is the best way to feed infants. Similar to conclusions from older systematic reviews [24,25], mothers in recent studies described breastfeeding in terms of their "perceived expectations, compared to the reality they experienced." Similarly, a dichotomous desire to be a good/perfect mother (compared to feeding approaches perceived as inconsistent with "good mothering") [22,25] was realized in the current review.
Although some large studies have reported that mothers often decide about infant feeding on their own initiative [106], previous qualitative reviews have concluded that family and cultural practices are strong influences on infant feeding behaviors [16][17][18][19]23,26]. Our results expand upon previous themes with specific new findings. In particular, parents report a desire, and have expectations, that they will be offered factual education related to their individual and personal infant feeding choices, provided with sensitivity, in a non-judgmental manner. Education and support that address family and cultural priorities, empower parents to adopt recommended infant feeding guidance, and prevent or address internalized feelings of shame or guilt represent an unmet opportunity within nutrition education.
The current qualitative review was performed according to accepted guidelines [27] and appropriate thematic synthesis methods [32], with detailed inclusion of individual study objectives, methods, and results provided (Supplementary File 1). In addition, following the CASP tool for qualitative research [29], individual study quality was rated as moderate-to-high, increasing our confidence in the inputs to this synthesis. Moreover, this review included only studies published within the last five years. As such, the findings of this review represent a methodologically sound and comprehensive synthesis of the most recent parent perspectives regarding the "hidden realities" of infant feeding that can be incorporated within behavioral-based nutrition education efforts.
Despite the current light that this literature on parents' attitudes, beliefs, and perceptions of infant feeding contributes, this thematic synthesis is not without its limitations. Firstly, the majority of studies included in the current review were conducted in the developed world and published in English. Despite this limitation, our results identified that infant feeding behaviors occur via the socio-cultural environment of the family. Given the consistency of this finding, we anticipate that results would not differ if additional studies from more diverse populations were included. Secondly, as studies did not consistently offer author-generated quotations, and there is a lack of consensus for identifying the priority of quotes from participants within individual studies, we chose to adapt author conclusions as quotations within this work. Our approach was diligent and consistent with standard qualitative evidence synthesis methods, yet it is possible that some lower-order themes were not included. Thirdly, in the majority of studies, the term parent was frequently synonymous with maternal; further research could explore infant feeding constructs with more clearly defined primary caregivers and support persons within the individual qualitative studies. Lastly, most studies addressed the perceptions of women who had breastfed, or were currently breastfeeding, directly from the breast. Few studies addressed participant weight status or other known confounders related to BF. Studies with parents who chose to provide breastmilk by cup or bottle, provide infant formula, or used mixed-feeding methods would provide additional insight.
Conclusions
Parental infant feeding attitudes, beliefs, and perceptions are influenced at multiple levels, including individual factors (self-efficacy, determination to meet goals, wanting to be a "good mom") and external influences (social support, the "village always has an opinion", family and culture), as well as the reported difficulty of finding educational resources to overcome challenges. Parents desire factual education and support that addresses their personal feeding choices and ideology, within a culturally sensitive approach from health care providers.
Conflicts of Interest:
A.M.D. and R.S.C. are employed by Nestlé Nutrition. S.F. has received research grants from government, charitable organizations, and industry, and consultancy fees and honoraria from government and industry, including companies that produce infant formula; no honorarium or funding was received for this study. R.F. and A.Z. disclose no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 2020-04-30T09:08:07.811Z | 2020-04-27T00:00:00.000 | {
"year": 2020,
"sha1": "42fd33b4b6cf2107fd2a62ca365c40fb95646c4d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-328X/10/5/83/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91c7c1e5354120d5bd2cbc0e59ebff36a550b583",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
3541038 | pes2o/s2orc | v3-fos-license | Ovarian low and high grade serous carcinomas: hidden divergent features in the tumor microenvironment
Only recently has low-grade serous carcinoma (LGSOC) of the ovary been recognized as a disease entity distinct from the more common high-grade serous carcinoma (HGSOC), with significant differences in pathogenesis and clinical and pathologic features. The present study aimed at evaluating whether the different natural histories and patterns of response to therapy demonstrated for LGSOC and HGSOC, along with a diverse genomic landscape, may also reside in the supporting tumor stroma, specifically in the state of differentiation and activation of tumor-associated macrophages (TAMs). TAMs play complex roles in tumorigenesis since they are believed to possess both tumor-rejecting (M1 macrophages) and tumor-promoting (M2 macrophages) activities. Here we showed that, when compared to HGSOC (n = 55), LGSOC patients (n = 25) exhibited a lower density of tumor-infiltrating CD68+ macrophages, along with an attenuated M2-skewed (CD163+) phenotype. Accordingly, assessment of intratumoral vascularization and of matrix metalloproteinase 9 expression (a key protein involved in tumor invasion and metastasis) revealed lower expression in LGSOC compared to HGSOC patients, in line with emerging evidence supporting a role for TAMs in all aspects of tumor initiation, growth, and development. In conclusion, results from the present study demonstrate that microenvironmental factors contribute greatly to determining the clinical and pathological features that differentiate low- and high-grade serous ovarian carcinomas. This understanding may increase possibilities and opportunities to improve disease control and design new therapeutic strategies.
INTRODUCTION
Ovarian cancer is the most deadly gynecologic malignancy [1]. This insidious disease is often diagnosed in an advanced stage, develops rapidly, and therefore has a poor prognosis. Over 90% of ovarian malignancies are categorized as epithelial ovarian cancers, and currently five main types are identified: high-grade serous, low-grade serous, mucinous, endometrioid, and clear-cell carcinoma. Low-grade and high-grade serous ovarian cancers together comprise ~70% of all epithelial ovarian tumors and account for the majority of deaths. In line with the evidence that ovarian cancer represents a group of distinct entities with distinct types of carcinogenesis, it is now widely accepted that low-grade and high-grade serous tumors are essentially distinct diseases, exhibiting distinct genetic alterations, molecular patterns, and clinical behaviors. Specifically, the former develop from well-recognized precursors and behave in an indolent fashion, are characterized by specific mutations, including KRAS,
BRAF, and ERBB2, and are relatively genetically stable. In contrast, HGSOCs are suggested to be more aggressive, found at an advanced stage, and genetically highly unstable. The majority have TP53 mutations, but rarely harbor the mutations detected in the low-grade serous tumors [2].
Overlaying this complexity is the contribution of supporting cells, and the tumor microenvironment is now increasingly recognized to play an important role in epithelial ovarian cancer [3]. The microenvironment of solid tumors is indeed characterized by a reactive stroma with abundant inflammatory cells, dysregulated vessels, and proteolytic enzymes. Inflammatory infiltrates include a rich supply of macrophages, which are recruited by tumor cells through their secretion of chemokines [3,4]. Actually, tumor cells and macrophages engage in a bidirectional interaction through the exchange of soluble mediators, which influence cell behavior and phenotype [4]. Macrophages constitute an extremely heterogeneous population which differentiates into distinct types, schematically identified as M1 (or classically activated) and M2 (or alternatively activated) [5]. "Classically activated" M1 macrophages contribute to tumor rejection through type 1 cytokine production and antigen presentation, whereas "alternatively activated" M2 macrophages enhance angiogenesis and remodeling through type 2 cytokine production. It is now generally accepted that tumor-associated macrophages (TAM) most closely resemble M2-polarized cells, creating an immunosuppressive microenvironment and finally promoting tumor invasion, angiogenesis, and metastasis [5]. If the tumor is small, TAMs derived from surrounding tissue macrophages represent the majority of TAMs, while as the cancer mass grows and an intratumoral vascular network forms, monocyte-derived TAMs become the main source of TAMs [6]. In the primary tumor, TAMs create an immunosuppressive microenvironment promoting angiogenesis, tumor invasion, motility, and intravasation. During metastasis, macrophages prime the pre-metastatic site and promote tumor cell extravasation, survival, and persistent growth [7]. Previous studies aimed at characterizing TAMs in ovarian cancer demonstrated that they most closely resemble M2-polarized macrophages and express M2 markers such as CD163, CD204, CD206 (Mannose Receptor), and IL-10 [4]. Moreover, co-culture of human macrophages with ovarian cancer cell lines was associated with polarization to the M2 phenotype [8].
Although an increasing amount of evidence suggests that TAMs display a unique activation profile in ovarian tumors, many questions remain; among these, the contribution of this immune cell type in each of the histopathological ovarian cancer subtypes is likely to be complex and requires investigation. The present study aimed at evaluating whether the different natural histories and patterns of response to therapy demonstrated for LGSOCs and HGSOCs, along with a diverse genomic landscape, may also reside in the supporting tumor stroma, specifically in the state of differentiation and activation of TAMs, which, in turn, may promote different tumor development and spread.
RESULTS
The study population included 25 LGSOC and 55 HGSOC patients. Patient characteristics are summarized in Tables 1 and 2. The mean age of patients with LGSOC was significantly lower than in the HGSOC group (49.8 ± 3.0 and 56.8 ± 1.5, respectively, mean ± SEM, p = 0.03, Table 1), in keeping with literature data [9,10]. In addition, HGSOC patients were more likely to have advanced-stage disease compared to LGSOC ones (p = 0.01, Table 1). Follow-up information was available for all cases, with LGSOC and HGSOC patients having mean follow-up times of 51 (9-180) and 47 (7-140) months, respectively, from the date of surgery (Table 2). On follow-up, most LGSOC patients were alive without evidence of recurrence, while the majority of HGSOC patients eventually recurred and died of disease (Table 2).
Microvessel density in LGSOCs and HGSOCs
Clinical evidence shows a correlation between local macrophage density and areas of intense angiogenesis defined by the presence of microvessels, suggesting that the angiogenic switch in tumors depends on macrophage infiltration [20]. We thus assessed the microvessel density (MVD) in tumors using CD31, a specific and sensitive endothelial marker for formalin-fixed, paraffin-embedded tissues [21]. LGSOC patients had significantly lower microvessel densities compared to HGSOCs, the latter showing a dense network of vessels with multiple branching (MVD = 5.4 ± 0.5 and 11.2 ± 0.5 vessels/HPF, respectively, mean ± SEM, p < 0.0001; Figure 3A and 3B). Data stratification per stage and subsequent comparison confirmed significant differences between the two histotypes, independently of disease stage (p = 0.04 and p < 0.0001 for early- and advanced-stage patients, respectively, Figure 3B). Notably, the Spearman rank correlation showed a significant positive correlation between MVD and CD163+ macrophage density (r = 0.5, p < 0.0001) (Figure 3C).
MMP-9 expression in LGSOCs and HGSOCs
Matrix metalloproteinase-9 (MMP-9) is a zinc-dependent peptidase belonging to the gelatinase subfamily of MMPs. It is excreted as an inactive proenzyme that undergoes activation upon cleavage by different types of extracellular proteases, and mediates extracellular matrix (ECM) degradation, thus playing a key role in tumor invasion and metastasis and in tumor-induced angiogenesis. In some tumors, TAMs appeared to be a major source of MMP-9 [22]. On the basis of these findings, we used immunohistochemistry to assess MMP-9 expression in our series of high- and low-grade serous ovarian cancers (Figure 4A and 4B). The data obtained demonstrated that LGSOC patients expressed significantly less MMP-9 protein than HGSOC ones (IRS 6.4 ± 0.6 and 8.9 ± 0.5 for LGSOCs and HGSOCs, respectively, mean ± SEM, p = 0.006). After stratification per stage and subsequent comparison, differences in MMP-9 expression between the two histotypes remained significant for stage III-IV only (IRS 6.6 ± 0.8 and 9.0 ± 0.6 for LGSOC and HGSOC, respectively, mean ± SEM, p = 0.04), while no significant changes were found at lower stages (Figure 4B). Spearman correlation analysis showed a significant positive correlation between MMP-9 and CD163+ macrophage density (r = 0.2, p = 0.04) (Figure 4C).
E-cadherin expression in LGSOCs and HGSOCs
Epithelial-to-mesenchymal transition (EMT) plays a fundamental role in tumor progression and metastasis formation, and accumulated evidence has demonstrated that TAMs play a critical role in the regulation of EMT in cancer [23]. To verify this hypothesis, we chose membranous E-cadherin as an epithelial marker and evaluated its expression in our series of low- and high-grade serous ovarian cancers (Figure 5A and 5B). The results obtained did not show any significant differences in protein expression between LGSOC and HGSOC samples (IRS 7.1 ± 0.7 and 8.1 ± 0.5 for LGSOC and HGSOC, respectively, mean ± SEM). Paired comparison after data stratification per stage confirmed the similar distribution of E-cadherin levels between the two series examined (Figure 5B). As expected on the basis of these results, Spearman analysis did not show any correlation between CD163+ macrophage density and E-cadherin expression (r = −0.001, p = 0.98, Figure 5C).
DISCUSSION
Significant clinical, pathologic, and pathogenetic differences have been described between LGSOC and HGSOC, although most research on the diversity of these two cancers has focused on the impact of cancer cell biology [9]. However, cancers develop in composite tissue environments (which they depend upon for growth, invasion, and metastasis) consisting of matrix components, inflammatory cells, and stromal cells. Therefore, in this study we sought to investigate whether, besides tumor cell-intrinsic factors, microenvironmental factors can contribute to determining the clinical and pathological features that differentiate low- and high-grade serous ovarian carcinomas. Notably, we show here, for the first time, that LGSOC and HGSOC exhibit striking differences in tumor-associated macrophage infiltration and, more importantly, in their activation profile, findings which were in turn related to different tumor vascularization and expression of key proteins involved in tumor growth and metastasis.
Indeed, we found that, when compared to HGSOC, LGSOC patients showed a lower density of tumor-infiltrating CD68+ macrophages along with an attenuated M2-skewed (CD163+) phenotype. Notably, this trend was confirmed when patients with early- and late-stage disease were analyzed separately, suggesting that the subpopulations of EOC cells composing the diverse tumors can differentially affect the process of immune cell infiltration and differentiation. In line with these results, and with the notion that low-grade tumors have better outcomes than high-grade tumors, literature data strongly support a role of TAMs as prognostic factors in ovarian cancer [reviewed in 4]. As far as we know, the results described here are the first showing differences in TAM distribution patterns between low- and high-grade serous ovarian cancers. In fact, previous studies in this area did not analyze LGSOC and HGSOC separately, but considered the serous histotype as a whole, also producing contradictory results. In detail, while some authors demonstrated significant differences in TAM infiltration according to cancer histotype (TAMs most frequently infiltrating serous and mucinous tumors, compared to other histotypes) [24], others did not find any relationship between the density of CD68/CD163-positive cells and ovarian cancer histological type [18]. Moreover, our results also showed that there were no significant changes in the overall TAM profile when comparing, within the same subtype, early- and advanced-stage disease, suggesting that distinct tumor microenvironments support the growth and development of low- and high-grade serous ovarian cancer independently of tumor stage.

[Figure 1 caption: Histological features of LGSOCs and HGSOCs. Low-grade serous carcinoma of the ovary is characterized by relative uniformity of the cells and up to 12 mitoses per 10 high-power fields; high-grade serous carcinoma is characterized by pleomorphism, marked nuclear atypia, and > 12 mitoses per 10 high-power fields.]

[Figure 2 caption, partial: (A) scatter plot and bar graphs of CD68+ macrophage density (mean ± SEM), overall and after stratification per stage (early stage, n = 10 and n = 8; advanced stage, n = 15 and n = 47, for LGSOCs and HGSOCs, respectively; ***p < 0.0001); (B) representative immunohistochemical staining of CD163+ macrophages in LGSOC and HGSOC (magnification 20× and 40×), with scatter plot and bar graphs as in (A) (***p < 0.0001); (C) CD163/CD68 ratio (mean ± SEM) in the entire population (*p = 0.02) and after stratification per stage (*p = 0.049).]
Tumor-associated macrophages have been found to promote ovarian tumors by employing several different strategies, including the promotion of angiogenesis. Indeed, TAMs preferentially accumulate in hypoxic and necrotic regions within tumors and cooperate with tumor cells to boost the angiogenic switch [25]. Several recent studies have indeed demonstrated that not only do TAMs function as major producers of a panel of pro-angiogenic factors (i.e., growth factors, cytokines, and chemokines) in malignant tumors, but they also induce a pro-angiogenic program in tumor cells [22]. In keeping with these literature data, we found a strong association between intratumoral TAM density and microvessel density, such that CD31 expression closely paralleled the density of tumor-infiltrating CD163+ macrophages in low- and high-grade serous ovarian cancer. Notably, molecular support for our observation in clinical specimens is provided by data from Wang and colleagues [26], showing, in in vitro models, that the interaction of ovarian cancer cells and TAMs enhances the ability of endothelial cells to promote the progression of ovarian cancer.
Once the barrier of the angiogenic switch has been overcome, tumors rapidly become invasive. For metastasis to occur, a crucial step is the destruction of biological barriers, such as the basement membrane, which requires the activation of proteolytic enzymes. Key proteins in this process include the matrix metalloproteinases (MMPs) [27], and recent literature data have provided evidence of a strong association between TAM and MMP levels [22], showing that it is through the production of proteolytic enzymes and MMPs that TAMs reorganize the extracellular matrix and degrade the basement membrane [28]. Actually, Spiller and colleagues [29] demonstrated that M2c macrophages (distinguished by expression of the scavenger receptor CD163) secrete the highest levels of MMP-9. Our results fit this mechanism well, since we found a positive correlation between CD163+ macrophages and MMP-9 tumor levels, as demonstrated by Spearman analysis. Hence, LGSOC patients, showing an attenuated M2-skewed (CD163+) phenotype compared to HGSOC, also showed significantly lower MMP-9 expression in tumor samples.
Recent studies have postulated that TAMs trigger EMT through the regulation of different signaling pathways in cancer [23,30], with E-cadherin showing a negative correlation with CD68+ macrophage density [30]. However, unlike most carcinomas that dedifferentiate during neoplastic progression with loss of epithelial E-cadherin, ovarian carcinomas undergo transition to a more epithelial phenotype early in tumor progression, with increased E-cadherin expression. Subsequent reacquisition of mesenchymal features is observed in late-stage tumors, and loss of E-cadherin expression or function may occur in ovarian cancer progression [reviewed in 31]. We thus assessed E-cadherin expression in our series of low- and high-grade serous ovarian cancers to verify whether any differences occurred between the two series and, if so, whether these differences had any relationship to TAM density. The data obtained showed a similar distribution of E-cadherin levels between the two series examined, and protein expression was not correlated with the density of tumor-infiltrating CD163+ macrophages. Our data confirm previous literature reporting that ovarian epithelial cancers express high levels of E-cadherin regardless of tumor type, stage of malignancy, or stage of differentiation [32,33], with strong positivity in HGSOC described in more than 85% of cases [34]. However, some discrepancies exist, since other authors reported higher E-cadherin expression in LGSOC compared to HGSOC [35,36]. It is interesting to note, however, that mechanistic studies proved that in ovarian cancer cells E-cadherin may serve not only as an intercellular adhesion molecule, but also as an upstream regulator that triggers downstream kinase activation, which explains why E-cadherin is always expressed during ovarian tumor development and progression [33,37]. Additional studies on a larger number of cases are certainly needed to clarify these unresolved issues.
In conclusion, the results from the present study make a substantial contribution to the definition of macrophage subpopulations in low- and high-grade serous ovarian cancer, aligning with the drive to understand the tumor microenvironment and cancer cell biology to improve disease control.
Notably, in spite of differences in histology and clinical outcomes, patients with LGSOC and HGSOC are currently treated with the same treatments, which are not as effective in LGSOC [38]. Thus, new therapeutic strategies and novel molecular targets are needed to improve the outcome of this patient cohort, and TAMs might represent an attractive target of novel biological therapies. As recently reviewed by Williams and colleagues [28], macrophage-targeted intervention strategies may actually represent a cornerstone in cancer treatment, particularly in association with conventional or novel ovarian cancer interventions.
Patients
This retrospective study included specimens collected for clinical purposes between the years 2002 and 2014 at the Gynecologic Oncology Unit, Catholic University of Rome, Italy. Histologic grading of ovarian carcinomas was revised according to the 2014 WHO Classification of Tumors of the Female Genital Tract [39].
A total of 25 LGSOC and 55 HGSOC tissue samples were included in the study. In our Institution, written informed consent is routinely requested from patients for the collection of their clinical data, as well as paraffin-embedded sections, for research use. Clinical information was obtained from the existing medical records in accordance with institutional guidelines. All data were managed using anonymous numerical codes.
Evaluation of immunohistochemical staining
Tumor-associated macrophage (TAM) densities were assessed by counting the number of intratumoral macrophages with positive staining for the phenotype marker(s) in four representative 400× high-power fields (total tumor surface: 1 mm²). Macrophage density was expressed as cells/mm². For the quantitative analysis of microvessel density, CD31-positive intratumoral microvessels were counted blindly under a microscope field (×400 objective magnification, high-power field area = 0.24 mm²). A minimum of 4 tumor areas per section were evaluated, and the microvascular density was then expressed as the mean number of vessels per high-power field (MVD, vessels/HPF). For MMP-9, the intensity of cytoplasmic staining and the percentage of immunoreactive cells relative to total tumor cells were evaluated. The extent of expression was scored 0 for no staining, 1 = 1-10%, 2 = 11-33%, 3 = 34-66%, and 4 = 67-100%. A similar semiquantitative scale of 0, +, ++, or +++ was used to assess the intensity of staining. The two values obtained were multiplied to calculate an immunoreactive score (IRS, maximum value 12) [41]. E-cadherin immunoreactivity was recognized as a membrane staining signal, and its immunoreactive score was calculated as described above for MMP-9. Immunohistochemical assessment was carried out by two investigators blinded to groups.
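A minimal sketch of this IRS calculation is shown below (Python); the score thresholds follow the description above, while the input values are hypothetical.

```python
# Immunoreactive score (IRS): extent score (0-4, from % positive cells)
# multiplied by staining intensity (0-3, i.e., 0/+/++/+++); maximum 12.
def extent_score(percent_positive):
    """Map percentage of immunoreactive cells to the 0-4 extent score."""
    if percent_positive == 0:
        return 0
    if percent_positive <= 10:
        return 1
    if percent_positive <= 33:
        return 2
    if percent_positive <= 66:
        return 3
    return 4

def irs(percent_positive, intensity):
    assert intensity in (0, 1, 2, 3)
    return extent_score(percent_positive) * intensity

print(irs(50, 2))  # extent 3 x intensity 2 -> IRS 6
```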
Statistical analysis
Differences between groups in clinicopathological parameters were evaluated using Fisher's exact test. All other data were analyzed for homogeneity of variance using an F test. If the variances were heterogeneous, log or reciprocal transformations were made in an attempt to stabilize the variances, followed by Student's t-test. If the variances remained heterogeneous, a non-parametric test such as the Mann-Whitney U test was used. Data are reported as mean ± SEM. P values are for two-sided tests; p values ≤ 0.05 were considered statistically significant. Analyses were performed using GraphPad Prism version 5.0 for Windows (GraphPad Software, San Diego, CA).
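The sketch below (Python/SciPy) mirrors the variance-check-then-test workflow described above; the simulated data are hypothetical, and for brevity the intermediate log/reciprocal transformation step is noted in a comment rather than implemented.

```python
# Two-group comparison: F test for variance homogeneity, then Student's
# t-test if homogeneous, otherwise (after transforms would be tried)
# the non-parametric Mann-Whitney U test. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lgsoc = rng.normal(5.4, 2.5, 25)    # e.g., hypothetical MVD values, n = 25
hgsoc = rng.normal(11.2, 3.7, 55)   # e.g., hypothetical MVD values, n = 55

f_stat = np.var(lgsoc, ddof=1) / np.var(hgsoc, ddof=1)
dfn, dfd = len(lgsoc) - 1, len(hgsoc) - 1
p_f = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))

if p_f > 0.05:      # variances homogeneous
    _, p = stats.ttest_ind(lgsoc, hgsoc)
else:               # here one would first try log/reciprocal transforms
    _, p = stats.mannwhitneyu(lgsoc, hgsoc, alternative="two-sided")
print(f"two-sided p = {p:.4g}")
```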
ACKNOWLEDGMENTS AND FUNDING
This work, as part of the activities of the Unit of Translational Medicine for Women and Children Health, was partially supported by the "Associazione OPPO e le sue stanze" Onlus. | 2018-04-03T05:44:36.633Z | 2016-07-23T00:00:00.000 | {
"year": 2016,
"sha1": "ede1164def7f0764ce2bb2685ae631e02f59ec8e",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=10797&path[]=34181",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ede1164def7f0764ce2bb2685ae631e02f59ec8e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221458077 | pes2o/s2orc | v3-fos-license | Drug-target binding quantitatively predicts optimal antibiotic dose levels in quinolones
Antibiotic resistance is rising, and we urgently need to gain a better quantitative understanding of how antibiotics act, which in turn would also speed up the development of new antibiotics. Here, we describe a computational model (COMBAT-COmputational Model of Bacterial Antibiotic Target-binding) that can quantitatively predict antibiotic dose-response relationships. Our goal is twofold: we address a fundamental biological question and investigate how drug-target binding shapes antibiotic action, and we create a tool that can predict antibiotic efficacy a priori. COMBAT requires measurable biochemical parameters of drug-target interaction and can be directly fitted to time-kill curves. As a proof of concept, we first investigate the utility of COMBAT with antibiotics belonging to the widely used quinolone class. COMBAT can predict antibiotic efficacy in clinical isolates for quinolones from drug affinity (R² > 0.9). To further challenge our approach, we also do the reverse: estimate the magnitude of changes in drug-target binding based on antibiotic dose-response curves. We overexpress target molecules to infer changes in antibiotic-target binding from changes in the antimicrobial efficacy of ciprofloxacin with 92-94% accuracy. To test the generality of our approach, we use the beta-lactam ampicillin to predict target molecule occupancy at MIC from antimicrobial action with 90% accuracy. Finally, we apply COMBAT to predict antibiotic concentrations that can select for resistance due to novel resistance mutations. Using ciprofloxacin and ampicillin as well-defined test cases, our work demonstrates that drug-target binding is a major predictor of bacterial responses to antibiotics. This is surprising because antibiotic action involves many additional effects downstream of drug-target binding. In addition, COMBAT provides a framework to inform optimal antibiotic dose levels that maximize efficacy and minimize the rise of resistant mutants.
Introduction
The rise of antibiotic resistance represents an urgent public health threat. In order to effectively combat the spread of antibiotic resistance, we must optimize the use of existing drugs and develop new drugs that are effective against drug-resistant strains. Accordingly, methods to improve antibiotic dose levels to i) maximize efficacy against susceptible strains and ii) minimize resistance evolution play a key role in our defense against antibiotic-resistant pathogens.
It is noteworthy that dosing strategies for the treatment of susceptible strains (e.g., dosing level [1], dosing frequency [2], and treatment duration [3][4][5]) have recently been substantially improved, even for antibiotic treatments that have been standard of care for decades. This suggests that there likely remains significant room for optimization in our antibiotic treatment regimens. It also highlights the difficulty of identifying optimal dosing levels for new antibiotics. Indeed, optimizing dosing is one of the biggest challenges in drug development. Traditionally, antibiotic efficacy was mainly described by a single value, the minimal inhibitory concentration (MIC). While correlations between treatment success and MIC have been demonstrated, there is limited predictive power [6,7]. When susceptibility is assessed by MIC, not all patients infected with "susceptible" bacteria are successfully treated with antibiotics. Additionally, a large majority of patients with a "resistant" infection can be successfully treated with antibiotics even when the underlying infection is serious and untreated patients are unlikely to recover [8]. This was shown, for example, for patients with complicated intra-abdominal infections [7]. Reasons for this mismatch may include that the MIC only gives the minimal concentration that suppresses bacterial growth and contains no information on antibiotic efficacy above or below the MIC [9]. This makes the MIC ill-suited to describe the efficacy of the strongly fluctuating antibiotic concentrations in patients. This has led to an increase in more sophisticated dose-response measurements in which bacteria are exposed to multiple antibiotic concentrations and the kill rate is assessed at each concentration individually (pharmacodynamic profiles). However, these approaches require orders of magnitude more experimental effort than simple MIC measurements because they involve a multitude of antibiotic concentrations and time points. This process is too time-consuming when testing new drug candidates.
It is even more challenging to optimize dose levels to minimize the emergence of antibiotic resistance, both for existing and novel antibiotics. Typically, when a strain acquires resistance, not only the MIC changes, but also other properties such as the steepness of the dose-response curve and the maximal kill rate at very high concentrations [10]. Predicting the changes in the dose-response curve is therefore not trivial. Thus, a full pharmacodynamic profile should be assessed for each potential resistant strain. To this end, resistant strains must be isolated and, owing to the number of different resistance mechanisms, good saturation of the mutational target must be achieved. This requires substantial and lengthy evolutionary experiments. In addition, there remains substantial debate about which dosing strategies best prevent the emergence of resistance during treatment [11][12][13]. In this context, a useful concept that links antibiotic concentrations with resistance evolution is the resistance selection window (mutant selection window), which ranges from the lowest concentration at which the resistant strain grows faster than the wild-type, usually well below the wild-type MIC, to the MIC of the resistant strain [14][15][16]. Antibiotic concentrations above the resistance selection window safeguard against de novo resistance emergence. Antibiotic concentrations below the resistance selection window do not kill the susceptible strain, but also do not favor the resistant strain and therefore do not promote the emergence of resistance. To limit resistance, it is therefore important to identify the resistance selection window and optimize dosing accordingly. However, this again requires obtaining a full pharmacodynamic profile of a majority of the expected resistant mutants and is therefore not feasible as a standard assessment in drug development.
The next challenge to successfully designing antibiotic treatment arises when the experimental information is integrated into mathematical pharmacodynamic models that then predict efficacy under realistic, fluctuating concentrations in patients. Pharmacodynamic models dating from 1910 (E_max or Hill models) [17] are still widely used despite assuming instantaneous equilibria of antibiotic-target binding, and are therefore often inaccurate when antibiotic concentrations fluctuate. Recently described models that relax these assumptions have been useful in gaining a better qualitative understanding of realistic dosing and complicated drug effects, such as post-antibiotic effects, inoculum effects, and bacterial persistence [18][19][20][21]. However, to speed the development of new antibiotics or to inform practices which minimize resistance, we require quantitative predictions for antibiotics or resistant bacterial strains that do not yet exist. Models which permit quantitative predictions of changes in drug efficacy as a function of modification of antibiotic molecules (i.e., new drugs) or novel resistance mutations would be invaluable. Such tools would advance our general mechanistic understanding of antibiotic action, could guide dosing trials of new drugs, and suggest better dosing of existing drugs.
In this report, we describe a mechanistic computational modeling framework (COMBAT-COmputational Model of Bacterial Antibiotic Target-binding) that allows us to predict full pharmacodynamic profiles based solely on accessible biochemical parameters describing drug-target interaction. These parameters can be determined early in drug development. We use this framework to investigate how changes in drug-target binding, either due to improvements in existing antibiotics or due to resistance mutations in bacteria, affect antibiotic efficacy. We first show that COMBAT accurately predicts bacterial susceptibility as a function of drug-target binding and, conversely, allows inference of these biochemical parameters on the basis of observed patterns of bacterial growth suppression or killing. We then use COMBAT to predict the susceptibility of newly arising resistant variants based on the molecular mechanism of resistance and determine the resistance selection window.
Quinolone target affinities correlate with antibiotic efficacy
To investigate how biochemical changes in antibiotic action modify bacterial susceptibility, we explored how the affinity of antibiotics for their target affects the MIC. We compared the MICs of quinolones, an antibiotic class in which individual antibiotics have a wide range of affinities to one of their targets, gyrase (K_D ~ 10⁻⁴-10⁻⁷ M), but are of similar molecular sizes and have a similar mode of action [22]. This choice allowed us to isolate the effects of differences in drug-target affinity on the MIC.
We obtained binding affinities of quinolones to their gyrase target in Escherichia coli from previous studies [23][24][25][26][27]. We then retrieved MIC data for several quinolones from clinical Enterobacteriaceae isolates collected before 1990 [28], i.e., before the widespread emergence of quinolone resistance [22]. We assume that quinolone affinities obtained from clinical Enterobacteriaceae isolates collected before the emergence of resistance correspond to those measured in wild-type E. coli.
To make qualitative predictions of MICs, we employed a simplified model based on the assumptions that i) drug-target binding occurs much more quickly than bacterial replication, ii) the antibiotic concentration remains constant, and iii) during the 18 hours of an MIC assay, the concentration gradient of the drug inside and outside the cell has equilibrated. Under these assumptions, the MIC can be expressed as

MIC = K_D · f_c / (1 − f_c), (Eq 1)

where K_D represents the affinity constant and f_c the fraction of the target bound at the MIC [29]. Accordingly, this model predicts that the MIC is linearly correlated with K_D. Fig 1 shows the correlations between drug-target affinities and MICs for seven quinolones and clinical isolates of 11 different Enterobacteriaceae species. We observed a significant (p < 0.018) linear correlation between MIC and K_D in all species, confirming the qualitative model prediction.
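As a numerical illustration, the sketch below (Python/SciPy) computes MICs from the equilibrium relation given above as Eq 1 and checks the predicted linearity against K_D; the K_D and MIC values are hypothetical placeholders, not the literature data shown in Fig 1.

```python
# Eq 1: MIC = K_D * f_c / (1 - f_c), linear in K_D for fixed f_c.
import numpy as np
from scipy import stats

def predicted_mic(k_d, f_c):
    """MIC (mol/L) from affinity K_D (mol/L) and critical occupancy f_c."""
    return k_d * f_c / (1.0 - f_c)

k_d = np.array([1e-7, 5e-7, 1e-6, 1e-5, 1e-4])             # hypothetical K_D
mic = np.array([2.1e-7, 9.7e-7, 2.2e-6, 1.9e-5, 2.1e-4])   # hypothetical MICs

res = stats.linregress(k_d, mic)   # linearity predicted by Eq 1
print(f"slope = {res.slope:.2f}, R^2 = {res.rvalue**2:.3f}")
print(f"Eq 1 at K_D = 1e-6, f_c = 0.7: {predicted_mic(1e-6, 0.7):.2e} mol/L")
```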
A quantitative model to predict antibiotic efficacy
While it was encouraging that our model could qualitatively predict MIC changes, our aim was to quantitatively predict antibiotic treatment performance. The simplified model assumes that the binding kinetics are much faster than bacterial replication, which may not be true in all cases. To expand the generalizability of the model, we extended the modeling framework to allow bacterial replication to occur on a time scale similar to that of drug-target binding events.
The full model (COMBAT: COmputational Model of Bacterial Antibiotic Target-binding) describes the binding and unbinding of antibiotics to their targets and predicts how these binding dynamics affect bacterial replication and death (Fig 2A). In previous work linking drug-target binding kinetics with bacterial replication [21], we described a population of bacteria with θ target molecules per cell with a system of θ + 1 ordinary differential equations (ODEs), one each for bacteria with 0, 1, . . ., θ bound target molecules. This system increases in complexity with the number of target molecules and makes fitting the model to data computationally too demanding for most settings. To simplify this prior approach, we developed new mathematical models based on partial differential equations (PDEs), in which a single equation describes all bacteria simultaneously. The sum of bacteria over all target occupancy states over time can be described by a time-kill curve (Fig 2B), during which the bacterial population is characterized by the distribution of bacterial cells with different levels of target occupancy at each time step (Fig 2C). This can be visualized as a two-dimensional surface in a three-dimensional coordinate system in which the number of bacteria is represented on the z-axis, the fraction of bound target molecules on the x-axis, and time on the y-axis (Fig 2D).
Antibiotic action is described by rates of binding (k_f) and unbinding (k_r) to bacterial target molecules (Fig 2A and 2E). The binding of an antibiotic to a target results in the formation of an antibiotic-target molecule complex x, where x ranges between 0 and θ.
COMBAT consists of two mass balance equations: Eq 2 describing bacterial numbers as a function of bound targets and time, and Eq 3 describing antibiotic concentration as a function of time (see Methods).

[Fig 1 caption fragment: the x-axes show the K_D values from [24,26-28] and the y-axes the MICs, both in mol/L. The adjusted R^2 and p-value of each correlation are given. Where more than one K_D value was reported in the literature, we used the mean for this analysis. The tested MIC values are the median of several clinical isolates described previously [28]. https://doi.org/10.1371/journal.pcbi.1008106.g001]
The terms v_f and v_r can be seen as generalized velocities, v = dx/dt. Eq 4 (part of the replication term in Eq 2) describes how daughter cells inherit bound target molecules from the mother cell during replication, and Eq 5 (also part of the replication term in Eq 2) is a logistic growth model describing reduced bacterial replication as the carrying capacity is approached; both are given in the Methods.
Model fit to ciprofloxacin time-kill data
We used the quinolone ciprofloxacin to quantitatively fit bacterial time-kill curves, since this is a commonly used antibiotic for which binding parameters have been directly measured. S1 Table gives an overview of the known parameters used for fitting; S2 Table gives the parameters resulting from our fit.
The functional dependence of bacterial replication and death on the fraction of bound target molecules is extremely hard to obtain experimentally. We therefore treated the relationships between the fraction of bound target and bacterial replication and death as free parameters in our model fitting. Ciprofloxacin is considered to have both bacteriostatic and bactericidal action (mixed action) [30,31], and we fitted functions for a monotonically decreasing replication rate and a monotonically increasing kill rate with each successively bound target molecule (see Methods & S1 Fig).
Overall, we found that COMBAT could fit the time-kill curves well (R^2 = 0.93, Fig 3A). Fig 3B shows the predicted bacterial replication r(x) and death δ(x) as a function of target occupancy based on the fit obtained in Fig 3A. After model calibration, we simulated bacterial replication during exposure to different antibiotic concentrations for 18 h. For this simulation, positive values indicate an increase and negative values a decrease in the number of bacteria. We estimated an MIC of 0.0139 mg/L (Fig 3C), a value that is within the range of MIC determinations for wild-type E. coli (0.01 mg/L, 0.015 mg/L, 0.017 mg/L and 0.023 mg/L [15,32-34]).

[Fig 2 caption: a, Schematic of binding and unbinding events of the antibiotic to its target molecule in the cell; k̃_f is the adjusted forward reaction rate, k_r the reverse reaction rate, A the concentration of antibiotics inside the bacterium, x the number of bound targets, θ the number of targets, and B_x the number of bacteria with x bound targets. b, Modeled sample time-kill curve, in which the sum of bacteria in all binding states (i.e., the entire population of living bacteria) is followed over time after exposure to antibiotics; the vertical dotted lines indicate the time points depicted in (c): 1 min (grey), 14 min (yellow), and 80 min (purple). c, The percentage of bound antibiotic targets in the bacterial population at the indicated time points. d, Illustration of how the partial differential equation describes the bacterial population as a surface in a three-dimensional coordinate system whose dimensions represent percent bound target (x-axis), time (y-axis), and number of bacteria (z-axis); the three time points shown in (c) represent two-dimensional cross-sections at different points of the y-axis. e, Overview of used parameters and functions. https://doi.org/10.1371/journal.pcbi.1008106.g002]
Accurate prediction of target overexpression from time-kill data
Having shown that COMBAT can quantitatively fit experimental data on antibiotic action within biologically plausible parameters, we continued to test the predictive ability of the model. Given our hypothesis that modifications in antibiotic-target interactions lead to predictable changes in bacterial susceptibility, we experimentally induced changes in the antibiotic-target interaction of ciprofloxacin in E. coli. We then quantified these biochemical changes by fitting COMBAT to the corresponding time-kill curves and compared them to the experimental results. Ciprofloxacin acts on gyrase A_2B_2 tetramers [22]. We used an E. coli strain in which both gyrase A and gyrase B are under the control of a single inducible promoter (P_lacZ), such that the amount of gyrase A_2B_2 tetramer can be experimentally manipulated [35]. We measured net growth rates for this strain at different ciprofloxacin concentrations in the presence of 10 μM isopropyl β-D-1-thiogalactopyranoside (IPTG; mild overexpression) and 100 μM IPTG (strong overexpression) and compared them to the wild type in the absence of the inducer (Fig 4A).
As previously reported, we find that increasing gyrase content makes E. coli more susceptible to ciprofloxacin [35]. We fitted the net growth rates allowing the target molecule content, i.e. the amount of gyrase A_2B_2, to vary. We assumed that the only change between the different conditions was the amount of target. We further assumed that the relationship between bound target and bacterial replication or death did not differ between the control strain containing a mock plasmid (no IPTG) and the experiments with overexpression (Fig 4B, between 0% and 100%). Finally, we assumed that the maximal kill rate at very high antibiotic concentrations was accurately measured in our experiments, and forced the function describing bacterial death through the measured value when all target molecules are bound. We found the best fit for a 1.31x increase in GyrA_2B_2 target molecule content for bacteria grown in the presence of 10 μM IPTG and a 2.02x increase for those grown in the presence of 100 μM IPTG.
We subsequently tested these predictions experimentally by quantifying gyrase A and B content by western blot (Figs 4C and S2). Using realistic association and dissociation rates for biological complexes [36], we predicted a range of functional tetramers based on the relative amounts of the gyrase A and B proteins (Fig 4D). S3 Table details the individual measurements, and the procedure to estimate tetramer levels is provided in the Methods section. We found that the observed overexpression was very close to our theoretical prediction (S3 Table).
Accurate prediction of target occupancy at MIC from time-kill data
Next, we tested whether COMBAT can be applied to the action of the beta-lactam ampicillin, an antibiotic with a very different mode of action from quinolones. Using published pharmacodynamic data of E. coli exposed to ampicillin [34] also allowed us to compare COMBAT predictions to established pharmacodynamic approaches. Most of the biochemical parameters for ampicillin binding to its targets, the penicillin-binding proteins (PBPs), have been determined experimentally (S1 Table). Ampicillin is believed to act as a bactericidal drug [37], and this mode of action is supported by findings from single-cell microscopy [29]. We therefore assume that ampicillin binding does not affect bacterial replication. In order to model the consumption of beta-lactams upon target inhibition and the eventual recovery of the target, we made small adjustments to Eq 13 (see Methods, description of beta-lactam action).
We fitted COMBAT to published time-kill curves of E. coli exposed to ampicillin (Fig 5A). Again, COMBAT provides a good fit to the experimental data between 0 min and 40-60 min. After that time, observed bacterial killing showed a characteristic slowdown at high ampicillin concentrations, which is often attributed to persistence [21] (Fig 5A). For the sake of simplicity, we chose to omit bacterial population heterogeneity in this work and therefore cannot describe persistence, even though COMBAT can be adapted to capture this phenomenon [21]. Because ampicillin acts in an entirely bactericidal manner, we assume a constant replication rate (S1 Table). Fig 5C shows the predicted net growth rate over a range of drug concentrations. We estimated an MIC of 2.6 mg/L. This MIC is based on the Clinical & Laboratory Standards Institute (CLSI) definition of the MIC determined at 18 h. The original source of the MIC, which was based on experimental data and a pharmacodynamic model [34], determined an MIC of 3.4 mg/L at 1 h. If we change our prediction to 1 h, our estimated MIC is 3.32 mg/L, which is within 2.5% of the reported value [34].
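The dependence of the MIC estimate on the chosen endpoint can be illustrated with a toy calculation. The dose-response function below is a hypothetical stand-in with a simple binding lag, not COMBAT output, and all parameter values are invented:

```python
import numpy as np
from scipy.optimize import brentq

# Toy stand-in for a COMBAT-style non-linear kill curve: bacteria keep
# growing for a binding lag TAU before the drug effect sets in, so the
# inferred MIC depends on the chosen endpoint time, as in the main text.
# All parameter values are hypothetical.
R, DMAX, K, TAU = 0.7, 2.0, 1.0, 0.5   # 1/h, 1/h, mg/L, h

def net_growth(c: float, t_end: float) -> float:
    """Average net growth rate (log10 units per hour) over [0, t_end]."""
    effect = DMAX * c / (c + K)                   # Emax-type kill term
    grow = R * min(t_end, TAU)                    # undisturbed growth during lag
    after = (R - effect) * max(0.0, t_end - TAU)  # net rate once target is bound
    return (grow + after) / (t_end * np.log(10))

for t_end in (1.0, 18.0):
    mic = brentq(net_growth, 1e-6, 1e3, args=(t_end,))
    print(f"endpoint {t_end:>4.0f} h -> apparent MIC = {mic:.2f} mg/L")
```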
Having established that COMBAT can also adequately capture the pharmacodynamics of ampicillin, we next tested whether we can estimate experimentally determined target occupancy at the MIC. Our estimated mean occupancy considering both living and dead bacteria is 89% (Fig 5B), a value within previously reported experimental estimates from Staphylococcus aureus (84-99%) [38].
Sensitivity of antibiotic efficacy to parameters of drug-target binding
It is possible to vary all parameters in COMBAT and explore their effects. We used this to test how hypothetical chemical changes to ampicillin or ciprofloxacin would affect antibiotic efficacy (S3-S11 Figs). These changes could reflect either bacterial resistance mutations or modifications of the antibiotics themselves. We predict that changes in drug-target affinity, K_D, have more profound effects than changes in target molecule content or in the bacterial reaction to increasingly bound target (i.e. δ(x) and r(x)). We also predict that the individual binding rates k_f and k_r, and not just their ratio K_D, are important determinants of efficacy: the faster a drug binds, the more effective we predict it to be. One intuitive explanation for the observation that k_f drives efficacy is that slow binding fails to rapidly interfere with bacterial replication, which may allow for the production of additional target molecules and thereby reduce the ratio of free antibiotic to target molecules.
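As a caricature of this sensitivity analysis, one can scale k_f and k_r together (leaving K_D unchanged) and ask how quickly target occupancy crosses a kill threshold in a single-cell relaxation model; all values below are hypothetical:

```python
import numpy as np

# Sketch of the k_f/k_r sensitivity analysis: scale binding and unbinding
# rates together (constant K_D = k_r/k_f) and ask how fast target occupancy
# crosses a threshold. Single-cell caricature with hypothetical values.
KF0, KR0 = 1e4, 1e-2   # baseline rates: 1/(M*h), 1/h (illustrative)
A = 1e-5               # fixed antibiotic concentration (M)
F_TH = 0.5             # occupancy above which bacteria stop growing

def time_to_threshold(scale: float) -> float:
    kf, kr = scale * KF0, scale * KR0         # K_D unchanged
    # occupancy relaxes exponentially to f_eq = kf*A/(kf*A + kr)
    rate, f_eq = kf * A + kr, kf * A / (kf * A + kr)
    if f_eq <= F_TH:
        return np.inf                          # threshold never reached
    return -np.log(1.0 - F_TH / f_eq) / rate

for scale in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"k_f,k_r x{scale:>6}: threshold crossed after "
          f"{time_to_threshold(scale):.2f} h")
```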
Forecasting the resistance selection window
Finally, we illustrate how COMBAT can be used to explore how the molecular mechanisms of resistance mutations affect the antibiotic concentrations at which resistance can emerge, i.e., the resistance selection window. We compared predicted net growth rates as a function of ciprofloxacin concentration for a wild-type strain and an archetypal resistant strain. For this analysis, we assumed that the resistant strain has a 100x slower drug-target binding rate (i.e., a ~100x increased MIC, realistic for novel point mutations [39]) and that the maximum replication rate of the resistant strain is 85% of that of the wild-type strain [40]. We then predicted the antibiotic concentrations at which resistance would be selected. Interestingly, when comparing COMBAT to previous pharmacodynamic models (Fig 5), we observed that estimates of replication rates depend on the selected time frame (Fig 6A). When the time frame for MIC determination is set to 18 h as defined by CLSI [41], the "competitive resistance selection window", i.e., the concentration range below the MIC of both strains where the resistant strain is fitter than the wild type, ranges from 0.002 mg/L to 0.014 mg/L for ciprofloxacin (Fig 6A) and from 1 mg/L to 2.6 mg/L for ampicillin (S12 Fig), respectively. This corresponds well with previous observations that ciprofloxacin resistance is selected for well below the MIC [15]. However, when measuring after 15 min or 45 min, the results are substantially different. The reason for this is illustrated in Fig 6B: COMBAT reproduces non-linear time-kill curves in which bacterial replication continues until sufficient target is bound to result in a negative net growth rate, which compares well with the experimental data around the MIC in Figs 3A and 5A. Thus, we estimate that sustained levels of 1.27-7 mg/L would safeguard against resistance. While ciprofloxacin plasma concentrations typically reach 2 mg/L after oral uptake and 6 mg/L after intravenous administration [42], levels of 2.6 mg/L and above were shown to be chondrotoxic in young animals [43], and concentrations of 40 mg/L are toxic to mitochondria [44]. Clearly, toxicity and the risk of resistance must be carefully weighed when deciding on dosing.

[Fig 5 caption fragment: time-kill data of E. coli exposed to ampicillin [34]; the points represent experimental data and the lines the model fit, with each color indicating a single ampicillin concentration. b, Replication (blue) and death (red) rates as a function of the number of bound targets predicted by the model fit in (a); the black line indicates the predicted distribution of target occupancies in a bacterial population (both living and dead cells) exposed to ampicillin at the MIC for 18 h. c, The net growth rate, determined as the slope of a line connecting the initial bacterial density and the bacterial density at 18 h on a logarithmic scale, predicted from the model fit in (a), shown as a function of the drug concentration (blue); the dotted horizontal line indicates zero net growth, and its intersection with the blue line predicts the MIC (2.6 mg/L). https://doi.org/10.1371/journal.pcbi.1008106.g005]
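A sketch of how such a window can be located numerically from two dose-response curves; the Emax-type curves below are hypothetical stand-ins mimicking the archetypal mutant described above (100x shifted EC50, 15% fitness cost), not fitted COMBAT output:

```python
from scipy.optimize import brentq

# Sketch of the competitive resistance selection window: hypothetical
# Emax-type net growth curves for a wild type and a resistant mutant with a
# 100x higher EC50 and a 15% replication cost (these are not fitted COMBAT
# parameters).
DMAX = 2.0  # maximal kill rate (1/h)

def wt(c):   # wild type: r_max 0.70/h, EC50 0.005 mg/L
    return 0.70 - DMAX * c / (c + 0.005)

def res(c):  # resistant: 85% of wild-type replication, 100x EC50
    return 0.595 - DMAX * c / (c + 0.5)

lower   = brentq(lambda c: res(c) - wt(c), 1e-6, 0.05)  # mutant overtakes wt
mic_wt  = brentq(wt, 1e-6, 100.0)    # classical selection window starts here
mic_res = brentq(res, 1e-6, 100.0)   # selection window closes here
print(f"competitive window: {lower:.4f}-{mic_res:.3f} mg/L "
      f"(wild-type MIC {mic_wt:.4f} mg/L)")
```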
Discussion
Optimizing dosing levels of antibiotics is important for maximizing drug efficacy against wild-type strains as well as for minimizing the rise of resistant mutants. Antibiotic efficacy is traditionally described by a single value, the minimal inhibitory concentration (MIC), which has limited predictive power [7,8]. In more sophisticated dose-response measurements, bacteria are exposed to multiple antibiotic concentrations and the kill rate is assessed at each concentration individually in dose-response curves (pharmacodynamic profiles). However, this approach requires substantial experimental effort and is too time-consuming when testing large libraries of new drug candidates. The limited predictive power of standard measures of pharmacodynamics is not only a problem for antibiotic development; drug attrition in general is mainly due to insufficient predictions of pharmacodynamics rather than pharmacokinetics [45].
Because of the experimental effort, pharmacodynamic profiles for either novel drug candidates or novel resistant strains are often not obtained. Thus, we need a transferrable framework that allows quantitative predictions based on parameters that can be determined a priori. Recent studies have reported methods to predict MICs from whole genome sequencing data [46,47]. However, these methods require transfer of prior knowledge on how the resistance mutations affect MICs in other organisms. There are no methods that could predict a priori how chemical changes to an antibiotic structure or novel resistance mutations affect bacterial growth at a given antibiotic concentration.
Here, we accurately predict antibiotic action on the basis of accessible biochemical parameters of the drug-target interaction. Our computational model, COMBAT, provides a framework to predict the efficacy of compounds based on drug-target affinity, target number, and target occupancy. These parameters may change both when improving antibiotic lead structures and when bacteria evolve resistance. Importantly, they can be measured early in drug development and may even be a by-product of target-based drug discovery [48]. When these data are available, COMBAT makes only one assumption: that the rate of bacterial replication decreases and/or the rate of killing increases with successive target binding. While fitting, we allow this relationship to be gradual or abrupt and select the best fit. This means we do not model specific molecular mechanisms downstream of drug-target binding, but their effects are subsumed in the functions that connect the kinetics of drug-target binding to bacterial replication and death.
In previous work, for example on antipsychotics [18], antivirals [19] and antibiotics [20,21], models of drug-target binding kinetics have been used to improve our qualitative understanding of pharmacodynamics. Our study substantially advances this work by making quantitative predictions, with very high accuracy, across antibiotics and bacterial strains when measurable biochemical characteristics change. This is possible because COMBAT employs an efficient and versatile mathematical approach, based on partial differential equations, that makes it computationally feasible to fit the model to a large range of data. Importantly, we are not only able to predict antibiotic action from biochemical parameters, but can also, vice versa, use COMBAT to accurately predict biochemical changes from observed patterns of antibiotic action. We have confirmed the predictive power of COMBAT with clinical data as well as with experiments on antibiotics with very different mechanisms of action. This high predictive power makes it possible to use modeling to guide dosing. It gives us confidence that biochemical parameters are major determinants of antibiotic action in bacteria and that COMBAT helps to make rational decisions about antibiotic dosing.
In drug development, our mechanistic modeling approach provides insight into which chemical characteristics of drugs may be useful targets for modification. For example, our sensitivity analyses indicate that antibiotics with a similar affinity but faster binding inactivate bacteria more quickly and thereby prevent replication and the production of more target molecules, which would change the ratio of antibiotic to target. Furthermore, because antibiotic binding and unbinding rates, for example, can be determined early in the drug development process, such insight can help the transition to preclinical and clinical dosing trials. This may contribute to reducing bottlenecks between these phases of drug development and thereby save money and time.
Avoiding antibiotic concentrations that select for resistance is challenging for two reasons. First, the differences between the pharmacodynamic curves of wild-type and resistant strains are not trivial: resistance can affect not only the MIC, but also the maximal kill rate at high drug concentrations and the steepness of the dose-response curve [10]. Therefore, one would need to record full pharmacodynamic profiles rather than just MICs to assess the mutation selection window for resistant mutants. Second, this process would have to be repeated for all (or at least a representative set of) potentially emerging resistant mutants. This makes it extremely time- and resource-consuming to safeguard against resistance by determining the resistance selection window.
COMBAT offers insight into the determinants of the resistance selection window and builds transferrable knowledge that allows estimating useful dose ranges. In concordance with a recent meta-analysis of experimental data [49], our sensitivity analyses predict that changes in drug-target binding and unbinding have a greater impact on susceptibility than changes in target molecule content or downstream processes. Thus, a more comprehensive characterization of the binding parameters of spontaneous resistant mutants would allow an overview of the maximal biologically plausible levels of resistance that can arise with one mutation. Dosing above this level should then safeguard against resistance. This is especially useful for compounds for which it is difficult to saturate the mutational target for resistance, or for safeguarding against resistance to newly introduced antibiotics for which we do not yet have a good overview of resistance-conferring mutations.
Good quantitative estimates of the dose-response relationship of new drugs would also help define the therapeutic window, i.e. the range of drug concentrations at which the drug is effective but not yet toxic. For ciprofloxacin, for example, the doses found necessary to prevent resistance after marketing turned out to be toxic [50], and an early assessment of the doses that might become necessary once resistance is widespread might preserve antibiotic utility. If toxicity, solubility or other constraints do not allow dosing above the MIC of expected resistant strains, COMBAT can also predict the concentration range at which resistance is less strongly selected. This could guide decisions on treating with low versus high doses, which is currently under debate [11,12]. COMBAT therefore offers new promise to reduce the failure rates of candidate compounds late in the drug development process, when resistance is observed in patients and substantial resources have already been invested.
Our quantitative work can help to identify optimal dosing strategies at constant antibiotic concentrations for homogeneous bacterial populations; these measures are commonly used to assess antibiotic efficacy. In addition, previous work has demonstrated that drug-target binding models outperform traditional pharmacodynamic models for the fluctuating concentrations that actually occur in patients [29,51]. To illustrate this, we coupled COMBAT to a pharmacokinetic model describing different modes of ampicillin administration in patients and predicted the pathogen load in infected tissues under the resulting realistic, fluctuating antibiotic concentrations (S1 Text, S13 and S14 Figs and S5 Table). Drug-target binding models can also explain complicated phenomena such as biphasic kill curves, the post-antibiotic effect, or the inoculum effect [20,21,52] that often complicate the clinical phase of drug development. COMBAT has similar characteristics that allow capturing these complex phenomena. Therefore, employing COMBAT may be useful for guiding drug development to maximize antibiotic efficacy and minimize de novo resistance evolution.
Mathematical model
COMBAT incorporates the binding and unbinding of antibiotics to their targets and describes how target binding affects bacterial replication and death. This work extends the model developed in [21]. COMBAT consists of a system of two mass balance equations: one PDE for bacteria (describing replication and death as a function of both time and target binding) and one ODE for antibiotic molecules (describing the concentration as a function of time).
In the most basic version of COMBAT, we ignored differences between extracellular and intracellular antibiotic concentrations and only followed the total antibiotic concentration A, assuming that the time needed for drug molecules to enter bacterial cells is negligible. We model ciprofloxacin (for which there is only a limited diffusion barrier [53]) and ampicillin (whose target is not in the cytosol, even though the outer membrane of Gram-negative bacteria has to be crossed to reach the PBPs). We therefore believe that this assumption is justified in wild-type E. coli. This basic version of COMBAT is thus most accurate for describing antibiotic action where the diffusion barrier to the target is weak.
Binding kinetics
We describe the action of antibiotics as a binding and unbinding process to bacterial target molecules [21]. For simplicity, we assume a constant number of available target molecules θ. The binding process is defined by the reaction A + T ⇌ x, where the intracellular antibiotic molecules A react with target molecules T at a rate k_f and form an antibiotic-target molecule complex x, with values of x ranging between 0 and θ. If the reaction is reversible, the complex dissociates with a rate k_r.
In [21], the association and dissociation terms are described as follows:

dB_i(t)/dt = k̃_f A(t) [(θ − i + 1)B_{i−1}(t) − (θ − i)B_i(t)]  (association term)
           − k_r [i B_i(t) − (i + 1)B_{i+1}(t)]  (dissociation term),   i ∈ [0, θ],   (6)

where k̃_f = k_f/(V_tot n_A), k_f is the association rate, V_tot is the volume in which the experiment is performed, n_A is Avogadro's number, k_r is the dissociation rate, B_i is the number of bacteria with i bound targets, and θ is the total number of targets.
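For a small number of targets, the ODE system of Eq 6 can be integrated directly; below is a self-contained sketch with invented parameter values, holding the antibiotic concentration constant and omitting replication and death:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct integration of the (theta+1)-state ODE system of Eq 6 for a small,
# hypothetical number of targets; this is the system the PDE approximates.
# A is held constant and replication/death are omitted, so only the binding
# dynamics are shown.
THETA, KF, KR, A = 10, 1.0, 0.1, 1.0   # illustrative units

def rhs(t, b):
    db = np.zeros_like(b)
    for i in range(THETA + 1):
        gain_bind = KF * A * (THETA - i + 1) * b[i - 1] if i > 0 else 0.0
        loss_bind = KF * A * (THETA - i) * b[i]
        gain_unb  = KR * (i + 1) * b[i + 1] if i < THETA else 0.0
        loss_unb  = KR * i * b[i]
        db[i] = gain_bind - loss_bind + gain_unb - loss_unb
    return db

b0 = np.zeros(THETA + 1); b0[0] = 1e6   # all bacteria start with 0 bound targets
sol = solve_ivp(rhs, (0.0, 10.0), b0)
occ = np.arange(THETA + 1) @ sol.y[:, -1] / (THETA * sol.y[:, -1].sum())
print(f"mean target occupancy at t=10: {occ:.2%}")  # ~ KF*A/(KF*A+KR) = 90.9%
```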
This approach requires the use of a large number of ordinary differential equations: (θ + 1) for the bacterial population and one for the antibiotic concentration. To generalize this approach, we assume that the variable of bound targets is a real number x ∈ ℝ. Under this continuity assumption, we consider the bacterial cells as a function of x and the time t, thereby reducing the total number of equations to two.
Under the continuity approximation (x ∈ ℝ), we can rewrite the binding kinetics in the form

∂B(x,t)/∂t = −∂/∂x[ k̃_f A(t)(θ − x) B(x,t) ]  (association term) + ∂/∂x[ k_r x B(x,t) ]  (dissociation term),   (7)
or simply

∂B(x,t)/∂t = −∂/∂x[ (v_f − v_r) B(x,t) ],

where v_f = k̃_f A(t)(θ − x) and v_r = k_r x can be considered as two velocities, i.e., the derivative of the bound targets with respect to time, dx/dt.

Replication rate. We assume that the replication rate of bacteria, r(x), depends on the number of bound target molecules x. The function r(x) is a monotonically decreasing function of x, such that fewer bacteria replicate as more target is bound. r(0) is the maximum replication rate, corresponding to the replication rate of bacteria in the absence of antibiotics. Thus, r(x) describes the bacteriostatic action of the antibiotics, i.e., the effect of the antibiotic on bacterial replication.
Carrying capacity. Replication ceases as the total bacterial population approaches the carrying capacity K. The replication term of the equation is then

∂B(x,t)/∂t = r(x) B(x,t) F_lim(t),

where F_lim(t) = 1 − (1/K) ∫₀^θ B(x,t) dx is the replication-limiting term due to the carrying capacity K, and 0 ≤ F_lim ≤ 1.
Distribution of target molecules upon division. We assume that the total number of target molecules doubles at replication, such that each daughter cell has the same number as the mother cell. We also assume that the total number of drug-target complexes is preserved in the replication and that the distribution of the x bound target molecules of the mother cell to its progeny is described by a hypergeometric sampling of θ molecules from x bound and 2θ − x unbound molecules. Under the continuity assumption, we generalize the concept of the hypergeometric distribution. Because the hypergeometric distribution is a function of combinations, and because a combination is defined as a function of factorials, we can use Γ functions in place of factorials and redefine a continuous hypergeometric distribution as a function of Γ functions. The Γ function is

Γ(z) = ∫₀^∞ t^{z−1} e^{−t} dt,

where z is a complex number. In this way, the distribution can be expressed as a probability density function of continuous variables. The amount of newborn bacteria is given by the term r(x)B(x,t)F_lim(t). We assume that bound target molecules are distributed randomly between mother and daughter cells, with each of them inheriting 50% upon division on average. This means that twice the amount of newborn cells must be redistributed along x to account for the random distribution process. For example, if a mother cell with 4 bound targets divides, we obtain two daughter cells, each with a number of bound targets between 0 and 4 (their sum has to be 4), following the generalized hypergeometric distribution (see the sketch below). For simplicity, we define S(x,t) to be a function, related to the replication rate, that depends on the number of bacteria with a number of bound target molecules ranging between x and θ, their specific replication rate r(x), and the fraction of their daughter cells expected to inherit x antibiotic-target complexes, h(x,z); this function enters the replication term of Eq 2.

Death rate. The death rate function δ(x) depends on the number of bound target molecules. The function δ(x) is assumed to be a monotonically increasing function of x, where δ(θ) is the maximum death rate, reached when all targets in the bacteria have been bound by antibiotics. The shape of this function describes the bactericidal action of the antibiotic.
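For integer occupancies, the inheritance rule is ordinary hypergeometric sampling; the sketch below evaluates it through gammaln (log-Γ) in place of factorials, which is exactly the substitution that makes the continuous generalization possible. Values are illustrative:

```python
import numpy as np
from scipy.special import gammaln

# Sketch of the division rule: a daughter inherits j of the mother's x bound
# targets by hypergeometric sampling of theta targets from x bound and
# 2*theta - x unbound molecules. Gamma functions (via gammaln) replace
# factorials, which is what permits the continuous generalization h(x, z).
def log_binom(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def inherit_pmf(x: int, theta: int) -> np.ndarray:
    """P(daughter gets j bound targets | mother has x), j = 0..x."""
    j = np.arange(0, x + 1)
    logp = (log_binom(x, j) + log_binom(2 * theta - x, theta - j)
            - log_binom(2 * theta, theta))
    return np.exp(logp)

pmf = inherit_pmf(x=4, theta=10)   # the example used in the text
print(np.round(pmf, 4), "sum =", pmf.sum(), "mean =", pmf @ np.arange(5))
```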
Bacteriostatic and bactericidal effects. We consider several potential functional forms for the relationship between the percentage of bound targets and the replication and death rates, because the exact mechanism by which target occupancy affects bacteria is unknown (S1 Fig). We use a sigmoidal function that can cover cases ranging from a linear relationship to a step function. When the inflection point of a sigmoidal function is at 0% or 100% target occupancy, the relationship can also be described by an exponential function. We assume that replication for bactericidal drugs and death for bacteriostatic drugs are independent of the amount of bound target. With sufficient experimental data, the replication rate r(x) and/or the death rate δ(x) can be obtained by fitting COMBAT to time-kill curves of bacterial populations after antibiotic exposure. The sigmoidal shapes of r(x) and δ(x) can be written as

r(x) = r(0) / (1 + e^{γ_r (x − x_rth)}),    δ(x) = δ(θ) / (1 + e^{−γ_d (x − x_dth)}),

where x_rth is the replication rate threshold and x_dth the death rate threshold, both representing the point where the sigmoidal function reaches 1/2 of its maximum. γ_r and γ_d represent the shape parameters of the replication and death rate functions, respectively. These factors determine the steepness around the inflection point; at extreme values, the relationship approaches a linear or a step function.

Full equation describing bacterial population. Putting these components together, the full equation describing a bacterial population is

∂B(x,t)/∂t + ∂/∂x[ (v_f(x,t) − v_r(x,t)) B(x,t) ] = ( S(x,t) − r(x)B(x,t) ) F_lim(t) − δ(x) B(x,t).   (2)
Description of beta-lactam action.
Beta-lactams acetylate their target molecules (PBPs) and thereby inhibit cell wall synthesis. The acetylation of PBPs consumes beta-lactams. However, PBPs can recover through deacetylation. We modified the term of drug-target dissociation in the equation describing antibiotic concentrations (Eq 3), and set the unbinding rate k r = 0. To reflect the recovery of target molecules, we substituted the dissociation rate k r in the equation describing the bacterial population with the deacetylation rate k a , as described in [29].
Initial and boundary conditions. At t = 0, we assume that all bacteria have zero bound targets (x = 0): the initial distribution is B(x, 0) = 0 for x > 0, and B(0, 0) equals the initial inoculum size. At the boundaries of the partial differential equation (x = 0 and x = θ), we specify that the outgoing velocities are zero: at x = 0, i.e. no bound target molecules, the unbinding velocity v_r(0, t) = 0, and at x = θ, i.e. all targets bound, the binding velocity v_f(θ, t) = 0. When the replication term at x = 0 and the death term at x = θ are known, we can solve the partial differential equation with two ordinary differential equations at the boundaries. These are similar to the equations at x = 0 and at x = θ described by Abel zur Wiesch et al. [21], but take into account that x is a continuous variable instead of a natural number.
Numerical schemes. To solve our system of differential equations, we used a first-order upwind scheme. Specifically, we used the backward spatial approximation (u(i) − u(i−1))/Δx for the binding term (v_f > 0) and the forward spatial approximation (u(i+1) − u(i))/Δx for the unbinding term (v_r < 0). For the time approximation of both the PDEs and the ODEs, we used the forward approximation ΔB/Δt = (B^{n+1} − B^n)/Δt [54]. We also verified that the Courant-Friedrichs-Lewy condition is satisfied. For fitting the experimental data of bacteria exposed to ciprofloxacin and ampicillin, we used the particle swarm method (the "particleswarm" function in Matlab, MathWorks).
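A minimal sketch of the first-order upwind discretization described above, applied to the transport (binding/unbinding) part of Eq 2 only; the grid sizes and rates are illustrative, and the CFL condition should be rechecked for any other choice:

```python
import numpy as np

# First-order upwind step for the advection part of Eq 2 on a grid in x
# (bound targets): backward differences for the binding flux (v_f > 0),
# forward differences for the unbinding flux (v_r < 0). Replication and
# death terms are omitted; all parameter values are illustrative.
THETA, NX, DT = 100.0, 200, 1e-3
x = np.linspace(0.0, THETA, NX)
dx = x[1] - x[0]

def upwind_step(B, A, kf=0.05, kr=0.5):
    vf, vr = kf * A * (THETA - x), kr * x          # the two velocities
    Ff, Fr = vf * B, vr * B                        # fluxes
    dFf = np.diff(Ff, prepend=0.0) / dx            # backward difference
    dFr = (np.append(Fr[1:], 0.0) - Fr) / dx       # forward difference
    return B + DT * (-dFf + dFr)                   # transport only

B = np.zeros(NX); B[0] = 1e6 / dx                  # all mass at x = 0
for _ in range(2000):
    B = upwind_step(B, A=1.0)
print(f"mean occupancy after t=2: {(x @ B) / (THETA * B.sum()):.2%}")
```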
Time-kill curves. Overnight cultures of BW25113 or of SoA3329 and SoA3330 were diluted 1:1000 in pre-warmed LB or LB-Cm and LB-Cm-IPTG, respectively, and grown with shaking to OD_600 ≈ 0.5. A 1:3 dilution series of ciprofloxacin was made and added to the cultures at the indicated concentrations. Additional cultures without antibiotics and with a very high concentration of ciprofloxacin (2.187 mg/L) were used to determine the minimal and maximal kill rates, respectively. Samples were taken immediately prior to the addition of the antibiotic and at ~20-min intervals or after 45 min, respectively. Samples were washed once in phosphate-buffered saline (PBS) before colony-forming units (CFUs) were determined for each sample by plating a 1:10 dilution series in PBS on LB agar plates.
GyrAB quantification. To quantify the relative amount of GyrAB, samples of SoA3329 and SoA3330 were collected after 45 min of drug treatment as described above. An equal number of cells, corresponding to 1 mL of culture at OD_600 = 1, was harvested by centrifugation. Pelleted cells were lysed at room temperature for 20 min using B-PER bacterial protein extraction reagent (90078, Thermo Scientific) supplemented with 100 μg/mL lysozyme and 5 units/mL DNaseI (all part of the B-PER with Enzymes Bacterial Protein Extraction Kit, 90078, Thermo Scientific) and 100 μM PMSF (52332, Calbiochem). Samples were stored at -80°C until further use.
Band intensities were quantified from unmodified images using the record measurement tool of Photoshop CS6, normalized to the CRP loading control after background subtraction, and reported relative to SoA3330. For clarity, the "levels" tool of Photoshop CS6 was used to enhance the contrast of the Western blot images shown.

Supporting information

[Condensed supporting figure captions (TIF): sensitivity analyses of the ciprofloxacin fit in which we varied the drug-target turnover rate (k_r and k_f scaled 0.01x, 0.1x, 1x, 10x, 100x, and 1000x of their original values while keeping their ratio, the affinity K_D, constant), the replication rate function r(x), and the death rate function δ(x) through the bound-target value x_1/2 at which each is half-maximal; and sensitivity analyses of the ampicillin fit (Fig 5) in which we varied the acetylation rate k_a (0.01x-100x), the drug-target turnover rate (k_a and k_f scaled while keeping the ratio k_a/k_f constant), and δ(x). Each analysis shows the functions used, the net growth rate ((log10(bacterial number at 18 h) − log10(bacterial number at 0 h))/18 h) as a function of drug concentration, with a dotted horizontal line at zero net growth whose intersections with the dose-response curves indicate the respective MICs, and the sensitivity of the MIC to the varied parameter.]
S12 Fig. Predicted mutation selection windows for E. coli exposed to ampicillin. The drug concentration of ampicillin is shown on the x-axes, and the average bacterial net growth rate over 18 h on the y-axes. The blue line represents the wild-type strain based on the fits shown in Fig 5, and the red line a strain with a theoretical resistance mutation that decreases the binding rate (k_f) 100-fold and imparts a 15% fitness cost. The dotted horizontal line represents no net growth. The first vertical dotted line indicates where the resistant strain becomes fitter than the wild type (the start of the competitive resistance selection window), the solid vertical line indicates the MIC of the wild type (the start of the classical resistance selection window), and the dashed vertical line indicates the MIC of the resistant strain, above which selection for resistance should be minimal because growth of both the wild-type and the resistant strain is inhibited. (TIF)

S13 Fig. Schematic of the pharmacokinetic model. We simulate plasma and tissue concentrations of ampicillin with a two-compartment pharmacokinetic model. The model describes intravenous drug input into the "plasma" compartment, which has an apparent volume V_1. From there, the drug can enter the peripheral "tissue" compartment, characterized by the apparent volume V_2, with a rate k_12; conversely, it can re-enter the plasma compartment with a rate k_21. From the plasma compartment, the drug is eliminated with a rate k_10. (TIF)

S14 Fig. Coupling a pharmacokinetic model to COMBAT. Two modes of drug administration are simulated: 2 g of drug per day given as single, 5-min i.v. infusions (blue line) and 2 g of drug per day given as continuous i.v. infusion (red line). a, Simulated drug concentrations in the tissue (i.e., infected) compartment of a two-compartment pharmacokinetic model over two days; the dotted black line indicates the MIC of the pathogen (2.6 mg/L). b, Pathogen load in the tissue compartment in response to the fluctuating drug concentrations predicted by COMBAT over the same time frame. (TIF)

S1 Table. Kinetic parameters for the antibiotics used. To our knowledge, the association rate for ciprofloxacin has not been determined directly. Because values of K_D, the ratio of the dissociation rate k_r and the association rate k_f, diverge by more than an order of magnitude in the literature, we chose to fit the association rate k_f as a free parameter in our model while constraining K_D to remain within the published range (the resulting value of k_f is given together with other fitted parameters in S2 Table). (DOCX)

S3 Table. Results of gyrase level determination and estimated GyrA_2B_2 tetramer levels.
(m) indicates mild overexpression, (s) strong overexpression. Columns headed GyrA and GyrB show the experimentally determined overexpression as fold expression compared to the wild type. The columns headed GyrA_2B_2 show the estimated tetramer levels resulting from each measurement. For the GyrA_2B_2 tetramer estimation, we sampled 10^4 sets of association and dissociation rates from a uniform distribution within their reported limits (Latin hypercube approach). We report the standard deviation for each estimate and give summary estimates in the last row of the table. (DOCX)

S4 Table. Parameters resulting from the model fit to experimental time-kill curves of ampicillin in wild-type E. coli. Fig 5B shows the resulting death rate δ(x) as an exponential function of the number of bound targets, δ(x) = a_3 e^{b_3 x} + c_3. (DOCX)

S5 Table. Parameters used in the pharmacokinetic model. The parameters were obtained from [60]. (DOCX)

S1 Text. Predicting pathogen load in patients by coupling COMBAT with a pharmacokinetic model. (DOCX)
"year": 2020,
"sha1": "2fa44b9224d2954efe8373e7bccf9afaf824a959",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1008106&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56e3e2930d2361115423982f0706b4cf5d567b94",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Biology"
]
} |
On the affine Springer fibers inside the invariant center of the small quantum group
Let $\mathfrak{u}_\zeta^\vee$ denote the small quantum group associated with a simple Lie algebra $\mathfrak{g}^\vee$ and a root of unity $\zeta$. Based on the geometric realization of the center of $\mathfrak{u}_\zeta^\vee$ in [8], we use a combinatorial method to derive a formula for the dimension of a subalgebra in the $G^\vee$-invariant part of the center $Z(\mathfrak{u}_\zeta^\vee)^{G^\vee}$ of $\mathfrak{u}_\zeta^\vee$, that conjecturally coincides with the whole $G^\vee$-invariant center. In case $G=SL_n$ we study a refinement of the obtained dimension formula provided by two geometrically defined gradings.
Introduction
Let G be a complex simple simply connected algebraic group, and g its Lie algebra. We will fix Cartan and Borel subalgebras t ⊂ b ⊂ g in g. Denote also by t ∨ ⊂ b ∨ ⊂ g ∨ the same data for the Langlands dual Lie algebra. Let ℓ be an odd number that is greater than the Coxeter number h of g and coprime to the determinant of the Cartan matrix and to h + 1.
We denote by u^∨_ζ = u_ζ(g^∨) the small quantum group associated to the Lie algebra g^∨ and a primitive ℓ-th root of unity ζ [33]. Let Λ denote the coweight lattice of G, and W the Weyl group of g (and g^∨). Then u^∨_ζ decomposes into a direct sum of blocks enumerated by the orbits of the extended affine Weyl group of g, W̃ = W ⋉ Λ, acting via the ℓ-dilated dot action on Λ; see for example the Introduction to [25]. We denote by u^{∨,λ}_ζ the block corresponding to the W̃-orbit of λ ∈ Λ. In particular, u^{∨,0}_ζ denotes the principal (regular) block of the small quantum group.
In this paper we derive a dimension formula for a subalgebra in the G^∨-invariant part of the center Z(u^∨_ζ) of the small quantum group. Conjecturally, this subalgebra coincides with the entire Z(u^∨_ζ)^{G^∨}. In case G = SL_n we study a refinement of the obtained dimension formula provided by two gradings. Our treatment is based on the geometric realization of the subalgebra in the G^∨-invariant part of the center of u^∨_ζ obtained in [8].
Theorem 1.1 (BBASV). There is an algebra embedding
$$(H^*(\mathrm{Gr}^{\zeta,\gamma}))^W \;\hookrightarrow\; Z(\mathfrak{u}^{\vee}_{\zeta})^{G^{\vee}},$$
where the product on the left is the cup product.
Conjecture 1.2 (BBASV). The embedding above is an isomorphism.
Here Gr^{ζ,γ} = Gr^γ ∩ Gr^ζ, where Gr^γ is the affine Springer fiber with γ = st^{ℓ−1} for a regular element s ∈ t_reg, and Gr^ζ are the ζ-fixed points for the cyclic group action generated by ζ. The W-action is induced by the lattice action on the affine Springer fiber, together with a monodromy action coming from the variation of s in a family.
The left-hand side of Theorem 1.1 decomposes naturally into blocks, since Gr^{ζ,γ} can be written as a finite disjoint union of generalizations of affine Springer fibers, as explained in the next section. This block decomposition respects the one on Z(u^∨_ζ)^{G^∨}, as explained in loc. cit.
We establish an isomorphism of vector spaces between the geometrically defined subspace of the G^∨-invariant part of the center of the regular block, Z(u^{∨,0}_ζ)^{G^∨}, of the small quantum group and Gordon's canonical quotient of the space of diagonal coinvariants, which we will denote DR_W and simply call diagonal coinvariants, hoping this will not cause extra confusion. Assuming Conjecture 1.2, the obtained result agrees with (an ungraded form of) the conjecture formulated in [24].
Further, we extend the result to a similar isomorphism for the other blocks of u^∨_ζ, relating the geometrically defined subalgebra in each block Z(u^{∨,λ}_ζ) of the center to a respective space of partial diagonal coinvariants (DR_W)^{W_λ}.
Combinatorially, this allows us to derive a formula for the dimension of the subalgebra isomorphic to H^*(Gr^{ζ,γ})^W in the G^∨-invariant part of the center Z(u^∨_ζ)^{G^∨} of the small quantum group in terms of the rational Catalan number associated with the Weyl group of g.
Theorem 1.3. Suppose that ℓ is as in the beginning of the Introduction. Then
$$\dim\,(H^*(\mathrm{Gr}^{\zeta,\gamma}))^W \;=\; \mathrm{Cat}_W\big(\ell(h+1)-h\big) \;=\; \prod_{i=1}^{r}\frac{\ell(h+1)-e_i}{1+e_i},$$
where Cat_W is the generalized rational Coxeter-Catalan number of W, h the Coxeter number associated with the root system of g, and e_1, ..., e_r the exponents of W.
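For type A the formula is easy to evaluate numerically; the sketch below applies the standard product formula for the rational Coxeter-Catalan number with the argument ℓ(h+1) − h appearing in Theorem 1.3 (for W = S_n the exponents are 1, ..., n−1 and h = n). The closed form used here follows the reconstruction above and should be treated accordingly:

```python
from math import prod
from fractions import Fraction

# Sanity check of the dimension formula of Theorem 1.3 for W = S_n
# (type A_{n-1}): exponents 1..n-1, Coxeter number h = n. The value of
# ell should be coprime to n and n + 1.
def dim_invariant_center(n: int, ell: int) -> int:
    m = ell * (n + 1)                   # ell * (h + 1) with h = n
    val = prod(Fraction(m - e, 1 + e) for e in range(1, n))
    assert val.denominator == 1         # the product is always an integer
    return int(val)

# e.g. n = 2 (sl_2): one exponent e = 1, so the count is (3*ell - 1)/2
for ell in (5, 7, 11):
    print(f"n=2, ell={ell}: dim = {dim_invariant_center(2, ell)}")
```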
The G^∨-invariant parts of the blocks of the center, and indeed the entire blocks of the center, can in fact be equipped with two gradings that arise from the isomorphism (as bigraded vector spaces) of the center of each block with certain equivariant cohomologies of coherent sheaves on the Springer resolution [7] (see Theorem 4.13).
In case G = SL_n, assuming the main conjecture in [11], we define a bigrading on the side of the affine Springer fibers, which coincides with the sign-twisted bigraded structure on the diagonal coinvariants (see Theorem 4.11). This bigrading comes from the realization of the invariant piece of the cohomology as a quotient of the BM homology of the "positive part" of the affine Springer fiber, up to a linear dual.
We also study another model for the bigrading, coming from the perverse filtration on certain parabolic Hitchin fibers. This formulation is more geometric and allows us, for example, to define an sl_2-action on the blocks of the cohomology of affine Springer fibers. We expect that our bigrading (in either version), carried over to the blocks of the center, coincides with the one coming from the equivariant cohomologies of coherent sheaves on the Springer resolution. This would imply in particular that the constructed sl_2-action on the affine Springer fiber side coincides with the sl_2-action "along the diagonals" as in [24, Section 4]. Finally, we exhibit a spectral curve construction which can hopefully be used to relate the second model of the bigrading to the (much better behaved) elliptic homogeneous affine Springer fibers studied e.g. in [12].
Affine Springer fibers
Let ζ be a primitive ℓ-th root of unity, O := C[[t]], K := C((t)), and s ∈ t_reg. Define γ = st^{ℓ−1} and consider
$$\mathrm{Gr}^{\zeta,\gamma} \;=\; \mathrm{Gr}^{\gamma} \cap \mathrm{Gr}^{\zeta},$$
where Gr^γ is the usual affine Springer fiber and Gr^ζ are the ζ-fixed points for the cyclic group action generated by ζ ∈ G_m^{rot}. More generally, denote by Fl_P^γ the affine Springer fiber of γ in Fl_P = G(K)/P, the partial affine flag variety of some parahoric subgroup P (or some congruence subgroup thereof) of G(K).
Note first that we may view Gr^γ as an affine Springer fiber in the partial affine flag variety for the first congruence subgroup G^{(1)}(O).

Remark 2.1. The explanation for taking γ = st^{ℓ−1} as opposed to γ = st^ℓ is given by the rewriting of Gr^γ in terms of st^ℓ and the congruence subgroup G^{(1)}(O) above. Ultimately, we will not be interested in the Fl_P^γ for standard parahorics, but in their close analogs fl_P^γ, to be defined below. These can be defined directly using st^ℓ. This should be compared to [8, Lemma 2.6.], where the terminology "affine Spaltenstein fiber" and notations such as {}_0Gr^γ are used.
By [32, Proposition 4.6.] we have that
$$\mathrm{Gr}^{\zeta} \;\cong\; \bigsqcup_{[\lambda]\in\Lambda/\widetilde{W}_\ell} \mathcal{F}l_{\mathbf{P}_\lambda},$$
where P_λ is the parahoric group scheme in G((z^ℓ)) associated to λ. Here W̃_ℓ is the extended affine Weyl group acting by the ℓ-dilated action on Λ.
Since the actions of ζ and γ commute, we obtain
$$\mathrm{Gr}^{\zeta,\gamma} \;=\; \bigsqcup_{[\lambda]\in\Lambda/\widetilde{W}_\ell} fl^{\gamma}_{\mathbf{P}_\lambda}.$$
The Λ ≅ T(K)/T(O)-action commutes with ζ, so we get a Λ-action on Gr^{ζ,γ}, and this preserves the decomposition into the fl^γ_{P_λ}. On H^*(Gr^{γ,ζ}) there is also an action of W coming from varying s ∈ t_reg. These assemble to a (left) action of W̃ on the cohomology. There is also another, commuting (right) action of W coming from the Springer action. Taking invariants for the left action and antisymmetrizing on the right by the idempotents attached to the parabolic subgroups W_λ, one arrives at the following; here Q is the coroot lattice.

Proposition 2.1. We have $H^*(fl^{\gamma}_{I})^W \cong \mathbb{C}[Q/(h+1)Q]$ as a representation for the Springer W-action. In particular,
$$H^*(fl^{\gamma}_{\mathbf{P}_\lambda})^W \;\cong\; (DR_W)^{W_\lambda},$$
where DR_W is defined in the beginning of Section 3.1. Here W_λ is the stabilizer of λ ∈ Λ inside W.
Proof. By [9, Theorem 1.2.] we have that H^*(fl^γ_I)^W is isomorphic to C[Q/(h+1)Q] as a W-representation. This is the sgn-twist of DR_W [16].
On the other hand, we claim that H^*(fl^γ_P)^W is the W_λ-antisymmetric part of H^*(fl^γ_I)^W. Note that there is a natural inclusion fl^γ_P ⊆ Fl^γ_P, and that we always (i.e. for any parahoric containing I and any regular semisimple γ) have a Cartesian diagram
$$\begin{array}{ccc} \mathcal{F}l^{\gamma}_{I} & \longrightarrow & \widetilde{\mathfrak{l}}_{\mathbf{P}} \\ \downarrow & & \downarrow \\ \mathcal{F}l^{\gamma}_{\mathbf{P}} & \longrightarrow & \mathfrak{l}_{\mathbf{P}}, \end{array}$$
where the right-hand column is the Grothendieck-Springer resolution for L_P, the Levi quotient of P. Taking the fiber at 0 of the bottom map gives exactly fl^γ_P. The cohomology of this fiber is exactly the W_λ-antisymmetric part of the pullback of the Springer sheaf (see [17, Lemma 2.2]), so after noting that everything commutes with the W-action, we are done.
Remark 2.2. It would be interesting to know if there is an extension of this Proposition to the singly or doubly graded cases. For the doubly graded case in type A, see Corollary 4.12.
The relation with the center of the small quantum group is as follows.
Proposition 2.2. We have
$$H^*(fl^{\gamma}_{\mathbf{P}_\lambda})^W \;\hookrightarrow\; Z(\mathfrak{u}^{\vee,\lambda}_{\zeta})^{G^{\vee}} \quad \text{for each } \lambda,$$
compatibly with the embedding of Theorem 1.1.

To compute the dimension of H^*(Gr^{ζ,γ})^W, it will be enough to understand the structure of the block decomposition and the dimensions of the (DR_W)^{W_λ}. This will be done in Sections 3.2 and 3.3.
In analogy with the proof of Proposition 2.1, we will need the following simple fact. Let $e = \frac{1}{|W|}\sum_{w\in W} w$ and $e_- = \frac{1}{|W|}\sum_{w\in W} (-1)^{\ell(w)} w$ be the symmetrizing and antisymmetrizing idempotents for W.

Lemma 2.3. Let γ ∈ g(K) be any regular semisimple element. As singly graded W-representations, we have that
$$e\,H^*(\mathcal{F}l^{\gamma}) \;\cong\; \mathrm{sgn} \otimes e_-H^*(\mathcal{F}l^{\gamma}).$$

Proof. This is [17, Lemma 2.2.].
The following corollary is probably well-known, but we could not find a proof in the literature and give a Springer-theoretic proof here, which may be of independent interest.

Corollary 2.4. For m coprime to h, the dimension of Hom_W(sgn, C[Q/mQ]) equals the number of regular W̃-orbits on Q/mQ.

Proof. For any m coprime to h, it is known by [34] that C[Q/mQ] can be realized as a W-representation using the cohomology of the affine Springer fiber corresponding to a certain "elliptic homogeneous element γ_{m/h} of slope m/h". More precisely, let Φ_r denote the set of roots for g of height r, write m = ah + b where 0 ≤ b < h, and define
$$\gamma_{m/h} \;=\; t^{a}\sum_{\alpha\in\Phi_b} e_\alpha \;+\; t^{a+1}\sum_{\alpha\in\Phi_{b-h}} e_\alpha.$$
Singular blocks from the principal block
For the Lie algebra g, fix Cartan and Borel subalgebras t ⊂ b ⊂ g. Then t carries an irreducible representation of the Weyl group W. Let DR_W denote Gordon's canonical quotient of the diagonal coinvariants for W. The diagonal coinvariant ring is by definition the quotient of C[t × t^*] by the natural doubly homogeneous ideal generated by the W-invariants without constant term (for the diagonal action of W); in [16] the further quotient DR_W and its structure as a W-module are studied. In particular, the dimension of DR_W is (h+1)^r, where r = rank(g) and h is the Coxeter number [16, Theorem 1.4.], and as a W-representation, DR_W ≅ sgn ⊗ C[Q/(h+1)Q], where sgn is the sign representation of W.
Let λ ∈ Λ, let W_λ be the stabilizer of λ, and consider the space of W_λ-invariants (DR_W)^{W_λ}. By Frobenius reciprocity, the latter is the same as Hom_W(Ind_{W_λ}^W(triv), DR_W).
The bigraded dimension of this space is given by the "Hall inner product" of Frobenius characters:
$$\dim_{q,t}(DR_W)^{W_\lambda} \;=\; \big\langle \operatorname{Frob}_{q,t}(DR_W),\ \operatorname{Frob}\big(\operatorname{Ind}_{W_\lambda}^{W}(\mathrm{triv})\big) \big\rangle. \tag{3.2}$$
Here the Frobenius character $\operatorname{Frob}_{q,t} : \operatorname{Rep}_{\mathbb{Z}^2\text{-graded}}(W) \to K_0(\operatorname{Rep}_{\mathbb{Z}^2\text{-graded}}(W))$ takes a doubly graded representation to its class in the Grothendieck group, where a representation in bigrading (i, j) is weighted by q^i t^j. When G = SL_n, so that W = S_n, we also use Frob_{q,t} to denote the composition of the above map with $K_0(\operatorname{Rep}_{\mathbb{Z}^2\text{-graded}}(W)) \to \operatorname{Sym}_{q,t}$ sending the Specht module labeled by λ to the Schur function s_λ; see the next section.
Type A
Let us now upgrade the type A result to include the natural bigrading. In case g = sl n we have W = S n and we write DR Sn = DR n = DR n = C[x 1 , . . . , x n , y 1 , . . . , y n ] C[x 1 , . . . , x n , y 1 , . . . , y n ] W + .
Let Sym_{q,t}[X] be the ring of symmetric functions over Q(q,t) in the alphabet X = {x_1, x_2, ...}, and let ∇ be the nabla operator of [6], diagonal in the basis of modified Macdonald polynomials. Let {e_λ}, {p_λ}, {h_λ}, {m_λ}, {s_λ} be the bases of elementary, power sum, complete homogeneous, monomial, and Schur symmetric functions, and ω = ω_X the usual involution on symmetric functions.
In this case we have the following more explicit statement about the bigraded dimension of (DR_n)^{W_λ}.

Proposition 3.1. Let λ ∈ Λ and W_λ ⊂ S_n be the stabilizer of λ. Then
$$\dim_{q,t}(DR_n)^{W_\lambda} \;=\; \langle \nabla e_n,\, h_\lambda \rangle,$$
where h_λ is the homogeneous symmetric function associated with λ, e_n the n-th elementary symmetric function, and ∇ the Garsia-Haiman nabla operator acting on symmetric functions.
Proof. It is clear that DR n is bigraded by x, y-degree and that this decomposition respects the W -action. The q, t-Frobenius character is given by Haiman [19] as the symmetric function ∇e n where ∇ is the Garsia-Haiman ∇-operator on symmetric functions and e n is the n-th elementary symmetric function.
It is also well known that the (trivially graded) Frobenius character of Ind_{W_λ}^W(triv) is given by h_λ, the homogeneous symmetric function attached to λ. Therefore, we compute using (3.2):
$$\dim_{q,t}(DR_n)^{W_\lambda} \;=\; \langle \operatorname{Frob}_{q,t}(DR_n),\, h_\lambda \rangle \;=\; \langle \nabla e_n,\, h_\lambda \rangle.$$

Remark 3.1. We will later use the interpretation of the Frobenius characters as symmetric functions when G = SL_n. If we are for example interested in the bigraded multiplicity of λ, it is given by ⟨∇e_n, s_λ⟩.
Proposition 3.2. For λ = (n − 1, 1) we have
$$\langle h_{(n-1,1)}, \nabla e_n \rangle \;=\; \sum_{i,j \ge 0,\ i+j \le n-1} q^i t^j,$$
a sum of $\binom{n+1}{2}$ distinct monomials.
Proof. The Shuffle Theorem of Carlsson-Mellit [11] states that
$$\nabla e_n \;=\; \sum_{\pi \in PF_n} q^{\operatorname{area}(\pi)} t^{\operatorname{dinv}(\pi)} x_\pi,$$
where π ∈ PF_n is a parking function on n letters and area and dinv are certain combinatorial statistics (see [11] for the definitions). The monomial x_π is a monomial in the alphabet {x_1, ..., x_n, ...} associated to π. Collecting all the monomials in the S_n-orbit of a fixed π and using the orthogonality of the bases {m_λ}, {h_μ} for the Hall inner product, we see that ⟨h_λ, ∇e_n⟩ is a weighted count of Dyck paths whose associated monomial is λ. For λ = (n − 1, 1), these are Dyck paths differing from the one with minimal area by allowing an extra horizontal step (compare [15], where a similar result is proved using Schröder paths). Fixing the length of this step, we get n − length such Dyck paths, each of which has the same area, and it is easy to see that they all have different dinv statistics. In total, we get $\binom{n+1}{2}$ Dyck paths, each with distinct statistics. This completes the proof.

Corollary 3.3. Let g^∨ = sl_n and let Z(u^{∨,λ}_ζ) denote the block of the center of the small quantum group u_ζ(g^∨) with λ a subregular weight. Let P_λ ⊂ G = SL_n be the parabolic subgroup associated to λ and N_λ ≃ T^*(G/P_λ) the Springer resolution. The additional grading on the coherent sheaf of polyvector fields ∧^j T_{N_λ} is given by the induced action of C^* along the fibers of the Springer resolution. Then there are isomorphisms of bigraded vector spaces
$$Z(\mathfrak{u}^{\vee,\lambda}_{\zeta}) \;\cong\; \bigoplus_{i,j} H^i\big(\mathcal{N}_\lambda, \wedge^j T_{\mathcal{N}_\lambda}\big) \;\cong\; (DR_n)^{W_\lambda}.$$

Proof. The first isomorphism is a particular case of Theorem 7 in [7]. The bigraded dimensions of the equivariant coherent sheaf cohomologies in the case when λ is subregular and G/P_λ ≃ P^k are computed in Theorem 3.3 of [25].
They match exactly the bigraded dimensions of $DR_n^{\lambda}$ obtained in Proposition 3.2.
This shows in particular that for $G$ of type $A_n$ and a singular block $Z(u^{\vee,\lambda}_\zeta)$ such that $G/P_\lambda$ is a projective space, the cohomology of the corresponding affine Springer fiber is isomorphic to the whole singular block of the center. This also confirms Conjecture 4.9(3) in [24] at the level of the bigraded vector spaces. Note that in this case the whole block of the center of the small quantum group is $G^\vee$-invariant.
Enumeration of blocks
Let $\mathfrak{a}_\ell$ be the $\ell$-dilated fundamental alcove for $G$. We would like to compute the number of blocks $u^{\vee,\lambda}_\zeta$ of the small quantum group for $\lambda$ with a given stabilizer $W_\lambda \subset W$. Therefore, we have to enumerate the singular coroot weights in $\mathfrak{a}_\ell \cap X^+$, or equivalently in $Q/\ell Q$, with a given type of stabilizer in the affine Weyl group. By [20], the total number of orbits is
\[
\mathrm{Cat}_W(\ell, h) = \prod_{i=1}^{r} \frac{\ell + e_i}{1 + e_i},
\]
where $e_1, \dots, e_r$ are the exponents of $W$, and the number of regular orbits is
\[
\mathrm{Cat}_W(\ell - h, h) = \prod_{i=1}^{r} \frac{\ell - h + e_i}{1 + e_i}. \tag{3.4}
\]
The first quantity merits a name and plays a significant role in so-called Coxeter-Catalan combinatorics, see for example [35].
Let us first sketch how this plays out for $G = SL_n$. In this case for $\ell = n+1$, we have the classical Catalan number $\mathrm{Cat}_{S_n}(n+1, n) = \frac{1}{n+1}\binom{2n}{n}$. On the other hand, Eq. (3.4) gives 1, i.e. there is a single regular orbit in $Q/(n+1)Q$.
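Both claims are easy to test numerically. Here is a quick check (a minimal script, assuming the exponents $1, \dots, n-1$ and Coxeter number $h = n$ for $S_n$, and the product formulas displayed above):

```python
from math import comb, prod

def cat_W(ell, exponents):
    """Number of W-orbits on Q/(ell*Q): prod_i (ell + e_i) / (1 + e_i)."""
    num = prod(ell + e for e in exponents)
    den = prod(1 + e for e in exponents)
    assert num % den == 0
    return num // den

for n in range(2, 9):
    exps = range(1, n)             # exponents of S_n; Coxeter number h = n
    assert cat_W(n + 1, exps) == comb(2 * n, n) // (n + 1)  # classical Catalan
    assert cat_W(1, exps) == 1     # Eq. (3.4) at ell = n + 1: ell - h = 1
```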
For more general m, since Q/mQ for (m, n) = 1 is in S n -equivariant bijection with rational parking functions of slope m/n, we only need to understand the orbits on the latter. One can think of rational parking functions as Dyck paths with labels {1, . . . , n} on the vertical runs, where the labels have to be increasing in each run. The S n -action permutes the labels on the parking functions, and therefore the stabilizer is given by the structure of the vertical runs.
In [3, Proposition 2] the following is proved, giving the general number of orbits of a given type:
\[
d_{\lambda, m} = \frac{1}{m}\binom{m}{m_0(\lambda),\, m_1(\lambda),\, m_2(\lambda),\, \dots},
\qquad m_0(\lambda) := m - \ell(\lambda).
\]
Compare also [20, Conjecture 2.4.2] in the case $m = n+1$.
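A brute-force check of this count is straightforward (a small script; it uses the identification $Q/mQ \cong \{v \in (\mathbb{Z}/m)^n : \sum_i v_i \equiv 0\}$ as $S_n$-sets, and the multinomial above is our reading of the formula in [3]):

```python
from collections import Counter
from itertools import product
from math import factorial, prod

def orbit_types(n, m):
    """S_n-orbits on Q/mQ (Q = sl_n root lattice), grouped by stabilizer type."""
    reps = {tuple(sorted(v)) for v in product(range(m), repeat=n) if sum(v) % m == 0}
    types = Counter()
    for rep in reps:
        lam = tuple(sorted(Counter(rep).values(), reverse=True))  # partition of n
        types[lam] += 1
    return types

def kreweras(lam, m):
    """(1/m) * multinomial(m; m_0, m_1, m_2, ...) with m_0 = m - len(lam)."""
    mi = Counter(lam)              # m_i = number of parts of size i (i > 0)
    den = factorial(m - len(lam)) * prod(factorial(c) for c in mi.values())
    return factorial(m) // (m * den)

for lam, count in orbit_types(4, 5).items():   # n = 4, m = 5 (coprime)
    assert count == kreweras(lam, 5), (lam, count)
```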
These numbers are known as (rational) Kreweras numbers. The goal of this section is to understand the general "rational Coxeter" analogs of Kreweras numbers, as first defined in [34]. These are defined as follows. Consider $Q/\ell Q$ with its natural action of the affine Weyl group $\widetilde{W}$.
The stabilizer in the finite Weyl group $W \subset \widetilde{W}$ of $q \in Q/\ell Q$ is by [34, Proposition 4.1] a parabolic subgroup of $W$. Let $\{W_\lambda\}$ be a set of representatives of parabolic subgroups of $W$. Now, $\mathbb{C}[Q/\ell Q]$ is by definition a permutation representation of $W$, so by orbit-stabilizer it splits as
\[
\mathbb{C}[Q/\ell Q] \cong \bigoplus_{\lambda} \big(\mathrm{Ind}_{W_\lambda}^{W}\, \mathrm{triv}\big)^{\oplus d_{\lambda,\ell}}.
\]
Definition 3.6. The $\ell/h$-rational Kreweras number of type $\lambda$ for $G$ is by definition the coefficient $d_{\lambda,\ell}$ in the above decomposition. In other words, it is the number of $W$-orbits in $Q/\ell Q$ with a given stabilizer.
Remark 3.2. Without our assumptions on ℓ, which for example imply ℓ is "very good" in the sense of [34], the W λ appearing above are in general only so called quasi-parabolic subgroups of W . We will however not need them.
Explicit formulas for the Kreweras numbers for classical groups can be found in [34]. There is also a general formula in terms of hyperplane arrangements [34, Proposition 5.1]. We will need the following proposition, which follows directly from the definitions. For $G$ of type $G_2$, we have the origin, $\frac{(\ell-1)(\ell-5)}{12}$ regular orbits and $\ell - 1$ subregular orbits.
Putting it together
Fix $G$ and, as in the previous section, let $d_{\lambda,\ell}$ be the number of orbits of type $\lambda$ in $Q/\ell Q$. We are interested in the sum (3.5). The rest of this section will be devoted to a proof of this theorem (Theorem 3.8). Before doing so, we note how this implies Theorem 1.3 from the Introduction. Corollary 3.9. With $\ell$ as above, the dimension of the subalgebra described in Theorem 1.1 of the $G^\vee$-invariant part of the center of the small quantum group at a primitive $\ell$-th root of unity $\zeta$ is given by this sum. Therefore, we are computing the dimension of the anti-invariants; by our assumptions on $\ell$, the Chinese remainder theorem applies, and Lemma 2.3 concludes the proof.
The dimension of this space is
Remark 3.4. Corollary 3.9 gives the dimension of the cohomology of the affine Springer fibers. By the explicit computation on the coherent side, we know that these dimensions match the dimensions of the $G^\vee$-invariant part of the center of the small quantum group for types $A_1, A_2, A_3, A_4, B_2, G_2$ for all blocks (see [21, Sections 4 and 5]). Therefore, Conjecture 1.2 is confirmed in all these cases, and these dimensions can be checked to match Hochschild cohomology computations as in [21].
Type A
Let $d_{\lambda,\ell}$ be the number of orbits of type $\lambda$ in $Q/\ell Q$. We will give a slightly different proof of Theorem 3.8 for $G$ of type $A$ in this section, which we hope will be illuminating to the reader. Note that in this case, where we use the Hall inner product on symmetric functions, $h_\lambda$ is the complete homogeneous symmetric function.
By Proposition 3.5, we have that
\[
d_{\lambda,\ell} = \frac{1}{\ell}\binom{\ell}{m_0(\lambda),\, m_1(\lambda),\, m_2(\lambda),\, \dots},
\]
where $m_i(\lambda)$ is the number of parts of size $i$ in $\lambda$ for $i > 0$ and $m_0(\lambda)$ is defined as above for $i = 0$.
The final answer we are looking for is the $((n+1)\ell - n, n)$-Catalan number. This is the same as the total number of orbits in $Q/((n+1)\ell - n)Q$. Proof. Note that $d_{\lambda,\ell} = \langle P_{\ell,n} \cdot 1, m_\lambda \rangle|_{q=t=1}$, where $m_\lambda$ are the monomial symmetric functions and $P_{m,n}$ for $m, n \geq 0$ are the usual elliptic Hall algebra operators as in [30].
We may combine the summation over $\lambda$ on the RHS and use linearity of the scalar product to get
\[
\sum_\lambda \Big\langle h_\lambda \langle P_{\ell,n} \cdot 1, m_\lambda \rangle,\, \nabla e_n \Big\rangle\Big|_{q=t=1} = \langle \omega P_{\ell,n} \cdot 1,\, \nabla e_n \rangle\big|_{q=t=1}. \tag{3.6}
\]
Here $\omega$ is the usual involution on symmetric functions. On the other hand, on the LHS, we may write $d_{n,\ell} = \langle P_{(n+1)\ell - n,\, n} \cdot 1, e_n \rangle|_{q=t=1}$. We may interpret this latter pairing as taking the dimension of the invariants in the $((n+1)\ell - n, n)$-parking function module. On the other hand, this is the same as the anti-invariant part of the $((n+1)\ell, n)$-parking function module by Lemma 2.3.
In Eq. (3.6) we may further interpret the pairing as the dimension of the invariants of the tensor product of the sign-twisted $(\ell, n)$-parking function module and the $(n+1, n)$-parking function module. Now, each of these $(m, n)$-parking function modules looks like $\mathbb{C}[Q/mQ]$ where $Q$ is the root lattice of $S_n$. If $n \not\equiv 0, -1 \bmod \ell$, the Chinese remainder theorem implies that $\mathbb{C}[Q/\ell Q \times Q/(n+1)Q] \cong \mathbb{C}[Q/\ell(n+1)Q]$ as $S_n$-representations, giving the conclusion.
Remark 3.6. We have checked the identity from Theorem 3.10 with a computer for all (not just coprime to n, n + 1) odd 5 < ℓ < 30 and 1 < n < 10 and expect it to be true in general.
Remark 3.7. By the rational shuffle conjecture, resulting from e.g. [29], we know the bigraded scalar products $\langle P_{m,n} \cdot 1, h_\lambda \rangle$ for the $m/n$-case as well. It would be interesting to understand a (bi)graded version of Theorem 3.10 as well.
The Harish-Chandra center, the Higman ideal and the Verlinde quotient
In the proof of Theorem 3.8 we have obtained a (non-multiplicative) combinatorial model of a subalgebra $Z \equiv H^*(\mathrm{Gr}_{\zeta,\gamma})^W$ of $Z(u^\vee_\zeta)^{G^\vee}$, and hopefully, under Conjecture 1.2, of the whole $G^\vee$-invariant part of the center. It is given by
\[
Z \cong \big[\mathbb{C}[Q/(h+1)Q] \otimes \mathbb{C}[Q/\ell Q]\big]^{W-}. \tag{3.7}
\]
Here we denote by $[\,\cdot\,]^{W-}$ taking the isotypical component of the sign representation. We would like to point out remarkable correspondences between certain natural subspaces in $[\mathbb{C}[Q/(h+1)Q] \otimes \mathbb{C}[Q/\ell Q]]^{W-}$ and the well-known subalgebras of the center of $u^\vee_\zeta$ recalled below. Recall that $u^\vee_\zeta$ is a unimodular Hopf algebra, meaning that it contains a two-sided integral $\nu \in Z(u^\vee_\zeta)$ that is unique up to rescaling and such that $\nu x = \varepsilon(x)\nu$ and $x\nu = \varepsilon(x)\nu$ for any $x \in u^\vee_\zeta$, where $\varepsilon : u^\vee_\zeta \to \mathbb{C}$ is the counit. The Hopf algebra $u^\vee_\zeta$ is a left module over itself with respect to the Hopf adjoint action $\mathrm{ad}_h(x) = h_1 x S(h_2)$ for any $h, x \in u^\vee_\zeta$, where $S$ is the antipode and we used Sweedler's notation for the coproduct $\Delta h = h_1 \otimes h_2$. It is easy to see that the space $\mathrm{ad}_\nu(u^\vee_\zeta)$ is spanned by central elements. Definition 3.11. The space $\mathrm{ad}_\nu(u^\vee_\zeta) \equiv Z_{\mathrm{Hig}}$ is an ideal in $Z(u^\vee_\zeta)$, called the Higman ideal.
Recall also that $u^\vee_\zeta$ is a quasitriangular Hopf algebra; the associated map $J$ sends the $K_{2\rho}$-twisted traces of $u^\vee_\zeta$-modules to $Z(u^\vee_\zeta)$. Moreover, it is an injective algebra homomorphism from the Grothendieck ring of $u^\vee_\zeta$ into the center, whose image is the Harish-Chandra center $Z_{\mathrm{HCh}}$. It follows from Lusztig's tensor product theorem for simple modules over the big quantum group [27] that $Z_{\mathrm{HCh}}$ is isomorphic as an algebra to $\mathbb{C}[\Lambda]^W / \mathbb{C}[\ell\Lambda]^W$ [10], the algebra of characters of $u^\vee_\zeta$-modules, spanned by the $W$-symmetric functions with highest weights in $\Lambda/\ell\Lambda$, and that $\dim Z_{\mathrm{HCh}} = \ell^{\,r(\mathfrak{g}^\vee)}$.
Proposition 3.13 ([26], Proposition 2.26 and Theorem 4.3). The Higman ideal is spanned by the images of the characters of the projective $u^\vee_\zeta$-modules under the map $J$. The intersection of $Z_{\mathrm{Hig}}$ with each block of the center is one-dimensional and therefore we have $\dim Z_{\mathrm{Hig}} = \mathrm{Cat}_W(\ell, h)$.
We have
\[
Z_{\mathrm{Hig}} \subset Z_{\mathrm{HCh}} \subset Z(u^\vee_\zeta)^{G^\vee}.
\]
The second inclusion follows from the fact that the $J$-image of the Grothendieck ring also arises as the specialization to $u^\vee_\zeta$ of the center of the big quantum group, where the action of $\mathfrak{g}^\vee$ is realized by the adjoint action of the $\ell$-th divided powers of the Chevalley generators, trivial on the central elements. Now consider the model of $Z \subset Z(u^\vee_\zeta)^{G^\vee}$ given by (3.7). Note that there exists exactly one regular orbit of $W$ in $Q/(h+1)Q$, which decomposes as the regular representation of $W$: $O_{\mathrm{reg}} = \bigoplus_{i \in I} d_i V_i$, where $\{V_i\}_{i \in I}$ is the complete set of inequivalent irreducible representations of $W$, and $d_i = \dim V_i$. On the other hand, $\mathbb{C}[Q/\ell Q]$ decomposes as $\bigoplus_{i \in I} k_i V_i$ for some natural $k_i > 0$, $i \in I$. Now for each $i \in I$ there exists a unique $\bar{i} \in I$ such that $V_{\bar i} = V_i \otimes \mathrm{sgn}$ and $d_i = d_{\bar i}$. Here $\mathrm{sgn}$ denotes the sign representation of $W$. We have that $V_j \otimes V_i$ contains $\mathrm{sgn}$ with multiplicity 1 if and only if $j = \bar i$; equivalently, $V_{\bar i} \otimes V_i \otimes \mathrm{sgn}$ contains the trivial representation $\mathrm{triv}$ with multiplicity 1. Therefore the subspace $\mathrm{sgn} \otimes \mathrm{triv}^{\oplus \mathrm{Cat}_W(\ell,h)}$ in our model of the center has dimension $\mathrm{Cat}_W(\ell, h) = \dim Z_{\mathrm{Hig}}$. Exchanging the roles of $\mathrm{sgn}$ and $\mathrm{triv}$ also leads to an interesting subspace in $[O_{\mathrm{reg}} \otimes \mathbb{C}[Q/\ell Q]]^{W-}$. We have exactly one trivial representation $\mathrm{triv}$ in $O_{\mathrm{reg}}$. By Lemma 2.3 the sign representation in $\mathbb{C}[Q/\ell Q]$ has multiplicity $\mathrm{Cat}_W(\ell - h, h)$. Therefore the subspace $\mathrm{triv} \otimes \mathrm{sgn}^{\oplus \mathrm{Cat}_W(\ell-h,h)}$ in our model of the center has dimension $\mathrm{Cat}_W(\ell - h, h)$, which is the dimension of the Verlinde quotient $\mathrm{Verl}$ of the Harish-Chandra center of $u^\vee_\zeta$. Recall that the Verlinde quotient of the Grothendieck ring is spanned by the characters of the Weyl modules over $u^\vee_\zeta$ with highest weights running over the regular weights in the $\ell$-dilated fundamental alcove $\mathfrak{a}_\ell$, and the multiplication is defined up to the ideal spanned by the linear combinations of Weyl characters symmetric with respect to any reflection in $W_\ell$. The number of regular weights in $\mathfrak{a}_\ell$ is given by $\mathrm{Cat}_W(\ell - h, h)$.
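The two multiplicity counts used here — $\mathrm{Cat}_W(\ell, h)$ copies of $\mathrm{triv}$ and $\mathrm{Cat}_W(\ell - h, h)$ copies of $\mathrm{sgn}$ in $\mathbb{C}[Q/\ell Q]$ — can be verified directly by character averaging (a minimal check for $W = S_n$, using the same model of $Q/\ell Q$ as before):

```python
from itertools import product, permutations

def triv_sgn_multiplicities(n, m):
    """Multiplicities of triv and sgn in C[Q/mQ] for W = S_n (character averaging)."""
    pts = [v for v in product(range(m), repeat=n) if sum(v) % m == 0]
    triv = sgn = 0
    for perm in permutations(range(n)):
        parity = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    parity = -parity
        fixed = sum(1 for v in pts if all(v[i] == v[perm[i]] for i in range(n)))
        triv += fixed
        sgn += parity * fixed
    order = 1
    for k in range(2, n + 1):
        order *= k
    return triv // order, sgn // order

# W = S_3 (h = 3), ell = 5: Cat_W(5,3) = 7 copies of triv, Cat_W(2,3) = 2 of sgn
assert triv_sgn_multiplicities(3, 5) == (7, 2)
```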
The bigrading in type A
In this section we explain how to get a bigrading on the space $H^*(\mathcal{F}l^\gamma_I)^W \cong H^*(\mathcal{F}l^\gamma_I)^\Lambda$ for $G = GL_n$, which corresponds to the principal block of the center under Conjecture 1.2. There are in fact at least two geometric ways to do this. The first one is using the perverse filtration on the parabolic Hitchin fibration and the second one is using the "number of points" or "connected components" grading on the positive part of the affine Springer fiber, as studied in [11]. Using techniques of loc. cit. and assuming their Conjecture A, we can show that the bigraded structure as an $S_n$-module agrees with that of the diagonal coinvariant ring, as conjectured in [24].
On the other hand, the advantage of using the Hitchin fibration is that there is a natural "Lefschetz element" coming from the relatively ample determinant bundle on the parabolic Hitchin fibration. We conjecture that the $\mathfrak{sl}_2$-action on the center obtained this way coincides with the one given by the wedge product with the Poisson bivector field on the Springer resolution. Further, we conjecture that the bigradings obtained in these two ways are the same (up to an explicit change of variables).
The parabolic Hitchin fibration
First, we want to construct a particular compactification of the singular curve given by $\{x^n + y^n = 0\} \subset \mathbb{C}^2$, inside a Hirzebruch surface. Most importantly, this compactification will be irreducible, i.e. a spectral curve for the anisotropic locus of the Hitchin fibration, and have only an isolated singular point which is an ordinary $n$-uple point.
Let $\Sigma_r = \mathbb{P}(\mathcal{O}_{\mathbb{P}^1}(r) \oplus \mathcal{O}_{\mathbb{P}^1})$ be the $r$-th Hirzebruch surface. The Picard group of $\Sigma_r$ is generated by the zero section $E_r$ and the class of a fiber $F$, with intersection form determined by $F^2 = 0$, $E_r^2 = -r$ and $E_r \cdot F = 1$.
Recall that there is a birational map from $\Sigma_r$ to $\Sigma_{r+1}$, called an "elementary transform" (see [4, Chapter 3]), constructed as follows. We choose some fiber $F$ and a suitable point $p \in F$, and consider the surface $\Sigma'_r$, the blow-up of $\Sigma_r$ at $p$; contracting the strict transform of $F$ then yields $\Sigma_{r+1}$. Lemma 4.1. For all $n \geq 0$, there is a curve $C \subset \Sigma_2$ such that $C$ is irreducible, has a unique singular point, with singularity type $x^n + y^n = 0$.
Proof. Let $C_2 \subset \mathbb{P}^2$ be a smooth curve of degree $n$, and $\Sigma_1$ be the blow-up of $\mathbb{P}^2$ at a point $a \notin C_2$. We denote by $C_1$ the strict transform of $C_2$. Consider a generic fiber $F_0$ and the corresponding elementary transform.
The strict transform $C'$ of $C_1$ inside $\Sigma'_1$ is isomorphic to $C_1$. Denote by $C$ the image of $C'$ under the contraction of $F'$. Since $F_0 \cap C_1$ consists of $n$ points, we see that $C$ is analytically isomorphic to $C_2$ with $n$ points glued transversally together, resulting in an ordinary $n$-uple point $q$. It is clear that $C \setminus \{q\}$ is smooth. Since $C \setminus \{q\}$ is connected, $C$ is irreducible.
Remark 4.1. Since $C_2$ is the normalization of $C$, the geometric genus of $C$ is $g_g = \binom{n-1}{2}$. Since the blowdown introduces $\binom{n}{2}$ nodes to $C'$, the arithmetic genus is $g_a = g_g + \binom{n}{2} = (n-1)^2$.

Definition 4.2. Let $X/\mathbb{C}$ be a smooth projective curve, $G$ a split reductive group, and $L$ a line bundle on $X$ with $\deg L \geq g_X$. The Hitchin moduli stack is the functor parametrizing pairs $(E, \varphi)$ of a $G$-bundle $E$ on $X$ and a Higgs field $\varphi \in H^0(X, \mathrm{ad}(E) \otimes L)$.

Let now $G = SL_n$ and $L$ be a line bundle of degree $\geq 0$ on $\mathbb{P}^1$. By the BNR correspondence [5], we may realize the curve $C$ from Lemma 4.1, or rather its intersection with $\mathrm{Tot}(\mathcal{O}(2))$, as a spectral curve $\{\det(xI - \varphi) = 0\}$ for the Hitchin fibration $\mathcal{M} \to \mathcal{A}$ associated to the data of $\mathbb{P}^1$, $G$, $L$. Let $a \in \mathcal{A}$ be such that $C$ is the associated spectral curve. Note that we in fact have $a \in \mathcal{A}^{\mathrm{ani}} \subset \mathcal{A}^\heartsuit$, the locus where the spectral curves are irreducible, resp. reduced (we will not need a more general definition of $\mathcal{A}^{\mathrm{ani}}$ or $\mathcal{A}^\heartsuit$ here, for that see [31, §6.1]).
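As a consistency sketch (assuming the standard canonical class $K_{\Sigma_2} = -2E_2 - 4F$ of the Hirzebruch surface), the numerology above pins down the class of $C$. Writing $C \sim nE_2 + bF$, so that $C \cdot F = n$, adjunction gives
\[
g_a(C) = 1 + \tfrac{1}{2}\big(C^2 + C \cdot K_{\Sigma_2}\big) = 1 - n^2 + (n-1)\,b,
\]
and setting this equal to $(n-1)^2$ forces $b = 2n$, i.e. $C \sim nE_2 + 2nF$. In particular $C \cdot E_2 = -2n + 2n = 0$, consistent with $C$ avoiding the zero section.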
The relationship to the affine Springer fibers considered in this paper is as follows. The curve $C$ may be chosen so that the unique singularity is over $0 \in X$. Its local form corresponds to $\gamma = st \in \mathfrak{g}(K)$ as before, for $s = \mathrm{diag}(1, \rho, \dots, \rho^{n-1})$ where $\rho$ is a primitive $n$-th root of unity. Let $(a, 0) \in \mathcal{A}^\heartsuit \times X$. Then [36, Proposition 2.4.1] says that the natural map (4.1) is a homeomorphism of stacks.
Here $\mathcal{P}_a$ is the generalized Picard stack, $P^{\mathrm{red}}_0(J_a)$ the reduced quotient of the local Picard stack at $0$. Modding out by $\mathcal{P}_a$, the left-hand side of Eq. (4.1) simplifies to $\mathcal{F}l^\gamma_I / P^{\mathrm{red}}_0(J_a)$. By taking $\gamma = st$ for $s \in \mathfrak{t}^{\mathrm{reg}}$ as above, it is easy to compute by hand in this case that $P_0(J_a) = T(\mathbb{C}) \times \Lambda$ where $T$ is the diagonal torus in $GL_n$ and $\Lambda = X_*(T) \cong \mathbb{Z}^n$ is the lattice part of the centralizer.
Modifying the proof of [31, Proposition 4.13.1] slightly, we can write the following variant of Eq. (4.1), where $P^\flat_a$ is the Picard group of the normalization of $C$ as in [31, 4.7.3].
The upshot of this analysis is that we may define the perverse filtration on $H^*(\mathcal{F}l^\gamma_I / \Lambda)$. Namely, if $\pi : \widetilde{\mathcal{M}} \to \mathcal{A}^{\mathrm{ani}}$ denotes the restriction of the parabolic Hitchin fibration to the locus of irreducible spectral curves, with the parabolic reduction at $0 \in X$, then $\pi_* \mathbb{C}$ acquires a filtration from the t-structure on the base as $P_{\leq i} := \mathrm{im}\big({}^{p}\tau_{\leq i}\, \pi_* \mathbb{C} \to {}^{p}\tau_{\leq i+1}\, \pi_* \mathbb{C}\big)$.
Restricting to the stalk at $a$, we get a filtration $P_{\leq i}$ on $H^*(\widetilde{\mathcal{M}}_a / P^\flat_a) \cong H^*(\mathcal{F}l^\gamma_I / \Lambda)$. By results of Maulik-Yun [28] this filtration is independent of the choice of deformation of $C$ used here (we only require the total space to be smooth and a codimension estimate on the base, handled in this case by [31]). See also [28, Section 3.1.3].
We make the following conjecture, which holds for G = SL 2 .
Conjecture 4.5. As bigraded vector spaces
Proposition 4.6. Conjecture 4.5 is true for $G = SL_2$.
Proof. In this case, the two vector spaces are both equal to $\mathbb{C}^3$, hence we just need to check that the gradings agree. The affine Springer fiber $\mathcal{F}l^\gamma_I$ can be identified with an infinite chain of $\mathbb{P}^1$'s, and the lattice action is given by translation by 2 ([37]). Hence the quotient $X_0 = \mathcal{F}l^\gamma_I / \Lambda$ is isomorphic to an elliptic curve with a singularity of type $I_2$ (i.e. two $\mathbb{P}^1$'s glued transversally at two points). By the discussion before, this curve also appears as a spectral curve inside a cotangent bundle of $\mathbb{P}^1$, hence its compactified Jacobian is a Hitchin fiber inside the corresponding Hitchin fibration. Since $X_0$ has arithmetic genus 1, it is isomorphic to its own compactified Jacobian. It follows by versality of the Hitchin map in this case that the restriction of this fibration to a generic line is simply a smoothing of $X_0$, say $f : \mathcal{X} \to L = \mathbb{C}$. Let $L^* = L \setminus \{0\}$. By the decomposition theorem, we have a decomposition in which $\mathcal{L}$ is the rank 2 local system on $L^*$ given by the monodromy matrix $\left(\begin{smallmatrix} 1 & 2 \\ 0 & 1 \end{smallmatrix}\right)$, and whose pure part contributes in perverse degrees $-2, 0, 2$. Up to renormalization, we obtain the same bigrading as the diagonal coinvariants in this case.
The Lefschetz element
Let $\mathcal{L}_{\det}$ be the determinant line bundle on $\widetilde{\mathcal{M}}$. The iterated cup product by $c_1(\mathcal{L}_{\det})$ induces a map on cohomology and therefore maps $\mathrm{gr}^P_b H^a \to \mathrm{gr}^P_{b+2} H^{a+2}$, where $a$ is the cohomological degree and $b$ is the perverse degree.
We will now prove that $c_1(\mathcal{L}_{\det})$ coincides with a certain polynomial in the ring of diagonal coinvariants, under Conjecture 4.5. On the other hand, under the bigraded isomorphism of the principal block of the center $Z(u^0_\zeta)$ with sheaf cohomology groups of the Springer resolution, we can hope that $c_1(\mathcal{L}_{\det})$ coincides with the Poisson bivector field on the Springer resolution, as explained in more detail in Conjecture 4.15.
The positive part of the affine Springer fiber
We now recall some results from [11]. We will use symmetric functions in two sets of variables $X, Y$; see Section 3.1. Using the standard plethystic notation, see for example [18], we write $f[XY]$ for the result of substituting $p_k(X)$ by $p_k(X)p_k(Y)$ in the expansion of $f \in \mathrm{Sym}_{q,t}$ in the basis of the $p_\lambda$. Recall from Section 3.1 that we have defined $\mathrm{Frob}_{q,t}$ to be the bigraded Frobenius character of a $\mathbb{Z}^2$-graded $S_n$-representation.
Next, suppose $\gamma = st$ for $s \in \mathfrak{t}^{\mathrm{reg}}$ as before and $G = GL_n$. Let $\mathrm{Gr}^+_G$ be the so-called positive part of the affine Grassmannian, consisting of lattices $\Lambda \subset K^n$ contained in the standard lattice $\mathcal{O}^n$. The positive part of the affine flag variety, $\mathcal{F}l^+_I$, is defined as the preimage of $\mathrm{Gr}^+_G$ under the natural projection. The positive part of $\mathcal{F}l^\gamma_I$ is defined to be $\mathcal{F}l^{\gamma,+}_I := \mathcal{F}l^+_I \cap \mathcal{F}l^\gamma_I$. Its equivariant Borel-Moore homology $H^T_*(\mathcal{F}l^{\gamma,+}_I)$ is bigraded by the connected component $t^i \in \pi_0(\mathcal{F}l^+_I) = \mathbb{Z}$ and (half of) the cohomological grading $q^j \in \mathbb{Z}$, and carries two bigraded $S_n$-actions, one from the Springer action and one from the monodromy as $s$ moves in a family. In [11], these are called the star and the dot actions, respectively. In [9], they are called the Springer action and the equivariant centralizer-monodromy actions, respectively.
Next, note that the positive part of the lattice $\Lambda$, i.e. $\Lambda^+ \cong \mathbb{Z}^n_{\geq 0}$, acts on $\mathcal{F}l^{\gamma,+}_I$, and by [22] we have $\mathcal{F}l^{\gamma,+}_I / \Lambda^+ \cong \mathcal{F}l^\gamma_I / \Lambda$. Further, from the explicit description as the module called "$M$" in [11], we see that this identification holds as $\mathbb{C}[\Lambda]$-modules.
Using the degeneration of the Cartan-Leray spectral sequence for the $\Lambda^+$- and $\Lambda$-actions on $\mathcal{F}l^{\gamma,+}_I$, resp. $\mathcal{F}l^\gamma_I$, we obtain a corresponding identification in cohomology. Suppose we wanted to kill the lattice action instead. Indeed, since $(H^*(\mathcal{F}l^{\gamma,+}_I)_{\Lambda^+})^* \cong H^*(\mathcal{F}l^{\gamma,+}_I)^{\Lambda}$, the bigraded Frobenius characters under $S_n$ are the same (up to contragredient). Note that the coinvariant space is by definition $H^*(\mathcal{F}l^{\gamma,+}_I)_{\Lambda^+} = \mathrm{Tor}_0(H^*(\mathcal{F}l^{\gamma,+}_I), \mathbb{C})$, so it inherits a second grading from $H^*(\mathcal{F}l^{\gamma,+}_I)$.
Again by [19, Lemma 3.6], the LHS is the equivariant Borel-Moore homology of a certain open fundamental domain $U$ of the lattice action defined in [11, Definition 6.9]. It has only even-dimensional nontrivial cohomology groups, as the formula implies. But interestingly, this does not mean that it is equivariantly formal, and indeed this space will have nontrivial odd usual Borel-Moore homology groups.
Finally, Eq. (4) of loc. cit. implies a corresponding identity. The main conjecture of [11] essentially says that the $\mathrm{Tor}_i$ groups that appear on the left contain only those nontrivial representations $\chi_\lambda$ of the "dot" or equivariant-monodromy $S_n$-action for $i = \iota(\lambda')$, $\iota$ being a certain combinatorial statistic from the nabla positivity conjecture of loc. cit.
As we know by the Shuffle Theorem of [14], $\nabla^k e_n[X]$ is the character of the diagonal coinvariants, and is the result of substituting $p_k(Y) = 1$ in $e_n[XY]$, in other words taking the trivial component of the representation of the "dot" action. By [11, Conjecture A], this is the same as the $\mathrm{Tor}_0$ part, and so corresponds to tensoring out both $x$ and $y$ from $H^T_*(\mathcal{F}l^{\gamma,+}_I)$ over $\mathbb{C}[\Lambda^+] \otimes \mathbb{C}[\mathfrak{t}] \cong \mathbb{C}[x, y]$, without including higher derived functors.
Combining the above remarks, we have
The sl 2 symmetry of the center
Let $P \subset G$ be a parabolic subgroup of $G$, and $X = G/P$ the partial flag variety. Set $\widetilde{\mathcal{N}}_P = T^*X$, the cotangent bundle of $X$. The following result is proven in [25]: Theorem 4.13. Let $\lambda \in \Lambda$ be a weight singular with respect to the shifted action of $W$, and $P$ a parabolic subgroup of $G$ whose Weyl group $W_P \subset W$ stabilizes a weight in the $W$-orbit of $\lambda$. Then the Hochschild cohomology of the singular block $u^\lambda_\zeta$ is given by the $\mathbb{C}^*$-equivariant Hochschild cohomology of $\widetilde{\mathcal{N}}_P$, where the degree $k$ comes from the $\mathbb{C}^*$-action by dilation on the fibers of $\widetilde{\mathcal{N}}_P$. The statement is based on the derived equivalence of categories between a certain category of representations of quantum groups at roots of unity and a derived category of $G \times \mathbb{C}^*$-equivariant coherent sheaves over the Springer resolution (see [2]).
The variety $\widetilde{\mathcal{N}}_P$ is naturally symplectic and the action of $G$ on $\widetilde{\mathcal{N}}_P$ is Hamiltonian.
In particular, we have the following result [25]: Proposition 4.14. The space $H^0(\widetilde{\mathcal{N}}_P, \wedge^2 T_{\widetilde{\mathcal{N}}_P})_{-2}$ is one-dimensional, spanned by the Poisson bivector field $\tau$ that is dual to the canonical holomorphic symplectic form $w \in H^0(\widetilde{\mathcal{N}}_P, \wedge^2 T^*_{\widetilde{\mathcal{N}}_P})_{-2}$. The exterior product with $\tau$ and contraction with $w$ define an $\mathfrak{sl}_2$-action on the total Hochschild cohomology of $\widetilde{\mathcal{N}}_P$. This gives for any $j = 0, 1, \dots, n$ an isomorphism of vector spaces. Combining Theorem 4.13 and Conjecture 1.2, we also have the following conjecture.
Conjecture 4.15. There is a bigraded algebra isomorphism
where the bigrading on the right is explained in Section 4.3; there is also an alternative version of the isomorphism. Moreover, the element $\tau$ on the left should correspond up to a scalar to the polynomial $\Delta_{(n-1,1)}$ introduced in Theorem 4.7 on the right, or in the second version equivalently to $c_1(\mathcal{L}_{\det})$.
A degeneration of spectral curves
To propose yet another model for the center in type A, we will study the elliptic homogeneous affine Springer fibers of slope $(n+1)/n$ associated to the elements $\gamma_{(n+1)/n}$ introduced in Eq. (2.2) and their relation to $\mathcal{F}l^\gamma_I$, where $\gamma$ is as in the introduction. These fibers are well studied, see e.g. [12]. We will construct a family of irreducible spectral curves $C_t \subset \mathrm{Tot}(\mathcal{O}_{\mathbb{P}^1}(2))$, such that the associated family of parabolic Hitchin fibers models the degeneration of affine Springer fibers of slope $n/n$ to the one of slope $(n+1)/n$. One can then ask whether the specialization map from the cohomology of the total family (which turns out to be just that of the central fiber) gives an injection to the cohomology of the special fiber, respecting the perverse filtration. Theorem 4.16. There exists a family of irreducible curves $\mathcal{C} \to \mathbb{A}^1$, arising as a restriction of the Hitchin system to a line on the Hitchin base, so that the spectral curve $C_t$, $t \in \mathbb{A}^1$, will have two singular points: one "constant" (i.e. independent of $t$) singular point with equation $y^n = z^{n-1}$, and another singular point of the form $y^n = tx^n + x^{n+1}$. Remark 4.2. Let us notice that if $t \neq 0$, this singular point is isomorphic to the singularity $y^n + x^n = 0$. Indeed, in $\mathbb{C}[[x, y]]$ the equation can be written $(t + x)^{-1} y^n = x^n$. Taking an $n$-th root of the unit $(t + x)$, say $a$, we can use the coordinate change $Y = a^{-1} y$ and $X = x$ to get $Y^n = X^n$. Hence, around this second singular point we are degenerating the singularity $y^n = x^n$ into the singularity $y^n = x^{n+1}$.
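For example, when $n = 2$ the second singular point belongs to the family
\[
y^2 = t\,x^2 + x^3 = x^2(t + x):
\]
for $t \neq 0$ this is a node with the two branches $y = \pm x\sqrt{t + x}$, while at $t = 0$ it becomes the cusp $y^2 = x^3$.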
Proof. We construct the family of spectral curves realizing this degeneration as follows: let $E \subset \Sigma_1$ be the exceptional section inside the first Hirzebruch surface, and $F \subset \Sigma_1$ some fiber, which we will call "the fiber at infinity". Let $U = \Sigma_1 \setminus (E \cup F)$. Take coordinates $x, y$ on $U$ such that the straight lines $x = \mathrm{constant}$ are the fibers of the projection $U \subset \Sigma_1 \to \mathbb{P}^1$. Let us consider the curve $C_t \subset U$ given by the equation $y^n = t + x$. The effect of a positive elementary transform $\varphi : \Sigma_r \dashrightarrow \Sigma_{r+1}$ is given by the change of variables $u = xy$, $v = x$.
Hence the strict transforms of $C_t$ (inside $\varphi(U)$) have local equation given by $u^n = tv^n + v^{n+1}$, giving the desired degeneration. Now let us describe the singular point at infinity (i.e. compute the closure of these curves inside $\Sigma_2$), and prove that $C_t$ is irreducible for all $t \in \mathbb{A}^1$.
First, we claim that the closure of $C_t$ does not intersect $E$. Indeed, recall that $\Sigma_1$ is the blow-up of $\mathbb{P}^2$ at a point. Hence, it is enough to take the closure of the preimage of $C_t$ inside $\mathbb{P}^2$ (call this curve $\widetilde{C}_t$) and check that $\widetilde{C}_t$ does not intersect the center of the blow-up. On $U$, we have coordinates $x, y$, which form a dense open subset of $\mathbb{P}^2$ (recall that $U$ and $E$ are disjoint by definition). Because $U \cong \mathbb{A}^2$, we can take homogeneous coordinates $[x : y : z]$ on $\mathbb{P}^2$. The fiber at infinity is given by $z = 0$ and $U$ is given by $z = 1$. The fibers $x = 0$ and $z = 0$ both contain the center of the blow-up, which is therefore $[0 : 1 : 0]$.
Since the elementary transforms are isomorphisms outside the exceptional locus, it follows that the closure of $C_t$ coincides with the closure of $\widetilde{C}_t$ inside $\mathbb{P}^2$, i.e. the curve with equation $y^n = tz^n + xz^{n-1}$. The only point at infinity is $[1 : 0 : 0]$, and it has local equation $y^n = tz^n + z^{n-1}$ as claimed. To check that $C_t$ is irreducible, it is enough to check that $C_t$ is irreducible on the chart $x \neq 0$. On this chart, $C_t$ is analytically isomorphic to the curve given by $y^n = z^{n-1}$, which is irreducible.
Consider the associated family of parabolic Hitchin fibers, which is a restriction of the family in 4.4 to a line. Using Eq. (4.1), we note that the only affine Springer fibers contributing to the cohomology are the ones coming from the singularities described above. We will ignore the one which is constant, for there is an injective map in cohomology sending the cohomology classes $\alpha \in H^*(\mathcal{F}l^\eta_I / \Lambda_\eta)$ of interest to classes in the Hitchin fiber, where $\eta$ is either $\gamma$ or $\gamma_{(n+1)/n}$.
In particular, by Theorem 4.16, we get a pullback map $i^* : H^*(\mathcal{F}l^{\gamma_{(n+1)/n}}_I) \to H^*(\mathcal{F}l^\gamma_I / \Lambda)$. Now, both of these spaces are endowed with the perverse filtration, as the family comes by restriction of the Hitchin fibration and on the locus of interest the map is proper, so that the decomposition theorem applies. Note however that it is unclear how this filtration compares to that induced by the t-structure on $\mathbb{A}^1$, as the pullback along the inclusion to the base is in general only right t-exact. Remark 4.3. It seems likely that the map $i^*$ is injective and its image is exactly $H^*(\mathcal{F}l^\gamma_I)^\Lambda$. Moreover, the map respects the perverse filtration. Note that as the map is a pullback in cohomology, it automatically respects the multiplicative structure.
The only supporting evidence for this remark is that these properties are true for $G = SL_2$, where they are easy to check. In general, we observe that $\mathcal{F}l^{\gamma_{(n+1)/n}}_I$ has only even-dimensional cohomology as it is paved by affines. It is also known that it has $n!$ components, as does $\mathcal{F}l^\gamma_I / \Lambda$. On the level of top cohomology, it is clear that the map is injective, but in general it seems hard to control the associated vanishing-nearby cycles-central fiber exact sequence in cohomology.
"year": 2022,
"sha1": "842f1021e722e168f1edb3a8d63f4e8280fe0f2a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "842f1021e722e168f1edb3a8d63f4e8280fe0f2a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Hsp27-Actin Interaction
Hsp27 oligomer is reported to interact with F-actin as a barbed-end-capping protein. The present study determined the binding strength and stoichiometry of the interaction using fluorescence of probes attached to Hsp27 cysteine-137. The fluorescence of acrylodan attached to Hsp27 increased 4-5-fold upon interaction with F-actin. Titration of the fluorescence with F-actin yielded a weak binding constant (K_D(app) = 5.3 μM) with an actin/Hsp27 stoichiometry between <1 and 6. This stoichiometry is inconsistent with an F-actin end-capping protein. Pyrene attached to Hsp27 exhibited a large excimer fluorescence, in agreement with the known proximity of the cysteine-137 residues in the Hsp27 oligomer. Upon interaction with F-actin the pyrene-Hsp27 excimer fluorescence was largely lost, suggesting that Hsp27 interacts with F-actin as a monomer, consistent with the acrylodan-Hsp27 results. EM images of F-actin-Hsp27 demonstrated that Hsp27 is not a strong G-actin sequestering protein. Thus, Hsp27, in vitro, is a weak F-actin side-binding protein.
Introduction
Mammalian heat shock protein 27 (Hsp27) is a member of a family of small heat shock proteins that includes the eye lens protein αB-crystallin, the archetype of the family [1][2][3]. These proteins form oligomeric structures and share a conserved α-crystallin domain responsible for intersubunit β-strand-β-strand interactions in the basic dimer subunit. Under physiological conditions Hsp27 occurs as a polydisperse distribution of oligomers centered about a minimum 22mer [4]. Hsp27 is involved in a variety of cellular functions including molecular chaperone activity, control of apoptosis, and regulation of the actin filament cytoskeleton [1][2][3][5][6][7][8][9].
Actin is one of the most abundant proteins and is present in almost all eukaryotic cells. The actin filament (F-actin) is composed of actin monomers (G-actin) polymerized head-to-tail to form two intertwining helical strands, resulting in a polar filament with a "barbed end" and a "pointed end" [10]. Actin filaments are part of the cytoskeleton and their dynamic structure is involved in the motility and shape change of the cell [10]. Hsp27 has been implicated in numerous physiological and pathological processes that involve its interaction with actin and its control of actin dynamics [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25]. However, the mechanism of the interaction between Hsp27 and actin is unclear and controversial, and a binding study of the interaction between the two proteins in vitro has not been reported [3]. An early study of the Hsp25 (murine and avian isoform of Hsp27)-actin interaction concluded that Hsp25 was an actin filament barbed-end-capping protein based on limited evidence [26]. Since that study it has generally been assumed that Hsp25 and Hsp27 are actin barbed-end-capping proteins, although there has been little further support for this interaction model. It is also unknown if Hsp27 interacts with the actin filament as a monomer, dimer, or higher multimer.
In order to address some of these issues we have investigated the interaction between Hsp27 and actin, in vitro, using fluorescence probes attached to Hsp27. We concluded that Hsp27, although an oligomer in solution, binds to the actin filament, most likely, as a monomer, with an actin monomer/Hsp27 molar stoichiometry between <1 and 6. Such a stoichiometry does not support the view that Hsp27 is simply an actin filament end-capping protein. Instead, we propose that Hsp27 binds along the side of the F-actin filament.
Preparation of Hsp27 and Actin.
Full-length Hsp27 was prepared by bacterial expression and purification using the New England Biolabs IMPACT-CN intein system, as described previously [4]. The advantage of using the intein expression system is that no extraneous N- or C-terminal residues are introduced. Additional N-terminal amino acids in expressed Hsp27 have been shown to alter the oligomeric structure of Hsp27 and other small heat shock proteins [27,28]. The purified Hsp27 was concentrated by Millipore/Amicon Ultra centrifugal filter devices, then dialyzed and stored on ice in 2 mM Mops, 0.1 mM EDTA, 0.01% NaN3, 50 μM PMSF, pH 7.5 (Buffer A). Hsp27 concentration was determined from the absorbance at 280 nm, after subtracting the absorbance at 320 nm, using an extinction coefficient of A(0.1%, 280 nm) = 1.65 cm−1. Actin was prepared as reported [29] from rabbit skeletal muscle and stored for up to two weeks on ice as the actin monomer in G-buffer (2 mM Mops, 0.2 mM CaCl2, 0.2 mM ATP, 0.01% NaN3, pH 7.5). Actin concentration was determined from the absorbance at 290 nm minus that at 320 nm using an extinction coefficient A(1%, 290 nm) = 6.3 cm−1 [30]. G-actin was polymerized to F-actin by adding NaCl and MgCl2 to 40 mM and 2 mM, respectively (F-buffer).
Fluorescence Labeling of Hsp27.
The sulfhydryl-reactive fluorescent probes acrylodan (6-acryloyl-2-(dimethylamino)naphthalene) (AC) and pyrene iodoacetamide (N-(1-pyrene)iodoacetamide) were purchased from Molecular Probes (Invitrogen). The single cysteine (Cys137) of Hsp27 was labeled with the above probes for several hours at 37 °C as described for other probes [31]. Briefly, Hsp27 in Buffer A was reacted with a 2.5-fold molar excess of acrylodan for 3 hr or a 10-fold molar excess of pyrene-iodoacetamide for 5 hr at 37 °C. The reaction was stopped with 5 mM DTT, followed by dialysis versus Buffer A to remove dissolved excess label, after centrifuging pyrene-Hsp27 to pellet the suspended, unreacted pyrene-iodoacetamide probe. The labeling ratios were determined by measuring the labeled Hsp27 concentration with the BCA protein assay [32] using unlabeled Hsp27 as a standard and measuring the attached acrylodan and pyrene concentrations from the optical absorbance with extinction coefficients ε(360 nm) = 12,900 M−1 cm−1 [33] and ε(343 nm) = 22,000 M−1 cm−1 [34], respectively. The final molar labeling ratios (probe/Hsp27) were between 0.6-1.0 for pyrene-Hsp27 and 0.8-1.1 for acrylodan-Hsp27.
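The concentration and labeling-ratio arithmetic in this paragraph amounts to Beer-Lambert bookkeeping; a minimal script illustrating it (the absorbance readings below are hypothetical, not data from this study) is:

```python
def protein_conc_mg_per_ml(a280, a320, ext_0p1_percent):
    """Beer-Lambert, 1 cm path: A(0.1%) is the absorbance of a 1 mg/ml solution."""
    return (a280 - a320) / ext_0p1_percent

def labeling_ratio(a_probe, eps_probe, protein_um):
    """Moles of attached probe per mole of protein, from the probe absorption band."""
    probe_um = a_probe / eps_probe * 1e6          # convert M to uM
    return probe_um / protein_um

print(protein_conc_mg_per_ml(a280=0.50, a320=0.02, ext_0p1_percent=1.65))  # ~0.29 mg/ml
print(labeling_ratio(a_probe=0.13, eps_probe=12900, protein_um=10.0))      # ~1.0 AC/Hsp27
```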
Hsp27-Actin Interaction by Fluorescence Spectroscopy.
Fluorescence measurements were made on a Varian Eclipse fluorometer equipped with a Varian Peltier device for temperature control. Fluorescence measurements of acrylodan-Hsp27 and pyrene-Hsp27 were made at excitation wavelengths of 395 nm and 342 nm, respectively. For fluorescence titration of acrylodan-Hsp27 with F-actin at different concentrations, separate samples were made up for each titration point and incubated overnight at 37 °C before spectra were recorded. From the spectra, the fluorescence intensity was determined at 485 nm and plotted versus the total actin concentration.
The data are analyzed according to a simple binding equation, nA + B ⇌ A_nB, where A is the actin protomer in F-actin, B is Hsp27 irrespective of its oligomeric state, and n is the actin/Hsp27 molar binding stoichiometry. The binding is described by the equation nbφ² − (K_D + nb + a)φ + a = 0, where b is the total concentration of Hsp27, which is held constant, a is the total concentration of actin, which is varied, K_D is the dissociation constant, and φ is the fractional saturation of binding sites, i.e. [AB]/nb, which is equal to the fractional increase in fluorescence enhancement and is a root of the quadratic solution to the above equation. One can extract n and K_D by fitting a quadratic solution to the binding equation to the data points using an unweighted, nonlinear least-squares fitting procedure [35,36].
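For concreteness, the physical root of this quadratic and the least-squares extraction of K_D and n can be sketched as follows (a minimal illustration on synthetic data, not the fitting code used in this study; the fixed Hsp27 concentration is a hypothetical value):

```python
import numpy as np
from scipy.optimize import curve_fit

B_TOTAL = 2.0   # fixed total acrylodan-Hsp27 concentration (uM); hypothetical

def frac_sat(a, Kd, n, b=B_TOTAL):
    """Physical root of n*b*phi**2 - (Kd + n*b + a)*phi + a = 0 (phi -> 0 as a -> 0)."""
    s = Kd + n * b + a
    return (s - np.sqrt(s**2 - 4.0 * n * b * a)) / (2.0 * n * b)

rng = np.random.default_rng(0)
a = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 20.0, 30.0])            # actin (uM)
f = frac_sat(a, Kd=5.3, n=1.0) + 0.02 * rng.standard_normal(a.size)  # synthetic data

(kd_fit, n_fit), _ = curve_fit(frac_sat, a, f, p0=(3.0, 1.0),
                               bounds=([0.0, 0.1], [50.0, 10.0]))
print(kd_fit, n_fit)
```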
Electron Microscopy of F-Actin ± Hsp27.
F-actin at 150 μM was diluted to 2 μM in F-buffer. To some samples Hsp27 was added to 6 μM and incubated for 120 min at room temperature, followed by overnight at 4 °C and 120 min at room temperature. Samples were then applied to a carbon-film coated copper grid, negatively stained with 1% aqueous uranyl acetate and observed under a Philips 300 electron microscope at 60 kV.
Analytical Ultracentrifugation.
Analytical ultracentrifugation sedimentation velocity runs were performed and the resulting data were analyzed as described [4].
Results
The direct binding of a protein to F-actin filaments is generally determined by cosedimentation of the filaments with bound protein, followed by SDS-PAGE of the pellet and supernatant. However, Hsp27 is a large oligomeric protein which also sediments, independently of actin binding, making this technique problematic. Therefore we took an alternative approach by using Hsp27 labeled with a fluorescence probe whose signal is sensitive to the binding of actin. Two cysteine-specific fluorescence probes, acrylodan and pyrene-iodoacetamide, attached to the single cysteine (Cys137) of Hsp27 fulfilled this requirement.
Acrylodan-HSP27.
In order to determine the effect of the acrylodan probe on Hsp27 quaternary structure, analytical ultracentrifugation sedimentation velocity of acrylodan-Hsp27 (AC-Hsp27) was compared to that of unlabeled Hsp27 over a range of concentrations (Figure 1). No significant difference was found. Both labeled and unlabeled Hsp27 had almost identical sedimentation coefficients of close to 20 Svedbergs (Figure 1), a value in agreement with our previous conclusion that Hsp27 is an oligomeric distribution centered around a minimum size of a 22-mer [4]. Thus placing a probe at Cys137 does not significantly perturb the oligomeric structure of Hsp27.
Acrylodan is very sensitive to its environment [33] and, when attached to a protein, often sensitive to protein-protein interaction. As shown in Figure 2, acrylodan-Hsp27 is very sensitive to the binding to F-actin. In the presence of actin the fluorescence intensity increased about 4-5-fold and the emission spectrum was blue-shifted about 20 nm (Figure 2), changes indicative of greater probe shielding from the solvent upon interaction with actin.
In order to determine the strength and stoichiometry of the binding, the fluorescence enhancement of a fixed concentration of acrylodan-Hsp27 was titrated with increasing concentrations of F-actin at 37 °C. For this determination acrylodan-Hsp27 was incubated with F-actin overnight at 37 °C in order to ensure complete equilibration. This was necessary since upon addition of F-actin to acrylodan-Hsp27 the acrylodan fluorescence increased slowly after an initial relatively rapid rise (Figure 3). After overnight incubation at 37 °C there was no further change in fluorescence intensity for at least 5 hours. The slow equilibration could be best fit with two exponentials (Figure 3), suggesting that at least two processes are taking place.
The titration of acrylodan-Hsp27 fluorescence intensity with F-actin was best fit with an apparent dissociation constant of K_D(app) = 5.3 ± 0.6 μM (Figure 4), fixing the actin/Hsp27 molar stoichiometry at 1.0, indicating a rather weak interaction. Because the Hsp27-actin binding is not strong, the stoichiometry is somewhat uncertain in that the data can be fit with an actin/Hsp27 molar stoichiometry value of between <1 and 6 with a 90% confidence level using F-statistics on χ² values. The dissociation constant varies between 3.0 and 5.6 μM over this range of stoichiometries. The fit deteriorates for stoichiometries greater than 6, in that χ² values continue to increase. These results indicate that Hsp27 binds all along the actin filament, each Hsp27 monomer interacting with between <1 and 6 actin subunits. Indeed, Hsp27 is an elongated molecule [4] which could well cover at least two actin subunits. A stoichiometry of less than 1 would indicate several Hsp27 molecules interacting with each actin molecule in the actin filament. This latter situation would be limited by physical crowding of Hsp27 molecules along the filament.
These results rule out that Hsp27 is binding to actin simply as a barbed-end-capping protein. The average length of an actin filament polymerized in vitro is about 6.7 μm [38], equivalent to about 2500 actin subunits. Thus an Hsp27 22-mer [4] bound at the barbed end of each filament would result in an actin/Hsp27 molar stoichiometry of greater than 100. For smaller oligomers the stoichiometry would be even higher; all these values are much larger than we have estimated from the fluorescence titration in Figure 4.
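The arithmetic behind this estimate (assuming the usual ~2.7 nm axial rise per actin subunit) is:
\[
\frac{6.7\ \mu\mathrm{m}}{\approx 2.7\ \mathrm{nm/subunit}} \approx 2.5 \times 10^{3}\ \text{subunits per filament},
\qquad
\frac{2500\ \text{actins}}{22\ \text{Hsp27 per oligomer}} \approx 110 \gg 6 .
\]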
Pyrene-HSP27.
Although we have ruled out that Hsp27 binds to actin simply as an end-capping protein from the acrylodan-Hsp27 results, the results do not allow us to know the size of the Hsp27 that is bound to actin. That is, does it bind to actin as a monomer, a dimer, or a larger multimer? We have used the fluorescence of pyrene-Hsp27 to aid us in this determination. The pyrene probe can be a proximity probe, in that when two pyrene moieties are in close proximity they may stack in the excited state, which results in an excited-state dimer, called an excimer. Pyrene excimer fluorescence is a broad band centered around 500 nm, quite different from the narrower doublet band of the pyrene monomer moiety centered at about 400 nm [39,40].
Cys137 of Hsp27 (Cys141 of Hsp25) has been shown to be close to Cys137 of a neighboring Hsp27 molecule in the basic dimeric building block of the Hsp27 oligomer by spin-spin ESR spectroscopy [41,42] and by disulfide cross-linking [26,[42][43][44]. We thought that we might be able to exploit pyrene excimer fluorescence to monitor this interaction. In agreement with this proximity, the fluorescence spectrum of pyrene-Hsp27 showed a relatively intense excimer band compared to that of the band of the pyrene monomer (Figure 5). Upon interaction with F-actin the intensity of the excimer fluorescence of pyrene-Hsp27 dropped dramatically with a smaller concomitant increase in that of the pyrene monomer (Figure 5). The remaining excimer fluorescence in the presence of actin is most likely due to pyrene-Hsp27 that is not bound to actin, since the 5-fold molar excess of actin used in this experiment is not sufficient to complex all of the Hsp27 (Figure 4). The concentration of F-actin used was limited by its Rayleigh light scattering, which overlaps with the pyrene monomer band. Therefore it is clear that, upon binding of pyrene-Hsp27 to F-actin, the excimer fluorescence is almost entirely lost, and this loss is most likely due to a dissociation of pyrene-Hsp27 into monomer molecules attached to actin and thus no longer able to form excimers. These results, together with those of AC-Hsp27, are consistent with Hsp27 binding to actin as a monomer with an actin/Hsp27 molar stoichiometry between <1 and 6.
However, we cannot entirely rule out that the loss in pyrene-Hsp27 excimer fluorescence upon actin binding is due to a conformational change around Cys137 such that pyrene moieties are no longer able to properly stack for excimer formation. In that case the results would also be consistent with pyrene-Hsp27 binding to F-actin as a dimer or a multimer, with the loss of excimer due to a long-range allosteric conformational change in Hsp27.
Electron Microscopy of the Actin-HSP27 Complex.
It has been concluded from indirect evidence that Hsp27 is a strong actin monomer sequestering protein [24]. It was calculated that 3-4 actin monomers bound tightly to each Hsp27 monomer within the Hsp27 oligomer, with a dissociation constant of 20 nM [24] at 25 °C. In order to rule out such an interaction accounting for our findings, we observed electron microscopic images of F-actin filaments in the absence and presence of a threefold molar excess of Hsp27 under the F-actin ionic conditions used above. Strong Hsp27 sequestration of actin monomers in equilibrium with actin filaments would result in a dissolution of the filaments upon incubation of the filaments with Hsp27. Our electron microscopic images (Figure 6) showed that, although Hsp27 results in some aggregation of F-actin filaments, there is no significant dissolution of the filaments, indicating no significant actin monomer sequestration by Hsp27. Thus such a mechanism cannot account for our fluorescence observations above.
Discussion
This study investigated the equilibrium binding parameters of the in vitro interaction between Hsp27 and F-actin and concluded that, in vitro, Hsp27 is not a simple actin barbed-end-capping protein, as is generally assumed. Rather, it is concluded that Hsp27 binds weakly along the side of the actin filament as a monomer with an actin/Hsp27 molar ratio between <1 and 6. A previous work also concluded that monomeric but not oligomeric Hsp25 interacted with actin [45]. However, we cannot entirely rule out that Hsp27 is a barbed-end-capping protein in addition to being an actin side-binding protein, which has been shown to be the case for another actin-binding protein, vinculin [46]. The fitting of the binding data assumes a rather simple binding model of Hsp27 monomer in solution in equilibrium with actin-bound Hsp27. However, Hsp27 is primarily an oligomer in solution [4]; therefore the apparent dissociation constant from the fit reflects the difference in binding between Hsp27 monomer bound to the oligomer compared to its binding to the actin filament. Thus we can only conclude that Hsp27 binds weakly to actin compared to its binding to itself in the Hsp27 oligomer. The strong binding of Hsp27 to itself might reflect a slow rate of dissociation of the monomer (or dimer) from the Hsp27 oligomer and account, at least in part, for the slow attainment of equilibrium of its binding to actin. Alternatively, the first step in the binding sequence might be the attachment of the Hsp27 oligomer to the actin filament, which then slowly "peels off" Hsp27 monomers.
Another work has also concluded that Hsp27 is not a barbed-end-capping protein, based on the effect of Hsp27 on actin polymerization properties [24]. That work concluded that Hsp27 is primarily a strong actin monomer sequestering protein, tightly binding 3-4 actin monomers per Hsp27 molecule in the Hsp27 oligomer [24]. However, our electron microscopic images of F-actin plus Hsp27 demonstrate that Hsp27 does not dissolve actin filaments and thus is not a strong actin monomer sequestering protein. Another recent work has also concluded, from actin polymerization studies, that Hsp27 is inefficient in sequestering actin monomers in the presence of actin filaments [47]. That work [47] and the present work employ Hsp27 without any extra residues at either end of the molecule, whereas the previous work [24] used Hsp27 with extra residues at the N-terminus. It has been shown that extra N-terminal residues can modify the oligomeric structure of Hsp27 and other similar small heat shock proteins and thus possibly modify its function [27,28]. This might be the source of the above discrepancies.
In conclusion, in vitro, Hsp27 is not a simple barbed-end-capping protein but instead primarily binds to the sides of the actin filament. Compared to its binding to itself in the Hsp27 oligomer, Hsp27 binds weakly to actin. The functional consequence of such a weak and slow-to-reach-equilibrium interaction awaits further investigation.
"year": 2011,
"sha1": "7c07777ab5629a6e9c3bd79f972fd3621c967654",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bri/2011/901572.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c07777ab5629a6e9c3bd79f972fd3621c967654",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Cytomegalovirus blood reactivation in COVID-19 critically ill patients: risk factors and impact on mortality
Purpose Cytomegalovirus (CMV) reactivation in immunocompetent critically ill patients is common and relates to a worsening outcome. In this large observational study, we evaluated the incidence and the risk factors associated with CMV reactivation and its effects on mortality in a large cohort of patients affected by coronavirus disease 2019 (COVID-19) admitted to the intensive care unit (ICU). Methods Consecutive patients with confirmed SARS-CoV-2 infection and acute respiratory distress syndrome admitted to three ICUs from February 2020 to July 2021 were included. The patients were screened at ICU admission and once or twice per week for quantitative CMV-DNAemia in the blood. The risk factors associated with CMV blood reactivation and its association with mortality were estimated by adjusted Cox proportional hazards regression models. Results CMV blood reactivation was observed in 88 patients (20.4%) of the 431 patients studied. Simplified Acute Physiology Score (SAPS) II score (HR 1.031, 95% CI 1.010–1.053, p = 0.006), platelet count (HR 0.996, 95% CI 0.993–0.999, p = 0.004), invasive mechanical ventilation (HR 2.611, 95% CI 1.223–5.571, p = 0.013) and secondary bacterial infection (HR 5.041; 95% CI 2.852–8.911, p < 0.0001) during ICU stay were related to CMV reactivation. Hospital mortality was higher in patients with (67.0%) than in patients without (24.5%) CMV reactivation, but the adjusted analysis did not confirm this association (HR 1.141, 95% CI 0.757–1.721, p = 0.528). Conclusion The severity of illness and the occurrence of secondary bacterial infections were associated with an increased risk of CMV blood reactivation, which, however, does not seem to influence the outcome of COVID-19 ICU patients independently. Supplementary Information The online version contains supplementary material available at 10.1007/s00134-022-06716-y.
Introduction
In critically ill patients, the reactivation of Cytomegalovirus (CMV) and other Herpesviridae has been reported with a rate ranging between 20 and 70%, and it is associated with an increased risk of secondary infections and mortality [1][2][3][4]. Although the risk factors remain to be defined, profound dysfunction of the immune response is the key mechanism leading to viral reactivation in previously immunocompetent patients with critical illness [5]. In this context, the immunosuppression induced by the direct pathogenic effects of SARS-CoV-2, the unregulated host response, and the use of drugs to modulate such a response (e.g., steroids and immunomodulators) put critically ill patients affected by coronavirus disease 2019 at high risk for viral reactivation [6][7][8][9]. A recent large observational study indicated that the Herpes simplex 1 virus reactivated in around a quarter of COVID-19 patients requiring mechanical ventilation and impacted secondary infections and mortality [10]. Unfortunately, in this population, very little data are available on CMV reactivation, which is one of the most pathogenic viruses and seems to be closely related to worse outcomes in other intensive care unit (ICU) populations [11,12].
This large observational study aimed to evaluate the incidence and the risk factors associated with CMV reactivation and its effects on mortality risk in a large cohort of COVID-19 patients admitted to ICU for severe respiratory failure.
Methods
In this observational study using prospectively collected data, we included all the patients admitted to the three COVID-19 ICUs of the University Hospital of Modena with laboratory-confirmed SARS-CoV-2 infection and moderate to severe acute respiratory distress syndrome (ARDS) from February 22nd, 2020, to July 21st, 2021 [21]. Patients with age < 18 years, ICU length of stay (LOS) < 24 h, or a limitation of care or do-not-resuscitate order were excluded from the study. The Institutional Ethics Committee of Area Vasta Emilia Nord (EC AVEN) approved the study (approval number 396/2020/OSS/AOUMO-CoV-2 MO-Study). Due to the observational nature, written informed consent was not required.
Treatment protocol
All the patients received standard ICU and supportive care as recommended by the World Health Organization (WHO) guidelines [22] and specific therapies according to national [23] and local protocols for COVID-19 treatment, including dexamethasone and low-molecular-weight heparin for prophylaxis of deep vein thrombosis according to individual body weight and renal function. In addition, the local protocol allowed the use of steroids (methylprednisolone 2 mg/kg/day) to prevent the onset of pulmonary fibrosis in patients who maintained a PaO2/FiO2 ratio < 150 mmHg for at least 7 days of mechanical ventilation [24]. Since March 2020, the local management protocol has included the Tocilizumab (TOCI) option in patients with moderate or severe ARDS and the need for mechanical ventilation (non-invasive or invasive). From the end of March 2020, all patients who received TOCI or a high dose of steroids received standard prophylaxis with acyclovir. The standard supportive management in the ICU did not significantly change during the study period.
Data collection
Patients' demographics, Sequential Organ Failure Assessment (SOFA) score, Simplified Acute Physiology Score II (SAPS II) and standard laboratory including coagulation and inflammatory variables were collected at ICU admission. In addition, the need for invasive mechanical ventilation, therapy with steroids, tocilizumab (also before ICU admission) and ganciclovir, the CMV blood reactivation and the occurrence of new bacterial infections were collected during ICU stay.
As for the ICU protocol, patients were screened at ICU admission and twice (in invasively mechanically ventilated patients) or once per week for bacterial colonization in the rectum, respiratory tract (if tracheal intubation) and urinary tract; for respiratory (if tracheal intubation) and serum Galactomannan; for serum Beta-d-glucan; and for quantitative CMV-DNAemia in the blood (see protocol in the supplementary material). CMV reactivation was defined as a DNAemia > 62 IU/ml in the whole blood, the detection threshold of the method used (Abbott RealTime CMV). In patients with CMV blood reactivation, in case of suspected CMV-related pneumonia, ganciclovir was initiated after detection of CMV-DNA in the broncho-alveolar lavage. The clinical suspicion of CMV pneumonia was based on the following elements: new worsening of pulmonary gas exchange, modification of chest X-ray or computed tomography compatible with new interstitial pneumonia, CMV blood reactivation, and no other causes of pneumonia/worsening of pulmonary gas exchange. Ganciclovir dosage was set based on renal function and continued for at least 10 days. Secondary infections were defined according to international guidelines [25,26] and divided into hospital-acquired pneumonia (HAP), including also ventilator-associated pneumonia, and bloodstream infection (BSI). Probable invasive pulmonary Aspergillosis was defined according to definitions from the recent consensus document [27]. According to the WHO International Standard for Human CMV, all microbiological samples were analyzed in the local Microbiology and Virology laboratory [28].

Take-home message
In critically ill patients affected by coronavirus disease 2019 (COVID-19), Cytomegalovirus (CMV) blood reactivation is frequent, and its risk depends on the severity of illness and the development of secondary bacterial infections. CMV reactivation is associated with prolonged hospital stay and higher mortality, but its role in worsening patient outcomes and the appropriate strategy for its management remain to be clarified.
Data analysis
The risk factors associated with CMV blood reactivation within 60 days after ICU admission were estimated by a Cox proportional hazards regression model adjusted for covariates with a p value < 0.1 at unadjusted analysis. Similarly, we used a Cox proportional hazards regression model including variables with a p value < 0.1 at unadjusted analysis to evaluate the independent association of CMV blood reactivation with mortality censored at day 60. An additional sensitivity analysis with the same procedure described above was performed only in the population with CMV-related pneumonia treated with ganciclovir, to evaluate the independent association of CMV pneumonia with mortality censored at day 60. To further evaluate the association of CMV blood reactivation with mortality censored at day 60, we performed a secondary analysis by matching patients with and without CMV blood reactivation (1:1) using a propensity score estimated by a multivariable logistic regression model that included as covariates the risk factors for developing CMV reactivation; the nearest-neighbor method was applied to the propensity-score matching analysis.
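A sketch of this analysis pipeline in code (with hypothetical column names and input file; the actual analysis was performed in SPSS, as noted below) is:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("icu_cohort.csv")   # hypothetical flat table, one row per patient
covars = ["saps2", "platelets", "imv", "bacterial_infection"]

# adjusted Cox proportional hazards model for mortality censored at day 60
cph = CoxPHFitter()
cph.fit(df[["time_60", "death_60", "cmv"] + covars],
        duration_col="time_60", event_col="death_60")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios and p values

# propensity score for CMV reactivation, then 1:1 nearest-neighbour matching
X = df[covars].to_numpy()
cmv = df["cmv"].to_numpy().astype(bool)
ps = LogisticRegression(max_iter=1000).fit(X, cmv).predict_proba(X)[:, 1]
nn = NearestNeighbors(n_neighbors=1).fit(ps[~cmv].reshape(-1, 1))
_, idx = nn.kneighbors(ps[cmv].reshape(-1, 1))
matched = pd.concat([df[cmv], df[~cmv].iloc[idx.ravel()]])
```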
Non-parametric and χ² tests were used as appropriate for comparisons of demographic and baseline values, of outcomes in patients with and without CMV blood reactivation, and of survivors and non-survivors. All results are expressed as median (range) for continuous variables and as frequency (percentage) for categorical variables. All tests were two-tailed, with a p value < 0.05 considered significant. The SPSS version 22.0 package (SPSS Inc., Chicago, IL, USA) was used to perform the statistical analysis.
Results
In the study period, 493 patients were admitted to the three ICUs, and 431 patients met the study's inclusion criteria. Blood CMV reactivation was observed in 88 patients (20.4%), with a median onset of 17 days (IQR 5-26) after ICU admission. Interestingly, blood CMV reactivation was already present at ICU admission in 19 of the 88 patients (21.6%). Thirty of the 88 patients (34.1%) received ganciclovir because of clinical signs of CMV-related pneumonia and detection of CMV-DNA in the broncho-alveolar lavage.
Adjusted analysis by Cox regression model showed that the factors related to the risk of CMV reactivation within 60 days after ICU admission were SAPS II and platelet count at ICU admission, and the need for invasive mechanical ventilation and the occurrence of secondary bacterial infections during the ICU stay (Fig. 1). Interestingly, CMV reactivation during the ICU stay occurred later (median 9, IQR 1-15 days) than the bacterial infection in 43 patients (64.1%), at the same time (± 48 h) in 7 patients (10.4%), and earlier (median 11, IQR 9-15 days) in 17 patients (25.4%) (Fig. 1S1).
Unadjusted analysis indicated that CMV reactivation, together with many other variables, was related to an increased mortality risk at day 60. However, the adjusted analysis did not confirm the relationship between CMV reactivation and mortality at day 60 (Table 3). Similarly, the sensitivity analysis performed only in patients with CMV-related pneumonia treated with ganciclovir did not show any independent relationship between CMV pneumonia and mortality at day 60 (HR 1.248; 95% CI 0.732-2.129; p = 0.415). The secondary analysis on the 168 patients with and without CMV reactivation matched (1:1) for the individual propensity to develop CMV blood reactivation also indicated no association between CMV blood reactivation and mortality at day 60 (HR 1.105; 95% CI 0.738-1.640; p = 0.638) (Tables 1S1 and 2S1). The 50 patients (56.8%) with a maximal CMV blood viral load > 500 IU/ml showed characteristics, treatments (except ganciclovir therapy) and mortality similar to those of the 38 patients with a maximal viral load < 500 IU/ml (Table 3S1).
Discussion
This large cohort observational study showed that CMV blood reactivation occurs in about 20% of COVID-19 patients admitted to ICU for respiratory failure, and 30% of these patients received anti-CMV treatment for suspected CMV pneumonia. High severity scores at ICU admission, the requirement of invasive mechanical ventilation and the development of secondary bacterial infections during ICU stay increase the risk of CMV blood reactivation. Furthermore, COVID-19 patients with CMV blood reactivation showed increased mortality compared to patients without reactivation, but the CMV blood reactivation and the occurrence of CMV-related pneumonia did not seem to increase the risk of mortality at day 60 independently. To our knowledge, this is the most extensive study published so far on the occurrence, risk factors and impact of CMV reactivation in ICU patients with respiratory failure caused by SARS-CoV-2.
The data provided should be considered of high quality because they originated from a prospective clinical protocol used from the beginning of the COVID-19 surge. Consistent with our cohort, previous small studies reported CMV reactivation in about a quarter of critically ill COVID-19 patients, with about 50-60% of these patients showing at least one Herpesviridae reactivation [6,8,29]. In COVID-19 patients with a high rate of pre-existing immune defects, Epstein-Barr virus (EBV) was the Herpesviridae with the most frequent reactivation [29]. Unfortunately, our protocol did not include surveillance of EBV reactivation, and therefore only a few patients were screened for it. Similarly, we are not able to provide robust data on HSV-1 reactivation because, very early during the first wave in 2020, we introduced acyclovir prophylaxis in all ICU patients after two cases of fatal liver failure related to HSV-1 and the high incidence of HSV-1 reactivation observed (published elsewhere) [9,30]. Consequently, we withdrew the protocol for systematic surveillance of HSV-1, which was evaluated only in patients with a high clinical suspicion of infection.
In immunocompetent non-COVID-19 critically ill patients, numerous risk factors have been associated with the risk of CMV reactivation, particularly sepsis and mechanical ventilation. Although in our cohort many demographic characteristics, admission parameters and treatments were related to CMV reactivation in the crude analysis, as in non-COVID-19 patients the adjusted analysis indicated that only invasive mechanical ventilation and the occurrence of new bacterial infection, which in critically ill patients frequently causes sepsis, increased the risk of CMV reactivation during the ICU stay. The association observed between low platelet count and CMV reactivation may be explained by the pathobiology of SARS-CoV-2 infection, which comprises persistent viral replication/viremia, uncontrolled inflammation, immune system impairment, and progressive involvement of the endothelium with severe disturbances of coagulation processes leading to multiple thrombotic events [31]. Therefore, as in bacterial sepsis, the reduction in platelets indicates the degree of impairment of hemostasis and of the immune-inflammatory response [32].
Interestingly, the incidence of CMV reactivation in COVID-19 appears to be lower than that reported in non-COVID-19 immunocompetent critically ill patients with sepsis, in whom it ranges between 30 and 60% depending on the time of screening and the specimen and methods used [11,33]. Several factors may theoretically explain this difference: the severity scores of COVID-19 patients and the impact of SARS-CoV-2 infection on the immune response are usually less severe in the first 15 days than those of patients admitted to the ICU with sepsis from bacterial infections [34]. These aspects, combined with younger age, fewer pre-existing comorbidities and a shorter duration of ICU stay, may explain the reduced occurrence of CMV reactivation in COVID-19 patients compared to the septic ICU population [13].
Therapy with steroids has been suggested as a potential risk factor for CMV reactivation in ICU patients, with a low grade of certainty [35]. In our cohort, steroid therapy was administered more frequently in patients with CMV reactivation, but the adjusted analysis did not confirm this association. In contrast, a relationship between CMV reactivation and the occurrence of new bacterial infections, mostly hospital-acquired/ventilator-associated pneumonia, emerged in our patients. The association between CMV and bacterial infections, and their causal relationship, has long been debated in non-COVID-19 ICU patients [13,33]. On one side, it is well known that sepsis profoundly deranges the immune mechanisms controlling viral reactivation, with increased levels of IL-10, lymphopenia and reduced activity of T cells, natural killer cells and Th1 T cells [36][37][38]. On the other side, CMV reactivation may further induce immune suppression through complex mechanisms involving TNF-alpha, interleukin-1-beta and the cell-mediated response [5], with a consequent augmented risk of secondary infections. Similar to previous reports in COVID-19 and non-COVID-19 patients [8,13], CMV reactivation occurred a median of 2 weeks after ICU admission and, notably, preceded the secondary bacterial infection in only a quarter of the patients. Therefore, we believe it is reasonable to suppose that in COVID-19 patients the development of secondary bacterial infections increases the risk of CMV reactivation rather than the opposite. CMV reactivation in immunocompetent ICU patients has also been indicated as a risk factor for invasive pulmonary Aspergillosis [39]. In fact, our patients with CMV reactivation also showed an increased (+20%) incidence of invasive pulmonary Aspergillosis compared to patients without CMV reactivation. Nevertheless, the adjusted analysis did not show a significant association and, in addition, probable invasive pulmonary Aspergillosis was diagnosed at least 1 week before CMV reactivation in most of our patients.
The negative impact of CMV reactivation on the outcome of immunocompetent critically ill patients has been reported in several observational studies over at least 30 years. A recent meta-analysis reported a 2.5-fold increase in ICU mortality (10 studies, n = 970 patients), a prolonged duration of mechanical ventilation (7 studies, n = 683 patients; mean difference 6.6 days, 95% CI 3.1-10.1) and an increased length of ICU stay (9 studies, n = 973 patients; mean difference 8.2 days, 95% CI 6.1-10.2) associated with CMV reactivation [33]. Nevertheless, the true effect of CMV in critically ill patients is still the object of considerable debate, without a definitive answer. As described in the introduction, several animal and in-vivo studies have described putative mechanisms for the direct and indirect pathogenicity of CMV in the ICU population, but numerous recent interventional trials failed to demonstrate any survival improvement with anti-CMV-specific therapeutic strategies. A specific potential role of CMV reactivation in worsening COVID-19 disease has been theorized, due to its capacity to reinforce and perpetuate the hyperinflammatory response and to induce immune suppression with persisting SARS-CoV-2 viremia and secondary infections [17]. Our COVID-19 cohort showed increased mortality at ICU and hospital discharge in patients with CMV reactivation, but the adjusted analysis did not confirm this association. Similar results were also observed in the subgroup of patients treated with ganciclovir because of suspected CMV-related pneumonia. Therefore, as hypothesized in the critically ill non-COVID-19 population, CMV reactivation might be considered just a marker of disease severity rather than a factor capable of modifying outcomes in COVID-19 ICU patients. The lack of specific serial measurements related to CMV pathogenicity (e.g., IL-1beta, IL-6, TNF-alpha) and the difficulties in diagnosing CMV pneumonia in patients with severe COVID-19 interstitial pneumonia limit any further consideration of the potential effects of CMV reactivation in these patients.
Beyond the limitations reported above, our study has other limitations. First, the study included patients admitted to 3 ICUs of the same hospital, with potential limitations in generalization to other settings. Second, as in many other observational studies [33], we included all ICU-admitted patients and not only the CMV-seropositive population, because seropositivity was not systematically evaluated. Third, the clinical protocol did not include systematic analysis of respiratory samples for CMV reactivation, which was reserved only for patients with a high suspicion of CMV pneumonia. This may underestimate the actual number of CMV reactivations. However, owing to the uncertain significance of CMV detection in the respiratory tract, CMV viremia is commonly used in high-quality interventional trials [18]. Last, our data did not indicate an association between CMV reactivation and the use of immunosuppressive therapies. However, the large number of patients treated with steroids (91%) and tocilizumab (84%) may have limited the sensitivity of our analysis.
Conclusions
In COVID-19 critically ill patients, CMV blood reactivation is frequent and depends on the severity of illness and the occurrence of secondary bacterial infections, but not on steroids or cytokine-blocking agents. Patients with CMV reactivation showed a prolonged hospital stay and higher mortality than patients without reactivation. Nevertheless, the lack of an independent association between CMV reactivation and mortality leaves open the question of its role and of the appropriate strategy for its monitoring and management.
"year": 2022,
"sha1": "614fac462f863763e0e27e130e79003944e72b70",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00134-022-06716-y.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f67281995e093030d83224b91bb601fa79d37bd",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Adversarial Speaker Distillation for Countermeasure Model on Automatic Speaker Verification
The countermeasure (CM) model is developed to protect automatic speaker verification (ASV) systems from spoofing attacks and to prevent the resulting leakage of personal information. For reasons of practicality and security, the CM model is usually deployed on edge devices, which have more limited computing resources and storage space than cloud-based systems, placing a hard limit on model size. To better trade off CM model size against performance, we propose an adversarial speaker distillation method, an improved knowledge distillation method combined with generalized end-to-end (GE2E) pre-training and adversarial fine-tuning. In the evaluation phase of the ASVspoof 2021 Logical Access task, our proposed adversarial speaker distillation ResNetSE (ASD-ResNetSE) model reaches 0.2695 min t-DCF and 3.54% EER, using only 22.5% of the parameters and 19.4% of the multiply-and-accumulate operands of the ResNetSE model.
Introduction
Automatic speaker verification (ASV) is a method for determining whether a given utterance was spoken by a specific individual. It is one of the essential biometric identification technologies and is widely used in real-world applications, including smartphones, smart speakers, and digital wallets. Active research on various methods [1][2][3][4] has brought significant improvements in the accuracy and efficiency of ASV systems. The earliest proposed method [1] used the Gaussian mixture model to extract acoustic features and then computed a score based on the likelihood ratio. End-to-end ASV models such as [4,5] have been proposed to map utterances to verification scores directly; they improve verification accuracy while making the ASV model compact and efficient.
A variety of defense methods have been proposed to improve the robustness of ASV and anti-spoofing models against adversarial attacks [23][24][25][26][27][28]. The biggest commonality between generated spoofing audio and adversarial attack audio is that both influence the decisions of the countermeasure model in imperceptible ways.
Some ASV systems [29,30] run on edge devices to avoid network transmission failures, and the CM model is often used in conjunction with the ASV model. On edge devices, models need to be lighter to account for limited computing power and storage space. Knowledge distillation (KD) [31] is a classical method for reducing model size, in which knowledge is transferred from a larger teacher model to a more lightweight student model. However, KD often degrades performance while reducing model size. To prevent performance degradation after distillation, one straightforward approach is to first train a powerful teacher model.
To enable the CM system to sense the gap between audio produced by TTS and VC technology and real audio, we separate the embeddings of different spoofing conditions and draw together those of the same spoofing condition. We also utilize adversarial examples to improve the ability of the CM model to perceive subtle differences between modified and bona fide audio, inspired by the observation that training with adversarial examples makes it easier to perceive subtle perturbations in the ASV task [27]. In this paper, we make the following main contributions: 1) To the best of our knowledge, this is the first work to explore a lightweight ASV spoofing CM model.
2) We proposed an adversarial speaker distillation method, which combines generalized end-to-end (GE2E) [32] pre-training and adversarial fine-tuning for the teacher model, and uses KD to obtain the student model.
3) Experiments showed that our proposed training strategies effectively improve student model performance while maintaining a balance between performance and resource consumption.
Methods
In this paper, we designed an adversarial speaker distillation training strategy and used ResNetSE [33] as the backbone model.
Each training utterance carries two kinds of labels: the speaker and the spoofing condition. We will use them in different steps, respectively. The overall process is shown in Figure 1, and the details will be described in the following subsections.
Model structure
Excellent results have been achieved by ResNet and its improved version ResNetSE in image processing and ASV [32][33][34]. Figure 2 shows the structure of ResNetSE, where self-attentive pooling is used as the pooling layer to enhance model flexibility in accepting audio streams of various lengths: the pooling reduces input signals of any length to the same dimension. The output target of ResNetSE is an 8-element vector indicating whether the input audio is bona fide, produced by one of the TTS spoofing methods, produced by one of the VC spoofing methods, or from an adversarial speaker. Note that only the adversarial fine-tuning step has the adversarial speaker class. The difference between GE2E-ResNetSE and ASD-ResNetSE is the number of channels: (32, 64, 128, 256) for the former and (16, 32, 64, 128) for the latter.
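The pooling step can be sketched as follows; this is a generic self-attentive pooling layer in PyTorch (layer sizes are illustrative, not the paper's exact configuration).

```python
import torch
import torch.nn as nn

class SelfAttentivePooling(nn.Module):
    """Collapse a variable-length frame sequence into one fixed-size vector.

    A minimal sketch of the standard formulation: attention weights from a
    small MLP over time, followed by a weighted sum of the frames.
    """
    def __init__(self, feat_dim: int, attn_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(feat_dim, attn_dim)
        self.query = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim); time may differ between utterances
        scores = self.query(torch.tanh(self.proj(x)))  # (B, T, 1)
        weights = torch.softmax(scores, dim=1)         # attention over time
        return (weights * x).sum(dim=1)                # (B, feat_dim)

# Two inputs of different lengths map to the same embedding dimension.
pool = SelfAttentivePooling(feat_dim=256)
print(pool(torch.randn(1, 50, 256)).shape)   # torch.Size([1, 256])
print(pool(torch.randn(1, 200, 256)).shape)  # torch.Size([1, 256])
```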
Teacher training
The training of the teacher model was composed of two main steps. The first step, GE2E pre-training with spoofing condition classes, ensures that the model has sufficient discriminating power between different spoofing conditions. In this step, the GE2E loss was computed on the output of the pooling layer of ResNetSE (Figure 2(a)). In the second step, we fine-tuned the whole model with the ASVspoof 2021-provided data and injected adversarial data generated by AEG. This process makes the model focus on distinguishing fake audio streams from bona fide ones.
Generalized End-to-End Pre-training
GE2E loss [32] was proposed for speaker verification, and a variant of this approach has been used to detect replayed spoofing attacks (the PA subtask of ASVspoof 2021) [35]. This paper uses a GE2E loss calculated according to the spoofing condition classes in the LA subset to obtain the initial model. Firstly, each batch included M utterances from each of the N different conditions. The embeddings $e_{nm}$ of the utterances $x_{nm}$ in the batch formed the centroid $c_n$ of each condition (excluding the query itself when the query belongs to condition n), where 1 ≤ n ≤ N. The formula of $c_n$ is defined as:

$$c_n = \frac{1}{M}\sum_{m=1}^{M} e_{nm}, \qquad c_n^{(-m)} = \frac{1}{M-1}\sum_{m' \neq m} e_{nm'} \quad (1)$$

Each utterance embedding was expected to be close to its corresponding spoofing condition centroid but far from the centroids of the other spoofing conditions. Thus, a similarity matrix $S_{nm,k}$ was defined to describe the scaled cosine similarity of utterances with the centroids $c_k$ (using the leave-one-out centroid $c_n^{(-m)}$ when k = n), where w and b are learnable parameters in the expression:

$$S_{nm,k} = w \cdot \cos\!\left(e_{nm}, c_k\right) + b \quad (2)$$

Softmax was applied to the similarity matrix over every category (from 1 to N) when calculating the GE2E loss. The overall loss function is defined in Eq. (3), which means that utterance embeddings of the same spoofing condition should be close to each other and far from those of other spoofing conditions:

$$L_{GE2E} = \sum_{n,m}\left[-S_{nm,n} + \log\sum_{k=1}^{N}\exp\!\left(S_{nm,k}\right)\right] \quad (3)$$
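A minimal PyTorch sketch of Eqs. (1)-(3), assuming embeddings arranged as an (N, M, D) tensor with M ≥ 2 and learnable scalars w and b passed in by the caller; this follows the standard GE2E formulation rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def ge2e_loss(emb: torch.Tensor, w: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Softmax-variant GE2E loss over spoofing-condition classes.

    emb: (N, M, D) with N conditions and M >= 2 utterances per condition.
    """
    N, M, _ = emb.shape
    emb = F.normalize(emb, dim=-1)
    centroids = F.normalize(emb.mean(dim=1), dim=-1)          # (N, D), Eq. (1)
    # Leave-one-out centroid of the query's own class.
    loo = (emb.sum(dim=1, keepdim=True) - emb) / (M - 1)      # (N, M, D)
    loo = F.normalize(loo, dim=-1)
    # Cosine similarity of every utterance with every centroid.
    sim = emb @ centroids.t()                                 # (N, M, N)
    own = (emb * loo).sum(dim=-1)                             # (N, M)
    idx = torch.arange(N)
    sim[idx, :, idx] = own                                    # replace k = n entries
    sim = torch.clamp(w, min=1e-6) * sim + b                  # Eq. (2), w kept positive
    # Eq. (3): cross-entropy of each utterance against its own condition.
    logits = sim.reshape(N * M, N)
    target = idx.repeat_interleave(M)
    return F.cross_entropy(logits, target, reduction="sum")
```

In practice, w and b would be learnable scalars, e.g. nn.Parameter(torch.tensor(10.0)) and nn.Parameter(torch.tensor(-5.0)), following the common GE2E initialization.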
Adversarial fine-tuning
After the teacher model was initialized by GE2E pre-training, it was fine-tuned using NLL loss with the spoofing condition label. The spoofing condition label comprises 8 classes: 1 bona fide, 6 spoofing methods, and 1 adversarial speaker. The additional adversarial speaker class is generated by the Adversarial Example Generation (AEG) algorithm described below. AEG is a process that deliberately adds a tiny perturbation to an original sample to generate adversarial audio. This work adopted the basic iterative method (BIM) [36] for AEG. The inputs of the AEG algorithm were two randomly selected bona fide audio signals W1 and W2 from the same speaker, and the output was the newly generated sample. ExtractFeature(M, W) means using model M to extract features for W. The score s indicates the similarity between the forged and the original audio; only a new sample with s smaller than the threshold is used as an adversarial sample, in which case the algorithm returns the perturbed sample X2, and NULL otherwise (Algorithm 1). Based on this, we implemented two AEG methods, static AEG and active AEG. Static AEG used the GE2E pre-trained model as the input for Algorithm 1 to generate attack data. All the generated attack data were relabeled as adversarial examples and injected into the original data set. Since static AEG creates all adversarial samples before fine-tuning, it does not increase the overhead of fine-tuning. Active AEG differs from static AEG in that it executes BIM before each epoch, which means that the same input audio produces different adversarial samples in different epochs. Although this additional work may increase training time, active AEG is expected to generate more architecture-specific data.
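A minimal sketch of the BIM loop described in Algorithm 1. The parameter values (alpha = 3.0, iters = 5, threshold = 0.4) follow the settings stated later in the paper, but the feature-extractor interface (model.extract_feature) is a hypothetical stand-in for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def aeg_bim(model, w1, w2, alpha=3.0, iters=5, threshold=0.4):
    """BIM-style adversarial example generation, sketched after Algorithm 1.

    w1, w2: waveforms (1, T) from the same speaker. Perturb w2 so that its
    embedding drifts away from w1's; accept the result only if the final
    similarity score falls below `threshold`, otherwise return None.
    """
    f1 = model.extract_feature(w1).detach()
    x2 = w2.clone()
    for _ in range(iters):
        x2 = x2.detach().requires_grad_(True)
        score = F.cosine_similarity(model.extract_feature(x2), f1, dim=-1).mean()
        score.backward()
        # Signed gradient step *against* the similarity (step size alpha).
        x2 = x2 - alpha * x2.grad.sign()
    with torch.no_grad():
        final = F.cosine_similarity(model.extract_feature(x2), f1, dim=-1).mean()
    return x2.detach() if final.item() < threshold else None
```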
Student training
This stage used KD loss [31] to transfer the capabilities of GE2E-ResNetSE into ASD-ResNetSE. KD loss mainly consists of Kullback-Leibler (KL) divergence and NLL loss: the KL term lets the student learn soft targets from the teacher's output, and the NLL term enables the student to learn the hard spoofing condition label. The overall loss function is:

$$L_{KD} = \gamma\, L_{NLL}(O_s, y) + (1-\gamma)\, T^2\, KL\!\left(\sigma(O_t/T)\,\|\,\sigma(O_s/T)\right)$$

where $O_s$ is the output of the student model, $O_t$ is the output of the teacher model, $L_{NLL}$ is the NLL loss between the student's prediction and the ground-truth class y, σ denotes the softmax function, T is the parameter controlling the distillation temperature, and γ is the weight balancing the contributions from the teacher and the ground-truth class. After completing this stage, ASD-ResNetSE was used to measure the final model performance.

Experimental setup

The minimum tandem detection cost function (min t-DCF) [37] and the equal error rate (EER) were used to evaluate the effectiveness of the countermeasure (CM) models.

Waveform augmentation. The number of utterances in the evaluation data set was much larger than in the training or development sets. Therefore, we used waveform augmentation to expand the training data and increase system robustness. First, we randomly selected music, speech, or noise from MUSAN [38], trimmed or padded it to the same length as the target utterance, and added it to the audio file to generate new audio. Next, we randomly selected an impulse response from the RIR noise data set [39] and convolved it with the target audio to simulate reverberation in rooms of different sizes. The training data of all models in the experiment were augmented using the above on-the-fly methods.

Training details. We extracted a 40-dimensional log-Mel spectrogram with a 25 ms window size, a 10 ms hop size, and an FFT size of 512 as the input; all audio files had a sample rate of 22,050 Hz. After acoustic feature extraction, we applied instance normalization to the features. We set α = 3.0, iter = 5, and threshold = 0.4 in AEG augmentation; the α and iter values follow the empirical values of [27]. We applied waveform augmentation and used the Adam optimizer during end-to-end teacher and student model training. The initial learning rate was 0.0003 and was multiplied by 0.95 every two epochs. In the KD loss, γ = 0.5 and T = 5.
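A minimal PyTorch sketch of this loss, assuming raw logits from both models; the T² gradient-scale factor follows the standard Hinton-style KD formulation (an assumption here, since the paper does not spell it out).

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, target, gamma=0.5, T=5.0):
    """Knowledge-distillation loss: hard-label NLL plus temperature-softened
    KL divergence to the teacher, weighted by gamma as described above."""
    hard = F.nll_loss(F.log_softmax(student_logits, dim=-1), target)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    )
    return gamma * hard + (1.0 - gamma) * (T ** 2) * soft

# Example with the paper's settings (gamma = 0.5, T = 5) and 8 classes.
s = torch.randn(4, 8)
t = torch.randn(4, 8)
y = torch.randint(0, 8, (4,))
print(kd_loss(s, t, y).item())
```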
Ablation Study
GE2E. We compared the performance of the ResNetSE and GE2E-ResNetSE models in Table 1. The results show that GE2E pre-training reduces the min t-DCF from 0.3143 to 0.3003 and the EER from 5.78% to 5.10%. GE2E pre-training enables the model to distinguish information of the same spoofing condition from that of other spoofing conditions, which helps classify the spoofing classes by providing additional spoofing condition information. Rather than classifying all the data at once, GE2E allows spoof/non-spoof classification to build on the discrimination of particular spoofing conditions, which was expected to give better results.
Table 1: Influence of GE2E pre-training.

Model          | Loss            | min t-DCF | EER (%)
ResNetSE       | L_NLL           | 0.3143    | 5.78
GE2E-ResNetSE  | L_GE2E + L_NLL  | 0.3003    | 5.10

AEG. The results (B), (C) and (D) in the upper part of Table 2 show that both active and static AEG improved the performance of GE2E-ResNetSE. In particular, the active method effectively reduced the min t-DCF from 0.3003 to 0.2869 and the EER from 5.10% to 4.59% through on-the-fly augmentation. This shows that adding model-weakness data to the dataset during training can help the model perceive the subtle differences inside spoofing audio streams. From the perspective of data preparation, the active method is easier to implement, whereas the static method generates the whole dataset before training.

Knowledge Distillation. The lower half of Table 2 gives the results of the ASD-ResNetSE models obtained by distilling the corresponding teachers through the KD process. The result of (A′) is our knowledge distillation baseline; its teacher (A) used neither GE2E pre-training nor adversarial fine-tuning. The teacher (B) used only GE2E pre-training. The teachers (C) and (D) of (C′) and (D′) are based on GE2E-ResNetSE and additionally use static AEG and active AEG, respectively, to inject the adversarial speaker class during fine-tuning. Note that we do not use AEG during the distillation process.
In Table 2, (A′) gains 4.9% min t-DCF and 16.4% EER improvement after distillation. (C′) obtains 8.0% min t-DCF and 29.9% EER improvement when combined with GE2E pre-training and static AEG fine-tuning, outperforming the baseline (A′). However, (D′) degrades by -1.1% in min t-DCF and -3.7% in EER. In most distillation results, the performance of the student model did not decrease but increased. Previous work [40] found that subnetworks exist within the original network that can reach or exceed the original model's performance, and Wang [41] tried to find such a subnetwork through KD. The above results show that static AEG is more helpful than active AEG for finding such a superior subnetwork in the adversarial speaker distillation process.
Overall Performance
To further analyze overall performance, we selected our best ASD-ResNetSE model (C′) and ResNetSE (A), together with other existing countermeasure models: RawNet2 [10], SE-ResNet18 [11], LFCC-LCNN [12], ECAPA-TDNN [13], ASSERT34 [14] and W2V2-AASIST [15]. The best ASD-ResNetSE and GE2E-ResNetSE models were taken from Table 2. For a fair comparison, we only consider single models instead of fusion models. Figure 3 visualizes the relation between the model sizes and their min t-DCF and EER performance. Because SE-ResNet18 does not report EER results, we do not include it in the EER comparison.
We mainly compare three aspects in Figure 3. Firstly, ASD-ResNetSE not only outperforms most countermeasure models but also has only 22.5% of the model size of ResNetSE. Secondly, the min t-DCF of ASD-ResNetSE is more than 50% lower than that of ASSERT34, which has a similar model size, and its EER is only about 20% of the EER of ASSERT34. These results demonstrate that ASD-ResNetSE obtains a better trade-off between model size and performance. Thirdly, W2V2-AASIST achieves state-of-the-art min t-DCF and EER; it consists of a self-supervised learning frontend, wav2vec 2.0 [42,43], and a countermeasure backend, AASIST [44]. However, its parameters occupy over 1200 megabytes, so it is not practical for use on edge devices.
Model Size and Operands
We compared the sizes and multiply-and-accumulate operands (MACs) of several methods, as shown in Table 3. ASD-ResNetSE, with only 0.90G MACs, had a significantly smaller model size than ResNetSE. As with model size, the MACs of ASD-ResNetSE were similar to those of ASSERT34. Combining the results in Table 3 and Figure 3, ASD-ResNetSE stands out for its capacity, efficiency and effectiveness.
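Comparisons like those in Table 3 can be reproduced with a short script. Below is a minimal sketch: the parameter count is exact, while MACs require a profiler that traces the forward pass (the third-party thop package is one common choice). The model here is an illustrative stand-in, not the paper's ResNetSE.

```python
import torch
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Exact trainable-parameter count (model size up to dtype width)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Illustrative stand-in for a CM backbone.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 8),
)
print(f"params: {count_parameters(model):,}")

# MACs need a profiler that traces the forward pass, e.g. the thop package:
#   from thop import profile
#   macs, params = profile(model, inputs=(torch.randn(1, 1, 40, 300),))
```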
Conclusion
This paper is the first to explore a lightweight CM model for ASV and proposes an adversarial speaker distillation method, an improved version of knowledge distillation. In the evaluation phase of the ASVspoof 2021 Logical Access task, our ASD-ResNetSE reaches 0.2695 min t-DCF and 3.54% EER while using only 22.5% of the parameters and 19.4% of the MACs of the original ResNetSE model. The experimental results demonstrate that ASD-ResNetSE stands out for its capacity, efficiency and effectiveness.
"year": 2022,
"sha1": "502b7fcc53f5dc983aa722efdba5acb4ba5623df",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1666bff4938ec26996c1f30f5f12d7f5a286668a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
Long-term exhaustion of the inbreeding load in Drosophila melanogaster
Inbreeding depression, the decline in fitness of inbred individuals, is a ubiquitous phenomenon of great relevance in evolutionary biology and in the fields of animal and plant breeding and conservation. Inbreeding depression is due to the expression of recessive deleterious alleles that are concealed in heterozygous state in noninbred individuals, the so-called inbreeding load. Genetic purging reduces inbreeding depression by removing these alleles when expressed in homozygosis due to inbreeding. It is generally thought that fast inbreeding (such as that generated by full-sib mating lines) removes only highly deleterious recessive alleles, while slow inbreeding can also remove mildly deleterious ones. However, a question remains regarding which proportion of the inbreeding load can be removed by purging under slow inbreeding in moderately large populations. We report results of two long-term slow inbreeding Drosophila experiments (125–234 generations), each using a large population and a number of derived lines with effective sizes about 1000 and 50, respectively. The inbreeding load was virtually exhausted after more than one hundred generations in large populations and between a few tens and over one hundred generations in the lines. This result is not expected from genetic drift alone, and is in agreement with the theoretical purging predictions. Computer simulations suggest that these results are consistent with a model of relatively few deleterious mutations of large homozygous effects and partially recessive gene action.
INTRODUCTION
Since Darwin's (1877) early experiments on plants, inbreeding depression, the reduction of fitness observed in inbred individuals compared to noninbred ones, has received increasing attention from researchers in multiple areas, from evolutionary, population, or conservation genetics (Charlesworth and Willis 2009; Charlesworth and Charlesworth 2010; Frankham et al. 2010) to newly arising areas such as multitrophic interactions or community ecology (Kariyat and Stephenson 2019). Inbreeding depression and the reduction of population size are closely related and can lead the population to the "extinction vortex" (Gilpin and Soule 1986), which can ultimately lead to the extinction of populations and metapopulations (Frankham 2005; Wright et al. 2008; Robert 2011; Nonaka et al. 2019). The source of inbreeding depression is the inbreeding load, i.e., the component of the deleterious genetic load that is concealed in heterozygous state in outbred populations. This load is mainly ascribed to deleterious mutations with different degrees of recessivity, the contribution of overdominant fitness effects being most likely small according to empirical evidence (Hedrick 2012; Thurman and Barrett 2016). Reduced population size causes increased inbreeding and, therefore, exposes recessive deleterious components as homozygotes, leading to a reduction in fitness (Roff 2002; Charlesworth and Willis 2009; Bozzuto et al. 2019), with a consequent further reduction in population size and an increase in the probability of extinction (Tanaka 1997, 2000; O'Grady et al. 2004). Finally, epistasis can also contribute to inbreeding depression by enhancing the effects of homozygosity at different loci, and can be detected by fast inbreeding experiments (see, e.g., Domínguez-García et al. 2019).
Despite the frequent evidence of inbreeding depression in nature, examples can be found in which a reduced population size and high levels of inbreeding do not translate into significant inbreeding depression (e.g., Duarte et al. 2003; Laws and Jamieson 2011; Mullarkey et al. 2013; Lobo et al. 2015; Peer and Taborsky 2005; Runemark et al. 2013; Tien et al. 2015; Caballero and Criscione 2019). This phenomenon is often explained by the purging of the inbreeding load through the action of natural selection under inbreeding (see, e.g., Hedrick and García-Dorado 2016). It is well known, both theoretically and empirically, that genetic purging can be particularly effective in eliminating lethal or severely deleterious mutations, but also mutations of moderate effect (Hedrick 1994; Wang et al. 1999; Swindell and Bouzat 2006a; Ávila et al. 2010; Pekkala et al. 2012; García-Dorado 2012; Bersabé and García-Dorado 2013; López-Cortegano et al. 2016). The efficiency of purging in eliminating deleterious mutations depends on the inbreeding rate, which is inversely proportional to the effective population size (N_e), and also depends on the breeding system (Glémin 2003). Purging also depends on the magnitude of the deleterious effects, being efficient against alleles with N_e d > 1, where d is the purging coefficient, i.e., a measure of the magnitude of the deleterious recessive component of mutations masked in heterozygotes (García-Dorado 2012, 2015). The value of d is also the dominance effect, defined as the deviation of the heterozygote from the average of the two homozygotes (Caballero 2020, p. 44). For values of N_e d below one, genetic drift overwhelms selection. According to theoretical predictions, fast inbreeding occurring in populations of very small size (e.g., full-sib lines) leads to only (or mostly) the purging of severely deleterious or lethal mutations (Hedrick 1994; Frankham et al. 2001), while slow inbreeding in large panmictic populations (say N_e > 20 individuals) offers more opportunities to purge weakly deleterious alleles before the population reaches a high level of inbreeding, although its consequences appear later in time (Wang et al. 1999; García-Dorado 2012, 2015). Given the environment-dependent expression of the inbreeding load (Kristensen et al. 2008; Cheptou and Donohue 2011; Reed et al. 2012; Pemberton et al. 2017), purging could also be affected by environmental factors, being more effective under stable (Bijlsma et al. 1999) and competitive conditions.
Given the nature of purging and the multiple factors influencing its effectiveness, its detection in experimental studies is often a difficult task, especially if it is obscured by other processes. For example, adaptation in the wild or the laboratory environment may emulate the effects of purging (Crnokrak and Barrett 2002;Gilligan and Frankham 2003), and relaxation of selection in conservation programs may reduce its effects (Ballou 1997;Boakes et al. 2007;Caballero et al. 2017). The experimental studies addressing genetic purging include self-fertilization in plants (e.g. Willis 1999; Baldwin and Schoen 2019; Barrett and Charlesworth 1991) or animals (Chelo et al. 2019), forced matings between closely related individuals (such as sib matings) (e.g., Frankham et al. 2001;Reed et al. 2003;Swindell and Bouzat 2006b;Kristensen et al. 2008;Fox et al. 2008;Ávila et al. 2010;Noël et al. 2019) or random mating in small or moderate size populations (e.g., Latter et al. 1995;Reed et al. 2003;Meffert et al. 2006;Bouzat 2006a, 2006b;Larsen et al. 2011;Pekkala et al. 2012Pekkala et al. , 2014Bersabé and García-Dorado 2013). A major factor hampering the detection and description of the effects of purging is, however, the elapsed time period of inbreeding. While purging can take considerable time to become visible, especially for nonlethal alleles, most purging detectionoriented experiments are limited to a small number of generations, generally not more than 20-30 (with some exceptions, such as those of Latter et al. 1995;Reed et al. 2003;Ávila et al. 2010;Chelo et al. 2019;Noël et al. 2019). Thus, a major question arises as to what are the long-term consequences of inbreeding and purging.
In a previous, long-term analysis, López-Cortegano et al. (2016) conducted experiments with two populations of Drosophila melanogaster in order to detect purging and quantify its magnitude. Two large populations from Madrid and Vigo labs (with census sizes of N ≈ 2600 and 3000 individuals, respectively) were maintained in 32 and 30 bottles, respectively, with circular mixing for more than 100 generations. In addition, multiple lines of reduced, but moderately large size (N = 80 and 100, respectively), were derived from the large ones and maintained for about 40 generations. The analysis reported an estimate of the overall purging coefficient of about d ≈ 0.3 (or 0.2 for nonlethal mutations). It also showed that purging is rather efficient in reducing the inbreeding load. Thus, over the one-hundred generation period, an initial inbreeding load for pupae productivity of about 1.8 lethal equivalents in the large Vigo population was reduced down to 0.60, and that of the Madrid population (about 2) down to 0.85, a reduction much more drastic than expected from genetic drift alone.
Although these experiments proved the efficiency of long-term purging, a substantial amount of inbreeding load still remained by the end of the period considered. According to theory, even if all the ancestral inbreeding load was removed (due to random fixation and/or purging), new inbreeding load is continuously introduced through new mutation as the populations approach a new mutation-selection-drift balance (García-Dorado 2007). However, if the current population size is small, that new balance could harbor a very small inbreeding load. Thus, a question arises as to whether continuous purging can render a population of relatively large size with negligible inbreeding depression and inbreeding load, as found in some natural populations. To respond to this question, we continued the maintenance of the Madrid population and its derived lines, and we can now report inbreeding load estimates for a time span of 10 years. Our results show that the inbreeding load has been fully depleted in both the large populations and the derived lines. The evolution of the inbreeding load of the populations is compared with theoretical predictions assuming purging and genetic drift, or only the latter. We also explore the fit between the observed values and computer simulation results using a range of deleterious mutation models.

MATERIAL AND METHODS

Throughout the experiments, the populations were maintained in the same conditions, in a chamber at constant temperature (25 °C) and permanent lighting, except when handling flies. The base populations were maintained in 32 (Madrid) and 30 (Vigo) bottles with circular mixing, under which, for each generation (every 2 weeks), each bottle i was founded with ~40-50 flies from the offspring of bottle i plus 40-50 flies from the offspring of bottle i + 1. The derived lines were maintained synchronously with the base populations in individual bottles with 80 (Madrid lines) or 100 (Vigo lines) individuals (half of each sex) per generation. Each bottle (~5 cm in diameter) was filled with ~2 cm of agar-yeast-flour-sugar medium and propionic acid (5 ml per liter of medium) to prevent fungal contamination. Only one of the Madrid lines was lost during the experiment.
Evaluation of fitness and inbreeding load
We evaluated fitness by measuring pupae productivity (P), estimated as the average number of pupae produced 11 days after mating. This trait includes mating success, fecundity, and pupae survival, thus being a proxy for fitness under moderately competitive conditions. Pupae productivity and its inbreeding load (δ) were evaluated at generations 201 and 234 (Madrid base population) and, synchronously, at generations 120 and 153 of the derived lines. Here we add, to the previously published data on the decline of the productivity mean and inbreeding load (López-Cortegano et al. 2016), results obtained at generation 125 (Vigo base population) and generations 25 and 39 (derived lines). All evaluations were conducted in the same way. At a given generation t, seven virgin females and seven males were sampled from each bottle and placed in pairs in individual mating vials. For the base population, males were first randomized so that they mated with females from random bottles, while for the lines, the pairs were formed from flies born in the same bottle (i.e., from the same line). The offspring were used to generate two schemes: an inbred one, in which full-sib couples were mated in individual vials, and a noninbred scheme, in which a male from vial i was mated with a female from vial i + 1. At the next generation, we repeated both schemes, so that in the inbred scheme, parents had an expected inbreeding coefficient of F = 0.25 and their offspring F = 0.375 relative to generation t of their population or line. In the noninbred scheme, a male from vial i was mated with a female from vial i + 2 to avoid new inbreeding, so that the corresponding expected inbreeding coefficient of both parents and their progeny was F = 0.
Productivity was estimated in the noninbred ($P_O$) and inbred ($P_I$) schemes, and the inbreeding load was estimated as

$$\delta = \frac{\ln(P_O / P_I)}{F}$$

where F is the average inbreeding coefficient of the parental and progeny generations, as productivity is a trait that depends on both parental and progeny genotypes (Ávila et al. 2013). Bootstrap errors were obtained for each estimate of δ using the function "sample" in R. For each estimate, 1000 samples of the same size as the original productivity data (inbred and noninbred) were sampled with replacement. For each sample, δ was obtained as above, and the standard deviation of the 1000 measures was calculated.
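The authors computed these errors in R; an equivalent sketch in Python (with illustrative simulated productivity data, not the experimental counts) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def inbreeding_load(p_outbred, p_inbred, F):
    """delta = ln(P_O / P_I) / F, with P_O and P_I the mean productivities."""
    return np.log(np.mean(p_outbred) / np.mean(p_inbred)) / F

def bootstrap_se(p_outbred, p_inbred, F, n_boot=1000):
    """Resample each scheme with replacement; return the SD of delta."""
    reps = [
        inbreeding_load(
            rng.choice(p_outbred, size=len(p_outbred), replace=True),
            rng.choice(p_inbred, size=len(p_inbred), replace=True),
            F,
        )
        for _ in range(n_boot)
    ]
    return float(np.std(reps))

# F averages the parental (0.25) and progeny (0.375) inbreeding coefficients.
F = (0.25 + 0.375) / 2
p_o = rng.poisson(120, size=50).astype(float)   # illustrative data only
p_i = rng.poisson(100, size=50).astype(float)
print(inbreeding_load(p_o, p_i, F), "+/-", bootstrap_se(p_o, p_i, F))
```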
Prediction of the long-term evolution of fitness and inbreeding load
The Inbreeding-Purging (IP) model (García-Dorado 2012) describes the evolution of fitness and of the inbreeding load in populations undergoing inbreeding and, therefore, exposed to the action of genetic purging. This model establishes a purged inbreeding coefficient, g, that equals the classical Wright's inbreeding coefficient F corrected for the expected reduction in frequency of fully or partially recessive deleterious alleles due to purging. The value of g can be calculated for each generation as

$$g_t = \left[\frac{1}{2N_e} + \left(1 - \frac{1}{2N_e}\right) g_{t-1}\right]\left(1 - 2dF_{t-1}\right) \quad (1)$$

where $N_e$ refers to the effective population size, F to Wright's inbreeding coefficient ($F_t = \frac{1}{2N_e} + (1 - \frac{1}{2N_e}) F_{t-1}$), and d to the purging coefficient. For each locus, d equals the recessive component of the deleterious effect that remains hidden in heterozygosis but is expressed in homozygosis; for a single locus, d amounts to $s(\tfrac{1}{2} - h)$, where s represents the homozygous deleterious effect and h the dominance coefficient. As generations proceed, $g_t$ approaches a value smaller than one, corresponding to an asymptotic situation where all the ancestral inbreeding load has been lost due to the combined effect of genetic drift and purging. The larger d is, the smaller are the asymptotic g value and the role of drift, and the fewer deleterious alleles become fixed. For d = 0, $g_t$ reduces to $F_t$. For a multilocus model with variable effects, an effective purging coefficient is defined as the value of d that provides the best fit between the theoretical predictions and the observed temporal evolution of the inbreeding load or the average fitness (García-Dorado et al. 2016). For pupae productivity, this d parameter was estimated from the evolution of the inbreeding load in the data analyzed by López-Cortegano et al. (2016) using minimum square fitting, giving a global value of d = 0.3 (with 95% confidence interval 0.28-0.33). Assuming absence of purging (only genetic drift), the evolution of productivity at generation t expected from inbreeding depression can be predicted as

$$E(P_t) = P_0\, e^{-\delta F_t} \quad (2)$$

where E stands for expectation and δ is the rate of inbreeding depression that, in the absence of selection, should equal the initial inbreeding load. The corresponding evolution of the inbreeding load, ignoring the contribution of new mutation, can be predicted as

$$\delta_t = \delta\,(1 - F_t) \quad (3)$$

However, under the IP model, the expected evolution of productivity is predicted as

$$E(P_t) = P_0\, e^{-\delta g_t} \quad (4)$$

which accounts for both inbreeding and purging, while that of the inbreeding load is

$$\delta_t = \delta\,(1 - F_t)\,\frac{g_t}{F_t} \quad (5)$$

which accounts for both genetic drift and purging. Note that each of these two predictions is a function of d and $N_e$ (the main parameters determining purging and drift, respectively), which determine $F_t$ and $g_t$. Taking d = 0 in Eq. (1), $g_t$ reduces to $F_t$, so that Eqs. (4) and (5) reduce to (2) and (3), respectively. Therefore, to disentangle the inbreeding load removed by genetic purging from that removed by random fixation and loss of deleterious alleles (genetic drift), the predictions of Eqs. (3) and (5) should be compared. Assuming this IP model and making use of the estimates of $N_e$, d, and δ obtained by López-Cortegano et al. (2016), we predicted the expected evolution of productivity and inbreeding load over generations and compared them with the observed results. To compute these predictions, we calculated F for both the base populations and the lines from the corresponding estimated $N_e$ values, and subsequently obtained g by applying Eq. (1). From this point, $P_t$ and $\delta_t$ were predicted considering either a neutral model without purging (i.e., using d = 0 in Eq. (1), which gives $g_t = F_t$) or a model assuming both drift and purging (using the d estimates in Eq. (1)).
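A minimal numerical sketch of these recursions, using the parameter values quoted above (d = 0.3 and, for the lines, N_e ≈ 50) and an arbitrary baseline productivity P_0 = 1; the form of Eq. (5) follows the reductions stated in the text.

```python
import math

def ip_predictions(Ne, d, delta0, generations, P0=1.0):
    """Iterate Wright's F and the purged inbreeding coefficient g (Eq. 1),
    then return the predicted productivity (Eqs. 2/4) and inbreeding load
    (Eqs. 3/5) under drift only (d = 0 gives g = F) and drift + purging."""
    F = g = 0.0
    out = []
    for t in range(1, generations + 1):
        F_prev = F
        F = 1.0 / (2 * Ne) + (1.0 - 1.0 / (2 * Ne)) * F_prev
        g = (1.0 / (2 * Ne) + (1.0 - 1.0 / (2 * Ne)) * g) * (1.0 - 2.0 * d * F_prev)
        out.append(dict(
            t=t,
            P_neutral=P0 * math.exp(-delta0 * F),     # Eq. (2)
            load_neutral=delta0 * (1.0 - F),          # Eq. (3)
            P_purging=P0 * math.exp(-delta0 * g),     # Eq. (4)
            load_purging=delta0 * (1.0 - F) * g / F,  # Eq. (5)
        ))
    return out

# Lines: Ne ~ 50, purging coefficient d = 0.3, initial load ~ 2 lethal equivalents.
for row in ip_predictions(Ne=50, d=0.3, delta0=2.0, generations=150)[::50]:
    print(row)
```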
Computer simulations
Computer simulations were performed with an in-house C program in order to establish the range of mutational effects and dominance coefficients that best explain the evolution of fitness and inbreeding depression predicted by the model using our d estimate. Starting from a very large population ($N_e$ = N = 10,000) emulating a natural population at the mutation-selection equilibrium, a dioecious sample of 1376 individuals with a sex ratio of one was taken to simulate the Madrid population (values for Vigo in parentheses from here on: 1000 individuals) and maintained at constant size for 83 (86) discrete generations under the action of selection and drift. Polygamous matings were allowed, with the probability of being selected as a parent of each offspring determined by the individual fitness value (see below). At generation 83 (86), a line of reduced size, N = 43 (N = 52), was derived from the base population. This line was maintained synchronously with the base population under the same conditions for 250 generations.
To generate the natural population at the mutation-selection equilibrium, we established a total of 9000 diploid genomic positions. Each position was ascribed a homozygous selection coefficient s obtained from a gamma distribution with mean s̄ and shape parameter β = 0.2 (values of s larger than 1 were redefined as s = 1), and a dominance coefficient h obtained from a uniform distribution between 0 and $e^{-ks}$, k being a constant needed to obtain the desired average value h̄ (Caballero and Keightley 1994; see Caballero 2020, pp. 152-161). All mutations were assumed to have a deleterious effect on fitness, with the fitness of the wild-type homozygote, the heterozygote and the mutant homozygote being 1, 1 - sh, and 1 - s, respectively, and individual genotypic fitness values were obtained multiplicatively across loci. The deterministic expected equilibrium frequencies at the mutation-selection balance were obtained from $\hat{q} \approx u/(hs)$ and $\hat{p} = 1 - \hat{q}$ (Crow and Kimura 1970, p. 258), where $\hat{p}$ and $\hat{q}$ are the equilibrium frequencies of the wild-type and mutant alleles, respectively, and u is the mutation rate per locus and generation. Using these theoretical frequencies, allele copies were randomly distributed across individuals of the natural population. Individuals were then sampled from this natural population to found the base populations, which were subjected to the action of mutation, genetic drift, and selection. After the formation of the natural populations, the simulation process that followed was repeated 100 times and the results were averaged. As a result, we obtained the average fitness and the average inbreeding load, computed as the sum over loci of δ = s(1 - 2h)pq (Morton et al. 1956), per generation for both the base populations and the derived lines. We explored a range of mutational parameters encompassing the values observed empirically, either assuming models of many mutations of small average effect (s̄ = 0.01-0.03) or fewer mutations of large effect (s̄ = 0.1-0.3), and average dominance coefficients (h̄) ranging between 0.1 and 0.3. The mutation rate was adjusted so that the simulated populations had an initial inbreeding load close to that inferred in the experimental populations. This implied a haploid genomic mutation rate per generation of around U = 0.1 for the models of small-effect mutations and around U = 0.02 for the models of large-effect mutations. The mutational parameters assumed are within the range of those obtained empirically from mutation-accumulation experiments (Caballero 2020, pp. 152-157). Sampled s values larger than 1 were assigned a value s = 1, so that the mutational model generates a lethal class, thus producing a bimodal distribution of selection coefficients in the case of large-effect mutations (see Supplementary Material Fig. S1). We also considered mutational estimates obtained from the evolutionary analysis of genomic data (Kim et al. 2017), following the conditions of the simulations carried out by Kyriazis et al. (2020). This model considers that the average homozygous selection coefficient is s̄ = 0.0161, with effects obtained from a gamma distribution with shape parameter β = 0.186. The model assumes a constant dominance coefficient of h = 0.25 when the mutation homozygous effect is s < 0.02 and h = 0 otherwise. Thus, the model is similar to one of the above models of mutations with small effects, except for the dominance assumed.
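A minimal sketch of how such a mutation model could be sampled and a population seeded at the deterministic equilibrium; the locus count follows the text, while the constant k, the mutation rate u, and the capping of q̂ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mutation_model(n_loci=9000, s_mean=0.3, beta=0.2, u=2e-6):
    """Draw per-locus s ~ Gamma(mean s_mean, shape beta), capped at 1 (the
    lethal class), and h ~ Uniform(0, exp(-k*s)); k would be tuned
    numerically to hit the desired mean h (k = 13 is illustrative)."""
    s = np.minimum(rng.gamma(beta, s_mean / beta, n_loci), 1.0)
    k = 13.0
    h = rng.uniform(0.0, np.exp(-k * s))
    # Mutation-selection balance approximation, guarded against h*s ~ 0.
    q_hat = np.clip(u / np.maximum(h * s, 1e-8), 0.0, 0.5)
    return s, h, q_hat

def seed_population(n_ind, q_hat):
    """Diploid genotypes (0/1/2 mutant copies) drawn from equilibrium freqs."""
    return rng.binomial(2, q_hat, size=(n_ind, q_hat.size))

s, h, q = sample_mutation_model()
geno = seed_population(10_000, q)
# Inbreeding load summed over loci: delta = sum s(1-2h) p q (Morton et al. 1956).
p_obs = 1.0 - geno.mean(axis=0) / 2.0
print("initial load:", float(np.sum(s * (1 - 2 * h) * p_obs * (1 - p_obs))))
```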
As in the previous models, the mutation rate was adjusted to produce an initial inbreeding load close to the observed ones, giving U = 0.16, which is very close to the value assumed by Kyriazis et al. (2020) (U = 0.21). Simulations were also carried out assuming a neutral (only genetic drift) model in order to check the accuracy of neutral predictions. The fit between the simulation and observed results was quantified by the mean square difference between both values considering all generations of the base population and lines.
RESULTS
The effect of long-term purging on the inbreeding load

Tables 1 and 2 show the estimates of the inbreeding load (δ) over generations reported by López-Cortegano et al. (2016) or obtained in the present study. For the Madrid base population (Table 1), the inbreeding load dropped to about the same amount at generations 201 and 234, being nonsignificantly different from zero in the last generation (δ = 0.145 ± 0.098; p = 0.066). In the case of the Madrid derived lines, inbreeding depression was largely reduced after 120 generations and virtually exhausted after 153 generations (δ = 0.014 ± 0.083; p = 0.439). Note, however, that the measures at the two generations were not significantly different from each other (p = 0.124). For the Vigo base population (Table 2), the decline in the inbreeding load was rather linear until generation 111 (López-Cortegano et al. 2016). However, at the latest generation (generation 125), the inbreeding load showed a substantial drop down to a nonsignificant value (δ = 0.11 ± 0.07; p = 0.051). Regarding the Vigo lines, the inbreeding load was slightly reduced after 25 generations but was exhausted by generation 39 (δ = -0.05 ± 0.06; p = 0.755). Figure 1 shows the fit of the expected declines in inbreeding load under the IP model (continuous lines) and the neutral model (dotted lines) to the observations for both base populations and derived lines. The observed inbreeding load in the last generation evaluated was nonsignificantly different from the expectations under a purging model in the case of the Madrid base population (p = 0.392) and lines (p = 0.442) (Fig. 1A), but lower than the expectations in the case of the Vigo base population (p < 0.001) and lines (p = 0.011) (Fig. 1B). The inbreeding load expected under a neutral model (only genetic drift) was very much larger than that observed experimentally in all cases, except for generation 25 of the Vigo lines.
A comparison between the productivity means across generations must be made with caution, as the cross-generational estimates are subject to environmental fluctuations. Although the flies were maintained in a chamber with uniform and constant environmental conditions, external environmental changes throughout the year (which could play a role during the handling of flies in the laboratory) could affect very sensitive traits such as female fecundity. The mean productivity of the Madrid base population declined from generation 201 to 234 by 8.4% in the outbred estimate and by 8.3% in the inbred one (Table 1). The corresponding declines for the Madrid lines were 12% and 8.4%. Therefore, the change in mean over generations was roughly the same for the base population and for the lines. For the Vigo base population, the mean productivity dropped from generation 111 to 125 by 26% in the outbred estimate and by 14% in the inbred one, and the corresponding drops in the Vigo lines were 35% and 17%. Thus, although somewhat larger in the lines, the drops were not very different between the base population and the lines. Altogether, this suggests that the drop in productivity observed between these generations was not mainly due to inbreeding depression, which should progress much faster in the lines with N_e ≈ 50 than in the populations with N_e ≈ 1000, but is most likely due to environmental causes. A comparison between the estimates of mean productivity of the base population and the lines in the same generation is more reliable, as they were obtained synchronously. Figure 2 presents the average productivities of the noninbred lines relative to those obtained in the corresponding base populations, for which the IP model predicts negligible depression. Again, the observed results were clearly closer to the expectations under a purging model than under a neutral model, although the predictions of the purging model for the Madrid lines were consistently higher than the observed values. Thus, the expected mean productivities under the purging model in the latest generation were 20% and 5% higher than the observed values in the Madrid and Vigo lines, respectively, whereas the expectations under the neutral model were 56% and 17% lower, respectively.
Computer simulations
Computer simulations were performed in order to assess which set of mutational parameters best explains the experimentally observed inbreeding loads of the base populations and derived lines and the relative fitness of the lines. All experimental results were generally consistent with simulations assuming large homozygous deleterious effects and small genomic mutation rates (U ≈ 0.02; Figs. 3 and 4). The average selection and dominance coefficients of mutations that best explained the inbreeding load results were s̄ = 0.3 and h̄ = 0.25. Simulations assuming effects tenfold smaller (s̄ = 0.01-0.03) and a mutation rate five times larger (U ≈ 0.1) were clearly inconsistent with the empirical observations and theoretical predictions (Figs. S2 and S3). The same can be concluded regarding the mutation model assumed by Kyriazis et al. (2020) (Figs. S4 and S5). Simulations assuming the removal of inbreeding load only by genetic drift were very close to the neutral predictions (Figs. S6 and S7).
DISCUSSION
Genetic purging has long been considered an effective force in reducing the inbreeding load ascribed to lethal and large-effect mutations, particularly under fast inbreeding (Hedrick 1994;Wang et al. 1999), but its efficiency against minor mutations under slow inbreeding is less obvious (Leberg and Firmin 2008). The time needed for purging deleterious mutations of small effect imposes an important limitation when it comes to detecting it, both in experimental populations (too long experiments, difficulty in handling, etc.) and in natural populations (not enough information is usually available), as has been previously noted (Gulisija and Crow 2007;García-Dorado 2015). In two long-term Drosophila analyses under slow inbreeding, López-Cortegano et al. (2016) showed that purging was very effective in reducing the inbreeding load to a great extent, but at the latest generations, the reported populations still harbored substantial inbreeding load. In this work, we continued one of those experiments and report late unpublished results of the other in order to evaluate how far the original inbreeding load can be removed in the long term by genetic purging. The base populations, with estimated effective sizes over 1000 individuals, reached a state of slight and nonsignificant inbreeding load after 5 or 10 years under laboratory conditions, with an average of 24 generations per year. The final inbreeding load of the derived lines, maintained with an effective size around 50 for up to 153 or 125 generations, was almost negligible. In the case of the Madrid population, maintained for the longest period, the two final estimates (generations 201 and 234) suggest that the population is close to a plateau.
The above results are sound evidence of purging, as the observed decline of the inbreeding load fits Inbreeding Purging (IP) predictions far better than the much faster decline predicted under a neutral (only genetic drift) model, both for the populations and for the lines (see Fig. 1). In addition, an exhaustion of the inbreeding load due to genetic drift alone would have required a much longer process and, more importantly, it would have been accompanied by a drastic decline in average fitness. It should be noted that IP predictions are computed ignoring both standard non-purging selection and new deleterious mutation. Particularly for large populations, predictions taking these factors into account should be more appropriate (see the full-model approach in García-Dorado 2012) but, unfortunately, this requires reliable estimates of too many genetic parameters. However, the very small values of the late estimates of the inbreeding load suggest that the role of new mutation in generating new inbreeding load has been rather small, and that the simple Inbreeding Purging approach can provide reasonable long-term predictions even in moderately large populations. The IP model for the average fitness of the lines also gave better predictions than the neutral one (Fig. 2), although neither of the two fits was very good. It is worth noting the consistent overestimation shown by the IP prediction for the lines of the Madrid experiment. This might be explained by inbreeding depression ascribed to deleterious alleles with effects too small to be efficiently purged (i.e., with d values much smaller than assumed to compute the predictions), but this should also cause a poor fit for the inbreeding load predictions, which was not observed (Fig. 1). There was also a drop in productivity over the last two generations for both the Madrid (generations 201-234) and Vigo (generations 111-125) experiments (Tables 1 and 2), but the drop was not very different between the large populations (with N_e over 1000) and the lines (with N_e around 50), particularly in the Madrid experiment. This indicates that the late fitness drop cannot be explained by inbreeding depression due to inefficient purging, as this would have caused a much larger fitness drop in the small lines than in the large populations. Nor can it be ascribed to new mutations with deleterious effects so small as to escape selection even in the large population, as this would require a huge mutation rate.
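For reference, the IP predictions used throughout derive from the purged inbreeding coefficient of García-Dorado (2012). A compact statement of that recursion, as it is usually written — readers should consult the original for the exact conventions — is

\[
g_t = \left[\frac{1}{2N_e} + \left(1 - \frac{1}{2N_e}\right) g_{t-1}\right]\left(1 - 2d\,F_{t-1}\right),
\qquad
E(W_t) \approx W_0\, e^{-\delta\, g_t},
\]

where F_t is Wright's inbreeding coefficient, d is the purging coefficient (setting d = 0 recovers the neutral prediction g_t = F_t), and δ is the initial inbreeding load.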
Other experiments have shown an exhaustion or drastic reduction of the inbreeding load, but for populations of much lower census sizes, where an important reduction would be expected from genetic drift alone. For example, Swindell and Bouzat (2006a) showed a decline of the inbreeding load in D. melanogaster to one third its initial value in populations maintained by mass mating ten breeding pairs for 19 generations, which represents a decline not much larger than expected from drift. Other long-term Drosophila studies with large census sizes also detected a reduced inbreeding load, but not its complete depletion. For example, Ávila et al. (2010) studied the effect of purging induced by restricted panmixia in a population of size N = 220 individuals (distributed among 55 vials), and reported a 44% reduction of δ for competitive fitness after 34 generations, and of 77% for viability after 60 generations.
Substantial reductions of the inbreeding load have also been observed under fast inbreeding (e.g., full-sib lines). The latter purges lethal and severely deleterious alleles but, contrary to slow inbreeding, usually leads to a continuous decline in fitness because of the fixation of mild or moderate deleterious mutations (e.g., Pekkala et al. 2014; Sharp and Agrawal 2016; Domínguez-García et al. 2019; see also Lynch and Walsh 1998, p. 255). For example, Domínguez-García et al. (2019) carried out a fast inbreeding experiment (5-6 generations of full-sib mating) with two Drosophila populations, one of them being the Vigo population referred to in this paper (in fact, the results of generation 111 of Table 2 correspond to those of the third generation of full-sib mating in experiment A of Domínguez-García et al. 2019). This experiment showed a continuous decline in fitness, with inbreeding depression accelerating in the latest generations, suggesting synergistic epistasis among deleterious alleles. In this regard, and considering that synergistic epistasis may facilitate the joint elimination of interacting deleterious mutations (Kondrashov 1988; Kouyos et al. 2007), an interesting result found in our analysis (Table 2 and Fig. 1B) was the fast drop of the inbreeding load observed in the Vigo base population from generation 111 to 125, a reduction not predicted by the IP model. It cannot be ruled out that some synergistic epistatic interactions between deleterious alleles may have induced a late, enhanced purging of these alleles, potentially observable under slow inbreeding, in disagreement with the prediction of the IP model, which ignores epistasis.
There is additional evidence of purging of the inbreeding load under fast inbreeding, such as that provided by Chelo et al. (2019), who detected a reduced extinction risk in experimental populations of Caenorhabditis elegans with high selfing levels. Fox et al. (2008) assayed the reduction of the inbreeding load that could be ascribed to purging by measuring it in the outbred cross of three generations of full-sib inbred lines, and observed an important average reduction in the beetle Stator limbatus, implying that about half the original inbreeding load was due to severely deleterious mutations. In contrast, Willis (1999) only detected a slight reduction in the outbred cross of selfed lines of Mimulus guttatus, suggesting that lethals or mutations of severe effect were not the main contributors to inbreeding load in those populations. This could be explained by the continuous removal of severely deleterious alleles by selfing in Mimulus guttatus. Barrett and Charlesworth (1991) also showed that the genetic load present in an outcrossing population of Eichhornia paniculata exposed to five generations of self-fertilization could be explained only with a high mutation rate to partially recessive deleterious alleles, and that inbreeding purged these alleles from the population.
Our results show that purging under slow inbreeding is effective in removing most of the inbreeding load. This finding is in accordance with several previous empirical results (Day et al. 2003; Reed et al. 2003; Swindell and Bouzat 2006a; Pekkala et al. 2012, 2014). Note that purging most of the inbreeding load is likely to imply purging substantial load due to non-severely deleterious alleles. For example, in the simulated case with s = 0.2 and h = 0.25, about 15% of the inbreeding load is ascribed to deleterious alleles with s < 0.2. Other studies, however, have failed to detect a significant reduction in inbreeding depression under relatively slow inbreeding. For example, Kristensen et al. (2011) did not find a reduction in inbreeding depression in Drosophila populations with slow inbreeding (N_e = 32 during 19 generations) compared to populations with fast inbreeding (one generation of full-sib mating), and Leberg and Firmin (2008) did not find evidence of purging in mosquitofish populations after serial bottlenecks (consisting of a reduction of the population size to five or fewer individuals, followed by an expansion to up to 300 individuals). These contrasting results could be ascribed to the small experimental scale in terms of the number of generations.
Our experimental results show the drastic long-term effect of genetic purging in removing the initial inbreeding load for moderately competitive fitness in populations maintained in the laboratory, and we may wonder how far our conclusions can be extrapolated to the wild. The expression and severity of inbreeding depression are environment-dependent, often being more pronounced in harsher environments (Martin and Lenormand 2006). Such an interaction may be the result of differential expression of phenotypes under selection (plasticity), environment-dependent dominance, or differential selection pressure (Cheptou and Donohue 2011). Therefore, alleles that under laboratory conditions (or in a particular benign environment in the wild) cannot be purged even under slow inbreeding, because they show only slight or no deleterious effects, may induce substantial depression in a harsher or more competitive environment. Thus, Bijlsma et al. (1999) already noted that purging efficiency depends on the conditions under which it occurs, and that effective purging in a given environment may not prevent inbreeding depression under different conditions (see also Swindell and Bouzat 2006b). However, as shown by López-Cortegano et al. (2016), although inbreeding depression can be larger in more competitive conditions due to the larger deleterious effects, purging should also be more efficient. Furthermore, these authors observed that purging that occurred in competitive conditions can also be efficient against inbreeding load expressed in noncompetitive ones, suggesting that the larger inbreeding load expressed under high competition could be mainly due to the same deleterious alleles expressed in a noncompetitive environment, but with more severe effects. Thus, our IP conclusions could be expected to hold in natural populations maintained in the wild.
The virtually complete depletion of inbreeding load by genetic purging observed in our experiments is compatible with several examples of natural populations showing little or no evidence of inbreeding depression for particular traits despite a history of inbreeding. For example, a reduced inbreeding load for juvenile survival was found in the bottlenecked Stewart Island robin (Petroica australis rakiura) population, with an estimated value of 0.24 lethal equivalents (Laws and Jamieson 2011). An absence of inbreeding depression among different life-history traits was also found in the ambrosia beetle Xylosandrus germanus (Peer and Taborsky 2005), and for several early fitness traits in a population of the tree Ceiba pentandra with variable selfing rates among maternal trees (Lobo et al. 2015). A reduced inbreeding load (δ = 0.19) was observed in the tapeworm Oochoristica javaensis with mixed mating (Caballero and Criscione 2019), and in the captive population of Cuvier's gazelle, which showed a positive relationship between juvenile survival and inbreeding (Moreno et al. 2015). Other examples of reduced or lacking inbreeding depression observed in wild populations include populations of the lizard Podarcis gaigeae (Runemark et al. 2013), the greater white-toothed shrew Crocidura russula (Duarte et al. 2003; δ = 0.3 for fecundity), and the invasive biennial Alliaria petiolata (Mullarkey et al. 2013). Recent studies at the genomic level have added more evidence of the action of purging against deleterious alleles. Xue et al. (2015) and Grossen et al. (2020) detected a reduction of the genomic load for mutations classified as highly deleterious in populations of mountain gorillas and Alpine ibex, respectively, both with moderate sizes and a history of bottlenecks, but not for putatively mildly deleterious ones. Although the true magnitude of the corresponding effects is unknown, and that of the putatively mild alleles could in fact be very small and irrelevant on the time scale of laboratory experiments or conservation management programs, these results highlight the importance of maintaining a high population size, above 1000 individuals, to prevent the accumulation of deleterious mutations that might put long-term population survival at risk. However, the reduction of the genomic load may not appropriately reflect the reduction in the fitness inbreeding load and inbreeding depression. For example, inbreeding depression could be smaller than suggested by the genomic load due to the purging of the more severely deleterious mutations, as seems to be the case for island foxes (Robinson et al. 2018), which carry a higher proportion of missense and loss-of-function mutations than the mainland gray foxes but show no signs of inbreeding depression (such as congenital defects). Thus, although genomic-based information can be useful in order to assess conservation efforts and to ensure the survival of inbred populations, the main relevance of purging relies primarily on its impact on the fitness inbreeding load.
Among all the mutational models tested in the simulation analyses, one set of mutational parameters produced results that fit the observed inbreeding load and fitness well, as well as the corresponding IP predictions. These parameters are a low genomic mutation rate of U ≈ 0.02 per haploid genome and generation, a relatively large average deleterious effect (s) of about 0.3, and a moderate average dominance coefficient (h) of around 0.25. These parameters are within the range of those generally found for eukaryotic species in mutation-accumulation studies (see, e.g., Caballero 2020, p. 161), including a lethal class (see the distribution in Fig. S1). The good fit between the predictions obtained with this low rate of large-effect mutations and the experimental results suggests that most mutations of tiny effect that can be classified as deleterious in terms of molecular evolution (Haag-Liautard et al. 2007), which may be relevant on evolutionary time scales, contribute little inbreeding load over the relatively short time spans relevant to genetic conservation or animal breeding (García-Dorado and Caballero 2000; Caballero 2020, p. 157). Simulations performed by Domínguez-García et al. (2019) in relation to fast inbreeding by full-sib mating also support this model. In both cases, a model of many more mutations (a five times larger mutation rate) of small effect (ten times smaller) seems incompatible with the experimental results. The same can be concluded with respect to the mutation model assumed by Kyriazis et al. (2020) (Figs. S4 and S5), which implies that mutations of moderately large effect (say, with selection coefficient s > 0.1) are very scarce, and does not consider lethal mutations (García-Dorado and Caballero 2021). This contrasts with experimental evidence supporting that deleterious mutations of moderately large effect, as well as lethals, are common and have a substantial impact on inbreeding depression (Caballero and Keightley 1998; Bijlsma et al. 1999).
In conclusion, although large population sizes are always to be preferred in order to preserve biodiversity and avoid the fixation of deleterious mutations, our results illustrate the potential of purging under slow inbreeding to remove the original inbreeding load in moderate-sized populations and to lead populations toward an equilibrium with little or no inbreeding depression. It is important to emphasize that the magnitude of the deleterious effects being purged and of the expressed inbreeding depression can be influenced by the environment, so populations should preferably be maintained in the wild if a proper assessment of the habitat is carried out. This has important implications not only in conservation programs (for either natural or captive populations), but also in breeding programs (livestock, aquaculture, etc.), where population sizes are often small and usually face the problem of inbreeding.
"year": 2021,
"sha1": "abdec049387def02d2f066b7ccbcd593573fb939",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41437-021-00464-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1dbbab69a8a4bc6931523efe9d60779f1037439b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Neutrino Scattering in a Magnetic Field
Motivated by the evidence for a finite neutrino mass we examine anew the interaction of neutrinos in a magnetic field. We present the rate for radiative scattering for both massless and massive neutrinos in the standard model and give the corresponding numerical estimates. We also consider the effects arising from a possible neutrino magnetic moment.
Introduction
Neutrino flavor oscillations, for which there now appears to be significant experimental evidence, have established that at least some neutrino has non-zero mass. However, the sensitivity to mass in vacuum flavor oscillations only enters as Δm², and thus one is unable to establish the neutrino's mass. Electromagnetic interactions of neutrinos, however, often have direct sensitivity to the neutrino mass. In this paper we evaluate a number of electromagnetic processes that could result when a high energy neutrino enters a laboratory magnetic field and consider the practicality of observing such effects.
We first consider the radiative scattering of high energy neutrinos in a magnetic field. Such scattering arises from the coupling to the field of the virtual charged particles in the loop diagrams shown in Fig. 1. Radiative scattering can also be viewed as the back-scattering of the virtual photons of the magnetic field off the high energy neutrinos.
Neutrino-photon scattering was considered early on, and it was shown by Gell-Mann [1] that, for a local interaction and massless neutrinos, ν + γ → ν + γ is forbidden by angular momentum conservation [2]. This restriction does not apply for virtual photons, massive neutrinos, or to higher order in (m_ℓ/m_W)². Dicus and collaborators [3-6] have given cross sections for both massless and massive neutrino-photon scattering. Neutrino scattering in a magnetic field has been considered in detail by Mikheev and collaborators [7-10], as well as by other authors [11,12].
For high energy neutrinos, the radiated photons are energetic and the distribution is peaked in the forward direction. The kinematics can be visualized in terms of the 3-momentum q of the virtual photons. We take q to be directed opposite to the neutrino momentum, with q_0 = 0. Let E, p and E′, p′ denote the incoming and outgoing neutrino 4-vectors, m_ν the neutrino mass, and ω′ the scattered photon energy. Here γ = E_ν/m_ν and θ is the scattering angle measured from the forward direction.
In the following section we present the scattering rate as calculated in the standard model. This is followed by numerical estimates of expected rates for operating and planned accelerators. We conclude with a discussion of effects that would manifest themselves should neutrinos possess a magnetic moment.
Scattering Rate in the Standard Model
The relevant amplitudes are shown in Fig. 1. For small momentum transfers, s/m_W² ≪ 1, the boson propagators can be contracted to the low-energy 4-fermion interaction. It is also evident from the graphs that the dominant contribution will come from the lowest-mass fermion in the loop, the electron. The W-exchange graph allows for mixing of the neutrino flavor eigenstates, specified by a unitary matrix U_ℓα, where ℓ and α are the flavor and mass eigenstate indices. The external field appears in the neutrino rest frame as time-independent crossed electric and magnetic fields. The exact solution of the wave equation in these fields is used for the lepton propagator [7] instead of the virtual-photon coupling to the external field.
The scattering rate (probability per unit time) is expressed in terms of the invariant χ_e, where B_e = 4.41 × 10⁹ T is the Schwinger critical field [13] and m_e is the electron mass. Three different cases can be identified. First, for massless neutrinos, m_ν = 0, we find the rate of Eq. (3). In the second case the neutrino is assumed massive, m_ν ≠ 0; in the absence of mixing the scattering rate is given by Eq. (4). In the presence of mixing, if the incident mass eigenstate α is not changed by the scattering, Eq. (4) remains valid but is modified by a factor (given here for 3-flavor mixing) arising from the interference of the W and Z⁰ contributions. The third case arises in the presence of mixing when a massive neutrino α decays radiatively to a lighter neutrino β. Assuming that m_να ≫ m_νβ, the rate for the process ν_α → ν_β + γ is given by Eq. (5). Of course, in an experiment the incident as well as the detected neutrinos are usually flavor eigenstates. Thus, to use Eq. (5) or Eq. (4a), one has first to project the flavor eigenstate onto its mass eigenstates.
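For orientation, the field-strength invariant conventionally used in this context — an expression consistent with the numerical estimate χ_e ≃ 0.5 × 10⁻⁴ quoted below for E_ν = 50 GeV and B ≈ 2 T, though it should be checked against the paper's own definition — can be written for a neutrino crossing a transverse field as

\[
\chi_e = \frac{E_\nu}{m_e}\,\frac{B}{B_e}, \qquad B_e = \frac{m_e^2 c^3}{e\hbar} = 4.41 \times 10^{9}\ \mathrm{T}.
\]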
The above results remain valid as long as χ_e < 1, which is always the case even at the highest neutrino (beam) energies and for magnetic fields that can be achieved in the laboratory. Results pertaining to larger values of χ_e, such as can be reached in the very strong fields of astrophysical environments or for extremely high energy cosmic-ray neutrinos, are discussed in [9,14].
We also give the neutrino-photon scattering cross section for two special cases: (a) a massless neutrino with a virtual initial photon. In this case we set (in the lab frame) q_µ = {0, 0, 0, −q} and p_µ = {E_ν, 0, 0, p_ν}, which leads to the cross section of Eq. (6). Here s = 2E_ν|q| is the square of the cm energy and q² = −q_µq^µ. (b) The incoming photon is real but the neutrino is massive. The cross section for this process has been discussed in detail in Ref. [6]; when s ≫ 4m_e² it tends to the constant value of Eq. (7). Eqs. (6) and (7) can be used in conjunction with the virtual-photon formalism alluded to previously to obtain estimates of the scattering rates in a magnetic field.
Numerical Estimates
The range of variables entering Eqs. (3)-(5) is restricted by the experimental possibilities, and we therefore consider only experimentally accessible values (in Gaussian units). It follows that χ_e ≃ 0.5 × 10⁻⁴, and for the purposes of this estimate we have set U*_eα U_eβ = 1 in Eq. (5). The resulting probabilities for radiative scattering per incident neutrino are shown in Fig. 2 as a function of the neutrino mass. It is important to note that Eqs. (3)-(5) have been obtained by using the amplitude corresponding to only one of the three possible cases. Thus, for a given neutrino mass, the probability for radiative scattering is given by the dominant contribution; see for instance Fig. 2.
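A quick numerical cross-check of this estimate, using the transverse-field expression for χ_e suggested above (an assumption, not an equation taken from this paper), is sketched below.

```python
# Back-of-the-envelope check: chi_e = (E_nu / m_e) * (B / B_e)
E_nu = 50e9        # neutrino energy in eV
m_e = 0.511e6      # electron mass in eV
B = 2.2            # laboratory field in tesla
B_e = 4.41e9       # Schwinger critical field in tesla

chi_e = (E_nu / m_e) * (B / B_e)
print(f"chi_e = {chi_e:.2e}")   # ~4.9e-05, i.e. chi_e ~ 0.5e-4 as quoted
```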
We note that for m_ν ≳ 100 eV the dominant process is the radiative decay catalyzed by the presence of the magnetic field. The probabilities shown in Fig. 2 are to be compared to the available integrated fluxes of high energy neutrinos. The MINOS beam at Fermilab will soon deliver 10¹⁸ 50-GeV neutrinos per year, mainly ν_µ. A future 50-GeV neutrino factory could deliver, under optimal conditions, 10²¹ neutrinos in one year at E_ν ≃ 20 GeV. In either case these fluxes are too low to lead to observable effects unless m_ν > 100 MeV.
Magnetic Moment Interactions
A Majorana neutrino or a massless Dirac neutrino can have no magnetic moment. A massive Dirac neutrino has a magnetic moment which arises in the standard model from loop corrections; to leading order in (m_ℓ/m_W)² it has the value quoted in [15], where µ_0 is the Bohr magneton, µ_0 = 5.79 × 10⁻¹¹ MeV/T. The accepted limits on possible magnetic moments are those given by the PDG [16]. These limits are obtained most directly from the shape and rate of the spectrum in νe and ν̄e scattering (see for instance Ahrens et al. [17]). There are also astrophysical limits based on the cooling rate of stars and on the observation of the neutrino burst from SN1987A; such limits are in the range of (10⁻¹⁰ to 10⁻¹²)µ_0 for all three neutrino mass eigenstates. We discuss three possible manifestations of a magnetic moment interaction in a magnetic field. For laboratory field strengths these processes are far from reaching the value predicted by Eq. (8), nor can they improve on the existing limits listed previously. The simplest interaction is the precession of the spin vector, which modifies the helicity state of the neutrino and thus alters its weak interaction rate. Spin rotation is due to the different time evolution of the two spin states projected onto the magnetic field; the neutrino momentum vector is assumed to be perpendicular to the magnetic field. The rotation angle is θ = 2µ_νBL, where L is the length of the field. The fractional change in the weak cross section is then Δσ/σ ≃ 1 − cos²θ, which for small rotation angles is approximately θ². For B = 2 T and L = 10 m, a change Δσ/σ ≳ 1% would correspond to a limit µ_ν/µ_0 ≳ 10⁻⁵. If we allow for mixing, different mass eigenstates would have different magnetic moments. In the presence of flavor mixing among magnetic moment eigenstates, an axial magnetic field would lift the degeneracy between these eigenstates and potentially result in field-induced neutrino flavor oscillations. Analyzing this situation in the familiar two-neutrino case, where we assume the mass and magnetic moment eigenstates to be identical, as in the standard model, we find the flavor oscillation probability P(ν_α → ν_β) = sin²2θ sin²(Δm²_ν L/4E + Δµ_ν BL/2), where θ is the mixing angle, Δm²_ν = m²_α − m²_β, and Δµ = |µ_α − µ_β|. The second term in the time-evolution factor is typically smaller than the first term, except in cases where the magnetic moment is anomalous, the field strength is extreme, or the neutrino is very energetic (E_ν = 10¹² GeV when (m_α + m_β) ∼ 0.1 eV and B = 2 T).
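As a sanity check on the precession estimate above, the short calculation below converts the rotation angle θ = 2µ_νBL to radians using ħc and evaluates Δσ/σ; it is an illustrative computation, not code from the paper, and the trial value of µ_ν/µ_0 is the sensitivity quoted in the text.

```python
import math

hbar_c = 1.973e-13    # MeV * m
mu_B = 5.79e-11       # Bohr magneton in MeV/T
B, L = 2.0, 10.0      # field (T) and path length (m)
mu_ratio = 1e-5       # trial value of mu_nu / mu_B

theta = 2 * mu_ratio * mu_B * B * L / hbar_c   # spin rotation angle (rad)
dsigma = 1 - math.cos(theta) ** 2              # fractional change in weak rate
print(f"theta = {theta:.3f} rad, dsigma/sigma = {dsigma:.2%}")
# -> theta ~ 0.12 rad, dsigma/sigma ~ 1.4%, consistent with the ~1% benchmark
```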
In a laboratory experiment, one can search for an anomalous neutrino magnetic moment through these field-induced oscillations. For B = 2 T and L = 10 m, and if one can detect oscillations at the 10⁻⁴ level, then for maximal mixing the limit is Δµ/µ_0 < 2 × 10⁻⁶.
The presence of a magnetic moment would also lead to the emission of a high-energy photon by magnetic "Compton" scattering, as shown in Fig. 3. For this process, involving real or virtual photons and assuming that s ≫ q², the cross section depends on the cm energy s; for a head-on collision with a real photon of momentum q, s = 4E_νq. Using the virtual-photon formalism [14] we can obtain the radiation rate Γ in a magnetic field B, where χ_e is the invariant of Eq. (3). For E_ν = 50 GeV, χ_e = 0.5 × 10⁻⁴ (B = 2.2 T), L = 10 m, and (µ_ν/µ_0) = 7 × 10⁻⁵, the scattering probability is P ≃ 10⁻²⁰. Given the available neutrino fluxes, this can be considered the lowest limit of detection. Finally, we note that the presence of a transition magnetic moment leads to radiative decay; the invariant probability (inverse lifetime in the neutrino rest frame) scales as (µ_αβ/µ_0)² [18]. For instance, for L = 10 m, E_ν = 50 GeV, and Δm²_αβ = m²_α = 1 eV², the decay probability is P = 3.5 × 10⁻¹⁸ (µ_αβ/µ_0)². It has been proposed to use an external radio-frequency field to induce the transition [19]. As an example, using one of the LEP rf cavities at nominal power levels, the probability for a (non-radiative) transition is of order P ∼ 10³ (µ_αβ/µ_0)². Experimentally, the transition leads to a change in the rate of detection of a particular neutrino flavor. This limits the observable probability to P ≳ 10⁻³, or (µ_αβ/µ_0) < 10⁻³.
"year": 2001,
"sha1": "5d9f9927d6c6edc362ec77096efd6fa6838ba33b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0106196",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2d10ab5ba84f0c2a68629aa4363d2487f16f9a01",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Radiology Reporting Errors: Learning from Report Addenda
Background The addition of new information to a completed radiology report in the form of an "addendum" conveys a variety of information, ranging from less significant typographical errors to serious omissions and misinterpretations. Understanding the reasons for errors and their clinical implications will lead to better clinical governance and radiology practice. Aims This article assesses the common reasons that lead to the generation of addenda to completed reports and their clinical implications. Subjects and Methods A retrospective study was conducted by reviewing addenda to computed tomography (CT), ultrasound, and magnetic resonance imaging reports between January 2018 and June 2018, to note the frequency and classification of report addenda. Results The rate of addenda generation was 1.1% (n = 1,076) among the 97,003 approved cross-sectional radiology reports. Errors contributed to 71.2% (n = 767) of addenda, most commonly communication (29.3%, n = 316) and observational errors (20.8%, n = 224), and 28.7% were non-errors aimed at providing additional clinically relevant information. The majority of the addenda (82.3%, n = 886) did not have a significant clinical impact. CT and ultrasound reports accounted for a 36.9% (n = 398) and 35.2% (n = 379) share, respectively. A time gap of 1 to 7 days was noted for 46.8% (n = 504) of addenda, and 37.6% (n = 405) were issued in less than a day. Radiologists with more than 6 years' experience created the majority (1.5%, n = 456) of addenda. Addenda to reports generated during emergency hours contributed 23.2% (n = 250) of the total. Conclusion The study has identified the prevalence of report addenda in a radiology practice involving a picture archiving and communication system in a tertiary care center in India. The etiology included both errors and non-errors. The results of this audit were used to generate a checklist and put in place protocols that will help decrease serious radiology misses and common errors.
Introduction
Sometimes radiologists approve the report only to realize later that certain additional information had to be mentioned in the report. Or, the radiologist's opinion may change when other clinical data are available at a later time. In such situations, where there is a need to add new comments or clarification to the original report, software tools such as "report addenda" can prove to be very useful. This has been possible because of the worldwide availability of picture archiving and communication system (PACS) in the last few decades, which has revolutionized the radiological documentation, making the reports electronically and promptly available to the referring clinicians and patients.
What is a Report Addendum?
An "addendum" is the supplementary text added at the end of a previously approved radiology report, to correct or expand on an original statement (►Fig. 1). 1 It is not just "discrepancy documentation," but can become the most crucial part of the report, not only for medical and ethical implications, but also for medicolegal consequences. 2 The new information conveyed ranges from less significant typographical errors to serious clinically significant misses. 3 The addendum and the original report, are available for viewing together, hence eliminating the need for deletion of the original report by the radiologist at a later date. It also contains a record of the date, and reason for the addition or clarification of information being added to the medical history.
In this study, we examine the frequency of and common reasons for addenda generation in radiology reports at our institution, with everyday scenarios and examples. We also discuss possible solutions to minimize errors.
Study Group
This retrospective study was performed in a tertiary care hospital in India, after approval by the institutional review board (IRB No: 11744/2018). All the approved cross-sectional radiology reports generated over 6 months between January 2018 and June 2018 were searched from the radiology information system (RIS). ►Fig. 2 shows the flowchart of the included patients.
Report and Image Analysis
All the original reports, addenda, and images were reviewed by two radiologists and categorized as below. Patient records were also examined to determine the effect of the initial report on management and clinical outcome.
Imaging modalities:
Finalized computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound reports were included in the study.
Fig. 1 Report template. The addendum is added at the end of the initial report. The entire report should be rewritten in the addendum, not just the modified portions. Both reports should be available for viewing together.
Time of generation and time delay:
The time delay between the original report and the addendum was calculated from the time of generation recorded in the addendum and categorized as less than 1 day, 1 to 7 days, 8 to 30 days, and more than 30 days. Addenda were also classified based on whether they were generated during regular work hours or emergency hours.
Etiology:
We classified addenda into 10 broad categories and subcategories based on the reasons for creation (►Table 1). They were also divided into "error" and "non-error" addenda (►Fig. 12).
Clinical significance:
The medical charts of all patients were reviewed, and the clinical importance of the modification in report content was graded into five categories (►Table 2). 4 Clinically significant addenda were those that had the potential to change the diagnosis and immediate patient management and to affect patient morbidity. Clinically nonsignificant addenda were related to patient demography, spelling errors, records of communication, etc.
Experience:
We classified the residents and consultants based on their expertise in radiology into three groups: junior residents (< 4 years' experience), junior consultants and fellows with 4 to 6 years' experience, and assistant/associate professors or higher ranks, including specialist radiologists (> 6 years' experience).
Results
Addenda rate: In the 6-month study period, out of a total of 97,003 approved cross-sectional radiology reports, addenda were generated for 1,076 reports, yielding an overall frequency of 1.1%. Twenty-three reports had more than one addendum added to the same report. Ninety-six percent of addenda were created by the author of the original report.
Classification Based on Etiology
Errors contributed to a 71.2% (n = 767/1,076) share of addenda or 0.8% (767/97,003) of total approved reports. Communication errors were most common (29.3%, n = 316) followed by observational errors (20.8%, n = 224) and interpretation errors (17.7%, n = 191). Among the subtypes, the most common were errors due to typographical reasons (n = 136), followed by an incomplete description of the findings (n = 110) and under reading (n = 118). ►Table 3 shows the percentage of each reason for addenda creation.
Non-error addenda contributed 28.7% (n = 309/1,076) of reports, of which 50.1% (n = 155/309) were due to "limitations of modality." This group included reports in which a definite diagnosis could not be made on one particular imaging modality or sequence and patients were called back for additional imaging; the conclusion was then added to the original report as an addendum.
Classification Based on Imaging Modalities
Among the imaging modalities, 36.9% (n = 398) of the addenda were added to CT reports, followed by 35.2% (n = 379) in ultrasound reports and 27.7% (n = 299) in MRI reports.
Time Interval
The range of the time interval between sign-off of the original report and addition of the addendum was 0 to 273 days, with an average of 2 days. The majority of addenda were issued within 1 to 7 days (46.8%, n = 504), while 37.6% (n = 405) were issued in less than 24 hours (►Fig. 13). Higher time gaps of more than a month (n = 7/1,076) were due to the addition of comparisons with prior imaging made available at a later date.
Table 1 Categories of report addenda based on the reasons for creation (continued)

Observational error
- Example: missing pulmonary embolism on CT abdomen, as the lower sections of the thorax were not checked.
- Prior examination: finding missed due to overreliance on previous reports without seeing images, or not checking previous reports and images.

Interpretation error
- Faulty reasoning: finding appreciated but misinterpreted, possibly due to lack of knowledge, limitations of the imaging modality, or insufficient clinical data; more differentials added, as findings did not fit one diagnosis; negative points ruling out an alternative diagnosis missed. Example: multiple differentials for pancreatic lesions added due to atypical imaging features.
- Incomplete description: finding appreciated and mentioned in the report but not sufficiently elaborated or summarized; negative points ruling out an alternative diagnosis missed. Examples: size or extent of disease for malignancy; the volume of a urinary bladder clot.

Transcription or communication error
- Physician communication: critical or unexpected findings not communicated to the physician; note added requesting the physician to discuss the case with more clinical history.
- Further recommendations: failure to suggest the next step to guide the physician. Example: follow-up with specific imaging sequences, lab tests, etc.
- Typographic error: mismatch of gender, age, or laterality, and spelling errors.
- Erroneous report/template-related errors: report generated in error and belonging to another patient; pasting common formats and not removing the non-applicable points; report approved by mistake; wrong scan title or incorrect clinical data entered; copy-paste error (details copied from the previous report, not applicable to the present report). Examples: gallbladder reported as normal in a post-cholecystectomy patient; mentioning prostate in females and uterus in males.
- Incomplete report: findings picked up during reading of the scan but missed out on mentioning in the body or impression; only abnormalities mentioned, unremarkable structures not added; name of reporting radiologist not added; findings in some organs left blank in the template.

Additional remarks/comments
- Not a report error, but a clarification of a previously described finding; review and confirmation of the conclusions reported by another radiologist. Examples: the volume of the liver before hepatectomy added on the clinician's request; color of aspirated fluid added to the report for a CT-guided aspiration procedure.

Clinical history
- Finding missed or misinterpreted because of inaccurate or absent clinical history; not paying attention to history.

Study limitations
- Limitations of modality: help of another modality sought to remove ambiguity in findings (►Fig. 8). Example: liver lesion detected on CT, but additional MRI done for characterization.
- Limited sequences/views: views within the same modality too limited to give a definite opinion; more sequences done as a problem-solving tool. Example: additional DRIVE sequence requested to identify a scolex in the brain (►Fig. 9).
- Limitations of technique/protocol: finding misinterpreted due to scan-related factors such as contrast vs. non-contrast, supine vs. prone, incorrect scan parameters, incorrect windowing, the plane of imaging, or artefacts (►Figs. 10 and 11). Example: inaccurate local staging of carcinoma rectum due to wrong scanning planes.

Comparison
- Forgot to compare disease status with already available prior imaging; comparing with films of a scan done elsewhere, made available at a later date.

Follow-up
- Record of complications or follow-up of an intervention procedure. Examples: resolution of a collection after pigtail insertion; resolution of intussusception after reduction.

Patient-related limitations
- Study incomplete or suboptimal due to patient-related factors; findings added as addenda at a later date/time after completion. Examples: empty urinary bladder or bowel gas shadows on ultrasound; movement artefacts on CT.

Technical errors
- Images sent to the wrong patient's folder; report approved with the ID of another radiologist; addendum added by error; failures due to machine resolution; voice recognition software error.

Fig. (A and B): Left-sided pneumothorax was reported on chest computed tomography (CT) (A), but the right clavicle fracture was not mentioned (B). Eventually, the patient was diagnosed with myeloma with pathological fractures. An addendum was added to the CT report. This highlights the importance of following a checklist for reporting.
Clinical Significance
Minor or no clinical impact was noted due to modifications in 82.3% of reports, whereas 17.6% of reports were of major clinical significance (►Table 2). All the critical changes were made in less than one day and directly notified to the referring clinician. On subsequent review of patients' medical records, none of the critical addenda resulted in major adverse outcomes.
Radiologic Errors
To err is human; nevertheless, reporting errors are a complex issue and often not admitted or recorded due to fear of a bad reputation and medical lawsuits. 2,5 Radiologists are judged by their clinical colleagues by their "misses," and these misses are the major contributor to legal grievances among radiologists. 6,7 Radiologists, referring doctors, and patients should be made aware of the fact that discrepancies in radiology reports are well-recognized, sometimes inevitable, and do not always equate to negligence. 7 Instead of hiding them, we should see them as learning opportunities and initiate preventive strategies to reduce their occurrence. An essential step in this direction is to understand the sources of these errors.
Fig. (A and B): Under reading. Normal-pressure hydrocephalus was correctly reported on MRI brain (A), but the inflammatory changes in the single lower section of the skull base (circle) were not mentioned in the report. After 1 week, the patient presented with fever and cranial nerve palsies. This time, contrast MRI picked up the skull base osteomyelitis (B). The missed findings were documented as supplementary text in the old report.
Fig. 7 (A and B): Over-reliance on clinical history. The clinical history said "A 50-year-old lady with suspected ovarian malignancy and family history of the same." The radiologist was biased by the statement and interpreted the computed tomography (CT) abdomen as "bilateral ovarian masses (A), consistent with primary ovarian malignancy." On a second look at a later date, he discovered the stomach wall thickening (B). This later turned out to be gastric adenocarcinoma with Krukenberg ovarian deposits.
Prevalence of Radiologic Error: What is Already Known
Several studies in the past have used different methods and study populations to analyze radiological error rates and have obtained variable results (►Table 4). 1,8-16 This is because of the absence of a single, standard, and universally reproducible process for analyzing radiological errors.
The peer-review method is the most widely used, wherein a second radiologist reviews prior imaging and decides the degree of interobserver disagreement. 17 This method is time-consuming and heavily dependent on the reviewer's judgment. There is limited literature that evaluates report addenda as a tool for error analysis, summarized in ►Table 4. 1,8-10 One such noteworthy work by Brigham et al 8 used report addenda to calculate error rates in 5,568 reports and classified them based on etiology and imaging modality. These studies reflect agreement that addenda analysis, as a self-acknowledged method of error detection, is less time-consuming and minimizes the interobserver variability in image interpretation compared with the peer-review method used in previous works.
Several error classifications have been proposed in the past. The most notable works are by Renfrew et al (1992) 21 and Kim and Mansfield (2014). 22 The majority of errors in both these classifications were contributed by under reading or observational errors. Our classification system is adapted from the previously published literature 8,21,22 and elaborated taking into consideration the reasons for addenda generation in our reports and the errors prevalent at our institution. For example, we removed the broad category of "satisfaction of search" because, due to the retrospective nature of the study, it was not possible to assess the thought process of the radiologist from the report. 21 A few additions were made to the classification, such as "comparison with the prior report" and "technical errors," which made a fair contribution to our data.
Addenda: A Useful Tool to Study Errors
Report addenda are a relatively new concept, with limited available literature exploring their role in error analysis. The purpose of studying the reasons for addenda generation is to get an idea of the areas where mistakes are made, so that measures can be taken to reduce their occurrence by incorporating common misses into reporting checklists and radiology training.
We have the largest PACS in the country catering to a 3,000-bedded hospital, with an inbuilt facility to insert "addenda" to reports. Monthly audit of report addenda is one of the key performance indicators (KPI) that we assess for the National Accreditation Board for Hospitals and Healthcare Providers (NABH) accreditation purposes. An automatic computer-generated list of addenda from RIS is audited by a dedicated radiologist and the reasons and trends of report addenda and errors are studied. Through this retrospective study, we aimed to convey the lessons we learned through this exercise.
Our error rate is lower than most of the previously reported error rates of 3 to 40%. 8,10,16,23 The majority of our errors were related to poor communication. These can not only have ill effects on patient management but can also create confusion in the mind of the reader. For example, writing "distal" urethral stricture instead of "proximal," or "arterial" thrombosis instead of "venous," completely changes the meaning of the report and can have significant repercussions on patient management. Some radiologists are dependent upon transcriptionists, who are in the habit of copying and pasting from common reporting templates, and finalizing such reports without reading them introduces gender and age errors, like writing "prostate" in females and "uterus" in males. 24 These typographical errors reflect lack of focus, inattention to detail, and, most importantly, not reviewing the typed or dictated report before approval.
Observational errors were the second most common type of error, unlike other series that have reported this to be the most common. 3,22,25 Reducing the frequency of missed findings is a challenge for radiologists. These errors occur due to failure to pay attention to all images and all areas within each image, which is likely to be influenced by psychophysiological factors such as satisfaction of search, level of alertness, and work fatigue. 6,12 They could also be affected by external factors such as reporting conditions, duration of reporting, distractions, pressure to issue reports fast, nonavailability of relevant clinical data, suboptimal imaging, and conspicuity of findings. 18,26 Though report addenda were most commonly inserted to correct errors, in 28.7% of cases they were used to highlight new information that potentially has a role in the management of the patient. For example, comparisons with older studies done elsewhere made available at a later date, communicating the findings of an additional modality, correcting the scan date, adding a missed step in an intervention procedure, etc., may not critically affect immediate management but are a part of the report. Hence, though the addenda rate in our reports was 1.1%, the actual error rate was only 0.8%, because our study sample was diluted by non-error addenda. In other words, the addenda rate can overestimate error rates when addenda are used to add specific information of low clinical relevance instead of crucial misses.
Addenda can also underestimate the actual error rate, as many errors go unrecorded and never come to the radiologist's attention unless pointed out by clinicians in multidisciplinary meetings or by colleagues in retrospect while reporting the follow-up scans.
The higher percentage of addenda in reports by more experienced radiologists is possibly due to (1) the higher proportion of cases finalized by them compared with the trainee residents, (2) referring clinicians directly discussing cases with senior radiologists, making additional clinical data available to them and resulting in necessary changes to the reports whenever required, and (3) the junior radiologists discussing their cases with the more experienced ones before finalizing reports, reducing their chances of errors.
The lower percentage of addenda in reports issued during emergency hours can be attributed to (1) fewer distractions for the on-duty reporting radiologist while the pager is handled by a co-on-call resident, and (2) direct phone conversations with the referring doctor giving a clearer picture of the patient's history and clinical concerns.
Communication of Addenda between Radiologists, Referring Physicians, and Patients
Documentation of an addendum does not conclude the responsibilities of the radiologist. It is also the radiologist's job to communicate the changed findings directly to the referring physician, either face-to-face or by telephone, in a timely manner, and to record the same in the modified report. This is the most crucial step to avoid potential mismanagement of patients. 27 Radiologists are sometimes reluctant to document an addendum, as they feel it is an admission of guilt and can lead to medical lawsuits if not conveyed in a proper manner. 5,28 Report addenda are accessible to patients, who perceive them as an open record of fallacies in reports discovered in hindsight. This may leave an impression of gross medical malpractice in the minds of patients, even when there was none. 29 In such situations, direct verbal communication of the changed or additional findings, with an apology to the patient and the referring physician, can prevent loss of trust.
How to Reduce Errors?
Based on the reasons for the creation of addenda in our reports, we propose some strategies to minimize radiological error, as listed in ►Table 5. The key highlights are updating our skills through reading, discussion, and practice; being more vigilant at the time of reporting; and creating a better reporting environment.
The NABH set up under the Quality Council of India has introduced a quality assurance program to monitor the quality of reports. The KPI include monitoring of the rate of variation of imaging findings compared with clinical diagnosis and histopathology, rate of radiology reporting errors, peer group reviews for radiology protocols, maintaining records of re-dos of radiology studies and audits on internal quality, critical reporting, and emergency radiology services. 30 Sharing the lessons learned with radiology colleagues in the regular audits would minimize the need for addenda and improve the quality of reports. Some of these guidelines have been incorporated in ►Table 5.
Artificial intelligence methods can potentially increase the efficiency of radiologists and help reduce some of the errors of omission and commission in the future. For example, Minn et al developed an error detection algorithm for detecting and notifying radiologists of gender and laterality errors. 31 Tools such as computer-aided detection can assist radiologists in disease detection and improve interpretation and report generation. 32
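To make the idea concrete, a toy version of such a consistency check is sketched below. This is not the algorithm of Minn et al: the keyword lists and report format are invented for illustration, and a production system would work on structured RIS/HL7 fields rather than raw report strings.

```python
import re

# Hypothetical keyword lists; a real system would use curated clinical lexicons.
FEMALE_ONLY = {"uterus", "ovary", "ovarian", "endometrium"}
MALE_ONLY = {"prostate", "seminal vesicle", "testis"}

def find_mismatches(report_text, patient_sex, study_side=None):
    """Flag gender- and laterality-inconsistent phrases in a report draft."""
    text = report_text.lower()
    issues = []
    # Organs incompatible with the recorded patient sex
    organs = FEMALE_ONLY if patient_sex.upper() == "M" else MALE_ONLY
    for organ in organs:
        if organ in text:
            issues.append(f"gender mismatch: '{organ}' in a {patient_sex} patient")
    # Laterality: study ordered for one side, report mentions only the other
    if study_side:
        wrong = "right" if study_side == "left" else "left"
        if re.search(rf"\b{wrong}\b", text) and not re.search(rf"\b{study_side}\b", text):
            issues.append(f"laterality mismatch: report mentions only '{wrong}'")
    return issues

print(find_mismatches("Uterus appears normal.", patient_sex="M"))
```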
Study Limitations
The study is retrospective. There is a selection bias, as non-cross-sectional imaging modalities such as radiographs and mammograms were not included, due to the subjective nature of interpretation and the inter-/intraobserver variation of judgment in these studies. 33 Provisional reports were also not included. In situations where the type of error was found to be a combination or overlap of multiple categories, it was assigned to the single most appropriate group. The reported time of notification of addenda is not a precise representation of the actual time delay, as some errors may have been notified to the referring physician much earlier than their addition to the report.
Conclusion
Errors in radiology reports are rare but sometimes avoidable. Addenda provide a great platform to modify or correct reports at a later date, and we used this system as an opportunity to identify, quantify, and classify errors occurring in day-to-day radiology practice at a large tertiary care academic teaching hospital. Regular audits would minimize the need for addenda. This knowledge is immensely helpful in providing ideas to improve the quality of our reports and the patients' clinical records, ultimately benefiting the patient.
Financial Support and Sponsorship
None.
Conflict of Interest
There are no conflicts of interest.
Others
- Ensure timely and accurate generation and verification of reports by competent staff
- Gather and analyze feedback on content and quality of reports from referrers or colleagues about the final diagnosis to determine the accuracy of reports
- Document any noncompliance with the guidelines through peer reviews and internal audits, along with a record of corrective steps taken
- When a report is found to be invalid after issuing, replace the original report with an addended report, clearly identified as a replacement report
"year": 2021,
"sha1": "29c547c199edc03674cc75196d645c14143c1bec",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0041-1734351.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29c547c199edc03674cc75196d645c14143c1bec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
IN VITRO ANTIBACTERIAL ACTIVITY OF THE ETHANOLIC EXTRACT OF JALOH (SALIX TETRASPERMA ROXB.) LEAVES AGAINST STAPHYLOCOCCUS AUREUS AND PSEUDOMONAS AERUGINOSA
Objective: This study aims to determine the antibacterial activity of an ethanolic extract of jaloh (Salix tetrasperma Roxb.) leaves against Staphylococcus aureus (SA) and Pseudomonas aeruginosa (PA). Methods: The extract was obtained by maceration of dried jaloh (S. tetrasperma Roxb.) leaf powder with 96% ethanol as the solvent. The antibacterial activity of the extract was tested by the Kirby-Bauer method against SA and PA. Data were analyzed statistically using the Kruskal-Wallis test at a significance level of p<0.05. Results: Based on the regression analysis, the regression equations for the antibacterial activity of the extract against SA and PA were y=350.456x-229.579 and y=331.866x-272.069, respectively. The minimum inhibitory concentrations (MICs) against SA and PA derived from these regression equations were 4.5193 and 6.6039 mg/mL, respectively. Conclusion: Based on the MIC values, the ethanolic jaloh leaf extract had weak antibacterial activity against SA and PA.
INTRODUCTION
Infectious diseases are commonly treated with antibiotics. The improper use of antibiotics gives rise to bacteria resistant to one or several types of antibiotic (multiple drug resistance) [1]. For this reason, efforts to search for and develop new antibacterials are still being made. New antibacterial sources can be obtained and developed from plants because of their content of secondary metabolites, such as saponins, tannins, alkaloids, flavonoids, and terpenoids, which are efficacious as antibacterials [2,3].
A plant species that grows in Aceh, called jaloh or sijaloh (Salix tetrasperma Roxb.) by the local people, belongs to the Salicaceae family. In some areas of Aceh, this plant is used as a febrifuge (antipyretic) [4]. Plants of the Salicaceae family mainly contain phenolic compounds such as flavonoids and tannins [5].
On this basis, the jaloh plant has potential as an antibacterial agent. This study aims to determine the antibacterial activity of the jaloh plant.
Bacterial strains
The bacterial test cultures were the Gram-positive Staphylococcus aureus (SA) ATCC 6538 and the Gram-negative Pseudomonas aeruginosa (PA) ATCC 9027, obtained from the Microbiology Laboratory of the Faculty of Pharmacy, USU, Medan, Indonesia.
Preparation of extracts
A total of 600 g of dried jaloh leaf powder was macerated with 6 L of 96% ethanol [6]. The macerate was then distilled and evaporated under reduced pressure at a temperature of not more than 50°C using a rotary evaporator to obtain a viscous extract [7].
Antibacterial activity assay
The antibacterial activity test was performed by the Kirby-Bauer method: 0.1 mL of inoculum of each bacterium (SA and PA, 10⁶ CFU/mL) was mixed homogeneously with 15 mL of Mueller-Hinton agar in a Petri dish and left until the medium solidified. Thereafter, paper discs impregnated with 10 µL of each extract solution in DMSO at concentrations of 500, 250, 125, 62.5, 31.25, 15.625, and 7.8125 mg/mL were placed on the medium for 10 min to allow diffusion, and the plates were then incubated at 36-37°C for 24 h. The diameter of the transparent zone (inhibition zone) around each disc was measured using a sliding caliper [7,8]. The test was conducted 5 times. The minimum inhibitory concentration (MIC) of the extract was determined using the regression equation from the graph of squared inhibition zone diameter against log concentration, with the intersection on the x-axis recorded as the MIC [9,10].
Statistical analysis
Data were analyzed statistically using the Kruskal–Wallis test at a significance level of p<0.05. The statistical analysis was performed using the Statistical Product and Service Solutions (SPSS) program, version 18.
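As an aside, the Kruskal–Wallis analysis described above can be reproduced outside SPSS. The following minimal Python sketch shows the form of the test using SciPy; the zone-diameter arrays are illustrative placeholders, not the study's actual measurements.

```python
# Minimal sketch of the Kruskal-Wallis test used in this study,
# reproduced with SciPy instead of SPSS. The zone-diameter values
# below are hypothetical placeholders, not the study's actual data.
from scipy import stats

# Inhibition zone diameters (mm), 5 replicates per extract concentration
zones_500 = [14.2, 14.5, 13.9, 14.1, 14.4]   # 500 mg/mL (hypothetical)
zones_250 = [12.8, 13.0, 12.5, 12.9, 12.7]   # 250 mg/mL (hypothetical)
zones_125 = [11.1, 11.4, 10.9, 11.2, 11.0]   # 125 mg/mL (hypothetical)

statistic, p_value = stats.kruskal(zones_500, zones_250, zones_125)
print(f"H = {statistic:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Concentrations differ significantly in inhibition zone diameter")
```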
RESULTS
The inhibitory zones formed around the paper discs indicate that the extract has antibacterial activity against Gram-positive SA and Gram-negative PA. The mean diameters of the extract's inhibition zones are shown in Table 1. The inhibition zone diameters shown in Table 1 increased with the concentration of the extract tested against SA and PA. The Kruskal–Wallis test showed a significant difference (p<0.05), meaning that the various extract concentrations produced significantly different inhibition zone diameters for SA and PA. Table 1 also shows that the extract's inhibition zone diameters for SA were greater than those for PA. Based on the Kruskal–Wallis test, in addition to concentration, the type of test bacterium also caused a significant difference (p<0.05) in inhibition zone diameter. However, the inhibition zone diameters formed for SA and PA at extract concentrations of 15.625, 250, and 500 mg/mL did not differ significantly (p>0.05).
Based on the regression test, the regression equations for the antibacterial activity of the extract against SA and PA, as shown in Fig. 1, were y=350.456x-229.579 and y=331.866x-272.069, respectively. The MIC is the antilog of the intersection of the regression line with the x-axis, so the MICs of the extract for SA and PA were 4.5193 and 6.6039 mg/mL, respectively.
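As a check on this arithmetic, the sketch below recomputes the MICs from the reported regression coefficients. It assumes the regression has the form y = m·x + c with x = log10(concentration in mg/mL), so that the x-intercept is -c/m and the MIC is its antilog.

```python
# Recompute the MICs from the reported regression coefficients.
# Assumes y = m*x + c with x = log10(concentration, mg/mL);
# setting y = 0 gives the x-intercept -c/m, and MIC = 10**(-c/m).

def mic_from_regression(slope: float, intercept: float) -> float:
    x_intercept = -intercept / slope
    return 10 ** x_intercept

mic_sa = mic_from_regression(350.456, -229.579)  # S. aureus
mic_pa = mic_from_regression(331.866, -272.069)  # P. aeruginosa

print(f"MIC (SA): {mic_sa:.4f} mg/mL")  # ~4.5193, matching the paper
print(f"MIC (PA): {mic_pa:.4f} mg/mL")  # ~6.6039, matching the paper
```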
DISCUSSION
The results of the antibacterial activity test of the jaloh plant, presented in Table 1, differ from the results obtained by Islam et al. [11]. The antibacterial activity of natural compounds can differ because it is influenced by growing location, harvest time, and extraction method. Extraction procedures that use chemicals or heating may alter the content of the active compounds, their function, and their natural characteristics, or may produce unsafe compounds. The chemical structure and concentration of the plant's active components also determine its antibacterial properties [5]. A higher extract concentration means a higher amount of antibacterial compound and a greater inhibition zone diameter.
Flavonoids and tannins are phenolic compounds and the main constituents of the jaloh plant. Phenolic compounds are polar. A compound has maximum antibacterial activity when it has optimum polarity, because a hydrophilic–lipophilic balance is required in the interaction of an antibacterial compound with bacteria. The location and number of hydroxyl groups in the phenol group are thought to be related to toxicity to microorganisms. The general mechanism of action of phenolic compounds is to interact with proteins in the cell wall or cytoplasm through hydrogen bonding and hydrophobic interactions. These compounds disrupt membrane function and affect membrane proteins, changing the structure, function, and permeability of bacterial cells and causing the loss of macromolecules from within the cell. At high concentrations, phenolic compounds can cause protein denaturation. Another mechanism is interference with the activity of enzymes in cells, which can occur at low concentrations [5,12,13].
Flavonoids are synthesized by plants in response to bacterial infection, so when tested in vitro against various microorganisms they prove effective as antibacterial compounds. Their activity may be related to the ability of these compounds to form complexes with extracellular and dissolved proteins and with bacterial cell walls. Lipophilic flavonoids can also disrupt microbial membranes: the phospholipid part of the bacterial cell membrane is damaged, reducing its permeability [14,15].
Tannins may act as antibacterials through their ability to: (i) bind to proteins and adhesins and inhibit enzymes, (ii) form complexes with cell walls and metal ions, and (iii) disrupt plasma membranes. The complexes formed with proteins through hydrogen bonds cause the proteins to denature, impairing bacterial metabolism [15,16].
Anthraquinones are highly reactive. They are known to form non-reversible complexes with nucleophilic amino acids in proteins, causing the proteins to become inactive and cells to lose their function. This gives anthraquinones a very broad potential range of antibacterial effects. Possible targets in bacterial cells are adhesins, cell wall polypeptides, and enzymes, resulting in bacterial death. Anthraquinones also render substrates unavailable to bacteria [14,17]. The mechanism of action of terpenoid compounds is not fully understood but is thought to involve interference with membrane formation by lipophilic compounds [14]. Terpenoids can damage cell membranes, deactivate enzymes, and denature proteins, reducing the permeability of the bacterial cell wall so that the cell wall is damaged [14,18].
Antibacterial activity is also affected by the type of bacterium. Gram-positive SA and Gram-negative PA differ in the thickness and composition of their cell walls. The SA cell wall consists only of a thick layer of peptidoglycan with a lipid content of 1-4% and a water-soluble polysaccharide (teichoic acid), making the bacterial cell wall polar. Because this structure is simpler, antibacterial compounds enter the cell and reach their target more easily, which is why Gram-positive bacteria tend to be more sensitive to antibacterials. PA, by contrast, has a thinner peptidoglycan layer with a lipid content of 11-22% and an outer membrane composed of lipopolysaccharide on the outside and phospholipids on the inside, making it nonpolar. The outer membrane serves as a barrier against toxic substances, including antibacterials, preventing their penetration to the target site and making antibacterials less effective. Polar compounds penetrate the peptidoglycan layer more easily than the lipid layer [13,15].
According to Sartoratto et al. (2004), cited in Fernandes et al., antibacterial activity with an MIC of 50-500 µg/mL is strong, 600-1500 µg/mL is moderate, and above 1500 µg/mL is weak [16]. Other literature states that crude extracts are considered active if their MIC is <8 mg/mL, and that crude extracts with MICs above 1 mg/mL and isolated compounds with MICs above 0.1 mg/mL should be avoided. Current research considers an MIC <1 mg/mL to indicate good activity [19]. On this basis, the antibacterial activity of the ethanolic extract of jaloh leaves against SA and PA is weak.
Most plant secondary metabolites have weak antibacterial activity, even weaker than antibacterial compounds produced by bacteria and fungi. Owing to synergistic mechanisms between plant compounds, plants fight infection successfully even though the antibacterial potency of their individual compounds is low [16]. Plant compounds with no intrinsic antibacterial activity are capable of making bacteria susceptible to previously ineffective antibiotics [20,21].
CONCLUSION
The ethanolic extract of jaloh leaves had weak antibacterial activity against SA and PA, with MIC values of 4.5193 and 6.6039 mg/mL, respectively.
"year": 2018,
"sha1": "df5d8d9e5232a50dc04f331704973ae5e022c7be",
"oa_license": "CCBYNC",
"oa_url": "https://innovareacademics.in/journals/index.php/ajpcr/article/download/26580/14452",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7793fdf344e1f7238f2939acc50c07266511741e",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Shape-Dependent Velocity Based Droplet Routing on MEDA Biochips
Digital microfluidic biochips (DMFBs) have attracted attention in the biochemical and medical industries. In particular, a microelectrode dot array (MEDA) biochip, which is composed of a two-dimensional microelectrode array, makes it possible to realize fine-grained manipulations such as dilution, mixing, and sensing in real time. Unlike existing DMFBs, a MEDA architecture allows microelectrodes to control a certain volume of droplet in a fine-grained manner and can vary droplet volume and shape in such a way that droplet synthesis and manipulation are conducted efficiently. There have been many works aimed at improving the efficiency of synthesis on MEDA biochips; however, the synthesis, and droplet routing in particular, has never considered the shape-dependent velocity of droplets. In this paper, we propose droplet routing techniques for MEDA biochips that exploit the shape-dependent velocity of droplets. The proposed techniques take advantage of the varying velocities of droplets depending on their shapes and aim to reduce the overall routing time of a droplet from a source to a destination. Simulation results confirm that the proposed techniques can shorten the routing time by 80% compared to the state-of-the-art techniques.
I. INTRODUCTION
Digital microfluidic biochips (DMFBs) have become prevalent as a type of Lab-on-a-Chip (LoC) in a variety of biochemical and pharmaceutical fields [1], [2], [3]. A DMFB manipulates micro/picoliter droplets of biochemical samples on a two-dimensional electrode array based on the electrowetting-on-dielectric (EWOD) principle [4]. However, existing DMFBs have several limitations [5], [6]: they are not capable of controlling the volume and shape of droplets during manipulations or of detecting droplets in real time, and thus they encounter functionality issues in practical use.
DMFBs based on the microelectrode dot array (MEDA) architecture, so-called MEDA biochips, have been developed to overcome these limitations. Unlike conventional DMFBs, MEDA biochips comprise a myriad of microelectrode cells (MCs), a concept built on a sea-of-microelectrodes array [7], fabricated in TSMC 0.35 µm CMOS technology [8]. In addition, in contrast to DMFBs, MEDA biochips can dynamically control the size and shape of droplets and detect droplets in real time during manipulations, with the result that droplet routing is no longer negligible.
A given volume of droplet on a MEDA biochip can take a variety of shapes by controlling the MCs. The actuation force from a group of MCs transfers the droplet, and the moving velocity depends on its volume and shape, since the actuation force acts on the area adjacent to the droplet [9]. However, most previous works on MEDA biochips have not assumed that the velocity of droplets depends on their shape, resulting in inefficient routing in terms of latency. In addition, the reshaping overhead of an intermediate droplet during reshaping should also be taken into account [10], [11].
In this paper, we propose droplet routing techniques with shape-dependent velocity. We first formulate the proposed routing problem, which aims to find the best route from a source to a destination cell by taking advantage of the shape-dependent velocity of a droplet in such a way that the routing time is minimized. Then we propose two routing techniques to efficiently explore feasible solutions. The contributions of this work are as follows: 1) We reduce the routing time by taking advantage of shape-dependent velocity on a MEDA biochip. 2) We propose two efficient routing techniques that solve the proposed routing problem.
The rest of this paper is organized as follows. Section II describes related work. Section III formulates our droplet routing problem that takes into account shape-dependent velocity. Section IV proposes two routing techniques to efficiently solve the problem presented in Section III. Section V describes the simulations and comparison, and Section VI concludes this paper.
II. RELATED WORK
Much work on DMFBs has been carried out since the 2000s. The authors in [6] noted that, unlike in conventional microfluidic biochips, routing complexity increases with the complexity and number of bioassay manipulations mapped onto a DMFB. Cho et al. envisioned that DMFB design can be done with VLSI-like techniques. They provided a three-dimensional representation for scheduling manipulations, which describes when and where a manipulation is performed [12]. The goal of that work was to develop a high-performance droplet router combining conventional high-level synthesis techniques from VLSI. The work in [13] proposed a Tabu search metaheuristic for the synthesis of a DMFB. This literature has drawn on VLSI design techniques, covering the allocation, resource binding, scheduling, and placement of the manipulations in the applications. However, DMFBs have several functional limitations. One is that the (1:1) mixing model has typically been employed for mixing and dilution, which may lead to long completion times [14]. In addition, droplets on a DMFB move along adjacent electrodes and are limited to movement in the x-y axis directions only [15]. Thus, DMFBs have suffered from these limitations.
The microelectrodes of a MEDA biochip are smaller than the electrodes used in DMFBs, and the chip carries MCs with on-chip control and detection units that are dynamically grouped to efficiently achieve a variety of droplet manipulations [8], [16], [17]. Owing to the efficient utilization of MCs, MEDA biochips can, for example, detect a droplet at any position within 10 milliseconds [8]. An error recovery method based on this feature also exists [18]. Moreover, MEDA biochips enable diagonal movement by appropriately grouping MCs, and this degree of freedom can be exploited for more efficient manipulations [19], [20]. By taking advantage of these features of MEDA biochips, manipulations such as mixing and dilution have been improved and the time for each manipulation has been significantly reduced [21]. MEDA biochips have also been successfully used to reduce chip size by using the split operation [22]. Consequently, the routing of droplets has become more important than ever and requires an efficient technique [11].
There have been studies on MEDA biochips that address droplet routing problems [23], [24], [25], [26], [27], [28]. The authors in [23] assumed that the droplet aspect ratio can be changed only to avoid obstacles on the chip. That work considers the droplet velocity to be dependent on its volume, which controls the thickness of the droplet. In particular, one challenging issue is that when a MEDA biochip handles a larger volume of droplet, velocity deterioration occurs, resulting in longer droplet routing times [7]. Unfortunately, there is no work that improves efficiency, especially in terms of routing time, by focusing on the velocity's dependence on droplet shape.
Some studies have shown that the EWOD force on a droplet varies depending on the droplet's volume, shape, and moving direction [7], [29], [30]. In [7], the authors proposed a 2D layout design with triangle-shaped MCs. On their MEDA biochips, they conducted simulations considering cross-contamination-aware routing. A MEDA biochip contains a great many microelectrodes, but several cells often become unavailable due to failure, degradation of EWOD, or contamination of the cells. Furthermore, the degradation of EWOD occurs at a certain rate, according to the work in [10]. Under these circumstances, droplet routing has difficulty finding an efficient route that satisfies the timing requirements of the synthesis. Therefore, this paper attempts to transfer a droplet with a shape-dependent velocity in the presence of unavailable cells caused by degradation and contamination.

III. PROBLEM FORMULATION
A. PROBLEM DESCRIPTION
In this section, we formulate the routing of a single droplet that is transferred from a source cell to a destination cell in the presence of unavailable MCs on a MEDA biochip. We are given a reconfigurable W × H MEDA biochip, where the coordinates of each microelectrode are defined as (x, y) with the origin (1, 1) at the lower-left cell. The MCs on a MEDA biochip can control a certain volume of droplet. The droplet is allowed to reshape into a different shape at any time, and the velocity of the droplet is assumed to depend on its volume and shape. According to the work in [21], the velocity of the droplet depends on the number of actuated MCs adjacent to the droplet.
To introduce the shapes of a droplet, consider a droplet (A × B) on a MEDA biochip in Figure 1 (a). The horizontal motion requires B actuated MCs, as shown in Figure 1 (c). In Figure 1 (d), when the droplet moves diagonally, the number of actuated MCs is assumed to be (A+B−1). It should be noted that motion in the diagonal direction covers a longer distance than either horizontal or vertical motion. On MEDA biochips, a droplet can reshape into a different shape. When the droplet reshapes, the number of actuated MCs is either A (Figure 1 (e)) or B (Figure 1 (f)), depending on how the reshaping is performed.
Next, we describe the time required for each motion. Basically, the actuation force from MCs can be modeled by profiling the droplet energy as a function of droplet location [21]. For simplicity, we assume that the velocity of a droplet is proportional to the number of actuated MCs adjacent to the droplet and inversely proportional to its volume, except for diagonal motion. In the real world, such parameters are profiled for each type of droplet, since the chemical and physical properties of droplets must be considered. For example, if a droplet of A × B moves horizontally by actuating B MCs, then it takes A time steps per cell. Similarly, the vertical motion takes B time steps, and the reshape motion from (A × B) to (B × A) actuates B MCs, taking A time steps. The diagonal motion is treated specially as a combination of the horizontal and vertical motions; thus, it takes A + B time steps while actuating A + B − 1 MCs.
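To make this cost model concrete, the sketch below encodes the per-motion time steps just described for an A × B droplet. It is a minimal illustration under stated assumptions, not the paper's implementation; the function and variable names are ours, and the reshape cost follows the worked example (2 steps for a 2-cell droplet in either orientation), so we use max(a, b) rather than a literal "A steps".

```python
# Minimal sketch of the shape-dependent time-cost model described above.
# For an A x B droplet (width a, height b), the text gives:
#   horizontal move: a time steps per cell (b MCs actuated)
#   vertical move:   b time steps per cell (a MCs actuated)
#   reshape:         max(a, b) time steps (see the worked example)
#   diagonal move:   a + b time steps per cell (a + b - 1 MCs actuated)

def motion_time(a: int, b: int, motion: str) -> int:
    """Time steps for one motion of an a x b droplet (assumed model)."""
    if motion == "horizontal":
        return a
    if motion == "vertical":
        return b
    if motion == "reshape":
        # The worked example charges 2 steps for a 2x1 or 1x2 droplet,
        # which max(a, b) reproduces for both orientations.
        return max(a, b)
    if motion == "diagonal":
        return a + b
    raise ValueError(f"unknown motion: {motion}")

# Sanity checks for the 2x1 droplet used in the paper's example
assert motion_time(2, 1, "horizontal") == 2
assert motion_time(2, 1, "vertical") == 1
assert motion_time(2, 1, "reshape") == 2
assert motion_time(2, 1, "diagonal") == 3
```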
A MEDA biochip often contains unavailable MCs due to degradation and contamination. The locations of unavailable MCs are analyzed and known in advance. Given a biochip size, a droplet with its motions, and the locations of unavailable MCs, our routing tries to find the best route, reshaping as needed around the unavailable MCs, such that the overall routing time from the source to the destination is minimized.
B. EXAMPLE
We describe a motivating example of a routing problem with a droplet of size 2 in Figure 2. Figure 2 (a) shows a 6 × 6 MEDA biochip. In the problem, we assume that the droplet is routed from a source to a destination, shown as the two cells colored green. In the initial state, two shapes are possible at the source, horizontal (2 × 1) and vertical (1 × 2); this example assumes the former. The four cells painted black in Figure 2 represent MCs that are unavailable due to degradation, contamination, and so on. The problem aims to minimize the total routing time.
Before presenting our proposed routing, we first derive the overall routing time based on the state-of-the-art (SOTA) technique in [21]. That work allows a droplet to reshape only when it cannot find any route to the destination without reshaping. The droplet moves four cells to the right in Figure 2 (a); the time required for this transition is 8 (= 2 × 4) time steps. Then, the droplet moves three cells upward in this shape, as shown in Figure 2 (b); this transition takes 3 (= 1 × 3) time steps. In Figure 2 (c), the droplet can hardly reach the goal without reshaping because of the unavailable cells, so it is reshaped into the vertical shape to avoid them; this reshaping takes 2 (= 2 × 1) time steps. When the droplet then moves one cell upward, as shown in Figure 2 (d), it takes 2 (= 2 × 1) time steps. The overall routing time is 15 (= 8 + 3 + 2 + 2). Recall that the state-of-the-art technique allows the droplet to reshape only to avoid unavailable cells.
On the other hand, the proposed routing technique allows the droplet to reshape at any time. Figure 3 (a) shows that the droplet, after moving one cell to the right, reshapes into the vertical shape to take advantage of the force from the MCs, which gives this shape a higher velocity than the horizontal one. So far, this takes 4 (= 2 × 1 + 2 × 1) time steps. Then, the droplet moves two more cells to the right and reshapes into the horizontal shape again in Figure 3 (b), which takes 4 (= 1 × 2 + 2 × 1) time steps. As shown in Figure 3 (c), the horizontal droplet moves two cells upward and reshapes into the vertical shape, which takes 4 (= 1 × 2 + 2 × 1) time steps. Finally, the vertical droplet reaches the goal by moving one cell, which takes 2 (= 2 × 1) time steps, as shown in Figure 3 (d). Here, the overall routing time is 14 (= 4 + 4 + 4 + 2) time steps, a reduction compared to the state-of-the-art. The example demonstrates the potential of reshaping the droplet during routing while taking shape-dependent velocity into account; a sketch reproducing both route costs follows.
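The following snippet replays the two example routes under the cost model sketched earlier (restated here so the snippet is self-contained); the 15- and 14-step totals match the arithmetic above.

```python
# Replaying the two example routes for the 2-cell droplet.
# cost(w, h, motion, cells): time steps for moving `cells` cells in a
# given shape, or for a single reshape (cells is ignored for reshape).

def cost(w: int, h: int, motion: str, cells: int = 1) -> int:
    if motion == "reshape":
        return max(w, h)
    per_cell = {"horizontal": w, "vertical": h}
    return per_cell[motion] * cells

# SOTA route (reshape only when blocked): 8 + 3 + 2 + 2 = 15
sota = (cost(2, 1, "horizontal", 4) + cost(2, 1, "vertical", 3)
        + cost(2, 1, "reshape") + cost(1, 2, "vertical", 1))
assert sota == 15

# Proposed route (reshape freely): 4 + 4 + 4 + 2 = 14
prop = (cost(2, 1, "horizontal", 1) + cost(2, 1, "reshape")
        + cost(1, 2, "horizontal", 2) + cost(1, 2, "reshape")
        + cost(2, 1, "vertical", 2) + cost(2, 1, "reshape")
        + cost(1, 2, "vertical", 1))
assert prop == 14
print(sota, prop)  # 15 14
```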
C. FORMULATION
This section presents a formulation of the droplet routing problem based on integer programming. Although the formulation is not described in linearized form due to page limitations, we describe an equivalent formulation with logical expressions. The notation used in the formulation is given in Table 1.
We first express the coordinates and shape of a droplet. Let s denote the operation step and let (x_s, y_s) define the reference point of the droplet at step s. The operation step counts droplet operations; operation step 1 does not necessarily correspond to time step 1. The lower-left corner of the droplet is the coordinate called the reference point. The shape of the droplet is expressed using (w_s, h_s), where w_s and h_s represent the width and height of the droplet, respectively. Thus, the droplet occupies the region from (x_s, y_s) to (x_s + w_s, y_s + h_s). Note that a droplet is assumed to occupy cells forming a rectangle.
Formula (1) states that the volume of the droplet is fixed throughout its routing. For example, when the volume is 2, there are only two possible states: (w_s × h_s) = (1 × 2) or (2 × 1).
Formula (2) gives the initial location of the droplet, where X.source and Y.source represent the coordinates of the initial state at step 0.
Next, we express the droplet motions. In this paper, a droplet is allowed to move in several directions or to reshape during routing; it can move in the vertical, horizontal, and diagonal directions. In Figure 4 (a), dir_s = 1 indicates that the droplet moves one cell horizontally (i.e., to the right or left). As shown in Figure 4 (b), dir_s = 2 represents the vertical motion (i.e., upward or downward). Figure 4 (c) expresses reshaping of the droplet, denoted dir_s = 3. The diagonal motion, dir_s = 4, is shown in Figure 4 (d).
Formula (3) expresses the horizontal motion when dir_s = 1: as shown in Figure 4 (a), the reference point moves in the horizontal direction while the vertical coordinate and the shape are maintained. Similarly, Formula (4) expresses the vertical motion when dir_s = 2. Formula (5) expresses reshaping of the droplet when dir_s = 3: when the droplet is reshaped, the shape at step s differs from the shape at step s − 1. Formula (6) also covers the case of dir_s = 3. Figure 5 shows the four ways of reshaping the droplet from (1 × 2) to (2 × 1); as shown in the figure, the reference point can differ depending on which way the droplet reshapes, and Formula (6) allows all such possible shapes. Formula (7) expresses the diagonal motion of the droplet shown in Figure 4 (d). Formula (9) expresses the constraints on unavailable cells: droplets must not enter unavailable MCs.
In Formula (10), sum.dir_i represents the number of times each motion is performed. We assume that the time step for each motion depends on the volume of the droplet and the number of actuated MCs that exert force on it. The number of actuated MCs corresponds to the cells adjacent to the droplet in the direction of motion. active_{w_s,h_s,motion} denotes the number of active cells adjacent to the droplet. When a droplet of (A × B) moves horizontally, the number of active cells is given by active_{A,B,hor}; when it moves vertically, by active_{A,B,ver}; and when it reshapes, by active_{A,B,res}. The total routing time from the source to the destination cell is then derived as shown in Formula (11).
The objective is to minimize the total routing time:

Minimize: rou.time (12)
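Since the full integer program is not reproduced here, the sketch below illustrates the same objective for the special case of a 2-cell droplet as a shortest-path search over (position, orientation) states under the cost model above. The grid encoding, state space, and function names are our own simplification, not the paper's formulation; diagonal motion and the four reshape anchorings of Figure 5 are omitted for brevity.

```python
# Shortest-path view of the routing objective for a 2-cell droplet.
# States are (x, y, shape), where shape (w, h) is (2, 1) or (1, 2) and
# (x, y) is the lower-left reference point. Only one reshape anchoring
# (fixed reference point) is modeled; diagonal moves are omitted.
import heapq

def min_routing_time(W, H, source, dest, unavailable):
    """Minimum time steps to route the droplet; -1 if unreachable."""
    def cells(x, y, w, h):
        return [(x + i, y + j) for i in range(w) for j in range(h)]

    def feasible(x, y, w, h):
        return all(1 <= cx <= W and 1 <= cy <= H
                   and (cx, cy) not in unavailable
                   for cx, cy in cells(x, y, w, h))

    start = (*source, (2, 1))  # assume the horizontal shape initially
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (x, y, (w, h)) = heapq.heappop(heap)
        if d > dist[(x, y, (w, h))]:
            continue  # stale heap entry
        if (x, y) == dest:
            return d
        moves = [(x + 1, y, (w, h), w), (x - 1, y, (w, h), w),  # horizontal
                 (x, y + 1, (w, h), h), (x, y - 1, (w, h), h),  # vertical
                 (x, y, (h, w), max(w, h))]                     # reshape
        for nx, ny, (nw, nh), step_cost in moves:
            if feasible(nx, ny, nw, nh):
                state = (nx, ny, (nw, nh))
                if d + step_cost < dist.get(state, float("inf")):
                    dist[state] = d + step_cost
                    heapq.heappush(heap, (d + step_cost, state))
    return -1
```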
IV. SIMULTANEOUS ROUTING WITH A GUIDE DROPLET
The routing problem presented in the previous section is NP-hard and can hardly be solved in practical time [31]. To explore solutions efficiently, we propose two routing techniques. As their basic concept, we hypothesize that a good route for a unit-size droplet, obtained by solving a shortest-path problem, is also an efficient route for a larger droplet; however, multiple best routes may exist for the unit droplet. We therefore present a simultaneous routing framework that uses both a unit-size virtual droplet (called the guide droplet), which does not exist on the biochip, and the actual droplet (called the target droplet), which does. Following the guide droplet, the target droplet seeks a route that minimizes the total routing time. One might determine the route in two steps, first fixing the route of the guide droplet and then determining the route of the target droplet. However, the best route of the target droplet does not always match the best route of the guide droplet, so such a two-step method does not work; the route of the guide droplet must be determined so that the routing time of the target droplet is minimized. Our techniques find solutions efficiently by prioritizing route determination over reshaping. The details are described in the following sections.
A. ROUTE-GUIDED TECHNIQUE
In the route-guided (RG) technique, the guide droplet finds one good route and the target droplet is guided along that route while changing its shape. Note that this is not a warm-start approach; both are solved simultaneously within the framework. Intuitively, this technique resembles a route-constrained problem, but it does not eliminate the solution space. Consider the example in Figure 6: the guide droplet finds a good route in practical time, as shown in Figure 6 (a), and the target droplet follows the route, as shown in Figure 6 (b). The technique is implemented by adding the following formulae. The size of the guide droplet is assumed to be one: (1 × 1). The coordinates of this droplet are represented as (accmp.x_s, accmp.y_s). Formula (13) indicates that the guide droplet can only move to adjacent coordinates.
accmp.map(x, y) becomes 1 when the guide droplet passes through the coordinate (x, y), and is zero otherwise. Formula (14) represents the cells through which the guide droplet passes.
In Formula (15), the constraint expresses that the target droplet must follow the route along which the guide droplet passes.

B. ROUTE- AND TIME-GUIDED TECHNIQUE

The route- and time-guided (RTG) technique is a further extension of the RG technique presented in the previous section. This technique designates the location at each time step using the guide droplet; the target droplet must therefore move or reshape while keeping to the guidance of the guide droplet. Figure 7 shows how the cell through which the guide droplet routes and the operation at each step are determined. A part of the target droplet moves to include the cell through which the guide droplet passed (Figure 3). The RTG technique can be implemented by adding the following formulae to the formulation in Section III-C. The RTG technique must guarantee the continuity of the guide droplet.
Formula (16) represents the positional relationship between the guide droplet and the target droplet at each step.
V. SIMULATIONS

A. SIMULATION SETUP
We have conducted simulations to demonstrate the effectiveness of our proposed techniques. We were given a biochip specification that includes its size, the unavailable MCs, and a source and destination for routing. In addition, the size of the target droplet was assumed to be two in the simulations. The goal of the routing problem is to minimize the overall routing time from the source to the destination. We compared the routing times of the following techniques:
• SOTA: Allows reshaping only when a droplet is blocked by unavailable MCs, as presented in [21].
• Proposal: Our proposed routing technique, in which a droplet can reshape at any time, as described in Section III.
• SOTA-based RG: The SOTA technique based on the route-guided technique proposed in Section IV-A.
• SOTA-based RTG: The SOTA technique based on the route- and time-guided technique.
• Proposal-based RG: The proposed technique based on the route-guided technique.
• Proposal-based RTG: The proposed technique based on the route- and time-guided technique.
Very little work has considered a shape-dependent velocity model in droplet routing, and the work in [21] is the state-of-the-art technique. Although we could easily compare against techniques that do not take advantage of shape-dependent velocity, it is intuitively obvious that their routing times would be longer than those of techniques that consider it. Techniques whose velocity models do or do not account for shape dependency also have a smaller solution space than our proposed technique. Therefore, we employed only the SOTA technique in [21] for comparison in this paper.
The simulation scenario is as follows. We were given a MEDA biochip with width = 10 and height = 15. Unavailable MCs were assumed to account for 0%, 5%, 10%, and 20% of all cells, distributed randomly over the MCs. For each failure rate except 0%, we generated 10 different random arrangements of unavailable MCs. The failure rate is one of the problem settings; unavailable cells must be assumed for MEDA biochips because they can occur due to contamination and electrode failure. The target route was from (1, 1) to (10, 15) as the source and destination.
The simulations were conducted on a Ryzen Threadripper 3970X (3.7 GHz, 32 cores and 64 threads in total) with 256 GB of memory. IBM ILOG CPLEX Optimization Studio 20.1.0 was used to solve both the state-of-the-art and the proposed techniques. The runtime was limited to 10 hours of CPU time. If an optimal solution was not found within the runtime, the solution at the time limit was used for comparison.
B. RESULTS
We show the simulation results for each failure rate of unavailable MCs in Figures 8 (a), 8 (b), and 8 (c). The horizontal axis shows the randomly generated problem instances and the vertical axis shows the total routing time from the source to the destination. Note that the results shown as Baseline on the left of each graph represent a failure rate of 0%, meaning that all the MCs are available.
The overall results show that the proposed techniques achieve shorter routing times on average than the SOTA-based techniques. There was no significant difference in the computational time spent determining the routing; all runs required nearly 10 hours of CPU time. As shown in Figure 8 (a), the proposed techniques can find good routes by taking advantage of reshaping during routing. In the 5th instance, the proposal-based RG obtains a longer routing time than the SOTA-based one because the solution space of our techniques is relatively larger than that of the SOTA-based ones. If the solution space is very large, it takes a great deal of time to determine the solution, so running up to the upper limit of the computation time can leave a worse solution. Figure 8 (b) shows a tendency similar to Figure 8 (a); the results show that the proposed technique shortens the routing time by 15% on average. In Figure 8 (c), there are some cases where the proposed techniques obtain worse solutions than the SOTA-based ones. As the failure rate of MCs increases, the solution space shrinks and the best solutions of the proposed techniques become close to those of the SOTA technique. Furthermore, the increased number of unavailable MCs gives the SOTA technique more chances to reshape a droplet. In such cases, the effectiveness of the proposed techniques can hardly be observed as unavailable cells increase; however, the cases with a 20% failure rate are quite special, since such a biochip would generally be discarded. Therefore, our proposed techniques can shorten the routing time at practical failure rates.
We also evaluate the RG-based and RTG-based methods. At 0%, 5%, and 10% (Figures 8 (a) and 8 (b)), the RTG-based method reduced the routing time by 10% to 20% compared to the SOTA method. On the other hand, at 20% (Figure 8 (c)), the RG-based method reduced the routing time by about 10% compared to the SOTA method. In all simulations, either the RG- or the RTG-based method performed as well as or better than the SOTA and Proposal-only methods. These results indicate that the RG- and RTG-based methods are sufficiently useful.
VI. CONCLUSION
In this paper, we have proposed droplet routing techniques that take advantage of shape-dependent velocity to minimize the total routing time on a MEDA biochip. In addition, we have presented two efficient exploration techniques called RG and RTG. Our simulation results have shown that our techniques significantly shorten the overall routing time, by 15% on average, compared to the state-of-the-art. In the future, we will extend this work so that multiple droplets can be controlled with a combination of mixing, dilution, and other operations at the same time. Evaluating the scalability of this work is also left as future work.
"year": 2022,
"sha1": "125a2649b5c1d577372d3c7b9295dfab89818889",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09954395.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "6b074531e803c1ad7b0ee125490be39ed1cc405c",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
A word of scandal: Managing dissent in the Spanish polemic over Marry Him and Be Submissive
Abstract Conversations regarding moral values are increasingly becoming a pivotal dimension of public discourse. This paper presents a new approach to unacceptable discourse. Drawing on René Girard's cultural anthropology and Luciano Elizalde's dissent management theory, this work develops a scale of situations of dissent, such as controversy, polemic and public discourse scandal. It offers a detailed study of the polemic raised by the book Marry Him and Be Submissive, a process of dissent which was generated in Spain in November 2013 on the occasion of the publication of this Italian bestseller and lasted until January 2014. The book was written by the Italian journalist Costanza Miriano and translated and edited by the Archdiocese of Granada. The case study uses a triangular approach to discourse analysis, which aims to comprehend the enunciator's legitimacy and social positioning, the communication context in which the debate takes place, and the characteristics of the public discourse. A central element regards how reactions are organized around the signifier 'submissive' and the different frames of interpretation assigned to this word, in what will be called 'feminist' and 'post-feminist' paradigms. Mechanisms of dissent and consensus are explained, and takeaways and actors' strategies are discussed.
Scandalization and public debate on values
Conversations regarding moral values are increasingly becoming a pivotal dimension of public discourse (Cannata 2022). Severe discursive transgressions against a community's fundamental principles and norms can trigger highly unstable dissent processes, imperilling the social standing and reputation of public figures due to indignant accusations and the societal dissemination of such dissent (Thompson 2000; Elizalde 2017). Ekström and Johansson (2008) termed this phenomenon 'talk scandal'. According to Haidt (2012), the contemporary public arena has seen a diminishment in incentives for consensus positions. Instead, it has evolved into a 'dichotomous polarization' (Amossy 2016, 30) featuring frequent delegitimization of opponents and the framing of communication scenarios in terms of confrontation or culture war.
This article presents a new approach to unacceptable discourse. Drawing on René Girard's cultural anthropology (Girard et al. 2006) and Luciano Elizalde's dissent management theory (Elizalde 2017), this work develops a scale of situations of dissent based on the risk to the accused, the object of discussion, and the nature of the relationship between social actors (see Table 1). First, a controversy is about ideas, there is no risk to the social position of the actor, and the relationship is characterized by confusion. Then, a polemic is about interests or values, there is no risk to the social position of the actor, and it is a confrontational, often polarized relationship. Finally, a scandal is triggered by a moral transgression, there is a real risk to the social position of the actor, and the relationship is confrontational, marked by indignation and anger. Girard adds that a scandal occurs when a social process of unanimous accusation is organized, and it tends to be resolved through the scapegoat mechanism.
The concept of dissent management makes it possible to identify social situations that appear scandalous because they provoke outrage and calls for punishment but are not, in fact, characterized by unanimous accusations. This scenario is common in polarized contexts, where one group is scandalized while another is strengthened in its stance, both in response to the same situation.
For its part, polemical discourse - discourse that involves confrontation without undermining the accused's social position within his or her reference system - can be strategic, emerging from a struggle of interests, or emotional, triggered by values and worldviews. Affective polemics, through the spontaneous or calculated activation of criticism of the transgression of values, tend to invalidate the opponent. They generate heated, intense debate.
Because of their high emotional tension, polemics may appear to be scandals of public discourse, because they are born out of processes of scandalization. In this matrix, however, there is no scandal without an actual threat to the position of the accused. The case of Marry Him and Be Submissive begins with a process of scandalization, but it quickly takes shape as an affective polemic and, eventually, is deactivated by becoming a controversy. The case presents a spiral of dissent in both factions of the dispute: the more the book is criticized in one sector of society - the political and media sphere - the more it is defended in the Catholic sector, until the book to be censored becomes a bestseller. These kinds of theoretical distinctions between types of dissent will help deepen the study of public debates about values, the dynamics of indignant accusation, and discourse management (Table 1).
Purpose and methodology
In March 2012, the Catholic information portal Religión en Libertad published an interview with Costanza Miriano on the occasion of the publication in Spanish of her Italian bestseller Sposati e sii sottomessa: Pratica estrema per donne senza paura (2011), which was translated by Nuevo Inicio Editorial (of the Archdiocese of Granada) as Cásate y sé sumisa: experiencia radical para mujeres sin miedo [Marry Him and Be Submissive: Radical experience for women without fear] (Religión en Libertad 2012).
In the following year, the book generated great controversy, including various social condemnations and even proposals for legal censorship, at the same time as it became a bestseller. This phenomenon anticipated what would later be called 'cancel culture' and revealed some of its paradoxical effects: the very broad condemnation coming from the official paradigm of values in the political public sphere was contrasted by the private adhesion of consumers. I propose to analyse this case in order to identify mechanisms of activation and deactivation of dissent. The study of mechanisms (Bunge 1997) will be carried out by process-tracing the key events of the case, which took place in Spain from November 2013 to January 2014.
The case will be approached from the public position of the accused, that is, from the perspective of the Archbishop who published the book and of the author - a rare perspective in scandal studies. This outlook allows for takeaways for public discourse and dissent management. In following this method, three objectives are proposed: (a) to analyse the process of transgression, accusation, and management of social condemnation; (b) to delve into controversies over values and the functioning of the processes of dissent activated by discursive transgressions; and (c) to point out lessons learned regarding the management of public discourse and social dissent.
The documentation of the case was based on 1,118 articles from Spanish newspapers taken from the MyNews database, with special attention to the Joly Group in Andalusia. Although the controversy was of national relevance, its epicentre was in Granada. In addition, the author's blog (https://costanzamiriano.com/), the websites of the Office of the Archbishop of Granada and the publishing house, and the repercussions on social networks were used.
The case study is presented using a triangular approach to public discourse analysis (Cannata 2014), which aims to comprehend the enunciator's legitimacy and social positioning, the communication context in which the debate takes place, and the characteristics of the public discourse itself. In this case, a central element is how reactions are organized around the signifier 'submissive' and the different frames of interpretation assigned to this word, in what will be called 'feminist' and 'post-feminist' paradigms.
These processes of scandalization particularly affect the Catholic Church, which has suffered a double erosion: on the one hand, its public position on certain moral issues has lost acceptance; on the other hand, the institution itself has experienced a loss of social legitimacy in the West (Cannata 2022). Controversies over values can be an opportunity to express the vision of Catholics on sensitive issues, but they can also be processes of erosion of legitimacy that hinder the mission of the institution. Determining the extent to which entering into a polemic can be productive or destructive is a constant challenge for all institutions that carry out their activities in increasingly pluralistic and complex contexts.
Presentation and background of the accused
In her blog, Costanza Miriano introduces herself primarily as a married mother of four. She graduated with a degree in Greek and Latin Literature and works as a journalist for the Italian public television network RAI. Additionally, she contributes to various newspapers, writing on family dynamics, education, and love relationships, and she collaborates with the Pontifical Council for the Laity of the Holy See. She also points out that her blog has received over three million views in the last two years (Figure 1).
For his part, after extensive studies abroad and a career centered on working with young people, Francisco Javier Martínez was ordained a very young auxiliary bishop of Madrid at the age of 37 in 1985. He was then appointed by John Paul II to lead the Archdiocese of Granada in 2003.
In 2009, Archbishop Martínez gained media attention for delivering an Advent homily in which he drew a comparison between abortion and 'silent genocide'. He alleged that a law passed during the Rodríguez Zapatero administration put doctors in a position similar to that of the officers in Hitler's concentration camps (Diario de Sevilla 2009). Moreover, some interpreted a segment of the homily as an attempt by the Archbishop to diminish the gravity of rape in relation to women who had undergone abortions. The bishops of southern Spain later explained that the Archbishop had been misunderstood, but the PSOE accused Archbishop Martínez of promoting violence against women. A few days later, in January 2010, the Granada Hoy newspaper reported on a Facebook 'war' sparked by the prelate's statements, with thousands protesting against him and around 800 in favor (Granada Hoy 2010). Additionally, a 2014 profile in El Mundo characterized Martínez as 'one of the most critical prelates against the policies of Rodríguez Zapatero' (El Mundo 2014).
Context of communication
The Spanish version of the book was published in 2012 by the publishing house Nuevo Inicio, an initiative of the Archbishop of Granada and a group of faithful Catholics. The book had sold 50,000 copies across 16 editions in Italy, auguring translation into several languages; in 2013, it was translated into Spanish and Polish (Figure 2). The Vatican's official newspaper, L'Osservatore Romano, published a review of the book in 2011, calling it 'amusing and ironic'. In the same year, the left-leaning newspaper La Repubblica and the fashion weekly Elle included recommendations for the book in features on marriage. In June 2012, the author was interviewed on RAI 1, where she had the opportunity to comment on the content of the book, which was presented as a great bestseller.
In her explanations to the media, Miriano traces the origin of the expression in the title of the book to the letter of St. Paul to the Ephesians, Chapter 5, verses 21-25: 'Wives, submit yourselves to your own husbands'. Both in the title of the book and in her explanations, the author proposes a meaning of 'submissive' in the sense of avoiding the 'tendency to control and manipulate' that, in her opinion, women have. In contrast, she says, the man, who tends to focus on himself, should 'give his life for her'. In Italy, the word 'submissive' is provocative, but Miriano's version is approached with curiosity. It may trigger controversy and even opposition because of disagreement with the model of woman she proposes, but it does not cause scandal.
While Miriano emphasizes that the invitation to be submissive is issued only to women, John Paul II interprets the meaning of St. Paul's text as 'mutual submission': 'Whereas in the relationship between Christ and the Church the subjection is only on the part of the Church, in the relationship between husband and wife the "subjection" is not one-sided but mutual. In relation to the "old" this is evidently something "new": it is an innovation of the Gospel' (John Paul II 1988, 24).
Saturday, November 9, 2013: 'a book that teaches women to be submissive'
The news portal Granada Hoy, of the Joly group of Andalusia, published an article entitled 'The Office of the Archbishop of Granada publishes a book that encourages women to marry and be submissive' (Figure 3), based on a press release sent by Nuevo Inicio Editorial, 'presided over by the Archbishop of Granada'. At the bottom, the article adds: 'The Italian journalist Costanza Miriano, author of the book, assures us that the vocation of women is to support the world "from below"'. This information garnered significant attention in media coverage and on social networks. According to the MyNews database, 53 articles on the subject were published that day. Most of the comments conveyed an outraged tone and many included derogatory language such as 'living in the Middle Ages', 'crazy people', 'what absurd advice!!!', and 'sexist retrogrades'.
An article published by Agencia EFE elevated the matter to the national scale, resulting in 35 articles with identical headlines: 'The Archbishop of Granada publishes a book that teaches women to be submissive'. The text stated: 'The book by Italian author Costanza Miriano has been a bestseller in Italy and is inspired by the phrase "wives, be submissive to your husbands", from St. Paul to the Ephesians'.
The outraged reaction was widespread and quickly triggered institutional responses. For example, a publication by El País on Twitter received 834 retweets. The comments reveal indignation: 'They shuffled other titles, such as "marry and be his bitch"'; 'inquisitor'; 'they go back several centuries'; 'social cancer'; 'a joke in bad taste'; 'misogynist and anti-democratic institution'; 'it promotes gender violence'; 'idiots'. Associations with Islam emerged, pointing out ironically that the prologue had been written by two imams. The book Fifty Shades of Grey was also mentioned a few times, distinguishing between playing at submission and actually proposing it.
The headlines focus on the Archbishop of Granada as the subject of the action of publishing the book; the author comes second. The news articles report that the book has sold 70,000 copies in Italy, and it is presented with a phrase taken from the publisher's webpage: 'Now is the time to learn loyal and generous obedience, submission. And, between us, we can say it: underneath always stands the one who is more solid and resistant, because she who is underneath supports the world' (Figure 4). It is reported that the book has a second volume, entitled Marry Her and Die for Her, which will soon be published in Spanish.
About Costanza Miriano, it is highlighted that she is a mother and a journalist for RAI, the Italian public television network, and that, as a Catholic, she is '(almost) always in a good mood'. About the word 'submissive', it is mentioned that the author relates it to 'service' and that she attributes the adjective to herself in her presentation: 'She is "submissive" - at least she likes to say so'. It is also explained that the title is inspired by St. Paul's letter to the Ephesians and that she has presented a copy to Pope Francis.
The social media impact on the first day indicates that the controversy was a media-driven phenomenon rather than a social media one. Only 10 posts had more than 10 retweets, and the majority were from journalists or media outlets.
Sunday, November 10, 2013: an 'intolerable' book
On Sunday, the news appeared in the print editions of numerous newspapers and followed the inertia of the previous day in the digital media, reaching 48 publications that day according to the MyNews database. A similar tone was maintained, but the attribute of 'bestseller' in Italy was added to several headlines. At the same time, opinion columns echoed the outrage with statements such as 'it's intolerable'.
Monday, November 11, 2013: 'all parties against the book'
At the beginning of the week, a new step in the upward curve of dissent was generated, with a total of 118 articles. The coverage, once again, was shaped by news agency cables replicated by numerous media. The important milestones of the day, reflected in homogeneous news coverage relying on information from the EFE and Europa Press agencies, are as follows:
1. Izquierda Unida of Granada asks the Public Prosecutor's Office to act against the book as one that advocates violence against women and demands that the City Council of Granada, whose presidency is in the hands of the right-wing Popular Party (PP), act in repudiation of the book.
2. The PSOE (Spanish Socialist Workers' Party) demands that the Women's Institute and Ana Mato (Minister of Health, PP administration) issue a statement on the book. It also accuses the Catholic Church of receiving money from the state to advertise 'sexism' and 'discrimination'.
3. They ask the Archbishop of Granada to withdraw his support for the book.
4. Numerous women's associations demonstrate against the book. For example, Ana María Pérez del Campo, president of the Association of Separated and Divorced Women, points out that this book teaches 'the opposite' of Pope Francis.
5. The PP of Granada asks the Archbishop to 'rectify' the situation and withdraw the book.
6. The Archbishop of Granada sees no reason to withdraw the book, which is the translation of a 'bestseller'.
The agency Europa Press reports a new framing from the Archbishop's Office: 'they criticize the book without having read it'. For its part, the Granada Laica Association (GLA) has asked the Archbishop to withdraw his support for the book. The spokesperson of the GLA, Manuel Navarro, stated in declarations to the EFE agency that the Archbishop has become an 'accomplice' to an 'aberrant' book that goes against human rights and equality. Some media highlight that the political system has expressed itself unanimously against the book. In addition, in the comments and complementary references, the subject of sexual abuse perpetrated by clergymen appears (Figure 5).
Tuesday, November 12, 2013: articles 'beyond the title'
The topic is still very present in the media coverage, with 79 articles. The dialectic of the coverage takes place between the Archbishop, who defends the book, the PSOE, which demands its immediate withdrawal, and the comparisons to the Imam of Fuengirola made by Izquierda Unida. Imam Mohamed Kamal Mostafa, of Fuengirola, spent three weeks in prison for publishing a book in which he explained how to beat a woman without leaving a trace (Figure 6).
An article in elDiario.es summarizing the contents of the book - so far the only article that goes beyond the title, in response to the statements that 'they criticize it without having read it' - stands out on this day. The journalist Miguel Ortega Lucas adopts a position of critical analysis, and his framing places the book in a socially acceptable place: 'the book is thick, but entertaining; the author is intelligent and not a fanatic'. He also adds that the tone, between 'Old Testament and Superpop', may explain its success in Italy. Finally, on the site specializing in religious topics, Aleteia.org, Feliciana Merino shares her opinion on the book in a column entitled 'No one wants women inferior to men or locked up at home'. Her point is that critics have been carried away by a provocative title without considering the context and the ironic tone.
Wednesday, November 13, 2013: Miriano's voice in the media
The media proliferation continues: 98 articles according to the MyNews sample, with great influence from the Europa Press and EFE cables on the narrative. Furthermore, opposing groups organize campaigns: one on Change.org to withdraw a 'misogynist' and 'shameful' book, which reaches 17 thousand signatures, and another by the Christian association Enraizados.org, which gathers 4 thousand signatures in the first hours, to defend 'religious freedom and freedom of expression against the unjustified and totalitarian attacks on the Archbishop of Granada for publishing Cásate y sé sumisa'.
Meanwhile, the Bishop of Bilbao distances himself from the book in a press conference, criticizing it because it sows ambiguity and 'does not reflect what the Church thinks' (about marriage).
On this day, the author's voice appears for the first time in the media, with two interviews: one with the radio station Onda Cero of Granada and the other with Agencia EFE. In both interviews she maintains a similar narrative:
• They criticize without having read the book: 'All the commotion that has formed in Spain shows me that no one has read the book'.
• Surprised by the controversy: 'I am astonished. In Italy there was no controversy; in fact, it was very well received, and the Catholic newspaper L'Osservatore Romano called it an amusing manual of evangelization'.
• Rejection of gender violence: 'I don't know how much weight the word "submissive" will have in Spanish, but I certainly do not instigate violence in any case, and even less do I seek to stop women from being free or independent. My goal is to help recover loving relationships'.
• Frame of attack on the Archbishop: 'The aim of the polemic is to attack the Archbishop'.
• Association of her phrase with the Bible: 'The phrase is from St. Paul, so they should withdraw all the bibles from the market'.
The Catholic news portal Religión en Libertad makes a positive assessment of the elDiario.es article by Ortega Lucas because he judges the book 'having read it' (Religión en Libertad 2013). The portal points out that, after reviewing hundreds of publications, it found only one journalist who took the trouble to read the book. It concludes with a link to the publisher's website, where the book can be purchased, in order to talk about it 'having read it'.
Finally, the Public Prosecutor's Office rules out starting an investigation, and headlines proliferate saying that the Archbishop of Granada 'ignores' the criticism and publishes the second book. Opinion columns also begin to appear that, without necessarily agreeing with Miriano, criticize the scandalized reaction, with frequent reference to the fact that there was no controversy over the book Fifty Shades of Grey. On social networks, a negative approach continues, with publications that emphasize the idea that the book promotes violence against women and seek to ridicule it (Figure 7).
Thursday, November 14, 2013: 'the Archbishop's habitual tone could not have helped many to believe in the good faith of the book'
With 37 articles published, the intensity of the media impact is lower. There are no significant events that mobilize the development of the case. At this point, the public dynamic is organized into two rival coalitions. A new article by Ortega Lucas in elDiario.es feeds the moderate interpretation of the case, under the headline 'The controversy and its edges'. He takes on the task of making a critical analysis of the book, but his denunciation only goes so far as to mention that the 'irony' provokes a 'calculated ambiguity'. In this sense, he nourishes the hypothesis that the positioning of the Archbishop of Granada may have been more decisive for the polemic than Miriano's book itself: 'The tone habitually used by the Archbishop of Granada could not have helped many to believe in the good faith of the famous book'.
One week after the start of the polemic: Friday, November 15 and Saturday, November 16, 2013
Of the more than 70 pieces of news of these days - according to MyNews - 54 correspond to a statement from the Archbishop of Granada. The most reproduced headline is the following: 'The Archbishop of Granada defends the book and says that abortion does promote abuse'. With this intervention, abortion and divorce, presented as the real generators of violence in society, enter the debate. In this statement, the 'gross manipulations' and 'gratuitous disqualifications' are deplored, and the interpretation of an attack on the Church is promoted: 'It is rather about damaging the only institution (…) that resists being tamed by the roller of the dominant culture: the Christian people'. It is also recalled that 'the book has been positively recognized as good evangelizing by L'Osservatore Romano'; therefore, it is shown that 'the position of the publisher in these two books is in accordance with the teachings of the Church' - in response to the Bishop of Bilbao. Finally, the Archbishop again asked that the criticisms be based on concrete passages from the book: 'Whoever makes such accusations regarding the book must be rigorous and specify the page and paragraph where the slightest justification or excuse of any type of violence appears'. At the same time, the general press - such as El Mundo - published an article summarizing the case, giving a voice to Miriano and her point of view (Figure 8).
Sunday, November 17, 2013: Miriano refines her positioning and seeks to build bridges between the Church and feminism
On Sunday, elements of the general coverage of the polemic are recovered. For example, a full-page article in El País focuses on the figure of Archbishop Javier Martínez. In turn, Miriano grants an interview to the Huffington Post (Carretero 2013), in which she refines her position and brings it closer to that of John Paul II, offering a more inclusive interpretation of the word 'submissive' as an attitude proper to both spouses: 'Marriage should be above all a place of mutual conversion, a place where both strive to offer the other the best of themselves. And in this we must help one another, because as St. Paul says, we are "mutually submissive"'. At the same time, she now stresses that the only violence against women in this story is the violence she herself is suffering by being pressured to censor her book. She emphatically denies that the book promotes violence against women. On the contrary, from her perspective, it promotes equality between women and men: 'The book may or may not be liked, that's obvious. But to say that there is an incitement to violence against women is pure madness. I don't know about you, but in Italy the very thought that the equality of men and women can be questioned is ridiculous. The whole thing is ridiculous'. She also establishes positive points of contact with feminism.
Week of Monday, November 18 to Sunday, November 24, 2013: negative institutional repercussions
The inertia of isolated news and repercussions in opinion forums continues. The tension remains between furious attacks such as 'The Costanza scandal' and nuanced analyses such as 'The delicate concrete mixer'. In turn, Alfa y Omega, a Catholic publication from Madrid, publishes an open defence of the book.
This week, the Junta de Andalucía (Andalusian Regional Government) asks again for the book to be withdrawn. In addition, the Congress of Deputies issues a unanimous declaration rejecting publications that call for the submission of women. Finally, the spokesman of the Episcopal Conference denies that the submission of women is an evangelical value.
November 25 and 26, 2013: speech by the Minister of Health on the day for the elimination of gender violence
Ana Mato, Spain's Minister of Health, Social Services and Equality, intervenes in the case, requesting that the book be withdrawn because it disrespects women. This new milestone generates about 70 pieces of news in two days, according to the MyNews database. In declarations to RTVE, the minister stated that she does not agree at all with 'either the title or the content' and that she 'would like, and [has] requested, that this book be withdrawn'. She also announced that a few days earlier she had asked the Women's Institute to send a letter requesting the withdrawal of the book.
November 26, 2013: record sales on Amazon.es
Several media outlets announce that the book has become a bestseller on Amazon.es. Comments manifest the defensive narrative: 'When politicians of all parties want to censor a book it means it is a must-read'; 'It is an exaltation of the figure of women'; 'Above all it presents a way of looking at the family that is very different from what is currently preached' (Figure 9).
Chronology of the final repercussions: court case, freedom of expression and international mentions
November 27, 2013: the Andalusian Observatory of Gender Violence files a complaint against the book.
November 28, 2013: the Women's Institute studies taking the book to court.
November 30, 2013: young leftists stage a 'submissive wedding', a critical parody, next to the Cathedral of Granada.
December 2, 2013: an opinion column entitled 'The Unsubmissive Archbishop' (by Magda Trillo) is published in Granada Hoy and across the Grupo Joly network. After a long and detailed analysis of the controversy, she takes a position on the debate on freedom of expression, recalling that, if censored, it would be the first book banned since the end of Franco's dictatorship.
December 3, 2013: Granada Hoy publishes a column entitled 'Freedom for all', on freedom of expression, recalling that, if Miriano's book is banned, Fifty Shades of Grey and Justine by the Marquis de Sade should also be analysed. A low-volume debate continues for several days on freedom of expression, censorship, and submissive practices in sexual relationships.
December 12, 2013: the Express publishes an article in the United Kingdom, in which it mentions the existence of an anonymous video repudiating the book. This information is not published in Spain.
December 14, 2013: in an interview on BBC Newsnight, Miriano, when asked why they wanted to ban her book in Spain, explains with some irony that it must be because of the word 'cásate' (marry him), since, as she has researched, there are other books with the word 'submissive' in the title, such as Learning to be Submissive and Diary of a Submissive Woman (Figure 10).
December 17, 2013: the Prosecutor's Office requests information about the book. The process will determine whether to open criminal investigation proceedings.
December 23, 2013: the news coverage is relaunched with more than 70 articles about the publication of the second book, Cásate y da la vida por ella (Marry her and give your life for her).
January 9, 2014: Miriano and her husband give an interview to Alfa y Omega, in which they downplay the accusations of machismo and present themselves as a couple in an equal relationship, giving their opinions on different issues (Vázquez 2014).
January 13, 2014: Anonymous hacks the website of the Archbishop of Granada and generates a wave of more than 40 news items in two days.
January 16, 2014: Ideal de Granada states that the book is sold out in bookstores. In turn, in Alfa y Omega, Margarita Fraga Iribarne publishes a humorous column celebrating the success of the book through a parody of the censorship attempt, entitled 'Cásate y… sé tú misma' (Get married and… be yourself).
January 24, 2014: the Prosecutor's Office of Granada opens criminal investigation proceedings against the book Cásate y sé sumisa, generating another wave of 60 articles. The Granada Municipal Women's Council - made up of 67 associations - is asked to send a copy of the book for analysis, detailing those expressions that could be subject to criminal reproach.
January 27, 2014: the Archbishop's Office tells EFE that they are 'absolutely convinced' that the book has no criminal problem and that they will respect the work of the Prosecutor's Office.
April 14, 2014: the Prosecutor's Office shelves the case on the book Cásate y sé sumisa, generating 90 articles. The archiving decree issued by the delegated prosecutor of the Section of Violence against Women in Granada rules that the book does not contain unlawful statements that undermine the image of women, nor discriminatory discourse. Therefore, the content of the work is not criminal, since, according to the decree, it contains the 'manifestation of a religious ideology'.
A word of scandal: Interpretations collide over the word 'submissive'
The textual transgression derives from the book's title: Marry Him and Be Submissive. Although the focus is on the term 'submissive', the surrounding context of its first media appearance is determinant. The book's introduction on the publisher's website states: 'Now is the time to learn loyal and generous obedience, submission. And, between us, we can say: the one who is more solid and resilient always takes the bottom position because the one below holds up the world'. In this initial conceptual construction, the word 'submissive' is modulated by 'obedience' and 'being below', which immediately triggers negative connotations regarding women's dignity.
Another key aspect to consider is the interpretation of the title in relation to the speaker's perspective: initially, the Archbishop of Granada - positioned as a cultural warrior - is foregrounded, and the author, a young modern-looking woman, is in second place.
The term 'submissive' is associated with St. Paul in the narratives of both the publisher and the author. Yet, in numerous commentaries, it evokes semantic connections with the BDSM world - particularly associated with the book Fifty Shades of Grey - and also with the notion of women's submissiveness in Islamic cultures. Throughout the debate, there is a recurring focus on the interpretation of the word 'submissive': the textual genre, the author's intent, the semantic context. Is it a provocation, an irony, an oxymoron? In what contexts is it valid? Who can use it, and who cannot?
As is typical in polemics, reading contracts from various systems intersect. The political system interprets the submissive-obedience-archbishop-below scheme and reacts with unanimous outrage. The conservative Catholic system, bound by a communicative collaboration agreement, reads 'submissive' as a provocation, understands the author's post-feminist reinterpretation, defends the book, and purchases it. Those scandalized see it as regressive in a world advocating for women's empowerment in both work and relationships. Conversely, the author approaches the topic from a post-feminist paradigm, focusing on the experience of dissatisfaction of empowered women. The author expresses this framework in various ways, one notable manifestation being her surprise at the polemic. In her first public appearance, she comments: 'I am astounded. In Italy, there was no controversy; in fact, it was well received, and the Catholic newspaper L'Osservatore Romano described it as an amusing evangelization manual'. In a BBC interview, by contrast, when confronted by a heated interviewer, she mentions researching the word 'submissive' and notes other books employing it without causing a stir, e.g., Diary of a Submissive Woman.
The term 'submissive' spawns its own set of tensions within the Catholic sphere itself. While these debates might be seen as sporadic or incidental, they attracted substantial media coverage, notably involving figures like the Bishop of Bilbao and the spokesperson of the Conference of Bishops. In such instances, interpretations of the term 'submissive' diverge.
The final debate on 'submissive' revolves around the book's title versus its content. When interpreted within the book's broader argument, 'submissive' seems to cater to an empowered woman, cautioning against potential overreach and the desire to 'control everything'. This perspective is reinforced when Miriano accuses women of attempting to exert control over her. This interpretation of the word seems fitting within the book's entire context because, when Miguel Ortega Lucas, from an ideologically opposing stance, analyses the book's content, he concedes that the author is not a fanatic but intelligent, and that the book is rife with ironies and provocations.
The dignity of women is common ground, but contrasts emerge on derivative topics
The debate centres on the theme of gender equality, a topic of heightened interest in Spanish public discourse. This theme extends to related issues, encompassing violence against women, women's rights, women's workforce access, and claims of abuse. This equality agenda serves as common ground for the diverse stakeholders in the matter. The crux of the discussion lies in determining whether there was a genuine infringement upon these universally accepted values. Some sub-discussions underscore the inherent logic of the established paradigms within each conflicting value system. For instance, when the Archbishop posits that abortion equates to violence against women, it accentuates the disparity in values between different subsystems. While this perspective fortifies the sentiments within the conservative Catholic camp, it evokes strong disapproval from the feminist faction.
The public debate is organized as two systems in outraged opposition
In situations of polemical dispute, it is typical that messages tend to polarize and generate outrage in the two opposing subsystems, reaffirming the value structure of each pole. The accusation did not arise from a spontaneous or widespread social reaction, but rather from the response of institutional actors. This includes the alliance between politicians from the three major parties, feminist associations and similar groups, and the media. Minister Mato's position within the Popular Party (PP) was crucial in achieving unanimity within the political system.
Escalation makes each subsystem stronger in its own logic. For instance, the book's supporters start off with prudence. Nevertheless, they culminate in a jubilant and satirical attitude towards the attempts at censorship. The pro-book coalition also becomes a more comfortable place to be when the author refines her statements and creates a more socially acceptable version: her tone and appearance are positive and, contrary to the stereotypes of submissive women, she accepts the idea of 'mutual submission' between spouses, builds bridges with feminism, and categorically rejects violence against women.
Self-defence strategy
The defendants maintain a strong public stance against attacks. Their defence effectively fractures the anti-book coalition when elDiario.es shifts from scandalizing to moderate criticism, deactivating dissent by moving the dispute from a polemic over values to a controversy over ideas. Highlighting the flaws in adversaries' critiques prompts some actors, such as Ortega Lucas, to depart from the anti-book coalition.
The self-defence could be interpreted as a counterattack when the defending individuals attempt to downplay the severity of the accusations by claiming that critics have not read the book. Moreover, they frequently claim - especially against the accusations of the Catholic sector itself - that they have received institutional support, specifically from L'Osservatore Romano. The primary outcome of self-defence on the part of the accused is the enabling of the defence of allies, who take up the arguments provided and gain confidence from the firmness with which the accused respond.
In the pivotal moment of the polemic, the Archbishop opts for confrontation, questioning the intentions of the detractors: they criticize because they want to attack and subjugate Christians. Conversely, Miriano presents a discourse that constructs bridges and promotes shared values. Notably, no sympathy, apology, or compassion is expressed towards the victims of gender violence. While this may have reinforced their position within their own group, it also contributed to polarization.
Mechanisms that activate dissent
The accusation is based on the decontextualization mechanism, which disregards the textual composition of the book and does not seek to understand the author's intention. The process of dissent rapidly escalates through the mechanism of unanimous accusation coming from the political system, culminating in the intervention by the Public Prosecutor's Office as the competent institutional body. The political system is scandalized and generates its own process of sacrificial resolution, in this case through the ritual of denunciation, which culminates in acquittal.
The use of 'opprobrious discourse' (Thompson 2000) and stigmatization are mechanisms of dissent that intensify moral disqualification and prevent openness to understanding the intentions of others. Finally, the strategy of counterattack is a mechanism of dissent as it seeks self-defence through escalating opposition. This produces an effect of affective polarization, generating a heated consensus in one's own sector to confront critical opposition with greater force, thereby deepening the conflict.
Mechanisms that deactivate dissent
The main mechanism for stopping the escalation of a scandal is the defence of allies, because it prevents a situation of unanimous accusation. Nevertheless, such a defence in this case is mutually escalating and therefore insufficient. The 'bestseller' mechanism gives the book a certificate of legitimacy and a distinctive reputation. In this scenario, the mechanism works on two levels. First, the book is labelled a 'bestseller' in Italy, providing it with foundational legitimacy. Additionally, the book's success on Amazon.es reveals a significant community of private support. Miriano also uses endorsements from reputable sources, such as the media, including her affiliation with RAI and praise from L'Osservatore Romano. Lastly, Miriano's intervention in the chain of events has had a positive impact. First and foremost is the mechanism of 'image': she presents herself as a young, stylish, empowered, and modern woman, both on her blog and in the media. As a result, the article in elDiario.es describes her as 'superpop', a total contrast to the idea of 'submissive'.
Miriano takes a different approach than the Archbishop, opting for a strategy of convergence rather than confrontation. The 'congruent values statement' mechanism establishes a contextual framework that enables the acceptance of Miriano's interpretation of the word 'submissive' and projects a reasonable position within the value framework of the general society. She not only opposes violence against women but also states common ground with feminist groups, aligns her position on submission with the official Catholic paradigm - expressed by John Paul II in Mulieris Dignitatem - presents critiques of the economic system shared by her critics, and points to common-ground values such as the avoidance of control and manipulation of one's partner, or the promotion of happiness and love.
Finally, the intervention of the court functions initially as an activator of dissent, but becomes the ultimate mechanism of dissolution of dissent, as it opens the way to a ritualistic absolution legitimized for all parties.
Social consequences of the polemic
The polemic generates a process of thematization and social debate on values, putting on the agenda issues such as women's dignity, equality, violence, BDSM sexuality, conjugal relations, the role of the Church in society, and freedom of expression. In addition, there is a process of validation of norms through the mutual reinforcement of values in the different subsystems. This is what Gamson (1992) calls 'theme' and 'counter-theme'. As is characteristic of polarization, in mimetic rivalry each pole is strengthened, and the opposition becomes harder.
From the point of view of the defendants, the polemic produced a refinement of Miriano's public discursive positioning. The author evolves in her narrative and incorporates an approximation to the institutional position of the Catholic Church, which considers that submission is a reciprocal invitation to men and women. This expanded framework has more resources to broaden the acceptance of her proposal and restore social legitimacy.
On the other hand, the intense confrontation and the lack of argumentative bridges with society's official paradigm of values erode the Archbishop's legitimacy, both in the general system and at the margins of his own subsystem, since the case shows the discomfort of some Catholic reference figures with the position taken by the Archbishop.
Takeaways and some lessons learned
Apparently, there is a trade-off between the record-breaking sales of the book and the erosion of the Catholic Church's public legitimacy, particularly in relation to the political and institutional systems. Simultaneously, the situation oscillates between confusion and confrontation. Initially, the adverse reaction to the book's title appears to be the outcome of a twofold process: there is evident confusion regarding the meaning of the word 'submissive', but there is also a plethora of indignant confrontational reactions stemming from the publisher's framing of the book and the attributes the Archbishop transfers to the endeavour.
Facing the initial indicators of scandalization, the strategy of defence and counterattack is effective. It is then possible to communicate congruent values and reinforce shared frames of reference as a second step. This process gives rise to three stances, which correlate with the three scenarios of dissent initially presented: for the feminist and political sectors, it is a scandal met with unanimous accusation; for the Catholic sector, it is a polarized polemic between two disparate value systems that persist in their own viewpoints without yielding to each other's demands; and for those who shed their indignant demeanour and approach the full text of the book in a more measured manner, it becomes a controversy of ideas that might even be acknowledged as insightful.
What would have happened if… a response based on values congruent with those of the critics had been given?
Given the case's characteristics, one can contemplate alternative communicative strategies. An early emphasis on congruent values might have anchored the publication more firmly within an established paradigm of values, tempering the intensity of the controversy and fostering a pluralistic conversation about ideas. This could have set a foundation for subsequent dialogues rooted in shared premises.
Despite the entrenched defensive posture, there was the possibility of recognizing that the introduction of a post-feminist perspective might have resonated negatively with certain segments, particularly those facing gender-based violence - a sentiment echoed on social media platforms. Both defendants - but especially the Archbishop - could, by reinforcing congruent values and extending empathy towards those affected by such violence, have built a public positioning more admissible to wider audiences, and also more consistent with their institutional values.
For example, the narrative could have been: My book is written for the specific profile of a professional woman, one who has had ample opportunities, which is why it is framed as letters to my friends. However, this controversy has illuminated the perspectives of women who have endured, or are enduring, violence and mistreatment in their relationships - women striving for equality. I recognize that the book's title, when isolated from its broader context, may have caused pain and rejection, making some women feel forsaken by a peer who should stand alongside them in their fight. This has prompted introspection on my part. I deeply regret the distress it has caused, but I wish to emphasize that this book aims to empower women, aiding them in their pursuit of happiness within the oftentimes oppressive realm of professional relationships, and amid theories of male-female rivalry that do not necessarily foster profound personal growth.
The importance of sustainable public positioning within the system of reference of the defendant
Both Miriano and - to a lesser extent - the Archbishop were able to maintain the support of the Catholic sector and gradually gain allies, first some journalists and later the Court. They avoided unforced errors that would have destabilized their legitimacy in their own subsystem. The incidents of the spokesman of the Episcopal Conference and the Bishop of Bilbao did not destabilize the pro-book coalition. However, as has been said, they used distinct strategies. The Archbishop chose a counter-position strategy, as evidenced by the November 15 statement. In contrast, Miriano succeeded in refining her stance, advocating values congruent with those of both the Catholic sector and general society. Miriano's experience as a journalist and her background values helped her to be effective in a communication that was firm in its defence but inclusive in its proposal.
The debate on freedom of expression
At the climax of the confrontation, the anti-book coalition pushes more insistently for the withdrawal of the book. In response to this pressure, a position defending freedom of expression emerges, with the following narrative: books can be bought or not, criticized or not, but the publication of a book cannot be banned; this would be the first time since the Franco dictatorship. This type of polemic shows the need to continue to deepen the implications of freedom of expression and how it relates to respecting people.
The significance of the Catholic media outlets
Finally, Catholic news sources are important in this case, such as the specialized media Aleteia.org, Religión en Libertad, and Alfa y Omega. These specialized media allow the defence of the allies to be manifested early and to a certain extent, which then opens the way for the allies to appear in the general media.
The support of the readers who buy the book is fundamental to ensuring acceptance within the Catholic sector itself. A very problematic scenario would have occurred if the position of rejecting the book - as contrary to Catholic values or to the style of evangelization that the Church wanted to promote in the country - had grown. However, the moderate statements by unexpected actors - such as the columnist of elDiario.es - the inclusive interventions of the author, the original success in Italy, and the early endorsement in L'Osservatore Romano create a position of legitimacy in the Catholic subsystem that never seems to be in danger of being lost.
Therefore, although the media speak of scandal and accusatory unanimity in the political subsystem, in reality it is a process of scandalization in one sector and a process of position consolidation in the other. It is a prototypical case of an emotional clash of values, in mutual disqualification. The dispute ignites both poles of the interaction: the demand for an exemplary sanction versus record sales, with short-term gains but long-term deterioration of general legitimacy.
Figure 4. Image of the publisher's website, where the description quoted in the news can be read.
Figure 5. elDiario.es reflects the unanimous criticism of the political system.
Figure 6. El País publishes the defence of the Archbishop against political attacks.
Figure 7. Two featured posts on Twitter: one opts for ridicule, the other presents a denunciation of domestic violence. | 2024-04-02T16:09:33.199Z | 2024-01-02T00:00:00.000 | {
"year": 2024,
"sha1": "796ff23ce76b16122fafaeca94491cc801262a8b",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23753234.2023.2301568?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "a1a53d90568920a1094ca18e22102fd353c8c023",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
211013003 | pes2o/s2orc | v3-fos-license | Aerobic Intramolecular Oxidative Coupling Using Heterogeneous Catalyst for Synthesis of N-Substituted 2,3,6,7-Tetramethoxycarbazoles.
A heterogeneous metal catalyst enabled the intramolecular oxidative coupling of diarylamines to form carbazoles with molecular oxygen as the sole oxidant. Rh/C had efficient catalytic activity and allowed the catalyst loading to be reduced to 0.1 mol% while maintaining excellent yields of carbazoles. This reaction is operationally simple in an open-to-air setup, and provides a green and atom-economical process for an efficient synthetic approach to N-substituted carbazoles.
Introduction
Carbazoles have widespread applications in both the pharmaceutical and materials sciences. [1][2][3][4] Many biologically active natural products containing a carbazole framework have also been isolated in past decades. 5) As a result, significant efforts have been applied toward the development of efficient methods for the synthesis of carbazoles. [6][7][8] Among them, direct oxidative cyclization reactions have emerged as one of the most powerful approaches for carbazole synthesis because preactivation of the substrates can be avoided, in addition to the atom- and step-economy of the reaction. [9][10][11][12][13] Ever since Åkermark et al. reported the oxidative cyclization of diphenylamine via a twofold C-H bond activation process using a stoichiometric amount of palladium acetate, 14) the palladium-catalyzed intramolecular oxidative coupling reaction of diarylamines in the presence of various oxidants such as cupric acetate, [15][16][17] tert-butyl hydroperoxide, 18) silver oxide, 19) and oxygen [20][21][22] has been further developed, and the potential synthetic utility of this type of transformation has been significantly improved (Chart 1A). In recent years, the photochemical cyclization 23,24) and molybdenum pentachloride-mediated 25) homogeneous reactions of triarylamines and diarylamines under mild conditions have been successfully developed. In contrast, relatively little progress has been made in heterogeneous catalysis for the synthesis of carbazoles from diarylamines, despite the high efficiency, robustness, practicality, and facile reusability of heterogeneous catalysts. [26][27][28] Matsubara et al. reported the first heterogeneous catalysis of the intramolecular oxidative coupling of diphenylamine using platinum on carbon under hydrothermal conditions 29) (Chart 1B). However, these heterogeneous reactions still suffered from high temperatures, harsh reaction conditions, and the formation of only 9H-carbazoles; N-substituted carbazoles could not be accessed. Therefore, a heterogeneous metal-catalyzed synthesis of carbazoles via direct aryl-aryl bond formation that is amenable to environmentally sustainable chemistry would be valuable. 30) We previously developed aerobic oxidative homo-coupling and selective cross-coupling reactions of aryl amines catalyzed by heterogeneous metal catalysts. [31][32][33][34] These reactions are the first examples of catalytic intermolecular oxidative coupling reactions of aryl amines under mild conditions using oxygen as a co-oxidant. [35][36][37][38] Recently, we reported the heterogeneously catalyzed aerobic oxidative intramolecular coupling reaction of aromatic compounds to provide triphenylenes and carbocyclic biaryl compounds in good yields. 39) These results encouraged us to develop an efficient method for the synthesis of carbazoles via the intramolecular oxidative aryl-aryl coupling of diarylamines. Herein, we report the heterogeneously catalyzed oxidative cyclization of triarylamines and diarylamines under aerobic conditions (Chart 1C).
Results and Discussion
To test whether the heterogeneous metal-catalyzed intramolecular coupling reaction of diarylamines would proceed as anticipated, N-benzyldiarylamine 1a was reacted in the presence of 5 mol% Rh/C catalyst (Table 1). Fortunately, the reaction proceeded efficiently in trifluoroacetic acid (TFA) under open-air conditions at room temperature to afford benzocarbazole 2a in 76% yield (entry 1). When dichloroethane was used as a co-solvent, the amount of TFA could be decreased to 5 equiv. and 2a was obtained in 99% yield (entry 2). The cyclization reaction of 1a in benzotrifluoride proceeded more smoothly to afford 2a in high yield (entry 3). We next examined the effect of various acids in benzotrifluoride, and found TFA to be the most efficient under the standard conditions (entries 4-7). In the absence of acid, the reaction did not proceed, even after heating at 50°C overnight (entry 8). We then examined various catalysts for the coupling reaction. The use of Rh/Al₂O₃ delivered 2a in high yield despite a slightly longer reaction time (entry 9). Catalysts such as Pd/C, Pd/Al₂O₃, Pt/C, and Pt/Al₂O₃ showed high reactivities, and these reactions were completed in shorter reaction times with good product yields (entries 10-13). In contrast, Ru/C and Ru/Al₂O₃ were found to be less active (entries 14 and 15). Despite the higher activity of the palladium and platinum catalysts, their slightly lower yields meant that the best conditions were achieved by a combination of Rh/C and TFA at room temperature. When the reaction was performed under an argon atmosphere, a reduced yield of 2a was obtained (entry 17). This indicates that oxygen plays an important role as a terminal oxidant. 40) With the optimized conditions in hand, we explored the substrate scope of the intramolecular biaryl coupling reaction (Table 2). In addition to N-benzyldiarylamine, the reactions of N-propyl- and cyclohexyl-substituted compounds 1b and 1c also proceeded efficiently to afford carbazoles 2b and 2c in 85% and 98% yields, respectively (entries 1 and 2). The Rh/C-catalyzed reaction of N-tert-butyl compound 1d was unsuccessful, resulting in a complex mixture. When Pd/Al₂O₃ was used as the catalyst under an oxygen atmosphere, the cyclization reaction proceeded to provide 38% of carbazole 2d and 60% of the tert-butyl-deprotected carbazole, a result of the instability of the product under acidic conditions (entry 3). Triarylamines 1e, 1f, and 1g also underwent the cyclization reaction to provide N-aryl carbazoles 2e, 2f, and 2g, respectively, in good to moderate yields (entries 4-6). The reaction tolerated both electron-rich and electron-poor substituents on the benzene ring. The reaction of 1f, possessing the 3,4,5-trimethoxyphenyl group, did not proceed cleanly, resulting in the low yield of 2f (entry 5). Because diarylamines with electron-withdrawing groups are less reactive, the cyclization reaction of 1g was conducted under an oxygen atmosphere to provide the ester-substituted carbazole 2g in 81% yield (entry 6). In contrast, a secondary amine possessing an N-H group (1h) was sluggish, and acetamide 1i was inert to this reaction (entries 7 and 8).
Since tetramethoxydiphenylamines 1 were excellent substrates for the intramolecular biaryl coupling because of their electron-rich nature, as seen in Table 2, we next examined the cyclization of differentially substituted diarylamines (Chart 2). When the 3,3′-dimethyl-substituted substrate 3 was used, the carbazole was not produced because side reactions such as intermolecular coupling occurred. In addition, tetramethyldiphenylamine 4 was not cyclized to the corresponding carbazole; instead, 30% of dihydroazepine 5 was obtained, probably due to an addition-cyclization reaction involving the in situ generated p-quinone methide imine intermediate. 41,42) These results indicated that tetramethoxylated compounds were preferred for the Rh/C-catalyzed aerobic intramolecular oxidative coupling to obtain carbazoles in high yields.
The reaction of 1b on a 1 mmol scale also proceeded efficiently (Chart 3). Furthermore, even when using only 0.1 mol% of 5% Rh/C, the intramolecular oxidative coupling afforded carbazole 2b in excellent yield, with a turnover number of up to 890.
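A rough consistency check, assuming the turnover number (TON) is defined as moles of product per mole of rhodium and that roughly 0.89 mmol of 2b was isolated from the 1 mmol-scale run at 0.1 mol% Rh (an inference from the reported figures, not a value stated in the paper):

    \mathrm{TON} = \frac{n_{\mathrm{2b}}}{n_{\mathrm{Rh}}} \approx \frac{0.89\ \mathrm{mmol}}{0.0010\ \mathrm{mmol}} = 890

That is, a TON of 890 at 0.1 mol% loading corresponds to an isolated yield of about 89%, consistent with the 'excellent yield' described.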
Conclusion
We have developed a catalytic aerobic intramolecular oxidative coupling of N-alkyldiarylamines and triarylamines using a heterogeneous metal catalyst, providing an efficient route to N-substituted 2,3,6,7-tetramethoxycarbazoles in high yields. Further studies on synthetic applications are underway in our laboratory.
Chart 1. Catalytic Intramolecular Oxidative Coupling of Diarylamines | 2020-02-04T14:04:09.065Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "d78038580424eea8f57f90a5f5b6131af184072a",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/cpb/68/2/68_c19-00947/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1d8d604b028893a154c69d9e71a4bf8207f70143",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
243804733 | pes2o/s2orc | v3-fos-license | Flow theory: Advancing the two‑dimensional conceptualization
This research advances the conceptualization and measurement of flow. The results of six studies (N = 2809) reveal that flow has two dimensions: "fluency," which comprises experiences related to fluent thought and action; and "absorption," which is based on sustained full attention. The results also demonstrate that the two dimensions have nuanced relationships with other variables. Specifically, while the fluency dimension is related to antecedents of flow (familiarity, skill, progress), the absorption dimension is not. Conversely, the absorption dimension was found to be strongly related to consequences of flow (behavioral intentions, presence), while the fluency dimension was not. Furthermore, we demonstrate that fluency-related experiences can give rise to absorption-related experiences, which advances our understanding of how flow emerges. Finally, we develop a refined measure of flow called the two-dimensional flow scale, and demonstrate its enhanced ability to capture variance in flow and other related variables in leisure contexts.
Introduction
Flow manifests as a state of seemingly effortless concentration wherein one is completely absorbed in what they are doing (Csikszentmihalyi, 1975), and can arise during a wide range of daily activities, including work, physical activity, technology use, and interactions with others (Aubé et al., 2014; Eisenberger et al., 2005; Kawabata & Mallett, 2011; Mathwick & Rigdon, 2004; Moneta, 2012). Flow offers a compelling line of inquiry given the myriad positive outcomes that have been associated with it, including optimized levels of engagement, performance, and enjoyment (Christandl et al., 2018; Moneta, 2017; Smith & Sivakumar, 2004). In an attempt to leverage these benefits, researchers have identified and examined several antecedents to flow, including contextual factors, personality differences, and task characteristics (Baumann & Scheffer, 2011; Baumann et al., 2016; Engeser & Rheinberg, 2008; Schüler et al., 2013).
Despite these efforts, there remains confusion regarding the conceptualization of flow, including its dimensionality. Nascent work suggests that the various experiences that characterize flow can be grouped into two dimensions: fluency and absorption (Engeser, 2012; Rheinberg et al., 2003). However, researchers have almost exclusively treated flow as unidimensional (e.g., Baumann et al., 2016; Schüler, 2010). While recent work has provided support for the two-dimensional structure of flow (e.g., Lavoie & Main, 2019a), this conceptualization is still in its infancy and requires further development. Specifically, further research is needed to support the descriptive accuracy and benefits of a two-dimensional conceptualization. The exact nature of flow - including its underlying cognitive and affective processes - also requires explication.
The present research attempts to address these concerns through a series of studies that utilize a range of leisure activities. First, factor analysis of existing flow measures reveals that a two-dimensional structure is superior to a unidimensional structure, and that these two dimensions have nuanced relationships with other variables. In particular, our findings establish that several known antecedents of flow (e.g., skills, familiarity) are related to its fluency dimension, but not its absorption dimension. Conversely, the consequences of flow, such as behavioral intentions and presence, are related to its absorption dimension. We also provide evidence of the relationship between the two dimensions by showing that fluency-related experiences mediated the emergence of absorption-related experiences in the leisure activities used in our studies.
Our results also further clarify the two dimensions of flow based on existing constructs and their underlying psychological processes. Most notably, our findings suggest that the fluency dimension of flow entails the subjective experience of fluency (i.e., ease) and control, and is based on both fluent action and fluent thought. This insight is important given the conceptual similarity to processing fluency, which is defined as the "conscious experience of processing ease, low effort, and high speed" (Winkielman et al., 2003, p. 193). Our results suggest that processing fluency is an important aspect of flow's fluency dimension, as it underlies the subjective experience of fluent thought. Furthermore, our findings also provide empirical support for theorizing which posits that sustained full attention underlies the absorption dimension of flow (Dietrich, 2004) and its emergent nature (Lavoie & Main, 2019a).
Clarifying the nature of the fluency dimension and demonstrating its importance in the emergence of flow can change how we think about fostering flow. Our findings reveal the importance of subjective ease, which advances earlier theorizing that flow is best achieved via highly challenging tasks (Csikszentmihalyi, 1975). The discrepancy is likely due to the fact that the present research explores flow within the context of leisure activities, whereas original flow theory focused on deepflow states in tasks that are more complex and of longer duration (Lavoie & Main, 2019a). This explanation is consistent with emerging evidence that flow states and their consequences differ based on the nature of the activities that elicit them (Engeser & Baumann, 2014), and it advances the literature by suggesting that entry into flow differs depending on the context (e.g., leisure versus work) (Engeser & Rheinberg, 2008).
Furthermore, we refine the two-dimensional conceptualization and the existing measures of flow in order to capture these dimensions more accurately. To this end, we develop a new measure called the Two-Dimensional Flow Scale (TDFS) and demonstrate its ability to capture greater variance in flow and its consequences compared to the unidimensional conceptualization and existing measures. Clarifying flow's dimensionality is important from both a theoretical and a measurement standpoint (see Engeser, 2012; Schiefele, 2013; Schiefele & Raabe, 2011 for further discussion).
Finally, our findings indicate that a match between one's skill level and the demands of a task is not a particularly strong indicator of either dimension of flow, which builds on research suggesting that it is necessary to look beyond this relationship in capturing flow (e.g., Baumann et al., 2016). This result is also consistent with research demonstrating the occurrence of flow in experiential activities that are not typically associated with challenge (e.g., Lavoie & Main, 2019b; Novak et al., 2003). Together, these findings advance flow theory and provide a foundation for future research.
The dimensionality of flow
An important yet largely overlooked fact is that flow states differ in intensity and duration based on the activities that elicit them. As such, it is useful to think of flow states as falling on a continuum, with those elicited by relatively simple activities (e.g., washing dishes) on one end, and those elicited by highly complex activities (e.g., painting a masterpiece, playing professional sports) on the other. Flow is generally thought of in relation to relatively complex activities that are intense and long in duration; this type of flow is often referred to as deepflow (Csikszentmihalyi, 1975). However, flow can also be elicited by shorter experiences that are less intense, yet more common; this type of flow is referred to as microflow (Lavoie & Main, 2019a). While flow theory and most subsequent theorizing have been based on deepflow, research on flow, including the scales that have been developed to measure it, has almost exclusively investigated flow elicited by relatively shorter, less complex tasks - what could most properly be considered microflow states. We recognize the importance of distinguishing between types of flow states, and we therefore stipulate that this research focuses on those that fall in the microflow region of the continuum in the context of leisure activities. Given this focus, the terms "flow" and "microflow" will be used interchangeably for the remainder of this paper.
The characteristics of flow originally identified by Csikszentmihalyi (1975) refer to an intrinsically rewarding experience defined by clear goals, unambiguous feedback, congruence between skills and task demands, the ability to concentrate on and exhibit control over the task at hand, a sense of merging between action and awareness, a loss of self-consciousness, and a distorted perception of time. However, there is disagreement regarding the dimensionality of these nine characteristics and the subsequent conceptualization of flow. While most research utilizes a unidimensional conceptualization of flow (Martin & Jackson, 2008; Schiefele, 2013), other findings suggest that the nine characteristics are driven by similar underlying mechanisms (Dietrich, 2004). There is further disagreement about the underlying experiences of flow among proponents of a multidimensional conceptualization; while some suggest that flow has three primary experiences - namely, centering of attention, loss of self-consciousness, and the merging of action and awareness (Csikszentmihalyi & Csikszentmihalyi, 1988; Moneta, 2017) - others argue that flow is best described by two distinct sets of experiences (Engeser, 2012).
Despite these disagreements, empirical investigations of the dimensionality of flow have been limited. We seek to fill this gap in the literature by empirically exploring the dimensionality of flow across a wide variety of leisure tasks. We begin by evaluating various existing measures to determine which are most suitable for assessing a wide range of activities. Findings have demonstrated that flow varies in degree, and this has motivated the development of various continuous measures, including the global "core flow" scale, which captures a subject's overall feeling of being in flow (Martin & Jackson, 2008). However, it is not possible to determine the dimensionality of flow without assessing each of the original characteristics of flow identified by Csikszentmihalyi (1975).
Two existing scales were created with this aim in mind. Martin and Jackson's (2008) Flow Short Scale (henceforth referred to as the MJ-FSS) is primarily used in sports contexts and employs wording focused on performance. Rheinberg et al.'s (2003) Flow Short Scale (henceforth referred to as the FSS) was developed for use in a more general psychological context - thus, with less focus on performance - making it more appropriate for measuring flow across contexts (cf. Engeser & Rheinberg, 2008). Given that we aim to explore flow in relation to both performance-oriented and experiential consumer-related tasks, we begin our investigation by building on the items of Rheinberg et al.'s (2003) FSS.
We suggest that it is most appropriate to conceptualize flow as consisting of two dimensions. The view that flow is best conceptualized as consisting of three experiences (i.e., centering of attention, loss of self-consciousness, and merging of action and awareness) seems unsuitable, as these experiences can all be explained by the common underlying process of sustained concentration (Dietrich, 2004). Furthermore, this view of flow also overlooks the importance of its characteristic feelings of control and perceived ease. The unidimensional view of flow does not seem suitable either, as flow's characteristic experiences appear to be unique and sometimes conflicting (Baumann & Scheffer, 2010). For example, experiences related to feeling in control are qualitatively distinct from other flow experiences, such as losing track of time. Perhaps experiences related to concentration (e.g., losing track of time) and fluent processing (e.g., control and ease) become relatively symbiotic over the longer periods of time involved in deepflow. This distinction is important, and it is discussed in greater detail in the General Discussion section of this paper.
We suggest that a two-dimensional conceptualization of flow is most appropriate for microflow states, as it is able to account for all of the critical flow experiences while also making the necessary distinctions between the sets of experiences within flow, which is not possible with a unidimensional conceptualization. For example, a loss of self-consciousness, the merging of action and awareness, and time distortion are all explained by one underlying attentional process related to concentration (Dietrich, 2004); specifically, focusing on something for an extended period necessarily results in the merging of action and awareness and the centering of attention. Moreover, our limited attentional resources mean that one will lose track of time and self-consciousness (Dietrich, 2003).
The remaining characteristic flow experiences (i.e., perceptions of control, feelings of automaticity, inherent enjoyment) are related but qualitatively distinct from the absorption experiences and can be grouped into a second dimension that is based on things going well. In particular, the remaining flow experiences are all associated with fluent thought and action, and there is empirical evidence supporting the inherent relationships between them. The experiences of fluent thought and action (i.e., processing fluency) both bolster feelings of control (Sidarus et al., 2017). Moreover, fluent cognitive processes such as feelings of automaticity (Bargh, 1994), fluency (Oppenheimer, 2008), and ease (Schwarz, 2004) are inherently enjoyable. Fluent action in the form of progress is also inherently enjoyable, partly because it satisfies one's need to feel competent and self-efficacious (Bandura, 1982; Ryan & Deci, 2000).
The inherently enjoyable fluency experiences help to explain the autotelic (intrinsically rewarding) aspect of flow. This aspect is especially evident when combined with the nature of the absorption dimension, which suggests that one would have mental "order" and be devoid of any negative thoughts given their complete focus on the task or experience (Csikszentmihalyi, 1975). In summary, we suggest that the two-dimensional conceptualization of flow is most appropriate in succinctly capturing the various experiences that comprise flow while also hinting at the underlying processes.
Hypothesis 1: Flow has a two-dimensional structure.
Relationships with other variables
We build on our two-dimensional hypothesis by exploring the relationship between the two dimensions of flow, as well as some of their antecedents and consequences (see Fig. 1). We theorize that the two dimensions of flow will have nuanced relationships with antecedents and consequences. Demonstrating relationships of varying strength would support the importance of the two-dimensional conceptualization. For example, if a certain dimension had stronger relationships with antecedents, this would reveal which aspect of flow is most important in fostering the state and would illuminate the process through which flow happens.
Antecedents of flow
Several antecedents to flow have been demonstrated, with an individual's familiarity with a task and their task-based skills being among the most prominent (Schiefele & Raabe, 2011). For example, researchers have manipulated flow by calibrating players' skillsets in games such as Tetris and Pacman, and then manipulating the difficulty of the game to provide an appropriate level of challenge (Moller et al., 2010). We suggest that one's familiarity with a task will facilitate the fluency-related experiences of flow, as familiarity makes it easier to process task-related information (Song & Schwarz, 2009). In addition, one's familiarity with a task will result in the representation of its elements in their memory (Bornstein & D'Agostino, 1992), which will assist the processes of encoding and decoding, thus increasing processing fluency (Mandler et al., 1987).
We argue that flow is not only characterized by fluent thought, but also by fluent action. Similar to the relationship between task familiarity and fluency of thought, we suggest that task skill level is positively related to task fluency. Fluency of action manifests as successful progress through an activity and, alongside fluency of thought, forms the foundation of flow's fluency dimension. These relationships are particularly evident when considering the flow experiences that comprise the fluency dimension (e.g., feelings of control, automaticity, and confidence in one's knowledge of the task), as these experiences are largely derived from familiarity and skill (Reber et al., 1998) and are manifested in the form of continual progress. If a person is not making fluent progress - and especially if they are doing the opposite by making mistakes - they will feel less in control, less enjoyment, and less ease with respect to the task.
The direct relationships between familiarity, skills, and the absorption-related experiences of flow are less clear. In fact, a high level of skill may have the opposite impact on absorption. Absorption occurs when one devotes their attention to an object or activity over an extended period of time such that the object or activity captures their full awareness (Agarwal & Karahanna, 2000). Underload accounts of attention suggest that people tend to disengage when tasks are not stimulating enough to capture or hold their full attention (Manly et al., 1999). When a task is not sufficiently difficult or interesting to capture one's full attention, their mind will begin to process unrelated thoughts (i.e., mind wandering) and become more susceptible to distractions (Smallwood & Schooler, 2006). This phenomenon may occur among individuals with higher levels of skill and familiarity in a given task, as the information presented to them will likely be less stimulating or challenging, thus resulting in decreased attention and limited absorption. This is consistent with Lavoie and Main's (2019a) findings, which demonstrate that flow experiences related to fluency become stronger as task-related skill increases, but that the opposite effect can be observed for absorption. As an exception, one's general ability to focus on, or their inclination to fully engage in, all activities may facilitate absorption (Baumann & Scheffer, 2011).
Consequences of flow
Interestingly, some of the established consequences of flow appear to be more directly related to its absorption dimension than its fluency dimension. One noteworthy consequence of flow in digital media contexts (e.g., playing video games) is presence, which is defined as a feeling of being present in a particular virtual computer-mediated environment rather than the immediate physical environment (Sheridan, 1992). As an attention-based experience, presence is enabled when one focuses on one thing for an extended period such that it becomes reality (Banos et al., 1999). As a state of high engagement in a task, flow has been suggested to be the foundation that enables the experience of presence (Weibel et al., 2008). However, the relationship between flow and presence is strongly attention based, which suggests that it is based on flow's absorption dimension, and not its fluency dimension. As such, we posit that the absorption dimension of flow will have a direct positive relationship with presence, but the fluency dimension will not.
Fig. 1. Flow's nomological network
Findings have also shown that flow can produce future-oriented consequences, such as increasing one's intentions of engaging with the product or experience that elicited the flow state (Lavoie & Main, 2019a). Full sustained engagement in something suggests a high degree of interest and enjoyment; this relationship is what motivated Csikszentmihalyi's (1975) seminal investigations of flow, as he believed that these highly engaging experiences provide the key to happiness. The fluency-related experiences of flow may also be linked to behavioral intentions, as feelings of automaticity, control, and certainty of what to do are also enjoyable and could therefore lead to further engagement (Chambon & Haggard, 2012). Nonetheless, we argue that one's desire to engage with a product or experience again is more strongly correlated with their initial level of engagement.
On its own, the fluency dimension may not be a good predictor of engagement intentions, as a high degree of perceived fluency may simply be the result of a task being too easy. However, if fluency is accompanied by sustained absorption, it is possible to be more confident that one will want to engage in the flow-inducing activity again. This is consistent with recent findings, which show that fluency without absorption does not create the level of enjoyment for which flow is known (Lavoie & Main, 2019a). Thus, while the fluency dimension of flow may be related to positive affective outcomes, we suggest that its relationship to behavioral intentions is relatively weak and depends on whether it is accompanied by absorption-related experiences, as these are the experiences that truly indicate interest and predict behavioral intentions. In summary, we suggest that absorption, not fluency, will have the strongest relationship with behavioral intentions, including future use.
Hypothesis 3a:
The absorption dimension of flow will have a strong direct positive relationship with presence, but the fluency dimension will not.
Hypothesis 3b:
The absorption dimension of flow will have a strong direct positive relationship with behavioral intentions, but the fluency dimension will not.
To summarize, we suggest that the fluency and absorption dimensions of flow will have differing relationships with other variables. Specifically, antecedents of flow such as familiarity, skill, and task progress will be more directly related to the fluency dimension of flow, while some established consequences of flow, such as feelings of presence and behavioral intentions, will be more directly related to the absorption dimension (see Fig. 1 for the complete model).
Overview of studies
Given the dearth of empirical research on the dimensions of flow and the experiences that best illustrate them, we aim to empirically validate the two-dimensional conceptualization of flow. In Study 1a, we employ exploratory factor analysis to evaluate the dimensionality of flow based on the items of the FSS (Rheinberg et al., 2003; cf. Engeser & Rheinberg, 2008). To this end, we ask the participants to engage in either a performance-oriented activity (completing a Sudoku puzzle, N = 619) or an experiential leisure activity (listening to new music, N = 542). In Study 1b, we refine the measures of flow to create a 6-item scale, which we refer to as the Two-Dimensional Flow Scale (TDFS). The two-dimensional structure is then confirmed and the TDFS is tested in a different context in Study 2. This is followed by Study 3, which analyzes the discriminant and convergent validity of the two dimensions of the TDFS. In Studies 4 and 5, we examine the TDFS's predictive validity and investigate the antecedents and consequences of both dimensions. The relationship between the two dimensions of flow is also examined in Study 5.
Study 1a-dimensionality of flow
Study 1a investigated the dimensionality of flow via performance-oriented and experiential tasks across two samples. Our objective in this study was to determine which flow experiences best capture the emergent dimension(s) in order to develop a better understanding of their underlying psychological processes. In addition, we also sought to demonstrate that the dimensionality of flow remains consistent across both types of tasks. As noted above, the FSS (Rheinberg et al., 2003) was selected as a continuous measure of flow, and was used to test the dimensionality of flow based on its fit with our broad contexts of interest.
Sample 1 (performance-oriented) participants and procedure
The participants (619 undergraduate students, M age = 20.25, SD = 2.58, 51.5% male) in this sample were asked to work on a Sudoku puzzle. The size of both samples was determined based on Yong and Pearce's (2013) suggestion of 300 participants as the minimum sample size for valid exploratory factor analysis. Given the exploratory nature of the pretest, we sought to approximately double this minimum threshold in order to minimize error. Flow can be experienced during a variety of performance-oriented tasks that range in duration and relative difficulty. To demonstrate that the dimensionality of flow is consistent across such variations, we had the participants work on the Sudoku for either 3 (N = 191) or 8 min (N = 428), and we used two different puzzles: an easy puzzle (N = 308) and a moderately difficult puzzle (N = 311). In addition, the sample for Study 1a was comprised of participants from different countries (US N = 282, CDN N = 337). Following the task, the participants were asked to complete the FSS (Rheinberg et al., 2003). The Sudokus and full descriptions of the measures and exclusions from each study are reported in either the manuscript or the Supplementary Appendix.
Sample 2 (experiential) participants and procedure
Flow is also prevalent in everyday life (Csikszentmihalyi & Lefevre, 1989), occurring in contexts that are more experiential in nature, such as exploring content online (Novak et al., 2000; Trevino & Webster, 1992). To test the dimensionality of flow in a common experiential task, we had the participants in this sample listen to music. Specifically, we had them listen to a piece of electronic music for approximately three and a half minutes. The 542 participants (M age = 36.47, SD = 13.24, 49.8% male) consisted of undergraduate students (N = 420) and CrowdFlower panel workers (N = 122). As with the performance-oriented sample, the participants were asked to complete the FSS (Rheinberg et al., 2003) when the task was finished.
Statistical analysis
Exploratory factor analysis (EFA) with promax rotation (kappa = 4) was used to explore the dimensionality of flow within the two task conditions, while eigenvalues over one, scree tests, and parallel analysis were used to identify the number of dimensions. Costello and Osborne's (2005) guidelines for best practices were used when determining low-loadings (≤ .32), cross-loadings (≥ .32), and low communalities (≤ .40).
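For illustration, this procedure can be approximated in R with the psych package. The sketch below is not the authors' analysis script; the data frame flow_data and the column names fss_1 through fss_10 are hypothetical placeholders for the FSS item responses, and the package's default minres extraction is assumed.

```r
# EFA sketch with the psych package; flow_data and the fss_* columns
# are hypothetical placeholders for the FSS item responses.
library(psych)

items <- flow_data[, paste0("fss_", 1:10)]

# Decide the number of factors: Kaiser criterion, scree plot, parallel analysis
eigen(cor(items, use = "pairwise.complete.obs"))$values  # eigenvalues over one
fa.parallel(items, fa = "fa")                            # scree + parallel analysis

# Two-factor EFA with an oblique promax rotation (kappa = 4, the SPSS default)
efa <- fa(items, nfactors = 2, rotate = "promax")
print(efa$loadings, cutoff = 0.32)  # suppress loadings below .32 to spot cross-loadings
efa$communality                     # flag communalities below .40
```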
Results
The eigenvalues and scree tests indicated that flow consisted of two dimensions in both samples. In addition to determining the number of dimensions of flow, Study 1a also sought to understand the mechanisms that underlie these dimensions. As such, we analyzed which flow experiences were most representative of the dimensions based on the factor loadings and communality coefficients, which can be found in Table 1.
The strongest factor loadings for the first dimension to emerge belonged to the items "I felt that I had everything under control," "My thoughts ran fluidly and smoothly," and "I knew what I was doing each step of the way," which supports our theorizing that many characteristic flow experiences are the result of fluent action and thought. It is important to note that the loadings were consistent across both task conditions (i.e., performance-oriented and experiential). Given the common underlying mechanism related to fluency, we henceforth follow the theorizing of Rheinberg et al. (2003) and refer to this dimension as "fluency." The second dimension to emerge was consistent with theorizing that many flow experiences are the result of focusing one's attention on something for an extended period of time (Dietrich, 2004). Due to limited cognitive resources, focusing one's attention on a limited information set for a prolonged period of time will impede the ability to process higher-order concepts like the "self" and "time" (Dietrich, 2003). The influence of this relationship is evidenced by the fact that the item, "I lost track of time," had the strongest loading for this dimension across both tasks. Other items that loaded strongly across both tasks included "I was lost in thoughts related to the task" and "I was totally absorbed in the task," which further supports the full devotion of one's mental resources as an underlying mechanism of this dimension of flow. Given this finding, we again adopt Rheinberg et al.'s (2003) terminology and henceforth refer to this dimension as "absorption."

In an effort to develop a better understanding of the two dimensions, we also examined flow experiences that were less diagnostic, such as those with small contributions (i.e., communality below .40) or instances where there were significant overlaps between the experiences (i.e., cross-loadings exceeding .32). The results were similar across the samples with some minor differences. All fluency items met the criteria in both samples such that they had no significant overlap with absorption items, they had sufficiently high communality values, and they loaded strongly on their respective factor (although items F2 and F3 had relatively weaker loadings, most notably in the Sudoku sample). These findings suggest that the fluency dimension is represented well by the six items but that some items are less diagnostic.
In the Sudoku sample, all absorption items failed at least one criterion, as one item loaded on both dimensions ("I was totally absorbed into the Sudoku") and three items ("I lost track of time", "I was completely lost in thought", and "I felt just the right amount of challenge from the puzzle") had small communality values. However, in the music sample, only two absorption items loaded on both dimensions ("I was totally absorbed into the song" and "I felt just the right amount of challenge"). Together, the results across both samples indicate that two absorption items consistently overlapped too much with the fluency dimension or had a weak relationship with the absorption factor.
Discussion
The results of Study 1a support a two-dimensional conceptualization of flow and provide clarity regarding the processes underlying these two dimensions. Our findings show that the fluency dimension is comprised of flow experiences related to fluent thought and action, such as those demarcated by a high level of control and a smooth progression of thought, while the absorption dimension is comprised of experiences that arise from focusing one's attention on a limited amount of information for an extended period of time, for example, losing track of time. The results were also consistent across both activities (completing a Sudoku and listening to music), which suggests that flow is most appropriately captured by the two identified dimensions.
Since one of the goals of Study 1a was to distinguish the dimensions of flow and their underlying processes, we also identified flow items that poorly describe the two dimensions. Although this analysis was able to identify two distinct factors, further refinement of the scale could enable an even clearer distinction between the fluency and absorption dimensions. In Study 1b, we revise two absorption items to create new ones that more appropriately represent experiences related to the dimensions of flow. Specifically, we suggest removing the two absorption items that failed the criteria in both samples by overlapping with the fluency dimension ("I was totally absorbed into the Sudoku/song" and "I felt just the right amount of challenge"). We seek to retain and further test the other two absorption items since they had strong loadings across both activities and performed well in all metrics in one of the activities.
Study 1b-refining the dimensions of flow
Study 1b was designed with the aim of further refining the dimensions of flow. All fluency-related items from Study 1a were retained because they successfully captured the core element of fluency (i.e., things going well). However, it was necessary to add new absorption items that correspond to the underlying attention-based process (Dietrich, 2004). Four new items were developed for the absorption dimension to ensure that each dimension was assessed using an equal number of items (six).
The developed items were consistent with the findings of Study 1a, which revealed that the second dimension of flow and its characteristic experiences result from concentrating on something for an extended period of time (Dietrich, 2004). To choose the experiences that would exemplify this underlying process, we used characteristic flow experiences (i.e., the loss of self-consciousness and the merging of action and awareness) that are predicated on focusing one's full attention on something for a prolonged period of time (Csikszentmihalyi, 1975; Dietrich, 2003). As a result, we developed the following items for the music task in this pretest: "The song was the only thing on my mind," "I felt like it was just me and the song," "I was one with the song," and "I was unaware of anything else."
Participants and procedure
The sample for this study consisted of 419 TurkPrime panelists (M age = 45.00, SD = 16.50, 38.7% male). Given the consistency in dimensionality across the performance-oriented (Sudoku) and experiential (music) tasks in Study 1a, the participants performed the same experiential music task as in Study 1a, and then completed the new flow measures. In this study, sample size was determined according to Jackson's (2003) 20:1 rule (20 participants per parameter), which helps to reduce error and increase power. Since 21 parameters were estimated in Study 1b, the ideal sample size was set at 420 participants.
Statistical analysis
We explored the dimensions of flow using the same procedure as in Study 1a.
Results
The eigenvalue-over-one rule, scree tests, and parallel analysis once again revealed two dimensions of flow: fluency and absorption (see Table 2 for the new items and their factor loadings). Absorption explained 47.4% of the variance, while fluency explained 10.5%. To further refine the two dimensions, we used the same criteria as in the first pretest to eliminate experiences that make small contributions (i.e., communality below .40) and overlap with other experiences (i.e., cross-loadings exceeding .32). Specifically, three absorption items and three fluency items were withdrawn, leaving a six-item measure of flow, with three items capturing each dimension (see Table 2). These changes enhanced the clarity of the two dimensions, as the remaining items did a good job of capturing both dimensions of flow and explaining a substantial portion of the variance in flow.
Discussion
The results of Study 1b provide additional support for the two-dimensional conceptualization of flow and the processes which underlie its dimensions. The flow experiences loaded on their predicted dimensions, and the refined items helped to explain the absorption dimension; these results indicate that we have successfully captured the essence of both dimensions. The results of Study 1b also yielded a new measure of flow, which we have named the Two-Dimensional Flow Scale (TDFS).
Study 2
The results of Studies 1a and 1b supported a two-dimensional view of flow. In Study 2, we seek to test Hypothesis 1 by using a performance-oriented task to confirm the two-dimensional structure of flow.
Participants and procedure
The sample for Study 2 consisted of 186 MTurk workers (M age = 43.84, SD = 15.42, 37.1% male). Sample size was determined based on Wolf et al.'s (2013) recommendation that studies using models with two factors and three indicators per factor employ samples of at least 180 participants. In this study, participants were instructed to play an online version of Pacman for approximately 5 min, and then to complete the TDFS and some demographic questions related to age and gender.
Statistical analysis
The two-dimensional structure of flow was confirmed via confirmatory factor analysis (CFA) with IBM SPSS Amos 22. This analysis comprised a comparison of two models: the unidimensional model, wherein all indicators load directly onto the latent flow factor; and the two-dimensional model, wherein flow is a higher-order factor and fluency and absorption are first-order factors. The following three items were used to assess fluency: "My thoughts ran fluidly and smoothly," "I knew what I was doing each step of the way," and "I felt that I had everything under control." In contrast, the following three items were used to assess absorption: "I lost track of time," "I felt like it was just me and the game," and "I was unaware of anything else."
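As a sketch of how this model comparison could be reproduced outside of Amos, the lavaan package in R fits both specifications; the data frame tdfs_data and the item names flu1-flu3 and abs1-abs3 below are hypothetical. Note that with only two first-order factors, the higher-order model is statistically equivalent to a model in which fluency and absorption are simply allowed to correlate, so the sketch fits that equivalent form.

```r
# CFA sketch with lavaan; tdfs_data and the item names are hypothetical.
library(lavaan)

# Model 1: unidimensional, all six items load on a single flow factor
uni_model <- '
  flow =~ flu1 + flu2 + flu3 + abs1 + abs2 + abs3
'

# Model 2: fluency and absorption as distinct, correlated first-order factors
# (with two first-order factors, this is equivalent to the higher-order model)
two_model <- '
  fluency    =~ flu1 + flu2 + flu3
  absorption =~ abs1 + abs2 + abs3
'

fit_uni <- cfa(uni_model, data = tdfs_data)
fit_two <- cfa(two_model, data = tdfs_data)

fitMeasures(fit_uni, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
fitMeasures(fit_two, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
anova(fit_uni, fit_two)  # chi-square difference test of the nested models
```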
Results
The comparison of the single-factor model (all items loading on the latent flow factor) and the higher-order two-factor model (absorption and fluency items loading on their respective factors with flow as a second-order latent factor) revealed that the two-factor model had better fit statistics, and that these statistics were also satisfactory (i.e., CFI/TLI ≥ .95, RMSEA ≤ .06, SRMR ≤ .03; Hu & Bentler, 1999; Brown, 2006). The fit statistics for both models are presented in Table 3. These results further demonstrate that flow is comprised of two distinct but related dimensions, thus supporting Hypothesis 1.
Discussion
The results of Study 2 confirm that flow is characterized by two aspects, fluency and absorption, and further support the suitability of the two-dimensional conceptualization over the unidimensional conceptualization. Given these results, it is important to demonstrate the discriminant and convergent validity of the TDFS. This is the objective of Study 3.
Study 3
Study 3 had two primary goals. First, we sought to demonstrate the relationship between the two dimensions and three existing scales of flow: the Core Flow scale (Martin & Jackson, 2008), the MJ-FSS (Martin & Jackson, 2008), and the flow questionnaire, which focuses on the absorption-related experiences of flow (Csikszentmihalyi & Csikszentmihalyi, 1988). Our second goal in Study 3 was to demonstrate that both dimensions are related to variables characteristic of flow, and distinct from those that are inconsistent with it. The concept of mindfulness provides an interesting opportunity for juxtaposition, as some aspects of it are concordant with flow, while others are discordant. As such, we measured aspects of mindfulness that are consistent with flow (i.e., openness and de-centering), as well as positive affect, for convergent validity. In addition, we also measured aspects of mindfulness that have been shown to be inconsistent with flow (i.e., self-reflective awareness and situational awareness; Sheldon et al., 2015). People experiencing flow have been suggested to have heightened awareness (Csikszentmihalyi & Lefevre, 1989). However, since attention is a limited resource, it is not possible for them to have a heightened awareness of everything around them. Flow is associated with a heightened awareness of task-related stimuli and, as a result, a decreased awareness of everything else, including perceptions of the self (Csikszentmihalyi, 1975). Indeed, findings have demonstrated a negative relationship between flow and the self-reflective aspect of mindfulness (Sheldon et al., 2015), and that the processing of higher-order constructs, such as the "self," is thwarted during flow (Dietrich, 2004). We measure the self-reflective awareness aspect of mindfulness in order to assess discriminant validity.
On the other hand, the de-centering and openness aspects of mindfulness should be related to both dimensions of flow. De-centering refers to a shift away from personal identification with thoughts and feelings (Teasdale et al., 2002), which should allow one to experience the present moment without ruminating or reflecting on it, thereby enhancing one's ability to become absorbed in a task and to experience it fluently due to a decrease in unnecessary thoughts. Openness should have a similar effect on flow, as it represents experiential receptivity to information and stimuli and a willingness to have new experiences (Lau et al., 2006; McCrae & Costa, 1985), thus facilitating one's attention and absorption into a task without the burden of ruminative thought or worry. Lastly, since flow is also inherently enjoyable, we measured positive affect, which should converge with both dimensions.
Participants and procedure
The participants (449 MTurk workers, M age = 47.87, SD = 15.91, 36.8% male) engaged in the same music experience as in the prior studies. Of the original 520 participants, 71 were removed due to failing the attention check, which was performed by embedding the item, "I often eat cement," in one of the scales. If participants answered anything other than 1 (not at all) on a 1-7 scale, they were removed (Huang et al., 2015). A sensitivity power analysis using the R package "pwr" (alpha = .05, two-tailed, power = .80) revealed that 781 participants would be required to detect a weak correlation (r = .10), while 85 would be required to detect a medium-size correlation (r = .30). Our correlation analysis had 80% power to detect a correlation coefficient of r = .13 (weak correlation).
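The sensitivity analysis reported above can be reproduced directly with the pwr package, which the paper names; only the inputs are taken from the text, so this is an illustrative re-derivation rather than the authors' script.

```r
# Sensitivity power analysis for the Study 3 correlations (pwr package)
library(pwr)

# Required sample sizes for weak (r = .10) and medium (r = .30) correlations
pwr.r.test(r = .10, sig.level = .05, power = .80, alternative = "two.sided")
pwr.r.test(r = .30, sig.level = .05, power = .80, alternative = "two.sided")

# Smallest correlation detectable with 80% power given the final N of 449
pwr.r.test(n = 449, sig.level = .05, power = .80, alternative = "two.sided")
```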
Statistical analysis
We computed correlations between the dimensions of flow, other flow measures, and similar/dissimilar concepts in order to examine how they relate to one another. We expected the correlations to be moderate (.30 to .49) or strong (above .50) for convergent validity and weak (below .30) for discriminant validity (Campbell & Fiske, 1959; Cohen, 1988; Furr, 2017).
Results
Descriptive statistics and correlations are presented in Table 4. First, we evaluated the discriminant validity of the TDFS by examining the correlations between its two dimensions and self-awareness. Fluency and absorption were strongly related to each other and were both found to be weakly correlated with self-awareness. Together, these results support the discriminant validity of the TDFS and its dimensions, indicating that there is little to no overlap between the measures. Next, we assessed the convergent validity of the TDFS by examining its associations with similar variables and other flow scales. Our findings revealed that fluency was strongly correlated with de-centering and positive affect, and moderately correlated with openness, while absorption was found to be strongly correlated with de-centering, openness, and positive affect. Overall, the variables that were expected to be associated with fluency and absorption were strongly correlated with both dimensions (with the exception of openness, which was close to the cutoff value for a "strong" correlation), thus supporting the convergent validity of the TDFS.
Lastly, as expected, both dimensions correlated strongly with existing flow scales. Interestingly, fluency had a stronger correlation with the Core Flow scale than the absorption dimension did, while both dimensions correlated equally strongly with the MJ-FSS. Furthermore, a moderate correlation was observed between fluency and the Flow questionnaire (quote) measure, while a strong correlation was observed between absorption and the questionnaire. These findings make sense when considering the quotes in the flow questionnaire, as those used to characterize flow tend to focus on its absorption-related experiences. Overall, the correlations between the TDFS and other similar measures once again indicate that it possesses sufficient convergent validity, with the one exception being the comparatively weaker correlation between fluency and the Flow questionnaire (quote) measure, which, as noted, was expected given the absorption-focused content of the questionnaire's quotes.
Discussion
The results of Study 3 indicate that the proposed flow dimensions are related to the expected constructs, yet do not overlap with alternative variables. This supports the discriminant and convergent validity of the TDFS. Given these findings, we explore the relationships between fluency, absorption, and various antecedents and consequences in Studies 4 and 5. This inquiry is important from a theoretical standpoint, as it is critical to understand whether nuanced relationships exist between the dimensions of flow and other variables.
Study 4
The primary goal of Study 4 was to test the relationships between the two dimensions of flow and antecedents/consequences (Hypotheses 2a, 2b, 2c, 3a, and 3b). In addition, we also sought to compare our two-dimensional conceptualization with the unidimensional conceptualization (i.e., MJ-FSS) with respect to these relationships (Martin & Jackson, 2008). Finally, we aimed to provide additional support for the fluency dimension being characterized as a general perception of fluency that is rooted in both fluent thought and fluent action. To support the underlying importance of fluent thought, we seek to demonstrate a strong relationship between familiarity and the fluency dimension (Hypothesis 2a), since familiarity and processing fluency are tightly linked (Song & Schwarz, 2009). To support the underlying importance of fluent action, we also seek to demonstrate strong relationships between skills, progress, and the fluency dimension (Hypotheses 2b, 2c).
Participants and procedure
The initial sample for Study 4 consisted of 359 MTurk workers; however, 21 were omitted from the final sample due to failing the attention check, which was implemented by embedding the item, "I often eat cement," in one of our scales. Thus, the final sample contained 338 participants (M age = 36.23, SD = 10.12, 55.8% male). For this study, the participants were instructed to play a version of the Bejeweled video game until they received a "game-over" message. Prior to performing the task, they were provided with a short tutorial on how to play the game, which included text descriptions, visuals, and a live demonstration. When the game was finished, the participants were asked to complete the measures of flow, measures of the predicted antecedent and outcome variables, and demographic questions.
A sensitivity power analysis using the R package "pwr" (alpha = .05, two-tailed, power = .80) indicated that a sample of 781 observations would be required to detect a weak correlation (r = .10), while a sample of 85 observations would be necessary to detect a medium-size correlation (r = .30). Our correlation analysis had 80% power to detect a correlation coefficient of r = .15 (weak correlation). The regression analyses with flow as an antecedent had 80% power to detect an effect size of f² = .04 (small effect size; for a regression with 5 independent variables and 338 participants) and f² = .03 (small effect size; for a regression with 4 independent variables and 338 participants). The regression analyses with flow as an outcome had 80% power to detect an effect size of f² = .02 (small effect size; for regressions with 1 independent variable and 338 participants).
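The regression sensitivity figures can likewise be re-derived with pwr.f2.test, where u is the number of predictors and v the error degrees of freedom; the sketch below assumes v was computed as N - u - 1, which is the standard convention but is not stated in the text.

```r
# Sensitivity analysis for the Study 4 regression models (pwr package)
library(pwr)

# Flow as antecedent: 5 predictors, N = 338, so v = 338 - 5 - 1
pwr.f2.test(u = 5, v = 332, sig.level = .05, power = .80)

# Flow as antecedent: 4 predictors, N = 338
pwr.f2.test(u = 4, v = 333, sig.level = .05, power = .80)

# Flow as outcome: 1 predictor, N = 338
pwr.f2.test(u = 1, v = 336, sig.level = .05, power = .80)
```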
Familiarity with a task and an appropriate level of skill have been suggested as being critical antecedents to entering flow in performance-oriented tasks (Keller et al., 2011a, 2011b). We assessed familiarity with Bejeweled by asking participants, "how familiar are you with the strategies/rules of Bejeweled?" (1 = not at all, 7 = very familiar), while skill was assessed with the item, "how skilled were you at Bejeweled before playing today?" (1 = not at all skilled, 7 = very skilled). Progress was measured based on the number of gems cleared per minute, which was tracked by the game software, reported on the final screen, and recorded by the research assistant running the study.
With regards to consequences, flow has been suggested to give rise to feelings of presence (Sheldon et al., 2015) and an increased desire to engage in the task that elicited flow again in the future (Martin & Jackson, 2008). Presence was measured using the scale developed by Kim and Biocca (1997; α = .95), and engagement intentions were assessed using the question, "how likely would you be to play Bejeweled again in the next week?" (1 = not at all likely, 7 = very likely). See the Supplementary Appendix for a full description of the measures.
Statistical analysis
Hierarchical regression analysis was employed both to evaluate the relationships between the TDFS and the suggested consequences-presence and intentions to play again-and to compare its predictive power against the MJ-FSS scale. In the first step, we controlled for the effects of age, familiarity, and skill; in the second step, we entered the MJ-FSS score; and in the third step, we entered fluency and absorption as measured using the TDFS. A similar approach was employed to evaluate the antecedents to the TDFS, with game progress being entered in the first step, familiarity being entered in the second step, and skill being entered in the third step.
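A minimal sketch of this three-step procedure in base R, using nested-model F tests, is shown below; the data frame d and its column names (presence, age, familiarity, skill, mjfss, fluency, absorption) are hypothetical stand-ins for the Study 4 variables.

```r
# Hierarchical regression sketch for the presence outcome (base R);
# the same pattern applies to intentions to play again.
step1 <- lm(presence ~ age + familiarity + skill, data = d)  # control variables
step2 <- update(step1, . ~ . + mjfss)                        # add unidimensional flow
step3 <- update(step2, . ~ . + fluency + absorption)         # add TDFS dimensions

anova(step1, step2, step3)  # F tests for the R-squared change at each step

# Incremental variance explained by the TDFS beyond the MJ-FSS
summary(step3)$r.squared - summary(step2)$r.squared
```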
Results
Descriptive statistics and correlations are presented in Table 5. Fluency had weak positive correlations with absorption and presence; moderate positive correlations with intention to play the game again, progress made in the game, familiarity, and skill; and a strong positive correlation with the MJ-FSS. Absorption had strong positive correlations with the MJ-FSS and presence, and a moderate positive correlation with intention to play again. Absorption was not significantly related to progress in the game, familiarity, or skill. All correlations were in the expected directions.
The results of the regression analysis of flow outcomes (see Table 6) indicate that the two dimensions are related to outcomes, such as presence and intention to play again, in different ways. Fluency was weakly and negatively related to presence (β = − .14, p < .05) and not related to intention to play again (β = .03, n.s.). Absorption was strongly positively related to presence (β = .64, p < .001) and moderately positively related to intention to play again (β = .32, p < .001). Compared to the MJ-FSS, the TDFS explained nearly twice as much variance in presence (an additional 31% after controlling for the MJ-FSS), and slightly more variance in intentions to play again (an additional 7% after controlling for the MJ-FSS).
The results of the regression analysis for flow antecedents (see Table 7) indicated that progress in the game was not related to either of the flow dimensions, and that familiarity (β = .20, p < .01) and skill (β = .33, p < .001) were only related to fluency. Familiarity and skill explained a total of 27% of the variance in fluency.
Discussion
The results of Study 4 demonstrate that the two dimensions of flow have nuanced relationships with its established antecedents and consequences. The antecedents to flow (i.e., familiarity and skill) were found to have significant positive relationships with the fluency dimension, but not with the absorption dimension. In contrast, presence and engagement intentions, which are consequences of flow, had significant relationships with the absorption dimension, but weaker or non-significant relationships with the fluency dimension. The results also provided initial insight into the nature of the fluency dimension, particularly the role of task progress. While progress was positively correlated with the fluency dimension, skill and familiarity captured its variance when included in the regression equation. This is consistent with our theorizing that progress is a function of skill and familiarity. Since familiarity and skill explained more variance, these findings also suggest that they may exert a greater influence on fluent processing than progress. That is, those who are familiar with and skillful at a task may be less dependent upon making progress in order to feel ease in thought and behavior. This finding is consistent with the relationship between familiarity and ease of processing (Song & Schwarz, 2009). In Study 5, we apply a different context to further explore the roles of familiarity, skill, and progress in eliciting flow.
Study 5
Study 5 had two primary goals. First, we sought to replicate the nuanced relationships between the dimensions of flow and its antecedents and consequences in a new context, as we wanted to be certain that the differences in the investigated relationships were not a product of the specific task in Study 4. The second goal of Study 5 was to gain further insight into the process through which flow happens given Study 4's finding that antecedents were strongly associated with fluency, but not with absorption. This finding hints at a potential process wherein the fluency dimension is critical to experiencing absorption, which is consistent with the findings of nascent research suggesting that absorption possesses an emergent, time-based element (Lavoie & Main, 2019a).
We posit that skill is an important antecedent to flow because it gives rise to fluent action, as captured by the fluency dimension, which allows one to maintain focus on the task for long enough to become absorbed. This is important to note, as skill should not elicit absorption directly; rather, it should only do so indirectly through the fluency dimension. We test this potential process of entering flow and hypothesize that fluency-related experiences will play an important role in the emergence of absorption-related experiences. Stated formally,
Hypothesis 4:
Fluency will mediate the relationship between skill and absorption.
Participants and procedure
The sample for this study was comprised of 256 undergraduate students (M age = 19.91, SD = 5.67, 59.8% male). Study 5 employed the same procedure as Study 4, only this time the participants were instructed to play an online version of the video game Tetris instead of Bejeweled. As in Study 4, participants were given a short tutorial on how to play, including a text description of the game. After receiving the "game over" notification, the participants were asked to complete the TDFS, measures of the antecedent and consequence variables, and demographic questions. A sensitivity power analysis using the R package "pwr" (alpha = .05, two-tailed, power = .80) indicated that a sample size of 134 observations would be required to detect a weak correlation (r = .10), while 49 observations would be required to detect a medium-size correlation (r = .30). Our correlation analysis had 80% power to detect a correlation coefficient of r = .18 (weak correlation). The regression analyses with flow dimensions as an antecedent had 80% power to detect an effect size of f² = .05 (small effect size; for a regression with 5 independent variables and 248 participants). The regression analyses with flow as an outcome also had 80% power to detect an effect size of f² = .05 (small effect size; for regressions with 3 independent variables and 246 participants).
The antecedents were measured using the same approach as Study 4: skill was assessed using the question, "how skilled were you at Tetris before playing today?" (1 = not at all skilled, 7 = very skilled), and familiarity was measured by asking, "how familiar are you with the strategies/rules of Tetris?" (1 = not at all, 7 = very familiar). We also used a progress measure that asked the participants what level they had reached in Tetris; this measure consisted of an open-ended question that allowed the participants to enter a number directly. Consequences were measured using the same method as in Study 4: presence was assessed using 5 items (α = .96; Kim & Biocca, 1997), and engagement intentions were measured by asking "how likely would you be to play Tetris again in the next week?" (1 = not at all likely, 7 = very likely).
Results
Descriptive statistics and correlations are presented in Table 8. As can be seen, fluency was strongly and positively correlated with absorption, moderately and positively correlated with skill, presence, and progress, and weakly and positively correlated with familiarity and intention to play again.
The results of the regression analysis of flow outcomes (see Table 9) replicated those of Study 4, with only absorption being related to presence and intention to play again. Specifically, participants who reported higher levels of absorption also reported a greater sense of presence (β = .59, p < .001) and stronger intentions to play the game again (β = .17, p < .05). These effects were observed beyond the effects of familiarity and skillfulness, explaining 35% of the variance in presence and 18% of the variance in intention to play again.
The results of the regression analysis of flow antecedents (see Table 10) showed a positive relationship between skill and fluency (β = .23, p < .01). As we hypothesized, progress was positively related to both fluency (β = .27, p < .01) and absorption (β = .18, p < .01), though its relationship with fluency was stronger. Familiarity was not related to either dimension in this study. These variables explained a total of 20% of the variance in fluency and 5% of the variance in absorption. Although no relationship was observed between skill and the absorption dimension of flow, we suggest that they will be indirectly related due to skill's ability to facilitate the fluency dimension of flow. In order to test this relationship, we ran a mediation model with the fluency dimension mediating the relationship between skill and absorption using Model 4 of the PROCESS macro in SPSS (Hayes, 2018). As expected, the results revealed that the fluency dimension mediated the relationship between skill and absorption. Specifically, a significant indirect effect was observed [conditional effect = .21, SE = .04, 95% CI: .14, .29], such that increases in skill resulted in increased fluency [conditional effect = .34, SE = .05, 95% CI: .23, .45], which in turn predicted the absorption dimension of flow [conditional effect = .63, SE = .06, 95% CI: .52, .74].
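As an illustration of how this indirect effect could be estimated outside of SPSS, the lavaan package supports an equivalent bootstrapped simple-mediation model; the data frame d and the variable names skill, fluency, and absorption are hypothetical stand-ins for the Study 5 measures.

```r
# Simple mediation sketch (skill -> fluency -> absorption) with lavaan;
# mirrors PROCESS Model 4 with bootstrapped confidence intervals.
library(lavaan)

med_model <- '
  fluency    ~ a * skill                  # a path: skill -> fluency
  absorption ~ b * fluency + cp * skill   # b path and direct effect
  indirect := a * b                       # indirect effect of skill via fluency
  total    := cp + (a * b)                # total effect
'

fit <- sem(med_model, data = d, se = "bootstrap", bootstrap = 5000)
parameterEstimates(fit, boot.ci.type = "perc")  # percentile bootstrap CIs
```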
Discussion
The results of Study 5 provide further support for the assumption that nuanced relationships exist between the two flow dimensions and other variables. As in Study 4, the antecedents of flow (i.e., skill and familiarity) had significant positive relationships with its fluency dimension. Furthermore, the results related to the consequences of flow also aligned with those of Study 4, as significant relationships were observed between the established consequences of flow (i.e., presence and engagement intentions) and the absorption dimension, while only weak or non-significant relationships were observed with the fluency dimension. Moreover, the findings further elucidated the nature of the fluency dimension by foregrounding the importance of fluent action, as skills and progress were significantly related to the fluency dimension.
The results of Study 5 also provide insight into flow's underlying processes. In particular, the mediation results confirm that absorption is an emergent state that is made possible by progress in a task, which is captured by the fluency dimension of flow. Skill, an established antecedent of flow, directly enhances the fluency dimension, but is only indirectly associated with the absorption dimension via the fluency dimension. Interestingly, the correlation between the fluency and absorption dimensions was much stronger in this study (.59) than in Study 4 (.18). Given that the greater variety of potential actions in Tetris makes it more difficult than Bejeweled (the flow-inducing task in Study 4), it is possible that the dimensions of flow converge during tasks that are more challenging, and thus closer to deepflow.
General discussion
The results of this research advance the conceptualization of flow in several ways. First, they clarify the dimensionality of flow, providing evidence of a two-dimensional conceptualization comprised of "fluency" and "absorption" dimensions. This research also helps to clarify the nature of these dimensions. As we have shown, the fluency dimension is associated with fluent thought and fluent action, while the absorption dimension is emergent in nature and driven by sustained attention to the focal activity. Importantly, we demonstrate that the two dimensions exist across contexts (i.e., performance-oriented and experiential leisure activities), and have nuanced relationships with antecedents and consequences of flow. The results of this research not only advance flow theory, but they also serve as a foundation for future research aimed at exploring the nomological network of flow and measuring it more precisely. Lastly, we develop an initial understanding of how the two dimensions relate to each other in the emergence of flow. In particular, our mediation findings suggest that the fluency dimension of flow can mediate the absorption dimension.
Our demonstration of the nuanced relationships between the dimensions of flow and its established antecedents and consequences has several theoretical and empirical implications. First, it illustrates the importance of properly measuring the two dimensions when assessing the relationship between flow and other variables. Our results suggest that treating flow as a unidimensional construct may obfuscate its relationships with other variables. While the antecedents of flow may have a strong relationship with its fluency dimension, thus facilitating it, their weak direct relationship with the absorption dimension may cause the overall relationship to seem weaker.
Notably, we show a nuanced relationship between flow and presence. Given their conceptual similarity, our research contributes to the nascent literature aimed at distinguishing these concepts from one another (e.g., Weibel et al., 2008). Specifically, we demonstrate that the fluency dimension of flow is distinct from presence. Similarly, our findings also help clarify the relationship between flow and mindfulness. While several research efforts and lay thought often conceive of the two concepts as being highly related (e.g., Kee & Wang, 2008), other studies have shown that they are discordant (Sheldon et al., 2015). Our research suggests that some aspects of mindfulness (i.e., decentering and openness) are important for flow, while others (e.g., self-reflective awareness) can thwart it.
Our results also provide greater clarity regarding the fluency dimension, which is based on a general perception of ease and control, fluent thought, and/or fluent action. While the wording of the fluency items on the TDFS captures both fluent thought and action (e.g., "I knew what I was doing"), our findings reveal that fluent thought is especially important, as it can occur with little-to-no physical action and can give rise to the fluency dimension of flow on its own. Taken alongside evidence from performance domains that fluent thought underlies fluent action (Ilundáin-Agurruza, 2015), our results suggest that fluent thought may be at the core of this dimension. That is, one is unlikely to sustain fluent action without fluent thought, but it is possible to sustain fluent thought without fluent action.
The suggested importance of fluent thought in leisure activities also illuminates the potential relationships between flow and constructs like processing fluency (Reber et al., 1998). Our finding that progress contributes to the fluency dimension of flow also advances current understandings of the relationship between performance and flow, as prior findings have suggested that performance is a consequence of flow (Engeser & Rheinberg, 2008). However, we highlight the importance of performance in giving rise to flow since it is generally equated with smoothly progressing through a task.
Our mediation findings highlight that the emergence of flow is two-dimensional in nature, which represents a potentially important contribution to facilitating future research on flow. In particular, our demonstration of the mediating roles of fluency and efficiency of progress advances current understandings of how to manipulate flow (Kulkarni et al., 2016). Our process findings suggest that fluency-related experiences (e.g., control, automaticity) could be manipulated to induce flow, as they can give rise to absorption-related experiences. The ability to manipulate flow via fluency-related experiences provides a viable alternative to the skills-challenge balance approach, which is currently the only established manipulation of flow (e.g., Keller & Bless, 2008); as a result of this reliance on a single manipulation, experimental research on flow has been limited, especially in contexts that are not traditionally skills-based.
Our two-dimensional conceptualization of flow allows it to be studied and measured in experiences not typically associated with skill and performance. This is important because there is growing evidence of flow in more passive tasks, including those that are more experiential in nature and not typically associated with skill (e.g., Lavoie & Main, 2019b; Novak et al., 2003). It is possible to experience flow during tasks that do not require physical interaction, as the dimensions of flow (i.e., absorption and fluency) can be elicited through the senses (e.g., seeing and hearing) and subsequent psychological processing alone. For example, one can be fully absorbed and perceive a high level of fluency while reading a book. Similarly, while watching a new movie, one can become absorbed in the content and perceive degrees of progress related to learning and/or satisfying curiosity depending on how the story is structured (Van Laer et al., 2014). Likewise, our results advance the current conceptualizations of flow by shifting emphasis away from the match between high skill level and high task demands, which is perhaps the most commonly understood characteristic of flow (Csikszentmihalyi, 1975; Keller & Bless, 2008).
Importantly, our revised conceptualization of flow accounts for the role of matching skills with task demands. Matching skills with task demands is one way to develop high levels of absorption and fluency, but it is not the only way to do so, and it is not a particularly strong measure of either dimension. Removing the focus on skills enables the study of flow in contexts not traditionally understood as skill and performance-based, and increases its relevance by broadening our understanding of what could be considered a flow state. For example, scrolling through social media can give rise to a flow state (Hamilton et al., 2016).
Clarifying the dimensionality of flow is also important to ensure that it is measured properly. Our results suggest that it is not possible to adequately measure flow using the common method of assessing certain components (e.g., time distortion and balance of skills with task demands), as these items do not capture both dimensions (e.g., Keller & Bless, 2008). Therefore, the TDFS developed in this paper represents a significant empirical contribution for future research, as it will enable the accurate measurement of flow in leisure activities.
With regards to the proper measurement of flow, we think it is important to highlight the similarities between the TDFS and the FSS (Rheinberg et al., 2003) to illustrate the novelty of our scale. While we removed the three weakest fluency items from the FSS, we retained the three strongest items; thus, the fluency dimension of the TDFS is comprised entirely of items from the FSS. Conversely, the absorption dimension of the TDFS retains one item from the FSS ("I lost track of time"), but replaces the others. Thus, the TDFS retains the strongest fluency experiences from the FSS, while providing a new approach to capturing the absorption dimension.
Lastly, our demonstration of flow's two-dimensional structure raises the question of when we can speak of flow as a specific state. We recommend that terminology remain at the general, unidimensional level when referencing flow, with distinctions between dimensions being reserved for conceptual and measurement purposes. We believe that flow is achieved when a certain degree of absorption and fluency are obtained together. Both absorption and fluency exist on a subjective continuum, with increases or decreases in each bolstering or thwarting flow, respectively. It is important to note that both elements are necessary but not sufficient on their own in order for one to experience flow. For example, one could be fully absorbed in an activity but fail to experience flow due to insufficient fluency.
The TDFS represents an important contribution because it provides a more precise understanding of which dimension of flow is being limited. This information can then be used to provide insight into whether it is necessary to increase fluency or absorption in order to foster flow. We do not think it is appropriate to suggest specific values marking the flow threshold, as we expect this threshold to differ across people and contexts. Thus, future research should explore the various combinations of fluency and absorption with regards to flow thresholds. We discuss the limitations of our research and the additional opportunities for future research next.
Limitations and future research
Our results have several limitations that could provide opportunities for future research. Most notably, although some of the activities used in our studies were more active and performance-focused than others, they were ultimately all leisure-based. This is important because differences have been shown across flow states in leisure and work activities. In particular, flow states in work activities emerge in the presence of more negative emotional activation/arousal (e.g., feeling stressed, nervous) compared to flow states in leisure activities (Engeser & Baumann, 2014). The nuanced emotional qualities of work activities, particularly increased levels of negative emotional activation/arousal, may lead to differences in the dimensionality of flow and the process through which it emerges. While we are confident in our findings related to the dimensionality of flow, the TDFS and our conclusions related to the process through which flow emerges are ultimately limited to leisure-based activities; as such, the TDFS should be used appropriately in future research.
Relatedly, the activities that were used in our studies to generate the TDFS limit our ability to assess its superiority over prior measures. Since our comparison of the TDFS and prior measures of flow was limited to leisure activities, future research could test the effectiveness of the two measures across several other contexts, including work activities. In addition to testing the different measures, future research could also explore the dimensionality of flow and the relationships between the dimensions in work activities.
Another limitation of this research is the specificity of the antecedents and consequences that were studied. While our results suggest that antecedents to flow may be more related to the fluency dimension, it is possible that some are more directly related to absorption, such as individual differences in action orientation (Baumann et al., 2016). The roles of emotion and body movement could also prove fruitful for exploring the antecedents of flow given their importance related to absorption (Jantzen et al., 2012; Murphy et al., 2018). Individual differences, which are suggested to give rise to flow and its physiological markers, may also have nuanced relationships with the dimensions of flow (Keller et al., 2011a, 2011b; Peifer et al., 2014; Teng, 2011).
Our results also provide preliminary support for the relationship between the dimensions of flow and lay the groundwork for future research exploring the process through which flow develops based on these two dimensions. We demonstrate one way in which the two sets of experiences that comprise flow are related, but we do not suggest that this is the only way. It is possible that the absorption-related experiences of flow could also give rise to and sustain the fluency-related experiences. For example, things more directly related to the absorption dimension, such as focused concentration, could increase perceived fluency from a processing perspective by calming other disruptive processes, such as negative emotions (Dolcos et al., 2020). Future research should explore the role that the absorption-related experiences of flow play in facilitating and sustaining its fluency-related experiences.
The TDFS is further limited in that it is based on self-reporting, which is sometimes difficult or impossible to obtain. A new behavioral measure of flow should be developed so that observers could determine flow in such instances. As discussed earlier, sustained fluent action may be a proxy for fluent thought; thus, the degree to which people make "smooth" progress without errors could be used to capture the fluency dimension. Our findings also suggest that the absorption dimension could be assessed using physical cues related to sustained visual attention and body language.
Given our findings supporting the relationship between the two dimensions in eliciting flow, it will be important to develop alternative methods of manipulating flow. Future research should explore a variety of fluency-related enablers to determine which ones are most effective for manipulating flow in leisure contexts. Exploring the processes through which fluency facilitates absorption will help to advance the emerging literature related to the process of flow (Kawabata & Mallett, 2011). It would also be worthwhile to explore how the two dimensions are related in sustaining flow. The distinctiveness of each dimension may become compromised during sustained deepflow states due to the two dimensions merging and supporting each other after being held for a prolonged period. It is possible that the importance of fluency may have been more pronounced in the leisure studies in this research, and that absorption may play a more critical role in sustaining fluency during deepflow.
Accordingly, future research should also explore the dimensionality of deepflow states, as our research was limited to relatively shorter microflow states. The differences between deepflow and microflow, particularly the time and difficulty components, may lead to different results. Perhaps the unidimensional view of flow is more appropriate for deepflow states, as the experiences of ease and attention, and the relationship between them, becomes symbiotic after concentrating for long enough. This is consistent with the differences in the correlations between the two dimensions across Studies 4 and 5, with the relationship being stronger in Study 5, which utilized a more difficult flow-inducing task. This result makes sense, as concentrating on one task for long enough should increase the ease with which one is able to process related stimuli. Moreover, the progress made within deepflow states is relatively more important to the person than it would be in microflow states and may have a more direct role in driving absorption. Alternatively, the relationship between the two dimensions of flow was not as unified in the shorter microflow states explored in this research, as it is possible to experience fluency without yet being fully absorbed. In summary, we suggest that exploring the relationship between the dimensions of flow in both deepflow and microflow states would be a fruitful line of inquiry, and that our research provides many avenues to do so, most notably through our two-dimensional conceptualization.
Conclusion
While previous research has conceptualized flow in various ways, this research reveals that flow in leisure activities is most appropriately conceptualized as consisting of two dimensions: fluency and absorption. In separating flow into these two dimensions, we show that experiences related to absorption play a larger role in driving typical consequences of flow, and that some typical antecedents of flow are more directly related to the fluency dimension. Efforts to measure flow solely based on matching skills and task demands, or those based on variables more associated with the absorption dimension, are problematic, as our research demonstrates that both dimensions are necessary to achieve flow. Finally, the TDFS is best suited for capturing the two dimensions of flow during leisure activities, and should therefore be used in such contexts in future research.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-11-07T16:18:28.115Z | 2021-11-05T00:00:00.000 | {
"year": 2021,
"sha1": "f383b6ad42b629d7b72c1b6ac374e2ad104d68b3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11031-021-09911-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "20a1809f2bde236a35a8073aaf7b2d1d8f81d1b4",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
237625377 | pes2o/s2orc | v3-fos-license | Exploration of the H2O2 Oxidation Process and Characteristic Evaluation of Humic Acids from Two Typical Lignites
To study the effect of H2O2 on the content and properties of humic acids (HAs) in lignites, the experimental conditions, including oxidation time, H2O2 concentration, and the solid–liquid ratio, were investigated. Under the optimum oxidation conditions, the HA contents of YL and HB lignite were 45.4 and 40.9%, respectively. The HAs extracted from oxidized and raw lignites were characterized and compared. The results showed that the HAs extracted from oxidized lignites contain more total acidic groups, carboxyl groups, and aliphatic carbon than HAs extracted from raw lignites, and their hydrophilic–hydrophobic index values are higher and their thermo-oxidative stability is better than those of HAs extracted from raw lignites. In addition, the composition of polycyclic aromatic hydrocarbons and the fluorophore types in HAs extracted from oxidized lignites are similar to those in HAs extracted from raw lignites. The results indicated that the oxidation operation can increase the content of HAs in lignites and simultaneously increase the content of oxygen-containing functional groups and the biological activity of HAs, which provides a reference for the subsequent application of HAs.
INTRODUCTION
Lignites are low-rank coals, and their reserves are abundant in China. 1 Generally, lignites are characterized by a low calorific value, low stability, high moisture, and high ash content, 2 and these shortcomings reduce their effective utilization as fuels in practice. The oxygen content in lignites is relatively high, and the main forms of oxygen-containing functional groups are hydroxyl, methoxy, phenolic hydroxyl, ether, carbonyl, and carboxyl groups; 3 these oxygen-containing functional groups have a great influence on the chemical properties and applications of lignites. More attention should be paid to the efficient and clean utilization of lignites, including the production of clean fuels, high value-added products, and carbon materials; among these routes, the extraction of HAs is an important way to increase the added value of lignites.
HAs are natural organic polymers that are widely found in peats, lignites, leonardites, soil, and water. The humic substances in lignites are traditionally classified into three categories according to their alkaline and acid solubility: alkali-insoluble; alkali-soluble but acid-insoluble; and alkali- and acid-soluble components, corresponding to humin, humic acids, and fulvic acids, respectively. 4,5 HAs are rich in a variety of oxygen-containing functional groups, such as carboxyl, hydroxyl, phenolic hydroxyl, carbonyl, and methoxy groups. 6 The chemical and physiological activities of HAs are directly related to their molecular weight, structural characteristics, and the type and number of oxygen-containing functional groups. 7 HAs have many properties, such as exchange, adsorption, complexation, and chelation with metal ions, 8,9 and are widely used in agriculture, 10−12 industry, 13 environmental management, 14−16 medicine, 17,18 and other fields. 19,20 Therefore, research on HAs is of great significance.
HAs are insoluble under acidic conditions but can be dissolved and extracted in alkaline solutions. 4 The main methods of extracting HAs from coals are alkaline extraction, acid extraction, and microbial dissolution. 6,21−23 Because the content of free HAs in lignites is low, the yield of directly extracted HAs is also low. To improve the yield and activity of HAs, researchers have pretreated lignites by air thermal oxidation, oxidant oxidation, mechanical activation, and other methods, 24−31 and the resulting changes in HA structure have been investigated. According to the literature, 9,24,32 oxidation of lignites by H2O2 to produce HAs is an important way to increase the additional utilization value of lignites.
In this study, two lignites from different sources were oxidized and degraded by H2O2 at room temperature, and HAs were extracted by NaOH dissolution followed by HCl precipitation. On the premise of environmental friendliness, the optimal process conditions for the room-temperature oxidation of lignites with H2O2 to increase the HA content were determined. Subsequently, the differences in the physicochemical properties of the extracted HAs were analyzed by proximate analysis, ultimate analysis, thermogravimetric−derivative thermogravimetry analysis (TG-DTG), and acidic functional group analysis. 33−37 TG-DTG can be used to study the thermal properties of HAs (including thermo-oxidative behavior and thermal stability); the thermal properties and the content of acidic functional groups affect the application of HAs. Spectral analyses included Fourier transform infrared spectroscopy (FTIR), fluorescence spectroscopy (FS), NMR, and X-ray photoelectron spectroscopy (XPS): FTIR is often used to analyze the functional groups in HAs, NMR to analyze the types of carbon in HAs, XPS to analyze the surface elements of substances, and FS is a sensitive method for studying the structure of macromolecules. 4,7,21,23,24,38−44 The composition and properties of HAs extracted from H2O2-oxidized and raw lignites were studied comprehensively through a combination of multiple analysis methods. This study aimed to improve the content of HAs by H2O2 oxidation, provide a reference for the preparation of high-quality HAs from lignites, and provide a theoretical basis for the further application of HAs.
EXPERIMENTAL SECTION
2.1. Materials. In the experiments, two kinds of lignite, from Yiliang (YL) in Yunnan (the young lignite) and Hulunbeir (HB) in Inner Mongolia (the old lignite), were used as the raw materials. The total HA content in the lignites was determined by the volumetric method of Chinese standard GB/T 11957-2001 and was 27.8 and 23.9%, respectively. Before the experiments, each lignite was crushed and sieved through an 80-mesh inspection sieve. The lignites were air-dried to a constant weight and then sealed in sample bags for later use. Hydrogen peroxide (H2O2) and sodium hydroxide (NaOH) were purchased from Tianjin Fengchuan Chemical Reagent Technology Co., Ltd., and hydrochloric acid (HCl, 37%) and sulfuric acid (H2SO4, 98%) were purchased from Chongqing Chuandong Chemical (Group) Co., Ltd. All reagents used were of analytical grade unless otherwise stated.
2.3. Preparation of HAs. Alkali-soluble acid separation is a common method for the extraction of HAs. 4,21 The YL lignite was added to a 0.39 mol/L NaOH solution at a liquid−solid ratio of 32:1 mL/g and reacted at room temperature for 4.1 h. The HB lignite was added to a 0.31 mol/L NaOH solution at a ratio of 20:1 mL/g and reacted at room temperature for 5.6 h. The supernatant was collected by centrifugation at 6000 rpm for 5 min, and the residue was washed with deionized water three times. The combined supernatant was filtered with a Büchner funnel. Finally, the filtrate was acidified with HCl to pH < 2, aged for 24 h, and then centrifuged to separate the HA (precipitate) fraction. The precipitate was slurried with distilled water, transferred to a dialysis bag (MWCO 3500 D), and dialyzed against regularly exchanged distilled water for about 20 days. All of the extracted products were freeze-dried for further characterization and analysis, and the yield of HAs was calculated according to eq 1:

yield (%) = m1/(m2 × a) × 100 (1)

where m1 is the mass of the HA product, g; m2 is the mass of the lignite, g; and a is the total mass fraction of HAs in the lignite. 27,36
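As a worked illustration of eq 1, the short Python sketch below computes the HA yield from the product mass, the lignite mass, and the total HA fraction of the lignite; the function name and the sample numbers are hypothetical, not values taken from this work.

def ha_yield(m_product_g, m_lignite_g, ha_fraction):
    # Yield of HAs per eq 1: m1 / (m2 * a) * 100, expressed in %.
    # m_product_g: mass of the freeze-dried HA product (m1), g
    # m_lignite_g: mass of lignite charged (m2), g
    # ha_fraction: total HA mass fraction of the lignite (a), e.g. 0.278 for YL
    return m_product_g / (m_lignite_g * ha_fraction) * 100.0

# Hypothetical example: 1.26 g of HAs recovered from 10 g of YL lignite
print(f"{ha_yield(1.26, 10.0, 0.278):.1f} %")  # -> 45.3 %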
2.4. Chemical and Structural Analysis. 2.4.1. Proximate and Ultimate Analysis. The moisture content was determined by drying in a vacuum drying oven at 105−110 °C for 2 h. The ash and volatile contents of the samples were determined automatically with an intelligent muffle furnace (model 5E-MF6100K, Changsha Kaiyuan Instrument Co., Ltd.) at 815 and 900 °C, respectively. The fixed carbon content was obtained by difference. Ultimate analysis was performed with a Vario MACRO cube analyzer (Elementar, Germany): C, H, N, and S were measured directly, and the O content was obtained by difference.
2.4.2. Determination of Oxygen-Containing Functional Groups. The total acidic functional group content was determined by the barium hydroxide method. 36,45 About 200 mg of sample was suspended in 25 mL of 0.05 mol/L Ba(OH)2 solution, stirred for 48 h at room temperature, filtered, and rinsed three times with CO2-free distilled water. The ion-exchanged sample was suspended in 25 mL of 0.1 mol/L HCl added in advance and titrated with 0.1 mol/L NaOH standard solution. Blank experiments were performed concurrently to calculate the total acidic group content.
The carboxyl group content of the HAs was determined by the calcium acetate method. The procedure was similar to that for the total acidic group determination, except that the exchange solution was 25 mL of 0.5 mol/L Ca(OAc)2 instead of 0.05 mol/L Ba(OH)2, and the suspension was titrated directly with 0.1 mol/L NaOH standard solution. The phenolic hydroxyl group content was obtained by subtracting the carboxyl group content from the total acidic group content, as illustrated in the sketch below.
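To make the back-titration arithmetic concrete, the sketch below shows one plausible way to compute the group contents: the acidic-group content is taken as proportional to the difference between the blank and sample NaOH consumptions, and the phenolic hydroxyl content follows by subtraction as stated above. The formula and all volumes are illustrative assumptions; the exact expression depends on the titration standard the authors followed.

def groups_mmol_per_g(v_blank_ml, v_sample_ml, c_naoh_mol_l, m_sample_g):
    # Back-titration (illustrative form): the blank/sample difference in
    # consumed NaOH is attributed to ion exchange on the acidic groups.
    return (v_blank_ml - v_sample_ml) * c_naoh_mol_l / m_sample_g

total_acidic = groups_mmol_per_g(22.5, 16.1, 0.1, 0.200)   # Ba(OH)2 exchange
carboxyl     = groups_mmol_per_g(22.5, 18.9, 0.1, 0.200)   # Ca(OAc)2 exchange
phenolic_oh  = total_acidic - carboxyl                     # by difference
print(total_acidic, carboxyl, phenolic_oh)                 # mmol/g (illustrative)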
2.4.3. Spectroscopic Analysis. 2.4.3.1. FTIR Spectroscopy. A Fourier transform infrared spectrometer (Nicolet iS50) was used to determine the functional groups of the HAs. Samples were measured as KBr pellets; the spectral resolution was 4 cm−1 and the scanning range was 4000−500 cm−1.
2.4.3.2. Fluorescence Spectroscopy (FS). A 10 mg/L HA solution in 0.05 mol/L NaOH was prepared for testing. A Hitachi F-4600 fluorophotometer was used to test the HA samples; the slit width was 5 nm and the scanning speed was 120 nm/min when measuring the excitation and emission wavelengths.
2.4.3.3. CP/MAS 13C NMR Spectroscopy. Nuclear magnetic resonance (NMR) spectra of the HAs were recorded on a Bruker AVWBIII600 spectrometer. The spinning speed was 14 kHz and the resonance frequency was 150.9 MHz.
2.4.3.4. X-ray Photoelectron Spectroscopy (XPS). A multifunction scanning imaging photoelectron spectrometer (PHI 5000 VersaProbe II) was used to analyze the surface elements of the HAs. The instrument power was 50 W, the voltage was 15 kV, an Al target (hν = 1486.6 eV) was used as the X-ray source, and the pass energy of the hemispherical analyzer was 46.95 eV; the C 1s peak at a binding energy of 284.8 eV was used for charge correction.
2.4.3.5. TG-DTG Analysis. Thermogravimetric−derivative thermogravimetry (TG-DTG) analysis of the samples was carried out on a NETZSCH STA 449F3 instrument. The experiments were run in air at a heating rate of 10 °C/min over the range of 20 to 900 °C.
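Because the DTG curve is simply the temperature derivative of the TG mass signal, a short numerical sketch may help readers reproduce TG-DTG-style curves from raw data; the synthetic three-step weight-loss curve below is purely illustrative and does not reproduce the measured data.

import numpy as np

# Illustrative synthetic TG curve: three weight-loss steps (moisture,
# functional-group decomposition, aromatic breakdown) modelled as sigmoids.
T = np.linspace(20.0, 900.0, 881)                     # temperature, degC
step = lambda t0, w, loss: loss / (1.0 + np.exp(-(T - t0) / w))
mass = 100.0 - step(90, 15, 6) - step(400, 25, 30) - step(520, 25, 40)  # TG, %

dtg = np.gradient(mass, T)                            # DTG, %/degC
print(f"maximum weight-loss rate near {T[np.argmin(dtg)]:.0f} degC")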
RESULTS AND DISCUSSION
3.1. Extraction Conditions of HAs. Based on the single-factor experiments, the optimal oxidation conditions are shown in Table 1. The content of HAs increases after H2O2 oxidation; it is speculated that the oxidation converts some macromolecular aromatic substances in the lignites into small-molecule acids. 46,47 The HAs extracted from lignites oxidized under the optimum conditions and from the raw lignites were used for the following analyses to guide subsequent industrial production.
3.2. Proximate and Ultimate Analysis. The proximate and ultimate analyses of the four HAs extracted from oxidized and raw lignites are shown in Table 2. The proximate analysis shows that the ash content of YLHA is relatively high. The ultimate analysis shows that, after oxidation, the carbon content of the HAs extracted from the lignites decreases while the oxygen content and the O/C ratio increase, indicating that the HAs extracted from oxidized lignites contain more oxygen-containing functional groups. The H/C atomic ratio can be used as an index of the aromaticity of HAs: 4,48 the larger the H/C ratio, the lower the aromaticity. On this basis, the aromaticity of the four HAs follows the order OHB > HBHA > YLHA > OYL. In addition, the O/C atomic ratio mainly reflects the content of oxygen-containing functional groups, 36,37 which follows the order OYL > YLHA > OHB > HBHA. The N/C atomic ratio reflects the nitrogen content of HAs and is generally less than 0.05 for HAs from lignite. 48 The N/C ratios of the four HAs are all within this range.
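The H/C, O/C, and N/C indices discussed here are atomic ratios, not mass ratios, so the mass percentages from the ultimate analysis must first be divided by the atomic masses. The sketch below shows this standard conversion; the input composition is hypothetical rather than taken from Table 2.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}

def atomic_ratio(pct_x, pct_c, element):
    # Atomic X/C ratio from ultimate-analysis mass percentages (wt %).
    return (pct_x / ATOMIC_MASS[element]) / (pct_c / ATOMIC_MASS["C"])

# Hypothetical HA composition (wt %): C 55.0, H 4.5, O 35.0, N 1.5
print(f"H/C = {atomic_ratio(4.5, 55.0, 'H'):.2f}")   # lower -> higher aromaticity
print(f"O/C = {atomic_ratio(35.0, 55.0, 'O'):.2f}")  # higher -> more O-containing groups
print(f"N/C = {atomic_ratio(1.5, 55.0, 'N'):.3f}")   # typically < 0.05 for lignite HAs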
3.3. Determination of HA Functional Groups. The acidic functional groups in HAs play a significant role in their application. Therefore, the acidic functional groups of the four HAs were compared; the results are shown in Table 3. The total acidic group content follows the order OYL > OHB > YLHA > HBHA, and the carboxyl group content follows the order OYL > YLHA > OHB > HBHA. The contents of total acidic groups and carboxyl groups in OYL and OHB both increase, indicating that H2O2 oxidation can increase the active groups in HAs and thereby the molecular activity of HAs.
3.4. FTIR Spectroscopy Analysis. FTIR was used to analyze the functional groups of the four HAs, and the spectra are shown in Figure 1. The four HAs have almost the same characteristic peak positions, which demonstrates that they contain similar types of functional groups. The shoulder peak at about 1714 cm−1 is attributed to the stretching vibration of the C=O bond in the carboxyl group. 4 The peak intensities of OYL and OHB are both stronger than those of YLHA and HBHA, which indicates that the carboxyl content of the HAs increases after oxidation. The characteristic peak at 1232 cm−1 is assigned to the stretching vibration of the C−O bond in esters, ethers,
phenols, and lignin and to the deformation of OH in the carboxyl group. 49,50 The peak intensity of OYL and OHB at 1232 cm−1 is larger than that of YLHA and HBHA, which indicates that the C−O content in the HAs increases after oxidation, consistent with the results of the ultimate analysis. The FTIR spectra were fitted using a curve-fitting method to locate the peaks precisely. Because the contents of carboxyl groups and aromatic structures are important indices of HA performance, the oxygen-containing functional group region (1800−1500 cm−1) of the FTIR spectra of the four HAs was peak-fitted to examine the differences; the fitting results are shown in Figure 2 and Table 4 (where λ is the wavenumber, cm−1, and RP is the relative percentage, %). Table 4 shows that the main oxygen-containing functional groups of the four HAs are carboxyl and carbonyl, and the carboxyl contents of OYL and OHB are larger than those of YLHA and HBHA, indicating that the oxidation process increases the carboxyl content of HAs, consistent with the acidic functional group determination.
3.5. Fluorescence Spectroscopy. Electron-withdrawing groups in HA molecules (such as carboxyl and carbonyl) reduce the fluorescence intensity of HAs, whereas electron-donating groups (such as amino, hydroxyl, and methoxy) increase it, and the presence of these oxygen- and nitrogen-containing functional groups can shift the fluorescence to longer wavelengths by reducing the energy difference between the ground state and the first excited state. Synchronous fluorescence spectroscopy reduces spectral overlap and the influence of scattered light on the spectrum; the most commonly used step size Δλ (the difference between the emission wavelength (Em) and the excitation wavelength (Ex)) is 18 nm. 39,40,42 The excitation and emission spectra of the four HAs are shown in Figure 3. The maximum Ex of the four HAs is about 270 nm, and the maximum Ems are in the range of 410−450 nm. The maximum Em of YLHA is 442.8 nm with an intensity of 34.05, and the maximum Em of HBHA is 438.2 nm with an intensity of 44.73. The maximum Em of OYL is 445.5 nm with an intensity of 22.74; the Em is slightly longer than that of YLHA and the intensity is lower, indicating that OYL has more electron-withdrawing groups (such as carboxyl and carbonyl). The maximum Em of OHB is 434.4 nm with an intensity of 28.69, both lower than those of HBHA. The possible reason is that OHB contains more electron-withdrawing groups (such as carboxyl and carbonyl), which reduces its fluorescence intensity, and its unsaturated aliphatic structure or aromatic system is reduced. At the same time, the reduction of the conjugated system in the HAs shifts the Em toward shorter wavelengths.
The synchronous fluorescence spectra of the four HAs are shown in Figure 4. It can be seen that YLHA has one obvious peak and two weaker small peaks, and HBHA has two obvious peaks, which indicates that YLHA has a simpler structure and minimal dispersion. When the Ex of the synchronous fluorescence spectrum (Δλ = 18 nm) is in the range of 340−370 nm, the corresponding polycyclic aromatic hydrocarbons (PAHs) are composed of about three to four benzene rings; when Ex is in the range of 370−420 nm, the corresponding PAHs are composed of about five benzene rings; when Ex is in the range of 438−487 nm, the corresponding PAHs have about seven benzene rings or the molecule contains a lignin structure. 51 The PAHs in YLHA contain three to five benzene rings, and are mainly composed of five benzene rings. The PAHs in HBHA are mainly composed of three to four benzene rings, and YLHA and HBHA also contain a small amount of PAHs composed of seven benzene rings and/or lignin structures.
In the synchronous fluorescence spectrum of OYL, there is a relatively obvious peak in the Ex range of 340−420 nm, indicating that the PAHs in the molecule are mainly composed of three to five benzene rings. In the synchronous fluorescence spectrum of OHB, there is an obvious peak in the Ex range of 340−370 nm, and a small peak near 390 nm, indicating that the polycyclic aromatic hydrocarbons in the molecule are mainly composed of three to five benzene rings. In the Ex
range of 438−487 nm, OYL has small peaks near 450 and 480 nm, and OHB also has a small peak at 440 nm, indicating that both OYL and OHB contain a small amount of PAHs with seven benzene rings or lignin structures, similar to YLHA and HBHA. The synchronous fluorescence intensities of OYL and OHB are both larger than those of YLHA and HBHA, which may be related to the higher C−O content in OYL and OHB.

3.6. CP/MAS 13C NMR Spectroscopy. CP/MAS 13C NMR can be used to analyze the types of carbon in HAs, and the NMR data can be used to calculate the aromaticity, the aliphatic carbon ratio, and the hydrophilic−hydrophobic index (fh/h) of HAs. Furthermore, the oxygen-containing functional groups and the aliphatic and aromatic structures in HAs can be analyzed. Here, CP/MAS 13C NMR was used to analyze the carbon types of the four HAs; the spectra are shown in Figure 5.
The four samples show an obvious peak in the range of 0−40 ppm, which belongs to aliphatic carbon. The strongest peak appears at 100−150 ppm, which is attributed to the aromatic carbon in HAs. The relative intensity of OYL and OHB in the aliphatic carbon region (0−100 ppm) is higher than that of YLHA and HBHA, indicating that HAs extracted from oxidized lignites contain more aliphatic structures than those from raw lignites, while the aromatic structure is slightly decreased. The relative intensities of the aliphatic carbon region and the aromatic carbon region (100−150 ppm) of OYL are similar, whereas the aromatic carbon content of OHB exceeds its aliphatic carbon content. The peaks in the range of 165−190 ppm are characteristic of carboxyl groups; their intensity for OYL and OHB is greater than that for YLHA and HBHA, indicating that OYL and OHB contain a higher carboxyl content, consistent with the FTIR analysis.
To quantitatively analyze the HAs by CP/MAS 13C NMR, the NMR spectra were peak-fitted. The fitted-peak curves of the four HAs are shown in Figure 6, and the corresponding assignments are shown in Table 5.
Compared with YLHA and HBHA, the aliphatic carbon content of OYL and OHB increases, while the aromatic carbon content decreases. The carboxyl content of OYL and OHB is higher than that of the corresponding HAs from raw lignites, consistent with the analysis of oxygen-containing functional groups in Table 3. The phenolic hydroxyl carbon content of OYL (8.1%) is lower than that of YLHA (9.37%); the possible reason is that phenol is converted into other substances during the oxidation process. The phenolic hydroxyl carbon content of OHB (8.79%) is higher than that of HBHA (6.08%); the possible reason is that oxidation breaks the −C−O bond between the benzene ring and the main structure. The carbonyl carbon content of OYL is 6.84%, which is 2.32 percentage points higher than that of YLHA (4.52%), and that of OHB is 8.78%, which is 3.05 percentage points higher than that of HBHA (5.73%), indicating that oxidation can convert alcohol, ether, and phenolic moieties in HA molecules into ketone or carbonyl compounds. The oxygen- and nitrogen-containing alkyl carbons in HAs are hydrophilic carbons, which affect the biological activity of HAs. The hydrophilic−hydrophobic indexes (fh/h) 52 of the four HAs were calculated according to eq 2, and the results are listed in Table 3.
Table 3 shows that the fh/h of the four samples follows the order OYL > YLHA > OHB > HBHA, indicating that OYL has the most hydrophilic groups and, in agricultural applications, the highest biological activity. 52 Thus, HAs extracted from oxidized lignites contain more hydrophilic groups than those from raw lignites, indicating that oxidation can improve the biological activity of HAs.

3.7. XPS Spectrometry. XPS is an analytical method for measuring the surface elements of samples and was used here to analyze the surface carbon and oxygen of the four HAs. When analyzing the surface carbon of HAs by XPS, six main chemical states are considered: aromatic carbon, aliphatic carbon, ether/alcohol carbon, ketone carbon, pyrrole/amide carbon, and carboxyl carbon. 4 The C 1s XPS spectra of the four HAs were peak-fitted; the fitted curves are shown in Figure 7 and the fitting results in Table 6 (where BE is the binding energy, eV, and RP is the relative proportion, %).
The main forms of carbon on the surface of the four HAs are aromatic carbon and aliphatic carbon. Compared with YLHA and HBHA, the aromatic carbon content of OYL and OHB is lower and the aliphatic carbon content is higher. The contents of ether/alcohol carbon, ketone carbon, and carboxyl carbon of OYL and OHB are larger than those of YLHA and HBHA, respectively, indicating that oxidation can increase the oxygen-containing functional groups in HAs and can convert alcohol, ether, and phenolic compounds into ketone, carbonyl, and carboxylic acid compounds. These results are consistent with the NMR analysis.
3.8. TG-DTG Analysis. The TG-DTG results for the four HAs are shown in Figure 8. The TG and DTG curves of the four HAs are similar. The TG curves show that the HAs lose mass continuously as the temperature increases from 20 °C, and the maximum weight-loss rates appear in the temperature ranges of 20−120, 350−450, and 450−600 °C, indicating that the largest mass losses of the HAs occur in these three temperature ranges.
The weight loss of the HAs can be divided into four stages. The first stage, 20−200 °C, is attributed to the evaporation of free and bound water in the HAs and the decomposition of some low-molecular-weight organic matter. 53 In the second stage, 200−350 °C, the mass loss is mainly caused by the degradation of sugar compounds in the samples, the dehydration of aliphatic alcohols, and the decomposition of functional groups such as carboxyl, phenolic hydroxyl, and carbonyl. 53−55 In the third stage, 350−450 °C, the mass loss is attributed to the degradation of more highly condensed structures, the breaking of C−C bonds, and the oxidation of aromatic components, which are mainly related to long-chain hydrocarbons and nitrogen-containing compounds. 33 In the fourth stage, 450−600 °C, the mass loss is caused by the pyrolysis of the aromatic components of lignin-like and other polyphenolic structures and the destruction of the aromatic structure in the HAs. 54 The relative mass losses of the four HAs in the corresponding stages are listed in Table 7.
In the range of 200−350 °C, the mass losses of OYL and OHB are greater than those of YLHA and HBHA, respectively, indicating that the HAs of oxidized lignites contain more oxygen-containing functional groups and aliphatic structures, consistent with the results of the ultimate analysis, the acidic functional group determination, and the NMR analysis. The temperature of the maximum weight-loss peak on the DTG curves of OYL and OHB is higher than that of YLHA and HBHA, indicating that the thermo-oxidative stability of HAs extracted from oxidized lignites is better. 37 In addition, OYL decomposes at about 484 °C, lower than the decomposition temperature of YLHA (508 °C), and OHB decomposes at about 500 °C, lower than that of HBHA (539 °C). This shows that the heat resistance of HAs extracted from oxidized lignites is worse than that of HAs extracted from raw lignites, which is related to the smaller aromatic content of the former. 56
CONCLUSIONS AND FUTURE DIRECTIONS
The HA content of YL lignite under the optimal oxidation conditions is 17.6 percentage points higher than that of the raw lignite (27.8%), and that of HB lignite is 17.0 percentage points higher than that of the raw lignite (23.9%). The ultimate analysis shows that the carbon content of HAs extracted from the oxidized lignites decreases while the oxygen content increases. According to the acidic functional group determination, FTIR analysis, and CP/MAS 13C NMR analysis, the HAs extracted from oxidized lignites contain more carboxyl groups, C−O groups, and total acidic groups than those extracted from raw lignites, while the types of functional groups and the overall structures of the HAs are similar. CP/MAS 13C NMR and XPS analyses show that the HAs extracted from oxidized lignites contain more aliphatic carbon and less aromatic carbon than those extracted from raw lignites. The contents of ether/alcohol carbon, ketone carbon, and carboxyl carbon and the fh/h values of HAs extracted from oxidized lignites are all higher than those of HAs extracted from raw lignites.
According to the TG-DTG analysis, the thermal decomposition of HAs extracted from oxidized lignites involves the loss of moisture, oxygen-containing functional groups, and aliphatic and aromatic structures; these HAs show better thermo-oxidative behavior but worse heat resistance than HAs extracted from raw lignites. Fluorescence analysis shows that the structure, dispersibility, and fluorophore types of HAs extracted from oxidized lignites are similar to those extracted from raw lignites.
The above studies show that H2O2 oxidation can increase the oxygen-containing functional groups in HAs and that the alcohol, ether, and phenolic moieties in HA molecules can be oxidized into ketone or carbonyl compounds, indicating that the oxidation can improve the biological activity of HAs. Future research can focus on the application of HAs, comparing the application performance of HAs extracted from lignites before and after oxidation, and can further explore the mechanism of the oxidation reaction and the molecular structure of HAs, so as to provide new ideas for the high value-added application of lignites. | 2021-09-26T05:19:58.391Z | 2021-09-09T00:00:00.000 | {
"year": 2021,
"sha1": "ee38aab855ad102ae676f023fca38607e7ee5747",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.1c03257",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee38aab855ad102ae676f023fca38607e7ee5747",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270241042 | pes2o/s2orc | v3-fos-license | Palm Kernel Cake Extracts Obtained from the Combination of Bacterial Fermentation and Enzymic Hydrolysis Promote Swine Small Intestine IPEC-J2 Cell Proliferation and Alleviate LPS-Induced Inflammation In Vitro
Co-fermentation with bacteria and enzymes can increase the reducing sugar content of palm kernel cake (PKC); however, the chemical changes and their effects on cell functionality are unclear. This study investigated the active components in pre-treated PKC extracts and their effects on pig small intestine IPEC-J2 cell proliferation and LPS-induced inflammation. The extracts contained 60.75% sugar, 36.80% mannose, 1.75% polyphenols and 0.59% flavone, as determined by chemical analyses, suggesting that the extracts were palm kernel cake oligosaccharides (PKCOS). We then found that 1000 µg/mL PKCOS counteracted the decrease in cell viability (CCK-8 kit) caused by induction with 5 µg/mL LPS (p < 0.05). Mechanistic studies by RNA-seq and qPCR analyses suggested that PKCOS promoted cell proliferation through the upregulation of TNF-α, PI3KAP1, MAP3K5 and Fos in the PI3K/MAPK signalling pathway; alleviated the inflammation caused by LPS via the downregulation of the apoptosis-associated target genes Casp3 and TNF-α; and regulated the expression of the antioxidant genes SOD1, SOD2 and GPX4 to exert positive antioxidant effects (p < 0.05). Furthermore, PKCOS upregulated SLC5A1 (encoding SGLT1), HK and MPI in the glycolytic pathway (p < 0.05), suggesting cell survival. In summary, PKCOS has positive effects on promoting swine intestine cell proliferation against inflammation.
Introduction
Palm kernel cake (PKC), a by-product of the mannan-rich palm oil industry [1], has received close attention in recent years as an unconventional feed resource. Treatment with enzymes and microorganisms can increase the nutritional value of PKC, primarily by reducing the amount of cellulose and mannan [2]. Numerous active substances are produced during the process, including volatile organic compounds (VOCs), such as ketones, aromatic hydrocarbons and short-chain fatty acids (SCFAs); microbial metabolites, such as amino acids, enzymes and antibiotics [3,4]; and oligosaccharides [5], thus promoting PKC's use in livestock, particularly in monogastric animals, which cannot secrete various cellulases. Oligosaccharides are short-chain polymers with a low molecular weight, consisting of 2−10 monosaccharide units [6], and present specific biological activities, such as aiding in the proliferation of Bifidobacteria, the regulation of gastrointestinal function, anti-inflammatory responses and the reduction in disease [7,8].
Free radicals can be produced in many situations, such as physical conditions, chemical reactions and metabolic processes. When the body is subjected to an inflammatory stimulus, a significant increase in the production of free radicals occurs, resulting in their accumulation. The hydroxyl radical, one of the reactive oxygen species, is commonly formed in vivo and can cause serious damage to biomolecules, such as lipids, proteins and nucleic acids [9]. This damage can lead to inflammation-related diseases, such as inflammatory bowel disease (IBD) and bacterial diarrhoea, particularly severe diarrhoea, in weaned piglets [10]. Saccharides, which originate either from synthetic chemicals [11] or natural substances [12], were revealed to have strong radical scavenging abilities [13]. Natural antioxidants from organism extracts have recently received increased interest due to consumer concerns.
The small intestine is not only the main compartment of digestion and absorption but is also the first physical barrier against external factors. Pathogenic microorganisms, harsh environmental sanitation (such as unsuitable temperature and humidity) and weaning stress often cause reduced immunity and intestinal structural abnormalities, leading to dysfunction in piglets [14]. Lipopolysaccharide (LPS) is a major culprit in porcine intestinal injury and is associated with inflammation, apoptosis, proliferation, differentiation and host immune activation [15]. Therefore, gut health is crucial to maintaining the integrity of small intestine functions and host health [16,17]. Porcine epithelial cells (IPECs) are the first barrier against exogenous antigens, pathogens and toxins entering the circulatory system, including cyto-inflammatory factors, such as tumour necrosis factor-α (TNF-α) and interleukin-6 (IL-6) [18]. The IPEC-J2 cell line, well known as the swine jejunal epithelial cell line, is isolated from newborn piglets and is widely used for studying gut inflammation responses, immunity and barrier integrity in vitro [19,20]. A study showed that chitosan oligosaccharides could attenuate LPS-induced inflammation in IPEC-J2 cells by modulating the TLR4/NF-κB signalling pathway [21]. However, it is worth noting that cellular life processes, particularly cell proliferation and anti-inflammation, are linked to glycolysis [22]. For example, cell proliferation and differentiation, free radical scavenging and autophagy all require energy [23,24].
Our previous study demonstrated that bacterial fermentation integrated with complex enzymatic hydrolysis effectively degraded cellulose in PKC and produced a large amount of reducing sugars [25], which may be rich in potentially beneficial functional substances, such as oligosaccharides [26]. However, studies focused on extracts, mainly oligosaccharides, derived from PKC and their potential effects, particularly on gut intestinal health, are rare and have yet to be further explored. Our hypothesis is that palm kernel cake oligosaccharides (PKCOS) can enter the cell via glucose transporters and have a positive impact on the life processes of IPEC-J2 cells. In this study, PKCOS was first extracted from pre-treated PKC and then partially characterised to evaluate its antioxidant activity. Finally, the effects of PKCOS on gut epithelial functions were investigated using the IPEC-J2 cell line as a model. This study will contribute to the development of novel multifunctional substances that can strengthen the gut health of animals.
Preparation of PKCOS and Content Analysis
Following the description in the previous study by Xiang [27], with slight modifications: the pre-treated PKC was extracted twice with 50% ethanol at a ratio of 1:20 (PKC to ethanol, m/v) for 1 h at 60 °C. The extracted solution was centrifuged at 4000 rpm for 5 min, and the supernatant was freeze-dried (LGJ-10, Deyang Yibang Lot., Shanghai, China). Finally, the freeze-dried powder was dissolved in DMSO to prepare a 1000 µg/mL stock solution for cell assays, or in PBS to prepare a 20 mg/mL stock solution for chemical analyses, including total sugars, mannose and polyphenols. All stock solutions were stored at −20 °C.
The total sugars and proteins were determined using the phenol-sulphuric acid colorimetric method [28] and Coomassie Brilliant Blue (Beyotime Lot., Beijing, China), respectively. Mannose was analysed using the procedure reported by Qi [29]. Polyphenols and flavonoids were determined following Adom's method [30] and expressed as gallic acid and rutin equivalents (mg/g PKCOS), respectively; a sketch of the generic standard-curve step used in such colorimetric assays follows.
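Colorimetric assays of this kind are quantified against a standard curve. The sketch below shows the generic linear-regression step for, e.g., the phenol-sulphuric acid total-sugar assay; the standard concentrations, absorbances and wavelength are invented for illustration and are not the calibration used in this study.

import numpy as np

# Hypothetical glucose standards (mg/mL) and absorbances at 490 nm
conc = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
absb = np.array([0.00, 0.11, 0.23, 0.34, 0.46, 0.57])
slope, intercept = np.polyfit(conc, absb, 1)   # linear standard curve

def sugar_mg_per_ml(a_sample, dilution=1.0):
    # Back-calculate total sugar from a sample absorbance.
    return (a_sample - intercept) / slope * dilution

print(f"{sugar_mg_per_ml(0.30, dilution=10):.2f} mg/mL")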
IPEC-J2 Cell Line and Culture In Vitro
IPEC-J2 cells were donated by Weiyun Zhu, a faculty member at Nanjing Agricultural University. The thawed IPEC-J2 cells were first seeded in T75 cell culture dishes (Corning Lot., Corning, NY, USA) and then cultured in DMEM/F12 supplemented with 10% FBS and 1% penicillin-streptomycin at 37 °C in humidified air containing 5% CO2 until the cell density reached nearly 80%. The cells were then passaged for the following experiments.
Effects of PKCOS and LPS on IPEC-J2 Cell Viability
The IPEC-J2 cells were seeded in 96-well plates (Corning Lot., Corning, NY, USA) at a density of 4 × 10^4 cells/mL in the culture medium overnight. The cells were then treated with 0, 1, 20, 50, 100, 250, 500 or 1000 µg/mL PKCOS in DMEM/F12 medium for 6 h, or with 0, 0.1, 1, 5, 10 or 20 µg/mL LPS in DMEM/F12 medium for 12 h. The cell survival rate was then determined. The trial was repeated for three batches, each containing 6 replicates. Appropriate concentrations of PKCOS and LPS were then selected on the basis of survival for further studies.
To determine whether PKCOS could alleviate the LPS-induced damage to IPEC-J2 cells, the cells were treated with the appropriate concentration of LPS (5 µg/mL) for 12 h, and then the appropriate concentration of PKCOS (0, 250, 500 or 1000 µg/mL) was added to each well and cultured for 6 h. Cell viability was again determined at the end of the culture, and the trial was repeated for three batches, with 6 replicates in each batch.
Mechanistic Exploration of PKCOS's Effects on the Survival Rate of IPEC-J2 Cells with or without LPS
To explore the underlying mechanisms of the effects of PKCOS on the cell survival rate with or without LPS, cells were cultured in 10 cm cell culture dishes (Corning, USA) and divided into five groups: the control group (CT), treated with PBS; the DMSO group, treated with DMSO (the PKCOS vehicle); the PKCOS group, treated with the appropriate concentration of PKCOS selected above (1000 µg/mL) for 6 h; the LPS group, treated with the appropriate concentration of LPS selected above (5 µg/mL) for 12 h; and the LPS-PKCOS group, treated with LPS for 12 h and then supplemented with PKCOS for 6 h. Finally, cells from the five treatments were collected, and the RNA was extracted with TRIzol reagent and stored at −80 °C for subsequent RNA-seq and qPCR analyses. Three replicates were performed in each group (n = 3).
Cell Survival Rate Assay
The cell survival rate was determined via the CCK-8 assay according to the manufacturer's instructions (Baiscience Lot., Beijing, China) and expressed as a viability percentage according to the following formula:

Cell survival rate (%) = (absorbance of treatment − absorbance of blank) / (absorbance of control − absorbance of blank) × 100%
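Applied to plate-reader output, the formula above reduces to a one-line function; a minimal sketch with invented OD450 readings follows.

def survival_rate(a_treatment, a_control, a_blank):
    # CCK-8 cell survival rate (%) relative to the untreated control well.
    return (a_treatment - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical OD450 readings for one well set
print(f"{survival_rate(1.32, 1.05, 0.10):.0f} %")  # -> ~128 %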
Bioinformatics Analysis and qPCR
According to a previous study [31], RNA extraction from the IPEC-J2 cells was performed following the manufacturer's instructions (Invitrogen, Waltham, MA, USA), and genomic DNA was removed using DNase I (TaKaRa Lot., Beijing, China). RNA quality was tested using a 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA) and quantified with an ND-2000 NanoDrop (Thermo Scientific, Wilmington, DE, USA). Subsequently, cDNA libraries were created following the instructions of the TruSeqTM RNA sample preparation kit (RS-122-2101, Illumina, San Diego, CA, USA), and sequencing was performed on an Illumina NovaSeq 6000 (BIOZERON Co. Ltd., Shanghai, China). A p-value < 0.05 was used as the criterion to identify differentially expressed genes (DEGs). Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were performed using KOBAS (http://kobas.cbi.pku.edu.cn/home.do, accessed on 15 November 2023).
All samples were kept at a constant RNA concentration of 500 ng/µL using enzyme-free water. qPCR was conducted using the SYBR Green Premix Pro Taq HS qPCR Kit (AG11718, Agbio Lot., Changsha, China) on a QuantStudio 7 Flex (ABI). The qPCR reaction system was 20 µL, with 20 ng of template. The qPCR procedure was as follows: Step 1: 95 °C for 30 s, 1 cycle; Step 2: 95 °C for 5 s followed by 60 °C for 30 s, 40 cycles. The primer sequences used are listed in Table 1; the antioxidant gene primers are consistent with those reported in the literature [32,33]. Finally, the relative expression of the target genes was assessed using the 2−ΔΔCq method, as sketched below. Table 1. Real-time fluorescence quantification of gene sequences and primer information.
[Table 1 columns: Gene, Version, Forward Primer (5′−3′), ...; the table body itself is not reproduced here.]
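For readers unfamiliar with the 2−ΔΔCq calculation referenced above, the sketch below spells out the two normalization steps (first to a reference gene, then to the control group). The Cq values are invented, and the choice of reference gene is an assumption, since it is not restated in this excerpt.

def rel_expression(cq_target_t, cq_ref_t, cq_target_c, cq_ref_c):
    # 2^-ddCq: _t = treated sample, _c = control sample;
    # cq_ref is the reference (housekeeping) gene, assumed here.
    d_cq_t = cq_target_t - cq_ref_t   # normalize to the reference gene
    d_cq_c = cq_target_c - cq_ref_c
    dd_cq = d_cq_t - d_cq_c           # normalize to the control group
    return 2.0 ** (-dd_cq)

# Hypothetical Cq values: the target amplifies ~2 cycles earlier after treatment
print(f"{rel_expression(24.1, 18.0, 26.2, 18.1):.2f}-fold")  # -> ~4.0-fold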
Statistical Analysis
The data are expressed as the mean ± standard deviation (SD). The results shown in the figures were analysed by one-way analysis of variance (ANOVA) using SPSS 25.0 and plotted with GraphPad Prism 8.0.2. The volcano plots of DEGs were plotted with http://www.biozeron.com, and the KEGG pathway enrichment analyses of DEGs were conducted on https://www.omicstudio.cn/. A p-value less than 0.05 indicated statistical significance.
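As a minimal counterpart to the SPSS workflow described above, the sketch below runs a one-way ANOVA on three invented replicate groups with SciPy; it illustrates only the statistical test, not the authors' actual analysis script.

from scipy.stats import f_oneway

# Hypothetical cell-viability replicates (%) for three treatment groups
ct        = [100.2, 98.7, 101.1]
lps       = [62.3, 65.8, 60.9]
lps_pkcos = [95.4, 92.1, 97.6]

f_stat, p_value = f_oneway(ct, lps, lps_pkcos)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference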
Contents of PKCOS
As shown in Table 2, the acquisition rate of PKCOS is nearly 9.62%. PKCOS is mainly composed of sugars and mannose, accounting for 60.75% and 36.80%, respectively, with a very small amount of protein (nearly 0.19%). It also contains small amounts of polyphenols and flavonoids, at 17.30 ± 0.02 mg/g PKCOS (gallic acid equivalents) and 5.90 ± 0.01 mg/g PKCOS (rutin equivalents), respectively.
Effect of PKCOS or LPS on IPEC-J2 Cell Viability
IPEC-J2 cell survival rates were increased in a dose-dependent pattern by the different concentrations of PKCOS, particularly at 250, 500 and 1000 µg/mL (Figure 1A, p < 0.05). In contrast, IPEC-J2 cell viability under LPS in the range of 5 to 20 µg/mL showed a dose-dependent reduction (Figure 1B, p < 0.05), indicating that the cells could be damaged by LPS; the appropriate concentration was 5 µg/mL. Figure 1C shows that PKCOS alleviated LPS-induced (5 µg/mL) cell apoptosis (p < 0.05); in particular, the addition of 250 µg/mL PKCOS returned cell viability to normal, while 1000 µg/mL PKCOS improved cell viability by around 150% (p < 0.05).
PKCOS Enhanced Cell Viability and Altered Gene Transcription Expression
A total of 347 upregulated and 471 downregulated DEGs were identified in the DMSO group, and 743 upregulated and 724 downregulated DEGs were identified in the PKCOS group, compared with the CT group (Figure 2A,B), while 815 upregulated and 610 downregulated DEGs were identified in the PKCOS group compared with the DMSO group (Figure 2C). Furthermore, a series of signalling pathways related to cell proliferation were enriched in the KEGG pathway analysis of the DEGs, including the MAPK signalling pathway, the PI3K-Akt signalling pathway, the TNF signalling pathway and cytokine-cytokine receptor interaction (Figure 2D). Therefore, qRT-PCR was performed to confirm that the above pathways were related to cell proliferation.
PKCOS Attenuated IPEC-J2 Cell Apoptosis Induced by LPS
In total, 441 upregulated and 447 downregulated DEGs were identified in the LPS group (Figure 4A), and 549 upregulated and 582 downregulated DEGs were identified in the LPS-PKCOS group, compared with the CT group (Figure 4B), while 634 upregulated and 477 downregulated DEGs were identified in the LPS-PKCOS group compared with the LPS group (Figure 4C). The predominantly enriched pathways in the KEGG pathway analysis of the DEGs were mainly associated with cell apoptosis, including the IL-17 signalling pathway, the PI3K-Akt signalling pathway, the TNF signalling pathway, focal adhesion and ECM-receptor interaction (Figure 4D). As above, qPCR was performed to confirm these predominantly enriched pathways.
The qPCR results showed that the expression of CASP3 and TNF-α was increased in the LPS group compared with the CT group (Figure 5A,B, p < 0.05); however, the expression of both genes in the LPS-PKCOS group decreased to values close to those in the CT group. A significant difference in IL-6 expression was observed among the three groups (CT, LPS and LPS-PKCOS; Figure 5C, p < 0.05): it was slightly lower in the LPS group and lowest in the LPS-PKCOS group compared with the CT group. The expression of SOCS3, a major regulator of infection and inflammation associated with IL-6 and LIF, was lower in both the LPS and LPS-PKCOS groups than in the CT group, with no difference between the LPS and LPS-PKCOS groups (Figure 5D), while the expression of the LIF gene showed no difference among the three groups (CT, LPS and LPS-PKCOS; Figure 5E). Similarly, the expression of PI3KAP1 and MAP3K5 in the PI3K/MAPK signalling pathways showed no differences across the three groups (Figure 5F,G).
PKCOS Promotes the Glycolysis of Cells via HK and MPI Enzymes
Given the significant amount of mannose in PKCOS, close attention was paid to the expression of SLC2A2 and SLC5A1, which encode the glucose transporters GLUT2 and SGLT1, respectively. The SLC2A2 expression detected by qPCR in the DMSO group was close to the value in the CT group, while its expression in the other three groups (PKCOS, LPS and LPS-PKCOS) was reduced (p < 0.05). There was no difference in the relative mRNA expression of SLC2A2 between the PKCOS and LPS-PKCOS groups, and the values in both groups were the lowest (Figure 6A, p < 0.05). In contrast, the transcriptional expression of SLC5A1 increased in the other four groups compared with the CT group. Interestingly, the transcription of SLC5A1 increased in the LPS group and decreased in the LPS-PKCOS group (Figure 6B, p < 0.05). Regarding the enzymes involved in mannose metabolism (Figure 6C), the mRNA expression of three key enzymes (HK, MPI and PMM2) was evaluated by qPCR. The mRNA expression of HK and MPI in the PKCOS and LPS-PKCOS groups was increased compared with that in the CT group (Figure 6D,F, p < 0.05). However, the mRNA expression of PMM2 showed no difference among the five treatments (Figure 6E).
Figure 6 (D−F) caption: Relative mRNA expression of genes for key enzymes in mannose metabolism. MPI, the gene for mannose phosphate isomerase; PMM2, the gene for phosphomannomutase 2; HK, the gene for hexokinase. Data are presented as mean ± SD (n = 3). Bars without a common letter (a,b,c) denote statistically significant differences (p < 0.05).

PKCOS Readjusted the Antioxidant Genes' mRNA Expressions

Given that the PKCOS extract demonstrated antioxidant properties in an in vitro antioxidant assay, we postulated that PKCOS may also modulate oxidative stress in the LPS model of inflammation. Consequently, we examined the expression of antioxidant genes among the CT, LPS and LPS-PKCOS groups. PKCOS reversed the trend toward high levels of SOD1, SOD2 and KEAP1 caused by LPS, reversed the trend toward low levels of GPX4 caused by LPS, and decreased CAT and NRF2 mRNA expression (Figure 7, p < 0.05); it had no effect on GPX1.
Discussion
Our study showed that the PKCOS extracted with 50% ethanol is composed of sugar, particularly mannose, and very small amounts of phenols, flavonoids and proteins. One explanation is that sugars can also bind covalently to other molecules, such as phenols [34]. It has been previously reported that 50% ethanol is an effective solvent for the extraction of oligosaccharides [35], and polysaccharides are not soluble at such high ethanol concentrations [36]. Therefore, it is speculated that oligosaccharides are the major components of PKCOS. The hydroxyl (•OH) and DPPH radicals are extremely active free radicals and can cause oxidative damage to DNA [37]. In our study, PKCOS exhibited higher T-AOC activity than açai fermentation extracts [38] but lower than Buddleja scordioides Kunth [39], higher DPPH radical scavenging activity in vitro than BHA [40] and a higher percentage of •OH radical inhibition than gallic acid [41], which may indicate that PKCOS acts as a potential antioxidant agent.
Cell activity and proliferation are frequently associated with sugar levels and their metabolism [42]. In the present study, 1000 µg/mL PKCOS enhanced IPEC-J2 cell viability, which is consistent with previous findings that oligosaccharides promote intestinal cell proliferation in piglets [43] and periodontal ligament stem cell proliferation in humans [44]. The qPCR results demonstrated higher expression of TNF-α, PI3KAP1, MAP3K5 and FOS in the PKCOS group than in the CT group. TNF-α is an upstream transcriptional regulator, while Fos and Bcl2L1 are downstream transcriptional regulators in PI3K/MAPK signalling pathways [45]. c-Fos (encoded by Fos) is a nuclear phosphoprotein that heterodimerises with c-Jun to form the AP-1 (activator protein-1) complex, which binds to DNA at specific sites in the promoter and enhancer regions of target genes and takes on the role of signal transduction [46]. Studies have shown that activation of the PI3K/MAPK pathway promotes the proliferation of zebrafish subintestinal vascular epithelial cells [47] and the restoration of platelet function [48]. Therefore, PKCOS may play an important role in promoting IPEC-J2 cell proliferation, and this promotion occurred through the PI3K/MAPK signalling pathway. Bcl2L1 (Bcl-XL), a member of the Bcl-2 family, is a major antiapoptotic protein [49,50], and its mRNA expression in the CT and PKCOS groups was similar, implying that PKCOS treatment did not damage IPEC-J2 cell viability. Ultimately, the pro-proliferative mechanism of PKCOS can be summarised as upregulating the expression of TNF-α, PI3KAP1, MAP3K5 and Fos in PI3K/MAPK signalling pathways.
The cell viability results made clear that 1000 µg/mL was the applicable PKCOS dose for relieving LPS-induced damage. Further investigation of the potential pathway by which it alleviated LPS-induced cell inflammation and apoptosis pointed to the involvement of the IL-17/TNF-α signalling pathway. Programmed cell death (PCD) is the process by which cells eliminate themselves in a controlled manner [51]. Caspases (particularly caspase-3) are mediators of the signalling pathways for apoptosis and cell breakdown and play a role in chromatin condensation and DNA degradation during apoptosis [52,53]. TNF-α acts as a signalling molecule in the inflammatory response [54]. Increased levels of CASP3 and TNF-α were observed in this study, in agreement with a previous study of APAP-induced inflammation in HepG2 cells [55], suggesting that the TNF-α and IL-17 signalling pathways were significantly enhanced by LPS treatment, based on the transcriptome analysis, which may be associated with cellular inflammatory and immune processes [56]. LPS-induced intestinal injury is usually characterised by the overproduction of inflammatory cytokines, such as TNF-α, iNOS and IL-1β. Chitosan oligosaccharides can block LPS-induced inflammatory responses and reduce cell death [57]. As expected, the levels of CASP3 and TNF-α mRNA were significantly decreased after PKCOS treatment, so that the survival rate of IPEC-J2 cells was significantly increased. These results suggest that the regulation of TNF-α and CASP3 mRNA expression by PKCOS may involve multiple target genes and play a protective role against the LPS-induced apoptosis of IPEC cells. Similar to other oligosaccharides obtained from plants, foods and feeds [58], PKCOS exerted anti-inflammatory effects by downregulating the expression of target genes associated with cell death, such as CASP3 and TNF-α.
In addition, LPS induced high SOD1 and SOD2 levels in IPEC-J2 cells, while PKCOS restored SOD1 and SOD2 nearly to the levels in the CT group. High SOD1 and SOD2 levels are key indicators of cellular inflammation and oxidative stress [59–61]. This suggests that PKCOS can alleviate LPS-induced oxidative stress and inflammatory responses in IPEC-J2 cells. Increased GPX4 may inhibit the onset of ferroptosis [62,63], and it is noteworthy that our study showed that PKCOS can reverse the LPS-induced downregulation of GPX4. CAT is another indicator closely related to cell death [64], and the qPCR results showed that PKCOS downregulated the mRNA expression of this gene. This indicates that the active substances in the feed have antioxidant capacity, likely acting through the NRF2/KEAP1 pathway [65]. In our study, PKCOS downregulated the expression of NRF2 and normalised the expression of KEAP1. These alterations in antioxidant genes indicate that PKCOS also served as an antioxidant in the LPS-induced IPEC-J2 inflammation model. Considering that PKCOS regulates the expression of antioxidant genes, we conclude that PKCOS has both antioxidant and anti-inflammatory capacities.
Sugar metabolism is a determinant of cell death and cell proliferation [66]. Thus, the question arises of whether PKCOS affects the glycolytic pathway during IPEC-J2 proliferation and apoptosis. Mannose, an isomer of glucose, generally enters cells through glucose transporters, which raises the further question of whether the mannose in PKCOS can do the same. SGLT1 and GLUT2 are the main glucose transporters in the gut [67,68]. The high level of SLC5A1 mRNA expression, together with the tendency toward lower SLC2A2 mRNA expression, in the PKCOS and LPS-PKCOS groups compared with the CT group suggests that the mannose in PKCOS may enter cells through SGLT1. The expression pattern of SLC5A1 in the LPS-PKCOS group is consistent with previous findings in which stevia leaf extracts promoted the activity and expression of SGLT1 and enhanced the intestinal capacity to absorb glucose in rabbits [69]. After entering cells, mannose is catalysed by hexokinase (HK) to produce mannose-6-phosphate (M6P), which is then either directed into the glycolytic pathway by phosphomannose isomerase (MPI) or into the glycosylation pathway by phosphomannomutase (PMM2) [70,71]. Interestingly, the transcriptional expression of HK and MPI was increased by PKCOS treatment with and without added LPS, while the mRNA expression of PMM2 did not differ across the five groups, suggesting that PKCOS promoted the glycolytic pathway in IPEC-J2 cells, which may be associated with cell proliferation and the alleviation of inflammation-induced apoptosis [72]. Thus, one possible regulatory pathway is that PKCOS enters cells through the transporter SGLT1 and promotes the glycolytic pathway associated with cell proliferation and the alleviation of inflammation-induced apoptosis. However, further research is needed, first to determine whether PKCOS affects ATP production in terms of glycolysis and energy involvement in cell proliferation, and then to determine the appropriate supplemental dose for gut health and disease prevention in pigs.
Overall, these findings suggest that the potential roles of PKCOS can be summarised as promoting cell proliferation and alleviating LPS-induced oxidative stress and apoptosis (Figure 8). IPEC-J2 cell proliferation was promoted at a concentration of 1000 µg/mL PKCOS through the upregulated expression of TNF-α, PI3KAP1, MAP3K5 and Fos in the PI3K/MAPK signalling pathways, while cell viability was impaired at a concentration of 5 µg/mL LPS. PKCOS alleviated LPS-induced cell apoptosis by downregulating the expression of target genes associated with cell death, such as CASP3 and TNF-α, and regulated the expression of antioxidant genes, such as SOD1, SOD2 and GPX4, to exert positive antioxidant effects. PKCOS may enter cells through the transporter SGLT1 and promote the glycolytic pathway associated with cell proliferation and the alleviation of inflammation-induced apoptosis.
However, although some achievements were obtained, the present study has limitations that should be addressed in future work. For example, the antioxidant and anti-inflammatory properties could be attributable to other components, such as flavones. Furthermore, the key regulatory molecules by which PKCOS affects IPEC-J2 cells were not subjected to additional functional validation beyond qPCR, such as protein expression analysis or direct observation by immunofluorescence.
Conclusions
In this study, an extract (PKCOS) from pre-treated PKCs was obtained; it largely contained sugar and mannose, plus a very small amount of polyphenols and flavones. In vitro studies demonstrated that PKCOS has antioxidant activity and regulates the expression of antioxidant genes in IPEC-J2 cells. Furthermore, PKCOS was shown to promote cell proliferation through activation of the PI3K/MAPK signalling pathway and to regulate the expression of key target genes to alleviate LPS-induced apoptosis. The process by which PKCOS regulates cell survival may be related to glycolysis. This study provides a new approach for the exploitation of derivatives from fermented feeds as antioxidants and prebiotics.
Figure 1. Cell survival rate of IPEC-J2 cells with different treatments. (A) Cell survival rate of IPEC-J2 cells at different concentrations of PKCOS. (B) Cell survival rate of IPEC-J2 cells at different concentrations of LPS. (C) IPEC-J2 cells were treated with PBS or 5 µg/mL LPS at the indicated concentrations for 24 h and then treated with 250, 500 or 1000 µg/mL PKCOS. Data are presented as mean ± SD; the experiment consisted of 3 batches, each containing 6 replicates. Bars without a common letter (a, b, c, d) and '*' denote statistically significant differences (p < 0.05).
Figure 2. PKCOS vs. DMSO vs. CT transcriptome analysis. (A) Volcano plot of DEGs in the DMSO and CT groups. (B) Volcano plot of DEGs in the PKCOS and CT groups. (C) Volcano plot of DEGs in the PKCOS and DMSO groups. Upregulated genes are shown in red, downregulated genes in blue, and genes with no significant difference in expression in black; p < 0.05 indicates significance. (D) KEGG pathway enrichment analysis of DEGs in the DMSO and CT groups, the PKCOS and DMSO groups, and the PKCOS and CT groups. The size and colour of the circles represent the number of enriched genes and the p-value; n = 3.
Figure 4. LPS-PKCOS vs. LPS vs. CT transcriptome analysis. (A) Volcano plot of DEGs in the LPS and CT groups. (B) Volcano plot of DEGs in the LPS-PKCOS and CT groups. (C) Volcano plot of DEGs in the LPS-PKCOS and LPS groups. Upregulated genes are shown in red, downregulated genes in blue, and genes with no significant difference in expression in black; p < 0.05 indicates significance. (D) KEGG pathway enrichment analysis of DEGs in the LPS and CT groups, the LPS-PKCOS and LPS groups, and the LPS-PKCOS and CT groups. The size and colour of the circles represent the number of enriched genes and the p-value; n = 3.
Figure 6. Possible glucose transporters for the cellular uptake of mannose and the gene expression of three key enzymes involved in mannose metabolism. (A,B) Relative mRNA expression of SLC2A2 and SLC5A1. GLUT2, glucose transporter protein 2, encoded by SLC2A2; SGLT1, Na+/glucose cotransporter 1, encoded by SLC5A1. (C) Schematic diagram of the mannose metabolism pathway. (D-F) Relative mRNA expression of genes for key enzymes in mannose metabolism. MPI, the gene for mannose phosphate isomerase; PMM2, the gene for phosphomannomutase 2; HK, the gene for hexokinase. Data are presented as mean ± SD (n = 3). Bars without a common letter (a, b, c) denote statistically significant differences (p < 0.05).
Figure 8. Effect of PKCOS on the survival rate of IPEC-J2 cells with or without LPS treatment. Upward red arrows represent gene upregulation and downward red arrows represent gene downregulation.
Table 2. The contents of PKCOS. The data are expressed as mean ± SD; n = 3.
"year": 2024,
"sha1": "5a230bf53889efc0ffc180bbdabc3c7361c3891d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3921/13/6/682/pdf?version=1717162686",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3733bbdf165240ca6baa0045455848090d7b260a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Improvement of an External Predictive Model Based on New Information Using a Synthetic Data Approach
Background and Objectives Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is the most frequent hereditary cerebral small vessel disease. It is caused by mutations of the NOTCH3 gene. The disease evolves progressively over decades, leading to stroke, disability, cognitive decline, and functional dependency. The course and clinical severity of CADASIL seem heterogeneous. Predictive models are thus needed to improve prognostic evaluation and inform future clinical trials. A predictive model of the 3-year variation in the Mattis Dementia Rating Scale (MDRS), which reflects the global cognitive performance of patients with CADASIL, was previously proposed. This model made predictions based on demographic, clinical, and MRI data. We aimed to improve this existing predictive model by integrating a new potential factor, the location of the genetic mutation in the different epidermal growth factor (EGFr) domains of the NOTCH3 gene, dichotomized into EGFr domains 1 to 6 or 7 to 34. Methods We used a new synthetic data approach to improve the initial predictive model by incorporating additional genetic information. This method combined the predicted outcomes from the previous model and 5 "synthetic" data sets with the observed outcome in a new data set. We then applied a multiple imputation method for missing data on the mutation location. Results The new data set included 367 patients who were followed up for 30 to 42 months. In the multivariable model with synthetic data, patients with NOTCH3 mutations in EGFr domains 7 to 34 had an additional average decrease of 1.4 points (standard error 0.67, p = 0.035) in their MDRS score variation over 3 years compared with patients with mutations located in EGFr domains 1 to 6. Cross-validation results highlighted the improved predictive performance of the enhanced model. Moreover, the model estimation was found to be more robust than fitting a model without synthetic data. Discussion The use of synthetic data improved the predictive model of MDRS change over 3 years in CADASIL. The predictive performance and estimation robustness of the predictive model were enhanced using this approach, whether or not genetic information was used. A statistically significant association between the location of the mutation in the NOTCH3 gene and the 3-year MDRS score variation was detected.
Introduction
In clinical modeling studies, predicting disease progression from patient characteristics remains the primary goal. This is also true for cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL), the most frequent hereditary cerebral small vessel disease. CADASIL is caused by stereotyped mutations of the NOTCH3 gene, which encodes a transmembrane receptor of vascular smooth muscle cells and pericytes. These mutations lead to an odd number of cysteine residues within the epidermal growth factor repeat (EGFr) domains of the NOTCH3 receptor. They result in a progressive accumulation of NOTCH3 extracellular domains aggregating with multiple matrix proteins in the wall of cerebral arterioles and capillaries.1 However, both clinical manifestations and disabilities vary largely among patients with CADASIL. Thus, it is crucial to continuously refine the existing prediction models to improve future prognostic and therapeutic evaluation. Some studies have suggested that sex and cardiovascular risk factors, such as smoking or hypertension, might influence the clinical expression of the disease.2 In 2016, Chabriat et al. proposed a multivariable model to predict how the Mattis Dementia Rating Scale (MDRS) score, a global measure of cognitive performance frequently used in patients with CADASIL, evolves over a 3-year period.3 The model was built on data obtained from a prospective cohort of 290 patients recruited between September 2003 and April 2011 from 2 major referral centers for the disease (Lariboisière Hospital, Paris, France, and Ludwig Maximilians Universität, Munich, Germany). To predict the variation in the MDRS score, demographic (sex and age), clinical (modified Rankin score of 3 or above, presence of balance problems, and gait disturbances), and imaging parameters (number of lacunes,4,5 microbleeds,6 and brain parenchymal fraction7,8) were used.
More recently, an unexpectedly large number of mutations in the NOTCH3 gene outside the EGFr 1-6 hotspot was identified in the general population.9 This raised the hypothesis that the exact position of the mutation along the gene might also influence the clinical expression of the disease. Notably, a later stroke onset was reported in patients with NOTCH3 mutations located in EGFr domains 7-34 compared with patients with a mutation inside EGFr domains 1-6.10 Very recently, the mutation location was also shown to be strongly associated with the clinical severity of the disease, in addition to the effects of age, sex, hypertension, and hypercholesterolemia.11 However, this association with the disease phenotype and the potential prognostic impact of the mutation location in predicting the clinical course of CADASIL remain undetermined.
In that respect, we aimed to improve the prediction of clinical score changes in patients with CADASIL using both summary information from the prognostic model previously reported by Chabriat et al.3 and newly available individual-level data. Such an approach is in line with the growing use of external data for treatment evaluation, notably when sample sizes are low12 or when making early stopping decisions.13 In this study, the previously reported predictive model for the 3-year variation in the MDRS score3 in CADASIL was used as external information. New information was obtained by creating a new data set including information related to the NOTCH3 gene mutation location that could potentially improve the prediction accuracy. Several approaches have been proposed for updating an existing prediction model with such new information.15-17 These approaches are, however, based on a binary outcome measure, where the Bayes rule applies for updating the previous odds to the posterior odds through the likelihood ratio. We thus decided to use a new partially synthetic data approach18 that consists of creating additional synthetic data observations from a previously reported model and then analyzing the combined data set to estimate the effect of the genetic information (here, the location of NOTCH3 mutations in the different EGFr domains) on the 3-year variation in the MDRS score in patients with CADASIL.
Methods
Patients

A total of 482 patients with CADASIL aged older than 18 years were prospectively enrolled between June 03, 2003, and December 29, 2020, from the French National Referral Center for rare cerebrovascular diseases in France (cervco.fr). The diagnosis was confirmed by genetic testing showing a typical cysteine mutation in the NOTCH3 gene. A follow-up interval of 30 to 42 months was chosen to obtain a final follow-up visit approximately 3 years from enrollment. This time frame was considered to allow the detection of significant changes in clinical scores, which are usually observed only after 2 years of follow-up.19

Glossary: BPF = brain parenchymal fraction; CADASIL = cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy; EGFr = epidermal growth factor repeat; MAPE = mean absolute prediction error; MDRS = Mattis Dementia Rating Scale; mRS = modified Rankin Scale; MSPE = mean squared prediction error; RMSPE = root mean squared prediction error.

Notably, of these 482 patients, 178 (37%) individuals enrolled before April 2011 were included in the cohort from which the first prediction model was derived.3 However, these 178 patients were previously analyzed jointly with an additional 112 German patients from Munich. The present analysis was based only on the French cohort, which has grown since 2016, including 304 new patients.
Standard Protocol Approvals, Registrations, and Patient Consents
This study was approved by an independent ethics committee (updated agreement CEEI-IRB-17/388) and conducted per the Declaration of Helsinki and guidelines for Good Clinical Practice and General Data Protection Regulation in Europe. Informed consent was obtained from all participants included in the study.
Measurements
Clinical data were collected prospectively by board-certified neurologists during individual consultations using a standardized questionnaire and a detailed neurologic assessment. Several clinical scores were systematically recorded for each individual at cohort entry to evaluate the following: (1) global cognitive performance using the Mini-Mental State Examination score,20 ranging from 0 (worst score in patients with severe dementia) to 30 (best score), and the MDRS, ranging from 0 (worst performance in patients with severe dementia) to 144 (best performance); (2) disability with the modified Rankin Scale (mRS),21 ranging from 0 (no disability) to 6 (death), with 5 indicating severe disability and bedridden status; and (3) functional dependency using the Barthel index,22 ranging from 0 (most dependent) to 100 (totally independent). Finally, the patients were assessed for occurrences of stroke (ischemic and hemorrhagic) and the presence of dementia (according to DSM-IV criteria). In addition, we also recorded the variables previously considered in the first multivariable model3: sex (male or female), age, presence of balance problems (defined on the basis of patient complaints and/or neurologic examination), gait disturbances (defined as the presence of any difficulty during walking presumably related to the disease and confirmed by neurologic examination), and 3 imaging parameters obtained from brain MRI, namely, the number of lacunes, defined as small cavities of diameter less than 15 mm secondary to small deep ischemic lesions,4,5 the presence of microbleeds, defined as rounded hypointensities of diameter less than 10 mm on susceptibility-weighted images,6 and the brain parenchymal fraction (BPF), where the brain volume was calculated from 3D-T1-weighted images using SIENAX methods and divided by the intracranial volume.7,8,23,24 For comparison purposes, the BPF variable for multivariable analyses was defined following the initial model and dichotomized using the baseline median value as <0.863 or ≥0.863. Finally, we considered the exposure of interest, that is, the mutation location in the NOTCH3 EGFr domains (domains 1 to 6 vs 7 to 34).
Model
We considered the reported prediction model3 as drawn from an external population without any individual-level data. The aim of this external model was to predict the 3-year mean variation in the MDRS score. We wished to provide a predictive model, referred to as the "internal" or "enhanced" model, of the form

Y = Σ_{j=1}^{8} γ_j X_j + γ_B B + ε,  (1)

where X is the vector containing the 8 risk factors of the first reported model, γ_j (j = 1, …, 8) are the estimated parameters from the published model, B represents which EGFr domains are affected (distinguishing domains 1-6 and 7-34), and Y is the 3-year variation in the MDRS score.
To estimate the model defined by Equation (1), the synthetic data approach proposed by Gu18 in 2019 was applied, under the assumption that the initial and proposed models were identical in the external and new populations. Briefly, this approach consists of creating a large number of additional records (called "synthetic data") by replicating the observed samples and generating pseudo-outcomes for these new records from the existing prediction model. Then, to estimate the model parameters from the combined data set, missing values of the EGFr domain were handled through multiple imputation (Figure 1). All these processes are detailed in eAppendix 1 (links.lww.com/NXG/A622).
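A minimal Python sketch of this pipeline is given below; the paper's analyses were performed in R with MICE, so this translation, the column names, the placeholder coefficients, and the residual SD are all illustrative assumptions rather than the study's actual code or values.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Placeholder coefficients standing in for the published external model;
# the real values are those reported by Chabriat et al. and are not shown here.
coefs = {"intercept": -2.0, "age": -0.05, "male": -0.5}
resid_sd = 5.0  # assumed residual SD used to draw pseudo-outcomes

def external_predict(df):
    """Mean 3-year MDRS variation under the (hypothetical) external model."""
    return coefs["intercept"] + coefs["age"] * df["age"] + coefs["male"] * df["male"]

def make_synthetic(observed, n_copies=5):
    """Replicate the observed covariates n_copies times and attach
    pseudo-outcomes drawn from the external model (the synthetic records)."""
    copies = []
    for _ in range(n_copies):
        syn = observed.copy()
        syn["mdrs_change"] = rng.normal(external_predict(syn), resid_sd)
        syn["egfr_7_34"] = np.nan   # new covariate unknown here -> multiply imputed later
        syn["synthetic"] = True
        copies.append(syn)
    return pd.concat(copies, ignore_index=True)

# Toy stand-in for the new cohort with the observed outcome and EGFr indicator.
observed = pd.DataFrame({
    "age": rng.uniform(30, 70, 100),
    "male": rng.integers(0, 2, 100),
    "egfr_7_34": rng.integers(0, 2, 100).astype(float),
    "mdrs_change": rng.normal(-3, 6, 100),
    "synthetic": False,
})

combined = pd.concat([observed, make_synthetic(observed)], ignore_index=True)
# `combined` is then analyzed with multiple imputation of the missing EGFr
# indicator (chained equations; the paper uses the MICE package in R).
```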
Statistical Analysis
Summary statistics, the mean (SD) for quantitative variables and percentages for binary variables, are reported unless otherwise specified. To assess whether selection biases were introduced, comparisons between the enrolled and excluded patients from the whole population were performed using the Wilcoxon nonparametric test for quantitative variables and the exact Fisher test for binary variables.
The variations in the MDRS score were computed by subtracting the baseline value from the score at the 3-year follow-up. For patients who did not have a 3-year visit (i.e., between 30 and 42 months) but whose MDRS score after 42 months was at least 140, the 3-year MDRS score was set to that score. For all other patients without a 3-year visit, MDRS scores were considered missing.
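As a sketch, this derivation rule can be written as follows; the column names and per-patient data layout are assumptions made purely for illustration.

```python
import numpy as np
import pandas as pd

def three_year_mdrs_change(baseline, visits):
    """visits: DataFrame with columns 'months' and 'mdrs' for one patient,
    sorted by time. Returns the 3-year MDRS variation or NaN if undefined."""
    window = visits[(visits["months"] >= 30) & (visits["months"] <= 42)]
    if not window.empty:
        return window["mdrs"].iloc[0] - baseline      # regular 3-year visit
    late = visits[visits["months"] > 42]
    if not late.empty and late["mdrs"].iloc[0] >= 140:
        return late["mdrs"].iloc[0] - baseline        # near-ceiling score carried back
    return np.nan                                     # otherwise treated as missing
```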
To evaluate the predictive performance of the model based on the synthetic data approach and compare the resulting model with the initial model, we performed 10-fold cross-validation.25 Hence, each fold was used for both modeling and testing, reducing the variability introduced by a single train/test split. We also used cross-validation to tune the number of times the original data set was replicated. Several metrics were used to compare the predictive performance of the different models: the mean squared prediction error (MSPE), the root mean squared prediction error (RMSPE), and the mean absolute prediction error (MAPE). We evaluated prediction errors using parameter mean estimates as well as parameter values sampled from a normal distribution centered on the mean estimates with SD equal to their standard errors, to account for the variability of the estimation.
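For reference, the three error metrics can be computed as below; note that MAPE here denotes the mean absolute prediction error, not a percentage error.

```python
import numpy as np

def prediction_errors(y_true, y_pred):
    """MSPE, RMSPE and MAPE; MAPE is the mean absolute prediction error
    (not a percentage), following the paper's definition."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    mspe = float(np.mean(err ** 2))
    return {"MSPE": mspe,
            "RMSPE": float(np.sqrt(mspe)),
            "MAPE": float(np.mean(np.abs(err)))}
```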
Finally, we checked the assumption that the external model did not differ between the previous and new populations by fitting a multivariable linear regression with multiple imputation but ignoring the synthetic data. All statistical analyses were performed in R 4.1.1 (R-project.org). To implement multiple imputation, we used the R package MICE.26 Two-sided p values of 0.05 or less denoted statistical significance.
Data Availability
The data that support the findings of this study are available on request from the corresponding author, LB. The data are not publicly available, under the French regulation for data protection policy, because they contain information that could compromise the privacy of research participants.
Characteristics of the Study Sample
Of the 482 patients included in the cohort study, 115 individuals did not have a follow-up of at least 3 years. Thus, the study sample consisted of the remaining 367 (76%) patients (Figure 2). There was no marked evidence of selection bias. The enrolled and excluded populations were very similar in age, sex, balance problems, gait disturbances, number of lacunes, presence of microbleeds, and mutation location (eTable 1, links.lww.com/NXG/A623). Only differences in the brain parenchymal fraction (with median values of 81% in the included patients vs 79.5% in the excluded patients) and the modified Rankin score (with an increased proportion of moderate and severe disability among the excluded patients) were observed between the included and excluded patients.
Subsequent analyses included only the 367 patients enrolled with a 3-year follow-up. Their baseline characteristics, shown in Table 1, were close to those reported in the initial cohort. Patients with mutations in EGFr domains 1-6 represented 66.8% of the sample.
Model Estimation
After generating 5 synthetic samples and deriving the predicted outcomes from the previous model, we used the whole data set, synthetic and observed, to fit the multivariable linear regression model with and without the EGFr domain variable (Table 2). The estimates of the covariate effects in the model without the EGFr domain variable were very close to those of the model with the new genetic marker, suggesting that the prognostic information carried by the EGFr domain is somewhat independent of that of the other predictors. All 9 predictors, except microbleeds, were associated with the 3-year MDRS variation. Notably, a genetic mutation in EGFr domains 7-34 was associated with a larger mean decrease of 1.4 points in the MDRS score variation over 3 years compared with EGFr domains 1-6.
Moreover, using 10-fold cross-validation, the model incorporating the EGFr domain was selected as the best model, followed by the model without the mutation domain. Both models derived from the synthetic data approach outperformed the external first model (Figure 3). Adding variability by drawing parameter values from normal distributions did not modify these findings, with a clear improvement in prediction errors for the internal model based on the synthetic data and additional genetic information.
Ignoring the synthetic data, we estimated the same model on the new sample, without checking the underlying assumption that the external model applied in both populations. As tabulated in Table 3, there were some differences in the estimated effects, notably regarding age, balance, and gait problems.
Discussion
CADASIL causes cognitive decline, which is associated with a reduction in the MDRS score as the disease progresses. In this study, we built a new model based on a previously reported external model for predicting 3-year MDRS score variations. The predictive performance was improved when using this new model, which included information on the NOTCH3 gene mutation location.
A mutation located in domains 7-34 was independently associated with a greater average decrease in the MDRS over 3 years. These results differ from some previous studies, with a delayed stroke onset recently reported in patients with NOTCH3 mutations located in EGFr domains 7-34 compared with patients with mutations in EGFr domains 1-6.10 Nevertheless, in a recent work, Hack et al. found that the genotype-phenotype correlation could be further delineated, rather than simply dichotomized into 1-6 vs 7-34.27 Specifically, they classified domains 8, 11, and 26 as high risk and found them associated with greater disability, higher risk of stroke, and a higher load of neuroimaging SVD markers, in a cohort of 434 patients with CADASIL.27

This prediction improvement was obtained by creating synthetic data. These data were generated by simulation, based on and mirroring properties of the original data set. The inclusion of synthetic data allows data utility to be optimized, which subsequently enhances the model prediction.28 A key advantage of this method is that it naturally incorporates knowledge into the internal data by creating a large set of "synthetic" data compatible with the initial model. We found that the best predictive performance with 10-fold cross-validation was achieved using the approach combining data augmentation and genetic information, leading to improvements of 25%, 15%, and 16% in the RMSPE, MSPE, and MAPE scores, respectively. Of interest, although we found that mutations located in EGFr domains 7-34 were independently associated with poorer outcomes than those located in EGFr domains 1-6, repeating the process after excluding the genetic information still resulted in a similar improvement. This further supports the positive effect of using synthetic data. Notably, the synthetic data method also reduced the standard errors of the regression coefficients compared with the direct regression (Table 3). In this study, we only used the aggregated data from the first model, which also illustrates how external information can be used together with new individual data when testing the additional value of a biomarker in predicting the same outcome.
This approach requires some assumptions, notably that the external model can be applied to both the external and internal populations. One-third of the sample individuals participated in the previous cohort from which the initial model was derived.3 Thus, this first assumption could be considered acceptable here. In this study, some differences were observed in the estimated effects of the initial model parameters. These differences are possibly related to differences in baseline patient features across the 2 cohorts. Patients from the latter cohort were older and had less severe disabilities. There were also some possible changes in recruitment, diagnosis, and care over time.
One important limitation of our study is the relatively small size of the cohort, which may not represent the entire population of patients with CADASIL.29,30 In addition, the mutation location was dichotomized into 2 groups, and the potential influence of more detailed genetic information could not be excluded.27 We used synthetic data to incorporate new information into the established model. However, other approaches are possible, such as constrained maximum likelihood, partial regression, and Bayesian approaches.31 Furthermore, the assumption that data were missing at random is also debatable. The imputation model imputed the missing variables using information from all available data, but other unknown features could not be excluded.32,33 Other genetic data, demographic features such as the level of education, professional activity, and daily activity, or even other cognitive scores such as those derived from the Brief Memory and Executive Test,34 the Montreal Cognitive Assessment,35 or the Trail Making Test version B36 might be useful to consider for fulfilling the missing-at-random assumption.37,38 Moreover, the purpose of the study was initially to evaluate the impact on model performance of adding genetic information to an existing predictive model in patients with CADASIL. Although the resulting model (Table 2) might be useful for clinical practice, formal development of a simplified prediction score as a clinical tool would require external validation on a different data set from another cohort. This was beyond the scope of the present work.
In summary, our approach, based on the creation of synthetic data, allowed us to evaluate the potential effect of additional genetic information related to the location of the NOTCH3 gene mutation on the prediction of cognitive decline in CADASIL. Additional investigations are needed to further improve this type of model using additional covariates39-42 and supplementary information from other cohorts.
An existing risk model predicting the 3-year MDRS score variation was enhanced with synthetic data obtained from a new study cohort, using the already established model coefficients along with multiple imputation by chained equations. Although synthetic data creation was initially used to incorporate the genetic mutation location into the analysis model, we observed that it could ultimately enhance the prediction model itself. The prediction performance and estimation robustness were improved regardless of whether genetic information was included.
Figure 1. Proposed Synthetic Data Set Approach.
Figure 2. Flowchart of Patient Enrollment in the Study.
Figure 3. Tenfold Cross-Validation Average of MSPE, RMSPE, and MAPE Scores With SD.
Table 2. Multivariable Linear Regression of 3-Y MDRS Score Variation With and Without the EGFr Domain Variable in 367 Patients With CADASIL, Based on the Internal Model From Synthetic and Observed Data.
Table 1. Comparison of Baseline Characteristics of the Study Cohort. a Modified Rankin Scale score ≥3.
Table 3. Multivariable Linear Regression of 3-Y MDRS Score Variation Using Multiple Imputation Without the EGFr Domain: Original Published Model by Chabriat, and Estimates on the New Cohort of 367 Patients With CADASIL With and Without Synthetic Data.
"year": 2023,
"sha1": "f70a953edfa2cab1846bfb1ca324f3dfee3edb1a",
"oa_license": "CCBYNCND",
"oa_url": "https://ng.neurology.org/content/nng/9/5/e200091.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bd51accf7d623e71977c7640177005e2514d1fa",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": []
} |
DBSN: Measuring Uncertainty through Bayesian Learning of Deep Neural Network Structures
Bayesian neural networks (BNNs) introduce uncertainty estimation to deep networks by performing Bayesian inference on network weights. However, such models bring the challenges of inference, and, furthermore, BNNs with weight uncertainty rarely achieve superior performance to standard models. In this paper, we investigate a new line of Bayesian deep learning by performing Bayesian reasoning on the structure of deep neural networks. Drawing inspiration from neural architecture search, we define the network structure as gating weights on the redundant operations between computational nodes, and apply stochastic variational inference techniques to learn the structure distributions of networks. Empirically, the proposed method substantially surpasses advanced deep neural networks across a range of classification and segmentation tasks. More importantly, our approach also preserves the benefits of Bayesian principles, producing better uncertainty estimates than strong baselines including MC dropout and variational BNN algorithms (e.g., noisy EK-FAC).
INTRODUCTION
Bayesian deep learning aims at equipping the flexible and expressive deep neural networks with appropriate uncertainty quantification (MacKay, 1992;Neal, 1995;Hinton & Van Camp, 1993;Graves, 2011;Blundell et al., 2015;Gal & Ghahramani, 2016). Traditionally, Bayesian neural networks (BNNs) introduce uncertainty in the network weights, addressing the over-fitting issue which standard neural networks (NNs) are prone to. Besides, the predictive uncertainty derived from the weight uncertainty is also of central importance in practical applications, e.g., medical analysis, automatic driving, and financial tasks.
Modeling the uncertainty on network weights is plausible and well-evaluated (Blundell et al., 2015; Ghosh et al., 2018). However, BNNs usually preserve the benefits of Bayesian principles, such as well-calibrated predictions, at the expense of compromised performance, and hence are impractical in real-world applications (Osawa et al., 2019), for various reasons. On one hand, specifying a sensible prior for network weights is difficult (Pearce et al., 2019). On the other hand, the flexible variational posterior of BNNs comes with inference challenges (Louizos & Welling, 2017; Zhang et al., 2018; Shi et al., 2018). Recently, efficient particle-based variational methods (Liu & Wang, 2016) have been developed with promise, but they still suffer from particle collapsing and degrading issues for BNNs, due to the high dimension of the weights and the overparameterization nature of such models (Zhuo et al., 2018). In this work, we investigate a new direction of Bayesian deep learning that performs Bayesian reasoning on the structure of neural networks while keeping the weights as point estimates. We propose an approach named Deep Bayesian Structure Networks (DBSN). Specifically, in the spirit of differentiable neural architecture search (NAS) (Xie et al., 2019), DBSN builds a deep network by repeatedly stacking a computational cell in which any two nodes (i.e., tensors) are connected by redundant transformations (see Figure 1).

Figure 1: BNNs with uncertainty on the weights (left) vs. DBSN with uncertainty on the network structure (right); we only depict three operations between tensors N_1 and N_2 for simplicity. w and α represent the network weights and the network structure, respectively. In DBSN, w is also learnable.

The network structure is defined as the gating
weights on these transformations, whose distribution is much easier to capture than that of the high-dimensional network weights. To jointly optimize the network weights and the parameterized distribution of the network structure, we adopt a stochastic variational inference paradigm (Blundell et al., 2015) and use the reparameterization trick (Kingma & Welling, 2013). One technical challenge is driving DBSN to achieve satisfactory convergence, since the network weights can hardly fit all the structures sampled from the structure distribution. To overcome this challenge, we propose two techniques. First, we advocate reducing the variance of the sampled structures with a simple modification of the sampling procedure. Second, we suggest using a more compact structure learning space than that of NAS, to make the training more feasible and more efficient.
There are at least two motivations that make DBSN an appealing choice: 1) DBSN bypasses the frustrating difficulties of characterizing weight uncertainty and enables performance-enhancing structure learning (Zoph & Le, 2016), so DBSN should have better predictive performance than classic BNNs. 2) Previous analysis shows that, due to the overparameterization nature of BNNs, the state-of-the-art inference algorithms for weight uncertainty can suffer from mode collapsing, as multiple configurations of weights with a fixed structure correspond to one single function. In contrast, DBSN compactly models the uncertainty of structure and performs inference in a much lower-dimensional space, avoiding this issue and hence being able to exhibit more calibrated predictive uncertainty. Moreover, from the perspective of NAS, DBSN is also promising, as it provides another principled way to learn network structures by resorting to the Bayesian formalism instead of the widely used meta-learning formalism in differentiable NAS.
To empirically validate these hypotheses, we evaluate DBSN with extensive experiments. We first verify the data-fitting and structure learning ability of DBSN on challenging classification and segmentation tasks. Then, we compare the quality of predictive uncertainty estimates via calibration, which is a common concern in the community. We further evaluate the predictive uncertainty on adversarial examples and out-of-distribution samples, drawn from distributions shifted from the training data, to verify whether the model knows what it knows. Finally, we perform an experiment to validate a promising application of DBSN in one-shot NAS (Bender et al., 2018; Guo et al., 2019). Surprisingly, across all the tasks, DBSN consistently achieves comparable or even better results than the strong baselines.
BACKGROUND
We first review the necessary background for DBSN and then elaborate on DBSN in the next section.
STOCHASTIC VARIATIONAL INFERENCE FOR BNNS
Let D = {(x_n, y_n)}_{n=1}^{N} be a set of N data points. BNNs are typically defined by placing a prior p(v) on some variables of interest (e.g., network weights or network structure), with the likelihood p(D|v). Directly inferring the posterior distribution p(v|D) is intractable because it is hard to integrate w.r.t. v exactly. Instead, variational BNNs (Hinton & Van Camp, 1993; Graves, 2011; Blundell et al., 2015) suggest approximating p(v|D) with a θ-parameterized distribution q(v|θ) by minimizing the Kullback-Leibler (KL) divergence between them:

min_θ KL(q(v|θ) || p(v|D)) = min_θ { −E_{q(v|θ)}[log p(D|v)] + KL(q(v|θ) || p(v)) + log p(D) },  (1)

where log p(D) is a constant w.r.t. θ and usually omitted in the minimization. To solve problem (1), the most commonly used method is the low-variance reparameterization trick (Kingma & Welling, 2013; Blundell et al., 2015), which replaces the sampling procedure v ∼ q(v|θ) with the corresponding deterministic transformation v = t(θ, ε) with a sample of parameter-free noise ε, to enable direct gradient back-propagation through θ.
CELL-BASED DIFFERENTIABLE NEURAL ARCHITECTURE SEARCH (NAS)
Cell-based NAS has shown promise (Pham et al., 2018) and has been made differentiable for better scalability (Xie et al., 2019; Weng et al., 2019). Generally, the network in cell-based differentiable NAS1 is composed of a sequence of cells (i.e., modules) that share the same internal structure and are separated by upsampling or downsampling modules. Every cell contains B sequential nodes (i.e., tensors): N_1, …, N_B. Each node N_j is connected to all of its predecessors N_i with i < j by K possible redundant operations o_1, …, o_K, e.g., convolution, skip connection, pooling. The network structure is defined as α = {α^(i,j) | 1 ≤ i < j ≤ B}, where α^(i,j) ∈ ∆^{K−1} corresponds to the gating weights on the K available operations from N_i to N_j. Therefore, the information gathered from N_i to N_j is a weighted sum of the outputs of the K different operations on N_i (we denote the set of parameters of all the operations in the network as w):

ō^(i,j)(N_i) = Σ_{k=1}^{K} α_k^(i,j) · o_k(N_i; w).  (2)

Then, the node N_j is calculated by summing all the information from its predecessors:

N_j = Σ_{i<j} ō^(i,j)(N_i).  (3)

Meta-learning-like gradient descent is adopted for optimization to reduce the prohibitive computational cost required by RL or evolution (Xie et al., 2019). However, the objective of this optimization is the network structure rather than the model performance. Thus, after training, this kind of NAS needs to prune the searched structure and re-train a new network model with the compact structure for performance comparison, which is labor-intensive and is avoided in our work.
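To make Eqs. (2)-(3) concrete, a minimal PyTorch sketch of the mixed-operation cell is given below; the class names, the `make_ops` factory, and the tuple-keyed α dictionary are illustrative assumptions rather than any released code, and the cell's input/output handling is simplified.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Weighted sum over the K candidate operations on one edge (Eq. 2)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)  # parameters of these ops belong to w

    def forward(self, x, alpha_edge):
        # alpha_edge: relaxed gating weights on the simplex, shape (K,)
        return sum(a * op(x) for a, op in zip(alpha_edge, self.ops))

class Cell(nn.Module):
    """A cell with B nodes; node j sums the mixed outputs of its predecessors (Eq. 3)."""
    def __init__(self, B, make_ops):
        super().__init__()
        self.B = B
        self.edges = nn.ModuleDict({
            f"e{i}_{j}": MixedOp(make_ops()) for j in range(1, B) for i in range(j)
        })

    def forward(self, x, alpha):
        nodes = [x]  # node 0 is the cell input in this simplified sketch
        for j in range(1, self.B):
            nodes.append(sum(self.edges[f"e{i}_{j}"](nodes[i], alpha[(i, j)])
                             for i in range(j)))
        return nodes[-1]

# Example: K = 2 candidate ops per edge on 16-channel feature maps.
C = 16
make_ops = lambda: [nn.Conv2d(C, C, 3, padding=1), nn.Identity()]
cell = Cell(B=4, make_ops=make_ops)
alpha = {(i, j): torch.softmax(torch.randn(2), dim=0)
         for j in range(1, 4) for i in range(j)}
out = cell(torch.randn(8, C, 32, 32), alpha)  # shape (8, 16, 32, 32)
```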
DEEP BAYESIAN STRUCTURE NETWORKS
In this work, we propose a novel Bayesian structure learning approach for deep neural networks. Concretely, we follow the network design of NAS but view α as Bayesian variables and w as point estimates (see the graphical model in Figure 1). To infer the posterior distribution p(α|D, w) = p(α)p(D|α, w)/p(D|w), where p(α) is the prior (we omit its dependency on the hyperparameter θ_0 here), we adopt the techniques in Section 2.1. We assume both the prior and the introduced variational distribution are fully factorizable categorical distributions, namely, p(α) = Π_{i<j} p(α^(i,j)) and q(α|θ) = Π_{i<j} q(α^(i,j)|θ^(i,j)), where θ = {θ^(i,j) ∈ R^K | 1 ≤ i < j ≤ B} denotes the trainable categorical logits. We rewrite Eq. (1) and obtain the negative evidence lower bound (ELBO):

L(θ, w) = −E_{q(α|θ)}[log p(D|α, w)] + KL(q(α|θ) || p(α)).  (4)

Notably, minimizing L w.r.t. θ and w corresponds to Bayesian inference on α and maximum a posteriori (MAP) estimation of w,2 respectively. Thus, the optimization of the network structure and the network weights can be unified as min_{θ,w} L(θ, w). To resolve this, we relax both p(α^(i,j)) and q(α^(i,j)|θ^(i,j)) to be concrete distributions (Maddison et al., 2016). Then, samples α from q(α|θ) are generated via the softmax transformation

α_k^(i,j) = exp((θ_k^(i,j) + ε_k^(i,j))/τ) / Σ_{l=1}^{K} exp((θ_l^(i,j) + ε_l^(i,j))/τ), k = 1, …, K,  (5)

where ε^(i,j) = {ε_k^(i,j) ∼ Gumbel(0, 1)} are the Gumbel variables and τ ∈ R+ is the temperature. Writing this transformation as α = g(θ, ε), we derive the following gradient estimators:

∇_θ L(θ, w) = E_ε[−∇_θ log p(D|g(θ, ε), w) + ∇_θ log q(g(θ, ε)|θ) − ∇_θ log p(g(θ, ε))],  (6)

1 We will refer to cell-based differentiable NAS as NAS for short when there is no ambiguity.
2 This is because we use a regularizer on the weights, e.g., weight decay, to alleviate over-fitting.

Figure 2: Each column includes 5 samples α^(i,j) from an adaptive concrete distribution with some β^(i,j) at τ = 1. Samples in every row share the same ε^(i,j). The base class probabilities are softmax(θ^(i,j)) = [0.05, 0.05, 0.5, 0.4] in each sample.
∇_w L(θ, w) = E_ε[−∇_w log p(D|g(θ, ε), w)].  (7)

The first term in Eq. (6) corresponds to the gradient of the negative log-likelihood, and we leave the estimation of the last two terms (i.e., the log densities) to the next section. In practice, we approximate the expectations in Eq. (6) and Eq. (7) with T Monte Carlo (MC) samples, and update the structure and the weights w simultaneously.
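The sampling of Eq. (5) and the T-sample updates can be sketched as follows; `model(data, alpha)` is assumed to be a structure-conditioned network such as the `Cell` above, `theta_dict` maps each edge (i, j) to a trainable logit tensor, and the exact concrete log-density terms of Eq. (6) are replaced by a crude L2 placeholder, so this is an illustration rather than the full objective.

```python
import torch
import torch.nn.functional as F

def sample_structure(theta, tau=1.0):
    """Reparameterized sample alpha = g(theta, eps) from the concrete
    distribution over one edge (Eq. 5); theta holds the K categorical logits."""
    eps = -torch.log(-torch.log(torch.rand_like(theta)))  # Gumbel(0, 1) noise
    return F.softmax((theta + eps) / tau, dim=-1)

def training_step(model, theta_dict, data, target, T=4, reg_weight=1e-3):
    """One joint update of the structure logits and the weights using T MC samples.
    The log q - log p terms of Eq. (6) are approximated here by a simple L2
    penalty on theta; the paper instead evaluates the concrete densities exactly."""
    nll = 0.0
    for _ in range(T):
        alpha = {edge: sample_structure(th) for edge, th in theta_dict.items()}
        nll = nll + F.cross_entropy(model(data, alpha), target)
    loss = nll / T + reg_weight * sum((th ** 2).mean() for th in theta_dict.values())
    loss.backward()  # gradients flow to both w (model parameters) and theta
    return loss.item()
```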
After training, we obtain the following predictive distribution:

p(y|x, D) ≈ E_{q(α|θ*)}[p(y|x, α, w*)] ≈ (1/S) Σ_{s=1}^{S} p(y|x, α_s, w*), α_s ∼ q(α|θ*),  (8)

where θ* and w* denote the converged parameters. Eq. (8) implies that the model predicts by ensembling the predictions of networks whose structures are randomly sampled.
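A sketch of the MC ensembling in Eq. (8), reusing the hypothetical `sample_structure` helper above; the sample count and interfaces are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict(model, theta_dict, x, n_samples=10, tau=1.0):
    """MC approximation of the predictive distribution in Eq. (8): average the
    predictions of networks whose structures are drawn from q(alpha | theta*)."""
    probs = 0.0
    for _ in range(n_samples):
        alpha = {edge: sample_structure(th, tau) for edge, th in theta_dict.items()}
        probs = probs + F.softmax(model(x, alpha), dim=-1)
    return probs / n_samples
```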
ADAPTIVE CONCRETE DISTRIBUTION
The weight sharing mechanism in DBSN is a non-trivial contribution to Bayesian structure learning, enabling computationally efficient optimization. However, it also introduces non-negligible training challenges. Specifically, because of the limited capacity of the shared weights w, it is difficult to train them well enough to suit all the structures. The under-fitting of w then biases the learning of α's variational posterior and results in unsatisfactory convergence of the whole model. We note that an analogous phenomenon was also observed by Mackay et al. (2019) in the gradient-based hyper-parameter optimization scenario.
Therefore, to help w fit the structure distribution better and eventually benefit the Bayesian structure learning, we seek to reduce the variance of the structure distribution. Specifically, we analyze the reparameterization procedure of the concrete distribution and propose to multiply a tunable scalar β^(i,j) with ε^(i,j) in the sampling:

α_k^(i,j) = exp((θ_k^(i,j) + β^(i,j) ε_k^(i,j))/τ) / Σ_{l=1}^{K} exp((θ_l^(i,j) + β^(i,j) ε_l^(i,j))/τ), k = 1, …, K.  (9)

Accordingly, we derive the log probability density of the adaptive concrete distribution, which is slightly different from that of the concrete distribution (see the detailed derivation in Appendix A). With θ̃_k = θ_k^(i,j)/β^(i,j) and τ̃ = τ/β^(i,j), it takes the form

log q(α^(i,j)|θ^(i,j)) = log((K−1)!) + (K−1) log τ̃ + Σ_{k=1}^{K} (θ̃_k − (τ̃+1) log α_k^(i,j)) − K · LΣE_{k=1}^{K}(θ̃_k − τ̃ log α_k^(i,j)),  (10)

where LΣE represents the log-sum-exp operation. With this, the last two terms of Eq. (6) can be estimated exactly.
Obviously, the adaptive concrete distribution degrades to the concrete distribution when β^(i,j) = 1. As shown in Figure 2, sliding β^(i,j) from 1 to 0 gradually decreases the diversity of the sampled structures. Therefore, we should also keep β^(i,j) from becoming too small, to avoid the over-fitting issue that a point-estimate structure (i.e., β^(i,j) = 0) may suffer from. In practice, we gradually reduce the sample variance along with the convergence of the weights by decaying β^(i,j) from 1 to 0.5 with a linear schedule during training.
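A minimal sketch of the adaptive sampling in Eq. (9) together with the linear β schedule described above; the per-epoch schedule granularity is an assumption.

```python
import torch
import torch.nn.functional as F

def sample_adaptive(theta, beta, tau=1.0):
    """Adaptive concrete sample (Eq. 9): scaling the Gumbel noise by beta
    shrinks the variance of the sampled structures as beta decreases."""
    eps = -torch.log(-torch.log(torch.rand_like(theta)))
    return F.softmax((theta + beta * eps) / tau, dim=-1)

def beta_at(epoch, total_epochs, start=1.0, end=0.5):
    """Linear decay of beta from 1 to 0.5 over training, as described above."""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + frac * (end - start)
```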
PRACTICAL IMPROVEMENTS OF THE STRUCTURE LEARNING SPACE
In order to make the training more stable and more efficient, we make some changes to the structure learning space (i.e., the support of the structure distribution) commonly adopted in NAS.
Overall modification. To facilitate more effective information flow in the cell, we let the input of a cell (i.e., the output of the previous cell) be fixedly connected to all the internal nodes by 1×1/3×3 convolutions in the classification/segmentation tasks. We only learn the connections between the B internal nodes, as shown in Appendix F. The resulting nodes are concatenated along with the input to produce the cell's output. In the spirit of DenseNet (Huang et al., 2017) and FC-DenseNet (Jégou et al., 2017), we constrain the downsampling/upsampling modules to be the typical BN-ReLU-Conv-Pooling/ConvTranspose operations, to ease the learning of the network structure.
Batch normalization. NAS usually adopts the ReLU-Conv-BN order in operations. However, in the searching stage, the learnable affine transformations in batch normalizations are always disabled to avoid the output rescaling issue. NAS does not suffer from this because it trains another network with learnable batch normalizations in the extra re-training stage. Instead, DBSN has to fix the issue because we do not re-train the model. Thus, we propose to put a complete batch normalization at the front of the next layer. Namely, we adopt BN-ReLU-Conv-BN convolutional layers, where the first BN has learnable affine parameters while the second one does not.
Candidate operations. In order to make the training more efficient, we remove the operations that are popular in NAS but unnecessary in DBSN, including all the 5×5 convolutions, which can be replaced by stacked 3×3 convolutions, and all the pooling layers, which are mainly used for the downsampling module. The candidate operations in DBSN are then: 3×3 separable convolutions, 3×3 dilated separable convolutions, identity, and zero. We follow prior work on differentiable NAS for the detailed settings of these operations.
Group operation. To obtain the j-th node in a cell, there are (j − 1)K operations from its predecessors to compute, which can be organized into K groups according to the operation type. Note that the operations within a group are independent, so we advocate replacing them with a single group operation (e.g., a group convolution), which improves efficiency significantly.
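As an illustration of this consolidation, the sketch below replaces j − 1 = 3 independent same-type convolutions with a single grouped convolution over the concatenated predecessors; plain 3×3 convolutions stand in for the separable convolutions DBSN actually uses.

```python
import torch
import torch.nn as nn

# Three predecessors, each feeding a same-type 3x3 convolution (C -> C channels).
n_pred, C = 3, 16
separate = nn.ModuleList([nn.Conv2d(C, C, 3, padding=1) for _ in range(n_pred)])

# A single grouped convolution over the concatenated predecessors computes the
# same n_pred independent C -> C convolutions in one kernel launch.
grouped = nn.Conv2d(n_pred * C, n_pred * C, 3, padding=1, groups=n_pred)

xs = [torch.randn(8, C, 32, 32) for _ in range(n_pred)]
outs_separate = [conv(x) for conv, x in zip(separate, xs)]
outs_grouped = grouped(torch.cat(xs, dim=1)).chunk(n_pred, dim=1)
```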
DISCUSSION
One may be concerned that the practical choice of weight sharing could push the structure distribution toward the most likely point for the weights and result in a Dirac structure distribution. However, the prior keeps the variational posterior from collapsing via the KL regularization (the last term of Eq. (4)).
Besides, recall that w is a set including the parameters of all the redundant operations. In fact, different network structures then adjust different subsets of w, further alleviating the structure collapsing issue. The widely used technique of MC dropout (Gal & Ghahramani, 2016; Gal et al., 2017) can also be seen as using the same weights for different structures, and its empirical results also suggest that this kind of modeling choice is reasonable. Nevertheless, capturing the dependency of w on α may indeed bring more accurate modeling, and we leave this as future work. We also emphasize that using point estimates for the weights benefits the whole model's learning significantly. On one hand, as stated in the introduction, there are still frustrating difficulties in achieving scalable Bayesian inference on high-dimensional network weights, which is also evidenced by the results in Table 1, Table 3, and Appendix C. On the other hand, DBSN deploys a weight decay regularizer on the weights, which implicitly imposes a Gaussian prior on w. Then, DBSN performs maximum a posteriori (MAP) estimation of w, namely, estimating the mode of w's posterior distribution p(w|D), which can be viewed as approximate Bayesian inference on w.
RELATED WORK
Learning flexible Bayesian models has long been a goal of the community (MacKay, 1992; Neal, 1995; Balan et al., 2015; Wang & Yeung, 2016). The stochastic variational inference methods for Bayesian neural networks are particularly appealing owing to their analogy to ordinary back-propagation (Graves, 2011; Blundell et al., 2015). More expressive distributions, such as matrix-variate Gaussians or multiplicative normalizing flows (Louizos & Welling, 2017), have also been introduced to represent posterior dependencies, but they are hard to train without heavy approximations. Recently, there has been increasing interest in developing Adam-like optimizers to perform natural-gradient variational inference for BNNs (Zhang et al., 2018; Bae et al., 2018; Khan et al., 2018). Despite enabling scalability, these methods seem to demonstrate compromised performance compared with state-of-the-art deep models. Interpreting the stochastic techniques of deep models as Bayesian inference is also insightful (Gal & Ghahramani, 2016; Kingma et al., 2015; Teye et al., 2018; Mandt et al., 2017; Lakshminarayanan et al., 2017), but these methods still have relatively restricted and inflexible posterior approximations. Dikov & Bayer (2019) propose a unified Bayesian framework to infer the posterior of both the network weights and the structure, which is most similar to DBSN, but the network structure they consider, namely, layer size and network depth, is essentially impractical for complicated deep models. Instead, we inherit the design of the structure learning space from NAS and provide insightful techniques to improve convergence, thus enabling effective Bayesian structure learning for deep neural networks. Differentiable NAS methods typically rely on a meta-learning-like bi-level optimization (Finn et al., 2017) and need to re-train another network with the pruned compact structure after the search. In contrast, DBSN unifies the learning of weights and structure in one training stage, alleviating the mismatch of structures between search and re-training, as well as the inefficiency issues suffered by differentiable NAS.
EXPERIMENTS
To validate the structure learning ability and the predictive performance of DBSN, we first evaluate it on image classification and segmentation tasks. For the estimation of predictive uncertainty, we consider model calibration and the generalization of the predictive uncertainty to adversarial examples as well as out-of-distribution samples, following existing work. We show that DBSN outperforms strong baselines in these tasks, shedding light on practical Bayesian deep learning.
IMAGE CLASSIFICATION ON CIFAR-10 AND CIFAR-100
Setup. We set B = 7, T = 4, and K = 4; thus, α consists of 7 × 6/2 = 21 sub-variables. The whole network is composed of 12 cells and 2 downsampling modules, which have a channel compression factor of 0.4 and are located at 1/3 and 2/3 of the depth. We employ a 3×3 convolution before the first cell and put a global average pooling followed by a fully connected (FC) layer after the last cell. The redundant operations all have 16 output channels. We initialize w following He et al. (2015); the initialization of θ follows prior work. The prior distributions of α^(i,j) are set to concrete distributions with uniform class probabilities. A momentum SGD with initial learning rate 0.1 (divided by 10 at 50% and 75% of the training procedure, following Huang et al. (2017)), momentum 0.9, and weight decay 10^-4 is used to train the weights w. An Adam optimizer with learning rate 3 × 10^-4 and momentum (0.5, 0.999) is used to learn θ. We deploy the standard data augmentation scheme (mirroring/shifting) and normalize the data with the channel statistics. The whole training set is used for optimization. We train DBSN for 100 epochs with batch size 64, which takes one day on 4 GTX 1080-Tis. The implementation is based on PyTorch (Paszke et al., 2017) and the code is available online at https://github.com/anonymousest/DBSN.
Comparisons between DBSN and the baselines designed by ourselves are particularly insightful. 1) DBSN surpasses DBSN*, revealing the effectiveness of the adaptive concrete distribution. 2) DBSN-1 is remarkably worse than DBSN owing to the higher variance of the gradients estimated with only one sample. 3) The comparison between DBSN and Fixed α validates that adapting the network structure w.r.t. the data distribution benefits the fitting of the model, resulting in substantially enhanced performance. 4) Random α, Dropout, and Drop-path train the networks with manually designed, untunable randomness, and hence are inferior to DBSN. 5) NEK-FAC attains rather compromised performance despite the powerful VGG16 architecture and one of the most advanced variational BNN algorithms, suggesting that DBSN should be preferred over classic BNNs in scenarios where predictive performance is a major concern. 6) BNN-LS and Fully Bayesian DBSN both perform poorly, due to the fundamental difficulties of modeling distributions over high-dimensional weights. 7) PE and DARTS are two methods that learn a point-estimate network structure, and both fall behind in terms of test error. In particular, DARTS is much worse as it only trains the weights on half of the training set. This shows that DBSN is an appealing choice for effective neural structure learning with only one-stage training.
SEMANTIC SEGMENTATION ON CAMVID
To further verify that learning the network structure w.r.t. the data helps DBSN to obtain better performance than standard NNs and BNNs, we apply DBSN to the challenging segmentation benchmark CamVid (Brostow et al., 2008). Our implementation is based on the brief FC-DenseNet framework (Jégou et al., 2017). Specifically, we only replace the original dense blocks with the structure-learnable cells, without introducing further advanced techniques from the semantic segmentation community, so as to isolate the performance gain resulting solely from the learnable network structure. For the setup, we set B = 5 (the same as the number of layers in every dense block of FC-DenseNet67) and T = 1, and learn two cell structures for the downsampling path and the upsampling path, respectively. We use a momentum SGD with initial learning rate 0.01 (which decays linearly after 350 epochs), momentum 0.9 and weight decay 10^-4 instead of the original RMSprop for better results. The other settings follow Jégou et al. (2017) and the classification experiments above. We also implement FC-DenseNet67 as a baseline. We present the results in Table 2 and Figure 3.
It is evident that DBSN surpasses the competing FC-DenseNet67 by a large margin while using fewer parameters. DBSN also demonstrates significantly better performance than the classic Bayesian SegNet, which adopts MC dropout for uncertainty estimation. We emphasize that this experiment shows that the proposed approach is generally applicable. It is also worth noting that the uncertainty produced by DBSN is interpretable (see Figure 3): the edges of objects and the regions containing overlapping objects have substantially higher uncertainty than the other parts.
ESTIMATION OF PREDICTIVE UNCERTAINTY
To validate that DBSN can provide promising predictive uncertainty, we evaluate it via calibration. We further examine the predictive uncertainty on adversarial examples and out-of-distribution (OOD) samples to test if the model knows what it knows. We also pay particular attention to the comparison between Drop-path and Dropout to double-check if more structured randomness (Larsson et al., 2016) benefits predictive uncertainty more.
Calibration is orthogonal to accuracy (Lakshminarayanan et al., 2017) and can be well estimated by the Expected Calibration Error (ECE) (Guo et al., 2017). Thus, we evaluate the trained models on the test sets of CIFAR-10 and CIFAR-100 and calculate their ECE, as shown in Table 3. We also plot some reliability diagrams (Guo et al., 2017) in Appendix D to provide a direct illustration of calibration. Unsurprisingly, DBSN achieves state-of-the-art calibration, outperforming the strong baselines Dropout and NEK-FAC. NEK-FAC, BNN-LS and Fully Bayesian DBSN all have much worse ECE than DBSN, implying the superiority of structure uncertainty over weight uncertainty.

Figure 4: Accuracy (solid) and entropy (dashed) vary w.r.t. the adversarial perturbation size on CIFAR-10 (left) and CIFAR-100 (right).
We also notice that Drop-path is better than Dropout in terms of ECE, supporting our hypothesis that more structured randomness is more beneficial to the predictive uncertainty.
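For reference, the ECE used in these comparisons bins test predictions by confidence and averages the per-bin gap between accuracy and confidence (Guo et al., 2017); a minimal NumPy sketch, with the bin count as a free choice, is:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: weighted average gap between confidence and accuracy per bin."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of samples in the bin
    return ece
```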
To test the predictive uncertainty on adversarial examples, we apply the fast gradient sign method (FGSM) (Goodfellow et al., 2014) to attack the models trained on CIFAR-10 and CIFAR-100 using the corresponding test samples. We then calculate the predictive entropy of the generated adversarial examples and depict the average entropy in Figure 4. As expected, the entropy of DBSN grows rapidly as the perturbation size increases, implying that DBSN becomes highly uncertain when encountering adversarial perturbations. By contrast, the change in entropy of Dropout and NEK-FAC is relatively moderate, which means that these methods are not as sensitive as DBSN to adversarial examples. Besides, Drop-path is still better than Dropout, consistent with the conclusion above. We also note that Random α has the highest predictive entropy. We speculate that this is because Random α adopts the most diverse network structures (owing to the uniform class probabilities), so the ensemble of predictions from the corresponding networks tends to be closer to uniform. We further attack with more powerful algorithms, e.g., the Basic Iterative Method (BIM) (Kurakin et al., 2016), and provide the results in Appendix E.
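A minimal sketch of this evaluation is given below; it assumes, as in the DBSN setup described earlier, that each forward pass through `model` draws a fresh structure sample and returns logits.

```python
import torch
import torch.nn.functional as F

def fgsm_entropy(model, x, y, eps, n_mc=30):
    """Craft FGSM examples and report the mean predictive entropy,
    averaging softmax outputs over n_mc sampled structures."""
    x = x.detach().clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).detach()  # one-step sign perturbation
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x_adv), dim=1)
                             for _ in range(n_mc)]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.mean().item()
```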
Moreover, we look into the entropy of the predictive distributions on OOD samples, to adequately evaluate the quality of uncertainty estimation. We use the trained models on CIFAR-10 and CIFAR-100, and take the samples from the test set of SVHN as OOD samples. We calculate their predictive entropy and draw the empirical CDF of the entropy in Figure 5, following Louizos & Welling (2017).
A curve close to the bottom right corner is desirable, as it means most OOD samples have relatively large entropy (i.e., low prediction confidence). Obviously, DBSN demonstrates comparable or even better results than competing methods such as Dropout and NEK-FAC. In addition, Drop-path attains substantially better results than Dropout. Analogous to the experiments on adversarial examples, Random α provides impressive predictive uncertainty on the OOD samples.
In conclusion, DBSN consistently delivers state-of-the-art predictive uncertainty in various scenarios, validating the effectiveness of structure uncertainty.
RETHINKING OF THE ONE-SHOT NAS
One-shot NAS (Bender et al., 2018; Guo et al., 2019) first trains the weights of a super network and then searches for a good structure given the weights. This avoids the bias induced by the gradient-based joint optimization of differentiable NAS. However, we argue that a super network trained with fixed (Bender et al., 2018) or uniformly sampled (Guo et al., 2019) network structures cannot flexibly focus its capacity on the most crucial operations, harming the subsequent searching.
To this end, we have conducted a set of experiments to check whether dynamically adjusting the network structure at the stage of weight training helps to find better network structures eventually.
Observing that DBSN trains a super network with adaptive network structures whereas Random α trains a super network with unadjustable structures (similar to the uniform sampling used by Guo et al. (2019)), we choose to search for the optimal structure distributions based on the trained weights from DBSN and Random α. After searching, we train new networks with the searched structure distributions (fixed during training) from scratch, and then test their performance. The results are shown in Table 4. The structure distribution searched on the weights trained by DBSN outperforms the other one significantly, supporting our hypothesis. Therefore, we propose to adapt the structure in the weight-training stage of one-shot NAS, which drives the most useful operations to be optimized thoroughly and eventually yields more powerful network structures.
VISUALIZATION OF THE LEARNED STRUCTURE DISTRIBUTIONS
We visualize the learned structure distributions in Appendix F. The structure distributions for different tasks look quite different, which implies that the structures are learned in a way that accounts for the specific characteristics in the data.
CONCLUSION
In this work, we have introduced a novel Bayesian structure learning approach for deep neural networks. The proposed DBSN draws inspiration from the network design of NAS and models the network structure as Bayesian variables. Stochastic variational inference is employed to jointly learn the network weights and the distribution of the network structure. We further develop the adaptive concrete distribution and improve the structure learning space to facilitate the convergence of the whole model. Empirically, DBSN has shown impressive performance on discriminative learning tasks, surpassing advanced deep models, and has presented state-of-the-art predictive uncertainty in various scenarios. In conclusion, DBSN provides a more practical way for Bayesian deep learning, without a compromise between predictive performance and Bayesian uncertainty.
There are two major directions for future work. On one hand, the current DBSN is not efficient enough, so strategies need to be developed to make it more efficient. On the other hand, DBSN still has a relatively restricted structure learning space; thus, more operations can be introduced and more global network structures can be learned in future work.

A DERIVATION OF EQ. (10)

We abbreviate α^(i,j), θ^(i,j), β^(i,j) and ε^(i,j) as α, θ, β and ε, respectively, and let p = softmax(θ). Denote c = Σ_{i=1}^K exp(z_i/τ), so that α_k = exp(z_k/τ)/c. Consider the transformation F(z_1, ..., z_K) = (α_1, ..., α_{K−1}, c), which has the inverse transformation F^{−1}(α_1, ..., α_{K−1}, c) = (τ(log α_1 + log c), ..., τ(log α_K + log c)), whose Jacobian determinant follows the derivation of the concrete distribution (Maddison et al., 2016). Multiplying this determinant by the density of z gives the density of (α_1, ..., α_{K−1}, c). Letting r = log c and applying the change-of-variables formula yields the density in terms of r. Substituting γ = log Σ_{i=1}^K (p_i α_i^{−τ})^{1/β} and integrating out r then gives the log density, which is equal to Eq. (10).
B THE EFFECTS OF THE NUMBER OF MC SAMPLES IN TEST PHASE
We draw the change of the test loss, test error rate and test ECE with respect to the number of MC samples used for testing DBSN in Figure 6 (CIFAR-10) and Figure 7 (CIFAR-100). It is clear that ensembling the predictions from models with various sampled network structures enhances the final predictive performance and calibration significantly. This is in marked contrast to the situation of classic variational BNNs, where using more MC samples does not necessarily bring improvement over using the most likely sample. As shown in the plots, it is advisable to use 20+ MC samples to predict unseen data, in order to adequately exploit the learned structure distribution. Indeed, we use 100 MC samples in all the experiments, except the adversarial attack experiments, where we use 30 MC samples for attacking and evaluation.

C COMPARISON WITH VOGN

We replaced the variational inference algorithm of (Blundell et al., 2015) in BNN-LS and Fully Bayesian DBSN with VOGN, based on VOGN's official repository (https://github.com/team-approx-bayes/dl-with-bayes). With the original network size (B = 7, 12 cells), the baselines trained with VOGN needed more than one hour for one epoch. Thus we adopted smaller networks (B = 4, 3 cells), which have almost 41K parameters, for the two baselines. We also trained a DBSN in the same setting. The detailed parameters used to initialize VOGN are given here (https://github.com/anonymousest/DBSN/blob/master/dbsn/train_bnn_torchsso.py#L220). The experiments were conducted on CIFAR-10 and the results are provided in Table 5. The predictive performance and uncertainty gaps between DBSN and the two baselines are very large, which possibly results from the under-fitting of the high-dimensional weight distributions in BNN-LS and Fully Bayesian DBSN. We believe that our implementation is correct because our results are consistent with the original results in Table 1 of Osawa et al. (2019) (VOGN has 75.48% and 84.27% validation accuracy even with the larger 2.5M AlexNet and 11.1M ResNet-18 architectures). Further, DBSN is much more efficient than them. These comparisons strongly reveal the benefits of modeling structure uncertainty over modeling weight uncertainty, highlighting the practical value of DBSN.
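The MC-sample ensembling discussed in Appendix B amounts to averaging softmax predictions over repeated forward passes; a minimal sketch (assuming, as above, that each forward pass resamples the structure) is:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_predict(model, x, n_mc=100):
    """Average softmax predictions over n_mc sampled network structures."""
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_mc)])
    return probs.mean(dim=0)
```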
D MORE RESULTS FOR CALIBRATION
We plot the reliability diagrams of 4 typical methods, which represent the deep BNN with structure uncertainty, the classic BNN with weight uncertainty, the deterministic NN with MC dropout and the standard NN, respectively, in Figure 8. Obviously, DBSN has better reliability diagrams than NEK-FAC and Dropout, proving the effectiveness of the uncertainty on network structure.
E ATTACK WITH BIM
We perform an adversarial attack using the BIM algorithm. Concretely, we set the number of iterations to 3 and set the perturbation size in every step to 1/3 of the whole perturbation size. The experiments mainly focus on the models trained on CIFAR-10. Figure 9 shows the results. Random α, DBSN* and DBSN have increasing entropy when the perturbation size changes from 0 to 0.01, but all the other approaches are attacked successfully, with their entropy dropping. Strictly speaking, however, only Random α at perturbation size 0.01 provides useful predictive uncertainty, where the entropy can be used to reject the predictions. Therefore, we have to concede that BIM is powerful enough to break all the methods, including DBSN. We thus advise adjusting DBSN accordingly (e.g., employing adversarial training or a more robust loss) if DBSN is to be used to defend against adversarial attacks.

Figure 13: Structure of the cell learned on CamVid (in the upsampling path). The pen width of an edge implies the sampling probability of its corresponding operation. | 2019-11-22T01:31:28.000Z | 2019-11-22T00:00:00.000 | {
"year": 2019,
"sha1": "b4b78fc418b3f93a9d7bfe19e71e3eded7de6658",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b4b78fc418b3f93a9d7bfe19e71e3eded7de6658",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
251639726 | pes2o/s2orc | v3-fos-license | A New De-Noising Method Based on Enhanced Time-Frequency Manifold and Kurtosis-Wavelet Dictionary for Rolling Bearing Fault Vibration Signal
The transient pulses caused by local faults of rolling bearings are an important source of measurement information for fault diagnosis. However, extracting transient pulses from complex non-stationary vibration signals with a large amount of background noise is challenging, especially in the early stage. To improve the anti-noise ability and detect incipient faults, a novel signal de-noising method based on the enhanced time-frequency manifold (ETFM) and a kurtosis-wavelet dictionary is proposed. First, to mine the high-dimensional features, the C-C method and Cao's method are combined to determine the embedding dimension and delay time of phase space reconstruction. Second, the input parameters of the linear local tangent space alignment (LLTSA) algorithm are determined by the grid search method based on Renyi entropy, and the dimension is reduced by manifold learning to obtain the ETFM with the highest time-frequency aggregation. Finally, a kurtosis-wavelet dictionary is constructed for selecting the best atoms, eliminating the noise, and reconstructing the defective signal. Actual simulations showed that the proposed method is more effective in noise suppression than traditional algorithms and that it can accurately reproduce the amplitude and phase information of the raw signal.
Introduction
As an important part of rotating machinery, the rolling bearing's dynamic performance affects the safe and stable operation of the whole machine. If defects cannot be found in time, the mechanical equipment may stop running, or major safety accidents may be caused [1][2][3]. Therefore, rolling bearing failure, a root cause of rotating machinery failure, must be found as soon as possible to avoid economic losses and disasters.
As important information for fault diagnosis, the periodic transient shocks contained in the bearing vibration data reflect the dynamic behavior of the faulty bearing. However, the measured vibration signal is often complex and non-stationary and contains a large amount of background noise, so the useful fault information is often weak and difficult to identify. At present, the commonly used signal de-noising methods include the wavelet de-noising method, modal decomposition methods, the spectral subtraction method, and so on. Qiu et al. [4] successfully extracted weak pulses from rolling bearing fault signals by using the Morlet wavelet. Dron et al. [5] used spectral subtraction to suppress time-invariant noise and improve signal kurtosis and peak factor. Abdelkader et al. [6] optimized empirical mode decomposition through a threshold method for the identification of early faults of rolling bearings. Zhang et al. [7] utilized the variational mode decomposition algorithm to extract fault information of gears and rolling-element bearings. However, the traditional methods have their own shortcomings: the wavelet de-noising method struggles to deal with the white noise that exists widely in every frequency band; although the spectral subtraction method has a good suppression effect on steady-state noise, it is not suitable for dealing with the background noise in nonlinear and non-stationary vibration signals; and mode decomposition suffers from the endpoint effect and mode aliasing.
Chaos theory is an important part of nonlinear science. It reflects a "random" state of motion and reveals the internal order of deterministic nonlinear systems [8]. Takens introduced chaos theory into nonlinear time series analysis [9], arguing that the attractor of a chaotic system can be reconstructed from a measured single-variable time series, and that the judgment, analysis, and prediction of the chaotic system can be carried out according to the properties of the attractor. Phase space reconstruction that reflects the dynamic characteristics of the system needs a long enough time series and a reasonable choice of the reconstruction parameters, namely the embedding dimension and the delay [10]. There are two main views on the selection of the embedding dimension and time delay. The first view is that the two are unrelated, represented by algorithms such as Cao's method [11]. Zhang et al. [12] used the mutual information algorithm and Cao's method to determine the appropriate delay time and the optimal embedding dimension and effectively identified chaotic signals. The second view holds that the embedding dimension and the time delay are correlated, represented by the C-C method [13] and the time window method [14]. Wang et al. [15] used the C-C method to process raw wind speed sequences and input the reconstructed phase space into a subsequent model to successfully predict wind speed. Specifically, the C-C method does not consider the effect of the sequence length of a one-dimensional vibration signal on the performance of the algorithm, so its robustness is poor, while Cao's method needs a time delay to calculate the embedding dimension. On this basis, after exploring the principles of the C-C method and Cao's method, this paper proposes to use them jointly to determine the time delay and embedding dimension, and to map vibration signals into a high-dimensional space to mine their essential features.
Manifold learning can effectively mine the embedded coordinates of low-dimensional manifolds in a reconstructed high-dimensional space and realize the learning and enhancement of data features and essential information [16][17][18][19][20]. Building on the LTSA algorithm, the LLTSA [21] algorithm performs a local linear extension, avoids the loss of many sensitive features, and is more suitable for the extraction of nonlinear features in rolling bearing fault diagnosis. Tang et al. [22] used the LLTSA algorithm to compress the high-dimensional vectors of the samples so that the samples have better resolution. In recent years, many scholars have begun to combine time-frequency analysis with manifold learning, which not only analyzes the time-frequency manifold structure of signals but also achieves a good noise suppression effect [23][24][25]. He et al. [26] analyzed the nonlinear time-frequency manifold structure of defective signals, and the extracted signal features were suitable for the diagnosis of mechanical faults. Li et al. [27] extracted the time-frequency manifold of RF signals and successfully separated and classified the signals. There are many optimization methods for the input parameters of the LLTSA algorithm; Kumar et al. [28] used the frequency factor to optimize the optimal neighborhood. However, the applicability of these methods to time-frequency manifold scenarios is limited, so this paper proposes a gridded parameter search method based on Renyi entropy to optimize LLTSA. Finally, the low-dimensional manifold with the highest time-frequency aggregation is obtained, which is called the enhanced time-frequency manifold (ETFM). Because the computation process of the LLTSA algorithm is nonlinear, the ETFM inevitably loses the amplitude information of the transient impacts that we are concerned with.
The purpose of sparse representation is to represent most or all of the information of a signal with fewer atoms, which can detect the internal structure of the data while maintaining the transient amplitude of the signal [29,30]. He et al. [31,32] combined manifold learning with sparse representation and proposed a time-frequency manifold sparse algorithm, which achieved good results in the fault diagnosis of rotating machinery. Tang et al. [33] used sparse representation to extract fault features submerged in noise. The dictionary determines the error between the sparse representation result and the raw signal, and thus has a great influence on the final fault feature extraction and signal de-noising. At present, dictionaries can be divided into analysis dictionaries and learned dictionaries. An analysis dictionary uses fixed basis functions to construct the dictionary, which has the advantage of being fast and concise. Zheng et al. [34] introduced the Gabor multi-channel model to construct a Gabor dictionary and proved the usability of this method in the field of face recognition. Fan et al. [35] used a Morlet dictionary for de-noising and feature extraction of gearbox fault signals. In different application scenarios, the adaptability of an analysis dictionary is poor, while the adaptability of a learned dictionary is strong because it is learned from data. Zhou et al. [36] used shift-invariant dictionary learning to extract the fault shocks of mechanical equipment signals. Ren et al. [37] used data sets with known fault types to train the dictionary, which improved the accuracy of fault diagnosis. Kong et al. [38] proposed a discriminative dictionary-learning-based sparse representation classification framework for intelligent planet-bearing fault identification. The quality of the sparse representation obtained with a learned dictionary largely depends on the dictionary learning algorithm, which is computationally complex and tedious to optimize. Therefore, combining the advantages of analysis dictionaries and learned dictionaries, this paper proposes the construction of a kurtosis-wavelet dictionary.
In order to reduce the distortion degree of the reconstructed phase space, improve the noise suppression effect of time-frequency manifold learning, and highlight the feature extraction ability of sparse representation, we propose a new de-noising method based on the ETFM and a kurtosis-wavelet dictionary for rolling bearing fault vibration signals. The de-noised signal obtained by this method has a very high SNR and retains the amplitude information of the raw signal, which lays a foundation for fault feature extraction. The main contributions of this paper are as follows:
1. We propose a method of using the C-C method and Cao's method jointly to determine the time delay and embedding dimension of phase space reconstruction, which reduces the distortion degree of the reconstructed space and highlights the fault characteristics.
2. LLTSA is optimized by a gridded parameter search method based on Renyi entropy to obtain an ETFM with high time-frequency resolution.
3. The proposed kurtosis-wavelet dictionary can adaptively select the optimal atomic positions as the kurtosis changes, which improves the noise suppression effect and feature extraction ability of sparse representation.
The remaining parts of this paper are organized as follows: Section 2 briefly introduces the principles of our proposed innovative approach, and Section 3 shows the complete framework of the signal de-noising method proposed by this paper. Section 4 presents and discusses the experimental results. In Section 5, ablation and comparison experiments are designed on the basis of the innovative work in this paper. Section 6 summarizes the work of this paper.
Parameter Optimization Phase Space Reconstruction
Phase space reconstruction (PSR) is an important tool for mining the high-dimensional features of signals, and it is indispensable for revealing the dynamics and the state-space attributes of observation data. At present, the construction of the phase space is essentially based on the time delay method of Takens' theorem, which holds that the evolution of any component in a system is determined by the other components interacting with it. The theorem shows that the attractor of a chaotic system can be reconstructed from a measured single-variable time series; the information of these relevant components is implicit in the development process of any component. To reconstruct the phase space of the system, only one component x = [x_1, x_2, ..., x_n] needs to be investigated, and the m-dimensional phase space vectors can be found through observations at different time delay points; the specific formula is as follows:

X_i^m = [x_i, x_{i+τ}, ..., x_{i+(m−1)τ}], i = 1, 2, ..., N, (1)

where i indexes the signal point, τ represents the time delay, and m represents the embedding dimension. Here N = n − (m − 1)τ; as long as N ≥ 2d + 1 (d is the dimension of the chaotic attractor), an equivalent phase space can be reconstructed. In this phase space, the raw dynamic system can be restored, and the properties of its attractor can be studied to judge, analyze, and predict the chaotic time series. According to the time series {X_i^m | i = 1, 2, ..., N}, these N vectors are aligned to obtain the timing matrix P ∈ R^{m×N}, and the corresponding relationship between any element in the phase space and the initial one-dimensional time series signal is expressed by Equation (2):

P(j, i) = x_{i+(j−1)τ}, j = 1, 2, ..., m; i = 1, 2, ..., N. (2)
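A minimal NumPy sketch of the delay embedding in Equations (1) and (2) is given below; it simply stacks delayed copies of the series into the m × N timing matrix P.

```python
import numpy as np

def phase_space_reconstruction(x, m, tau):
    """Delay embedding of Eq. (1): returns the m x N timing matrix P,
    where N = n - (m - 1) * tau."""
    x = np.asarray(x)
    N = len(x) - (m - 1) * tau
    if N < 1:
        raise ValueError("series too short for the chosen (m, tau)")
    return np.stack([x[j * tau : j * tau + N] for j in range(m)])

# Example (parameter values taken from the experiments later in the paper):
# P = phase_space_reconstruction(signal, m=14, tau=4)
```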
In order to construct the phase space from the time series, the parameter m and the appropriate sampling interval τ are very important. In theory, the selection of τ can be almost arbitrary. However, in the actual system, τ should also be determined by repeated trial and error. If τ is too small, the orbit of phase space tends to a straight line; if τ is too large, the data points will be concentrated in a small area of the phase space.
Cao's method needs only the time delay τ and a small amount of data to calculate the embedding dimension m. The C-C method can calculate τ and τ_w simultaneously through the correlation integral, but it is selective about the length of the time series and is not suitable for short signals; moreover, its estimates of τ and τ_w are unstable. Therefore, we propose to jointly determine τ and m by the C-C method and Cao's method. The specific steps are as follows:
1. Determine the optimal time delay τ by the C-C method [13].

Given a time series of length N, x = {x_n | n = 1, 2, ..., N}, with delay t and embedding dimension k, reconstruct the phase space X_i(n) = {x_i(n), x_i(n + t), ..., x_i[n + (k − 1)t]}, where i = 1, 2, ..., M and X_i(n) is a point in the phase space. The association of the embedded points is measured by the correlation integral

C(k, N, r, t) = (2 / (M(M − 1))) Σ_{1≤i<j≤M} θ(r − d_ij),

where d_ij = ||X_i − X_j||, r is the size of the neighborhood radius, and θ(·) is the Heaviside function: θ(x) = 0 if x < 0 and θ(x) = 1 if x ≥ 0. The time series x = {x_n | n = 1, 2, ..., N} is decomposed into t subsequences; if each subsequence has the same length and the subsequences do not overlap, then t is the reconstruction delay and N is an integer multiple of t. The statistic defined in the above analysis is calculated by the block-average strategy:

S_2(k, r, t) = (1/t) Σ_{s=1}^{t} [C_s(k, r, t) − C_s^k(1, r, t)].

Let N → ∞. Due to the limited length of the actual sequence, the obtained S_2(k, r, t) is generally not zero, and the curve S_2(k, r, t) ∼ t reflects the autocorrelation of the time series. According to the autocorrelation principle for the value of τ, the first zero crossing of S_2(k, r, t) ∼ t is selected as the optimal delay τ. The maximum deviation of S_2(k, r, t) ∼ t over all neighborhood radii r is defined as

∆S_2(k, t) = max_r S_2(k, r, t) − min_r S_2(k, r, t).

From the above analysis, it follows that either the first zero crossing of S_2(k, r, t) ∼ t or the first local minimum of ∆S_2(k, t) ∼ t can be used as the optimal delay τ.
2. Determine the optimal embedding dimension m by Cao's method [11].

Construct an m-dimensional phase space and define

a(i, m) = ||X_i(m + 1) − X_{n(i,m)}(m + 1)|| / ||X_i(m) − X_{n(i,m)}(m)||, i = 1, 2, ..., N − mτ,

where ||·|| represents the norm of the vector (the maximum norm is commonly used in Cao's method); X_i(m) and X_{n(i,m)}(m) represent the i-th vector and its nearest-neighbor vector in the m-dimensional phase space, respectively; X_i(m + 1) and X_{n(i,m)}(m + 1) represent the i-th vector of the (m + 1)-dimensional phase space and its nearest point, respectively; and n(i, m) is greater than 1 and less than or equal to N − mτ. If X_{n(i,m)}(m) is equal to X_i(m) during the calculation, the next nearest vector is used according to the definition of the norm. The mean of a(i, m) over all i is

E(m) = (1 / (N − mτ)) Σ_{i=1}^{N−mτ} a(i, m),

and its relative change is E1(m) = E(m + 1)/E(m); when E1(m) stops changing noticeably beyond some m_0, m_0 + 1 is taken as the embedding dimension. Due to the different natures of time series, it is sometimes difficult to judge whether E1(m) tends to be stable, so a second judgment criterion is added:

E*(m) = (1 / (N − mτ)) Σ_{i=1}^{N−mτ} |x_{i+mτ} − x_{n(i,m)+mτ}|, E2(m) = E*(m + 1)/E*(m).

To express the proposed method for selecting the phase space reconstruction parameters more intuitively and simply, the process is shown in Figure 1.
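As a reading aid, the E1(m) criterion above can be sketched as follows; the maximum norm is assumed, and degenerate zero distances are guarded against.

```python
import numpy as np
from scipy.spatial import cKDTree

def cao_e1(x, tau, m_max=20):
    """E1(m) = E(m+1)/E(m) from Cao's method (maximum norm assumed)."""
    x = np.asarray(x)
    e = []
    for m in range(1, m_max + 2):
        N = len(x) - m * tau  # points that also exist in dimension m + 1
        X_m = np.stack([x[j * tau : j * tau + N] for j in range(m)], axis=1)
        X_m1 = np.stack([x[j * tau : j * tau + N] for j in range(m + 1)], axis=1)
        _, idx = cKDTree(X_m).query(X_m, k=2, p=np.inf)
        nn = idx[:, 1]  # nearest neighbor excluding the point itself
        num = np.max(np.abs(X_m1 - X_m1[nn]), axis=1)
        den = np.maximum(np.max(np.abs(X_m - X_m[nn]), axis=1), 1e-12)
        e.append(np.mean(num / den))
    e = np.array(e)
    return e[1:] / e[:-1]  # E1(m) for m = 1, ..., m_max
```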
Enhanced Time-Frequency Manifold
Time-frequency manifold (TFM) is an inherent nonlinear manifold structure that is described on the time-frequency distribution. Through the time-frequency analysis of the reconstruction space, time-frequency manifold learning is used to mine the features of the signal embedded in the low-dimensional space from the high-dimensional space. Since the non-stationary and nonlinear characteristics of the signal itself are analyzed in the learning process, time-frequency manifold learning can effectively mine and enhance the time-frequency modes of the signal and describe the time-frequency distribution.
According to the PSR introduced in Section 2.1, the time series matrix P ∈ R m×N is obtained. Then, we need to mine the fault feature information through the potential manifold in the high dimension. This paper uses the short-time Fourier transform (STFT) algorithm to perform time-frequency analysis on m time series P j in the matrix P to obtain complex matrix S j . To maintain the phase characteristics of the raw signal and ensure the consistency of the reconstructed signal and the raw signal, S j is divided into an amplitude matrix A j and a phase matrix θ j . The magnitude matrix A j obtained from each time series P j is arranged to form a time-frequency manifold in a high-dimensional space.
The raw signal is not stationary, so the time-frequency manifold is inevitably mixed with quite a lot of noise. Figure 2 shows the time-frequency manifold of the inner-race defective signal. It can be clearly seen that at approximately 3000 Hz the defective signal forms a resonance band. However, from the positions marked with dotted lines in the figure, it can be seen that a large amount of noise is mixed between the 3000 Hz resonance band and the other frequency bands. To extract the defective features submerged in the noise, this subsection introduces an enhanced manifold learning algorithm to mine the low-dimensional nonlinear time-frequency manifold structure embedded in the high-dimensional space.
According to the PSR introduced in Section 2.1, the time series matrix ∈ × obtained. Then, we need to mine the fault feature information through the potential ma ifold in the high dimension. This paper uses the short-time Fourier transform (STFT) a gorithm to perform time-frequency analysis on m time series in the matrix to obta complex matrix . To maintain the phase characteristics of the raw signal and ensure t consistency of the reconstructed signal and the raw signal, is divided into an amp tude matrix and a phase matrix . The magnitude matrix obtained from ea time series is arranged to form a time-frequency manifold in a high-dimensional spa The raw signal is not stable, so the time-frequency manifold must be mixed with qu a lot of noise. Figure 2 shows the results of the time-frequency manifold of the inner-ra defective signal. It can be clearly seen that at approximately 3000 Hz, the defective sign forms a resonance band. However, from the positions marked with dotted lines in t figure, it can be seen that a large amount of noise is mixed between the 3000 Hz resonan band and other frequency bands. To extract the defective features submerged in the nois this subsection introduces an enhanced manifold learning algorithm to mine the low-d mensional nonlinear time-frequency manifold structure embedded in the high-dime sional space. The LLTSA algorithm is a typical nonlinear manifold learning algorithm. This alg rithm approximates the local part of the raw signal through the limit of the tangent spa The LLTSA algorithm is a typical nonlinear manifold learning algorithm. This algorithm approximates the local part of the raw signal through the limit of the tangent space and uses the local tangent space arrangement to map the signal in the low-dimensional space. LLTSA is a partial maximal linear expansion of the approximate linear tangent space based on the LTSA algorithm. It includes the characteristics of principal component analysis (PCA) and LTSA. While extracting the nonlinear features of high-dimensional data, local neighborhood information is preserved. Compared with algorithms such as LTSA, it avoids the loss of many sensitive features and is suitable for dimension reduction of nonlinear features.
The two input parameters of the LLTSA algorithm are the dimensionality-reduction dimension d and the neighborhood parameter k. The specific values of these parameters are very important to the final noise suppression effect. If d is too large, the low-dimensional manifold will contain too much redundant information, and the noise suppression effect will be poor; if d is too small, the low-dimensional manifold will lose some information and miss the defective features in the manifold structure. If k is too large, it will affect the division of the linear block range of the algorithm; if k is too small, it will reduce the correlation of the neighborhood structure. If constant parameters are selected blindly, LLTSA cannot adapt to all application scenarios, which will adversely affect signal de-noising. In this paper, a method based on Renyi entropy is proposed to optimize the input parameters of LLTSA. Renyi entropy is a kind of information entropy that serves as an objective index to evaluate energy concentration: the smaller the value, the higher the energy concentration. For a normalized time-frequency distribution P(m, n), the α-order Renyi entropy is

R_α = (1 / (1 − α)) log_2 Σ_m Σ_n P^α(m, n).

We take the Renyi entropy value of the TFM obtained after LLTSA dimensionality reduction as the grid-search target, and take the (d, k) point corresponding to the smallest Renyi entropy value as the optimal input of LLTSA. For a raw signal x = [x_1, x_2, ..., x_n], the specific steps of LLTSA parameter optimization based on Renyi entropy are as follows:
1. The time delay τ and the embedding dimension m are obtained by the C-C and Cao's methods, and the phase space matrix R^{m×N} is obtained by the PSR.
2. Time-frequency analysis is performed on each time series P_j of the phase space matrix R^{m×N} by the STFT, and the amplitude matrix A_j is obtained.
3. Set the grid-search initialization parameters of LLTSA; the search step size is 1.
4. Using the grid search method, the Renyi entropy value of the TFM in each search is calculated, and the (d, k) corresponding to the smallest Renyi entropy value is taken as the optimal input parameters of LLTSA (a minimal sketch of this loop is given after the list).
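The sketch below illustrates the search loop; `lltsa` is a placeholder for any LLTSA implementation, and the entropy order α = 3 is an assumed (though common) choice.

```python
import numpy as np

def renyi_entropy(tfd, alpha=3):
    """Alpha-order Renyi entropy of a non-negative time-frequency matrix."""
    p = tfd / tfd.sum()
    return np.log2((p ** alpha).sum()) / (1 - alpha)

def grid_search_lltsa(A, lltsa, d_range, k_range):
    """Return the (d, k) pair whose TFM has the smallest Renyi entropy."""
    best_dk, best_r = None, np.inf
    for d in d_range:
        for k in k_range:
            r = renyi_entropy(np.abs(lltsa(A, d, k)))
            if r < best_r:
                best_dk, best_r = (d, k), r
    return best_dk, best_r
```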
After optimizing LLTSA by grid search, we obtain the low-dimensional time-frequency structure, namely the enhanced time-frequency manifold (ETFM). Compared with the TFM obtained without optimization, the ETFM has the highest time-frequency aggregation and the best noise suppression effect. In the next subsection, we use a sparse representation algorithm to reconstruct the ETFM.
Sparse Representation Based on Kurtosis-Wavelet Dictionary
In Section 2.2, we obtained an ETFM with a good noise suppression effect through time-frequency manifold learning, but because of the nonlinear calculation involved in the dimension reduction, the ETFM loses the amplitude information of the raw signal. In this subsection, we use a sparse representation algorithm to overcome the distortion of the ETFM. As the key element of sparse representation, the dictionary has an important influence on the final result. Therefore, a kurtosis-wavelet dictionary is proposed in this paper. The kurtosis-wavelet dictionary not only has a high matching degree with the transient pulses but also avoids the residual noise in the ETFM to the greatest extent. Through the sparse representation of the ETFM, the final result not only reproduces the amplitude information of the raw signal but also inherits the high SNR of the ETFM.
Sparse Representation Principle
Sparse analysis of a signal refers to the overcomplete representation of the signal on a redundant dictionary. For a signal x ∈ H^L, there is a redundant dictionary D = {d_k}_{k=1}^{K} (K ≫ L) such that

x = Σ_{m=1}^{M} g_{k_m} d_{k_m}, (14)

where k_m is the label of the selected atom in the dictionary, and g_{k_m} is the coefficient of the corresponding atom d_{k_m}. The sparsity of the signal is M (M ≪ K), the number of non-zero coefficients.
Because the result of sparse representation is not unique, it is necessary to find the optimal solution of the sparse representation problem in order to achieve the best representation of the signal. According to Equation (14), the optimal solution must be the sparsest solution, that is, the solution with the fewest non-zero coefficients. The greedy algorithm is a commonly used sparse representation algorithm. Its principle is to use the least-squares method to carry out local optimization on the current data and obtain the final result through repeated iteration. The orthogonal matching pursuit (OMP) algorithm, a typical greedy algorithm, improves on the traditional matching pursuit (MP) algorithm. By using the Gram-Schmidt process to orthogonalize the projection direction, the approximate representation of the signal is continuously improved. The main idea is to project the signal onto the redundant dictionary in each iteration so that the approximation error is minimized. Because the selected dictionary atoms are orthogonalized at each step, the OMP algorithm has a faster convergence rate than the MP algorithm. This paper uses the OMP algorithm to solve the sparse representation.
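A compact sketch of OMP is shown below, assuming the dictionary columns (atoms) are unit-norm; each iteration picks the atom most correlated with the residual and re-fits all selected coefficients by least squares.

```python
import numpy as np

def omp(D, x, sparsity):
    """Orthogonal matching pursuit over a dictionary D (atoms as columns)."""
    residual, support = x.copy(), []
    for _ in range(sparsity):
        # Atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Least-squares re-fit over all selected atoms.
        g, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ g
    coeffs = np.zeros(D.shape[1])
    coeffs[support] = g
    return coeffs
```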
ETFM Reconstruction Using Kurtosis-Wavelet Dictionary
In the process of solving the sparse representation of the signal, the atoms in the dictionary will inevitably match some of the noise in the signal. This brings useless information into the final sparse representation result, which affects the extraction of the signal's time-frequency features. This section proposes a method of constructing a kurtosis-wavelet dictionary to address this problem.
First, according to the solution principle of sparse representation, the atoms closest to the raw signal must be screened in each iteration, so the similarity between the transient pulse atoms in the dictionary and the fault impacts of the signal greatly affects the quality of the sparse representation. Here, the Morlet wavelet, which matches periodic transient shocks, is chosen to construct the one-dimensional time-domain impulse atoms. Its time-domain formula takes the standard Morlet form

d(t) = exp[−ξ(t − τ)²] cos[2πf(t − τ)], t ∈ [τ, τ + W],

where the single-impact duration W and the parameters τ, f, and ξ determine the wavelet waveform and characteristics. To meet the time-frequency-domain analysis requirements of this paper, the STFT is used to convert the one-dimensional atoms into two-dimensional atoms, and the time-frequency dictionary D_tf is calculated as

D_tf(k, v) = |ST[d](k, v)|,

where ST represents the STFT of the atoms in the time domain; k and v represent the time and frequency in the time-frequency domain, respectively; and k_τ is the time index of the wavelet pulse.
In addition, this paper proposes a kurtosis-based method to determine the specific positions of the atoms in the time domain, so that the atoms avoid the noise in the signal as much as possible and match the transient impulses. Kurtosis is a dimensionless index that is very sensitive to impulses in a time series and is especially suitable for judging the defective signal of a rolling bearing. Its calculation formula is

K = E[(x − µ)⁴] / σ⁴,

where µ represents the mean value of the time series signal, σ represents its standard deviation, and E(·) is the expectation operator. The larger the kurtosis value, the more impulsive components in the signal. To distinguish the impacts submerged in the noise, this study draws on the idea of window shifting in the STFT.
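The windowed selection rule just described can be sketched as follows; the window length and the mean-kurtosis threshold follow the worked example in the next paragraph.

```python
import numpy as np
from scipy.stats import kurtosis

def atom_positions(x, win=50):
    """Start indices of windows whose kurtosis exceeds the mean window
    kurtosis; wavelet atoms are placed only at these positions."""
    n_win = len(x) // win
    # fisher=False gives the raw ratio E[(x - mu)^4] / sigma^4
    k = np.array([kurtosis(x[i * win:(i + 1) * win], fisher=False)
                  for i in range(n_win)])
    return [i * win for i in range(n_win) if k[i] > k.mean()]
```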
Taking a raw signal of 1000 points as an example, we set the window length to 50 and then intercepted the signal with a non-overlapping moving window, finally obtaining a total of 20 segments. The kurtosis value of each intercepted segment was calculated and sorted. According to the definition and characteristics of kurtosis, there is a high probability of defective impacts at the locations of the signal segments with large kurtosis values. Therefore, we chose to build atoms only at the positions corresponding to the signal segments with larger kurtosis values and combined these atoms to complete the construction of the kurtosis-wavelet dictionary. To further illustrate the specific steps of the proposed dictionary construction method and prove its effectiveness, a set of simulation signals is introduced here. The time-domain signal is shown in Figure 3a. To test the universality of this method, four different defective impacts are simulated here.
To restore the defective signal in a real scene, white noise with a signal-to-noise ratio of −6 dB was added to the simulation signal, and the time-domain waveform is shown in Figure 3b. According to the construction method of the kurtosis-wavelet dictionary, we first set the window length to 50, moved the window without overlapping along the time sequence, and calculated the kurtosis value of each intercepted segment. The average kurtosis value of the 20 segments was 2.8476, and the specific positions of the 8 segments with kurtosis greater than the average are shown in the red dotted boxes in Figure 3b. We chose to set atoms at these positions. Comparing Figure 3a,b shows that these atoms were placed at the positions of the transient pulses, which avoids the noise.
To make full use of the noise suppression effect of the ETFM, the transient pulses on the ETFM are learned to realize the mining of the defective impact characteristics. In this paper, the time-frequency image of the ETFM is used as the training sample for the kurtosis-wavelet dictionary D_tf, and the time-frequency atoms that best match the transient characteristics of the ETFM are selected by the OMP algorithm. These time-frequency atoms are combined into a new dictionary D_tf1. Then, D_tf1 is used to sparsely represent the m magnitude matrices A_j of the raw signal to obtain the reconstructed matrices Â_j. The specific formulas are as follows:

ĝ_j = argmin_g ||A_j − D_tf1 g||₂² s.t. ||g||₀ ≤ M, (19)

Â_j = D_tf1 ĝ_j. (20)

Finally, the m reconstructed Â_j and the m phase matrices θ_j are combined, and the reconstructed time-domain signal is obtained through the inverse STFT and the PSR synthesis technology.
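The final synthesis step — recombining the de-noised magnitudes with the original phases and inverting the STFT — can be sketched as follows; the window length is an assumed STFT parameter that must match the analysis stage.

```python
import numpy as np
from scipy.signal import istft

def reconstruct_series(A_hat, theta, fs, nperseg=256):
    """Combine de-noised magnitude A_hat with original phase theta and
    invert the STFT (nperseg must match the forward transform)."""
    S = A_hat * np.exp(1j * theta)
    _, x_rec = istft(S, fs=fs, nperseg=nperseg)
    return x_rec
```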
Complete Framework of the Proposed Method
The complete framework is shown in Figure 4. The specific steps are as follows:
1. PSR: the C-C method and Cao's method jointly determine the best time delay τ and the best embedding dimension m, and the PSR technology maps the raw signal to the high-dimensional space. The high-dimensional spatial time-frequency features are mined by the STFT and divided into an amplitude matrix and a phase matrix.
2. Enhanced time-frequency manifold learning: LLTSA is optimized by the gridded search method, and the important features in the high-dimensional space are mined to obtain the ETFM, which completes the preliminary noise reduction of the signal.
3. Kurtosis-wavelet dictionary generation: divide the raw signal into several segments and calculate the kurtosis of each segment. Set time-frequency wavelet atoms in the signal segments with large kurtosis to complete the construction of the kurtosis-wavelet dictionary.
4. Sparse representation: use the OMP algorithm to solve the sparse representation problem of the ETFM and update the kurtosis-wavelet dictionary at the same time. The reconstruction result of the amplitude matrix is obtained by Equations (19) and (20). Then, the reconstructed amplitude matrix is combined with the phase matrix and restored to a one-dimensional signal by the inverse STFT and phase space reconstruction technology.
Experimental Results
In this section, the signal de-noising method proposed in this paper is experimentally verified by the measured defective signals of the inner-race, outer-race, and rolling-element of the rolling bearing. The measured vibration signal of the rolling bearing used in the analysis came from the test data of the bearing test bench built by the Electrical Engineering Laboratory of Case Western Reserve University in the United States. Figure 5 shows the bearing test bench. The test bearing supports the main shaft of the motor. The model of the test bearing is 6205-2RS JEM SKF. Under the bearing corresponding to the bearing seat at the driving end of the motor spindle, also known as the bearing load area, an acceleration sensor was installed to test the vibration change signal of the bearing. The sampling frequency of the test bench was 12K. The defect-related parameters of three signals to be analyzed are listed in Table 1.
First, the C-C method and Cao's method were used to jointly determine the embedding dimension and time delay in the PSR. The change curve of ∆S_2(k, t) with τ in the C-C method is shown in Figure 7a. It can be clearly seen from the red circle in the figure that when τ was 4, ∆S_2(k, t) took its first local minimum value. Therefore, the optimal solution of τ was determined to be 4.
Next, taking = 4 as the input of Cao's method, the curve of 1 was shown, as in Figure 7b. It can be seen when was greater than 14, the value of 1 remained basically unchanged, so the final value of the phase space embedding dimension m of the measured signal was 14. After solving the time delay and embedded dimension of the defective signal by the C-C method and Cao's method, fuzzy entropy and the mean-square error (MSE) were used in this study as the quantitative evaluation index of the PSR to characterize the complexity and distortion of the reconstructed space. A larger fuzzy entropy reflected that the high-dimensional space obtained by PSR was more complex and the intrinsic structure Next, taking τ = 4 as the input of Cao's method, the curve of E1 was shown, as in Figure 7b. It can be seen when m was greater than 14, the value of E1 remained basically unchanged, so the final value of the phase space embedding dimension m of the measured signal was 14.
After solving the time delay and embedded dimension of the defective signal by the C-C method and Cao's method, fuzzy entropy and the mean-square error (MSE) were used in this study as the quantitative evaluation index of the PSR to characterize the complexity and distortion of the reconstructed space. A larger fuzzy entropy reflected that the highdimensional space obtained by PSR was more complex and the intrinsic structure was more fuzzy. A smaller fuzzy entropy reflected that the high-dimensional space had less complexity and the signal had higher SNR. MSE represents the difference between the reconstructed space of the PSR and the raw signal. A greater MSE means a more serious the degree of signal distortion. The specific calculation results are shown in Table 2. Compared with the other two algorithms, the combined algorithm of C-C and Cao had the smallest fuzzy entropy and the smallest MSE, which means the high dimension space obtained by PSR had low complexity and distortion.
Experimental Results of ETFM
After PSR, we chose the time-frequency manifold learning algorithm LLTSA to further mine and de-noise the high-dimensional space of the signal. The Renyi entropy value of TFM obtained by LLTSA was calculated by using the gridded parameter search method. The specific results are shown in Figure 8. It can be clearly seen that when d = 5 and k = 6, the ETFM with the smallest Renyi entropy was obtained.
The comparison between the time-frequency diagram of the raw signal and the ETFM is shown in Figure 9a,b. As marked in the figure, the low-dimensional manifold time-frequency diagram completely retained the transient pulses of the raw signal in the 3000 Hz resonance band, and there was also a certain noise suppression effect in the resonance band and the other frequency bands. As seen from Table 3, compared with the raw signal, the Renyi entropy value of the ETFM was much smaller, which proves that the noise suppression effect of the time-frequency manifold algorithm is powerful. Compared with the Renyi entropy of the unoptimized LLTSA-TFM and LTSA-TFM, the ETFM also had an advantage, which proves the effectiveness of the grid-search-based parameter optimization method. Figure 9c,d shows the enlarged comparison of the ETFM and the LTSA-TFM near the resonance band. Both algorithms can effectively retain the pulses in the 3000 Hz resonance band. However, from the places marked in the figure, it can be clearly seen that the pulses in the ETFM were more independent and obvious than those in the LTSA-TFM.
Experimental Results of Signal De-Noising
The complete framework of the method proposed in this paper was used to reconstruct the inner-race defective signal, and its time-domain waveform is shown in Figure 10a. By comparing the raw signal with the reconstructed signal (Figure 10c) and the residual signal (Figure 10d), we can see that the reconstructed signal accurately reproduced the transient pulses in the raw signal, and the noise between transient pulses was largely suppressed. From Figure 10b,e, the noise outside the resonance band at approximately 3000 Hz was basically filtered out, and the transients of interest were extracted with a certain noise suppression effect. According to Table 1, the characteristic frequency f0 of the inner-race defective signal was 162 Hz. In Figure 10f, f0 and its harmonics (2f0, 3f0) are clearly visible, and they were not disturbed by harmonics and noise, which proves the ability of the proposed method to reveal fault characteristics.
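For reference, the envelope analysis behind a plot like Figure 10f can be sketched as follows. This is a generic Hilbert-envelope spectrum rather than the authors' exact implementation; fs = 12 kHz matches the sampling rate quoted later for the test data.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the Hilbert envelope of x. Peaks at the
    fault characteristic frequency f0 and its harmonics (2*f0, 3*f0, ...)
    point to a localized bearing defect."""
    env = np.abs(hilbert(x))
    env = env - env.mean()  # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

# With fs = 12 kHz, a peak near 162 Hz in the returned spectrum would
# correspond to the inner-race defect frequency quoted above.
```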
Experimental Results of Outer-Race Defective Signal
In this subsection, we analyze the outer-race defective signal. Due to the existence of strong background noise, no obvious defective transient characteristics could be seen in the raw signal time-domain waveform and time-frequency diagram. First, the parameters of the PSR were determined jointly by the C-C method and Cao's method. As shown in Figure 11c,d, it can be clearly seen that when τ was set to 4, ∆S2(k, t) took the first minimum value. Then, taking τ as 4 as the input of Cao's method, the relationship between the E1 value and m was obtained. When m was greater than 15, the E1 value remained basically unchanged. Therefore, the phase space embedding dimension m of the measured signal finally took the value of 15. In the time-frequency manifold reconstruction, the input parameters of the LLTSA algorithm were determined according to the grid search method. The specific results are shown in Figure 11e. When d = 4 and k = 4, the Renyi entropy was the smallest. The time-frequency diagram of the ETFM is shown in Figure 11f. Compared with Figure 11b, it can be seen that the manifold learning algorithm had a certain degree of noise suppression effect, and the transient pulses in the time-frequency diagram were clearer, highlighting the defective characteristics. Figure 12a shows the waveform of the reconstructed signal, and Figure 12b shows its time-frequency diagram. Compared with Figure 11b, the noise outside the resonance band was essentially filtered out, and the impulses between 3000 Hz and 4000 Hz were reconstructed. From the comparison of the reconstructed signal, the raw signal, and their residuals, it can be seen that most of the noise between the impulses did not appear in the reconstructed signal. According to Table 1, the characteristic frequency f0 of the outer-race defective signal was 105 Hz. In Figure 12f, f0 and its harmonic 2f0 are clearly visible; 3f0 was not clear and was affected by harmonics and noise.
Experimental Results of Rolling-Element Defective Signal
In this subsection, the rolling-element defective signal is analyzed. The reconstruction analysis results are shown in Figure 13. The raw signal waveform is shown in Figure 13a, and the time-frequency analysis results are shown in Figure 13b. Noise was distributed over the entire time-frequency domain. First, the parameters of the PSR were jointly determined by the C-C method and Cao's method. As shown in Figure 13c,d, it can be clearly seen that when τ was set to 5, ∆S2(k, t) took the first local minimum, so it can be determined that the optimal solution of τ should be 5 in the parameter selection of the PSR. Then, taking τ as 5 as the input of Cao's method, the relationship between the E1 value and m was obtained. When m was greater than 13, the E1 value essentially remained unchanged. Therefore, the phase space embedding dimension m of the measured signal finally took the value of 13. The final result of the same grid search method is shown in Figure 13e. When d = 3 and k = 3, the Renyi entropy was the smallest. The time-frequency diagram of the ETFM is shown in Figure 13f. The sparsity of the OMP algorithm was set to 8, and the reconstruction results are shown in Figure 14a,b. It can be clearly seen that the reconstructed signal accurately reproduced all the pulses in the raw signal and had a certain noise suppression effect outside the resonance band. The noise between transient pulses in the resonance band was also filtered out to some extent. According to Table 1, the characteristic frequency f0 of the rolling-element defective signal was 60 Hz. In Figure 14f, f0 and its harmonics (2f0, 3f0) are clearly visible and were not disturbed by harmonics and noise, which proves the ability of the proposed method to reveal fault characteristics.
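The sparse reconstruction step with sparsity 8 can be illustrated with scikit-learn's OMP solver. The dictionary and signal below are random placeholders, since the actual kurtosis-wavelet dictionary is built as described earlier in the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n, n_atoms = 4096, 256
D = np.random.randn(n, n_atoms)  # placeholder for the kurtosis-wavelet dictionary
x = np.random.randn(n)           # placeholder for the raw defective signal

# Sparsity of 8 matches the setting used for the rolling-element signal above.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
omp.fit(D, x)
x_rec = D @ omp.coef_            # sparse reconstruction from 8 atoms
```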
Ablation Experiment
In order to prove the effectiveness of our innovative work on the ETFM and the kurtosis-wavelet dictionary, an ablation experiment is necessary. First, we did not use the ETFM to train the dictionary but used the raw signal time-frequency map instead, while keeping the rest of the method framework the same; this is referred to as method 1 below. From the time-domain waveform (Figure 15a) and time-frequency diagram (Figure 15b) of the final reconstructed signal, it can be seen that method 1 still had a certain noise suppression effect. However, compared with Figure 10a,b, Figure 15a,b shows more noise between the extracted transients, and the defective feature extraction effect was only moderate, which also verifies the de-noising effect of the TFM to a certain extent.
Then, we no longer used the kurtosis-wavelet dictionary, but instead set time-frequency atoms over the entire signal time domain to form a dictionary; this is referred to as method 2 below. The waveform is shown in Figure 15c. It can be clearly seen that method 2 also reconstructed the transient pulses in the raw signal, and its noise suppression effect was stronger than that of method 1, but it was still not as good as the result in Figure 10a,b. A lot of noise appeared, which further proves that the kurtosis-wavelet dictionary can not only reconstruct the transients of interest but also has a strong de-noising effect. Table 4 lists the average residual between the reconstructed signal and the raw signal, the energy of the reconstructed signal, and the Renyi entropy. The reconstructed signal obtained by the method proposed in this paper had the smallest Renyi entropy value, the highest instantaneous frequency aggregation degree, and the best noise suppression effect. However, its average residual was the largest and its signal energy was the smallest, because method 1 and method 2 introduced noise in the reconstruction process, making their MSE values smaller. On the basis of the above analysis, the innovations in the ETFM and the kurtosis-wavelet dictionary play an important role in the noise suppression effect of the reconstructed signal. However, there are still some limitations. The time-frequency matrix obtained by the STFT was too large, and its size increases with the length of the signal, which greatly increases the computational cost of ETFM and kurtosis-wavelet dictionary generation. In the process of dictionary construction, after calculating the kurtosis of each segment of the signal, the number of signal segments in which to place atoms was generally determined by the raw signals, and automatic selection could not be achieved. These issues need to be further addressed in our future work.
Comparison with Other Methods
To compare the signal de-noising effect of the method proposed in this paper, four traditional filtering algorithms were used, and Figure 16 shows the final reconstruction results.
1. Discrete wavelet transform (DWT): the specific reconstruction effect is shown in Figure 16a,b. It can be clearly seen from the time-frequency diagram that the high-frequency noise above 4000 Hz was basically filtered out, but the noise suppression effect in other frequency bands was poor. Compared with the time-frequency diagram of the raw signal, the transient pulses at approximately 0.07 s were not reconstructed, resulting in a certain degree of loss of defective features.
2. Continuous wavelet transform (CWT): the specific reconstruction effect is shown in Figure 16c,d. Compared with the reconstruction result in Figure 16a,b, it had a good noise suppression effect in both the low-frequency and high-frequency bands and completely reconstructed all transient pulses. However, the signal de-noising effect was poor at the resonance frequency of 3000 Hz.
3. Wavelet packet transform (WPT): the specific reconstruction effect is shown in Figure 16e,f. The number of decomposition layers of the WPT was 3. Since the sampling frequency of the signal was 12 kHz, according to Nyquist's law, the frequency width of the nodes in the third-level wavelet tree was 6000/8 = 750 Hz. Here, we chose the fourth and fifth wavelet nodes, whose frequencies ranged from 2250 to 3750 Hz (see the band-selection sketch after this list). From the time-frequency analysis results of the reconstructed signal, it can be clearly seen that, compared with the CWT filtering algorithm, the frequency band at the center frequency of 3000 Hz was narrower. However, the noise in the resonance band was still not filtered out, and it can be seen from Figure 16e that the transient pulses were submerged in a large amount of noise.
4. Empirical mode decomposition (EMD): the filtering effect based on the EMD algorithm is shown in Figure 16g,h, and the IMF whose center frequency was approximately 3000 Hz was selected as the reconstructed signal. It can be seen from its time-frequency diagram that the reconstructed signal contained considerable noise, because the EMD algorithm has band-pass filtering characteristics for white noise.
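The node arithmetic in item 3 can be reproduced with PyWavelets. This is a minimal sketch under assumptions: the 'db4' wavelet and the placeholder signal are our choices, not specified in the paper.

```python
import numpy as np
import pywt

fs = 12_000
x = np.random.randn(8192)  # placeholder for the raw signal

wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=3)
nodes = wp.get_level(3, order="freq")  # 8 bands, each 6000/8 = 750 Hz wide

# Keep the fourth and fifth nodes (zero-based indices 3 and 4), i.e.,
# 2250-3000 Hz and 3000-3750 Hz, and zero out the rest.
for i, node in enumerate(nodes):
    if i not in (3, 4):
        node.data = np.zeros_like(node.data)

x_band = wp.reconstruct(update=False)  # band-limited reconstruction
```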
According to Table 1, the characteristic frequency f0 of the inner-race defective signal was 162 Hz. As shown in Figure 17, there was a large amount of noise in the envelope spectra of the reconstructed signals obtained by the other methods; 2f0 and 3f0 were drowned in the noise and were difficult to distinguish. Comparing these with the analysis results in Figure 10f, we can see the effectiveness of the proposed method in extracting bearing fault features. Table 5 shows the comparison between the above four filtering algorithms and the method proposed in this paper. It can be clearly seen that the Renyi entropy of the time-frequency diagram of the final reconstructed signal obtained by the proposed method was the smallest among all algorithms, and the kurtosis of the reconstructed signal was nearly twice that of the other algorithms. Combining the results in Figure 16 and Table 5, it can be proven that, compared with other methods, the proposed method can not only accurately reconstruct the transient pulses in the raw signal but also makes the fault characteristics more prominent, with an excellent signal de-noising effect in each frequency band.
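For reference, the kurtosis metric used in Table 5 is the standard fourth-moment statistic and can be computed as follows; the signal here is a placeholder.

```python
import numpy as np
from scipy.stats import kurtosis

x_rec = np.random.randn(4096)  # placeholder for a reconstructed signal

# Pearson (non-excess) kurtosis is ~3 for Gaussian noise; impulsive fault
# transients drive it well above 3 (cf. the 12.266 reported in Table 5).
k = kurtosis(x_rec, fisher=False)
```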
Conclusions
In this paper, a new signal de-noising method is proposed that can improve the SNR and identify the weak pulses generated by the early faults of rolling bearings. In the phase space reconstruction, we propose that the C-C method and Cao's method jointly determine the optimal time delay and embedding dimension, which reduces the distortion of the reconstructed space and excavates the high-dimensional characteristics of the signal. The proposed enhanced time-frequency manifold improves the noise suppression effect of manifold learning to a certain extent. Compared with other methods, the ETFM has better time-frequency aggregation and more prominent fault pulses. The dictionary trained by the ETFM matches the transient pulses in the signal better. The proposed kurtosis-wavelet dictionary can reduce the number of atoms matching the noise in the raw signal. The reconstructed signal obtained by sparse representation not only retains the amplitude and phase information of the raw signal but also has a better noise suppression effect, and the impact pulses of the reconstructed signal are more prominent. Compared with other filtering algorithms, the kurtosis of the reconstructed signal obtained by the de-noising method proposed in this paper was found to be 12.266, nearly twice that obtained by the other methods. Its Renyi entropy value of 7.648 was the lowest among all methods, and it can be seen from the time-frequency diagram that the reconstructed signal had a better noise suppression effect in all frequency bands. In addition, the proposed algorithm achieved good results when applied to inner-race, outer-race, and rolling-element defective signals. This signal de-noising method has a certain application value for early fault diagnosis and defect feature extraction of bearings. However, the proposed method still has some problems, such as a long computation time and a time-frequency resolution limited by the performance of the algorithms. In future research, we will optimize the computational speed of the algorithms used and apply higher-precision time-frequency analysis algorithms to identify the noise in the signal. | 2022-08-18T15:18:07.096Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "78d168e50b6224d8e999cd66bf591de1dc84bdcf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/16/6108/pdf?version=1660634739",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d60c7e0d71741df04f31344b4eb40e65d3f149e9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270325881 | pes2o/s2orc | v3-fos-license | Nitrogen and Phosphorus Removal from Wastewater Using Calcareous Waste Shells—A Systematic Literature Review
Nitrogen and phosphorus in freshwaters are a global environmental challenge. Concurrently, the shellfish industry's calcareous waste shells (CWSs) amount to ~10 million tonnes annually. CWSs can effectively adsorb dissolved pollutants, including nutrients, from water, which has motivated a growing number of experimental studies on recycling CWSs in wastewater treatment. This comprehensive literature review summarises and critically assesses the effectiveness of using different CWSs for removing nutrients from water. The effects of CWS type, initial pollutant concentration, adsorbent dosage, particle size, and contact time (CT) are investigated. The results show that phosphorus removal has been examined more than nitrogen. Most studies have been conducted using synthetic wastewater under laboratory conditions only. There is a large variability in experimental conditions, such as CWS adsorbent dosages (0.1–100 g/L) and CT (0.083–360 h). The calcination of CWSs is frequently used to enhance adsorption capacity. The Langmuir isotherm model has been found to fit adsorption data best when raw oyster shells are used, while the Freundlich isotherm is best when the adsorbent is calcinated mussel shells. The pseudo-second-order (PSO) kinetics model tends to describe adsorption data better than the pseudo-first-order (PFO) model for all shell types. There is significant potential for using calcareous waste shells to remove nutrients from wastewater in line with circular economy aspirations.
Introduction
Nutrient pollution is the current primary cause of water quality impairment globally [1]. Persistent and excessive nitrogen and phosphorus inputs to freshwater systems lead to eutrophication. This is typically caused by rapid urbanisation [2,3], discharges from under-performing wastewater treatment systems [4,5], and excess fertiliser runoff from agricultural intensification [6-8]. As a result, excessive algal growth and dissolved oxygen depletion occur, causing an imbalance in aquatic ecosystems and diminishing freshwater resources [9,10]. Concentrations of 0.1 mg/L of phosphorus [5,11] and 2 mg/L of nitrate [12] are sufficient to cause freshwater eutrophication. The USEPA specifically recommends a chronic criterion concentration of 2.4 mg of NH4^+/L at pH 7 and 20 °C for a 30-day average duration, not to be exceeded more than once every three years on average [13]. For comparison, the maximum allowable nitrate and ammonium concentrations in drinking water specified by the World Health Organization (WHO) are 40 mg/L and 0.2 mg/L, respectively [14].
As freshwater nutrient challenges are escalating, wastes from seafood processing industries are concurrently increasing. In 2020, the commercial shellfish industry was estimated at 42.6% of global seafood production by the UN's Food and Agriculture Organization (FAO) [15]. Shellfish and crustacean aquatic products comprise 70% calcareous shells, with the remaining 30% comprising edible meat [15]. In New Zealand alone (pop. ~5 million), a Pacific Island nation, commercial fishing and aquaculture industries generated NZD 2 billion (0.7% of the country's GDP) in 2019 [16]. Of this, approximately 100,000 tonnes/year of green-lip mussels were processed for meat, with the shells becoming waste material requiring disposal [17]. The New Zealand aquaculture sector is projected to grow from NZD 600 million in 2019 to NZD 3 billion by 2035, and other countries are similarly growing their shellfish production, so finding new ways of reusing and valorising CWSs will become critical to supporting more sustainable aquaculture [15,18,19] and aligning with aspirations to adhere to the sustainable development goals.
The use of CWSs to treat various wastewater pollutants has grown over the last few years. Morris et al. [33] and Nguyen et al. [18] treated commercial dyes and heavy metals, finding that Zn^2+ was more easily removed than Pb^2+ and Cd^2+. Nutrient removal using CWSs has also been studied, with phosphorus removal attributed to adsorption and/or precipitation [4,6,34] and the removal of nitrogenous compounds, such as ammonium (NH4^+) and nitrate (NO3^-), attributed to chemisorption [18,35,36] or physisorption [10]. In most of the studies that trialled CWSs as an adsorbent [20,37,38], calcination of the CWSs at 500–1000 °C was applied to increase their reactivity [19,33]. Depending on the CWS calcination temperature and the reaction product formed, phosphate is removed via adsorption on calcite [6] or by precipitation as calcium phosphate when phosphate ions react with calcium ions from the dissolution of calcium oxide in water [39]. However, Nam et al. [40] found that hydrated calcinated oyster shells did not significantly remove nitrogen, due to the high solubility of nitrogen compounds, compared to the near-complete phosphorus removal via alkaline precipitation at pH 12.
Bhatnagar and Sillanpää [41] reviewed nitrate removal from wastewater using different adsorbents but did not include CWS adsorbents, focussing more on activated carbons, clay-based adsorbents, double-layered hydroxides, zeolites, agricultural waste materials, and industrial wastes. Recent review papers have discussed valorisation opportunities for CWSs from snail shells [42] and molluscan shells in wastewater treatment [43], and other shell wastes, i.e., bivalve shells [44], eggshells [45], and general shellfish waste [15], in multiple applications in agriculture, construction, environmental protection, biomaterials, and food additives. Hart [21] also provided a mini-review on the properties and various applications of waste shells in wastewater treatment, cement production, building construction aggregates, biomedical manufacturing, and industrial catalysts. Eggshells have been used in a broad range of applications, including in polymer, metal, and ceramic composites, as evaluated by Vandeginste [45]. Wan Mahari et al. [46] reviewed studies that used CWS biochar as an adsorbent for removing emerging pollutants from aquaculture wastewater. The conversion of large amounts of widely available CWSs from the food industry, such as eggshells, into useful resources contributes to reducing, reusing, and recycling this waste and, thus, to the circular economy. Despite these reviews on CWS valorisation opportunities, there is no comprehensive review on the use of CWSs to remove nutrients from wastewaters. This comprehensive, systematic review was therefore undertaken to synthesise and critically assess the current knowledge and research gaps in using CWSs as an adsorbent to remove nitrogen and phosphorus from wastewater. The main objectives are to (i) synthesise the literature on nutrient removal from wastewaters by CWSs; (ii) summarise the nutrient removal mechanisms, isotherm, and kinetic models in the studies; (iii) examine the effects of operating parameters (wastewater type, adsorbent dosage, contact time, pH, pollutant concentration, and particle size) on nutrient removal; and (iv) highlight knowledge gaps and future directions for the successful use of CWSs as an adsorbent for water nutrient removal.
Materials and Methods
A comprehensive search of the scientific literature published from 2010 to 2022 was conducted using the Scopus and Compendex databases. Scopus is an extensive, multidisciplinary bibliographic database, and its use has been widely reported for systematic literature reviews [47-51]. Burnham [52] also stated that Scopus is preferred to the Web of Science (WoS) due to its higher indexed citations and daily database updating. Scopus provides broader and more inclusive content coverage in the engineering, chemistry, environmental, and general sciences, with easier open access for authors [52,53].
The screening of the literature included searching for key terms in the title, abstract, and keywords. Terms included calcareous shell, mussel shell, oyster shell, clam shell, eggshell, or bivalve shell (Step 1, Figure 1). Literature types were confined to journal articles and conference papers (Step 2, Figure 1) for the last decade (2010–2022) and published in English (Step 3, Figure 1). Keywords were then filtered to nitrogenous and phosphorus compounds (Step 4, Figure 1), including wastewater (Step 5, Figure 1). Next, the search criteria focussed on the treatment or removal of those pollutants using CWSs (Step 6, Figure 1). Finally, manual screening excluded articles about other pollutants such as heavy metals, bacteria, or soil (Steps 7 and 8, Figure 1).
After this screening process (Figure 1), three specific criteria were applied in Step 9 (Figure 1) to refine the search further: (i) Does the study investigate nutrient removal from wastewater using CWSs? (ii) Does the study report pollutant removal capacity? (iii) Does the study discuss the possible removal mechanisms, isotherm, and kinetics? Once all three criteria were satisfied, the total number of publications analysed in detail was 64 (Step 10, Figure 1). Relevant information from these publications was then extracted into a custom-built Excel spreadsheet and grouped into different tables and figures for data analysis and interpretation. Further data analysis involved the use of OriginPro 2024b software for data presentation in box plots and Pearson's correlation.
Regional and Temporal Trends
Most publications (53%, n = 34) originate from China and Malaysia (Figure 2a). This may be a reflection of the sheer volume of scientific publications emerging from China (in particular), with a large population and active research community [59]. Additionally, it may be because these (and other cited countries such as Brazil, Japan, Malaysia, South Korea, Taiwan, and Vietnam) have long coastlines with booming marine farms, and hence have an interest in valorising abundant seashell waste [15,33,37,42,60]. There has been a sharp increase in the number of studies using CWSs to treat nutrients since 2015 (Figure 2b), and this trend is expected to continue because of the increased global awareness of freshwater pollution [19] and CWS waste valorisation opportunities [7,36]. Furthermore, there has been a recent trend to investigate CWS biocomposites for the purpose of removing nutrients from wastewater, such as testing eggshell and potato peel [27], oyster shell and rice husk biochar [28], oyster shell and tobacco straw biochar [61], and eggshell and palm mesocarp fibres [62].
Wastewater Type and Form of Treated Nutrient
Calcareous waste shells have been investigated more for phosphorus removal (70% of studies) than for nitrogen (30%) (Figure 2c), possibly because phosphorus is easier to remove via adsorption, chemisorption, and precipitation compared to nitrogen. Phosphorus can be expressed as phosphate (PO4^3-), phosphate-phosphorus (PO4-P), elemental phosphorus (P), and total phosphorus (TP), while nitrogen can be stated as ammonia (NH3), ammonium (NH4^+), ammoniacal nitrogen (NH3-N), nitrate (NO3^-), nitrate-nitrogen (NO3-N), and total nitrogen (TN) (Table 1). Most authors used synthetic wastewater (Figure 2c) in their experiments to remove nutrients (P: 68%, N: 26%), followed by domestic/municipal wastewater (P: 20%, N: 26%). A few reasons may explain the predominance of laboratory synthetic wastewater in these studies. Firstly, real wastewater is inherently variable and chemically complex [42], so using synthetic wastewater, especially in early-stage laboratory experiments, is simpler and helps reduce confounding experimental effects. Secondly, access to and the potential pathogenic risks of using real wastewater can impede the progress of investigations due to health and safety concerns and logistical complexity. Lastly, field experiments would likely use actual wastewater, require larger volumes, and be influenced by prevailing climatic variables, so they have more logistical requirements. Most of these studies used small wastewater volumes (between 15 and 2000 mL) (Table S1, Supplementary Materials) and were bench-top-scale. They focussed on the effects of different experimental parameters (e.g., dosage, HRT, and temperature) on nutrient removal [69,95] and identified the nutrient removal kinetics at play [76,93]. Of all the non-synthetic wastewaters, landfill leachate was used to investigate nitrogen (especially ammoniacal nitrogen) removal, possibly because landfill leachate is characteristically high in ammonia [30,36].
The different nutrient forms investigated, as a function of the different CWS types, are detailed in Table 1 and summarised in Figure 2d. Phosphorus (predominantly phosphate) removal was investigated using all eight CWS types, including a combination of shells, whereas nitrogen removal was only tested with three CWS types: oysters, mussels, and eggshells (Figure 2d). It is likely that a greater number of studies investigated phosphate removal because it is well reported how phosphate is removed by adsorption or chemisorption under calcareous conditions [7,60,91]. During adsorption, trivalent phosphate anions (PO4^3-) bond more strongly with divalent calcium cations (Ca^2+) than monovalent nitrate anions (NO3^-) do, resulting in faster and stronger phosphate removal [60].
The total nitrogen (TN) concentration was only measured in two studies (Figure 2d, Table 1), both using oyster shells in biological filter systems with aerobic nitrification in the upper layer(s) and anaerobic denitrification in the lower biofilter layers [79,84]. Conventional biological aerated filters (BAFs) did not remove TN, due to the absence of an anoxic stage [84]. However, raw oyster shells incorporated as alkaline filters were reported to maintain the anoxic system's neutral pH [22] and act as a biocarrier for denitrifiers [84]. The removal of other nitrogenous forms (ammonia and nitrate) was investigated using oyster shells (12 studies), mussel shells (7 studies), and eggshells (4 studies) (Figure 2d). In the absence of a microorganism-driven system, the chemisorption of ammonia occurred heterogeneously on multilayers of the shells [18]. In contrast, Daud et al. [72] and Detho et al. [70], who investigated ammonia removal from landfill leachate, deduced that a composite adsorbent containing mussel shells provided an increased surface area for greater monolayer adsorption compared to mussel shells only (167 mg/g). This composite adsorbent achieved a 60% reduction and an adsorption capacity of 200 mg/g of ammoniacal nitrogen. Bhatnagar and Sillanpää [41] found that surface area does not substantially influence nitrate adsorption but that positively charged surfaces do, presumably as this increases the binding of nitrate anions. They found better nitrate adsorption with adsorbent protonation (activated carbon, zeolites, industrial wastes), ionic exchange (clay-based adsorbents, double-layered hydroxides), and an increase in the presence of amine and hydroxyl functional groups (chitosan, agricultural waste materials).
Most studies reported the removal of a single nutrient, and less emphasis has been placed on the simultaneous removal of nitrogenous compounds and phosphate. As mentioned earlier in this section, phosphate removal is well documented compared to that of nitrogenous compounds. Nevertheless, the successful removal of both nutrients from domestic or municipal wastewater using biofiltration [85,88] and coagulation/flocculation [60] has been reported. In general, CWSs in biofilters remove phosphate via chemical sorption and precipitation, while the removal of nitrate and ammonium is facilitated by bacteria during nitrification and denitrification processes [85]. On the other hand, when CWSs are used in coagulation/flocculation, their main constituent, calcium oxide (CaO), transforms into calcium hydroxide (Ca(OH)2) upon hydration, which simultaneously removes phosphate and nitrate via the precipitation of calcium phosphate (hydroxyapatite) and calcium nitrate-hydroxide complexes, respectively [60]. A more detailed explanation of both nutrient removal mechanisms can be found in Section 3.5.
Effects of Experimental Conditions on Nutrient Removal
Table S1 (Supplementary Materials) summarises the key experimental conditions and nutrient removal efficiencies across the studies. Experimental variables include CWS type, pollutant concentration and form, adsorbent dosage, hydraulic retention (contact) time, pH, and temperature.
The data are also shown as box plots, highlighting data ranges and trends across the studies (Figure 3). The initial pollutant concentration(s) (Figure 3a), CWS adsorbent dosage (Figure 3b), HRT (Figure 3c), and wastewater volume (Figure 3d) for all CWS types are compared. Abbreviations for each shell type are denoted as oyster shells (OSs), mussel shells (MSs), eggshells (ESs), clam shells (CSs), white hard clams (wCSs), bivalve shells (BSs), zebra mussels (ZSs), and combination of shells (CoS). Figure 3a displays the initial pollutant concentration of phosphate (P), ammonium (A), and nitrate (N) for the different shell types. Generally, a larger nutrient concentration range (2–1500 mg/L) was observed for the main shell types studied (i.e., OS, MS, and ES). Phosphate removal with OS was tested between 2 and 600 mg/L with a median of 80 mg/L. In contrast, MS showed two distinct ranges, 5–20 mg/L (low) and 150–1000 mg/L (medium–high), with an overall median of 10 mg/L. ES had a higher median of 400 mg/L, with more studies conducted at higher phosphate concentrations. Higher phosphate concentrations in synthetic wastewater compared to non-synthetic wastewater were used for all shell types to support isotherm and kinetic studies [8,36,91,92]. Conversely, when lower phosphate concentrations were used (with MS), they corresponded to phosphate concentrations typical of freshwaters (e.g., rivers or lakes) (Table 1).
The median ammonium concentration tested with OS was 60 mg/L, while MS (median 300 mg/L) showed three distinct concentration ranges between 2 and 500 mg/L. The high ammonium concentrations corresponded to synthetic wastewater, raw landfill leachate, and swine wastewater. The medium ammonium range (100 mg/L) occurred in synthetic and domestic wastewater, while the low ammonium level (<10 mg/L) was from lake water and aquaponic wastewater (Table 1). Very few studies used OS and ES for nitrate removal, with none recorded for MS. The nitrate concentration tested with OS ranged from 6 to 300 mg/L with a median of 200 mg/L, while for ES, it ranged higher at 80–800 mg/L (median 400 mg/L). As with ammonium, high initial nitrate concentrations prevailed in synthetic wastewater, with medium-strength nitrate concentrations in domestic wastewater and low nitrate concentrations in aquaponic wastewater and treated effluent.
Figure 3b shows the range of adsorbent dosages (in g/L) for the different shell types. A larger range of adsorbent dosages was found in studies using OS, MS, and ES (0.1–100 g/L), a reflection of the greater number of studies using these shell types. While their medians are relatively close (10–20 g/L), the mean for OS (90 g/L) was higher than those of MS and ES (both 25 g/L), suggesting that OS has a wider application in water treatment (represented by an extended distribution plot). Oyster shells dominated as a CWS adsorbent compared with other shell types. This may be because of their wider availability, higher calcium content, and higher resulting alkalinity. As stated earlier, the CaCO3 content in oyster, mussel, and eggshells ranges from 90 to 97%. Interestingly, the mineral composition of shells from the same species but collected from different places also differed [15], which might be attributed to variations in the shells' nutrition and age and the habitat's temperature, salinity, and pH [96]. Furthermore, differences in pyrolysis conditions likely affect the dosages used in the experimental studies. For example, CWSs produced by slow calcination have a higher BET surface area (81 m^2/g) than those produced by fast calcination (20 m^2/g) [46].
Other shell types, such as zebra mussels, clams, white hard clams, bivalves, and combined shells, were studied in a narrower adsorbent dosage range, a reflection of the fewer publications and possibly regional shell availability. For example, abundant bivalve shells in the freshwater lake of Tonle Sap, Cambodia, were used as adsorbents for phosphate removal to control eutrophication [55], and white hard clam waste shells were utilised to remove phosphorus from anaerobically digested swine wastewater in Vietnam [56]. Zebra mussels, an invasive species in North America, were mined for calcium carbonate to remove phosphorus from domestic wastewater [31].
The influence of hydraulic retention time (HRT), or the time the CWS adsorbent was in contact with the wastewater, on each shell type is shown in Figure 3c. Most shell types were investigated within an HRT range of 0.083–50 h. Median values for all shell types ranged from 10 to 20 h, except for combined shells, which had a higher median of 30 h. The two highest data points in the CoS studies corresponded to unusually long contact times of 4 and 15 days in batch adsorption assays, which skewed this median [5,94].
The wastewater volume used in the experiments is shown in Figure 3d. OS and MS sample volumes ranged between 20 and 600 mL, with median values between 50 and 100 mL (as for ES), suggesting that most laboratory experiments considered 50–100 mL sufficient to investigate the effect of CWSs on nutrient removal under laboratory conditions. The exception was a study of combined shells, with a median of 300 mL used in the experiments. Field trials using CWS-derived media to treat nutrients would require much greater volumes of wastewater (synthetic or real) and would provide valuable additional datasets to understand the effects of wastewater volume on nutrient removal.
Isotherms and Kinetics
For the practical application of adsorption processes to treat pollutants, it is crucial to have data related to equilibrium, the adsorbent's adsorption capacity, and the kinetics of the process [97]. Adsorption kinetics provide important data for designing, optimising, and troubleshooting treatment systems relying on adsorption. Adsorption isotherms explain the adsorption/desorption mechanisms of particular treatment processes [66]. Table S2 (Supplementary Materials) summarises the removal mechanisms, isotherms, and reaction kinetics reported in the experimental studies reviewed. Figure 4 shows the CWS types and the isotherm and kinetic models reported.
Calcination causes volatilisation of the organic components, increasing the calcareous shell's specific surface area and porosity [19,33]. Heating the shell to above 500 °C converts CaCO3 to CO2 and calcium oxide [28] and changes the crystal structure of CaCO3 [6]. In this stage, the orthorhombic crystal structure (aragonite, the main form of calcium carbonate in CWSs) is transformed into a calcite polymorph with a trigonal rhombohedral structure [4,6,101]. Calcination is the most widely used method to enhance CWS capacity to remove nutrients from wastewater (Figure 4a). Most papers reporting calcination or other CWS modifications used oyster shells, reflecting the dominance of OS in the experimental studies.
CWS acidification is applied by immersing the shells in a low-concentration acid solution overnight to dissolve CaCO3, thus producing CaO [89]. The hydrothermal method involves crystallising minerals from high-temperature aqueous solutions in an autoclave or a microwave under high vapour pressures [102]. Lai [78] processed oyster shells by sintering them with silica micro-powder at high temperature and observed that the hydrothermal treatment produced a large adsorptive phosphorus capacity and a long 'service life'. The sintering and hydrothermal treatment result in the partial transformation of CaCO3 into CaO, which partly reacts with SiO2 (silicon dioxide) to produce CaSiO3 (calcium silicate), resulting in activated Ca^2+ that reacts with phosphate in wastewater to form calcium phosphate precipitates [76,78].

Isotherm models reported in the CWS studies are shown in Figure 4b. Common models (in descending order of frequency) are Langmuir, Freundlich, and Langmuir-Freundlich. The Langmuir and Freundlich models are the two most common adsorption isotherm models used to describe adsorbate removal processes and mechanisms [76]. While the Langmuir isotherm model assumes adsorbate interaction on completely homogeneous surfaces with the same activation energy, the Freundlich isotherm model is suitable for highly heterogeneous surfaces [34,76]. The Langmuir-Freundlich, or Sips, isotherm assumes that the surface is homogeneous but that adsorption is a cooperative process due to sorbate-sorbate interactions. This model behaves like the Langmuir model (monolayer adsorption) at high adsorbate concentrations, while it behaves like the Freundlich model at low adsorbate concentrations [92,103]. Other relevant isotherm models reported in the CWS adsorption studies are the Temkin [69], Dubinin-Radushkevich (D-R) [95], and Koble-Corrigan [58] models.
It was found that two-thirds of the reviewed studies involved isotherm investigation. Langmuir is the most common isotherm in studies involving oyster shells, mussel shells, and eggshells, indicating that adsorption occurs by monolayer deposition at the surface, with each site only accommodating one adsorbate unit. The Freundlich isotherm was more commonly reported in studies with mussel shells, which suggests multilayer adsorption on non-uniform surfaces of this type of shell [103]. It is postulated that active sites on the mussel shells' surface strongly bind the first layer of the adsorbate (e.g., phosphate), and that the binding strength decreases with increasing adsorbate layers. Furthermore, oyster shells are mostly used in raw form (Figure 4); for these, the Langmuir isotherm seems to produce the best fit. Mussel shells are mainly calcinated, and for these the Freundlich model seems to work best. Interestingly, a Langmuir-Freundlich isotherm is only observed for eggshell studies, implying that monolayer and multilayer adsorption occur simultaneously.
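For reference, the two workhorse isotherms take the standard forms qe = qmax·KL·Ce/(1 + KL·Ce) (Langmuir) and qe = KF·Ce^(1/n) (Freundlich) and can be fitted as sketched below. The equilibrium data are hypothetical and serve only to illustrate the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Monolayer adsorption on a homogeneous surface."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Multilayer adsorption on a heterogeneous surface."""
    return KF * Ce ** (1.0 / n)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
qe = np.array([1.8, 3.9, 6.5, 9.8, 14.2, 16.1])

popt_l, _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.05])
popt_f, _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
print("Langmuir:   qmax=%.2f mg/g, KL=%.4f L/mg" % tuple(popt_l))
print("Freundlich: KF=%.2f, 1/n=%.3f" % (popt_f[0], 1.0 / popt_f[1]))
```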
Of all 64 reviewed studies, only half reported process kinetics data (Figure 4c). Of these, most (73%) reported that the pseudo-second-order (PSO) model explained the experimental data better than the pseudo-first-order (PFO) (18%), Elovich (6%), or Avrami (3%) models. Nutrient removal by the three main shell types (i.e., oyster, mussel, and eggshells) followed PSO rather than PFO kinetics. The PFO model generally fits the data well over the first 20–30 min of the adsorption process, while the PSO model describes the adsorption behaviour well over the whole time range [103]. Hence, it is unsurprising to note a high agreement between the PSO model and the Langmuir isotherm [7,65,103]. The PSO kinetic model assumes that the adsorbate removal process is controlled by chemisorption with predominantly monolayer adsorption [7,20]. In contrast, the PFO model assumes that the adsorption rate is determined by physical interaction with a one-on-one adsorption mechanism, in which one pollutant unit is adsorbed onto one adsorbent surface site [8]. The Elovich kinetic model is only observed for clam shells, suggesting adsorption on very heterogeneous shell surfaces [58] and possible chemical reactions on the surface of the shells (adsorbent) [57].
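The PFO and PSO models referred to above take the standard forms qt = qe(1 − e^(−k1·t)) and qt = k2·qe^2·t/(1 + k2·qe·t), respectively; a fitting sketch with hypothetical uptake data follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    """Pseudo-first-order uptake (physisorption-limited)."""
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    """Pseudo-second-order uptake (chemisorption-limited)."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

# Hypothetical uptake data: t in h, qt in mg/g.
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
qt = np.array([2.1, 3.4, 5.0, 6.8, 8.1, 8.9, 9.3])

for model in (pfo, pso):
    popt, _ = curve_fit(model, t, qt, p0=[10.0, 0.1])
    r2 = 1.0 - np.sum((qt - model(t, *popt)) ** 2) / np.sum((qt - qt.mean()) ** 2)
    print("%s: qe=%.2f, k=%.4f, R^2=%.4f" % (model.__name__, *popt, r2))
```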
Removal Mechanisms
Raw CWSs are mainly made of CaCO3 aragonite [6,60], which is less effective as an adsorbent for phosphate removal. Upon calcination at 500–600 °C, aragonite converts to calcium carbonate calcite (also known as calcite) [4,6,60], and further calcination at 800 °C produces CaO, the most effective form for phosphate precipitation [25,33,38]. It is reported that when using CWSs, phosphate is removed by retention on calcite rather than on aragonite [4] via adsorption and the nucleation of calcium phosphate (hydroxyapatite) precipitate [6]. Adsorption removal on calcite occurs on the particles' surface [4], creating sites for the initial binding between calcium and phosphate [69] and subsequently initiating the heterogeneous nucleation of calcium phosphate precipitates on calcite surfaces [6]. With time, more precipitates are formed by the reaction of hydrated CaO, i.e., Ca(OH)2, with phosphate [60], thus enhancing phosphate removal through a precipitation mechanism [4]. It was found that precipitation can remove phosphate in larger stoichiometric amounts than adsorption [4].
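The calcination, hydration, and precipitation steps described above correspond to the following standard reactions (our summary of textbook stoichiometry, not transcribed from the cited studies):

```latex
% Calcination, hydration, and phosphate precipitation as hydroxyapatite
\begin{align*}
\mathrm{CaCO_3} &\xrightarrow{\ \sim 800\,^{\circ}\mathrm{C}\ } \mathrm{CaO} + \mathrm{CO_2}\\
\mathrm{CaO} + \mathrm{H_2O} &\longrightarrow \mathrm{Ca(OH)_2}\\
5\,\mathrm{Ca^{2+}} + 3\,\mathrm{PO_4^{3-}} + \mathrm{OH^-} &\longrightarrow \mathrm{Ca_5(PO_4)_3OH}\downarrow
\end{align*}
```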
In line with the above, Abeynaike et al. [6] reported that 95% or higher phosphate removal was achieved with partially calcinated mussel shells (calcite), compared to 22–45% with raw mussel shells. A similar result was obtained by Daudzai et al. [60], where 85% phosphate removal and no nitrate removal were recorded for unrinsed raw seashells. In contrast, unrinsed calcinated seashells were able to remove almost 100% of both phosphate and nitrate.
Nitrogenous compounds, such as ammonium (NH4^+) and nitrate (NO3^-), are removed via chemisorption [18,35,36] or physisorption [10]. Monovalent nitrate anions (NO3^-) are known to bond weakly with divalent calcium cations (Ca^2+) compared to trivalent phosphate anions (PO4^3-) [60]. In addition, their high solubility makes them difficult to remove. However, the presence of O-Ca-O functional groups and CO3^2- ions in calcinated mussel shells facilitates nitrate removal through the formation of calcium nitrate-hydroxide complexes [60], as shown in Figure 5. Ammonium removal via chemisorption reactions such as ion exchange and electrostatic attraction occurs heterogeneously on multilayers of raw mussel shells, with two dominant functional groups, C-O and C=O, playing the main role [18].
While the higher presence of the C-O functional group and inorganic CO3^2- in calcinated eggshells improves ammonium adsorption, the specific surface area did not significantly influence ammonium adsorption [35].
Effects of Experimental Variables on Nutrient Removal
The effects of CWS adsorbent particle size (Figure 6), adsorbent dosage (Figure 7a), and hydraulic retention time (Figure 7b) on nutrient removal were further investigated using scatter plots and correlation analyses. Nutrient removal showed no clear correlation with CWS adsorbent particle size (Figure 6a (phosphate) and Figure 6b (nitrate and ammonium)), probably due to the large variation in experimental conditions used across the different studies. These variables, such as different wastewater types, pollutant strength, raw or modified CWSs, and operational conditions (pH, temperature), likely affect the adsorption capacity regardless of particle size. More data were available for phosphate than for nitrogen compounds to draw any potential relationships. Oyster shells had a larger particle size range with strong adsorption capacity (up to 1000 mg/g) across this range (Figure 6a), suggesting that they are an effective adsorbent for phosphate removal from wastewater [39]. In contrast, eggshells exhibited similar adsorption capabilities to oyster shells but were only tested with particle sizes of <1 mm. Mussel shells were tested within a much smaller particle size range (around 1 mm) but showed very large variations in adsorption capacity. The relationship between pollutant removal efficiency and adsorbent dosage across the studies is shown in Figure 7a,b (insufficient data were reported for nitrate to include it). The only CWS type that showed a relationship between adsorbent dosage and treatment efficiency was oyster shells (n = 4) treating ammonium at a relatively high adsorbent dosage (Pearson's r = 0.9249, R^2 = 0.8555). Nonetheless, eggshells consistently removed >80% of phosphate, with both oyster and mussel shells' phosphate removal being more variable (between 20 and 100%) (Figure 7b). One important variable to note is that most of the oyster shell studies used synthetic wastewater compared to the mussel shell studies, which may help explain the greater MS treatment variability.
The relationship between nutrient removal and HRT is shown in Figure 7c,d. There were insufficient (or no) data reported in the reviewed studies to comment on the performance of some CWS-nutrient combinations (e.g., ammonium removal by eggshells and nitrate removal with oyster and mussel shells). The only CWS types that showed apparent relationships between the parameters were oyster shells treating nitrate (Pearson's r = −0.9374, R^2 = 0.8787) and mussel shells treating phosphate (Pearson's r = 0.8047, R^2 = 0.6476), revealing that higher nitrate removal is achieved with a lower HRT for oyster shells, and a consistently high phosphate removal of >90% for mussel shells regardless of the HRT (indicative of a fast and relatively stable reaction) (Figure 7d).
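The reported r and R^2 values are related by R^2 = r^2; a minimal sketch with hypothetical dosage-removal data illustrates the computation.

```python
from scipy.stats import pearsonr

# Hypothetical oyster-shell data: adsorbent dosage (g/L) vs. ammonium
# removal (%), illustrating how r and R^2 relate.
dosage = [20, 40, 60, 80]
removal = [45, 62, 75, 88]
r, p = pearsonr(dosage, removal)
print(f"r = {r:.4f}, R^2 = {r**2:.4f}, p = {p:.4f}")
```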
Possible Reuse of Post-Treatment Adsorbents
A few of the studies reviewed in this manuscript discussed nutrient desorption, particularly of phosphate, after wastewater treatment [60,69,104]. Desorption would allow for nutrient recovery and reuse and would help extend the lifespan of the adsorbents as they are reused [69]. The desorption of phosphate was performed using 2% citric acid for an hour, with a successful regeneration rate of more than 80% [69]. Daudzai et al. [60] reported the use of 0.5 M HCl at a 1% dosage, achieving 80% and 98% successful regeneration for phosphate and nitrate, respectively. Coincidentally, both chemicals (2% citric acid and 0.5 M HCl) were applied as desorption agents with a high regeneration rate above 90% [104]. The full desorption of phosphate occurred with 2% citric acid and 0.5 M HCl, suggesting that the adsorbed phosphate would be bioavailable to plants if the CWS adsorbent were to be used as a soil conditioner for agriculture [104]. Pap et al. [104] also suggested that post-treatment CWS adsorbent could potentially be applied as a soil amendment in acidic soils with limited phosphate-adsorbing capacity. Another possible reuse of spent CWSs would be as construction material, e.g., as a replacement for magnesium phosphate cement [48].
Conclusions
This systematic review of nutrient removal from wastewater by calcareous waste shells (CWSs) revealed that most studies were conducted in the laboratory using synthetic wastewater and small (50–100 mL) wastewater volumes. Studies have focussed more on phosphorus removal than on nitrogen, and much higher phosphate removal levels have been reported compared to any nitrogenous compound. The review also revealed that calcination is the most frequently used method to enhance nutrient adsorption capacity. In terms of sorption models, the Langmuir isotherm fits experimental data from raw oyster shells best, while the Freundlich model fits data from calcinated mussel shell experiments better. Furthermore, higher agreement has been reported between experimental data and the pseudo-second-order kinetic model (73%) than the pseudo-first-order model (18%) for all shell types. An examination of the effect of individual experimental variables on nutrient removal was inconclusive, mainly due to the limited number of individual data points reported, coupled with the confounding effects of multiple different experimental variables across the studies. Identifying and controlling for confounding variables are crucial in establishing reliable correlations and drawing meaningful conclusions from research findings. The application of laboratory results to real-world solutions requires certainty about the effects of critical individual variables such as HRT, temperature, and wastewater strength. While there is commendable laboratory research showcasing the valorisation of CWSs in wastewater treatment, this needs to be translated into larger-scale trials to provide more certainty in sustainable design solutions and a pathway for scaling with certainty in effectiveness.
Several potential research questions arise that can be explored in future studies:
1. Does the CaCO3 content of different CWSs affect nutrient removal?
2. Which experimental variable (HRT, adsorption dosage, particle size, or wastewater strength) is most influential on nutrient removal capacity?
3. To what extent would climatic variables influence the adsorption capacity of CWSs removing nutrients if experiments were performed under field conditions?
4. Would the isotherm and kinetic results change if studies were conducted for longer under field conditions?
5. Can CWSs be modified (functionalised) differently to improve nitrate adsorption?
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/environments11060119/s1. Table S1: Experimental conditions and removal efficiencies using CWSs; Table S2: Isotherm and kinetic results of studies that used CWSs for treating nutrients in water.
Figure 1. Literature retrieval and screening process for the review (97% journal articles and 3% conference papers).
Figure 2. Number of publications by (a) country, (b) year, (c) wastewater type, and (d) nutrient forms. Total (n) of publications = 64 (Figure 2a,b); 59 for P and 23 for N (Figure 2c); 81 (Figure 2d). The total number of studies (81) across all wastewater forms in Figure 2d is greater than the total number of publications (64) because some studies reported multiple nutrient pollutant forms within the same study. Palestine (1) refers to 1 publication.
Table 1. Studies using calcareous waste shells (CWSs) to treat nutrients in wastewater. | 2024-06-08T15:04:47.529Z | 2024-06-06T00:00:00.000 | {
"year": 2024,
"sha1": "2b8817bc878e71ab3f60e634cde4ea9ad1027dab",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3298/11/6/119/pdf?version=1717677421",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b7173f1e8bbbf3286d2d059d955e965b83343ce3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
252683794 | pes2o/s2orc | v3-fos-license | HyperHawkes: Hypernetwork based Neural Temporal Point Process
The temporal point process serves as an essential tool for modeling time-to-event data in continuous time space. Despite massive amounts of event sequence data from various domains like social media, healthcare, etc., real-world applications of temporal point processes face two major challenges: 1) they do not generalize to predict events from unseen sequences in a dynamic environment; 2) they are not capable of thriving in a continually evolving environment with minimal supervision while retaining previously learnt knowledge. To tackle these issues, we propose HyperHawkes, a hypernetwork-based temporal point process framework which is capable of modeling the time of occurrence of events for unseen sequences. Thereby, we solve the problem of zero-shot learning for time-to-event modeling. We also develop a hypernetwork-based continually learning temporal point process for continuous modeling of time-to-event sequences with minimal forgetting. In this way, HyperHawkes augments the temporal point process with zero-shot modeling and continual learning capabilities. We demonstrate the application of the proposed framework through experiments on two real-world datasets. Our results show the efficacy of the proposed approach in predicting future events under the zero-shot regime for unseen event sequences. We also show that the proposed model is able to predict sequences continually while retaining information from previous event sequences, hence mitigating catastrophic forgetting for time-to-event data.
Introduction
Various applications in daily life, like earthquake occurrences, social networks, financial transactions, and user activity logs, are associated with collections of discrete asynchronous events, where event occurrences are represented by timestamps. Each event sequence, consisting of a series of timestamps, is associated with a separate entity. For example, in social media, each user can be associated with the times of posting tweets, and each tweet can be viewed as an event. Similarly, in financial transactions, each stock can be associated with the times of buy-sell orders. The ability to model such sequences is of vital importance for creating intelligent systems. These sequences often contain rich information which can predict the future evolution of the sequences.
A principled mathematical framework to model such sequences in continuous time space is the temporal point process (Valkeila 2008). The Hawkes process (Hawkes 1971), a self-exciting point process, has a rich theoretical literature and has been widely used in a wide array of practical applications like epidemic modeling (Diggle, Rowlingson, and Su 2005), earthquake prediction (Hainzl, Steacy, and Marsan 2010), financial modeling (Bacry, Mastromatteo, and Muzy 2015), and crime prediction (Mohler et al. 2011). Recent works improve on the performance of the standard classic Hawkes process by using neural networks to model such event sequences (Mei and Eisner 2016; Du et al. 2016; Xiao et al. 2017; Omi, Ueda, and Aihara 2019). Neural Hawkes processes have proved able to learn complex dependencies, as against their classical counterparts, and the neural Hawkes process is one of the cornerstones of recent progress in time-to-event modeling.
Despite its improved performance, the neural Hawkes process faces two potential limitations on the practical side. First, it typically needs to be trained on a large time-to-event dataset for the specific domain or entity. This restricts prediction for a new and unseen entity with limited or no data, which can be crucial for certain applications. Moreover, the process of data acquisition for the times of occurrence of events for a new sequence is expensive. Besides, such a process can be time-consuming for sequences with a low frequency of occurrence, which may take a long time to produce the massive amounts of data needed to predict future times of occurrence. Second, real-world event occurrences happen sequentially in continuous streams. Therefore, a realistic and challenging problem is to continually learn time-to-event models in an ever-changing environment while retaining previously learnt knowledge.
Motivated by the above limitations, we consider a practical and under-explored setting for time-to-event modeling, called zero-shot event modeling. We also consider a continual learning setup where time-to-event prediction tasks arrive sequentially in an online manner. We aim to develop neural Hawkes process models which can generalize to time-to-event prediction tasks with no data and can continually learn while retaining previous knowledge. To this end, we introduce HyperHawkes, a hypernetwork-based Hawkes process that generates sequence-specific parameters for the neural Hawkes process. A hypernetwork is essentially a meta-network which can generate parameters for the neural Hawkes process network for modeling continuous events. By employing hypernetwork-based learning, we improve the model's generalization ability to predict unseen sequences using sequence descriptors. By incorporating a descriptor-conditioned hypernetwork, we enable learning at the level of the time-to-event sequence through event-sequence-specific parameters, hence being able to predict unseen sequences with the help of a descriptor. For a more pragmatic setup, we augment our model to consider continually arriving sequences, where each sequence can be considered a separate task. For continually learning the event sequences, we recast the descriptor-conditioned hypernetwork to include a hypernetwork output regularizer. This regularizer penalizes changes to previously learnt parameters, hence retaining previously learnt time-to-event modeling capabilities. We provide two variants of the proposed approach, encompassing 1) zero-shot modeling and 2) continual learning capabilities within the framework of the neural Hawkes process. To the best of our knowledge, there is no prior work on zero-shot or continually learning time-to-event modeling.
Our contributions can be summarized as follows:
• We propose two novel problems of zero-shot learning and continual learning in the paradigm of time-to-event modeling.
• We propose HyperHawkes, a descriptor-conditioned hypernetwork-based neural Hawkes process which can generate event-sequence-specific parameters, hence learning at the level of the sequence. We present two variants of HyperHawkes considering the architecture of the neural Hawkes process.
• The proposed methods can be used for predicting the times of occurrence of events from unseen sequences, hence performing zero-shot time-to-event modeling.
• We augment the model with continual learning abilities by employing a hypernetwork-based regularization term, hence avoiding catastrophic forgetting for successively appearing time-to-event sequences.
• We present an experimental setup for evaluating zero-shot learning and continual learning for time-to-event modeling and demonstrate the effectiveness of the proposed models on two real-world datasets.
Hawkes Process
The Hawkes process (Hawkes 1971) is a point process (Valkeila 2008) with a self-triggering property, i.e., the occurrence of previous events triggers occurrences of future events. The Hawkes process has been used in earthquake modeling (Hainzl, Steacy, and Marsan 2010) and in epidemic modeling (Diggle, Rowlingson, and Su 2005; Chiang, Liu, and Mohler 2021). Point processes provide a solid mathematical framework for modeling event sequences. Earlier works on point process modeling specify a parametric form for the intensity function characterizing the point process.
However, parametric models may not be capable of capturing complex event dynamics. To address this, several works were proposed (Du et al. 2016; Mei and Eisner 2017; Omi, Ueda, and Aihara 2019; Zuo et al. 2020) where the intensity function is modeled using neural networks, which are better at learning complex event dynamics. Recently, Zuo et al. (2020) and Zhang et al. (2020) proposed to use the positional encodings of transformer language models (Vaswani et al. 2017) to model point processes. There are some efforts at learning from small data for the Hawkes process (Xie et al. 2019; Salehi et al. 2019). However, they are based on the statistical Hawkes process model (not the neural Hawkes process) and are not applicable to a zero-shot learning setting.
Hypernetwork, Continual Learning and ZSL
Zero-Shot Learning: ZSL (Palatucci et al. 2009; Lampert, Nickisch, and Harmeling 2009) aims to predict classes which are not in the training samples. Such classes are known as unseen classes, while classes present in the training samples are known as seen classes. Zero-shot learning has been addressed, among other ways, through mapping functions (Frome et al. 2013), generative models (Felix et al. 2018), and graph neural networks (Wang, Ye, and Gupta 2018). A large literature has addressed the problem of zero-shot learning across the domains of vision and natural language processing, for tasks such as text classification and relation extraction. Hypernetworks: These have been introduced as meta-networks which can generate weights for another network (Ha, Dai, and Le 2017). They are used for various tasks like meta-learning, neural architecture search (Zoph and Le 2016), natural language understanding (He et al. 2022), etc.
Despite extensive literature in all these domains, to the best of our knowledge there is no work at the intersection of zero-shot learning and time-to-event modeling. Also, there is no effort along the lines of continual learning for time-to-event modeling. We therefore address novel and essential problems in this direction which can benefit several applications.
Preliminary
Problem Definition
• Zero-shot learning for time-to-event modeling: Assume we are given a collection of $N$ seen sequences $D_S = \{(T_1, d_1), (T_2, d_2), \ldots, (T_N, d_N)\}$, where $d_i$ represents the descriptor (meta-information) of the $i$-th sequence and $T_i = \{t^i_j\}_{j=1}^{n_i}$ represents the times of occurrence of the $n_i$ events in the $i$-th sequence. Our goal is to predict times of event occurrence for $\bar{N}$ unseen sequences $D_U = \{(\bar{T}_1, \bar{d}_1), (\bar{T}_2, \bar{d}_2), \ldots, (\bar{T}_{\bar{N}}, \bar{d}_{\bar{N}})\}$ with the help of the sequence descriptors.
• Continual learning for time-to-event modeling: Assume we are given a collection of $N$ sequences $D = \{(T_1, d_1), (T_2, d_2), \ldots, (T_N, d_N)\}$, where $d_i$ represents the sequence descriptor and $T_i = \{t^i_j\}_{j=1}^{n_i}$ represents the times of occurrence of the $n_i$ events in the $i$-th sequence, and we assume these sequences arrive one after the other in the order of their index. Our goal is to continually learn the sequences while avoiding catastrophic forgetting of the previous sequences. So, we aim to learn an NHP model which will be able to predict the future event occurrences in all the sequences (a data-layout sketch follows below).
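To make the two settings concrete, the following is a minimal, hypothetical data-layout sketch in Python; the descriptor dimensions and values are illustrative, not taken from the paper:

```python
# Hypothetical layout for the two settings above. Each sequence pairs a
# descriptor vector d_i with its ordered event timestamps T_i.
seen = [
    # (descriptor d_i, event times T_i with t_1 < t_2 < ...)
    ([0.2, 1.0, 0.0], [0.4, 1.1, 1.9, 3.5]),
    ([0.9, 0.1, 1.0], [0.2, 0.8, 2.6]),
]
unseen = [
    # zero-shot: at test time only the descriptor is known in advance
    ([0.5, 0.7, 0.3], [1.3, 2.2]),
]
# continual learning: the same pairs, but consumed one sequence at a time
stream = iter(seen + unseen)
```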
Hawkes Process
Point processes are useful to model the distribution of points over some space and are defined using an underlying intensity function. A Hawkes process (Hawkes 1971) is a point process with a self-triggering property, i.e., the occurrence of previous events triggers occurrences of future events. The conditional intensity function for a univariate Hawkes process at time $t^i_j$ for the $i$-th sequence is defined as
$$\lambda_i(t^i_j) = \mu_i + \sum_{t^i_k < t^i_j} k\big(t^i_j - t^i_k\big),$$
where $\mu_i$ is the base intensity function and $k(\cdot)$ is the triggering kernel function capturing the influence from previous events. The summation represents the effect of all events prior to time $t^i_j$ which contribute to computing the intensity at time $t^i_j$. The probability density function at time $t^i_j$, given the past event times $\{t^i_1, t^i_2, \ldots, t^i_{j-1}\}$, is obtained as follows:
$$f\big(t^i_j \mid t^i_1, \ldots, t^i_{j-1}\big) = \lambda_i(t^i_j)\, \exp\!\left(-\int_{t^i_{j-1}}^{t^i_j} \lambda_i(s)\, ds\right),$$
where the exponential term on the right-hand side represents the probability that no events occur in $[t^i_{j-1}, t^i_j)$.
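As a hedged illustration of these two formulas, the sketch below evaluates the intensity and log-likelihood for the common exponential kernel $k(t) = \alpha e^{-\beta t}$; the paper leaves $k(\cdot)$ generic, so this kernel choice is an assumption of the sketch:

```python
import numpy as np

def hawkes_intensity(t, history, mu, alpha, beta):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_k))."""
    past = history[history < t]
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

def hawkes_loglik(times, mu, alpha, beta):
    """log L = sum_j log lambda(t_j) - integral_0^T lambda(s) ds, with the
    compensator integral in closed form for the exponential kernel."""
    times = np.asarray(times)
    T = times[-1]
    ll = sum(np.log(hawkes_intensity(t, times[:j], mu, alpha, beta))
             for j, t in enumerate(times))
    compensator = mu * T + np.sum((alpha / beta) * (1.0 - np.exp(-beta * (T - times))))
    return ll - compensator
```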
Neural Hawkes process
The standard Hawkes process assumes a parametric form for the intensity function which is not generalizable to every event prediction problem. The influences between the events can be complex and need not be exponentially decaying. Various recent works introduced neural Hawkes processes (Du et al. 2016; Mei and Eisner 2016; Omi, Ueda, and Aihara 2019) which model the intensity function as a nonlinear function of history using a neural network. The central idea of these works is to use recurrent neural networks (RNNs) to model the intensity function capturing the influence of past events. So, the conditional intensity function is modeled as
$$\lambda_i(t) = f\big(h^i_j\big),$$
where $h^i_j$ represents a hidden state updated using an RNN and $f(\cdot)$ is a positive-valued function ensuring the positivity of the intensity function.
Proposed Model
We propose HyperHawkes, a hypernetwork-based neural Hawkes process for time-to-event modeling. We consider the neural Hawkes process (NHP) (Omi, Ueda, and Aihara 2019) as the base model. Integrating a hypernetwork with the neural Hawkes process, we introduce a descriptor-conditioned hypernetwork to generate, for each sequence, weights which can perform time-to-event modeling; the descriptor-conditioned hypernetwork learns separate NHP weights for each sequence. We leverage this framework for zero-shot event modeling, where the hypernetwork produces weights for unseen tasks using the sequence descriptor and the NHP predicts future events. Inspired by (Von Oswald et al. 2019), we also use this framework for continual learning of tasks by using a hypernetwork-based regularizer. We discuss each of these pieces in detail in the following subsections.
Base Model: Neural Hawkes Process
In particular, we employ the neural Hawkes process (Omi, Ueda, and Aihara 2019) as the base model for time-to-event modeling. It uses a combination of a recurrent neural network and a feed-forward neural network to model the intensity function. We represent history by hidden representations generated by a recurrent neural network (RNN) at each time step. The hidden representation $h^i_j$ at time $t^i_j$ is obtained as
$$h^i_j = \mathrm{RNN}\big(\tau^i_j, h^i_{j-1}; W^i_r\big),$$
where $W^i_r$ represents the parameters associated with the RNN for the $i$-th sequence, such as the input weight matrix $V^i_r$, the recurrent weight matrix $U^i_r$, and the bias $b^i_r$; $h^i_j$ is obtained by repeated application of the RNN block on a sequence formed from the previous $M$ inter-arrival times. This is used as input to a feed-forward neural network to compute the intensity function (hazard function) and consequently the cumulative hazard function for computing the likelihood of event occurrences. In the proposed model, the inputs to the feed-forward neural network are (I) the hidden representation generated by the RNN and (II) the elapsed time from the most recent event. We model the conditional intensity as a function of the elapsed time $\tau = t - t^i_j$ from the most recent event, $\lambda_i(t) = \phi(\tau \mid h^i_j; W^i_t)$, where $\phi(\cdot)$ is a non-negative function referred to as a hazard function. Therefore, we define the cumulative hazard function in terms of the inter-event interval as
$$\Phi\big(\tau \mid h^i_j; W^i_t\big) = \int_0^{\tau} \phi\big(s \mid h^i_j; W^i_t\big)\, ds. \qquad (5)$$
However, we need to fulfill two properties of the cumulative hazard function: it has to be a monotonically increasing function of $\tau^i_j$, and it has to be positive-valued. We achieve these by maintaining positive weights and positive activation functions in the neural network (Chilinski and Silva 2020; Omi, Ueda, and Aihara 2019). The hazard function itself can then be obtained by differentiating the cumulative hazard function with respect to $\tau$ as
$$\phi\big(\tau \mid h^i_j; W^i_t\big) = \frac{\partial}{\partial \tau} \Phi\big(\tau \mid h^i_j; W^i_t\big). \qquad (6)$$
The log-likelihood of observing the event times is defined as follows using the cumulative hazard function:
$$\log L\big(W^i\big) = \sum_{j} \Big[ \log \frac{\partial}{\partial \tau} \Phi\big(\tau^i_j \mid h^i_{j-1}; W^i_t\big) - \Phi\big(\tau^i_j \mid h^i_{j-1}; W^i_t\big) \Big], \qquad (7)$$
where $\tau^i_j = t^i_j - t^i_{j-1}$ and $W^i = \{W^i_r, W^i_t\}$ represents the combined weights associated with the RNN and FNN. In NHP, the weights of the networks are learnt by maximizing the likelihood given by (7), with the gradient of the log-likelihood calculated using backpropagation.
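A minimal PyTorch sketch of this construction follows; it is an illustration under assumptions (layer sizes, activations, and the omission of the per-sequence weight generation introduced in the next section are ours), not the paper's implementation. Monotonicity and positivity of $\Phi$ come from softplus-constrained weights on the $\tau$ path, and the hazard of Eq. (6) is recovered by automatic differentiation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CumulativeHazard(nn.Module):
    """Monotone, positive Phi(tau | h). Positivity of the weights on the tau
    path (enforced with softplus) plus increasing activations keeps
    dPhi/dtau > 0, as Eq. (5)-(6) require."""
    def __init__(self, hidden_dim, width=16):
        super().__init__()
        self.w_tau = nn.Parameter(torch.randn(1, width))  # made positive in forward
        self.lin_h = nn.Linear(hidden_dim, width)         # h may enter unconstrained
        self.w_out = nn.Parameter(torch.randn(width, 1))  # made positive in forward

    def forward(self, tau, h):
        # tau: (N, 1) elapsed times; h: (N, hidden_dim) RNN states
        z = torch.tanh(tau @ F.softplus(self.w_tau) + self.lin_h(h))
        return F.softplus(z @ F.softplus(self.w_out)).squeeze(-1)  # (N,)

def neg_log_likelihood(Phi, tau, h):
    """Negative of Eq. (7): the hazard is dPhi/dtau, obtained by autograd."""
    tau = tau.clone().requires_grad_(True)
    Phi_val = Phi(tau, h)
    hazard = torch.autograd.grad(Phi_val.sum(), tau, create_graph=True)[0]
    return -(torch.log(hazard.squeeze(-1) + 1e-8) - Phi_val).sum()
```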
HyperHawkes: Hypernetwork based Neural Hawkes Process
A hypernetwork is a meta-network which produces parameters used by other networks (Ha, Dai, and Le 2017). As discussed in the above section, the neural Hawkes process comprises two building blocks, an RNN and an FNN, so we use hypernetworks to produce the weights for these two components. We use feed-forward neural networks to produce the parameters $W^i = \{W^i_r, W^i_t\}$ associated with the NHP. Since the natures of the RNN parameters $W^i_r$ and the FNN parameters $W^i_t$ are different, we use two different hypernetworks: $f_r(\cdot)$ producing $W^i_r$ and $f_t(\cdot)$ producing $W^i_t$. Given a sequence description $d_i$, the parameters for the RNN are generated as
$$W^i_r = f_r\big(d_i; \theta_{f_r}\big), \qquad (8)$$
where $\theta_{f_r}$ denotes the parameters of the hypernetwork (weight vectors of a neural network). Note that the hypernetwork parameters are the same across the sequences; the descriptor $d_i$ is used to generate the sequence-specific parameters. As discussed above, the cumulative hazard function is a monotonically increasing function of $\tau^i_j$ and is positive-valued, so the hypernetwork which generates the parameters for the cumulative hazard function has to fulfill these properties. This can be achieved when the hypernetwork generates only positive weights, for which we use a positive activation function. The hypernetwork $f_t(\cdot)$ for the FNN can thus be written as
$$W^i_t = f_t\big(d_i; \theta_{f_t}\big). \qquad (9)$$
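The following sketch shows one way such a descriptor-conditioned hypernetwork could look. The paper specifies two separate hypernetworks $f_r$ and $f_t$; for brevity this illustration shares a trunk with two heads, and all layer sizes are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    """Maps a sequence descriptor d to flat parameter vectors for the target
    NHP. Softplus on the f_t head keeps the generated cumulative-hazard
    weights positive, as Eq. (9) requires."""
    def __init__(self, desc_dim, n_rnn_params, n_fnn_params, width=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(desc_dim, width), nn.ReLU())
        self.head_r = nn.Linear(width, n_rnn_params)  # f_r(d): RNN weights W_r
        self.head_t = nn.Linear(width, n_fnn_params)  # f_t(d): hazard-net weights W_t

    def forward(self, d):
        z = self.trunk(d)
        return self.head_r(z), F.softplus(self.head_t(z))  # (W_r, W_t)
```

The flat vectors returned here would still need to be reshaped into the weight matrices of the NHP, e.g., via a functional forward pass such as torch.func.functional_call in recent PyTorch.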
HyperHawkes for Zero-shot modeling
In this section we discuss how we employ HyperHawkes for zero-shot event modeling. Our goal is to train the model on seen sequences $D_S$, with event sequences $T_s$ and task descriptors $d_s$, and to predict the event times of unseen sequences in $D_U$ given a task descriptor $d_u$. The central idea of the proposed approach is to predict the parameters of the neural Hawkes process for the unseen task $d_u$. This is achieved using the hypernetwork, which takes the sequence descriptor as input and outputs the parameters of the neural Hawkes process, as discussed in the previous section. Consequently, we can obtain the parameters for the RNN ($W^u_r$) using Equation 8 and for the FNN ($W^u_t$) using Equation 9, and use them to model the cumulative hazard function for an unseen sequence. These parameters can then be used for predicting events in the sequence $T_u$.
Training: We adopt a training procedure where we train the hypernetwork using maximum likelihood estimation for the NHP model. For each seen sequence $T_s$ from $D_S$, we sample a mini-batch consisting of events. The sequence descriptor $d_s$ of this sequence is used to generate the parameters of the neural Hawkes process using the hypernetwork and its parameters. These values are then used in Equation 7 to find the log-likelihood of the event times of the seen sequences. So, the log-likelihood described in Equation 7 becomes
$$\log L\big(\theta_{f_r}, \theta_{f_t}\big) = \sum_{j} \Big[ \log \frac{\partial}{\partial \tau} \Phi\big(\tau^s_j \mid h^s_{j-1}; f_t(d_s; \theta_{f_t})\big) - \Phi\big(\tau^s_j \mid h^s_{j-1}; f_t(d_s; \theta_{f_t})\big) \Big], \qquad (10)$$
with the hidden states $h^s_{j-1}$ computed using $W^s_r = f_r(d_s; \theta_{f_r})$. The difference in training lies in the fact that by maximizing this log-likelihood we learn the weights of the hypernetwork rather than those of the neural Hawkes process. So, we calculate $\theta_{f_r}$ and $\theta_{f_t}$ using the gradient of the log-likelihood function via backpropagation.
Prediction: For the prediction of events from an unseen sequence $T_u$ from $D_U$, we employ the trained hypernetwork to produce weights $\{W^u_r, W^u_t\}$ for the neural Hawkes process using the sequence descriptor $d_u$. The neural Hawkes process uses the bisection method (Omi, Ueda, and Aihara 2019) to predict the time of the next event: the bisection method provides the median $t^*$ of the predictive distribution over the next event time using the relation $\Phi(t^* - t^i_j \mid h^i_j; W^u_r, W^u_t) = \log(2)$.
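A hedged sketch of the bisection step follows, reusing the CumulativeHazard module from the earlier sketch; the bracket-growing loop and iteration count are implementation choices, not from the paper:

```python
import torch

def predict_next_time(Phi, h, t_last, lo=0.0, hi=1.0, iters=60):
    """Median of the predictive distribution: solve Phi(tau* | h) = log 2 by
    bisection and return t_last + tau* (cf. the relation above)."""
    target = torch.log(torch.tensor(2.0))
    while Phi(torch.tensor([[hi]]), h) < target:   # grow until log 2 is bracketed
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if Phi(torch.tensor([[mid]]), h) < target:
            lo = mid
        else:
            hi = mid
    return t_last + 0.5 * (lo + hi)
```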
HyperHawkes for Continual Learning
In a more realistic setup of event time modeling, sequences appear one after the other and it is unrealistic to store the data and models associated with all the previous sequences.
In spite of this, we need to predict correctly on these past sequences even though we only have data from new sequences. We want the NHP models to retain information from past sequences while learning from new sequences. The standard training of the NHP model adapts it to the new sequence data (by updating parameters to optimize the loss on the new sequence data) and results in forgetting what it has learnt from past sequences. The inability of neural network models to retain knowledge from past data is known as catastrophic forgetting, and continual learning techniques have been proposed to address it. Though this is studied in the vision community, to the best of our knowledge there is no work on the time-to-event prediction problem with NHP models. We address the novel problem of learning time-to-event sequences continually while retaining knowledge from past time-to-event sequences. Inspired by (Von Oswald et al. 2019), we use descriptor-conditioned hypernetworks for continually learning from event sequences. Ideally, we want our model to remember the parameters of the neural Hawkes process for each sequence. A naive approach to achieve this is storing and replaying previous data, which is obviously memory-expensive and unrealistic. However, HyperHawkes, being conditioned on the sequence descriptor, can be modified to handle this problem. The direct use of the HyperHawkes training through (10) would result in the hypernetwork forgetting how to generate the NHP parameters corresponding to past event sequences. We overcome this by incorporating a regularization on the hypernetwork parameters that penalizes any change to the NHP parameters produced for old sequences.
Given a sequence descriptor $d_s$ for the sequence $T_s$, our descriptor-conditioned hypernetwork $f_r(\cdot)$ can generate parameters $W^s_r$ and $f_t(\cdot)$ can generate parameters $W^s_t$. To perform continual learning, we use regularization to penalize changes in the parameters $\{W^c_r, W^c_t\}$ generated for past sequences, in order to retain information from those sequences and to learn continually. The regularization is applied to the hypernetwork parameters while learning a new event sequence, and this prevents the hypernetwork parameters from adapting completely to the new event sequence. For a new event sequence $T_s$ and its corresponding descriptor $d_s$, the hypernetwork parameters are learnt by minimizing the following continual learning loss over the events in the sequence:
$$\mathcal{L}\big(\theta_{f_r}, \theta_{f_t}\big) = -\log L\big(\theta_{f_r}, \theta_{f_t}\big) + \beta \sum_{c=1}^{s-1} \Big( \big\| f_r(d_c; \theta_{f_r}) - f_r(d_c; \tilde{\theta}_{f_r}) \big\|^2 + \big\| f_t(d_c; \theta_{f_t}) - f_t(d_c; \tilde{\theta}_{f_t}) \big\|^2 \Big), \qquad (11)$$
where $\{\tilde{\theta}_{f_r}, \tilde{\theta}_{f_t}\}$ represent the stored hypernetwork parameters after learning up to sequence $s-1$ and $\{\theta_{f_r}, \theta_{f_t}\}$ represent the hypernetwork parameters learnt considering the event sequence $s$ and the regularization to avoid forgetting. The regularization term ensures that the newly learnt hypernetwork parameters will still be able to produce the required main-network parameters for past event sequences given the sequence descriptor, without forgetting, and the regularization constant $\beta$ captures the importance associated with it. In this way, we retain the information from previous sequences at a meta-level. By including a simple regularization term within the framework of HyperHawkes, our model is capable of learning sequences continually without forgetting knowledge learnt from previous sequences. We are able to achieve this because of the use of a sequence-conditioned hypernetwork on top of the neural Hawkes process, emphasizing its usefulness for continual learning over event sequences in addition to zero-shot learning.
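The regularized objective above can be sketched as follows (illustrative names; hyper_old holds the frozen snapshot of the hypernetwork, and nll_new is the negative log-likelihood of the new sequence computed as in the earlier sketch):

```python
import torch

def continual_loss(hyper, hyper_old, past_descriptors, nll_new, beta):
    """Sketch of the regularized objective: NLL on the new sequence plus a
    data-free penalty pinning the parameters generated for earlier
    descriptors to those produced by the stored snapshot."""
    loss = nll_new                                   # -log L on sequence s
    for d_c in past_descriptors:                     # c = 1, ..., s-1
        W_r, W_t = hyper(d_c)
        with torch.no_grad():
            W_r_old, W_t_old = hyper_old(d_c)        # frozen snapshot
        loss = loss + beta * ((W_r - W_r_old).pow(2).sum()
                              + (W_t - W_t_old).pow(2).sum())
    return loss
```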
Experiments
Datasets
Due to the paucity of standard datasets for event modeling tasks which also contain meta-descriptions, we use the following two datasets. 1) Yelp: This is a dataset comprising business information and check-in information. Each business is associated with 82 attributes like Wheelchair Accessible, Accepts Insurance, By Appointment Only, Business Category, Business Timings, etc. Also, they are associated with latitude-longitude pairs. Moreover, these businesses are associated with a fine-grained category, which we convert into 22 broad categories using the hierarchy mentioned on the Yelp website. Using these attributes, we create a vector of length 1229 representing a business; this vector acts as the descriptor of the business. We select businesses with more than 5000 check-ins. For continual learning, we consider another sample of the dataset consisting of businesses with more than 10k check-ins, hence considering 26 business sequences. This is done to reduce the number of sequences for better visualization of the performance of each sequence. 2) Meme: This dataset (Leskovec, Backstrom, and Kleinberg 2009) tracks the popular phrases and quotes which appear most frequently over time in news media and blogs. Each meme is associated with the
Baselines
To the best of our knowledge, ours is the first work along this direction. Therefore, we propose our own baselines: 1) FNHP: the fully neural Hawkes process (Omi, Ueda, and Aihara 2019); this approach does not incorporate the sequence descriptor.
2) FNHP-Descriptor: in this variant, we use the concatenated descriptor and time as input to the RNN and FNN. For the continual learning setup, we compare against HyperHawkes without any regularization as the baseline.
Implementation Details
For the zero-shot setup, we perform a 60-20-20 split where 60% of sequences are considered seen sequences, 20% are used as unseen validation sequences, and the remaining 20% as unseen test sequences. We use a single layer with 32 units for the hypernetwork, with the softplus activation function for modeling the cumulative hazard function. For the neural Hawkes process, we consider a recurrent neural network with one layer of 16 units and a 2-layer feed-forward neural network with 16 units in each layer. We use the Adam optimizer with learning rate, β1 and β2 as 0.
Experimental Setup
We consider the following experimental setups to evaluate the performance of our model. 1) Zero-Shot: training is done on seen sequences and testing is done on unseen sequences.
2) Generalized Zero-Shot: testing is done by randomly sampling 20% of events from both seen and unseen sequences. 3) Standard Event Modeling: training is done on the first 70% of the events from all sequences, and testing is done on the last 20% of events for unseen sequences. Mean negative log-likelihood (MNLL) and mean absolute error (MAE) are the evaluation metrics for both the zero-shot and continual learning setups; lower MNLL and MAE indicate better performance.
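For completeness, both metrics reduce to simple averages; the sketch below assumes per-event NLL values and next-event time predictions have already been collected:

```python
def mnll_and_mae(event_nlls, pred_times, true_times):
    """MNLL: mean negative log-likelihood per held-out event.
    MAE: mean absolute error of predicted next-event times.
    Lower is better for both metrics."""
    mnll = sum(event_nlls) / len(event_nlls)
    mae = sum(abs(p - t) for p, t in zip(pred_times, true_times)) / len(true_times)
    return mnll, mae
```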
Zero-Shot Learning
Results for the zero-shot setup show that the proposed variants of HyperHawkes perform consistently and significantly better than the baselines for both datasets. Table 2 presents the results averaged over all tasks when HyperHawkes is enabled for continual learning. We can observe that the averaged performance of the proposed model is better than the case when no regularization is incorporated. Both proposed variants perform better than the model without regularization (which corresponds to setting β from Equation 11 to 0); hence the use of regularization within the framework of HyperHawkes supports that the proposed method can avoid catastrophic forgetting. Fig. 2 displays sequence-wise performance on both datasets for the proposed variants HyperHawkes-FNN and HyperHawkes-FNN-RNN. Fig. 2a displays the average MNLL over previous sequences for both models on Yelp. This shows that, when training the model without regularization on new sequences, the network is unable to retain information learnt from previous sequences, hence MNLL increases as we train on new sequences. With the use of regularization, we can avoid catastrophic forgetting, hence obtaining lower MNLL for successive tasks. Similar behavior is observed in Fig. 2b, which displays the average MAE over previous sequences for both variants, with and without CL, on Meme. This corroborates that the use of regularization with HyperHawkes can help with backward transfer. Fig. 2c shows the MNLL for each sequence using the model HyperHawkes-FNN-RNN with regularization; the model with regularization has lower MNLL than the model without any regularization. This reflects that the proposed model is also able to forward-transfer the knowledge learnt from previous sequences. So, the model is able to perform forward and backward transfer, which are important continual learning desiderata. Fig. 2d displays the effect of various regularization parameters on Yelp and Meme for HyperHawkes-FNN. A possible explanation is that, on Meme, for small β the model is not able to retain what it learnt from previous sequences, while for large β it might not be able to learn from new sequences. To conclude, the presented results suggest that the proposed framework can aid in avoiding catastrophic forgetting while learning continually.
Conclusion
In this work, we address two novel and practical limitations of time-to-event modeling. First, we address zero-shot event modeling for predicting the times of unseen events. Second, we propose an approach for continual learning for time-to-event modeling, where event sequences arrive continually and the model learns while retaining previous knowledge. To address both issues, we propose HyperHawkes, a descriptor-conditioned hypernetwork-based neural Hawkes process which can generate event-sequence-specific parameters. The proposed approach can predict the times of occurrence of events from unseen sequences, hence performing zero-shot time-to-event modeling. Subsequently, we augment HyperHawkes with a regularization which can aid in learning time-to-event sequences continually by avoiding catastrophic forgetting. Our experiments on two real-world datasets demonstrate the effectiveness of the proposed approach for both issues. In this way, we augment the ability of the neural Hawkes process to perform two unexplored and practical tasks: zero-shot and continually learning time-to-event modeling. | 2022-10-04T06:42:08.598Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "19d737ae992703b1851a7bff73e4b3a7f5f5d7f6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "19d737ae992703b1851a7bff73e4b3a7f5f5d7f6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
257998836 | pes2o/s2orc | v3-fos-license | Spontaneous Bone Marrow Edema: Perfusion Abnormalities and Treatment with Surgical Decompression
Bone marrow edema (BME), also termed bone marrow lesions, is a syndrome characterized by bone pain and the appearance of high signal intensity on T2 fat-suppressed and short tau inversion recovery (STIR) MRI sequences. BME can be related to trauma or a variety of non-traumatic diseases, and current treatment modalities include non-steroidal anti-inflammatory drugs (NSAIDS), bisphosphonates, denosumab, extracorporeal shockwave therapy (ESWT), the vasoactive prostacyclin analogue iloprost, and surgical decompression. Spontaneous BME is a subset that has been observed with no apparent causative conditions. It is most likely caused by venous outflow obstruction and intraosseous hypertension. These are mechanistically related to impaired perfusion and ischemia in several models of BME and are related to bone remodeling. The association of perfusion abnormalities and bone pain provides the pathophysiological rationale for surgical decompression. We present a case of spontaneous BME and a second case of spontaneous migratory BME treated with surgical decompression and demonstrate resolution of pain and the high signal intensity on MRI. This report provides an integration of the clinical syndrome, MR imaging characteristics, circulatory pathophysiology, and treatment. It draws upon several studies to suggest that both the bone pain and the MRI characteristics are related to venous stasis, and when circulatory pathologies are relieved by decompression or fenestration, both the bone pain and the MRI signal abnormalities resolve.
Introduction
Bone marrow edema (BME), also termed bone marrow lesions and, currently, edemalike marrow signal intensity, is a syndrome characterized by the sudden onset of bone pain and the MRI appearance of high signal intensity on T2 fat-suppressed (FS) and STIR MRI sequences characteristic of water density [1,2]. It may not be a single pathological entity but rather several pathologies with a common MRI appearance [1,3]. It is a nonspecific hallmark of several nontraumatic diseases and is associated with bone and ligament trauma, osteoarthritis (OA), avascular necrosis (AVN), infections, and transient osteoporosis with different clinical symptomatologies, varying prognoses, and perhaps different clinical significance in these conditions [3][4][5][6][7]. It is also seen with no obvious associative or causative conditions, in which cases it has been termed spontaneous BME and has been distinguished clinically from AVN [8]. In many cases, BME spontaneously resolves and no treatment is necessary. However, a subset of patients experience intense or prolonged pain for whom treatment is needed.
A less common form of spontaneous BME is migratory BME, in which spontaneous, asynchronous, high-intensity signals on T2 FS MRI are accompanied by pain in different locations in one bone, usually in the knee, or more rarely in several bones at different times. It is also known as intra-articular regional migratory osteoporosis [9]. Migratory BME has been associated with low bone mineral density [2,10]. A review of the world literature in 2008 revealed 63 cases, although the condition is probably underreported and the true prevalence is unknown [11].
At our practice, we treat large, persistently painful (over 8 weeks) spontaneous BME of the knee with surgical decompression. We also decompress BME associated with AVN and mild OA, but do not decompress advanced OA, AVN with subchondral fracture or collapse, lesions associated with trauma, or mild, spotty lesions associated with other conditions, since they can resolve quickly. The surgical protocol consists of 1 or 2 arthroscopically and fluoroscopically guided 4 mm decompression portals into regions of cancellous BME as described in the case reports. To eliminate risks of fracture, corticocancellous transition zones are avoided as entry portals. Following decompression, patients are allowed to weight-bear to tolerance and usually undergo partial weight-bearing with 2 crutches for 1 week. No surgical complications have been encountered.
Case Reports
Case #1: A 60-year-old male with the spontaneous onset of disabling medial knee pain of 2 months duration. MRI demonstrated high signal intensity on T2 FS images at the medial femoral condyle (Figure 1). Under arthroscopic and fluoroscopic guidance, a guide pin was placed into the pathologic bone, taking care to remain proximal to the articular cartilage, and a 4 mm cannulated drill was used to create the surgical decompression. Pain relief occurred within 72 h. Follow-up MRI at 12 weeks post-decompression demonstrated resolution of the imaging abnormality and preservation of the structural integrity of the subchondral bone. At 1 year postoperative, the patient remained asymptomatic with normal knee function.
Case #2: A 53-year-old male with 3 months of spontaneous pain at the lateral femoral condyle. MRI revealed high signal intensity on T2 FS images at the lateral femoral condyle (Figure 2). He was treated with a surgical decompression that relieved his pain within 60 h. He presented again 5 weeks later, this time with spontaneous pain at the medial femoral condyle. MRI showed the characteristic high signal intensity in the medial femoral condyle on T2 FS images. The decompression track could be seen in the lateral femoral condyle together with incompletely resolved BME from his first treatment. A decompression was carried out in the medial femoral condyle with resolution of pain within a week. Post-treatment MRI at 12 weeks demonstrated resolution of BME in both condyles and no evidence of subchondral bone fracture. He remained asymptomatic with normal knee function at 1 year postoperative.
Discussion
The pathophysiological rationale for surgical decompression of BME is drawn from diverse observations of venous stasis, intraosseous hypertension, reduced perfusion, ischemia, and pain in BME and related conditions.
Several studies have suggested that the pain and high signal intensity on T2 FS MRI of BME are most likely caused by venous outflow obstruction leading to elevated intraosseous pressure (IOP) or intraosseous hypertension [5]. With time, the IOP rises to the point that arterial inflow can be compromised, leading to intraosseous ischemia. Venous stasis and outflow obstruction in BME of various associations were initially demonstrated by static imaging with contrast venography [12][13][14]. More recently, a study using dynamic gadolinium (Gd)-enhanced MR imaging demonstrated venous stasis associated with BME ( Figure 3). Washout of Gd from normal knees proceeded by 5 min and was complete by 10 min after contrast administration. In BME lesions, Gd washout and signal enhancement were delayed beyond 20 min of scan time [1]. Pharmacokinetic modeling enables the extraction of quantitative dynamic parameters of perfusion and has confirmed venous stasis with secondary reduced perfusion in human subjects. These and other studies with dynamic imaging have shown that spontaneous BME, as well as BME associated with OA and AVN, are accompanied by venous outflow obstruction and stasis [15].
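As a hedged illustration of the pharmacokinetic analysis referenced here, one common Brix-type parameterization models the relative signal enhancement after contrast administration as a bi-exponential in time; the exact model and constants used in [15,16] may differ:

```latex
% Illustrative Brix-type enhancement model (assumed form, not quoted from [15,16])
\frac{S(t)}{S(0)} = 1 + A\,k_{ep}\,\frac{e^{-k_{el}t} - e^{-k_{ep}t}}{k_{ep} - k_{el}}
% k_el: elimination (venous outflow) constant; a small k_el implies slow washout,
% i.e., the venous stasis seen in BME lesions.
```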
Venous stasis has been associated with elevated IOP in several clinical conditions and has been associated with bone pain, especially at the knee [12,[20][21][22][23]. A close relationship has been described between intraosseous hypertension and bone pain independent of the presence or absence of OA. Patients with bone pain at rest exhibited an IOP > 40 mmHg, while those with IOP < 35 mmHg did not [22]. Venous stasis, intraosseous hypertension, perfusion, and ischemia are mechanistically related. Linear relationships have been described between IOP and perfusion [24]. An elevation in IOP from 26-45 mm Hg reduces bone perfusion by 60%, while reduced bone perfusion results in intraosseous ischemia. Venous outflow obstruction has been shown to reduce bone pO2 by a factor of 1.5 within 30 min of venous occlusion [25]. Ischemia, if sustained and severe, can result in marrow cell death, as is seen in OA and AVN [26,27].
Figure 3. In BME, concentration continues to rise as outflow is obstructed (venous stasis). Pharmacokinetic modeling with the Brix equation allows quantification of the flow curve characteristics through derivation of perfusion constants. (C) k_el represents the contrast elimination constant and reveals quantitative differences in venous outflow between normal bone and BME. Adapted with permission from Ref. [16]. 2007, Annals of the New York Academy of Sciences.
A functional study of spontaneous BME revealed pathologically high intramedullary pressures with a mean of 73 mm Hg (range 50-90 mm Hg) and related the IOP to venous stasis [17]. In another study, intraosseous hypertension was found on pressure measurements in BME [18]. Surgical fenestration into cancellous bone abolishes both intraosseous hypertension and bone pain and increases pO2 [13,14,19]. The resolution of both the clinical symptoms and the MRI appearance by surgical decompression lends support to the hypothesis that the venous stasis in bone is causally related to the clinical syndrome of BME and its imaging manifestations.
There is no consensus on the treatment of BME. However, based upon the observed pathophysiology of venous stasis, infusion of the vasoactive prostacyclin analogue iloprost, or surgical decompression (forage), have been used. Iloprost has been used for the reduction of pressure in pulmonary hypertension. While iloprost infusion has been reported to be successful for treating BME [28], it involves prolonged intravenous infusion over several days and presents complications of vasodilation and hypotension [17,29,30]. Proponents of iloprost infusion point to complications of surgical decompression, including prolonged protected weight bearing and fracture [31]. The cases presented here demonstrate the efficacy of surgical decompression of spontaneous BME of the knee with rapid return to function without protected weight bearing. Lastly, a 2022 systematic review comparing BME treatment modalities found that surgical decompression resulted in virtually equivalent pain resolution compared to iloprost infusion in studies examining outcomes 1-3 months postoperatively [32]. Other promising treatment modalities include non-steroidal anti-inflammatory drugs (NSAIDs), bisphosphonates, denosumab, and extracorporeal shockwave therapy (ESWT) [32].
Several small series of patients with spontaneous BME of the hip or knee treated with surgical decompression have shown pain relief within 7 days postoperative and resolution of high signal intensity on T2 FS MRI within 3-6 months, depending upon follow-up times [33][34][35]. In our experience, as well as in the aforementioned studies, surgical decompression results in prompt and complete pain relief of spontaneous BME with minimal interference with function and no complications. Surgically related fractures can be prevented by placement of the core track in cancellous bone, avoiding corticocancellous transition zones. The procedure has substantial advantages over iloprost infusion in that vasomotor instability does not occur and the time course to recovery is faster. We recommend that surgical decompression be considered in the setting of spontaneous BME accompanied by disabling prolonged pain without structural changes. As described here, the procedure has the potential to relieve pain without the prolonged management of other conservative treatment methods.
Conclusions
Observations of skeletal perfusion in spontaneous BME and related conditions suggest that the syndrome is causally related to venous stasis and intraosseous hypertension, and that these perfusion changes may result in ischemia. It is not our suggestion that all cases of BME need treatment, since some will resolve spontaneously. However, spontaneous BME accompanied by disabling pain of 2-3 months duration is ameliorated within a few days by surgical decompression, as demonstrated in the cases presented here. It must be acknowledged that these clinical observations, however supported by physiological rationale, are uncontrolled, and a randomized controlled trial is needed to examine the surgical hypothesis contained herein. There is no consensus treatment of BME other than treatment of consequential conditions when they occur, often late in the course when structural damage to the subchondral bone has occurred.
Institutional Review Board Statement: The IRB was contacted and the authors were assured that, as this is a case report and no research was being conducted, this report was not under their jurisdiction.
Informed Consent Statement: Written informed consent was obtained from the patients to publish this paper. | 2023-04-07T15:19:22.757Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "6eeda5e58a18e330cd9257288df06a3b04b34a88",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "99ba0c1390e02bf9578650c1a8f599b6f931285c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
92658245 | pes2o/s2orc | v3-fos-license | Terrestrial Isopoda ( Crustacea , Oniscidea ) from the coasts of Costa Rica , with descriptions of three new species
Seven species of terrestrial isopods are recorded from the coasts of the Pacific and Caribbean sides of Costa Rica. Three species (Buchnerillo neotropicalis, Hawaiioscia nicoyaensis and Trichorhina biocellata) are described as new and two species (Tylos niveus and Armadilloniscus cf. caraibicus) are newly recorded from the country. The poorly known species T. niveus is also illustrated. At present the total number of terrestrial isopod species recorded from Costa Rica is 30. Interestingly, four typical littoral halophilic species (Ligia baudiniana, Tylos wegeneri, T. niveus and A. cf. caraibicus) are present on both the Pacific coast of Costa Rica and on the coasts of the lands encompassed by the Caribbean Sea. With the sole exception of A. cf. caraibicus, no morphological differences could be detected between the Pacific and Caribbean populations of those species.
To date, the diversity of Costa Rican terrestrial isopods is poorly known. Only 25 species in 16 genera and 10 families are presently recorded, numbers that are certainly very small for a tropical country like Costa Rica. Many records are very old, mainly dating from the first half of the 1900s (Richardson, 1910, 1913; Arcangeli, 1927, 1930, 1957; Van Name, 1936), while only the families Philosciidae and Scleropactidae have been recently revised, by Leistikow (1997a, 1997b, 1998, 2000a, 2000b, 2001) and Schmidt (2007). However, all the records come from sporadic collections. Only a few records are known for most of the forested areas, and only two species (Ligia baudiniana Milne Edwards, 1840 and Tylos wegeneri Vandel, 1952) were reported from the coasts of the country.
This paper deals with recent collections of Oniscidea from both the Pacific and Caribbean coastal areas of Costa Rica, and includes the descriptions of three new species and two new records for the country.
MATERIALS AND METHODS
The specimens included in this paper were collected in November 2015 on both sandy and rocky coasts of the Pacific side of Costa Rica. Also included are littoral terrestrial isopods deposited in the collection of the Museo de Zoología, Universidad de Costa Rica (MZUCR), San José. Specimens were collected by hand and stored in 75% ethanol. The geographic co-ordinates of the locations were taken using the WGS84 datum. Identifications are based on morphological characters. For each new species the material examined, description, etymology and remarks are given. For each species already recorded from Costa Rica the bibliographic references, material examined, distribution and remarks (when necessary) are included. Some poorly known species have been illustrated to facilitate future recognition. The taxa are illustrated with figures prepared with the aid of a camera lucida mounted on Wild M5 and M20 microscopes and digitally drawn using the method by Montesanto (2015, 2016). For some species, pictures were taken with a Hitachi S-3700N scanning electron microscope.
Distribution: Atlantic and Pacific shores of the Americas from Florida to Brazil and from California to Ecuador, including Galapagos Islands (Schmalfuss, 2003).
Remarks: At present, this is the only species of Ligia recorded from both coasts of Costa Rica. The species has been fully redescribed and illustrated by Leistikow (1997a).
Remarks: Tylos wegeneri was recorded from Puntarenas by Schultz (1983). This is the only record of this species for the Pacific coast, while the species seems to be widespread in the Caribbean Sea. However, no records from the Caribbean coast of Costa Rica are known. For a description and figures of this species see Vandel (1952) and Schultz (1970).
Tylos niveus Budde-Lund, 1885 (Figs. 1, 2)
Remarks: The main characters of this species are illustrated in Figs. 1 and 2 to confirm its identification and facilitate future recognition. For a complete list of synonyms of this species see Schmalfuss & Vergara (2000). This is the first record for the Pacific coast.
Additional material: 2 ♀♀ used for scanning microscope analysis, same data as holotype.
Etymology: The name of the species refers to the localities of collection of the species, the Neotropical Region.
Remarks:
The new species is included in Buchnerillo since it shows all the characters of the genus: small size; animal able to roll up into a perfect ball; endoantennal conglobation; dorsal surface tuberculated; cephalon with a wide frontal shield; pleonite 3 with epimera reduced; telson semicircular, covering the uropods in dorsal view; antenna short and stout with a flagellum of three articles; male pleopod 2 with distal article flagelliform. To date, only two species of Buchnerillo are known: B. litoralis Verhoeff, 1942 and B. oceanicus Ferrara, 1974. The former is known from the shores of the Mediterranean Sea and Madeira (Schmalfuss, 2003); a record of a female specimen from the Florida Keys (Paoletti & Stinner, 1989) is very doubtful and the identification needs confirmation. The latter is presently known from Somalia (Ferrara, 1974) and the Maldives (Taiti, 2014). Buchnerillo neotropicalis n. sp. differs from both species in the presence of a schisma on pereonite 1 with the inner lobe distinctly protruding backwards, whereas in B. litoralis and B. oceanicus a small rounded ventral lobe is present at the postero-lateral corners, not protruding backwards. It also differs from B. litoralis (see redescription in Vandel, 1960) in having the cephalon with the frontal shield grooved, larger eyes (4 ommatidia instead of 1-2), dorsal tubercles more prominent, and male pleopod 1 endopod with the distal part thicker and straight; and from B. oceanicus in the frontal shield with the lower margin sinuous on both sides instead of regularly curved.
The systematic position of the genus Buchnerillo is still uncertain. It was included by Vandel (1960) in the section Synocheta and family Buddelundiellidae (now a subfamily of Trichoniscidae). Tabacaru (1993) recognized that the genus could not belong to the Synocheta, and Schmalfuss (2003) included it in the higher Oniscidea (the Crinocheta) and hypothesised that the genus might belong to the family Detonidae, close to the genus Armadilloniscus.
According to the maxillular endite bearing only some apical setae without penicils, it might also be related to the family Olibrinidae. However, since no safe conclusion can be reached with morphological characters, we still maintain the genus as incertae sedis, as proposed by Taiti & Ferrara (1991). A molecular analysis might be useful to clarify the family placement of Buchnerillo.

Material examined: 1 ♀ (MZUF 9687), 1 ♀ used for scanning microscope analysis, Playa Pita, S of Tárcoles, Puntarenas, 9°44'32.9"N and 84°37'53.0"W, beach under logs, 27.XI.2015, leg. S. Taiti, J.A. Vargas & R. Vargas.
Remarks: The two female specimens examined here are morphologically very similar to Armadilloniscus caraibicus, described by Paoletti & Stinner (1989) from the Caribbean coast of Venezuela. The main characters of the Costa Rican specimens are shown in Fig. 7. They show the same disposition of dorsal tubercles as those from Venezuela (see Figs. 10 and 11 in the original description by Paoletti & Stinner, 1989, and Figs. 32 and 33 in Schmidt, 2002), but they are less developed. Since we have examined only 2 females, we only tentatively identify them as A. cf. caraibicus.
Etymology: The name of the species refers to the Gulf of Nicoya, where Playa Pita is located.
Remarks: The new species is included in the genus Hawaiioscia since all the most important characters (number and position of noduli laterales, maxillular teeth, penicil on the maxillipedal endite, uropod and shape of the male pleopod) correspond to the definition of that genus (see diagnosis in Taiti & Howarth, 1997). The new species is readily distinguishable from the Hawaiian species by the pigmented body, the well-developed eye, and the semidichotomized (instead of simple) molar penicil of the mandible. For this last character, the new species shows closest affinities with H. rapui, from which it mainly differs in having larger eyes (19-20 instead of 8 ommatidia) and in the shape of the male pleopods 1 and 2.

Additional material examined: 1 ♂, 1 ♀, used for scanning microscope analysis, same data as holotype.
Etymology: Latin: bi = double + ocellatus = having eyes. The name refers to the eye consisting of two ocelli of the same size.
Remarks: The new species belongs to the tomentosa-group of Trichorhina, characterized by the presence of two noduli laterales per side on pereonite 7. This group includes with certainty T. tomentosa (Budde-Lund, 1893) and T. heterophthalma Lemos de Castro, 1964, both widespread in the tropics, and T. guanophila Souza-Kury, 1993 from Brazil. The new species is readily distinguished from all these species by the eye consisting of two ommatidia of equal size (one in T. tomentosa, two unequal ommatidia in T. heterophthalma, and five in T. guanophila); from T. guanophila it also differs in the male pleopod 1 exopod being wider than long. Two more species of Trichorhina are recorded from Costa Rica by Arcangeli (1930): T. giannellii Arcangeli, 1929, known also from Cuba, and T. marianii Arcangeli, 1930. Their original descriptions give no information on the number and position of the noduli laterales, so we do not know if they belong to the tomentosa-group. Trichorhina biocellata n. sp. differs from these two species in the eye with only two ommatidia (four or five in T. giannellii and 10 in T. marianii).
DISCUSSION
In the present study seven species of terrestrial isopods are recorded from sandy and rocky shores of both coasts of Costa Rica. Three species (Buchnerillo neotropicalis, Hawaiioscia nicoyaensis and Trichorhina biocellata) are described as new, and two species (Tylos niveus and Armadilloniscus cf. caraibicus) represent new records for Costa Rica. The total number of oniscidean species presently known from Costa Rica increases from 25 to 30 (Table 1).
Six species are strictly littoral and halophilic: L. baudiniana, T. wegeneri, T. niveus, B. neotropicalis n. sp., A. cf. caraibicus, and H. nicoyaensis n. sp. All these species, with the exception of H. nicoyaensis and B. neotropicalis, occur on both the Pacific and Caribbean coasts of Costa Rica or in other countries along the Atlantic coast of the Americas. No morphological differences could be detected between the Pacific and Caribbean populations of these species; only A. cf. caraibicus from Playa Pita on the Pacific coast showed small differences from the original specimens described from the Caribbean coast of Venezuela, in its less developed (though same type of) dorsal ornamentation. It will be quite interesting to examine both the Pacific and Caribbean populations of these five species with molecular markers to see whether there is cryptic diversity between them, as revealed in other littoral isopods, e.g. in Excirolana braziliensis Richardson, 1912 (Cirolanidae) (Hurtado et al., 2016), considering that the Isthmus of Panama was definitively closed 2.8 Ma (O'Dea et al., 2016).
ACKNOWLEDGEMENTS
We express our sincere thanks to Rita Vargas, Curator of Crustaceans (MZUCR), and to Jeffrey Sibaja for their invaluable help in collecting part of the material treated here. We also thank Rafael Loáiciga for technical assistance with the Scanning Electron Microscope housed at CIEMIC, UCR. S.T. wishes to thank the Centro de Investigación en Ciencias del Mar y Limnología (CIMAR) and the University of Costa Rica for their invitation to give a talk and do research in Costa Rica at the end of 2015. Field trips and SEM access were facilitated by projects UCR-VI-808-B3-113 "The benthos of Punta Morales" and UCR-VI-808-B4-117 "Ecology of beaches and rocky shores of Costa Rica", both with J. Sibaja as Principal Investigator. | 2018-12-05T16:25:03.043Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "9941f4d076f8d659986dcf902156edbd4da4ab77",
"oa_license": "CCBY",
"oa_url": "https://revistas.ucr.ac.cr/index.php/rbt/article/download/33296/32771",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9941f4d076f8d659986dcf902156edbd4da4ab77",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
225336318 | pes2o/s2orc | v3-fos-license | Comparing a Three-Term Perturbation Solution of the Nonlinear ODE of the Jacobi Elliptic SN Function to Its Approximation into Circular Functions
In this paper, the nonlinear differential equation of the elliptic sn function is solved analytically using the Lindstedt-Poincaré perturbation method. This differential equation has a cubic nonlinearity and a constant known as the modulus of the elliptic integral. This constant takes any value from zero to one, and the square of its value is used as a small parameter. Fortunately, there is an exact solution to this differential equation, known as the Jacobi sn elliptic function. When the modulus approaches zero, the differential equation becomes linear, with the circular sine function as the exact solution. The Lindstedt-Poincaré technique is used to render the perturbation solution uniformly valid at larger values of the independent variable, and a three-term perturbation solution is obtained. This solution is compared analytically with the approximate expansion of the elliptic function into circular functions in the case of a small modulus. Then, it is compared with the exact, numerically calculated, sn elliptic function. The relative percentage error is calculated at certain values of the modulus and for all values of the independent variable. The relative error is reasonably small but increases at larger values of the modulus. In addition, the approximate expansion of the exact solution gives smaller relative error than that of the perturbation solution.
Introduction
In some nonlinear problems a perturbation solution may be obtained when a small parameter exists [1]. The obtained perturbation solution depends mainly on the existence of an unperturbed solution, i.e. the solution of the same problem when the small parameter vanishes. Through iterative-like procedures, the solution gets closer to the exact one by adding terms of orders of magnitude less than the base, or unperturbed, solution. Difficulties arise when a singularity exists in the solution. In this case, it will be non-uniformly valid, and techniques such as those established by Lindstedt-Poincaré or Lighthill can be used to eliminate the non-uniformity in the solution [1,2,3]. When applying these techniques the analytical iterations become more difficult as more terms are included in each iteration step. The differential equation of the Jacobi elliptic sn function is an example of a nonlinear ordinary differential equation that includes a cubic nonlinearity and a small parameter. This small parameter has the property of deforming the solution from an initial function to a final one as it goes from zero to unity. The nonlinear differential equation has an exact solution known as the Jacobi elliptic sine function, or sn function. The value of this function can be obtained from tables or using scientific software such as Matlab or Maple. But when the value of the modulus is close to zero, the sn function can be approximated as a series expansion of circular functions with different harmonics. This approximation can be calculated without special software [4,5], and its explicit analytical nature makes it useful in analytical comparison with the perturbation solution.
In this paper the Lindstedt-Poincaré technique will be used to obtain a uniformly valid three-term perturbation solution to the differential equation of the Jacobi elliptic function. In the second section, the perturbation solution is derived and the effect of the modulus on its behavior is analytically indicated. In the third section, an approximate series expansion of the exact solution is reviewed. The approximation is derived so that it includes the same order of the small parameter as the perturbation solution. In the fourth section, the perturbation solution is compared with the exact solution and its series expansion in the case of a small modulus. Solutions, in addition to relative errors, are tabulated and represented graphically at different values of the modulus. Detailed analysis of the behavior of the solutions and errors is introduced. Finally, conclusions are drawn in the fifth section.
Perturbation solution
Consider the nonlinear differential equation [6]

$$y'' + (1 + k^{2})\,y - 2k^{2} y^{3} = 0, \qquad (1)$$

where k is a constant known as the modulus of the elliptic integral and k ∈ [0, 1]. When the modulus k → 0, (1) reduces to a simple harmonic oscillator whose solution is a circular function. But when k takes any small positive value less than one, i.e. k ∈ (0, 1), a cubic nonlinearity exists. Let us define another small parameter ε = k², where ε < k for k ∈ (0, 1). The existence of the small parameter allows using the perturbation technique to solve the above problem. Furthermore, to apply the Lindstedt-Poincaré technique, let us transform the independent variable from x to u through the transformation

$$u = \omega x, \qquad \omega = 1 + \varepsilon\omega_{1} + \varepsilon^{2}\omega_{2} + \cdots \qquad (2)$$

Substituting (2) into (1) gives

$$\omega^{2}\,\ddot{y} + (1 + \varepsilon)\,y - 2\varepsilon y^{3} = 0, \qquad (3)$$

where the dot denotes differentiation with respect to u. The next step is to expand the dependent variable as a series in the small parameter ε,

$$y = y_{0} + \varepsilon y_{1} + \varepsilon^{2} y_{2} + \cdots \qquad (4)$$

When substituting (4) into (3), then collecting and equating coefficients of equal powers of ε, one obtains the following set of linear differential equations:

$$\ddot{y}_{0} + y_{0} = 0, \qquad (5)$$

$$\ddot{y}_{1} + y_{1} = (2\omega_{1} - 1)\,y_{0} + 2y_{0}^{3}, \qquad (6)$$

$$\ddot{y}_{2} + y_{2} = (2\omega_{2} + 2\omega_{1} - 3\omega_{1}^{2})\,y_{0} + (2\omega_{1} - 1)\,y_{1} - 4\omega_{1} y_{0}^{3} + 6 y_{1} y_{0}^{2}. \qquad (7)$$
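As a quick check on this hierarchy, the substitution of (2) and (4) into (3) and the collection of powers of ε can be automated symbolically. The following sketch in Python with SymPy (variable names are ours, not the paper's) prints the raw order equations; substituting ÿ₀ = −y₀, and then the resulting expression for ÿ₁, recovers Eqs. (5)-(7) above.

```python
import sympy as sp

u, eps = sp.symbols('u epsilon')
w1, w2 = sp.symbols('omega1 omega2')
y0, y1, y2 = (sp.Function(f'y{i}')(u) for i in range(3))

omega = 1 + eps*w1 + eps**2*w2      # frequency expansion, Eq. (2)
y = y0 + eps*y1 + eps**2*y2         # dependent-variable expansion, Eq. (4)

# Transformed sn equation (3): omega^2 * y'' + (1 + eps) y - 2 eps y^3 = 0
expr = sp.expand(omega**2*sp.diff(y, u, 2) + (1 + eps)*y - 2*eps*y**3)

for n in range(3):
    print(f"O(eps^{n}):", sp.simplify(expr.coeff(eps, n)), "= 0")
```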
Behavior of the perturbation solution
Equation (12) shows that any term in the obtained solution takes the form of a circular function multiplied by a finite quantity. In addition, for the solution to converge the following condition should be satisfied:

$$\left|\frac{\varepsilon^{n+1}\, y_{n+1}}{\varepsilon^{n}\, y_{n}}\right| < 1.$$

Knowing that $\left|y_{n+1}/y_{n}\right| = O(1)$, the condition of convergence then reduces to ε = o(1), which is satisfied by the definition ε = k², where k ∈ (0, 1), as indicated in the second section.
Approximate series solution
The Jacobi elliptic function sn(x; k) is the solution to the differential equation in (1) [6]. For small values of the modulus k this function can be expressed in terms of the circular sine and cosine functions. The derivation of this approximation starts from the fact that the independent variable in (1), which is the argument of the sn function, is the incomplete elliptic integral of the first kind F(φ; k),

$$x = F(\varphi; k) = \int_{0}^{\varphi} \frac{d\theta}{\sqrt{1 - k^{2}\sin^{2}\theta}},$$

where φ is known as the amplitude and θ, t are dummy variables. For small values of the modulus k, the sn function can be written as follows [4,5]:

$$\mathrm{sn}(x; k) \approx \sin x - \frac{k^{2}}{4}\,\cos x\,(x - \sin x\cos x).$$
As we derived our perturbation solution to include k⁴, it is reasonable to compare it with an approximation of the sn function including k⁴ as well. Thus, with some effort we derived the following approximation:

$$\mathrm{sn}(x; k)_{\mathrm{approx}} = \sin x - \frac{k^{2}}{4}\,\cos x\, g(x; k) + \frac{k^{4}}{32}\left[2\cos x\, g(x; k) - \sin x\, g^{2}(x; k)\right], \qquad (13)$$

where

$$g(x; k) = x - \sin x\cos x. \qquad (14)$$
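For the comparisons reported below, the expansion (13)-(14) is easy to evaluate against the exact sn function available in SciPy (note that scipy.special.ellipj takes the parameter m = k², not the modulus k). This is a minimal sketch under our reading of the bracket grouping of the k⁴ term:

```python
import numpy as np
from scipy.special import ellipj, ellipk

def sn_approx(x, k):
    """Circular-function expansion of sn(x; k) up to k^4, Eqs. (13)-(14)."""
    g = x - np.sin(x)*np.cos(x)
    return (np.sin(x) - (k**2/4)*np.cos(x)*g
            + (k**4/32)*(2*np.cos(x)*g - np.sin(x)*g**2))

for k in (0.2, 0.4, 0.6, 0.8):
    m = k**2                                       # SciPy parameter convention
    x = np.linspace(1e-3, float(ellipk(m)), 400)   # x in (0, K(k)]; sn(0) = 0
    sn_exact = ellipj(x, m)[0]                     # ellipj returns (sn, cn, dn, ph)
    e = 100*(sn_approx(x, k) - sn_exact)/sn_exact  # relative percentage error
    print(f"k = {k}: max |E_approx| = {np.abs(e).max():.3f} %")
```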
Results and discussion
The perturbation solution is compared with the exact solution, i.e. the elliptic sn function, and its approximate series expansion in (13). Equations (12) and (13) show that when k → 0 the two solutions reduce to the same base solution sin x. The exact sn function also reduces to the same solution when k → 0. In this specific case there is no need to compare these solutions numerically. For other values of the modulus, the perturbation solution is expected to differ from the other ones. The three solutions are listed in Table 1 to Table 4 for values of the modulus k = 0.2, 0.4, 0.6, 0.8, respectively. In addition, the following relative percentage error form is used to show how close the explicit perturbation solution and sn_approx are to the exact, numerically calculated, sn solution:

$$E_{\mathrm{pert}} = \frac{y_{\mathrm{pert}} - \mathrm{sn}}{\mathrm{sn}} \times 100\%, \qquad (15)$$

$$E_{\mathrm{approx}} = \frac{\mathrm{sn}_{\mathrm{approx}} - \mathrm{sn}}{\mathrm{sn}} \times 100\%. \qquad (16)$$
The perturbation solution, the approximate expansion, and the exact solution at values of the modulus k = 0.2, 0.4, 0.6, 0.8 are graphically represented in Figure 1 to Figure 4, respectively, for x ∈ [0, K(k)], where K(k) is the complete elliptic integral of the first kind. Figure 1 shows that the solutions are very close when k = 0.2. In Figure 2 to Figure 4, with increasing k, the difference between the perturbation solution and the exact solution increases. However, one can notice the small rate of increase of the difference between the approximate expansion and the exact solution.
The relative percentage errors defined in (15) and (16) are graphically represented at the values of the modulus k = 0.2, 0.4, 0.6, 0.8 in Figure 5 to Figure 8. It is obvious from these figures that the relative percentage errors are undefined at x = 0, as all solutions are equal to zero at this point. More importantly, the maximum difference between the errors E_pert and E_approx increases with k. Moreover, one can note that in Figure 5 to Figure 8 this maximum difference occurs at the largest value x = K(k). Such behavior of the perturbation solution is expected, as this solution was not enforced to satisfy the end condition. The behavior of the error in Figure 5 to Figure 8 can be attributed to the different terms of small and large magnitudes in the two solutions y_pert and sn_approx. The reason may also lie in the different ways each solution is mathematically derived, even though both the perturbation solution and the approximate expansion are built on the assumption of a small value of the modulus (Table 4 lists y_pert, sn_approx, sn, E_pert, and E_approx at k = 0.8). The approximate series expansion in (13) includes the function g(x; k) = x − sin x cos x, which does not exist in the perturbation solution in (12). At k = 0.2, 0.4, 0.6 the maximum absolute value of E_pert is 0.99254%, 3.8914%, and 8.7005%, respectively, while at k = 0.8 this value jumps to 17.936%. Thus, for small values of k the relative percentage error is reasonably small and the perturbation solution based on this assumption can be used. Fortunately, in this specific problem we have an exact solution and an approximation of this solution to compare with the perturbation solution. However, the results shown are indicative of how a perturbation solution performs in cases when an exact solution does not exist.
Conclusion
An analytical approximate perturbation solution to the nonlinear ordinary differential equation of the Jacobi elliptic sn function is obtained assuming a small value of the modulus. The relative percentage error between the perturbation solution and the numerical exact one is reasonably small, but at larger values of the modulus this error grows considerably. An approximate series expansion of the sn function gives smaller maximum errors than the perturbation solution. However, the magnitude and sign of the error of the series expansion change at different values of the independent variable. The results also give insight into the effect of the mathematical basis of perturbation and approximate series solutions on their accuracy, even though both depend on the small-parameter assumption. In the future, such results can be considered when applying a Lindstedt-Poincaré perturbation solution to nonlinear problems. | 2020-09-03T09:02:45.414Z | 2020-08-31T00:00:00.000 | {
"year": 2020,
"sha1": "993c85b0b7dac151fa73794bea35ed6c74838e84",
"oa_license": "CCBYNC",
"oa_url": "https://dergipark.org.tr/en/download/article-file/1266427",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "485e3bc7e9a0acc327580593a2efcc4f68e95e99",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
237089656 | pes2o/s2orc | v3-fos-license | Pulmonary mucoepidermoid lung carcinoma in pediatric confused with asthma
Pulmonary mucoepidermoid carcinoma (PMEC) is an extremely rare tumor of the respiratory system. The clinical presentation of PMEC is variable and nonspecific, including cough, hemoptysis, and wheezing, and may mimic other symptoms of pneumonia or asthma. Here, we present a case of PMEC in a 12-year-old male who was diagnosed with and treated for asthma for 2 years. The patient presented with symptoms of respiratory failure that did not respond to steroids or bronchodilator medications. Chest computed tomography (CT) scans revealed an endotracheal tumor. The patient underwent complete tumor resection, with no signs of recurrence 6 months after treatment.
Introduction
Primary pulmonary mucoepidermoid carcinoma originates from the glands that line the tracheobronchial tree [1] and represents approximately 0.1%-0.2% of all primary lung tumors [2]. PMEC often affects younger patients compared with other, more common types of lung cancer [3]. Due to the tumor location, patients typically present with symptoms associated with bronchial obstruction and atelectasis [3]. The tumors can be classified as low-grade or high-grade based on histopathological results [3]. Complete surgical resection remains the primary therapy for PMEC [4]. This case emphasizes the roles of imaging and histopathology in the diagnosis, exclusion of other diseases, and avoidance of misdiagnosis and mistreatment.
Case report
A 12-year-old male patient who was diagnosed with asthma 2 years prior presented with increasing shortness of breath, wheezing, and cough. The patient had no history of allergies. The patient had previously been hospitalized several times due to the same symptoms and was treated with bronchodilators and steroids; however, the symptoms did not improve and appeared to increase in severity. A blood test revealed increased neutrophil cell count (13 G/L) and C-reactive protein level (25 mg/L). A chest computed tomography (CT) scan was performed, which revealed an intratracheal mass. This mass was well-circumscribed with homogeneous enhancement (Fig. 1). The lung parenchyma was normal, and no mediastinal lymph nodes were observed. Bronchoscopy and tumor resection were indicated. The histological results demonstrated that the tumor cells included epidermoid, mucous, and intermediate cells without keratinization (Fig. 2). The final diagnosis was a low-grade PMEC tumor with negative surgical margins. The patient was not treated with any adjuvant therapy. After surgery, the symptoms of breathlessness and wheezing disappeared. Chest CT scans 6 months after surgery showed no signs of recurrence (Fig. 1).
Discussion
Lung cancer is quite rare in children. Smoking and asbestos exposure do not appear to be risk factors for PMEC [5]. PMEC affects male and female individuals equally and is primarily located in the trachea and bronchus [1]. Only 5% of PMEC cases are classified as high-grade, with the majority (95%) classified as low-grade [6]. Low-grade tumors often occur in young patients, whereas high-grade tumors are more likely to be observed in older patients [7].
Chest radiography may show consolidation, atelectasis, or a solitary lung nodule or mass; however, the chest X-ray may also appear normal in the case of a small endobronchial tumor without airway obstruction [8]. Chest CT scans typically show an endobronchial mass, with or without bronchial dilatation, and air trapping, obstructive pneumonia, or atelectasis [2,4]. Wang et al. [9] reported that low-grade PMEC is often located in the central bronchi or trachea, with smooth and well-defined margins, oval or lobular shape, and marked homogeneous enhancement; high-grade PMEC tends to be peripheral, with ill-defined margins, lobular shape, and heterogeneous, reduced enhancement. High-grade PMEC can be difficult to differentiate from bronchial carcinoid tumors due to the hypervascularity of the tumor on CT images [6].
Bronchoscopy is commonly used to define the localization and obtain a biopsy for a definitive diagnosis. Microscopically, PMEC includes mucous, epidermoid, and intermediate cells, lacking keratinization [10]. The extracellular spaces are formed by the tumor cells and contain a mucoid substance [9]. High-grade tumors have increased nuclear pleomorphism, mitotic activity, and cellular necrosis but reduced mucoid substance and vessels compared with low-grade PMEC [9].
Surgical resection is the primary treatment option for patients with PMEC [11]. Multiple surgical approaches can be utilized, including lobectomy, segmental resection, or endoscopic removal, depending on the location and extent of the tumor [11]. Adjuvant therapy is not indicated for cases of low-grade PMEC with complete resection [4]. No evidence currently supports the efficacy of chemotherapy or radiotherapy against high-grade PMEC, although epidermal growth factor receptor (EGFR)-targeted therapy has been suggested for unresectable or high-grade PMEC [12]. Low-grade PMEC is associated with a good prognosis and a 5-year survival rate of up to 95%, whereas high-grade PMEC is associated with a worse prognosis [3,10].
The patient in this article was a child who presented with symptoms of respiratory tract obstruction and had been misdiagnosed with asthma for a long time. After complete tumor resection, the histological results revealed a low-grade PMEC; therefore, the patient was not indicated for adjuvant chemotherapy or radiotherapy. The symptoms of respiratory obstruction were completely resolved by tumor removal.
Conclusion
PMEC is rare in children, and its symptoms are often similar to those of other lung diseases, leading to delayed diagnosis. Chest CT scans may help determine the cause and exclude differential diagnoses, such as asthma and pneumonia. Most PMEC cases have a good prognosis, and timely diagnosis and treatment may improve the overall survival rate of the patient.
Author contribution
Le TV and Nguyen MD contributed to this article as co-first authors. | 2021-08-17T05:26:37.679Z | 2021-07-07T00:00:00.000 | {
"year": 2021,
"sha1": "cddd0ef647a683f6fdbd20dbd09c480da833ad6f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rmcr.2021.101471",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cddd0ef647a683f6fdbd20dbd09c480da833ad6f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235593976 | pes2o/s2orc | v3-fos-license | Spatial distribution analysis of dentists, dental technicians, and dental therapists in Indonesia
Background: Access to health services, from healthcare providers to doctors, is needed around the world. One of the needs in public health is a system that is accessible to everyone; however, the unequal distribution of healthcare providers and health workers, especially in the dental field, is still a major problem in several countries, including Indonesia. The aim of this study is to analyze the spatial distribution of dentists, dental technicians, and dental therapists. Methods: This spatial analysis study was conducted after obtaining secondary data for Indonesia. All data were collected between September 1st and October 1st, 2020 from open-access sources of de-identified data. The numbers of dentists per area, dental technicians per area, and dental therapists per area were calculated for analysis. A spatial distribution map was prepared using the Quantum Geographic Information System (QGIS Desktop, version 3.10.6). Results: This study found a ratio of dentists to population in Indonesia of 1:17,105. The average number of dental technicians working in the public health centers in each province (dental technicians per area) in Indonesia was 0.13, and the average number of dental therapists (dental therapists per area) was 0.40. The spatial autocorrelation shows that there is a relationship between the values of dentists per area and dental therapists per area across provinces in Indonesia, with geographic clustering patterns in which adjacent locations have similar characteristics. This spatial autocorrelation did not occur for dental technicians. Conclusions: From this study we can conclude that there is an unequal distribution of dental personnel in Indonesia.
Introduction
Indonesia is a country in Southeast Asia located between the Indian and Pacific Oceans. It has more than 17,000 islands, the biggest of which are Sumatra, Java, Borneo (Kalimantan), Sulawesi and Papua. Indonesia is the 14th largest country in the world, with a land area of 1,904,569 square kilometers (735,358 square miles), and is the world's largest island country. Indonesia borders several countries, including Malaysia, East Timor, Papua New Guinea, Singapore, Vietnam, the Philippines, Palau and Australia 1.
Based on the Global Burden of Disease (GBD) study in 2016 2, dental and oral health problems, especially dental caries, affect nearly half of the world's population (3.58 billion people). Periodontal (gum) disease is the 11th most common oral health disease in the world. Meanwhile, in the Asia-Pacific region, oral cancer is the third most common type of cancer. The results of the Basic Health Research (Riskesdas) in 2018 stated that 57.6% of the Indonesian population had dental and oral problems during the last 12 months, but only 10.2% received treatment from dental medical personnel (dental nurses, dentists or specialist dentists), while the rest received no treatment 2,3. The largest proportion of dental problems in Indonesia consists of dental caries, tooth necrosis, and toothache complaints (45.3%). Meanwhile, the most common oral health problem experienced by Indonesians is swollen gums (14%) 3,4.
Effective Medical Demand (EMD) is defined as the percentage of the population who had oral health problems in the last 12 months multiplied by the percentage of the population who received dental care or treatment from dental medical personnel (specialists, dentists, dental nurses/dental therapists) and dental technicians 5. The EMD in Indonesia is only 8.1%, meaning that only a small fraction of the population receives dental treatment when they have dental problems 5,6. Three provinces of Indonesia in 2013, namely South Sulawesi, South Kalimantan and Central Sulawesi, had quite high levels of oral and dental problems (>35%), with EMDs of 10.3%, 8.0%, and 6.4%, respectively 7. This finding gives a clear picture of the gap between dental problems and the EMD in society. A public health system is needed whereby everyone can access public health services, including dental services, at an affordable rate. Private healthcare providers might improve the affordability of health services alongside government health services. The main focus that needs attention is curative care in single-practitioner and group practices in dentistry. Unfortunately, these private health services are driven by market demands in society and are business-oriented. The Geographic Information System (GIS) is a method commonly used in health research; it can analyze various healthcare variables in relation to other fields such as the physical, social and cultural environments 8,9. Access to health services is very important and needed around the world. The relationship between service providers (health practitioners) and consumers (patients) is still a major problem in many countries, including Indonesia 10,11.
As of 2012, of 8,975 public health centers, 5,439 had dentists and 3,536 did not. Nationally, 47.4% of public health centers had one dentist and 13.2% had more than two dentists 12,13. The province with the highest coverage of dental and oral health service efforts in public health centers is Bali (100%), and the lowest is Papua (24%); the national rate for these efforts is 84% 14,15. This means that Bali is the province best at providing dental services and appropriately considering the dental field, while Papua is the worst.
The ease with which residents can reach services and facilities, based on the distance and travel time to a resource, is called geographic accessibility. Optimal delivery of dental health services should take into consideration availability and accessibility, which together are referred to as spatial accessibility [16][17][18]. The aim of this study was to analyze the spatial distribution of dentists, dental technicians, and dental therapists in Indonesia.
Methods
This spatial analysis study was conducted after obtaining secondary data for Indonesia. All data were collected between September 1st and October 1st, 2020 from open-access sources of de-identified data; therefore, no ethical approval was required. We sought to identify the average numbers of dentists, dental technicians and dental therapists working in the public sector in the main buildings of public health centers in each province.
This study retrieved data of the number of dentists in each province in 2018 from the national Indonesian Health Facility Research report (Riset Fasilitas Kesehatan), accessed via the Ministry of Health, Indonesia.
The geographic administrative area data of Indonesia were retrieved from the Indonesia Geospatial Portal, which is publicly available. The provincial maps of the Indonesian database were processed with the Geospatial Portal for further geographic information system (GIS)-based analysis to identify the spatial distribution of dentists, dental technicians, and dental therapists per area, by obtaining the province-level polygon map containing the latitudes and longitudes of each province. We used the map of all 34 provinces of Indonesia for the analysis. Data on the distribution of dentists, dental technicians, and dental therapists were collected from the Ministry of Health and the Geospatial Portal.
We created a map using the Quantum Geographic Information System (QGIS Desktop, version 3.10.6). Global and local Moran's indices were calculated to determine the autocorrelation value and the local indicator of spatial autocorrelation (LISA). LISA was analyzed using GeoDa software, version 1.10.0.8. The level of significance was set at a p-value of <0.05, Z α/2 at 1.96, and randomization was run with 999 permutations. We used automatic Euclidean weight distance, which matched the assumption that each province has at least four neighboring provinces (see the sketch after this list). Interpretation of the LISA significance map includes the following categories:
• "high-high" indicates a clustering of high value rates (positive spatial autocorrelation)
• "low-high" indicates that low value rates are adjacent to high value rates (negative spatial autocorrelation)
• "low-low" indicates clustering of low value rates (positive spatial autocorrelation)
• "high-low" indicates that high value rates are adjacent to low value rates (negative spatial autocorrelation)
• "not significant" indicates that there is no spatial autocorrelation.
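Distance-based weights of the kind described above can be constructed directly from the province polygons. A minimal sketch with GeoPandas and NumPy, assuming a hypothetical shapefile name; the distance threshold is chosen so that every province has at least four neighbours, as in the Methods:

```python
import numpy as np
import geopandas as gpd

prov = gpd.read_file("indonesia_provinces.shp").to_crs(epsg=3857)  # metric CRS
cent = prov.geometry.centroid
xy = np.column_stack([cent.x, cent.y])

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise distances
np.fill_diagonal(d, np.inf)                 # a unit is not its own neighbour
thr = np.sort(d, axis=1)[:, 3].max()        # largest 4th-nearest-neighbour distance
w = (d <= thr).astype(float)                # binary distance-band weight matrix
```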
The outcome of Moran's I identifies the intensity of spatial autocorrelation along with the result of the statistical significance test, that is, the p-value. The following mathematical representation exhibits the computation of Moran's I:

$$I = \frac{N}{S_{0}}\,\frac{\sum_{i}\sum_{j} W_{ij}\,(x_{i} - \bar{x})(x_{j} - \bar{x})}{\sum_{i}(x_{i} - \bar{x})^{2}},$$

where $W_{ij}$ is the spatial weight between provinces i and j; the parameters are the numbers of dentists per area, dental technicians per area, and dental therapists per area; N is the total number of spatial units; $S_{0}$ is the aggregate of all spatial weights; and $x_{i}$ and $x_{j}$ are the values of the dental health personnel parameter in provinces i and j, respectively.
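For readers without GeoDa, the global index defined above is straightforward to compute directly. A minimal sketch in Python, assuming a precomputed weight matrix w such as the one built above:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x (length N) and spatial weights w (N x N)."""
    z = np.asarray(x, dtype=float) - np.mean(x)   # deviations from the mean
    n, s0 = len(z), w.sum()
    return (n/s0) * (z @ w @ z) / (z @ z)

# Toy check: 4 units on a line with contiguity weights; clustered values give I > 0
w = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
print(morans_i([1.0, 2.0, 8.0, 9.0], w))   # 0.4
```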
Results and discussion
The total number of public health centers in Indonesia was 9,831. The total numbers of dentists, dental technicians, and dental therapists working in the public sector in the main buildings of public health centers across all provinces were 15,833, 1,214, and 3,834, respectively (Table 1). Figure 1 shows the distribution of dental health personnel across the major islands of Indonesia. The distribution is still unequal, given the large differences in the numbers of dental personnel on each island. The average number of dentists working in the public sector and public health centers in each province (dentists per area) in Indonesia was calculated at 1.61 (Figure 2). The ratio of dentists to population in Indonesia was calculated at 1:17,105. The average number of dental therapists working in the main buildings of public health centers in each province (dental therapists per area) in Indonesia was calculated at 0.40 (Figure 3). The average number of dental technicians working in the main buildings of public health centers in each province (dental technicians per area) in Indonesia was calculated at 0.13 (Figure 4).
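The headline figures can be checked against the totals above. Note that the reported per-area means of 0.13 and 0.40 are averages of province-level values, so the national quotients below agree closely but not exactly for technicians and therapists; the implied national population follows from the dentist ratio:

```python
centers = 9831
staff = {"dentists": 15833, "dental technicians": 1214, "dental therapists": 3834}
for role, n in staff.items():
    print(f"{role}: {n/centers:.2f} per public health center")
# dentists: 1.61, dental technicians: 0.12, dental therapists: 0.39
print(f"implied population from the 1:17,105 ratio: {15833*17105:,}")  # ~270.8 million
```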
There are 13 provinces with a value of dentists per area below 1, which indicates that some areas have no dentists at all in their public health centers. All provinces in Indonesia have a value of dental technicians per area below 1, which shows that no region of Indonesia has a sufficient number of dental technicians for each public health center area. There are 31 provinces with a value of dental therapists per area below 1, which indicates that some areas have no dental therapists at all in a health center area (Figures 1-4).
The highest number of dentists per distribution area is in the province of Bali (Java-Bali region), and the lowest is in the province of West Papua (Papua region). The highest number of dental technicians per distribution area is in the province of Bangka Belitung (Sumatra region), and the lowest is in the province of North Maluku (Maluku region). The highest number of dental therapists per distribution area is in the province of Bali (Java-Bali region), and the lowest is in the province of Papua. Visualization maps of dentists, dental technicians, and dental therapists per area were developed and are presented in Figure 5.
Indonesia had both a shortage and an unequal distribution of dental health personnel in public health center areas. The distribution of dentists is similar to that in other studies, which have indicated maldistribution of both general practitioners (GPs) and specialists 19. Maldistribution indicates inequality associated with area. Dentists and dental nurses are two types of health workers who play an important role in providing dental health services. The distribution of these two health workers was highest in the Java-Bali region (60.1%) and lowest in the Papua region (10.3%). However, there is a high proportion of dentists and dental therapists in the Maluku Islands region (57.9%) 11,12. Based on this, there appears to be a gap between regions in the availability of dentists and dental therapists on duty at health centers. These results are also supported by a previous study that found an unequal distribution [19][20][21].
The poor distribution of health workers is present not only among dental health service personnel but among other types of health workers too. Although there is a health worker placement policy in Indonesia that uses a temporary-employee system for medical personnel (doctors and dentists), the distribution is not equal, especially in remote areas. According to the analysis conducted by Ihsan Husain et al. (2006), on average there are more dentists in Java-Bali than in the other regions 10,15. The unequal distribution of dentists can be seen by comparing the actual versus the ideal number of dentists relative to the total population. The ideal ratio is one dentist per 9,090 residents. In 2010, based on dentist registration data from the Indonesian Medical Council (KKI), there were 22,237 registered dentists, consisting of 20,665 general dentists and 1,582 specialist dentists 7. However, not all dentists registered in the Indonesian database work in a public health center: only 60.6% of dentists work for an Indonesian public health center, and the rest work in the private sector. Dentists working in the private sector (49.4%) may not be distributed equally around Indonesia 9.
Many factors cause this inequality, including the placement policy for health workers in each region (province and district/city), the quality and number of health service facilities, shifting disease patterns (especially in urban regions), the high disparity in community health status between regions, and the characteristics of the geographic area. According to the results of a study conducted by Bappenas, there is a gap between the number of health workers and the public health centers that need them, in that the number of workers exceeds what the public health infrastructure itself can absorb. Furthermore, to ensure health workers are evenly distributed, it is necessary to develop the infrastructure evenly 10-12.

Figure 2. Average number of dentists per public health center in every province.
Health worker placement policy is largely determined collectively by health offices and regional civil service agencies. According to a study in two districts of Gorontalo province, the unequal distribution of healthcare workers occurs due to a lack of healthcare facilities. The increase in the number of health workers every year was not followed by an equal distribution across health facilities. The study also found three factors that affected the policies and development of health facilities, namely disparities in health status, population migration, and geographic characteristics 10,15.
Based on the results of the spatial autocorrelation test with Moran's I, there was significant spatial autocorrelation of dentists per area (I: 0.272, z-value: 3.20) and dental therapists per area (I: 0.238, z-value: 2.85) in Indonesia. Meanwhile, the value of dental technicians per area (I: 0.002, z-value: 0.47) did not show significant spatial autocorrelation (Figure 6).
According to the World Health Organization (WHO), the ideal ratio of dentists for an area is 1:8,000 people, while Indonesia currently stands at 1:17,105. Dentists per area in Indonesia show spatial autocorrelation at the provincial level and are concentrated mainly in the Java-Bali region, with very few in the Papua region. Saudi Arabia, for example, is one of the countries that has improved its ratio of dentists to population: in 1987 the ratio was 1:8,906, but by 2016 it had increased to 1:1,880. The ratio of dentists to population in Saudi Arabia was recently 5.3 per 10,000 people, which is higher than in all the developing countries in the Asia-Pacific region. In this region, China reported the lowest ratio of dentists to population, with 0.12 per 10,000 people, while Japan has the highest, with 7.7 per 10,000 people 18,22,23.
Other studies reported that OECD member countries, excluding the Scandinavian countries and Greece, have dentist-to-population ratios varying from 5 to 8, with an average of 6.1 per 10,000 people. In addition, most European countries have dentist-to-population ratios ranging from 5.07 to 7.3 per 10,000 people. Among Middle Eastern countries, Bahrain has the lowest dentist-to-population ratio, at 1.5 per 10,000 people, and Qatar the highest, at 5.8 per 10,000 people 19,22,24,25.
Based on the distribution pattern we found, we assume that the distribution of dentists in Indonesia is driven by population size, the number of dental institutions, and the distribution of development across regions. Java is one of the areas with an even distribution of dentists in Indonesia, though it is also the most populated island 20,21. In line with its large population, Java has 18 dental institutions out of a total of 31 throughout Indonesia. Development and the availability of public facilities and infrastructure are the main reasons why dentists prefer to practice in the Java area. Dental therapists per area also show spatial autocorrelation at the provincial level, with the Nusa Tenggara region having the highest distribution and the Papua region the lowest 26,27. This spatial autocorrelation illustrates that there is a correlation between the numbers of dentists per area and dental therapists per area across provinces in Indonesia, and shows geographic clustering, i.e. patterns that are grouped and have similar characteristics in adjacent locations. This spatial autocorrelation did not occur for dental technicians. The results of the Moran's I test show that there is an autocorrelation, or spatial relationship, in the numbers of dentists per area and dental therapists per area in Indonesia.

Figure 6. Spatial autocorrelation (Moran's index) of dentists per area (a), dental therapists per area (b), and dental technicians per area (c). The X axis refers to the observed value (the response axis); the Y axis refers to the average, or spatial lag, of the corresponding observation of dental health personnel per area.
Furthermore, Moran's scatterplot also illustrates the pattern of relationships between the existing provinces (Figure 7).
Similar to the maldistribution of dentists, and based on the distribution patterns, the maldistribution of dental therapists may also be due to population size, the number of dental institutions, and the uneven distribution of welfare in each region. In addition, we assume that the geographical grouping of dental therapists in the Nusa Tenggara region reflects dental therapists substituting for the role of dentists where the distribution of dentists is low. Meanwhile, a very minimal distribution was seen in the value of dental technicians per area. In this study, it was found that no region of Indonesia had a sufficient number of dental technicians for each public health center, with a mean value of less than 1 per public health center. The factor that most influences this is the very limited number of schools for dental technicians, only 10 institutions throughout Indonesia. Together, these conditions describe the minimal distribution of dental personnel in Indonesia 28.
A sufficient number of dental health personnel, such as dentists, dental technicians, and dental therapists, in a province is very important for ensuring health services. The Indonesian government has made regulations regarding the minimum number of dentists in public health centers (at least one), but there is no distribution policy for dental technicians and dental therapists. Indonesia has also implemented various policies to solve the shortage of dentists, such as increasing the number of dental students trained in faculties of dentistry nationwide and pre-contracting dentists to work in rural areas after graduation through the Nusantara Sehat and PTT Daerah programs. However, inequality in the distribution of dental personnel is still found 29.
We determined the significance of local spatial autocorrelation through LISA. From this test, the significance of the relationship in each province was obtained. The pattern of Moran's local significance levels is presented in Figure 8. Interpretation of the LISA significance map includes the following categories (a minimal sketch of the underlying classification follows the list):
• "high-high" indicates a clustering of high-value rates (positive spatial autocorrelation)
• "low-high" indicates that low-value rates are adjacent to high-value rates (negative spatial autocorrelation)
• "low-low" indicates clustering of low-value rates (positive spatial autocorrelation)
• "high-low" indicates that high-value rates are adjacent to low-value rates (negative spatial autocorrelation)
• "not significant" indicates that there is no spatial autocorrelation.
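The quadrant labels above correspond to the sign pattern of each unit's mean-centered value and its spatial lag. The sketch below illustrates only this classification; the significance test with 999 permutations, as performed in GeoDa, is omitted, and the function name and the assumption of a precomputed weight matrix are ours:

```python
import numpy as np

def lisa_quadrants(x, w):
    """Label each spatial unit by its local Moran quadrant."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    lag = w @ z                                  # spatial lag of the deviations
    return [f"{'high' if zi > 0 else 'low'}-{'high' if li > 0 else 'low'}"
            for zi, li in zip(z, lag)]
```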
Spatial analysis was used to identify dental health personnel shortages and their geographical distribution. Traditional methods use administrative boundaries such as counties as the basic spatial units and identify shortages based on the numbers of dental health personnel within them [30][31][32]. Such approaches have been criticized for their inability to account either for the spatial variation of population demand and dental health personnel supply within those boundaries or for population-personnel interactions across them. Spatial analysis can assist current and future dental health personnel, dental school administrators, and policymakers in making informed decisions to determine suitable practice locations, dental school admissions criteria, and target areas for public health initiatives 33,34.
The problem of the unequal distribution of dental health personnel should be studied with spatial analysis. This study included the number of dental health personnel in each public health center for accuracy. This research can be accurate because the characteristics of public health centers in each region are similar [35][36][37]. Future studies are recommended to use data on dental health personnel in both the public and the private sector.

Figure 7. Spatial autocorrelation illustration of dentists per area (a), dental therapists per area (b), and dental technicians per area (c). The plots show the spatial autocorrelation of the numbers of dentists and dental therapists per area in every province in Indonesia; dental technicians do not show spatial autocorrelation.
Conclusion
The number of dentists in Indonesia has increased owing to the growing number of dental students trained in faculties of dentistry and to pre-contracts for dentists to work in rural areas after graduation through the Nusantara Sehat and PTT Daerah programs, intended to solve the shortage of dentists. However, inequality in the distribution of dental personnel is still found, there is no distribution policy for dental technicians and dental therapists, and the dentist-to-population ratio in Indonesia has not improved. Spatial analysis can help identify dental health personnel shortages and their geographical distribution.
Source data
The distribution data on the number of dentists in each province are available at https://www.litbang.kemkes.go.id/ (Ministry of Health), and geographic area data are available from https://tanahair.indonesia.go.id/portal-web (Geospatial Portal). | 2021-06-11T05:30:35.295Z | 2021-06-10T00:00:00.000 | {
"year": 2021,
"sha1": "38cad03460bc7e48d0d8fa4f4d24fbf2de4ed4ab",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/10-220/v2/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dfbdca4231f3dcf5210882e99da1930b1f6d5ab0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16999616 | pes2o/s2orc | v3-fos-license | Tissue-specific features of the X chromosome and nucleolus spatial dynamics in a malaria mosquito, Anopheles atroparvus
Spatial organization of chromosome territories is important for the maintenance of genomic stability and the regulation of gene expression. Recent studies have shown tissue-specific features of chromosome attachments to the nuclear envelope in various organisms, including malaria mosquitoes. However, other spatial characteristics of nucleus organization, such as the volume and shape of chromosome territories, have not been studied in Anopheles. We conducted a thorough analysis of tissue-specific features of the X chromosome and nucleolus volume and shape in the follicular epithelium and nurse cells of Anopheles atroparvus ovaries using modern open-source software. DNA of the polytene X chromosome from ovarian nurse cells was obtained by microdissection and was used as a template for amplification with degenerate oligo primers. A fluorescently labeled X chromosome painting probe was hybridized with formaldehyde-fixed ovaries of mosquitoes using a 3D-FISH method. The nucleolus was stained by immunostaining with an anti-fibrillarin antibody. The analysis was conducted with TANGO, a software tool for chromosome spatial organization analysis. We show that the volume and position of the X chromosome have tissue-specific characteristics. Unlike in nurse cell nuclei, the growth of follicular epithelium nuclei is not accompanied by proportional growth of the X chromosome. However, the shape of the X chromosome does not differ between the tissues. The dynamics of the location of the X chromosome attachment regions is tissue-specific and is correlated with the process of nucleus growth in the follicular epithelium and nurse cells.
Introduction
Interphase chromosomes maintain integrity and occupy specific volumes known as chromosome territories (CTs) inside the nucleus [1][2]. The non-random organization of CTs is important for the functioning of the genetic apparatus of the cell [3]. A significant aspect of nuclear architecture is the interaction between chromatin and other nuclear compartments. For example, the lamina plays a fundamental role in the process of CT formation at the nuclear periphery [4]. The nucleolus is a ribosomal RNA synthesis center, which is formed by nucleolus organizer regions (NORs) localized on the acrocentric chromosomes of humans [5] or on the X chromosome of fruit flies [6] and mosquitoes [7]. CTs that contain NORs usually localize near the nucleolus or associate with it [8]. Other chromosomal regions besides NORs, known as nucleolus-associated domains (NADs), may contact the nucleolus as well [1].
CTs have tissue-specific features that have been associated with functional aspects of the spatial organization of the interphase nucleus [9][10]. Some lamin-associated domains (LADs) differ between cell types, while others are common to different cell types in mammals [11]. Attachments of polytene chromosomes to the nuclear envelope (NE) occur via heterochromatic regions and show tissue-specific differences in Drosophila melanogaster [12][13] and in Anopheles mosquitoes from the Maculipennis group [14][15][16]. Studying the spatial organization of CTs and nucleoli is important for understanding the spatial organization of transcription inside the cell nucleus in malaria mosquitoes. Knowledge of nuclear architecture in vectors of infectious diseases will provide a rich basis for fundamental and applied research aimed at deciphering the mechanisms controlling development and reproduction [17]. We have previously identified significant differences in the interposition of the X and 3R chromosomes in several types of somatic and germ-line cells in Anopheles messeae [14,16]. On average, the X chromosome and 3R chromosome are located closer to each other in follicular epithelium (FE) cells than in ovarian nurse cells (NC). The nuclei of imaginal disc cells have an intermediate arrangement of chromosome interposition, similar to that of other somatic cells and nurse cells [16].
In this work, we studied several aspects of nuclear architecture using ovarian follicles of An. atroparvus malaria mosquitoes, which contain cells of both the germ-line (NC) and somatic (FE) systems. This species was chosen for this study because some aspects of the spatial organization of chromosomes in the Maculipennis complex, to which An. atroparvus belongs, have been studied previously [14,15]. Importantly, An. atroparvus is a vector of malaria in Europe and the only species in the Maculipennis group with a sequenced and physically mapped genome [17].
Our study focused on the spatial organization of the X chromosome because it is the shortest polytene chromosome in the set and is not as curved as the autosomes. These characteristics made the X chromosome more accessible for our study of the CT by simple geometrical quantitative measurements. Furthermore, the X chromosome of An. atroparvus contains NOR(s), allowing estimation of the dynamics of the size and location of the nucleolus in connection with the spatial reorganization of the X chromosome in different tissues. In addition, we tested the application of novel methods of analysis in studying the spatial organization of chromosomes in the ovaries of malaria mosquitoes.
Mosquito colony and chromosome preparation
The An. atroparvus Tomsk laboratory colony was used for the described experiments. Mosquitoes were raised in the insectary at 24˚C, with a 12-hour cycle of light and darkness. Ovaries of An. atroparvus half-gravid females were dissected and fixed in Carnoy's fixative solution (75% ethanol, 25% acetic acid). For making preparations of polytene chromosomes from ovarian nurse cells, a single ovary from one pair was taken. Ovaries were incubated in a drop of 50% propionic acid for 5 minutes, macerated, and squashed. The quality of each chromosomal preparation was checked using an AxioImager A1 microscope (Carl Zeiss, OPTEC Company, Siberian Office, Novosibirsk, Russia). High-quality preparations were frozen in liquid nitrogen, dehydrated in an ethanol series (50%, 70%, 90%, and 100%), and air dried. These chromosomal preparations were used for X chromosome microdissection and 2D-FISH.
Microdissection of the X chromosome
We conducted microdissection of the An. atroparvus X chromosome using the technique described in previous work [18]. Polytene chromosomes were collected from the surface of air-dried preparations with the help of a glass capillary (Narishige, Tokyo, Japan) and an Axiovert 200 inverted microscope (Carl Zeiss, OPTEC Company, Siberian Office, Novosibirsk, Russia). The collected material was incubated in proteinase K, followed by reprecipitation in 96% ethanol and washing with 70% ethanol. Precipitated DNA was amplified by low-temperature cycles of PCR in the presence of sequenase (Sequenase Version 2.0, Affymetrix USB, Dia-m, Novosibirsk, Russia). The resulting product was used as a template in 33 high-temperature cycles of PCR. The length of the DNA probe was checked by electrophoresis in a 2% agarose gel.
Labeling of DNA probes

DOP-PCR was used for fluorescent labeling of full chromosome DNA probes in the presence of the MW-6 degenerate primer according to the previously published protocol (Artemov et al., 2015). We used 5-Tetramethylrhodamine-dUTP (Biosan, Novosibirsk, Russia) as the labeled nucleotide. The resulting probe was reprecipitated in 96% ethanol, and the DNA pellet was dissolved in 10-15 μl of a hybridization mixture (50% formamide, 10% sodium dextran sulfate, 2×SSC, 1% Tween 20).
Fluorescence In Situ Hybridisation (FISH)
We checked the specificity of the X chromosome painting probe by FISH with air-dried chromosome preparations of An. atroparvus. Air-dried preparations of chromosomes were washed in 2×SSC at 37˚C for 5 min three times. Then, they were dehydrated in 70%, 80%, and 96% ethanol for 5 min each at room temperature. After that, chromosomes were treated with a 100 μg/μl pepsin solution at 37˚C, pH<7, for 10 min. Preparations were washed in 1×PBS twice for 5 min at room temperature. They were then dehydrated again in a series of ethanol solutions (50%, 70%, 96%) for 5 min each at room temperature and dried. The labeled DNA probe was dissolved in a hybridization mixture and placed on the chromosomal preparation, covered with a coverslip, and sealed with the universal adhesive "Moment-1" (Henkel, Moscow, Russia). Denaturation and hybridization steps were conducted in a programmable thermostat Thermobrite S500 (Beckman Coulter, Moscow, Russia), initially at 75˚C for 15 min (denaturation) and then at 37˚C for 18 hours (hybridization). After hybridization, preparations were washed in a 50% formamide solution in 2×SSC at 45˚C three times for 5 min, followed by incubation in 2×SSC at 45˚C for 5 min, 0.2×SSC at 45˚C twice for 5 min, and 0.1×SSC at 45˚C for 5 min. We applied DAPI (4',6-diamidino-2-phenylindole) with antifade (Prolong Gold Antifade, ThermoFisher Scientific, Dia-m Company, Novosibirsk, Russia) to the surface of the dry preparation and visualized chromosomes with an AxioVision Z1 microscope (Carl Zeiss, OPTEC Company, Siberian Office, Novosibirsk, Russia).
3D-FISH
Females of An. atroparvus 23 hours post blood feeding were used for the experiment. Ovaries for 3D-FISH were extracted from 3 individual mosquitoes immediately before conducting hybridization. Ovaries were dissected in 1×PBS and were processed in accordance with the 3D-FISH protocol described in our previous study [19].
3D-immunostaining of nucleolus
We determined the location of nucleoli by immunostaining interphase nuclei with a fluorescently labeled antibody against fibrillarin, the basic component of the nucleolus fibrillar domain [20]. The material was extracted in the EBR solution (0.13M NaCl, 0.04M KCl, 0.018M CaCl2, 9mM HEPES) at +4˚C and fixed in 4% paraformaldehyde for 20 min at room temperature. The fixative solution was washed away with 1×PBS at room temperature three times for 5 min, and tissues were treated with the PBSTr solution (0.3% Triton-X100 in 1×PBS) for 30 min at room temperature. Then the material was incubated in block buffer (BB) (4% powdered milk, 10% FBS) for 30 min and then shaken in a 1% solution of the primary antibody (Anti-fibrillarin [38F3], Abcam, Cambridge, UK) in BB at +4˚C overnight. After that, tissues were washed in PBSTr three times for 15 min at room temperature. Staining was conducted in a 0.25% solution of the secondary antibody Anti-Mouse IgG-FITC (Sigma-Aldrich, Dia-m, Novosibirsk, Russia) in BB at +4˚C overnight. The washing step was performed in the same manner as above. Finally, the tissue was stained with DAPI (Prolong Gold Antifade, ThermoFisher Scientific, Dia-m Company, Novosibirsk, Russia) at +4˚C for 8 hours.
Image analysis
The series of z-stack images obtained with an LSM 780 confocal microscope (Carl Zeiss, OPTEC Company, Siberian Office, Novosibirsk, Russia) and ZEN 2012 software (Carl Zeiss, OPTEC Company, Siberian Office, Novosibirsk, Russia) was processed and analyzed with three different software packages: (1) Fiji (ImageJ), (2) TANGO, a complex of tools and plugins for the analysis of nuclear spatial organization [21], and (3) a MongoDB database (MongoDB, Inc.). We used the "Mean" filter from the "Fast Filters 3D" plugin and the "Gaussian Blur 3D" filter from the "Misc Filters 3D" plugin for the pre-filter analysis step. We used the classical "thresholding" algorithm from the "Hysteresis Segmenter" plugin for the segmentation of the NE (the NE was modeled as an imaginary surface covering the external voxels of the DAPI-labeled chromatin), the X chromosome, and the nucleolus. For each nucleus used in the statistical analysis, spatial measurements were conducted with the "Simple Measure Geometrical" plugin. We employed the following parameters: Volume (in unit), Surface (in unit), Compacity, Feret, Elongation, and DC measures from the "Simple Geometrical Measurements" plugin, and the minimum and maximum radial position parameters from the "Eroded volume fraction" plugin. The description of each parameter is available in the official manual of TANGO [22]. We also used the following derived parameters in the statistical analysis:
• the relative volume of the X chromosome CT, V_rel = (V_X / V_Nuc) × 100%, where V_X is the volume of the X chromosome (μm³) and V_Nuc is the volume of the nucleus (μm³);
• α, the angle between the longitudinal axis of the X chromosome and the tangent to the NE drawn through the intersection point of the longitudinal axis of the X chromosome with the NE, estimated as α = arcsin[(EFV_max − EFV_min) · DC_nuc / (100 · LD_X)], where EFV_max and EFV_min are the maximum and minimum radial positions of the X chromosome (%), and DC_nuc and LD_X are the length of the mean nucleus radius and the longest axis of the X chromosome, respectively.
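To make these derived parameters concrete, the following is a minimal Python sketch. It is not part of the original TANGO pipeline; the function and variable names are ours, and the arcsin form of the α estimate is our reconstruction from the stated definitions (the original equation images were not recoverable).

```python
import math

def relative_ct_volume(v_x_um3, v_nuc_um3):
    """Relative volume of the X chromosome CT, as a percentage of nuclear volume."""
    return v_x_um3 / v_nuc_um3 * 100.0

def alpha_angle_deg(efv_max_pct, efv_min_pct, dc_nuc_um, ld_x_um):
    """Angle (degrees) between the longitudinal axis of the X chromosome and the
    tangent to the NE at the point where that axis intersects the NE.
    Assumption: the radial span of the chromosome, (EFVmax - EFVmin)/100 * DCnuc,
    divided by the chromosome's longest axis LDx, gives sin(alpha)."""
    sin_alpha = (efv_max_pct - efv_min_pct) / 100.0 * dc_nuc_um / ld_x_um
    sin_alpha = max(-1.0, min(1.0, sin_alpha))  # clamp against measurement noise
    return math.degrees(math.asin(sin_alpha))

# Hypothetical measurements, as might be exported from TANGO:
print(relative_ct_volume(v_x_um3=50.0, v_nuc_um3=1000.0))   # 5.0 (%)
print(alpha_angle_deg(efv_max_pct=96.0, efv_min_pct=60.0,
                      dc_nuc_um=5.0, ld_x_um=12.0))          # ~8.6 degrees
```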
The same parameters were applied for the measurement of nucleolus spatial organization. The complete data are provided in S1 Table and S2 Table. Statistical analysis was conducted with the R programming language (The R Foundation) and the RStudio IDE (RStudio, Inc.). We used the non-parametric Mann-Whitney U-test for sample comparisons. The results were considered significant when p<0.05. The standard error is shown as the confidence interval in illustrations.
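The analysis itself was run in R; purely as an illustration, an equivalent two-sample comparison could be sketched in Python with SciPy (the data below are hypothetical, not the study's measurements):

```python
from scipy.stats import mannwhitneyu

# Hypothetical relative X-CT volumes (%), one value per nucleus, per cell type
fe = [9.1, 10.4, 9.8, 11.0, 9.5, 10.2]
nc = [4.8, 5.3, 5.1, 4.9, 5.6, 5.0]

stat, p = mannwhitneyu(fe, nc, alternative="two-sided")
print(f"U = {stat}, p = {p:.4g}")  # significant when p < 0.05, as in the study
```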
Results
Visualization of the X chromosome and nucleolus in nuclei of An. atroparvus
CTs of X chromosomes in NC and FE cells were identified by in situ hybridization of the microdissected full X chromosome probe. We conducted microdissection of three individual X chromosomes from one ovary of An. atroparvus. Success of the procedure was confirmed by visual inspection of the preparation after microdissection (Fig 1).
FISH with chromosome preparations of NC confirmed the specificity of the obtained microdissection probes (Fig 2A). Pericentromeric heterochromatin regions of chromosomes 2 and 3 appeared non-specifically labeled by this probe. This non-specific hybridization can be explained by the presence of homologous repetitive DNA sequences in the heterochromatin of the sex chromosome and autosomes. We visualized the nucleolus with a fluorescently labeled antibody against fibrillarin (Fig 2B). 3D FISH with chromosome preparations of NC also confirmed the specificity of the X chromosome painting probe (Fig 2C). The volume and intensity of the signals from non-specifically labeled autosomal regions were much smaller than the same parameters for the X chromosome. 3D FISH with chromosome preparations of FE identified a single CT corresponding to the X chromosome (Fig 2D). Thus, the resulting microdissected painting probe allows adequate visualization of X chromosome CTs even in nonpolytenized interphase nuclei. We also successfully visualized the nucleolus with a fluorescently labeled antibody against fibrillarin in interphase nuclei of FE (Fig 2E).
Tissue-specificity of the X chromosome relative volume in nuclei of An. atroparvus
The volume of the X chromosome relative to the nuclear volume is significantly greater in FE (9.97%) than in NC (5.05%) (p = 1.733e-08, Mann-Whitney U-test) (Fig 3A). There is a weak significant negative correlation between the X chromosome volume and the nuclear volume in FE (r = −0.41, p<0.05, Pearson test) (Fig 3B), but there is a weak non-significant positive correlation between the X chromosome volume and the nuclear volume in NC (r = 0.35, p>0.05, Pearson test) (Fig 3C). Thus, unlike NC, the X CT does not contribute to the growth of the FE nucleus. In contrast, the volume of the nucleolus is in direct proportion to the nuclear volume in both tissues.
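As a hedged illustration of the correlation test behind these r and p values, a Pearson test on hypothetical paired volume measurements could look like this in Python (the numbers are invented for the example only):

```python
from scipy.stats import pearsonr

# Hypothetical paired measurements per FE nucleus: nuclear volume and X-CT volume (um^3)
nuc_vol = [820, 950, 1010, 1120, 1230, 1340, 1400]
x_vol   = [ 92,  96,  99,   95,   93,   90,   88]

r, p = pearsonr(nuc_vol, x_vol)
print(f"r = {r:.2f}, p = {p:.3f}")  # r < 0 would mirror the weak negative trend in FE
```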
The shape of the X CT, which is expressed in terms of a standard deviation (DC) of the mean radius, elongation, and roundness, varies identically in both cell types. We found no statistically significant differences between the cell types.
Peripheral location of the X chromosome and nucleolus
The mean of the maximum values of the X chromosome radial position is 95.65%, meaning that contact between the X chromosome and the NE is permanent in NC and FE (Fig 4A), in accordance with the previous study [16]. The nucleolus also makes frequent contacts with the NE in both tissues (Fig 4B). These contacts are independent of nucleolus shape and volume, which vary widely. The maximum values of the radial position of the X chromosome and nucleolus are not significantly different between NC and FE (p = 0.6567 for the X chromosome and p = 0.3229 for the nucleolus, Mann-Whitney U-test). However, the minimum values of the radial position of the X chromosome and nucleolus are significantly smaller in NC than in FE (p = 3.238e-09 for the X chromosome, and p = 0.02903 for the nucleolus, Mann-Whitney U-test), meaning that both the X chromosome and the nucleolus are located closer to the center of the nucleus in NC compared with FE. This observation could provide indirect support for the greater involvement of the X chromosome and nucleolus in transcription in NC than in FE.
Tissue-specific dynamics of the X chromosome location during nuclear growth
Quantitative characteristics of the spatial relationships between the X chromosome and the NE were studied with the help of the α angle (see Materials and Methods). The mean value of α in both tissues corresponds to the position of the X chromosome in which its longitudinal axis is parallel to the NE. However, some nuclei were characterized by α = 67˚. In this case, the longitudinal axis was directed toward the intranuclear space. We found a relationship between this parameter and the nuclear volume. An increase in the mean nuclear radius correlated with a decrease in the α angle in NC (Fig 5A and 5B). Thus, during nuclear growth in NC, the X CT moves toward the NE. There was an inverse trend in FE (Fig 5C and 5D). One of the chromosome ends moved from the NE to the intranuclear space during nuclear growth in FE.
Discussion
Tissue-specificity of the X chromosome spatial organization
Previous work showed that FE and NC differ in X and 3R chromosome interposition in An. messeae [16], which could be explained by chromocenter formation in somatic cells [14]. Here, we were not able to detect significant differences in the shape of the X chromosome or nucleolus between nuclei of NC and FE. However, we identified tissue-specific features of the relative size of the X chromosome, suggesting different chromatin organization in the X chromosome and/or different expression levels of X chromosome genes in NC compared with FE. Previously described tissue-specific differences in the attachment of the X chromosome to the NE [14,16] have been further explored in this work using the new computational tool TANGO [21]. The relative size parameter depends on the correctness of the segmentation algorithm, whose accuracy deteriorates with varying signal-to-noise ratios in photomicrographic images. To overcome this problem, we visually monitored the quality of the segmentation for each nucleus. In some cases, we used the "hessian transform" pre-filter to improve segmentation quality in accordance with the recommendations of the TANGO developers. The tissue-specific differences in the dynamics of the X chromosome could be associated with the difference in the level of polyteny. Indeed, chromosomes in NC are polytene, whereas chromosomes in FE are not. Nevertheless, tissue-specific features in 3D genome organization could result in gene expression differences [1,10,13].
Dynamics of the nuclear spatial organization
Based on the above-described data (the displacement of the longitudinal axis of the X chromosome during nuclear growth, and the peripheral location of the X chromosome), we can assume that "activation" of NE-attachment regions of the X chromosome in NC and FE occurs gradually depending on the developmental stage. Various phases of NC and FE development can be characterized by different numbers of X chromosome attachment regions. In the late developmental stages, the pericentric region of the X chromosome in NC has a strong NE-attachment region [14,16]. There are at least 3 major lamin-binding regions along the chromosome, which potentially form NE-attachments (data not shown). The pericentric region is permanently attached to the NE, but the other attachments are "turned on" during development, moving the telomeric end from the nuclear interior to the periphery. In FE, by contrast, the same attachments are "turned off" during development, moving the telomeric end from the periphery to the center of the nucleus. These movements could be connected with transcriptional activation/inactivation of distinct chromosome segments. Some chromosome movements can be forced by changes in the size of the nucleolus. However, the gradual increase of nucleolus size in both NC and FE cannot be the reason for the different X chromosome movements in these cell types.
The dynamism of polytene chromosome attachments to the NE has been shown by modeling of the 3D organization of the interphase nucleus in Drosophila salivary glands. Fourteen of 15 known high-frequency contacts of chromosomes with the NE have been described as intercalary heterochromatin, and one is a region of late replication [23]. A computational analysis has found 33 additional sub-high-frequency chromosome attachments to the NE [24]. Twenty new attachment regions corresponded to intercalary heterochromatin, and 5 were regions of late replication. However, 3 of these attachments corresponded to euchromatin [24]. These results suggest that affinity for the NE can change gradually, with the highest affinity for the NE almost exclusively possessed by intercalary heterochromatin, and the next highest affinity for the NE mostly a property of intercalary heterochromatin as well. What is the effect of chromosome-NE attachments on nuclear architecture? Computer modeling demonstrates that a nucleus with the most numerous attachments of chromosomes to the NE forms more precise chromosome territories with fewer intersections between chromosomes [25]. Intra-arm contacts happen more often in a nucleus with more NE-attachment regions in comparison with a nucleus that does not contain specific attachments. At the same time, contacts between different arms happen more rarely in nuclei with more numerous NE-attachments [25]. If chromosome-NE attachments are gradually "turned on" with an increase in the volume of the NC nucleus, we can expect a decrease in the frequency of contacts of the X chromosome with other chromosomes.
Conclusion
Principles of 3D genome organization must be thoroughly studied in vector species because of possible dynamic changes in nuclear architecture upon infection with a pathogen [17]. Previously, we demonstrated tissue-specific features of spatial chromosome organization in An. messeae based on data obtained by manual geometrical measurements of only two points for every nucleus [16]. In this work we have shown that the methods of chromosome spatial organization analysis using TANGO are applicable to studying the shape and size of polytene chromosomes and chromosome dynamics during nuclear growth. The movement of the longitudinal axis of the X chromosome with the change of nucleus size is likely associated with the change in the number of NE-attachment regions. This idea agrees well with the data obtained by modeling of Drosophila salivary gland nuclei [23][24]. The tissue-specific differences in the dynamics of the X chromosome and NE-attachment regions could result in gene expression differences [1,10,13]. Spatial characteristics of the X chromosome and nucleolus in An. atroparvus will serve as a baseline for similar studies in other species from the Maculipennis complex to which An. atroparvus belongs. Future studies will also address species-specific aspects of nuclear architecture. A study of the evolution of nuclear architecture will assess the possibility of using features of 3D genome organization as markers for understanding phylogenetic relationships within the species complex.
Supporting information S1 Table. Source data with observations and corresponding parameters of the nucleus and X chromosome exported from TANGO. (CSV) S2 Table. Source data with observations and corresponding parameters of the nucleus and nucleolus exported from TANGO. (CSV) | 2018-04-03T05:18:14.746Z | 2017-02-03T00:00:00.000 | {
"year": 2017,
"sha1": "65bb6959c5e95a286bb64ca92e6236355cce9e95",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0171290&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65bb6959c5e95a286bb64ca92e6236355cce9e95",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3311414 | pes2o/s2orc | v3-fos-license | Ascorbic acid tethered polymeric nanoparticles enable efficient brain delivery of galantamine: An in vitro-in vivo study
The aim of this work was to enhance the transportation of galantamine to the brain via ascorbic acid grafted PLGA-b-PEG nanoparticles (NPs) using SVCT2 transporters of the choroid plexus. PLGA-b-PEG copolymer was synthesized and characterized by 1H NMR, gel permeation chromatography, and differential scanning calorimetry. PLGA-b-PEG-NH2 and PLGA-b-mPEG NPs were prepared by the nanoprecipitation method. PLGA-b-PEG NPs with desirable size, polydispersity, and drug loading were used for the conjugation with ascorbic acid (PLGA-b-PEG-Asc) to facilitate SVCT2 mediated transportation of the same into the brain. The surface functionalization of NPs with ascorbic acid significantly increased cellular uptake of NPs in SVCT2 expressing NIH/3T3 cells as compared to plain PLGA and PLGA-b-mPEG NPs. In vivo pharmacodynamic efficacy was evaluated using the Morris Water Maze Test, Radial Arm Maze Test, and AChE activity in scopolamine induced amnesic rats. In vivo pharmacodynamic studies demonstrated significantly higher therapeutic and sustained action by drug loaded PLGA-b-PEG-Asc NPs than free drug and drug loaded plain PLGA as well as PLGA-b-mPEG NPs. Additionally, PLGA-b-PEG-Asc NPs resulted in significantly higher biodistribution of the drug to the brain than other formulations. Hence, the results suggested that targeting of bioactives to the brain by ascorbic acid grafted PLGA-b-PEG NPs is a promising approach.
PLGA-b-PEG-NH2 was terminally functionalized by the free NH2 group. Similarly, PLGA-b-mPEG copolymer was also synthesized using mPEG-NH2 (methoxy-PEG-NH2; Mol. wt. 2000 Da). 1H NMR spectroscopy of the PLGA-b-PEG-NH2 was carried out at 300 MHz in CDCl3. Gel Permeation Chromatography (GPC) was carried out on a system with a refractive index detector, and the analysis was performed at room temperature (RT, 27±2°C) using two serially aligned TSKGEL columns. Dimethylformamide (DMF) was used as an isocratic mobile phase at a flow rate of 1 ml/min.
Preparation and optimization of PLGA-b-PEG NPs
Nanoprecipitation method was utilized for the preparation of amino group functionalized PLGA-b-PEG-NH2 NPs 3,4 . Before preparation of drug loaded NPs, optimization of empty PLGA-b-PEG-NH2 NPs was carried out by varying different parameters that control the size of NPs, e.g., the solvent (acetone or acetonitrile), the ratio of solvent to water (keeping polymer concentration constant), and the polymer concentration. Acetone and/or acetonitrile were selected as solvents due to their miscibility with water and low boiling points so that they could be removed easily during formulation development to avoid toxicity. The solvent to water ratio was varied from 0.1 to 1, i.e., from 1:10 to 1:1 (keeping the polymer concentration constant at 10 mg/mL). NPs optimization was also carried out by varying the polymer concentration (from 5-20 mg/mL) in the organic phase.
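For illustration, the optimization space described above can be enumerated with a short Python sketch. Note that the study varied one factor at a time rather than running the full factorial grid shown here, and all names are ours:

```python
from itertools import product

solvents = ["acetone", "acetonitrile"]
solvent_to_water = [round(0.1 * i, 1) for i in range(1, 11)]  # 0.1 (1:10) ... 1.0 (1:1)
polymer_mg_per_ml = [5, 10, 15, 20]

# Each tuple is one candidate condition to prepare and characterize (size, Pdi, zeta)
conditions = list(product(solvents, solvent_to_water, polymer_mg_per_ml))
print(len(conditions), "candidate conditions")  # 2 * 10 * 4 = 80 in a full factorial
```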
Briefly, the polymer was dissolved in organic solvent (either acetone or acetonitrile).
The NPs were prepared by the addition of polymer solution drop wise to de-ionized water (a non-solvent) followed by sonication for 60 sec. The NPs suspension was stirred for 6 hrs at RT to facilitate complete evaporation of the organic solvent (acetone). In the case of acetonitrile, the NPs suspension was stirred for 2 hrs, and the organic solvent was evaporated in a rotary evaporator at reduced pressure. The NPs were purified by centrifugation (15 min at 18,000 rpm). The PLGA-b-PEG-NH2 NPs were washed with water thrice and lyophilized.
The effects of the different parameters on overall particle size, Pdi, and zeta potential of the NPs were characterized using dynamic light scattering (ZetaSizer Nano ZS90, Malvern Instrument, USA).
Preparation and optimization of GLM loaded PLGA-b-PEG NPs
The pre-optimized parameters of the acetone and acetonitrile formulations from the above studies were selected to synthesize GLM loaded PLGA-b-PEG-NH2 NPs. The NPs formulations were further optimized on the basis of the drug to polymer ratio for optimum drug loading. Briefly, GLM was dissolved in organic solvent, either acetone or acetonitrile.
The polymer was likewise dissolved in the same solvent and mixed with the drug. The effect of different drug to polymer ratios (from 1:1 to 1:10) was evaluated (keeping polymer concentration constant and varying drug concentration). The NPs were prepared by the addition of the drug-polymer solution drop wise to de-ionized water (a non-solvent) followed by sonication for 60 sec. The resulting NPs suspension was processed as discussed above. The drug loaded NPs were characterized for size, Pdi, and zeta potential by the same procedure as described above. The conjugation of ascorbic acid to the NPs was then performed as follows (see Supplementary Fig. S4 online).
Firstly, a reactive imidazole carbamate intermediate was synthesized by reacting the hydroxyl group of Asc with carbonyldiimidazole (CDI) using the reported method 5,6 with slight modification. For this reaction, Asc and CDI were dissolved in DMSO, and the solution was stirred at room temperature for 2 hrs under an N2 atmosphere to generate the reactive imidazole carbamate.
The product was precipitated with chilled diethyl ether, filtered, and vacuum dried. The addition of a highly nucleophilic group (e.g., NH2) to the reactive imidazole carbamate displaces imidazole to form a stable carbamate. The PLGA-b-PEG NPs possess highly nucleophilic terminal NH2 groups of PEG. The NPs were suspended in 10M borate buffer (pH 11), the reactive imidazole carbamate of Asc was added, and the mixture was stirred at RT for 2 hrs to form a carbamate bond between the hydroxyl group of Asc and the amino group of PEG. The NPs were collected via centrifugation and washed with de-ionized water thrice to remove un-reacted Asc, followed by lyophilization. PLGA-b-PEG-Asc NPs were characterized for particle size, Pdi, and zeta potential using the DLS technique. Surface morphology was evaluated using SEM and TEM. Asc conjugation on PLGA-b-PEG NPs was further confirmed by FTIR spectroscopy and thermogravimetric analysis (TGA).
In vitro drug release studies
The intracellular pH of the brain is close to 7.2, and the pH of the blood is approximately 7.4.
Cell Culture
The objective of the present work was to develop a formulation which can increase drug delivery to the brain by crossing the choroid plexus. The idea was to use the SVCT2 transporter expressed on the choroid plexus to transport the drug to brain cells. Therefore, the cell line was selected on the basis of expression of the plasma membrane-associated SVCT2 transporter for Asc. The NIH/3T3 cell line is reported to express the plasma membrane-associated SVCT2 transporter for Asc 9,10 . Therefore, the NIH/3T3 cell line was selected for cellular uptake studies to assess the targeting potential of the developed nanoparticulate formulations.
Cell uptake using fluorescence microscopy
This assay was carried out to demonstrate and compare the uptake of non-targeted and targeted NPs formulations by SVCT2 expressing NIH/3T3 cells. The cells were supplemented with 5 mL of culture medium (DMEM; Invitrogen) and maintained in a CO2 incubator at 37±0.5°C and 5% CO2. Rhodamine B (Sigma, USA) was used as a fluorescent dye to evaluate cellular uptake. Rhodamine loaded plain PLGA, PLGA-b-mPEG, and PLGA-b-PEG-Asc NPs were prepared according to the optimized method for drug loaded NPs.
However, rhodamine was used instead of GLM. The NIH/3T3 cells were seeded at a density of 2 × 10^4 cells/plate in fibronectin coated tissue culture petri dishes containing 1 mL DMEM, incubated, and checked under the microscope for 40-50% confluency. The petri dishes were divided into four groups, each group containing three petri dishes. The cells were incubated for 48 hrs in 5% CO2 at 37°C with the medium changed every 12 hrs. The medium was then replaced with 2 mL antibiotic-free and serum-free medium. Subsequently, free rhodamine, rhodamine-PLGA NPs, rhodamine-PLGA-b-mPEG NPs and rhodamine-PLGA-b-PEG-Asc NPs were placed separately in a group of plates. The NPs were incubated with the cells for 0.5 and 1 hr at 37ºC. After incubation, the cells were washed with Hank's Balanced Salt Solution thrice to remove extracellular NPs 11 . The fluorescence attributed to uptake of the fluorescently labeled formulations was then qualitatively observed under a fluorescence microscope (Olympus, Osaka, Japan).
Drug uptake using HPLC
Drug uptake of free GLM and drug loaded formulations (GLM loaded PLGA, PLGA-b-mPEG, and PLGA-b-PEG-Asc NPs) was estimated in NIH/3T3 cells via HPLC. The cells were seeded into fibronectin coated tissue culture petri dishes at a density of 5 × 10^3 cells/plate containing DMEM medium and incubated at 37°C and 5% CO2 for 48 hr to reach more than 75% confluency. Free drug and NPs formulations containing 50 µg/mL of GLM were suspended in the medium and added (100 µL) separately into the wells containing NIH/3T3 cells. After the different incubation periods, the culture medium was removed. Cells were detached with a cell scraper after adding 2 mL media and collected from the medium as a pellet by centrifugation at 4000 rpm for 15 min. The cells were washed as mentioned above. To the pellets, 500 µL of 0.5% Triton X 100 was added to rupture the cells 12 .
Drug internalized into the cells was extracted by the addition of dichloromethane followed by vortexing for 15 min, and the mixture was incubated at 25°C for 6 hr. The supernatant was filtered, and the drug concentration was determined using HPLC 13,14 .
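The paper does not give its HPLC calibration details; the following is a generic, hypothetical Python sketch of how a drug concentration is typically back-calculated from peak areas via a linear calibration curve (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical calibration standards: GLM concentration (ug/mL) vs HPLC peak area
conc = np.array([1, 5, 10, 25, 50], dtype=float)
area = np.array([1200, 6100, 12150, 30300, 60800], dtype=float)

slope, intercept = np.polyfit(conc, area, 1)  # linear calibration curve

def quantify(peak_area):
    """Back-calculate GLM concentration (ug/mL) from a sample's peak area."""
    return (peak_area - intercept) / slope

print(round(quantify(24500.0), 2))  # ug/mL in the cell-lysate extract
```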
Biodistribution studies
The in vivo biodistribution of GLM from the various formulations (free drug and drug loaded NPs formulations) was studied via an HPLC method. Biodistribution studies were performed for quantitative measurement of GLM in various organs. Animals were divided into five groups, each group containing 9 rats. Group 1 was treated as control. The details of the samples given via tail vein to each group were as follows: Group 1 = PBS (pH 7.4); Group 2 = GLM solution; Group 3 = GLM-PLGA NPs; Group 4 = GLM-PLGA-b-mPEG; Group 5 = GLM-PLGA-b-PEG-Asc (1.5 mg/kg equivalent of GLM). Three animals of each group were sacrificed at 1, 6, and 12 hrs post administration. The tissues from brain, lung, liver, kidney, and spleen were carefully removed immediately after sacrificing and weighed.
Subsequently, 5 mL of trichloroacetic acid (10% v/v in water) was added to one gram each of various tissues and vortexed for 1 min. A small quantity of ethanol was added, homogenized in a tissue homogenizer and centrifuged at 5000 rpm for 20 min. The supernatant was collected and analyzed for GLM content by HPLC. | 2018-04-03T00:20:39.319Z | 2017-09-11T00:00:00.000 | {
"year": 2017,
"sha1": "8d03b11c2f94ea4456d4acba3cc1c49034a617f7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-11611-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef5ad8e9850ee3a7dbd77464bb23548153225985",
"s2fieldsofstudy": [
"Biology",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
303457 | pes2o/s2orc | v3-fos-license | Macroalgal Endophytes from the Atlantic Coast of Canada: A Potential Source of Antibiotic Natural Products?
As the need for new and more effective antibiotics increases, untapped sources of biodiversity are being explored in an effort to provide lead structures for drug discovery. Endophytic fungi from marine macroalgae have been identified as a potential source of biologically active natural products, although data to support this is limited. To assess the antibiotic potential of temperate macroalgal endophytes we isolated endophytic fungi from algae collected in the Bay of Fundy, Canada and screened fungal extracts for the presence of antimicrobial compounds. A total of 79 endophytes were isolated from 7 species of red, 4 species of brown, and 3 species of green algae. Twenty of the endophytes were identified to the genus or species level, with the remaining isolates designated codes according to their morphology. Bioactivity screening assays performed on extracts of the fermentation broths and mycelia of the isolates revealed that 43 endophytes exhibited antibacterial activity, with 32 displaying antifungal activity. Endophytic fungi from Bay of Fundy macroalgae therefore represent a significant source of antibiotic natural products and warrant further detailed investigation.
Introduction
The threat to human health as a result of the growing emergence of microbial resistance to antibiotic agents is both real and significant; current forecasts predict that broad-scale antimicrobial ineffectiveness is imminent and we may soon once again face the problems that challenged medicine in the "pre-antibiotic" era [1][2][3]. There is, therefore, an urgent need to accelerate the discovery of antibiotic molecules to facilitate the development of new therapeutic agents with novel modes-of-action to combat infectious disease [4,5]. Endophytes of macroalgae have recently gained attention as an untapped source of biodiversity with potential to yield novel bioactive metabolites [6][7][8] and high proportions of isolates from both tropical [9] and temperate [10,11] environments have exhibited significant antibiotic activities. However, data relating to the bioactivity of endophyte assemblages obtained from a given algal species is limited; evaluating the true potential of macroalgal endophytes as a source of antibiotic compounds is therefore problematic. Recent reports also suggest that there may be significant differences in the endophyte assemblages found within tropical and temperate marine macroalgal hosts [9,11], although further work is required to confirm these preliminary observations. The objectives of this study were to perform a preliminary investigation of the endophytes from macroalgae from the Atlantic coast of Canada and evaluate their potential for the production of antimicrobial compounds.
Surface Sterilization of Algae and Culture Techniques
The surfaces of the algal samples were sterilized by immersion in various sterilant solutions [11]. Prior to the isolation of endophytic fungi from marine algae, an optimal surface sterilization method was developed for each algal species (Table 1). Portions (5 cm²) of algal tissue were individually surface sterilized using the appropriate optimized technique, blotted dry on autoclaved paper towel and rubbed across the surface of 2 plates of 2% malt extract agar (MEA, Becton Dickinson, Sparks, MD, USA) prepared with artificial seawater (MEA-SW, 24.4 g·L⁻¹; Instant Ocean® Sea Salt, Cincinnati, OH, USA) to verify surface sterilization had been effective. Sterilized algal species were cut using a sterile cork borer and placed on Petri plates of 2% MEA and 2% MEA with seawater. Petri plates were sealed with Parafilm™ (Pechiney Plastic Packaging Company, Chicago, IL, USA) prior to incubation.
Isolation of Endophytes
Petri plates containing surface sterilized macroalgal pieces were incubated for 14 days at room temperature (approximately 25 °C) under ambient light conditions and monitored daily for the presence of hyphae growing from the cut edges of the algal segments. The isolation frequency (IF) of emerging hyphae was determined for each species of algae collected [12]:

IF (%) = (Number of algal pieces showing fungal growth / Total number of algal pieces) × 100  (1)

Endophytes growing from the cut edges of the segments were subcultured onto fresh media (2% MEA) to obtain pure isolates. Pure isolates were also grown on Czapek, potato dextrose, cornmeal and marine agars (Becton Dickinson, Sparks, MD, USA) to induce sporulation and differentiate between colony morphologies. Individual isolates from each algal species were then sorted into groups of homogeneous morphotypes, and a representative colony of each distinct isolate was used for the fungal identification, fermentation, and extraction.
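Equation (1) can be expressed as a one-line helper; the counts below are hypothetical and only illustrate the calculation:

```python
def isolation_frequency(pieces_with_growth, total_pieces):
    """Isolation frequency (IF, %) as defined in Eq. (1)."""
    return pieces_with_growth / total_pieces * 100.0

# Hypothetical example: 26 of 100 algal pieces yielded emerging hyphae
print(isolation_frequency(26, 100))  # 26.0 (%)
```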
Identification of Fungi
Fungal isolates were identified taxonomically through examination of colony and spore morphology with taxonomic classifications being confirmed by comparison of the internal transcribed spacer and 5.8S rRNA gene (ITS) DNA regions [13] with corresponding sequences available in the GenBank database (National Center for Biotechnology Information, U.S. National Library of Medicine, Bethesda, MD, USA).
The genomic DNA of all distinct fungal isolates was extracted using a DNeasy ® plant mini kit (Qiagen, Toronto, Ontario, Canada) and amplified by PCR as previously described [11]. Samples of amplified ITS DNA were submitted for sequencing (Génome-Québec; Montreal, Québec, Canada) along with the ITS 1 and ITS 4 primers. The sequences obtained were checked for ambiguity and submitted to the GenBank database and compared with existing GenBank sequence data using BLAST. In cases where electrophoresis indicated that the DNA extraction/ PCR procedure had been unsuccessful, the procedure was repeated on a fresh sample of the fungal isolate.
Isolates identified taxonomically to species level had ≥99% sequence similarities to entries for conspecifics in GenBank, isolates identified to the genus level had ≥96% sequence similarities with congeneric species, and isolates identified to class level had sequence similarities ≥82% to entries from the corresponding taxonomic rank. Isolates that could not be unequivocally identified to species level based on morphological observations were only classified to the corresponding genus, even when sequence similarities ≥98% were obtained with sequence data for congeneric species in GenBank. If DNA from the ITS region was not isolated after three extraction/amplification attempts, the corresponding isolate was identified on morphological observations alone. Sterile isolates that did not provide sequence data upon repeated attempts were given codes according to their morphology in plate culture [9,14,15]. All distinct fungal isolates have been archived in the UNB Saint John fungal repository (Saint John, New Brunswick, Canada).
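These identity thresholds can be summarized as a small decision rule. The following Python sketch is our own encoding of the criteria above, with a simplified flag standing in for the morphological confirmation step:

```python
def assign_rank(pct_identity, morphology_confirms_species=True):
    """Taxonomic rank assignment following the thresholds used in this study:
    >=99% identity to a conspecific entry -> species; >=96% to a congeneric
    entry -> genus (also applied when morphology is inconclusive, even at
    >=98% identity); >=82% -> class; otherwise coded by plate morphology."""
    if pct_identity >= 99 and morphology_confirms_species:
        return "species"
    if pct_identity >= 96:
        return "genus"
    if pct_identity >= 82:
        return "class"
    return "unidentified (coded by morphology)"

print(assign_rank(99.4))                                      # species
print(assign_rank(98.5, morphology_confirms_species=False))   # genus
```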
Preparation of Extracts
For each fungal isolate, a portion (5 mm 2 ) of the fungal colony on solid medium was transferred to a 250 mL Erlenmeyer flask containing 2% Bacto™ malt extract broth (100 mL). Flasks were shaken (150 rpm) at room temperature under ambient light conditions for two weeks.
After incubation, the fungal mycelia from each culture were separated from the spent culture broth using vacuum filtration. Mycelia were extracted once with methanol (50 mL; Fisher Scientific, Ottawa, Ontario, Canada) in the dark for 24 h at 4 °C, solid residue and cell debris were removed by vacuum filtration, and the resulting solution was concentrated in vacuo to give a crude fungal extract. The spent culture broths were extracted 3 times with 50 mL ethyl acetate (Fisher Scientific) and the combined organic extracts were concentrated in vacuo to give a crude extract of the growth media. All crude extracts were stored at −20 °C until required.
Antibacterial and Antifungal Activity Assay
Antibacterial and antifungal activity against Pseudomonas aeruginosa (ATCC 10145), Staphylococcus aureus (ATCC 29213) and Candida albicans (ATCC 14053) was evaluated using a microbroth dilution antimicrobial susceptibility assay as previously described [11], with extracts being tested at a concentration of 200 μg·mL⁻¹.
Statistical Analyses
Antimicrobial activities of extracts were compared to the negative control using the Kruskal-Wallis non-parametric test as the data was not normally distributed (Shapiro-Wilk, p < 0.05) and the variances were not equal (Levene's test, p < 0.05). Post hoc analysis (stepwise-stepdown) was performed to determine which extracts differed from the negative control. Crude extracts were defined as biologically active if their effect on the growth of the test organism was significantly different (p < 0.05) to the negative control (0% inhibition). All statistical tests were performed using SPSS (PASW Statistics 18, IBM Corporation, Armonk, NY, USA).
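As an illustrative sketch only (the study used SPSS), the same assumption checks and omnibus test could be expressed with SciPy on hypothetical inhibition data:

```python
from scipy.stats import shapiro, levene, kruskal

# Hypothetical % growth inhibition replicates: negative control vs two crude extracts
control   = [0.0, 1.2, -0.8, 0.5, 0.3]
extract_a = [55.0, 60.2, 58.7, 52.4, 61.1]
extract_b = [12.1, 9.8, 15.3, 11.0, 13.6]

# Assumption checks that motivated the non-parametric test in the study
print("Shapiro-Wilk p:", [round(shapiro(g).pvalue, 3) for g in (control, extract_a, extract_b)])
print("Levene p:", round(levene(control, extract_a, extract_b).pvalue, 3))

h, p = kruskal(control, extract_a, extract_b)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4g}")  # active if p < 0.05 vs control post hoc
```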
Isolation of Endophytic Fungi from Marine Algae
The Bay of Fundy, with its extreme tidal range, presents a unique temperate habitat and supports a high diversity of marine macroalgal species [16,17], many of which have not been investigated for the presence of endophytes. Our results indicate that these algae support a sizeable and diverse assemblage of endophytes: fungi were isolated from 632 out of a total 3081 algal pieces resulting in an overall isolation frequency of 26% ( Table 2). The isolation frequencies of endophytic fungi varied by host, with the highest isolation frequencies obtained from the red algae P. palmata and P. umbilicalis, at rates of 87% and 72% respectively ( Table 2). The lowest isolation frequencies were from the red alga C. crispus at a rate of 0.08% (Table 2). Seventy-nine distinct endophyte species were isolated from the ten algal hosts ( Table 3). The number of distinct fungal isolates obtained varied by host with a maximum of 13 endophytic fungal species from D. ramentacea and S. latissima to a minimum of one from A. nodosum and F. vesiculosus (Table 3). Penicillium spp. were isolated from seven algae: C. crispus, D. ramentacea, M. stellatus, P. palmata, S. latissima, S. arcta and U. lactuca with a total of 11 isolates representing the six species isolated (Table 3). Six distinct Aspergillus spp. were isolated from five algae species: C. crispus, P. umbilicalis, A. nodosum, S. latissima and U. intestinalis (Table 3). Botrytis sp. was isolated from two algal hosts (D. ramentacea and P. palmata; Table 3) as was Aureobasidium pullulans (D. ramentacea and P. lanosa; Table 3). Cladosporium sp., Trametes versicolor, Coniothyrium sp., Coelomycete I, Hypoxylon sp., Helicomyces sp. and Botryotinia fuckeliana were only isolated from one host (Table 3). The majority of isolates (74%), however, could neither be identified morphologically nor through the use of molecular genetic techniques. These isolates were designated codes according to their plate morphology (Table 3). This proportion of sterile mycelia (74%) is high in comparison to other marine macroalgal endophytes from the North Atlantic (45%) [11] and may have contributed to a correspondingly lower success rate observed for the molecular identification of isolates in this study (26% compared with 55% for endophytes isolated from macroalgae of the Shetland Islands, UK) [11]. The magnitude of these discrepancies is surprising and may be due to the particular assemblage of fungal species isolated from the Bay of Fundy algae. Further work will be required to optimize procedures used for molecular identification of sterile algal endophytes in an effort to increase our ability to identify fungi isolated from this source.
A point of particular interest is the fact that none of the fungi isolated from the brown alga A. nodosum were identified as Mycosphaerella ascophylli, despite it being a well-documented endophyte of that host [18][19][20][21][22][24][27][28][30]. Whilst it is possible that M. ascophylli may not have been isolated due to the particular sterilization and culture conditions used or the low growth rate of the fungus [19], it is more probable that M. ascophylli is represented in the mycelia sterilia obtained from A. nodosum as it is known to display a sterile morphotype of fine white septate hyphae [19,31].
Antimicrobial Screening Results of Algal-Derived Fungal Endophytes
Crude extracts of the endophyte isolates were screened against three pathogenic microorganisms, the Gram positive bacterium Staphylococcus aureus, the Gram negative bacterium Pseudomonas aeruginosa and the fungus Candida albicans. Extracts were defined as bioactive if they inhibited growth of the test organism in comparison to the negative control as indicated by heterogeneous subsets (p < 0.05) identified through post hoc testing of Kruskal-Wallis analyses. Seventy-eight crude extracts, tested at 200.0 μg·mL⁻¹, inhibited at least one of the test microorganisms. Forty-four crude extracts significantly inhibited the growth of S. aureus, whereas 18 and 36 extracts inhibited the growth of P. aeruginosa and C. albicans, respectively. These extracts were derived from 57 individual fungal isolates (Table 4) and comprised 39 mycelia-derived and 39 medium-derived crude extracts. Of the 156 crude extracts tested, 59 showed activity against only 1 test microorganism, 18 inhibited 2 microorganisms, and 1 showed activity against all 3 test microorganisms (Table 4). In this study, 73% of the isolates (57/78) were found to have antimicrobial activity against P. aeruginosa, S. aureus, and C. albicans (43/78, 55%, antibacterial; 32/78, 41%, antifungal). The antimicrobial screening results of this study are comparable to those obtained from marine algae of the Shetland Islands, UK, where 61% of the isolates (39/64) had activity against at least one of the same three test organisms (36/64 isolates antibacterial and 24/64 isolates antifungal) [11]. However, these antimicrobial screening results are in contrast to those obtained from endophytic fungi of macroalgae of the Southern Indian coast (82%, 31/38 isolates) [9] and the coasts of the North and Baltic Seas (83%, 249/300) [10]. The crude extracts showing strong antimicrobial bioactivity in this work should be investigated to identify the biologically active constituents responsible for the observed bioactivity.
Conclusions
Our research has demonstrated that marine macroalgae from the Bay of Fundy, Canada, have the potential to be an excellent source of endophytic fungi. Seventy-eight distinct isolates were obtained from 14 algal hosts, with most of the endophytes that could be taxonomically identified belonging to the genera Penicillium and Aspergillus. The results from the antimicrobial screening on the mycelium and broth extracts of the endophytic fungi suggest that they are a promising source of antimicrobial extracts, with 18 extracts exhibiting >50% inhibition in the screening assays. These crude extracts should be subjected to bioassay-guided fractionation in an attempt to identify the bioactive constituents of the extracts. Further work should also be focused on improving the rate of identification, either through the use of molecular techniques, as only 26% of the obtained isolates were identified through this method, or through the induction of conidia or other distinguishing morphological characteristics. The results from this study have led to further work into the endophytic fungi of Atlantic coast marine macroalgae, with a current investigation now focused on the isolation, identification and antimicrobial screening of endophytic fungi from the diverse range of macroalgae present in the Bay of Fundy. | 2015-09-18T23:22:04.000Z | 2013-12-01T00:00:00.000 | {
"year": 2013,
"sha1": "a0635b4f2fc151a21adcf7c1dfc25fce11b8a8c4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/1/1/175/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0635b4f2fc151a21adcf7c1dfc25fce11b8a8c4",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
8969188 | pes2o/s2orc | v3-fos-license | The relationship between spiritual intelligence and personality traits among Jordanian university students
This study was aimed at identifying the level of spiritual intelligence and its correlation with personality traits among a group of Jordanian undergraduate students. A purposive sample of 716 male and female students was chosen from different faculties at the Hashemite University. Two questionnaires on spiritual intelligence and personality traits were distributed to members of the sample during the academic year 2013–2014. Results illustrated a medium level of spiritual intelligence in students, and indicated a positive and statistically significant relationship between spiritual intelligence dimensions (critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion) and personality traits (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), but no significant correlation between personal meaning production and transcendental awareness dimensions and neuroticism personality traits. Finally, regression analysis results indicate that critical existential thinking is the first predictor dimension of spiritual intelligence in terms of neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. In the light of the results of this study, many recommendations were written by the researchers.
…and ultimately the field of the humanities. [2][3][4][5] While accepting the similarity and integration between religion and spirituality, there is also agreement as to their dissimilarity and distinction, religion focusing on the sacred whereas spirituality refers to the experiential elements of meaning, eminence, and excellence. 6 Researchers regard spiritual intelligence as the most significant type of intelligence because of its ability to influence change in people, societies, and cultures. Thus, improving spiritual intelligence helps individuals toward adopting a positive outlook and in achieving inner peace. This modification in attitude improves self-motivation and control as well as helping to reduce the high stress levels commonly induced by the hectic pace of modern life. 7 Mayer states that individuals attain spiritual awareness when the following are achieved: 1) being attuned to the holistic harmony of the world and surpassing self-limitation; 2) being aware of higher planes and states of spiritual thought and contemplation; 3) being conscious of the spiritual dimension of daily activities, events, and relationships; 4) building awareness, which means considering daily problems in the context of ultimate life parameters; and 5) desiring to improve or elevate the self, consequently practicing forgiveness, expressing appreciation and gratitude, and practicing humility.
Emmons defines spiritual intelligence as the adaptive use of spiritual data to facilitate daily problem solving. 1 Zohar and Marshall identify spiritual intelligence as a third type of intelligence that expands the construct of behavior. It is also the intelligence by whose standards our work and comprehensive path of life are evaluated in comparison with others. It is the base we need in order for our intellectual and emotional intelligence to work effectively. 5 King defines it as the group of intellectual/mental capabilities that are based upon adaptation, nonmaterialistic principles, and far-from-reality aspects. 9 Vaughan defines spiritual intelligence as interest in the individual's inner mental life, mood, and relation to the existence of life, thus implying the ability for deep understanding of questions related to existence as well as the consideration of various levels of emotion. 4 Nasel defines it as the ability to distinguish, search for meaning, and solve spiritual issues, 10 whereas Amram and Dryer see it as the ability to apply and use the spiritual features and capabilities which increase our life effectiveness and mental welfare. 11 On the other hand, Wigglesworth sees it as behaving wisely and mercifully while maintaining both inner peace and outward calm regardless of the prevailing circumstances. 12 ekkeveettil suggests that individuals with spiritual intelligence awareness reveal the following features and indications: flexibility (the individual's self-flexibility and ability to see the world realistically as a place of diversity and variety; this also refers to the person's ability to interact with, understand, and adapt to developments and innovation), self-awareness (examination of the inner self helps to comprehend one's true identity), the ability to face and learn from failure and fears, the ability to examine the relationships between different things and think collectively, and the ability to work. 13 MacHovec sees spiritual intelligence as a distinguished pattern of intelligence that surpasses variances in time, culture, and religion, and is an extension of Gardner's theory of multiple intelligences. Although spiritual intelligence differs from traditional intelligence, they share common features: they increase with age, reflect the individual's mental performance pattern, and consist of a group of interdependent abilities. 14 In addition, spiritual intelligence is recognized as being the representative of intelligence, which means that it refers to the integration of all other types of intelligence. 2 Emmons states that spiritual intelligence comprises a number of features or capabilities that vary from one person to another: the ability for excellence and eminence; the ability to access deep spiritual states of reflection, such as meditation and subjugation of self; the ability to use spiritual capacities and resources for solving daily problems; the ability to invest in daily events, activities, and relations with others, in addition to behaving in a dignified manner in all things and toward all people; and the ability to behave with humanity and modesty, showing lenience, forgiveness, and gratitude, and expressing sympathy and humility. 1,2 Noble delineated the innate human capacity for spiritual intelligence as two types of ability: the conscious realization of the materialistic reality which exists within a larger multidimensional domain, and seeking the achievement of psychological well-being.
[3][18] Wilber mentions that spiritual intelligence develops and increases among individuals in three stages: the beginning stage, in which attention is focused on the self through moving toward God with supplication to Him, with prayer and thanks for His compassion, the gifts of serenity and peace, security, and assurance in times of personal adversity; conventional levels, which refer to harmony and cohesion with religion, an extension of self-interest to interest in others; and postconventional levels, or transference from the state of simple commitment to spiritual and religious consciousness to a broader inclusiveness of self-awareness, as well as understanding the different ways and means of reaching realization and coming to terms with reality. 19 Personality is a set of psychological traits and mechanisms within the individual which is organized, relatively enduring, and influences the individual's adaptation to the environment. 20 It consists of dynamic organizational traits that determine how a person adjusts to the environment. 21 Among the best-developed models concerning personality traits is the Big Five model. 22,23 This model consists of five personality factors: neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. This model of personality can be used to describe the most salient aspects of personality. It combines people's emotions, attitudes, and behavior, and personality was defined as a consistent pattern of thoughts, feelings, or actions that distinguish people from one another. 24 Neuroticism has an inherent negative denotation. Goldberg found that neurotic people respond more poorly to environmental stress and are more likely to interpret ordinary situations as threatening and minor frustrations as hopelessly difficult. 24 Individuals who are high in neuroticism may show more emotional reactions whenever confronted with stressful situations. [25][28] Extraversion refers to social adaptability, though the popularity of this term seems to be waning. 29 Extraversion is the act, state, or habit of being predominantly concerned with and obtaining gratification from outside the self, defined as a trait characterized by a keen interest in other people and external events, and venturing forth with confidence into the unknown. 30 Openness to experience refers to how willing people are to make adjustments in notions and activities in accordance with new ideas or situations. 31,32 It includes traits like having wide interests, being imaginative and insightful, attentiveness to inner feelings, preference for variety, and intellectual curiosity. 33 People with conscientious personalities are organized, plan ahead, and exhibit impulse control, though this should not be confused with the problems of impulse control found in neuroticism. People exhibiting neurotic impulsiveness find it difficult to resist temptation or delay gratification, while individuals with low conscientiousness are unable to motivate themselves to perform a task that they would like to accomplish. 33 Agreeableness measures how compatible people are with other people, or how able they are to get along with others. It is a tendency to be pleasant and accommodating in social situations, reflecting individual differences in concern for cooperation and social harmony. 34 Agreeable traits include empathy, consideration, friendliness, generosity, and helpfulness, as well as an optimistic view of human nature. Agreeable persons tend to believe that most people are honest, decent, and trustworthy, and are less likely to suffer from social rejection.
Previous studies
In their study, Beshlideh et al conducted research with 270 male students at Shahid Chamran University at Ahvaz, examining the relationship between personality traits and spiritual intelligence. Analytical results showed statistically significant correlations between extraversion, agreeableness, and conscientiousness and the critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion subscales of spiritual intelligence, but showed no correlation between the neuroticism and openness personality traits and the spiritual intelligence subscales. 35 Amrai et al conducted a study with 205 students at the University of Tehran to examine the relationship between personality traits and spiritual intelligence. The study results showed a positive relationship between the three personality traits of conscientiousness, agreeableness, and extroversion and spiritual intelligence, but a negative relationship between neuroticism and spiritual intelligence, while also showing no correlation between openness and spiritual intelligence. 36 Sood et al conducted a study with 120 students at the Jammu and Indira Gandhi National Open University examining the relationship between personality traits, spiritual intelligence, and well-being. Results showed a positive relationship between self-meaning generation and agreeableness and neuroticism, and a significant relationship between transcendental awareness and openness. 37 Farsani et al conducted a study with 121 physical education managers in Isfahan province, examining the relationship between spiritual intelligence and personality traits. The study results showed a positive significant correlation between spiritual intelligence (critical thinking, creating personal meaning, transcendental awareness, and expanding awareness) and the personality traits of extraversion, agreeableness, conscientiousness, and openness to experience. Results also showed a negative and significant correlation between the spiritual intelligence subscales and the neuroticism personality trait. 38
The current study
It should be noted that few studies have been undertaken in Jordan or indeed in the Arab world related to the topic of spiritual intelligence. The present study was aimed at exploring the level of spiritual intelligence among Hashemite University students, and examining further the link between spiritual intelligence and personality traits. In order to achieve the objective of the present study, the following questions were generated: What is the level of spiritual intelligence among a sample of undergraduate students at the Hashemite University in Jordan? Is there a significant correlation between spiritual intelligence and personality traits?
Methodology
Study sample and population
The present study adopted a descriptive research design. The study population consisted of all undergraduate students at the Hashemite University in Jordan (N=26,530) in the second semester of the academic year 2013-2014. Participants included 716 male and female students from different departments, academic years, and faculties, selected from the overall population using a purposive sampling technique. Six courses were chosen from the college of arts and sciences. Table 1 illustrates the demographic characteristics of the study participants.
Instruments
Research instruments included two scales: the Spiritual Intelligence Questionnaire and the Personality Traits Questionnaire. The Spiritual Intelligence Questionnaire was developed by King and included 24 items covering the four spiritual intelligence constructs: critical existential thinking (7 items), personal meaning production (5 items), transcendental awareness (7 items), and conscious state expansion (5 items). The items are rated on a five-point scale, ranging from (0) not at all true of me to (4) completely true of me. Higher scores represent higher levels of spiritual intelligence. 9 King reported Cronbach's alpha coefficients for the subscales of 0.78, 0.78, 0.87, and 0.91, respectively, for critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion. 9 El-Rabeea adapted and validated an Arabic version of the King 9 spiritual intelligence questionnaire for Yarmouk University students in Jordan. Cronbach's alpha was calculated at 0.72, 0.78, 0.72, and 0.72, respectively, for critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion. 39 In the current study, the reliability coefficients calculated using Cronbach's alpha were found to be 0.73, 0.81, 0.68, and 0.83, respectively, for critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion.
The Personality Traits Questionnaire was developed by McCrae and Costa to measure the five personality domains: neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. The instrument consisted of 60 items, 12 for each of the five domains. Participants provide self-descriptive responses on a Likert-type scale from 1 (strongly disagree) to 5 (strongly agree). The internal consistency of the instrument was high (0.92, 0.89, 0.87, 0.86, and 0.90, respectively, for neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness). 40 In the current study, the reliability coefficients calculated using Cronbach's alpha were found to be 0.67, 0.64, 0.63, 0.65, and 0.67, respectively, for neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness.
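The reliability coefficients above are Cronbach's alpha values. As a point of reference for how such coefficients are obtained, the following is a minimal Python sketch of the standard formula; it is illustrative only and not part of the original study, and the variable names are mine.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the subscale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# e.g., alpha for the 7-item critical existential thinking subscale:
# alpha_cet = cronbach_alpha(scores[:, :7])
```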
Data collection and analysis
Data for the current study were collected using the spiritual intelligence scale and the personality traits scale. The questionnaires were distributed by the researchers during March and April of the academic year 2013-2014. The researchers selected six optional university requirement courses and, during class sessions, explained the purpose and instructions of the study, assured participants of the confidentiality of results, and handed the scales to students. At the end of the class sessions, the scales were collected by the researchers; seven hundred and sixteen completed scales were collected.
Results
The first objective of the present study was to determine the level of spiritual intelligence among students at the Hashemite University. To achieve this objective, descriptive statistics including means and standard deviations were used, and the levels of spiritual intelligence were interpreted as follows: below 3, low; 3-4, medium; above 4, high. 41 Table 2 shows the mean for overall personal meaning production as 3.90, critical existential thinking as 3.48, transcendental awareness as 3.41, and conscious state expansion as 3.39, indicating a medium level of spiritual intelligence skills. The mean for neuroticism was 36.04, extraversion 42.31, openness to experience 39.63, agreeableness 42.74, and conscientiousness 43.44.
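The cutoffs quoted above amount to a simple classification rule. A small illustrative Python sketch follows (the function name is hypothetical; the cutoffs and means are those reported in the text):

```python
def si_level(mean_score: float) -> str:
    """Interpret a mean item score on the five-point scale: <3 low, 3-4 medium, >4 high."""
    if mean_score < 3:
        return "low"
    if mean_score <= 4:
        return "medium"
    return "high"

# All four reported subscale means fall in the medium band:
for name, m in [("personal meaning production", 3.90),
                ("critical existential thinking", 3.48),
                ("transcendental awareness", 3.41),
                ("conscious state expansion", 3.39)]:
    print(f"{name}: {m} -> {si_level(m)}")
```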
The second objective of this study was to investigate the relationship between spiritual intelligence and personality traits. The correlation matrix is presented in Table 3. There is a positive and statistically significant relationship at the 0.01 level between the spiritual intelligence dimensions (critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion) and the personality traits (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness). Results also indicated no significant correlation between the personal meaning production and transcendental awareness dimensions and the neuroticism personality trait.
The predictive capability of personality traits for spiritual intelligence was determined using regression analysis of the spiritual intelligence dimensions, presented in Table 4. The results show that with critical existential thinking as the dependent variable, the global model was significant (R² = 0.120, F = 19.373, R = 0.346, P < 0.05), with the five variables (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness) accounting for 12% of the variance in critical existential thinking. With personal meaning production as the dependent variable, the global model was significant (R² = 0.107, F = 17.060, R = 0.327, P < 0.05), with the five variables accounting for 10.7% of the variance in personal meaning production. With transcendental awareness as the dependent variable, the global model was significant (R² = 0.057, F = 8.521, R = 0.238, P < 0.05), with the five variables accounting for 5.7% of the variance in transcendental awareness. With conscious state expansion as the dependent variable, the global model was significant (R² = 0.093, F = 14.593, R = 0.305, P < 0.05), with the five variables accounting for 9.3% of the variance in conscious state expansion.
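To illustrate the regression procedure summarized in Table 4, the sketch below regresses one spiritual intelligence dimension on the five traits. The column names and the use of statsmodels are my assumptions; the authors' actual analysis software is not stated.

```python
import pandas as pd
import statsmodels.api as sm

TRAITS = ["neuroticism", "extraversion", "openness",
          "agreeableness", "conscientiousness"]

def regress_dimension(df: pd.DataFrame, dimension: str):
    """OLS regression of one spiritual intelligence dimension on the Big Five."""
    X = sm.add_constant(df[TRAITS])          # add an intercept term
    model = sm.OLS(df[dimension], X).fit()
    # R-squared, F statistic, and overall model p-value, as reported in Table 4
    return model.rsquared, model.fvalue, model.f_pvalue

# r2, f, p = regress_dimension(df, "critical_existential_thinking")
```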
Discussion
Mental or intellectual capacities are not the sole variables affecting individual performance and the scores subsequently achieved in intelligence ratings; performance may also be influenced by nonmental personal or mood variables.
Given a broad definition of personality as a dynamic and organized set of characteristics, intelligence might well be described as a personality trait, and was indeed considered as such by some early theorists, including Cattell and Guilford in the 1950s. This study aimed to identify the level of spiritual intelligence among the Hashemite University student sample, and also to investigate the correlation between spiritual intelligence and personality traits. Analysis of participants' responses relating to the first objective of this study, the spiritual intelligence questionnaire, indicated a medium level of spiritual intelligence. This result could be explained in light of the novelty of the spiritual intelligence concept and its consequent lack of recognition by the university as a bona fide field of study. Increased attention and focus on this important area through academic courses and training programs would not only improve the effectiveness of spiritual intelligence among students, but also elevate the prestige and distinction of the university.
The second aim of this study was to investigate the correlation between spiritual intelligence and personality traits among the Jordanian student sample. Results revealed a positive and statistically significant relation between the spiritual intelligence dimensions (critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion) and the personality traits (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), but showed no significant correlation between the personal meaning production and transcendental awareness dimensions and the neuroticism personality trait.
The correlation between neuroticism and spiritual intelligence can be interpreted through the theoretical concept of spiritual intelligence, since spiritual intelligence questions meaning and values and is grounded in social and spiritual life. Thus, spiritual intelligence appears to be a way of administering the human mind effectively, so that a person enjoying a high level of spiritual intelligence has control over his or her reactions and responses, and a high power of thought. Clearly, these characteristics are diametrically opposite to those engendered by neuroticism, since neurotics are always anxious, worried, stressed, easily angered, and unable to control their reactions and responses.
Regarding the positive correlation between spiritual intelligence and extraversion, this result can be illustrated by the logical link between the concept of spiritual intelligence and extraversion characteristics: people who are cheerful, friendly, and warm in their social relationships, who are characterized by healthy vigor, enjoy all types of physical activity, and feel positive emotions such as happiness, love, excitement, and enjoyment. Sternberg sees the production of personal meaning as the ability to build character and identify the goal of all physical and mental experiences, including the ability to create a goal and significance for life, as well as the contemplation of existence. It is noted that character is sometimes described as an element of spirituality, indicating that spiritual intelligence implies contemplation of the significance of personal events and conditions in order to generate a goal and meaning for all life experiences. 44 [37][38] Emmons expresses the view that there appears little doubt as to the relationship between personality processes and intelligence, asserting the possibility that personality characteristics or components are linked to individual variances in establishing and expressing spiritual intelligence. 1 Wolman and MacHovec concur, the latter elucidating further by stating that some personality components are more concordant with spiritual intelligence characteristics than others. 14,45 Any attempt to examine, investigate, or correlate various personality theories inevitably recognizes the comprehensive range of characteristics encompassed in McCrae and Costa's Big Five personality theory, and these researchers join a consensus of opinion in identifying the characteristics most commonly used to express spiritual intelligence. These attributes of emotional stability, agreeableness, and openness are concordant with the stable, kind, responsible, open-minded, and creative natures associated with a high level of spiritual intelligence development. 46 Piedmont, however, presents an alternative view, positing that spirituality may represent a separate attribute of personality. 47 This is supported and expanded upon by MacHovec, who contends that spiritual intelligence may be regarded as a personality attribute which, just as with any other personality characteristic, differs among individuals in intensity of expression. Its particularity, however, is attributed to life experience, which by its very nature is distinctly individual, idiosyncratic, and subjective, described as an all-encompassing transcendental quality revealed in a perceptive and effective manner, contributory and conducive to creativity, development, and extension of self. MacHovec describes the realization of this kind of subjective experience linked with spiritual intelligence as a feeling of enlightenment, wonder, inspiration, or heightened awareness. 14 [49][50] Yet the concept of spiritual intelligence as a definite factor contributing to personality structure has been neither adequately studied nor developed, even theoretically. Following Piedmont's line of thought, spiritual intelligence may present a combination of personality factors (interest in, expression of, and access to spiritual information) and intelligence-related information processing (aptness and skills for effective processing of this type of information). 47
In fact, the very concept can be expanded to include additional, equally substantial individual strengths and capabilities, such as the natural capacity for parenting, tending to and caring for those in need, or artistic expression. It may be claimed that such intrinsic predispositions, by their very presence, are dedicated conduits for the expression of spiritual intelligence and may be fostered, nurtured, and reinforced by the development of spiritual intelligence. 51
Recommendations
The faculty should take the lead in this important field by exploring how spiritual intelligence competencies might be developed through such methods as training, coaching, and therapy. From a theoretical standpoint, we suggest that future research work on adapting or designing new spiritual intelligence instruments through exploratory and confirmatory factor analysis. Future research should also examine the relationship between spiritual intelligence and other variables such as emotional intelligence, parenting styles, academic achievement, and motivation.
Limitations
A limitation of the current study is that statistical significance in the relationship between spiritual intelligence and personality traits does not necessarily imply adequate effect sizes. It is important to note that this study used the correlational method, and hence no clear cause-and-effect conclusions can be drawn from the results. Future studies might consider using an experimental design. Another limitation is that, because the study formed part of a class assignment, it was not possible timewise to recruit participants through another sampling method. Furthermore, the structure of the current study was such that reliability analysis could not be performed, because only total scores for each questionnaire were entered into SPSS.
Conclusion
The main purpose of the current study was to assess the level of spiritual intelligence among undergraduate students at the Hashemite University in Jordan and to examine the association between spiritual intelligence and personality traits. The findings suggest that the level of spiritual intelligence among students at the Hashemite University in Jordan is medium. The most important finding drawn from this study was that a positive and statistically significant relationship exists between the spiritual intelligence dimensions (critical existential thinking, personal meaning production, transcendental awareness, and conscious state expansion) and the personality traits (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness), but no significant correlation was found between the personal meaning production and transcendental awareness dimensions and the neuroticism personality trait. Regression analysis results indicate that critical existential thinking is the dimension of spiritual intelligence best predicted by neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. | 2017-10-20T12:51:10.771Z | 0001-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "ff098a906773d132433d503a161232d96b06c0db",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=24199",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff098a906773d132433d503a161232d96b06c0db",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
267093966 | pes2o/s2orc | v3-fos-license | A Retrospective Evaluation of 5 Years of Clinical Results of Metal–Ceramic vs. Monolithic Zirconia Superstructures in Maxillary All-on-4TM Concept
The aim of the current study was to present the clinical and radiological outcomes of monolithic zirconia superstructures compared to metal-ceramic ones in the All-on-4 concept for the prosthetic rehabilitation of the maxillae. A total of 30 patients were subdivided into groups according to their superstructure type (metal-ceramic (n = 15) or monolithic zirconia (n = 15)). All implants were functionally loaded within 24 h after insertion with provisional acrylic superstructures. Prosthetic complications, marginal bone loss, plaque accumulation, probing pocket depth, bleeding on probing, and bite force were documented over a period of 5 years. Marginal bone loss around the implants of the metal-ceramic group remained within acceptable limits over the five years (1.21 ± 0.23 mm). However, marginal bone loss was significantly lower around the implants in the monolithic zirconia group (0.22 ± 0.14 mm) (p < 0.001). Bleeding on probing, plaque accumulation, and probing pocket depth values were correlated with marginal bone loss. Among all evaluated parameters, no differences could be detected in terms of the angulation of the implants. Detachment or chipping was observed in seven cases in the metal-ceramic superstructure group. In all cases, the dentures were removed and repaired in the laboratory. In the monolithic zirconia group, chipping was detected after one year in two cases, after two years in four cases, and after five years in one case, and could be managed by polishing in situ. Monolithic zirconia superstructures presented superior results regarding the parameters evaluated.
Introduction
In recent years, CAD/CAM technology has allowed the use of rapid manufacturing processes to improve the efficiency of implant-supported treatment options [1]. As a result, the application of metal-free restorations as restorative materials for implant-supported superstructures has gradually increased [2]. However, despite the wide range of restorative materials available on the market and the variety of therapy options today, the selection of the most appropriate material to obtain the best clinical results and meet the patients' expectations is still challenging, and it remains difficult for dental professionals to determine the best restorative solution for each therapy concept.

It has been suggested that the main advantages of zirconia in implant-supported dental rehabilitation are reduced treatment costs and treatment time [1][2][3]. Another advantage in comparison to metal-ceramic superstructures is proclaimed to be improved aesthetic outcomes, which have been attributed to a superior peri-implant soft tissue profile [3].

Within the last two decades, the use of full fixed prostheses mounted immediately after insertion on two straight mesial and two tilted distal implants, the so-called All-on-4 concept, has become nearly a standard therapy option for the rehabilitation of edentulous jaws, with predictable clinical outcomes [4,5]. The literature shows very high survival rates for this concept; however, the peri-implant health status, complication rates, and repair costs can vary with the superstructure type selected [6]. This concept basically includes two different types of prosthetic solutions regarding the final protocol: a metal-ceramic fixed prosthesis with ceramic veneers or a fixed acrylic resin prosthesis with a metal framework and acrylic resin prosthetic teeth [5,7,8]. Therefore, to date, the most investigated restorative materials in the All-on-4 concept are metal-ceramic and acrylic superstructures, and the long-term behavior of monolithic zirconia superstructures in this concept remains an open question.

The long-term clinical outcomes of monolithic zirconia have become the main subject of various studies. However, these outcomes have yet to be elucidated in the All-on-4 concept. The aim of the current study was to present the clinical and radiological outcomes of monolithic zirconia superstructures compared to metal-ceramic ones in the All-on-4 concept for the rehabilitation of the edentulous maxillae. The null hypothesis states that there are no differences regarding the clinical and radiological outcomes between monolithic zirconia and metal-ceramic superstructures.
Study Design
Data and clinical documentation of the patients who were treated between August 2016 and July 2017 for the rehabilitation of their edentulous upper jaw with immediately functionally loaded implant-supported fixed prostheses according to the All-on-4 concept were collected retrospectively. Patients were divided into subgroups according to the superstructure type (monolithic zirconia or metal-supported ceramics). Inclusion criteria were:
• Natural dentition or tooth/implant-supported fixed prosthetics of the mandible;
• Regular attendance of dental recall appointments at intervals of 6 months (according to local regulations [9], irregular attendance during the COVID-19 period was disregarded).
Patients with the following conditions were excluded [6,7]:
• Uncontrolled general diseases and/or possible local and/or systemic contraindications for implant surgery.
Preoperative Measurements
Prior to the placement of the implants, vertical dimensions were determined, occlusal registrations were conducted, and a denture for the upper jaw was fabricated by a dental laboratory; this was modified and immediately mounted to serve as a provisional fixed restoration postoperatively. Preoperative photographs were taken to record aesthetic aspects such as the smile line, both with the lip raised and in resting position.
Surgery
Surgical interventions were performed by M.A. under local anesthesia and, in some cases, under conscious sedation with I.V. benzodiazepine (0.17 mg/kg). All patients received four implants (HiTec LGI, Herzliya, Israel) according to the All-on-4 treatment concept: two straight mesial implants (placed in regions 12 and 22) and two angulated distal implants (Figure 1a-c, placed in regions 15 and 25). Implant lengths were 13 mm for the straight and 16 mm for the angulated implants, respectively, all with a diameter of 4 mm. Briefly, after raising a mucoperiosteal flap with releasing incisions on the vestibular aspect, distal to the molar area and in the midline, the whole maxillary alveolar crest was exposed. In the presence of a high smile line, the bone level was reduced with an ostectomy. After identification of the anterior bony wall of the maxillary sinus through a bony window of 3 mm, placement was started with the posterior implants with the aid of a special guide (Edentulous guide, Nobel Biocare™, Göteborg, Sweden). The insertion of all implants followed standard procedures according to the manufacturer's guidelines; however, a peak insertion torque of at least 35 Ncm had to be reached to allow the immediate functional loading concept. After placement of the implants, impression copings were placed, and primary closure was achieved with 3-0 nonresorbable sutures. Functional loading was performed within 24 h after insertion. Postoperative instructions were given, including avoidance of vigorous mouth rinsing, immediate use of ice packs in the first 24 h, upright sitting, a liquid or soft diet for the first 24 h, avoidance of crunchy foods, and restriction of physical activities on the day of surgery. Ice packs were placed on the outside of the face over the implant sites and applied as continuously as possible for the first 48 h to decrease swelling. Antibiotics (amoxicillin 875 mg/clavulanic acid 125 mg) were given 1 h prior to surgery and twice a day for 5 days thereafter. For patients with penicillin allergy, clindamycin 600 mg was prescribed with the same posology. The sutures were removed 7 days after surgery.
Provisional Prosthetic Procedure
Impressions were taken with an open-tray technique. After placement of the impression copings, their positions were marked with a surgical pen, and drill holes were created in the prefabricated impression tray. Implant replicas with multiunit abutments were connected to the impression copings, the screws were loosened, and the impression tray was removed together with the impression copings. The impression copings were then removed from the material, and temporary abutments were placed on the implant replicas in the preliminary model ex situ. The implants were covered with temporary healing caps. Within 24 h, a screw-retained acrylic prosthesis was manufactured at the dental laboratory and mounted on the implants (Figure 2). To ensure an ideal occlusal relationship, centric and lateral contacts and excursions were checked, and the prosthesis was adjusted in situ.
Final Prosthetic Procedure
The final prosthetic rehabilitation was performed 3 months after surgery. The patients were assigned to subgroups according to their own preference of superstructure. For the patients treated with metal-ceramic superstructures, a chrome-molybdenum framework was produced with CAD/CAM (Figure 3a,b). The monolithic zirconia prostheses (Figure 3c,d), with a one-piece design, were all fabricated from the same brand of multilayered zirconia (Kuraray Europa GmbH, Hattersheim, Germany) using the protocols recommended by the manufacturer. In both groups, the length of the posterior cantilever was determined according to the 1.5-2× antero-posterior (A-P) spread rule [5], which allows a 10-12 mm distal cantilever extended to the molar area. The tissue surface of all superstructures was rounded, smoothened, and polished to avoid food impaction and to facilitate the patient's oral hygiene.
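The A-P spread rule is a simple proportional calculation; the sketch below makes it explicit (the example spread value is hypothetical, since patient-specific spreads are not reported in the paper).

```python
def cantilever_range_mm(ap_spread_mm: float) -> tuple[float, float]:
    """Allowed distal cantilever length per the 1.5-2x antero-posterior spread rule."""
    return 1.5 * ap_spread_mm, 2.0 * ap_spread_mm

# e.g., an A-P spread of 6 mm permits a 9-12 mm cantilever,
# in the range of the 10-12 mm extensions used in this study.
print(cantilever_range_mm(6.0))  # (9.0, 12.0)
```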
Outcome Parameters
Implant survival, prosthetic complications, marginal bone loss, probing pocket depth in mm (PPD), bleeding on probing (BOP), and plaque accumulation were documented for each implant during a period of 5 years. To ensure the accuracy and reproducibility of each parameter except the radiological assessment, the superstructures were removed. Bite force and its distribution were measured preoperatively, immediately after functional loading, and throughout the whole examination period.

Marginal bone loss was evaluated by measuring the limbus alveolaris around the implants using the standard right-angle parallel technique with single digital radiographs (Figure 4), as described by Brägger [10,11]. The radiographs were scanned at 600 dpi (Trophy RVG UI USB Sensor, KODAK 5.0 software, Carestream, Stuttgart, Germany). The bone level was comparatively assessed with image analysis software (IC Measure, Version 2, The Imaging Source Europe GmbH, Bremen, Germany).
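The paper does not state how pixel distances from the image analysis software were converted to millimeters. One common approach, shown here purely as an assumption-laden sketch, is to calibrate against the known implant length visible on the same radiograph.

```python
def marginal_bone_loss_mm(baseline_px: float, followup_px: float,
                          implant_len_px: float, implant_len_mm: float) -> float:
    """Bone-level change in mm, calibrated by the known implant length.

    baseline_px / followup_px: implant shoulder-to-bone-level distances in pixels.
    implant_len_px: measured implant length in pixels on the same radiograph.
    implant_len_mm: true implant length (13 or 16 mm in this study).
    """
    mm_per_px = implant_len_mm / implant_len_px
    return (followup_px - baseline_px) * mm_per_px
```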
PPD was measured with a calibrated periodontal probe (Hu-Friedy, Chicago, IL, USA). BOP was documented at four sites (mesial, buccal, distal, and palatal) according to the modified Sulcus Bleeding Index described by Mombelli et al. [12], scored as follows:
• "1": the deepest pocket bleeds on probing;
• "0": the deepest pocket does not bleed on probing.
Plaque accumulation was evaluated using the Plaque Index according to Mombelli et al. [12] with the following quantification:
• "0": no supragingival plaque detected;
• "1": plaque recognized only by running a probe across the smooth surface;
• "2": plaque visible to the naked eye;
• "3": abundance of soft matter.
Bite force and its distribution were assessed via a pressure-sensitive film (Dental Prescale 50H-R-FPD-703; Fuji Photo Film Co., Tokyo, Japan) (Figure 5).
Statistical Analysis
A power analysis based on ANOVA was used to establish the sample size needed to test the null hypothesis H0: µ1 = µ2, resulting in 80% power at a confidence level of 5%. Data were analyzed using Python 3.6.15 (open-source programming language). First, the mean and standard deviation for each group were calculated. The Shapiro-Wilk test was performed to assess the distribution of the parameters. Before the statistical evaluation of the groups, Levene's test was used to determine the homogeneity of variances. Group differences were tested with an independent t-test (parametric) or a Mann-Whitney U test (nonparametric, for discrete parameters). The level of significance was set at p < 0.05. The Pearson correlation coefficient was calculated to analyze the relationship between scale variables.
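Since the authors state that the analyses were run in Python, the described test pipeline can be sketched with scipy as below; the function is my reconstruction of the described workflow, not the study's actual code.

```python
from scipy import stats

def compare_groups(zirconia, ceramic, alpha=0.05):
    """Shapiro-Wilk normality, Levene homogeneity, then t-test or Mann-Whitney U."""
    _, p_norm_z = stats.shapiro(zirconia)
    _, p_norm_c = stats.shapiro(ceramic)
    _, p_levene = stats.levene(zirconia, ceramic)
    if p_norm_z > alpha and p_norm_c > alpha:        # both samples look normal
        stat, p = stats.ttest_ind(zirconia, ceramic,
                                  equal_var=(p_levene > alpha))
    else:                                            # nonparametric fallback
        stat, p = stats.mannwhitneyu(zirconia, ceramic)
    return stat, p

# Pearson correlation between scale variables, e.g. marginal bone loss vs. PPD:
# r, p = stats.pearsonr(bone_loss_mm, ppd_mm)
```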
Results
A total of 30 patients (16 women, 14 men) with a mean age of 64 ± 9.4 years were included. The two subgroups were of equal size (n = 15). In total, 120 implants were inserted. Due to local regulations during the COVID-19 pandemic, the follow-up appointments between 2019 and 2020 could not be performed, so the postoperative data of years 1, 2, and 5 were included.
Implant Loss
The implant survival rate was 100%. In addition, no complications such as infection of the sinus, abscess, or fistula were detected.
Prosthetic Complications
The survival rates for both immediate and final prostheses were 100%. No major complications, such as fracture of the framework, occurred. Dislodgement of the acrylic teeth of the provisional superstructure was detected in four cases, which was managed in situ. Discoloring of the acrylic teeth was observed in nearly all provisional acrylic superstructures.
Loosening of the multiunit abutment screw was seen at two implants in the metal-ceramic group and was resolved by retightening the abutment screw.
Detachment or chipping of the veneering material was observed in seven cases in the metal-ceramic superstructure group after one (n = 2), two (n = 3), and five years (n = 2) following the adjustment. In all cases, the dentures were removed and repaired in the laboratory. In the monolithic zirconia group, chipping was detected after one year in two cases, after two years in four cases, and after five years in one case, and could be managed by polishing in situ. No further prosthetic complications were documented during the follow-up period of five years.
Bone Loss
In both subgroups, a progression of marginal bone loss was observed during the first year. However, after 1 year, the bone loss around implants of the metal-ceramic group was significantly higher (p < 0.001), both for tilted (1.33 ± 0.35 mm) and straight implants (1.15 ± 0.30 mm), compared to the monolithic zirconia group (0.21 ± 0.11 mm for straight implants and 0.23 ± 0.15 mm for tilted implants, respectively). There were no significant differences in marginal bone loss between the straight and tilted implants (Figure 6).
Plaque Accumulation
The Plaque Index showed that plaque accumulation was significantly lower around the implants in the monolithic zirconia group. The scores remained nearly unchanged following the one-year examination in both groups. There were no significant differences in terms of plaque accumulation between the straight and tilted implants (Figure 7).
Bleeding on Probing
Bleeding on probing measurements around the tilted and straight implants revealed no statistically significant differences. However, significantly lower scores were observed around the implants in the monolithic zirconia group (Figure 8). The difference between the two subgroups was statistically significant (p < 0.001).
Probing Pocket Depth
PPD increased consistently and significantly over time in the metal-ceramic group. Significantly shallower pockets were found at the implants supporting the monolithic zirconia superstructures (Figure 9). There were no statistical differences between straight and tilted implants in either group.
Bite Force
Bite force improved immediately after functional loading in both groups. An increasing difference in favor of the monolithic zirconia superstructures began to evolve from 2 years onward; however, the difference was statistically insignificant (Figure 10).
Discussion
The results of this study reject the null hypothesis (H0) that there are no differences regarding the clinical and radiological outcomes between monolithic zirconia and metal-ceramic superstructures.
To the best of our knowledge, the clinical results of monolithic zirconia superstructures in the All-on-4 concept have not been evaluated until now. Barootchi et al. [13] compared the clinical outcomes of metal-acrylic and zirconia implant-supported fixed prostheses and stated that zirconia fixed implant prostheses presented higher initial costs than metal-acrylic hybrids; however, they showed more satisfactory outcomes, a reduction in overall complications, and superior survival rates. This study has clearly shown that the advanced criteria for "success" in dental implantology [14], including bone resorption, were fulfilled throughout the sample after 5 years of observation for both groups. The results regarding bone loss and inflammatory parameters in both groups were similar and parallel to those reported for the maxillary All-on-4 concept in the literature [15][16][17]. However, a slight superiority of the monolithic zirconia structures regarding PPD and marginal bone loss could be observed.
Zirconia restorations, being highly wear-resistant, hard, and durable, have been found not to follow the natural abrasion of and changes in the masticatory system [18]. However, Koenig et al. reported that there could be a weak link at the restoration support or the antagonist tooth, one hypothesis being that zirconia stiffness and lack of resilience do not promote occlusal stress damping [19]. Considering the bite force, this study has shown that the All-on-4 concept allows a sustainable improvement in functionality immediately after the integration of the superstructure in both groups. It is obvious that bruxism could result in both biological and mechanical failures in dental implantology and could have a direct effect on therapy outcomes. Dreyer et al. described bruxism as a contributory factor in peri-implant infections [20]. Regarding the design of the present study, the exclusion of the risk factor bruxism might be viewed as a limitation.
The current study has clearly shown that occlusal forces were significantly improved after functional loading. A retrospective assessment of the clinical performance of complete oral rehabilitation of bruxers treated with implant- and tooth-supported restorations revealed that the survival rates of both veneered and nonveneered monolithic zirconia restorations and implants in patients with bruxism are excellent; however, veneered zirconia restorations tend to show chipping [21]. Similar results have been presented by Levartosky et al. [22], who concluded that the survival and success rates of monolithic zirconia restorations were nearly perfect; however, the veneered restorations showed a high rate of minor veneer chipping. This problem could be resolved via polishing in situ. Further studies are needed to determine whether zirconia superstructures should be favored in the All-on-4 concept for patients with bruxism. However, regarding the minor mechanical complications of the superstructures, this study revealed equal results between the metal-ceramic and zirconia subgroups. All mechanical complications with an esthetic aspect, such as chipping or detachment of the veneer, could be managed within one day at the dental laboratory for the metal-ceramic superstructures. The most commonly seen complication in the zirconia group was chipping, which could be managed by polishing in situ without removing the superstructure.
A literature survey assessing the 5-year survival of metal-ceramic and zirconia superstructures showed that the survival rates of all types of all-ceramic prostheses were lower than those reported for metal-ceramic fixed ones [23]. The incidence of framework fractures was significantly higher for the ceramic groups, and the incidence of ceramic fractures and loss of retention secondary to loosening of the screw was significantly higher for zirconia prostheses compared to metal-ceramic ones. Pelekanos et al. [24] described the combination of a monolithic zirconia with an anatomically shaped titanium framework to increase flexural strength and fracture toughness and stated that this novel concept may be indicated to increase the clinical performance of full-arch prostheses. Herklotz et al. [25] highlighted the accuracy of planning an immediate loading of full-arch zirconia restorations to avoid both mechanical and biological complications. In the current study, no major mechanical complications (implant fracture, fracture of the connection parts, or complete fracture of the framework) were detected. Loosening of the connection screw was detected in two implants in the metal-ceramic group.
According to a survey [26] based on the data of 2039 complete-arch fixed implant-supported zirconia prostheses, the survival rate of zirconia superstructures depends on:
• The quality of the material;
• Respecting the laboratory guidelines/protocols described by the manufacturer;
• Minding the emergence profile and the gap above the soft tissues for better access for maintaining oral hygiene;
• Avoiding excessive posterior cantilevers;
• The provision of an acrylic prosthesis to allow adjustment of function and esthetics prior to the definitive prosthetic treatment.
All the above-mentioned criteria particularly overlap with the principles of the All-on-4 concept: a well-polished, convex denture base and a restricted cantilever length determined according to the 1.5-2× antero-posterior spread rule, as previously described by Malo et al. [5], which allows a 10-12 mm posterior cantilever extended to the molar area, together with biologically highly compatible provisional prosthetics to create an ideal soft tissue profile.
Several studies [7,27] focusing on immediate loading revealed an increase in marginal bone loss from 5 years onward. The current results should therefore be regarded as mid-term findings, and further studies with larger and more heterogeneous sample sizes and long-term (>7 years) follow-up are needed to validate them. Additionally, the highly selected study group, with the exclusion of common risk factors in implantology such as bruxism, smoking, and diabetes, which could jeopardize peri-implant health, might be viewed as a limitation.
Conclusions
• The advanced criteria for "success" in dental implantology were fulfilled throughout the sample after 5 years of observation for both metal-ceramic and monolithic zirconia superstructures;
• Monolithic zirconia superstructures presented superior results regarding both the clinical and radiological parameters evaluated herein;
• The mechanical complications in the monolithic zirconia superstructures were easily managed by polishing, whereas detachment of the veneer in the metal-ceramic group required an ex situ repair in the laboratory;
• No major mechanical complications were observed during the follow-up period in either group.
Figure 4 .
Figure 4. Marginal bone loss was evaluated by measuring the limbus alveolaris around the implants, using the standard right-angle parallel technique with single digital radiographs.
Figure 5 .
Figure 5. Analysis of the bite force with Dental Pre-scale 50H type R and Occluzer FPD-703 (Fuji Photo Film Co., Tokyo, Japan).The distribution shows a well-balanced occlusal load on the molar area, which corresponds to the natural dentition.
Figure 6 .
Figure 6. Bone loss was significantly lower around the implants in the monolithic zirconia group, and the difference between the two groups was statistically significant (p < 0.001). There were no significant differences in marginal bone loss between the straight and tilted implants.
Figure 7 .
Figure 7. Plaque accumulation was significantly higher around the implants in the metal-ceramic group, and the difference between the two groups was statistically significant (p < 0.001). There were no significant differences in terms of plaque accumulation between the straight and tilted implants.
Figure 8 .
Figure 8. Bleeding on probing measurements around the implants revealed no statistically significant differences regarding implant angulation. However, significantly higher values were observed in the metal-ceramic group than in the monolithic zirconia group throughout the examination period.
Figure 9 .
Figure 9. Comparative analysis of the probing pocket depths (mm) between the metal-ceramic and monolithic zirconia groups across the implant regions revealed significantly lower values in the monolithic zirconia group (p < 0.001).
Figure 10. Occlusal force improved after immediate functional loading in both groups. An increasing difference in favor of monolithic zirconia began to evolve from 4 years onward; however, the difference was not statistically significant. The mean occlusal force values over the whole examination period were 651.41 N for the monolithic zirconia and 623.59 N for the metal-ceramic superstructures, respectively.
| 2024-01-24T06:17:23.451Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "7329cbe065600bb74169e8b7289a9664d1282f52",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/13/2/557/pdf?version=1705573613",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7c50e1d633f968ce7240d1455505e84014f6224",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263209759 | pes2o/s2orc | v3-fos-license | Seismic Behavior of Colliding Buildings, Incorporating Soil-structure Interaction and Accounting for Variability in Structural Parameters, Soil Parameters, and Seismic Action
Pounding between buildings that are not sufficiently separated has been observed several times during earthquakes. This destructive impact may severely damage the structure and lead to its collapse. Although it is impossible to completely eliminate such losses, measures can be taken to minimize them. This article investigates the effect of the variability of structural parameters, soil parameters, and seismic action on the seismic response of two colliding buildings, taking the soil-structure interaction (SSI) into account. Two adjacent structures closely separated, modeled as inelastic lumped mass systems with different structural characteristics, were considered in this study. Both structures were modeled in the analysis using multi-degree-of-freedom (MDOF) systems, and the pounding was simulated using the modified linear viscoelastic model. The analysis was conducted in two cases: probabilistic analysis and deterministic analysis. Probability curves were established to analyze the effect of the variability of the parameters on the responses of the two colliding buildings. The comparison between the two analyses indicates that the probabilistic analysis is more precise than the deterministic analysis. It has been indicated that taking into account the variability of structural parameters, soil parameters, and seismic action is efficient in determining the realistic behavior of colliding buildings. Additionally, pounding is more critical in the case of buildings founded on very soft soil, followed by those on soft soil, then on hard soil, and finally on rocky soil.
Introduction
One of the most important natural disasters facing human society today is the earthquake, which is characterized by its suddenness and destructiveness. In recent years, protecting against the destructive effects of earthquakes has received more attention, particularly concerning collisions between adjacent structures, especially during previous earthquakes such as San Fernando 1971 [1], Mexico City 1985 [2], Loma Prieta 1989 [3], and Bhuj 2001 [4]. Pounding was also observed in recent earthquakes, such as Christchurch (New Zealand, 2011) [5] and Gorkha (Nepal, 2015) [6]. Additionally, pounding was observed in Algeria, especially during the Boumerdes earthquake (M = 6.8) in 2003 [7]. The majority of the damage observed during these earthquakes was caused by pounding, which occurred between two adjacent structures located too close to each other, where the gap between them did not satisfy the minimum distance required for them to vibrate freely, see Fig. 1.
Numerous researchers have extensively investigated the phenomenon of structural pounding, considering various structural configurations and diverse ground motions. Anagnostopoulos [8] studied the pounding of buildings in series during earthquakes, where the structures were modeled as single-degree-of-freedom systems. The author found that the outer structures in the series responded more severely than the inner ones. Moreover, research has shown that the dynamic response of adjacent structures is significantly influenced by dynamic structural parameters such as the natural vibration period, mass, and damping. Also, a change in the structural design, the separation distance between the colliding buildings, or the ground motion excitation may lead to different results [9].
Previous investigations confirmed that the behavior of lighter and more flexible buildings is severely impacted by structural pounding during earthquakes, which can eventually lead to considerable permanent deformation of the structures once the response exceeds the elastic range. In contrast, heavier and stiffer buildings are nearly unaffected by collisions between structures [10,11]. The investigations have also indicated that pounding has a negative impact on the seismic responses of closely separated buildings, and the impact increases as the separation gaps decrease.
Several mechanical devices have been used to mitigate the effects of vibrations in structures. However, the most popular and effective solution is the Tuned Mass Damper (TMD). Optimizing the parameters of this passive control system has attracted the attention of many researchers, leading to the study and development of numerous algorithms [12][13][14]. Recently, Djerouni et al. [15] determined the effectiveness of using the tuned mass damper inerter (TMDI) and the tuned inerter damper (TID) to mitigate seismic pounding.
The reduction of the vulnerability of a structure requires a good knowledge of the structure and the soil that supports it. The global term for the study of these phenomena is soil-structure interaction (SSI). Many researchers have been focusing on this issue (see, for example, Mylonakis and Gazetas [16], Mekki et al. [17], Oz et al. [18], Arboleda-Monsalve et al. [19], Liu et al. [20], and Kaveh and Ardebili [21]). They concluded that not only does the nature of the soil influence the behavior of the structure, but the structure also influences the behavior of the soil. Furthermore, the effects of SSI play an important role in determining the dynamic response of structures, and neglecting SSI can lead to unsafe construction, especially for structures built on soft ground.
The importance of considering SSI in pounding problems has been confirmed by several researchers (for example, Sobhi and Far [22]). Moreover, Mahmoud et al. [23] studied the impact of both the supporting soil flexibility and the pounding between adjacent structures. The findings of this study indicated that including SSI reduces the peak impact forces and story peak displacements during collisions while increasing the peak accelerations. However, Pawar and Murnal [24] concluded that taking SSI into account increases structural displacement while decreasing other reactions such as base shear, impact force, and kinetic energy. In addition, the phenomenon of SSI may produce severe pounding due to an increase in displacement. Therefore, neglecting SSI may lead to incorrect conclusions about the risk of pounding. The studies have also shown that SSI considerably increases the effect of pounding on flexible buildings compared to stiffer structures [25].
There is limited research on the effects of soil type on the dynamic response of adjacent buildings when considering SSI. Recently, Miari and Jankowski [26,27] examined the impact of pounding between structures built on the same and various soil types (hard rock, rock, very dense soil, soft rock, stiff soil, and soft clay soil). The results showed that buildings constructed on soft clay soil are susceptible to the greatest displacements and shear forces, followed by structures built on stiff soil, then structures built on very dense soil and soft rock. Finally, structures built on rock and hard rock experienced the lowest displacements and shear forces. Tena-Colunga and Sánchez-Ballinas [28] conducted a parametric study to minimize heavy pounding on soft soils. They concluded that the seismic code of Mexico City should increase the minimum distance between neighboring structures.
Until now, the impact of pounding on the responses of colliding buildings considering soil-structure interaction (SSI) has not been fully understood, and the results are generally contradictory. Previous investigations were limited to comparing the responses of colliding buildings in cases with fixed bases and incorporating SSI. These studies focused on a single soil type and neglected the significance of uncertainty related to the soil, the parameters of the structure, or the seismic action. However, the vulnerability of adjacent structures might increase significantly due to the variability of these parameters, which is an important source of uncertainty. It is in this context that the objective of this paper is articulated. This study aims to determine the effect of different types of uncertainties (structural parameters, soil parameters, and seismic action) on the response of adjacent structures, considering the SSI.
To achieve the objectives of this article, two multistory buildings of equal height have been considered in the study. These buildings have been modeled as inelastic lumped mass systems, with the structure on the right being stiffer than the one on the left. The models have been excited using the time history of the El Centro earthquake (May 18, 1940). Additionally, the modified linear viscoelastic contact element was employed to simulate the pounding phenomenon, and spring-dashpot elements have been incorporated to account for the dynamic behavior of the supporting soil.
The theory of a homogeneous, isotropic, and elastic half-space has been used to consider the SSI. The translation and the rotation of the foundation are simulated using springs and dampers adapted to the horizontal and rotational movement of the supporting soil [29], see Fig. 3. The soil-foundation parameters depend on the elastic properties of the soil and the dimensions of the foundation.
Where the dimensions of the foundations are (B × L), βx and βφ are the correction constants of the sway and rocking springs, and rh and rr are the equivalent radii of isolated foundations for the sway and rocking springs. The soil properties are defined by the Poisson's ratio ν, the mass density ρ, and the maximum shear modulus Gmax, which depends on the shear wave velocity Vs (Eq. (3)) [29]. In the analysis that incorporates SSI, the shear modulus has been decreased to more accurately reflect the behavior of the soil: the reduced shear modulus G was assumed to be 50% of the maximum shear modulus Gmax and calculated by Eq. (3) [29].
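Equations (1)–(3) are referenced but not reproduced in this excerpt. The minimal sketch below therefore assumes the classical elastic half-space expressions for the sway and rocking springs of an equivalent circular footing (after Richart and Whitman), together with Gmax = ρVs² and the 50% modulus reduction described above; the density value in the example is an illustrative assumption, not a value from Table 1.

```python
import math

def soil_foundation_springs(B, L, Vs, rho, nu, beta_x=1.0, beta_phi=0.5,
                            reduction=0.5):
    """Sway and rocking springs for a rigid rectangular footing on an
    elastic half-space (a sketch; the paper's Eqs. (1)-(2) are not
    reproduced here, so classical Richart-Whitman forms are assumed)."""
    G_max = rho * Vs ** 2          # Eq. (3): maximum shear modulus
    G = reduction * G_max          # reduced modulus used in the SSI analysis
    # Equivalent radii of the rectangular footing (standard equivalences)
    r_h = math.sqrt(B * L / math.pi)                 # sway
    r_r = (B * L ** 3 / (3.0 * math.pi)) ** 0.25     # rocking
    # Classical half-space stiffnesses scaled by the correction constants
    k_h = beta_x * 8.0 * G * r_h / (2.0 - nu)                  # sway spring
    k_phi = beta_phi * 8.0 * G * r_r ** 3 / (3.0 * (1.0 - nu))  # rocking spring
    return k_h, k_phi, r_h, r_r

# Example: the B = L = 2.50 m footing of the case study on very soft soil
# (Vs = 125 m/s); rho = 1800 kg/m3 and nu = 0.3 are assumed values.
print(soil_foundation_springs(2.5, 2.5, 125.0, 1800.0, 0.3))
```

For this footing the equivalent radii come out close to the 1.41–1.42 m reported later in the text, which supports the assumed equivalences.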
Model for simulating pounding force during collision
The linear viscoelastic model has been extensively and successfully used in the majority of studies on earthquake-induced structural pounding [8] because it is the most efficient and practical model to simulate the pounding force. It also takes into account the energy dissipation during the collision. However, the drawback of this model is the negative impact force observed just before the separation of the colliding structures. To eliminate this defect, Mahmoud and Jankowski [30] modified the linear viscoelastic model by activating the damping term only during the approach period of the collision.
In our case, we used the modified linear viscoelastic model (Fig. 4), which consists of three sub-elements. In the middle part, a linear spring accounts for the pounding-induced elastic force, and a linear dashpot takes into account the energy dissipation during the collision. On the right, the separation gap is simulated by a GAP element. These impact elements are placed between the masses and activated only if the separation gap is closed and the two masses are in contact. Otherwise, no impact force is transmitted.
The pounding force during impact, F(t), for this model is defined by Eq. (4).
Where δ is the deformation of the colliding structural elements, δ̇ is the relative velocity between the colliding structural elements, k is the impact element's stiffness, and c is the impact element's damping, which can be defined by Eq. (5), see [8,31].
Where m1 and m2 are the masses, as illustrated in Fig. 4. Moreover, the relation between the impact damping ratio ξ and the coefficient of restitution e is defined by Eq. (6) [30]. A sketch of this force model follows.
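As a concrete illustration of the modified model, the sketch below activates the dashpot only during the approach phase, computes c in the form of Eq. (5), and uses one published closed form relating ξ to e for this model; the masses m1 and m2 are hypothetical placeholders, and the exact expression of Eq. (6) should be checked against [30].

```python
import math

def impact_params(k, m1, m2, e):
    """Impact damping from the coefficient of restitution e (a sketch;
    the closed form for xi below follows the modified linear viscoelastic
    model of Mahmoud and Jankowski and should be verified against Eq. (6))."""
    xi = (1.0 - e ** 2) / (e * (e * (math.pi - 2.0) + 2.0))
    c = 2.0 * xi * math.sqrt(k * m1 * m2 / (m1 + m2))  # form of Eq. (5)
    return xi, c

def pounding_force(delta, delta_dot, k, c):
    """Modified linear viscoelastic contact force: the dashpot acts only
    during the approach phase (delta_dot > 0); no tension is transmitted."""
    if delta <= 0.0:          # gap still open: no contact, no force
        return 0.0
    if delta_dot > 0.0:       # approach phase: spring + dashpot
        return k * delta + c * delta_dot
    return k * delta          # restitution phase: spring only

# k = 2.36e7 N/m and e = 0.65 are taken from the study; the story masses
# below are illustrative assumptions only.
xi, c = impact_params(k=2.36e7, m1=2.0e5, m2=3.0e5, e=0.65)
print(pounding_force(0.002, 0.1, 2.36e7, c))
```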
Dynamic equations of motion
The dynamic equation of motion for the colliding buildings considering SSI, as shown in Fig. 3, is expressed by Eq. (7) [25].
Where (ÜL, U̇L, UL) and (ÜR, U̇R, UR) denote the acceleration, velocity, and displacement vectors for the left and the right buildings (Eq. (8)); ML and MR are the mass matrices of the left and the right building (Eqs. (9) and (10)); CL and CR are the matrices of damping coefficients for the left and the right building (Eqs. (11) and (12)); RL and RR are the vectors of the system resisting forces for the left and the right buildings (Eq. (13)); F is the pounding force vector; and Üg is the vector of ground motion acceleration (Eq. (13)).
Case study
Description of the selected structures
The case study involves two 10-story adjacent structures with different dynamic characteristics, where the right structure is stiffer than the left structure. The plan and elevation views are illustrated in Figs. 5(a) and 5(b). The buildings are constructed using concrete with a strength of 25 MPa and a Young's modulus of 32,164.20 MPa.
The steel used has a yield strength of 400 MPa and a modulus of elasticity of 200 GPa. The structures were designed as a column-beam system, and the columns are square with a cross-section of 50 × 50 cm for the left structure and 60 × 60 cm for the right structure (Fig. 5(c)). The foundation has dimensions B = L = 2.50 m for both structures, see Fig. 5(c). The floors are infinitely rigid, with a live load Q = 1.5 kN/m² and a roof load G = 5.14 kN/m².
Variability
The soil is a heterogeneous material since it consists of several layers whose properties are constantly changing due to soil disaggregation, which induces the variability of soil properties from one area to another. This is known as the natural variability or spatial variability of the soil, and it has been investigated in numerous studies [32,33]. Also, the effect of uncertainty on the structural response has been extensively discussed [34][35][36][37], considering that it can provide more reliable results and allow for more precise construction of structures, given that the material properties and geometric elements of a building are uncertain for various reasons. Despite all that, the impact of uncertainty on the behavior of colliding buildings has received limited attention. For an effective investigation, it is crucial to consider all sources of variability, which are significant sources of uncertainty.
A detailed analysis of the seismic behavior of adjacent structures requires the control and consideration of several types of uncertainties:
• Structural uncertainty: the variability of the structural geometry on the one hand and the variability of the material characteristics (strength, Young's modulus, etc.) on the other.
• Soil uncertainty: characterization of the mechanical properties (spatial variability) due to soil heterogeneity.
• Uncertainty related to seismic action: earthquakes differ inherently in their nature, unpredictability, intensity, frequency content, and type of seismogenic source.
• Soil-structure interaction uncertainty: modeling of the impedance functions and the soil-foundation connection.
To reach the target of this study, all uncertainties resulting from the variation of structural parameters, soil parameters, and seismic action have been included. Seven variables have been considered: the building's mass coefficient (m), the building's stiffness coefficient (k), the building's damping factor (c), the Poisson's ratio of the soil (ν), the density of the soil (ρ), the soil shear wave velocity (Vs), and the peak ground acceleration (PGA). Each of these parameters has been described by probability distribution laws defined by their mean and coefficient of variation (CoV), or by their mean and their minimum and maximum values, as indicated in Table 1.
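A minimal sketch of how such a probabilistic analysis can be organized is given below. Since Table 1 is not reproduced in this excerpt, the distribution types, means, and coefficients of variation are illustrative assumptions only, and the coupled MDOF pounding model is replaced by a placeholder response function.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of Monte Carlo simulations, as in the study

def sample_inputs():
    """Draw one realization of the seven uncertain parameters (a sketch:
    all distribution shapes and numbers below are assumptions standing in
    for the unavailable Table 1)."""
    return {
        "m_coef": rng.lognormal(mean=0.0, sigma=0.10),  # mass coefficient
        "k_coef": rng.lognormal(mean=0.0, sigma=0.15),  # stiffness coefficient
        "c_coef": rng.lognormal(mean=0.0, sigma=0.20),  # damping factor
        "nu":     rng.uniform(0.25, 0.40),              # Poisson's ratio
        "rho":    rng.normal(1800.0, 90.0),             # soil density [kg/m3]
        "Vs":     rng.uniform(100.0, 150.0),            # shear wave velocity [m/s]
        "PGA":    rng.lognormal(mean=np.log(0.3), sigma=0.25),  # in g
    }

def run_model(p):
    """Placeholder for the coupled MDOF pounding model with SSI; each
    sample would normally drive a full time-history analysis."""
    return p["PGA"] * p["m_coef"] / p["k_coef"]  # stand-in for a peak response

responses = np.array([run_model(sample_inputs()) for _ in range(N)])
print(np.percentile(responses, [50, 90, 99]))  # basis for probability curves
```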
Numerical results
Numerical simulations implemented in MATLAB were used to evaluate the seismic behavior of the colliding buildings, including the soil-structure interaction and the variability of structural parameters, soil parameters, and seismic action.
The model has been excited using the time history of the El Centro earthquake (May 18, 1940). To account for both horizontal and rotational movements of the supporting soil, swaying and rocking springs and dashpots have been employed, see Fig. 3. The horizontal and rotational stiffness and damping coefficients have been calculated using the formulas given by Eqs. (1) and (2). The correction constants of the swaying and rocking springs have been taken as βx = 1 and βφ = 0.5. The radii of the equivalent circular foundations for the swaying and rocking springs have been estimated as rr = 1.41 m and rh = 1.42 m. The adjacent structures have been modeled as MDOF systems with lumped masses at the floor levels, see Fig. 2, and the separation gap distance has been taken as d = 3 cm.
The pounding force has been modeled using the modified linear viscoelastic model (Fig. 4). The stiffness of the spring has been taken as k = 2.36 × 10⁷ N/m, while the impact element's damping has been calculated based on Eq. (5) for a coefficient of restitution e = 0.65 [38]. The analysis has been performed in two cases: deterministic analysis and probabilistic analysis.
Results of the deterministic analysis
The mean values of the different parameters have been selected for the deterministic analysis (see Table 1). In addition, four soil types with different shear wave velocities have been considered to account for soil variability: Vs = 125 m/s, Vs = 300 m/s, Vs = 600 m/s, and Vs = 1350 m/s. The effect of the variability of the soil on the behavior of the colliding buildings has been discussed in this analysis. Fig. 6 shows the peak story displacement at different floor levels of the left and right buildings under different soil types. The results presented in this figure indicate that the highest peak displacements (0.21 m for the left structure and 0.20 m for the right structure) are obtained at the top floor level (10th story), whereas the lowest peak story displacements are obtained at the 1st story of both neighboring buildings.
The influence of the soil type has also been investigated at this stage. It has been shown that the buildings have significantly different structural responses under different site conditions. It can be seen from Fig. 6 that the peak displacement of the colliding buildings decreased with the increase in the soil shear wave velocity at all floor levels of the structures. Notably, buildings founded on soil with a shear wave velocity of Vs = 125 m/s produced the highest levels of displacement, followed by those at Vs = 300 m/s, Vs = 600 m/s, and finally Vs = 1350 m/s.
An example of the displacement time histories for the left and right structures at the 10th story under the four different soil types is shown in Fig. 7. It can be seen from Figs. 6 and 7 that the displacements of the left building (0.21 m) and of the right building (0.20 m) are relatively similar when the buildings are constructed on very soft soil, see Fig. 7(a). However, the results shown in Fig. 7(d) clearly demonstrate that the displacement of the left building (0.05 m) is greater than that of the right building (0.01 m) when they are constructed on rocky soil. In this case, the left building undergoes a large displacement resulting from structural pounding due to its lighter weight and greater flexibility. On the other hand, the right building, being stiffer, is less affected by such displacements.
Fig. 8 shows the pounding force time histories for the two colliding buildings under different soil types at the 1st story, the 5th story, and the top floor. The results presented in this figure indicate that the highest pounding force is obtained at the top floor level, while the lowest pounding force is observed at the 1st story. For example, for buildings built on very soft soil (Vs = 125 m/s), the pounding force at the top floor is 2.92 × 10⁶ N, while at the 1st floor it is 6.44 × 10⁵ N, see Fig. 8(c).
The influence of the soil type was also investigated at this stage, and the results indicate that the buildings have significantly different structural responses under different site conditions. It can be seen from Fig. 8 that the pounding force of the colliding buildings decreased with the increase in the soil shear wave velocity at all floor levels of the structures. Buildings founded on soil with a shear wave velocity of Vs = 125 m/s produced the highest pounding force, followed by Vs = 300 m/s, Vs = 600 m/s, and finally Vs = 1350 m/s. This signifies that buildings on very soft soil are the most affected by the collision.
Results of the probabilistic analysis
The probabilistic analysis has been performed with probabilistic parameters following the distribution laws defined by their mean and coefficients of variation (CoV), or by their mean, minimum, and maximum values, as shown in Table 1. Moreover, 10,000 simulations have been performed to establish the probability curves (Figs. 9 and 10). The effect of the variability of structural parameters, soil parameters, and seismic action on the behavior of the colliding buildings has been discussed in this analysis.
Fig. 9 compares the influence of the variability of structural parameters, soil parameters, and seismic action on the displacement probability of the two colliding buildings under different soil types at the 1st story, the 5th story, and the 10th story. The results presented in this figure indicate that the probability of displacement at the level of the top floor (0.4 m) is higher than that of the other floors, whereas the lowest probabilities of displacement have been observed on the first story of both adjacent buildings.
The effect of soil type has also been studied at this stage. The results show that the dynamic responses of the colliding buildings are significantly different for structures built on different soil types. Buildings built on very soft soil have the highest displacement probabilities, followed by those built on soft soil, hard soil, and rocky soil, in that order. The comparison between Figs. 9(a) and 9(b) shows that the left building, which is lighter and more flexible, is more susceptible to higher displacement probabilities compared to the right building, which is stiffer. This is very clear for buildings constructed on rocky soil, where the displacement probability at the 10th story is 0.13 m for the left building and 0.04 m for the right building.
Fig. 10 compares the influence of the variability of structural parameters, soil parameters, and seismic action on the probability of the pounding force under different soil types at the 1st story, the 5th story, and the top story. The results show that the top floor has a higher probability of pounding force (6 × 10⁶ N) than the other floors, while the 1st story has the lowest probability of pounding force. The results also show that the dynamic responses of the colliding buildings are significantly different for structures built on different soil types.
The highest pounding probabilities have been obtained for buildings built on very soft soil, followed by buildings built on soft soil, then buildings built on hard soil, and finally buildings built on rocky soil, see Fig. 10.
The comparison between the results obtained from the deterministic analysis (Figs. 6, 7, and 8) and the probabilistic analysis (Figs. 9 and 10) indicates that the probabilistic analysis provides more accurate results since it takes into account different types of uncertainties, such as structural parameters, soil parameters, and seismic action. In contrast, deterministic analyses tend to underestimate the results by about 50%, see Table 2.
The analyses agree on the observation that the top floor experienced the highest level of impact during the collision. Moreover, the 1st floor is less affected by structural pounding when d = 3 cm, and the soil type significantly influences the dynamic responses during collisions. Additionally, the lighter structure is more susceptible to the effects of structural pounding.
Conclusions
This paper investigates the effect of the variability of structural parameters, soil parameters, and seismic action on the responses of two colliding buildings of the same height with different structural characteristics, taking into account SSI. The study is conducted in two cases: probabilistic analysis and deterministic analysis. First, the deterministic analysis is considered with deterministic parameters, where only the mean values of the different parameters are accounted for. Then, the probabilistic analysis is performed with probabilistic parameters following the distribution laws defined by their mean and coefficients of variation (CoV), or by their mean, minimum, and maximum values. Both analyses lead to the same conclusions. However, the probabilistic analysis provides a more precise estimation of the behavior of the colliding buildings than the deterministic analysis.
The most important results of this work are as follows:
• The dynamic responses of the adjacent structures are higher at the top level, which indicates that the top floor is the most affected by the structural pounding.
• The soil type considerably affects the behavior of colliding buildings. It has been observed that an increase in the soil shear wave velocity reduced the structural responses. Also, pounding is more severe in buildings built on very soft soil, followed by those on soft soil, hard soil, and rocky soil.
• Constructing on very soft soil is a highly challenging task that requires careful and detailed study.
• The behavior of each building, when subjected to structural pounding, is significantly influenced by the structural characteristics of the adjacent buildings, including their masses, stiffnesses, and damping factors.
• The study of the seismic behavior of colliding buildings requires a good knowledge of the structural parameters, the soil parameters, and the seismic action; neglecting the variability of these parameters may underestimate the structural responses, which may lead to erroneous results.
Model of the adjacent structures and the SSI
The adjacent structures are typically modeled as multi-degree-of-freedom (MDOF) systems, with lumped masses concentrated at the levels of their floors, as shown in Fig. 2. Two 10-story buildings with different dynamic properties are considered, where the masses, stiffnesses, and damping coefficients of the left building are represented by m1L, m2L, m3L, ..., m10L; k1L, k2L, k3L, ..., k10L; and C1L, C2L, C3L, ..., C10L, respectively, whereas those of the right building are represented by m1R, m2R, m3R, ..., m10R; k1R, k2R, k3R, ..., k10R; and C1R, C2R, C3R, ..., C10R.
Fig. 2 Model of the two colliding buildings
Fig. 3 Model of the two colliding buildings considering SSI
Fig. 4 Model of pounding force
Fig. 5 Description of the selected structure; (a) Plan view, (b) Elevation view, (c) Geometric characteristics for column and foundation
Fig. 6 Peak story displacement at different floor levels of the left and right buildings under different soil types; (a) left structure, (b) right structure
Fig. 7 Displacement time histories for the left and right buildings under different soil types; (a) Vs = 125 m/s, (b) Vs = 300 m/s, (c) Vs = 600 m/s, (d) Vs = 1350 m/s
Fig. 8 Pounding force time histories for the two colliding buildings under different soil types at the 1st story, the 5th story, and the top floor
Fig. 9 The displacement probability of the two colliding buildings under different types of soil at the 1st story, the 5th story, and the 10th story; (a) Left building, (b) Right building
Fig. 10 Probability of the pounding force between colliding buildings under different soil types; (a) at the 1st story, (b) at the 5th story, (c) at the 10th story
Table 2 Comparison of the results obtained from the deterministic and probabilistic analyses
Table 1 The probabilistic properties of variables | 2023-09-28T15:22:38.856Z | 2023-09-26T00:00:00.000 | {
"year": 2023,
"sha1": "77e838bae8d68a7fa0ee0c5c87a9504fb90e35be",
"oa_license": null,
"oa_url": "https://pp.bme.hu/ci/article/download/22921/9908",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a232794063738d15b05dcfec4c3677a7ff481fe8",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
114937846 | pes2o/s2orc | v3-fos-license | Resilient Modulus Characterization of Compacted Cohesive Subgrade Soil
Soil investigations concerning cyclic loading focus, in particular, on the evaluation of design parameters such as the elastic modulus, Poisson's ratio, or resilient modulus. Structures subjected to repeated loading are vulnerable to high deformations, especially when subgrade soils are composed of cohesive, fully saturated soils. Such subgrade soils in the eastern part of Europe have a glacial genesis and are a mix of sand, silt, and clay fractions. The characteristics of, e.g., the Young modulus variation and the resilient modulus from repeated loading tests are presented. Based on the performed resonant column and cyclic triaxial tests, an analytical model is proposed. The model takes into consideration actual values of the effective stress p′, as well as the loading characteristics and the position of the effective stress path. This approach results in a better characterization of pavement or industrial foundation systems based on the subgrade soil in undrained conditions. The recoverable strains characterized by the resilient modulus Mr value in the first cycle of loading were between 44 MPa and 59 MPa for a confining pressure σ′3 equal to 45 kPa, and between 48 MPa and 78 MPa for σ′3 equal to 90 kPa. During cyclic loading, cohesive soil at first degrades. When the pore pressure reaches equilibrium, the resilient modulus value starts to increase. The above-described phenomena indicate that, after the plastic deformation caused by excessive load and the dissipation of excess pore water pressure, the soil becomes resilient.
Introduction
The structures which are subjected to dynamic and repeated loads are mostly industrial foundations, railroads, and pavement. Such structures are based on occasionally highly plastic soils, where their origin is connected with a glacier. Such cohesive soils can be found in central and northeastern parts of Europe.
Bituminous pavements are based on rigid and granular layers, to provide the optimal distribution of traffic loads [1]. Nevertheless, soft cohesive subgrade soils, even under low loading conditions and after improvement, still develop some deformations [2].
The uneven settlement or rutting which can be observed as a premature exceeding of the serviceability limit state is caused by such deformation in the subgrade and, therefore, in the sub-base layer. The deformation of soils under dynamic and cyclic loading conditions is important to study [2][3][4][5].
The small-strain shear modulus follows from Equation (1):

G = ρ·Vs², (1)

in which ρ stands for the soil density and Vs stands for the shear wave velocity. The shear wave velocity is estimated using, e.g., the bender element test (BE). The bender elements are piezoelectric cantilever strips, which are placed at the top and the bottom of the soil specimen. The electric signal produces compressional (P) and shear (S) waves. The wave produced by the bender element propagates through the soil sample and induces a voltage in the second bender element. The wave propagation data recorded by the emitter and receiver of the bender element as a function of time lead to estimating the shear and compression wave velocities [28].
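A minimal sketch of this data reduction is given below; the specimen length, travel time, and density are illustrative values, not measurements from this study.

```python
def shear_modulus_from_bender_element(tip_to_tip_length, travel_time, rho):
    """Shear modulus from a bender element test (a sketch): the S-wave
    velocity is the tip-to-tip distance divided by the first-arrival
    travel time, and G follows from Eq. (1)."""
    Vs = tip_to_tip_length / travel_time   # shear wave velocity [m/s]
    return rho * Vs ** 2                   # Eq. (1), G in Pa

# Illustrative numbers only: 0.14 m specimen, 0.9 ms first arrival,
# density of the compacted sandy clay taken as ~2090 kg/m3.
print(shear_modulus_from_bender_element(0.14, 0.9e-3, 2090.0) / 1e6, "MPa")
```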
The repeated loading conditions in which plastic strain occurs take place in the intermediate and large strain zones. The resilient modulus Mr is based on elastic theory, although subgrade materials themselves are not elastic. If the load is small enough, after a large number of repetitions the soil can behave in an elastic manner; however, the deformation is nearly fully, but not fully, recoverable [29][30][31][32]. The phenomenon of plastic deformation decreasing during cyclic loading is connected with the shakedown concept, while the state after numerous cycles, in which no permanent deformation occurs, is called the "resilient state". This specific elastic state is characterized by the resilient modulus Mr. The influence of factors such as confining pressure, deviator stress, moisture, and saturation degree on the resilient modulus value was reported by many studies [33][34][35][36][37][38]. The conditions under which the soil subgrade works are characterized by long-term repeating loads.
The Mr value is calculated using Equation (2):

Mr = Δσ/Δεr, (2)

in which Δσ stands for the deviator stress and Δεr stands for the recoverable strain. The resilient modulus value can be obtained by repeated loading triaxial tests of the tested soil. Numerous methods and numerical models have been proposed in order to obtain the Mr value [39,40]. One of them is the k-θ model, called the "Uzan-Witczak model", which describes the resilient modulus under varying confining pressure [41]. This model is applicable to various types of soil, and its coefficients (k1, k2, and k3) for a certain type of soil remain the same regardless of the stress state. The Uzan-Witczak model is presented in Equation (3):

Mr = k1 · pa · (θ/pa)^k2 · (qmax/pa)^k3, (3)

in which θ stands for the bulk stress θ = σx + σy + σz = 3σc + Δq, where σc is the confining pressure and Δq is the stress magnitude; pa stands for the atmospheric pressure (normalizing factor); qmax is the stress deviator equal to qmax = σ1 − σ3; and k1, k2, and k3 are regression constants that are a function of the soil properties [42]. The resilient modulus value of granular materials depends on several parameters, among which the most important are the stress level, confining pressure, and moisture content; the Mr value decreases as the water content increases [43].
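The two relations can be applied directly, as in the sketch below; the regression constants k1–k3 and the stress inputs are hypothetical placeholders, since the fitted values belong to the paper's tables rather than this excerpt, and Eq. (3) is used in the reconstructed form given above.

```python
def resilient_modulus(delta_sigma, delta_eps_r):
    """Eq. (2): resilient modulus as deviator stress over recoverable strain."""
    return delta_sigma / delta_eps_r

def uzan_witczak(theta, q_max, k1, k2, k3, p_a=101.325):
    """k-theta (Uzan-Witczak) model, Eq. (3) as reconstructed above;
    stresses in kPa, p_a is the normalizing atmospheric pressure."""
    return k1 * p_a * (theta / p_a) ** k2 * (q_max / p_a) ** k3

# Hypothetical inputs: sigma_c = 45 kPa confinement, delta_q = 30 kPa,
# and illustrative regression constants k1-k3.
theta = 3 * 45.0 + 30.0                  # bulk stress [kPa]
print(uzan_witczak(theta, q_max=30.0, k1=500.0, k2=0.5, k3=-0.1), "kPa")
```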
The Mr value under given, constant physical soil conditions changes with the strain level. A method for describing this phenomenon is the degradation curve, which presents the ratio of the modulus at a given strain level to the maximal material modulus [44].
The difference between the elastic moduli in the small-strain and plastic zones is presented in Figure 1. In this article, the resilient modulus Mr characteristic for cohesive soils is presented, and a new analytical model for resilient modulus calculation is proposed. The analytical model takes into account actual values of the effective stress p′, the excess pore water pressure, the loading characteristic (for example, qmax), and the position of the effective stress path. The proposed model describes more exactly the phenomena of modulus development under cyclic undrained conditions. The deformation characteristics of cohesive soil, which is common as glacial till in Northern and Eastern Europe, are presented. Tests performed on this type of soil are rather rare.
To characterize the stiffness change from small to large strains, resonant column (RC) and torsional shear (TS) tests in the small strain range were performed with the purpose of estimating the Young modulus E degradation curve, while the intermediate and large strain zones were characterized by the conducted cyclic triaxial tests (CTRX). Such tests are among the pioneering methods for this type of soil.
The impact of cyclic loading in the plastic zone was investigated, and the occurrence of a quasi-elastic response after numerous repetitions was studied. During the unloading stage, the hysteresis curve can be characterized by a tangent resilient modulus value, in this study called Mr max. Mr max characterizes the elastic response during the first phase of unloading, in which the modulus value is the greatest.
Materials and Methods
For the tested material, a cohesive soil, standard laboratory tests were conducted to classify the soil and determine its physical properties. The tests consisted of particle size analysis, consistency limits, and the Proctor compaction test.
The particle size analysis led to recognising the cohesive soil as a sandy clay (saCl), by performing tests based on sieve and aerometric analysis (the Bouyoucos method with a modification made by Casagrande), in accordance with the EUROCODE 7 [45] standard. Test results are shown in Figure 2.
In terms of the World Reference Base for Soil Resources (WRB), the tested soil can be recognised as an albeluvisol (AB). The soil was deposited by glacial processes. The sandy clay in natural conditions is unconsolidated glacial till. This type of soil shows no stratification.
The liquid limit and plasticity limit tests were conducted in accordance with [46]. On the basis of six sets of tests using the Casagrande apparatus with varying moisture content, the liquid limit LL was established to be 18.9%, classifying this soil as a clay with low plasticity. The plasticity limit PL was equal to 10.3%.
The optimum moisture content test was conducted using the AASHTO T99 [47] procedure. It was achieved by compaction in the Proctor mold, whose volume is equal to 2.2 dm³. A standard compaction energy, equal to 0.59 J/cm², was used. The optimum moisture content for the sandy clay was equal to 10.2%, and the maximum dry density at optimum moisture content reached a value of 2.09 g/cm³. Table 1 presents the results of the physical and mechanical tests conducted on the sandy clay.
Bender element (BE) and torsional shear (TS) tests in a resonant column apparatus led to the estimation of the shear modulus, the Young modulus, and, finally, the Poisson ratio. After determining the properties of the soil, several series of triaxial tests under cyclic loading conditions were conducted. The tests were performed on sandy clay compacted at optimal moisture conditions in accordance with the Proctor method. The maximum dry density of the soil samples was equal to 2.09 g/cm³.
In this paper, two kinds of tests were run: one using the RC device and the second using the CTRX apparatus. The tests were conducted with an attempt to maintain similar testing conditions during both measurements. The initial effective confining pressure σ′3 was set to 45 kPa, 90 kPa, and 135 kPa in all of the conducted tests. The examined soils were initially saturated, and the B values measured in the triaxial specimens exceeded 0.95, which means full saturation of the sandy clay specimens. Subsequently, the tested specimens were consolidated to the set state of stress σ′3.
Repeated loading triaxial (CTRX) tests were carried out with a triaxial apparatus from GDS instruments (GDS, Hampshire, UK). The device is suitable for cylindrical soil specimens of 7 cm in diameter and 14 cm in height. Samples were fully saturated, and a B-value equal to, or greater than, 0.95 was assured at each measurement.
Specimens were then subjected to isotropic effective confining pressures of 45 kPa, 90 kPa, and 135 kPa and consolidated. The cyclic-test procedure consisted of applying an average deviator stress value qm superimposed on a pulsating sine wave with constant stress amplitude qa. Details of the experimental design are shown in Table 2. The repeated loading triaxial tests were conducted under consolidated-undrained (CU) conditions. The frequency used during the tests was equal to 1.0 Hz.
The tests were performed in a multistage manner. After the first series of tests (10⁵ cycles), further stages were conducted, each characterized by its own deviator stress q values.
The cyclic stresses and initial confining pressure levels were used to define the effects of cyclic loading on soil behavior.
The resonant column has a fixed-free configuration. The specimen is fixed to the pedestal at the bottom end, and the other end is connected to the drive plate, while the top cap remains free. This system is provided with a testing unit (testing chamber), control computer, back pressure system, cell pressure controller, resonant column controller, and a data acquisition box [48][49][50].
Immediately after the first mode is found, the measurements of the resonant frequency (Fr) and of the vibration amplitude are made. The sinusoidal torsional vibration at variable frequency is applied in a rotary manner by a device which causes such excitations. Subsequently, these measurements are combined with the specimen size and equipment characteristics in order to determine the shear wave velocity (Vs), the shear modulus (G), and the shearing strain amplitude (γ). Based on elastic wave propagation, the fundamental data-reduction equation, Equation (4), can be established:

I/I₀ = (ωr·L/Vs)·tan(ωr·L/Vs), (4)

in which I and I₀ stand for the moments of inertia of the specimen and the driving system, respectively; ωr stands for the natural frequency of the system; and L stands for the length of the sample. In this study, specimens of a typical size were used, i.e., 70 mm in diameter and 140 mm in height. Upon calculating the shear wave velocity, the shear modulus could be computed from Equation (1). The RC apparatus can perform resonant column, TS, and BE tests on the same specimen without any change of the device settings. During TS tests, the sample was subjected to small cyclic torsional motion by a coil-magnet system in the RC. The shear stress was calculated from the torque generated this way. The shear strain levels were determined from the twist angle of the soil sample, as measured by a proximitor. The shear strain was controlled by applying a voltage between 0.004 V and 1 V to the coils, which generated shear strains between 0.0001% and 0.003%.
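A sketch of solving the reconstructed frequency equation for Vs is given below, assuming the first torsional mode of the fixed-free configuration; the resonant frequency and inertia ratio are illustrative inputs, not measurements from this study.

```python
import math
from scipy.optimize import brentq

def vs_from_resonance(f_r, L, I_ratio):
    """Shear wave velocity from the fixed-free resonant column relation
    I/I0 = beta * tan(beta), beta = omega_r * L / Vs (Eq. (4) as
    reconstructed above). I_ratio = I/I0; the root is sought in the
    first-mode interval (0, pi/2)."""
    omega_r = 2.0 * math.pi * f_r
    f = lambda beta: beta * math.tan(beta) - I_ratio
    beta = brentq(f, 1e-6, math.pi / 2 - 1e-6)  # root of the frequency equation
    return omega_r * L / beta

# Illustrative inputs only: 60 Hz resonance, 0.14 m specimen, I/I0 = 0.05.
Vs = vs_from_resonance(60.0, 0.14, 0.05)
print(Vs, 2090.0 * Vs ** 2 / 1e6)  # Vs [m/s] and G = rho*Vs^2 [MPa], Eq. (1)
```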
The TS tests were conducted with the application of a sinusoidal load wave with frequencies of 0.1, 1 and 10 Hz. The shear modulus G and damping ratio D were estimated for all three frequencies.
For measurements at 0.1 Hz and 1 Hz, ten cycles were taken into account for G and D calculations. For the 10-Hz frequency, 100 cycles were used. The range of the tested amplitudes varied between 0.005 V and 0.6 V.
The properties of the sandy clay samples are summarized in Table 3. The descriptive statistics cover mass, dimensions, and basic physical parameters. The results of the standard deviation calculations show a high repeatability of the dry density and moisture content. The differences in sample dimensions are caused by the compaction technique in the Proctor mold designed for samples used in triaxial tests.
Results and Discussion
The triaxial tests were performed, above all, to determine the resilient modulus Mr of sandy clay under changing test conditions. The second objective of the tests was to estimate the maximal value of the resilient modulus, Mr max, during the first stage of unloading. The permanent strain accumulation was also analyzed. The axial strain ε1 development during the tests conducted in the triaxial apparatus is presented in Figure 3a-c.
The strains observed during these tests indicate three possible ways of soil responding to such loads. The deformation characteristic with the number of cycles N is distinguished between stepwise failure, shakedown, and abatement [51]. The concept of deformation occurring during cyclic loading was later developed into the shakedown theory. The three possible categories of material response are: plastic shakedown, characterized by a rapid decrease of the plastic strain rate, followed by an equilibrium state in which fully resilient strains are observed; plastic creep, in which strain accumulates at a higher, gradually growing rate; and incremental collapse, in which strain accumulation grows continuously and leads to failure. The tests performed under a constant radial stress equal to σ′3 = 45 kPa, in which the deviator stress amplitude qa was equal to 5.30 kPa and the maximal deviator stress qmax was equal to 31.90 kPa, resulted in low strain accumulation and a shakedown response. The same permanent deformation characteristic was observed in the case of tests 1.2, 1.3, 2.1 to 2.4, and 3.1 to 3.5. The second kind of soil response to cyclic loading, plastic creep, was recognized in tests 1.4 to 1.6, 2.5 to 2.6, and 3.6 to 3.9, where the strain accumulation was higher and showed a growing tendency. The incremental collapse phenomenon was preceded by the occurrence of shakedown; plastic creep was not observed in Stages 1.7 and 2.7. The last phenomenon, incremental collapse, occurred in Stages 1.8 and 2.8, but not in the case of the tests at σ′3 equal to 135 kPa; here the strain accumulation displayed a characteristic growth, which led to failure.
All three phenomena were present in the performed tests. The characteristic mode of deformation development leads to various resilient modulus values and to the resilient response, which can be compared to the elastic phase in the small strain zone. Figure 4a-c shows the effective stress paths obtained during tests at radial stresses σ′3 equal to 45 kPa, 90 kPa, and 135 kPa, respectively.
The stress path plots show distinctly different mean effective stress paths during cycling and provide a tool to analyze the stress-path evolution. Such evolution happens as a result of pore pressure generation, which increases after numerous cycles and causes the movement of the stress path toward the critical state line. When the critical state is reached, the stress path moves opposite to the deviator stress axis. During cyclic loading (σ′3 = 45 kPa), the stress path moves toward the deviator stress axis in the first three stages. After the third stage of loading, the stress path starts to move in the opposite direction. The pore water pressure decreases, which was the result of the increase in the porosity value, and incremental collapse was observed. This process lasted during tests 1.4 to 1.6. Test 1.7 was characterized by a smaller increase of the maximal deviator qmax; during this test, the plastic strain rate was lower (see Figure 3a), as was the stress path rate. This phenomenon was caused by a maximal deviator stress lower than the critical state deviatoric stress in this condition. The same observation can be made when test 2.7 is analyzed. The conditions at σ′3 equal to 135 kPa show that the critical state is reached by the soil sample in Stage 3.9 (see Figure 4c).
Pore Pressure Analysis
During undrained tests, the pore pressure was not expected to dissipate due to leakage. This happened because drainage was kept steady by pressure and volume controllers. However, experiments have shown otherwise (see Figure 5a,b). Appl. Sci. 2017, 7, 370 10 of 20 The loading conditions of conducted cyclic triaxial tests which are called constant stress conditions, lead to a critical state but this state did not last during the time of the test, which can be observed as strain development during cycling. The strain rate decreases and after around 10 3 repetitions the purely resilient statistic can be observed.
Pore Pressure Analysis
During undrained tests, the pore pressure was not expected to dissipate due to leakage. This happened because drainage was kept steady by pressure and volume controllers. However, experiments have shown otherwise (see Figure 5a,b).
(c) (f) The pore water pressure develops in a similar scenario for the three radial stress test conditions. At tests 1.1, 2.1, and 3.1 the pore water pressure rises due to the first load cycle, which causes the greatest pore pressure build up. After this event, the pore water pressure rises during cyclic loading until the critical state is achieved. During tests 1.1 to 1.3, tests 2.1 to 2.5, and tests 3.1 to 3.8, The pore water pressure develops in a similar scenario for the three radial stress test conditions. At tests 1.1, 2.1, and 3.1 the pore water pressure rises due to the first load cycle, which causes the greatest pore pressure build up. After this event, the pore water pressure rises during cyclic loading until the critical state is achieved. During tests 1.1 to 1.3, tests 2.1 to 2.5, and tests 3.1 to 3.8, the change of the deviator stress value caused the response of the pore water pressure, raising the pressure. After the abovementioned tests, the increase of q max led to another behavior. In the first cycles the pore water pressure increases, which causes the development of plastic strain. This phenomenon is observed as a pore water pressure decrease due to changes in porosity.
When pore water pressure reaches the lowest value in this phase, a hardening process begins to occur. This phenomena can be recognized as the pore water pressure builds up.
Nevertheless, indirect conclusions can be drawn from the analysis of the accumulation of plastic strains, which are presented in Figure 3a-c.
Resonant Column Test Results
The maximal Young's modulus value was obtained by performing the torsional shear (TS) test and the resonant column apparatus (RCA) test. The frequency in the TS tests was equal to 1 Hz. The plot of the Young's modulus at different radial stresses is shown as a function of γ in Figure 6. The results show a low dependence of Emax on the radial stress value. The maximum value of the Young's modulus from the RCA and TS tests for radial stress σ'3 equal to 45 kPa was 135.5 MPa; for σ'3 equal to 90 kPa, Emax was 218.2 MPa.
Analysis of Resilient Modulus Value
The hysteresis loops were analyzed and the value of Mr was established for nineteen tests. The additional values of Mr max, which characterizes the modulus with the maximal slope on the stress-strain plot, were also calculated (see Figure 7). The purpose of these calculations was to evaluate the correlation between the maximal resilient modulus and Emax. The resilient modulus had a different value for each of the applied deviator stress levels, and the Mr max values were also different for each test.
The recoverable strains characterized by the resilient modulus Mr are presented in Figure 8a-c for the tests at σ'3 equal to 45 kPa, 90 kPa, and 135 kPa, respectively. The Mr value in the first cycle was between 44 and 59 MPa for confining pressure σ'3 equal to 45 kPa and between 48 and 78 MPa for σ'3 equal to 90 kPa. For σ'3 equal to 135 kPa, the Mr value was between 45.0 and 81.1 MPa. During cyclic loading, the resilient modulus value decreases up to around 10^3 cycles. Then, in the case of a plastic creep strain response, the Mr value increases for the rest of the test.
The decrease of the resilient modulus Mr was caused by strain development between 10^2 and 10^3 cycles. This can be observed as a decreasing Mr value in Figure 8a-c and indirectly in Figure 3a-c. After this stage, the Mr value increases. The reason for that is an increase of stiffness, which can be observed as smaller resilient strains in one cycle. During this phase, the stress path does not change its value (see Figure 5c,d), the equilibrium state is achieved, and no further changes of the resilient modulus value occur.
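For readers reproducing the basic reduction step, the resilient modulus of a single load cycle is the cyclic deviator stress amplitude divided by the recoverable (resilient) axial strain. The short Python sketch below illustrates this definition on synthetic numbers; the array names and values are illustrative assumptions, not the authors' processing code.

```python
import numpy as np

def resilient_modulus(q, eps_axial):
    """Resilient modulus of one load cycle: deviator stress amplitude
    divided by the recoverable (resilient) axial strain.

    q         : deviator stress history within the cycle [kPa]
    eps_axial : axial strain history within the cycle [-]
    Returns Mr in MPa.
    """
    dq = q.max() - q.min()                     # cyclic deviator stress amplitude [kPa]
    eps_r = eps_axial.max() - eps_axial.min()  # strain recovered over the cycle [-]
    return (dq / eps_r) / 1000.0               # kPa -> MPa

# Illustrative, fully recoverable cycle: 60 kPa amplitude, 0.1% strain -> Mr = 60 MPa
q = np.array([0.0, 30.0, 60.0, 30.0, 0.0])
eps = np.array([0.0, 0.0005, 0.001, 0.0005, 0.0])
print(f"Mr = {resilient_modulus(q, eps):.1f} MPa")
```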
The differences in the Mr value are clearly dependent on the deviator stress levels, which cause a decrease of the soil strength in the further steps of the test. The deviator stress also causes different responses of the soil. For the tests conducted at radial stress σ'3 equal to 45 kPa, 90 kPa, and 135 kPa, the first-cycle, last-cycle, and average Mr values are summarized in Figure 9 (test stages as defined in Table 2).
Figure 9. Resilient modulus value of the tested sandy clay for the first and last cycles, and the average value, at radial stress σ'3 equal to (a) 45 kPa, (b) 90 kPa, and (c) 135 kPa, as defined in Table 2.
Analytical Model for M r Calculation
Calculations of the average resilient modulus value (Mr avg) and the analysis of the effective stress paths (see Figure 4a-c) led to the estimation of a function describing the change of the resilient response of the tested soil as a function of the maximal deviator stress in the actual mean effective stress conditions.
The stress paths' critical state line for the tests in all radial stress σ'3 conditions was employed for the model. The inclination M of the critical state line, i.e., the ratio of the deviator stress q to the mean effective stress p' at the critical state, was calculated based on Equation (5):
M = q / p' (at the critical state), (5)
The M parameter is equal to 2.4. To characterize how distant the actual maximal deviator stress is from the deviator stress at the critical state, the T parameter is introduced. The maximal deviator stress qmax(p') is the value of deviator stress which must occur, at the actual mean effective stress p', to reach the critical state. Equation (6) presents the abovementioned parameter:
qmax(p') = M · p', (6)
The T parameter, therefore, is simply the difference between qmax(p') and qmax, the maximal deviator stress applied in the actual mean effective stress conditions, as given by Equation (7):
T = qmax(p') − qmax. (7)
The T parameter change is presented in Figure 10. The figure plots the value of the resilient modulus and the T parameter in 3D space. Note that the resilient modulus is almost independent of the number of cycles. Additionally, the Mr value corresponds to the value of the T parameter. This behavior is very similar for the tests at effective confining pressure σ'3 equal to 45 kPa, 90 kPa, and 135 kPa. This fact strongly suggests that the change of Mr can be modeled in a reasonable manner through an equation involving the T parameter and the number of cycles N.
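As a numerical illustration of the reconstructed Equations (6) and (7) (the reconstruction above follows the prose definitions; the stress states in the sketch are invented, not taken from the test program):

```python
M = 2.4  # inclination of the critical state line reported for the tested sandy clay

def q_max_cs(p_eff):
    """Deviator stress needed to reach the critical state at mean effective stress p' [kPa] (Eq. 6)."""
    return M * p_eff

def t_parameter(p_eff, q_max_applied):
    """Distance of the applied maximal deviator stress from the critical state (Eq. 7) [kPa]."""
    return q_max_cs(p_eff) - q_max_applied

# Illustrative stress states only:
for p_eff, q_applied in [(45.0, 60.0), (90.0, 150.0), (135.0, 240.0)]:
    t = t_parameter(p_eff, q_applied)
    print(f"p' = {p_eff:5.1f} kPa  q_max = {q_applied:5.1f} kPa  T = {t:5.1f} kPa")
```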
Based on the test results, the selection of parameters to formulate the resilient modulus equation should be conducted carefully. On the one hand, the formula must have its limitations, and the results derived from its use must be carefully interpreted. On the other hand, the use of a resilient modulus formula whose parameters differ significantly for each test condition results in different Mr values.
According to the previously presented test results, the level of the resilient modulus in cohesive soil subjected to cyclic loading can be estimated through the combination of the number of load repetitions and the T parameter.
The linear formula based on these two parameters and on constants which characterize the cyclic loading conditions is presented in Equation (8). The χ parameter is termed the amplitude ratio (MPa) and can be calculated based on Equation (9), in which Δu1 is the excess pore water pressure in the first cycle in MPa. The k1, k2, and k3 indicators are material constants. Figure 11 summarizes the target resilient modulus obtained during the tests and the Mr value calculated based on Equation (8).
The resilient modulus was calculated from Equation (8) and confronted with the Uzan-Witczak model, for which the resilient modulus Mr was calculated based on Equation (2). The results show that the resilient modulus calculated based on Equation (8) (constants k1 = 43, k2 = 10, k3 = 0.2) better fits the obtained data. The Uzan-Witczak model parameters were fitted for this study in each test. The k1, k2, and k3 parameters were equal to 0.19, 0.91, and −0.45, respectively, for σ'3 equal to 45 kPa; 0.15, 0.91, and −0.45 for σ'3 equal to 90 kPa; and 0.4, 0.1, and −0.31 for σ'3 equal to 135 kPa. The results of the resilient modulus calculation also show that, if we consider a 10% error level for the Mr estimation, in almost all cases the resilient modulus is within this range.
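Equation (2) appears earlier in the paper and is not reproduced here. As a hedged illustration, the sketch below evaluates the widely cited Uzan (1985) form, Mr = k1·pa·(θ/pa)^k2·(σd/pa)^k3, with θ the bulk stress, σd the deviator stress, and pa atmospheric pressure; whether the fitted constants quoted above belong to exactly this normalization is an assumption.

```python
PA = 101.325  # atmospheric pressure in kPa, used to normalize the stresses

def uzan_mr(theta, sigma_d, k1, k2, k3, pa=PA):
    """One common Uzan (1985) form: Mr = k1 * pa * (theta/pa)**k2 * (sigma_d/pa)**k3.
    theta: bulk stress sigma_1 + 2*sigma_3 [kPa]; sigma_d: deviator stress [kPa]."""
    return k1 * pa * (theta / pa) ** k2 * (sigma_d / pa) ** k3

# Constants reported in the text for sigma'_3 = 45 kPa (normalization assumed):
k1, k2, k3 = 0.19, 0.91, -0.45
sigma_3, q = 45.0, 60.0          # an illustrative stress state, not a test value
theta = 3.0 * sigma_3 + q        # bulk stress under triaxial conditions
mr = uzan_mr(theta, q, k1, k2, k3)
print(f"Mr ~ {mr:.0f} (in MPa under this normalization)")  # ~44, within the measured range
```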
The proposed resilient modulus model, as well as the Uzan-Witczak model, can be exploited in the analysis and design of pavement systems. The analytical model is motivated by the observation of the cohesive soil response to cyclic loading. It should be noted that the Uzan-Witczak model was created for unbound granular materials and, therefore, its resilient modulus calculation results need not be the same as the test results.
The analytical model presented above takes into consideration the actual values of effective stress p' (through the T parameter), the actual excess pore water pressure in reference to the initial conditions before cyclic loading Δu1, the loading characteristics (qmax and χ), and the position of the effective stress path (the T parameter). This approach results in a better characterization of pavement or industrial foundation systems where undrained conditions in the subgrade soil may occur.
Maximum Resilient Modulus Value Analysis
The change of the maximum resilient modulus Mr max for σ'3 = 45 kPa and σ'3 = 90 kPa versus axial strain is presented in Figure 12a,b. The maximum resilient modulus Mr max follows the same pattern during the tests for σ'3 equal to 45 kPa and 90 kPa. In the first cycles of tests 1.1 and 2.1, Mr max degraded to a constant value after around 20-30 cycles. This occurrence is caused by excess pore pressure generation. When the pore pressure reaches a constant value, Mr max stabilizes and remains constant until the end of the test. The interval between the maximal and minimal Mr max values is widest in the case of the tests in the small strain zone. When the stress path moves towards the critical state path, the response of the soil changes.
The soil material in the critical state, at first, degrades. After this stage, when the excess pore water pressure caused by a new amount of loading is dissipating, the maximum resilient modulus Mr max remains almost constant up to around 1 × 10^3 to 2 × 10^3 cycles. When the pore pressure reaches equilibrium, the maximal resilient modulus value starts to increase. Indirect conclusions can be drawn from Figure 5a-d. The above-described phenomenon indicates that, after the plastic deformation caused by the excessive load occurs and the excess pore water pressure dissipates, the soil becomes resilient and the maximal resilient modulus Mr max starts to increase.
The maximal resilient modulus value in the third stage is characterized by a wide interval. This occurrence is caused by the small strains in this range, for which measurements are difficult to obtain in the triaxial cell.
Nevertheless, Figure 12a,b show a comparison of the resonant column test results and the cyclic triaxial test results. The degradation curve, which shows how the Young's modulus E degrades during the shearing test, fits well with the maximum resilient modulus during the second stage of the above-described phenomena.
Degradation of the soil's Young's modulus value is caused by greater stress or strain amplitude. In the cyclic triaxial tests, for the maximum resilient modulus, the strain amplitude is greatest during the first stage of the test, where the pore pressure rises and plastic strains occur. Therefore, the proper characteristic of the soil's Young's modulus degradation, in the case of the performed cyclic triaxial tests, is evident only for the first stage of tests where no critical state occurs, or for the second stage of tests where the critical state was noted.
Conclusions
The geotechnical design of pavement constructions needs to take into account the deformation properties of soils. Fundamental geotechnical concepts, like the strain level or the deviator stress quantity, should be taken into consideration. The results presented in this paper illustrate the cohesive soil resilient modulus Mr and the maximum resilient modulus Mr max. The test results lead to the following conclusions:
1. The strains observed during the tests indicate three possible ways in which the soil responds to cyclic loading. Under low deviator stress amplitudes, low strain accumulation is observed. At intermediate deviator stress amplitude levels, the strain accumulation was higher, and a growth of the strain accumulation can be observed. The plastic strain accumulation presents a characteristic growth tendency, which is caused by pore pressure development.
2. During the cyclic triaxial tests, the stress path moves towards the deviator stress axis in the first few stages. When the critical state of the soil is achieved, the stress path starts to move in the opposite direction. The critical state did not last for the entire duration of the test, which can be observed as plastic strain development during cycling. The strain rate decreases and, after numerous cycles, a purely resilient state can be observed.
3. The pore water pressure develops in a similar manner for the three radial stress test conditions. At the first stages of the tests, the pore water pressure rises due to the first load cycles, which cause the greatest pore pressure build-up. Later, the pore water pressure develops through further stages of cyclic loading until the critical state is achieved. When the pore water pressure reaches the equilibrium state, a hardening process begins.
4. The maximum value of the Young's modulus from the RCA and TS tests for radial stress σ'3 equal to 45 kPa was 135.5 MPa, and for σ'3 equal to 90 kPa, Emax was 218.2 MPa.
5. The recoverable strains are characterized by the resilient modulus Mr, whose value in the first cycle was between 44 and 59 MPa for confining pressure σ'3 equal to 45 kPa, between 48 and 78 MPa for σ'3 equal to 90 kPa, and from 44 to 81 MPa for σ'3 equal to 135 kPa. The decrease of the resilient modulus Mr was caused by plastic strain development.
6. The analytical model presented in this article takes into consideration the actual values of effective stress p' (through the T parameter), the actual excess pore water pressure in reference to the initial conditions before cyclic loading Δu1, the loading characteristics (qmax and χ), and the position of the effective stress path (the T parameter).
7. The maximum resilient modulus of the soil material was characterized. During cyclic loading, the soil first degrades; then, when Mr max reaches the plateau stage at around 1 × 10^3 to 2 × 10^3 cycles, the pore pressure reaches equilibrium and the maximal resilient modulus value starts to increase. | 2019-04-15T13:11:12.276Z | 2017-04-07T00:00:00.000 | {
"year": 2017,
"sha1": "892597de8940daf0eebb6c02d93dbfbeabdd12c8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/7/4/370/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "efcf6dd9926f59bc163e83790ba89eb6f9b0051f",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Engineering"
]
} |
228063810 | pes2o/s2orc | v3-fos-license | First ultraviolet outburst detected from ASASSN-18eh strengthens its interpretation as a cataclysmic variable
As part of the Transient UV Objects project, we have discovered a new outburst (at the beginning of October 2020) of the candidate cataclysmic variable (CV) ASASSN-18eh using the UV/Optical Telescope aboard the Neil Gehrels Swift Observatory. During the outburst its brightness increased by about 6 mag in UV compared to its brightness in the quiescent state. The properties of this outburst are consistent with it being a dwarf nova, strongly supporting the CV nature of ASASSN-18eh.
INTRODUCTION
Cataclysmic variables (CVs) consist of a white dwarf (WD) in a close orbit, typically with a period of a few hours (e.g. Knigge et al. 2011), usually with a red dwarf companion (e.g. Hillman et al. 2020). The companion overflows its Roche lobe and transfers material to the WD through an accretion disk. The accretion often is not stable but goes through episodic periods (lasting a few to a few tens of days) in which the accretion rate suddenly increases significantly, causing bright outbursts in between periods of quiescence (so-called dwarf novae, or DNe, which increase in V brightness by about ∼2−5 mag). See Lasota (2001) for a detailed discussion of DNe and the physics involved.
The source ASASSN-18eh has been suggested as a potential CV after an outburst, reaching a V magnitude of 16.1, was observed in February 2018 (Shappee et al. 2014; ASAS-SN team 2020). As part of our Transient UV Objects (TUVO) project, we have discovered the first outburst in the ultraviolet (UV) of the same source in early October 2020.
OBSERVATIONS & ANALYSIS
The Neil Gehrels Swift Observatory (Gehrels et al. 2004) has been designed to provide rapid multi-wavelength followup observations of gamma-ray bursts using, among others, its UV/Optical Telescope (UVOT; Roming et al. 2005). The observatory offers an excellent opportunity to study transients in the UV given its many repeated pointings of the same field (owing to its high flexibility), and freely accessible daily data supply (Gehrels et al. 2004).
As part of the TUVO project (Wijnands et al. 2021), we make use of the TUVOpipe pipeline to search for transients in the data from UVOT. During October 2020, a transient was detected by the pipeline at a position of RA, Dec (J2000) = 14:28:33.52, −46:11:26.5 (error of 5 arcsec; see Modiano et al. 2021), which is consistent with that of ASASSN-18eh, demonstrating that we detected another outburst of this source. This outburst was not reported by either the Zwicky Transient Facility (Bellm et al. 2019) or ASAS-SN (Shappee et al. 2014).
In total, UVOT has taken 36 exposures of ASASSN-18eh between December 2016 and October 2020. The observations are spread equally between the uw1, um2, and uw2 filters, which have central wavelengths of 2600, 2246, and 1928 Å, respectively. In nine of the observations (obsIDs 00084507003, 00084507007, and 00084507013; all filters), the source position was right at the boundary of a readout streak caused by a bright star in the field-of-view (FoV), and in another one (obsID 00084507001; only uw1) the source position was located at the edge of the detector and only partially captured. For this reason these exposures have been omitted from the analysis. The resulting total exposure times per filter are ∼975, ∼1284, and ∼1284 seconds for uw1 (8 exposures), um2 (9 exposures), and uw2 (9 exposures), respectively. A light curve of the source was made with the following steps: firstly, all images were aligned per filter, since UVOT pointings can sometimes be offset by a few arcsec (Poole et al. 2008). As the source extraction region we used a circular region with a radius of 5 arcsec, centered on the source position as determined by our pipeline. For the background extraction region we used a circular region with a radius of ∼20 arcsec in an area close to the target, but without any visible sources. All images were then manually inspected for any potential artifacts such as, for example, the previously mentioned readout streaks close to the source. Specialized UVOT tools included with the HEAsoft software package (Version 6.28) and HEASARC's calibration database (CALDB) system (UVOT version 20190101) were then used to analyse the data. Uvotsource was used to determine the magnitudes and fluxes of the source for each exposure. In individual exposures during quiescence the source was not detected. Therefore, all exposures during quiescence were stacked using uvotimsum in order to improve the detection limit, allowing us to detect the source in quiescence. The resulting stacked image for each filter was then again passed to uvotsource using the same source and background extraction regions.
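As a rough sanity check on the stacking step (an idealized estimate, not part of the authors' pipeline), for a background-limited point source the signal-to-noise ratio grows as the square root of the total exposure time, so co-adding N comparable exposures deepens the detection limit by roughly 1.25·log10(N) magnitudes:

```python
import math

def depth_gain_mag(n_exposures):
    """Approximate limiting-magnitude gain from co-adding n comparable,
    background-limited exposures: S/N ~ sqrt(t) => dm = 1.25 * log10(n)."""
    return 1.25 * math.log10(n_exposures)

# e.g. stacking the ~9 quiescent exposures available per filter:
for n in (4, 9, 16):
    print(f"stack of {n:2d} exposures -> ~{depth_gain_mag(n):.2f} mag deeper")
```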
RESULTS
Figure 1 shows the light curve of the source in the different filters. The first exposure that shows the source in outburst was taken at 21:21:36 UTC on 2 October 2020 and the last one at 00:28:48 UTC on 4 October 2020. This gives a lower limit on the outburst duration of just over a day. Unfortunately, no more observations were performed after 4 October 2020, since the field could no longer be observed due to Solar constraints. This means that the source was still active at the time of our last observation, but further constraints are not possible.
Using the image stacking approach discussed above, we were able to detect the source in quiescence and to measure its AB magnitude and flux density in each filter. The outburst therefore had an amplitude of at least ∼6 mag across all filters. It is also noteworthy that all subsequent measurements during the outburst show a decreasing brightness, which indicates that the peak brightnesses were likely higher than those measured.
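To translate the ≥6 mag amplitude into a flux ratio (a simple illustration using the standard magnitude definition, not a calculation from the paper):

```python
def flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude change delta_mag."""
    return 10 ** (0.4 * delta_mag)

# A >= 6 mag UV outburst corresponds to a brightening by a factor of roughly 250:
print(f"dm = 6 mag -> flux ratio ~ {flux_ratio(6.0):.0f}")
```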
CONCLUSIONS
We have presented observations of the first detected UV outburst of the candidate CV ASASSN-18eh, which was found during October 2020. The UV intensity during the outburst increased by at least ∼6 mag, with a minimum duration of just over a day. Unfortunately, more observations during the October 2020 outburst could not be obtained because of Solar constraints. The next time observations can be obtained is in late December 2020, by which time it is expected that the source will have decayed into quiescence again. The maximum UV brightnesses we measured during the October 2020 outburst are slightly higher than the brightness in V measured by ASAS-SN during the outburst at the end of February 2018. This is consistent with DNe emitting a large fraction (if not most) of their energy in the UV (e.g. Giovannelli 2008; Parikh et al. 2019), although we note that we might have missed the peak during the observations, making stringent inferences difficult. The previously known outburst (and so far the only one) of the source was in February 2018, meaning that the recurrence time is not more than 2.7 years, although it is very likely shorter because additional outbursts were probably missed by transient survey facilities. Typical recurrence times of DNe range from days to decades (Belloni et al. 2016), depending mostly on the mass ratio between the WD and its companion (Patterson 2011). These characteristics (i.e., outburst amplitude, duration, and recurrence time) are consistent with the outburst being a DN outburst of a CV, which would confirm the CV nature of the source.
"year": 2020,
"sha1": "239a4ab1f9752d491a81646492fb148d0f919abc",
"oa_license": "CCBYNCND",
"oa_url": "https://arxiv.org/pdf/2012.05060",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "239a4ab1f9752d491a81646492fb148d0f919abc",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
245145374 | pes2o/s2orc | v3-fos-license | Use of simulation to teach in the operating room – Don't let the COVID-19 pandemic interrupt education: an observational clinical trial
Background Simulation-based education has become the most important part of resident training in anesthesiology, especially during the pandemic. It allows learning the skills and the management of different situations without putting residents at risk of contamination, considering COVID-19 is highly contagious. The hypothesis was that simulation is still associated with improvement of knowledge acquisition despite the context of the COVID-19 pandemic. Methods Residents of anesthesiology and intensive care were subjected to an anaphylaxis simulation scenario. Their knowledge levels were assessed by true/false questions before and one month after the simulation session. The STAI test was used to measure anxiety levels before and after the scenario. Data were analyzed statistically using Wilcoxon and McNemar tests. Results Junior residents (< 2 years) received significantly higher scores in post-training theoretical tests compared to their pre-training scores (79.2 ± 9.6, 84.5 ± 8.2, p = 0.002, n = 21). There was no difference between pre- and post-test scores of seniors (80.2 ± 9, 81.8 ± 10.4, p = 0.3). Pre- and post-anxiety inventory scores were nearly the same and both were in the moderate group (39.8 ± 10.1, 39.3 ± 12.1, p = 0.8). Conclusion Simulation-based education improved the knowledge levels of the residents without raising anxiety levels. Thus, simulation-based training showed its value as an important tool of education during the pandemic, which needs to be further popularized for training at all institutions. Enlightening medical educators about this accomplished teaching method may lead to improved quality of medical education in developing countries and reshape how tomorrow's doctors are trained during pandemics.
Introduction
During the COVID-19 pandemic, the continuation of medical education programs was interrupted because of restrictions; thus, the simulation training method was introduced for education. Anesthesiologists must manage different types of emergencies. In an emergency, it is essential to make a quick decision and to perform interventions at the proper time. Using all the knowledge and skills gained during residency requires practice. Through technological development and high-fidelity mannequins, residents have the opportunity to learn the skills and the management of different kinds of emergencies before facing them in real patients. 1,2 Anaphylaxis is one of the rare but fatal emergencies. However, it is documented that morbidity is common after life-threatening allergic reactions and that the management of this emergency needs to be improved. 3 The University of Lyon is a highly experienced center for simulation training and has held master-class courses for instructors. As Ankara University educators, we attended one of these courses. We used an anaphylaxis scenario for in-situ simulation training in the operating room. The primary goal of this study was to evaluate the difference between the pre- and post-simulation knowledge test scores in order to evaluate the effectiveness of simulation training. The secondary goal was to examine whether it creates any anxiety in participants.
Subjects
This prospective, observational, single-center study was approved by the Ethics Committee of the Ankara University School of Medicine (Serial number: I4-166-19). After informed consent, 42 residents of the Department of Anesthesiology and Intensive Care were invited to the study, without taking their training levels into consideration. Two of them did not want to participate. The remaining 40 residents were randomly divided into seven groups of 5 or 6 residents each. Information on the seniority of the residents, the pre-test and post-test scores, and the anxiety levels measured by the State-Trait Anxiety Inventory before the session was collected.
Study design
Before the simulation session, all subjects undertook a pretest, including 20 theoretical true/false questions, assessing their basic knowledge about anaphylaxis mechanism and treatment strategies. The total score was 100, with 5 points for each question. Besides, subjects' anxiety levels were assessed by the State-Trait Anxiety Inventory (STAI). 4 STAI consists of two 20-item scales for measuring the intensity of anxiety as an emotional state (S-Anxiety) and individual differences in anxiety proneness as a personality trait (T-Anxiety). STAI scores are classified as "no or low anxiety" (20−37), "moderate anxiety" (38−44), and "high anxiety" (45−80). 5 Following this, based on their randomization, all subjects received a short scenario about anaphylaxis in the operating room (Supplemental file-1) using the Resusci Anne mannequin (Laerdal Medical, Stavanger, Norway). In-situ simulations were limited to 15-min duration, followed by a 30-min standardized debriefing to review technical skills, non-technical skills, and knowledge gaps. After the simulation training, residents were requested to complete STAI again.
Finally, they were requested to stop reading about anaphylaxis until they were assessed with a post-test, one month after the session with the same questions.
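The STAI banding quoted in the study design maps directly onto a small helper function (an illustrative sketch; the study itself did not publish analysis code):

```python
def classify_stai(score):
    """Classify a STAI state-anxiety score using the cutoffs quoted in the text:
    20-37 'no or low anxiety', 38-44 'moderate anxiety', 45-80 'high anxiety'."""
    if not 20 <= score <= 80:
        raise ValueError("STAI scores range from 20 to 80")
    if score <= 37:
        return "no or low anxiety"
    if score <= 44:
        return "moderate anxiety"
    return "high anxiety"

# The cohort's mean pre- and post-session scores both fall in the moderate band:
print(classify_stai(40))  # -> 'moderate anxiety'
```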
Statistics
Descriptive statistics for the categorical and continuous data were given as frequency (percentage) and median (minimum-maximum), respectively. Changes in the correct answer percentage regarding each question were evaluated using the McNemar test, and pre-post differences in the total score were compared with the Wilcoxon Signed Rank Test. All statistical analyses were performed with Statistical Package for Social Sciences (SPSS Version 15.0, Chicago, IL), and the level of statistical significance was set to 0.05.
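The authors performed these tests in SPSS; an equivalent analysis in Python (with invented score vectors and an invented 2×2 answer-change table, not the study data) could use scipy and statsmodels:

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Paired pre/post total scores (illustrative values only):
pre = np.array([75, 80, 70, 85, 90, 65, 80])
post = np.array([85, 85, 80, 85, 95, 75, 90])
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: p = {p:.3f}")

# Per-question change in correct answers, as a 2x2 table of
# [[correct->correct, correct->wrong], [wrong->correct, wrong->wrong]]:
table = np.array([[30, 2], [7, 1]])
result = mcnemar(table, exact=True)
print(f"McNemar: p = {result.pvalue:.3f}")
```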
Results
Forty residents were divided into two groups, seniors and juniors, according to their year of training; 2 years of training was accepted as the cut-off value for being a senior. Out of 40 subjects, 21 had a working experience of less than 2 years (Fig. 1).
The theoretical score improved from 79.7 ± 9.2 to 83.2 ± 9.3 (p = 0.04) between the pretest and the posttest. Junior residents (< 2 years) received significantly higher scores in the posttest compared to their pretest scores (79.2 ± 9.6 vs. 84.5 ± 8.2, p = 0.002). However, there was no significant difference between the pre- and post-test scores of seniors (80.2 ± 9 vs. 81.8 ± 10.4, p = 0.3) (Fig. 2). Juniors scored higher than seniors in the post-test (84.5 ± 8.2 vs. 81.8 ± 10.4, p = 0.236). Both state and trait STAI scores were calculated; however, only the state component is reported here, as a reflection of the anxiety experienced on the day of the simulation training. The pre-STAI-S score was 39.8 ± 10.1, while the post-STAI-S score was 39.3 ± 12.1. There was no difference between the pre- and post-state-trait anxiety inventory scores (p = 0.8) (Table 1).
Discussion
While there was an improvement in posttest scores compared to pretest scores, this increase was more significant for the junior residents. The posttest scores of the juniors were even higher than those of the seniors, while there was no significant difference between the pretest and posttest scores of the seniors. Moreover, the simulation training did not make any difference in anxiety scores.
Since March 2020, face-to-face medical education lectures, bedside visits, and hands-on practice in clinics had to be discontinued for a while due to the pandemic restrictions. Training programs had to be restructured according to the new normal, and simulation-based learning became much more important. 6 Faculties needed to determine a new road map for residents. 7,8 While the academic community is worrying about how to educate particularly those who have no experience in the operating room, this study may paint a promising picture.
Simulation-based training gives health care providers the opportunity to develop their skills to manage real-life cases in the hospital. 9 In this study, true/false questions were used to assess the efficacy of the simulation session, which was expected to improve the knowledge scores of the participants. In a recent study by Shailaja et al. from India, 10 a total of 22 anesthesia residents went through six scenarios. After that, they took pre- and post-simulation multiple-choice question tests, and the mean knowledge score improved by 51%, whereas the mean knowledge score from pretest to posttest improved by 4.3% in our study. Furthermore, in another study, by Etanaa et al. from Ethiopia, 11 non-physician anesthetists attended a 3-day course with nine simulation scenarios, and eventually the posttest scores improved by 16%. This difference in results may be related to the timing of the post-tests. While Shailaja et al. applied the test right after the simulation session, Etanaa et al. applied the test after the end of all 3-day sessions. In our study, we aimed to evaluate the impact of the simulation training method on long-term knowledge retention, so the questions were given one month after the simulation session. The increase in the scores of the junior residents demonstrated the success of this education modality in long-term learning.
Anxiety can be seen in those who have not participated in any simulation training before. Stein found that post-simulation STAI scores of emergency medical care students were significantly higher than pre-simulation scores in scheduled simulation assessments. 12 In our study, moderate anxiety was detected in the participants. The fact that none of them had participated in any simulation training before may have caused the moderate anxiety scores. In addition, the fact that this session was presented as training rather than evaluation may explain why this score did not increase.
During the pandemic, healthcare workers have a lot to bother about. Residents feel uneasy about patient safety, personal safety, and their education. 13 As educationists, one of our duties should be to protect our residents from burnout during a pandemic and keep them enthusiastic about learning. It will be wise if we use the simulation method to teach them without loading another stress factor during these difficult times.
As an outcome of our perfect collaboration with the simulation team of Claude Bernard University, this scenario represents the first successfully performed simulation-based training at our institution. Furthermore, our experience with the University of Claude Bernard indicates the importance of collaborative workshops and master classes as good tools for the dissemination of this educational modality.
Limitations
The pre- and post-knowledge test questions that we used were chosen from our department's exam questions. Although these questions have not been validated, we have been using them to evaluate our residents' theoretical knowledge. Our results would be more impactful if the questions were validated.
We highlight the importance of simulation training, but our study comprised only one emergency scenario, whereas similar studies contain more scenarios. We found no difference between pre- and post-STAI scores. We held the simulation session with our residents in our own operating room, so not only the environment but also the trainers and the other participants were familiar to the subjects; anxiety scores might have been different in a multicenter simulation with residents from different clinics. Future research with a larger number of scenarios and subjects from several clinics is required to characterize anxiety levels.
Conclusion
As the response to the COVID-19 pandemic restricted in-person activity, medical schools had to invent new ways to educate. Arrangements had to be made for students to retain clinical skills and knowledge to prepare them for real-life crises. Simulation is an effective training modality, which can be used to improve knowledge levels without any serious change in the state anxiety of participants.
"year": 2021,
"sha1": "f707b7d243f8c0373c50a3639dcf228145b35926",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bjane.2021.11.010",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c10d85da80be51d6c95bc86633a78f79d53b4b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247733682 | pes2o/s2orc | v3-fos-license | TGF-β/SMAD Pathway Is Modulated by miR-26b-5p: Another Piece in the Puzzle of Chronic Lymphocytic Leukemia Progression
Simple Summary TGF-β is a key immunoregulatory pathway that can limit the proliferation of B-lymphocytes. Chronic lymphocytic leukemia (CLL) has been historically conceptualized as a neoplasm characterized by accumulation of mature B cells escaping programmed cell death and undergoing cell-cycle arrest in the G0/G1 phase. However, new evidence indicates that tumor expansion is in fact a dynamic process in which cell proliferation also plays an important role. In general, cancers progress by the emergence of subclones with genomic aberrations distinct from the initial tumor. Often, these subclones are selected for advantages in cell survival and/or growth. Here, we provide novel evidence to explain, at least in part, the origins of CLL progression in a subgroup of patients with a poor clinical outcome. In this cohort, the immunoregulatory pathway TGF-β/SMAD is modulated by miR-26b-5p and the impairment of this axis bypasses cell cycle arrest in CLL cells facilitating disease progression. Abstract Clinical and molecular heterogeneity are hallmarks of chronic lymphocytic leukemia (CLL), a neoplasm characterized by accumulation of mature and clonal long-lived CD5 + B-lymphocytes. Mutational status of the IgHV gene of leukemic clones is a powerful prognostic tool in CLL, and it is well established that unmutated CLLs (U-CLLs) have worse evolution than mutated cases. Nevertheless, progression and treatment requirement of patients can evolve independently from the mutational status. Microenvironment signaling or epigenetic changes partially explain this different behavior. Thus, we think that detailed characterization of the miRNAs landscape from patients with different clinical evolution could facilitate the understanding of this heterogeneity. Since miRNAs are key players in leukemia pathogenesis and evolution, we aim to better characterize different CLL behaviors by comparing the miRNome of clinically progressive U-CLLs vs. stable U-CLLs. Our data show up-regulation of miR-26b-5p, miR-106b-5p, and miR-142-5p in progressive cases and indicate a key role for miR-26b-5p during CLL progression. Specifically, up-regulation of miR-26b-5p in CLL cells blocks TGF-β/SMAD pathway by down-modulation of SMAD-4, resulting in lower expression of p21−Cip1 kinase inhibitor and higher expression of c-Myc oncogene. This work describes a new molecular mechanism linking CLL progression with TGF-β modulation and proposes an alternative strategy to explore in CLL therapy.
Introduction
Chronic lymphocytic leukemia (CLL) is the most common leukemia in the Western world [1]. This disease is characterized by the clonal proliferation and accumulation of mature, long-lived CD5 + B-lymphocytes in peripheral blood (PB), bone marrow (BM), lymph nodes (LN) and the spleen [2]. The clinical evolution of the patients is highly heterogeneous. One-third is diagnosed as indolent and never requires treatment; in another third, an initial indolent phase is followed by progression of the disease; and the remaining third has aggressive CLL and needs immediate treatment [3]. Additionally, the mutational profile of HV immunoglobulin genes (IgHV) divides patients into two categories [4] that differ dramatically in prognosis [5,6]. Patients expressing mutated IgHV (M-CLL) develop a more indolent disease, whereas IgHV unmutated patients (U-CLL) display the worst prognosis. Despite IgHV status being one of the most relevant prognostic factors [7], M-CLL and U-CLL often exhibit different clinical outcomes within each subgroup, and the reasons of this diversity still remain controversial. Indeed, the discovery of new prognostic markers could help to identify and better explain the origins of CLL progression, one of the most key and unsolved issues in CLL biology.
Oncogenic hallmarks of cancer progression such as genome instability and mutations, deregulated cellular energetics, avoiding immune destruction, and tumor-promoting inflammation [8] should be considered in CLL progression [9]. In fact, microenvironment interactions and/or epigenetic changes in tumor cells are at the origins of the CLL heterogeneity supporting the different clinical evolution profiles [10]. Among the multiple variables that contribute to this, the microRNA (miRNA) landscape of the tumor cell emerges as a central player in the pathogenesis and evolution of CLL [11][12][13]. Several miRNA signatures [14], as well as specific miRNAs, distinguish between different CLL subgroups [15] and show diagnostic or prognostic value [16]. miRNAs have been involved in CLL progression [12], linked to therapy resistance [16], or linked to different B cell activation pathways [17][18][19][20], but the molecular basis behind the function of most of these molecules remains poorly understood in CLL.
In this work, we specifically focus on those patients with unmutated IgHV status and different clinical outcomes. We performed miRNome analysis from eleven CLL cases, (five clinically stable U-CLLs and six clinically progressive U-CLLs). miRNome profiles were compared between these two subgroups; subsequently, the up-regulated miRNAs in patients with the poorest outcomes were selected and validated by quantitative PCR (q-PCR) in an additional cohort of 15 cases. TGF-β signaling maintains tissue homeostasis and prevents incipient tumors from progressing down the path to malignancy [21,22]. This pathway regulates not only cellular proliferation, differentiation, survival, and adhesion but also the cellular microenvironment [23]. Several works suggest that TGF-β inhibition promotes leukemic transformation [24,25], and, specifically in CLL, this has been postulated [26][27][28][29], but the molecular mechanisms that regulate this pathway during CLL progression remain unknown.
Here, we provide additional evidence supporting the view that the loss of sensitivity to TGF-β pathway contributes to the clinical and biological progression of CLL [27,30]. We describe a key role for the miR-26b-5p down-modulating the TGF-β/SMAD axis in progressive U-CLL. Our results show that miR-26b-5p is up-regulated in this subgroup, whereas SMAD-4 protein expression is decreased and mostly excluded from the nuclei of tumor cells. The kinase inhibitor p21 −Cip1 and the oncogene c-Myc are the two target genes by which TGF-β maintains tissue homeostasis, inhibiting the progression of tumor cells to the cell cycle phase G1 [25]. Supporting the idea about the existence of an impairment in TGF-β/SMAD signaling in progressive U-CLL cases, we found low expression of the kinase inhibitor p21 −Cip1 and high expression of c-Myc compared with stable U-CLL. Finally, by down-regulation of miR-26b-5p in progressive U-CLL cases, we corroborate up-regulation of SMAD-4 and p21 −Cip1 proteins and down-modulation of c-Myc gene in the transfected CLL cells. These results propose for the first time a specific role for miR-26b-5p in the TGF-β pathway in CLL, suggesting that the leukemic clone in progressive cases could acquire tumor fitness advantages by regulating the homeostatic control of TGF-β/SMAD pathway during CLL progression.
microRNAome Analysis
miRNAs were isolated from B cells obtained from eleven CLL cases, 5 stable patients and 6 progressive patients, using the mirVana isolation kit (Invitrogen Cat. AM1561). A small RNA library was generated using the Illumina TruSeq Small RNA Preparation kit according to the manufacturer's guidelines; the purified cDNA library was used for cluster generation on Illumina's Cluster Station and then sequenced on an Illumina GAIIx. A proprietary pipeline script, ACGT101-miR v4.2 (LC Sciences), was used for sequencing data analysis. For the comparison analysis, the raw reads of each sample were combined for mapping, tracking the copy number of reads during mapping. The significance of differential expression was calculated by a two-tailed t-test (Supplementary Materials Table S1).
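As an illustration of the differential-expression step (a minimal sketch with invented normalized counts; it does not reproduce the ACGT101-miR pipeline):

```python
import numpy as np
from scipy.stats import ttest_ind

# rows = miRNAs, columns = samples; invented normalized read counts
stable = np.array([[120, 130, 110, 125, 118],
                   [ 40,  35,  42,  38,  41]], dtype=float)
progressive = np.array([[119, 128, 115, 122, 117, 121],
                        [ 90,  85,  95,  88,  92,  87]], dtype=float)

for i, name in enumerate(["miR-X (unchanged)", "miR-Y (up-regulated)"]):
    stat, p = ttest_ind(stable[i], progressive[i])
    print(f"{name}: two-tailed t-test p = {p:.4f}")
```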
After obtaining a list of differentially expressed miRNAs, 8 of them were selected and validated by q-PCR. For the miRNAs, the RT-PCR was performed using stem-loop (SL) primers, as described previously [31]. Briefly, 6 ng of small RNA was mixed with folded SL primers at a final concentration of 10 nM in a final volume of 7 µL. Then, 2 µL of this mix was loaded into the qPCR reaction (V = 10 µL) with 1.5 µM of the forward primers and 0.7 µM of the universal reverse primers. All qPCR reactions were run on the Eco Real-Time PCR System (Illumina) using a two-step protocol with specific miRNA primers, as specified in Supplementary Materials Table S2, using FastStart Universal SYBR Green (Roche). U6 snRNA was used as the endogenous control for miRNAs, and the relative expression was calculated as 2^−ΔΔCt.
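The 2^−ΔΔCt (Livak) calculation used throughout reduces to a few lines; the sketch below uses invented Ct values, with U6 snRNA as the endogenous control and a calibrator sample as the reference:

```python
def relative_expression(ct_target, ct_u6, ct_target_cal, ct_u6_cal):
    """Livak 2^-ddCt method: normalize the target Ct to U6 snRNA,
    then to a calibrator (reference) sample."""
    d_ct_sample = ct_target - ct_u6              # dCt of the sample of interest
    d_ct_calibrator = ct_target_cal - ct_u6_cal  # dCt of the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Invented Ct values: a miRNA amplifying ~2 cycles earlier than in the calibrator
print(f"RQ = {relative_expression(24.0, 20.0, 26.0, 20.0):.2f}")  # -> 4.00
```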
Confocal Microscopy and Quantification of Nuclear Staining
Cells previously fixed in 4% paraformaldehyde (PFA) were applied to prepared slides and left for 2 h at room temperature (RT) to adhere. Cells were then blocked with PBS/3% BSA/0.1% Triton for 30 min at RT and incubated overnight at 4 °C with the antibodies α-IgM-PE (1/40) (Jackson) and α-Smad4-Alexa 647 (1/250) (sc-7966 AF647, Santa Cruz Biotechnology, Santa Cruz, CA, USA). After washing with PBS, cells were incubated with Hoechst 33342 for nuclear staining for 5 min. Images were captured on a Zeiss LSM 800 microscope (Zeiss, Jena, Germany) using the Zen 2.3 system.
miRNAome Profile of U-CLL with Different Clinical Outcomes and Validation of Up-Regulated miRNAs in the Clinically Progressive Subgroup
To identify relevant miRNAs during CLL progression in U-CLL, we performed miRNome analysis of eleven CLL cases, five stable and six progressive U-CLLs. The clinical and biological characteristics of the patients in the study are provided in Table 1. The differential expression of up- and down-regulated miRNAs in progressive cases is depicted in Figure 1A (volcano plot) and detailed in Table 2. Our results identify 21 differentially expressed miRNAs, 3 down-regulated and 18 up-regulated in progressive cases compared with stable U-CLL (Figure 1A and Table 2). Of these 21 miRNAs, 13 were previously described in CLL, and 7 are mentioned here for the first time (Table 2). Considering these results and aiming to identify miRNAs involved in the disease progression of U-CLL, we selected a final list of 8 miRNAs (all of them involved in tumor progression according to bibliographical data, Table 2). We validated their relative abundance by q-PCR in a larger CLL cohort of 15 additional U-CLL patients (8 progressive and 7 stable, Table 1). After this analysis, only 3 of the 8 selected miRNAs depicted statistically significant changes when comparing the relative abundance between the stable and progressive subgroups of the additional cohort (Figure 1B). miR-26b-5p, miR-106b-5p, and miR-142-5p were up-regulated in the progressive cases (p = 0.003, p = 0.001, and p = 0.011, respectively, Mann-Whitney unpaired test, n = 18). Interestingly, despite not having been selected based on this characteristic, all three have been involved in the modulation of the TGF-β/SMAD pathway [21,22].
Table 1. Clinical and molecular characterization of CLL patients; AID and LPL expression assessed by q-PCR as described in [34,35]. T.F.T = time from initial diagnosis to first treatment for clinical progression; L.C. = lymphocyte count; N/D = not determined; N/T = not treated (refers to patients who did not receive any treatment at 4 years of follow-up).
Figure 1. (B) Quantitative PCR analysis validating the initial transcriptome data in an additional larger cohort with similar biologic and clinical characteristics (n = 15). Mann-Whitney unpaired test and median with 95% CI are depicted. U6 snRNA was used as the endogenous control for miRNAs, and the relative expression was calculated as 2^−ΔΔCt. In all cases, p < 0.05 was considered statistically significant (* = p < 0.05, ** = p < 0.01, ns = not significant).
mRNA Level Expression of SMAD Proteins and Target Genes of TGF-β Pathway in Stable and Progressive U-CLL Cases
The TGF-β protein is a pleiotropic cytokine and exerts its effect on gene expression through transcription factors known as SMAD proteins. Specifically in CLL, different works support the role of the TGF-β pathway as a growth inhibitor of CLL cells [40], and, recently, SMAD protein expression has been correlated with disease progression [29]. In view of these data and our previous results, we evaluated the activation of the TGF-β axis in stable and progressive U-CLLs by comparing the mRNA levels of SMAD-2, SMAD-3, SMAD-4, and SMAD-7, as well as of two recognized target genes of the TGF-β pathway, the cyclin-dependent kinase inhibitor 1A (p21−Cip1) [41] and KLF10 (Kruppel-like factor 10) [42]. Our results showed that SMAD-2 and SMAD-4 mRNAs were significantly decreased in the progressive U-CLL subgroup (p = 0.006, n = 28 and p ≤ 0.0001, n = 27, respectively, Mann-Whitney unpaired test), whereas SMAD-3 and SMAD-7 were not significantly changed (Figure 2). Interestingly, in progressive U-CLL we also found a significant down-regulation of p21−Cip1 and KLF10 mRNA expression compared with the stable cases (p = 0.006, n = 25 and p = 0.032, n = 26, respectively, Mann-Whitney unpaired test, Figure 2). Altogether, these results showed that in the progressive U-CLL subgroup key genes of the TGF-β/SMAD pathway (SMAD-2, SMAD-4, p21−Cip1, and KLF10) are decreased, suggesting that the signaling provided by this axis could be inhibited in the CLL cells.
Figure 2. mRNA expression levels of SMAD proteins and of specific target genes modulated by TGF-β pathway activation. Relative expression of SMAD-2, SMAD-3, SMAD-4, SMAD-7, p21−Cip1, and KLF10 mRNAs was compared by q-PCR between the progressive and stable U-CLL subgroups. Mann-Whitney unpaired test and median with 95% CI are depicted. GAPDH was used as an endogenous control, and cDNA of MEC-1 cells was used as a reference sample. The relative expression was calculated as 2^−ΔΔCt. In all cases, p < 0.05 was considered statistically significant (* = p < 0.05, ** = p < 0.01, **** = p < 0.0001, ns = not significant).
Among the three miRNAs up-regulated in the progressive U-CLL subgroup and potentially involved in TGF-β/SMAD inhibition in other tumors [21,22], it has been previously demonstrated that miR-26b binds the 3'-UTR of the SMAD-4 gene and down-regulates its expression [34]. We therefore evaluated the correlation between miR-26b-5p and SMAD-4 mRNA expression in the progressive and stable U-CLL subgroups. Our results show a significant and negative correlation between miR-26b-5p and SMAD-4 expression in the progressive subgroup (Figure 3A, p ≤ 0.0001, Spearman's rank test, r = 0.81, n = 26).
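A minimal sketch of this correlation step (the paired expression values below are invented and merely mimic the reported inverse relationship):

```python
import numpy as np
from scipy.stats import spearmanr

# Paired relative expression per patient: higher miR-26b-5p, lower SMAD-4
mir26b = np.array([0.5, 0.9, 1.2, 1.8, 2.3, 2.9, 3.4, 4.0])
smad4 = np.array([2.1, 1.9, 1.6, 1.2, 1.0, 0.7, 0.6, 0.4])

rho, p = spearmanr(mir26b, smad4)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # strongly negative rho
```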
This was also confirmed at the protein level by flow cytometry in the same patient cohort (Figure 3B, p = 0.013, Mann-Whitney Unpaired test, n = 19). The cytokine TGF-β is a dimer that signals by bringing together two pairs of receptor serine/threonine kinases known as the type I and type II receptors. Phosphorylation and activation of the type I receptors propagate the signal by phosphorylating SMAD transcription factors. Once activated, the receptor-substrate SMADs (R-Smads) form a complex with SMAD-4, a binding partner common to all R-Smads, and shuttle to the nucleus, where they form transcriptional activation and repression complexes to control the expression of hundreds of target genes in a given cell [33]. Since SMAD-4 is a key player in TGF-β activation, we performed confocal microscopy with specific anti-SMAD-4 antibodies in order to compare the localization pattern between progressive and stable U-CLLs. Microscopy analysis shows that SMAD-4 expression is higher in stable patients and is mainly localized in the nucleus of leukemic cells (Figure 3C, left panel). In contrast, progressive cases show lower levels of SMAD-4 expression, and the protein is mainly visualized in the cytoplasm (white arrows in Figure 3C, right panel). Representative images of leukemic cells expressing SMAD-4 in the different compartments in each CLL subgroup and an extended microscopy analysis of SMAD-4 localization in seven stable and eight progressive U-CLLs from the validation cohort are shown in Figure 3C (p = 0.040, Mann-Whitney Unpaired test, n = 15).
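For readers less familiar with the q-PCR quantification used in Figure 2, the short sketch below illustrates the 2^−ΔΔCt relative-expression calculation, with GAPDH as the endogenous control and MEC-1 cDNA as the reference sample; the Ct values in the example are hypothetical, not measurements from this study.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Relative mRNA expression by the 2^-ddCt method.

    ct_target / ct_gapdh: Ct values of the gene of interest and the
    endogenous control (GAPDH) in the patient sample.
    ct_*_ref: the same Ct values measured in the reference sample
    (MEC-1 cDNA), as described in the Figure 2 caption.
    """
    dct_sample = ct_target - ct_gapdh          # normalize to GAPDH
    dct_ref = ct_target_ref - ct_gapdh_ref     # normalize the reference
    ddct = dct_sample - dct_ref                # delta-delta Ct
    return 2.0 ** (-ddct)

# Hypothetical Ct values for one patient vs. the MEC-1 reference:
print(relative_expression(ct_target=27.1, ct_gapdh=18.4,
                          ct_target_ref=25.0, ct_gapdh_ref=18.0))
```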
TGF-β inhibits progression through the G1 phase of the cell cycle via two sets of events: mobilization of cyclin-dependent kinase (CDK) inhibitors and suppression of the c-Myc oncogene [25]. Considering our previous results, we speculated that tumor cells of progressive U-CLL cases over-expressing miR-26b-5p take advantage of TGF-β inhibition, avoiding the cell cycle arrest that normally occurs in most CLL cells. In order to support this hypothesis, we compared the expression levels of the cyclin-dependent kinase inhibitor p21Cip1 and the transcription factor c-Myc between stable and progressive U-CLLs. Our results show that progressive cases have lower expression levels of p21Cip1 protein and higher expression of the c-Myc gene compared with stable U-CLLs, Figure 3D (p = 0.007, n = 17 and p = 0.004, n = 25, respectively, Mann-Whitney Unpaired test). These results suggest not only that the TGF-β/SMAD axis is blocked after SMAD-4 inhibition by miR-26b-5p but also that the function of this pathway as a cell cycle regulator is affected in the CLL cells of these progressive and unmutated patients.
Inhibition of miR-26b-5p Recovers TGF-β/SMAD Function Regulating Cell Cycle Progression Molecules in Primary CLL Cells of Progressive Unmutated Patients
Our results suggest that tumor cells of progressive U-CLL acquire the capacity to progress through the G1 phase of the cell cycle following the impairment of TGF-β/SMAD signaling. This inhibition probably initiates with the up-regulation of miR-26b-5p and the decreased expression of SMAD-4 and is followed by the down-modulation of p21Cip1 and up-regulation of c-Myc, which finally allows restarting of the cell cycle in CLL cells. To confirm this hypothesis, we inhibited miR-26b-5p by transfection with a specific antagomir in progressive U-CLL, and we evaluated SMAD-4, p21Cip1, and c-Myc expression. As previously described [33], transfection experiments with labeled antagomirs showed more than 40% transfected CLL cells (Figure 4A). After sorting transfected and nontransfected cells by FACS, leukemic cells were isolated to obtain mRNAs and miRNAs, whereas protein expression was visualized by flow cytometry in the corresponding subsets. Quantitative PCR analysis showed that CLL cells incorporating the specific antagomir of miR-26b-5p have decreased levels of miR-26b-5p compared to cells transfected with the irrelevant antagomir (miR-67, control) (p = 0.031, Wilcoxon signed-rank test, n = 7) (Figure 4A, middle panel). A representative patient showing this inhibition with the specific antagomir for miR-26b-5p and the corresponding controls (not transfected, NT, and transfected with the irrelevant miR-67, T-ctrol) is depicted in an agarose gel.
In agreement with previous results highlighting the role of miR-26b-5p on SMAD-4 expression [34], our results show that inhibition of miR-26b-5p in primary CLL cells significantly increases the percentage of cells expressing SMAD-4 (Figure 4B) (p = 0.031, Wilcoxon signed-rank test, n = 7). Furthermore, supporting the hypothesis of an impairment of TGF-β/SMAD signaling in progressive U-CLL, our results show that after inhibition of miR-26b-5p, two target genes of this pathway (p21Cip1 and c-Myc), both involved in the regulation of cell cycle progression at the G1 and S phases [35], are affected. Transfection of primary CLL cells from U-cases with the specific antagomir targeting miR-26b-5p results in up-regulation of the cyclin-dependent kinase inhibitor p21Cip1 and down-modulation of the oncogene c-Myc (p = 0.015 and p = 0.031, respectively, Wilcoxon signed-rank test, n = 7, Figure 4C,D). Altogether, these results show that inhibition of miR-26b-5p in progressive U-CLL unlocks the TGF-β/SMAD pathway, which in turn affects the expression of specific target genes (p21Cip1 and c-Myc), both of which are required to inhibit cell cycle progression through the G1 phase.

Figure 4. (A) Graph shows relative expression levels of miR-26b-5p in CLL cells after transfection with the specific antagomir and the irrelevant miR-67 control (p = 0.015, signed-rank test, n = 7). A 5% agarose gel stained with ethidium bromide shows a representative patient after transfection with the specific antagomir (T-ant) and the corresponding controls (not transfected, NT, and transfected with the irrelevant miR-67, T-ctrol) (right panel). (B-D) Percentages of cells expressing SMAD-4 (B) and p21Cip1 (C), and mRNA expression levels of c-Myc (D) after transfection with the specific inhibitor of miR-26b-5p and with miR-67 (control) in each patient (p = 0.031, p = 0.015, and p = 0.047, respectively, Wilcoxon signed-rank test, n = 7). Each color corresponds to a single patient. In all cases, p < 0.05 was considered statistically significant (* = p < 0.05).
Discussion
CLL is the prototype of a cancer in which both microenvironmental factors and genetic changes in tumor cells promote the onset, expansion, and progression of the disease [2]. Molecular and clinical heterogeneity are hallmarks of CLL [43]. This leukemia develops through the accumulation of malignant B cells that circulate in the PB and are continuously supported by microenvironment signals within the BM and secondary lymphoid organs. Even though available treatments often induce remissions, most patients eventually relapse; thus, CLL remains an incurable disease [1]. The basis of this refractoriness is mainly related to the prominent clinical and biologic heterogeneity of CLL cells. A first layer of this heterogeneity is represented by the IgHV mutational status, which separates CLL patients into two different prognostic subgroups, mutated and unmutated cases [5,6]. A second layer underlines the importance of the microenvironment, which continuously boosts different behaviors during leukemia evolution [44]. An additional layer of this heterogeneity exists within the tumor clone itself [32,45], in which a dynamic process leads to an accumulation of the malignant clone, reflecting a balance between cell proliferation and death in the same patient [46]. A better understanding of these different heterogeneity layers and their roles in tumor evolution will help to improve the efficacy of CLL therapy.
Considerable research efforts have identified the molecular pathways in leukemic cells that contribute to antiapoptotic and survival signaling. Some of these, such as BCR and NF-kB [47,48], PI3K/AKT [33,49], WNT [50], NOTCH-1 [51], and IL-4/CD40L [38], are boosted after microenvironment interactions during disease progression [52]. Different works illustrate how miRNAs are often involved in the regulation of these pathways in leukemias in general [53] and in CLL in particular [54]. miRNAs can function both as oncogenes and as tumor suppressors, depending on the target gene, and the mechanism can be related to the different cancer hallmarks [8]. In CLL, miRNAs have been implicated at most levels, both in the pathogenesis and in the evolution of the disease [11][12][13][14]55], and different miRNA signatures correlating with clinical outcomes have been proposed [12,14,37,56]. Furthermore, miRNAs have been involved in the therapy refractoriness of CLL [16] and also linked with cell proliferation and disease progression [33]. miRNA signatures in CLL and their association with M-CLL and U-CLL profiles [12,56], therapy resistance [16], and/or activated CLL cells [17] have been previously reported by different groups. Specifically, in this work, we focused on patients with a U-IgHV profile and different clinical outcomes, comparing the miRNome of stable and progressive U-CLLs. We concentrated on those miRNAs with different relative abundance identified in the progressive U-CLL subgroup, identifying 21 differentially expressed miRNAs (3 down-regulated and 18 up-regulated). Of these, 7 are reported here for the first time (Figure 1B, red names), while 13 were previously linked to CLL evolution (Figure 1B, black labels). All of them depict a similar up-/down-modulation profile according to the IgHV status or the previously described activation profile of CLL cells (see references in Figure 1B), except for Let-7i-5p, which was only found in M-CLL [36]. The consistency of our results with these previous reports supports the technical reliability of our initial screening approach.
Next, we validated the miRNome results by q-PCR in an extended CLL cohort of 15 patients, focusing on the miRNAs that retained statistically significant differences between stable and progressive U-CLLs. Interestingly, all three selected miRNAs displaying this pattern (miR-26b-5p, miR-106b-5p, and miR-142-5p) were previously described as targeting genes involved in TGF-β/SMAD signaling [21,22,57], a key pathway in animal cells whose misregulation can result in tumor development [25]. Specifically in CLL, the TGF-β pathway has been described as an axis that can contribute to clinical and biological progression, albeit not in all patients [27,30], which underlines the importance of the different heterogeneity layers that exist in CLL. In normal and premalignant cells, TGF-β can enforce homeostasis and restrain tumor progression directly, through modulation of oncogenes and/or tumor-suppressor genes, or indirectly, through microenvironment signaling [23,25]. However, when cancer cells lose the TGF-β tumor-suppressive responses, they can use TGF-β to their advantage to initiate immune evasion, growth factor production, differentiation into an invasive phenotype, and metastatic dissemination, or to establish and expand metastatic colonies.
The cytokine TGF-β is a dimer that signals by bringing together two pairs of receptor serine/threonine kinases known as the type I and type II receptors. Phosphorylation and activation of the type I receptors propagate the signal by phosphorylating SMAD transcription factors. Once activated, the receptor-substrate SMADs (R-Smads) form a complex with SMAD-4, a binding partner common to all R-Smads, and shuttle to the nucleus, where they form transcriptional activation and repression complexes to control the expression of hundreds of target genes in a given cell [25]. In CLL, an interesting work by Witkowska et al. recently linked the expression levels of SMAD proteins with clinical evolution and showed that lower SMAD-4 protein levels correlate with a progressive course of the disease [29]. Of the three miRNAs up-regulated in the U-CLL subgroup, two, miR-26b-5p and miR-142-5p, have been described as targeting different SMAD proteins. Specifically, miR-26b-5p targets SMAD-4 [34], and miR-142-5p targets SMAD-3 [22]. Among these, miR-26b is the only miRNA that has been demonstrated to bind the 3′-UTR of the SMAD-4 gene and down-regulate its expression [34]. Considering the relevance of SMAD-4 as a binding partner common to all R-Smads, indispensable for shuttling SMAD-4/R-Smad complexes to the nucleus and for activating or repressing hundreds of target genes at once, we decided to evaluate the expression levels and cellular localization of SMAD-4 in our CLL cohort. In agreement with the results of Witkowska et al., SMAD-4 expression at both the mRNA and protein levels remained lower in progressive CLL cases compared with stable disease. Interestingly, when 26 U-CLLs were interrogated regarding SMAD-4 mRNA expression and miR-26b-5p, a significant negative correlation was found, suggesting that this miRNA could be involved in the down-modulation of SMAD-4 and, in consequence, could impair the TGF-β/SMAD pathway. Nuclear localization of SMAD-4 is a hallmark of canonical TGF-β activation. Remarkably, our microscopy results are in agreement with our previous observations, demonstrating that, in cases of more aggressive disease, SMAD-4 is not only down-modulated but also excluded from the nucleus.
Altogether, our results suggest the existence of TGF-β inactivation in primary CLL cells of progressive U-cases compared to stable patients. Although our data are not direct evidence of the cause of disease progression, they are in agreement with previous reports linking impairment of the TGF-β pathway with poor clinical outcome in CLL [27][28][29][30]. In addition, these works propose that TGF-β1 induces growth arrest in CLL cells, but about one-third of patients are resistant to its effects [26,58]. Interestingly, results from D'Abundo et al. [14] postulate an antileukemic activity of miR-26 in CLL. Indeed, despite different studies regarding the role of the TGF-β/SMAD pathway in CLL clinical progression [28,29], the molecular mechanisms responsible for TGF-β signaling in CLL, as well as the heterogeneity of the functional activity of this pathway among CLL patients with different clinical outcomes, are not yet completely elucidated.
Two well-characterized outcomes of TGF-β pathway activation are growth arrest and apoptosis [39]. TGF-β inhibits cell cycle progression through the expression of p21Cip1, which inhibits cyclin E/A-CDK2 complexes, and through down-regulation of the c-Myc oncogene [25]. Indeed, TGF-β is frequently present in the tumor microenvironment, initially as a signal to prevent premalignant progression, but eventually as a factor that malignant cells may use to their own advantage [25,39]. In the case of progressive CLL patients, inhibition of the TGF-β pathway could be used to unblock the classical G0/G1 arrest in leukemic cells, and this action could be orchestrated by miR-26b-5p expression. In order to investigate this hypothesis, we compared the expression levels of the p21Cip1 inhibitor and the c-Myc oncogene in the progressive and stable patients of our cohort, as well as before and after inactivation of miR-26b-5p by specific antagomir molecules. Our results showed that specific inhibition of miR-26b-5p significantly increases the expression of p21Cip1 protein, while it down-regulates c-Myc.
In summary, we provide for the first time evidence that miR-26b-5p affects tumor progression in U-CLL through inactivation of the TGF-β/SMAD pathway. Furthermore, we highlight miR-26b-5p as a novel key molecule in CLL, thus identifying another piece of the complex puzzle of clinical aggressiveness in this leukemia. Our results highlight the possibility that suppression of TGF-β signaling by miR-26b-5p could be beneficial for the "fitness" of the leukemic clone and suggest that TGF-β pathway modulation could become an alternative strategy to explore in CLL therapy.
Conclusions
This work focused on a subgroup of CLL patients with the poorest clinical outcome. We describe a new molecular mechanism in the progressive U-CLL subgroup, linking for the first time the TGF-β/SMAD pathway with miR-26b-5p and the regulation of key molecules involved in cell cycle regulation in CLL. Overexpression of miR-26b-5p down-modulates the TGF-β/SMAD pathway in primary CLL cells of these patients. Our results postulate a key role for miR-26b-5p in CLL cells and underline the importance of TGF-β as an essential immunoregulatory molecule in CLL. Although TGF-β is frequently present in the tumor microenvironment, initially as a signal to prevent premalignant progression, our data suggest that malignant cells can circumvent these suppressive effects by reactivating the cell cycle in CLL. Our work highlights the relevance of the microenvironment interactions and genetic modifications that exist in the leukemic clone during disease evolution.
"year": 2022,
"sha1": "bc7ab25df239f14b8a52ed147a498160437e9373",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/14/7/1676/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ef7bf94ead598cb803309506fadfb455ec2dd4c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Time Coefficient Estimation for Hourly Origin-Destination Demand from Observed Link Flow Based on Semidynamic Traffic Assignment
Day-long origin-destination (OD) demand estimation for transportation forecasting is advantageous in terms of accuracy and reliability because it is not affected by hourly variations in the OD distribution. In this paper, we propose a method to estimate the time coefficient of day-long OD demand to estimate hourly OD demand and to predict hourly traffic for urban transportation planning of a large-scale road network that lacks discrete-time rich traffic data. The model proposed estimates the time coefficients from observed link flows given a proven day-long OD demand based on a bilevel formulation of the generalized least square and semidynamic traffic assignment (OD-modification approach). The OD-modification approach is formulated as a static user-equilibrium assignment with elastic demand, based on the residual demand at the end of each period. Our model does not require setting many parameters regarding the OD demand matrices and the discrete-time dynamic traffic assignments. Applying the model to a large-scale road network demonstrates that it efficiently improves estimation accuracy because the 24-hour time coefficients of survey data are slightly biased and may be modified properly. In addition, methods that partially relax the assumption of the OD-modification approach and transform the estimated demand into demand based on departure time are examined.
Introduction
The four-step prediction technique on a one-day unit to predict day-long origin-destination (OD) demand and link flow is generally used worldwide for urban and transportation planning. For one-day activity, a normal pattern exists, in which people commute and shop in the morning and return home in the evening every day on weekdays. Therefore, the estimated day-long OD demand has the advantage of accuracy and reliability, because it is not affected by hourly variations in OD distribution.
However, hourly traffic prediction is important for transportation analysis. Thus, a simple estimate of hourly link flow or OD demand, which multiplies the average measured time coefficients by the day-long values of link flow or OD demand, is sometimes adopted for practical use in a large-scale road network that lacks real-time data. In this paper, the time coefficients of OD demand are calculated as follows:

$$\rho_{od}^{t} = \frac{q_{od}^{t}}{Q_{od}}, \quad t = 1, 2, \ldots, 24, \tag{1}$$

where $\rho_{od}^{t}$ is the time coefficient of the OD pair $od$ for period $t$ ($t = 1, 2, \ldots, 24$), $q_{od}^{t}$ is the hourly OD demand of OD pair $od$ for period $t$, and $Q_{od}$ is the day-long OD demand of OD pair $od$.
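As a minimal numerical illustration of (1), with hypothetical demand arrays rather than census data and array names of our choosing, the time coefficients are obtained by normalizing each OD pair's hourly demand by its day-long total:

```python
import numpy as np

# Hypothetical hourly OD demand: shape (24, n_od) = (periods, OD pairs).
hourly_od = np.random.default_rng(0).uniform(0, 100, size=(24, 3))
daylong_od = hourly_od.sum(axis=0)   # day-long demand Q_od per OD pair

# Time coefficient rho[t, od] = q[t, od] / Q[od]; each column sums to 1.
rho = hourly_od / daylong_od
assert np.allclose(rho.sum(axis=0), 1.0)
```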
Such estimated hourly values cannot ensure user equilibrium in route-choice behavior and prediction accuracy. Accurately estimating the hourly link flow and OD demand of a large-scale road network by harnessing the reliable day-long OD demand is highly desired. Additionally, the input data of the road network for our study can now benefit from a nationwide survey that includes an OD survey with questionnaires and link flow observations with manual counting; however, it does not provide real-time or discrete-time rich traffic data.
We therefore propose a time coefficient estimation (TCoE) model to obtain the hourly OD demand and to analyze hourly traffic predictions and measures for urban transportation planning of a large-scale road network that lacks discrete-time rich traffic data. The proposed TCoE model estimates the time coefficients from observed link flows given a proven day-long OD demand, which in turn is based on a bilevel formulation of the generalized least square and semidynamic traffic assignment. In the model, hourly OD demands are deduced from both the time coefficients and the day-long demand. The semidynamic traffic assignment is a static user-equilibrium assignment with elastic demand and accounts for the residual demand at the end of each study period, as discussed later.
Previous work has developed several frameworks to estimate OD matrices from observed link flows, for example, entropy maximization [1], maximum likelihood [2,3], and generalized least squares [4][5][6]. The generalized least square model has the advantages of relative robustness in fitting and of applicability, because it allows for errors in the input data; link flow observations are not treated as constraints in the formulation.
Recent years have seen the development of a bilevel formulation for OD demand estimation from observed link flow [7][8][9]. In this approach, the upper program uses the generalized least square to estimate the parameters of OD demand, and the lower program is a static traffic assignment for calculating the link flow proportion for each OD pair.
Other works have developed dynamic OD demand estimates based on time-dependent proportional and dynamic assignments operated over continuous discrete periods (i.e., several minutes) [10][11][12][13][14]. Tsekeris and Stathopoulos [15] proposed estimating the dynamic OD matrix with an efficient algorithm in an entropy modelling framework. Etemadnia and Abdelghany [16] proposed a dynamic OD demand estimation through dynamic traffic assignment on the basis of the least square method. This method provides estimates for about 10-minute intervals at peak hours by dividing an urban network into several subareas. However, because of the high computational cost due to dynamic traffic assignment, these models were not applied to large-scale road networks that lack rich traffic data. Cascetta et al. [17] proposed a quasi-dynamic assumption for dynamic estimates of the OD matrix. In this approach, OD demand shares are constant across a reference period (1 to 24 h); the efficiency of this model was tested in an experiment on a limited network of motorways in Italy. By using the quasi-dynamic assumption, this method can reduce the number of unknowns given the same set of observed traffic counts. However, this model requires rich input data, such as time-dependent OD data (or inflow data from origins) and link flows, and was not applied to a generalized large-scale road network, including a nationwide intercity network. The result of the experiment and the examination of the hypothesis depend strongly on the accuracy and richness of the input data and the scale of the network.
Zhou et al. [12] proposed dynamic origin-destination demand estimation using multiday link traffic counts, based on the bilevel formulation of a generalized least square and a DTA simulation program, which is able to improve accuracy by using day-to-day count data. In this model, the upper problem uses two objectives: to minimize the deviations between observed and estimated link flows and the deviations between the estimated and target demands, where the target demand is a static demand or the sum of dynamic demands. Stathopoulos and Tsekeris [18] investigated the problem of updating dynamic OD demands by exploiting a series of days of link traffic counts without the need for surveys. They analyzed different time-recursive mechanisms by transforming trip departures into equivalent OD trip flows, based on the assumption of a fixed distribution of trip destinations. The technique of using day-to-day count data can contribute significantly to efficient OD estimation. However, in this paper, we do not consider day-to-day OD estimation because we focus on networks for which such day-to-day count data cannot be prepared.
When considering uncertain demand information, Tsekeris and Stathopoulos [19] investigated the problem of estimating dynamic matrices using automatic link traffic counts and uncertain prior demand information. They proposed an optimization algorithm for the fast estimation of trip departure rates with incorporated lower- and upper-bound constraints and applied the algorithm to the Athens network. Bierlaire [20] proposed the total demand scale as a measure of quality for OD trip tables estimated from link counts; the scale means the range of the total level of demand in the network. In our model, the day-long OD demand is used as a constraint whereby the total of the 24-hour OD demands estimated for each OD pair preserves the given day-long OD demand, although there are no reliable hourly OD demand data from surveys or automatically counted data.
Note that such OD demand matrix estimation problems generally require many parameters corresponding to the number of centroids of origins and destinations. This number is further multiplied by the number of study periods; the shorter the study period, the greater this number becomes. Such a detailed model may be difficult to apply to large-scale road networks including arterial roads and expressways over 24-hour periods. If the study network for estimating time-varying OD demand is large and has many routes for many OD pairs but cannot provide sufficient data, for example, from on-line detection equipment, a different approach may be adopted.
The TCoE model proposed herein uses a bilevel formulation, in which the upper problem is based on the generalized least square to estimate 24-hour time coefficients given the day-long OD demand matrix and observed link flows. Thus, the TCoE model does not require many parameters to be set (e.g., for the origins and destinations of the OD demand matrices), so that the hourly OD demand matrices can be calculated by multiplying the given day-long OD demand by the 24-hour time coefficients estimated by the model. The TCoE model thus efficiently improves estimation accuracy because the 24-hour time coefficients aggregated from OD surveys are somewhat biased, thereby reducing accuracy. Therefore, the results show that the TCoE model reduces the number of parameters and the computational cost but retains high accuracy for the 24-hour model when applied to a large-scale network that lacks rich real-time data. The characteristics of the time coefficients of the OD survey data are discussed in Section 4. To apply this model to a large-scale road network with limited input data, we adopt the OD demand-modification approach as the semidynamic traffic assignment for the lower problem of TCoE.
Fujita et al. [21] and Matsui and Fujita [22] have proposed a time-of-day user-equilibrium (TUE) traffic assignment of the OD demand-modification approach as a semidynamic traffic assignment. TUE is based on Wardrop's user-equilibrium (UE) principle, in which drivers choose the shortest routes, as in conventional UE assignments [23]. TUE divides a continuous OD demand for a one-day unit into demands for each study period (1- or 2-hour units). It semidynamically estimates hourly traffic flow by period by considering the residual traffic volume at the end of each period. TUE is formulated as a static UE assignment with elastic demand that modifies the OD demand to account for the residual traffic volume. That is, in the TUE, the residual traffic in the current period is added semidynamically to the demand in the next period.
The semidynamic UE assignment formulation, which considers the residual traffic at the end of each period, has been proposed in three approaches: the OD demand-modification approach [21,24], the link flow modification approach [25,26], and the vertical queue approach [27,28]. Because the OD-modification approach is more applicable and incurs less computational cost than the other approaches, it was extended to several models that consider a toll road with diversion [29,30] and a mode-choice function [31].
The TCoE model uses a static UE assignment to obtain the link flow proportions of hourly OD demand in the lower problem. The assignment adopted is the same model as the TUE with a toll road [30], except that it uses fixed hourly demand without considering residual demand in the OD-modification approach. However, the treatment of residual demand in the TCoE model is almost the same as in the OD-modification approach, because the OD demand modification is considered in the upper problem of the TCoE model, not in the lower problem of UE assignment. Employing the OD-modification approach in TCoE allows us to apply it to a large-scale road network that lacks rich traffic data, to reduce computational cost, and to analyze hourly traffic situations (including peak hours), all of which is useful for urban transportation planning.
In this paper, we first review the semidynamic concept for the OD-modification approach by comparing it with the OD modification in the TCoE model. In addition, a partial relaxation of the assumption about the length of the study period in the OD-modification approach is developed for practical use. Next, we develop the TCoE model from observed traffic flow by using the generalized least square under a given day-long OD demand and the OD-modification approach. Third, before application to the study network, we clarify that a slight bias exists in the time coefficients of OD demand aggregated from the survey. Finally, we demonstrate that the TCoE model can improve the accuracy of estimates of hourly link flow compared with the traffic assignments adopting the initial hourly OD demand aggregated from the survey. In addition, the method transforming the estimated hourly OD demand into OD demand based on departure time is also examined.

The residual demand for the semidynamic approach in the TCoE model is not operated in the lower problem of traffic assignment but rather in the upper problem of the TCoE model. This section reviews the semidynamic concept in the OD-modification approach of Fujita et al. [21] and describes the basic formulation for the OD-modification approach by comparing it with the OD modification in the TCoE model. In addition, a partial relaxation of the assumption about the length of the study period in the OD-modification approach is newly proposed for practical use in order to compare the results of applying the models to a large-scale road network.
Formulation of OD Demand-Modification Approach and TCoE Model.
When an hour is set for a study time period, the OD-modification approach semidynamically assigns hourly OD demand for a day, hour by hour, based on the UE principle that drivers select a route with minimum travel time. Let $T$ (= 60 min) be the length of a period and let $q_{rs}^{t}$ be the hourly OD demand between OD pair $rs$ ($r \in R$, $s \in S$) in period $t$ ($t \in \mathcal{T}$). $q_{rs}^{t}$ is aggregated based on departure time (hourly OD-dep). Furthermore, let $\lambda_{rs}^{t}$ be the travel time for OD pair $rs$ in period $t$. The OD-modification approach assumes that the maximum travel time between OD pair $rs$ is less than $T$ ($\lambda_{rs}^{t} < T$) and that the hourly OD demand departs from the origin uniformly at the rate $q_{rs}^{t}/T$ during period $t$. Figure 1 shows the modification of hourly OD demand. This figure shows an OD pair with only one path and some links, the hourly OD demand $q_{rs}^{t}$, and the travel time $\lambda_{rs}^{t}$ for the path during period $t$.
Even though many paths may exist between the OD pair $rs$, the explanation of this figure is the same except for exchanging $q_{rs}^{t}$ for a path demand, because the OD-modification approach adopts the UE assignment in which all paths used have the same travel time.
Here, the part of the demand $q_{rs}^{t}$ departing after time $(T - \lambda_{rs}^{t})$ at the end of period $t$ does not arrive at its destination. Thus, some traffic does not arrive at its destination at the end of period $t$ after traveling along the path between OD pair $rs$. This non-arrived traffic is $\lambda_{rs}^{t} q_{rs}^{t}/T$. Therefore, the traffic that arrives at its destination is $q_{rs}^{t}(1 - \lambda_{rs}^{t}/T)$. At this time, link flows, which presume observed link flows, at several points along a path between the OD pair $rs$ are expressed along the downward-sloping solid line in Figure 1. The OD-modification approach cannot describe the variation of link flows midway along a path because of static assignment, although it can uniformly assign the same traffic volume as the demand to all links along the path.
Therefore, to minimize the error between observed and assigned link flows midway along the path, the OD-modification approach modifies $q_{rs}^{t}$ to the level of demand along the dotted line in Figure 1, which subtracts half of the non-arrived traffic from the current OD demand. Furthermore, the subtracted OD demand $e_{rs}^{t} = \lambda_{rs}^{t} q_{rs}^{t}/(2T)$, which is the residual OD demand in the current period $t$, is assigned to the next period.
We average the burden of the residual demand over both the current and next periods by using the parameter 1/2 in $e_{rs}^{t}$, due to the uniform burden of residual demand in a static assignment. That is, if we selected another proportion (for example, 1/3) of the non-arrived traffic in the current period $t$, then 2/3 of the non-arrived traffic would be loaded as the residual demand in the next period.
By considering continuous periods, the residual OD demand of the previous period also flows in the current network, so it must be added to the current OD demand. Therefore, the following equation expresses the hourly OD demand based on the midway value ($\bar{q}_{rs}^{t}$: hourly OD-mid) that averages the error of link flows along a path (it can be obtained by adding the residual OD demand from the previous period, $e_{rs}^{t-1}$, to the hourly OD-dep $q_{rs}^{t}$ in the current period and subtracting $e_{rs}^{t}$ in the current period from $q_{rs}^{t}$):

$$\bar{q}_{rs}^{t} = q_{rs}^{t} + e_{rs}^{t-1} - \frac{\lambda_{rs}^{t} q_{rs}^{t}}{2T}. \tag{4}$$

In the above equation, $\bar{q}_{rs}^{t}$ is given by the demand function for the OD-modification approach, the travel time $\lambda_{rs}^{t}$ is a variable, and the previous residual demand $e_{rs}^{t-1}$ is a constant in the current period $t$, based on the estimate made in the previous period.
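A minimal sketch of the modification in (4) for a single OD pair, assuming hypothetical demand and travel-time sequences (variable names are ours, not the paper's):

```python
def hourly_od_mid(q_dep, travel_time, period=60.0):
    """Hourly OD-mid per (4): add the previous period's residual and
    subtract half of the non-arrived traffic of the current period."""
    q_mid, residual = [], 0.0
    for q, lam in zip(q_dep, travel_time):
        e = lam * q / (2.0 * period)    # residual demand e = lambda*q/(2T)
        q_mid.append(q + residual - e)  # q_mid = q + e_prev - e
        residual = e                    # carried into the next period
    return q_mid

print(hourly_od_mid(q_dep=[100, 200, 150], travel_time=[20, 30, 25]))
```

The residual computed in period t is carried into period t + 1, matching the dotted line in Figure 1.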
When we set $\bar{q}_{rs}^{t} = D_{rs}^{t}(\lambda_{rs}^{t})$, in which $D_{rs}^{t}(\cdot)$ is the demand function for the OD modification in (4), the OD-modification approach can be formulated as a nonlinear minimization problem, that is, a static UE assignment with elastic demand, as follows:

$$\min Z = \sum_{a} \int_{0}^{x_{a}^{t}} c_{a}(w)\,dw - \sum_{rs} \int_{0}^{\bar{q}_{rs}^{t}} \left(D_{rs}^{t}\right)^{-1}(w)\,dw$$

subject to

$$\sum_{k} f_{rs,k}^{t} = \bar{q}_{rs}^{t}, \quad f_{rs,k}^{t} \ge 0, \quad \bar{q}_{rs}^{t} \ge 0, \quad x_{a}^{t} = \sum_{rs} \sum_{k} f_{rs,k}^{t}\,\delta_{a,rs,k}^{t},$$

where $x_{a}^{t}$ is the flow on link $a$ ($a \in A$) during period $t$, $c_{a}(\cdot)$ is a link-cost function on link $a$, $f_{rs,k}^{t}$ is the flow on path $k$ between OD pair $rs$ during period $t$, $\delta_{a,rs,k}^{t}$ is a link-path incidence variable (= 1 if path $k$ for OD pair $rs$ during period $t$ includes link $a$; = 0 otherwise), $q_{rs}^{t}$ is the hourly OD demand based on departure time, and $(D_{rs}^{t})^{-1}(\cdot)$ is the inverse of the demand function for OD modification.
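To make the elastic-demand fixed point concrete, here is a toy sketch with one OD pair and one link (BPR-type cost; all parameter values are hypothetical): at equilibrium, the assigned flow must equal the demand function (4) evaluated at the resulting travel time.

```python
def bpr(x, t0=10.0, cap=1000.0):
    """BPR-type link travel time (minutes); parameters are illustrative."""
    return t0 * (1.0 + 0.15 * (x / cap) ** 4)

def demand(lam, q=1200.0, e_prev=50.0, period=60.0):
    """Demand function of the OD-modification approach, per (4)."""
    return q + e_prev - lam * q / (2.0 * period)

# Fixed point x = demand(bpr(x)), solved by bisection on one link;
# demand(bpr(x)) is decreasing in x, so the root is unique.
lo, hi = 0.0, 2000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if demand(bpr(mid)) > mid:
        lo = mid
    else:
        hi = mid
print(f"equilibrium flow ~ {lo:.1f}, travel time ~ {bpr(lo):.1f} min")
```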
Note that, after formulating the Lagrangian function of the above problem, we can obtain the optimality conditions. The user-equilibrium conditions can be obtained by applying the Karush-Kuhn-Tucker conditions with regard to path flows to the Lagrangian function. By applying the Karush-Kuhn-Tucker conditions with regard to $\bar{q}_{rs}^{t}$, we can deduce the demand function for the OD-modification approach in (4).
References [29,30] extended the basic model above with the OD modification (i.e., a semidynamic UE assignment) to an urban road network including expressways with toll roads. This model adopted a diversion function based on a binary logit model between routes with and without the toll expressway. This extended model was applied to a large-scale road network and demonstrated good accuracy and practicability. In this paper, we examine the characteristics of several time-of-day assignments of the OD-modification approach for a large-scale road network. We use this extended model (hereafter TUE) as the OD-modification approach. For comparison, we also adopt the TUE with fixed hourly demand (TUE-f), which eliminates the residual demand in TUE and assigns hourly OD demand separately in each hour; this is the same model as a day-long UE assignment with toll roads [32] when using day-long OD demand.
As mentioned earlier, the OD modification is the semidynamic assignment method that modifies the hourly OD-dep ($q_{rs}^{t}$) into the hourly OD-mid ($\bar{q}_{rs}^{t}$) to minimize the error of estimated link flow averaged midway over a path along the dotted line in Figure 1, thereby considering residual demand. Note that the TCoE model proposed herein also estimates the hourly OD-mid ($\bar{q}_{rs}^{t}$) under a given day-long OD demand and operates the residual OD demand in almost the same way as the OD-modification approach, because the TCoE model modifies the hourly OD-dep into the hourly OD-mid to minimize the error between the observed and estimated link flows midway along a path, as a bilevel problem with elastic hourly demand given the day-long OD demand. In Section 5, we apply the TCoE model to a large-scale road network that lacks rich real-time data and demonstrate the validity of the OD modification for the TCoE model by analyzing the accuracy and comparing the results to TUE assignments.
Partial Relaxation of Assumption (λ < T) in OD-Modification Approach for Practical Use.

The OD-modification approach assumes that the length of the period must be set longer than the maximum travel time. However, the model may be hard to treat if we cannot set the period length sufficiently long in practical use, which may force us to give up the application, change strategy upon failing to satisfy the assumption halfway through the calculation, or recalculate after resetting the length of the period. Therefore, provided we keep the accuracy required for practical use, we consider a partial relaxation of the assumption for trips longer than the length of the period. This is done as follows.
From (4) in Section 2.2, when the travel time becomes longer than the period, the current OD demand $q_{rs}^{t} - \lambda_{rs}^{t} q_{rs}^{t}/(2T)$ in the second and third terms of (4) is reduced by more than half of $q_{rs}^{t}$, and the assignment theoretically loads a demand in the current period that is less than the share loaded in the next period. However, the links around the origin should bear a sufficient burden because all of the current OD demand departs in the current period. Therefore, for uniform treatment of the burden of the current OD demand in a static assignment, in case some long trips exceed the length of the period, we average the burdens of the OD demands over the current and next periods and load at least half of $q_{rs}^{t}$ in the current period. However, this special handling must be controlled within a range that avoids influencing the accuracy of the assignment, by suitably setting the length of the period.
To keep the current theoretical OD demand $q_{rs}^{t} - \lambda_{rs}^{t} q_{rs}^{t}/(2T)$ greater than or equal to half of $q_{rs}^{t}$, we replace the nonnegative constraint $\bar{q}_{rs}^{t} \ge 0$ defined in the formulation of the OD-modification approach with $\bar{q}_{rs}^{t} \ge e_{rs}^{t-1} + q_{rs}^{t}/2$. In the iteration algorithm, we calculate $\bar{q}_{rs}^{t}$ after exchanging $T$ for $\lambda_{rs}^{t}$ when $\lambda_{rs}^{t} > T$. Note that TUE-f with fixed demand does not need this treatment for $\lambda_{rs}^{t} > T$. This treatment is verified through an application in Sections 4 and 5.
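In implementation terms, the relaxation amounts to capping the travel time at the period length before computing the residual, which guarantees the new lower bound; a sketch under the same hypothetical naming as above:

```python
def hourly_od_mid_relaxed(q_dep, travel_time, period=60.0):
    """OD-mid with the partial relaxation: lambda is capped at T, so at
    least half of the current demand is always loaded in its own period
    (q_mid >= e_prev + q/2)."""
    q_mid, residual = [], 0.0
    for q, lam in zip(q_dep, travel_time):
        lam = min(lam, period)          # exchange T for lambda when lam > T
        e = lam * q / (2.0 * period)    # residual, now at most q/2
        q_mid.append(q + residual - e)  # >= residual + q/2 by construction
        residual = e
    return q_mid

print(hourly_od_mid_relaxed([100, 200], travel_time=[90, 30]))
```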
Formulation of TCoE Model and Calculation Method of Hourly OD Based on Departure Time
We now present the formulation and solution algorithm for the TCoE model proposed herein. Additionally, we propose a calculation method for the hourly OD demand based on departure time (hourly OD-dep) from the hourly OD demand based on midway values (hourly OD-mid) estimated by the TCoE model.
Formulation of Time Coefficient Estimation Model from Observed Traffic Flow.

The TCoE model justifies the time coefficients given a day-long OD demand by minimizing the least square error between estimated link flow and observed link flow. Generally, estimates of the OD demand matrix based on observed link flow require the link flow proportion for each OD pair, which is estimated by traffic assignment.
In this study, we use a set of 24-hour time coefficients in a day for a pair of departure and arrival areas as a pattern of the time coefficients. The TCoE model can significantly reduce the number of operation variables regarding time coefficients, from a pattern per OD pair to a few dozen patterns of time coefficients, while retaining high accuracy, as shown in the result of the application in Section 5. That is, the TCoE model does not need many variables for all OD pairs. The TCoE model employs only one pattern of time coefficients for the whole study area or, when a study area is divided into several subareas, several patterns for the subarea pairs in dual directions.
We set a departure subarea $g$ in set $G$ and an arrival subarea $h$ in set $H$. The upper problem of TCoE minimizes the square errors between the observed and estimated hourly link flows under the given link flow proportions as follows:

$$\min \; E = \sum_{t} \sum_{a} \left( \hat{x}_{a}^{t} - \sum_{rs} p_{a,rs}^{t}\,\rho_{gh}^{t}\,Q_{rs} \right)^{2}$$

subject to

$$\sum_{t} \rho_{gh}^{t} = 1, \quad \rho_{gh}^{t} \ge 0,$$

where $Q_{rs}$ is the day-long OD demand for OD pair $rs$, $\hat{x}_{a}^{t}$ is the observed link flow for link $a$ in period $t$, $p_{a,rs}^{t}$ is the flow proportion of link $a$ for the hourly OD demand of OD pair $rs$ in period $t$, and $\rho_{gh}^{t}$ is the time coefficient for departure area $g$ and arrival area $h$ in period $t$.
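A minimal sketch of the upper problem for a single pattern of 24 coefficients, solved here with an off-the-shelf constrained least-squares routine on synthetic data (scipy assumed available); in the actual model the proportions p would come from the lower TUE-f assignment, and the paper solves the KKT system (9) rather than calling a generic optimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
P, A, OD = 24, 40, 10                       # periods, observed links, OD pairs
Q = rng.uniform(100, 1000, size=OD)         # given day-long OD demand
p = rng.uniform(0, 0.3, size=(P, A, OD))    # link flow proportions (stand-ins)
rho_true = rng.dirichlet(np.ones(P))        # hidden "true" coefficients
x_obs = np.einsum("tao,o,t->ta", p, Q, rho_true)  # synthetic observed flows

def sse(rho):
    """Sum of squared errors between observed and estimated hourly flows."""
    x_est = np.einsum("tao,o,t->ta", p, Q, rho)
    return ((x_obs - x_est) ** 2).sum()

res = minimize(sse, x0=np.full(P, 1.0 / P), method="SLSQP",
               bounds=[(0.0, 1.0)] * P,
               constraints=[{"type": "eq", "fun": lambda r: r.sum() - 1.0}])
print(np.abs(res.x - rho_true).max())       # should be near zero
```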
This model is a bilevel problem in which the upper problem is the above minimization and the lower problem is the TUE-f assignment. Therefore, the upper problem estimates the time coefficients under the given link flow proportions, and the lower problem estimates the link flow proportions by using the TUE-f assignment with the hourly OD demand, which is calculated by multiplying the given day-long OD demand by the time coefficients of the upper problem. As mentioned in Section 2.2, the TCoE model obtains the link flow proportion for each OD pair and each period from the TUE-f assignment, which is basically the same model as the day-long UE assignment except that it uses hourly OD demands and parameters related to the hourly link-cost function.
Solution Algorithm for TCoE Model.
The time coefficients of the solution that minimizes the objective function can be obtained as convergence values after alternately calculating the upper and lower problems, and the hourly OD demand and link flows are also estimated simultaneously. The first-order condition for the above problem can be read from the Lagrangian function integrated with the constraint conditions as follows:

$$L = \sum_{t} \sum_{a} \left( \hat{x}_{a}^{t} - \sum_{rs} p_{a,rs}^{t}\,\rho_{gh}^{t}\,Q_{rs} \right)^{2} + \sum_{g} \sum_{h} \nu_{gh} \left( 1 - \sum_{t} \rho_{gh}^{t} \right),$$

where $\nu_{gh}$ is a Lagrange multiplier for origin area $g$ and destination area $h$. When we set a departure area $g$ in set $G$ and an arrival area $h$ in set $H$ and differentiate $L$ with respect to $\rho_{gh}^{t}$ and $\nu_{gh}$, we obtain the stationarity conditions. From the Karush-Kuhn-Tucker conditions, the optimum solution satisfies these conditions, giving the simultaneous equations (9). The numerical solution for this problem is to solve the simultaneous equations (9). Additionally, we calculate the following iterative steps under the nonnegative constraint condition $\rho_{gh}^{t} \ge 0$ in (7).
Step 1. Obtain the optimum if all $\rho_{gh}^{t}$ satisfy the nonnegativity condition; otherwise, set the violating $\rho_{gh}^{t}$ to 0, adjust the remaining coefficients accordingly, and then go to Step 2. The optimum time coefficients obtained are multiplied by the day-long OD demand to estimate the hourly OD demand, which is used for the TUE-f assignment in the next step. The link flow proportions given by the result of the TUE-f assignment are applied to obtain the new time coefficients for the upper problem of the TCoE model. The solution of time coefficients and hourly OD demand that minimizes the square errors of link flows can be obtained by converging the values of the time coefficients through these calculations.
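The alternating scheme can be summarized structurally as follows; `tue_f_assignment` and `solve_upper_problem` are hypothetical stand-ins for the lower and upper problems, not implementations from the paper:

```python
def estimate_time_coefficients(Q, x_obs, tue_f_assignment, solve_upper_problem,
                               n_periods=24, tol=0.002, max_iter=50):
    """Bilevel iteration: assign hourly demand, re-fit coefficients,
    repeat until the change in total RMSE falls below the tolerance."""
    rho = [1.0 / n_periods] * n_periods
    prev_rmse = float("inf")
    for _ in range(max_iter):
        hourly_od = [[r * q for q in Q] for r in rho]     # q[t][od] = rho[t]*Q[od]
        proportions, rmse = tue_f_assignment(hourly_od, x_obs)
        if abs(prev_rmse - rmse) <= tol:                  # convergence criterion
            break
        rho = solve_upper_problem(proportions, Q, x_obs)  # least-squares update
        prev_rmse = rmse
    return rho
```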
Conversely, when setting only one pattern of time coefficients in the study area, the optimum time coefficients can be calculated by using only the link flows from the TUE-f assignment, without the link flow proportion for each OD pair, as follows.
When $v_{a}^{t}$ is set to the estimated link flow for link $a$ in period $t$ given by the TUE-f assignment and $\rho^{t}$ is set as the single pattern of time coefficients for the study area, we can express the estimated hourly link flows in terms of $\rho^{t}$. By substituting this expression, (9) is transformed into an equation in $\rho^{t}$ and the estimated link flows alone. We obtain the pattern of time coefficients when (13) is solved with $v_{a}^{t}$ estimated by the TUE-f assignment. The optimum solution for the time coefficient can be obtained by the convergence of $\rho^{t}$ in the upper and lower iterative calculations. Conventional OD demand estimation models have to use the link flow proportion for each OD pair. However, this method can perform the calculation by using only estimated link flows, even if link flow proportions cannot be estimated due to limits of computer capacity for large-scale road networks.
Calculation Method of Hourly OD-dep from the Result of TCoE Model.
The hourly OD demand obtained by TCoE is the hourly OD-mid that minimizes the error between estimated and observed link flows midway along paths between OD pairs. Therefore, the hourly OD demand given by the TCoE model differs slightly from the OD demand aggregated based on departure time (hourly OD-dep) from survey data. When the hourly OD-dep is required for practical use, we calculate it from the TCoE result by the following method.
When the hourly OD demand by TCoE is assumed to be the hourly OD-mid $\bar{q}_{rs}^{t}$, as mentioned in Section 2, the hourly OD-dep in period $t$ is obtained by transforming (4) as follows:

$$q_{rs}^{t} = \frac{\bar{q}_{rs}^{t} - e_{rs}^{t-1}}{1 - \lambda_{rs}^{t}/(2T)},$$

where $q_{rs}^{t}$ is the hourly OD demand based on departure time (hourly OD-dep) for OD pair $rs$ in period $t$, $\bar{q}_{rs}^{t}$ is the hourly OD demand based on the midway value (hourly OD-mid), taking into account the residual demand for OD pair $rs$ in period $t$, and $\lambda_{rs}^{t}$ is the minimum travel time for OD pair $rs$ in period $t$ but is set to $\lambda_{rs}^{t} = T$ when $\lambda_{rs}^{t} > T$ ($T$ is the period length). Here the residual demand in period $t$ is expressed by $e_{rs}^{t} = \lambda_{rs}^{t} q_{rs}^{t}/(2T)$. By using the above equation, the hourly OD-dep can be calculated from the hourly OD-mid of the TCoE model. In Section 5, we apply the hourly OD-dep to the OD-modification approach (TUE assignment). We also analyze the hourly variation pattern and examine the validity of this method.
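A minimal sketch of this back-calculation, sequentially inverting (4) period by period (names ours; travel time capped at T per the relaxation above):

```python
def hourly_od_dep(q_mid, travel_time, period=60.0):
    """Recover hourly OD-dep from TCoE's hourly OD-mid by inverting (4):
    q_dep[t] = (q_mid[t] - e[t-1]) / (1 - lambda[t] / (2T))."""
    q_dep, residual = [], 0.0
    for qm, lam in zip(q_mid, travel_time):
        lam = min(lam, period)               # set lambda = T when lambda > T
        q = (qm - residual) / (1.0 - lam / (2.0 * period))
        q_dep.append(q)
        residual = lam * q / (2.0 * period)  # residual for the next period
    return q_dep

# Round-trip check against the forward modification sketch above:
# recovers approximately [100, 200, 150].
print(hourly_od_dep([83.333, 166.667, 168.75], travel_time=[20, 30, 25]))
```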
Basic Analysis of Time Coefficients for Hourly Origin-Destination Demands
The road traffic census is one of the most important national traffic surveys in Japan. This survey includes observations of hourly link flow on arterial roads and expressways and questionnaire surveys of origin and destination for each automobile trip nationwide. In this section, we examine the hourly variation pattern of OD demand and compare it with the hourly variation pattern of link flows observed and assigned by TUE. Figure 2 compares hourly variation patterns for the study area (i.e., the Chukyo metropolitan area, including the region of the Aichi prefecture, with 7.2 million people, a part of Mie prefecture, with 1.1 million people, and the Gifu prefecture, with 1.2 million people). The OD survey in Figure 2 is the average variation pattern from the hourly total of all OD survey data departing from and arriving in the study area, aggregated based on departure time and divided by the total day-long OD demand within the study area. The link flow observation is the average hourly total of link flows of all survey data within the study area, divided by the total day-long link flows within the study area.
Comparison between Link Flow and Hourly OD-dep from Survey Data.
Note that the OD survey gives larger values in the daytime, but the link flow survey gives larger values in the nighttime. The reason for these differences is that the OD variation pattern tends to be underestimated at nighttime and overestimated at peak hours because data are missing from questionnaires, especially at nighttime, although the OD variation pattern naturally differs a little from the link flow variation due to the different survey types. This bias is seen notably in trucks because trucks usually make more trips at nighttime than other vehicles. We examine this bias of the OD variation pattern in view of the results of traffic assignment for a real network in the next section.

Application of Initial OD.

We examine the hourly variation pattern of the initial OD by applying it to the TUE, TUE-f, and TCoE mentioned in Sections 2 and 3. Additionally, to examine the bias of the initial OD in a day-long unit, we also execute the day-long UE assignment over the same study network as for TUE.
The study network is composed of 484 zones, 6683 links, and 4468 nodes; it is the Chukyo metropolitan network based on the road traffic census 2010, as shown in Figure 3. The observed link flows used for examining assignment accuracy are from 292 links with 24 hours of data from the road traffic census 2010. The expressway diversion function and the link-cost function in all types of UE assignment in this paper are of the same type as the functions used and examined in previous applications [33][34][35].
The study network (and study area) to which TUE is applied is a large road network within the Chukyo metropolitan area that is also generally used for day-long traffic assignments in practice. This network connects several cities inside the study area and also simply connects with the outside network, including the main cities throughout Japan outside the study area. A cordon line defines the boundary of the study area. When the TUE assignment is applied to such a large network connected with the outside network, new centroids must be set along the cordon line as origins of inflow traffic into the study network. In addition, the periods during which outside traffic departs from its initial origins should be adjusted to the periods in which outside traffic enters the study network from the new centroids along the cordon line. However, because adjusting the origins and periods for outside trips requires significant manpower, we adjust the initial departure time of outside trips to the periods departing from the cordon line according to the travel time between the initial origin and the cordon line. By using the adjusted departure time for outside trips, hourly OD demand is aggregated from survey data. In the applications of TUE, TUE-f, and TCoE, the hourly OD demand from the outside network is assigned as a fixed OD demand separately, without residual demand, in each hour. In the next chapter, the computational time and the PC used for all calculations are described.
Result of Day-Long UE Assignment.
Figure 4 shows the result of the day-long UE assignment. The day-long UE predicts link flows with good accuracy and little bias in data variation.
Result of Hourly UE Assignment by Initial OD.
Figure 5 shows the result of the link flow estimated at 7:00 by the TUE and TUE-f assignments with the initial OD. The assignment of TUE-f largely overestimates link flows compared with observed link flows. This is attributed to the initial OD at peak hours naturally being overestimated from the questionnaire survey because of the bias mentioned in Section 4.1. Although the assignment result of TUE also has some overestimation bias at peak hours, the trend overestimated by TUE is smaller than that of TUE-f because TUE can realistically modify the initial OD with the residual demand between the current and next periods. Figure 6 shows the results of the TUE and TUE-f assignments at 22:00, which underestimate link flows compared with observed link flows. The initial OD also seems to be underestimated at nighttime. Therefore, from the comparison of hourly variation patterns in Section 4.1, the result that little bias exists in the day-long UE assignment (as shown in Figure 4), and the results of the TUE and TUE-f assignments, the bias in assignment corresponds closely to the characteristics of overestimation or underestimation in the hourly variation patterns of the survey data. The TCoE model should decrease the bias of the survey data by modifying the time coefficients to minimize the error of hourly link flows.

Assignment Result and Consideration for TCoE

We examine the TCoE model by applying it to the same road network as in Section 4. First, the hourly OD demand and link flow estimated by TCoE with two patterns of 24-hour time coefficients, for passenger cars and trucks, are examined over the entire network. Several conditions for the study network are the same as those used for the TUE assignment in Section 4. The results of the TCoE model and the TUE assignments that use the initial OD are compared and examined by analyzing the RMS errors between observed and estimated link flows.
The convergence criterion for the bilevel problem of TCoE is set as follows: the difference between the totals of the current and previous steps of RMSEs is less than or equal to 0.002. This criterion is applied separately to the two vehicle types, and the convergence of the TCoE model is judged when the criteria for both vehicle types are satisfied. The total computational time for the above convergence is about 240 minutes with a personal computer (Intel(R) Core(TM) 4.00 GHz processor with 64 GB RAM) at the iteration number where the bilevel problem converges. Table 1(a) shows the RMS errors of link flows by TUE-f, TUE, and TCoE. Comparing them for 24 hours and all vehicle types shows that the TUE-f assignment is much less accurate than the TCoE model. Conversely, the TCoE model can significantly improve its estimation accuracy by modifying the hourly OD demand. From the RMS error for all vehicles for the TCoE model, the accuracy was improved especially at peak hours. The RMS error at 7:00 for the TCoE model decreases by about 40% compared with that of TUE-f with the initial OD demand. The TCoE model can increase the accuracy for all periods uniformly because it modifies the time coefficients for all periods simultaneously. Table 1(b) also shows the RMSE of link flows for vehicles on the Nagoya expressways. It indicates that the TCoE model also estimates link flows on expressways with the highest accuracy among the models. Figures 7 and 8 show the scatter diagrams of link flows at 7:00 and 22:00 estimated by TCoE. Comparing them with Figures 5 and 6, these results indicate that the TCoE model can cancel the bias of overestimation at peak hours and underestimation at nighttime relative to the initial OD, as noticed in Section 4.1, and thereby greatly improve the estimation accuracy.
Comparison of Assignment Results in TCoE and TUE with Hourly OD-dep from TCoE.

In Section 3.3 we discussed the method to calculate the hourly OD demand based on departure time (hourly OD-dep) from the hourly OD demand based on the midway value (hourly OD-mid) of the TCoE estimation. We now apply the hourly OD-dep from the TCoE model to the TUE and analyze the hourly variation pattern to test the validity of the method. As shown in Table 1, the TUE with the initial OD also has better accuracy than TUE-f. Therefore, the calculated hourly OD-dep from the TCoE model is more accurate than the initial OD as an hourly OD demand based on departure time.
Application in Each Direction and Several Subareas for TCoE.

Now the TCoE model is applied to several subareas and dual directions into which the study area is divided. In the study area, the Chukyo metropolitan area contains most of Aichi prefecture (including Nagoya city) and parts of Mie and Gifu prefectures, which are the commuting areas for Nagoya city. For an analysis that increases the number of time coefficients in the TCoE, we set three subareas and directions (Aichi area to Aichi area, Aichi area to outside of Aichi area, outside of Aichi area to Aichi area, and outside of Aichi area to outside of Aichi area). Therefore, we apply the TCoE under the condition that the time coefficients are set for the above three subareas (four patterns) for two vehicle types, with the other conditions being the same as in the application in Section 5.1.
From the RMS errors of estimated link flows in Table 2, the three-subarea model of the TCoE can reduce the errors compared with the basic model of TCoE analyzed in Section 5.1. These results indicate that setting the time coefficients by subarea and direction can increase the estimation accuracy. Figure 11 shows the hourly variation patterns for passenger cars estimated by the three-subarea model. We see the general tendency that the hourly variation pattern of Aichi to Aichi, with a higher volume of total OD demand, results in a lower value at peak hours.
Conclusion
The day-long OD demand for transportation forecasting has advantages in accuracy and reliability because it is not affected by the hourly variation of the OD distribution. In this paper, we proposed the time coefficient estimation (TCoE) model to obtain the hourly OD demand from observed link flows given a proven day-long OD demand. It was constructed as a bilevel formulation combining generalized least squares and semidynamic traffic assignment (the OD-modification approach). Since the hourly OD demand matrices can be calculated by multiplying the given day-long OD demand by the 24-hour time coefficients estimated by the TCoE, the TCoE does not need many parameters regarding the origins and destinations of the OD demand matrices. The TCoE could significantly improve estimation accuracy because the initial OD demand from the survey had some bias due to missing data at nighttime, whereas the TCoE could cancel that bias with a few parameters. From the assignment results, the TCoE reduced the RMS error at 7:00 by about 40% in comparison with TUE-f using the initial OD demand. Additionally, we adopted the generalized least squares formulation for the TCoE to improve the accuracy of the hourly OD demand, because the maximum-entropy formulation requires a prior hourly OD demand, and the prior hourly OD demand in our study network has some bias and is not reliable.
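The central output relation of the TCoE, hourly OD matrices obtained by scaling a proven day-long OD matrix with the 24 estimated time coefficients, can be illustrated in a few lines. The zone count and coefficient values below are arbitrary placeholders, not values from the study.

```python
import numpy as np

n_zones = 50
day_od = np.random.rand(n_zones, n_zones) * 100.0   # day-long OD demand (veh/day), illustrative
coeffs = np.full(24, 1.0 / 24.0)                    # 24-hour time coefficients (sum to 1)

hourly_od = coeffs[:, None, None] * day_od          # shape: (24, n_zones, n_zones)
assert np.allclose(hourly_od.sum(axis=0), day_od)   # the hourly matrices recover the daily totals
```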
We reviewed the semidynamic concept for the OD-modification approach (TUE), compared it with the OD modification in the TCoE model, and newly proposed a partial relaxation of the assumption about the study period length in the OD-modification approach. The TUE is formulated as a static user-equilibrium traffic assignment with elastic demand, which modifies the OD demand in the current period to account for the residual traffic volume at the end of each period in a congested network. That is, the residual traffic of the TUE is semidynamically subtracted from the demand in the current period and added to the demand in the next period according to the degree of congestion in the study network, so that the original hourly OD demand is preserved across the current and next periods. The treatment of residual demand in the TCoE is almost the same as in the TUE, except that in the TCoE the OD modification is handled not in the upper problem but in the lower traffic-assignment problem.
The hourly OD demand obtained by the TCoE is the hourly OD-mid, which minimizes the error between estimated and observed link flows midway along the paths between OD pairs. Therefore, the hourly OD demand from the TCoE differs slightly from the OD demand aggregated by departure time (hourly OD-dep) from survey data. In case the hourly OD-dep is required for practical use, we also explored and examined a method to calculate the hourly OD-dep from the TCoE result.
The OD-modification approach (TUE) assumes that the period length must be set longer than the maximum travel time. Although a partial relaxation of this assumption was proposed, it is difficult to apply the TUE to a network in which many OD pairs have travel times longer than the period length, such as a network with a 15-minute period length and a 30-minute average travel time, because in this case most of the traffic cannot reach its destination within the current period and the treatment of residual flows in Figure 1 cannot be applied adequately. The TUE can therefore analyze travel time and the degree of congestion as average values for each period by using a link-cost function with traffic capacity, but it cannot treat a congestion queue. When queue analysis in a congested network is needed, a combined application that adopts dynamic traffic assignment in a limited study network and uses the OD demand estimated by the TCoE may be an efficient method for reducing the construction cost of a large-scale road network model.
However, since the TUE is a user-equilibrium traffic assignment with elastic demand, it can integrate a logit model that expresses travel behaviors such as the route choice between normal roads and expressway toll roads. Thus, the TUE can properly be applied to route-choice analysis to predict the change in hourly traffic volume after the construction of new bypass roads and expressways, or after congestion charging in peak hours. If a day-long OD demand for future transportation planning and a traffic assignment system for a large-scale road network with observed link flows (which need not be automatically counted data) have already been prepared, the TCoE can be applied to the same network simply by changing several parameters, such as the hourly traffic capacity, and can simultaneously estimate the hourly OD demands and link flows over the 24-hour period. Since the TUE can use a simple representation of intersections in the network, as a static traffic assignment, it has the characteristic of reducing the maintenance cost of the network while retaining high accuracy.
Future research should examine how to set subareas and durations in a study area for good accuracy and efficiency. This paper executed the TCoE model using as many observed link flows as possible, so we could not clarify the relationship between the estimation accuracy and the number of observed links or the size of the study network. The relationship between estimation accuracy and the location of observed links should also be analyzed in the future.
Figure 2: Hourly variation patterns by survey type and vehicle type.
Figure 2 shows the hourly variation patterns of the OD survey and the link flow observation for passenger cars and trucks. The OD survey shows an hourly OD variation pattern, that is, a fluctuation of the 24-hour time coefficients for the OD demand, as mentioned in Section 1. Similarly, the link flow observation shows an hourly link-flow variation pattern as a fluctuation of 24-hour time coefficients.
Initial OD Demand into TUE and Considerations
4.2.1. Outline. The hourly OD demand based on departure time aggregated by the road traffic census 2010 is hereafter called the "initial OD." We examine the characteristics of the initial OD.
Figure 4: Result of day-long UE assignment.
Figure 9 compares the hourly variation patterns of the hourly OD-dep obtained by the calculation method in (14), the hourly OD-mid estimated by the TCoE model, and the initial OD from the survey. These hourly variation patterns are sums of the trips within the study area. As seen in this figure, the variation pattern for the hourly OD-dep has a higher value during the peak hour than the hourly OD-mid, because the TCoE model estimates the hourly OD-mid that is adjusted to fit the link flow midway along each path, not the link flow near each origin. In comparison with the initial OD from the survey, the calculated hourly OD-dep further reduced the bias by restraining the time coefficients during the peak hour and increasing them at 6:00 and at nighttime. Figure 10 compares the link flows assigned by the TUE with the hourly OD-dep from the TCoE model against those estimated by the TCoE model directly, with the link flows summed for all vehicles at 7:00 and 22:00. Based on this comparison, because the coefficient of determination for each period exceeds 0.98 and the slope is almost 1.0, these results are almost the same. From Table 1 and Figure 10, when the TUE assigns the hourly OD-dep, it can estimate traffic volumes with the same accuracy as the TCoE.
Figure 10: Comparison between results by the TCoE and the TUE with OD-dep from TCoE.
Figure 11: Hourly variation patterns for passenger cars estimated by the three-subarea model.
Figure 1: Modification image of residual demand at the end of a period in the OD-modification approach.
Table 1(a): Comparison of RMS error for link flows for all vehicles.
The iterative algorithms of the upper problem and the lower problem (TUE) are implemented in FORTRAN and MATLAB, respectively. The average computational time for the TUE in a peak hour is approximately 10 minutes, under the condition that the number of iterations for the UE traffic assignment is limited to 20. The average travel time over all OD pairs within the study region is about 35 minutes in peak hours. The proportion of traffic that cannot reach its destination within a period is about 3% of all OD demand in the study region, and the proportion of residual demand to all OD demand in each hour is about 25-35%.
Table 2: RMS errors for the TCoE with the basic model and the 3-subarea model.
| 2018-12-21T14:28:12.197Z | 2017-08-29T00:00:00.000 | {
"year": 2017,
"sha1": "dca628896291b3c9f70b062508c1b4108a788023",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jat/2017/6495861.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dca628896291b3c9f70b062508c1b4108a788023",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
43615598 | pes2o/s2orc | v3-fos-license | Roles and Responsibilities for Sustaining Open Source Platforms and Tools
Developing, deploying and maintaining open source software is increasingly a core part of the operations of cultural heritage organizations. From preservation infrastructure, to tools for acquiring digital and digitized content, to platforms that provide access, enhance content, and enable various modes for users to engage with and make use of content, much of the core work of libraries, archives and museums is entangled with software. As a result, cultural heritage organizations of all sizes are increasingly involved in roles as open source software creators, contributors, maintainers, and adopters. Participants in this workshop shared their respective perspectives on institutional roles in this emerging open source ecosystem. Through discussion, participants created drafts of a checklist for establishing FOSS projects, documentation of project sustainability techniques, a model for conceptualizing the role of open source community-building activities throughout projects, and an initial model for key institutional roles for projects at different levels of maturity.
INTRODUCTION
As cultural heritage institutions become increasingly involved in the collaborative development, deployment and maintenance of open source software, an ecosystem of researchers, nonprofit organizations, cultural heritage institutions, service providers and funders has emerged to help make this work possible. The roles and responsibilities that these entities should take are often only evident in the successes of individual open source tools and platforms. Through facilitated discussion, participants in this workshop focused on formalizing the kinds of roles that these organizations can and should play in developing, deploying, sustaining, and disseminating open source software, tools, best practices, and services.
PARTICIPANTS
There were 35 participants in this day-long workshop. Attendees brought their experience working in a range of roles at a variety of institutions. There were participants from the Computer History Museum, the National Library of Sweden, the National Archives of Australia, the Bentley Historical Library, the State Archives of North Carolina, Artefactual Systems, Educopia Institute, and the John F. Kennedy Presidential Library. Participants also represented a cross section of roles (administrators, archivists, librarians, lawyers, software developers, and community managers) within organizations. This diversity of backgrounds, roles and perspectives provided invaluable input, leading to fruitful discussion.
WORKING GROUP OUTCOMES
The attendees organized themselves into four working groups. These groups began drafting guides and resources to address a range of pressing needs for improving investment and planning for FOSS digital preservation projects. The work of the groups is briefly described below.
Checklist for Establishing FOSS Projects
Where does one start when planning a successful open source project, or when open sourcing an existing software project? While there is some work related to the maturity of FOSS projects [4], there is still a significant need for guidance in this area. Recognizing the complexity in this space, one group began drafting a checklist of key issues to consider and explore when starting an open source project or shifting an existing software development project to an open source model. The group identified a range of individual issues organized into five categories: planning, legal and licensing, requirements and testing, user community, and developer community. When revised and completed, this checklist will be useful both as a resource for establishing plans and as a tool for evaluating plans for proposed tools.
Identifying Sustainability Techniques
Establishing approaches to address the sustainability of FOSS digital library projects remains a key issue in the field [5]. There are various modes for generating the funds or in-kind contributions necessary to make an open source software project sustainable. Through discussion of a range of individual projects and of related research, this group articulated a series of techniques for sustainability and noted their strengths and weaknesses. Through this process, the group produced a set of notes highlighting key features of successful open source projects. In particular, participants noted that most mature open source projects in the digital library sector leverage core operating resources across multiple organizations. The group also noted that the most successful projects incorporate multiple streams of funding and resources, helping to ensure sustainability.
FOSS Community Building Planning
The success of open source software projects is anchored in their ability to engage and develop communities [1]. Through discussion of the development of successful and vital open source communities around a range of different individual projects, this group began articulating critical community-building activities. These activities are tied to different stages in a project (from conceptualization, to design and development, through to implementation and adoption). A key takeaway from the group is the importance of establishing community development plans at every stage of a project's development. There is a clear need to complete the development of this model to clarify and share which activities are appropriate at particular stages of a project.
Organizational Roles & Project Maturity
This group examined and discussed different successful open source software projects. They defined a set of project phases, identifying key roles for different institutional partners during the development of these projects. This suggested the following roles over three distinct phases of development.
Key roles were identified for the initial development/start-up phase of a product. As a project reaches its initial roll-out and moves toward maturity, it becomes important to engage:
professional associations (to get the word out about the project) and a sustainable home (an organization focused on running and managing the project, providing services, managing membership models, and serving as a host for member-driven governance).
When a product reaches maturity, it ideally will have cultivated: other providers (companies and/or non-profits providing additional services around the product) and a developer community (a community of developers from multiple organizations contributing to the project).
CONCLUSIONS & NEXT STEPS
Each group identified some next steps and key participants that plan to carry forward the work started in the meeting. This will fully realize the development of resources and guides that can be used to improve the planning, delivery and quality of open source software for digital library and digital preservation tools and systems. In closing remarks, Paul Wheatley of the Digital Preservation Coalition stressed how critical it is for knowledge and best practice in this area to advance. Every year significant resources are invested in software development across the sector. Without further development of the kinds of resources started by these working groups, it is difficult to ensure that those investments are making the maximum impact. | 2017-11-29T03:42:45.178Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "3876f327926b6e42f9bff79dcfe5fd7621ac8593",
"oa_license": "CCBY",
"oa_url": "https://osf.io/w4z4v/download",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3876f327926b6e42f9bff79dcfe5fd7621ac8593",
"s2fieldsofstudy": [
"Computer Science",
"History"
],
"extfieldsofstudy": [
"Computer Science",
"Business"
]
} |
16278890 | pes2o/s2orc | v3-fos-license | The kinetics of inhibitor production resulting from hydrothermal deconstruction of wheat straw studied using a pressurised microwave reactor
Background The use of a microwave synthesis reactor has allowed kinetic data for the hydrothermal reactions of straw biomass to be established from short times, avoiding corrections required for slow heating in conventional reactors, or two-step heating. Access to realistic kinetic data is important for predictions of optimal reaction conditions for the pretreatment of biomass for bioethanol processes, which is required to minimise production of inhibitory compounds and to maximise sugar and ethanol yields. Results The gravimetric loss through solubilisation of straw provided a global measure of the extent of hydrothermal deconstruction. The kinetic profiles of furan and lignin-derived inhibitors were determined in the hydrothermal hydrolysates by UV analysis, with concentrations of formic and acetic acid determined by HPLC. Kinetic analyses were either carried out by direct fitting to simple first order equations or by numerical integration of sequential reactions. Conclusions A classical Arrhenius activation energy of 148 kJmol−1 has been determined for primary solubilisation, which is higher than the activation energy associated with historical measures of reaction severity. The gravimetric loss is primarily due to depolymerisation of the hemicellulose component of straw, but a minor proportion of lignin is solubilised at the same rate and hence may be associated with the more hydrophilic lignin-hemicellulose interface. Acetic acid is liberated primarily from hydrolysis of pendant acetate groups on hemicellulose, although this occurs at a rate that is too slow to provide catalytic enhancement to the primary solubilisation reactions. However, the increase in protons may enhance secondary reactions leading to the production of furans and formic acid. The work has suggested that formic acid may be formed under these hydrothermal conditions via direct reaction of sugar end groups rather than furan breakdown. However, furan degradation is found to be significant, which may limit ultimate quantities generated in hydrolysate liquors.
Introduction
Physicochemical pretreatments are required to increase the efficiency of enzyme hydrolysis of the polysaccharide fraction of lignocellulosic biomass, in order to liberate fermentable sugars for production of ethanol. The lignocellulosic cell wall consists of an assembly of cellulose fibrils, sheathed in a layer of hemicellulose, which acts as an interface with a surrounding network of lignin [1]. This highly recalcitrant structure must be subjected to controlled deconstruction in order to increase the accessibility and reactivity of the cellulose fibrils, which is required to maximise the rate and yield of saccharification to liberate glucose. Hydrothermal processing is one of the most promising deconstruction methods for lignocellulosic biomass, carried out in water at high temperatures and pressures, where the cell wall is disrupted by hydrolysis of the hemicellulose component coupled with degradation and temporary liquefaction of the lignin component [2,3]. The process is particularly attractive as it has low chemical demand and leads to a deconstructed product with high cellulose digestibility. The technology has been demonstrated successfully at pilot-scale and has the potential for large-scale operation, although some disadvantages still need to be overcome before full commercialisation is realised. Most significantly, the reaction conditions lead to the formation of aromatic, furanic and organic acid by-products, which inhibit yeast fermentation [4]. Also, the need for operation at high temperatures leads to a high energy demand, with the corresponding engineering difficulties in containing water at high temperatures and pressures. A better understanding of the mechanisms and chemistry of hydrothermal deconstruction of lignocellulosic biomass is therefore required, in order to establish process conditions which minimise inhibitor production, whilst maximising the desirable deconstruction reactions and minimising energy consumption.
Central to the improvement in understanding of hydrothermal processes is the need for fundamental information on the kinetics of the various pathways associated with the deconstruction of lignocellulosic biomass. This includes the reactions involving the disassembly of separate hemicellulose and lignin fractions in the cell wall and the degradation reactions within these fractions leading to formation of inhibitory by-products. Information at this basic level will be helpful both in the design of processing technologies and also in the selection of operational times, temperatures and concentrations. However, access to controlled kinetic data is challenging, as typical small-scale reactors used for research investigations require significant heat-up times, so measurements may suffer from nonisothermal conditions [5]. This difficulty can be partly circumvented by controlled overheating and then stabilisation [6], injection of preheated chemicals [7], or by correction for the heat-up time [8], or by working with a severity factor scale based on non-isothermal measurements [9]. Studies of the progress of hydrothermal deconstruction reactions of biomass under different conditions have been reported, which allude to more fundamental kinetic parameters [10][11][12][13]. However, the underlying rate constants and activation energies for reactions concerned with inhibitor generation are reported more rarely [5,14,15], without which it is difficult to make effective predictions or carry out modelling.
In this new study we have made use of a laboratory microwave synthesis reactor with the capability to apply controlled heating to samples of biomass in pressurisable vessels. Other studies of microwave processing of biomass have been reported, either for pyrolysis [16], extraction [17] or chemical conversion [18]. Microwave-assisted kinetic studies of biomass pyrolysis have also been reported [19], as has aqueous alkali-assisted pretreatment at ambient pressure [20]. A kinetic study of microwave-assisted synthesis of furfural from xylose has been reported using liquid water and water/organic mixtures at elevated pressure [21].
The use of microwave dielectric heating at 2.45 GHz, rather than conductive or steam heating, is not considered to influence the fundamental reactions taking place in the biomass material [17], but offers a method for uniform volumetric delivery of thermal energy by virtue of the dielectric properties of water, which is present in excess within the reaction mixture. This is distinct from studies where specific microwave enhancements have been postulated [22]. The reactor can be programmed to heat the sample rapidly, with accurate temperature feedback, then to maintain a precise set temperature for a selected time and then to cool the sample rapidly by forced air flow to halt the progress of reactions. The reaction products for each experiment time can then be collected and analysed to build up kinetic profiles for concentrations of relevant species. Direct gravimetric measurements of total solubilised species have been made in order to link with classical studies of biomass conversion reactions, for example associated with wood pulping [23]. The collection of gravimetric data has also allowed a simplification of interpretation and kinetic modelling, and has also allowed comparisons with reaction severity equations commonly used for biomass pretreatment for ethanol production [11]. We have demonstrated how the hydrothermal deconstruction of wheat straw can be interpreted using standard or modified kinetic models, to gain understanding of the interrelationships between kinetic parameters and hence chemical pathways between different solubilised species. These associations will also assist in understanding the morphological relationships between cell wall components and changes occurring during deconstruction. A better appreciation of the origins of formation of inhibitor compounds will also assist in programmes concerned with selection of optimal feedstock phenotypes, where cellulose structure and composition may be manipulated by breeding or genetic modification.
Kinetics of primary hydrothermal reactions
Global solubilisation
The rates of solubilisation of the wheat straw at the selected reaction temperatures of 180, 200 and 220°C are shown in the gravimetric profiles in Figure 1a. The combined mass loss from the depolymerisation and solubilisation of all species therefore provides an unambiguous measure of the extent of deconstruction of wheat straw, so the derived kinetic parameters can be used to describe a global reaction ordinate, as discussed later. Around 5% of the sample mass was found to be easily extractable in water, consisting of loosely bound sugars, soluble inorganic material and other residues. This constituted a background which was subtracted prior to kinetic analysis. After subtraction, at all the studied process temperatures the kinetic profiles reached an asymptote of around 35 wt% of total biomass. The non-polar waxy fraction of wheat straw accounts for approximately 4% of total mass, from Table 1, which may form a melt dispersion during the hydrothermal treatment but is expected to re-precipitate on cooling and should therefore not contribute to the mass loss [24].
The kinetic profiles for the total mass loss could be successfully fitted to a first order kinetic equation (1), overlaid on the data in Figure 1a, where S_t is the mass removed at time t, in units of g/kg-original dry mass, S_inf is the mass removed at infinite time and k is the rate constant. The fitted rate constants for each temperature are summarised in Table 2, from which the activation energy (E) and pre-exponential factor (A) could be derived via Arrhenius analysis using equation (2).
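Equations (1) and (2) are not reproduced in this extraction; from the definitions just given, they presumably take the standard first-order and Arrhenius forms, where R is the gas constant and T the absolute temperature:

```latex
S_t = S_{\mathrm{inf}}\left(1 - e^{-kt}\right) \qquad (1)

k = A \exp\!\left(-\frac{E}{RT}\right) \qquad (2)
```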
The knowledge of the Arrhenius parameters allows a prediction of the extent of progress of the collective deconstruction reactions of wheat straw under any combination of time and temperature. From this it is possible to revisit the concept of reaction severity (the reaction ordinate), based on the approach followed by Overend and Chornet from earlier work [23,25]. This assumes that, in bulk, the thermally activated depolymerisation-solubilisation reactions in woody materials follow pseudo-first order kinetics, so the global extent of reaction is related to the severity parameter (R_o), which is defined as an empirical function of time and temperature in equation (3), where T_r is the reaction temperature and T_b is a reference temperature (usually taken as 100°C). For comparison, the classical function ln(k.t) can be calculated directly from equation (2) using the Arrhenius parameters of the current study, and this correlates linearly with the function ln(R_o) derived from equation (3). However, the correlation is offset as a result of a different choice of Arrhenius activation energy in the original equations referenced by Overend and Chornet and by Chum and others, of 113 kJmol−1, compared to the value of 148 kJmol−1 found in the current study [23,25]. Future studies might make use of this higher activation energy for calculation of the classical function (k.t) or ln(k.t), if this is more predictive of the solubilisation behaviour of arable plant residues under hydrothermal conditions. Further kinetic studies would be required to confirm whether this higher activation energy for deconstruction is appropriate for other biomass residues or energy crops. An activation energy for solubilisation of 130 kJmol−1 was found for the autohydrolysis of corn cobs using a slow heat-up reactor with delayed onset of sampling [5]. The current microwave protocol achieved more genuinely isothermal conditions, accurately recorded by in situ temperature measurement.
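Equation (3) is likewise not shown here; in the classical Overend-Chornet formulation the severity parameter takes the form below, with the empirical constant 14.75 conventionally used in that work (an assumption based on the cited literature rather than on this extraction):

```latex
R_o = t \cdot \exp\!\left(\frac{T_r - T_b}{14.75}\right) \qquad (3)
```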
Lignin solubilisation
Approximately 6 to 8% of the total lignin content of the straw was solubilised in the reaction liquor under the hydrothermal conditions of this study. Depolymerisation is believed to be due to scission of the predominant beta-aryl ether bonds between phenylpropanoid units, which liberates fragments sufficiently small and with sufficient polarity to achieve water solubility [26]. All solubilised aromatic fragments derived from lignin are expected to have similar chromophores, including vanillic, coumaric and cinnamic species. The UV data therefore provide a total measure of lignin-derived inhibitor concentration, which avoids the need for chromatographic determination of each specific molecular species. The concentration profiles at each reaction temperature could be fitted to a first order kinetic equation, analogous to equation (1), with a defined final concentration, as shown in Figure 1b, with rate constants and derived activation energy also summarised in Table 2. The late data points showed some evidence of lignin recondensation, more prevalent at lower liquor ratio as described later, which was accounted for by a constant factor. Given this additional influence, the rate constants determined for lignin solubilisation were slightly higher than that of global solubilisation and the corresponding activation energy of 135 kJmol−1 was slightly lower, although consistent with hydrolysis-type reactions. However, with lignin it is also possible that in hydrophobic regions the aryl-ether bonds will undergo thermally induced homolytic rupture [27]. Scissions in these less accessible lignin regions would be more likely to be followed by recondensation of the reactive species, resulting in insolubility as described in earlier studies [3,10]. The lignin fragments extracted into the liquor may therefore be from the more hydrophilic, water-accessible regions of the lignin network, which might be more closely associated with the hemicellulose components of the cell wall. From the kinetic profiles, the final amount of solubilised lignin increased slightly with increasing reaction temperature from 180 to 220°C, which may be a result of an improvement in accessibility through higher water activity.
Hemicellulose solubilisation
The majority of the cellulose component of the cell wall is retained in the solid residue under these hydrothermal conditions, although a minor amount of glucan may be depolymerised and will therefore make a small contribution to the mass loss [23]. The major proportion of solubilised mass is a result of the hydrolysis of the arabinoxylan polysaccharides in the cell wall, which are extracted into the aqueous liquor after sufficient bond scissions of the polymer backbone have occurred to reduce the molecular weight below the solubility threshold. This leads to the creation of a range of soluble xylose monomers and oligomers, together with minor amounts of other hemicellulose sugars, with various functionalities. Discounting the minor contribution from cellulose, the difference between the values for total gravimetric mass loss and that of solubilised lignin provided a measure of the rate of solubilisation of all hemicellulose species. The resultant profiles were fitted successfully to the first order kinetic equation (1), with the corresponding activation energy for wheat straw under these hydrothermal conditions found to be 149 kJmol−1. This was only slightly different from that of total biomass solubilisation and is consistent with published values for hydrolysis of other hemicellulose materials [3]. The gravimetric protocol for hemicellulose solubilisation provides a less complex alternative to the full compositional analysis of the solid residue at each time point. Also, the gravimetric measurement avoids the complication of identification of both oligomers and monomers in the hydrothermal liquors in subsequent kinetic analyses, or the uncertainty in defining the oligomer molecular weight at the point of liberation from the biomass solid. The relationship between the rate constant for mass removal and that of individual polysaccharide bond scission has been considered mathematically for a model cellulose, where a constant factor was applied, approximately proportional to the degree of polymerisation of the soluble fragments [28]. However, a realistic heterogeneous model would be more complex, as liberation of hemicellulose fragments from the insoluble cell wall will require other energy input, for example to break hydrogen bonds and to allow conformational movement of the fragments away from the cell wall surfaces. The activation energies for oligomer production by weight loss and individual monomer production by bond scission may therefore be different, as explored in the next section.
Kinetics of secondary hydrothermal reactions
Generation of furans
The hemicellulose oligomers liberated into solution undergo continuing hydrolysis to form a variety of pentose monomers, which then undergo dehydration reactions to give furfural as a major product; furfural then degrades further to a variety of molecules, including various organic acids and condensation products [29]. The UV analysis method provided a fast, reliable measure of the evolution of total furans with time, as shown for wheat straw at the three reaction temperatures in Figure 2a.
A minor amount of cellulose may be solubilised under hydrothermal conditions, although other soluble C6 sugars may be derived from minor glucan or galactan constituents of hemicellulose. These C6 sugars will undergo dehydration to form 5-(hydroxymethyl)furfural (HMF), which will be detected by UV within the total furan response.
For subsequent kinetic analyses all furanic species are assumed to be derived from hemicellulose. Separate analyses by HPLC confirmed that concentrations of HMF were present at a constant 7% proportion of furfural in the hydrolysates from this study, and it was therefore assumed that both C6 and C5 dehydration followed the same rate behaviour. For this study a scheme of linked first order reactions was proposed to account for the production and degradation of furanic species, outlined schematically in Figure 3, where the individual first order rate constants are shown for each step in the pathway. Various assumptions are implicit in the scheme, which have been introduced to improve the manageability of analysis, hopefully without sacrificing chemical reality. Firstly, it was assumed that there is a very low statistical chance of sugar monomers being directly released by endwise loss from polymer chains held in the cell wall. The apparent mono-exponential kinetics of initial hemicellulose solubilisation from the cell wall (rate constant k_s) also suggested that it was only necessary to consider one initial reacting species and hence a single initial reaction step. Also, it was considered that only sugars converted to monomer form (rate constant k_2) could undergo subsequent conversion to furanic species (k_3), via dehydration mechanisms [29]. Initially it was assumed that all hemicellulose followed the furan pathway, although in the later discussion it is postulated that some might react to give formic acid. Degradation of furans via all possible pathways was approximated by a single rate constant (k_4).
The reaction scheme is represented mathematically by a set of first order differential rate equations, shown as equations (4, 5, 6 and 7), where C, H, M and F are the concentrations of unreacted hemicellulose polymer, soluble hemicellulose oligomers, sugar monomers and furans, respectively.
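The equations themselves are not reproduced in this extraction; from the scheme as described, they presumably take the following form, where the bracketed k_fa term is the optional formic acid pathway mentioned below:

```latex
\frac{dC}{dt} = -k_s C \qquad (4)

\frac{dH}{dt} = k_s C - k_2 H \;\left[-\,k_{fa} H\right] \qquad (5)

\frac{dM}{dt} = k_2 H - k_3 M \qquad (6)

\frac{dF}{dt} = k_3 M - k_4 F \qquad (7)
```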
The evolution of the concentrations of the different species with time was calculated by numerical integration of these coupled ordinary differential equations (ODEs). Errors between calculated and experimentally measured furan concentrations were minimised by adjustment of the k_2, k_3 and k_4 rate constants, with the k_s rate constant fixed initially for each minimisation, as determined from the kinetics of gravimetric solubilisation of hemicellulose. The initial amount of hemicellulose polysaccharide (C_o) was taken from the straw compositional assay as 235 g/kg-dry biomass, expressed as sugar monomer, with the molecular weights of furfural and xylose used for mole conversions. The mass balance of all species was confirmed over time. The resulting fitted concentration profiles are superimposed on the experimental furan data in Figure 2a, with corresponding rate constants from the minimisation summarised in Table 3. The differential rate equation (5) also includes an optional term for generation of formic acid directly from soluble oligomers, which is considered separately below.
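A minimal sketch of this fitting procedure is given below, assuming the scheme of equations (4)-(7). It is illustrative only, not the authors' code: the sampling times, furan observations and starting guesses are hypothetical placeholders, and the mole/mass conversions described in the text are omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

C0 = 235.0   # g/kg-dry biomass, initial hemicellulose (from the compositional assay)
ks = 0.25    # 1/min, fixed from the gravimetric solubilisation fit (illustrative value)

t_obs = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # min, hypothetical sampling times
F_obs = np.array([0.5, 2.0, 5.5, 9.0, 8.0])      # g/kg, hypothetical furan data

def rhs(t, y, k2, k3, k4):
    C, H, M, F = y
    return [-ks * C,                # eq (4): hemicellulose solubilisation
            ks * C - k2 * H,        # eq (5): oligomer pool (k_fa term omitted)
            k2 * H - k3 * M,        # eq (6): monomer formation and consumption
            k3 * M - k4 * F]        # eq (7): furan formation and degradation

def residuals(params):
    sol = solve_ivp(rhs, (0.0, t_obs[-1]), [C0, 0.0, 0.0, 0.0],
                    args=tuple(params), t_eval=t_obs, rtol=1e-8)
    return sol.y[3] - F_obs         # misfit between simulated and observed furans

fit = least_squares(residuals, x0=[0.1, 0.05, 0.02], bounds=(0.0, np.inf))
print("k2, k3, k4 =", fit.x)
```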
From Table 3, the values of the 95% confidence limits of the Arrhenius parameter estimates are high, as a result of the limited number of temperature points, although standard errors are acceptable. Comparison of values must therefore be carried out with caution, although with this in mind, the results of the kinetic analysis are broadly consistent with those from previous studies [5]. The activation energy for oligomer release appears to be higher than that for subsequent hydrolysis of oligomers to monomers, which may be a result of the intermolecular hydrogen bonding and steric constraint of hemicellulose polymer within the cell wall matrix. The current analysis also confirms that furan degradation (k_4) is an important onward pathway, which from the current work appears to have a relatively high activation energy, so becomes increasingly significant at higher process temperatures. This may limit the ultimate concentration in hydrolysate liquors, which will impact on the level of inhibitory contributions of furans in subsequent fermentations. Model studies using xylose have also identified a limit in furfural concentration, which has been interpreted as a result of competing reactions for xylose degradation, rather than furfural instability [15]. However, such model studies will not be influenced by catalytic contributions from the generation of acetic acid from hemicellulose, as discussed later, which may accelerate the rate of furfural breakdown [30]. Also, it is likely that in true biomass mixtures there will be many opportunities for furfural to react with other labile groups or reactive intermediates derived from cell wall components, providing other pathways for its degradation. Such pathways will be of increasing significance as reaction severity is increased, helping to explain the higher activation energy and rate constants than those observed for model degradation of furfural [15,19].
Generation of acetic acid
Acetic acid is believed to be formed as a result of proton-catalysed hydrolysis of pendant acetate groups attached at the hydroxyl positions of arabinoxylan polysaccharide [14]. This is consistent with the finding that in water the pH of the reaction liquor typically falls from pH 7 to around pH 4 after treatment. In keeping with this mechanism, the data could be fitted successfully to a first order exponential equation (8), where Ac is the concentration of acetic acid, with fitted parameters summarised in Table 3.
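Equation (8) is not shown in this extraction; given the description, it presumably has the same first-order form as equation (1):

```latex
Ac_t = Ac_{\mathrm{inf}}\left(1 - e^{-kt}\right) \qquad (8)
```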
The calculated activation energy of 104 kJmol−1 is consistent with ester hydrolysis, although at all temperatures the measured rate constants for acetic acid formation were at least five times lower than the corresponding rate constants for solubilisation of hemicellulose. Deacetylation must therefore take place by hydrolysis of acetate groups of xylan oligomers that are already in solution, possibly as well as on intact hemicellulose structures in the cell wall. However, the rate profiles for acetic acid formation were insufficiently resolved to show any evidence of deviation from simple exponential behaviour, which might indicate more than one environment. The relatively slow rate of acetic acid generation suggests that any reduction in pH that this entails will have limited catalytic effect on the kinetics of primary solubilisation, which must therefore occur as a result of true autocatalysis by water. However, the reducing pH may influence catalysis of slower secondary reactions of saccharides in solution.
Fitting of the experimental profiles for generation of acetic acid was achieved with an almost constant initial acetate concentration, of 30 to 32 g/kg (as the acid) on a total biomass basis, at all experimental temperatures. In molar terms, this corresponded to a degree of substitution (DS) on the xylan units of hemicellulose of around 0.44, from the analysed sugar composition of the straw, or 0.36 if substitution is possible on all sugar monomers. This is higher than the DS values of around 0.2 to 0.3, which have been found by analytical hydrolysis of the Hustler variety of wheat straw [31]. Similar molar equivalents of acetic acid were detected in pretreatment hydrolysates of straw by other workers [32].
The trends in acetic acid concentrations in treatment liquors have also been studied by following the hydrothermal reactions of hemicellulose separated from straw, obtained by extraction as detailed in the Methods section. Kinetic data using the microwave reactor at temperatures of 180 and 200°C are shown in Figure 4a, for reactions in water and also in solutions of 1% sulphuric acid. With water as a reaction medium the concentration of acetic acid continued to evolve beyond the experiment time limit, mirroring the data for the whole straw. However, in the presence of the dilute acid catalyst the rate of evolution was faster, as expected, reaching an asymptote of similar concentration at both reaction temperatures. This supports the conclusion that a finite number of acetate substituents on hemicellulose are available for hydrolysis, Single or first number in brackets, ±95% confidence limits of parameter estimation; second number in brackets, ± standard error of parameter estimation. a Unchanged from results with K f = 0.^Above high limit.
with no suggestion of continuing liberation of acetic acid due to breakdown of terminal or monomer sugar groups [16]. This allows some confidence in estimation of the total potential acetic acid concentrations, which might be expected in hydrothermal process or fermentation liquors. From this same series of hemicellulose reactions it was observed that in water the rate of appearance of xylose monomers in the reaction liquor was slow, in Figure 4b, with most solubilised hemicelluloses remaining in oligomer form. The reaction in 1% sulphuric acid was faster, with the hydrolysis of glycosidic bonds leading to a pronounced increase in concentration of xylose monomers with time, in Figure 4b, up to a maximum depending on reaction temperature, followed by a reduction in concentration as degradation reactions became more pronounced. The concentration of arabinose followed a similar set of profiles, in Figure 4c, although at lower overall concentrations, reflecting the lower arabinose content of the hemicellulose. However, under equivalent conditions it was noted that arabinose appeared in solution at a faster rate than xylose, presumably due to the ease of cleaving the single α1-2 or α1-3 arabinose linkages to the xylan backbone. Degradation of the less stable furanose ring also occurred at a faster rate under equivalent conditions. If the hydrolysis under acid conditions is considered to lead to the most quantitative generation of monomers, then, making estimations from the data for degradation losses, the comparison of molar concentrations in the reaction liquor suggests a degree of acetate substitution on the original extracted arabinoxylan of around 0.36, including xylose and arabinose but excluding other minor saccharides. This is in line with the amount of acetate determined from the hydrothermal treatment of the whole straw.
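The degree-of-substitution figures quoted in this section follow from a simple molar ratio, sketched below. The xylan content used here is a hypothetical placeholder chosen only to illustrate the calculation; the full composition in Table 1 is not available in this extraction.

```python
MW_ACETIC = 60.05            # g/mol, acetic acid
MW_ANHYDROXYLOSE = 132.11    # g/mol, xylose unit as it occurs in the polymer

acetic_acid = 31.0           # g/kg-dry biomass, asymptotic release (from the text: 30-32 g/kg)
xylan = 155.0                # g/kg-dry biomass, assumed xylan content (hypothetical value)

# DS is the molar ratio of liberated acetic acid to hemicellulose sugar units
ds = (acetic_acid / MW_ACETIC) / (xylan / MW_ANHYDROXYLOSE)
print(f"DS on xylan units ~ {ds:.2f}")   # ~0.44 with these assumed values
```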
Generation of formic acid
Formic acid is also detected in significant amounts in hydrothermal hydrolysates, although its origins are less well understood, with possible mechanisms associated with the degradation of furan species [16,29]. However, the kinetic profiles in Figure 2c reveal that in molar terms formic acid is released under these hydrothermal conditions at a faster rate than furfural is produced, especially at early times, so it may not be generated purely by furfural breakdown. The apparent first order behaviour is also inconsistent with this reaction pathway. However, an alternative mechanism for the generation of formic acid, which may be consistent with the data, involves the direct breakdown of a saccharide reducing end group, which is first rearranged to form the 1,2-enediol and then hydrolyses to liberate formic acid [33]. This mechanism would require an early availability of saccharide end groups in the hydrolysate liquor, which would come from hemicellulose fragments including oligomers and hydrolysed substituents.
The pseudo-first order kinetic behaviour shown by formic acid may therefore conceal a more complicated reaction scheme, which may involve competition with the main pathway for generation of furfural. If it is assumed that formic acid can be formed by end group reactions of solubilised hemicellulose, then this will result in a split in the k_2 pathway from this point, which may be introduced into the scheme by inclusion of an additional parallel first order reaction with a rate constant k_fa, as shown in Figure 3. A corresponding additional term is included in the differential equation (5). Attempts at calculation of formic acid kinetic profiles with this revised scheme were quite successful, with fitted data shown as continuous lines in Figure 2c and parameters summarised in Table 3. The pseudo-first order behaviour then comes from the rapid solubilisation of hemicellulose, giving an apparently fixed reagent concentration. Subsequent adjustment of the other rate constants in the analysis was required to maintain good fits of the furfural profiles, which led to a reduction in k_2 for monosaccharide formation and an increase in k_3 for furfural formation, with the rate constant k_4 for furfural degradation unchanged. The alternative rate parameters, including k_fa, are summarised in Table 3. The alternative fits to the furan experimental profiles are shown in Figure 5, which are acceptable for data at 180 and 200°C, but slightly less so at 220°C. However, overall the scheme is still reasonable as an explanation of formic acid generation from polysaccharide-containing biomass under these hydrothermal conditions.
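One plausible reading of this modification, consistent with the optional term in equation (5), is the split pathway below (FA denotes formic acid; the notation is assumed, not reproduced from the paper):

```latex
\frac{dH}{dt} = k_s C - \left(k_2 + k_{fa}\right) H, \qquad \frac{d[\mathrm{FA}]}{dt} = k_{fa} H
```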
The influence of liquor content
A further series of hydrolysate liquors were collected from hydrothermal reactions at a temperature of 200°C, at a lower water to biomass ratio of 4:1. The UV measurement of the liquors was again used and provided an indication of the evolution of the key soluble lignin and furanic compounds over time, shown in comparison with the corresponding 10:1 liquor ratio data in Figure 6. From the gravimetric weight loss determinations, in g/kg-dry biomass units, it appeared that the global solubilisation reactions proceeded at the same rate at the lower liquor ratio, within the limits of the experimental resolution, releasing the same amount of biomass material. The overall rate of lignin solubilisation was similar at both 4:1 and 10:1 liquor ratios, but a greater amount of lignin was apparently solubilised at the lower liquor ratio. There was also a noticeable reduction in soluble lignin concentration as the reaction proceeded to longer times, which is presumed to be due to an increased likelihood of recondensation of reactive lignin species at higher biomass concentration. This was also mirrored in a slight reduction in overall weight loss at longer reaction times. Acetic acid was apparently liberated at a higher rate at the lower 4:1 liquor ratio, from Table 3, but still at a lower rate than that of the solubilisation of the hemicellulose polysaccharides. However, it is possible that the faster generation of additional protons and the reduction in absolute pH have introduced an additional autocatalytic influence on the ester hydrolysis reactions. The development of furan species may also be accelerated at lower liquor ratio for the same reason, with Figure 6 showing a maximum furan concentration reached at earlier times and at a higher level than at the 10:1 liquor ratio. The same catalytic influences may also help to explain the faster rate of formation of formic acid, with the kinetic constants from the simulation summarised in Table 3.
Implications and limitations of the kinetic analyses
Overall, it is hoped that the kinetic analyses of this study will assist in understanding the key reactions resulting from hydrothermal deconstruction of biomass and may also assist with the development of improved cell wall structure-property relationships. The scheme of kinetic analyses is less complex than other published models and the methodology has been chosen as the simplest effective approach for describing the series of changing concentrations of reaction products [34]. The microwave technique allows the determination of species concentrations at accurately defined temperatures and times, providing additional insights into the relationships and interconnections between the various deconstruction processes. The tools used in this work to perform numerical integration of coupled ODEs have not permitted the determination of confidence limits for the k_2, k_3 and k_4 constants of the kinetic scheme, and the sparseness of rate constant data points also leads in some instances to high confidence limits for activation energies. However, the return of high confidence limits does not necessarily invalidate the best-fit parameter values [34].
This work has shown that flexibility in choice of reaction conditions will be helpful in minimising inhibitor release, although other release factors relate to the intrinsic chemical nature of the cell wall polymers. The work shows that the liberation of soluble lignin species occurs at a very similar rate to the reactions leading to hemicellulose depolymerisation, so may be related to the same deconstruction processes in the cell wall. Hence, a reduction in production of soluble aromatic inhibitors will result in a reduced overall extent of deconstruction, which will need to be counterbalanced by achieving a higher effectiveness of subsequent enzyme digestion. The work also shows that the rate of acetate release is somewhat slower than the global progress of deconstruction, so the increase in acetic acid concentration might also be minimised by avoiding an over-long reaction time. The selection of a compromise reaction time would also be beneficial in minimising the secondary generation of furan species. However, the work does show that the levels of furan and lignin-derived inhibitors are likely to reach a maximum as further reactions induce either recondensation or breakdown into smaller species. This might place a limit on the required chemical tolerance of yeast. There will also be a limit on the level of tolerance required towards acetic acid, when all acetate groups are hydrolysed from the hemicellulose. However, operation at excessive reaction times or increased severity is undesirable, as sugar yield will be reduced and concentrations of toxic end products such as formic acid will be increased. Operation at lower liquor ratio is desirable to maximise sugar concentrations for subsequent enzyme hydrolysis and fermentation, but this is seen to accelerate the rate of generation of some inhibitor compounds.
The knowledge of degradation pathways may provide further direction to programmes concerned with optimisation of phenotypic characteristics of biomass for highest process efficiencies. The hemicellulose of the wheat variety used in the study is quite highly acetylated, which will exacerbate the difficulty in minimising the concentration of acetic acid in process liquors. The selection of varieties with lower acetyl content in the cell wall could therefore be beneficial from a process perspective. Likewise, the production of formic acid may be more prevalent in hydrolysates from plant species containing hemicelluloses with greater amounts of labile substituents such as arabinose. Selection for a low-arabinose or overall less decorated hemicellulose as a constituent of the cell wall may be beneficial in limiting the generation of all sugar-derived inhibitors, including both organic acids and furans, allowing a wider range of deconstruction process conditions.
The kinetic information allows a consideration of the most favourable overall reaction conditions in a hypothetical process, striking a compromise between the necessary biomass deconstruction and the undesired liberation of inhibitor compounds. From the primary kinetics of solubilisation, it is concluded that at a reaction temperature of 200°C the major deconstruction reactions are complete by 20 minutes, which is a practical residence time in a continuous system (4). At this residence time the liberation of soluble lignin has unavoidably reached a maximum, but other inhibitors, including formic acid, acetic acid and furanic compounds, are at concentrations below their eventual maximum. From the current model, the inhibitor concentrations at this time interval are summarised in Table 4 for the two liquor ratios considered. Clearly real-life optimisation would need to take account of many other factors, defined by a full optimisation problem, including mixing, chemical diffusion and thermal diffusion, and energy/efficiency trade-offs.
Conclusions
The use of a microwave synthesis reactor has allowed kinetic data for the hydrothermal reactions of straw biomass to be established from short times, avoiding corrections required for slow heating in conventional reactors, or two-step heating. The gravimetric loss through solubilisation of straw provides a global measure of the extent of deconstruction, giving rise to an Arrhenius activation energy of 148 kJmol−1, which is higher than activation energies used historically for derivation of empirical measures of reaction severity. The gravimetric loss is primarily due to depolymerisation of the hemicellulose component of straw, but a minor proportion of lignin is solubilised at the same rate and hence may be associated with the more hydrophilic lignin-hemicellulose interface. Acetic acid is liberated primarily from hydrolysis of pendant acetate groups on hemicellulose, although the rate is too slow to provide catalytic enhancement to the primary solubilisation reactions. However, the increase in acidity may enhance secondary reactions leading to the production of furans and formic acid. The work has suggested that formic acid may be formed under these hydrothermal conditions via direct reaction of sugar end groups rather than furan breakdown. Furan degradation reactions are found to be significant, which may limit ultimate concentrations of furans in hydrolysate liquors.
Samples
Wheat straw (Zebedee variety), comprising stem and leaf components, was provided by the University of Nottingham farm, Nottingham, UK, and was stored under dry conditions after harvesting for approximately 3 months prior to use. All samples were knife milled to a 2 mm mesh size (Pulverisette 19; Fritsch GmbH, Idar-Oberstein, Germany), a form suitable for hydrothermal treatments. Prior to investigations, samples were conditioned to equilibrium moisture content in the ambient laboratory environment. Gravimetric moisture determinations of all as-received materials were carried out, with subsequent analytical results quoted on a dry mass basis. A separate sample of wheat straw was swelled in 4 M potassium hydroxide for 2 hours at ambient temperature at a liquor to solid ratio of 20:1, in order to extract a hemicellulose-rich fraction. The extraction liquor was diluted in water, neutralised with acetic acid and then the soluble products were precipitated by addition of acetone. The precipitate was filtered and rinsed in aqueous ethanol before vacuum drying, followed by weighing to provide an estimate of total hemicellulose content.
Analysis of straw composition
Total acid hydrolysis of the as-received wheat straw for sugar analysis was carried out by immersion in 12 M sulphuric acid for 2 hours at 35°C, then 1 M sulphuric acid for 2 hours at 98°C [9]. Analysis of soluble sugar monomers was by high-performance anion exchange chromatography with pulsed amperometric detection (Dionex, Camberley, UK), using a CarboPac PA20 column under isocratic conditions, with 10 mM NaOH as the mobile phase at a working flow rate of 0.5 ml/min. Glucose, xylose, arabinose and galactose were used as standards with mannitol as internal standard. Analysis of lignin in the as-received straw was carried out by extraction using acetyl bromide in water/dioxane solvent, followed by measurement of absorbance at 280 nm [9]. Lignin quantification was performed by calibration using a low sulphate lignin reference material. The amount of organic extractable material in as-received straw was determined by Soxhlet extraction for 18 hours in pure ethanol. The extracted mass was determined by weight after rotary evaporation of the liquor followed by vacuum drying [35]. All compositional values were calculated on a dry weight basis and are summarised in Table 1.
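A minimal sketch of the dry-mass-basis convention mentioned above; both numbers are illustrative assumptions rather than values from Table 1.

```python
# Converting an as-received analytical value to the dry mass basis using the
# gravimetric moisture determination. Both inputs are assumed for illustration.
moisture_fraction = 0.08      # g water per g as-received straw (assumed)
xylose_as_received = 180.0    # g/kg, as-received basis (assumed)

xylose_dry_basis = xylose_as_received / (1.0 - moisture_fraction)
print(f"Xylose content: {xylose_dry_basis:.0f} g/kg dry basis")
```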
Hydrothermal reactions
A bench-top microwave reactor was used for all hydrothermal processing (Monowave 300; Anton Paar, St Albans, UK). One gram amounts of as-received dry biomass were added to each 30 ml glass reactor vial, into which was decanted either 10 or 4 ml of demineralised water (liquor ratio of 10:1 or 4:1). The mixture was shaken to ensure full wetting of the biomass particles, then stood for 1 hour before carrying out reactions. Vials were sealed with plastic/silicone caps, which were fitted with an insert for a ruby luminescence thermometer positioned with the tip centrally located within the biomass material. Vials were loaded sequentially into the reactor, which was programmed for the fastest possible heating up to the chosen set temperature of 180, 200 or 220°C. The instrument delivered a dynamic power profile giving a smooth heat-up in a time of around 80 ± 10 seconds for all temperatures, accurately controlled from the ruby luminescence temperature sensor. The heat-up phase was followed by a controlled isothermal period maintained at lower power, with a temperature accuracy of ±0.5°C, and then by rapid cooling by forced air circulation around the vial. The reactor was opened at a safe temperature of less than 60°C. An equivalent set of assay experiments was carried out using the hemicellulose-rich solid extracted from the straw, at reaction temperatures of 180 and 200°C in an aqueous solution of 1% (w/w) sulphuric acid.
Determination of solubilised mass
Following the hydrothermal reaction, the vials with the 10:1 liquor ratio samples were shaken by hand and allowed to stand for 15 minutes, before vacuum filtering through a preweighed glass filter paper. The filtered hydrothermal liquor (hydrolysate) was collected for analysis and then further repeated quantities of water were rinsed through the remaining filter residue, to ensure full removal of all water soluble products. The filter paper and residue were then allowed to dry in the ambient laboratory atmosphere overnight. Finally, all filter papers were dried at 105°C for 2 hours in an air circulation oven, then weighed to establish the retained weight of the solid straw residue and hence the weight loss through solubilisation during the hydrothermal reaction. Independent measurement of replicate samples indicated that the standard error of weight loss determinations was around ±6 g/kg. The same filtration and washing procedure was used for the 4:1 liquor ratio samples except that 6 ml of distilled water was added to the vials after removal from the reactor, to make them up to the same volume as the 10:1 liquor ratio samples.
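The solubilised-mass calculation itself is simple bookkeeping, sketched below with assumed masses; the ±6 g/kg standard error quoted above applies to values of this kind.

```python
# Gravimetric weight loss through solubilisation, dry basis. All masses are
# assumed for illustration, not measured values from this study.
m_biomass_dry = 1.000            # g, dry biomass charged to the vial (assumed)
m_filter = 0.9500                # g, pre-weighed glass filter paper (assumed)
m_filter_plus_residue = 1.6200   # g, after drying at 105 degC (assumed)

m_residue = m_filter_plus_residue - m_filter
weight_loss = (m_biomass_dry - m_residue) / m_biomass_dry
print(f"Solubilised mass: {weight_loss * 1000:.0f} g/kg")
```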
Analysis of hydrothermal liquors
Analysis of the total amount of solubilised lignin in the separated hydrothermal liquors was carried out by measurement of the UV absorbance at 320 nm, according to a published method [36]. Analysis of the total amount of furanic compounds in the hydrothermal liquors was also determined by measurement of UV absorbance, at 280 nm, with subtraction of the intensity at 320 nm, to account for spectral overlap with lignin [37]. Independent measurements indicated that the standard error of lignin determination was around ±0.4 g/kg. Determination of acetic and formic acids was performed by HPLC using a Rezex ROA-Organic Acid H⁺ column, at ambient temperature, with 0.005 N H₂SO₄ mobile phase at a flow rate of 0.5 ml/min, with UV detection. The standard error of measurement was around ±0.6 g/kg. Confirmatory analyses of furfural and HMF were also carried out by the Rezex chromatographic method, to verify the robustness of the UV method for total furans. The sugar monomer concentrations in the hydrolysates from the separate reactions of the extracted hemicellulose-rich solid were analysed using the anion exchange chromatography method, as above. The concentrations of acetic acid in the hemicellulose hydrolysates were determined using the Rezex method as described.
Data analysis
Fitting of simple first order integral rate equations (1 and 8) was carried out using GraphPad Prism (GraphPad Software, Inc, San Diego, CA, USA), including reporting of 95% confidence intervals of rate constant parameters. Arrhenius analysis was also carried out using GraphPad Prism, including reporting of both 95% confidence intervals and standard errors of parameter estimates. For the linked kinetic scheme in Figure 3 the calculation of species concentrations was performed by numerical integration of the coupled ODEs (4, 5, 6 and 7) using a step method in Microsoft Excel (Microsoft, Redmond, WA, USA). Mass conservation by this method was confirmed. Error minimisation between calculated and experimental concentrations was carried out using the Excel Solver Add-in. Also, R² correlation coefficients were calculated from the squared residuals between calculated and experimental data points. Calculation of confidence intervals was not possible via the Excel method. | 2015-03-07T18:39:34.000Z | 2014-03-29T00:00:00.000 | {
"year": 2014,
"sha1": "fd73fc36d45c39c9a37431a0a84fbe78babf4a63",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/1754-6834-7-45",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3363e8bb64b1ed734c63716ce5f5fdaeb3ef062",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
69445073 | pes2o/s2orc | v3-fos-license | A Novel Compact MIMO Planar Inverted-F Antenna (PIFA) for Future 5G Wireless Devices
In the last decade, mobile communication has shown tremendous growth across various categories of wireless communication. A novel miniature MIMO planar inverted-F antenna (PIFA) for future 5G communication is presented in this paper, as a MIMO antenna working in the 10 GHz band is not available in the recent literature. The proposed PIFA has a low-profile structure with two radiating elements on its opposite edges. The MIMO antenna covers the proposed 5G frequency band of 10 GHz. Various antenna and MIMO performance parameters, such as isolation, return loss, VSWR, diversity gain, ECC, and radiation pattern, are also presented and discussed. Simulated and measured results are in good agreement with each other.
Introduction
Antenna designs for future 5G applications should be compact in size, making them suitable for use in various wireless devices. Wireless systems for future 5G communication will require a number of antenna elements in the devices to support higher throughput and capacity enhancement [1]. Therefore, conventional antennas are being replaced by compact planar inverted-F antennas. Owing to its multiband properties and compact dimensions, the PIFA is among the best antenna structures for use in mobile devices. The PIFA was derived from the inverted-F antenna (IFA): because the IFA suffered from narrow bandwidth, the wire radiator was replaced by a shorting plane in the PIFA. The main advantage of the PIFA is improved performance in a compact size [2,3]. The ever-growing need for mobile data is approaching the limits of 4G technologies. This requirement has led to efforts to work towards the next mobile communication generation, i.e. the fifth generation (5G), and to define, develop, and standardize systems and services for this next-generation communication system [1]. Studies are currently underway to design antennas covering 5G wireless standards. Fifth-generation technology requires various techniques such as advanced MIMO structures, advanced small-cell technology, and the Internet of Things (IoT) [4]-[6]. The use of 5G technology in the future will connect millions of devices and make their simultaneous operation possible. The human race will become smarter with the rise of smart cities, smart power grids, telemedicine, smart transportation, and machine-to-machine (M2M) communication, and these future systems will become a reality through 5G communication. Various antenna designs for 5G wireless standards have been proposed in the recent past [7]. Massive MIMO is a promising technology for serving the future demands of communication devices. One of the major challenges in building massive MIMO systems is the limited size of base stations and mobile devices, which puts a constraint on the number of antennas [8]. In [9], a magneto-electric dipole antenna is proposed which uses a novel H-shaped tapered ground technique; this technique significantly reduces the antenna height. In one design, the authors proposed a three-element single-band PIFA antenna system resonating in the 28 GHz mmWave band for future 5G wireless communications, with the isolation among the PIFA elements observed to be at the -13 dB level [10]. For 5G communication, various single-element and MIMO antenna designs have been presented to date using different antenna structures such as the microstrip patch antenna, the dielectric resonator antenna (DRA), and the PIFA [7], [10]. In this paper, a novel MIMO antenna design for the 5G wireless standard is discussed, developed using a PIFA structure with an edge-feeding mechanism. The design is an extension of the single-element PIFA from our previously published work [2]. Compared with other conventional antennas, the PIFA has numerous benefits, such as a small and compact structure, multiband behavior, mechanical robustness, low cost, and a reduced absorption rate [5]. The PIFA has a reduced specific absorption rate (SAR) value; hence less radiation is incident towards the user's body and head [6]. The proposed antenna covers a wide band of more than 1 GHz by using a shorting stub, edge feeding, and a truncated radiating patch. In this paper, a MIMO PIFA is presented for wearable electronics and future 5G wireless devices.
The proposed MIMO PIFA has a novel and compact structure with overall dimensions of 18 mm × 10 mm × 3.5 mm and covers a wide frequency band suitable for future 5G communication. The antenna shows wideband behavior after introducing a shorting plate and a truncated patch. The High-Frequency Structure Simulator (HFSS) was utilized for antenna design and analysis.
Antenna Structure
The proposed MIMO PIFA design is shown in Figure 1; it has truncated patch elements that use an edge-feeding mechanism. Rogers RT Duroid 5880 substrate material with dielectric constant εr = 2.2 and thickness h = 0.8 mm is used for fabrication of the proposed MIMO PIFA. The top radiating patch has truncated edges to enhance the antenna parameters and current distribution. The shorting plate shorts the ground plane to the truncated radiating patch. An edge-feeding mechanism is used to excite the antenna elements via a lumped port, with an overall height of 3.5 mm. Compared with conventional PIFA designs, the top radiating patch is truncated at two adjacent edges, whereas conventional PIFAs use a rectangular patch. Moreover, the coaxial feed is replaced by a lumped feed, making it different from a conventional PIFA design. The detailed dimensions of the proposed MIMO PIFA antenna system are shown in Figure 1 (a) and (b). The ground plane dimensions are Lg = 40 mm and Wg = 10 mm, and the thickness of the substrate is 0.8 mm. The dimensions of the top patch are Lp = 7 mm and Wp = 4 mm, and the height of the patch is 3.5 mm above the ground plane. The widths of the shorting plate and the feed pin are Ws = 2 mm and Wf = 3.5 mm, respectively. There are three slots on the ground plane, of which two are identical and on one side of the substrate, while the third is on the opposite side. The dimensions of the two identical slots are 7 mm × 0.5 mm, and the third slot is 4.5 mm × 0.5 mm, as shown in Fig. 1 (b). Fig. 1 (c) shows a 3D view of the proposed MIMO PIFA with the two antenna elements placed above the ground plane and the three open-ended slots on the ground plane. To enhance the antenna's MIMO performance parameters, mainly isolation, the ground plane is modified by introducing three open-ended slots. The rectangular slots in the ground plane consequently manipulate the current distribution. The slots are arranged in such a manner that the isolation between the closely placed antenna elements remains below the acceptable level of -15 dB. The slots are introduced in the center of the ground plane so that they do not affect the resonance behaviour of either radiating patch. The final proposed antenna shows a wideband frequency range of 9-11 GHz, covering candidate 5G communication standards. The proposed PIFA is fabricated on Rogers RT Duroid 5880 of thickness 0.8 mm. For the top radiating patch, a copper sheet of thickness 0.2 mm is used. SMA connectors are used to excite the antenna elements. The fabricated antenna prototype is shown in Figure 2. Figure 2 (a) shows a 3D view of the fabricated prototype of the proposed MIMO PIFA, in which the radiating patches are fed by SMA connectors. Before testing the prototype on a vector network analyzer (VNA), both ports of the VNA need to be calibrated for the desired frequency sweep. After calibrating the VNA, one end of each test cable is connected to a port of the VNA and the other end to an SMA connector of the antenna. The S-parameter plots can then be observed on the VNA display, from which S11, S12 or S21, and S22 can be plotted, as discussed in the next section.
Return Loss Plot
As shown in Fig. 3, the S-parameters were obtained after simulation; a -10 dB return-loss level is considered excellent for mobile communication. For isolation between the elements, S21 or S12 is observed, for which the acceptable value is less than -15 dB. As can be seen from Fig. 4, the measured return loss of one element is minimum at 10 GHz and that of the second element at 9.80 GHz. The isolation (S12 or S21) is well below -25 dB in the operating band.
Diversity Gain
For MIMO antenna structures, the diversity gain is an important figure of merit. The diversity gain of the antenna can be calculated from the S-parameters and is shown in Figure 6. The value is very close to 10 dB, which indicates excellent characteristics for a MIMO antenna system.
Radiation Pattern
As shown in Figure 7, the radiation pattern of the antenna is omnidirectional with good coverage on the front side of the antenna. The radiation pattern is plotted for φ = 0° and 90°.
Envelope Correlation Coefficient (ECC)
As can be seen in Figure 8, the ECC plot is obtained from the S-parameters; the ECC should be less than 0.5. The obtained value of around 0.002 over the whole band of operation indicates that the proposed MIMO antenna has excellent MIMO performance characteristics. In Table 1 below, simulated and measured results are compared, and there is a good match between the two. There is a mismatch in the overall bandwidth and resonant frequency of the antenna elements between simulated and measured results due to material losses, handmade fabrication, soldering defects, etc. The isolation, however, is significantly lower during measurement, with a 10 dB decrease in isolation between the elements.
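For readers wishing to reproduce the ECC and diversity-gain values from exported S-parameters, the sketch below applies the widely used S-parameter (Blanch et al.) formula at a single frequency point. The complex S-parameter values are placeholders, and the diversity-gain expression follows one common convention; the paper does not state which formula it used.

```python
# Sketch: ECC from two-port S-parameters and the diversity gain derived from
# it. The sample S-parameters below are assumed, not measured values.
import numpy as np

s11, s12 = 0.10 + 0.05j, 0.02 - 0.01j   # assumed complex S-parameters
s21, s22 = 0.02 - 0.01j, 0.12 + 0.03j

num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
ecc = num / den
dg = 10 * np.sqrt(1 - ecc ** 2)   # apparent diversity gain, approaches 10 dB

print(f"ECC = {ecc:.4f}, diversity gain = {dg:.2f} dB")
```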
Conclusion
In this work, a novel edge-fed MIMO planar inverted-F antenna has been proposed for future 5G communication devices. The designed antenna can be integrated into any wireless device because of its compact size and low height. It shows a good radiation pattern and excellent MIMO performance characteristics. The antenna structure is very compact, i.e. 18 mm × 10 mm × 3.5 mm, and can easily be placed in the housing of wireless devices. | 2019-02-19T14:07:47.620Z | 2018-09-22T00:00:00.000 | {
"year": 2018,
"sha1": "8a38c750645f2d3782c25471206ddd9e79cb204d",
"oa_license": "CCBY",
"oa_url": "https://sciencepubco.com/index.php/ijet/article/download/20099/9398",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "eee2109fecc66b82bdf05c3acf79827be19a8bc8",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
16188205 | pes2o/s2orc | v3-fos-license | Biochemical artifacts in experiments involving repeated biopsies in the same muscle
Abstract Needle biopsies are being extensively used in clinical trials addressing muscular adaptation to exercise and diet. Still, the potential artifacts due to biopsy sampling are often overlooked. Healthy volunteers (n = 9) underwent two biopsies through a single skin incision in a pretest. Two days later (posttest) another biopsy was taken 3 cm proximally and 3 cm distally to the pretest incision. Muscle oxygenation status (tissue oxygenation index [TOI]) was measured by near-infrared spectroscopy. Biopsy samples were analyzed for 40 key markers (mRNA and protein contents) of myocellular O2 sensing, inflammation, cell proliferation, mitochondrial biogenesis, protein synthesis and breakdown, oxidative stress, and energy metabolism. In the pretest, all measurements were identical between proximal and distal biopsies. However, compared with the pretest, TOI in the posttest was reduced in the proximal (−10%, P < 0.05), but not in the distal area. Conversely, most inflammatory markers were upregulated at the distal (100–500%, P < 0.05), but not at the proximal site. Overall, 29 of the 40 markers measured, equally distributed over all pathways studied, were either up- or downregulated by 50–500% (P < 0.05). In addition, 19 markers yielded conflicting results between the proximal and distal measurements (P < 0.05). This study clearly documents that prior muscle biopsies can cause major disturbances in myocellular signaling pathways in needle biopsy specimens sampled 48 h later. In addition, different biopsy sites within identical experimental conditions yielded conflicting results.
Introduction
The understanding of human muscle physiology and biochemistry during exercise has grown exponentially since Bergström et al. in the early 1960s introduced the use of the percutaneous needle biopsy technique in the context of exercise experiments in healthy volunteers (Bergström 1963). For more than 50 years the Bergström needle biopsy procedure has been extensively used to investigate exercise and training effects in muscles. The method has also been improved by the application of suction on the needle cavity to increase sample yield during cutting (Melendez et al. 2007). Often serial biopsies are taken before and after an acute exercise bout and in different experimental conditions, and it is often assumed that the changes measured in the samples obtained are directly caused by the experimental conditions. Nonetheless, there is clear evidence in the literature that needle biopsies cause topical structural damage (Staron et al. 1992), which results in immune activation similar to the typical immunological responses seen after muscle damage due to eccentric contractions (Malm et al. 2000). This is also accompanied by a wide spectrum of adaptive biochemical events in the vicinity of the injured muscle area, including regulation of energy substrate metabolism and signaling pathways as well as changes in gene expression (Lundby et al. 2005; Friedmann-Bette et al. 2012). Thus, studies have demonstrated that biopsies can inhibit muscle ATP and glycogen resynthesis for several days post exercise (Costill et al. 1988; Constantin-Teodosiu et al. 1996). Furthermore, multiple biopsies within a 2 h time window in m. vastus lateralis markedly increased mRNA contents of interleukin-6 (IL-6) and signal transducer and activator of transcription 3 (STAT3), while the phosphorylation status of some pivotal signaling proteins in cellular stress, inflammation, and muscle damage was unaltered (Guerra et al. 2011). Vissing et al. (2005) also demonstrated that the expression of several genes that were assumed to be induced by exercise was in fact an artifact of the biopsy sampling procedure.
Different factors must be considered when evaluating the potential risk of artifacts in biopsies sampled in the vicinity of another recent biopsy in the same muscle belly. This includes the time interval between the sequential biopsies, the distance between biopsy sites, as well as the orientation of the next biopsy (distal vs. proximal) relative to the previous ones. Surprisingly, the pivotal importance of eliminating biopsy artifacts notwithstanding, work to define optimal conditions for repeated biopsy sampling in the same muscle is only fragmentary. In addition, in most studies published, details on the precise positioning of repeated biopsies are not even mentioned.
We recently planned a series of studies to investigate the effects of exercise training in hypoxia on muscle adaptation, with special attention to downstream targets of intramyocellular O₂ sensing via HIFs. In this regard it is crucial to know whether results from sequential muscle biopsies indeed reflect the impact of the experimental conditions, or result from local hypoxia due to biopsy-induced muscle damage. In fact, needle biopsies could affect the local oxygenation status in the muscle by either direct structural damage or post-intervention inflammatory responses (Tidball 2005; Smith et al. 2008). In addition, the local hypoxia so formed could stimulate calcium release from the sarcoplasmic reticulum and regulate gene transcription of calcium-calmodulin-dependent intramyocellular proteins such as glucose transporter type 4 (GLUT4), peroxisome proliferator-activated receptor γ coactivator 1-α (PGC-1α), phospholamban, myoglobin, and several mitochondrial genes (Lewis et al. 1982; Cartee et al. 1991; Eu et al. 2000; Wright et al. 2005; Lanner et al. 2006; Rose et al. 2006, 2009; Deshmukh et al. 2009; Kanatous et al. 2009). Support for such presumptions comes from preliminary studies in our laboratory showing by near-infrared spectroscopy that a standard biopsy procedure in m. vastus lateralis reduces the local muscle oxygenation status for at least a week, with peak deoxygenation values occurring ~48 h post biopsying. On the basis of this observation, we decided to more extensively explore the effects of a needle biopsy procedure in m. vastus lateralis on cellular responses in another biopsy obtained in the same muscle 48 h later. Therefore, we investigated both mRNA and protein contents of key markers of myocellular O₂ sensing, inflammation, skeletal myogenesis and cell proliferation, mitochondrial biogenesis, oxidative stress, protein synthesis and breakdown, and energy substrate pathways in an experiment involving multiple needle biopsies with a 2-day interval in m. vastus lateralis.
Subjects
Nine male subjects volunteered to participate in the study after they had been informed in detail of the experimental procedures. All subjects were nonsmokers and were diagnosed as healthy by means of a medical questionnaire. They were instructed not to change their dietary and training habits throughout the study period. Their age and body weight were 21.8 years (range: 21–23) and 62.6 kg (range: 57.8–66.3), respectively. One week before the start of the study the subjects participated in an incremental cycling test (100 + 40 W per 4 min) to determine VO₂max (62.3 ml min⁻¹ kg⁻¹; range: 55–74). Furthermore, to allow for valid measurements of muscle oxygenation status by near-infrared spectroscopy (NIRS) (Ferrari et al. 2004), only subjects with small skinfolds ≤5 mm overlying m. vastus lateralis were included (range: 3.8–5.0 mm). Subjects were also asked not to participate in any strenuous exercise from 2 days prior to the experimental sessions. The study protocol was approved by the local Ethics Committee (KU Leuven) and was in accordance with the Declaration of Helsinki. All subjects signed an informed consent.
Experimental protocol
The subjects participated in a pretest and a posttest session, interspersed by a 2-day interval. Each session included measurements of oxygenation status by NIRS and biopsies in m. vastus lateralis using a 5-mm Bergström needle (see below for details about the NIRS and biopsy procedures). At each occasion the subjects reported to the laboratory in the morning between 8:00 and 9:00 AM after an overnight fast. On arrival, they rested for 30 min in a comfortable chair while the skin overlying the mid part of the m. vastus lateralis belly on the legs was prepared for NIRS measurements (both legs) and biopsies (right leg only). Following the rest period, oxygenation status was registered for 10 min, whereafter a standard double muscle biopsy procedure (same incision but different positioning of the needle) was performed in the right leg. Because exercise immediately post biopsying may affect the acute recovery of the wound, and most experiments using muscle biopsies involve exercise post biopsying, subjects cycled for 30 min on a bicycle ergometer (Avantronic® Cyclus 2, Leipzig, Germany) at a workload corresponding to 70% of the VO₂max obtained in the prescreening session. However, no post-exercise biopsies were taken, to limit the affected zone in the muscle to a single site. The posttest was identical to the pretest, except for the location of the biopsies and the omission of the exercise bout following the biopsies.
NIRS measurements and analysis
We used the Niro-200 NIRS instrument (Hamamatsu, Japan) to measure TOI (2 Hz sampling rate). TOI is a valid parameter (Boushel et al. 2001; Quaresima and Ferrari 2009) to assess the fraction of O₂-saturated tissue hemoglobin and myoglobin content, reflecting the balance between O₂ supply and tissue O₂ consumption. In both the pretest and the posttest two pairs of near-infrared probes, each consisting of a light emitter and a light detector at 4 cm distance, were fixed on the right m. vastus lateralis (Fig. 1). One pair was positioned 3 cm proximally to the skin incision used in the pretest, whereas the other pair was positioned 3 cm distally. Two other pairs of probes were put identically on the left leg to serve as a control free of biopsy-induced injury. In the pretest the contour lines of the probes were drawn on the shaved skin to allow for identical repositioning during the posttest.
Muscle biopsy procedure
In the pretest the right m. vastus lateralis was biopsied using a 5-mm Bergström-type needle with suction applied, through a single 5-mm incision in the skin under local anesthesia (2% xylocaine without epinephrine, 1 mL subcutaneously) (Fig. 2). The skin incision was made over the belly of the right m. vastus lateralis at 1/3 of the imaginary line connecting the upper lateral border of the patella with the spina iliaca anterior superior. Two biopsies were taken through the same incision, one with the tip of the needle pointing distally from the incision site, and another one with the tip pointing proximally and with an angle of ~45° between the needle and the leg's surface (Fig. 2A). A cm scale engraved on the biopsy needles was used as a reference to check adequate positioning of the needle for each biopsy. Each biopsy included the cutting of two samples: after cutting the first sample the needle was rotated 180° around its axis to cut the second sample. Immediately afterwards pressure was applied on the biopsy site until bleeding had completely stopped. The incision was then closed with adhesive strips (Steri-Strips™, 3M Health Care, Maplewood, MN) and covered with a plastic gauze (OpSite, Smith & Nephew, London, UK). During the posttest one biopsy was taken 3 cm proximal to the pretest incision, and one biopsy 3 cm distal. These biopsies were taken with the needle inserted 3.5 cm perpendicular to the skin in order to position the cutting window centrally within the supposed NIRS voxels reaching to 2 cm under the skin (Fig. 2B) (Ferrari et al. 2004).
Biochemical analysis -Western blot
Details of the immunoblotting procedures have been described previously (Deldicque et al. 2010; D'Hulst et al. 2013). Briefly, frozen muscle tissue (~20 mg) was homogenized 3 × 5 sec with a Polytron mixer in ice-cold buffer (1:10, w/v) (50 mmol/L Tris-HCl pH 7.0, 270 mmol/L sucrose, 5 mmol/L EGTA, 1 mmol/L EDTA, 1 mmol/L sodium orthovanadate, 50 mmol/L glycerophosphate, 5 mmol/L sodium pyrophosphate, 50 mmol/L sodium fluoride, 1 mmol/L DTT, 0.1% Triton-X 100 and a complete protease inhibitor tablet [Roche Applied Science, Vilvoorde, Belgium]). Homogenates were then centrifuged at 10,000 g for 10 min at 4°C. The supernatant was collected and immediately stored at −80°C. The protein concentration was measured using the DC protein assay kit (Bio-Rad Laboratories, Nazareth, Belgium). Using SDS-PAGE (8–12% gels), 30–80 µg of protein was separated and transferred to PVDF membranes. Subsequently, membranes were blocked with TBST (Tris-buffered saline, 0.1% Tween 20) containing 5% nonfat dry milk for 1 h and afterwards incubated overnight (4°C) with the following antibodies (1:1000, Cell Signaling, Leiden, the Netherlands): total eukaryotic elongation factor 2 (eEF2), hypoxia-inducible factor 1α (HIF-1α), neuronal nitric oxide synthase (nNOS), phospho-ribosomal protein S6 kinase 1 (S6K1) Thr389, total S6K1, phospho-eukaryotic initiation factor 2α (eIF2α) Ser51, total eIF2α, phospho-AMP-activated protein kinase (AMPK) Thr172, total AMPK, phospho-glycogen synthase kinase 3β (GSK-3β) Ser9, nuclear factor kappa B inhibitor α (IκB-α), pan Akt, and phospho-Akt Ser473. Horseradish peroxidase-conjugated anti-mouse (1:10,000), anti-rabbit (1:5000), or anti-goat (1:20,000) secondary antibodies (Sigma-Aldrich, Bornem, Belgium) were used for chemiluminescent detection of proteins. Membranes were scanned and quantified with GeneSnap and GeneTools software (Syngene, Cambridge, UK), respectively. Membranes were then stripped and reprobed with the antibody for the total form of the respective protein to ascertain the relative amount of the phosphorylated protein compared with the total form throughout the whole experiment. The results are presented as the ratio protein of interest/eEF2, or as the ratio phosphorylated/total form of the protein when the phosphorylation status was measured. eEF2 protein content was not different between pre- and posttest. A value of 1.0 was assigned to the mean value of the pretest samples from the proximal as well as from the distal site, to which the corresponding posttest value was related.
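A minimal sketch, with assumed band-intensity ratios, of the normalisation step just described: the pretest mean is set to 1.0 for each site and the posttest values are expressed relative to it.

```python
# Normalisation of protein-of-interest/eEF2 ratios: pretest mean = 1.0.
# All intensity ratios below are assumed for illustration.
import numpy as np

pre_ratio = np.array([0.82, 0.95, 0.88])    # pretest ratios, one site (assumed)
post_ratio = np.array([1.35, 1.10, 1.52])   # posttest ratios, same site (assumed)

pre_mean = pre_ratio.mean()
pre_norm = pre_ratio / pre_mean    # pretest mean becomes 1.0 by construction
post_norm = post_ratio / pre_mean  # posttest expressed relative to pretest

print(f"Normalised posttest values: {np.round(post_norm, 2)}")
```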
RNA extraction and reverse transcription
The method used for reverse transcription is described in detail elsewhere (Vincent et al. 1985;Jamart et al. 2011). Briefly, total RNA was extracted using TRIzol (Invitrogen, Vilvoorde, Belgium) from 20 to 25 mg of frozen muscle tissue. RNA quality and quantity were assessed by spectrophotometry with a Nanodrop (Thermo Scientific, Erembodegem, Belgium). One microgram of RNA was reverse transcribed using the High Capacity cDNA Reverse Transcription kit (Applied Biosystems, Gent, Belgium) according to manufacturer's instructions.
Statistical analysis
Prior to statistical analysis, NIRS data were preprocessed with a Butterworth filter in custom-made mathematical software (Matlab, The MathWorks, Natick, MA). NIRS outputs were first visually inspected to evaluate whether TOI values had reached a steady state. For each of the four measurement sites a single average TOI value was calculated. The effects of the muscle biopsy procedure on TOI and biochemical measurements were evaluated using a repeated-measures analysis of variance (ANOVA) (Statistica 9.0, StatSoft, Tulsa, OK). A two-way ANOVA was performed to assess the main effects of biopsy location (proximal vs. distal) and time (pretest vs. posttest). Bonferroni post hoc comparisons were used when appropriate. A probability level (P) ≤ 0.05 was considered statistically significant. All data are expressed as means ± SEM.
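The NIRS preprocessing described above can be sketched as follows (scipy standing in for the Matlab routine actually used); the filter order and cut-off frequency are assumptions, as they are not stated, and the TOI trace is synthetic.

```python
# Sketch: low-pass Butterworth filtering of a 2 Hz TOI trace, then a single
# mean TOI per measurement site, as described above.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2.0                                      # Hz, NIRS sampling rate (stated)
toi_raw = 70 + np.random.randn(1200) * 0.5    # synthetic 10-min TOI trace, %

b, a = butter(N=4, Wn=0.1, btype="low", fs=fs)  # order and cut-off assumed
toi_filtered = filtfilt(b, a, toi_raw)           # zero-phase filtering

print(f"Mean TOI for this site: {toi_filtered.mean():.1f} %")
```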
Muscle tissue oxygenation index
Tissue oxygenation index (TOI) was measured proximally and distally (see Fig. 2A) in both legs before (pretest) and 48 h after (posttest) the pretest biopsies (Fig. 3). TOIs were identical between legs in the pretest. However, in the posttest, compared with the control leg, TOI in the biopsied leg was reduced by ~10% at the proximal site (P < 0.05), but not at the distal site.
Muscle mRNA and protein contents
Biochemical assays were performed in the proximal and distal biopsy samples obtained in both the pretest and the posttest (see Fig. 2). Proximal and distal samples in the pretest yielded identical results for all measurements (Table 1). Therefore, posttest data are expressed relative to the corresponding pretest value.
Oxygen sensing pathways
Compared with the pretest, mRNA contents of HIF-1α, HIF-2α, REDD1, and nNOS were upregulated in the posttest (P < 0.05) (Fig. 4). However, HIF-1α mRNA content was increased in the distal biopsy only (+169%), while HIF-2α (+97%), REDD1 (+135%), and nNOS (+112%) were increased only proximally. VEGF (−32%) and nNOS (−58%) mRNA contents were downregulated in the posttest, but only in the distal biopsy (P < 0.05). mRNA contents in the posttest were significantly different between proximal and distal biopsies for VEGF, REDD1, and nNOS (P < 0.05). Compared with the pretest, HIF-1α protein content in the posttest was elevated at both the proximal and the distal site, whereas mRNA content was increased only distally (P < 0.05). For nNOS, though, both protein and mRNA expression were lower in the distal biopsy than in the proximal one (P < 0.05).
Inflammation markers
Compared with the pretest, mRNA contents of IL-6 (+476%), CycloA (+168%), and TNF-α (+141%, P < 0.10) were increased at the distal biopsy site (P < 0.05), but not proximally (Fig. 5). mRNA contents in the posttest were different between proximal and distal biopsies for both IL-6 and CycloA (P < 0.05). IκB-α expression was similar between the pretest and the posttest both at the mRNA level and at the protein level.
Mitochondrial biogenesis markers
Compared with the pretest, mRNA contents of TFAM (+87%) and PPAR-γ (+50%) were elevated in the posttest, but for TFAM this occurred only in the proximal biopsy (P < 0.05) and not in the distal one (Fig. 7). PGC-1α content did not change significantly at either biopsy site, although a trend toward an increased content at the proximal biopsy site was observed (P = 0.10). mRNA contents were different between the distal and proximal biopsies for both PGC-1α and TFAM.
Protein synthesis and breakdown markers
mRNA contents of MURF-1 (approximately +60%), BNIP-3 (approximately +160%), and Akt-1 (approximately +65%) were increased in the posttest in both the distal and proximal biopsies (P < 0.05) (Fig. 8). mRNA levels of S6K1 (+103%) and Akt-2 (+93%) were upregulated only proximally, whereas MAFBx mRNA content was upregulated only distally (+94%) (P < 0.05). For S6K1 and Akt-2 significant differences in mRNA contents were found between the proximal and distal biopsies. There was no good match between changes in total protein and mRNA contents. In fact, total protein contents of S6K1, Akt, and eIF2α were constant between the pretest and the posttest (data not shown). Conversely, the phosphorylated fraction of Akt was consistently decreased in the posttest (approximately −70%), whereas that of S6K1 was increased (~50%) (P < 0.05). The fraction of phosphorylated eIF2α was similar between the pretest and the posttest.
[Displaced table caption: Values are means ± SEM (n = 9) and represent muscle tissue oxygenation index (TOI, %) measured by NIRS in the pretest and in the posttest. Biopsies were taken as shown in Figure 2. A standard biopsy procedure was performed in m. vastus lateralis while TOI was measured by near-infrared spectroscopy 3 cm proximal and 3 cm distal to the pretest incision site. The contralateral leg served as a control leg. See Methods for further details. *P < 0.05 compared with control leg. §P < 0.05 compared with pretest.]
[Displaced Table 1 note: A two-way repeated-measures analysis of variance was performed to assess main effects of biopsy position (proximal vs. distal biopsy) and time (pretest vs. posttest). Only the P-values for the comparison of proximal versus distal biopsies in the pretest are reported in this table. Data shown refer to the muscle mRNA contents (unless stated "protein content") of all myocellular metabolism markers described in the Methods and Results sections.]
Oxidative stress markers
Compared with the pretest, all markers of oxidative stress measured were increased, at least at one of the two sites studied (P < 0.05) (Fig. 9). Only SOD-1 mRNA (approximately +55%) was upregulated similarly at both biopsy sites, while SOD-2 (+175%) and NADPH-oxidase (+188%) mRNA contents were increased only in the distal biopsy samples. In contrast, catalase mRNA level (+83%) was elevated only proximally. mRNA values were significantly different between distal and proximal muscle samples for SOD-2, catalase, as well as NADPH-oxidase (P < 0.05).
Glucose and lipid metabolism markers
All investigated mRNAs encoding glucose and lipid metabolism markers were higher in the posttest than in the pretest (P < 0.05) (Fig. 10). GSK3-α (approximately +60%) and AMPK-α1 (approximately +130%) mRNAs were higher in both distal and proximal samples; GLUT-4 (+62%) and GSK-3β (+98%) mRNAs, on the other hand, were increased only at the proximal biopsy site (P < 0.05). Conversely, AMPK-α2 (+144%) showed upregulation only distally (P < 0.05). There was no good match between changes in total protein and mRNA contents. In fact, total protein content of AMPK was constant between the pretest and the posttest (data not shown). Furthermore, a minor decline occurred in the phosphorylated form of GSK3 protein at the distal biopsy site (P < 0.05). The fraction of phosphorylated AMPK was similar between the pretest and the posttest.
Discussion
Needle biopsying to obtain skeletal muscle tissue in healthy volunteers has become a standard procedure in exercise physiology and biochemistry research. In fact, current knowledge on myocellular adaptation to exercise and recovery largely originates from studies using the Bergström needle biopsy procedure (Bergström 1963, 1975). Nonetheless, it is well documented that needle biopsies per se cause topical structural damage and inflammation (Staron et al. 1992; Malm et al. 2000), which conceivably may affect observations in muscle tissue sampled in the vicinity of other recent biopsies. However, literature data on the confounding effects of biopsying per se on biochemical events in muscle are rather fragmentary (Costill et al. 1988; Constantin-Teodosiu et al. 1996; Malm et al. 2000; Lundby et al. 2005; Vissing et al. 2005; Guerra et al. 2011; Friedmann-Bette et al. 2012). Therefore, in this study we took muscle samples from m. vastus lateralis using a conventional 5 mm Bergström-type biopsy needle. In the pretest a muscle sample was cut both 3 cm proximal and 3 cm distal to a central skin incision (see Fig. 2). Forty-eight hours later, in the posttest, additional samples were taken adjacent to the earlier proximal and distal biopsy sites. We compared mRNA and protein contents of a series of key markers for myocellular O₂ sensing, inflammation, cell proliferation, mitochondrial biogenesis, protein synthesis and breakdown, oxidative stress, and energy metabolism between the pretest and the posttest samples. In the pretest, all measurements yielded identical results for samples obtained either proximally or distally from the skin incision. However, the pretest caused major alterations in all signaling pathways assessed in the posttest biopsies. Twenty-nine of the 40 cellular markers measured were either up- or downregulated by 50–500%. However, proximal and distal samples yielded discrepant results for about half of these markers, with mRNA data being much more volatile than measurements of protein expression and phosphorylation status. Independent of the etiology, acute muscle injury causes fiber damage and necrosis. This ignites an inflammatory reaction that is an essential step toward regeneration. Inflammatory cells produce several growth factors and stimulate the release of muscle regeneratory factors needed for muscle progenitor cell activation and differentiation (Tidball 1995, 2005; Huard et al. 2002; Jarvinen et al. 2005; Turner and Badylak 2012). Accordingly, it has been shown that the microtrauma caused by a biopsy procedure elicits a local inflammatory response (Costill et al. 1988; Tidball 2005; Smith et al. 2008), which is full-blown within 24–48 h. In this study, compared with the pretest, all mRNA markers of inflammation, that is IL-6, TNF-α, and CycloA, but not IκB-α (see Fig. 5), increased 48 h after the pretest biopsies (i.e., in the posttest). The fivefold increase in IL-6 mRNA was most explicit, which is in line with previous findings (Guerra et al. 2011). However, inflammatory markers were increased only in the distal biopsies, not in the proximal ones. A likely explanation for such a differential response between samples is the formation of a hematoma "downstream" of the biopsy area. During the exercise bout following the pretest biopsy, blood and capillary filtrate conceivably drained by gravitation from proximal to distal, which in turn triggered an inflammatory response at the distal site (Tidball 1995; Jarvinen et al. 2005).
Following each biopsy we applied local compression for 5–10 min to stop visible bleeding before the subjects returned to the upright position to perform the exercise bout. However, such a procedure is seemingly inadequate to fully eliminate internal bleeding during subsequent muscle contractions. The absence of activation of inflammatory markers at the proximal site also proves that the inflammation was certainly not due to the exercise per se but specifically to the biopsy procedure. We also postulated that mechanical damage to the microcirculation due to needle insertion, in conjunction with the ongoing inflammatory processes, might affect local oxygenation status and thereby impact on O₂-sensing pathways. Therefore, we measured local tissue oxygenation status (TOI) by near-infrared spectroscopy. Baseline TOIs in the pretest were identical between the two legs. In the control leg TOIs also yielded normal basal values in the posttest (Quaresima et al. 2002; Tew et al. 2010). However, in the biopsied leg TOI dropped by approximately 10–15% in the posttest (see Fig. 3), yet only at the proximal site. Therefore, contrary to our hypothesis, ongoing inflammation processes do not seem to affect local oxygenation status, as inflammatory markers were only upregulated at the distal site, whereas TOI was only decreased at the proximal site. Although the origin of local muscle deoxygenation could not be determined from our data, the decrease in TOI could be the trigger for the elevated HIF-2α, REDD1, and nNOS mRNA contents at the proximal site. In contrast, in the distal but not in the proximal biopsies VEGF, REDD1, and nNOS mRNAs were slightly decreased, while HIF-1α mRNA as well as protein content was increased (Fig. 4). In fact, only for HIF-1α protein expression did proximal and distal biopsies show an identical, approximately twofold increment, confirming that HIF-1 stabilization is not only dependent on the level of hypoxia (Zhong et al. 2010; Luo et al. 2011), as those results do not match the TOI data perfectly. Also, changes in HIF-1α protein content did not match changes in mRNA content, which has already been shown after resistance exercise (Ameln et al. 2005).
It is well established that mitochondria play an important role in hypoxia adaptation by regulating cellular energy balance and reactive oxygen species homeostasis in relation to HIF-1 stabilization (Solaini et al. 2010). PGC-1α, TFAM, and PPAR-γ are implicated in mitochondrial biogenesis and function (Scarpulla 2011; Jornayvaz and Shulman 2010; Wenz 2013). Concomitant with the drop of TOI at the proximal site in the posttest, mRNA levels of TFAM and PPAR-γ were elevated, while PGC-1α mRNA tended to increase (see Fig. 7). However, once again contrasting results were found between proximal and distal biopsies for both PGC-1α and TFAM mRNAs. In fact, only PPAR-γ mRNA content was increased to the same degree in muscle tissue sampled either proximally or distally. Interpretation of the aforementioned results also requires consideration of the precise positioning as well as the frequency of the repeated biopsies (see Figs. 1 and 2). In the pretest the samples were cut approximately 2–3 cm below the bottom end of the virtual NIRS voxel, with the express purpose of avoiding structural muscle damage within the NIRS voxel, which would invalidate the measurements. Conversely, in the posttest, samples were cut within the NIRS voxel. The biopsies in the posttest thus reflect cellular and molecular events happening in intact muscle tissue within 2–3 cm distance from another biopsy taken 48 h before. We limited the number of biopsies in the pretest to just two, yet still observed a major impact on the posttest measurements. It is reasonable to assume that the effects of muscle biopsying on local oxygenation status and cellular O₂ sensing will probably be exaggerated if more than two needle biopsies are taken from the same muscle within one experiment.
Muscle damage by needle biopsying probably also results in increased protein turnover by activation of both catabolic and anabolic pathways (Huard et al. 2002; Watford 2003; Butterfield et al. 2006). On the catabolic side of protein turnover, in the posttest mRNA levels of the ubiquitin ligases MAFBx and MURF-1, as well as mRNA of BNIP-3, which is implicated in cellular autophagy and apoptosis (Zhang and Ney 2009), were increased in both the proximal and distal biopsies (see Fig. 8). Conversely, on the anabolic side of protein turnover, phosphorylation of Akt and its downstream target GSK-3 was decreased in the posttest, whereas phosphorylation of S6K1 was increased, indicating that additional regulatory signals acted on S6K1 itself or between Akt and S6K1, probably at the level of mTOR, the kinase for S6K1 at Thr389. Potential candidates could have been AMPK and REDD1, as the latter are known to inhibit mTOR (Brugarolas et al. 2004; Liu et al. 2006). In the present case, a decrease in AMPK phosphorylation or in REDD1 content could have explained the increase in S6K1 phosphorylation, but AMPK phosphorylation was unchanged in the posttest and REDD1 mRNA level increased at the proximal site while not changing at the distal site. Therefore, another, untested mechanism must have led to the increased S6K1 phosphorylation in the posttest. Importantly, changes in S6K1, Akt, GSK-3, AMPK, and eIF2α phosphorylation were consistent between tissue sampled either proximally or distally in the muscle. This was not the case for most of the changes in mRNA levels. Compared with the pretest, AMPK-α1 and -α2 mRNAs were substantially upregulated in the posttest, yet for AMPK-α2 this occurred only in the distal biopsies. The same holds true for MyoD, Myf5, and myogenin mRNAs, nuclear transcription factors that play an important role in muscle repair (Zanou and Gailly 2013), which also exhibited discrepant responses between proximal and distal biopsies (see Fig. 6). In addition, for none of the above variables did the changes in mRNA content between the pretest and the posttest translate into similar changes in the total protein content of the corresponding proteins.
It is also well known that regulation of the redox balance by the NADPH-oxidase, SOD, and catalase enzymes plays an important role in wound healing post injury (Soneja et al. 2005; Filippin et al. 2009). Accordingly, compared with the pretest, SOD-1, SOD-2, catalase, and NADPH-oxidase mRNA levels were all increased. However, again only for SOD-1 were similar changes found between the proximal and the distal biopsies. It is noteworthy that, by analogy with the mRNA markers of inflammation (TNF-α, IL-6, and CycloA, see Fig. 5), NADPH-oxidase mRNA content was also increased only distally in the muscle belly. Indeed, both leukocyte and muscle cell NADPH-oxidases play a pivotal role in tissue inflammation and repair by producing the reactive oxygen species superoxide, and as such contribute to myocellular oxidative stress and regulate proliferation of skeletal muscle precursor cells (Bokoch and Knaus 2003; Mofarrahi et al. 2008; Jiang et al. 2011). An important message from the present work is that the exact site of sample cutting within the muscle belly, rather than the location of the skin incision, is the reference to use whenever planning repeated muscle biopsies in the same muscle with only days in between. In fact, in most studies involving percutaneous needle biopsies two muscle samples are cut via a single skin incision. Typically one sample is cut with the needle pushed up 2–3 cm proximally into the muscle belly, while the second tissue sample is cut 2–3 cm distally to the incision. Using a similar procedure for later follow-up biopsies via a new skin incision 3–5 cm either distally or proximally to the first incision does not exclude cutting samples within a 2 cm radius from an earlier biopsy spot.
In conclusion, this study clearly demonstrates that needle biopsies per se, at least by causing local tissue inflammation and/or topical deoxygenation, can substantially alter biochemical events in needle biopsy specimens sampled on a later day in the same muscle belly. It is crucial to take these potential artifacts into account whenever investigating the cellular mechanisms implicated in adaptation to exercise, recovery, or hypoxia. mRNA data are clearly much more sensitive to biopsy-induced artifacts than protein measurements. We recommend that the methodology sections of studies involving repeated muscle biopsies from the same muscle with only a few days in between provide a detailed description of the precise tissue sampling sites within the muscle, rather than just mentioning the distance between skin incisions, which does not exclude artifacts at all. | 2017-06-17T15:54:19.210Z | 2014-05-01T00:00:00.000 | {
"year": 2014,
"sha1": "210bb9b4f33ef06829df80f28fbd3e35869e1fa5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14814/phy2.286",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "210bb9b4f33ef06829df80f28fbd3e35869e1fa5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266950938 | pes2o/s2orc | v3-fos-license | THE SIMULTANEOUS QUANTIFICATION OF RIFAMPICIN AND ISONIAZID IN PATIENTS WITH TUBERCULOSIS APPLIED TO VOLUMETRIC ABSORPTIVE MICROSAMPLING DEVICES USING HIGH-PERFORMANCE LIQUID CHROMATOGRAPHY
Objective: Rifampicin and isoniazid are the main tuberculosis treatment regimens requiring blood level measurement to optimize the treatment process. This study aims to analyze rifampicin and isoniazid quantitatively in volumetric absorptive microsampling (VAMS) devices prepared from a small blood volume from TB patients using HPLC. Methods: Analytes on the VAMS tip were extracted using 1000 µl of acetonitrile containing 10 µg/ml of cilostazol as an internal standard. Analytical separation was performed on a C-18 column at 40 °C with a mobile phase mixture of 50 mM ammonium acetate buffer pH 5.0–acetonitrile–methanol (40:30:30), flow rate 0.5 ml/min. The analysis was carried out with a calibration curve over a range of 1.0–30 µg/ml for rifampicin and 0.4–20 µg/ml for isoniazid. Results: Analyte analysis in 21 patients showed that the measured value of rifampicin was 3.39–16.77 µg/ml and that of isoniazid was 2.63–10.43 µg/ml at 2 h post-dose. 52.38% of patients had low blood concentrations of at least one of the drugs, 28.57% of the patients were in the therapeutic range, and 23.81% had a high blood concentration of isoniazid alone. Conclusion: The concentrations of rifampicin and isoniazid in 21 tuberculosis patients varied. Dose adjustment is needed because most patients had low blood concentrations of one of the drugs, and a limited number had a high blood isoniazid concentration alone. Only some patients simultaneously had plasma concentrations within the target range of both drugs. This method was valid and can reliably be utilized for therapeutic drug monitoring of antituberculosis drugs.
INTRODUCTION
Tuberculosis (TB) is a severe disease caused by Mycobacterium tuberculosis and remains one of the most prevalent causes of mortality worldwide. Approximately 10.6 million people were infected, with 1.6 million deaths, in 2021. Indonesia has become the third-largest contributor to global TB cases among the top 30 TB burden countries, accounting for 9.2% of all TB cases [1]. The global TB burden remains strongly linked to hazardous living conditions, HIV co-infection, the emergence of drug-resistant TB, and poor treatment outcomes [2]. Poor treatment outcomes are caused by many factors, including low blood drug concentrations [3]. The drugs commonly prescribed in the treatment of TB are rifampicin and isoniazid. The bactericidal action and post-antibiotic effect of rifampicin occur at blood concentrations of 8–24 μg/ml [4]. Isoniazid blood concentrations of 3–6 µg/ml are suggested to provide the therapeutic effect [5]. Therefore, determining blood drug concentrations is essential in therapeutic drug monitoring to improve outcomes during therapy.
Generally, the determination of drug concentrations involves blood samples acquired by venipuncture.Despite being regarded as the gold standard, this conventional technique is invasive and has several limitations, such as requiring a particular storage condition, managed shipments, and huge sample volumes.Micro-sampling techniques such as DBS and VAMS have been developed to overcome the disadvantages of conventional sampling techniques.This method acknowledges a smaller volume of blood samples, safe handling, inexpensive shipping, room temperature storage, and minimal invasiveness, increasing patient comfort.However, the dried blood spot (DBS) technique has a drawback in that the influence of different hematocrit levels (HCT) will affect spot size, sample homogeneity, drying time, and analyte recovery [6].
Another micro-sampling technique, volumetric absorptive microsampling (VAMS), can minimize the effect of hematocrit on DBS.The porous hydrophilic tip of VAMS has been designed to absorb a fixed sample volume [6].Previous studies have successfully reported the utilization of VAMS in various therapeutic drug monitoring activities, such as imatinib mesylate [7], clozapine [8], and phenylalanine [9].However, applying the VAMS method to analyze rifampicin and isoniazid from tuberculosis patients has never been reported.The analytical method in this study operated on highperformance liquid chromatography (HPLC) with a PDA detector because the method is more economical than LC-MS/MS, which has been used in previous studies [10][11][12].This study aims to quantitatively analyze rifampicin and isoniazid in TB patients applied to volumetric absorptive microsampling through a valid and reliable HPLC method.
Chromatographic conditions
Chromatographic analysis was performed on a C-18 column (5 μm; 250 mm × 4.6 mm) maintained at 40 ℃. The mobile phase consisted of 0.05 M ammonium acetate buffer pH 5.0-acetonitrile-methanol (40:30:30) under isocratic elution at a flow rate of 0.5 ml/min. The injection volume was 20 µl, and the detection wavelength was 261 nm.
Sample preparation
VAMS samples were prepared by dipping the Mitra® tip into whole blood spiked at the appropriate concentration and drying it for an hour. The Mitra® tips were then removed and placed in a microtube. Extraction was performed by protein precipitation, adding 1 ml of acetonitrile and 50 µl of the 10 µg/ml internal standard to the sample. The mixture was sonicated at 30 ℃ for 15 min, vortexed for 2 min, and centrifuged for 5 min at 10,000 rpm. An 850 μl aliquot of the supernatant was pipetted off and evaporated under nitrogen at 40 °C for 20 min. The dried extract was reconstituted in 200 μl of methanol and homogenized with a 10 s vortex and a 5 min sonication. A 20 µl aliquot was injected into the HPLC system.
Method validation in volumetric absorptive micro-sampling
Method validation in this study followed the US Food and Drug Administration (FDA) guidance on bioanalytical method validation. Full validation of the analytical method in volumetric absorptive micro-sampling covered selectivity, carry-over, the lower limit of quantification (LLOQ), calibration curve linearity, accuracy, precision, dilution integrity, and stability. In addition, a recovery test was performed. Validation acceptance was defined by a coefficient of variation (CV) and relative difference (%diff) of <20% at the LLOQ and <15% at the other validated concentrations [13].
Selectivity and carry-over
The selectivity test was performed by analyzing LLOQ and blank samples from six different sources. Carry-over was evaluated by analyzing a blank immediately after the upper limit of quantification (ULOQ) concentration, in five replicates. To qualify for both the selectivity and carry-over tests, the blank interference response at the retention time of the analyte should be less than 20% of the LLOQ response and should not exceed 5% of the internal standard response [13].
Calibration curve
The calibration curve comprised three replicates of blank, zero, and six concentration levels ranging over 1.0–30.0 μg/ml for rifampicin and 0.4–20 μg/ml for isoniazid. The linear equation used to recalculate the concentrations of the calibration standards was constructed by plotting the peak area ratio (PAR) of analyte to IS versus the analyte concentration. The calibrators should fall within ±15% of the theoretical concentrations in each validation run, except at the LLOQ, where ±20% is allowed [13].
Precision and accuracy
The precision and accuracy tests assessed four concentration levels (LLOQ, QCL, QCM, and QCH) on the same day (within-run) and on different days (between-run), with five replicates each. The requirement for within- and between-run precision (CV) was ±15%, and accuracy had to be within ±15% of the nominal concentrations, except for the LLOQ, for which ±20% was allowed [13].
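As a concrete illustration of these acceptance checks, the following Python sketch computes within-run precision (%CV) and accuracy (%diff) for one QC level. The replicate values are hypothetical, since the study's raw replicates are not reported.

```python
import numpy as np

def within_run_stats(replicates, nominal):
    """Within-run precision (%CV) and accuracy (%diff) for one QC level.

    replicates : measured concentrations (µg/ml) from one validation run
    nominal    : theoretical (spiked) concentration (µg/ml)
    """
    replicates = np.asarray(replicates, dtype=float)
    mean = replicates.mean()
    cv = replicates.std(ddof=1) / mean * 100   # precision, %CV
    diff = (mean - nominal) / nominal * 100    # accuracy, %diff
    return cv, diff

# Hypothetical QCL rifampicin run (five replicates, nominal 3.0 µg/ml)
cv, diff = within_run_stats([2.91, 3.12, 2.88, 3.05, 2.97], nominal=3.0)
limit = 15.0  # ±20% would apply at the LLOQ
print(f"CV = {cv:.2f}%, %diff = {diff:.2f}%, pass = {cv <= limit and abs(diff) <= limit}")
```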
Recovery
Recovery was determined by comparing the response of the extracted sample with that of a blank spiked with analyte post-extraction at three concentration levels (QCL, QCM, and QCH), evaluated in triplicate. Reproducibility was considered acceptable with a CV value not exceeding 15% [13].
Dilution integrity
Dilution integrity was tested in five replicates at a concentration above the ULOQ (2× QCH), serially diluted to ½ and ¼ of that concentration. The acceptance criteria for dilution integrity were accuracy and precision (CV) within ±15% [13].
Stability
The stability test analyzed QCL and QCH in VAMS samples and the standard solutions of rifampicin, isoniazid, and cilostazol, each in triplicate. The VAMS samples were stored at room temperature, and the standard solutions at 4 ℃. The VAMS samples were analyzed at 0, 6, and 24 h for short-term stability and on days 7, 14, and 30 for long-term stability. The standard solutions were analyzed on days 7, 14, and 30. The accuracy at each level should be within ±15% [13].
Ethical approval
This study was granted ethical approval by the ethics committee at dr. Chasbullah Abdulmadjid General Hospital Bekasi, No. 012/KEPK/RSCAM/V/2022.
Application of the method
Patients were recruited according to the inclusion criteria, namely those diagnosed with pulmonary tuberculosis at dr. Chasbullah Abdulmadjid Hospital, receiving the rifampicin and isoniazid regimen as a fixed-dose combination, and aged 18-50 years at the time of blood collection. Blood samples were collected from the fingertips 2 and 6 h after administration and absorbed onto 30 µl Mitra® VAMS tips. The tips were stored in the Mitra® clamshell at room temperature with a desiccant until analysis. For analysis, the sample tips were detached from the handle and placed into 1000 μl of acetonitrile containing the 10 µg/ml IS. The extraction procedure followed the sample preparation described previously. The concentrations of rifampicin and isoniazid were calculated using daily calibration curves, and quality control samples were included in each analytical run to ensure data validity.
Selectivity and carry-over
The analytical method was validated to ensure that it was selective, sensitive, accurate, reproducible, and suitable for analyzing the samples. The method showed high selectivity because no interfering peaks were detected at the retention time of either analyte. The blank response is shown in fig. 1. The retention times were 2.55 min for isoniazid, 12.41 min for rifampicin, and 10.93 min for the internal standard. The results showed no interference from endogenous components and no cross-interference between the analytes and the IS under the assay conditions.
Fig. 1: Chromatogram of a blank sample
Carry-over was calculated as the peak area observed in the blank, expressed as a percentage of the mean peak area determined in the same run for the lowest calibration standard. The carry-over test met the requirements: the mean interference response at the retention time of rifampicin was less than 2.53%, that of isoniazid less than 1.71%, and that of cilostazol, the internal standard, less than 0.14%. These results indicate that a preceding assay at the highest concentration does not influence the subsequent assay.
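A minimal sketch of that calculation, using hypothetical peak areas since the raw detector responses are not reported:

```python
import numpy as np

def carry_over_pct(blank_areas, lloq_areas):
    """Mean blank peak area after a ULOQ injection, as a percentage of
    the mean LLOQ peak area from the same run (five replicates here)."""
    return np.mean(blank_areas) / np.mean(lloq_areas) * 100

# Hypothetical peak areas (arbitrary detector units)
rif = carry_over_pct([310, 298, 322, 305, 290],
                     [13500, 13880, 13210, 13650, 13400])
print(f"rifampicin carry-over = {rif:.2f}% (limit: <20% of LLOQ; <5% for the IS)")
```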
Calibration curve
The lower limits of quantitation (LLOQ) were 1.0 and 0.4 µg/ml for rifampicin and isoniazid, respectively, demonstrating satisfactory sensitivity for this method. Rifampicin at a concentration of 1.0 μg/ml produced an accuracy (%diff) of −3.97% to 13.06% with a CV of 6.40%, while isoniazid at a concentration of 0.4 µg/ml gave a %diff of 0.51% to 18.43% with a CV of 6.40%. Linearity was determined graphically by plotting the back-calculated concentrations versus the theoretical concentrations. The calibration curves gave the linear regressions y = 0.0295x − 0.0073 for rifampicin and y = 0.046x + 0.0204 for isoniazid. The correlation coefficients (R²) for rifampicin and isoniazid were 0.9975 and 0.9987, respectively, indicating a linear relationship between instrument response and analyte concentration. The method produced well-defined results proportional to the analyte concentration within the specified range, since all back-calculated concentrations were within <15% of the theoretical concentrations (<20% at the LLOQ).
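Using the regression equations reported above, a short sketch of the back-calculation and %diff check follows; the peak-area ratio used here is hypothetical.

```python
def back_calculate(par, slope, intercept):
    """Invert the calibration line PAR = slope * C + intercept."""
    return (par - intercept) / slope

# Regression equations reported in this study
RIF = dict(slope=0.0295, intercept=-0.0073)  # y = 0.0295x - 0.0073
INH = dict(slope=0.0460, intercept=0.0204)   # y = 0.0460x + 0.0204

# Hypothetical peak-area ratio of a 10 µg/ml rifampicin calibrator
par = 0.2884
conc = back_calculate(par, **RIF)
pct_diff = (conc - 10.0) / 10.0 * 100
print(f"back-calculated C = {conc:.2f} µg/ml, %diff = {pct_diff:+.1f}%")
```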
Accuracy, precision, and recovery test
This method was sufficiently precise and accurate, since all QCL, QCM, and QCH samples deviated by less than 15%, and the LLOQ by less than 20%, over three consecutive, independent runs. The within- and between-run accuracy and precision results are summarized in table 1. The results for all calibrator samples emphasize the robustness of this method for measuring blood rifampicin and isoniazid in VAMS.
The extraction recovery of the VAMS device is an important aspect to evaluate because it may affect analyte measurement in patient samples. Recovery was calculated from the area ratio of the extracted analyte to the analyte spiked after extraction; the post-extraction spike serves as a 100% reference point reflecting the VAMS extraction recovery. As shown in table 1, each analyte at each calibrator level had a high recovery (≥90%) with a CV of ≤15%. These data indicate that the extraction process in this study was optimal and reproducible.
Dilution integrity and stability
Dilution integrity had to be established to ensure accurate measurement of samples with concentrations above the upper limit of the standard curve. Dilution integrity, assessed by preparing five replicates of VAMS at a concentration above the ULOQ (2× QCH) and diluting them to half and quarter concentrations, yielded %diff values ranging from −13.24% to 9.56% for isoniazid and from −14.75% to 12.49% for rifampicin. The CV values were less than 8.18% and 10.20% for isoniazid and rifampicin, respectively. These data show that samples with concentrations above the upper limit of the standard curve can be diluted with blank matrix without affecting the final calculated concentration.
The storage stability of the stock solutions showed %diff values ranging from −2.01% to 14.30% for all analytes in methanol at room temperature (25 °C) for 24 h and in the refrigerator (4 °C) for a month. The rifampicin and isoniazid stability tests on VAMS also showed good stability, because the %CV and %diff values were less than 15% for all control samples. The stability of the analytes in VAMS was demonstrated over a room-temperature storage period of up to a month with desiccant and protection from light. All stability test results met the FDA requirements, indicating that the steps taken during sample preparation, processing, and analysis, as well as the storage conditions of the VAMS, do not affect the analyte concentrations. Therefore, the VAMS technique is suitable for collecting blood samples from TB patients to monitor drug concentrations.
Analysis of study samples
Monitoring rifampicin and isoniazid concentrations aims to evaluate the current dosing, which can help individualize the antituberculosis dose regimen. Inappropriate dosage is one of the drug-related problems in adult TB patients [14]. Dose adjustment can improve antituberculosis treatment outcomes by maximizing the therapeutic effect and minimizing toxicity. A total of 21 patients signed informed consent before sampling and analysis. The characteristics of the patients are summarized in table 2. All samples were obtained at 2 and 6 h after administration because of the variability of oral absorption. The 2 h post-dose concentrations of isoniazid and rifampicin are usually the most informative because Cmax typically occurs around this time. Unfortunately, a low 2 h value alone cannot distinguish delayed absorption from malabsorption; therefore, a 6 h post-dose sample was collected to differentiate between these two scenarios. This value also provides information on the elimination of drugs with short half-lives, such as rifampicin and isoniazid [15]. At 2 h post-dose, 66.67% and 47.62% of patients had concentrations within the therapeutic ranges of isoniazid and rifampicin, respectively. A high isoniazid level was found in 23.81% of patients, the highest being in patient SN03 at 10.43 µg/ml. This proportion is slightly higher than in a study conducted in Bali, which found that 16.7% of patients had isoniazid levels above the therapeutic range [16]. Isoniazid levels in the upper therapeutic range might be associated with slow acetylation. The rate of acetylation of isoniazid significantly alters its blood concentrations: slow acetylators have higher isoniazid levels than intermediate and rapid acetylators [17]. This may be responsible for an increased risk of adverse reactions such as hepatotoxicity, since isoniazid can bind to liver proteins and cause immune-mediated liver injury. In Indonesia, 26.2% of 172 patients experienced major adverse reactions to antituberculosis drugs, 60% of which were drug-induced hepatitis [18].
In contrast, none of the patients had a rifampicin concentration above the therapeutic range, but most patients (52.38%) had a low concentration. Subtherapeutic isoniazid was found in 9.52% of patients, the lowest being in patient SN11 at 2.63 µg/ml. In a previous study, low levels of both isoniazid and rifampicin were likewise identified in 34% of 60 patients [19]. Low drug levels are associated with an unfavorable clinical response, the acquisition of drug resistance, and treatment failure [20,21]. Hence, dose adjustment should be planned individually. Blood sampling 6 h after administration showed maxima of 7.88 and 2.99 µg/ml and minima of 1.17 and 0.41 µg/ml for rifampicin and isoniazid, respectively. The results confirm that none of the patients experienced delayed absorption, since no one had a high concentration of rifampicin or isoniazid at 6 h post-dose. The results of the analysis are shown in fig. 3 and 4. Based on these results, 52.38% of patients had low blood concentrations of at least one of the drugs at both 2 and 6 h post-dose; these might be identified as malabsorption cases. Rifampicin and isoniazid dosages can be adjusted to maintain therapeutic effects and prevent toxicity. The results are quite interesting, even though a limited number of subjects was used. Using more subjects in a cohort study would provide more comprehensive information on follow-up therapy to improve the success of antituberculosis treatment. Nevertheless, this study complements several other studies on the use of rifampicin and isoniazid in Indonesia [16,22]. The use of VAMS in this study offers additional advantages for patient comfort because of its minimal invasiveness and the small volume needed. Moreover, no conversion factors are required, since no significant differences exist between drug determinations in plasma and in microsamples [23,24]. It can be concluded that the method is reliable and effective for therapeutic drug monitoring of rifampicin and isoniazid.
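The interpretation logic described in this section (target 2 h ranges of 8-24 µg/ml for rifampicin and 3-6 µg/ml for isoniazid, with the 6 h sample separating delayed absorption from possible malabsorption) can be summarized in a small sketch. The thresholds come from the introduction; the rule itself is an illustrative reading of the text, not the authors' published algorithm, and the patient values are hypothetical.

```python
# Target 2 h therapeutic ranges used in this study (µg/ml)
RANGES = {"rifampicin": (8.0, 24.0), "isoniazid": (3.0, 6.0)}

def interpret(drug, c2h, c6h):
    lo, hi = RANGES[drug]
    if c2h < lo:
        # A low 2 h value alone cannot separate delayed absorption
        # from malabsorption; the 6 h sample disambiguates.
        if c6h > c2h:
            return "delayed absorption"
        return "possible malabsorption (consider dose adjustment)"
    return "above range" if c2h > hi else "within range"

# Hypothetical patient values
print(interpret("rifampicin", c2h=5.1, c6h=3.2))
```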
CONCLUSION
The volumetric absorptive micro-sampling technique was utilized to analyze rifampicin and isoniazid concentrations in 21 tuberculosis patients. 52.38% of patients had low blood concentrations of at least one of the drugs, indicating that a treatment dose adjustment is needed. 28.57% of the patients were in the therapeutic range, and 23.81% had a high blood concentration of isoniazid. This method was valid and reliable for therapeutic drug monitoring. | 2024-01-12T16:16:54.038Z | 2024-01-07T00:00:00.000 | {
"year": 2024,
"sha1": "fe61250df55c314a98975a9f1cb291660cac915c",
"oa_license": "CCBY",
"oa_url": "https://iasjournals.com/journals/index.php/ijap/article/download/49108/29420",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ecd86e4d91944a34d14aeaa85270bdf0c862b8fb",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": []
} |
256972589 | pes2o/s2orc | v3-fos-license | Digitalisation of Medical Records: Improving Efficiency and Reducing Burnout in Healthcare
(1) Background: electronic medical record (EMR) systems remain a significant priority for the improvement of healthcare services. However, their implementation may have placed a burden on healthcare workers (HCWs). This study aimed to determine the prevalence of burnout symptoms among HCWs who use EMRs at their workplace, as well as burnout-associated factors. (2) Methods: an analytical cross-sectional study was conducted at six public health clinics equipped with an electronic medical record system. The respondents covered a heterogeneous range of job descriptions. Consent was obtained before enrolment into the study. A questionnaire was distributed through an online platform. Ethical approval was secured. (3) Results: a total of 161 respondents were included in the final analysis, giving a 90.0% response rate. The prevalence of burnout symptoms was 10.7% (n = 17). Three significant predictors were identified in the final model: experiencing ineffective screen layouts and navigation systems, experiencing physical or verbal abuse by patients, and having a poor relationship with colleagues. (4) Conclusions: the prevalence of burnout symptoms among healthcare workers working with electronic medical record systems was low. Despite several limitations and barriers to implementation, a paradigm shift is needed to equip all health sectors with electronic medical record systems to improve healthcare service delivery. Continuous technical support and financial resources are important to ensure a smooth transition and integration.
Introduction
Burnout is not categorised as a medical condition in the International Classification of Diseases (ICD-11). It is often related to long-term workplace stress that has not been adequately managed, and it refers specifically to the occupational context rather than to other areas of life [1]. The aetiology of burnout is multifactorial: it can be influenced by individual factors, occupational factors, personality, coping style, and other factors [2]. Critically, three dimensions of burnout are frequently explored by researchers: (i) feelings of energy depletion or exhaustion, (ii) increased mental distance from one's job or feelings of negativity or cynicism about one's job, and (iii) reduced professional efficacy [1].
Burnout can occur in any occupation. This includes the medical fraternity, across the span of a doctor's career, affecting medical students, house officers, medical officers, and specialists [3]. Based on a previous national report, an estimated 26.5% of Malaysia's junior doctors experience burnout, particularly those who have worked for less than 6 months and those in emergency postings [4]. Furthermore, doctors in Malaysian urban hospitals are 5 times more likely to experience episodes of burnout (51%) than nurses, assistant medical officers, and hospital attendants [5]. Adding to these statistics, doctors working in paediatric departments in Malaysia are more likely to experience burnout than those in other departments such as accident and emergency, medicine, orthopaedics, psychiatry, obstetrics and gynaecology, and surgery [5].
Previous research found that 74.5% (155/208) of participants who reported burnout symptoms listed electronic medical record (EMR) systems as one of the contributing factors [3]. Recent qualitative research indicated that the use of EMR systems may have a detrimental effect on clinical reasoning and interprofessional collaborative practices [6]. However, EMRs have helped to improve doctors' dedication to their work because they can assist clinicians in building the narrative of the patient [6]. Hence, achieving nationwide adoption of EMR systems remains a significant priority for the improvement of healthcare services. However, their implementation may have resulted in hospitals incurring expensive upfront and ongoing costs, experiencing difficulty in medical collaboration, and ultimately being unable to use the systems in practice due to limited resources [7].
In 1989, Davis explained, using the technology acceptance model (TAM), that acceptance can be influenced by external variables, perceived usefulness, perceived ease of use, attitude towards using, behavioural intention to use, and actual system use. The perceived usefulness of the system occurs when the product improves job performance or assists in accomplishing tasks quickly and efficiently. The perceived ease of use occurs when users believe the product is flexible and easy to understand, and when it is easy to become skilled at using it [8]. A mixed-methods study in Malaysia classified challenges in data completeness in hospital information systems into system factors and human factors [9]. Therefore, in the present study, besides sociodemographic and occupational factors, the EMR system's intrinsic (related to the software and hardware) and extrinsic (related to human factors and overall satisfaction) factors were examined.
Significant challenges in adopting EMR system tools have been reported in many studies. A study in Malaysia found that issues related to EMR adoption were cost, technology, and human and legal-related concerns, and proposed an EMR adoption framework [10]. It reviewed the current situation of hospital information system (HIS) integration based on models such as the theory of reason action and the theory of planned behaviour, in addition to the TAM. That study then classified potential barriers and facilitators for using EMRs into three categories: organisational challenges, human challenges, and technological challenges.
In Malaysia, studies related to EMR systems have focused on their implementation in hospitals [11][12][13]. Exhaustive studies in similar settings have been performed abroad [14][15][16]. To our knowledge, limited studies have investigated EMR implementation in primary healthcare clinics (PHCs) or Klinik Kesihatan [17,18]. In the hospital EMR system, sometimes referred to as the HIS or clinical information system (CIS), a larger network of data management exists for hospital environments. The HIS is an important tool in improving a hospital's operations and services. However, only 15.2% of Malaysian public hospitals have implemented a HIS. One of the known factors in the slow progress of adopting HIS is human context, with intertwining contributions of technological maturity, organisational preparedness, and environmental context [19].
The implementation of EMR systems creates both favourable and unfavourable outcomes for the end user [20]. One aspect is burnout symptoms among healthcare workers (HCWs), reflected by overall dissatisfaction and frustration with using the EMR system [20]. This is because any alteration to the existing workflow, such as replacing a manual medical records system with an EMR system, modifies the usual work processes in a PHC [21]. Despite the known advantages when the technology is utilised, the implementation of an EMR system in a PHC inevitably causes burnout symptoms. Hence, this study aimed to determine the factors contributing to burnout among HCWs using EMR systems in PHCs and to measure its health burden. Ultimately, the findings of this study may serve as a guide on the mental health burden among HCWs in PHCs and aid in planning appropriate interventions. The research questions for this study were: (1) What is the prevalence of burnout among HCWs using EMR systems in PHCs? (2) What factors contribute to burnout among HCWs using EMR systems in PHCs? (3) What are the predictors of burnout among HCWs using EMR systems in PHCs?
Study Setting
This was an analytical cross-sectional study conducted in Seremban District, Negeri Sembilan, Malaysia, from October 2020 to March 2021. Six PHCs were chosen for a pilot project in 2015 under the Seremban district health office to use an EMR system, named Tele Primary Care and Oral Health Clinic Information System (TPC-OHCIS), from 2017 [22]. The TPC-OHCIS system is a one-on-one end-user system that requires all job categories to have unique user IDs and computers (desktops or laptops) to access the system at the specific health facility. TPC-OHCIS is one of the e-Health initiatives of the Ministry of Health Malaysia. It is designed to integrate and improve the existing EMR system and be used separately in PHCs. Teleprimary Care (TPC) is used in PHCs, and the Oral Health Clinical Information System (OHCIS) is for dental clinics; both systems were integrated into TPC-OHCIS. The TPC-OHCIS system is currently used as the daily real-time operating system, for example, for data entry. Subsequently, the system has taken over the overall functions of other clinical systems in use. Thus, patient health records can now span information, from prenatal care to elderly care, in a 'womb to tomb' approach [22]. Hence, these six PHCs were chosen as the study setting. These PHCs are also among the busiest clinics in Seremban District and have a total of 755 HCW users of the TPC-OHCIS system.
Study Participants and Samples Size
The participants in this study were HCWs from different backgrounds: family medicine specialists (FMSs), medical officers (MOs), assistant MOs, and nurses. The exclusion criteria were HCWs currently diagnosed with and receiving treatment for mental illnesses. The inclusion criteria were:
2. HCWs currently working in the six PHCs in Seremban District.
3. HCWs with at least one month of working experience using the EMR system in their PHC.
The sample size (n) in this study was calculated using the two-proportions formula of Lwanga and Lemeshow (1990) [23]. The estimation of the sample size was based on Siau et al. (2018) [5]. Based on this calculation, n = 65 HCWs for each group, doctors and nurses. After adjustment for the comparison between the 2 groups, the total sample was 131 HCWs.
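A sketch of the two-proportions formula follows. The paper does not report the exact inputs (proportions, alpha, power), so the values here are illustrative, loosely based on the burnout proportions cited from Siau et al.; treat the printed n as an assumption-driven example rather than a reproduction of the published calculation.

```python
from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two proportions
    (two-proportions formula of Lwanga & Lemeshow, 1990)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Hypothetical burnout proportions for doctors vs nurses
print(round(n_two_proportions(0.51, 0.27)))  # per-group n
```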
Study Instruments
An online, self-administered, structured questionnaire, consisting of five main sections, was used. The sections were (A) sociodemography, six items; (B) occupational factors, seven items; (C) EMR system intrinsic factors, seven items; (D) EMR system extrinsic factors, six items; and (E) adapted Mini Z survey to measure burnout.
Various models are used to explore the acceptance of EMR systems in healthcare industries, the most common being the TAM [1] and the unified theory of acceptance and use of technology (UTAUT) model [2]. Another simple approach is to group EMR system implementation challenges into system and human factors [9]. System-related factors include old hardware and insufficient availability, network stability and connectivity, difficulty in retrieving data, the flexibility of the EMR system, and interoperability issues; these were treated as the intrinsic factors of Section C. Addressing these issues strategically prevents user resistance towards the EMR system and helps avoid burnout.
For Section D, the EMR system's extrinsic factors were defined as features, in addition to the built-in digital system, on the user's and management's side that have an impact on system implementation [3]. As with the EMR system's intrinsic factors, the TAM by Davis (1989) [1] explains these external factors that influence the acceptance of technology by the end user [8]. Another study also suggested that EMR system implementation challenges can be grouped into system and human factors [9]. Extrinsic factors, referred to as human-related factors, include the habit of entering data in free-text format, using copy/paste without verification, poor inter-personnel communication, user non-compliance with operating protocols, and user incompetence [9].
The original Mini Z instrument was developed from the physician work-life study [24] and consists of ten questions using a five-point Likert scale, with one open-ended question at the end. These ten items assess three outcomes: burnout, stress, and satisfaction. The seven drivers of burnout comprise four occupational factors, (i) work control, (ii) work chaos, (iii) teamwork, and (iv) value alignment with leadership, and three EMR system components, (i) documentation time pressure, (ii) EMR use at home, and (iii) EMR proficiency. Of the three outcomes (burnout, stress, and satisfaction), stress was excluded from this study because it was not the outcome of interest, and satisfaction was repositioned into Section D (EMR system extrinsic factors). Of the seven drivers of burnout, two work-related items, work control and work chaos, were excluded during content validation by experts because they were not clear in the local context. Teamwork and value alignment with leadership were translated and retranslated and included in Section B (occupational factors). Only two of the three EMR system components in the original Mini Z were retained in this study, because item b, EMR use at home, did not fit the local context, since the system is only used in healthcare facilities.
Some studies have used only a single question item (Q #2) to reflect burnout symptoms, rather than all items in the Mini Z questionnaire. This single-item tool has been demonstrated to correlate strongly with the emotional exhaustion scale of the Maslach Burnout Inventory (MBI) [25]: the Pearson correlation between the single-item Mini Z burnout measure and the 22-item MBI emotional exhaustion scale was r = 0.64 (p < 0.0001), with an R² of 0.5 (p < 0.0001) in ANOVA [25]. Hence, this study opted to use the single-item question to measure the emotional exhaustion component of burnout, given that it is an alternative to the MBI, with the ultimate intention of increasing the response rate [25]. Some of the items were adapted from the Mini Z questionnaire, some (using EMR at home) were not relevant to the local context, and a few new items were added regarding occupational factors and the EMR system's application in the PHCs.
The content of the questionnaire was assessed by the members of the supervisory team and other panels of experts in the field, such as public health medicine specialists with experience working closely with the EMR system in PHCs, and HCWs who were involved in using TPC-OHCIS. All components were covered, including sociodemographic and occupational factors, the EMR system's intrinsic and extrinsic factors, and burnout. Corrections and comments were made based on suggestions and recommendations. The content validity ratio (CVR) was used; the CVR value for all 26 main items in the questionnaire was 1. Lawshe's CVR method, introduced in 1975, is popular in scale development for the health and education sciences. The CVR value lies between 0 and 1 when more than half, but not all, of the experts rate an item essential, and it is negative when fewer than half do [26]. The face validity of the English-language questionnaire was assessed during the pre-test of the questionnaire with five non-expert judges, who commented on its language, structure, and sentences. Because of the minor adaptation of the items in the questionnaire, internal consistency reliability was re-estimated using Cronbach's alpha (CA). Based on the pilot study performed with 43 HCWs who were EMR users in Kuala Lumpur and Putrajaya PHCs, the CA was 0.716 for occupational factors (3 items), 0.873 for the EMR system's intrinsic factors (7 items) and 0.740 for the EMR system's extrinsic factors (5 items). The overall CA for all 15 Likert-scale items was 0.902.
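For reference, minimal implementations of Lawshe's CVR and Cronbach's alpha are sketched below; the expert counts and the score matrix would come from the panel ratings and pilot responses, which are not published, so the example values are hypothetical.

```python
import numpy as np

def lawshe_cvr(n_essential, n_experts):
    """Lawshe (1975): CVR = (ne - N/2) / (N/2); equals 1 when every
    expert rates the item essential, as reported for all 26 items."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(lawshe_cvr(5, 5))  # 1.0 when all judges agree
```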
Data Collection
After Medical Research and Ethics Committee approval on 9 June 2021, and discussion with the Negeri Sembilan State Health Department leading to approval on 15 June 2021, data collection was conducted using the validated, structured questionnaire over one week (15 June 2021 to 22 June 2021). A name list of HCWs in PHC clinics using the EMR system was obtained from the responsible officer before data collection, and randomization was performed. After permission was granted by the site supervisor in Seremban, each HCW was provided a Google form link. They were invited to participate after the objectives of the study were explained to them and they were advised to read the information sheet provided online. The expected time to answer the questionnaire was five to ten minutes per respondent. The online patient information sheet was completed and informed consent was obtained if the HCW agreed to participate in the study. The confidentiality of each HCW was maintained throughout the process. Participants were assured that their identities would remain anonymous, and the data were analysed in a way that maintained their validity. This study abided by the Declaration of Helsinki.
Data Analysis
Data analysis was carried out using the IBM Statistical Package for Social Sciences (SPSS) version 26.0. Continuous data were expressed as median and IQR, as none were normally distributed. Categorical data were described using frequencies and percentages. The dependent variable for this study was burnout symptoms, measured using the adapted Mini Z burnout question. For bivariate analysis, categorical data were analysed using the Chi-square test, or Fisher's exact test where the requirements of the Chi-square test were not met. All categorical variables that were significant in the bivariate analysis were entered into the multivariate analysis using multiple logistic regression (MLogR). A value of p < 0.05 was considered statistically significant. The data were analysed on an intention-to-treat basis.
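As a sketch of the multivariate step in Python/statsmodels (the study used SPSS), the code below fits a logistic regression and exponentiates the coefficients to odds ratios with 95% CIs. The data frame is randomly generated, so the printed ORs are illustrative only; the variable names mirror the final-model predictors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: binary burnout outcome and binary predictors
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "burnout": rng.integers(0, 2, 161),
    "abuse": rng.integers(0, 2, 161),
    "poor_colleague_rel": rng.integers(0, 2, 161),
    "poor_screen_layout": rng.integers(0, 2, 161),
})
X = sm.add_constant(df[["abuse", "poor_colleague_rel", "poor_screen_layout"]])
fit = sm.Logit(df["burnout"], X).fit(disp=0)

# Odds ratios with 95% CI (exponentiated coefficients)
or_table = pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(fit.conf_int()[0]),
                         "CI_high": np.exp(fit.conf_int()[1])})
print(or_table.round(2))
```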
Ethical Approval
The research was registered with National Medical Research Register (NMRR) with NMRR registration number NMRR-21-551-58900 (IIR) and received approval from the Medical Ethics Committee (MREC), Ministry of Health Malaysia. Written consent was acquired from participants before the study. Participants' information from this study is kept strictly confidential and their anonymity was maintained. The documents for informed consent and the information sheet were distributed online to the participants.
Results
The questionnaire was distributed to a total of 170 HCWs based on simple randomisation from the sampling frame received before the study commenced. A total of 161 respondents consented and completed the questionnaire, giving a response rate of 90%. The prevalence of burnout among HCWs was 10.7% (17 of 161 participants), whereas 89.3% of participants were not experiencing burnout (Table 1). The majority of the respondents were under 40 years old (n = 111, 69.4%), female (n = 130, 81.8%), of Malay ethnicity (n = 141, 88.1%), married (n = 145, 90.6%), with fewer than 3 children (n = 92, 57.5%), and working as an MO (n = 49, 30.6%; Table 2). Table 3 shows the results of the association analysis between the independent variables and burnout symptoms among HCWs. None of the sociodemographic factors was statistically significant. Three occupational factors showed significant associations: having experienced verbal or physical aggression from patients (χ² = 6.002, p = 0.014), lower scores on relationships with colleagues (χ² = 8.008, p = 0.011), and lower scores on workplace efficiency (χ² = 7.378, p = 0.013). In addition, 6 of the 7 EMR intrinsic factors showed significant associations: time spent on documentation (χ² = 5.795, p = 0.023), the efficiency of screen layout and navigation (χ² = 11.445, p = 0.002), stability of the EMR system (χ² = 5.849, p = 0.016), integration of the EMR system with other electronic systems (χ² = 6.402, p = 0.011), having technical support (χ² = 6.443, p = 0.0011), and hardware and infrastructure (χ² = 9.705, p = 0.002). However, only a single EMR extrinsic factor was statistically significant in the bivariate analysis: overall satisfaction with the EMR system (χ² = 7.385, p = 0.013).
To determine the predictors of burnout symptoms among HCWs using the EMR, multiple logistic regression analysis was used. All 17 variables were tested, and the analysis was performed using the forward method. Three significant predictors were obtained: having experienced patients' verbal or physical aggression, a lower score on relationships with colleagues, and a lower score for screen layout and navigation in the EMR system (Table 4). Respondents who had experienced verbal or physical aggression from patients were 5.7 times more likely to be experiencing burnout than those who never had. Respondents who scored lower on their relationships with colleagues were 3.9 times more likely to be experiencing burnout than those who scored higher. Respondents who scored lower on the efficiency of the screen layout and navigation of the EMR system were 5.3 times more likely to be experiencing burnout than participants who scored higher.
Discussion
This is among the first studies in Malaysia to use the single-item Mini Z survey, as opposed to the more popular MBI-HSS or CBI, to measure burnout among HCWs alongside occupational factors other than the EMR system. Overall, our study showed a lower prevalence, 10.7%, than the burnout level among MOs (25.5%) in a tertiary hospital in the Klang Valley in Malaysia [27]. Another Malaysian study, conducted among house officers, MOs, and specialists (n = 313) in hospitals, reported that 15.9% had burnout symptoms [25]. Other studies using the Mini Z among HCWs have reported rates of 25-30% in the US and 25.6% in Canada [3,28,29]. The lower prevalence may be because the TPC-OHCIS system has been implemented since 2017, and users in the PHCs, especially at the pilot site in Seremban, had already familiarised themselves with and accepted the system's challenges. The EMR system may also have served as an aid in the PHC workflow. This is beneficial for the staff management system at the PHC or Seremban District Health level, preventing staff from encountering the dissatisfaction that naturally leads to burnout. Differences between PHC and hospital settings may also contribute to the lower prevalence of burnout in this study, especially the nature of the work, the type of cases seen, the urgency of cases, and the working hours in clinics.
In this study, respondents' perception that they had poor relationships with their colleagues was significantly associated with burnout symptoms. Poor interpersonal relationships with colleagues remained significant after inclusion in the final multivariate model. This finding is similar to another study in Malaysia, where conflict among colleagues was a significant predictor of burnout [30]. Interpersonal relationships are important in the workplace. Most HCWs spend more time at their workplace with their colleagues than at home with their families. Hence, those who are close to and have good relationships with colleagues may find them a source of help when they encounter difficulties in operating the EMR system; the resulting lower level of frustration towards the EMR system can reduce burnout symptoms. Through good relationships with co-workers, HCWs can have a strong and cooperative working environment that fosters relaxation and serenity.
Among respondents, the belief that the screen layout and navigation were inefficient was a significant predictor of prevalent burnout symptoms. This is in line with a study performed in the US among primary care doctors, in which the inability to navigate the EMR system quickly was associated with high stress and burnout levels [20]. Doctors working in an academic psychiatric hospital also reported that too much 'clicking' was significantly associated with a negative perception of the EMR system and contributed to burnout [3].
The final predictor in our study was a history of exposure to physical or verbal abuse by clients or patients. In any circumstance, HCWs who have experienced physical or verbal abuse are likely to face a higher risk of burnout [31]. This also aligns with the Malaysian finding of emotional exhaustion as a predictor of burnout when dealing with difficult patients [30]. In other studies, workplace violence has mainly involved verbal abuse, which is more common than physical abuse [32,33]. A fast and simple online reporting system could be implemented to monitor HCWs who have experienced violence in the workplace so that immediate action can be taken. Appropriate counselling or allowing unrecorded leave are actions that can help in managing their acute stress.
Even though only three predictors in the final model contributed to the risk of burnout symptoms among HCWs working with an EMR system, other factors are still worth mentioning. For example, respondents' belief that their professional values were not aligned with those of department leaders was significantly associated with burnout symptoms. A survey among US primary care doctors found that professional value was a significant predictor of burnout using the Mini Z survey [20]. These findings are also similar to a study among Malaysian paediatricians, which discovered that a lack of appreciation from superiors (p = 0.019) was a significant predictor of burnout [30]. With this information, activities with leaders and staff can be planned to strengthen their relationships, and transparent, regular communication can be encouraged as part of burnout intervention programs.
Apart from that, workplace efficiency could also affect burnout symptoms. This is supported by a local Malaysian study which found that a crowded workplace and a hostile working environment were significantly associated with burnout among HCWs [30]. The stability of the EMR system also contributed to the low level of burnout symptoms; this is supported by findings in the US, where EMR systems with response-time problems were associated with higher stress and burnout levels [20]. Ensuring a stable internet connection and server at PHCs may prevent an increase in burnout symptoms among HCWs. Likewise, the availability of technical support could explain the low level of burnout symptoms. This is supported by a systematic review reporting that one of the barriers to the successful implementation of EMR systems was the lack of technical support for hardware and software issues to assist doctors [34]. Other studies have reported that limited hardware for the EMR system affected healthcare performance and user acceptance of the technologies [9,10]. However, this study is one of the first known to associate EMR system hardware and infrastructure inadequacy with burnout among HCWs.
Strength and Limitation
The main strength of this study was that it used the latest EMR system (TPC-OHCIS) in primary healthcare settings. Therefore, the findings of this study can be used as a guide for system developers or system architects to focus on HCW preferences for EMR systems. Designing the correct layout that is preferred by the HCWs is very important because they are the end users of the EMR system in PHC clinics. Furthermore, the application's user interface is culturally accepted, reducing confounding factors of system diversity. The random sampling technique of selecting participants is another strength of this study. Probability sampling would enable the result to be generalised widely.
This study has its limitations. The cross-sectional design means that the temporal relationship between the outcome and the associated independent variables cannot be determined [35]. Additionally, an online survey was used instead of face-to-face interaction because of the COVID-19 pandemic and the government's movement control orders at the time. As a result, it is difficult to ascertain whether all the respondents truly comprehended the questions asked or simply answered the questions provided to them [36]. Finally, our use of a single-item questionnaire to measure burnout may not truly represent the condition; the item is skewed towards measuring emotional exhaustion, so the result should be interpreted with care.
Conclusions
To conclude, the use of an EMR system in PHCs was associated with a low level of burnout. The predictors of burnout symptoms among users of the EMR system were the following: experiencing an ineffective screen layout and navigation system, having experienced physical or verbal abuse by patients, and having a poor relationship with colleagues. Hence, a continuous support system and empowerment programs should be provided to all HCWs working with EMR systems. More efforts can be made to increase capacity-building and the full utilisation of EMRs. | 2023-02-18T16:14:28.697Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "dd89827941969455f22f934c473c4b73267b8a9c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "efff55cfa6518de8bb71478ebdf647005d6d1f56",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
231641030 | pes2o/s2orc | v3-fos-license | A comparative study to evaluate CT-based semantic and radiomic features in preoperative diagnosis of invasive pulmonary adenocarcinomas manifesting as subsolid nodules
This study aims to predict the histological invasiveness of the pulmonary adenocarcinoma spectrum manifesting as subsolid nodules ≦ 3 cm using a preoperative CT-based radiomic approach. A total of 186 patients with 203 SSNs confirmed by surgical pathology were retrospectively reviewed from February 2016 to March 2020 for training cohort modeling. The validation cohort included 50 subjects with 57 SSNs confirmed by surgical pathology from April 2020 to August 2020. CT-based radiomic features were extracted using open-source software with manual 3D nodular volume segmentation. The associations between CT-based conventional features/selected radiomic features and the histological invasiveness of pulmonary adenocarcinoma were analyzed. Diagnostic models were built using conventional CT features, selected radiomic CT features, and experienced radiologists. In addition, we compared the diagnostic performance of the radiomic CT feature, the conventional CT features, and the experienced radiologists. In the training cohort of 203 SSNs, there were 106 invasive lesions and 97 pre-invasive lesions. Logistic analysis identified a selected radiomic feature, GLCM_Entropy_log10, as a predictor of the histological invasiveness of the pulmonary adenocarcinoma spectrum (OR: 38.081, 95% CI 2.735–530.309, p = 0.007). The sensitivity and specificity for predicting histological invasiveness using the cutoff value of the CT-based radiomic parameter (GLCM_Entropy_log10) were 84.8% and 79.2%, respectively (area under curve, 0.878). The diagnostic model of the CT-based radiomic feature was compared to those of the conventional CT features (morphologic and quantitative) and three experienced radiologists. The diagnostic performance of the radiomic feature was similar to that of the quantitative CT features (nodular size and solid component, both lung and mediastinal window) in predicting invasive pulmonary adenocarcinoma (IPA). The AUC value of the CT radiomic feature was higher than those of the conventional CT morphologic features and the three experienced radiologists. The c-statistic of the training cohort model was 0.878 (95% CI 0.831–0.925) and 0.923 (0.854–0.991) in the validation cohort. Calibration was good in both cohorts. The diagnostic performance of the CT-based radiomic feature is not inferior to that of the solid component (lung and mediastinal window) or nodular size for predicting invasiveness. The CT-based radiomic feature and nomogram could help to differentiate IPA lesions from preinvasive lesions in both independent cohorts. The nomogram may help clinicians with decision making in the management of subsolid nodules.
Materials and methods
Study cohort. The study population consisted of 186 subjects with 203 SSNs pathologically proved and classified as pulmonary adenocarcinoma spectrum lesions according to the IASLC/ATS/ERS classification from February 2016 to March 2020 for training cohort modeling. The validation cohort included 50 subjects with 57 SSNs confirmed by surgical pathology from April 2020 to August 2020. The flowchart in Fig. 1 summarizes the study design and the diagnostic performance of each approach. The inclusion criteria were as follows: (1) patients with SSNs ≦ 30 mm in diameter; (2) patients who did not receive preoperative treatment prior to surgery; (3) patients who underwent surgical resection within 3 months of CT; and (4) a preoperative chest CT scan with thin slice thickness (≦ 2.5 mm) before surgical intervention. The protocol of this study was approved by the Institutional Review Board (IRB) of Kaohsiung Veterans General Hospital, and the study followed the guidelines of the Helsinki Declaration. All methods were performed in accordance with the relevant guidelines and regulations. Written informed consent was waived owing to the retrospective study design by the IRB of Kaohsiung Veterans General Hospital (No. VGHKS19-CT6-19).
CT imaging protocol and acquisition. All preoperative chest CT scans were performed with a 16-slice CT (Somatom Sensation 16, Siemens Healthcare, Erlangen, Germany), a 64-slice CT (Aquilion 64; Toshiba Medical Systems), or a 256-slice CT (Revolution CT, GE Healthcare, Milwaukee, USA) from the lung apex to the base without contrast enhancement, as described in the previous study 13 . CT scans were acquired at full inspiration without contrast medium. The scanning parameters, using similar protocols across vendors, were as follows (Supplementary Table 1): tube voltage, 120 kVp; body mass index (BMI)-dependent tube current of 220-350 mAs. Images were reconstructed with a section thickness of 1-2.5 mm using a soft tissue kernel algorithm (the different CT protocols are detailed in Supplementary Table 1).
Conventional CT features (qualitative and quantitative).
The radiologic characteristics were reviewed independently by two radiologists, who were blinded to the pathologic reports; disagreements were resolved in consensus. The CT-based features comprised the following qualitative and quantitative data. Qualitative features were as follows: (1) nodular type according to the Fleischner classification (GGNs manifest as hazy opacity in the lung that does not obliterate the bronchovascular bundle; part-solid nodules consist of both ground-glass opacity and solid components) 14,15 ; (2) novel nodular type according to the novel classification (pure GGN, heterogeneous GGN (partly consolidated on lung windows), and part-solid nodules (with a mediastinal-window solid component), as proposed in the previous prospective study by Kakinuma et al.) 10 ; (3) abnormal cystic-like space change (an example is shown in Fig. 2); (4) air bronchogram (an example is shown in Fig. 3); (5) shape (smooth, lobulated, or spiculated border); (6) round (oval or irregular). CT-based qualitative imaging features were recorded in consensus using the long-axis diameter. Quantitative features were as follows: (1) nodular size; (2) solid component in the mediastinal window; (3) solid component in the lung window. In addition, three readers were asked to interpret each SSN at 2 levels: preinvasive lesion or invasive lesion. The diagnostic performance of the radiomic CT feature and of the three radiologists in classifying preinvasive versus invasive lesions was compared in the training cohort.
Quantitative radiomic CT feature. Radiomic features of these 203 SSNs were extracted using the LifeX package (LifeX, version 5.10, Orsay, France, http://www.lifexsoft.org) for nodule segmentation, with a volume of interest (VOI) of at least 64 voxels, for training cohort modeling 16 . The contours of the SSNs were delineated manually by an experienced thoracic radiologist, with regions of interest (ROI) drawn around the nodule boundary on each section. A total of 41 features were derived from the CT images and grouped into intensity, shape, and second- and higher-order features (Supplementary Table 2). From the histogram of the grey-level distribution, the minimum, maximum, mean, and standard deviation of the Hounsfield unit (HU) distribution were extracted. The first-order metrics extracted from the histogram were SkewnessH, KurtosisH, EntropyH, and EnergyH. The second-order metrics calculated from co-occurrence matrices were homogeneity, energy, contrast, correlation, entropy, and dissimilarity. The higher-order metrics included features of the grey-level co-occurrence matrix (GLCM), neighborhood grey-level dependence matrix (NGLDM), grey-level run length matrix (GLRLM), and grey-level zone length matrix (GLZLM).
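As a rough illustration of the GLCM entropy feature, the sketch below quantizes a region of interest and computes a base-10 GLCM entropy with scikit-image (version ≥ 0.19 assumed, for graycomatrix). LifeX's exact binning and aggregation settings differ, so this is a conceptual sketch, not a reproduction of the LifeX output, and the ROI values are simulated.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy_log10(roi, levels=64):
    """GLCM entropy (base 10) over a quantized CT region of interest.

    roi : 2D array of HU values inside the segmented nodule slice
    """
    # Quantize HU into discrete grey levels, as texture toolkits do
    # before building the co-occurrence matrix
    q = np.digitize(roi, np.linspace(roi.min(), roi.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    p = p[p > 0]  # drop empty cells before taking the log
    return float(-(p * np.log10(p)).sum())

# Hypothetical 2D ROI of HU values
roi = np.random.default_rng(1).normal(-500, 120, (32, 32))
print(f"GLCM_Entropy_log10 = {glcm_entropy_log10(roi):.3f}")
```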
Pathologic evaluation. All surgically resected specimens were fixed in 10% formalin and embedded in paraffin with haematoxylin and eosin staining for pathological diagnosis. The surgically resected SSN specimens were histopathologically analyzed by two senior pathologists experienced in lung pathology and classified as AAH, AIS, MIA, or IPA according to the revised lung adenocarcinoma (IASLC/ATS/ERS) classification of 2011 7,8 . Discordant cases were subsequently discussed in a consensus meeting until consensus was obtained. All SSNs were divided into two groups: a preinvasive lesion group (AAH, AIS, and MIA) and an invasive lesion group (invasive adenocarcinoma) according to the revised lung adenocarcinoma (IASLC/ATS/ERS) classification.
Statistical analyses. All statistical analyses were performed using SPSS 22.0 for Windows (SPSS Inc, Chicago, IL) and Stata version 13.1 (StataCorp, College Station, Texas, USA). Because all the continuous variables were normally distributed, Student's t-test was used to test differences between the two groups; continuous variables are presented as mean ± standard deviation (SD). Categorical variables were summarized as frequencies and percentages and compared using the chi-square or Fisher exact test to examine differences in demographic characteristics. Univariate and multivariate logistic regression were used to identify parameters for differentiating IPA lesions from preinvasive lesions; the results are expressed as odds ratios (OR) with 95% confidence intervals (CI). Receiver operating characteristic (ROC) curves were constructed, and the area under the curve (AUC) was calculated to compare the diagnostic performance of the conventional CT features, the radiomic CT feature, and the three experienced radiologists. In addition, sensitivity, specificity, PPV, NPV, positive likelihood ratio (LR+), and negative likelihood ratio (LR−) were calculated to measure the overall accuracy of the tests. Calibration was assessed by the Hosmer-Lemeshow goodness-of-fit statistic and by calibration graphs plotting predicted IPA against the observed rates in deciles of predicted risk. A nomogram was established based on the radiomic parameter in the training cohort. Statistical significance for all tests was set at p < 0.05.
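A sketch of the ROC/cutoff analysis with scikit-learn follows, using simulated scores since the per-nodule feature values are not published. The Youden index is one common way to pick the operating point; the paper does not state which criterion was used, so this is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y_true, scores):
    """ROC analysis with the Youden-index cutoff (sens + spec - 1)."""
    fpr, tpr, thr = roc_curve(y_true, scores)
    best = int(np.argmax(tpr - fpr))
    return roc_auc_score(y_true, scores), thr[best], tpr[best], 1 - fpr[best]

# Simulated invasiveness labels (97 preinvasive, 106 invasive) and
# hypothetical GLCM_Entropy_log10 values
rng = np.random.default_rng(2)
y = np.r_[np.zeros(97), np.ones(106)]
s = np.r_[rng.normal(1.0, 0.2, 97), rng.normal(1.4, 0.2, 106)]
auc, cut, sens, spec = youden_cutoff(y, s)
print(f"AUC={auc:.3f}, cutoff={cut:.3f}, sens={sens:.1%}, spec={spec:.1%}")
```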
Result
Demographics and clinical characteristics. We retrospectively reviewed thin-slice images of 203 SSNs in 186 subjects who had subsolid nodule(s) preoperatively and subsequently underwent surgical resection, with pathologically confirmed adenocarcinoma spectrum lesions at our hospital within a three-month interval, for training cohort modeling. Of the 203 SSNs, 97 were preinvasive lesions and 106 were invasive lesions. Table 1 summarizes the patients' characteristics in the training and validation cohorts. For clinical characteristics, there were no significant differences between the two groups in sex ratio, smoking history, lesion location, cystic change, air bronchogram, shape, or roundness. Compared with the validation cohort, there were no differences in age, nodular size, solid component (lung window), or solid component (mediastinal window) in the training cohort, as shown in Table 1.
Among the 12 features selected in this study cohort, there were no significant differences between the training and validation cohorts in CONVENTIONAL_HUmean, CONVENTIONAL_HUstd, CONVENTIONAL_HUQ2, CONVENTIONAL_HUQ3, HISTO_Entropy_log10, HISTO_Entropy_log2, GLCM_Entropy_log10, GLCM_Entropy_log2 (= joint entropy), GLRLM_HGRE, GLRLM_SRHGE, GLZLM_HGZE, or GLZLM_SZHGE, as shown in Table 2. Univariate and multiple logistic regression analyses of conventional CT characteristics and radiomic texture features in the prediction of invasive lesions are shown in Table 3. The univariate logistic regression models suggested that all conventional CT characteristics and radiomic texture features were significantly associated with the prediction of invasive lesions. On multiple logistic regression, GLCM_Entropy_log10 was the only independently important predictor of invasive lesions. Table 4 shows the sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), likelihood ratio (LR)(+), and LR(−) values of the conventional CT features and radiomic features for predicting invasive lesions in SSNs. A comparison of the diagnostic performance of the conventional CT features, radiomic texture features, and three radiologists in predicting invasive lesions is summarized in Table 5. GLCM_Entropy_log10 was the best predictor for differentiating preinvasive from invasive lesions; its optimal cut-off value yielded a sensitivity of 84.80% and a specificity of 79.20% (PPV = 81.66%; NPV = 82.66%). In model 1, GLCM_Entropy_log10 had the largest AUC value, 0.878, which was significantly higher than those of the conventional CT morphologic characteristics (abnormal cystic-like space change: 0.542; air bronchogram: 0.764; shape: 0.823; round: 0.798). In model 2, GLCM_Entropy_log10 had diagnostic performance similar to the conventional quantitative CT features. Among these quantitative CT predictors, nodule size was the most sensitive sign, but the solid components (mediastinal and lung window) were the two parameters with the optimal balance between sensitivity and specificity. Comparing the radiomic feature with the subsolid nodule classification systems (Fleischner and novel), model 3 showed that GLCM_Entropy_log10 had diagnostic performance similar to the novel SSN classification system but superior to the Fleischner classification system in predicting invasive lesions.
In model 4, GLCM_Entropy_log10 had the highest AUC value of 0.878, which was significantly higher than the AUCs of the three experienced radiologists (radiologist 1: 0.692; radiologist 2: 0.806; radiologist 3: 0.759).
Discussion
The heterogeneous behavior of persistent subsolid nodules is among the most frequently encountered diagnostic and management dilemmas in Asian lung cancer screening programs, which have a high prevalence of non-smoking-related lung cancers 3,4,13,17,18. In addition, discrepancies in subsolid nodule categorization caused by disagreement on the presence of a solid component may lead to different clinical decisions and management [19][20][21]. In this context, texture analysis of subsolid nodules has been recognized as a quantitative means of differentiating invasive pulmonary adenocarcinomas from preinvasive lesions. Distinguishing invasive pulmonary adenocarcinomas from preinvasive lesions is important for clinical decision making in lung cancer screening and subsolid nodule management 13,22,23. In this study, our results demonstrated that the GLCM-based feature GLCM_Entropy_log10 was an independent predictor of invasive pulmonary adenocarcinoma. We built a nomogram based on this feature to predict IPA, and it showed good discrimination and goodness-of-fit.
Furthermore, our results demonstrate the superior performance of the GLCM-based feature (GLCM_Entropy_log10) over CT-based morphologic features: it yielded a significantly higher AUC for the prediction of invasive pulmonary adenocarcinomas. Previous studies have demonstrated that the solid component is the major determinant of the degree of invasiveness of lung adenocarcinoma spectrum lesions [24][25][26], in line with our findings. In addition, GLCM_Entropy_log10 showed diagnostic performance similar to the solid component (mediastinal or lung window) in the prediction of invasive lesions. In contrast to computer-aided quantitative texture analysis, CT-based quantitative and qualitative features perceived by the naked eye lead to large inter-observer variability depending on the radiologist 27. Moreover, visual image interpretation cannot fully capture the underlying biological heterogeneity of subsolid nodules. These findings suggest that texture analysis, as a non-invasive, mathematically quantitative method of assessing the biological heterogeneity within subsolid nodules, might be of clinical relevance in predicting the pathologic invasiveness of lesions of the pulmonary adenocarcinoma spectrum.
Table 2. Selected radiomic features of the study population with SSNs in the training and validation cohorts. HU Hounsfield unit, GLCM gray-level co-occurrence matrix, GLRLM grey-level run length matrix, HGRE high grey-level run emphasis, SRHGE short-run high grey-level emphasis, GLZLM grey-level zone length matrix, HGZE high grey-level zone emphasis, SZHGE short-zone high grey-level emphasis.
Previous studies have utilized various radiomic score models to distinguish invasive pulmonary adenocarcinomas from preinvasive lesions presenting as subsolid nodules ≦ 3 cm [28][29][30][31][32]. However, these models rely on several different sets of extracted radiomic features [33][34][35], and the complexity of such radiomic score models makes their results difficult to verify and apply in the real world. In the present study, we used a single simplified radiomic feature parameter to identify the pathologic invasiveness of lung adenocarcinoma lesions and compared its performance with that of conventional CT morphologic features and experienced radiologists. To the authors' knowledge, no published study has comprehensively investigated the difference in diagnostic performance between a simplified radiomic parameter, conventional CT features, and radiologists.
Table 4. The diagnostic performance of conventional CT features and radiomic features for the prediction of invasive lesions in SSNs. SSN subsolid nodule, AUC area under curve, HU Hounsfield unit, GLCM gray-level co-occurrence matrix, GLZLM grey-level zone length matrix, SZHGE short-zone high grey-level emphasis, PSN part-solid nodule.
Table 5. Comparison of ROC curves for the radiomic feature, conventional CT features, and radiologists in the differential diagnosis of invasive versus preinvasive lesions. ROC receiver operating characteristic, AUC area under curve, GLCM gray-level co-occurrence matrix.
In this model, established with the single simplified texture feature generated for this study, the sensitivity, specificity, and AUC were 84.8%, 79.2%, and 0.878 (95% CI 0.831-0.925), respectively. There were significant differences (abnormal cystic-like space change, p < 0.001; air-bronchogram, p < 0.001; shape, p = 0.049; round, p = 0.008) in AUC between the model based on the single simplified texture feature and those based on conventional CT morphologic features. In addition, the diagnostic performance of our single-feature model was higher than that of all three radiologists (all three readers, p < 0.001). Our results are consistent with high intra-tumor heterogeneity being associated with high entropy, suggestive of the progression and degree of invasiveness of adenocarcinoma spectrum lesions. Previous studies have demonstrated that histogram-based 75th-90th percentile CT numbers and entropy were the best predictors for distinguishing between IPA and AIS-MIA 36.
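The 95% CI reported for the AUC can be estimated in several ways; one simple, assumption-light approach is bootstrap resampling. A sketch follows (labels and scores are simulated placeholders; the study's actual CI method is not specified in this excerpt):

```python
# Sketch: bootstrap 95% CI for the AUC of a single continuous predictor
# (e.g., GLCM_Entropy_log10). `y` (0 = preinvasive, 1 = invasive) and
# `score` are simulated arrays, not the study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=203)            # hypothetical labels
score = y + rng.normal(0, 0.8, size=203)    # hypothetical feature values

aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))   # resample cases with replacement
    if len(np.unique(y[idx])) < 2:          # need both classes in a resample
        continue
    aucs.append(roc_auc_score(y[idx], score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC={roc_auc_score(y, score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```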
In addition, we identified a single simplified second-order GLCM-based quantitative statistical texture parameter, representing the whole-tumor texture, that significantly differentiates invasive lesions from preinvasive lesions. In this study, manual segmentation of an SSN typically took about 3 min, with delineation across a dozen slices. In the future, deep-learning-based automatic nodule segmentation could be used to extract this specific GLCM-based feature and thereby develop a computer-aided detection system to assist clinical decision-making in differentiating IPA lesions from preinvasive lesions.
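For readers unfamiliar with the feature itself, GLCM entropy is the Shannon entropy of the grey-level co-occurrence probabilities. A minimal sketch on a toy image is given below (single offset and symmetric matrix; production radiomics software applies calibrated grey-level binning and averages over offsets):

```python
# Sketch: second-order GLCM entropy of a small grey-level image.
# Single offset (right neighbour), symmetrized matrix; a simplification
# of the full definitions used by radiomics packages.
import numpy as np

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = img.max() + 1

glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):
        a, b = img[i, j], img[i, j + 1]
        glcm[a, b] += 1
        glcm[b, a] += 1                     # symmetrize

p = glcm / glcm.sum()                       # joint probabilities
nz = p[p > 0]
entropy_log2 = -(nz * np.log2(nz)).sum()    # GLCM_Entropy_log2 (joint entropy)
entropy_log10 = -(nz * np.log10(nz)).sum()  # GLCM_Entropy_log10
print(entropy_log2, entropy_log10)
```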
The main strength of this study is that we established a simplified radiomic signature based on only one second-order statistical radiomic feature, which showed better diagnostic performance in differentiating IPA from pre-invasive lesions than the conventional CT morphologic model and three experienced radiologists.
In addition, the GLCM-based feature (GLCM_Entropy_log10) has diagnostic performance similar to the solid component (mediastinal or lung window) in the prediction of invasive lesions. However, our study has several limitations. First, there was a potential for patient selection bias due to the retrospective single-site design; further validation of these results in prospective multi-center studies is warranted. Second, nodule segmentation was performed manually by experienced radiologists, which may contribute substantially to interobserver variability 27; however, interobserver variability was very low in our preliminary report based on 40 cases. Third, the use of different CT vendors and the lack of standardized scanning parameters may limit the external validity and generalizability of the results in real-world practice [37][38][39][40]; however, all study scans were acquired with a thin slice thickness of ≦ 2.5 mm, meeting ACR accreditation requirements for LDCT imaging protocols.
Conclusion
In conclusion, a simplified radiomic signature and nomogram based on the GLCM-based feature (GLCM_Entropy_log10) can help to differentiate invasive from pre-invasive lesions. For the prediction of invasive lesions, a GLCM_Entropy_log10 value higher than 2.963 yielded the optimal discrimination between the invasive and preinvasive groups, with a sensitivity of 84.8% and a specificity of 79.2%. In addition, this radiomic feature may provide superior diagnostic performance compared with morphologic CT features and radiologists. The nomogram may help clinicians with decision making in the management of subsolid nodules.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. | 2021-01-20T06:16:19.687Z | 2021-01-18T00:00:00.000 | {
"year": 2021,
"sha1": "dcd5dee9c8a120dcc75f37792eb07e00311b8534",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-79690-4.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "95f17acdad93122281108c2f13cf86dcd244d235",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240159641 | pes2o/s2orc | v3-fos-license | Corticosteroid treatment in severe patients with SARS-CoV-2 and chronic HBV co-infection: a retrospective multicenter study
Background The impact of corticosteroids on patients with severe coronavirus disease 2019 (COVID-19)/chronic hepatitis B virus (HBV) co-infection is currently unknown. We aimed to investigate the association of corticosteroids with outcomes in these patients. Methods This retrospective multicenter study screened 5447 confirmed COVID-19 patients hospitalized between Jan 1, 2020 and Apr 18, 2020 in seven centers in China, where the prevalence of chronic HBV infection is moderate to high. Severe patients who had chronic HBV and acute SARS-CoV-2 infection were potentially eligible. The diagnosis of chronic HBV infection was based on positive testing for hepatitis B surface antigen (HBsAg) or HBV DNA during hospitalization and a medical history of chronic HBV infection. Severe patients (meeting one of the following criteria: respiratory rate > 30 breaths/min; severe respiratory distress; SpO2 ≤ 93% on room air; or oxygen index < 300 mmHg) with COVID-19/HBV co-infection were identified. Confounding of corticosteroid effects was minimized using a multivariable logistic regression model and inverse probability of treatment weighting (IPTW) based on propensity scores. Results The prevalence of HBV co-infection in COVID-19 patients was 4.1%. There were 105 patients with severe COVID-19/HBV co-infection (median age 62 years, 57.1% male). Fifty-five patients received corticosteroid treatment and 50 patients did not. In the multivariable analysis, corticosteroid therapy (OR, 6.32, 95% CI 1.17–34.24, P = 0.033) was identified as an independent risk factor for 28-day mortality. With IPTW analysis, corticosteroid treatment was associated with delayed SARS-CoV-2 viral RNA clearance (OR, 2.95, 95% CI 1.63–5.32, P < 0.001), increased risk of 28-day and in-hospital mortality (OR, 4.90, 95% CI 1.68–14.28, P = 0.004; OR, 5.64, 95% CI 1.95–16.30, P = 0.001, respectively), and acute liver injury (OR, 4.50, 95% CI 2.57–7.85, P < 0.001). The methylprednisolone dose per day and the cumulative dose were significantly higher in non-survivors than in survivors. Conclusions In patients with severe COVID-19/HBV co-infection, corticosteroid treatment may be associated with increased risk of 28-day and in-hospital mortality. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-022-07882-6.
Introduction
The pandemic of COVID-19, induced by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is placing a sustained burden on health care, economic, and social systems worldwide [1]. By 2016, there were approximately 292 million people with chronic hepatitis B (CHB) worldwide, a condition that can result in severe liver disease [2]. China has a moderate to high incidence of chronic HBV infection, with a population prevalence of hepatitis B surface antigen of around 4.51-9.51% [3]. COVID-19 may be complicated by acute liver injury. There are still insufficient data on COVID-19/HBV co-infection [4]. Whether pre-existing chronic HBV infection may aggravate the clinical course of COVID-19, and vice versa, is largely unknown [4].
Chronic HBV infection may result from abnormal host immune responses [2,3]. Patients with pre-existing HBV infection might be more susceptible to SARS-CoV-2 infection because of an immunocompromised status [5,6]. In addition, in the presence of HBV co-infection, immune responses to SARS-CoV-2 may differ substantially from those observed in immunocompetent patients. Therefore, co-infection with HBV and SARS-CoV-2 might synergistically contribute to immune dysfunction and a markedly different immune status during the disease course.
Adults with severe COVID-19 typically present with dysregulated innate and adaptive immune responses resulting in a multisystem inflammatory syndrome. Severe COVID-19 is characterized by excessive production of inflammatory cytokines/mediators (IL-6, IL-10, and ferritin) [7]. Patients included in this study were hospitalized at the very beginning of the COVID-19 pandemic, when the effects of corticosteroid treatment were not yet clear; whether a patient received corticosteroids largely depended on the physician's decision. Current WHO guidelines recommend the use of corticosteroids in COVID-19 patients who require oxygen supplementation [8]. However, it remains unclear whether the benefit-to-risk ratio of corticosteroids remains favorable across all subgroups of patients [9,10]. Thus far, there is little information on the efficacy and safety of corticosteroids in the subgroup of patients with severe COVID-19 and HBV co-infection.
We performed a multicenter retrospective study to investigate the effects of treatment with corticosteroids on clinical outcomes in severe COVID-19 patients with chronic HBV co-infection.
Study design and participants
This is a retrospective study enrolling patients hospitalized between Jan 1, 2020 and Apr 18, 2020. Inclusion criteria were a confirmed diagnosis of severe COVID-19 and chronic HBV at admission. The diagnosis of chronic HBV infection was based on a medical history of chronic HBV infection and positive testing for hepatitis B surface antigen (HBsAg) or HBV DNA [11]. Patients with COVID-19 were considered to have severe illness if they met at least one of the following criteria [12]: respiratory rate > 30 breaths/min; severe respiratory distress; SpO2 ≤ 93% on room air; or oxygen index < 300 mmHg.
Data collection and study outcomes
Data extraction was performed by a trained team of physicians using a standardized form to collect data from electronic medical records on demographic characteristics, medical history, underlying medical conditions, symptoms and signs from disease onset to hospital admission, complications and outcomes, laboratory tests and treatments. All recorded data were double-checked by trained physicians and a third researcher adjudicated any discrepancies.
The primary outcomes were all cause mortality at 28-day from hospital admission and hospital discharge. The secondary outcomes were development of acute respiratory distress syndrome (ARDS), sepsis shock, acute liver injury, acute kidney injury (AKI), acute cardiac injury, the need for invasive mechanical ventilation, for continuous renal replacement therapy (CRRT), and the time from symptoms onset to SARS-CoV-2 RNA clearance in respiratory secretions.
Statistical analysis
The Kolmogorov-Smirnov test or Shapiro-Wilk test was used to test the normality of continuous variables. Continuous variables with a normal distribution were expressed as mean ± SD and compared using the unpaired, 2-tailed Student's t test. Continuous variables with a skewed distribution were presented as median (interquartile range) and compared with the Mann-Whitney U test. Categorical variables were summarized as numbers (percentages) and compared by the Pearson Chi-square test or Fisher's exact test. The Kaplan-Meier estimator was used to estimate survival curves over the 28-day period, and the log-rank test was used to compare survival probability between the corticosteroid and non-corticosteroid treatment groups. To explore risk factors associated with 28-day mortality, univariate analyses and a multivariable logistic regression model were constructed to estimate ORs and 95% confidence intervals (95% CIs). The variables in the multivariable logistic regression model were lymphocyte count, Hs-CRP, age, gender, ALT, comorbidity, albumin, and d-dimer, measured on hospital admission and at corticosteroid treatment initiation respectively; they were selected based on the existing literature [13,14] and on the significance of the P value in the univariate analysis of the data in this study.
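A sketch of the survival comparison described above, written with the lifelines Python package rather than the SPSS/SAS/R software actually used by the study; all arrays are hypothetical placeholders, not study data:

```python
# Sketch: 28-day survival comparison between corticosteroid and
# non-corticosteroid groups with Kaplan-Meier curves and a log-rank test.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
time = rng.integers(1, 29, size=105)     # days to death or censoring (hypothetical)
event = rng.integers(0, 2, size=105)     # 1 = died, 0 = censored (hypothetical)
steroid = rng.integers(0, 2, size=105)   # 1 = corticosteroid group (hypothetical)

kmf = KaplanMeierFitter()
for grp, label in [(1, "corticosteroid"), (0, "no corticosteroid")]:
    mask = steroid == grp
    kmf.fit(time[mask], event[mask], label=label)
    print(label, "28-day survival:", kmf.survival_function_.iloc[-1].values)

res = logrank_test(time[steroid == 1], time[steroid == 0],
                   event_observed_A=event[steroid == 1],
                   event_observed_B=event[steroid == 0])
print("log-rank p =", res.p_value)
```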
To confirm the association of corticosteroid therapy with mortality, we used three analytic strategies to minimize the bias introduced by confounding variables. First, we performed an IPTW analysis based on propensity scores to estimate causal treatment effects. For this purpose, the propensity score for each patient was calculated by a logistic regression model that included the same variables as the a priori logistic regression model. In both the unweighted and pseudo-population cohorts, the standardized mean difference (SMD) was computed; an SMD of > 10% suggested an imbalance between groups. Second, we constructed an extended Cox regression model which incorporated the same confounding variables and corticosteroid therapy as a time-varying exposure variable, as previously described [15,16]. Third, we constructed a multivariable logistic regression model which incorporated corticosteroid therapy as a categorical (yes/no) variable and the same confounding variables with values at the time of corticosteroid therapy initiation (not baseline values) to avoid "indication bias". For patients without corticosteroids, these values were drawn at the median initiation time of corticosteroid treatment.
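The first strategy, IPTW with an SMD balance check, can be sketched as follows. The covariates and treatment assignments are simulated placeholders, and the `smd` helper is an illustrative function, not code from the study:

```python
# Sketch: IPTW based on a propensity score, with the standardized mean
# difference (SMD) as the balance check (|SMD| > 0.10 flags imbalance).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(105, 8))         # hypothetical confounders
treat = rng.integers(0, 2, size=105)  # 1 = corticosteroid (hypothetical)

ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))   # IPTW weights

def smd(x, t, wts):
    """Weighted standardized mean difference for one covariate."""
    m1 = np.average(x[t == 1], weights=wts[t == 1])
    m0 = np.average(x[t == 0], weights=wts[t == 0])
    v1 = np.average((x[t == 1] - m1) ** 2, weights=wts[t == 1])
    v0 = np.average((x[t == 0] - m0) ** 2, weights=wts[t == 0])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2)

# balance in the weighted pseudo-population
for j in range(X.shape[1]):
    print(f"covariate {j}: SMD = {smd(X[:, j], treat, w):.3f}")
```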
In addition, the association of corticosteroid therapy with 28-day mortality was analyzed in six predefined subgroups: male vs. female; age ≥ 65 years vs. < 65 years; lymphocyte count < 0.8 × 10^9/L vs. ≥ 0.8 × 10^9/L; d-dimer < 1 µg/mL vs. ≥ 1 µg/mL; albumin < 30 g/L vs. ≥ 30 g/L; and Hs-CRP < 5 mg/L vs. ≥ 5 mg/L. The cut-off value for continuous variables in each subgroup was determined according to previously used clinical thresholds [13,14,17]. In subgroups, ORs with 95% CIs were estimated by logistic regression analysis. Missing numerical data were imputed with the median, and missing categorical data with the most frequent category. A two-tailed P value of 0.05 or less was considered statistically significant. Statistical analyses were done using SPSS software, version 22.0 (SPSS Inc., Chicago, Illinois, United States), SAS 9.4, and R 3.6.2 (R Foundation for Statistical Computing).
Demographic and clinical characteristics of patients with COVID-19/HBV co-infection
A total of 5447 adult patients with confirmed COVID-19 were screened; we excluded 820 without HBV serological marker results. Among the 4627 remaining patients, 190 were HBsAg-positive. The prevalence of HBV co-infection in hospitalized COVID-19 patients was therefore 4.1% (Fig. 1).
The median initiation time of corticosteroid treatment was 2 (1, 5) days after admission.
Laboratory findings at baseline and at the time of corticosteroid therapy initiation
The baseline laboratory data and characteristics are displayed in Additional file 1: Tables S1-S3. There was no significant difference between corticosteroid-treated and corticosteroid-free patients in leukocyte and platelet counts, or in plasma levels of d-dimer, ALT, AST, ALP, bilirubin, pre-albumin, albumin, total cholesterol, triglyceride, high-density lipoprotein, high-sensitivity troponin, or IL-6. In addition, SOFA and APACHE II scores at baseline did not differ statistically between the two groups (Additional file 1: Table S3). Laboratory findings at the time of corticosteroid therapy initiation are displayed in Additional file 1: Table S4; likewise, there was no significant difference between corticosteroid-treated and corticosteroid-free patients in the above-mentioned laboratory parameters.
Using different analytic strategies for adjustment yielded highly consistent results, including the IPTW analysis (OR, 4.90, 95% CI 1.68-14.28, P = 0.004) (Additional file 1: Figs. S1, S2 and Tables S5, S6). In the subgroup analysis, the association between corticosteroid therapy and mortality did not change significantly across subpopulations based on gender, age, lymphocyte count, d-dimer, albumin, and Hs-CRP (Fig. 4).
Secondary outcomes
A SARS-CoV-2 RNA-positive result in the upper respiratory tract more than 20 days after symptom onset was more frequent in patients treated with corticosteroids than in those without (58.2% vs 18.0%, P < 0.001) (Table 2).
The time from symptom onset to SARS-CoV-2 RNA clearance was longer in corticosteroid-treated than in corticosteroid-free patients (median 24 days vs 17 days, P = 0.026) (Table 2).
There was no significant difference between corticosteroid-treated and corticosteroid-free patients in the incidence of septic shock, AKI, or acute cardiac injury. However, corticosteroid treatment was associated with an increased risk of acute liver injury (60.0% vs 38.0%, P = 0.024) (Table 2); this result was confirmed by the other analytic strategies, including the IPTW analysis (OR 4.50, 95% CI 2.57-7.85, P < 0.001) (Table 4) and the multivariable logistic regression model incorporating variable values at the time of corticosteroid therapy initiation (OR, 1.85, 95% CI 1.07-3.20, P = 0.029) (Additional file 1: Table S6).
Laboratory parameters
Compared with corticosteroid-free patients, corticosteroid-treated patients had significantly increased neutrophil counts and d-dimer levels (at 14 and 28 days after admission, respectively) (all P < 0.05) (Fig. 5C, F). Corticosteroid treatment was associated with decreased lymphocyte counts (P < 0.05) (Fig. 5D). Serum levels of ALT, bilirubin, and IL-6 did not differ statistically during corticosteroid treatment (at 7, 14, and 28 days after admission) (all P > 0.05) (Fig. 5A, B, E).
Corticosteroid therapy among patients with severe COVID-19 and HBV co-infection
Most patients (48/55, 87.3%) received corticosteroid therapy more than 7 days after symptom onset, including 15 non-survivors (Table 5). All patients in the corticosteroid treatment group received methylprednisolone. In the subgroup analysis, the average methylprednisolone dose was significantly higher in non-survivors (83 mg/day) than in survivors (40 mg/day) (Table 5).
Discussion
To the best of our knowledge, this is the first report on the clinical impact of corticosteroid treatment in patients with severe COVID-19/HBV co-infection. In this retrospective review, corticosteroid treatment was associated with higher mortality in patients with severe COVID-19 and HBV co-infection. Furthermore, methylprednisolone doses of 83 mg/day or more, and initiation more than 7 days after first symptoms, may be associated with increased mortality. Survivors in the corticosteroid group received corticosteroid therapy with a lower cumulative dose (< 400 mg methylprednisolone) and daily dose (< 80 mg methylprednisolone). Our study showed that corticosteroid treatment was associated with higher d-dimer levels and neutrophil counts. The proportion of patients receiving therapeutic anticoagulants was higher among corticosteroid-treated than corticosteroid-free patients. These results may contribute to the identification of subgroups of COVID-19 patients who should not receive corticosteroids. A recent report found that the prevalence of HBV in the general population was 7-11%, whereas in COVID-19 patients it was only 0-1.3%; by contrast, in our cohort of COVID-19 patients the prevalence of HBV was 4.1%. Corticosteroids were more likely to be given to patients with severe COVID-19. Patients in this cohort were hospitalized at the very beginning of the COVID-19 pandemic, when evidence on corticosteroid therapy was limited, and physicians decided whether to implement corticosteroid therapy in severe patients based on their experience. The current management of patients with severe COVID-19 has substantially changed. The UK-based Randomized Evaluation of COVID-19 Therapy (RECOVERY) trial reported that dexamethasone reduced mortality by one-third (29.3% vs 41.4% for usual care) in severe COVID-19 patients who required respiratory support [18]. One meta-analysis of clinical trials found that, compared with usual care or placebo, systemic corticosteroid treatment was associated with lower 28-day mortality [9]. To date, there is no definite recommendation on whether corticosteroids should be used in patients with severe COVID-19 and HBV co-infection. Our findings suggest that, in these patients, corticosteroids may be associated with increased short-term mortality.
One explanation for worse outcomes with corticosteroids in patients with COVID-19 and HBV co-infection may be the combination of HBV- and SARS-CoV-2-mediated effects on the immune response. Chronic HBV infection is characterized by dysfunction of the innate and adaptive immune response, particularly a deficiency in virus-specific CD8+ T cells [15]. The antibody-producing function of B cells is also impaired in HBV infection [16]. A decrease in immune cells, especially lymphocytes, CD4+ T cells, and CD8+ T cells, is a marker of poor prognosis in COVID-19 patients [19], consistent with our finding of decreased lymphocyte counts in non-survivors. Therefore, immune deficiency caused by chronic HBV infection may play a role in the progression of COVID-19 disease. The immune-suppressing effects of corticosteroid therapy, which are mediated mainly by T-cell responses [20], may exacerbate the immune dysfunction in patients with COVID-19 and HBV co-infection. Immune deficiency may affect the immune response to SARS-CoV-2, resulting in delayed viral clearance [5,21]. In our study, 58.2% of corticosteroid-treated patients still had detectable SARS-CoV-2 RNA in the upper respiratory tract more than 20 days after the onset of symptoms, significantly more than among corticosteroid-free patients (18.0%). In COVID-19 patients with HCV or HIV co-infection, immune deficiency was also found to alter the host response to SARS-CoV-2 [27]. Therefore, immune deficiency may shape the clinical course of COVID-19 and HBV co-infection after corticosteroid therapy. Similarly, observational studies in patients with SARS and MERS suggested that corticosteroid therapy was associated with delayed viral clearance from blood and the respiratory tract and an increased risk of secondary infection [22,23]. Our results are consistent with these previous studies and further support a role of corticosteroids in prolonging SARS-CoV-2 replication in patients with COVID-19 and HBV co-infection.
Table 4. The comparison of primary and secondary outcomes of patients with severe COVID-19 and HBV co-infection according to corticosteroid and non-corticosteroid treatment after IPTW analysis (on admission). COVID-19: coronavirus disease 2019; HBV: hepatitis B virus; CRRT: continuous renal replacement therapy; SARS-CoV-2: severe acute respiratory syndrome coronavirus 2; ARDS: acute respiratory distress syndrome; OR: odds ratio; CI: confidence intervals. P values indicate differences between corticosteroid and non-corticosteroid groups; P < 0.05 was considered statistically significant.
As an immune-suppressive drug, corticosteroid therapy is a risk factor for HBV reactivation in patients with chronic HBV infection [24]. Hepatitis B reactivation is the reappearance or rise of HBV DNA in the serum of patients with past or chronic HBV infection [24]; it may result in fulminant hepatitis and may cause death. Recently, a large observational study showed that patients with a history of HBV infection who received systemic corticosteroids at high peak daily doses (> 40 mg prednisolone equivalent) had a higher risk of hepatitis flare, although mortality was not significantly increased; caution is therefore warranted in the systematic use of corticosteroids in patients with HBV infection [25]. In our study, corticosteroids were associated with an increased incidence of acute liver injury, suggesting altered liver function. Liu et al. found that COVID-19 patients co-infected with chronic HBV could be at risk of hepatitis B reactivation, especially with corticosteroid therapy [5]. A majority of patients in this cohort had HBeAg-negative CHB. HBV DNA was tested in nearly one-third of patients, with no positive results; most patients in this cohort were thus likely HBV carriers, and active chronic infection was infrequent. Since most patients in this cohort did not undergo repeated HBV DNA testing, further studies are needed to determine whether these multiple organ function injuries are related to HBV reactivation. In addition, patients with HBV-related cirrhosis have very poor immune and liver function [6], and pre-existing cirrhosis increases the risk of poor COVID-19 outcomes. Only 15 patients in our study had cirrhosis, of whom 11 survived. Therefore, the effects of different stages of HBV infection on the prognosis of COVID-19 patients need further study.
Furthermore, this study provides preliminary evidence on the association of corticosteroids with laboratory findings in patients with severe COVID-19/HBV co-infection. We found that corticosteroid treatment was associated with increased d-dimer levels. SARS-CoV-2 infection induces coagulopathy and secondary hyperfibrinolysis [26,27], and autopsy studies in COVID-19 have found systemic microvascular thrombosis in most cases [27,28]. Higher d-dimer levels on admission, as well as persistently elevated levels, effectively predict in-hospital mortality in COVID-19 patients [28]. In our study, most non-survivors had high d-dimer levels. In patients with severe COVID-19 and chronic HBV co-infection, corticosteroids might increase the risk of coagulopathy and thrombosis.
Our study has several limitations. First, the small sample size generated wide confidence intervals, which may result in imprecise effect estimation; although the results from different models consistently suggested a harmful effect of corticosteroid therapy on 28-day mortality, future large-scale multi-center studies are warranted to validate our findings. Second, missing data on HBV-DNA levels prevented us from analyzing the association of corticosteroids according to the clinical phase of chronic HBV infection (active infection versus carriage); we did not know the exact rate of HBV infection for all patients included in this study, and 820 patients were excluded owing to the lack of HBV serological testing. Third, we were unable to explore whether the liver injury was associated with concurrent drug therapies for COVID-19. Fourth, only a small proportion of patients received nucleotide/nucleoside analogue therapy, which precluded assessment of its impact on liver function and outcomes. Fifth, patients in our study arrived at the hospital late (median time to admission, 14 days); therefore, some patients received corticosteroid therapy in the late stage of disease, which might bias the effect estimation.
Conclusions
In patients with severe COVID-19 and HBV co-infection, corticosteroid treatment was associated with an increased risk of 28-day and in-hospital mortality, higher d-dimer levels and neutrophil counts, acute liver injury, and delayed SARS-CoV-2 viral RNA clearance. There is a risk that corticosteroid treatment and severe disease are interconnected; this should therefore be addressed in a placebo-controlled randomized trial in the future.
Implications for clinical practice and future research
COVID-19 and HBV co-infection is not infrequent, and there is an urgent need to warn about the potential risk associated with corticosteroid treatment in this subgroup of patients. This multicenter study may alert physicians and hepatologists to the potential detrimental effects of corticosteroid therapy in patients with severe COVID-19 and chronic HBV co-infection. The clinical features and underlying mechanisms of the different responses to corticosteroid therapy in severe COVID-19 patients with chronic HBV co-infection need further investigation, particularly in the context of recommendations to use corticosteroids in the routine management of COVID-19 requiring oxygen supplementation. | 2021-11-13T13:00:48.449Z | 2022-11-28T00:00:00.000 | {
"year": 2022,
"sha1": "b64bfd8d209ad40bc699dfa458ec49b6781212e6",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1014370/v1.pdf?c=1635447034000",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "5737e1fe91732b97cba52ca6d7b71203ad1ac238",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39073943 | pes2o/s2orc | v3-fos-license | Spaces of morphisms from a projective space to a toric variety
In this paper we study the space of morphisms from a complex projective space to a compact smooth toric variety X. It is shown that the first author's stability theorem for the spaces of rational maps from CP^m to CP^n extends to the spaces of continuous morphisms from CP^m to X, essentially, with the same proof. In the case of curves, our result improves the known bounds for the stabilization dimension.
Introduction
Given two complex algebraic varieties one can ask how well the space of continuous algebraic morphisms between them approximates the space of all corresponding continuous maps. This question was posed by G. Segal in [20] where he studied the spaces of holomorphic maps from a Riemann surface to a complex projective space. He proved the following result. Let $S$ be a Riemann surface of genus $g$, $F_d(S, \mathbb{CP}^n)$ the space of rational functions of degree $d$ from $S$ to $\mathbb{CP}^n$, and $M_d(S, \mathbb{CP}^n)$ the corresponding space of continuous maps. Similarly, denote by $F^*_d(S, \mathbb{CP}^n)$ and $M^*_d(S, \mathbb{CP}^n)$ the respective spaces of basepoint-preserving maps.
Theorem 1 ([20]). The inclusions $F_d(S, \mathbb{CP}^n) \to M_d(S, \mathbb{CP}^n)$ and $F^*_d(S, \mathbb{CP}^n) \to M^*_d(S, \mathbb{CP}^n)$ are homology equivalences up to dimension $(d - 2g)(2n - 1)$ for all $g \geq 0$. When $g = 0$ these maps are also homotopy equivalences up to the same dimension.
Segal conjectured that his results could be generalized much further to include spaces of curves in algebraic varieties more general than projective spaces. Such generalizations were obtained in subsequent works by many authors: Kirwan [14], Gravesen [9], Guest [10,11], Mann and Milgram [15,16], Hurtubise [13], Boyer, Hurtubise and Milgram [2], Cohen, Lupercio and Segal [4], and others. At the moment, the strongest results for the case when g = 0 (rational curves) and the target is a finite-dimensional variety X are those of Guest [11] for the case when X is a (possibly singular) toric variety and that of Boyer, Hurtubise and Milgram [2] for the case when X is a smooth Kähler manifold with a holomorphic action of a connected complex solvable Lie group which has an open dense orbit on which the group acts freely. A very general conjecture on this subject is given in the paper of Cohen, Jones and Segal [3]. Several papers [12,17] give real versions of Segal's theorem (the first real version being due to Segal himself in [20]).
Segal's theorem can also be generalized to the case when the domain of the maps has dimension greater than 1. Using Vassiliev's simplicial resolutions, the first author in [18] and [19] proved an analog of Segal's theorem for the spaces of continuous rational maps from CP m to CP n for 1 ≤ m ≤ n (see also a real version in [1]).
The purpose of this note is to show that the proof given in [18,19] extends to the spaces of maps from CP m to a smooth compact toric variety. We shall use the homogeneous coordinates for a toric variety X and the description of the morphisms CP m → X given by Cox in [5] and [6]: a morphism CP m → X is given in these coordinates by a collection of homogeneous polynomials with certain properties. Let P f be the space of morphisms from CP m to X given by polynomials of fixed degrees, and whose restriction to a fixed hyperplane CP m−1 coincide with a given morphism f : CP m−1 → X. Consider the space of all continuous maps from CP m to X whose restriction to CP m−1 coincide with f . This space is homotopy equivalent to the 2m-fold loop space of X and in what follows will be denoted by Ω 2m X. We shall show that the inclusion P f → Ω 2m X induces isomorphism in homology groups up to some dimension depending on m, X and the degrees of the polynomials contained in P f . The precise statement of our theorem will be given in the next section.
In the case m = 1 our result includes the homology part of the theorem of Guest (for smooth varieties only) as a particular case and, moreover, gives an improvement over the results of [11] in what concerns the stabilization dimension.
Toric varieties
A toric variety $X$ is a complex algebraic variety which admits an action of the algebraic torus $(\mathbb{C}^*)^n$, with an open and dense orbit on which the action is free.
Given such $X$, we shall use the following notation:
• $N \simeq \mathbb{Z}^n$ - a lattice in $\mathbb{R}^n$.
• $M$ - the integer dual of $N$.
• $\Sigma \subset N_{\mathbb{R}} = \mathbb{R}^n$ - the fan of $X$.
• $\rho_1, \ldots, \rho_r$ - the one-dimensional cones of $\Sigma$.
• $n_i$ - the primitive generator of $\rho_i \cap N$.
• $D_i$ - the divisor on $X$ corresponding to $\rho_i$. (Recall that the one-dimensional cones of $\Sigma$ are in correspondence with irreducible $(\mathbb{C}^*)^n$-invariant Weil divisors on $X$.)
• $A_{n-1}(X)$ - the group of Weil divisors modulo the subgroup of principal divisors. Two divisors $\sum_i a_i D_i$ and $\sum_i b_i D_i$ are in the same class in $A_{n-1}(X)$ if and only if there exists $m \in M$ such that $a_i = \langle n_i, m \rangle + b_i$ for $i = 1, \ldots, r$.
The details of the theory of toric varieties can be found in [5,6,7,8].
In what follows we shall make extensive use of the homogeneous coordinate ring of $X$. It is defined as $R = \mathbb{C}[x_1, \ldots, x_r]$, where the variable $x_i$ corresponds to $\rho_i$. Each monomial $\prod_i x_i^{a_i}$ determines a divisor $D = \sum_i a_i D_i$; we shall use the notation $x^D = \prod_i x_i^{a_i}$. We say that the monomial $x^D$ has degree $[D] \in A_{n-1}(X)$; it follows that two monomials $\prod_i x_i^{a_i}$ and $\prod_i x_i^{b_i}$ have the same degree if and only if there is some $m \in M$ such that $a_i = \langle n_i, m \rangle + b_i$ for all $i$. If we set $$R_{[D]} = \bigoplus_{\deg(x^E) = [D]} \mathbb{C} \cdot x^E,$$ then the ring $R$ is $A_{n-1}(X)$-graded: $$R = \bigoplus_{[D] \in A_{n-1}(X)} R_{[D]}.$$ There is an isomorphism $$R_{[D]} \simeq H^0(X, \mathcal{O}_X(D)),$$ where $\mathcal{O}_X(D)$ is the coherent sheaf on $X$ determined by the Weil divisor $D$.
In what follows we shall assume that $X$ is a compact and smooth toric variety. Each cone $\sigma \in \Sigma$ determines the monomial $$x^{\hat{\sigma}} = \prod_{\rho_i \not\subset \sigma} x_i,$$ and we set $Y = V\big(x^{\hat{\sigma}} \mid \sigma \in \Sigma\big) \subset \mathbb{C}^r$, together with the group $$G = \Big\{(g_1, \ldots, g_r) \in (\mathbb{C}^*)^r \ \Big|\ \prod_{i=1}^r g_i^{\langle m, n_i \rangle} = 1 \text{ for all } m \in M\Big\}.$$ The group $G$ acts on $\mathbb{C}^r - Y$ by coordinatewise multiplication. The quotient $$X = (\mathbb{C}^r - Y)/G$$ is the toric variety associated to the fan $\Sigma$. This description gives us the homogeneous coordinates on $X$. In these coordinates the morphisms $\mathbb{CP}^m \to X$ can be described explicitly as follows.
Let $P_i : \mathbb{C}^{m+1} \to \mathbb{C}$, $i = 1, \ldots, r$, be homogeneous polynomials such that $P_i$ has degree $d_i$ with $\sum_i d_i n_i = 0$, and $(P_1(x), \ldots, P_r(x)) \notin Y$ for all $x \in \mathbb{C}^{m+1} - \{0\}$. Then the r-tuple $(P_1, \ldots, P_r)$ induces a morphism $f : \mathbb{CP}^m \to X$; furthermore, every morphism $\mathbb{CP}^m \to X$ arises in this way. We say that a morphism $\mathbb{CP}^m \to X$ has degree $\bar{d} = (d_1, \ldots, d_r)$ if it is given by an r-tuple of homogeneous polynomials of degrees $d_1, \ldots, d_r$. A set of edge generators $\{n_{i_1}, \ldots, n_{i_j}\}$ for the fan $\Sigma$ is primitive if it does not lie in any cone of $\Sigma$ but every proper subset does. It can be proved that $$Y = \bigcup \{x \in \mathbb{C}^r \mid x_{i_1} = \cdots = x_{i_j} = 0\},$$ the union being taken over all primitive sets $\{n_{i_1}, \ldots, n_{i_j}\}$. We can now state the main result of the present paper.
Theorem 2. Let $X$ be a smooth compact toric variety associated to a fan $\Sigma$, with $k$ the cardinality of the smallest primitive set of edge generators for $\Sigma$. Then, for all $m < k$, the inclusion of $P_f(\bar{d})$ into $\Omega^{2m} X$ induces isomorphisms in homology groups in all dimensions smaller than $d(2k - 2m - 1) - 1$, where $\bar{d} = (d_1, \ldots, d_r)$ and $d = \min\{d_i\}$.
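For orientation, the classical case fits this statement as follows (a standard specialization, consistent with the constructions above). For $X = \mathbb{CP}^n$ the fan has $r = n + 1$ rays generated by $n_i = e_i$, $i = 1, \ldots, n$, and $n_{n+1} = -e_1 - \cdots - e_n$; the only primitive set of edge generators is the full set $\{n_1, \ldots, n_{n+1}\}$, so $k = n + 1$. Here
$$Y = \{0\} \subset \mathbb{C}^{n+1}, \qquad G = \{(g, \ldots, g) \mid g \in \mathbb{C}^*\} \cong \mathbb{C}^*, \qquad X = (\mathbb{C}^{n+1} - \{0\})/\mathbb{C}^*.$$
The condition $\sum_i d_i n_i = 0$ forces $d_1 = \cdots = d_{n+1} = d$, so a morphism of degree $\bar{d}$ is an $(n+1)$-tuple of degree-$d$ homogeneous polynomials with no common zero on $\mathbb{C}^{m+1} - \{0\}$ (the classical description of rational maps), and the bound of Theorem 2 becomes $d(2(n+1) - 2m - 1) - 1 = d(2n - 2m + 1) - 1$.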
In the case of curves, that is, m = 1, our estimate is, in general, much stronger than the theorem of Guest [11] and the result of Boyer, Hurtubise and Milgram [2] specialized to the case of toric varieties, both of which give the stabilization dimension d.
The strategy of the proof is exactly the same as in [18] and [19] and mirrors other applications of Vassiliev's method. While it contains no essential novelty as compared to the case of a projective space [19], we find it necessary to go through the whole proof in some detail, since the case of a general smooth toric variety involves a number of additional features.
First, we construct a sequence of finite-dimensional approximations to the space of all continuous maps from CP n to a toric variety. These finite-dimensional approximations are spaces of maps that are given by polynomials in both holomorphic and antiholomorphic variables, of fixed degrees, and the first in the sequence is the space of holomorphic maps. The Stone-Weierstrass Theorem implies that these approximations indeed have the space of all continuous maps as a topological limit.
The second part of the proof consists in comparing the topology of two successive finite-dimensional approximations. Here we make use of the fact that these spaces are complements of discriminants in affine spaces. In particular, their topology can be related to the topology of the corresponding discriminants by the Alexander duality. In order to calculate the cohomology of these discriminants one resolves their singularities with the help of the Vassiliev's simplicial resolution, and considers the natural filtrations by the "fibrewise skeleta".
The successive quotients of these filtrations are easy to describe explicitly for the first few terms. For higher terms, it suffices to give a dimension estimate which allows to discard them. One minor new observation in the present note is that in order to describe the relative effect of the stabilization maps on the topology of the discriminants, one does not really need to write down the corresponding spectral sequence; this, of course, is not a significant simplification of the argument.
In the next section we construct the spaces of finite-dimensional approximations and show that the limit space is, indeed, homotopy equivalent to the space of all continuous maps. Then, in Section 4 we describe the approximation spaces as complements of discriminants and study the topology of these discriminants.
The Stone-Weierstrass Theorem
A $(p, q)$-polynomial in the variables $z_i$ and $\bar{z}_i$ (where $0 \le i \le m$) is a complex-valued homogeneous polynomial of degree $p$ in the "holomorphic" variables $z_i$ and degree $q$ in the "antiholomorphic" variables $\bar{z}_i$. If $\bar{p} = (p_1, \ldots, p_r)$ and $\bar{q} = (q_1, \ldots, q_r) \in \mathbb{Z}^r$ with $\sum_i (p_i - q_i) n_i = 0$, we define a $(\bar{p}, \bar{q})$-map as a map $F : \mathbb{CP}^m \to X$ given by an r-tuple of $(p_i, q_i)$-polynomials such that $(F_1(x), \ldots, F_r(x)) \notin Y$ for all $x \in \mathbb{C}^{m+1} \setminus \{0\}$.
Note that $(\bar{d}, 0)$-maps are the morphisms of degree $\bar{d}$. Two collections $(P_i)$ and $(P'_i)$ of $(p_i, q_i)$-polynomials determine the same morphism $F : \mathbb{CP}^m \to X$ if and only if there are functions $g_1, \ldots, g_r : \mathbb{C}^{m+1} \setminus \{0\} \to \mathbb{C}$ such that $P_i = g_i P'_i$ and $(g_1(x), \ldots, g_r(x)) \in G$ for all $x \in \mathbb{C}^{m+1} \setminus \{0\}$; in particular, the $g_i$ do not vanish. Given $\bar{a} = (a_1, \ldots, a_r) \in \mathbb{Z}^r$ with $a_i \ge 0$ and $\sum_i a_i n_i = 0$, a $(\bar{p}, \bar{q})$-map can be written as a $(\bar{p} + \bar{a}, \bar{q} + \bar{a})$-map by multiplying the polynomials by $(|z|^{2a_1}, \ldots, |z|^{2a_r})$ coordinatewise.
Let $f : \mathbb{CP}^{m-1} \to X$ be a $(\bar{p}, \bar{q})$-map and assume that we have chosen the $(p_i, q_i)$-polynomials $f_i$ that define it. Denote by $W_{p_i, q_i}$ the complex affine space of all $(p_i, q_i)$-polynomials whose restriction to $\mathbb{CP}^{m-1}$ coincides with $f_i$, and by $W_{\bar{p}, \bar{q}}$ the Cartesian product of the $W_{p_i, q_i}$. Let $P_f(\bar{p}, \bar{q}) \subset W_{\bar{p}, \bar{q}}$ be the subspace of all the r-tuples in $W_{\bar{p}, \bar{q}}$ whose values lie outside $Y$ at every point of $\mathbb{C}^{m+1} \setminus \{0\}$. These are precisely the r-tuples that define $(\bar{p}, \bar{q})$-maps, though we stress that more than one r-tuple in $P_f(\bar{p}, \bar{q})$ may correspond to the same map.
Multiplication by $(|z|^{2a_1}, \ldots, |z|^{2a_r})$ defines stabilization inclusions $P_f(\bar{p}, \bar{q}) \hookrightarrow P_f(\bar{p} + \bar{a}, \bar{q} + \bar{a})$. The space $P_f(\bar{d} + \infty, \infty)$ is defined as the direct limit of these inclusions, where $\bar{d} = \bar{p} - \bar{q}$.
In the next section we shall see that these maps induce isomorphisms in homology in the dimensions given in Theorem 2. Here, we shall prove that the spaces $P_f(\bar{p}, \bar{q})$ approximate the space of all continuous maps topologically:
Proposition 3. The space $P_f(\bar{d} + \infty, \infty)$ is homotopy equivalent to $\Omega^{2m} X$.
The principal tool in the proof of this statement is the Stone-Weierstrass Theorem for vector bundles (see, for instance, [18]):
Theorem 4. Let $E$ be a locally trivial real vector bundle over a compact space $B$, $s_\alpha : B \to E$ a set of its sections, and let $A$ be a subalgebra of the $\mathbb{R}$-algebra $C(B)$ of continuous real-valued functions on $B$. Suppose that
• the subalgebra $A$ separates points of $B$, that is, for any pair $x, y \in B$ there exists $h \in A$ such that $h(x) \neq h(y)$,
• for any $y \in B$ there exists $h \in A$ such that $h(y) \neq 0$,
• for any $y \in B$ the fibre of $E$ over $y$ is spanned by the $s_\alpha(y)$.
Then the $A$-module generated by the $s_\alpha$ is dense in the space of all continuous sections of $E$.
The Stone-Weierstrass Theorem has the following consequence: Lemma 5. Let W be a finite CW -complex. Any continuous map F : CP m × W → X can be uniformly approximated by a (p, q)-map, for some p and q, whose coefficients are functions on W . Moreover, if the restriction of F to CP m−1 ×{x} is represented by a collection of polynomials f i,x , the approximating polynomials can be chosen so as to coincide with f i,x · |z| 2k , for some k, on CP m−1 × W .
The proof is entirely analogous to the proof in [18] for the case $X = \mathbb{CP}^n$. Indeed, a continuous map $F : \mathbb{CP}^m \times W \to X$ can be thought of as a family of maps $\mathbb{CP}^m \to X$ depending on a parameter $w \in W$. In particular, it can be given as a collection of sections $s_i$ of the line bundles $\mathcal{O}(d_i)$, with $\sum_i d_i n_i = 0$, each $s_i$ depending on $w \in W$. Exactly as in [18], by the Stone-Weierstrass Theorem these sections can be approximated by functions which, for any given $w$, are $(d_i + q, q)$-polynomials for some $q$, and these approximations can be chosen so as to give an arbitrarily close approximation of $F$.
Proof of Proposition 3. For any compact Riemannian manifold $M$ and any space $E$ there exists $\epsilon > 0$ such that any two maps $E \to M$ that are uniformly $\epsilon$-close are homotopic. Thus, Lemma 5 implies that the map of homotopy groups is surjective for all $k$: setting $W = S^k$ we get that any element of $\pi_k \Omega^{2m}(X)$ can be approximated by a class in $\pi_k P_f(\bar{d} + \infty, \infty)$. On the other hand, this map is also injective, as any homotopy through continuous maps can be approximated by a homotopy through $(p, q)$-maps (set $W = S^k \times [0, 1]$). Both spaces of maps have the homotopy types of CW-complexes, so by the Whitehead Theorem they are homotopy equivalent.
The space $P_f(\bar{p}, \bar{q})$ as a complement to a discriminant
In what follows the notation $\mathbb{C}^m$ will be used for the affine chart $z_m = 1$ in $\mathbb{CP}^m$. We shall use the notation $T^\bullet$ for the one-point compactification of a space $T$.
Recall that we consider the toric variety $X$ as the quotient $(\mathbb{C}^r - Y)/G$. The space $P_f(\bar{p}, \bar{q})$ is the complement of a discriminant $\Theta$ in the space $W_{\bar{p}, \bar{q}}$, which consists of collections of complex polynomials of multidegree $\bar{p}$ in the variables $z_i$ and $\bar{q}$ in the $\bar{z}_i$: $$\Theta = \Theta_{\bar{p}, \bar{q}} = \{(F_1, \ldots, F_r) \in W_{\bar{p}, \bar{q}} \mid (F_1(z), \ldots, F_r(z)) \in Y \text{ for some } z \in \mathbb{C}^m\}.$$
The cohomology groups of $P_f(\bar{p}, \bar{q})$ are related to the homology groups of the one-point compactification of $\Theta$ by the Alexander duality: $$\tilde{H}^i(P_f(\bar{p}, \bar{q})) \simeq \tilde{H}_{2N_{\bar{p},\bar{q}} - i - 1}(\Theta^\bullet),$$ where $N_{\bar{p},\bar{q}}$ is the complex dimension of $W_{\bar{p},\bar{q}}$.
In order to study the topology of Θ • , we shall use Vassiliev's method of simplicial resolutions [21]. We construct a simplicial resolution of Θ together with a natural filtration, and describe the first several terms of this filtration. Since not all the terms can be effectively described, we shall then use the truncation procedure as in [19].
Let $Z(\bar{p}, \bar{q}) = Z \subset W_{\bar{p},\bar{q}} \times \mathbb{C}^m$ be the set $$Z = \{(F, x) \mid (F_1(x), \ldots, F_r(x)) \in Y\}.$$ There is a projection map $Z \to \Theta$ which forgets the point $x \in \mathbb{C}^m$. We denote by $Z^\Delta(\bar{p},\bar{q})$, or simply by $Z^\Delta$, the space of the non-degenerate simplicial resolution associated to this map. It is defined as the union of the spaces $Z_l = Z_l(\bar{p},\bar{q})$, with positive $l$, which are constructed as follows. First, embed (non-linearly) the space $\mathbb{C}^m$ into some vector space $V$ in such a way that the images of any $2l$ distinct points of $\mathbb{C}^m$ are not contained in any $(2l - 2)$-dimensional subspace. This gives an embedding $Z \hookrightarrow W_{\bar{p},\bar{q}} \times \mathbb{C}^m \hookrightarrow W_{\bar{p},\bar{q}} \times V$. The subspace $Z_l \subset W_{\bar{p},\bar{q}} \times V$ consists of all the $(l-1)$-simplices which lie in the fibres of the projection map onto $W_{\bar{p},\bar{q}}$ and whose vertices are on $Z$. We have a natural projection map $Z_l \to \Theta$ whose fibres are $(l-1)$-skeleta of simplices of various, not necessarily finite, dimensions.
Consider the space of configurations of $l$ distinct points in $\mathbb{C}^m$ with labels in $Y$: $$R_l = \{((x_1, s_1), \ldots, (x_l, s_l)) \mid x_i \in \mathbb{C}^m \text{ pairwise distinct},\ s_i \in Y\}/S_l,$$ where the symmetric group $S_l$ acts by permuting the $l$ coordinates $(x_1, s_1), \ldots, (x_l, s_l)$. We write a point of $R_l$ as $\{(x_i, s_i)\}$. Recall that $Y$ can be seen as the union $$Y = \bigcup \{x \mid x_{i_1} = \cdots = x_{i_j} = 0\}$$ over the primitive sets $\{n_{i_1}, \ldots, n_{i_j}\}$, where $n_1, \ldots, n_r$ are generators for the 1-dimensional cones of the fan $\Sigma$ associated to $X$. Let $k$ be the cardinality of the smallest primitive set of edge generators of $\Sigma$. Then $Y$ is a union of linear subspaces of complex codimension at least $k$, so each point of a configuration contributes at most $2m + 2(r - k)$ real parameters, and $R_l$ is a cell complex of dimension $2l(m + r - k)$.
Write $p$ for $\min\{p_1, \ldots, p_r\}$. We have the following:
Proposition 6. For $l \le p$ the space $Z_l \setminus Z_{l-1}$ is an affine bundle over the space $R_l$, of real rank $2(N_{\bar{p},\bar{q}} - rl) + l - 1$.
Proof. A point in $Z_l \setminus Z_{l-1} \subset W_{\bar{p},\bar{q}} \times V$ can be written as $(F, t)$ where $F = (F_1, \ldots, F_r)$ and $t$ is in the convex hull in $V$ of $l$ distinct points $x_1, \ldots, x_l \in \mathbb{C}^m$ with $F(x_j) \in Y$. There is a projection $Z_l \setminus Z_{l-1} \to R_l$ sending $(F, t)$ to the configuration $\{(x_j, F(x_j))\}$. The inverse image of a fixed point $\{(x_j, s_j)\} \in R_l$ is the Cartesian product of the space of r-tuples of $(p_i, q_i)$-polynomials with $(F_1(x_j), \ldots, F_r(x_j)) = s_j$ for all $j$, and the convex hull of $x_1, \ldots, x_l$ in $V$. Each equation $F_i(x_j) = (s_j)_i$, with $x_j$ and $s_j$ fixed, is a linear equation on the space of coefficients $W_{p_i,q_i}$. If $l \le p$, all these equations are affinely independent and, hence, the space of r-tuples of $(p_i, q_i)$-polynomials satisfying $F(x_j) = s_j$ has real dimension $2(N_{\bar{p},\bar{q}} - rl)$. The convex hull of the $x_j$ is an $(l-1)$-simplex; adding the dimensions we get the proposition.
This description of the filtration $Z_l$ allows us to write a spectral sequence converging to the (co)homology of $P_f(\bar{p},\bar{q})$; the building blocks for this spectral sequence are the (co)homology groups of the one-point compactifications of the spaces $R_l$ with suitably twisted coefficients. However, we do not need to construct this spectral sequence explicitly in order to obtain the stabilization theorem:
Proposition 7. The stabilization map induces isomorphisms $\tilde{H}_i(\Theta^\bullet_{\bar{p},\bar{q}}, \mathbb{Z}) \simeq \tilde{H}_{i + 2(N_{\bar{p}+\bar{a},\bar{q}+\bar{a}} - N_{\bar{p},\bar{q}})}(\Theta^\bullet_{\bar{p}+\bar{a},\bar{q}+\bar{a}}, \mathbb{Z})$ for all $i > 2N_{\bar{p},\bar{q}} + p(2m - 2k + 1)$.
From the Alexander duality it follows that $H^i(P_f(\bar{p},\bar{q}), \mathbb{Z}) = H^i(P_f(\bar{p}+\bar{a},\bar{q}+\bar{a}), \mathbb{Z})$ for all $i < p(2k - 2m - 1) - 1$. The fact that this isomorphism is induced by the stabilization map follows from the definition of the Alexander duality pairing as the linking number. If a map induces isomorphisms in integral cohomology up to some dimension, it also induces isomorphisms in integral homology in the same dimensions. We get:
Proposition 8. The morphism $H_i(P_f(\bar{p},\bar{q}), \mathbb{Z}) \to H_i(P_f(\bar{p}+\bar{a},\bar{q}+\bar{a}), \mathbb{Z})$ induced by the stabilization map is an isomorphism for all $i < p(2k - 2m - 1) - 1$.
Since the direct limit of the stabilization maps is homotopy equivalent to the space of continuous maps by Proposition 3, Theorem 2 now follows. | 2012-10-10T03:21:11.000Z | 2012-10-10T00:00:00.000 | {
"year": 2012,
"sha1": "42d58fff445eece9e900d355ae1b6d53b478394a",
"oa_license": "CCBYNC",
"oa_url": "https://repositorio.unal.edu.co/bitstream/unal/49347/1/45194-216959-1-SM.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "42d58fff445eece9e900d355ae1b6d53b478394a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
17772817 | pes2o/s2orc | v3-fos-license | Automated region of interest retrieval and classification using spectral analysis
Efficient use of whole slide imaging in pathology needs automated region of interest (ROI) retrieval and classification, through the use of image analysis and data sorting tools. One possible method for data sorting uses Spectral Analysis for Dimensionality Reduction. We present some interesting results in the field of histopathology and cytohematology. In histopathology, we developed a Computer-Aided Diagnosis system applied to low-resolution images representing the totality of histological breast tumour sections. The images can be digitized directly at low resolution or be obtained from sub-sampled high-resolution virtual slides. Spectral Analysis is used (1) for image segmentation (stroma, tumour epithelium), by determining a «distance» between all the images of the database, (2) for choosing representative images and characteristic patterns of each histological type in order to index them, and (3) for visualizing images or features similar to a sample provided by the pathologist. In cytohematology, we studied a blood smear virtual slide acquired through high resolution oil scanning and Spectral Analysis is used to sort selected nucleated blood cell classes so that the pathologist may easily focus on specific classes whose morphology could then be studied more carefully or which can be analyzed through complementary instruments, like Multispectral Imaging or Raman MicroSpectroscopy.
Introduction
Efficient use of whole slide imaging (WSI) in pathology needs automated region of interest (ROI) retrieval and classification; this can be achieved through the use of image segmentation and data sorting tools. The present paper aims at illustrating, through two examples, the power of spectral analysis, which can be used alone or in addition to image segmentation for data reduction, feature classification, as well as image visualisation.
Material
The first application concerns 73 WSI of HES stained histological sections of breast tumours recorded at a resolution of 6.3 μm/pixel. The second one concerns a WSI of MG stained blood smear recorded at a resolution of 0.17 μm/pixel using an Aperio slide scanner.
A minimal segmentation was performed to isolate breast tumour tissue or to eliminate erythrocytes from blood smear.
Principle of spectral analysis
The main point of this technique is to introduce a useful metric on the data set based on the connectivity of points within the graph of the data, and also to provide coordinates on the data set that reorganize the points according to this metric [1,2]. Let $X = \{x_1, x_2, \ldots, x_N\}$ be $N$ data points (images), each data point $x_i \in \mathbb{R}^n$, where $n$ is the dimension of the data space (measures). The first step is to represent the dataset $X = \{x_1, x_2, \ldots, x_N\}$ by a weighted symmetric graph $G = (V, E)$ where each data point $x_i$ corresponds to a node. Two nodes $x_i$ and $x_j$ are connected by an edge with weight $w(x_i, x_j) = w(x_j, x_i)$, reflecting the degree of similarity (or affinity) between these two points. The weight $w(\cdot,\cdot)$ describes the first-order interaction between the data points and its choice is application-driven. For instance, in applications where a distance $d(\cdot,\cdot)$ already exists on the data, it is customary to weight the edge between $x_i$ and $x_j$ by: $$w(x_i, x_j) = \exp\left(-\frac{d(x_i, x_j)^2}{\varepsilon}\right),$$ where $\varepsilon > 0$ is a scale parameter, while other weighting functions can also be used.
Following a classical construction in spectral graph theory and manifold learning, we now create a random walk on the data set $X$ by forming the kernel: $$p(x_i, x_j) = \frac{w(x_i, x_j)}{d(x_i)},$$ where $$d(x_i) = \sum_{j=1}^{N} w(x_i, x_j)$$ is the degree of node $x_i$.
As $\sum_j p(x_i, x_j) = 1$ and $p(x_i, x_j) \geq 0$, the quantity $p(x_i, x_j)$ can be interpreted as the probability of a random walker jumping from $x_i$ to $x_j$ in a single time step.
From spectral theory and harmonic analysis we know that the eigenfunctions can be interpreted as a generalization of the Fourier harmonics on the manifold defined by the data points. In our problem, smaller eigenvalues correspond to higher-frequency eigenfunctions, and larger eigenvalues correspond to lower ones.
The eigenvalues and eigenvectors provide embedding coordinates for the set $X$. The data points can be mapped into Euclidean space via the embedding $$x_i \mapsto (\lambda_2 \psi_2(x_i), \lambda_3 \psi_3(x_i), \ldots),$$ where $1 = \lambda_1 \geq \lambda_2 \geq \ldots$ are the eigenvalues of the kernel $p$ and $\psi_j$ the corresponding eigenvectors. The second eigenvector $\psi_2$ is known as the Fiedler vector and can be used to order the underlying dataset $X$ (segmentation and data reduction). When it is associated with the third eigenvector $\psi_3$, it allows a visualization of the base.
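A minimal numerical sketch of the whole construction (Gaussian weights, random-walk kernel, and Fiedler-vector ordering) is given below; the data are random placeholders standing in for image feature vectors, not material from the study:

```python
# Sketch: spectral embedding via the random-walk kernel p = D^{-1} W.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 16))                       # N=200 points in R^16

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
eps = np.median(d2)                                  # scale parameter
W = np.exp(-d2 / eps)                                # Gaussian weights
P = W / W.sum(axis=1, keepdims=True)                 # random-walk kernel

# P is similar to a symmetric matrix, so its eigenvalues are real;
# sort eigenpairs by decreasing eigenvalue (lambda_1 = 1, psi_1 = const).
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
lam, psi = vals.real[order], vecs.real[:, order]

fiedler = psi[:, 1]                   # psi_2: orders / segments the data
embed2d = np.stack([lam[1] * psi[:, 1], lam[2] * psi[:, 2]], axis=1)
print(np.argsort(fiedler)[:10])       # first points in the spectral ordering
```

Sorting by the order (or sign) of the Fiedler vector is what yields the two-class partitions used in the applications below, while the two-dimensional embedding supports the visualizations.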
Breast cancer
In this case, the $\psi_2$ eigenvector was used to segment tumour tissue into two classes: stroma and epithelial zones (Figure 1). The method has been applied to all the images of the database. Then spectral analysis was used to select the most representative epithelial zone patches of each histological type (Figure 2). Finally, $\psi_2$ and $\psi_3$ allow data sorting and visualization of each patch and its neighbourhood in order to present the most similar patches (Figure 3).
Blood smears
For this application, spectral analysis was used to "segment", by data sorting, the image base of isolated blood cells into two classes: polymorphonuclear cells and lymphocytes (Figure 4).
Conclusion
Spectral analysis is a promising approach for computer-aided diagnosis of cancers (automated global analysis of histological tumour sections) as well as for automated sorting of isolated cells. The resulting concentration of objects of interest allows the pathologist to focus on specific regions whose morphology can then be studied more carefully or analyzed with complementary instruments, such as multispectral imaging or Raman spectroscopy.
Figure 3. Visualization of each patch and its neighbourhood in order to exhibit the most similar patches. Figure 4. Result of isolated white blood cell base "segmentation" by spectral analysis: (a) visualization of the data sorting (lymphocytes are shown in blue) allowing the partition of (b) the base of isolated cells, (c) view of isolated cells sorted by spectral analysis. | 2014-10-01T00:00:00.000Z | 2008-07-15T00:00:00.000 | {
"year": 2008,
"sha1": "b0ddec4faaab45844e50f3f1717ec865acbd2141",
"oa_license": "CCBY",
"oa_url": "https://diagnosticpathology.biomedcentral.com/track/pdf/10.1186/1746-1596-3-S1-S17",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0ddec4faaab45844e50f3f1717ec865acbd2141",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
252929980 | pes2o/s2orc | v3-fos-license | Medicare Opt-Out Trends Among Dermatologists May Reflect Systemic Health Policy: Cross-sectional Analysis
Background Provider opt-out of accepting Medicare insurance is a nationally tracked metric by the Centers for Medicare & Medicaid Services (CMS) for all physicians, including dermatologists. Although this usually only consists of a small number of providers, the magnitude of opting out has varied historically, often tracing changes in systemic health care policy. Objective In this paper, we explored dermatologist opt-out data since 2001, as reported by the CMS, to characterize trends and provide evidence that shifts in provider opt-out may represent a potential indicator of the state of health policy and possible needs for reform as it pertains to Medicare. Methods The publicly available Opt Out Affidavits data set, available from the CMS, was evaluated for providers in all dermatologic specialties from January 1, 2001, to May 27, 2022. Results There were a total of 196 dermatology opt-outs in the overall period, with the largest spike being 33 providers in 2016, followed by generally consistent decreases through 2021. In the most recent 12 months of data, the number of new monthly opt-outs from January 2022 to May 2022 was significantly higher than that of the trailing 7 months of 2021 (P=.03). Conclusions Despite decreasing numbers of dermatologist opt-outs in the late-2010s, 2022 was marked by a significant increase in opt-outs. The reduced acceptance of Medicare by dermatologists may present risks to care access, so it is important to frequently assess physician opt-out data and changes over time.
Introduction
Private contracting with Medicare patients is a practice associated with provider "opt-out" from the federal program, where billing and collecting from Medicare is precluded; although the impact of dermatologist opt-out likely varies based on factors such as practice type, provider density, and population composition, fewer physicians accepting Medicare inherently presents greater risks for care access, especially in remote, low-income, or population-sparse areas [1].
Due to the Medicare program's role in providing broad access to care, it is important to explore characteristics associated with provider Medicare opt-out and trends over time to assess potential impacts on aspects of care delivery. Although literature on opting out is limited and the practice is infrequent [2,3], trends among provider opt-out may be revelatory of systemic issues such as complex Medicare reimbursement [1], bureaucratic intricacies, and prolonged accounts receivable periods, which can strain practitioners [4]. Therefore, assessing national metrics such as Medicare opt-out may also provide insights into health policy and systemic changes that shape Medicare provider participation.
Methods
This cross-sectional analysis evaluates publicly available data from the Opt Out Affidavits data set available from the Centers for Medicare & Medicaid Services, comprehensive of all 50 states and the District of Columbia. We included all entries for physicians indicating dermatologic specialties over the total available period (from January 1, 2001, to May 27, 2022).
Results
There were 196 providers in the overall period who opted out of Medicare. From 2001 to 2011, annual opt-outs were ≤1. In 2012, 12 new providers opted out, followed by annual increases and a peak of 33 in 2016. After 2016, new opt-outs generally decreased by up to 12 providers annually, with a maximum decrease of 40% (8/20) from 2018 to 2019 (Figure 1). In 2021, there were 9 new opt-outs, and there were 10 in the first 5 months of 2022. Considering the most recent 12 months, the number of new monthly opt-outs for the first 5 months of 2022 (mean 2.0) was significantly higher than that of the trailing 7 months of 2021 (mean 0.57; P=.03). Over the entire period, 112 of 196 (57.1%) providers were located in New York, Texas, or California.
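As a rough sketch of how the monthly comparison could be reproduced from the public file, the snippet below uses pandas and SciPy; the column names are assumptions (the actual CMS Opt Out Affidavits schema may differ), and Welch's t-test is shown as one plausible choice since the paper does not name the exact test behind P=.03.

```python
import pandas as pd
from scipy import stats

# Column names below are placeholders for the CMS "Opt Out Affidavits" file.
df = pd.read_csv("opt_out_affidavits.csv", parse_dates=["optout_effective_date"])
derm = df[df["specialty"].str.contains("dermatolog", case=False, na=False)]

# New opt-outs per calendar month
monthly = derm.set_index("optout_effective_date").resample("MS").size()

first_2022 = monthly["2022-01":"2022-05"]     # mean ~2.0 in the paper
trailing_2021 = monthly["2021-06":"2021-12"]  # mean ~0.57 in the paper
t, p = stats.ttest_ind(first_2022, trailing_2021, equal_var=False)
print(first_2022.mean(), trailing_2021.mean(), p)
```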
Discussion
Overall, 196 (1.8%) of 11,003 total practicing dermatologists in the United States [3] opted out of Medicare. The majority of opt-outs were seen in New York, Texas, and California; although some of these opt-out providers are located in cities with populations below 10,000, all are in localities within metropolitan statistical areas, suggesting that there is likely still reasonable access to alternate avenues of care for Medicare beneficiaries in these areas. Opt-outs were uncommon until 2012, but the period from 2012 to 2016 represented the largest recorded spike.
Given that provider enrollment for participation in the Medicare program, or "opting in," is a relatively uncomplicated process consisting of a 1-time application, other persistent systemic issues may have relevance to the mid-2010s shift. Rising practice operational expenses [1], complex compliance or regulatory requirements, and uncertainties from delayed payments [3,4], along with resource-constraining policies such as prior authorizations, can make it challenging for providers to effectively deliver patient-centric care [5]. The mid-2010s surge may be explained by heightened consolidation, as 15% of clinic acquisitions among private equity groups from 2014 to 2016 were dermatology clinics [3]. Greater prevalence of large group practices can present difficulty for independent practitioners to negotiate with insurers [3] and remain economically viable if Medicare comprises a large portion of their payer mix given the associated administrative challenges [5]. Another possible contributor to the 2016 spike may be the Medicare Access and CHIP (Children's Health Insurance Program) Reauthorization Act of 2015; although beneficial in promoting patient-centric care, it may be accompanied by a higher risk exposure for providers and additional administrative strain [6]. Further investigation and provider surveying are needed to determine which specific issues are driving the described patterns in provider opt-out, since it is unclear whether the primary catalyst for provider opt-out is economic, logistic, or administrative factors. Although the reduction in dermatology opt-outs during the late-2010s likely represents a positive shift for patients and providers, the latest data show a significant monthly increase in opt-out providers, which should be monitored to ensure optimal care access for communities. Limitations of this analysis include the lack of commercial insurance opt-out data, absent information on nonphysician provider statuses, and unavailable information around reopting into Medicare or those who retired with opt-out status.
In an indirect manner, Medicare opt-out has been previously proposed as a figurative voice for providers to express sentiments about reimbursement policy [1] and may implicitly represent the impacts of other policy challenges on the state of practice. Additionally, the implications of physician opt-out can be broad, where individuals served by Medicare in certain localities may experience inadequate access to care and poorer health outcomes with increasing provider opt-out. As a result, trends in Medicare opt-out should be followed closely to evaluate possible needs to review or refine systemic dermatologic health policy in favor of both patients and providers. | 2022-10-18T15:08:15.196Z | 2022-08-31T00:00:00.000 | {
"year": 2022,
"sha1": "42feed879afeb7adac0e08ea89569982d7607e60",
"oa_license": "CCBY",
"oa_url": "https://derma.jmir.org/2022/4/e42345/PDF",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0e64aa1abbcb71a89f7be3f30efe42a3adca17b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
118856543 | pes2o/s2orc | v3-fos-license | Carrier-carrier inelastic scattering events for spatially separated electrons: magnetic asymmetry and turnstile electron transfer
We consider a single electron traveling along a strictly one-dimensional quantum wire interacting with another electron in a quantum ring capacitively coupled to the wire. We develop an exact numerical method for treating the scattering problem within the stationary two-electron wave function picture. The considered process conserves the total energy but the electron within the wire passes a part of its energy to the ring. We demonstrate that the inelastic scattering results in both magnetic asymmetry of the transfer probability and a turnstile action of the ring on the electrons traveling separately along the ring. We demonstrate that the inelastic backscattering and / or inelastic electron transfer can be selectively eliminated from the process by inclusion of an energy filter into the wire in form of a double barrier system with the resonant energy level tuned to the energy of the incident electron. We demonstrate that the magnetic symmetry is restored when the inelastic backscattering is switched off, and the turnstile character of the ring is removed when the energy transfer to the ring is excluded for both transferred and backscattered electron waves. We discuss the relation of the present results to the conductance systems based on the electron gas.
I. INTRODUCTION
During the past two decades significant progress has been made in the control and manipulation of separate electrons within solid state devices. A single electron was trapped within a quantum dot [1] in a localized state.
Monitoring the flow of current by resolving the passage of separate electrons has been achieved [2]. An ultrafast single-electron pumping in a system of quantum dots connected in series was realized [3]. Single-electron Aharonov-Bohm interference was demonstrated [4] using a Coulomb-blockaded quantum dot as a valve injecting separate carriers into the channel via cotunneling events. Recently, single-electron transfer in a channel placed above the Fermi energy of the reservoirs was reported [5], with surface acoustic waves used to trap the moving carrier.
A single electron moving within the channel can be scattered inelastically and pass its energy to the environment. On the other hand, in conventional experiments with the electron gas, inelastic scattering of the Fermi level electrons is forbidden by the Pauli exclusion principle. The electron transport is strictly a Fermi level property in the linear regime, where the current depends linearly on the applied bias $V$, i.e. $I(B, V) = G(B)V$, where $G$ is the linear conductance and $B$ the external magnetic field. The Landauer-Büttiker approach derives the linear conductance $G(B) = \frac{e^2}{h}T(B)$ from the electron transfer probability $T$, and the latter is an even function of the magnetic field, $T(B) = T(-B)$. The Onsager-Casimir [6] symmetry $G(B) = G(-B)$ [7] does not hold for non-linear transport [8], where a finite energy window participates in the current flow. The asymmetry of conductance for non-linear currents carried by the electron gas was studied both experimentally [9-15] and theoretically [8,16-24] in a number of papers.
Here we consider a single electron injected into a quantum wire and its probability to pass through the interaction range of another electron confined in a quantum ring placed in the neighborhood, close enough to allow capacitive coupling [2,4,5] between the carriers. We find that this probability is asymmetric in $B$. We investigate the relation of the magnetic asymmetry to the inelastic scattering effects. We indicate that the magnetic symmetry of the electron transfer is restored when the inelastic backscattering is excluded. The latter is achieved by inserting a narrow band-pass energy filter, in the form of a double barrier structure, into the channel, with the resonant energy fixed at the energy of the incoming electron. We show that the energy filter introduced into the channel restores the magnetic symmetry of the transfer probability only for electrons traveling in one direction and not the other; hence the turnstile character of the system is observed with or without the energy filter.
An appearance of the magnetic asymmetry of the single-electron transfer probability was previously discussed for a bent quantum wire [25] and for a cavity [26] asymmetrically connected to terminals. Both papers [25,26] used time-dependent wave-packet approaches and indicated that the asymmetry of the transfer probability arises when the channel electron interacts with the surrounding environment. The present study of the role of the inelastic scattering requires a discussion of an incoming electron of definite energy rather than of the wave-packet dynamics. We develop such an approach below and explain its relation to wave-packet scattering. The results of this paper are based on a solution of the two-electron Hamiltonian eigenequation with an exact account taken of the interaction and the electron-electron correlations. This paper is organized as follows. In the next section we first sketch the two-electron Hamiltonian used in this paper within strictly one-dimensional models of both the wire and the ring. Next, we present a time-dependent approach to the scattering problem and then the time-independent treatment. We demonstrate that the results of the latter can be understood as the limit of monoenergetic wave-packet scattering. Section III contains the results and Section IV the discussion. Summary and conclusions are given in Section V.
II. THEORY
The system considered in this paper is schematically depicted in Fig. 1. An electron is confined in a circular quantum ring of radius $R = 30$ nm. Initially, this electron is in its ground state, with a definite angular momentum and a circularly symmetric charge distribution. Another electron, injected from outside, travels along the straight channel, interacts with the ring-confined electron and is partially backscattered. The total energy of the two-electron system is a conserved quantity. The incoming electron is scattered inelastically when the ring absorbs a part of its energy.
The Hamiltonian of the electron in the circular ring with center at point $(x_c, y_c, 0)$ is given by $h_r = \frac{1}{2m^*}(\mathbf{p} + e\mathbf{A})^2 + V(r_c)$, with $r_c^2 = (x - x_c)^2 + (y - y_c)^2$. The magnetic field $(0, 0, B)$ is oriented perpendicular to the plane of electron confinement. For the symmetric gauge $\mathbf{A}_s = \frac{B}{2}(-(y - y_c), x - x_c, 0)$ the Hamiltonian of the ring electron takes the form $h_r = -\frac{\hbar^2}{2m^*}\nabla^2 + V(r_c) + \frac{e^2 B^2}{8m^*} r_c^2 + \frac{eB}{2m^*} l_c$, where $l_c$ is the operator of the angular momentum $z$-component with respect to the ring center. Operators $h_r$ and $l_c$ have common eigenstates $\phi_l^c = f_l(r_c)\exp(il\theta)$, with the angular momentum quantum number $l$. In the limit of a thin ring the radial wave function $f_l$ tends to the ground state of a particle confined in an infinite quantum well and loses its dependence on $l$. The energy spectrum is then given by $\varepsilon_l = E_r + \frac{\hbar^2}{2m^* R^2}\left(l + \frac{\Phi}{\Phi_0}\right)^2$ (see the inset to Fig. 1), where $\Phi_0 = \frac{h}{e}$ is the flux quantum, $\Phi = B\pi R^2$, and $E_r$ is the ground-state energy of the radial confinement. The latter is independent of $l$ and as such is irrelevant for the scattering process. We omit $E_r$ in the following formulae.
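The spectrum $\varepsilon_l$ and the positions of the ground-state angular momentum transitions follow directly from the formula above. The short sketch below evaluates it for the paper's ring radius, assuming the GaAs effective mass $m^* = 0.067\,m_e$ (an assumption of this sketch); the transition field $B_0 = \Phi_0/(2\pi R^2) \approx 0.73$ T it prints matches the value quoted later in the text.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
m_star, R = 0.067 * m_e, 30e-9          # GaAs mass (assumed), ring radius
Phi0 = 6.62607015e-34 / e               # flux quantum h/e

def eps_meV(l, B):
    """Ring level eps_l = hbar^2/(2 m* R^2) (l + Phi/Phi0)^2 in meV."""
    Phi = B * np.pi * R**2
    return hbar**2 / (2 * m_star * R**2) * (l + Phi / Phi0)**2 / e * 1e3

B = np.linspace(-1.5, 1.5, 601)
ls = np.array([-2, -1, 0, 1, 2])
levels = np.array([eps_meV(l, B) for l in ls])      # shape (5, 601)
ground_l = ls[levels.argmin(axis=0)]
print("ground-state l at B = -1, 0, 1 T:",
      ground_l[np.searchsorted(B, [-1.0, 0.0, 1.0])])   # -> [1, 0, -1]
print("transition field B0 =", Phi0 / (2 * np.pi * R**2), "T")  # ~0.73 T
```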
For the scattering problem it is most convenient to use another gauge, $\mathbf{A} = B(0, x, 0)$, since then the diamagnetic term produced by the kinetic energy operator, $\frac{e^2 B^2}{8m^*}x^2$, vanishes at the axis of the channel $x = 0$. In the following we assume that the channel is so thin that the electron in its motion along the channel is in its lowest state of lateral quantization. For the strictly 1D channel with the $x = 0$ axis, the kinetic momentum $\pi_y = p_y + eBx$ is independent of $B$, and thus the wave vector $q$ of the motion along the lead corresponds to the same energy and probability current flux for any $B$.
In order to replace $\mathbf{A}_s$ by $\mathbf{A}$ the gauge transformation (1) is applied. Upon the transformation the ring wave functions acquire an additional phase factor introduced by the gauge function $\chi$, which is independent of $l$. Although with $\mathbf{A}$ the angular momentum with respect to the ring center does not commute with the Hamiltonian, $l$ still remains a good quantum number for the description of the ring eigenstates.
With the assumptions explained above, the two-electron Hamiltonian used in this work reads $H = h_r(\mathbf{r}_1) + h_c(y_2) + W(\mathbf{r}_1, \mathbf{r}_2)$, where $h_c = -\frac{\hbar^2}{2m^*}\frac{\partial^2}{\partial y^2}$ is the channel electron Hamiltonian and $W$ is the interaction potential. The latter is taken in the screened Coulomb form, $W(\mathbf{r}_1, \mathbf{r}_2) = \frac{e^2}{4\pi\epsilon_0\epsilon |\mathbf{r}_1 - \mathbf{r}_2|}\exp(-|\mathbf{r}_1 - \mathbf{r}_2|/\lambda)$, with dielectric constant $\epsilon = 12.9$ and screening length $\lambda = 500$ nm. The general form of the two-electron wave function can, without loss of generality, be developed in the basis of products of single-particle eigenstates with definite angular momentum for the ring and wave vector $q$ within the channel, $\Psi(\mathbf{r}_1, y_2, t) = \sum_l \phi_l(\mathbf{r}_1)\Psi_l(y_2, t)$ (Eq. (5)), where the $\Psi_l$ are the partial wave packets of Eq. (6). The electrons occupying separate regions in space (the wire and the ring) are essentially distinguishable. Antisymmetrization of Eq. (6) does not affect any of the results presented below due to the complete separability of the electron wave functions [27]. For that reason we skip the antisymmetrization in the following. One puts the wave function (5) into the Schrödinger equation $i\hbar\frac{\partial\Psi}{\partial t} = H\Psi$ and projects the result on the ring eigenstates, which leads to a set of equations for the partial wave packets. Note that the phase factor due to the gauge transformation (1) cancels in the evaluation of the interaction matrix elements $W_{kl}$.
In the time-dependent calculation we take for the initial condition a Gaussian wave packet $\Psi_l(y, t=0) = \left(\frac{\Delta k^2}{2\pi}\right)^{1/4}\exp\left(-\frac{\Delta k^2}{4}(y - y_0)^2 + iqy\right)$, where $l$ corresponds to the ground-state angular momentum quantum number, the average momentum $q > 0$, and $y_0$ is far below the ring. For $k \neq l$, the initial condition $\Psi_k = 0$ is applied. Calculations are performed with a finite-difference scheme for a channel of length 16 μm with $\Delta y = 2$ nm. The results converge when $|l| \leq 3$ ring eigenstates are included in the basis.
B. Stationary description of the scattering
The time-independent approach described in this section is suitable for treating the scattering for an incident electron of definite energy. The stationary approach is also more computationally effective and does not require a very large computational box, since transparent boundary conditions can readily be applied. For $\Delta k = 0$ the incoming electron has a definite momentum $\hbar q$ and a definite energy $E_i = \frac{\hbar^2 q^2}{2m^*}$; hence the total energy $E_{tot}$ of the system is also a well-defined quantity, $E_{tot} = E_i + \varepsilon_l$, where $\varepsilon_l$ is the ring ground-state energy. Therefore, the two-electron wave function for the scattering satisfies the time-independent Schrödinger equation $H\Psi = E_{tot}\Psi$. We use the form of the wave function $\Psi(\mathbf{r}_1, y_2) = \sum_l \phi_l(\mathbf{r}_1)\psi_l(y_2)$, which is a time-independent counterpart of Eq. (5). Insertion of Eq. (9) into Eq. (8) followed by projection on a ring eigenstate gives a system of eigenequations for $\psi_l$ (Eqs. (10)). The electron in the ring is initially in its ground state with angular momentum $l$, as in the time-dependent picture. Therefore, the partial wave $\psi_l$ at the input side is a superposition of the incoming and backscattered waves, $a\exp(iq_l y) + b\exp(-iq_l y)$. Since $\Psi$ is defined up to a normalization constant, at the bottom of the computational box (3 μm long) we simply set $\psi_l(y = 0) = a + b = 1$ as the boundary condition. After the solution of Eqs. (10) the values of the incoming amplitude $a$ and the backscattered amplitude $b$ are extracted from the form of $\psi_l$ along the lead.
The partial waves for $k \neq l$ appear only due to the interaction of the incoming electron with the ring, and they all correspond to electron flow from the ring to the ends of the channel. Thus, far above [below] the ring the partial wave functions corresponding to the $k$-th angular momentum quantum number describe the transferred [backscattered] electron and have the form $c_k\exp(iq_k y)$ [$d_k\exp(-iq_k y)$]. For $E_{tot} > \varepsilon_k$ the wave vector $q_k$ is real and the boundary condition $\psi_k(y + \Delta y) = \psi_k(y)\exp(iq_k\Delta y)$ [$\psi_k(y + \Delta y) = \psi_k(y)\exp(-iq_k\Delta y)$] is applied at the top [bottom] end of the computational channel. For $E_{tot} < \varepsilon_k$ the wave vector $q_k$ is imaginary and the wave function vanishes exponentially along the lead. The partial waves with imaginary wave vectors are counterparts of the evanescent modes [28] for scattering in two-dimensional channels. For imaginary wave vectors we put $\psi_k = 0$ at the ends of the computational box. Upon solution of Eqs. (10), the amplitudes $a$, $b$, $c_l$, $d_l$ are calculated. The total transfer probability is given by the flux-weighted sum over the open channels, $T = \sum_k \frac{q_k}{q_l}|c_k|^2$.
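To make the stationary scheme concrete, the sketch below solves a deliberately simplified two-channel analogue on a finite-difference grid: the thresholds, the Gaussian coupling profile and the coupling matrix are toy assumptions, not the ring-wire matrix elements $W_{kl}$ of the paper, but the boundary conditions (plane-wave injection in the incident channel, outgoing or decaying waves elsewhere) and the flux-weighted transfer probability follow the prescription given above.

```python
import numpy as np

N, h = 800, 0.075                    # grid points and spacing (units hbar^2/2m* = 1)
y = (np.arange(N) - N // 2) * h
eps = np.array([0.0, 1.2])           # channel thresholds (analogue of eps_l)
E, inc = 1.6, 0                      # total energy, incident channel
q = np.sqrt(E - eps + 0j)            # channel wave vectors (imaginary if closed)

g = 0.8 * np.exp(-y**2 / 2.0)        # interaction profile (toy assumption)
M = np.array([[0.2, 0.5],            # inter-channel coupling matrix (toy)
              [0.5, 0.4]])

nch, dim = len(eps), N * len(eps)
A = np.zeros((dim, dim), complex)
s = np.zeros(dim, complex)
idx = lambda j, k: j * nch + k       # flatten (grid point, channel) index

for j in range(N):
    for k in range(nch):
        i = idx(j, k)
        A[i, i] = 2.0 / h**2 + eps[k] - E
        for l in range(nch):         # local inter-channel coupling
            A[i, idx(j, l)] += g[j] * M[k, l]
        if j > 0:
            A[i, idx(j - 1, k)] -= 1.0 / h**2
        else:                        # left edge: fold psi_{-1} into the BC
            A[i, i] -= np.exp(1j * q[k] * h) / h**2
            if k == inc:             # source injecting the incident plane wave
                s[i] = (np.exp(-1j * q[k] * h) - np.exp(1j * q[k] * h)) \
                       * np.exp(1j * q[k] * y[0]) / h**2
        if j < N - 1:
            A[i, idx(j + 1, k)] -= 1.0 / h**2
        else:                        # right edge: outgoing / decaying wave
            A[i, i] -= np.exp(1j * q[k] * h) / h**2

psi = np.linalg.solve(A, s).reshape(N, nch)

c = psi[-1] * np.exp(-1j * q * y[-1])          # transmission amplitudes c_k
open_ch = q.imag == 0
T = np.sum(q[open_ch].real / q[inc].real * np.abs(c[open_ch])**2)
print(f"total transfer probability T = {T:.4f}")
```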
III. RESULTS
In Fig. 2 we plotted the electron transfer probability obtained by the time-independent method as a function of the incident electron energy, for three values of the magnetic field. For $E_i < 1$ meV the transfer probability vanishes, and for $E_i > 3$ meV the value of $T$ becomes close to 1, independent of $B$. Around $E_i = 1.6$ meV a distinct asymmetry of $T$ as a function of $B$ is found. The inset displays the charge density within the ring calculated as $\rho(\mathbf{r}_2) = \int d\mathbf{r}_1\,|\Psi(\mathbf{r}_1, \mathbf{r}_2)|^2$. For $B = 0.4$ T the density is shifted away from the channel (to the right of the ring), and consistently $T$ is larger.
The results of the time-dependent simulation for the packet average energy $E_i = \frac{\hbar^2 q^2}{2m^*} = 1.6$ meV are plotted in Fig. 3 as a function of $B$ for a number of initial dispersions of the wave vector $\Delta k$. The horizontal line shows the result obtained for a rigid ring charge, which is independent of $B$. All the $B$ dependence of the transfer probabilities given in Fig. 3 is due to the properties of the ring as an inelastic scatterer, which change with the magnetic field. The discontinuities present in the transfer probabilities at $B = \pm B_0 = \pm 0.73$ T result from ground-state angular momentum transitions within the ring [see the top inset to Fig. 1]. With $\Delta k$ decreasing to 0 the results converge to the result of the stationary description of the scattering for an incoming electron of definite energy $E_i = 1.6$ meV, which is plotted with the red line in Fig. 2. The rest of the results presented in this work were obtained with the stationary description of the transport.
The electron transfer probability as depicted in Fig. 3 is a distinctly asymmetric function of $B$. The asymmetry, along with the character of the discontinuities at the ring ground-state transformations, can be understood in terms of the relation of the backscattering to the angular momentum absorption by the ring. The absorption of angular momentum by the ring is associated with the transition from the $l = 0$ to the $l = 1$ energy level. This is less energetically expensive when $B$ becomes negative, due to the decreasing energy spacing between the ground-state energy and the $l = 1$ energy level (see the inset to Fig. 1). Consistently, the contribution of the $l = 1$ energy level to the total backscattering probability grows as $B$ decreases below 0, see Fig. 4(b). Fig. 4(a) shows that for $B$ just above the ring-state transition the $l = 1$ ring state dominates also in the transfer probability. Below the ground-state angular momentum transition, which occurs at $B = -0.73$ T, the ring ground state becomes $l = 1$. Our results for the single-electron scattering indicate that the energy absorption is associated with both the electron transfer [Fig. 4(a)] and the backscattering [Fig. 4(b)], which is accompanied by a violation of the magnetic symmetry of the electron transfer probability. We found that one can selectively eliminate the effects of inelastic scattering in the transferred or backscattered waves by a proper tailoring of the potential profile along the channel. For that purpose we used a double barrier structure (DBS) with its center placed on the channel far (1200 nm) below the ring. Figure 5 shows the applied potential profile, and the inset to the figure shows the electron transfer probability through the DBS. We can see the resonant peak at the electron energy of 1.6 meV. The resonant energy was set equal to the energy of the incoming electron, so that the DBS acts like an energy filter: it is opaque for an electron that has lost a part of its energy, i.e. for the partial waves with $k \neq l$.
In Fig. 6 the red line shows the transfer probability for the DBS energy filter placed above the ring. Fig. 7 shows the plot of the partial waves along the channel. Above the DBS one finds only the partial wave associated with $l = 0$, i.e. with the ground state of the ring. The electron can transfer across the structure only provided that it preserves its initial energy. Therefore, no excitation of the ring electron is possible when the channel electron transfers across the structure. In Fig. 7 we can see that far below the ring we have an interference of the $l = 0$ incoming and backscattered waves. No interference is observed in the partial wave with $l = 1$ near $x = 0$ ($|\psi_1|$ is constant), since there is no incoming wave with $l = 1$. Nevertheless, an oscillation of the $l = 1$ wave is observed between the DBS and the ring. The potential of the ring and the DBS form a wide quantum well in which the partial waves [for instance $l = 1$ in Fig. 7] oscillate back and forth. The presence of the wide well is also responsible for the resonances appearing in the $T(B)$ dependence in Fig. 6. $T(B)$ for the DBS placed above the ring remains an asymmetric function of $B$.
The transfer probability $T$ becomes an even function of $B$ (blue curve in Fig. 6) when the DBS energy filter is placed below the ring, which removes the inelastically scattered partial waves from the total backscattered wave function. The partial wave function plots given in Fig. 8(a) and Fig. 8(b) show that below the DBS only the partial wave with $l = 0$ is found, but above the structure we see the appearance of partial waves with $l \neq 0$.
For $B > 0$ just below $B_0$ we found that $T(B)$ is nearly the same for the double barrier structure placed below and above the ring [see the blue and red curves, which nearly coincide in Fig. 6 just below $B_0$]. Note that for the DBS below the ring at $B = 0.6$ T we find that the contribution of the $l \neq 0$ partial waves to the transferred wave function is negligible [Fig. 8(b)]. The absorption of angular momentum by the ring is weak for $B \to B_0$ due to the large energy cost of this ring excitation [see the discussion of Fig. 2], hence the similar results found for both locations of the DBS.
In Fig. 6 the dashed curve shows the electron transfer probability for the DBS below the ring and the electron incident from the upper end of the wire. In this case the electron is first scattered by the ring and then by the DBS. We can see that for a single DBS present within the wire the transfer probability from one end of the wire to the other is different than in the opposite direction (the dashed curve in Fig. 6 can be obtained from the red one by the inversion $B \to -B$), i.e. the system acts like a turnstile. Figure 9 gives the electron transfer probability for two DBS, placed both below and above the ring. The inelastic scattering is then switched off for both the transferred and the backscattered trajectories. The partial waves given in Fig. 10 show that the ring does get excited, but only for the channel electron staying between the two DBS. We find that the transfer probability is symmetric with respect to both the magnetic field and the direction from which the electron approaches the ring. The small deviations from these symmetries, visible at a closer inspection of Fig. 9, are due to the small but finite width of the resonance peak (see the inset to Fig. 5). Inelastic scattering is still allowed for energy losses smaller than the width of the peak.
IV. DISCUSSION
The results of Figs. 3 and 4 indicate that the asymmetry of the transfer probability as a function of the magnetic field is a result of 1) the geometrical asymmetry of the system, 2) the inelastic electron scattering, i.e. the absorption of the angular momentum by the ring, which is necessarily accompanied by energy absorption, and 3) the fact that the energy transfer occurs through the electron-electron interaction.
For systems with the two-dimensional electron gas it was pointed out [8,17] that the magnetic asymmetry of the conductance may result from the potential landscape within the device not being an even function of $B$, with the potential produced by charges at the edges of the channel in the Hall effect [8] as the most basic example. In this case the asymmetry of the charge distribution is translated into the asymmetry of the transport by the electron-electron interaction. The role of the electron-electron interaction in the magnetic asymmetry of transport in the electron gas was also indicated in Refs. [17,18,22]. In the present study of the single-electron transport the asymmetry is due to the properties of the ring (the enhancement of the backscattering accompanied by absorption of the angular momentum of the channel electron), which are not an even function of $B$ due to the form of the ring energy spectrum. Here, the backscattering is only due to the electron-electron interaction. Although in the linear transport regime the inelastic scattering of the electrons at the Fermi level is blocked by the fact that the states of lower energies are occupied, in non-linear transport the inelastic scattering is not only allowed but necessary for the thermalization of the carriers passing between electron reservoirs of unequal electrochemical potentials. The asymmetry that we find in this work results from the energy transferred by the channel electron to the ring, i.e. it occurs due to the inelastic scattering. The magnetic symmetry is restored when the inelastic backscattering is excluded. The invariance of the backscattering is invoked in explaining the $T(B) = T(-B)$ symmetry when the transfer kinetics is very different for the two magnetic field orientations; see the deflection of the electron trajectories by the Lorentz force in Ref. [25]. In the present work the Lorentz force is excluded by the strictly 1D approximation for the channel width. Nevertheless, different kinetics resulting in the same transfer probability was also found in Figs. 8(a) and 8(b).
A single DBS placed below the ring restores the magnetic symmetry of the transfer, but only for the electron injected from one side of the channel and not the other (the microreversibility is not restored; see Fig. 6). Thus, for a single DBS present within the wire, the electron transfer probabilities from one end of the terminal to the other are unequal. The turnstile character of the system is also a result of the inelastic scattering. The conditions present in the linear transport regime, with the inelastic scattering excluded for both the transfer and the backscattering, were simulated with two DBS placed at both sides of the ring. This configuration of energy filters restores the microreversibility of the system. The transfer probability becomes an even function of $B$, although the kinetics of the electron transfer is not identical for $\pm B$ [Fig. 10]. Moreover, the microreversibility is also restored [Fig. 9], although the system with two DBS is still not spatially symmetric under point inversion.
V. SUMMARY AND CONCLUSIONS
We have studied the single-electron scattering process on an electron localized in a quantum ring placed off the electron transport channel. We developed for that purpose a time-independent approach based on an expansion of the two-electron wave function in a basis of ring eigenstates and explained its relation to the numerically exact time-dependent scattering picture. We have found that the electron transfer probability is an asymmetric function of $B$ and that the asymmetry results from the energy cost of the angular momentum absorption by the ring, which is not an even function of $B$. We have demonstrated that the symmetry is restored when the electron backscattering with energy loss is excluded. The exclusion was performed by a double barrier structure with the resonant state set at the energy of the incoming electron. In order to remove the turnstile character of the ring as a scatterer one needs to employ a pair of double barrier structures at both the entrance and the exit of the ring interaction range.
"year": 2011,
"sha1": "a03f0e0609f40115974dea5106ecc000c81d4cb5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1112.1515",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a03f0e0609f40115974dea5106ecc000c81d4cb5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
85514528 | pes2o/s2orc | v3-fos-license | FAM83D is associated with gender, AJCC stage, overall survival and disease-free survival in hepatocellular carcinoma
The prognostic significance of family with sequence similarity 83, member D (FAM83D) in hepatocellular carcinoma (HCC) patients has not been well investigated. Using Gene Expression Omnibus (GEO) series and the TCGA database, we compared FAM83D expression levels between tumor and adjacent tissues, and correlated FAM83D in tumors with outcomes and clinico-pathological features in HCC patients. Validated in GSE33006, GSE45436, GSE84402 and TCGA, FAM83D was significantly overexpressed in tumor tissues compared with adjacent tissues (all P<0.01). FAM83D up-regulation was significantly associated with worse overall survival (OS) and disease-free survival (DFS) in HCC patients (log-rank P=0.00583 and P=4.178E-04, respectively). Cox analysis revealed that FAM83D high expression was significantly associated with OS in HCC patients [hazard ratio (HR) = 1.44, 95% confidence interval (CI) = 1.005–2.063, P=0.047]. Additionally, patients who were deceased or had recurred/progressed had significantly higher FAM83D mRNA levels than those living or disease-free (P=0.0011 and P=0.0238, respectively). The FAM83D high expression group had significantly more male patients and advanced American Joint Committee on Cancer (AJCC) stage cases (P=0.048 and P=0.047, respectively). FAM83D mRNA was significantly overexpressed in males (P=0.0193). Compared with patients with AJCC stage I, those with AJCC stage II and stage III–IV had significantly higher FAM83D mRNA levels (P=0.0346 and P=0.0045, respectively). In conclusion, overexpressed in tumors, FAM83D is associated with gender, AJCC stage, tumor recurrence and survival in HCC.
Introduction
Primary liver cancer, with hepatocellular carcinoma (HCC) comprising 75-85% of cases, was predicted to be the sixth most commonly diagnosed cancer and the fourth leading cause of cancer death worldwide in 2018 [1-3]. Although surgical and nonsurgical therapeutics for the disease have improved over the past decades, the clinical outcome of HCC remains poor [4], and more than 70% of cases develop tumor recurrence within 5 years [5,6]. Hence, the development of novel targeted therapies for HCC treatment requires the identification of reliable targets [7,8].
Recently, the family with sequence similarity 83 (FAM83) was shown to have oncogenic potential [9]. Higher expression of a signature of FAM83 family members was associated with poor prognosis in a number of human cancers [10,11]. In breast cancer, alterations in FAM83 family genes correlated significantly with TP53 mutation and were inversely associated with PIK3CA and E-cadherin mutations [9]. As a member of the FAM83 family, FAM83D is involved in mitotic processes that regulate cell division [12]. Emerging evidence indicates that FAM83D expression is elevated in a wide variety of tumor types, including ovarian cancer [13], metastatic lung adenocarcinomas [14] and HCC [15,16], suggesting the possibility that FAM83D is an oncogene for many human malignancies. However, data concerning the expression profiles and clinical impact of FAM83D in HCC patients have not been elucidated.
Our study compared FAM83D expression levels between tumor and adjacent tissues, and then correlated FAM83D in tumors with outcomes and clinico-pathological characteristics in HCC patients, in the hope that the data may provide potential biomarker candidates and useful insights into the pathogenesis and progression of HCC.
Source of data
The gene expression data were processed using the RMA algorithm. Gene expression profiles for HCC, including GSE33006, GSE45436 and GSE84402, were obtained from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/). Tumor and adjacent samples in GSE33006 [17], GSE45436 and GSE84402 [18] were processed on the Affymetrix Human Genome U133 Plus 2.0 Array. The affy, affyPLM and limma packages in R were used for quality assessment and for identifying FAM83D mRNA expression levels of tumor and adjacent normal samples in each GEO profile. The edgeR package was used for identifying FAM83D expression levels in tumor and adjacent tissues of HCC patients.
Survival analysis
To investigate the prognostic significance of FAM83D for predicting the overall survival (OS) and disease-free survival (DFS) of HCC patients, the Liver Hepatocellular Carcinoma (TCGA, Provisional) database in the cBioPortal for Cancer Genomics online service was used [19,20]. A z-score threshold of ±2.0 of mRNA expression was selected in genomic profiles, and 373 cases with sequenced tumors were used for survival analysis.
Additionally, gene data with z-scores and clinical data of HCC patients in the Liver Hepatocellular Carcinoma (TCGA, Provisional) database were downloaded from cBioPortal and matched using the VLOOKUP function in Excel; seven hepatocholangiocarcinoma and three fibrolamellar carcinoma cases were excluded, and 367 HCC patients were included in further analyses investigating associations between FAM83D and survival and clinico-pathological features in HCC, with the FAM83D median as the cutoff.
Statistical analysis
The data are presented as mean ± standard deviation (S.D.) or constituent ratio. Differences between individual groups were analyzed using Student's t-test, the χ² test or Ridit analysis. The Kaplan-Meier method was used to compare OS and DFS between different groups, and the log-rank test was used to estimate the difference in survival. Factors associated with OS in HCC patients were assessed by both univariate and multivariate Cox analyses. Only covariates significantly associated with outcomes in univariate analysis (two-sided P<0.10) were included in the multivariate model. Results are reported as hazard ratios (HR) or odds ratios (OR) with 95% confidence intervals (CI). PASW Statistics software version 23.0 from SPSS Inc. (Chicago, IL, U.S.A.) was used. A two-tailed P<0.05 was considered significant for all tests.
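As an illustrative companion to the workflow described above, the snippet below sketches the median-cutoff grouping, log-rank comparison, and multivariate Cox model using the lifelines package; all column names are placeholders for the merged TCGA expression/clinical table, and the covariate encoding (e.g., AJCC stage as an ordinal variable) is an assumption of the sketch.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Placeholder schema for the merged TCGA z-score + clinical table
df = pd.read_csv("tcga_lihc_fam83d.csv")
df["FAM83D_high"] = (df["FAM83D_zscore"] > df["FAM83D_zscore"].median()).astype(int)

hi, lo = df[df["FAM83D_high"] == 1], df[df["FAM83D_high"] == 0]
res = logrank_test(hi["os_months"], lo["os_months"],
                   event_observed_A=hi["os_event"],
                   event_observed_B=lo["os_event"])
print("log-rank P =", res.p_value)

# Multivariate Cox model over the covariates retained at P < 0.10
cph = CoxPHFitter()
cph.fit(df[["os_months", "os_event", "FAM83D_high",
            "ajcc_stage", "vascular_invasion"]],
        duration_col="os_months", event_col="os_event")
cph.print_summary()   # HRs with 95% CIs
```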
FAM83D expression in HCC patients
FAM83D mRNA expression levels were calculated in GSE33006, GSE45436 and GSE84402. As shown in Figure 1, FAM83D mRNA was significantly overexpressed in tumor tissues compared with adjacent tissues in the three GEO series (all P<0.001, Figure 1A-C). For validation, FAM83D mRNA was also significantly up-regulated in tumors compared with nontumor tissues of HCC patients in the TCGA profile (P<0.0001, Figure 1D). In addition, we investigated the distribution of FAM83D alterations in liver cancer. As shown in Figure 2, FAM83D mRNA was up-regulated in approximately 8% of HCC patients; no up-regulation of FAM83D mRNA was found in other histological malignancies, including HCC plus intrahepatic cholangiocarcinoma, fibrolamellar carcinoma and hepatobiliary cancer (Figure 2).
Associations between FAM83D and outcomes in HCC patients
Using the Liver Hepatocellular Carcinoma (TCGA, Provisional) database in the cBioPortal for Cancer Genomics online service, we assessed associations between FAM83D and HCC survival. As shown in Figure 3, FAM83D up-regulation was significantly associated with worse OS (log-rank P=0.00583, Figure 3A) and DFS (log-rank P=4.178E-04, Figure 3B) in HCC patients. Similarly, mortality was significantly higher in HCC patients with FAM83D up-regulation than in cases without the alteration (60.7 vs 33.0%, P=0.003, Figure 3C), and HCC patients with FAM83D up-regulation had a significantly higher recurrence rate than those without FAM83D alteration (78.3 vs 52.7%, P=0.018, Figure 3C).
Moreover, we matched gene data with z-scores and clinical data of HCC patients in the Liver Hepatocellular Carcinoma (TCGA, Provisional) database using the VLOOKUP function, and grouped HCC patients by the FAM83D median cutoff. As shown in Figure 4, HCC patients in the FAM83D high expression group suffered significantly poorer OS (log-rank P=0.006, Figure 4A) and DFS (log-rank P=0.042, Figure 4B).
In addition, we performed Cox regression analyses to investigate the associations between clinico-pathological factors and OS in HCC patients. As shown in Table 2, univariate Cox analysis revealed that FAM83D high expression, advanced American Joint Committee on Cancer (AJCC) stage and vascular invasion were potential risk factors for OS and DFS in HCC patients (all P<0.10, Tables 2 and 3). When these factors were included in multivariate Cox regression, FAM83D overexpression and advanced AJCC stage were identified as risk factors for OS in HCC patients (both HR > 1.0 and P<0.05, Table 2), and advanced AJCC stage and macrovascular invasion were significantly associated with DFS in HCC patients (both HR > 1.0 and P<0.05, Table 3). Table 1 summarizes the clinico-pathological features of the FAM83D high and low expression groups in HCC patients. The FAM83D high expression group had significantly more male cases (P=0.048, Table 1), and HCC patients in the FAM83D high expression group presented with significantly more advanced AJCC stage (P=0.047, Table 1). Additionally, we compared FAM83D mRNA expression levels grouped by gender, AJCC stage and survival status. We found that FAM83D mRNA was significantly overexpressed in males (P=0.0193, Figure 5A). Compared with patients with AJCC stage I, those with AJCC stage II and stage III-IV had significantly higher FAM83D mRNA levels (P=0.0346 and P=0.0045, respectively, Figure 5B). Consistent with the above, patients who were deceased or had recurred/progressed had significantly higher FAM83D mRNA levels than those living or disease-free (P=0.0011 and P=0.0238, respectively, Figure 5C,D).
Discussion
As key intermediates in oncogenic EGFR, MAPK, RAS/RAF/MEK/ERK and PI3K/AKT/mTOR signaling, FAM83 family members are involved in a variety of important cancer cell signaling functions and are overexpressed in many human cancers [9,10,21-24]. Across 17 distinct tumor types, FAM83A, FAM83B and FAM83D are the family members most frequently overexpressed in several diverse tissue types [10]. Evidence suggests that elevated expression of FAM83 members is associated with elevated tumor grade and decreased OS [10,21]. Therefore, the FAM83 members are emerging as intriguing oncogenes worthy of additional study. FAM83D, also known as CHICA, binds to the chromokinesin KID and localizes to the spindle during mitosis to regulate spindle maintenance, mitotic progression and cytokinesis [12,25-27]. Forced expression of FAM83D in nonmalignant cells in culture promoted proliferation and invasion of breast cancer cells and down-regulated the expression of F-box and WD repeat domain-containing 7 (FBXW7), a suppressor of c-Myc, mTOR and c-Jun expression [28]. In colorectal cancer, FAM83D knockdown up-regulated the protein expression level of FBXW7 but diminished the Notch1 protein expression level [29]. As FAM83D regulates tumorigenesis by hyperactivating mTOR, the level of FAM83D may also predict patient response to rapamycin [28]. Gene amplification and elevated protein expression of FAM83D increased the migration and invasion of breast epithelial cells and were associated with poor prognosis [28,30]. In addition, FAM83D expression was elevated in gastric tumors, and its expression strongly correlated with lymph node metastasis and TNM stage [31]. Exerting its oncogenic activity by regulating the cell cycle, FAM83D overexpression is associated with tumor size, lymph node metastases, advanced TNM stage and worse OS in lung adenocarcinoma [32].
In our study, we found that FAM83D was overexpressed in HCC tumors. Patients with advanced AJCC stage had significantly higher FAM83D levels. Interestingly, male patients might be more prone to FAM83D elevation than female cases. Furthermore, FAM83D elevation in tumors was associated with worse OS and DFS in HCC patients. The elevation of FAM83D in HCC tumors has been demonstrated previously [15,33]. In our analysis based on the TCGA profile, FAM83D mRNA was up-regulated in approximately 8% of HCC patients. However, a study by Liao et al. [15] demonstrated that FAM83D was significantly up-regulated in 76.6% of HCC specimens at the mRNA level and in 69.44% of HCC specimens at the protein level compared with adjacent noncancerous liver specimens. They also found that the FAM83D mRNA expression level was positively correlated with the level of alpha-fetoprotein (AFP), TNM stage, the presence of a portal vein tumor thrombus, and the OS and DFS times of HCC patients [15,34]. Another report, by Lin et al., indicated that FAM83D overexpression significantly correlated with a high HCC recurrence rate after liver transplantation and with poor HCC characteristics including high AFP and poor differentiation [33]. In hepatocellular carcinoma cell lines, FAM83D activates the MEK/ERK signaling pathway and promotes entry into the S phase of cell cycle progression [16]. In a xenograft tumorigenesis model, FAM83D knockdown apparently inhibited tumor growth and metastasis [33]. FAM83D promotes HCC recurrence by promoting CD44 expression and the malignancy of CD44+ cancer stem cells via activation of the MAPK, TGF-β and Hippo signaling pathways [33]. Consistent with previous publications, we assume that FAM83D may contribute to hepatocarcinogenesis and constitute a potential therapeutic target in HCC. In summary, FAM83D may serve as a promising prognostic predictor and therapeutic target for HCC. Up-regulated in tumors, FAM83D is associated with gender, AJCC stage, tumor recurrence and survival in HCC patients. Future research on the mechanisms by which FAM83D exerts its oncogenic effects, especially in the male population with advanced AJCC stage, is required.
"year": 2019,
"sha1": "1a3c42b85780b8637866a890d54f1fa7c931a3ca",
"oa_license": "CCBY",
"oa_url": "https://portlandpress.com/bioscirep/article-pdf/39/5/BSR20181640/848180/bsr-2018-1640.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "34b134b0706e0eb6a5e1c09b76c8fbcf5cd9916e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257728274 | pes2o/s2orc | v3-fos-license | Energy security in Poland – transformation and role of nuclear energy
Currently, Poland is facing the challenge of an energy transformation towards a zero-emission energy system. In this article the author presents the basic assumptions of Poland's energy security system. Moreover, a particular focus is set on the replacement of coal power plants, which are currently the foundation of the Polish electricity production system. Therefore, this article examines several aspects of the transformation of the national energy mix, with an analysis of nuclear energy as a potentially significant part of the future energy security foundation. This article also presents the possibilities of implementing various types of nuclear power facilities, as well as de lege lata and de lege ferenda postulates in Polish nuclear law.
INTRODUCTION
Poland completed its state transformation in the 1990s with an energy security system that was de facto a coal monoculture. Power plants fuelled primarily by hard coal and brown coal, mined mostly on Polish territory, comfortably covered the national demand for electrical energy. However, over the last 30 years, economic development has created an urgent need for a higher supply of energy, combined with the gradual phase-out of coal-fired boilers reaching the end of their projected lifetime. Moreover, as an effect of Poland's accession to the European Union and the development of multilateral worldwide cooperation, the state has become a subject of and a partner in common climate policy. In recent years the European Commission has set new priorities in the political and legal sphere, aiming to make Europe the first climate-neutral continent. These actions resulted in the necessity of adjusting Member States' energy policies to new directives. Along with these indicators, the Polish government prepared an energy transformation strategy directed towards a zero-emission energy system.
The fundamental core of this strategy is a comprehensive revision of existing solutions, and in several aspects the transformation will require the creation of completely new economic areas and branches. One of them is the nuclear energy sector, which until now has remained outside the political energy debate in Poland. However, this sector will be crucial in future energy security policy. Nevertheless, both currently and in the future, Poland's energy security will be determined by the opportunities and challenges established as a result of achieving the next steps and milestones in the country's energy transformation process. A successfully executed energy transformation process is comprehensive and complicated, therefore it is important to introduce its most significant aspects.
In accordance with the above, it is not possible to analyse all relevant issues in depth, due to the complexity and interpenetration of legal, political, economic and other aspects. Therefore, this paper poses one significant question about the role of nuclear energy in ensuring energy security in Poland. To answer it, it is essential to consider current Polish energy mix challenges and governmental strategy papers, alongside relevant changes in European Union policy.
The main objective of this paper is to examine the Polish energy system from the perspective of ensuring energy security and the role of nuclear energy therein. In accordance with the above-mentioned objective, achieving this aim essentially requires a wide, interdisciplinary approach. This is inevitable, primarily due to the number and variety of different factors that directly or indirectly affect Polish energy security.
Moreover, as a result of the adopted methodology and analytical approach, the author considers the role of nuclear energy as a key factor in ensuring future energy security in Poland. Given the internationalisation and harmonisation of nuclear law, it is necessary to acknowledge that the above-mentioned analysis should be accompanied by a comparative legal methodology in order to examine the subject matter.
ENERGY SECURITY -DEFINITION AND MAIN INDICATORS
The Polish Energy Law Act defines energy security as a state of the economy enabling current and prospective coverage of recipients' demand for fuel and energy in a technically and economically justified way, while meeting environmental protection requirements (Prawo energetyczne). The Polish Energy Policy until 2040 extends the above definition with the phrase "[...], with assurance of the competitiveness of an energy-efficient economy and a decreased level of the energy sector's impact on the environment" (PEP2040). The Polish legislator, by creating definitions in legal acts and subsequently widening their scope in strategic documents, aimed to obtain a de facto legislative consensus, connecting the most comprehensive definitional scope with the greatest possible accuracy and precision of the implemented legal terms. Nevertheless, energy security is a term related to almost all areas of state activity and beyond, so it is fundamentally important to describe several of its characteristic aspects to enable a deeper view of certain issues. The literature and doctrine establish such aspects of energy security as the level of diversification and exploitation of domestic and foreign sources of supply of energy resources (Braun, 2018, p. 27), the quality of state supervision and the implementation of development and investment decisions (Czwołek et al, 2018), the level of reliance on imports (Chrzan, 2015), and the ability to create international strategic alliances (Ruszel & Podmiotko, 2019).
The characteristic indicators described above, beyond showing the scale and necessity of ensuring energy security, present the main challenges that states have to address. Maintaining the stability of electrical energy supply in the era of climate change, while reducing emissions in the worldwide economy, may raise significant concerns in areas such as environmental protection, the sustainable economy or the energy transition, in connection with support from the communities personally involved in these processes.
POLISH ENERGY POLICY TO 2040
In accordance with the previous considerations concerning the issues included in the Polish Energy Policy until 2040 [further: PEP2040], the author concludes that this strategic document sets the fundamental and basic directions of state activity, primarily in the area of energy and energy transition, with points of reference based on the directions and
directives of the European Union's energy and climate policy, in particular the European Green Deal (Komunikat, 2019). PEP2040 is based on three pillars, so-called main areas, in which this document is influential. The first pillar is a just transition, by which the legislator intends that new job opportunities will be created in the economic sectors exposed to transformation processes, through the establishment of new areas of industry. In particular, the basic outline of this proposition is the counteraction of structural unemployment, especially in places strongly tied to coal mines. The second pillar described in PEP2040 is a zero-emission energy system, presented as a gradual replacement of highly emissive power plants, such as coal-powered boilers, with low- or zero-emission sources, with natural gas used in the transition period. The third pillar is good air quality, which is de facto an effect of the energy transformation path towards the zero-emission objective, resulting in the long term in a better state of public health. PEP2040 also sets eight particular objectives, with strategic projects attached to them, of fundamental significance for establishing energy security in Poland. However, the most significant aspect is to highlight those activities which are especially part of the national energy security dimension: the capacity to fully cover domestic power supply demand from internal resources. This objective is fundamental as the starting point of the energy transformation (Herold et al., 2017), alongside the creation of a completely new energy system from this point (PEP2040). These strategic projects introduce offshore wind farms and the Polish Nuclear Power Programme. The first programme is currently in the development phase of establishing its legal framework. This follows from the characteristics of offshore wind, which relies on a large number of individualized projects realised in a joint venture formula by both public and private entities in cooperation with foreign partners. In the offshore wind sector, the state's role is to provide the regulatory and licensing framework, legal norms and institutional support for the subjects involved in the investment process. According to PEP2040, offshore wind plants should reach 11 GWe of installed power capacity by 2040 (Polish Press Agency, 2021). Nevertheless, this type of energy is fluctuating and unpredictable in character, which creates the necessity of introducing a stable energy source operating in the system baseload. In the transitional period this function will be covered by gas power plants; however, as a final source it will be necessary to develop new energy-producing units which have the ability to produce electricity in constant operation, with stability of the operational system, resilience against external factors, the lowest possible emissions and an extended life expectancy. Currently only one power plant type can meet these requirements: the nuclear power plant.
POLISH NUCLEAR POWER PROGRAMME AND PEP 2040
PEP2040 states that the estimated installed capacity of nuclear energy will be around 6-9 GW, which is expected to cover 20-25% of electricity demand in 2045. However, achieving this objective involves several challenges, such as high capital costs, long construction times, compliance with higher and more rigorous standards than apply to other energy sources, and the obligatory involvement of the state in the investment process. To address these concerns, the Polish government adopted the updated Polish Nuclear Power Programme (PNPP) as a strategic document complementary to PEP2040 (PNPP, 2021). The introduction of nuclear energy was established in PEP2040 and implemented within both the just transition and the zero-emission energy system pillars. These areas, as intended, will be realised by enhancing, and in several respects rebuilding, economic sectors oriented towards cooperation with nuclear energy. The PNPP is set to be one of the most spectacular infrastructure projects in Polish history. Another aspect concerns the just transition objectives and includes the indicated preferred locations for siting nuclear power plants (PNPP, 2021). Besides locations in the vicinity of the Baltic Sea, this document also indicates areas where large system power plants currently exist. Constructing nuclear power plants as replacements for overexploited coal-fired plants will have a positive effect on the remaining workforce and thereby counteract structural unemployment. The PNPP, as described above, is fundamental to the second PEP2040 pillar, the zero-emission energy system, owing to the unique characteristics of nuclear power plants as an energy source.
IMPORTANCE OF NUCLEAR ENERGY TO POLAND'S ENERGY SECURITY
The PNPP, whose main goal is to introduce nuclear energy in Poland, is one of the key elements in the process of ensuring energy security as understood in both the Energy Law Act and PEP2040. To achieve this objective, there are plans to construct 6 nuclear reactors with a total installed capacity of 6-9 GW, based on large, proven PWR units. To expand on this point, the PNPP is focused on large-scale nuclear power, meaning reactors with a power output above 1 GWe. This approach excludes small modular reactors, also called SMRs, because they are in general designed for power outputs of typically 50-300 MWe. The second fundamental criterion in the PNPP is that the nuclear reactor, besides having the indicated power, must also be built in PWR (pressurized water reactor) technology. This technology is currently the most widespread worldwide, and no nuclear accident with significant releases into the environment has occurred with this reactor type. Moreover, the low operating costs of this technology give it priority, and its maturity and the accumulated construction experience can help keep to the schedule set in the PNPP. The strategic document specifies that only one type of nuclear reactor will be chosen, to reduce financial expenditure and benefit from economies of scale.
Another important question is what the term energy mix means. PEP2040 does not define it; however, it follows explicitly from the phrasing used in the document that the energy mix is the comprehensive system of all generation sources. This system can be presented as a chart of all generation sources, or as the percentage share of a particular generation source in overall electricity production. At this point it is necessary to note that different energy sources use their installed capacity to different degrees, which is described by the capacity factor. In practice, this factor can produce the seemingly paradoxical situation in which high installed capacity is not matched by a significant amount of generated electricity. As a result of the analysis of doctrine and subject literature, among other factors, strategic documents use probability-based measures to describe the future energy mix. The PNPP time horizon is set to 2045, two years after the connection of the last, sixth nuclear reactor to the electrical grid.
As a result of the aspects presented above, the author concludes that a characteristic indicator of nuclear power is its high capacity factor. Therefore, in the future energy mix of the PNPP, nuclear reactors, despite their comparatively low installed capacity, should cover from 20 to as much as 27 per cent of Poland's energy mix in 2045 (PNPP 2021). In comparison, the offshore wind sector, with an estimated installed power capacity of 9.6 GWe, will have only an 18% share in the future energy mix. Moreover, energy production in nuclear power plants is constant and undisturbed by external conditions, including weather threats. As a result, nuclear power plants have the unique characteristic of operating in the system baseload. This creates the opportunity to increase the percentage share of other energy sources, especially unstable renewable sources such as solar panels or onshore and offshore wind farms. The cooperation and coexistence of renewable energy sources with the stabilising role of nuclear energy could potentially fulfil the objectives of the second pillar set in PEP2040, the zero-emission energy system.
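The capacity-factor argument can be illustrated with a back-of-the-envelope calculation. The short sketch below assumes typical literature values for the capacity factors (roughly 90% for nuclear and 45% for offshore wind); these factors, and the mid-range 7.5 GW nuclear capacity, are illustrative assumptions rather than figures taken from PEP2040 or the PNPP.

```python
# Back-of-the-envelope check of the capacity-factor argument.
# Capacity factors below are assumed typical values, not PNPP figures.
HOURS_PER_YEAR = 8760

def annual_energy_twh(installed_gw: float, capacity_factor: float) -> float:
    """Annual electricity production in TWh for a given installed capacity."""
    return installed_gw * capacity_factor * HOURS_PER_YEAR / 1000.0

nuclear = annual_energy_twh(installed_gw=7.5, capacity_factor=0.90)   # mid-range PNPP build-out
offshore = annual_energy_twh(installed_gw=9.6, capacity_factor=0.45)  # PEP2040 offshore target

print(f"Nuclear, 7.5 GW installed:  {nuclear:.0f} TWh/year")   # ~59 TWh
print(f"Offshore, 9.6 GW installed: {offshore:.0f} TWh/year")  # ~38 TWh
```

Despite the lower installed capacity, the nuclear fleet produces markedly more energy per year under these assumptions, which is consistent with the 20-27% versus 18% shares quoted above.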
Ensuring the stability of the electricity supply, alongside independence from hostile actions that could threaten the stability of the energy system, is described as a fundamental aspect of energy security. Nevertheless, the term also includes other factors. Nuclear power plants can also operate in cogeneration mode, in which heated water can be utilised to supply district heating systems in Polish cities. Cogeneration refers to using power plants for purposes other than electricity production, for example district heating or hydrogen production. This process can reduce the use of highly emissive coal-fired steam boilers in combined heat and power plants and thereby increase energy security. A second important aspect of nuclear energy is the possibility of using surplus energy to power electrolysers and obtain clean hydrogen, which would be a meaningful contribution to the Polish Hydrogen Strategy (Ministry of Climate and Environment, 2021). Moreover, one more factor influences energy security: the stability of fuel supply to power plants. In nuclear energy, compared with gas or coal, the cost of nuclear fuel is insignificant, and the overall quality and safety of its oversight is set by the International Atomic Energy Agency in cooperation with the European Atomic Energy Community (Euratom).
The PNPP also contains some basic arguments in favour of including nuclear energy in the national energy mix. These arguments relate to the broad agreement that zero-emission sources, including nuclear power, helped to avoid 1.84 million premature deaths between 1970 and 2009 (Kharecha & Hansen, 2013). Experts also indicate that nuclear power has low demands for concrete and steel per unit of electricity produced (Peterson et al., 2005) and that its construction does not create a significant demand for rare earth materials, unlike renewable energy technologies (IAEA, 2017). Nuclear power plants also need the least space to generate the same amount of energy as the other sources in the energy mix (Fritsche et al., 2017), and, more importantly, the life cycle of an NPP can be extended to 80-100 years, e.g., the Turkey Point and Peach Bottom NPPs in the USA.
NUCLEAR ENERGY DEVELOPMENT TO 2040 -CHALLENGES AND THREATS
The above account of the position of nuclear energy in the energy security system focused on the narrow, domestic scope of this definition. However, Poland, as a country located in the geopolitical centre of Europe, cannot define energy security through internal factors alone. In today's globalised world, the success of long-term, financially demanding infrastructure projects such as the PNPP depends on many variables. On the one hand, such projects are part of state foreign policy and strengthen the potential of the national economy. On the other hand, establishing a promising nuclear energy sector in Poland should be described as a state obligation, and even a state duty, despite the investment costs. This approach rests on a supranational consensus that, in the era of climate change, nuclear energy is the investment that provides the clean and reliable energy source that Poland needs.
In its external meaning, energy security is described in connection with regional and international cooperation, politics, and influence. Poland, as a European Union member state since 2004, derives multiple benefits from this status but is also obliged to take certain actions. One of them is to reduce CO2 emissions from the national energy system, given its starting point, namely the high share of coal in the Polish energy mix. Poland was the only country of the Soviet bloc that did not introduce nuclear energy into its national energy mix, for a variety of reasons (Nowacki, 2014). As a result, construction of the nuclear power plants at Żarnowiec and EJ Warta near Poznań was abandoned, and energy production remained based on coal. To change this direction, nuclear energy in Poland is a necessity in the near future; if this project is to have a chance of success, cooperation with the European Union is essential.
To fully understand the current situation in the European Union concerning nuclear energy, it is important to describe two events. The first was the oil crisis of 1973, which exposed the dependence on gas and oil and thereby created a strong need to diversify supply across multiple directions. In Europe, the effect was a strong pro-nuclear movement, with France's Messmer plan at the forefront (Morrison, 2015). However, the 1986 Chernobyl accident in Ukraine led to the rise of anti-nuclear movements; several countries stopped the construction of nuclear plants and closed operational reactors, replacing them with renewable energy sources and natural gas. This reversal was observed especially in Germany, where, as part of the Energiewende policy, nuclear power was presented as a source dangerous to society. Only a few European countries still recognised nuclear energy as an important part of the national energy mix, or even as a matter of raison d'état (Nowak, 2019).
The European Union has set out the European Green Deal, the strategy aimed at conducting an energy transition towards a zero-emission continent and climate neutrality by 2050. The basis of this project is massive support for renewable energy sources, which are intended to decarbonise the Union's economy while ensuring energy security. One of the financial mechanisms to secure this green turn is the so-called taxonomy, a system under which projects that are sustainable and do not cause significant harm to the environment receive financial support from the European Commission. For a long time this framework excluded nuclear power, owing to political tendencies in the EU.
Nevertheless, in recent years a significant change has occurred. Countries where nuclear power is an important part of the national energy mix, together with those planning to develop this type of energy source, have taken actions aimed at reintroducing nuclear energy to the Old Continent. These actions have de facto created a new movement, called the European nuclear renaissance, and it is fundamental to achieving the climate neutrality objectives for 2050 set in the European Union. In this spirit, France, Poland, the Czech Republic, Hungary, Romania, Slovenia, and Slovakia jointly sent a letter to the President of the European Commission postulating the inclusion of nuclear energy in the climate targets of the European Union (Polish Prime Minister, 2021).
The aspects presented above rest on the fact that Poland and the other EU Member States became part not only of the European Union but also of the European Atomic Energy Community, Euratom. This organization was created alongside the other Union structures, and the Euratom Treaty has the same legal standing as the EU Treaty (Merger Treaty, 1963). Moreover, under the Euratom Treaty, the European Union is obliged to promote and rapidly develop nuclear energy in cooperation with the Member States that plan to introduce nuclear power into their national energy mixes (Euratom Treaty, 2012).
Despite coexisting within the legal order of the European Union and Euratom, the latter organization long appeared to be held back from significant activity. The process of restoring Euratom's importance has begun, although several EU countries with anti-nuclear policies have tried to stop it. One of these states is Austria, which brought a case before the Court of Justice of the European Union against Great Britain, a former member of the European Union, concerning the newly built Hinkley Point C nuclear power plant; on 22 September 2020 the Court dismissed the Austrian action (CJEU, 2020). This ruling has significant importance in the Euratom nuclear law system, primarily because of its precedential value. In particular, the Court considered that state aid from the British government to facilitate new nuclear build in the UK could be approved by the European Commission and at the same time be compatible with the internal market. Moreover, in this judgment the CJEU addressed legal matters previously omitted in the case law and pointed to the dual legal regimes coexisting in the European Union, the TFEU and the Euratom Treaty, several provisions of which can potentially influence one another, e.g., state aid law. It is therefore necessary to indicate some of the key aspects of this CJEU judgment.
Primo, this judgment is de facto a reinstatement of the Euratom Treaty and a reminder of its position in the EU system of legal sources as one of the fundamental and most important legal acts of the European Union.
Secundo, the dismissal of the Austrian case, which was supported by Luxembourg, creates a legal precedent for subsequent cases in this area. Moreover, the Advocate General of the CJEU stated that this case could be seen as part of a legal dispute between countries supporting nuclear energy and those seeking to diminish it.
Tertio, Member States planning to introduce nuclear energy into their national energy mixes in order to provide energy security now have a legal basis for facilitating and financially supporting new nuclear build. Of recognised importance here is point 77 of the CJEU judgment, which stated "that those measures was a part of a set of energy policy measures taken by the United Kingdom in the context of the reform of the electricity market, designed to ensure security of supply, diversification of sources and decarbonisation [...] according to the Commission, it would not be to adress the future gap in energy generating capacity [...] by relying solely on renewable energy sources".
Hinkley Point C NPP will contain EPR (European Pressurized Reactor) units of 1,650 MWe, which should provide approximately 7% of total UK demand (HM, 2020).
The CJEU subsequently referred to the above case in a second judgment on state aid for a new nuclear power plant, in case T-101/18 (CJEU, 2022). In that judgment, the Court ruled consistently with its decision in the previous case. Moreover, a comprehensive consistency can be observed in the acknowledgement of the Euratom Treaty provisions concerning the promotion of nuclear power as a common interest of the Member States.
From the perspective of the nuclear industry and the pro-nuclear Member States, the first CJEU ruling was precedential: it created a legal basis for treating the Euratom Treaty provisions as lex specialis over the other Treaties and thereby removed one of the obstacles to promoting new nuclear build in the European Union. The second judgment confirms the importance of the Euratom Treaty and reflects the growing shift in nuclear law.
To conclude, a kind of nuclear renaissance can be observed in Europe, driven by the obligation to fit into the new climate neutrality and zero-emission legal framework. Alongside these tendencies, the legal and political criteria applying to countries planning to introduce nuclear energy into their national mixes are gradually becoming more accommodating to these intentions, which allows the safe prediction that nuclear energy will be more significant in the future energy mix. However, as the construction of, or plans for, new nuclear power plants develop, several countries will be more determined to pursue anti-nuclear rhetoric in their foreign policy, which could potentially endanger nuclear energy and therefore the provision of energy security in Poland as defined in the PNPP and PEP2040.
ENVIRONMENTAL ASPECTS OF NUCLEAR ENERGY IN POLISH ENERGY TRANSFORMATION
As mentioned before, in 2022 Poland remains dependent on highly emissive coal power plants, which have a significant impact not only on human health but, more importantly, on the environment. Owing to governmental plans to carry out an energy transformation towards a zero-emission system, these power plants will have to be replaced by other sources. Before the energy crisis in Europe, natural gas, emitting less than coal, was recognized as a transitional source to bridge the period until renewable energy sources become more common and widespread. However, the current situation on the gas market creates difficulties, but also new opportunities to accelerate an energy source long neglected in Europe. Nuclear energy is responsible for 25% of electricity generation in the European Union, and for 50% of its low-emission electricity (IEA, 2019). Moreover, nuclear energy has the lowest carbon footprint of all energy sources, which can be understood as a "green" incentive to build more nuclear reactors. Nuclear energy is also increasingly recognized as a key technology for mitigating climate change and addressing environmental concerns (Dabetic Ex Filipovic et al., 2017).
The development of nuclear energy and nuclear power plants in the European Union should be recognized as a key element not only in countering the energy crisis but, more importantly, as a fundamental aspect of mitigating climate change and providing environmental protection through the replacement of coal-fired plants. In the coming years, to achieve a zero-emission energy system and become the first climate-neutral continent, the European Union should create a new strategy, focusing in particular on nuclear energy, to establish a sustainable process of climate change mitigation.
The Dabetic Ex Filipovic et al. (2017) study shows that: A detailed insight into relevant scientific papers published in prestigious scientific journals produced a conclusion that nuclear power compared to fossil fuels has significant ecological advantages, especially when it comes to decarbonisation of the economy. The majority of papers state that nuclear energy, under normal conditions, almost does not produce harmful gases, and that small amounts of radioactive gases, which are regularly released under controlled conditions, can not cause effects such as acid rain, smog and ozone depletion. Thus, nuclear energy can be considered as a good support to global action to mitigate climate change.
CONCLUSION
Providing energy safety, as a fundamental aspect of Poland's energy security, is determined by several factors. In the strategic planning of the energy transition towards a zero-emission future, it is not enough to take into account only internal aspects, which depend on the domestic actions of the state. In the current, globalized world, the number of factors influencing state energy policy has created the necessity of coordinated international, multilateral cooperation over a 20- or 30-year time horizon. That is why nuclear energy can be the factor that guarantees the stability and zero-emission character of the national energy mix in the long term. This type of energy source is also part of state security interests, energy sovereignty, a diversification factor in the power system, and an element of national foreign policy, including its energy dimension.
Moreover, a gradual consolidation of a positive approach to nuclear energy development can currently be observed in European Union climate policy, together with the adjustment of legal requirements at the domestic, regional, and multinational levels to the target of an energy transition towards zero emissions. As an effect of the tendencies described above, nuclear energy has a chance to become one of the key elements in providing energy security in Poland. However, the time horizon set in the PNPP is distant, and the schedule contains milestones that will require maintaining this community consensus and political support for nuclear energy.
As a result of the adopted methodology and the considerations of the hypotheses, it should be indicated that nuclear new build is currently recognized as a foundation of future Polish and European energy security. This approach is reinforced by the CJEU rulings, which not only enhanced the role of the Euratom Treaty relative to the TFEU but also confirmed the importance of state aid and support for nuclear power plants in Europe. This trend at the regional level is a clear sign of the coming European nuclear renaissance. It also creates new opportunities for Poland, where there are plans to build not only Westinghouse AP-1000 PWR reactors in cooperation with the state-owned PEJ company, but also South Korean APR-1400 reactors in the public-private PGE-ZEPAK-KHNP partnership. Moreover, there are plans for public and private companies to construct an as yet undisclosed number of small modular reactors (SMRs), e.g., the BWRX-300, as well as a scientific project concerning the construction of a high-temperature gas-cooled (HTGR) research reactor. All of these new nuclear builds can potentially benefit from state aid, subject to compliance with European Commission requirements.
In the current economic and geopolitical situation in the European Union, marked by challenges caused by the natural gas market and by the armed conflict in Ukraine, there is now more than ever a fundamental need to change the approach to nuclear energy in the European Union. The extraordinary and unusual problems that the EU faced in the autumn and winter of 2022 can be recognized as an opportunity to rewrite EU political, economic, energy and, most importantly, environmental policies and strategies. Renewable energy sources will and should be promoted at the EU level, to construct as many offshore and onshore wind farms and PV panels as possible and to introduce energy efficiency and energy saving policies in the European Union. However, these actions should be backed by a stable, reliable, environmentally friendly source that can serve as the foundation for accelerated development: nuclear energy. | 2023-03-25T15:15:33.699Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "29f428dc823f399b1d975febce4b9fc062afee4f",
"oa_license": "CCBY",
"oa_url": "https://repozytorium.uwb.edu.pl/jspui/bitstream/11320/14797/1/EJTR_2022_Vol_6_No_2_P_Betkowski_Energy_security_in_Poland.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "14776d9daa1f891c1ee170a27d09d24ad6163252",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": []
} |
195352635 | pes2o/s2orc | v3-fos-license | Invoking Health and Human Rights in the United States: Museums, Classrooms, and Community-Based Participatory Research
The United States is rough terrain for those aiming to stake health-related human rights claims on domestic soil. Less than a decade ago, the passage of the 2010 Patient Protection and Affordable Care Act (ACA), which was designed as a massive expansion of insurance-based health coverage, led some health and human rights scholars to wax optimistic. The ACA—the Obama administration's signature piece of legislation—passed by a razor-thin margin in US Congress. For human rights optimists, this legislation deserved praise for adopting "significant national reforms consistent with human rights norms" in a manner "Corresponding with international law, [and] following both the spirit and substance of the UDHR [Universal Declaration of Human Rights] and ICESCR [International Covenant on Economic, Social, and Cultural Rights]."1 As pessimists were quick to point out, however, the ACA's protections have always been "inherently unstable."2 First, this market-based arrangement grounds access to health care in a statutory right—in other words, a right that can be modified or revoked. In addition, it sidesteps international norms and commitments precisely "by avoiding the specific language of rights and obligations of international law."3 Early predictions of the ACA's promise from a human rights standpoint are thus difficult to reconcile with current realities. Some aspects of the law have gained wide popularity, especially its requirement to ensure health coverage for people with "pre-existing conditions." During the first two years of the Trump administration, however, the Republican-led Congress sought repeatedly to undermine the ACA and erode its protections through court challenges, budgetary obstruction, and obfuscation about the nature and stipulations of the law itself. Numerous attempts to "repeal and replace" the ACA failed, and these efforts effectively stopped after the Democratic party took control of the US House of Representatives in the 2018 midterm elections. Meanwhile, arguments supporting a human right to health have gathered support from a small, politically liberal segment of the US electorate, especially since the presidential election of 2016.4 Although the country's overall legal and policy climate is no more hospitable to health-related human rights claims now than before the passage of the ACA, this special section shares evidence that human rights can "travel" and transform even in settings where they lack legal traction, including the United States.5
As these papers demonstrate, human rights can function beyond the spheres of law and policy as a powerful "idiom of social justice mobilization for health" by introducing new terms and concepts, deepening awareness of historical legacies, and proposing new narrative frames for interpreting current and past situations of disparity and injustice. 6 This special section looks beyond the juridical domain to explore three cases in which unconventional encounters with human rights spurred non-specialists-that is, members of the American public-to contemplate the relationship between health and human rights. In the first case, I write about an exhibition with a provocative title at a federal museum: "Health Is a Human Right: Race and Place in America." This exhibit was designed to commemorate the 25th anniversary of the Office of Minority Health and Health Equity at the US Centers for Disease Control and Prevention (CDC) in 2013. In the second paper, Bisan Salhi and Peter J. Brown analyze a pedagogical attempt to spark engagement with human rights concepts among US undergraduate students of global health. In the third paper, Nadia Gaber investigates two efforts to use community-based participatory research strategies to help protect and fulfill residents' right to water in the American cities of Flint and Detroit, Michigan. Authors of all three papers are medical anthropologists with cross-training in public health or clinical medicine, and all employ qualitative research methods, including audio-recorded interviews, open-ended surveys, and participant observation.
By exploring how human rights principles and logics can reverberate in extra-juridical spaces, papers in this section draw on critical human rights scholarship to train their gaze on what anthropologist Richard Wilson calls the "social life of rights." For Wilson, it is necessary to "look beyond the formal, legalistic, and normative dimensions of human rights, where they will always be a 'good thing,'" and consider "how rights are transformed, deformed, appropriated, and resisted by state and societal actors when inserted into a particular historical and political context." 7 In a similar vein, Peggy Levitt and Sally Engle Merry call attention to the "vernacularization" of human rights discourse by local actors, and Mark Goodale advocates for a "skeptical distance from the exalted claims of human rights" while analyzing the "different registers through which the idea of human rights is conceived." 8 By exploring the social life of rights in museum, classroom, and citizen-science contexts, this special section sheds light on the potential as well as the limits of human rights frames in confronting health inequities and injustices in the United States. Through their analyses, the authors engage several important questions: What's at stake in invoking the human right to health in conversations about health inequities in the United States? What obstacles do US researchers, public health professionals, and activists face in attempting to confront domestic health inequities and injustices using a human rights idiom? Finally, what new opportunities do these US engagements with human rights language reveal, and what lessons do they offer the health and human rights community more broadly?
Before summarizing the papers themselves, I provide a brief historical overview of American presidential administrations' resistance to confronting health issues in a human rights idiom.
Health and human rights in the United States: Legacies and missed opportunities
Under different circumstances, the vision of President Franklin Delano Roosevelt and human rights pioneer Eleanor Roosevelt might have propelled the United States to an enduring leadership role in refining and implementing international commitments to health as a human right. FDR's 1941 "Four Freedoms" speech, for instance, introduced the notion that states are obligated to provide for the health of their people. On the domestic front, his 1944 State of the Union address called for a "second Bill of Rights" promising every American citizen the "right to adequate medical care and the opportunity to achieve and enjoy good health." 9 Four years later, Eleanor Roosevelt represented the United States at the deliberations culminating in the 1948 Universal Declaration of Human Rights (UDHR), which affirmed that, "Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services." 10 Rather than carry this legacy forward, however, the United States retreated. During the Cold War, the world was divided, in effect, into countries rallying behind civil and political rights, led by the United States, and advocates of economic and social rights, led by the Soviet Union. This sharp distinction faded with decolonization and the end of the Cold War, and more than 150 countries have now ratified the International Covenant on Economic, Social and Cultural Rights (ICESCR), although the United States still has not. Neither has the United States ratified most other international treaties that include a right to health commitment. Instead, successive American presidential administrations have sought to avoid incurring obligations relating to the right to health or other economic, social, and cultural rights, and the country has promoted a raft of neoliberal strategies in its foreign policy that push in the opposite direction. 11 Given its general unwillingness to join "with other countries in advancing and adhering to the international framework of human rights laws," some human rights experts have characterized the United States as a "rogue state" in human rights terms. 12 Meanwhile, on the domestic front, a variety of obstacles have impeded efforts since the 1940s to enshrine right to health commitments in US law. 13 These factors range widely, from the individualist approach to rights within the Anglo-American tradition to the resistance of powerful stakeholders (including the American Medical Association and the private health insurance industry); the rise of neoliberal economic policy under the Reagan administration; and the willingness of left-leaning Democrats to entertain market-based solutions to universal health coverage rather than pushing harder for a "single-payer" solution or "public option" during the ACA debates. 14 Despite strong legacies of civil society struggle against the egregious health disparities that persist in the United States even post-ACA, human rights claims have been invoked only infrequently by those committed to combating the country's health inequities, and only with moderate, typically localized (such as state-level) success. 15 Although the notion that all Americans possess a basic human right to health may be gaining some popularity since the 2016 presidential election cycle, the impact of this shift on both national and local politics remains to be seen.
For the time being, most struggles against health inequities in the United States employ other "idioms of social justice mobilization." 16 Some of these idioms, like "health disparities" and the "social determinants of health," aim for descriptive neutrality or scientific objectivity. Others, such as "health inequities" and, increasingly "structural racism," involve built-in forms of political critique.
Overview of the papers
The original catalyst for this special section was the aforementioned museum exhibition "Health Is a Human Right: Race and Place in America," which was created to commemorate the 25th anniversary of the Office of Minority Health and Health Equity (OMHHE) at CDC. During the seven months it was on display at the Smithsonian-affiliated David J. Sencer Museum, located on CDC's main campus in Atlanta, Georgia, the exhibition attracted nearly 50,000 visitors. The special section itself began as an invited panel at the 2016 American Anthropological Association Annual (AAA) Meetings in Minneapolis, Minnesota. Although the original panel included companion perspectives on the exhibition from its originators at CDC, the shifting political landscapes limited their inclusion in this section.
In the first paper, I examine the origins, aims, and content of the CDC Museum's exhibition and the apparent contradiction it embodies. The paper asks three questions: First, how can this exhibition, in this particular locale, be reconciled-if at all-with the absence of any firm right to health commitment in the United States? Second, what does the exhibition reveal about the "social life" of health-related human rights claims? Finally, what might we learn from the exhibition about the potential role of museums and museology in sparking public engagement with health and human rights issues, especially in settings where human rights have some rhetorical power, but lack legal or political traction? The second paper, by Salhi and Brown, approaches the CDC museum exhibition from a different angle: exploring the reactions of university students who visited as part of a semester-long course on global health. Drawing on written student assessments and their own long-term teaching experience in American university settings, the authors describe the exhibition as a rude awakening for many students. In particular, many were surprised to discover a long history of health-related human rights violations within the United States, ranging from 20th century legacies of eugenics and forced sterilization, to systemic violations whose effects persist until today, including "redlining," the dumping of toxic waste near residential communities, and lack of access to safe water and/or basic sanitation, especially among impoverished communities of color. 17 Student-visitors to the exhibition, the authors write, "displayed an intuitive sense of-and support for-certain human rights" even as they lacked "the vocabulary or framework to anchor these sentiments" and arrived "unaware that human rights are dynamic legal tools and principles that apply in regional, national, and international spheres." The authors acknowledge the power of a well-curated exhibition to spark new thinking about health and human rights in two ways: by showing that health-related violations can, and do, happen on American soil, and by demonstrating the relevance of human rights laws and logic for domestic efforts to name injustices and mobilize for change.
Finally, Gaber's paper addresses one of the exhibition themes of greatest concern to Salhi and Brown's students: contemporary violations of the human right to water. Although 99% of US residents have safe access to drinking water and 89.5% have safe access to sanitation, water insecurity is increasingly a problem, not just for rural communities but also in urban settings. 18 Drawing on ethnographic fieldwork involving community-based participatory research (CBPR) projects in the cities of Flint and Detroit, Michigan, Gaber argues that human rights frameworks are growing more important as citizens mobilize for water justice despite the lack of a human right to water under US law. In their efforts to "generate data in the absence of credible, public information about the water crises," CBPR projects in Flint and Detroit show how health evidence can "play a unique role in protecting the human right to water … by supporting ethical demands, policy recommendations, and local organizing efforts with robust, reliable data." Moreover, Gaber shows how CBPR findings framed in a human rights idiom can influence how violations and questions of redress are debated in the court of public opinion. In all, her paper suggests an important role for CBPR in certain kinds of human rights claims-making in the United States, given its ability to bring community member voices, values, and demands into political and even legal conversations that presumed experts might otherwise dominate.
Conclusion
As the first United Nations Special Rapporteur on the right to health Paul Hunt and colleagues have observed, there are many ways to assess "how human rights are making a difference for health." 19 Certainly this assertion is true, and its meaning may be even broader than its authors originally intended. For those who fall on what Mark Goodale describes as the "establishment" side of the human rights enterprise, opportunities to help human rights make a difference are increasingly well-defined; these include strategies to improve the effectiveness of legal interventions; strengthen claims for institutional legitimacy; and develop clearer lines of accountability. 20 Goodale contrasts this "establishment" orientation with what he calls an "alternative" position espoused by those for whom "the status of human rights remains as 'unsettled' (Sarat & Kearns 2001) as ever." 21 Although he and others in this "alternative" camp might remain "agnostic about the underlying value claims and political aspirations that ground existing human rights activism," they are not inclined to abandon the human rights project altogether. Rather, they see the need for a "reconfigured theory and practice of human rights that is pluralist, decentralized, and perhaps even 'de-juridified.'" 22 Among other things, Goodale's proposal for radical reconfiguration clarifies the extent to which human rights can, and do, travel meaningfully beyond spaces of law. In addition, it invites reflection on other ways in which human rights can make a difference for health-even in places where the "non-practice" of human rights is more common than its practice. 23 In such places, non-specialist members of the public may have little or no understanding of what human rights entail, or how rights violations and health inequities are interconnected. This specialized language may someday catalyze new ways of thinking-but first, citizens and community members will need an introduction. Unconventional invocations of human rights like those explored in this special section-especially in museums and community-based participatory research settings-may effectively serve this role. By showing how human rights can be meaningful, timely, and relevant even in countries lacking formal human rights commitments, such informal encounters can spark creative thinking and help expand public imaginings of how human rights can make a difference for health. | 2019-06-28T20:14:21.476Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "319cea16ab9bd23704a209240475100aea9c9475",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3583ba0aee2718de9969dee8d19bc9eea7087375",
"s2fieldsofstudy": [
"Political Science",
"Medicine",
"Law"
],
"extfieldsofstudy": []
} |
127090993 | pes2o/s2orc | v3-fos-license | New Cubic B-spline Approximations for Solving Non-linear Third-order Korteweg-de Vries Equation
Objectives: In this work, the approximate solution of the non-linear third order Korteweg-de Vries equation has been studied. Methods: The proposed numerical technique engages a finite difference formulation for temporal discretization, whereas the discretization in the space direction is achieved by means of a new cubic B-spline approximation. Findings: In order to corroborate this effort, three test problems have been considered and the computational outcomes are compared with those of current methods. It is found that the proposed scheme involves straightforward computations and performs better than the existing methods. Novelty/Improvements: The proposed numerical scheme is novel for the Korteweg-de Vries equation and has never been employed for this purpose before. Indian Journal of Science and Technology, Vol 12(6), DOI: 10.17485/ijst/2019/v12i6/141953, February 2019
Introduction
The third order non-linear Korteweg-de Vries (KdV) equation occurs in many physical applications such as non-linear plasma waves which exhibit certain dissipative effects 1 , propagation of waves 2 and propagation of bores in shallow water waves 3 . The KdV equation is given in its standard form by

u_t + εuu_x + μu_xxx = 0, a ≤ x ≤ b, t > 0, (1)

where ε and μ are constant parameters. In recent years, the KdV equation has gained considerable research attention due to its numerous applications in real life phenomena. Especially, the traveling wave solution has been considered extensively. Kutluay et al. 4 employed integral methods with heat balance to study the small time solutions of the KdV equation. The numerical solution of the third order KdV equation was discussed by Bahadir 5 using an exponential finite difference scheme. Ozer and Kutluay 6 proposed a numerical technique for solving KdV type equations. The authors in 7 employed the method of lines for the small times solution of the KdV equation. Dehghan and Shokri 8 proposed a numerical method based on multi-quadratic radial basis functions for solving the KdV equation. Dag and Dereli 9 explored the numerical solution of the KdV equation by means of radial basis functions. A mesh free method based on radial basis functions was presented by Khattak and Tirmizi 10 for the approximate solution of the KdV equation. Xiao et al. 11 investigated the numerical solution of the KdV equation using a multi-quadric quasi-interpolation operator. Sarboland and Aminataei 12 proposed a numerical scheme based on integrated radial basis functions and the multi-quadric quasi-interpolation operator for solving the KdV equation. Rashid et al. 13 solved the Hirota-Satsuma coupled KdV equations by the Fourier pseudo-spectral method.
Spline functions are used extensively to solve initial and boundary value problems. These functions preserve smoothness at the nodes and have the ability to provide the numerical solution over the entire domain with great accuracy (see, e.g., Irk et al. 14). In this work, the numerical solution of the non-linear KdV equation has been considered. The usual finite difference scheme 24 and new Cubic B-Spline (CBS) approximations 25,26 have been used for temporal and spatial discretization, respectively.
The road map of this study is as follows: in section 2, we discuss some preliminaries of ordinary CBS interpolation. The numerical method is presented in section 3 and experimental outcomes are given in section 4.
Cubic B-spline Functions
Let a = x_0 < x_1 < ⋯ < x_n = b be a uniform partition of the space interval with step size h, as in (4). Using (4), the typical CBS basis functions B_p(x) are defined as in 28 , and the approximate solution is expressed as U(x, t) = Σ_p c_p(t) B_p(x), where the c_p(t)'s are time-dependent real constants yet to be calculated. For simplicity, we express the CBS approximations to the solution and its first derivative at the knot x_i as S_i and T_i, respectively. The third degree basis spline functions (5), together with (6), yield the standard relations for the values of U and its first derivative at the knots. Moreover, for the second and third order derivatives, we shall use the new CBS approximations 25,26 .
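Since the basis definitions in (4)-(6) are not reproduced legibly above, the following minimal sketch evaluates the standard uniform cubic B-spline basis under the common (1, 4, 1) knot normalization; it illustrates the kind of basis the method builds on, not necessarily the exact "new" approximations of 25,26.

```python
import numpy as np

def cbs_basis(s: float) -> float:
    """Standard uniform cubic B-spline in the local coordinate
    s = (x - x_i) / h, normalized so B_i(x_i) = 4 and B_i(x_{i+-1}) = 1;
    the support is the four sub-intervals (x_{i-2}, x_{i+2})."""
    t = abs(s)
    if t >= 2.0:
        return 0.0
    if t >= 1.0:
        return (2.0 - t) ** 3
    return (2.0 - t) ** 3 - 4.0 * (1.0 - t) ** 3

def spline_value(c: np.ndarray, x: float, x0: float, h: float) -> float:
    """U(x) = sum_p c_p B_p(x); only four basis functions overlap any x,
    so the sum is local rather than over all coefficients."""
    i = int((x - x0) // h)
    return sum(
        c[p] * cbs_basis((x - (x0 + p * h)) / h)
        for p in range(max(i - 1, 0), min(i + 3, len(c)))
    )
```

The locality of the support is what makes the assembled collocation system sparse and banded, which in turn is why a direct banded solver suffices at every time step.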
Description of the Numerical Method
In this section, we present the numerical scheme for solving the non-linear KdV equation. Applying the usual finite difference method and the θ-weighted scheme, the problem is discretized in the time direction as

(u^{m+1} − u^m)/Δt + θ(εuu_x + μu_xxx)^{m+1} + (1 − θ)(εuu_x + μu_xxx)^m = 0, (11)

where Δt is the step size in the time direction and 0 ≤ θ ≤ 1. Substituting the linearization (12) of the non-linear term into (11), we get (13). For θ = 1/2, the relation (13) can be rearranged as (14). Substituting the approximations for u and its derivatives at the knot x_i into equation (14) leads to a linear recurrence in the unknown coefficients, and the resulting system can be expressed in matrix form as AC^0 = b. The unknown column vector C^0 is determined by the well-known Thomas algorithm. The numerical computations are executed in Mathematica 9.
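The matrix entries of the assembled system are not legible in the source; still, the solver step can be illustrated. The sketch below is a generic Thomas algorithm for a tridiagonal system, which is what the reference to that algorithm implies (the true CBS collocation matrix for a third-order equation may in fact be banded rather than strictly tridiagonal).

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d by the Thomas algorithm.

    a: sub-diagonal   (length n, a[0] unused)
    b: main diagonal  (length n, assumed non-degenerate)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The sweep costs O(n) operations per time step, which is why the Thomas algorithm is preferred over general Gaussian elimination when the system must be re-solved at every time level.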
Numerical Results
In this section, the approximate solution to (1)-(2) is presented. The accuracy and validity of the proposed numerical method are tested by three error norms, L∞, L2 and the Root Mean Square (RMS), which are calculated in the usual way as

L∞ = max_i |u_i^exact − U_i|, L2 = [ h Σ_i |u_i^exact − U_i|² ]^{1/2}, RMS = [ Σ_i (u_i^exact − U_i)² / (n + 1) ]^{1/2}.

The error norms L∞, L2 and RMS are listed in Tables 1-3, when n = 200 and Δt = 0.01. It is revealed that the proposed numerical scheme produces more reliable and accurate results as compared to MQRBF 8 , MQ 10 , IMQ 10 , MQQI 11 and IMQQI 12 . Figure 1 shows a very close agreement of the numerical solution with the closed form solution for t = 1, 3, 5. Three dimensional plots of the exact and approximate solutions are shown in Figures 2 and 3. The absolute computational error using n = 200, Δt = 0.01 is displayed in Figure 4. Table 1 reports the absolute numerical error for Example 1 on 0 ≤ x ≤ 40. The exact solution is a solitary wave of sech² type. The computational error norms L∞, L2 and RMS are listed in Table 4 when n = 200 and Δt = 0.01. Figure 5 shows the approximate and exact solution at t = 0.2, 0.4, 0.6, 0.8, 1. The three dimensional plots of the analytical and approximate solutions are displayed in Figures 6 and 7. The absolute computational error is portrayed in Figure 8 using n = 200 and Δt = 0.01.
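Since the norm formulas are garbled in the source, the following sketch computes them in the normalizations most common in this literature; the authors' exact scaling of L2 (with or without the grid factor h) is an assumption.

```python
import numpy as np

def error_norms(u_exact, u_num, h):
    """L-infinity, L2 and RMS error norms on a uniform grid of spacing h."""
    e = np.asarray(u_exact) - np.asarray(u_num)
    l_inf = np.max(np.abs(e))          # worst pointwise error
    l2 = np.sqrt(h * np.sum(e ** 2))   # discrete L2 norm, weighted by h
    rms = np.sqrt(np.mean(e ** 2))     # root mean square over all nodes
    return l_inf, l2, rms
```

Given the exact and computed solution vectors at the final time, a call such as `error_norms(u_exact, u_num, h=40.0 / 200)` reproduces the kind of entries tabulated in Tables 1-4.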
Conclusion
In this paper, the numerical solution of the non-linear third order KdV equation has been explored. We summarize the outcomes of this research as follows: 1. The presented algorithm is based on the usual finite difference scheme and the CBS collocation method. 2. The proposed technique is novel for the third order non-linear KdV equation. 3. The usual finite difference scheme has been employed for temporal discretization. 4. The new CBS approximations have been used to interpolate the solution in the space direction. 5. Due to its straightforward and simple application, it outperforms the MQRBF 8 , MQ 10 , IMQ 10 , MQQI 11 and IMQQI 12 approaches.
Acknowledgments
This study was fully supported by | 2019-04-23T13:23:38.555Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "64197b6d5248a5b43058d3a80c093754ed1cfedd",
"oa_license": "CCBY",
"oa_url": "https://sciresol.s3.us-east-2.amazonaws.com/IJST/Articles/2019/Issue-6/Article11.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7caaeaaf008f665b5e7c94d7d96a9ee447875545",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
126381627 | pes2o/s2orc | v3-fos-license | Decay of Strong Solutions for 4D Navier-Stokes Equations Posed on Lipschitz Domains
Initial-boundary value problems for 4D Navier-Stokes equations posed on bounded and unbounded 4D parallelepipeds were considered. The existence and uniqueness of regular global solutions on bounded parallelepipeds and their exponential decay as well as the existence, uniqueness, and exponential decay of strong solutions on an unbounded parallelepiped have been established provided that initial data and domains satisfy some special conditions.
Introduction
This work concerns the existence and uniqueness of global strong solutions and sharp decay estimates of solutions to initial-boundary value problems for the 4D Navier-Stokes equations:

u_t − νΔu + (u · ∇)u + ∇p = 0 in Ω × (0, +∞),
∇ · u = 0 in Ω,
u(x, 0) = u_0(x),

where Ω is either a bounded or an unbounded parallelepiped in R⁴ with the homogeneous Dirichlet condition u = 0 on the boundary of Ω.
The question of the decay of the energy of weak solutions was raised by J. Leray in [1] and still attracts the attention of many pure and applied mathematicians [2][3][4][5][6][7][8][9]. In all of these papers, the decay rate of ‖u(t)‖_{L²(Ω)} was controlled by the first eigenvalue of the Stokes operator A = −PΔ, where P is the projection operator onto the solenoidal subspace of L²(Ω). Obviously, this approach does not work in unbounded domains; see [6,7,9].
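For orientation, the eigenvalue-based decay mechanism referred to here can be written out in a few lines; this is the textbook argument, not a computation taken from the cited papers.

```latex
% Energy identity for the unforced Navier-Stokes system: the nonlinear
% and pressure terms vanish against u for divergence-free fields with
% homogeneous Dirichlet data, leaving
\frac{1}{2}\frac{d}{dt}\|u(t)\|_{L^2(\Omega)}^2
  + \nu\,\|\nabla u(t)\|_{L^2(\Omega)}^2 = 0 .
% On a bounded domain the Poincare inequality
% \|\nabla u\|^2 \ge \lambda_1 \|u\|^2, with \lambda_1 the first
% eigenvalue of the Stokes operator A = -P\Delta, then gives
\|u(t)\|_{L^2(\Omega)} \le \|u_0\|_{L^2(\Omega)}\, e^{-\nu \lambda_1 t} .
% When \Omega is unbounded in every direction, \lambda_1 = 0 and this
% route yields no rate, which is the obstruction noted above.
```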
It is well known that solutions of the 2D Navier-Stokes equations posed on smooth bounded domains with the Dirichlet boundary conditions are globally regular [4,[6][7][8][9]. On the other hand, the question of regularity for the 3D and 4D NSE with arbitrary initial data is still an open problem, even for smooth domains; see [6,7,9]. Small initial data help to solve this problem [6,7,9], as do the so-called "thin" domains, in which some dimension of the domain is small [10,11]. The question of regularity becomes more difficult when the domain is Lipschitzian [10,12,13].
Our goal here is, under some geometrical restrictions, to prove the existence and uniqueness of strong global solutions in 4D Lipschitz domains for arbitrary regular initial data, as well as the exponential decay of solutions.
In this work, making use of the ideas of [15], we have established that the strong solution satisfies u ∈ W^{2,4/3}(Ω) for a 4D bounded parallelepiped, together with an exponential decay inequality with a positive decay rate. The paper is organized as follows. Section 2 contains notations and auxiliary facts. In Section 3, existence, uniqueness, and decay of global strong solutions on a bounded 4D parallelepiped have been established. In Section 4, the existence, uniqueness, and decay of regular solutions on bounded 4D parallelepipeds and strong solutions on 4D unbounded parallelepipeds have been demonstrated.
Notations and Auxiliary Facts
Let x = (x₁, x₂, x₃, x₄) and let Ω be a domain in R⁴. Define, as in [9], p. 2-4: for scalar functions f(x), L^p(Ω), 1 < p < +∞, denotes the Banach space with the norm ‖f‖^p_{L^p(Ω)} = ∫_Ω |f(x)|^p dx. For p = 2, L²(Ω) is a Hilbert space with the scalar product (u, v) = ∫_Ω u(x) v(x) dx. The Sobolev space W^{m,p}(Ω) is a Banach space with the norm ‖f‖^p_{W^{m,p}(Ω)} = Σ_{|α| ≤ m} ‖D^α f‖^p_{L^p(Ω)}. When p = 2, W^{m,2}(Ω) = H^m(Ω) is a Hilbert space with the corresponding scalar product and norm. Let D(Ω) (respectively D(Ω̄)) be the space of C^∞ functions with compact support in Ω (respectively Ω̄). The closure of C^∞ functions in W^{m,p}(Ω) is denoted by W^{m,p}_0(Ω) (and H^m_0(Ω) when p = 2). Define the auxiliary space of solenoidal vector functions V = {v ∈ (D(Ω))⁴ : ∇ · v = 0}. This space is equipped with the natural L² inner product.
The space V will be equipped with the scalar product ((u, v)) = (∇u, ∇v) when Ω is bounded. If Ω is unbounded, we define the inner product as the sum ((u, v)) = (u, v) + (∇u, ∇v). We use the usual notations of Sobolev spaces for vector functions, with the following notations for the norms: (i) for vector functions u(x) = (u₁(x), u₂(x), u₃(x), u₄(x)), the norms are taken componentwise, ‖u‖² = Σᵢ ‖uᵢ‖². The closures of V in L²(Ω) and in H¹₀(Ω) are the basic spaces in our study. We denote them by H and V, respectively. Remark 1. By definition, V is a proper subspace of H¹₀(Ω).
The next lemmas will be used in estimates.
Uniqueness of the Strong Solution
This can be rewritten in a form to which the previous estimates apply. Acting in the same manner as in the proof of Estimate II, we arrive at a differential inequality for the difference of two strong solutions. By the conditions of Theorem 5, this difference vanishes at t = 0. Taking into account Estimates (28) and using standard arguments, we get, for all t > 0, that (53) forces the difference to be identically zero, which proves the uniqueness of the strong solution and completes the proof of Theorem 5.
More Regularity
Consider the Poisson problem −Δv = f in a bounded domain Ω ⊂ Rⁿ with v = 0 on the boundary of Ω. (57) In [15], Theorem 11, p. 120-123, the following has been proved.
Theorem 7. The problem (57) posed in a parallelepiped
Returning to the original problem (61) for the Navier-Stokes equations, where u(x) is a vector function from R⁴ into R⁴ and p(x) is a real function from R⁴ into R, and making use of Galerkin approximations, we establish the following result. Theorem 8. Given u₀ ∈ H²(Ω) ∩ V and a domain Ω satisfying (25), problem (61) has a unique regular solution (u, p) which, for all Φ(x) ∈ V, satisfies the corresponding integral identity, together with an exponential decay estimate. Proof (decay of the W^{2,4/3}(Ω)-norm). Taking into account that the conditions of Theorem 8 and of Theorem 5 are the same, by Theorem 5 we have a unique strong solution of (61). Hence, to prove Theorem 8, it is sufficient to establish the decay of ‖u‖_{W^{2,4/3}(Ω)}(t).
Returning to (65), with the right-hand side in L^{4/3}(Ω), we obtain, due to Theorem 7, that u ∈ W^{2,4/3}(Ω) with the corresponding estimate; the decay of the remaining norms then follows by the Sobolev embedding theorems. The proof of Theorem 8 is complete.
In some sense, this is the best regularity attainable for problem (61). It appears that n = 4 is the critical case of the Navier-Stokes system.
Conclusions. In our work, we tried to respond to some questions posed by J. Leray [1], namely, the regularity of global solutions of the Navier-Stokes equations and their decay. Therefore, our results can be divided into two parts: the first one concerns the decay of global regular solutions of the 4D Navier-Stokes equations posed on bounded 4D parallelepipeds. It is known that there exist global regular solutions for the 2D Navier-Stokes equations posed on smooth bounded domains [4,6,8,9], but regularity in nonsmooth (Lipschitz) domains is not obvious. For bounded 4D parallelepipeds, we have established the existence of a unique global regular solution which decays exponentially as t → +∞, provided that the initial data satisfy (25). We demonstrated that the decay rate is different for different norms; see (77), where the rate is defined by the geometrical characteristics of the domain Ω.
The second part of our work concerns the decay of solutions for the 4D Navier-Stokes equations posed on an unbounded parallelepiped. In the existing publications [3,4,6,9], the decay rate of ‖u(t)‖_{L²(Ω)} is controlled by the first eigenvalue of the operator A = −PΔ, where P is the projection operator onto a solenoidal subspace of L²(Ω). It is clear that this approach does not work in unbounded domains.
On the other hand, our approach based on the Steklov inequalities allowed us to estimate the decay rate of a strong solution for the 4D Navier-Stokes equations posed on an unbounded 4D parallelepiped.
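For reference, a minimal statement of the one-dimensional Steklov (Poincaré-Friedrichs) inequality presumably underlying these estimates is given below; the optimal constant shown is the classical one for homogeneous Dirichlet data, not a value quoted from this paper.

```latex
% Steklov inequality on an interval of length L:
% for f \in H_0^1(0,L), i.e. f(0) = f(L) = 0,
\int_0^L f^2(x)\,dx \;\le\; \Big(\frac{L}{\pi}\Big)^{2}
  \int_0^L \big(f'(x)\big)^2\,dx .
% Applied across the bounded directions of a parallelepiped, it bounds
% \|u\|_{L^2}^2 by a multiple of \|\nabla u\|_{L^2}^2 without requiring
% the domain to be bounded in every direction, which is how an explicit
% decay rate can survive on an unbounded parallelepiped.
```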
We must emphasize that this estimate is the first one which gives an explicit value of the decay rate for unbounded 4D domains. The results established in our work can be used in the construction of numerical schemes for solving initial-boundary value problems for the Navier-Stokes equations appearing in the mechanics of viscous liquids. From the physical point of view, the decay estimates show that the decay rate of perturbations of solutions caused by the initial data is larger for larger values of the viscosity ν and for smaller sizes of the 4D parallelepipeds.
My interest in the 4D Navier-Stokes equations is purely mathematical and, in my opinion, cannot be extended to dimensions higher than 4. I must also note that there are publications on the existence of weak solutions for the 4D Navier-Stokes equations [7], [9] p. 189-197. | 2019-04-22T13:13:15.338Z | 2018-12-13T00:00:00.000 | {
"year": 2018,
"sha1": "38bbb82340793b15d4415df5f225fe08fe41dde8",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/amp/2018/5807385.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "38bbb82340793b15d4415df5f225fe08fe41dde8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
29080121 | pes2o/s2orc | v3-fos-license | Electroacupuncture at Fengchi (GB20) inhibits calcitonin gene-related peptide expression in the trigeminovascular system of a rat model of migraine
Most migraine patients suffer from cutaneous allodynia; however, the underlying mechanisms are unclear. Calcitonin gene-related peptide (CGRP) plays an important role in the pathophysiology of migraine, and it is therefore, a potential therapeutic target for treating the pain. In the present study, a rat model of conscious migraine, induced by repeated electrical stimulation of the superior sagittal sinus, was established and treated with electroacupuncture at Fengchi (GB20) (depth of 2-3 mm, frequency of 2/15 Hz, intensity of 0.5-1.0 mA, 15 minutes/day, for 7 consecutive days). Electroacupuncture at GB20 significantly alleviated the decrease in hind paw and facial withdrawal thresholds and significantly lessened the increase in the levels of CGRP in the trigeminal ganglion, trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus in rats with migraine. No CGRP-positive cells were detected in the trigeminal nucleus caudalis or ventroposterior medial thalamic nucleus by immunofluorescence. Our findings suggest that electroacupuncture treatment ameliorates migraine pain and associated cutaneous allodynia by modulating the trigeminovascular system ascending pathway, at least in part by inhibiting CGRP expression in the trigeminal ganglion.
Introduction
Migraine is a common primary headache disorder that is characterized by recurrent, throbbing, unilateral headaches, and affects 14.7% of the population worldwide (Vos et al., 2012; Headache Classification Committee of the International Headache Society, 2013). Most migraines are accompanied by cutaneous allodynia, which is an altered sensory perception to innocuous stimuli (Lovati et al., 2009). According to the Global Burden of Disease Study in 2010, migraine ranked as the third most prevalent and seventh most disabling disease worldwide (Vos et al., 2012). Current anti-migraine drugs, including non-prescription painkillers, nonsteroidal anti-inflammatory drugs, triptans and ergot alkaloids, are unable to fully meet the needs of migraine sufferers because of their suboptimal efficacy, adverse effects, and contraindications (Reddy, 2013).
Electroacupuncture (EA) is a promising complementary strategy for treating migraine. Some reviews have suggested that acupuncture is a treatment choice for migraineurs, with few adverse events, which could be used as a supplement to other non-pharmacologic treatment options (Endres et al., 2007; Linde et al., 2016). However, the mechanisms underlying the analgesic effect of EA on migraine are unknown. It is generally thought that the activation and sensitization of trigeminovascular system nociceptive pathways are responsible for migraine headaches and cutaneous allodynia (Pietrobon and Moskowitz, 2013). A recent study demonstrated that EA at Fengchi (GB20) exerts antinociceptive effects by modulating serotonin. Advances in the understanding of the function of calcitonin gene-related peptide (CGRP) in trigeminovascular system nociceptive pathways suggest that CGRP is a promising target for migraine therapy (Pietrobon and Moskowitz, 2013). CGRP-targeting drugs developed for migraine, such as CGRP receptor antagonists and CGRP-blocking antibodies, were shown to be efficacious in treating migraine attacks in clinical trials (Russo, 2015).
We hypothesized that EA treatment at GB20 might modulate CGRP levels in the trigeminal ganglion, trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus of the trigeminovascular system ascending pathway, and alleviate cutaneous allodynia. To test this hypothesis, an experimental rat model of migraine was established by repeated electrical stimulation of the superior sagittal sinus, which mimics migraine headache and cutaneous allodynia. Then, we evaluated cutaneous allodynia using electronic von Frey anesthesiometry and CGRP expression in the trigeminovascular system by western blot assay and immunofluorescence to explore the mechanisms underlying the effects of EA treatment on migraine and cutaneous allodynia.
Animals
This study was approved by the Beijing Institutional Review Board for Animal Experiments (Use Committee of Capital Medical University, Beijing; Approval number: AEEI 2015-075). Surgeries were performed under anesthesia, and all possible efforts were made to minimize suffering. After a recovery period of 1 week, baseline withdrawal threshold was measured by von Frey anesthesiometry on day 0. On days 2, 4 and 6, facial and hind paw withdrawal thresholds were tested for a total of 3 sessions. Rats received dural electrical stimulation every other day from days 1 to 7 (on days 1, 3, 5 and 7). Electroacupuncture (or non-acupuncture point acupuncture) treatment was performed daily from days 1 to 7 (on days 1, 2, 3, 4, 5, 6 and 7) in the different groups.
Forty male 6-week-old, specific-pathogen-free Sprague Dawley rats (Vital River Laboratories, No. 11400700103582, Beijing, China), weighing 210 ± 10 g, were used in this study. Rats were individually maintained in a climate-controlled laboratory environment (room temperature, 23 ± 2°C; humidity, 50 ± 10%) on a 12-hour light/dark cycle with unlimited access to water and food. The rats were acclimated to the new environment for 1 week before undergoing brain surgery to implant the electrodes required for electrical stimulation.
Group assignment
After the acclimation period, 40 animals were randomly divided into the following four groups (n = 10): a control group, which only received electrode implantation; a model group, which only received electrical stimulation of the superior sagittal sinus; an EA group, which received EA at GB20 after electrical stimulation of the superior sagittal sinus; and a non-acupuncture point electroacupuncture (NA) group, which received EA at a distant non-acupuncture point (approximately 10 mm above the iliac crest) after electrical stimulation of the superior sagittal sinus (Li et al., 2015). The experiment began on the first day after recovery and lasted 7 days. Three sessions of electrical stimulation were given to the EA, NA and model groups with a stimulator (YC-2 stimulator; Chengdu Instrument Factory, Chengdu, Sichuan Province, China) every other day (on days 1, 3 and 5). From day 1 to day 7, the EA and NA groups received EA after electrical stimulation for a total of seven sessions. The number of animals used in this study was 10 per group, estimated according to a power calculation described in a previous study (Gao et al., 2014). A diagram of the experimental protocol is shown in Figure 1.
Establishment of the rat model of conscious migraine
As described in a previous study, rats were anesthetized with an intraperitoneal injection of 60 mg/kg pentobarbital sodium (Sigma-Aldrich, St. Louis, MO, USA). Two holes (1 mm in diameter) were drilled in the midline suture of the skull with a saline-cooled drill (78001; RWD Life Science, Shenzhen, Guangdong Province, China) (Dong et al., 2011). One hole was located 4 mm anterior to the bregma, and the other was 6 mm posterior to the bregma. The cranial holes were located over the dura mater around the superior sagittal sinus. Two tailored electrode fixtures (Beijing Jiandeer, Beijing, China) were placed into the cranial holes such that they were contacting the superior sagittal sinus. A pair of screws (M1.4 × 2.8 mm) were implanted into the screw holes of each electrode fixture to stabilize the electrodes and then covered with dental cement (Shanghai New Century Dental Materials, Shanghai, China). To prevent clogging, the obturator was inserted into the external terminal of the electrode. The operation was performed under a surgical microscope. Penicillin (0.04 million IU/100 g; Harbin Pharmaceutical Group, China) was administered intramuscularly to prevent infection. All rats had a 7-day recovery period before the experiments began. Before dural electrical stimulation, each rat was placed in a transparent cage (diameter, 40 cm; height, 17.5 cm) and allowed to habituate for 20 minutes. The obturator was removed from the electrode fixtures, and a delivery electrode tip that was connected to the current source output of the electrical stimulator was inserted. Based on previous studies, dural electrical stimuli, which consisted of 0.5-ms monophasic square-wave pulses of 1.8-2.0 mA (intensity) and 20 Hz (frequency), were given to the rats in the EA, NA and model groups for a 15-minute period every other day for a total of three sessions. Rats in the control group were connected to the stimulator, but were given no stimulation.
[Displaced figure caption: Densitometry results are shown as the mean ± SD (n = 5). CGRP levels were normalized against β-actin and analyzed using one-way analysis of variance. ***P < 0.001, vs. control group; #P < 0.05, ##P < 0.01, vs. model group; †P < 0.05, vs. EA group.]
EA at GB20
Each rat was consciously placed into a tailored fixture that restricted movement and exposed the head and neck. According to the WHO Standard Acupuncture Point Locations (World Health Organization Regional Office for the Western Pacific, 2008), GB20 is located "in the anterior region of the neck, inferior to the occipital bone, in the depression between the origins of the sternocleidomastoid and the trapezius muscles". The anatomical location of GB20 in rats is similar to that in humans: 3 mm lateral to the midpoint of a line joining the two ears at the back of the head (Siu et al., 2005). For rats in the EA group, a pair of stainless steel acupuncture needles (diameter, 0.25 mm; length, 25 mm; Suzhou Medical Appliance Factory, Suzhou, Jiangsu Province, China) were inserted into GB20 to a depth of 2-3 mm in the direction of the opposite eye, bilaterally. The needle handle was then connected to an electrical stimulator (Han's acupuncture point nerve stimulator HNAS-200E; Nanjing, Jiangsu Province, China) for 15 minutes/day. EA was applied at a frequency of 2/15 Hz (amplitude-modulated wave) and an intensity of 0.5-1.0 mA (depending on the reaction of the rat). For rats in the NA group, needles were inserted bilaterally at distant non-acupuncture points (approximately 10 mm above the iliac crest) to a depth of 2-3 mm, and EA was performed with the same parameters as in the EA group. Animals in the control and model groups were similarly placed into fixtures for 20 minutes, but no acupuncture was applied.
[Displaced table caption: Data are shown as the mean ± SD (n = 10 in each group at each time point) and were analyzed using repeated measures analysis of variance. ***P < 0.001, vs. control group; ##P < 0.01, ###P < 0.001, vs. model group; ††P < 0.01, †††P < 0.001, vs. EA group. Day 0: baseline; days 2, 4, 6: 2, 4, 6 days after modeling.]
Behavioral testing of the allodynia response
An electronic von Frey anesthesiometer (Model 2390, IITC Life Science, Woodland Hills, CA, USA) was used to test facial and hind paw withdrawal thresholds. The von Frey anesthesiometry probe was applied vertically to the skin of the face or hind paw until the rat made an escape movement. When the escape response occurred, the maximum force was recorded as the withdrawal threshold by the device. All measurements of withdrawal threshold were made by the same operator, who was blinded to the reading until an escape response was elicited (Moore et al., 2013).
Facial allodynia
Rats were placed in a tailored plastic tube restraint (length, 25 cm; inner diameter, 8 cm) with a mesh inlay at the front, so that the periorbital region was easily accessed. After a 30-minute habituation period, the von Frey anesthesiometry tip was applied to the periorbital region with steady vertical pressure until an escape movement occurred. A total of three trials were completed with 30-second intervals. The mean value of the trials was considered the withdrawal threshold.
Hind paw allodynia
Rats were placed separately under transparent plastic boxes on an elevated mesh platform for 30 minutes. The von Frey anesthesiometry probe was inserted through the mesh to prod the hind paw until the paw was withdrawn from the tip or lifted off the mesh floor. The assay was performed three times with 30-second intervals. The mean value of the trials was used to determine the withdrawal threshold.
Western blot assay
Rats were anesthetized with 10% chloral hydrate (15 mL/kg, intraperitoneally), and the brains and trigeminal ganglia were rapidly removed, frozen in liquid nitrogen, and stored at −80°C until dissection. A week later, the trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus were dissected on a frozen microtome. Using a magnifying glass, bilateral tissue punches from the ventroposterior medial thalamic nucleus and trigeminal nucleus caudalis regions were taken from frontal brain sections (300 μm) with a stainless steel cannula (inner diameter, 1,000 μm) and pooled (Paxinos and Watson, 1998). Samples were placed into 1.8-mL prechilled tubes and stored at −80°C. For western blot assay, total tissue samples from different regions were collected and homogenized in radioimmunoprecipitation assay buffer (70-WB019; MultiSciences, Hangzhou, China) with an ultrasonic cell crusher. The homogenate was centrifuged at 13,000 × g for 15 minutes at 4°C, and 300 μL of the supernatant was collected and stored at −20°C until analysis. The protein concentration of the samples was determined by bicinchoninic acid assay using the Micro BCA Protein Assay Kit (Thermo Scientific, Rockford, IL, USA). Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (12% gradient gels), using 50 µg of protein per well, and then transferred to a polyvinylidene difluoride membrane. The polyvinylidene difluoride blots were blocked in 10% skim milk for 1 hour at room temperature. Membranes were incubated with either a rabbit anti-CGRP polyclonal antibody (Ab139264, Abcam, Cambridge, UK; 1:1,000) or a mouse β-actin polyclonal antibody (4967 CST, Cell Signaling Technology, Boston, MA, USA; 1:1,000) overnight at 4°C. Horseradish peroxidase-conjugated secondary antibody, goat anti-rabbit IgG (H+L) (111-035-003, Jackson ImmunoResearch Laboratories, West Grove, PA, USA; 1:10,000) or goat anti-mouse IgG (H+L) (115-035-003, Jackson ImmunoResearch Laboratories; 1:10,000), was diluted in 10% skim milk/Tris-buffered saline/Tween 20 (TBST) prior to use at room temperature. All between-incubation washes were in TBST. Signals were detected using the enhanced chemiluminescence kit (WBKLS0500, Millipore Corporation, Billerica, MA, USA) and Kodak film (Eastman Kodak, Rochester, NY, USA). The integrated optical density values of the detected proteins were analyzed using Quantity One software (Bio-Rad Laboratories, Hercules, CA, USA). CGRP was expressed as a ratio to β-actin (the loading control).
Immunofluorescence analysis
Rats were anesthetized with an intraperitoneal injection of 10% chloral hydrate (15 mL/kg) and perfused through the ascending aorta with 100 mL of 0.1 M PBS, followed by 500 mL of 4% paraformaldehyde in phosphate buffered saline (PBS). Brains and trigeminal ganglia were removed and postfixed in 4% paraformaldehyde/PBS overnight at 4°C, and then transferred to 30% sucrose/PBS for cryopreservation and incubated for 72 hours. A week later, the trigeminal ganglia and brains were sectioned coronally (25-μm-thick slices) through the thalamus and trigeminal nucleus caudalis with a cryostat (CM3050S, Leica, Wetzlar, Germany). Sections were incubated with a rabbit anti-CGRP polyclonal antibody (ab81887, Abcam, Cambridge, UK; 1:200) overnight at 4°C, and then with Alexa 488 goat anti-rabbit secondary antibody (111-545-003, Jackson ImmunoResearch Laboratories; 1:500) for 2 hours at room temperature. The sections were then mounted, dehydrated, and cover-slipped with anti-fade reagent (AR1109; Boster Bioengineering, Wuhan, China). Sections were imaged using a Leica DM5500 B semi-automatic light microscope, and cells were counted on a grid using the 40× objective lens. Cell counts per 100 µm² (CGRP-positive cell density) in the trigeminal ganglion, trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus were determined by an observer blinded to the groupings using ImageJ software. Anatomical boundaries were determined according to a rat brain atlas (Paxinos and Watson, 1998).
Statistical analysis
Data are shown as the mean ± SD. Withdrawal thresholds were analyzed using repeated measures analysis of variance. All other data were analyzed by one-way analysis of variance with SPSS v12.0 software (SPSS, Chicago, IL, USA). Post-hoc testing was performed using the Bonferroni (homogeneity of variance) or Tamhane (heterogeneity of variance) test. Differences with P values less than 0.05 were considered significant.
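As a rough illustration of this analysis pipeline, the sketch below reproduces the post-hoc selection logic in Python on synthetic data. The group means, the spread, and the use of pairwise Welch t-tests with Bonferroni correction in place of SPSS's Tamhane test are our assumptions, not values or methods taken from the study.

```python
# Sketch of the post-hoc selection logic described above (hypothetical data;
# SPSS's Tamhane T2 test has no direct SciPy equivalent, so pairwise Welch
# t-tests with Bonferroni correction stand in for the unequal-variance case).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {g: rng.normal(mu, 1.5, size=10)            # n = 10 rats per group
          for g, mu in [("control", 12), ("model", 6), ("EA", 10), ("NA", 7)]}

# Omnibus one-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(*groups.values())

# Levene's test decides between homogeneous- and heterogeneous-variance post hocs.
_, p_levene = stats.levene(*groups.values())
equal_var = p_levene > 0.05

names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
raw_p = [stats.ttest_ind(groups[a], groups[b], equal_var=equal_var).pvalue
         for a, b in pairs]
bonf_p = np.minimum(np.array(raw_p) * len(pairs), 1.0)   # Bonferroni correction

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}; equal variances: {equal_var}")
for (a, b), p in zip(pairs, bonf_p):
    print(f"{a} vs {b}: corrected p = {p:.4f}")
```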
EA inhibited the reduction in facial and hind paw withdrawal thresholds in a rat model of conscious migraine
To evaluate the effect of EA at GB20 on cutaneous allodynia in our rat model, von Frey anesthesiometry was used to assess facial and hind paw withdrawal thresholds. For the facial withdrawal threshold, there were no significant differences in the baseline among the four groups (P > 0.05; Table 1). Repetitive dural electrical stimulation significantly decreased the facial withdrawal threshold of the model group compared with the control group (P < 0.001). The withdrawal threshold was significantly higher in the EA group than in the model group (P < 0.001). The facial withdrawal threshold in the NA group did not differ significantly from that in the model group (P > 0.05).
For the hind paw withdrawal threshold, no significant difference was observed at baseline among the four groups (P > 0.05; Table 1). However, after repeated dural electrical stimulation, the hind paw withdrawal threshold was significantly lower in the model group than in the control group (P < 0.001). The hind paw withdrawal threshold was significantly higher in the EA group than in the model group, suggesting that EA at GB20 attenuates the decrease in withdrawal threshold induced by dural electrical stimulation (P < 0.001). The hind paw withdrawal thresholds in the NA and model groups did not differ significantly (P > 0.05).
EA decreased CGRP levels in the trigeminal ganglion, trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus in a rat model of conscious migraine
Western blot assay for CGRP
To investigate the effect of EA on CGRP levels, western blot assay was performed to examine the levels of CGRP in the trigeminal ganglion, trigeminal nucleus caudalis, and ventroposterior medial thalamic nucleus in all groups. In all three regions examined, CGRP protein levels in brain lysates from the model group (after repeated dural electrical stimulation) were significantly higher than those in the control group (n = 5; P < 0.001). In contrast, the EA group had significantly lower CGRP protein levels compared with the model group (P < 0.05), while the levels in the NA group did not differ significantly compared with the model group (P > 0.05; Figure 2).
Immunofluorescence analysis of CGRP-positive cells
In the trigeminal ganglion, immunofluorescence analysis revealed that the model group contained significantly more CGRP-positive cells than the control group (n = 5; P < 0.10). The mean number of CGRP-positive cells in the EA group was significantly lower than that in the model group (P < 0.05), whereas the numbers of CGRP-positive cells in the NA and model groups did not differ (P > 0.05). No significant difference was observed between the control and EA groups (P > 0.05; Figure 3). However, no CGRP-positive cells were detected in the trigeminal nucleus caudalis or ventroposterior medial thalamic nucleus (data not shown).
Discussion
The current findings show that EA at GB20 alleviates cutaneous allodynia in a recurrent migraine model and reduces CGRP levels in the trigeminovascular system ascending pathway. CGRP has been considered a potential new therapeutic target for migraine. Our findings suggest that EA relieves migraine pain and cutaneous allodynia by reducing CGRP levels.
GB20 and migraine
In this study, we chose GB20 to treat migraine and cutaneous allodynia. According to traditional acupuncture theory, migraine is related to dysfunction of the Gallbladder Meridian (Wu, 2009). GB20 is a point in the Gallbladder Meridian located near the region of migraine headache (World Health Organization Regional Office for the Western Pacific, 2008). Therefore, stimulation of GB20 should modulate the function of the Gallbladder Meridian and relieve migraine headache. Indeed, GB20 is one of the most commonly used acupuncture points for migraine in clinical practice and clinical trials (Zheng et al., 2010;Linde et al., 2016).
Trigeminovascular neurons and migraine
Activation and sensitization of the trigeminovascular pain pathway are implicated in the pathophysiology of migraine and cutaneous allodynia (Pietrobon and Moskowitz, 2013). In the present study, we focused on three groups of trigeminovascular neurons in the trigeminovascular system ascending pathway (trigeminal ganglion, trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus) (Noseda and Burstein, 2013). The trigeminal ganglion neurons have sensory fibers that innervate meningeal vessels and afferent projections which synapse with the trigeminal nucleus caudalis neurons. The trigeminal nucleus caudalis conveys signals from the trigeminal ganglion, while sensory signals from the trigeminal nucleus caudalis and extracephalic skin converge onto ventroposterior medial thalamic nucleus neurons. The nociceptive input is transmitted to cortical areas where the perception of the migraine headache and cutaneous allodynia is recognized (Pietrobon and Moskowitz, 2013).
EA at GB20 significantly alleviated dural electrical stimulation-induced cutaneous allodynia
Approximately two-thirds of migraineurs suffer from cutaneous allodynia after migraine attacks, which is characterized by a decreased threshold for the perception of pain induced by non-noxious stimuli (Burstein et al., 2000; Lipton et al., 2008; Louter et al., 2013). Studies have shown that sensitization of the trigeminovascular system can lead to cephalic and extracephalic allodynia (Burstein et al., 1998, 2010). Facial and hind paw withdrawal threshold is a commonly used indicator of mechanical allodynia in migraine research (Romero-Reyes and Ye, 2013). In this study, the decreased withdrawal thresholds in the face and hind paw mimicked cutaneous allodynia.
Central sensitization of trigeminovascular neurons is thought to be the main mechanism of cutaneous allodynia (Bernstein and Burstein, 2012). Accumulating evidence indicates that cephalic cutaneous allodynia results from sensitization of trigeminal nucleus caudalis neurons, while extracephalic cutaneous allodynia represents sensitization of ventroposterior medial thalamic nucleus neurons (Pietrobon and Moskowitz, 2013). In this study, dural electrical stimulation decreased the facial and hind paw withdrawal thresholds, which mimic cephalic and extracephalic cutaneous allodynia, respectively. After sensitization, these neurons exhibited hypersensitivity to cephalic or extracephalic stimuli. EA at GB20 significantly ameliorated these reductions in withdrawal threshold, whereas EA at a distant, non-acupuncture point (NA group) failed to do so, indicating that improvement of these behavioral measures was specific to EA treatment at GB20.
EA at GB20 decreased CGRP levels in the trigeminal ganglion, trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus
In the peripheral trigeminovascular system, CGRP is involved in the activation of meningeal nociceptors during migraine attacks. Neurogenic inflammation is thought to be a key mechanism in the activation and sensitization of perivascular meningeal afferents (Uddman et al., 1985). Accumulating evidence indicates that CGRP plays an important role in this inflammatory response. CGRP directly dilates the meningeal arteries (Brain and Grant, 2004), triggers the release of pro-inflammatory substances from mast cells, and increases substance P release to promote inflammation (Zhang et al., 2007; Lennerz et al., 2008). In the trigeminovascular system, cell bodies in the trigeminal ganglion are the main source of CGRP (Uddman et al., 1985; Durham, 2006). Our results suggest that repeated dural electrical stimulation significantly increases CGRP levels and the number of CGRP-positive cells in the trigeminal ganglion. EA at GB20, but not at the NA point, led to a decrease in CGRP expression and the number of CGRP-positive cells within the trigeminal ganglion, suggesting that it inhibits CGRP-mediated inflammation.
In the central trigeminovascular system, CGRP is a neuromodulator at second-and third-order trigeminovascular neurons that are involved in central sensitization (Raddant and Russo, 2011). In the trigeminal nucleus caudalis, CGRP is released from the central terminals of trigeminal ganglion neurons (Jenkins et al., 2004;Fischer, 2010). Expression studies have shown CGRP immunoreactivity in presynaptic afferent terminals, but not in the neuronal bodies, and CGRP receptor components (RAMP1 and CLR) have been detected in the spinal trigeminal tract region (Eftekhari and Edvinsson, 2011). Furthermore, microiontophoresis of α-CGRP excites some trigeminal nucleus caudalis neurons, and CGRP receptor antagonists block the enhanced nociceptive trigeminovascular transmission in the trigeminal nucleus caudalis (Storer et al., 2004;Summ et al., 2010). In the ventroposterior medial thalamic nucleus, the presence of CGRP receptors has been demonstrated, and microiontophoresis of CGRP increases the spontaneous firing of ventroposterior medial thalamic nucleus neurons, which can be suppressed by the CGRP receptor antagonist CGRP 8-37 (Summ et al., 2010).
In the present study, repeated dural electrical stimulation significantly increased CGRP levels in the trigeminal nucleus caudalis and ventroposterior medial thalamic nucleus, as shown by western blot assay. These increases were largely blocked by EA at GB20. Consistent with previous studies, we did not detect CGRP-positive cells in the trigeminal nucleus caudalis or ventroposterior medial thalamic nucleus using immunofluorescence.
CGRP in the ventroposterior medial thalamic nucleus may come from the trigeminal ganglion
It has been demonstrated that some CGRP in the trigeminal nucleus caudalis is released from the trigeminal ganglion; however, the origin of CGRP in the ventroposterior medial thalamic nucleus is unclear (Eftekhari and Edvinsson, 2011). A previous study showed that inflammatory soup-induced CGRP release into the jugular vein and cerebrospinal fluid is mainly derived from primary trigeminal afferents (Hoffmann et al., 2012). Plasma CGRP is unlikely to reach the ventroposterior medial thalamic nucleus because of poor penetration of the blood-brain barrier (Edvinsson, 2015a). Therefore, it is likely that ventroposterior medial thalamic nucleus neurons are modulated by CGRP in the cerebrospinal fluid. Moreover, the majority of CGRP mRNA is synthesized in the trigeminal ganglion, which is the major source of CGRP in the trigeminovascular system (Durham, 2006;Bhatt et al., 2014).
CGRP is a potential new therapeutic target for migraine treatment (Edvinsson, 2015a). Previous studies have shown that pain relief is accompanied by normalization of CGRP levels, and CGRP-targeting drugs are effective in clinical trials, although the sites of action are still unclear (Edvinsson, 2015b). In this study, EA at GB20 significantly reduced CGRP levels in the trigeminovascular system ascending pathway and reversed the hypersensitivity caused by dural electrical stimulation. EA at a non-acupuncture point failed to normalize CGRP levels and the withdrawal threshold. Thus, the anti-migraine effect of EA appears to be specific to GB20.
Here, we observed an inhibitory effect of EA at GB20 on CGRP expression in the trigeminal ganglion. However, the underlying mechanism is still unclear. Previous studies have demonstrated that CGRP expression within the trigeminal ganglion is involved in the activation of MAPK signaling pathways (Durham and Russo, 2003;Bellamy et al., 2006;Bowen et al., 2006;Dieterle et al., 2011). Furthermore, the MAPK signaling pathway has been shown to play a role in acupuncture-induced analgesia (Fang et al., 2013;Du et al., 2014;Park et al., 2014). Therefore, it is possible that the inhibition by EA of CGRP expression in the trigeminal ganglion modulates the MAPK signaling pathway. Uncovering the cell and molecular mechanisms within the trigeminal ganglia that underlie the EA-induced analgesic effect will require additional research.
In conclusion, EA treatment ameliorates migraine pain and the associated cutaneous allodynia by inhibiting CGRP expression in the trigeminal ganglion to modulate the trigeminovascular system ascending pathway. A limitation of our study is that we used withdrawal thresholds to evaluate sensitization of trigeminovascular system neurons without direct evidence of sensitization. Additional CGRP receptor and signaling pathway studies are required to elucidate the mechanisms underlying the effects of EA on migraine.
Author contributions: LPZ and PP conducted the experiments. LPZ, PP and LL interpreted the data and wrote the manuscript. ZYQ and YPZ revised the manuscript. LPW supervised the research program and contributed to integration of the research team. All authors have read and approved the final manuscript.
"year": 2017,
"sha1": "92c8dafaeff4db957214fdbd6808cfd526c0c676",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1673-5374.206652",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8538e42c910845ee5396f45e9b97319dc7d5c9f5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Knockdown of Tripartite Motif 8 Protects H9C2 Cells Against Hypoxia/Reoxygenation-Induced Injury Through the Activation of PI3K/Akt Signaling Pathway
Tripartite motif 8 (TRIM8) is a member of the TRIM protein family that has been found to be implicated in cardiovascular disease. However, the role of TRIM8 in myocardial ischemia/reperfusion (I/R) has not been investigated. We aimed to explore the effect of TRIM8 on cardiomyocyte H9c2 cells exposed to hypoxia/reoxygenation (H/R). We found that TRIM8 expression was markedly upregulated in H9c2 cells after stimulation with H/R. Gain- and loss-of-function assays proved that TRIM8 knockdown improved cell viability of H/R-stimulated H9c2 cells. In addition, TRIM8 knockdown suppressed reactive oxygen species production and elevated the levels of superoxide dismutase and glutathione peroxidase. Knockdown of TRIM8 suppressed the caspase-3 activity, as well as caused a significant increase in bcl-2 expression and decrease in bax expression. Furthermore, TRIM8 overexpression exhibited opposite effects to knockdown of TRIM8. Finally, knockdown of TRIM8 enhanced the activation of the PI3K/Akt signaling pathway in H/R-stimulated H9c2 cells. Inhibition of PI3K/Akt by LY294002 reversed the effects of TRIM8 knockdown on cell viability, oxidative stress, and apoptosis of H9c2 cells. These present findings defined TRIM8 as a therapeutic target for attenuating and preventing myocardial I/R injury.
Introduction
Myocardial infarction (MI) is one of the leading causes of morbidity and mortality in patients with coronary heart disease worldwide 1. Myocardial ischemia/reperfusion (I/R) is a very complex pathophysiological process that has been demonstrated to be a critical mechanism of MI 2,3. Although early reperfusion is well acknowledged to provide oxygen and nutrients to the ischemic area, it also has side effects on the myocardium, a phenomenon called myocardial I/R injury 4,5. These insults significantly diminish the therapeutic benefits of reperfusion. A better understanding of the mechanisms of I/R injury may be helpful for exploring more suitable strategies to minimize myocardial damage.
Myocardial I/R injury is a very complex pathophysiological process with multiple molecular and cellular events, such as ion accumulation, mitochondrial dysfunction, reactive oxygen species (ROS) formation, activation of oxidative stress and inflammation, and apoptosis 6,7. Among these, ROS are critical mediators in myocardial I/R injury, as evidenced by interventions showing that enhancement of ROS scavenging protects against reperfusion injury 8. Enhanced ROS production induces oxidative stress, which may contribute to myocardial injury and cardiomyocyte death 9. Therefore, it is necessary to develop new strategies for attenuating ROS production and apply them in clinical patient care.
Tripartite motif 8 (TRIM8), a member of the TRIM protein family, has been reported to be involved in various biological processes, such as cell survival, differentiation, inflammation, the innate immune response, and apoptosis 10-13. Recently, accumulating studies have demonstrated that TRIM8 is involved in multiple diseases, including cardiovascular disease. Overexpression of TRIM8 exaggerates cardiac hypertrophy both in vivo and in vitro 14. In addition, TRIM8 has been found to be involved in hepatic I/R injury 15. However, the role of TRIM8 in myocardial I/R injury remains unclear. The aim of this study was to investigate the role of TRIM8 in H9c2 cells exposed to hypoxia/reoxygenation (H/R).
H/R Model
The protocol for exposing H9c2 cells to H/R stimulation was as follows. Briefly, cultured H9c2 cells were subjected to hypoxia in a hypoxic chamber (1% O2, 5% CO2, and 94% N2) for 2 h, followed by 24 h of reoxygenation in a normoxic chamber (95% air and 5% CO2). Cells in the control group were kept under normoxic conditions. Cell viability, oxidative stress, and apoptosis were examined using the methods described below.
Rescue Assay
To confirm that the PI3K/Akt signaling pathway contributes to the TRIM8 inhibition-mediated cardioprotective effect, H9c2 cells were transfected with si-TRIM8 or si-NC in the presence of LY294002 (10 μM) for 48 h, followed by H/R stimulation.
Quantitative Real-time Polymerase Chain Reaction
The total RNA of the H9c2 cells was extracted with an RNA extraction kit (Applied Biosystems, Foster City, CA, USA) according to the instructions. Reverse transcription was performed using 2 μg of total RNA with a SuperScript III First-Strand Synthesis system (Invitrogen). Quantitative assay of TRIM8 gene expression was performed using a QuantiTect SYBR Green kit (Toyobo, Osaka, Japan) on an ABI Prism 7700 Sequence Detection System (Applied Biosystems). Gene expression was calculated by the 2^(−ΔΔCt) method. The specific primer sequences were: TRIM8 forward, 5′-GAC GGA TTC ACG GAC AGT AA-3′, reverse, 5′-TTG ATG CTG GCC AGG C-3′; β-actin forward, 5′-GGG AAA TTC AAC GGC ACA GT-3′, reverse, 5′-AGA TGG TGA TGG GCT TCC C-3′.
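For illustration, a minimal sketch of the 2^(−ΔΔCt) calculation is given below; the Ct values are invented for the example and are not data from this study.

```python
# Minimal 2^(-ΔΔCt) relative-expression calculation (illustrative Ct values).
# Mean Ct values for TRIM8 and the β-actin reference in each condition.
ct = {"control": {"TRIM8": 27.1, "actin": 16.3},
      "H/R":     {"TRIM8": 24.8, "actin": 16.2}}

def relative_expression(sample, reference="control",
                        target="TRIM8", ref_gene="actin"):
    d_ct_sample = ct[sample][target] - ct[sample][ref_gene]     # ΔCt, sample
    d_ct_ref = ct[reference][target] - ct[reference][ref_gene]  # ΔCt, calibrator
    return 2.0 ** -(d_ct_sample - d_ct_ref)                     # 2^(-ΔΔCt)

print(f"TRIM8 fold change after H/R: {relative_expression('H/R'):.2f}x")
```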
Measurement of Intracellular ROS
The detection of intracellular ROS levels was based on the fluorescent probe 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA), which is a cell-permeable indicator of ROS. In brief, H9c2 cells were incubated with 10 μM DCFH-DA for 20 min at 37°C. Then, the fluorescence of the formed DCF was visualized under a fluorescent microscope with excitation/emission set at 502/523 nm.
Western Blot
H9c2 cells were homogenized in ice-cold RIPA lysis buffer (Beyotime, Shanghai, China). The protein concentration in the lysates was measured according to the Bradford method using a commercial kit (Beyotime). Equal amounts of protein were subjected to 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis and then transferred electrophoretically to polyvinylidene difluoride membranes (Thermo Fisher Scientific, Waltham, MA, USA). After blocking with 5% nonfat milk for 1 h at room temperature, the membranes were incubated with specific primary antibodies against bax, bcl-2, p-PI3K, PI3K, p-Akt, Akt, and β-actin (Invitrogen), diluted in blocking buffer, at 4°C overnight. Following three washes with Tris-buffered saline with Tween-20 (TBST), the membranes were incubated with horseradish peroxidase-conjugated secondary antibodies (Invitrogen) for 1 h at 37°C. The protein signals were visualized using an enhanced chemiluminescence detection kit (Pierce, Rockford, IL, USA). The optical density was analyzed using the Bio-Image Analysis System (Bio-Rad Laboratories).
Enzyme-linked Immunosorbent Assay
The levels of superoxide dismutase (SOD), glutathione peroxidase (GPx), and caspase-3 in lysates were measured using commercial assay kits purchased from MyBioSource (San Diego, CA, USA) according to the manufacturer's instructions.
Statistical Analysis
Quantitative results were analyzed with GraphPad Prism version 6.0 (GraphPad Software, Inc., San Diego, CA, USA) from at least three separate experiments and are expressed as mean ± standard deviation. Comparisons were made by Student's t-tests or one-way analysis of variance with Tukey's post hoc tests. A P-value less than 0.05 was considered statistically significant.
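A minimal sketch of the one-way ANOVA plus Tukey post-hoc route is shown below; the group labels, viability numbers, and sample sizes are illustrative only, and statsmodels is assumed to be available alongside SciPy.

```python
# One-way ANOVA followed by Tukey's HSD, using synthetic viability data
# for four hypothetical experimental groups.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
labels = ["control", "H/R", "H/R+si-TRIM8", "H/R+si-NC"]
means = [100, 55, 80, 57]                      # % cell viability (illustrative)
data = [rng.normal(m, 6, size=6) for m in means]

f_stat, p_val = f_oneway(*data)                # omnibus test across groups
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.2e}")

values = np.concatenate(data)
group = np.repeat(labels, [len(d) for d in data])
print(pairwise_tukeyhsd(values, group, alpha=0.05))   # all pairwise contrasts
```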
Results
The Expression of TRIM8 was Upregulated in H9c2 Cells Exposed to H/R
H9c2 cells were exposed to hypoxic conditions for 2 h, followed by reoxygenation for another 24 h. The results showed that the expression of TRIM8 at both the mRNA and protein levels was significantly upregulated by H/R treatment compared with control H9c2 cells (Fig. 1A, B).
Knockdown of TRIM8 Improved the Viability of H9c2 Cells Exposed to H/R
Subsequently, we explored the role of TRIM8 in H/R-treated H9c2 cells through transfection with si-TRIM8. Western blot revealed that TRIM8 protein expression was markedly decreased by si-TRIM8 in H9c2 cells, as compared with the si-NC group (Fig. 2A). The viability of H9c2 cells exhibited a significant reduction after H/R exposure, while TRIM8 knockdown caused a recovery of cell viability (Fig. 2B).
Downregulation of TRIM8 Inhibited H/R-Induced Oxidative Stress in H9c2 Cells
At the end of reperfusion, as shown in Fig. 3A, ROS production was significantly increased by H/R treatment, as compared with control H9c2 cells, and this increase was markedly attenuated by downregulation of TRIM8. In addition, downregulation of TRIM8 also improved the activities of SOD and GPx when compared with the H/R group (Fig. 3B, C).
TRIM8 Knockdown Inhibited Apoptosis in H9c2 Cells After H/R
Next, we found that the H/R-induced increase in caspase-3 activity was suppressed by TRIM8 knockdown in H9c2 cells (Fig. 4A). Meanwhile, bcl-2 expression was decreased, while bax expression was increased in response to H/R, when compared with control cells. However, the H/R-induced changes in bcl-2 and bax expression were significantly reversed by knockdown of TRIM8 (Fig. 4B-D).
TRIM8 Promoted H/R-Induced Oxidative Stress and Apoptosis in H9c2 Cells
To further study the function of TRIM8 in H/R-treated H9c2 cells, we overexpressed TRIM8 in H9c2 cells by transfection with pcDNA3.0-TRIM8, which was confirmed by western blot (Fig. 5A). MTT assay results showed that TRIM8 overexpression exacerbated the decrease in cell viability upon H/R induction (Fig. 5B). The H/R-induced production of ROS and increase in caspase-3 activity were enhanced by TRIM8 overexpression (Fig. 5C, D).
TRIM8 Knockdown Enhanced the Activation of PI3K/ Akt Signaling Pathway in H/R-Stimulated H9c2 Cells
To further explore the mechanism of TRIM8, we studied the effect of TRIM8 knockdown on the PI3K/Akt signaling pathway by detecting the expression of PI3K, p-PI3K, Akt, and p-Akt. As shown in Fig. 6, H/R treatment significantly decreased the expression levels of p-PI3K and p-Akt in H9c2 cells, as compared with the control group. However, knockdown of TRIM8 markedly restored the levels of p-PI3K and p-Akt, indicating enhanced activation of the pathway.
Inhibition of PI3K/Akt Reversed the Effects of TRIM8 Knockdown on H9c2 Cells
In addition, LY294002 was used to block the activation of the PI3K/Akt signaling pathway. As shown in Fig. 7A, the improvement in cell viability in the H/R + si-TRIM8 group was lost in cells treated with LY294002. Inhibition of the PI3K/Akt signaling pathway was also shown to prevent the antioxidative and antiapoptotic effects of si-TRIM8 in H/R-stimulated H9c2 cells, as evidenced by increased ROS production and caspase-3 activity (Fig. 7B, C).
Discussion
I/R-induced injury has been described as one of the main factors that contribute to the observed morbidity and mortality in MI. ROS have been found to play a key role in the pathophysiology of I/R injury and mediate injury to the insulted tissues 16. During the reperfusion stage of an ischemic tissue, a burst of ROS is produced due to the abundance of oxygen supply 17. The excessive production of ROS induces oxidative stress, which can result in direct cytotoxic effects. Besides, the generated oxidative stress also induces the production of ROS, as well as the formation of inflammatory mediators through redox-mediated signaling pathways, leading to post-I/R inflammatory injury 18. These oxidative and inflammatory responses may cause cell apoptosis. Therefore, methods for alleviating I/R injury include preconditioning techniques and minimizing oxidative stress during reperfusion with the use of antioxidants, anti-inflammatory agents, and scavengers of ROS.
TRIM8 is a member of the TRIM family that is mostly ubiquitous in murine and human tissues 10. TRIM8 plays an important role in the response to various physiological and pathological conditions. Blocking of TRIM8 protects against lipopolysaccharide-induced acute lung injury in mice through its anti-inflammatory and antioxidative activities, with decreased ROS production, increased SOD, and lessened IL-1β, IL-6, and TNF-α expression in lung tissues 19. Additionally, TRIM8 was found to be upregulated in the liver of mice subjected to hepatic I/R injury. TRIM8 deficiency relieves hepatocyte injury triggered by I/R. Silencing of Trim8 expression alleviates hepatic inflammation responses and inhibits apoptosis in vitro and in vivo 15. TRIM8 overexpression exaggerates cardiac hypertrophy in pressure overload-induced mice and Ang II-induced cardiomyocyte hypertrophy in vitro 14. Thus, we speculated that TRIM8 might be involved in myocardial I/R injury. Our results showed that TRIM8 expression was markedly upregulated in H9c2 cells after stimulation with H/R. Knockdown of TRIM8 improved the cell viability and inhibited oxidative stress and apoptosis of H9c2 cells, while TRIM8 overexpression exhibited opposite effects.
The PI3K/Akt signaling pathway is a conserved pathway central to many aspects of cell growth and survival, in physiological as well as pathological conditions 20,21. Recent studies have identified that PI3K/Akt is crucial for limiting oxidative stress, proinflammatory, and apoptotic events in response to I/R stimuli 22-24. It has been well documented that activation of PI3K/Akt is associated with decreased myocardial ischemic injury. PI3K/Akt activation inhibits cardiomyocyte apoptosis induced by hypoxia, and it protects hearts against I/R injury 25-27. Troxerutin reduces myocardial infarct size, improves cardiac function, and decreases the levels of inflammatory cytokines as well as some apoptosis markers in a myocardial I/R injury model in rats via activating the PI3K/Akt pathway 28. Urolithin A reduces myocardial infarct size and cell apoptosis, and enhances antioxidant capacity in mice after I/R through the PI3K/Akt pathway 29. In the current study, we aimed to evaluate the role of the PI3K/Akt pathway in the protective effects of TRIM8 knockdown on H/R-stimulated H9c2 cells. The data showed that knockdown of TRIM8 enhanced the activation of the PI3K/Akt signaling pathway in H/R-stimulated H9c2 cells, while inhibition of PI3K/Akt by LY294002 reversed the effects of TRIM8 knockdown on H9c2 cells, implying that the protective effects of TRIM8 knockdown were mediated by the PI3K/Akt signaling pathway.
Conclusion
In summary, the present study has demonstrated that knockdown of TRIM8 has protective effects on H/R-stimulated H9c2 cells with improvement of cell viability, decreased ROS production, increased antioxidants, and decreased cell apoptosis markers. Knockdown of TRIM8 protected H9c2 cells against H/R stimulation through the activation of PI3K/Akt signaling pathway. These present findings defined TRIM8 as a crucial mediator in myocardial I/R injury, thus providing a potentially novel therapeutic target for attenuating and preventing MI. However, to translate our present discoveries into clinical usage, future in vivo studies need to be addressed.
Ethical Approval
This study was approved by the Ethics Committee at The Second Affiliated Hospital of Xi'an Jiaotong University (Xi'an, China).
Statement of Human and Animal Rights
All procedures in this study were conducted in accordance with The Second Affiliated Hospital of Xi'an Jiaotong University of Ethics Committee's or Institutional Review Board's (Approval Number: 00024) approved protocols.
Statement of Informed Consent
Written informed consent was obtained from the patients for their anonymized information to be published in this article.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This
"year": 2020,
"sha1": "66b9aabdb37409db380474775ec257fc214d3bf0",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0963689720949247",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b953388175dbf86b97c4467a110c0fa541347885",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
MORPHODYNAMIC UPSCALING WITH THE MORFAC APPROACH
The Morphological Acceleration Factor (MORFAC) approach for morphodynamic upscaling enables the simulation of long term coastal evolution. However the general validity of the MORFAC concept for coastal applications has not yet been comprehensively investigated. Furthermore, a robust and objective method for the a priori determination of the highest MORFAC that is suitable for a given simulation (i.e. critical MORFAC) does not currently exist. This paper presents some initial results of an ongoing, long-term study that attempts to rigorously and methodically investigate the limitations and strengths of the MORFAC approach. Based on the results of a numerical modelling exercise using the morphodynamic model Delft3D, the main dependencies and sensitivities of the MORFAC approach are investigated. Also, a criterion is proposed for the a priori determination of the critical MORFAC, based on the CFL condition for bed form migration.
INTRODUCTION
Until recently it was only possible to numerically simulate coastal evolution at time scales of up to a couple of years while using traditional morphodynamic upscaling techniques such as the 'continuity correction' method. The introduction of the morphological acceleration factor (MORFAC) concept to coastal morphodynamic modelling by Lesser et al. (2004) and Roelvink (2006) has changed this. The MORFAC approach enables numerical simulations of coastal morphological evolution due to waves and currents at time scales of decades (Lesser 2009, Tonnon et al. 2007, Jones et al. 2007, Lesser et al. 2004) and - under very uniform forcing conditions (e.g. tides only) - for centuries (Dissanayake et al. 2009a, b, Van der Wegen and Roelvink 2008, Van der Wegen et al. 2008).
Bed level update in coastal morphodynamic models is facilitated via the sediment continuity equation. However, as the time scales associated with bed level changes are generally much greater than those associated with hydrodynamic forcing, to enable reasonably fast computations these models have until recently adopted the approach of updating bed levels and feeding them back into the hydrodynamic calculations only every few hydrodynamic time steps. The MORFAC approach departs from this traditional way of thinking and essentially multiplies the bed level change computed after each hydrodynamic time step by a factor (MORFAC) to enable much faster computation. The significantly upscaled new bathymetry is then used in the next hydrodynamic step, see Fig. 1.
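The sketch below illustrates this acceleration step on a toy 1D profile; it is a schematic of the loop in Fig. 1, not Delft3D code, and the discharge, transport coefficient and grid values are invented for the example.

```python
# Toy 1D morphodynamic loop illustrating the MORFAC step of Fig. 1.
# Schematic only: the discharge, transport coefficient and grid are invented.
import numpy as np

dx, dt = 15.0, 3.0                              # grid size [m], flow step [s]
x = np.arange(0.0, 600.0, dx)
zb = 2.0 * np.exp(-((x - 300.0) / 50.0) ** 2)   # initial hump on a flat bed
eta, q_w, alpha, eps = 4.0, 3.6, 1e-4, 0.4      # surface level, unit discharge,
                                                # transport coefficient, porosity

def step(zb, morfac):
    h = eta - zb                    # water depth below a fixed surface level
    u = q_w / h                     # 1D continuity gives the velocity field
    s = alpha * u**5                # Engelund-Hansen-type total load, S ~ u^5
    dzdt = -np.gradient(s, dx) / (1.0 - eps)    # Exner (sediment continuity)
    return zb + morfac * dzdt * dt  # bed change multiplied by MORFAC

for _ in range(1000):               # each flow step advances the morphology
    zb = step(zb, morfac=10.0)      # by morfac * dt of morphological time
print(f"hump crest after the accelerated run: {zb.max():.2f} m")
```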
Although it is very tempting to simply accept the MORFAC concept due to the massive increase in modelling time scales it affords, such a bold new concept should be rigorously and methodically assessed prior to its general acceptance. This paper presents some initial results of an ongoing study that attempts to systematically investigate the limitations and strengths of the MORFAC approach. Based on the results of a numerical modelling exercise using Delft3D, some of the main dependencies and sensitivities of the MORFAC approach are demonstrated, and a preliminary method for the a priori determination of the critical MORFAC is suggested.
METHODS
In this study it is assumed that the most accurate numerical model simulation currently possible is one undertaken with MORFAC (MF) = 1 (i.e. the benchmark simulation). Two simple and opposing idealised cases were considered: the morphological evolution of a symmetrical protrusion (hump) and a depression (trench), one being the direct opposite of the other, both morphologically and hydrodynamically. The hump is expected to represent features such as sand bars, while the trench represents channel-like features; both are commonly found features in the coastal zone. In both cases, the bed perturbation was initially located on a flat bed and subjected to uniform unidirectional flow. The ambient water depth was set to 4m above the plane bed, while the amplitude of the bed perturbation was set to 2m, resulting in a minimum water depth of 2m above the hump, and a maximum water depth of 6m in the trench, see Fig. 2. Two series of Delft3D simulations were undertaken in profile mode for the two bed perturbations considered. Uniform unidirectional flow was introduced from the left hand side (LHS) boundary and a zero-gradient flow boundary condition was imposed at the downstream (RHS) boundary. The Engelund and Hansen (1967) total load formulation was used for sediment transport calculations. In each series of simulations, the flow velocity, grid size, hydrodynamic time step and the MORFAC were varied systematically.
The critical MORFAC (MF_crit) is the highest MORFAC resulting in bed level predictions that are similar to those predicted by a benchmark case at the same morphological time (MT). Therefore, first, the benchmark conditions have to be determined. The benchmark conditions for each forcing condition were thus obtained by continuing the respective MF = 1 cases until morphodynamic equilibrium was reached (say at MT = T_e). Morphodynamic equilibrium was defined as the state in which both the change in amplitude and the change in propagation speed of the bed perturbation approached zero. The bed levels predicted at MT = T_e by subsequent MF > 1 simulations were then compared with the equilibrium morphology predicted by the benchmark simulation using the Brier Skill Score (BSS) (Van Rijn et al. 2003).
A BSS value of 1 indicates a perfect match between the bed levels predicted by the benchmark case and a case with a higher MORFAC. BSS values lower than unity indicate a difference between the bed levels predicted by the two cases; the lower the BSS, the greater the divergence between the predicted bed levels. In this study, BSS values less than 0.99 were considered to represent an unacceptable departure from the benchmark case. For a given set of model conditions (flow velocity, grid size, and time step), as the MORFAC was gradually increased from unity, the first MORFAC at which the BSS dropped below 0.99, before reaching MT = T_e, was considered as MF_crit.
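A minimal sketch of this skill-score test is given below, using the common mean-square-error form of the BSS (after Van Rijn et al. 2003) with the initial bed as the baseline and the MF = 1 bed playing the role of the "measured" profile; the bed profiles themselves are invented for the example.

```python
# Brier Skill Score of an accelerated run against the MF = 1 benchmark,
# with the initial bed as the baseline prediction (illustrative profiles).
import numpy as np

def brier_skill_score(zb_test, zb_benchmark, zb_initial):
    mse_test = np.mean((zb_test - zb_benchmark) ** 2)
    mse_baseline = np.mean((zb_initial - zb_benchmark) ** 2)
    return 1.0 - mse_test / mse_baseline    # 1 = perfect agreement

# Invented profiles: the high-MORFAC bed deviates slightly from the benchmark.
x = np.linspace(0.0, 600.0, 41)
zb0 = 2.0 * np.exp(-((x - 300.0) / 50.0) ** 2)      # initial hump
zb_bench = 1.2 * np.exp(-((x - 330.0) / 60.0) ** 2) # MF = 1 equilibrium bed
zb_fast = 1.2 * np.exp(-((x - 333.0) / 60.0) ** 2)  # MF > 1 bed at same MT

bss = brier_skill_score(zb_fast, zb_bench, zb0)
print(f"BSS = {bss:.3f} -> {'acceptable' if bss >= 0.99 else 'beyond MF_crit'}")
```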
Hump versus Trench
Two separate sets of simulations were undertaken for the hump and trench cases with flow velocities (U) of 0.5, 0.7, 0.9, 1.1, 1.3 and 1.5m/s. The grid size and hydrodynamic time step were kept constant at 15m and 3s respectively (resulting in a constant Courant number of 2.5), while the MORFAC was systematically increased from 1 to MF_crit. The dependency of MF_crit on the ambient Froude number (Fr) (i.e. Fr at the upstream boundary) is shown in Fig. 3. Two phenomena are clearly visible in Fig. 3. First, for both cases, MF_crit decreases exponentially with ambient Fr. This is intuitively correct as higher velocities will result in higher sediment transport and thus larger bed level variations, which will eventually lead to hydrodynamic instabilities in the model. Second, for a given Froude number, the trench case can accommodate a significantly higher MF_crit. This is also intuitively correct as velocities (and thus sediment transport) will increase over the hump while they will decrease over the trench. Therefore, further analysis is restricted to the hump case.
MF_crit versus grid size, hydrodynamic time step and Courant number
A second series of simulations was undertaken to investigate the dependency and sensitivity of MF_crit on grid size dx, hydrodynamic time step dt and Courant number Cr for the hump case. These simulations were undertaken for flow velocities (U) of 0.9m/s and 1.3m/s. For each U, first dt was kept constant at 3s while dx was systematically doubled from 3.75m to 60m (~1/5th of the hump width). Then dx was kept constant at 15m while dt was systematically doubled from 0.75s to 12s. This set of simulations resulted in Cr values that varied between 0.5 and 10 (10 being the largest recommended Cr value for Delft3D). MF_crit vs. Cr for this set of simulations is plotted in Fig. 4, which shows that MF_crit can vary by up to 1500 for the same Cr. Thus MF_crit appears not to be directly governed by Cr. To investigate the individual dependencies of MF_crit on dx and dt, in addition to the above set of simulations where Cr was allowed to vary freely while dx and dt were varied, another set of simulations was undertaken in which Cr was forced to remain constant at 2.5 while dx and dt were varied. Fig. 5 shows the resulting variations of MF_crit with dx and dt for both constant and varying Cr. An almost linear correlation between MF_crit and dx can be seen, regardless of whether Cr is constant or varying. However, the dependency of MF_crit on dt is less clear. For the cases where Cr was allowed to vary with dt (i.e. by keeping dx constant - solid circles), MF_crit increases with decreasing dt, which is intuitively correct as numerical stability increases with decreasing dt. However, when Cr is forced to be constant while increasing/decreasing dt (i.e. by simultaneously decreasing/increasing dx), MF_crit decreases with decreasing dt (unfilled circles), which is counter-intuitive. This indicates that the dependency of MF_crit on dx over-rides that on dt.
CRITERION FOR THE A PRIORI DETERMINATION OF MF_crit
To ensure numerical stability, the propagation of bed forms in one morphological time step should not exceed the grid cell size (CFL criterion). The propagation speed (or celerity) of bed forms C_bed is commonly estimated as
C_bed = b S / [(1 - ε) h]    (1)
where b is the power of the sediment transport formulation used, ε is the porosity, h is the water depth and S is the sediment transport magnitude.
Including the effect of MORFAC, the above-mentioned CFL condition requires that
CFL_MF = MF C_bed dt / dx ≤ 1    (2)
To examine whether the simulations undertaken here satisfy the above criterion, the CFL_MF values for the last successful simulation (i.e. the simulation associated with MF_crit) and the first unsuccessful simulation for each simulated combination of U, dx and dt are plotted as a binary plot (see Fig. 6). Successful and unsuccessful simulations are indicated by values of 1 and -1 respectively on the Y-axis of Fig. 6. It is clear that the CFL_MF values associated with both successful and unsuccessful simulations fall below the critical value of 1. This indicates that while the MF_crit simulations do satisfy the CFL_MF criterion given by Eq. 2, simulations fail at lower MF values than they should. Otherwise, all the circles (i.e. unsuccessful cases) in Fig. 6 would have been located farther to the right on the X-axis such that the associated CFL_MF values would be > 1. There could be numerous reasons for this premature failure, including: failure of the assumed linear relationship between morphological response and hydrodynamic response; MORFAC-induced errors in bed form celerity and/or amplitude; and a variable relationship between MORFAC and numerical errors (due to the advection/diffusion scheme), et cetera.
Based on the above analysis, it is clear that the development of a simple and definitive criterion to predict MF_crit for even this simplest case of unidirectional flow over a symmetric bed feature is nontrivial. Nevertheless, based on Fig. 6, it appears that, at least for the range of conditions tested here, CFL_MF < 0.05 may be used as a preliminary guide to obtain a safe first estimate of MF_crit. This returns an MF_crit value of about 100 when typical values used in nearshore coastal applications are substituted in Eq. 2 for dt (3-10s), dx (5-20m) and C_bed (~0.001 m/s).
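The arithmetic behind that first estimate is reproduced below; the specific dx and dt picked from the quoted ranges are our choice for the example.

```python
# First estimate of MF_crit from the proposed CFL_MF < 0.05 guideline,
# using typical nearshore values from the ranges quoted above.
dx, dt = 10.0, 5.0        # grid size [m] and hydrodynamic time step [s]
c_bed = 0.001             # bed form celerity C_bed [m/s], from Eq. 1

cfl_at_mf1 = c_bed * dt / dx               # CFL_MF contributed per unit MORFAC
mf_first_estimate = 0.05 / cfl_at_mf1      # largest MF keeping CFL_MF < 0.05

print(f"CFL_MF at MF = 1: {cfl_at_mf1:.1e}")
print(f"safe first estimate of MF_crit: {mf_first_estimate:.0f}")   # -> 100
```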
CONCLUSIONS
A strategically designed series of Delft3D simulations has provided new insights regarding the dependencies and sensitivities of the MORFAC approach for morphodynamic upscaling. The main findings are:

1. The critical MORFAC (MF_crit) has a strong dependency on the Froude number Fr and the grid size dx. MF_crit decreases exponentially as Fr increases, while it increases almost linearly with dx.
2. MF_crit does not appear to be directly governed by the Courant number (Cr).
3. The criterion CFL_MF < 0.05 may provide a safe first estimate of MF_crit.
It should be noted that the results and conclusions presented herein may not be directly applicable to complex real-life situations, which are likely to involve highly non-uniform morphology and time-varying non-linear forcing (e.g. tides, waves and wind). Research is currently being undertaken to further investigate the many dependencies and sensitivities of the MORFAC approach, with the ultimate goal of developing an effective criterion for the a priori determination of MF_crit for any given real-life situation.
Figure 1. General structure of coastal morphodynamic models and the MORFAC concept.
Figure 2. Initial bathymetries: (a) hump on a flat bed, and (b) trench on a flat bed.
Figure 3. MF_crit versus ambient Froude number for the hump and trench cases. Grid size = 15 m and hydrodynamic time step = 3 s in all cases.
Figure 4. MF_crit versus Cr for U = 0.9 m/s. dx was varied between 3.75 m and 60 m while dt was varied between 0.75 s and 12 s. Cr varied freely between 0.5 and 10.
Figure 6. CFL for bed form propagation (CFL_MF) for the last successful simulation (i.e. the simulation associated with MF_crit; asterisks) and the first unsuccessful simulation (solid circles) for each simulated combination of U, dx and dt. | 2018-12-17T19:54:28.696Z | 2011-01-29T00:00:00.000 | {
"year": 2011,
"sha1": "b5307592682ac44b1931da0790c314c77ec5c81c",
"oa_license": "CCBY",
"oa_url": "https://icce-ojs-tamu.tdl.org/icce/article/download/1085/pdf_183",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b5307592682ac44b1931da0790c314c77ec5c81c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
249135492 | pes2o/s2orc | v3-fos-license | Fresh transfer of an average quality slow growing day-3 embryo versus frozen transfer in a poor responder: a clinical management dilemma
A common conundrum faced by clinicians is whether to perform a fresh transfer or to culture the embryos further for a future frozen transfer when an embryo is slow-growing. This case report describes a successful pregnancy following the fresh transfer of a single day-3, 6-cell grade B embryo in a patient with poor ovarian reserve. Although more research is needed in this context, fresh transfer can be considered a treatment option for slow-growing embryos in patients with an optimal endometrium and well-controlled progesterone levels.
INTRODUCTION
Embryo transfer can be performed either at the cleavage stage (day 3) or at the blastocyst stage (day 5). The blastocyst stage was once considered the optimal stage for transfer, yielding the highest clinical pregnancy rates. The rationale is that extended culture improves the synchronicity between the embryo and the endometrium and enables embryo self-selection. However, even with the best culture media and conditions, not all cleavage-stage embryos will convert to blastocysts. In fact, extended culture may sometimes harm good-quality embryos owing to suboptimal culture conditions. According to the Cochrane review of 2016 (Glujovsky et al., 2016), there are no significant differences in cumulative pregnancy rates between day-3 and day-5 transfers in fresh and frozen cycles.
Slow-growing embryos are expected to have lower implantation and clinical pregnancy rates compared to normally growing embryos (Shapiro et al., 2001). However, there is still a paucity of evidence regarding the better option, i.e., proceeding with a fresh transfer or continuing culture and freezing the embryos for frozen transfer at a later date.
This report discusses the case of a patient in whom we transferred a slow-growing, average-quality embryo in a fresh cycle, rather than culturing it further and freezing it for transfer in a subsequent cycle, which resulted in a successful pregnancy.
CASE REPORT
A 31-year-old woman visited our fertility clinic in March 2020 with a history of secondary infertility after six years of married life. She had conceived naturally three years earlier, but since the pregnancy was unwanted, medical termination was performed at 6 weeks of gestation. She had a regular, 28-day cycle. The patient had no significant medical or surgical history and no history of drinking or smoking. Her husband was 36 years old. He had a history of erectile dysfunction and hypertension, for which he had been evaluated and given appropriate treatment. He did not drink or smoke. There was no other significant history.
Basic infertility evaluation showed a low ovarian reserve, indicated by an AMH (anti-Müllerian hormone) level of 0.1 ng/ml and an AFC (antral follicle count) of 3. Her baseline estradiol and FSH (follicle-stimulating hormone) levels were within normal limits (28.5 pg/ml and 4.5 mIU/ml, respectively). Her hysterosalpingogram showed bilaterally patent tubes. Semen analysis revealed normozoospermia. On further evaluation, she was diagnosed with overt diabetes mellitus (fasting blood sugar of 263 mg/dl and HbA1c of 12.7%). She was started on oral hypoglycaemic agents and her blood sugars were controlled over a period of three months.
The patient was counselled on the need for pooling IVF (in-vitro fertilization) in view of her poor ovarian reserve. Controlled ovarian stimulation was started in June 2020 using the antagonist protocol. Only one dominant follicle (18.5 mm) developed after nine days of stimulation and a total of 2700 IU of gonadotropins (INJ GONAL-F (recombinant FSH), 1875 IU + INJ HUMOG-HP (highly purified HMG), 825 IU). Ovulation was triggered with a dual trigger (Inj Ovitrelle 250 mcg + Inj Decapeptyl 0.2 mg). Oocyte retrieval was performed 34.5 hours after trigger; one oocyte was retrieved and successfully fertilized. The progesterone level on the day of trigger was 0.1 ng/ml. Endometrial thickness on the day of pick-up was 10 mm. We planned a fresh transfer on day 3 and started the patient on 50 mg intramuscular progesterone from the day of oocyte retrieval for 3 days. On day 3 (assessment done 68 hours after insemination), we obtained a 6-cell grade B embryo (Alpha Scientists in Reproductive Medicine & ESHRE Special Interest Group of Embryology, 2011) with 20% fragmentation (Figure 1). The patient was counselled about the slow-growing, average-quality embryo, and the embryo was transferred fresh. Fifteen days later, beta-HCG was positive at 299.50 mIU/ml. The patient was followed with regular antenatal check-ups and scans. At present, her pregnancy is progressing well, with her blood sugars controlled on insulin. She is currently 26 weeks pregnant.
DISCUSSION
The patient described in this case belonged to Poseidon group 3 (Humaidan et al., 2016) and had a very low ovarian reserve (AMH 0.1 ng/ml). The treatment options available for such patients include oocyte retrieval followed by fresh transfer, a pooling IVF cycle, or counselling for donor-oocyte IVF (Çelik et al., 2018). In this case, the patient was not in favour of donor oocytes. Moreover, other factors favoured an optimal outcome with self-IVF, including her age, her previous natural conception, and her normal FSH levels. Hence, the decision was made to proceed with her own oocytes.
However, despite stimulation with high doses of gonadotropin, only one oocyte could be retrieved, which subsequently fertilised. Fresh transfer was chosen over freezing and pooling because of the cost involved in vitrification of embryos. Moreover, a study by Haas et al. (2019) demonstrated that transferring fresh slow-growing embryos produced better outcomes than culturing and freezing them and later thawing them for a frozen transfer.
ACCUVIT (accumulation and vitrification) of embryos is an effective, evidence-based treatment option in patients with very low ovarian reserve, like the one described in this case report (Cobo et al., 2012). However, the chances of a slow-growing embryo converting to a blastocyst on further culture are not good.
To conclude, if the endometrium is in optimal condition on the days of trigger and oocyte retrieval, and progesterone is well controlled, the option of a fresh transfer can be considered.
CONCLUSION
Although more research is needed in this context, fresh transfer can be considered a treatment option in patients with an optimal endometrium (pattern and thickness) and slow-growing embryos. | 2022-05-29T15:18:08.751Z | 2022-11-09T00:00:00.000 | {
"year": 2022,
"sha1": "bc2809eeb9c5b517f94c5191a17a1eebeca7dccd",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2822167d0c4d38eac2cbd8205af2e63b9a0b5aa",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
256788093 | pes2o/s2orc | v3-fos-license | Migrant Agency in an Institutional Context: The Akmola–Astana Migration System
Abstract This article addresses one of the key challenges facing transitional and emerging economies: managing rural–urban migration to tackle rural decline and the associated rapid urbanisation. We introduce New Institutionalism as a novel conceptual framework to analyse the interactions between the institutional environment and migrant agency in a rural–urban system: the Akmola–Astana migration system in northern Kazakhstan. Our results suggest that the government might be more successful if it engages migrant agency and incentivises remaining in rural areas instead of designing policies to discourage rural–urban migration.
and drawing on empirical data from the Akmola–Astana migration system in northern Kazakhstan. Our focus on rural–urban migration is timely, given the rapid urbanisation currently taking place in many transitional and emerging economies, especially in post-Soviet states. While urbanisation is frequently credited with accelerating economic development and societal well-being, the underlying rural–urban migration processes also entail societal costs. On the one hand, rural out-migration has been linked to the loss of young human capital, known as 'brain drain', and an associated demographic shift towards an ageing rural population. On the other hand, urban in-migration can put a strain on affordable housing and the provision of public goods (Massey et al. 1998, p. 48). This has prompted some governments to design regulatory policies to manage rural–urban migration (Beauchemin & Schoumaker 2005, p. 1129). These policies usually focus on either enabling or constraining migrant agency by providing incentives for potential migrants to remain in rural areas or by deterring them from moving to cities (de Haas 2011, p. 6). Since its independence in 1991, Kazakhstan's government, for instance, has been experimenting with various policy interventions to regulate internal migration flows, initially encouraging migration into Astana, its new capital, founded in 1997, and later trying to halt it. Astana experienced a period of rapid urbanisation, more than tripling its population from around 300,000 inhabitants in the 1980s and most of the 1990s to one million in 2018. The highest share of incoming migrants to Astana originated from the surrounding Akmola province (StatKaz 2019).
It is now well accepted that both structure/institutions and agency shape migration processes and that neither has superior explanatory power vis-à-vis these processes. In fact, focusing on individual agency to migrate without considering the institutional environment that aims to shape migration, and vice versa, risks missing the complexity of migratory processes (Castles & Miller 2013, pp. 28, 30). Simultaneously considering the interrelated forces of institutions and agency provides a much more nuanced understanding of how migration processes might be influenced and shaped (Lacroix 2014, p. 671). Still, studies on rural–urban migration do not often explicitly consider institutions and agency simultaneously in their analyses, or the interaction between them (Bakewell 2014, pp. 306, 309). Yet there is a need to understand the complex interactions between individual (migrant) agency and the political, economic and social environment that facilitates or constrains agency, in order to understand how policy measures affect migration processes. To fill this gap, we apply New Institutionalism to better understand migration processes, as it provides an opportunity to examine migrants as social agents who manoeuvre their way through complex institutional incentives and constraints while, at the same time, exerting pressure on institutions to change. While New Institutionalism has been widely used, it has not yet been applied to the analysis of migration processes. Although there are migration research frameworks that incorporate institution–agency interaction, such as translocality (Greiner & Sakdapolrak 2013, p. 375) or migration systems (Bakewell 2014, p. 306), New Institutionalism is, in our case, an ideal conceptual framework, as the core of the theory focuses on a changing institutional environment. Valid reasoning about people's agency in a migration system should include the investigation of both ends of the system, the sending and receiving locations, and the linkages between them (Castles & Miller 2013, p. 27). This is even more true given that the livelihoods of most migrants and migrant households are translocal (Thieme 2008b, p. 67). Our central research question is, therefore: how does the institutional environment affect people's agency in the sending and receiving areas of the Akmola–Astana migration system, and vice versa?
In the next sections, we first outline the merits of New Institutionalism as a conceptual framework and demonstrate how it helps to understand the interaction between institutions and agency in migration processes. Second, we describe the evolution and transformation of Kazakhstani migration institutions, unpack individual migration decision-making processes and look at how the agency of (potential) migrants has been framed by existing state interventions. We will show, for instance, that when the Kazakhstani government artificially raised urban housing costs through interventions in the Astana housing market, migrants responded through collective agency by drawing on their family networks: urban relatives provided accommodation and rural relatives supported their migrant family members financially with so-called reverse remittances. This and other results suggest that, instead of constraining migrants' agency, promoting institutions that expand the agency of the rural population to stay is more effective in moderating rural–urban migration.
The merits of New Institutionalism in framing migration processes
Although we can look back on decades of research on international and internal migration, no comprehensive theory is available to explain migration processes, such as the decision to migrate or not. A number of renowned scholars, such as Portes (1997, p. 811) and Castles (2010, p. 1582), even argue against the idea of an all-embracing theory for migration studies. They suggest that migration research should use middle-range theories that can integrate the insights of various social sciences in order to improve the understanding of migration. De Haas (2010b, p. 241) acknowledges that these appeals correspond with a general paradigm shift in contemporary social theory away from grand theories and towards hybrid approaches that can integrate a range of disciplines, paradigms and theories and that are both flexible and disciplinarily neutral (King & Skeldon 2010, p. 1634).
A conceptual framework based on New Institutionalism makes it possible to merge general sociological and economic assumptions about institutions and agency with actual policies. New Institutionalism sits at the junction of political science, sociology, history and economics and, as pointed out by March and Olsen, its 'spirit is to supplement rather than to reject alternative approaches' (March & Olsen 2006, p. 16). As such, New Institutionalism has great power to provide an integrative framework for complex research designs (Goodin & Klingemann 1996, p. 25).
New Institutionalism is able to theoretically embrace institutional genesis, reproduction and change, and it links individual agency at different societal levels (Thelen 2004, p. 31). At the macro-level, New Institutionalism can explain global and national processes of social, economic, political and cultural change while linking them to agency within an institutional environment. The institutional meso-level is attached to networks, communities and localities that are relevant for social interaction and links the macro- and micro-levels with each other (de Haas 2010a, p. 1591).
There are three well-established schools of New Institutionalism: historical institutionalism, sociological institutionalism and rational choice institutionalism (Olsson 2016, pp. 1, 22). Although the schools differ somewhat in their understanding of the mechanisms of institutional change, they are united in their theoretical core (Peters 2012, p. 184). Thus, in line with scholars as diverse as Jakimow (2013, p. 494), Lowndes and Roberts (2013, pp. 40-1), Koning (2016, p. 639), and March and Olsen (2006, p. 16), we combine elements from all three schools into one conceptual framework.
Figure 1 depicts our conceptual framework of New Institutionalism in the context of a migration system. Comprehensive consideration of all three schools extends our understanding of the ways in which institutions at different societal levels affect individual agency, in our case rural–urban migration decisions (Campbell 2004, p. xiv; Jakimow 2013, p. 499), but also of path dependencies in migration systems, or positive and negative feedback loops, allowing a better understanding of institutional change or stasis across time. Sociological institutionalism delivers an understanding of how a migration system interacts with societal norms, beliefs and ideas, and of how ordinary or elite actors within a particular institution foster or inhibit institutional change, on the one hand, and how institutions influence agency, on the other. Rational choice institutionalism allows us to understand the reasoning behind decisions related to migration processes, for example, subjective cost–benefit considerations and the effects of the possession and use of power.
As mentioned before, we follow an institutionalist view of agency in which the institutional environment (in both sending and receiving areas) frames the ability of actors to make choices (Lowndes & Roberts 2013, pp. 16, 52). Agency possesses an iterative element (based on past patterns of thought or action), a projective element (based on perceived possible future trajectories of action) and an evaluative element (based on the practical and normative assessment of alternative actions) (Emirbayer & Mische 1998, p. 971). This implies that exercising agency involves evaluating a given situation, prior experiences and possible solutions; responding to uncertainties or challenges; inventing new possibilities; and mediating between and contextualising possible consequences. This institutionalist view of agency is classified by Lowndes and Roberts (2013, p. 106). We will use this systematisation of agency in our analysis of migrant decision-making in the Akmola–Astana migration system, distinguishing four classifications of agency: cumulative agency, when the actions of many independent actors have an effect on an institution; collective agency, when actors work together under the same institutional environment; combative agency, when actors oppose other actors and their institutions; and constrained agency, as actors are always constrained to some extent. Although any one of these agency types may dominate in certain situations, they often exist simultaneously, demonstrating the diversity and overlap of people's agency (Coe & Jordhus-Lier 2011, p. 217).
Methodology
Our research uses a mixed-methods approach combining qualitative and quantitative data, collected between 2016 and 2017 in the Akmola province of Kazakhstan. This article draws on 68 qualitative semi-structured interviews with potential and actual migrants, and with policymakers. The interviews were conducted by a lead researcher, assisted by domestic research assistants. Interviews were held either in Russian or Kazakh and usually took 30-40 minutes. We conducted interviews with 23 government officials at different administrative levels and with four migration experts and political scientists, who were purposively selected. Moreover, we interviewed 27 potential rural migrants in several villages in Akmola province and 14 rural–urban migrants in Astana in order to understand the regulatory/policy environment as well as the agency of (potential) migrants. Interview participants were identified using a mix of random route sampling and snowball sampling. Random route sampling was used to seek out people with different life experiences, perspectives and characteristics. In remote villages, interview participants were also asked for their assistance in finding additional interview partners, that is, individuals with migration experience or with a migrant family member. A list of all interviews referenced in this article can be found in Table A3 in the Appendix.
This article also draws on data from a quantitative survey of 400 rural households (potential migrants). The qualitative research served as the basis for the design of the household survey. The quantitative survey of rural households followed a three-stage clustered random sampling procedure (districts, villages, households). In each village, ten households were randomly identified via random route sampling, as household lists were not made available. Within each household, the person between the ages of 16 and 50 who had most recently celebrated a birthday was interviewed. On average, interviews took 80 minutes and were conducted in Russian or Kazakh, depending on the preference of the respondent. In these interviews, relevant data on all adults and on the general socio-economic situation of the household were collected. Survey questions aimed at understanding the institution–agency interaction of rural dwellers who intend to stay put or to migrate.
The Akmola-Astana migration system
During Soviet times, notable parts of the population moved to and from Kazakhstan. After the collapse of the Soviet Union, many migrants, most notably ethnic Germans and Russians, returned to their places of origin or titular states. In 1990, the population of Kazakhstan was estimated at 16.3 million. Emigration related to the dissolution of the Soviet Union caused this number to drop to 14.9 million in 2003. To counter the losses, in the late 1990s the Kazakhstani government initiated a return programme, the Oralman Programme, to encourage the return of ethnic Kazakhs living, for example, in Mongolia, Uzbekistan or China. In the years that followed, the Kazakhstani economy strengthened, leading to significantly less out-migration and a slightly higher return of ethnic Kazakhs to their home state. This resulted in a positive migration balance that has persisted since 2004. Combined with a high birth rate amongst ethnic Kazakhs, the total population grew to 16.3 million in 2010 and to 18.3 million in 2018 (StatKaz 2019).
Institutional environment of the Akmola-Astana migration system
The institutional environment of the Akmola–Astana migration system can be separated into two categories: urban planning policies, especially in the context of nation-building, which revolved around the development of the new capital Astana (see Table 1); and rural and regional development policy measures, such as building infrastructure and regional education and health facilities, aimed at slowing the rural exodus (see Table 2).

Internal passport and registration policy (Historical institutionalism)

The former Soviet Union used an internal passport system and city registration to regulate population movement and urbanisation (Osmonova 2016, p. 237). After Kazakhstan's independence, this institution was reformed to allow registration by current address. Registration at the place of residence is still mandatory but can no longer be denied. If a person owns a home or shows a valid rental contract at the place where they wish to register, registration is granted. If not properly registered, new urban migrants are, however, often harassed by the police (Yessenova 2005, p. 670).* Nevertheless, most potential and actual migrants perceive this to be a nuisance rather than a deterrent to rural–urban migration. Therefore, many people in Astana remain unregistered despite forgoing potential benefits associated with registration, for example, public services such as free healthcare (Sanghera et al. 2012, pp. 15, 28). This in turn fosters the translocality of migrants, as they have to return to their place of registration to access certain public services.
Astana 'city of modernity' and political/cultural centre (Sociological institutionalism)

The Soviet narrative that cities are the cradle of modernisation and progress is still in effect and perpetuated in today's Kazakhstan (Alexander et al. 2007, p. 2). The narrative promoted by the government is that Astana is 'catching up with the world' and provides a chance for ordinary Kazakhs to participate in modernity (Anacker 2004, p. 531; Laszczkowski 2016b, p. 149). About half of the villagers in the household survey stated that, compared to cities like Astana, there was no social or cultural life in the village and that their children would have a better life in the city. Moreover, attractive new higher education facilities were established in Astana, culminating in the establishment of Nazarbayev University, probably the most prestigious university in Kazakhstan; see, for example, Koch (2014b, p. 51). Frequently, students who move to the city for education develop aspirations for an urban career and lifestyle. This weakens their familial and emotional attachment to their rural home region, and many do not return (Buchenrieder et al. 2020). The Astana migration system is therefore characterised by the accelerated rural–urban migration of young people from the countryside, a lack of well-educated young professionals, and an over-ageing of the sedentary population in rural areas. About three-quarters of the household survey respondents stated that, in cities like Astana, their children would have access to better education facilities, and 40% of them stated that the lack of education facilities is a major constraint to remaining in situ.†
Urban job market (Rational choice institutionalism)
Moving the capital to Astana created a vibrant job market with relatively high salaries compared to the surrounding rural areas. In both the qualitative interviews and our household survey, potential migrants in rural areas often pointed to the ease of finding high-paying jobs in Astana. About half of the household survey respondents stated that they believed their career prospects would improve and that they could achieve a higher standard of living in the city.
Moving the capital city from Almaty to Astana lies at the heart of Kazakhstan's official nation-building project (Anacker 2004, p. 515; Bekus 2017, p. 806; Caron 2019, p. 183). The government transformed the narrative of a multi-ethnic country into one of a country for (ethnic) Kazakhs. This includes recognising Kazakh (together with Russian) as the national language (GovReKaz 1997; Caron 2019, p. 201).
(Sociological institutionalism) Kazakhification is an unofficial (the official discourse describes it as 'harmonisation') but on many levels observable policy, visible, for example, in the renaming of streets after Kazakh traditions or historical figures and the dismantling of Soviet monuments (Bekus 2017, pp. 797, 800; Caron 2019, pp. 186-87, 196). More importantly, speaking Kazakh has become more relevant in business and, in particular, within public administration (Wolfel 2002, p. 501; Peyrouse 2007, pp. 484-85; Bissenova 2017, p. 652). Public employees must be able to speak both languages at a sufficiently high level. However, only 10% of the ethnic Russians surveyed stated that their level of Kazakh was sufficient to find a job in the city. Thus, there is a strong rural–urban migration constraint along ethnic lines. This is also reflected in the fact that ethnic Russians originating from the countryside and studying at higher education facilities in Astana have a much higher intention to return than ethnic Kazakhs (Buchenrieder et al. 2020).
Housing market / city development (Historical institutionalism)
Initially, migrants were welcomed to populate the new remote capital city, Astana. Following a slow start, Astana grew quickly until the early 2000s, when the city government found itself needing to restrict the influx because the urban infrastructure could not keep pace. As the City Planning Department regulates the designation of building land, construction planning and building permissions, it determines the size of the housing market. By rationing the number of construction permits for new housing, it intentionally drove housing prices up. About 70% of the household survey respondents stated that they could not find affordable housing in the city (see Table A1 in the Appendix).‡
(Sociological institutionalism)
As a counterbalance to the high property prices and rents, the government set up housing programmes for state employees. These subsidised housing programmes allow people in low-income occupations to gain access to affordable housing in Astana (Bissenova 2017, p. 644). However, the process can take up to seven years from application to allocation.¶

In response to over-ageing and the lack of human capital in rural areas, special scholarship programmes were set up. These programmes offer scholarships for young rural adults to study professions in demand in rural areas. Graduates are attracted to rural areas, in a nationwide initiative, by the offer of higher salaries and subsidised housing. Participants in these programmes are required to work for five years in rural areas (GovReKaz 2018). Furthermore, decentralised education facilities have been established and adapted to regional demand. In Akmola this has led to the opening of colleges in regional towns, mostly in the following professional fields: agriculture, mechanical engineering and primary health care. Thus, young adults do not have to leave their home region for higher education, which may even lead them to stay in the region after graduation. The decentralisation of education facilities also creates local jobs and attracts new businesses (Buchenrieder et al. 2020).

To some degree, a number of these policies are at odds with each other and risk creating a contradictory institutional environment for migrants. Astana continues to grow and is being promoted as a city designed for more than three million inhabitants, as declared by Nursultan Nazarbayev (President of Kazakhstan 1990-2019) in 2016. This may further spur rural–urban mobility. There is also an understanding amongst government officials and experts that it may be impossible to create equal living conditions across all regions of Kazakhstan, that some remote villages are beyond saving, and that further urbanisation should be promoted. Nevertheless, senior politicians and government officials see unregulated urban growth as a major problem that will increase the burden on social infrastructure, especially healthcare and education. Moreover, government elites may be even more concerned about dissatisfied urban masses living in precarious conditions close to the centre of power. Once a critical mass of angry youth is reached, this could have the potential for regime-changing protests. Hence, there is a strong desire to regulate urban migration.

Migrant agency in the Akmola–Astana migration system

The government's portrayal of Astana as a shiny modern metropolis, combined with the fact that the capital city is an economic and political power centre, continues to motivate rural residents to migrate to Astana (cumulative agency). As mentioned above, this massive migration influx has taken a toll on urban infrastructure, which, from 2003 onwards, prompted the city government to create policies aimed at repelling in-migration; for example, by limiting free health care to registered city citizens. One way to cope with the high housing costs is to stay in precarious accommodation without proper registration. Landlords became aware of a lucrative business opportunity and began renting out single apartments to unusually high numbers of migrants, sometimes up to 20 people. However, this practice was prohibited in 2016 through a government regulation stipulating a minimum of 15 square metres per tenant.
Despite these state interventions, migrants continue to flow into the capital in the hope of finding better-paid jobs or starting a business. Most have relatives or acquaintances already living in Astana who help them find their first jobs. These jobs are usually below their level of education, often in the service or construction sector. In many interviews, including interviews 16, 18, 19, 20 and 21, respondents noted their intention to bring their families to Astana but admitted that, at the time, they were only able to take care of themselves and were struggling with the precarious income-cost ratio. This job mismatch may also be caused by a rather naïve belief in the supposedly endless job and business opportunities in Astana, as shown in the example of Rustam. Rustam was a student in his final year of mining technology at a regional college and planned to work in the gold mine in his home village. However, he wanted to move to Astana in about five years, because 'there are so many job opportunities'. It was not likely, however, that Rustam would find a job in Astana that corresponded with his academic qualification.

Collective migrant agency (actors working together in the same institutional environment) is shaped through collective decision-making within families, because individual migration may affect the whole family. Almost all respondents in the rural household survey agreed that migration decisions were made jointly within the family. One of our interviews provides further insight here. Madina was an ethnic Kazakh from Mongolia who had moved to Akmola ten years earlier. She worked as a system operator in a local gold mine and lived with her three daughters. Together they decided that, in the medium to long term, they wished to move to an urban centre, and they planned their professional lives accordingly. The eldest daughter worked with Madina in the mine; the second eldest attended a college in Astana; and the youngest was still attending high school in the village but planned to study in Astana as well. The eldest daughter had already tried her luck once in Astana but failed: she was unable to find a job with a salary high enough to cover the exorbitant cost of living and, lacking a supportive family network in Astana, was forced to return to the countryside. However, the whole family intended to move together to a city, preferably Astana, as soon as the youngest daughter graduated from high school. This example highlights the translocal character of migration processes and the safety-net function of families at the place of origin.

The joint decision-making of family members also relates to risk-pooling strategies, such as remittances sent by urban migrants to their families at the place of origin. As in other parts of Central Asia, this exemplifies the strong translocal links between different locations (Thieme 2008a, pp. 327, 338). However, unlike other parts of Central Asia, such as Kyrgyzstan, where transnational livelihoods are multilocal, spanning countries as well as rural and urban locations (Thieme 2014, p. 140), in Kazakhstan they are mostly intranational, covering rural and urban locations. Also, receiving remittances and return migration are far less common in Kazakhstan, and there is a reverse flow of remittances from rural to urban areas.
Around one third of the respondents in the household survey sent reverse remittances, compared to only one fifth who received them. Furthermore, about four times more money was transferred to urban migrants than was received by rural households. This is relevant as urban income levels often do not match migrants' expectations (Osmonova 2016, p. 239). According to Dietz et al. (2011, p. 23), a large share of migrants in Astana reported their income to be the same as or even less than before they migrated. It is more likely that family members who stay behind will later join the migrant, for example, after they retire, rather than the other way round. In our qualitative interviews, villagers reported the intention to reunite with their adult children in urban areas once they were old or had retired. Overall, 25% of the rural household survey respondents stated that there was a possibility that they would move at a later point in their life.
The case of Sergei and Maria illustrates combative agency (opposing other actors and their institutions). Born in a small village in Akmola province, they were both ethnic Russians, and neither had received a higher education. Soon after Astana became the new capital, Sergei was drawn to the city and worked as a day labourer on construction sites before moving on to work for an air-conditioner maintenance company. The owner started a side business producing advertising material, such as small give-aways made from plastic or metal but also huge banners covering whole building facades, where Sergei began to work. After a while he left the company and set up his own business producing advertising material. However, because he did not speak Kazakh, the municipality and government departments stopped hiring his company. Public jobs require a certificate of proficiency in Kazakh (Peyrouse 2007, p. 485). This does not officially cover government contracts for external service suppliers; however, government officials lean towards applying this regulation informally in government tenders. The couple contemplated leaving for Russia but decided against it. At the time of the interview, Sergei's company catered to the private sector and employed around 20 people. The couple and their two children counted as upper middle class in Astana. Despite losing contracts because of the government preference for Kazakh speakers, the company managed to adapt its customer base and the couple remained in Astana. Similarly, Blackburn (2019, p. 227) provides evidence that Russian speakers are pushed towards the private sector in Kazakhstan.
Constrained agency refers to the fact that actors always face constraints that affect their choices. This is, of course, also true for (potential) migrants, who may find their intentions to move or stay constrained for various reasons. In our household survey, many respondents stated that the lack of good education facilities in rural areas was a major constraint to remaining in situ (see Table A1 in the Appendix). An example of this is the case of Nurlan, who worked as a security guard in Astana. Previously, he had lived and worked in Baikonur and his family had no intention of moving. However, after his wife visited Astana, she was convinced that the city would offer the best educational facilities and prospects for their children, and they moved to Astana. In contrast, a substantial number of the household survey respondents (25%) thought of themselves as 'forced stayers'. Many of them stated that they would find it difficult to move because they had to take care of family members, including elderly relatives who were unwilling or unable to move. Even though nursing homes exist in both rural and urban areas, almost all of our household survey respondents said it was still socially unacceptable to place elderly relatives in such homes.
Nevertheless, the majority of our household survey respondents (68%) did not wish to move and were voluntary stayers. Most were quite content with their life in the countryside and believed that, with hard work, they could still make a decent living in the village (see Table A1 in the Appendix for details). Even if the migration system did not encourage them to stay, neither did it force them to move. Many people were also emotionally attached to the countryside. They preferred the rural lifestyle and appreciated the natural environment and the traditional social norms, as shown by statements such as, 'Nature is so beautiful'; 'I love fishing and hunting'; and 'All my friends and family live here'. Not surprisingly, 71% of our household survey respondents stated that they enjoyed the rural way of life, and almost 90% mentioned that growing their own food was important to them. However, despite this positive perception of life in the countryside, many respondents wanted their children and grandchildren to move to the city because life there was perceived to be more comfortable and to offer better educational and job prospects (see Table A1 in the Appendix). This was frequently the starting point for relocating the whole family. In our interviews, we repeatedly heard the statement: 'We will stay until our children finish school (secondary/high school), and then we will all move to the city'.

Urban areas located in northern Kazakhstan were heavily Russified during Soviet times, with the result that many urban Kazakhs speak better Russian than Kazakh (Wolfel 2002, p. 490). Although Russian is still important as a business language, the political environment is such that fluent command of Kazakh is a demanded skill, and one can observe a change in language use in favour of Kazakh, especially amongst young Kazakhs (Smagulova 2016, p. 101). As mentioned before, public jobs require a certificate of proficiency in Kazakh, and private companies in urban centres often seek native Kazakh speakers as contact persons for state authorities. This is shown in the case of Alexandr, an ethnic Russian. He studied accounting in Astana but could not find a job in his profession as he lacked the necessary Kazakh language skills. His friends from university all remained in Astana, even if it meant taking on jobs unrelated to their education, which he refused to do. For this reason, and because he liked the rural lifestyle, he decided to return to his village. At the time of our fieldwork, it had been roughly 20 years since the issuance of Law No. 151-I on languages, which defined Kazakh as the state language (GovReKaz 1997). Despite free language courses offered by the government, however, ethnic Russians overwhelmingly had not learnt the language. Nevertheless, many Russians consider Kazakh-language proficiency to be linked to better economic opportunities (Smagulova 2016, p. 101). It is therefore possible that lack of Kazakh proficiency deters some ethnic Russians from moving to the city. Studies suggest several reasons for this linguistic shortfall, including the negative feelings of many Russians in relation to a perceived loss of status as the ethnic majority and political elite (Laitin 1998, p. 155; Wolfel 2002, p. 501; Blackburn 2019, p. 224). Nevertheless, Russian-speaking urban Kazakhs still have a very positive attitude towards Russian-speaking minorities (Blackburn 2019, p. 230).
This interpretation is supported by our finding that ethnic Russians were more inclined to stay in the countryside than ethnic Kazakhs. Only 15% of ethnic Russians in our household survey intended to move within the next three years, compared to 30% of ethnic Kazakhs. Similarly, findings by Aldashev and Dietz (2014, p. 393) indicate that ethnic Russians are less mobile within Kazakhstan. This may in part be explained by the fact that the ethnic Russians who emigrated from Kazakhstan after the collapse of the Soviet Union were mostly young people. Thus, the ethnic Russian population is on average older than the ethnic Kazakh population (Peyrouse 2007, p. 493), and it is well known that older people are less mobile (see Oh 2003). However, at the same time, ethnic Russian university graduates interviewed in a quantitative survey had a much greater intention to return to the countryside than their Kazakh counterparts (Buchenrieder et al. 2020). Also, fewer ethnic Russians in our household survey have supportive family networks in Astana (only 16% compared to 33% of ethnic Kazakhs), and these networks are also smaller (three relatives for ethnic Russians versus ten for ethnic Kazakhs). This may result in less family support for new arrivals and may also affect the reverse remittances sent. Although ethnic Russians in our household survey send fewer reverse remittances (as fewer of them move), the average amount they send is substantially higher (about 20%) than the average amount sent by rural ethnic Kazakh families. If our results also hold true for a larger sample, this could mean that the Kazakhification of the northern countryside might reverse in the long run. However, the ethnic Russian population is still shrinking due to low birth rates and over-ageing (Laruelle 2018, p. 71), which may offset the fact that more ethnic Kazakhs from the north are moving to the cities.
Discussion of the interaction between institutions and agency in the Akmola–Astana migration system
Kazakhstan was the only Soviet successor state whose titular group was an ethnic minority (Schatz 2000, p. 489). The most important task of the newly independent government was thus to establish a Kazakh conception of nationhood, in which the ethnic composition of the population played an important role. The goal was to increase the ethnic Kazakh share of the population to above 50% (Kesici 2011, p. 32) and to claim the land as historically Kazakh territory. Today, ethnic Kazakhs represent the majority due to the emigration of ethnic Russians and other minorities, a high birth rate amongst ethnic Kazakhs, and the resettlement of returning ethnic Kazakh migrants (Alff 2012, p. 6). Regional imbalances do, however, still exist. Thus, the government promoted the internal relocation of ethnic Kazakhs into the northern regions, such as Akmola province, where Kazakhs had traditionally been a minority (Smailov 2011, p. 19). The foundation of Astana as the capital city in 1997 in Akmola province was a defining point for the Akmola–Astana migration system, one that significantly influenced internal migration processes in Kazakhstan. After Astana became the capital city, people moved there from all over Kazakhstan, above all from the surrounding northern areas. Although, as pointed out by Anacker (2004, p. 524), it is an open secret that this was a geopolitical decision to finally claim the northern parts of the country as ethnic Kazakh territory, the formal justification for attracting Kazakhs to Akmola and, more precisely, to Astana was the creation of a focal point for economic growth in the northern territory of Kazakhstan (Anacker 2004, pp. 523, 528; Koch 2013, p. 144). A possible downside of the nation-building and Kazakhification policies, although they are not in themselves anti-Soviet or anti-Russian, is that the agency of non-Kazakhs, especially ethnic Russians, can be constrained by their often insufficient command of Kazakh. However, the Kazakhification process has slowed in recent years (Blackburn 2019, p. 217). Despite these constraints, ethnic Russians complained about them in only two of our qualitative interviews and, as pointed out by Spehr and Kassenova (2012, p. 146), the majority of non-Kazakhs are proud to be Kazakhstani (whether this still holds true in contemporary Kazakhstan is uncertain; in light of Russia's current invasion of Ukraine, we assume it has, if anything, strengthened Kazakhstani sentiment).
Transforming Astana into a capital city perpetuated older ideas. The government built its narrative for Astana on the Soviet narrative of centralisation and of cities as cradles of modernity (Laszczkowski 2011b, p. 96; 2016a, p. 61; Koch 2014a, p. 434). In general, Kazakhstani cultural life is centred on the urban world and rural life is rarely celebrated. Nevertheless, rural migrants and visitors are impressed by the spectacular architecture and the ongoing construction boom, a view shared by most Astana residents (Osmonova 2016, p. 241; Laszczkowski 2016b, pp. 153-54, 157). These institutional changes and the comparably high salaries strengthened cumulative agency by creating an enormous draw for people in the surrounding countryside, pulling them towards Astana. Nevertheless, there is often disappointment once rural newcomers are confronted with the realities of life in Astana, which often do not correspond with their ideals. Subsequently, they often underreport the harsh living conditions and high costs of living when talking with their rural family and friends.

Until the population grew to around 500,000 people in 2003, the city government of Astana welcomed the influx of migrants. After that point, the city government began to change certain public service institutions; for instance, only registered city citizens retained access to free health treatment or educational services, because the development of the urban infrastructure could no longer keep pace with the influx of people. Officials from the city's migration department referred to a situation of 'self-defence': central government regulations were missing and they therefore had to invent countermeasures. Despite calls by senior Kazakh politicians for more controls over internal migration, in the post-Soviet era citizens have every right to choose their place of settlement, as stipulated in the constitution, and such a reversal to Soviet-style controls would be unenforceable. The City Planning Department was transformed from a service agency providing transport, education and health services into a gatekeeper institution (an informal and diffuse process that is hard to pinpoint but that began at least as early as 2011). Since it controls development, land-use designation and planning permission, it effectively regulates the housing market. As a result, in spite of the construction boom, the housing market is rationed and Astana has, together with Almaty, the highest housing prices in the country (Osmonova 2016, p. 241). In this case, the high prices are a desired effect intended to reduce the influx of poorer internal migrants. As a consequence, the wage surplus paid in Astana compared to the surrounding countryside is often eaten up by the higher urban housing costs, given the shortage of affordable flats (OECD 2017, p. 55). To counterbalance the rising costs, the government set up a housing subsidisation programme for its employees, mostly in the 2010s (Bissenova 2017, p. 644). There is, however, a long waiting list.
Thus, residents in Astana often call upon the collective agency of their extended family to combat the severely rationed housing market. The rural-based family may offer financial support (that is, reverse remittances) and the urban-based family, if available, may provide accommodation, not only increasing the chances of migrants staying in the city but also reinforcing the extended family network itself. Poorer migrants without family networks are forced by these interventions in the housing market into precarious housing, as Yessenova (2010, p. 21) has shown in her work on Almaty.
Another form of combative agency is to build illegally, for example on family land, in the hope that these buildings will eventually receive legal status. These illegal buildings, found frequently on the outskirts of Astana, are often of low quality. They are populated mainly by poorer people, who often face additional insecurity, as these buildings, constructed without a permit, face the possibility of demolition orders by government officials. Another negative outcome is that corruption and fraud thrive in such an anarchic climate, as, for example, in the case of Aigerim. Her family sold all their property in the north and used the money to buy land close to Astana. They had several meetings with local government officials, who promised them a construction permit. After paying for the land, they applied for planning permission as promised, only to learn that their area had never been designated for residential development. They now live in dire circumstances in a very small apartment elsewhere in the city.
Urban newcomers are thus, on the one hand, looked down upon by the longer-term residents while, on the other, their labour and the economic surplus they create are dearly needed.

The collapse of the Soviet education system in the 1990s led to a dramatically reduced number of regional colleges (OECD 2007, p. 36). Only the large education facilities in the major cities remained intact (Toleubayev et al. 2010, p. 368) and were even supplemented by new facilities offering programmes in crafts and trades (Buchenrieder et al. 2020). As a result, young people were drawn into bigger cities (cumulative agency) and often stayed on after graduation, as they grew accustomed to city amenities and their social bonds to their home region weakened. This migration behaviour is known as the 'migrating-to-learn'-'learning-to-migrate' chain (Rérat 2016, p. 279). Nevertheless, the increased rural-urban migration of young people results in a lack of professionals with higher education in rural areas, with flow-on effects for the quality of public service infrastructure in those areas, for example in health and education. Chronic public underinvestment may also play a role. Thus, the elderly, or families whose older children live in urban centres, are also likely to relocate to towns as their agency to stay in the rural area of origin becomes constrained.
A reassessment of the education system took place in the early 2000s. To counter the lack of well-educated professionals in rural areas, the central government set up scholarship programmes and decentralised higher education facilities (Buchenrieder et al. 2020). Moreover, the previous focus on urban development programmes was complemented by a more decentralised rural and regional development approach. However, the continuation of these rural and regional development programmes depends crucially on the national budget, which is heavily dependent on the price of fossil energy (Gallo 2021, p. 19). If the national budget is hit by an external shock, such as low commodity prices for oil and gas exports, the policy pendulum may swing back towards urban development, where the power elite resides.
Conclusion
It is important for migration studies to systematically describe the interaction of institutions and agency in the regions of origin and destination when explaining migration processes. Our conceptual approach therefore embraces institutions and agency in an egalitarian manner by applying a New Institutionalism framework. Our case study is the Akmola–Astana migration system in Kazakhstan. This migration system is intriguing because the Kazakhstani government has proactively and reactively altered institutions in Akmola and/or Astana, thus altering the agency of potential and/or actual migrants.
Initially, the cumulative agency of rural people moving to Astana was welcomed and promoted by creating a narrative of Astana as a 'centre of modernity' and an 'economic growth pole'. However, the influx of people overwhelmed the city's infrastructure. On behalf of the city government, which wanted to reduce migration, the City Planning Department changed its policies to restrict the housing market, which led to higher housing prices. Migrants with functioning family networks could counter this through collective agency, either by staying with relatives in Astana or by receiving reverse remittances from their rural relatives to finance an apartment of their own. Using combative agency, migrants without a pronounced social (mostly family) network mitigated high housing costs with measures resulting in precarious housing situations and/or highly insecure livelihoods (risk of eviction/demolition). However, the 'Astana narrative' is as strong as ever and migrants are still attracted to the city, even in the face of an unfavourable income-cost ratio.
In the countryside of Akmola, some people were constrained in their agency to stay, especially young adults in search of higher education. In response, the Kazakhstani government decentralised higher education facilities, set up special scholarship programmes to attract qualified young professionals as state employees to rural areas, and implemented rural economic stimuli programmes to improve the rural job market. Ideally, these institutional changes will help to develop a positive narrative of rural and small-town economic areas and help dismantle the belief that migration is a precondition for success. As discussed, the Kazakhstani government created various narratives to promote its goals. Nevertheless, it would probably be more effective for the state media to show the life and living conditions of everyday urban citizens in Astana, including precarious housing, the high cost of living and the struggle to make ends meet (for example, the need to take on more than one job), and not only idealised pictures of the modern metropolis. This would give many rural dwellers a more realistic picture of what to expect in Astana. The strong use of family networks for housing and reverse remittances is also a result of the sometimes ill-prepared or even irrational decisions of younger people to relocate. This is also shown in the severe mismatch between the jobs young people often work in and their level of education. Many of the migrants working in menial jobs are overqualified, as our qualitative research revealed. If they do not progress in their career, their substantial investment in their education may not produce the expected return. Thus, providing young adults in the countryside with appropriate information on urban jobs, qualifications, potential income and housing costs may be a more promising approach than simply trying to constrain their agency. This would also lessen the need for reverse remittances to support unsustainable livelihoods in urban areas.
One effect of the Kazakhification policy is that speaking Kazakh has become more important both in the private sector and, even more so, for public employees, who must now master both Kazakh and Russian. Since ethnic Russians often do not speak Kazakh well enough, they are underrepresented in the urban labour market, particularly in public administration. This has had an interesting side effect. Ethnic Russians have a higher intention to stay in rural areas compared to their Kazakh neighbours. If our analysis holds true, in the long term this may even lead to a persistent (re-emerging) Russification of the northern countryside.
By developing a conceptual guide on the basis of New Institutionalism for the deconstruction of migration systems, this study has wider significance that extends beyond Kazakhstan. The analysis of institutions and agency under a cross-school New Institutionalism … Historical institutionalism not only considers time …

FIGURE 1. CONCEPTUAL FRAMEWORK FOR MIGRATION SYSTEMS BASED ON NEW INSTITUTIONALISM. Source: Authors.
TABLE 1 (Continued)

In Kazakhstan, the family network goes beyond the core family and can include around 100 individuals. This extended family network is useful in providing accommodation at expensive city locations (Laszczkowski 2011a, p. 84) or financial support, for example, reverse remittances from the countryside to subsidise city life. According to Dietz et al. (2011, p. 21), about 40% of the migrants finance their relocation primarily through family transfers. This in turn strengthens the familial institutions and fosters their continuity. Almost 70% of the household survey respondents stated that they would need prolonged financial support from their families if they moved to a city.
TABLE 2 MIGRATION

(Dufhues et al. 2021). * The 'Roadmap for Employment 2020' (GovReKaz 2016) laid out a plan for the development and support of small-scale businesses in rural areas, for example, through microcredit schemes or farming subsidisation. † Several government programmes exist to make rural livelihoods more comfortable by improving infrastructure and incomes. ‡ In the last decade, those migrating to bigger cities such as Astana have often originated from smaller towns. In the Kazakh context, these are considered 'rural'. The government therefore established special economic stimuli programmes for the development of small towns (Bissenova 2017, p. 646). One measure is to relocate state administrative agencies, and thus public jobs, to small towns and develop economic clusters. In the latter, Soviet-style 'single-industry towns' are transformed into local economic clusters (GovReKaz 2016).
TABLE 2 (Continued)

Rural health and education facilities have undergone notable improvement due to public investment. Nevertheless, they still fall behind those in cities, as this is where economic growth is concentrated. Health and education facilities in Astana are almost always assessed as superior (see Table A2). The same services in the rural areas, however, were considered better than average by only one quarter or half of the household survey respondents, respectively. Furthermore, access to free medical treatment is only granted at the place of registration. For families with children and the sick, this continues to be an incentive to move to and register in larger cities. * Interviews: 8, government official, employment department, Stepnogorsk, 25 May 2016; 9, village mayor, Stepnogorsk district, 26 May 2016; 10, village mayor, Shortandy district, 16 September 2016; 11, village mayor, Akkol district, 19 September 2016. † 'Implementation of the Enbek Programme', Official information source of the Prime Minister of the Republic of Kazakhstan, 5 March 2020, available at: https://primeminister.kz/en/news/reviews/veterinary-requirements-investment-attraction-export-of-productsresults-achieved-in-agriculture-development-of-kazakhstan-in-2019, accessed 23 May 2022; interview 12, town mayor, regional town, Pavlodar province, 1 June 2016. ‡ 'V Kazakhstane lyudi chashche pereezzhayut iz aulov v goroda' ['In Kazakhstan, people increasingly move from auls to cities'], Argumenty i fakty, 12 February 2020, available at: https://kzaif.kz/society/v_kazahstane_lyudi_chashche_pereezzhayut_iz_aulov_v_goroda, accessed 23 May 2022; interviews: 3, village mayor, Stepnogorsk district, 27 May 2016; 11, village mayor, Akkol district, 19 September 2016; 13, town mayor, regional town, Akmola province, 20 September 2016. ¶ Interviews: 8, government official, employment department, Stepnogorsk, 25 May 2016; 12, town mayor, Sources: | 2023-02-12T16:50:09.907Z | 2022-11-28T00:00:00.000 | {
"year": 2024,
"sha1": "39722f842aaaf7a244533868b6a4d9923f099095",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/09668136.2022.2134305",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "b0245b3b291e15a3b2d25fd7bdb46831a45f9631",
"s2fieldsofstudy": [
"Sociology",
"Political Science"
],
"extfieldsofstudy": []
} |
264487206 | pes2o/s2orc | v3-fos-license | ELM Ridge Regression Boosting
We discuss a boosting approach for the Ridge Regression (RR) method, with applications to the Extreme Learning Machine (ELM), and we show that the proposed method significantly improves the classification performance and robustness of ELMs.
Introduction
In this short note we consider a class of simple feed-forward neural networks [1], also known as Extreme Learning Machines (ELM) [2]. ELMs consist of a hidden layer, where the data is encoded using random projections, and an output layer, where the weights are computed using the Ridge Regression (RR) method. Here we propose a new RR boosting approach for ELMs, which significantly improves their classification performance and robustness.
Let us assume that $X = [x_{n,m}]_{N \times M}$ is a data matrix, where each row $x_n = [x_{n,0}, x_{n,1}, \dots, x_{n,M-1}] \in \mathbb{R}^M$ is a data point from one of the $K$ classes. The classification problem requires the mapping of the rows of a new, unclassified data matrix $\tilde{X} = [\tilde{x}_{n,m}]_{N' \times M}$ to the corresponding classes $\{0, 1, \dots, K-1\}$.
The first layer of the ELM encodes both the $X$ and $\tilde{X}$ matrices using the same random projection matrix $R = [r_{j,m}]_{J \times M}$, drawn from the normal distribution $r_{j,m} \in \mathcal{N}(0,1)$:

$$H = h(X R^{\top}), \qquad \tilde{H} = h(\tilde{X} R^{\top}),$$

where $h()$ is the activation function, which is applied element-wise. The second layer of the ELM solves the Ridge Regression (RR) problem:

$$W = \arg\min_{W} \; \|H W - Y\|^2 + \lambda \|W\|^2,$$

where $Y = [y_{n,k}]_{N \times K}$ is the $N \times K$ target matrix for the $K$ classes, and $\lambda > 0$ is the regularization parameter.
Each row $y_n \in \mathbb{R}^K$ of $Y$ corresponds to the class of the data point $x_n$. The classes are encoded using the one-hot encoding approach:

$$y_{n,k} = \begin{cases} 1, & \text{if } x_n \text{ belongs to class } k, \\ 0, & \text{otherwise.} \end{cases}$$

The solution of the above RR problem is:

$$W = (H^{\top} H + \lambda I)^{-1} H^{\top} Y,$$

where $I$ is the $J \times J$ identity matrix.
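As an illustration, the following is a minimal NumPy sketch of the base ELM just described (one random hidden layer plus a ridge-regression readout); the function and parameter names are ours, and the argmax decision rule anticipates the classification criterion stated next.

```python
import numpy as np

def elm_train(X, Y, J, lam=1.0, h=np.tanh, seed=0):
    """Fit a basic ELM: random hidden encoding + ridge-regression readout.

    X: (N, M) data matrix, Y: (N, K) one-hot targets, J: hidden width,
    lam: ridge regularization parameter (lambda > 0).
    """
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((J, X.shape[1]))            # r_{j,m} ~ N(0, 1)
    H = h(X @ R.T)                                      # hidden-layer encoding
    W = np.linalg.solve(H.T @ H + lam * np.eye(J), H.T @ Y)
    return R, W

def elm_predict(X_new, R, W, h=np.tanh):
    """Classify the rows of a new data matrix via the argmax criterion."""
    return np.argmax(h(X_new @ R.T) @ W, axis=1)
```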
Therefore, in order to classify the rows of a new data matrix $\tilde{X}$, we assign each row to the class maximizing the corresponding output:

$$\hat{k}_n = \arg\max_{k \in \{0,\dots,K-1\}} [\tilde{H} W]_{n,k}.$$

Boosting method

Several boosting methods have been previously proposed for the RR problem [3], [4], [5]. Our approach here is different, and it uses several levels of boosting.
At the first boosting level, $\ell = 0$, one computes the initial approximation, and then continues by successively solving for the next $T - 1$ approximations. After $T$ iteration steps, the first level provides an approximation. Then we set the target for the next level accordingly, and we repeat the procedure for the next $L - 1$ boosting levels, obtaining $Y_\ell$, $\ell = 1, \dots, L - 1$.
The general equations for $\ell = 1, \dots, L-1$ can be written analogously. Given an unclassified data matrix $\tilde{X} = [\tilde{x}_{n,m}]_{N' \times M}$, where each row is a new sample, we encode it using the same random projection matrices, compute the output, and then use the decision criterion (5) to decide the class of each new sample (row of $\tilde{X}$).
We should note that a different random projection matrix $R_t^{\ell}$ is generated for each level $\ell$ and each time step $t$, and that $\alpha \in (0, 1)$ is a discount parameter (or a "learning" rate). Also, the random projection matrices can be generated on the fly, so they do not require additional storage.
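Because the display equations of the scheme were lost in extraction, the following sketch is only a plausible reconstruction, assuming a gradient-boosting-style residual update discounted by alpha; the exact update rule in the note may differ.

```python
import numpy as np

def elm_boost_train(X, Y, J, L=20, T=50, lam=1.0, alpha=0.5,
                    h=np.tanh, seed=0):
    """Plausible multi-level RR boosting loop (reconstruction, not verbatim).

    At every level and time step a fresh random projection is drawn, a
    ridge problem is solved against the current residual, and the residual
    is updated with discount alpha.  Only `seed` must be stored: the
    matrices R_t^l can be regenerated on the fly at prediction time.
    """
    rng = np.random.default_rng(seed)
    residual = Y.astype(float).copy()
    weights = []
    for _ in range(L):                      # boosting levels
        for _ in range(T):                  # iteration steps per level
            R = rng.standard_normal((J, X.shape[1]))
            H = h(X @ R.T)
            W = np.linalg.solve(H.T @ H + lam * np.eye(J), H.T @ residual)
            residual -= alpha * (H @ W)     # discounted residual update
            weights.append(W)
    return weights
```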
Numerical results
In order to illustrate the proposed method we use two well-known data sets: MNIST [6] and fashion-MNIST (fMNIST) [7]. The MNIST data set is a large database of handwritten digits {0, 1, ..., 9}, containing 60,000 training images and 10,000 testing images. These are monochrome images with an intensity in the interval [0, 255] and a size of 28 × 28 = 784 pixels. The fashion-MNIST data set also consists of 60,000 training images and a test set of 10,000 images. The images are also monochrome, with an intensity in the interval [0, 255] and a size of 28 × 28 = 784 pixels. However, fashion-MNIST is harder to classify, since it is a more complex data set, containing images from K = 10 different apparel classes: 0 - t-shirt/top; 1 - trouser; 2 - pullover; 3 - dress; 4 - coat; 5 - sandal; 6 - shirt; 7 - sneaker; 8 - bag; 9 - ankle boot.
In all numerical experiments we have used the data normalization of Ref. [10].
tanh() activation
In Figure 1 we give the classification accuracy for both data sets, as a function of the boosting level $\ell = 0, 1, \dots, 19$, when the following parameters were kept fixed: $\mu = 1$, $\alpha = 1/2$, $T = 50$, $J = M$. Also, in this case the activation function was tanh(), which is typically used in ELMs and other neural networks. One can see that the classification accuracy increases with the number of boosting levels $\ell$. In the case of MNIST the classification achieves an accuracy of $\eta > 98.9\%$ for $\ell \geq 7$.
Similarly, in the case of fMNIST we obtain an accuracy of $\eta > 91.3\%$ for $\ell \geq 7$. We should note that the regularization constant was set to a high value, $\lambda = 1$, which discourages overfitting.
sign() activation
In a second experiment we replaced the tanh() activation function with the sign() function. This function is typically used in the approximate nearest neighbor problem, which can be stated as follows [8], [9]. Given a data set of points $X = \{x_0, x_1, \dots, x_{N-1}\}$, $x_n \in \mathbb{R}^M$, and a query point $q \in \mathbb{R}^M$, the goal is to find a point $x \in X$ such that $d(q, x) \leq (1 + \varepsilon)\, d(q, x^*)$, $\varepsilon > 0$, where $x^*$ is the true nearest neighbor of $q$, and $d()$ is a distance measure. The similarity hashing maps the data set $X$ and the query $q$ to a set of hash values (hashes). In order to construct the hashes here we will use the random projection approach, which is known to preserve distances under the angular distance measure. Given an input vector $x \in \mathbb{R}^M$ and a random hyperplane defined by $r \in \mathbb{R}^M$, where $r_m \in \mathcal{N}(0,1)$ for $m = 0, 1, \dots, M-1$, we define a hashing function as:

$$h(r, x) = \mathrm{sign}(r \cdot x),$$

where $\cdot$ is the dot product. With this choice of the hash function, one can easily show that, given two vectors $x, x' \in \mathbb{R}^M$, the probability of $h(r, x) = h(r, x')$ is:

$$\Pr[h(r, x) = h(r, x')] = 1 - \frac{\theta(x, x')}{\pi},$$

where $\theta(x, x')$ is the angle between $x$ and $x'$.
We should note that each randomly drawn hyperplane defines a different hash function, and therefore to map the data vectors $x \in \mathbb{R}^M$ to $\{-1, +1\}^J$ we need to define $J$ hash functions:

$$h_j(x) = \mathrm{sign}(r_j \cdot x), \qquad j = 0, 1, \dots, J-1.$$

One can see that in our case, each row of the matrix $R = [r_{j,m}]_{J \times M}$ corresponds to a randomly drawn hyperplane.
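A small numerical check of this construction (a sketch, with parameter values of our choosing) compares the empirical collision rate of the sign hashes with the angular formula above:

```python
import numpy as np

rng = np.random.default_rng(0)
M, J = 784, 4096
R = rng.standard_normal((J, M))             # rows = random hyperplanes

def hashes(x):
    """Map x in R^M to {-1, +1}^J via sign random projections."""
    return np.sign(R @ x)

x, xp = rng.standard_normal(M), rng.standard_normal(M)
theta = np.arccos(x @ xp / (np.linalg.norm(x) * np.linalg.norm(xp)))
empirical = np.mean(hashes(x) == hashes(xp))
print(empirical, 1 - theta / np.pi)         # the two values nearly agree
```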
In Figure 2 we give the classification results. One can see that the method is quite robust, and the accuracy only drops by about 0.5% compared to the tanh() activation.
Conclusion
We have discussed a ridge regression boosting method with applications to ELMs. The proposed method significantly improves the accuracy of the ELM. The method is based on "on-the-fly" random projections, which do not require additional storage; only the random seed needs to be the same for both the training and testing stages. In the case of MNIST and fMNIST, after about seven boosting levels the method saturates and no significant improvements can be seen. Besides the good classification results, this boosting approach is also very robust to noise perturbations. For example, if 10% of the pixels in the images are randomly set to zero, the classification accuracy still reaches 98% for MNIST and 90% for fMNIST, respectively.
Figure 2: The classification results for the sign() activation function (see the text for details).
Figure 1: The classification results for the tanh() activation function (see the text for details). | 2023-10-26T18:40:58.627Z | 2023-10-24T00:00:00.000 | {
"year": 2023,
"sha1": "f3f8f3e6c0d287e1929f48ba71b8ae3ab0949268",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f3f8f3e6c0d287e1929f48ba71b8ae3ab0949268",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
254591633 | pes2o/s2orc | v3-fos-license | Inflationary entanglement
We investigate the entanglement due to geometric corrections in particle creation during inflation. To do so, we propose a single-field inflationary scenario, nonminimally coupled to the scalar curvature of spacetime. We require particle production to be purely geometric, setting to zero the Bogolubov coefficients and computing the $S$ matrix associated to spacetime perturbations, which are traced back to inflaton fluctuations. The corresponding particle density leads to a nonzero entanglement entropy whose effects are investigated at primordial time of Universe evolution. The possibility of modeling our particle candidate in terms of dark matter is discussed. The classical back-reaction of inhomogeneities on the homogeneous dynamical background degrees of freedom is also studied and quantified in the slow-roll regime.
I. INTRODUCTION
Throughout the Universe evolution, inflation is realized at the primordial epoch with the aim of healing the main issues related to the standard Big Bang paradigm [1,2]. It represents a phase of strong acceleration, slightly similar to the late-time dark energy epoch 1 [6], driven by an inflaton field 2, quite different from the barotropic fluids widely adopted for established dark energy scenarios. The inflationary potential is still unknown and can be built up using the approach of either small or large fields, with exceptions provided by intermediate field representations [10], or by means of couplings among more than one field, see e.g. [11].
One of the main goals of inflation is to reproduce the inhomogeneities responsible for the formation of large-scale structures [12]. Thus, inflation appears to be the natural landscape in which overdensities formed at primordial times. For any inflationary potential, as a consequence of Einstein's field equations, the cosmological inhomogeneities plausibly generate particles, a mechanism known as geometric particle production [13,14]. Such a process is conceptually different from the well-known gravitational particle production (GPP) from vacuum, which is typically associated with Bogolubov transformations. … times may still be present to some extent at late times, since decoherence due to coupling with other fields (except for gravity) would be excluded. So, if spacetime geometry affects entanglement, a given perturbed FRW background induced by inflationary dynamics is expected to work analogously, leading to non-negligible effects.
Motivated by these considerations, we here investigate entanglement arising from geometric particle production in a single-field inflationary scenario 6, where perturbations are traced back to quantum fluctuations of the scalar inflaton field. By assumption, the inflaton dominates the energy density of the Universe during inflation. Accordingly, any fluctuation of the inflaton results, through Einstein's equations, in a perturbation of the metric. The dynamics of these fluctuations will then be responsible for the geometric mechanism of particle production studied here. In addition, we consider from a classical perspective the back-reaction effects induced by inhomogeneities on the homogeneous dynamical background [51][52][53]. We start by assuming in our Lagrangian a Yukawa-like term, i.e., a non-minimal interacting term between the inflaton and the scalar curvature. The Universe evolution during inflation is modeled by a quasi-de Sitter solution [54] for the scale factor, in the perturbed FRW background. We then evaluate the modes and the corresponding analytical solutions for the inflaton field, involving a chaotic potential. Once the e-folding number, the perturbation solution and the end of inflation are obtained, we proceed with particle production up to second geometric order, taking zero Bogolubov coefficients at first order in the expansion. The corresponding geometric particles are thus computed, together with their probabilities for positive and negative coupling constant ξ. We infer the amplitude element, adopting the Dyson expansion of the S matrix, and afterwards we focus on back-reaction effects. As a final step, the entanglement entropy is computed, showing how it increases in the case of a negative coupling constant ξ. Physical consequences on inflationary dynamics, dark matter abundance in the form of geometric particles and the entanglement signature are also debated.
The paper is structured as follows. In Sec. II we work out our cosmological framework, introducing the corresponding single-field description. In Sec. III, quantum fluctuations are investigated by perturbing the field and the FRW metric. Afterwards, in Sec. IV, inflation is studied by adopting a quasi-de Sitter scale factor, which gives rise to perturbed solutions for the field itself. Once the e-folding number and the corresponding end of inflation are evaluated, we shift to particle production in Sec. V, where we also compare our geometric mechanism of production to inflationary particle production in warm inflation scenarios. In Sec. VI, we investigate how classical back-reaction effects occur in the primordial Universe, emphasizing that they contribute only slightly to the overall shift of the energy-momentum tensor, and are thus negligible in our framework. Finally, entanglement due to geometric production is quantified in Sec. VII. Conclusions and perspectives are discussed in Sec. VIII 7.

6 Multifield inflation may also lead to interesting results in the context of geometric particle production, starting for example from the proposal of Ref. [50]. This could be the subject of future investigations.
II. COSMOLOGICAL SETUP OF INFLATIONARY DYNAMICS
We start from the usual Lagrangian density for the inflaton ϕ, introducing a finite coupling ξ between the field itself and the scalar curvature R, as in Eq. (1). The potential V(ϕ) is left unspecified for the moment, whereas the FRW line element, in cosmic time t, reads as in Eq. (2). Thus, we take the variation of the action for Eq. (1) with respect to ϕ, obtaining the equation of motion corresponding to the inflaton dynamics when the FRW background is not perturbed. Here, the dot indicates a derivative with respect to t. In Eq. (3) the curvature R is a function of the Hubble parameter H = ȧ(t)/a, and V_,ϕ ≡ ∂V(ϕ)/∂ϕ. Notice that here the inflaton still depends on the event x^μ ≡ (t, x), instead of on the time coordinate only. In the next section we will split the field ϕ into a background homogeneous contribution and the quantum fluctuations associated with it. The dynamics of the inflaton field is more easily evaluated in conformal time, i.e., τ = ∫ dt/a(t), where the unperturbed metric tensor becomes proportional to the Minkowski metric tensor, η_μν. The zero-order equation of motion for the inflaton is then given by Eq. (5) [54,55], where the prime denotes derivatives with respect to conformal time and we made explicit the zero-order scalar curvature [14]. Introducing the effective potential for a generic scalar curvature R, which corresponds to an interacting term non-minimally coupled to curvature, we can therefore rewrite Eq. (5) in the compact form of Eq. (7), which holds for any metric tensor g_μν.
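The display equations dropped from this passage admit a standard reconstruction (our hedged reading, following e.g. Ref. [54]; signs and normalization conventions may differ from the source):

```latex
% Tentative reconstruction (ours) of the elided equations:
\mathcal{L} = \tfrac{1}{2}\, g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi
            - V(\phi) - \tfrac{1}{2}\,\xi R\,\phi^2
  \quad \text{(Lagrangian, Eq. (1))} \\
ds^2 = dt^2 - a^2(t)\,\delta_{ij}\,dx^i dx^j
  \quad \text{(FRW line element, Eq. (2))} \\
\ddot{\phi} + 3H\dot{\phi} - a^{-2}\nabla^2\phi + V_{,\phi} + \xi R\,\phi = 0
  \quad \text{(equation of motion, Eq. (3))} \\
\phi'' + 2\mathcal{H}\,\phi' - \nabla^2\phi + a^2 V_{,\phi}
        + 6\,\xi\,\frac{a''}{a}\,\phi = 0 , \qquad R = \frac{6a''}{a^3}
  \quad \text{(conformal time, Eq. (5))} \\
V_{\rm eff}(\phi) = V(\phi) + \tfrac{1}{2}\,\xi R\,\phi^2 , \qquad
\Box\phi + V_{{\rm eff},\phi} = 0
  \quad \text{(Eqs. (6)-(7))}
```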
III. QUANTUM FLUCTUATIONS DURING INFLATION
We here introduce perturbations in the aforementioned framework. To do so, we first split the inflaton field into a homogeneous background contribution, say ϕ₀, plus a term associated with its quantum fluctuations, namely ϕ(x, τ) = ϕ₀(τ) + δϕ(x, τ). Second, we employ metric perturbations, whose most general expression for the line element, describing scalar degrees of freedom, is given by Eq. (9) [9,54], where Φ, Ψ, B and E are scalar quantities which depend on space and time coordinates, and D_ij ≡ ∂_i∂_j − (1/3)δ_ij∇². Now, the variation of Eq. (7) consists in the sum of four different contributions, corresponding to the variations of 1/√−g, √−g, g^μν and ϕ. By adopting the well-known identity δ√−g = −(1/2)√−g g_μν δg^μν, and recalling the zeroth-order equation of motion for the field 8, one arrives at the first-order perturbed equation, Eq. (12), where ℋ ≡ a′/a and the variation of the scalar curvature δR is given by Eq. (13) [54]. When perturbations are generated by a single scalar field, it can be shown that the perturbation potentials are equal, i.e., Φ = Ψ. Moreover, choosing the longitudinal gauge 9, namely E = B = 0, and assuming plane-wave perturbations [54,56], i.e., adopting the ansatz of Eq. (14), it is straightforward to obtain from Eq. (12) the mode equation (15), which turns out to be a complicated version of the equation of motion for ϕ at the perturbative level. Once the δϕ_k modes are obtained, the full expansion for the quantum fluctuations of the inflaton field reads [13] δϕ(x, τ) = ∫ d³k [â_k δϕ_k(τ) e^{ik·x} + â_k† δϕ_k*(τ) e^{−ik·x}], where the ladder operators satisfy the canonical commutation relations [â_k, â_k′†] = δ⁽³⁾(k − k′). We will discuss the normalization of the inflaton modes later on. In order to solve Eq. (15) analytically, one has to assume particular energy conditions, corresponding to given lengthscales of the inflaton fluctuations.
A. Super-Hubble scales
To leading order, each Fourier mode in Eq. (16) evolves independently. The comoving Hubble radius during inflation, (aH_I)⁻¹, plays a key role in determining the mode dynamics: on sub-Hubble scales (k ≥ aH_I) the inflaton fluctuations typically oscillate, while they are nearly time-independent on super-Hubble scales (k < aH_I), as we will see. Formally, one can define a Hilbert space associated with the fluctuation modes, which naturally divides into a sub-Hubble subspace and a super-Hubble one [57]. Note that the comoving Hubble radius decreases as a function of time during the inflationary period: this means that the boundary between the two subspaces depends on time, i.e., the more inflation goes on, the more modes exit the horizon. This is a specific feature of systems on a dynamically expanding background.

9 Geometric particle production can also be studied in the synchronous gauge, specified by the condition h₀ν = 0 [14]. In Appendix A we discuss scalar perturbations in this gauge, starting from the potential Ψ derived here.
It has been shown [58][59][60] that some mixing may arise between sub- and super-Hubble modes, leading to decoherence of the reduced density matrix for both subsystems. However, these effects typically derive from interaction terms which are cubic in the perturbation variables [57], i.e., of higher order with respect to the field-geometry coupling studied here. For this reason, decoherence effects will be neglected in this work but will be the subject of future investigations.
In the context of GPP, it can be proven [20] that particle production is dominant on super-Hubble scales, with respect to sub-Hubble ones, if one assumes a pure de Sitter evolution during inflation. Accordingly, it seems interesting to generalize such a framework by including perturbations.
More precisely, the modes of interest are well inside the horizon at the beginning of inflation, and leave it, becoming super-Hubble, subsequently. This mechanism may also affect geometric particle production, as we will see. Moreover, the choice of such scales will naturally provide an infrared and ultraviolet cutoff for the momenta of the particles that will be produced.
The term Ψ′_k ϕ′_k in Eq. (15) can be neglected on super-Hubble scales, because perturbations are nearly frozen 10 in this limit. In this limit we also have Eq. (19), where ϵ is the slow-roll parameter defined in Eq. (20) and G is the gravitational constant. Eq. (19) can be derived from the (0,i)-component of the perturbed Einstein equations [54].

10 Accordingly, a term of the form Ψ′_k ϕ′_k would be of higher order with respect to Ψ_k V_eff,ϕ, since Ψ_k is also small [54] on these scales. The same reasoning applies to the terms Ψ′_k and Ψ″_k in the curvature contribution.
Bearing the above considerations in mind, we can rewrite Eq. (15) in the simplified form of Eq. (21). For |ξ| ≪ 1, we can neglect the contribution arising from the variation of the scalar curvature, since we also require the perturbation potential to satisfy |Ψ_k| ≪ 1.
Under the slow-roll approximation, we can also set ϕ″ ≃ 0 and thus write the derivative of the potential as a function of ϕ′ in the background equation, Eq. (11).
Performing now the usual rescaling of the field, δχ_k = a δϕ_k [54,55], and choosing the chaotic potential 11 V(ϕ) = m²ϕ²/2, where m is the mass of the field, we arrive at Eq. (24).
IV. INFLATION IN A QUASI-DE SITTER SPACETIME
During inflation, the slow-roll parameters are small and almost constant [61]. Commonly, one accounts for this by assuming that a suitable solution for the scale factor is purely de Sitter. However, this happens only in the simplest cases, i.e., when vacuum energy dominates [62]. In fact, since vacuum energy is constant, the scale factor naturally reads as an exponential, implying a de Sitter phase. Clearly, for a generic potential that does not reduce to vacuum energy during inflation, the situation is different. Indeed, one has to solve the equations of motion for the field and, by virtue of the Friedmann equations, determine the exact form of a(τ) throughout inflation. This is clearly not easy and quite often requires a purely numerical investigation.

11 Chaotic potentials usually exhibit graceful exits, a byproduct of attractor solutions as ϕ → 0. The standard forms, namely ∼ϕ² and ∼ϕ⁴, have been recently ruled out by the Planck satellite results [21], which conversely showed that they can work only if the curvature is coupled to ϕ. We here limit ourselves to ∼ϕ² in order to develop an analytic toy-model approach for entanglement production during inflation. More complicated cases invoke alternative potentials [12] and will be the object of future efforts.
Thus, during the inflationary stage one can approximate the scale factor through a quasi-de Sitter function that suitably approximates the real dynamics and the slow-roll parameter as well. In particular, following Ref. [54], we here propose the approximate quasi-de Sitter solution of Eq. (25), where τ < 0 and H_I is the Hubble parameter during inflation, up to small corrections. In this respect, Planck data impose severe upper bounds on H_I [21], where M_pl is the Planck mass. The parameter v in Eq. (25) essentially describes small deviations from a pure de Sitter phase. We notice that v is small, implying that we can identify it with a small and constant slow-roll parameter. Departures of this approximate version of the scale factor from the real solution for a(τ) are extremely small in the slow-roll regime. Accordingly, we set v ≡ ϵ from now on. Using now Eq. (25) and noting that a″/a ≃ (2 + 3ϵ)/τ², the equation of motion for the perturbations, Eq. (24), finally gives Eq. (28). This equation can be recast in the form of Eq. (29), with ν defined in Eq. (30). This result coincides with that of Ref. [54] in the case of minimal coupling, ξ = 0, if one introduces the further parameter η = m²/3H_I². For small ξ, i.e., ξ ≃ (ϵ; η), it is easy to see that ν ≃ 3/2 up to small corrections 12. The general solution of Eq. (29) can be written in the form of Eq. (32),

12 More precisely, since the inflaton field is massive, the condition |ξ| ≲ 10⁻³ is required during inflation, see e.g. [63,64].
where H_ν^(1) and H_ν^(2) are Hankel functions, and the constants c₁(k) and c₂(k) can be determined by imposing a normalized initial vacuum state.
A common choice is the Bunch-Davies vacuum [65][66][67], i.e., to impose that in the ultraviolet regime k ≫ aH_I the solution for δχ_k matches a plane-wave solution. Thus, using the asymptotic behaviour of the Hankel functions, we can fix c₁(k) and c₂(k). This gives the solution for δχ_k. On super-Hubble scales, the Hankel function admits a simple power-law limit. Restoring now the original fluctuation δϕ_k, we obtain the result plotted in Fig. 1 as a function of conformal time.
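The steps elided here follow the standard slow-roll treatment (see e.g. Ref. [54]); the reconstruction below is ours, and phases and normalizations may differ from the source:

```latex
% Hedged reconstruction of the elided Bunch-Davies steps:
\delta\chi_k \xrightarrow{\;k \gg aH_I\;} \frac{e^{-ik\tau}}{\sqrt{2k}} ,
\qquad
c_1(k) = \frac{\sqrt{\pi}}{2}\, e^{i(\nu+1/2)\pi/2} , \quad c_2(k) = 0 , \\
\delta\chi_k(\tau) = \frac{\sqrt{\pi}}{2}\, e^{i(\nu+1/2)\pi/2}\,
                     \sqrt{-\tau}\; H_\nu^{(1)}(-k\tau) , \\
\delta\phi_k = \frac{\delta\chi_k}{a}
\simeq \frac{H_I}{\sqrt{2k^3}} \left(\frac{k}{aH_I}\right)^{3/2-\nu}
\qquad (\text{super-Hubble limit, } -k\tau \ll 1).
```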
We remark that this result is correct only in the slow-roll regime, where the Universe expansion can be described by a scale factor of the form (25). The corresponding perturbation can now be derived from Eq. (19), once we solve Eq. (11) for the background field.
Hence, including the slow-roll hypothesis, this equation can be solved analytically in compact form. The integration constant c₀ can be determined by imposing the initial value of the background field ϕ(τ_i), while the coupling constant ξ is small, as previously discussed.
For ξ ≃ (ϵ; η) we can neglect second-order terms. The initial and final times τ_i, τ_f of the inflationary era can be derived by selecting a given number of e-foldings N. Commonly one takes N ≳ 60, i.e., the minimum needed to sufficiently accelerate the Universe during inflation. Since we are focusing on modes exceeding the Hubble horizon after the beginning of inflation, we set k > k_i = H_I a(τ_i), and we further require the perturbation potential to be small with respect to the background, namely |Ψ_k| ≪ 1, ∀k.
For instance, a viable choice is τ_i = −10³, which in turn gives k_i = 0.001. Accordingly, via Eq. (43) we can derive the corresponding value of τ_f and, recalling the relation between the inflaton field value and the number of e-foldings 13 before the end of inflation, we can fix ϕ(τ_i) and the corresponding value of c₀. The geometric perturbation, Eq. (19), on super-Hubble scales finally takes the form of Eq. (45), plotted in Fig. 2 as a function of time, assuming k = k_i.
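To make the bookkeeping explicit, assuming the leading-order de Sitter relation a(τ) ≃ −1/(H_I τ) (our assumption, consistent with the quoted numbers), the e-folding count and horizon-crossing scale follow as:

```latex
N = \ln\frac{a(\tau_f)}{a(\tau_i)} \simeq \ln\frac{\tau_i}{\tau_f}
\;\Longrightarrow\; \tau_f \simeq \tau_i\, e^{-N} ,
\qquad
k_i = a(\tau_i)\, H_I \simeq -\frac{1}{\tau_i} ,
```

so that τ_i = −10³ indeed gives k_i = 10⁻³, in agreement with the text.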
V. PARTICLE PRODUCTION IN INFLATIONARY PHASE
Once the geometric perturbation has been computed, we can quantify the amount of particles arising from the coupling of inflaton fluctuations δϕ(x, τ ) to gravity. According to previous findings [36], we will call geometric particles those quasi-particles obtained when the inflaton is coupled to the scalar curvature R.
Writing the perturbed metric in the form g_μν = a²(τ)(η_μν + h_μν), we can describe at first order the interaction between fluctuations and spacetime perturbations via the Lagrangian 14 of Ref. [13], Eq. (46), where the background metric is g̅_μν ≡ a²(τ)η_μν, H_μν = a²(τ)h_μν, and T_μν is the zero-order energy-momentum tensor associated with the fluctuations, given by Eq. (47). Since the energy-momentum tensor is quadratic in the fluctuations, particles are produced in pairs at first perturbative order.
The corresponding number density of geometric particles produced at a given time τ* is given by Eq. (48), where β_p and β_q are Bogolubov coefficients [13,15] associated with the field evolution with respect to the homogeneous background; see, for example, [36] for a derivation of Eq. (48). As discussed in the Introduction, nonzero Bogolubov coefficients lead to gravitational (or quantum) particle production (GPP), a well-established mechanism, see e.g. [15][16][17][18], also widely investigated in the inflationary regime [19,20]. In the context of cosmological perturbations, the GPP process due to inflationary expansion is also associated with entropy generation, as the result of the squeezing of fluctuation modes on super-Hubble scales [70][71][72]. As we will discuss later on, this mechanism differs from entanglement production due to perturbations only. The main disadvantage in dealing with Bogolubov transformations on a FRW background is that they only mix modes of the same momentum [24]. This leads, in principle, to particle-antiparticle pair production, which may annihilate. On the other side, geometric particle production is not restricted to such pairs. This is due to the presence of inhomogeneities, which break space translation symmetry, so that linear momentum is no longer conserved. In Eq. (48) we notice the presence of a purely geometric contribution, namely the first term, but we also notice that nonzero Bogolubov coefficients can enhance the geometric mechanism of production studied here, resulting in a larger number of particles produced. This effect should be further investigated, especially in the attempt to deduce dark matter from a geometric mechanism of particle production [36]. Alternative proposals for geometric quasi-particles have also been studied, see e.g. [37].
Before discussing particle production associated with the inflaton fluctuations, we underline that the number density in Eq. (48) can be computed analytically only in some special cases. For example, assuming a conformally coupled massless scalar field (m = 0, ξ = 1/6), it can be shown that the Bogolubov coefficients are zero, i.e., the background expansion does not produce particles, and the second-order number density reduces to Eq. (49), where C_μνρσ is the Weyl tensor associated with the perturbed metric g_μν. Other examples are discussed in [13].
The S matrix Ŝ in Eq. (48) is derived from the first-order Dyson expansion, Eq. (50). We remark that a proper definition of the S matrix in curved spacetime is not straightforward [17,73,74]. First of all, we need the interaction to be switched off in the distant past and future, as for Minkowski spacetime. In our model this assumption seems realistic, since in inflationary cosmology all pre-existing classical fluctuations can typically be neglected (see e.g. [75]), while at the end of the slow-roll regime we expect back-reaction effects to gradually restore homogeneity, as discussed in Sec. VI. At the same time, we are faced with the problem of properly defining particle states, which is a peculiar issue of quantum field theory in curved spacetime. In a de Sitter background, which clearly does not possess asymptotically flat regions, a valid definition of initial no-particle states can be given in terms of the adiabatic vacuum [17]. In particular, it can be shown that the Bunch-Davies vacuum introduced in Sec. IV is a local attractor in the space of initial states for an expanding background [76].
In eternal de Sitter space one can prove that no particle production arises due to the background. However, the Universe dynamics is clearly not described by a scale factor of the form (25) at all times and, more subtly, a de Sitter background still produces thermal radiation, which can be detected by comoving observers in it [17,77]. For these reasons, a realistic description of spacetime evolution necessarily requires the inclusion of Bogolubov coefficients, as a result of the transition from inflation to radiation/matter domination and then late times [20]. In turn, this also implies an increase of the total amount of geometric particles produced, as shown by Eq. (48). We will deepen this point in future works.
For the moment, we focus on the probability amplitude of Eq. (51), where N is a normalization factor. Exploiting the fact that the perturbation tensor is diagonal and writing all the curvature terms explicitly, Eq. (51) can be expressed through Eqs. (52)-(54), for i = 1, 2, 3. Recalling Eq. (14) for the perturbation potential, the integral over space leads to a Dirac delta. Moreover, the time integration has to be performed so that all the modes of interest are in super-Hubble form, Eq. (38). In particular, we evaluated particle production in the time interval τ ∈ [τ*, τ_f], with τ* = τ_i/1000. Such a choice ensures that all modes in the range 0.001 ≤ k < 1 lie within super-Hubble scales during this interval. In Fig. 3 we show the probability of particle production as a function of the momentum p_x, assuming p_y = p_z = 0 and q = q_x = 0.01 GeV.
A. Geometric particle production and warm inflation
Particle production during inflation, which we quantified in the context of cosmological perturbations, is also a peculiar property of the so-called warm inflation scenario 15. In this framework, one typically assumes that the interaction between the inflaton and radiation fields leads to dissipative effects, which can be interpreted in terms of particle production [84,85]. During slow-roll, the evolution of the background inflaton field in warm inflation is described (in cosmic time) by [84] (3H + Γ)ϕ̇ + V_,ϕ ≃ 0, (55) where for simplicity we have assumed no field-curvature coupling. The coefficient Γ quantifies the effects of dissipation due to interactions, i.e., the energy transferred from the inflaton to other fields. Such a term can be derived assuming some specific interaction Lagrangians, by means of thermal quantum field theory [83,85]. However, an intuitive estimate of the dissipation rate can be obtained following Ref. [79]: for a given interaction, we can argue that the dissipation rate is proportional to the probability of pair production and the corresponding background temperature when such a process occurs. From Fig. 3, we notice that, in the case of geometric production, the probability amplitudes are usually low, except for modes that exit the horizon at the beginning of inflation. Even lower amplitudes are expected if the zero-order energy-momentum tensor, Eq. (47), is not associated with the inflaton fluctuations (e.g. for radiation or other scalar/fermionic fields), as a consequence of the fact that interactions are purely gravitational in our model. Similarly, we expect the production rate to be negligible for sub-Hubble modes, due to the oscillatory behaviour of the inflaton fluctuations in that regime. This suggests that geometric particle production cannot account for large dissipation rates Γ ≳ H during inflation, at least in a single-field scenario. However, many successful warm inflation models propose a two-stage mechanism, where the inflaton interacts with heavy intermediate fields, which in turn are coupled to light fields (either fermions or bosons) [86]. Possible extensions of the here-proposed model to multifield inflation could then shed further light on the dissipative effects associated with geometric particle production.

15 Warm inflation represents an alternative to the more popular cold inflation scenario. It allows for interactions between the inflaton and other quantum fields within the slow-roll regime, which are not present in the standard picture of inflation (denoted as "cold" for this reason). We will not discuss technical aspects or models of warm inflation here: the interested reader may consult seminal papers on this topic [78][79][80][81][82], while more recent developments are summarized in the review [83] and references therein.
Finally, we remark that dissipation could also be interpreted in terms of back-reaction of the produced particles on the background field dynamics 16. In the next section we discuss, from a classical perspective, the back-reaction due to the inflaton fluctuations and the associated metric perturbations.

16 The topic of back-reaction in cosmological perturbation theory has been widely investigated in recent years, see e.g. [87] for a review of the different techniques adopted. However, it has been shown that in some cases [88] back-reaction from particle production cannot be described by an interaction term of the form Γϕ̇.
VI. BACK-REACTION EFFECTS AND CONSEQUENCES ON THE ENERGY-MOMENTUM TENSOR
The particle production mechanism discussed in Sec. V is based on the so-called external field approximation, i.e., once the geometric perturbation has been computed, we evaluate the corresponding probability of pair production in this fixed (perturbed) background. However, as already noted in [13], we expect metric perturbations to alter the background evolution of the Universe in such a way as to reduce the particle creation rate. Accordingly, such back-reaction effects should be taken into account in order to properly deal with the dynamics of a perturbed spacetime.
Back-reaction associated to density inhomogeneities was first studied in [89,90], focusing on its effects on local observables, such as the expansion rate of the Universe. A further step was the formulation of the classical 17 back-reaction problem in a gauge-invariant way [51,52]: this can be done via the introduction of an effective energy-momentum tensor (EEMT) for cosmological perturbations.
Following the notation of [51], we start by denoting the metric, g_μν, and matter, ϕ, fields by the collective variable qᵃ. Accordingly, we can write qᵃ = q₀ᵃ + δqᵃ, where the background field q₀ᵃ is defined as the homogeneous part of qᵃ on the hypersurfaces of constant time, while the perturbations δqᵃ depend both on time and spatial coordinates and satisfy |δqᵃ| ≪ q₀ᵃ. From Figs. 1-2 we clearly see that this assumption is satisfied both for metric and matter perturbations in our case. We also require ⟨δqᵃ⟩ = 0, where the brackets denote a spatial averaging, defined with respect to the background metric. Denoting the Einstein equations by Π(qᵃ) = 0, we can expand the tensor Π_μν in a functional power series [51] in powers of δqᵃ around the background q₀ᵃ, if we treat G_μν and T_μν as functionals of qᵃ.
Thus, we have

$$\Pi(q_0^a) + \Pi_{,a}\big|_{q_0^a}\, \delta q^a + \frac{1}{2}\, \Pi_{,ab}\big|_{q_0^a}\, \delta q^a \delta q^b + \mathcal{O}(\delta q^3) = 0. \tag{59}$$

17 As will be clear soon, in the following we will deal with both metric and matter perturbations at a classical level, i.e., introducing a generalized variable δq to describe inhomogeneities. For a semiclassical treatment of back-reaction in a quasi-de Sitter spacetime, see for example [91,92].
Taking the spatial average of Eq. (59), we obtain the corrected equations, which take into account the back-reaction of small perturbations on the evolution of the background. We can require the latter expression to be gauge-invariant. To do so, we can introduce a new variable δQ, defined through the Lie derivative L_X, where X is constructed as a linear combination of the perturbation variables in Eq. (9), as shown in [51,52]. Accordingly, we define the gauge-invariant EEMT for cosmological perturbations, τ_μν(δQ).
A. Back-reaction in inflationary regimes
In the inflationary Universe scenario, the EEMT separates into two independent pieces, the first due to scalar perturbations and the second due to tensor modes 18:

$$\tau_{\mu\nu}(\delta Q) = \tau^{\rm scalar}_{\mu\nu}(\delta Q) + \tau^{\rm tensor}_{\mu\nu}(\delta Q).$$
We focus on the scalar contribution and exploit gauge invariance to move to the longitudinal gauge. As discussed in Sec. III, for a scalar field the variable Ψ entirely characterizes metric perturbations in this gauge. Under the slow-roll assumption, when dealing with super-Hubble perturbations the following results are obtained as functions of cosmic time. Moving to conformal time, the energy density ρ_br associated with back-reaction is then obtained, and similarly one finds for the pressure p_br = −(1/3) τ^i_i ≃ −ρ_br.

18 As is well known, vector modes decay in an expanding Universe, so they can be neglected in our analysis.
The correlator ⟨Ψ²⟩ is given by the expression of Ref. [52], where the modes have been computed in Eq. (45). The integral runs over all modes with scales larger than the Hubble radius, i.e., k < aH_I, but smaller than the Hubble radius at the initial time τ_i, i.e., k > k_i, namely all the modes that exit the Hubble horizon after the beginning of inflation (super-Hubble scales). The effects of back-reaction can then be quantified by considering the fractional contribution of (scalar) perturbations to the total energy density, ρ_br/ρ₀, where ρ₀ ≃ V_eff is the background energy density of the scalar field ϕ. The contribution of back-reaction is very small in both cases, so it can be neglected when dealing with geometric particle production.
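The elided correlator presumably takes the standard form below (our reconstruction; the normalization may differ in the source), with the integration limits implementing the super-Hubble window just described:

```latex
% Plausible form of the elided correlator (reconstruction, not verbatim):
\langle \Psi^2 \rangle(\tau)
= \frac{1}{2\pi^2} \int_{k_i}^{a(\tau) H_I} dk\; k^2\, \big|\Psi_k(\tau)\big|^2 .
```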
In Fig. 4, the contribution of back-reaction is plotted for both positive and negative values of the coupling parameter ξ. As expected, ρ br < 0, so back-reaction reduces the total amount of geometric particles produced, since it gives a negative contribution to the zero-order energy-momentum tensor, Eq. (47).
However, its effects are almost negligible in the whole slow-roll phase, so it can be safely neglected when dealing with particle production during inflation. In other words, the effects of back-reaction do not significantly influence the net geometric particle production during the inflationary regime.
We also remark that back-reaction effects disappear in the limiting case of a pure de Sitter expansion, for which ϵ = 0. This result appears evident, since in a pure de Sitter phase no particle production is possible at the perturbative level. The net effect would therefore be not to produce particles, but only to accelerate the Universe.
However, the above considerations do not allow one to ignore back-reaction at all stages of the primordial Universe. Indeed, we expect back-reaction to play a more relevant role once the slow-roll approximation is no longer valid, i.e., close to the transition to reheating [52]. In that epoch, therefore, baryon production appears to be dominant, in fulfillment of the standard picture of reheating.
VII. ENTANGLEMENT PRODUCTION AT PRIMORDIAL TIME
We finally quantify the entanglement entropy arising from geometric particle production at second order in the perturbation, i.e., when a purely geometric contribution is present. We follow the same approach introduced in Ref. [29] and, as anticipated, we set here β_p = β_q = 0. In this way we neglect the contribution due to GPP, which is typically interpreted in terms of squeezing entropy between k and −k modes. This entropy has been widely investigated in cosmological scenarios, as discussed in the Introduction and, more specifically, in Sec. V for the case of inflaton fluctuations. Crucially, it does not depend on the interaction, simply arising from the fact that the definition of positive- and negative-frequency modes typically differs between the asymptotic in and out regions. Neglecting Bogolubov transformations, the S matrix (50) gives the final state of the system, Eq. (70), where we have introduced the shorthand notation ⟨p, q|Ŝ|0⟩ ≡ S⁽¹⁾_pq and the constant N is derived from ⟨Φ|Φ⟩ = 1.
Eq. (70) describes a bipartite pure state, whose entanglement entropy is quantified as usual by the von Neumann entropy of the reduced state, obtained after tracing out the p or q modes. Accordingly, the reduced density matrix ρ_p is obtained by tracing over the q modes. The corresponding von Neumann entropy S(ρ_p) is plotted in Fig. 5 as a function of the momentum p_x, assuming again for simplicity that both particles are produced along the x-axis.
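To make the tracing procedure concrete, the following is a minimal numerical sketch (not the authors' code): for a bipartite pure state with a discretized amplitude matrix S_pq, the Schmidt coefficients are the singular values of the normalized matrix, from which the von Neumann entropy follows. The toy amplitude used below is purely illustrative and is not the S⁽¹⁾_pq of the paper.

```python
import numpy as np

def entanglement_entropy(S):
    """Von Neumann entropy of the reduced state for |Phi> ~ sum_pq S_pq |p,q>.

    The Schmidt coefficients of a bipartite pure state are the singular
    values of the normalized amplitude matrix, so the entropy follows
    directly from an SVD.
    """
    S = S / np.linalg.norm(S)               # enforce <Phi|Phi> = 1
    sv = np.linalg.svd(S, compute_uv=False)
    lam = sv**2                              # Schmidt probabilities
    lam = lam[lam > 1e-15]
    return float(-np.sum(lam * np.log(lam)))

# Toy amplitude matrix mimicking mode mixing (illustrative only):
p = q = np.linspace(0.001, 1.0, 200)
S_pq = np.exp(-np.add.outer(p, q)) / np.sqrt(np.outer(p, q))**3
print(entanglement_entropy(S_pq))
```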
In analogy with the entanglement entropy associated with GPP [22,24,28], we notice that entanglement generation is higher as p → p_i, where p_i is defined as k_i in Sec. IV. This is due to the bosonic nature of the field, for which modes of smaller p are more easily excited, as expected. The main difference, as discussed above, is that geometric particle production allows mode mixing, thus leading to entanglement between particle pairs with q ≠ −p.
Another crucial point is the following: for scalar fields, entanglement due to GPP arises as a consequence of the fact that the final state of the system is in the form of independent two-mode squeezed states [24]. On the contrary, in our framework the evolution of the initial vacuum is governed by the S matrix, which leads to the final state (70). So, although the mode dependence of the entanglement entropy is similar in both scenarios, the origin of such entropy turns out to be completely different. We remark that going beyond first order in Eq. (70) would imply that particle production is not limited to pairs, thus enriching the overall picture of geometric cosmological entanglement. The same consideration applies if non-quadratic (e.g. quartic) potentials are chosen to describe the inflaton dynamics in place of Eq. (23), suggesting that the inflaton potential may significantly affect the mode dependence of the geometric entropy. Since our approach to inhomogeneities is a perturbative one, we notice that the amount of such geometric entanglement is typically small in our model: possible extensions to fully inhomogeneous spacetimes may shed further light on the properties of cosmological entanglement. We also notice that the entanglement entropy is sensitive to the sign of the coupling constant between the field and the scalar curvature. This may be of crucial importance in understanding the nature of such a coupling.
In fact, changing the coupling constant in the interacting potential can be interpreted as modifying the type of interaction between the scalar field and the gravity sector. Indeed, a positive sign of ξ corresponds to an attractive behavior of the Yukawa-like contribution to the effective potential. Hence, shifting from positive to negative signs in the Yukawa contribution may lead to repulsive gravity effects, and in this way we can justify the deep difference that occurs as ξ is modified. Repulsive gravity effects are not so rare in cosmological scenarios. For instance, dark energy models and/or extended theories of gravity seem to show similar effects [94]. Analogously, repulsive gravity effects are often predicted to occur in black holes and naked singularities [95,96].
VIII. CONCLUSIONS AND PERSPECTIVES
In this paper, we quantified the entanglement entropy associated with geometric particle production, specializing to the second order of the perturbative expansion, i.e., assuming a purely geometric contribution. To do so, we adopted a single-field inflationary scenario, where the inflaton fluctuations are responsible for metric perturbations and also lead to back-reaction effects, studied from a classical point of view.
We investigated the dynamics of these fluctuations, understanding how they are responsible for the geometric mechanism of particle production, conjecturing these particles to contribute to dark matter abundance in the very early Universe.
We evaluated the modes and the corresponding analytical solutions for the inflaton field. The potential involved here is a quadratic chaotic one, coupled to the scalar curvature. The corresponding effective potential is investigated, and we computed the number of e-foldings, employing a quasi-de Sitter scale factor for the dynamics.
We then studied particle production and back-reaction effects. Taking zero Bogolubov coefficients at first order in the expansion, we showed that the corresponding geometric particles and their probabilities, for positive and negative coupling constants ξ, are not deeply influenced by back-reaction effects. To show this, we derived the amplitude element, adopting the Dyson expansion of the S matrix, and quantified pairs of particles with different momenta in the limit of super-Hubble scales. We also compared geometric production rates to realistic dissipation rates in warm inflation scenarios, where the interaction of the inflaton with other quantum fields in the slow-roll regime leads to particle production. We argued that tracing such production back to geometric effects would not lead to sufficient dissipation in a single-field inflationary scenario.
Afterwards, we modeled the entropy of entanglement as due to the mode mixing of the above-obtained expansion. We showed its mode dependence and we focused on physical consequences on inflationary dynamics.
In analogy with the entanglement entropy associated with GPP [22,24,28], we noticed that entanglement generation is higher as p → p_i, where p_i is defined as k_i, as a consequence of the bosonic nature of the field itself, for which modes of smaller p are more easily excited, as expected.
However, entanglement generated by the sole expansion of the Universe has a different nature, because in this case the asymptotic out state of the system can be written as independent two-mode squeezed states, while inhomogeneities excite the initial vacuum only in terms of particle pairs. Consequently, we emphasized that the origin of such entropy turns out to be completely different.
The presence of inhomogeneities in the early Universe cannot be neglected, since these fluctuations are the seeds of cosmic structure. Accordingly, a complete characterization of cosmological entanglement cannot ignore spacetime perturbations. In particular, we demonstrated that the entropy due to geometric particle production is sensitive to the details of the expansion, e.g. to the initial value of the inflaton field and the Hubble parameter during inflation. This means that geometric cosmological entanglement may be useful in deducing some parameters which were crucial for the Universe evolution.
The latter is true in particular if the particle candidate in our model can be interpreted as dark matter, which is expected to have negligible interaction with standard matter: in this case, residual quantum correlations may have survived to the present time.
In general, our perturbative approach furnished a small correction in the form of geometric entanglement, as a consequence of how we treated inhomogeneities. Our model can be refined by also including the contribution due to Bogolubov coefficients at second perturbative order for particle production, which is expected to increase the total amount of entanglement.
Future works will also shed light on how to quantify entanglement in non-perturbative inhomogeneous contexts. Moreover, we will discuss additional properties of cosmological entanglement, changing both the effective potential, likely considering more realistic ones, and the spacetime, assuming inhomogeneous solutions, instead of perturbing FRW. Finally, we will investigate more carefully the role played by such geometric production in dark matter scenarios, also including back-reaction effects both from a classical and semi-classical point of view.
does not depend on time, but only on the momentum k. Accordingly, the differential equation (A5) can be solved analytically. From Eq. (A3c) we notice that h^∥_ij(x, τ) ∝ k_i k_j β(x, τ), and since we are dealing with super-Hubble scales with k ≪ 1 (see Fig. 3), this contribution can be neglected with respect to h(x, τ), which is given by Eq. (A9) 19. We see that the value of c₂ does not affect h(x, τ), which is the physical perturbation. For this reason, we can safely set c₂ = 0. The constant c₁ is in principle arbitrary. However, it can be fixed by imposing that the total number of particles produced is a gauge-invariant quantity, i.e., exploiting the results of Sec. V. The perturbation tensor in synchronous gauge then takes its final form, where again we remark that we are dealing with super-Hubble scales and h is given by Eq. (A9). | 2022-12-14T06:41:14.574Z | 2022-12-13T00:00:00.000 | {
"year": 2022,
"sha1": "2a9b05caafaa0aa246874c4168fa9d9dad9c6191",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2212.06448",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2a9b05caafaa0aa246874c4168fa9d9dad9c6191",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
238021050 | pes2o/s2orc | v3-fos-license | Organization of outdoor practice of students
In this article we consider ways of solving the problem of insufficient practical experience in drawing and painting among students of architecture departments. We propose a solution to this problem through the creation of teaching methods aimed at developing the perception of nature and compositional and technical skills and abilities. We study the principles underlying the plein air practice working programs of the leading architectural universities in Russia, and also compare and analyze two approaches to the program: an interdisciplinary approach based on the relationship of architecture with the visual arts (drawing and studying architectural monuments), and holding the plein air in the form of master classes by professional artists, with an emphasis on the techniques and technologies of plein air work. We reveal the methodological features of building a program for mastering the universal and general professional competencies of an architect and designer: acquaintance with monuments of architectural heritage, a creative research approach to the object of study, the development of compositional thinking and the basics of linear constructive drawing, and the development of graphic techniques necessary for working on project sketches. Recommendations are given for the development of tasks for plein air practice for students of architecture, reconstruction, urban planning and design departments.
Introduction
Drawing and painting are special subjects required in the training of an architect and designer. Due to the small number of hours of independent work, there is not enough time to complete the sketches needed to master the related skills. This negatively affects the mastery of the material of the educational plein air practice. Working en plein air implies a lot of independence: the student must choose the object for sketching, find an interesting angle, think over the composition, and draw the image. This corresponds to the work of a professional artist, when the composition of the project is thought out and solved by the author, who performs the whole range of tasks for its implementation. From this perspective, we consider which tasks, and in what sequence, form the corresponding skills.
What contributes to the creative development of an architect and designer? The scientific article by Zakharova, N., Vlasova, I., and Kartavtseva, O., "Technologies of tutorial assistance in the visual activity distance education for the bachelors-designers" [1], examines the development of a creative personality with the help of the all-round artistic and cultural development of students in collaboration with the teacher. The article by Zakharova, N. and Vlasova, I., "Technological initiative for the roadmap implementation for the bachelors-designers' creative development in art" [2], considers a personality-oriented and practice-oriented approach to the organization of the educational process. The article by Vlasova, I., Ushanyova, Y., and Pisarenko, S., "Professional competencies formation in the field of aesthetic artwork evaluation" [3], examines the developmental possibilities of a methodology for students' independent search for works of fine art. The articles by Burovkina, L.A., Koreshkov, V.V., and Prishchepa, A.A., "The problem of preparing future designers" [4], and by Prishchepa, A. and Maidibor, O., "The value of ancient architecture for an educational program of masters of architectural space design" [5], discuss the educational opportunities of studying the history of architecture, archeology, cultural history, local history, and arts and crafts in forming the image of an architectural project. The article by Asu Besgen, Nilgun Kuloglu, and Sara Fathalizadehalemdari, "Teaching/Learning Strategies through Art: Art and Basic Design Education" [6], is devoted to the development of visual thinking in designers through comprehension of the elements and principles of art. Asu Besgen's article "Teaching/Learning Strategies through Art: Painting and Basic Design Education" [7] examines the relationship between painting and architecture, the study of compositional schemes using examples of works of fine art, and the study of architectural monuments in which painting is a spatial element. Peter O. Adewale and Olasunmbo Bolanle Adhuze, in "Entry qualifications and academic performance of architecture students in Nigerian Polytechnics: Are the admission requirements still relevant?" [8], consider the situation in the architectural education of Nigeria, where the program is based on mathematics and physics, and show that there is no direct dependence of the success of graduates on mastery of these subjects and that it is necessary to develop graphic skills. Ana Torres, Juan Serra, Jorge Llopis, Ángela García, and Nacho Cabodevilla, in "The Need and Experience of Drawing with Ingenuity. An Analysis of the Graphic Practice in Architectural Education" [9], discuss the importance of exploratory sketches and constant drawing exercises. Mohammadjavad Mahdavinejad, Raha Bahtooei, Seyyed Mohammadmahdi Hosseinikia, Mahsa Bagheri, Ayoob Aliniaye Motlagh, and Fatemeh Farhat, in "Aesthetics and Architectural Education and Learning Process" [10], raise the question of the relationship between aesthetics and architecture in teaching.
Melda Arca Yalçın and Mine Ulusoy, in the article "Personal and professional attitudes of architecture students" [11], conduct a study of students' opinions about the most in-demand professional qualities. Adina Palea, Georgeta Ciobanu, and Annamaria Kilyeni, in "Educational skills in training landscape architecture students: developing communication skills" [12], consider the development of the communication skills necessary for integration into the labor market. The designers Denis Weil and Matt Mayfield, in "Tomorrow's Critical Design Competencies: Building a Course System for the 21st Century" [13], discuss which competencies are the most important for a designer.
Gunnar Swanson's "Educating the Designer of 2025" stands out among the foreign scientific articles [14]. It contains an interesting statement that broad education does not guarantee a high qualification of the student; most important are the development of curiosity and practical project activities. There is no one right way for everyone, and a variety of methods is more useful. First of all, it is necessary to encourage the student's interest in learning, and thinking is best developed through the creation of real projects. The author recommends teaching students teamwork and systems thinking and acquainting them with large-scale projects. Hacer Mutlu Danaci, in the article "Creativity and knowledge in architectural education" [15], also raises the question of the application of theoretical knowledge in practice and points out the difficulties caused by an insufficient number of seminars and practices.
The lack of practical experience in drawing and painting can be compensated for by developing teaching methods aimed at developing the perception of nature in the plein air. The object of the research is the process of forming the skills and abilities of work in the plein air, with the aim of increasing its effectiveness. The practice was held for first- and second-year students in the directions of "Architecture", "Urban planning", "Reconstruction and restoration of architectural heritage", and "Design" at the Don State Technical University.
─ This article discusses the approaches to building the work program for plein air practice in architectural universities in Russia.
─ The connection between the plein air work program and the professional competencies of students in the areas of architecture and design has been established.
─ Two different plein air programs are considered through an analysis of the creative achievements of the students.
Materials and methods
The program of artistic practice in various architectural universities in Russia is based on several principles. One approach involves the step-by-step study by students of the techniques and methods of constructing a graphic image, from simple to complex, and the study of the compositional techniques used both in graphics and in design. Another type of program provides for the study, with the help of graphics, of types of architectural objects (industrial and civil buildings: housing, cultural, and religious buildings), with an analysis of the architectural concept of the object and the identification of the compositional structure of the monument. A third type of program connects artistic practice with art and design history: drawing architectural monuments of a certain period and identifying the compositional and stylistic features of the architecture.
The place of the plein air practice: the Crimean Peninsula, 2019
Excursions: Sevastopol, Balaklava, the Bakhchisarai Historical, Cultural and Archaeological Museum-Reserve, and the State Historical and Archaeological Museum-Reserve Tauric Chersonesos. The practice program was based on an interdisciplinary approach: the relationship between architecture and the visual arts. The tasks set were: the study of the monuments of the architectural heritage of Russia through the comprehension of their architectural forms in drawing and painting; the complex impact of various plastic arts on the development of the compositional skills of future architects; and the development of graphic skills and color perception.
The students showed interest in the excursions, and the sketches turned out to be expressive and varied in technique. In drawing the Bakhchisarai Palace, the task was to display the variety of decorative finishes without splitting the composition, and to emphasize the organizing role of monumental and decorative art in the architectural image of the object: the expressiveness of the ornament, the beauty of lines and silhouette, and the rhythms of color spots. The sketches turned out to be interesting, and the characteristic color of the monument was successfully rendered; many students carefully and enthusiastically worked on the decorative elements. When depicting the embankment of Balaklava, the task was to depict an open space with a multifaceted composition. Despite the variety of colors, the works turned out to be expressive, although they caused difficulties at the stage of generalization. Independent sketches of boats, yachts, and motor ships turned out to be interesting and expressive. There was a review of the preparatory material, followed by a search for ideas for the final composition.

The place of the plein air practice: the village of Divnomorskoye, 2020

The excursions were conducted at the Gelendzhik Museum of History and Local Lore, the dolmens on the Zhane River, and Lake Abrau in the village of Abrau-Dyurso. The practice was carried out in the form of master classes by masters of the Russian Academy of Arts, and students took part in exhibition activities. The objectives of the practice were: acquaintance with the work of outstanding figures of culture and art; training in drawing techniques under the guidance of masters; and the study of related areas necessary for an architect in future professional activity.

The master classes were held on the following topics: "Study of a seascape", "City landscape", "Study of a city street", "Sunset at the sea", and "Mountain landscape". The artists showed methods of drawing large planes with local color and the technique of multilayer watercolor painting. In addition, they talked about the correct sequencing of the work and the tonal gradation of objects. One of the master classes was devoted to the use of a limited palette of colors in painting and the meaning of tone; another was devoted to the use of an extended palette of colors or, as it is also called, multicolor. The sketches of many students turned out to be successful: color was harmoniously found and lighting was rendered correctly. The fast pace of work ensured freshness of color and general expressiveness of the work. Minor issues were related to the quality of the watercolor paper and brushes. There were difficulties with tonal analysis and the identification of spatial plans, and a fear of color resulted in sluggish tonal relationships.
There were many other master classes devoted to graphic sketches in different techniques and to quick sketches for attention, memory, and logical thinking. In this way, the speed of perception and drawing is developed, since the short time does not allow one to be distracted by details and it is necessary to grasp only the main thing: shape, design, volume, plasticity, and silhouette.
Results
The following Table 1 gives a comparison of the two places of the plein air practice.

Table 1. The sequence of the formation of graphic skills.

Practice base: the Crimean Peninsula
─ Knowledge about monuments of the architectural heritage, their history, and the features of their artistic style and planning;
─ Compositional skills improved;
─ Work in different graphic techniques mastered;
─ The skill of analyzing tonal relationships formed;
─ Analysis of spatial plans;
─ Compositional integrity in sketch and study solutions;
─ Selection of compositional means, selection of analogs, bringing the work to completion;
─ The concept of the relationship between architecture and the plastic arts.

Practice base: settlement Divnomorskoe
─ The skill of determining the local color of an object and the principle of working with large color ratios;
─ The skill of determining the general tone in a study improved;
─ The skill of comparing and analyzing tonal relationships, which has a positive effect on the transfer of spatial plans in sketches and studies;
─ The skill of comparing and analyzing color relationships improved; greater expressiveness achieved in conveying the states of nature; improved light perception;
─ Skills of working in a limited and in a multicolored palette;
─ Skills in working with various graphic materials, speed of perception, and drawing from life;
─ Practical experience.

To compare the research results, the number of types of practical tasks for both programs was calculated and is presented in Table 2. The poll showed that the greatest interest in the first practice was aroused by the excursions to architectural objects, and in the second, by the acquaintance with famous artists and the master classes. Skills were formed in a different sequence, and the perception of the architectural and natural form and its use in creating an image occurred in different ways. The methods of work aimed at concretizing one task gave positive results in general visual literacy and freedom of expression of the compositional concept.
The recasting of the program brought about some changes in this context. The more specific the task was, the more interesting and varied the results were. Each task was divided into stages: the students concentrated all their attention on practicing one skill, after which they moved on to the next task. As a result of this consistent mastering, the students treated the tasks thoughtfully, and the learning outcomes were not temporary, but permanent.
Discussion
Based on these results, it is proposed to build the program and tasks for plein air practice around the main competencies of the architect:
1. Acquaintance with architectural objects. One needs to look at them from different points of view, find interesting angles, and notice the lighting conditions in which they are most interesting. This is an engaging stage, combining a walk, an excursion, scientific research, and reportage shooting.
2. Sketches of landscape and architecture in pencil. When the objects are found, it is necessary to make many small pencil sketches to find interesting compositions, using various compositional techniques and different scales of image: close-up (an architectural detail in its environment), medium plan (3-5 large objects: adjacent buildings, architectural forms of a part of the street), and background (street perspective, city panorama).
3. Sketches performed in fixed graphics; they teach one to be thoughtful and organized and hone one's technique.
4. Analysis of the architectural object. Sketching with soft materials: drawing from the stain. The angle from which the compositional structure of the object is most readable is determined. An indispensable condition is the recognizability of the object: the transfer of the planning structure of the object and the proportions of the main forms. Black-and-white modeling of the object and the elaboration of details and ornaments are carried out.
5. Peculiarities of color perception in plein air conditions. The study of the technology of painting with watercolors in a wet manner and in gouache, the technique of color stretching, and the use of corpus techniques and mixed media.
6. Studies of states: morning, afternoon, and evening. The influence of lighting in natural conditions and the difference in color under sunny and under cloudy diffused lighting.
7. A sketch in a limited palette. Evening. The value of tone in painting.
8. Composition "Multi-faceted urban landscape". Performed in the last days of practice, when sufficient material has already been collected to create the composition. An independent creative search for a graphic solution.
Further research will be devoted to the educational possibilities of conducting the plein air practice in the form of master classes by professional artists.
Conclusion
Consideration of the various programs showed that most universities are unanimous on the question of which competencies of the future architect, laid down in the curriculum, can be provided by artistic practice. Universal competencies: Systemic and critical thinking (UC-1), Teamwork and leadership (UC-3), and Self-organization and self-development (UC-6). General professional competencies: Artistic and graphic (GPC-1) and General engineering (GPC-4).
The main objectives of the plein air practice program for students in architectural directions are:
─ Consolidation of the theoretical knowledge gained during the study of the disciplines "Drawing" and "Painting";
─ Knowledge of the methods of visual display and modeling of three-dimensional forms and space, the logic of constructing volumetric spatial forms, and the properties of the graphic means of expressing architectural design;
─ Mastering the means and methods of creative work on composition: the ability to analyze and model various volumes and spaces, both from nature and from imagination;
─ Acquisition of practical experience in applying the knowledge, skills, and abilities of graphic representation of architectural objects and their elements, and the acquisition of knowledge in related areas of professional activity: construction and art history.
The main requirements for the content of the plein air practice program are: 1. Acquaintance with monuments of architectural heritage and plastic arts that complement their image: sculpture, arts and crafts, painting, graphics, and landscape architecture.
2. Creative and research approach to the object of the image, collecting graphic material, preliminary sketches, and sketches from nature.
3. The study of the techniques, technologies, and materials of the plastic arts, drawing, painting, and composition necessary for the formation of the future architect as a competent specialist in this area: one who can convey his ideas in graphic expression, use the means of the related arts to express an architectural project, and manage specialists in these areas. | 2021-08-27T17:01:31.744Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "10530f8739b52845f5df6e2675c5691941af4d14",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/49/e3sconf_interagromash2021_12062.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "70c3683da56b1904ce3a593cad9fa78e51a79527",
"s2fieldsofstudy": [
"Art",
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
241261169 | pes2o/s2orc | v3-fos-license | A Tale of Two States: Analyzing the State of Higher Education in Kerala and Karnataka
In India, over 32 crore active learners have been affected by the closure of educational institutions across the country. There are many obstacles, such as accessibility, pedagogy and curriculum, infrastructure, and capabilities, that can affect the transformation process. However, there is a positive side to digital learning, such as increased efficiency and the scope for mass scale-up of operations. Every state in India faces unique challenges, and a single policy representing the entire country's higher education system is unviable. The authors attempt to analyze the state of higher education in Kerala and Karnataka based on four parameters, namely accessibility, quality, privatization, and digital infrastructure. The last parameter is prime and obvious because the pandemic is the inflection point for e-learning, and digital infrastructure plays an important role. The research was conducted through a detailed literature review of research papers, policy documents, news articles, and government reports.
Introduction
In India, over 32 crore active learners have been affected by the closure of educational institutions across the country. When classroom learning is hampered, universities switch to online remote learning. Although the transition was quickly adopted by most universities and colleges, it still has a long way to go. There are many obstacles, such as accessibility, pedagogy and curriculum, infrastructure, and capabilities, that can affect the transformation process. However, there is a positive side to digital learning, such as increased efficiency and the scope for mass scale-up of operations. To promote online learning, the Ministry of Human Resource and Development (MHRD) has come up with various initiatives. One such initiative is SWAYAM Prabha, an online education resource for students; however, the lack of interaction with teachers and problems of assessment and evaluation lead to an important demarcation between online learning and e-learning.
Every state in India faces unique challenges, and a single policy representing the entire country's higher education system is unviable. Therefore, each state must be assessed separately to identify the gaps and comfort points in the state's adoption of e-learning in the higher education system. In this policy brief, the authors focus on state-level policies for higher education and digital literacy in Kerala and Karnataka.
Kerala
The higher education system in Kerala is regulated by the Kerala University Act and Kerala Education Rules 1959. Currently, the sector is governed by multiple authorities such as the Higher Education Council, Department of Education, Directorate of College Education, Department of Technical Education, and universities. According to Kerala's economic review published in 2016, there are 14 universities and three deemed universities functioning in the State.
The district-wise spread of arts and science colleges gives a clear picture of the physical infrastructure. There are 63 colleges in the northern region and 89 colleges in the central and southern districts. Out of 213 colleges in total, 153 are private unaided and the rest are under the government. The number of students enrolled in arts and science colleges is 2.7 lakhs, of which 71 percent are girls. The most preferred degree among students is the BA, followed by the B.Sc. and then the B.Com. However, in the transition from undergraduate to postgraduate studies, there is a significant drop in enrolment. For instance, enrolment falls from 96,000 at the BA level to 12,000 at the MA level.
For engineering, there are 171 self-financed colleges, 9 government colleges, and three private aided colleges. Engineering colleges in Kerala are unevenly located: there are 35 colleges in the north, and moving down through the state, the numbers increase to 72 and 96. Students enrolled in engineering colleges number over five thousand at the undergraduate level and 1,500 at the postgraduate level. According to the 2018-19 All India Survey on Higher Education, the Gross Enrolment Ratio is 37 in the general category, and 25 and 23 among the SC and ST populations, respectively.
A review of the above statistics shows that higher education in Kerala is in a fairly satisfactory condition.
However, the district-wise statistics and a comparison with other southern states reveal disparities. Academics have raised many issues in the higher education sphere in Kerala, such as accessibility, equity, privatization, and the quality of education.
Karnataka
Karnataka has one of the most highly educated populations in India. The state has the largest number of schools and educational institutions, nearly half of which are managed by the government. The higher education system in Karnataka comprises degree colleges, technical and vocational colleges, universities, institutions deemed to be universities, and institutions of higher education of national importance. There are different types of colleges, such as government colleges, privately managed colleges, private-aided colleges, university colleges, and professional colleges. In Karnataka, higher education is governed by the Karnataka State Higher Education Council Act. The regulatory authorities in the state are the Karnataka State Higher Education Council (KSHEC), the Directorate of College Education, and the Directorate of Technical Education. As of 2018, there are 28 state public universities, one central university, 16 state private universities, and 11 deemed universities, for a total of 64 universities in the state. There are close to 4,000 colleges in the state, of which 3,020 are private colleges. In 2018-19, expenditure on higher education was 2.25% of the state GDP and 20% of the total education expenditure. The Gross Enrolment Ratio in 2018 is 28.8; among the SC and ST populations it is 21 and 19, respectively. The GER among the male and female populations is at par in the general, SC, and ST categories.
The following sections cover the specific parameters for the analysis of the state of higher education in the two states, viz. accessibility, quality, privatization, and digital infrastructure.
Accessibility
In Kerala, even though there is a vast network of higher education institutions, the number of colleges per lakh of eligible population is one of the lowest among the southern states of India. There are only 45 colleges per lakh population in the 18-23 years age group, leaving only Tamil Nadu behind. District-wise data show that the northern districts such as Wayanad and Kasargod, despite higher populations, have far fewer engineering colleges than other regions.
The 2011 census shows that most of the colleges in the central and southern regions lie within 5-10 kilometers of a household, whereas the colleges in the northern regions lie beyond ten kilometers. Interestingly, higher education enrolment there is approximately seven lakhs, the lowest in the state, which suggests reasons for low enrolment such as the outflow of students to other states for better job opportunities and better instruction quality.
According to the All India Survey on Higher Education (AISHE) 2019, college density in Karnataka is 53 per lakh of eligible population, compared to the national average of 28; this has increased from 41 in 2011-12. This number is also higher than in its neighboring states. Karnataka also has the highest number of students coming from foreign countries, i.e., over ten thousand in 2017-18. Even the number of private unaided colleges is the highest among the southern states in India (for comparison, Kerala: 1,032; Tamil Nadu: 2,131; Andhra Pradesh: 2,223; Telangana: 1,700). There are over 15 lakh students enrolled in undergraduate courses and over 2 lakh students at the postgraduate level. Going down to the district level, there are 30 districts in Karnataka, and a district-wise analysis of higher education categorizes them into three major categories: lagging, strongest, and districts that require special attention. Yadgir, Raichur, Chamrajnagar, Koppal, and Haveri are categorized as the weakest districts in the state. In these districts, there is usually low enrolment among backward classes and women and low institutional density; some are drought-prone areas, and one of the districts has a low Gender Parity Index. Dakshina Kannada, Mysuru, Bangalore Urban, Udupi, and Dharwad are categorized as the five strongest districts in the higher education sector in the state. These districts have high institutional density, higher enrolment, gender parity, etc. Districts that require special attention are Haveri, Bangalore Rural, Bellary, Kodagu, and Chitradurga.
The urban-rural differentiation in access to higher education can be exemplified by the differences between Bengaluru Urban and Bengaluru Rural. In general, there are more state public universities, private universities, and polytechnic colleges in urban Bengaluru than in rural Bengaluru. Additionally, the GER among SC and ST students in rural Bengaluru lags far behind its urban counterpart. As a result of such unbalanced development, higher education in rural Bengaluru is riddled with lower performance, higher gender disparity, and a low enrolment ratio. It is clear that rural Bengaluru falls in the special-needs category and urban Bengaluru in the progressive category in the higher education sector.
Quality
The Annual Survey on Education Report 2018 highlights the fact that the poor quality of school education has a ripple effect on higher education. Kerala records the highest literacy rate in the country, and its laudable school system is responsible for this achievement. However, the higher education system has not managed to replicate the model of school education. The quality of education can be ascertained by whether it provides returns to students in the form of rightly matched employment or further study opportunities and whether it produces law-abiding, informed citizens.
Around twenty-five percent of the undergraduate students from seven colleges in Kerala preferred to study outside the state for higher studies, suggesting that the higher education system in Kerala is not sufficient to meet the needs of its students. The 2016 economic review of Kerala highlights the fact that the number of job seekers in Kerala is rising along with the expanding labor demand for manual jobs. Another possible reason for students from Kerala moving out is the mismatch between the courses available and the courses demanded by the students.
Literature focusing on the issue of the quality of higher education is limited; however, one researcher attempts to link quality with the privatization of education. The research concludes that in self-financing colleges the percentage of students passing in an academic year was as low as 35 percent, whereas the percentage increased to 65 percent in government colleges affiliated with the same university.
The Karnataka Higher Education Perspective Plan places importance on enhancing the quality of education on various parameters, such as improving institutional infrastructure, capacity building of teachers, the quality of teaching and learning, better assessment, and the accreditation of colleges. Some academicians in the state are of the view that accreditation by NAAC is a basic parameter for assessing the quality of universities and colleges in Karnataka. There has been an attempt to make the Internal Quality Assurance Cells (IQAC) work better. One significant achievement is the development of a data bank, and many colleges have been successful in this direction.
Among all states in the country, Karnataka received the largest number of student migrants for education, with 1,80,000 students. Even though Karnataka's population is only 5% of the Indian population, its share in higher education is close to 15%. A paper that studied the factors responsible for such a high influx of migrant students to Karnataka concludes that the prospects of a better career and of job or entrepreneurship opportunities compel students to migrate.
Privatizing Higher Education
After the formation of the state in 1956, most of the colleges were government-run or private aided until the 1990s. From 1990 to 2007-08, the proportion of self-financing colleges in the fields of medicine and engineering increased to 80-90 percent, rising further to 93 percent in 2017.
Privatization of higher education has raised issues such as the quality of learning, accessibility, and affordability. One author noted that the performance of self-financing colleges affiliated with Kerala University is lower than that of government-run affiliated colleges, as recorded by the pass percentage of students.
Privatization has given a major thrust to professional and technical education in the state. The economic review of 2016 laid out the priority for the 12th five-year plan to promote public-private partnerships in the fields of technical education and research and innovation institutions. In this effort, the government has entered into agreements with many multinational companies to establish centers for research and training on university campuses.
It is noted that privatization is necessary for professional and technical education so that learning matches market needs, thus increasing the chances of employability. However, researchers and educators have highlighted various issues arising from commercialization, such as financial barriers leading to inaccessibility, inequity, and deteriorating quality, which may continue to persist in the absence of government intervention.
There has been a noticeable increase in self-financing colleges in the state, which in turn has raised quality issues. Most of these colleges are outside the purview of UGC grants and are run purely on user fees and philanthropic contributions. Therefore, the tendency is to start only relevant market-driven courses, often with an opaque fee structure. The capitation lobby is very active in teacher education, management, engineering, and paramedical colleges.
Kerala

KFON (Kerala Fibre Optic Network)
KFON forms an essential part of the Kerala government's ambitious plan to provide Internet connections to below poverty line (BPL) families, government offices, hospitals, and schools. About 12 lakh BPL families will be given a free Internet connection. KFON is a highly scalable network infrastructure that provides on-demand, affordable broadband connectivity of up to 100 Mbps for organizations and households. The network connects the state administration with all urban and rural areas to address the digital divide.
As of 2016, the state had a vast optical fiber communication network, to which BSNL had contributed the major share, covering 20,000 km, followed by Reliance (8,500 km), with other major players being Airtel, Vodafone, Idea, and Tata (nearly 3,225 km put together). A great digital divide persists in the country, especially between urban and rural areas. In India, only 10 percent of households have uninterrupted internet and a computer. In 2015, only 18 percent of people belonging to the 14-29 years age group in rural areas were able to operate computers, as compared to 48 percent in urban areas. The impact assessment report of the National Digital Literacy Mission published in 2015 provides a summary of the state of digital literacy in Kerala.
As of 2013-14, 81% of males and 74% of females belonging to the 14-29 years age group knew how to operate a computer, and only 30% of households had computers with an internet facility. According to the Digital Literacy Index, Kerala is a moderate performer. Various programs have been launched by the state government since the 9th five-year plan in the ICT sector to bridge the digital divide and promote e-literacy in the state. Kerala is the only state in the country where at least one person in 49% of families is computer literate: of the 78.35 lakh households in the state, almost 39.17 lakh have at least one family member who knows how to operate a computer. 97% of villages in the state have an internet café, whereas the national average is 17%.

5.1.2 Digital Education

The Akshaya project was launched in 2002 in Kerala to bridge the digital divide by training at least one member of each family to be e-literate. In the initial stage, basic training was provided to the selected candidates to familiarize them with the basics and scope of IT. It was the largest rural e-literacy training project ever organized worldwide, with 32.8 lakh citizens benefitting from the initiative.
Akshaya Centres have emerged as the finest network of effective Common Service Centres (CSC) to deliver citizen-centric services. The project uses various models of delivery such as G2C, G2B as well as B2C services. Presently, around 2,650 Akshaya e-centres are spread across Kerala with at least 2 centers in each Panchayats. By bringing ICT to all segments of people Akshaya acts as a vehicle for improved quality of life, accessibility to information, transparency in governance, and overall socio-economic growth.
KITE (Kerala Infrastructure and Technology for Education) was established by the government to foster, promote and implement the modernization of educational institutions in the State of Kerala. The spectrum of KITE includes Information & Communication Technology, Capacity Building, Content Development, Connectivity, e-Learning, Satellite-based education, Support and Maintenance mechanism, e-Governance, or other related activities.
In 2020, the scope of KITE was further expanded to the higher education sector, including arts and science colleges, engineering colleges, and universities, to fuel ICT support in learning and teaching activities.
The project has so far reached out to more than 39 lakh students spread across 12,600 schools of Kerala with a high level of digital literacy.
Kerala IT Policy 2017 - The main agenda of the policy is to establish Kerala as a leading IT destination and to generate direct and indirect employment opportunities in the IT sector. It calls for building the necessary technological infrastructure and enhancing the human capital required to both produce and use innovative technologies through education and skill-building. It also lays out the aim to make the state 100% e-literate and to utilize ICT in all walks of life to ensure the equitable and inclusive development of society.

"I am also Digital" - In March 2020, the Kerala government launched an ambitious program to bring 100 percent digital literacy to the state. The e-literacy drive was launched by the Kerala State IT Mission. The scheme is intended to run in a people's participatory mode, wherein trainers are invited from the communities to impart training to people in their own community. Educating people about using digital technology and cybersecurity will enhance the capacity of society.
Karnataka
A study was conducted in 2016 among school students in urban and rural areas across the state. The purpose of the paper was to examine the digital divide and the various problems faced by students in using computers, and to know the reasons for the non-use of computers by rural and urban students. A total sample of almost 2,600 students was selected from 64 rural and urban high schools of two districts of Karnataka state. The findings of the research reveal that various digital barriers exist in the state. The study found that only 20.66 percent of rural students used the computer for various purposes, while 69.70 percent of urban students did. The main reason for the lower usage is the non-availability of computers in rural schools, so the students are not able to use them.
There is a significant difference between rural and urban students concerning their familiarity with various applications of the computer. The majority of rural students used computers to play computer games whereas the majority of urban students used the computer for project works.
The most important finding was that the majority of both rural and urban students faced electric power failure as one of the problems while using computers. Most rural students were mainly dependent on the computers available at school, since they do not have computer facilities at home.

5.2.1 Digital Literacy

The impact assessment report of the National Digital Literacy Mission published in 2015 provides a summary of the state of digital literacy in Karnataka. According to the report, as of 2013-14, 41% of males and 30% of females belonging to the 14-29 years age group knew how to operate a computer. Additionally, the report suggests that close to 20% of households have computers with an internet facility. According to the Digital Literacy Index, Karnataka is a moderate performer, with scope for improvement in rural areas.

5.2.2 Digital Education

Jananasangama - Smart Karnataka Education Yardstick (SMART-Key) is an initiative by the Government of Karnataka. It is a 100-point program that aims to remove the existing deficiencies in the system by adopting technology. The objectives of the program are improving access to and quality of learning, reducing cost, making administration transparent, etc. The following are some of the targets intended to promote e-learning in higher education in the state:
─ Establishing smart classes, tele-education, and e-libraries;
─ Integrating digital libraries and e-contents;
─ Making e-content available to all students and staff in the state;
─ Smart support for students;
─ Setting up multimedia recording studios for e-contents;
─ Transforming the examination system.

As mentioned earlier, there are 100 ICT-based initiatives ongoing in the higher education system in the state. These relate to all aspects such as assignments, online books, online exams, and class and curriculum, to list a few. E-content, e-libraries, and the integration of the e-resources of all universities are the major digital initiatives to promote online learning in the state.
The presentation by the Karnataka State Higher Education Council highlights the importance of ICT in solving critical issues in the higher education sector in Karnataka. The presentation mentions that e-literacy for all levels, e-learning for teachers, students, and staff, and e-content at the UG level have already been implemented. Announced initiatives such as an online admission process, e-library resources, e-content at the PG level, and online exams are yet to be implemented. According to the Telecom Statistics Report 2018, there are in total 30.16 million internet subscribers: 6.59 million in rural areas and 23.57 million in urban areas.
Other Digital Initiatives
The government of Karnataka has taken various initiatives to ease the use of technology across all user cohorts. One major development has come in the form of the deployment of Wi-Fi hotspots using the NOFN backhaul of BSNL at 2,150 Gram Panchayats across Karnataka; Wi-Fi connectivity to Government of Karnataka websites is limited to 100 MB/day for each GP. In 2019, the government announced the setting up of 4,000 Wi-Fi hotspots in the capital city in partnership with private companies. However, this was not the first time such a project was announced; since 2014 many such projects have been launched but never implemented.
Another initiative, called 'CHETNA', aims to empower, mentor, and support girls through various activities, including training sessions on using laptops as well as interactive sessions with leaders from domains like science and technology.
During the lockdown as part of India's strategy to reduce the spread of the Coronavirus pandemic, the Karnataka Government launched an e-learning program, GetCETGo for students preparing for the Common Entrance Test (CET) and National Eligibility-cum-Entrance Test (NEET), to make it easier for students to prepare for entrance exams.
Conclusion
The research leads us to a paradoxical conclusion. Firstly, the state of higher education in terms of quality and accessibility was not up to the mark in both states, and with the advent of the pandemic, the situation must have worsened. Secondly, the digital infrastructure in the states is largely financed and built by the government, but in higher education the role of the government is shrinking and market forces have more influence. This leads us to an interesting crossroads of government, private market forces, and higher education institutions, where all three players have different agendas and priorities. Therefore, it will need a more analytical approach to understand | 2021-08-20T19:04:01.321Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "c597900e0fd0644c7f0f18ea081f667e81f68db1",
"oa_license": "CCBY",
"oa_url": "https://iiste.org/Journals/index.php/JEP/article/download/56109/57947",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bea69af546d69435dfcefa0c56c5ea4827510db8",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
244441385 | pes2o/s2orc | v3-fos-license | IMPLEMENTATION OF NON-PHARMACEUTICAL INTERVENTION OF COVID-19 IN MRT THROUGH ENGINEERING CONTROLLED QUEUE LINE USING PARTICIPATORY ERGONOMICS APPROACH
The viral transmission in public places and public transportation can be minimized by following the World Health Organization (WHO) guidelines. However, the uncertainty in a dynamic system complicates social engagement with physical distancing regulations. This study aims to overcome this obstacle in MRT stations and trains by developing an adaptive queue line system. The system was developed using low-cost hardware and open-source software to guide passengers using visual information. The system works by capturing seat images and identifying the presence of humans using a cloud machine learning service. The physical representation of the MRT was translated into a data representation using the Internet of Things (IoT). The data were then streamed using an asynchronous API with a representative endpoint. The endpoint is accessed by a display computer on the destination station platform to provide visual information. The visual information was ergonomically designed following visual display principles, including the minimum content load, layout, color combination, and dimensions of contents. The design of the system was evaluated by a Markov simulation of virus transmission in the train and by usability testing of the visual design. The implementation of the system balanced the queue line capacity in the station and the distribution of crowd spots in the MRT. The system was effective due to the manipulation of the visual cortex by visual information. Consequently, the radius of viral transmission by aerosols and falling droplets can be reduced, and accordingly, the chance of airborne transmission can be lowered. Therefore, the adaptive queue line system is a non-pharmaceutical intervention against viral transmission diseases in public transportation.
The existing urban transport system has not reached its full potential, because it is unable to evolve immediately following the needs of the current situation and conditions. Thus, the current urban transport system cannot act as a social guidance system. The social guidance system developed in this study is a realization of the non-pharmacological intervention of study [11] through the application of data technology, and an extension of the system explained by [15]. This study aims to manage the passenger queue lines of the MRT to increase queue discipline alongside safety. This study also promotes a way to adapt to the new public situation during pandemics. The effort to achieve this goal is the development of the adaptive queue line system as a social guidance system for the MRT queue line, designed using a holistic approach. This research continues a line of work on improving MRT management, such as thermal comfort [16], aerodynamic noise [16], and usability tests of the MRT information system during the pandemic. The research provides information for MRT passengers to select the best queue line in front of the wagon doors according to the availability of space. The user interface of the visual display was built based on an ergonomics approach and evaluated with a usability test. The system successfully showed how to innovate to tackle or reduce the spread of the COVID-19 virus.
Materials and Methods
The adaptive queue line system software, hardware, and the procedures to build the system are presented in this section.
1. Adaptive Queue Line System Hardware
The capacity detection system was built from a microcomputer and IoT cameras installed in an MRT carriage. The microcomputer was a Raspberry Pi 4B running the Raspbian Buster operating system; it acted as a message broker for the IoT cameras and as the client for a cloud server. The IoT cameras and the microcomputer communicated using the Message Queuing Telemetry Transport (MQTT) protocol. The IoT camera used was the ESP32-CAM, consisting of an OV2640 camera module and an ESP32-S IoT microcontroller board. The microcomputer and IoT cameras were powered by a 12 V battery. A camera was embedded in the ceiling above every seat row. Each camera was assigned a string address as its MQTT topic; the topic was subscribed to by the broker and mapped to a target uniform resource locator (URL), and it was used as the link for sending the detection data from the MQTT broker to the cloud server. The train displayed on a station monitor was determined by the distance between the station and the train. The train location was tracked using a GY-NEO6MV2 GPS tracker, connected to the Raspberry Pi microcomputer through a Serial Peripheral Interface (SPI).
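As a rough illustration of the broker side of this setup, the following sketch subscribes a Raspberry Pi client to per-row camera topics with the paho-mqtt library. The topic scheme, the number of rows, and the handler name are assumptions made for illustration, since the paper does not list the actual topic strings.

import paho.mqtt.client as mqtt

# Hypothetical per-seat-row topic scheme; the real strings are not given.
SEAT_ROW_TOPICS = [f"carriage1/seat_row/{i}" for i in range(1, 13)]

def on_connect(client, userdata, flags, rc):
    # Subscribe to every camera topic once the connection is established.
    for topic in SEAT_ROW_TOPICS:
        client.subscribe(topic, qos=0)  # QoS 0: only the latest capture matters

def on_message(client, userdata, msg):
    # msg.payload holds the raw JPEG bytes published by an ESP32 camera;
    # hand them to the classification pipeline keyed by the topic string.
    handle_capture(topic=msg.topic, jpeg_bytes=msg.payload)

def handle_capture(topic, jpeg_bytes):
    pass  # placeholder: cropping and classification are sketched below

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # Mosquitto broker running on the Pi
client.loop_forever()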
2. Adaptive Queue Line System Software
The software of the adaptive queue line system was classified into cloud server software, IoT software, client software, and user interface (UI) software. All of the software was developed using the Python programming language. The cloud application was a representational state transfer application programming interface (REST API) application developed using the FastAPI module. The server was an asynchronous server gateway interface (ASGI) server, built using Uvicorn, as it is bundled with FastAPI. The cloud server routes the detection data to the right URL for each seat, priority seat, and hand strap. The detection data used the JavaScript Object Notation (JSON) format, containing the status of the regular seats, priority seats, and hand straps. Following that, the data were temporarily serialized using Pydantic as a data validation module.
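A minimal sketch of such an endpoint is given below, assuming hypothetical route names and a simple in-memory store; the paper specifies FastAPI, Pydantic, and Uvicorn, but not the exact URL scheme or JSON schema.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SeatStatus(BaseModel):
    occupied: bool          # result of the image classification
    priority: bool = False  # True for priority seats

# In-memory store keyed by carriage and seat id; Pydantic validates writes.
seat_db: dict[str, SeatStatus] = {}

@app.post("/carriage/{carriage_id}/seat/{seat_id}")
async def update_seat(carriage_id: str, seat_id: str, status: SeatStatus):
    seat_db[f"{carriage_id}/{seat_id}"] = status
    return {"ok": True}

@app.get("/carriage/{carriage_id}/seat/{seat_id}")
async def read_seat(carriage_id: str, seat_id: str) -> SeatStatus:
    return seat_db[f"{carriage_id}/{seat_id}"]

# Served with the bundled ASGI server: uvicorn main:app --port 8000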
The status of a seat, priority seat, or hand strap represents its availability. The status was determined based on the result of image classification. The image classification for seat and hand strap status was done using the Clarifai general model. Clarifai is a cloud machine learning service that provides RESTful machine learning services with several predefined models. The general model is a predefined deep neural network model for image classification that understands the features of a person, including the existence of a person in an image, gender, and priority needs. The expected input was an image containing an area that fits a single person. However, the image captured by the camera was a seat row image containing multiple seats. Therefore, the image was cropped using the Python Imaging Library (PIL) in the client application, which ran on the microcomputer. In conjunction with the image classification task, the client application also posts the status to the URL specified by each camera's MQTT topic on the server. Therefore, the client computer subscribes to the topics provided by the cameras by running Mosquitto as the MQTT broker application.
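The sketch below illustrates the cropping-and-posting pipeline described here. It assumes evenly spaced seats in the frame, a hypothetical server URL, and a placeholder person_detected() function standing in for the Clarifai general-model call, whose exact request format is not reproduced here.

import io
import requests
from PIL import Image

SEATS_PER_ROW = 6                     # assumed seat count per camera frame
SERVER = "http://cloud.example.com"   # hypothetical cloud server URL

def person_detected(tile_bytes: bytes) -> bool:
    # Placeholder for the Clarifai general-model call, which returns
    # concept labels (e.g. "person") with confidence scores for an image.
    raise NotImplementedError

def classify_row(jpeg_bytes: bytes, carriage_id: str, row: int) -> None:
    row_img = Image.open(io.BytesIO(jpeg_bytes))
    w, h = row_img.size
    seat_w = w // SEATS_PER_ROW  # assume seats are evenly spaced in the frame
    for i in range(SEATS_PER_ROW):
        tile = row_img.crop((i * seat_w, 0, (i + 1) * seat_w, h))
        buf = io.BytesIO()
        tile.save(buf, format="JPEG")
        occupied = person_detected(buf.getvalue())
        # POST the per-seat status JSON to the endpoint derived from the topic.
        requests.post(
            f"{SERVER}/carriage/{carriage_id}/seat/{row}-{i}",
            json={"occupied": occupied, "priority": False},
            timeout=5,
        )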
The seat status of an oncoming train has to be visible at the next station. Therefore, the URL accessed by the computer at the next station was determined using a distance matrix relationship. The Distance Matrix API of the Google Cloud service was used to measure the distance from a train to the next station. The train location was the first input point for the Distance Matrix API, and the list of station locations was the second input. Hence, the distance between the train and every station on its route was measured over time.
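A minimal sketch of such a request is shown below. The endpoint and parameter names follow Google's public Distance Matrix REST API, while the API key and the coordinate handling are placeholders.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def distances_to_stations(train_latlng, station_latlngs):
    origins = f"{train_latlng[0]},{train_latlng[1]}"
    destinations = "|".join(f"{lat},{lng}" for lat, lng in station_latlngs)
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={"origins": origins, "destinations": destinations, "key": API_KEY},
        timeout=5,
    ).json()
    # One row per origin; each element carries a distance value in meters.
    return [el["distance"]["value"] for el in resp["rows"][0]["elements"]]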
The NMEA satellite signal was used to locate the train. The NMEA sentences were decoded with the Pynmea2 module installed on the microcomputer. The train locator client application posts the current latitude and longitude obtained from decoding the GPGGA and GPRMC sentences. The distance between the train and the stations was measured on the server using the GeoPy module. The distance was measured against all stations of the MRT Jakarta Phase 1 route, as listed in Table 1. The GUI application at a station displays the train that is closest to that station. The GUI application is informed by the server about the distance; every calculation is done on the server. The GUI application requests the data using the GET method, and the server returns the ID of the closest train. The ESP32 camera IoT application was developed using MicroPython with an asynchronous MQTT module for MicroPython. The firmware used for the ESP32 camera was micropython-camera-driver, installed using the esptool Python package on a notebook computer running the Ubuntu 19.10 operating system. Each camera represents a seat row or hand strap row, and its MQTT topic was specified based on its seat row or hand strap number. The camera sent the captured image to the MQTT broker in real time. The quality of service (QoS) for the image data transfer was set to 0 to deliver only the latest camera capture.
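The following sketch shows the decoding-and-distance step with Pynmea2 and GeoPy. The station names and coordinates are invented placeholders, not the actual coordinates of the MRT Jakarta Phase 1 route in Table 1.

import pynmea2
from geopy.distance import geodesic

STATIONS = {
    "Station A": (-6.2891, 106.7749),  # placeholder coordinates
    "Station B": (-6.2926, 106.7935),
}

def parse_fix(nmea_line: str):
    msg = pynmea2.parse(nmea_line)
    if msg.sentence_type in ("GGA", "RMC"):
        return (msg.latitude, msg.longitude)
    return None

def closest_station(train_fix):
    # geodesic() returns a Distance object; .km converts it to kilometers.
    return min(STATIONS, key=lambda s: geodesic(train_fix, STATIONS[s]).km)

fix = parse_fix("$GPGGA,064036,0617.35,S,10646.49,E,1,08,0.9,20.0,M,,M,,*47")
if fix:
    print(closest_station(fix))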
3. Visual Information Design
The seat availability status information was conveyed to the passengers by a visual display. Both the infographic design and the physical system for displaying the visual information were considered.
The infographic was a graphical user interface (GUI) application developed using Visual Studio 2019 with the C# language. The GUI design started with a layout design; the design principle used was the Gestalt principle, which defined the layout sequence. The design process continued by determining the font height and proportions for 10 m visibility. The font height was determined using Eq. (1). The width, the boldness, the distance between fonts, and the space distance were derived from the font height through proportion coefficients, which were determined by following the guide [17]. The colors of the visual information were determined by following the Web Content Accessibility Guidelines (WCAG) 2.0.

The visual information was presented on a screen next to the corresponding MRT carriage door. The screen used was a 32-inch 1080p HD LED screen with a luminance of 300 nits (1 nit = 1 cd/m²). The height of the screen was determined using the latest version of the Indonesian anthropometry data [18]. The dimensions involved were D2 (eye height) and D9 (eye height in the sitting position) at the 50th percentile. The screen tilt angle was determined based on the viewing angle requirement for flat-panel display televisions [19]. Therefore, the tilt angle considered ranged from -8° to 8°, and the information had to be visible to both sitting and standing persons.
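Since Eq. (1) is not reproduced in this excerpt, the sketch below assumes a common ergonomic rule of thumb (a minimum character height of roughly 1/200 of the viewing distance) and illustrative proportion coefficients; it is not the authors' exact formula.

VIEWING_DISTANCE_MM = 10_000  # the 10 m visibility requirement

def font_dimensions(distance_mm: float) -> dict:
    height = distance_mm / 200            # assumed rule of thumb, not Eq. (1)
    return {
        "height_mm": height,
        "width_mm": height * 0.7,         # illustrative width:height ratio
        "stroke_mm": height / 6,          # illustrative stroke thickness
        "char_spacing_mm": height * 0.25, # illustrative inter-character gap
    }

print(font_dimensions(VIEWING_DISTANCE_MM))  # height_mm: 50.0 for 10 m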
4. Usability Testing
The usability testing was performed to evaluate the ease of use of the system and the user-friendliness of the visual information. The display prototype was tested using a usability approach with 25 respondents. Usability is defined in ISO 9241-11 as the level of user satisfaction, as well as the effective and efficient use of a product by certain users for certain purposes. The usability dimensions need to cover five things: learnability, efficiency, memorability, errors, and satisfaction. The usability test itself was run with five respondents; [20] stated that 5 respondents would find an average of 85.55 % of usability problems. The first stage of measuring usability was calculating learnability by giving the respondents a test in the form of two assignments. The first assignment was identifying the MRT carriage with the largest number of public seats available, and the second was selecting the seat to be occupied in the selected MRT carriage. From the data collected from the 5 respondents, the success rate was calculated as:

Success rate = (number of tasks completed successfully / total number of tasks attempted) × 100 %. (2)

The second stage was measuring how efficiently the respondents understood the visual display of the queue application. As with the success rate measurement, efficiency was measured by giving the respondents the same tasks; however, the quantity measured by Eq. (3) is how fast the respondents understand the displayed information:

Time-based efficiency = (Σ_j Σ_i n_ij / t_ij) / (N·R), (3)

where n_ij = 1 if respondent j completed task i successfully and 0 otherwise, t_ij is the time respondent j spent on task i, N is the number of tasks, and R is the number of respondents. The error dimension depends on the total number of errors over the number of opportunities in carrying out the two assignments used for the learnability and efficiency dimensions. The opportunities are the rooms to improve the GUI design; to determine the opportunities, it is necessary to observe the details of the activities in using the GUI application. The error rate was quantified using Eq. (4):
Error rate = Total defects / Total opportunities. (4)

The satisfaction dimension was measured by recapitulating the System Usability Scale (SUS) score data to determine the percentile rank score. The total SUS score for each respondent was calculated with equation (5):

SUS = 2.5 × [Σ(odd-item score − 1) + Σ(5 − even-item score)]. (5)
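The four computations above can be collected in a short script; the sample numbers are invented for illustration, and the SUS scoring follows the standard ten-item scheme.

def success_rate(successes: int, attempts: int) -> float:
    return successes / attempts * 100.0

def error_rate(defects: int, opportunities: int) -> float:
    return defects / opportunities

def time_based_efficiency(results):
    # results: one (completed, seconds) pair per task per respondent.
    return sum((1.0 if ok else 0.0) / t for ok, t in results) / len(results)

def sus_score(answers):
    # answers: ten 1-5 Likert responses in questionnaire order.
    odd = sum(answers[i] - 1 for i in range(0, 10, 2))   # items 1,3,5,7,9
    even = sum(5 - answers[i] for i in range(1, 10, 2))  # items 2,4,6,8,10
    return 2.5 * (odd + even)

# Example with invented numbers:
print(success_rate(9, 10))                        # 90.0 %
print(error_rate(3, 40))                          # 0.075
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0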
5. Non-Pharmaceutical Intervention Evaluation
The viral spread of COVID-19 via direct and indirect transmission can be modeled as a Markov chain. The COVID-19 virus can be in one of three possible states. The first state is on the body of an infected or carrier person. The second state is the indirect transmission state: indirect transmission happens when the virus spreads across solid surfaces, air, or liquid. The third state is the direct transmission state: direct transmission occurs when the virus is transmitted from person to person.
As a Markov chain, each transmission state transition is affected only by the latest state, and there is a transition probability for each state. In this model, the state probabilities were assumed to be fixed. The Markov state transitions are depicted in Fig. 1. The Carrier Body (CB) state was modeled as approximately equiprobable, with a slightly higher chance of transmitting the virus than of re-inhaling it. The Direct Transmission (DT) state is the state of a non-infected person; DT is therefore a passive state, because a non-infected person cannot spread the virus. The Indirect Transmission (IT) state represents the COVID-19 virus without a host. From this state, the virus has a probability of being transmitted to a person, of remaining untransmitted, or of being regained by the infected person. The regain probability is small because the infected person is in a moving condition.
Fig. 1. The States of COVID 19 Transmission in MRT Carriage
The matrix expression of Fig. 1 is shown in (6). The probability of a non-transmitted virus was low, and the probabilities of direct and indirect transmission were equal, as modeled in (7). The evaluation of the process was performed based on the number of steps taken by the CB. The step count of a CB in the existing system is independent, whereas in the AQLS system it depends on the success rate of system usability: the higher the success rate, the fewer steps the passenger takes to find a seat. In this approximation model, a step is a square area as big as the seat. Therefore, the AQLS non-pharmaceutical activity can be predicted by a simple calculation of the final probability matrix with the step number, as shown in (8), where n is the number of state changes; the state changes whenever the CB takes a step. The complete Markov chain result after n cycles is defined as the step transmission probability (STP), the probability that a CB transmits the virus during each step. The STP was derived from equation (8) and calculated with a Python program using the NumPy module. The inputs of the program are the steps taken by the CB in an MRT carriage and the approximate number of people met by the CB in each step. The STP and the step number were the basis for determining the predicted transmission in each step (PTS), as written in equation (9). The number of persons infected by the CB can then be predicted with equation (10), with the occurrence probability set to 0.3, indicating a 30 % chance that a person is infected through the change of state; the result is floored to the lower integer value.
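A minimal sketch of this calculation with NumPy follows, in the spirit of the Python program mentioned above. The transition matrix entries are placeholders ordered as (CB, IT, DT), since the numeric values of (6) and (7) are not reproduced here, and the way (9) and (10) are combined is an assumption based on the description in the text.

import numpy as np

# Placeholder transition matrix over the states (CB, IT, DT); rows sum to 1,
# but the actual probabilities follow equations (6)-(7) of the paper.
P = np.array([
    [0.10, 0.45, 0.45],  # CB: small re-inhale chance, equal IT/DT chances
    [0.05, 0.45, 0.50],  # IT: small regain chance, may stay or transmit
    [0.00, 0.00, 1.00],  # DT: passive state of a non-infected person
])

def step_transmission_probability(n):
    # Equation (8): n-step state distribution starting from CB;
    # the DT entry is the probability of direct transmission (STP).
    start = np.array([1.0, 0.0, 0.0])
    return (start @ np.linalg.matrix_power(P, n))[2]

def predicted_transmitted(steps, contacts_per_step, occurrence=0.3):
    # Assumed combination of (9)-(10): expected transmissions over all
    # steps, scaled by the 0.3 occurrence probability and floored.
    stp = step_transmission_probability(steps)
    return int(np.floor(stp * steps * contacts_per_step * occurrence))

print(predicted_transmitted(steps=12, contacts_per_step=2))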
The existing queue line system and the AQLS were compared using the constructed models in a scenario-based test. There were 10 scenario tests of seat selection by a CB in an MRT carriage.
6. Experimental procedures
The research proceeded in the following steps to achieve the goal of crowd management in MRT stations:
1. Selecting and preparing equipment, e.g., a camera to record the physical representation of a train wagon.
2. Mapping the MRT carriage physical positions to API endpoints.
3. Capturing seat images and identifying the presence of humans using a cloud machine learning service.
4. Posting the data (c) to the specified endpoint on the cloud server through the client microcomputer.
5. Streaming the endpoint data (d) using an asynchronous API.
6. Accessing the endpoint data (e) on the cloud server from the display computers.
7. Designing the visual display based on an ergonomics approach.
8. Testing the visual display design by a usability test based on recapitulating the System Usability Scale (SUS) scores.
9. Testing the virus transmission by Markov simulation.
Results and discussion
1. The Design of the Adaptive Queue Line System
In this subsection, the implementation of the adaptive queue line system design is reported. The system was designed to manage the passenger queue lines in front of MRT doors based on a real-time seat availability detection system. The seat availability status of an oncoming train is displayed on monitors in front of the corresponding MRT carriage doors, so the passengers in the queue line know the positions of the available seats. Consequently, the passengers can adjust their position in a queue based on the oncoming train's capacity.
The adaptive queue line system was implemented in queue line management. The overview of the implemented system is summarized in Fig. 2. The passengers in the queue line receive seat availability information from the screen on the platform; each screen is placed next to a door. A passenger entering the queue line sees the screen and can find an available seat on the train. This leads to a rational choice for the passenger to join the queue line that corresponds to the available seat. The adaptive queue line system therefore acts as a decision support system that helps passengers decide on a seat location. The implication of implementing the adaptive queue line system is increased queue discipline and safety.
The information passed from the incoming train to the passengers in the queue line is based on the current status of the seat availability; the status information sent to the passengers therefore has to be the latest seat status. As shown in Fig. 2, the physical representation of the current status was modeled into a data representation. The data representation was modeled as close as possible to its physical counterpart by mimicking the train modularity with the «has a» relationship. The parent-child object relationship is constrained to a «one to many» relationship. This converts the data from the train into representative information that can be mapped onto a specified URL on the cloud server; the information is then displayed on the screen.
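A minimal sketch of this «has a» data representation follows; the class names and endpoint layout are illustrative assumptions, not taken from the paper.

class Seat:
    def __init__(self, seat_id):
        self.seat_id = seat_id
        self.taken = False       # latest seat status

class Carriage:
    def __init__(self, carriage_id, n_seats):
        self.carriage_id = carriage_id
        self.seats = [Seat(i) for i in range(n_seats)]   # «has a» (one to many)

class Train:
    def __init__(self, train_id, n_carriages, seats_per_carriage):
        self.train_id = train_id
        self.carriages = [Carriage(c, seats_per_carriage)
                          for c in range(n_carriages)]   # «has a» (one to many)

    def endpoint(self, carriage_id, seat_id):
        # Maps the physical position onto a specified URL on the cloud server.
        return f"/train/{self.train_id}/carriage/{carriage_id}/seat/{seat_id}"

train = Train("mrt-01", n_carriages=6, seats_per_carriage=48)
print(train.endpoint(2, 17))   # -> /train/mrt-01/carriage/2/seat/17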
The status information was determined by the seat availability detection system, which is responsible for feeding the seat images to the Clarifai general model and processing the resulting output. As a multiclass classification model, the Clarifai general model generates more than one output, consisting of probability values and predicted concepts as data labels. As illustrated in Fig. 3, the predicted concepts were checked by keyword lookup: if any of the lookup conditions is true, the seat is marked as taken. These lookup conditions were determined using the predefined keywords people, person, man, and woman. In Fig. 3, the classification result shows people, woman, and man among the predicted concepts; hence, the seat is taken and the status data is sent to the seat URL.
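A sketch of the keyword lookup over the predicted concepts is given below; the plain concept list stands in for the Clarifai general model response, whose exact API format is not reproduced here.

SEAT_TAKEN_KEYWORDS = {"people", "person", "man", "woman"}  # predefined keywords

def seat_is_taken(predicted_concepts):
    # predicted_concepts: labels returned by the classification model for
    # one seat image. If any concept matches the keyword list, the seat is
    # reported as taken (Fig. 3).
    return any(c.lower() in SEAT_TAKEN_KEYWORDS for c in predicted_concepts)

print(seat_is_taken(["people", "woman", "man", "indoors"]))  # -> True
print(seat_is_taken(["chair", "empty", "indoors"]))          # -> False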
The result of the train positioning on the client application consists of GPS geocodes. The result of the train position tracking displayed on a microcomputer console using a GPS tracker can be seen in Fig. 4, a. As shown in Fig. 4, a, the latitude and longitude data were always displayed after the device received the GPRMC and GPGGA geocode messages. Whenever the device was unable to receive the signal, a read error message appeared.
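As a sketch, a simplified parser for the GPRMC message mentioned above, converting the NMEA ddmm.mmmm fields to decimal degrees; a real deployment would also validate the checksum and handle GPGGA, which is omitted here.

def parse_gprmc(sentence):
    # Example: $GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A
    fields = sentence.split(",")
    if fields[0] != "$GPRMC" or fields[2] != "A":   # 'A' marks a valid fix
        return None                                  # read error / no signal
    lat = _nmea_to_decimal(fields[3], fields[4])
    lon = _nmea_to_decimal(fields[5], fields[6])
    return lat, lon

def _nmea_to_decimal(value, hemisphere):
    # NMEA encodes coordinates as ddmm.mmmm (degrees followed by minutes).
    dot = value.index(".")
    degrees = float(value[:dot - 2])
    minutes = float(value[dot - 2:])
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

print(parse_gprmc(
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))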
The train position data is posted to the server to determine the destination station based on the order in Table 1. The destination data format is shown in Fig. 4, a, after the latitude and longitude lines. These data are accessed through a URL with a «/destination/» endpoint. The destination is saved in the station data table as JSON data. The station data table is then accessed by the GUI application at the destination station to get the train id, and the MRT carriage information of the train with the corresponding id is shown on the visual display. This process is schematically depicted in Fig. 4, b.

Fig. 2. Overview of the adaptive queue line system
2. Visual Information Display
The infographic has to be easy for the passengers to understand; therefore, a neat and tidy layout is needed to deliver the visual information. This is intended to make it easy for passengers in the queue line to find narrative hints in the displayed infographics while maintaining visibility at long range. The narrative hints in the information are collections of grouped symbols [21].
As seen in Fig. 5, the visual information of the adaptive queue line system was displayed while maintaining its simplicity. Standard collections of symbols are used to represent the system, consisting of lines, rectangles, and circles. The grouped lines represent an MRT carriage. The rectangles with rounded ends represent the seats, and seat availability is represented by color: red indicates a taken seat, light blue an available seat, black a seat unavailable due to physical distancing, and a stronger blue a priority seat. The circles between the rectangles represent the locations of the hand straps for standing passengers.
Fig. 5. Visual Information in the queue line
The text information was presented under the MRT carriage image. The current capacity of each seat type in an MRT carriage is displayed as text; therefore, the font height was determined from the visual distance using (1). Font height is the height of an uppercase character measured from the descender line to the ascender line. The results of the calculation are presented in Table 2; they use the rounded-up maximum visual distance as the input to (1). The required font height is directly proportional to the visual distance, i.e., the distance between the object and the eye: the farther the distance, the taller the required character. The font height then becomes the parameter for the other font visual attributes, which are determined using fixed proportion coefficients relative to the height. The width, height, thickness, and spacing of lower-case and upper-case characters for 6 m and 10 m visual distances were determined and are presented side by side in Table 3. The 6 m and 10 m visual distances were chosen according to the distance between the edge of the MRT platform and the gates to the platform; the information on the screen has to be visible from that distance, with at least the fonts, shapes, colors, and icons distinguishable. This requires scaling the font while keeping its legibility, which can be achieved by keeping the font's x-height-to-baseline ratio fixed so that the typeface stays consistent when scaled. The consistency also includes the arrangement consistency, which determines the clarity of the information provided by a text [22]. When the fonts are scaled, the distance between the characters in a word and the distance between words have to be maintained. Hence, the font height was the only parameter used to determine the other visual attribute dimensions, such as width, thickness, and distances, to maintain consistency.
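Since equation (1) appears earlier in the paper and is not reproduced here, the sketch below substitutes a common ergonomic visual-angle rule (character height subtending roughly 21 minutes of arc); the constant is therefore an assumption, not the paper's value.

import math

def font_height_mm(visual_distance_m, arc_minutes=21):
    # Character height h = 2 * d * tan(theta / 2) for visual angle theta.
    theta = math.radians(arc_minutes / 60.0)
    return 2 * visual_distance_m * 1000 * math.tan(theta / 2)

for d in (6, 10):   # the platform visual distances used in Table 2
    print(f"{d} m -> {font_height_mm(d):.1f} mm font height")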
3. Usability Test Result
The usability test of the AQLS system provides an overview of the system implementation. The results of the usability test with 25 respondents are shown in Fig. 6.
Fig. 6. Respondents Usability Test Results
The learnability of each respondent measures the adaptability of the system. Respondents R1 (60 years old) and R2 (59 years old) are the older respondents and showed lower learnability. This indicates that user age affects adaptability to the system: the older respondents are less adaptive to the system than the younger respondents. Even though each of them made only one error during the test, the satisfaction score of R1 and R2 (57.5) is lower than that of the other respondents. On average, the respondents delivered a satisfaction level of SUS score = 73. Based on the SUS (3), the neatness of the writing can be improved; thus, there is room for improvement to increase the effectiveness of the AQLS system. Besides effectiveness, usability efficiency is crucial in quick-interchange systems such as the MRT queue line. The efficiency of the AQLS system is defined as the amount of time the passengers in the queue line need to understand the targeted seat or standing spot position. For example, R1, who understood the first task in 11.41 s and the second task in 8.84 s, has an efficiency of 0.108 goal/s. In the same way, the average efficiency in learning the tasks over the 5 respondents is 0.126 goal/s. As shown in Fig. 6, the efficiency of the system is centered towards 0.0, because the shorter the time it takes to understand the task, the higher the efficiency of understanding the system.
4. Non-Pharmaceutical Intervention Effectiveness Evaluation
The AQLS was designed as a participatory ergonomics system with an engineering control to minimize virus transmission. Several entrance scenarios were evaluated using the Markov chain prediction approach to model the decisions of passengers; the cases are listed in Table 4. The passenger modeled in Table 4 is a CB. Cases 1 to 6 are normal cases, in which a CB walks towards a seat or a standing spot in one direction. Cases 7 and 8 are special cases that model a CB unable to find a seat because all the seats are full, so the CB has to find a standing spot. Likewise, cases 9 and 10 model the two-directional movement of a CB inside an MRT carriage, because the CB drops its phone near door 3 or seat row 7. The target location is the final destination of the CB. The result of implementing AQLS is a better decision: choosing the entrance with the minimum distance to the target location. The entrance chosen by passengers under AQLS is assumed to always give the shortest path to the target location, which also represents passengers with perfect learnability in understanding the system. These better decisions are represented by the number of steps a passenger takes to the target location. In the case of a CB, the number of steps determines the contacts of the CB with other passengers: more steps mean more contact with other passengers. Therefore, the chance of a CB transmitting the virus to another passenger is defined by the entrance selection.
The Markov chain was used to evaluate the virus transmission chance based on the number of steps of a CB and the possible number of contacts in each step. The step numbers in Table 4 were plugged into equation (10) to calculate the possible number of persons infected after making contact with the CB. The results of the Markov chain simulation are shown in Fig. 7. The predicted number of infected persons in the existing queue line system is always higher than in the AQLS. Hence, successful AQLS implementation is important for building safer and healthier public transportation.
Fig. 7. Comparison of the transmitted person in the existing system and AQLS
The adaptive queue line system ensures the certainty of the queue line capacity and the seating capacity to reduce physical contact between passengers during pandemics. The seat availability information guides a passenger to a specific seat or hand strap in the MRT carriage; as a result, no seat or hand strap search is performed by passengers inside the MRT. The uniformity of the queue line can also be assured, because the seat map information of an MRT carriage is displayed at the queue line. Therefore, passengers decide to board the train through the door nearest to the chosen seat or hand strap. The decision to move to the next MRT carriage is only available if the current carriage has already reached its maximum capacity. This strengthens the physical distancing regulation during pandemics. Moreover, the implementation of the system also prevents queue lines from overloading.
The key role of the adaptive queue line system is as a non-pharmaceutical intervention against disease transmission. COVID-19 can easily be transmitted in a crowd without physical distancing regulation. The air around an infected person carries a high concentration of the virus [23]. The virus spreads through aerosol within the physical distancing range. As the virus attaches to a micro-droplet, it is carried by the droplet and follows the parabolic motion of the falling droplets. This can infect sitting passengers via an infected standing CB or a CB searching for a seat inside an MRT carriage. As illustrated in Fig. 8, a, the sitting passengers are in the falling droplet area. The worst scenario is a standing CB transmitting the virus to two sitting passengers, as illustrated in Fig. 8, b. However, the radius of the falling droplet area depends on the ejection force magnitude of the droplet. Thus, the adaptive queue line system can be effective in minimizing COVID-19 transmission in the MRT by aerosol and falling droplets.
In conjunction with aerosol and falling droplet transmission, the possibility of airborne disease transmission is minimized through the implementation of the adaptive queue line system. Balancing the number of passengers during embarkation can reduce the transmission radius. Without this, some spots on the MRT will look like the left side of Fig. 8, a. The effect of grouping people together, as in the left-side image of Fig. 8, a, not only widens the transmission radius but also raises the possibility of airborne transmission. The virus concentration in the air of the whole MRT carriage can rise if crowds form in more than one spot inside the carriage. This phenomenon can only be avoided by non-pharmaceutical interventions, because the dynamics of the system cannot be controlled over time due to the existence of some uncertainties [24]. Correspondingly, the adaptive queue line system is designed as a guidance system to keep the passengers engaged with the physical distancing regulation. Hence, the non-pharmaceutical intervention role of the adaptive queue line system can be achieved only if the passengers are willing to use the AQLS. The result of the usability test proves that AQLS is a usable system, and the high learnability implies that AQLS is easy to master. The only factor that slows down a new user's learning process is age, shown by the errors made by the older respondents when identifying seat positions on the monitor. A previous study on the effect of age on cognitive ability showed no differences in the learning rate of younger and older respondents [25]; however, aging degrades spatial and visual pattern recognition ability [26]. Therefore, there is room for improvement in the GUI visual design. Information triggers are needed to improve the learning process of passengers using the system for the first time. The addition of information triggers, such as user guides or an administrative procedure for using the AQLS system, should maintain efficiency while increasing learnability.
The efficiency of the AQLS system depends on the respondents' learnability. The speed of acquiring information until it is learned is defined as learning efficiency. Similar to learnability, the learning efficiency of a respondent is also affected by age: the older respondents had some difficulties recognizing patterns in time, even though they were able to distinguish the lines and graphics in the GUI visual display. Therefore, the image clarity of the GUI components alone cannot accelerate the spatial and visual pattern recognition process. The usability test was, however, conducted only once, so the effect of task repetition was not included in this study. The efficiency of task completion must be accompanied by effectiveness: the task must be done correctly to achieve the initial goal. The effectiveness metric is the error, measured from the incorrect results of each task completed by each respondent. Only one respondent completed all tasks correctly; however, no respondent had past experience with a smart queue line system such as AQLS, and the respondents still managed to perform some tasks correctly without repetition or prior experience. This indicates that the learning curve of the AQLS system is shallow for the majority of users.
The high satisfaction with the AQLS system portrays user acceptance. The usability satisfaction dimension measures the gap between the users' expectations of the system and the actual system. Based on the results, the learnability score determines the satisfaction score: a decrease in the learnability score is followed by a decrease in the satisfaction score. Hence, the satisfaction score can decrease as the age of the users increases. Although their satisfaction score is lower, this does not mean the older users do not want to use the AQLS system. As a mathematical model, the satisfaction formula does not account for the users' excitement to learn the system; therefore, there is a bias in the satisfaction score.
In reality, all respondents showed excitement to learn the AQLS system. Hence, this study reveals a limitation of usability testing in evaluating the system: the users' willingness to use the designed system was not considered as an input to the satisfaction formula, even though user engagement can be predicted from willingness. Current usability testing models focus only on task completion by the users; there are no variables corresponding to user engagement. Therefore, a low satisfaction score can have an alternative meaning: the user is still in the introductory phase with the system, building an understanding of it. At this stage, the user's engagement or willingness to use the system is still unknown, yet a conclusion has been enforced through the mathematical calculation of the satisfaction score. Therefore, the usability testing objective of learning about the user is not achieved at this point. The other usability test objectives are still achieved, however: the usability test of the AQLS system is still able to uncover system design problems and discover opportunities to improve the system design.
The non-pharmaceutical intervention of AQLS has been shown to be effective according to the Markov chain simulation results. The steps taken by a passenger inside an MRT carriage can be used as a physical contact indicator; in each physical contact, the virus can be transmitted directly or indirectly. By assuming a passenger is a CB, the inputs for the Markov chain simulation are the number of steps and the number of persons in contact with the CB inside an MRT carriage. As a rough approximation, the simulation results are consistent across all tested scenarios. This consistency is achieved by assuming that every passenger in the queue line engages with the AQLS system, i.e., pays attention to the information displayed on screen and chooses the entrance based on the information provided. As a result, the Markov simulation implies that the non-pharmaceutical intervention role of the AQLS system will only be achieved through correct implementation of the system.
The usability test result shows that the passengers' excitement to use the system is not directly correlated with the correctness of the AQLS implementation. Passengers may be eager to use the AQLS system even if they do not understand it or are still learning it. This will result in an ineffective non-pharmaceutical intervention role in the early stage of the AQLS implementation. The key factor for improving the correctness of the AQLS implementation is passenger motivation. Personal motivation is driven by perceived value [27]; therefore, each individual passenger has a unique perception of the AQLS implementation. This is the challenge that needs to be overcome in the AQLS implementation. Nonetheless, the Markov simulation predicts a positive outcome when passengers follow the visual information guide.
The information provided on the screen is the guidance for passenger decision-making. Visual information is known to influence the brain's neural circuitry involved in decision-making [28]. The human visual cortex can process information without involving the higher-level areas of the brain [29], implying that faster classification and recognition of an object or a situation can be triggered by visual stimulation. Therefore, the queue line can be managed adaptively and adjusted with high discipline based on the visual information provided. The provided information becomes the guidance for maintaining queue line discipline while adjusting positions.
In this study, the graphical user interface (GUI) was ergonomically designed based on user experience principles according to the ISO 9241 guideline (ISO Ergonomics of human-system interaction, 2018). Therefore, the presented visual information can be absorbed with minimum content load by the passengers in the queue line. The minimum content load of visual information is important for the adaptability of the system. As a consequence, the optimum tradeoff between the contrast and the visual information content of the GUI has to be determined; these optimal attributes ease the queue line adaptation for the passengers.
The change of visual distance affects the visual perception speed and the clarity of the text information on the visual display. This is because the main feature that determines a font shape is its x-height-to-baseline ratio [22]. A change in the ascender and descender heights of the font makes the font appear thinner; as a result, a character in the middle of a text will appear at a greater distance. This leads to a different perception and forces re-recognition, because the visual style of the information has changed, resulting in decreased clarity. An increase in the x-height of a sans-serif font increases its visual recognition speed [30]. Therefore, the proportion coefficients in Table 3 were determined based on visual perception speed and the consistency of information clarity.
The ease of passenger adaptation in the queue line implies a higher safety level in the queue line. This is due to the visual stimulation response as a parameter of the time to take action [31]. The time to take action is reduced if the passengers easily understand their position and direction in the queue line. The action of the passenger is the decision to move to the desired location according to the visual information guidance. The displayed visual information has been designed to direct the passengers to the minimum-capacity location; as a consequence, the passenger traffic around the platform can be balanced. Passenger traffic balancing reduces the chance of disembarking passengers meeting embarking passengers, which is useful for reducing the spread of the virus during pandemics. In that way, the non-pharmaceutical intervention of the system will be as effective as the Markov chain simulation result. Alongside this, the adaptive queue line system has the potential to reduce accidents at the MRT doors.
Overall, the AQLS system has proven advantageous in minimizing crowd formation inside the MRT and in the station. However, the system is still in an emerging phase; more tests and evaluations of system effectiveness during operation are needed. In conjunction with this, the evaluation of individual passenger responses to the visual information provided by the system is important; therefore, the analysis of the individual behavior of passengers should be performed in the future. Evaluation of passenger engagement with the system is continuously needed, and the adaptability of the system needs to be extended to develop a successful participative system. In addition, the evaluation of the AQLS system's effectiveness in minimizing virus transmission is still in simulated form in this study; in future work, a real-world evaluation must be pursued.
Conclusions
This study has successfully demonstrated the development of an information system for preventing pandemic spread and improving the safety of MRT mass transportation. The adaptive queue line system ensures the number of queues in front of the train doors at the destination station platform. Furthermore, this queuing reduces passenger traffic and crowds on the train due to seat searches. The integration of ergonomics, data technology, and information technology in the areas of visual display design, user interface experience, IoT, asynchronous APIs, cloud computing, and machine learning has realized this information system. The adaptive queue line system is a non-pharmaceutical intervention against viral disease transmission inside and outside the MRT, maintaining each passenger's engagement with the physical distancing regulation using a real-time visual information guide that balances the queue line capacity and the distribution of crowd spots. | 2021-11-21T16:11:23.926Z | 2021-11-18T00:00:00.000 | {
"year": 2021,
"sha1": "b2610abe0def86689ae39c97d4cfc5acff2d944e",
"oa_license": "CCBY",
"oa_url": "http://journal.eu-jr.eu/engineering/article/download/1923/1791",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c129cef521ffcf4239cce71eafa8ac0b250c92a4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247223231 | pes2o/s2orc | v3-fos-license | In Vivo Clearance of Apoptotic Debris From Tumor Xenografts Exposed to Chemically Modified Tetrac: Is There a Role for Thyroid Hormone Analogues in Efferocytosis?
Apoptosis is induced in cancer cells and tumor xenografts by the thyroid hormone analogue tetraiodothyroacetic acid (tetrac) or chemically modified forms of tetrac. The effect is initiated at a hormone receptor on the extracellular domain of plasma membrane integrin αvβ3. The tumor response to tetrac includes 80% reduction in size of glioblastoma xenograft in two weeks of treatment, with absence of residual apoptotic cancer cell debris; this is consistent with efferocytosis. The molecular basis for efferocytosis linked to tetrac is incompletely understood, but several factors are proposed to play roles. Tetrac-based anticancer agents are pro-apoptotic by multiple intrinsic and extrinsic pathways and differential effects on specific gene expression, e.g., downregulation of the X-linked inhibitor of apoptosis (XIAP) gene and upregulation of pro-apoptotic chemokine gene, CXCL10. Tetrac also enhances transcription of chemokine CXCR4, which is relevant to macrophage function. Tetrac may locally control the conformation of phagocyte plasma membrane integrin αvβ3; this is a cell surface recognition system for apoptotic debris that contains phagocytosis signals. How tetrac may facilitate the catabolism of the engulfed apoptotic cell debris requires additional investigation.
INTRODUCTION
The phagocytotic clearance of apoptotic cells in nonmalignant and malignant tissues is a process designated efferocytosis (1)(2)(3)(4)(5)(6). The molecular mechanisms that underlie efferocytosis are incompletely understood (2,5), but involve signals generated by apoptotic cells that attract and engage macrophages and other cells that can express phagocytic function. We have described a cell surface receptor for thyroid hormone analogues on integrin αvβ3 that is generously expressed by cancer cells (7,8). At this receptor, hormone analogue tetraiodothyroacetic acid (tetrac) and chemically modified tetrac induce tumor cell apoptosis by multiple mechanisms (8)(9)(10). This pharmacologic anticancer activity has been associated in human tumor xenografts with substantial graft shrinkage and disappearance of apoptotic cell debris (11)(12)(13)(14)(15)(16)(17). The mechanisms of anticancer activity of thyroid hormone analogue actions on tumor cells have been extensively studied; here we examine these actions for ways in which they may contribute to efferocytosis. Tetrac derivatives are also anti-angiogenic, and the clearance of blood vessel debris in angiogenic models has been found to be efficient and is not associated with hemorrhage (18,19); this is in contrast to certain other anti-angiogenic agents, such as antibodies to vascular growth factors (20,21).
We assume that phagocytosis of apoptotic endothelial cells in tetrac-treated models is via one or more mechanisms similar to those of efferocytosis in tumor grafts. Integrin αvβ3 has been implicated in the 'recognition' of tumor cells by phagocytosing cells (4,5), as discussed below. The binding of thyroid hormone analogues by αvβ3 may induce important changes in the conformation of the extracellular component of the integrin ('activation') (22). Eighty percent of the protein is extracellular (23,24). Such conformational changes facilitate the interaction of phagocytes and apoptotic tumor cells and the uptake of treatment-damaged cancer cells by phagocytes. Thyroid hormone can facilitate phagocytosis of bacteria by a mechanism that depends on αvβ3 (25). If the apoptosing cancer cell is interacting with platelets, thyroid hormone may induce local release of ATP by the platelet, and the ATP may act to facilitate monocyte/phagocyte recognition of the apoptotic cell (5). It is observations such as these that encouraged the current work.
It is important to appreciate that tetrac and the principal product of the thyroid gland, L-thyroxine (T4), from which tetrac is derived, have radically different functions at the thyroid hormone receptor on integrin αvβ3 (8,10). For example, T4 is anti-apoptotic (9) and thus will not contribute to efferocytosis. In contrast, tetrac and chemically modified versions of tetrac are pro-apoptotic via αvβ3 and block the anti-apoptotic actions of T4 (9). The action of tetrac on efferocytosis of tumor cells undergoing apoptosis described in the current report is novel, but glucocorticoids (26,27) and parathyroid hormone (PTH) (28) are examples of other hormones recently shown to modulate efferocytosis linked to wound-healing and clearance of the debris of the inflammatory process.
The clearance by phagocytes of apoptotic cells from a tissue field is widely regarded to involve three processes: the generation of apoptotic announcement ('find me') signals and pro-phagocytotic ('eat me') signals that may be read by appropriate white blood cells, and the post-engulfment processing of debris (2,5).
Cell Culture
Human glioblastoma U87-luc and GBM 052814 cells were grown in DMEM that was supplemented with 10% FBS, 1% penicillin, and 1% streptomycin. Cells were cultured at 37°C to subconfluence and treated with 0.25% (w/v) trypsin/EDTA to induce cell release from flasks. Cells were washed with culture medium that was free of phenol red and FBS and counted.
Animals
Immunodeficient female NCr nude homozygous mice aged 5-6 weeks and weighing 18-20 g were purchased from Taconic Laboratories (Germantown, NY, USA). All animal studies were conducted at the animal facility of the Veteran Affairs (VA) Medical Center, Albany, NY, USA in accordance with current institutional guidelines for humane animal treatment and approved by the VA IACUC. Mice were maintained under specific pathogen-free conditions and housed under controlled conditions of temperature (20-24°C) and humidity (60-70%) and 12 h light/dark cycle. Animals were fed a standard pelleted mouse chow. Mice were allowed to acclimatize for 5 days before study.
Glioblastoma Xenografts and Treatments
For the glioblastoma tumor model, U87-luc and GBM 052814 cells were harvested and suspended in 100 µL of DMEM with 50% Matrigel®, and 2 × 10⁶ cells were implanted subcutaneously and dorsally in each flank, achieving two independent tumors per animal. Immediately prior to initiation of treatments, animals (n=40) were randomized into treatment groups (5 animals/group) by tumor volume measured with Vernier calipers. After detection of a palpable tumor mass (4-5 days post implantation), control (PBS) or P-bi-TAT (10 mg/kg body weight) was administered daily, subcutaneously, on the ventral side of the animal, for 21 days (ON treatment); in another group of mice, treatments were administered daily for 21 days followed by 21 days of discontinuation (ON + OFF treatment). The tumor volume (width and length) was measured with Vernier calipers at 3-day intervals during the ON and ON + OFF studies, and the volumes were calculated using the standard formula W × L²/2. Animals were humanely sacrificed, and tumors were collected and fixed in 10% formalin.
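As a worked example of this volume formula (with illustrative caliper values, not data from the study):

def tumor_volume(width_mm, length_mm):
    # Standard caliper formula used above: V = W * L^2 / 2
    return width_mm * length_mm ** 2 / 2

print(tumor_volume(width_mm=6.0, length_mm=8.0))  # -> 192.0 mm^3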
Histopathology
The fixed samples were placed in cassettes and dehydrated using an automated tissue processor. The processed tissues were embedded in paraffin wax, and the blocks were trimmed and sectioned to about 5 × 5 × 4 µm using a microtome. The tissue sections were mounted on glass slides using a hot plate and subsequently treated, in order, with 100%, 90%, and 70% ethanol for 2 min each. Finally, the tissue sections were rinsed with water, stained with Harris's hematoxylin and eosin (H&E), and examined under a light microscope at lower magnification (4X and 10X) and higher magnification (40X). Efferocytosis was defined by the percentage of total apoptotic tumor cells (TUNEL-positive cells) exhibiting pyknotic nuclei with dark brown staining and pointed apoptosis.
TUNEL Assay
Analysis of apoptotic cells in tumor tissue was done with terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) staining using an HR-DAB detection kit, according to the manufacturer's directions (Abcam, Cambridge, MA, USA). TUNEL-positive cells had pyknotic nuclei with dark brown staining and pointed apoptosis. Images of the sections were taken with a light microscope (Leica, Buffalo Grove, IL, USA) at 40X and 100X magnification. The percentage of apoptotic cells was calculated on the basis of control (PBS-treated) samples.
Immunohistochemistry
Tumor sections were deparaffinized in xylene and rehydrated, followed by antigen retrieval with retrieval buffer (sodium citrate buffer, pH 6.0; Abcam). The peroxidase activity was inhibited by 3% hydrogen peroxide for 10 min, and the sections were incubated with 10% normal goat serum (Vector Laboratories, Burlingame, CA, USA) to block non-specific binding of reagents. Rat anti-mouse CD68 antibody (1:100, Bio-Rad, Hercules, CA, USA) was applied as the primary antibody overnight in a moist chamber at 4°C. Goat anti-rat immunoglobulin (1:100, Abcam) was applied as the secondary antibody for 2 h at 37°C, followed by streptavidin-HRP for detection. Immunostaining was developed with DAB + NI-Chromogen substrate (Vector Laboratories), followed by counterstaining with methyl green. Tumor tissue was examined under the light microscope at 40X and 100X magnification.
CD68-positive cells were developed for light microscopy with DAB + NI-Chromogen substrate (dark-brown staining); other cells stained green with methyl green (Figures 2 and 3, below). The percentage of CD68-positive cells was calculated based on PBS-exposed control samples.
Statistical Analysis
Statistical analysis was performed with GraphPad Prism software (GraphPad, San Diego, CA). Data are presented as means ± SD. For comparisons between 2 data sets, Student's t test was used; ANOVA was used for comparisons of 3 or more data sets. *P < 0.05, **P < 0.01 and ***P < 0.001 indicated statistical significance.
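The analyses were run in GraphPad Prism; an equivalent open-source sketch with SciPy, on illustrative arrays rather than the study's data, would be:

import numpy as np
from scipy import stats

control = np.array([980, 1010, 1120, 905, 990])   # illustrative tumor volumes
treated = np.array([110, 150, 95, 130, 120])

# Two-group comparison: Student's t test
t, p = stats.ttest_ind(control, treated)
print(f"t = {t:.2f}, p = {p:.4f}")

# Three or more groups: one-way ANOVA
group3 = np.array([480, 520, 505, 470, 515])
f, p = stats.f_oneway(control, treated, group3)
print(f"F = {f:.2f}, p = {p:.4f}")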
Efferocytosis
In a series of reports on the actions of chemically modified forms of tetrac, we have shown that the agents are effective in reducing xenograft size by 80-95% (16,17,29,30). In the current study, a substantial decrease was confirmed in the tumor growth rate of xenografts of U87-luc and GBM 052814 cells in response to tetrac-based P-bi-TAT (Figure 1).
Histopathologic examination of the xenografts after 3 weeks of P-bi-TAT therapy in U87-luc glioblastoma cells (Figure 2) and in a primary culture of glioblastoma cells (GBM 052814) (Figure 3) revealed evidence of apoptosis, as indicated by cell shrinkage, blebbing, and nuclear fragmentation (31), in the course of xenograft shrinkage of about 90% (29). These findings are in accordance with our previous results (29).
As shown in the histological and immunological staining in the upper panels and summarized in the lower panels of Figures 2 and 3, P-bi-TAT induces apoptosis, consistent with our previous results (15). As indicated graphically in the lower panels (Figures 2B, C and 3B, C), discontinuation of the drug for 3 weeks was associated with prototypic efferocytotic clearance of the apoptotic debris. The histological studies indicated that there was no build-up of macrophages in the tumor xenografts following drug withdrawal. There was no resumption of residual xenograft growth in tetrac-treated animals.
Necrosis
A limited degree of necrosis was seen in the two GBM models exposed to P-bi-TAT for 3 weeks. H&E staining of control and P-bi-TAT-treated xenografts was used to estimate the extent of tissue necrosis, characterized on light microscopy by large, blurred, red-stained material containing blue-stained nuclear fragments. Non-necrotic areas were characterized by dark purple-stained living cells. We found 5% and 8-10% necrosis in treated U87-luc and GBM 052814 cells, respectively.
Induction of Apoptosis and Effectiveness of the Process
The induction of apoptosis in cancer cells by tetrac or chemically modified tetrac involves multiple intrinsic and extrinsic mechanisms (9,14,32). Specific serine phosphorylation of p53 is a critical feature of the intrinsic pathway (Figure 4) (9), which has been studied as a biochemical site of competition between T4 and tetrac (32)(33)(34)(35). The extrinsic pathway depends on activation of caspases (Figure 4). In this regard, thyroid hormone analogues have been shown to affect caspases 2 (9), 3 and 9 (10). Tetrac is pro-apoptotic, but T4, the iodothyronine analogue from which tetrac is derived, is anti-apoptotic (10). As an anti-apoptotic factor, T4 decreases the activity of specific caspases, whereas tetrac-based agents include caspase activation among their pro-apoptotic actions (9,10).
Apoptotic Cell Signaling: 'Find Me' Signal to Phagocytes
The 'find me' signals from apoptotic cells to phagocytes include nucleotides such as ATP and UTP and certain chemokines, such as CX3CL1 (fractalkine), that are released by dying tumor cells (4,5). Nucleotide synthesis is activated by T4 via αvβ3, as this pathway has a substantial and positive effect on tumor cell respiration (36). However, tetrac and chemically modified forms of tetrac, such as P-bi-TAT, block cell respiration by downregulating expression of the genes that code for ATP synthases and NADH dehydrogenase; these enzymes are part of mitochondrial respiration in cancer cells. Thus, the anticancer activity of tetrac is likely to reduce the amounts of ATP and of CX3CL1 (34) available for release as apoptosis progresses. A remaining question is: what are the 'find me' signaling molecules in efferocytosis induced by tetrac?
The action of tetrac on chemokine expression is generally downregulation (37); in contrast, however, chemokine ligand CXCL10 and receptor CXCR4 mRNAs are found in abundance in tetrac-treated human breast and thyroid cancer cells (37). CXCL10 can serve as an attractant for macrophages and T cells (38,39), as well as being pro-apoptotic (40). CXCR4 is a human and murine macrophage chemokine receptor, shown to be increased in a model of inflammatory disease (peritonitis) (41). Among its functions is promotion of macrophage egress from inflammation sites and entry into lymphatics. A hallmark feature of the global gene expression changes induced by tetrac-based agents at the integrin is that the downstream effects conform to specific pathways that lead to changes in biologic function; these changes should not be viewed as isolated target gene effects. Ongoing studies of the actions of P-bi-TAT on gene expression in GBM cells are consistent with this concept and reveal, as they have in other types of cancer cells (10,37), actions on efferocytosis-related genes, including those in signal transduction pathways that modulate pro-apoptotic, anti-inflammatory and anti-proteolytic activities (GV Glinsky: unpublished observations).
Apoptotic Cell Signaling: 'Eat Me' (to Phagocyte)
There is general agreement that the plasma membrane αvβ3 of phagocytes is a sensor for apoptotic cell membrane components that incite phagocytosis (4,5). Tetrac controls the conformation of this integrin between the activated and non-activated states (22). This important effect will modify the ability of the integrin to recognize and consequently bind potential ligands in the plasma membrane of apoptotic tumor cells; a number of such ligands have been identified (4,5). We suspect that a variety of growth factor receptors that are found on the surface of cancer cells and that communicate with αvβ3 (10) may also be part of the 'eat me' paradigm. The chemokine ligand CX3CL1 (fractalkine) may be a component of phagocyte recruitment (3,42). We would point out, however, that tetrac-based drugs inhibit the expression of a number of chemokines, including fractalkine, in cancer cells (9). It would thus seem unlikely for such agents (chemokines) to be present in excess in tumor cells undergoing tetrac-induced apoptosis.
We would also propose that the apoptosis process itself is potentiated by the actions of tetrac molecules to downregulate expression of genes such as the X-linked inhibitor of apoptosis (XIAP) and to upregulate transcription of pro-apoptotic genes such as those for certain caspases (13,43,44). While the clearance of apoptotic tumor cell debris in response to tetrac has been satisfactorily demonstrated, the possibility that tetrac may interfere with phagocytosis of bacteria, e.g., meningococcus, has been raised (25). However, the tetrac effect in this example was demonstrated only in the presence of supraphysiologic concentrations of T4 and T3, and it is not clear whether there is an effect of tetrac or modified tetrac on bacterial clearance when T4 and T3 levels are in the normal range. When phosphatidylserine (PtdSer) moves from the inner to the outer plasma membrane leaflet in the early course of apoptosis in cancer cells, it is an 'eat me' signal (5). We assume that tetrac-induced apoptosis is associated with PtdSer signaling; however, this process has not yet been verified in tetrac-treated cells. The migration of phagocytic cells to and from a tissue region of apoptotic cells is in part a function of 'find me' signaling, but the rate of migration of phagocytes is at least in part a process regulated by extracellular matrix protein cues. The effect of the cues can be a function of the presence of thyroid hormone analogues. We have shown that extracellular matrix proteins may influence the rate of migration of endothelial cells when thyroid hormone, as T4, is present (17). Tetrac blocks this motility-enhancing action of T4. This effect needs to be examined in phagocytes, since slowed egress of such cells may facilitate completeness of local phagocytic uptake of debris.
CONCLUSIONS
The anticancer activity of tetrac and chemically modified tetrac in preclinical studies is associated with extensive tumor cell apoptosis, frank xenograft shrinkage, and no local accumulation of cellular debris. The results presented in Figures 2 and 3 are very satisfactory examples of efferocytosis. As noted above, the extensive literature on efferocytosis is based on signaling that attracts phagocytes to sites of apoptosis and on signaling that activates phagocytes and their capacity to cope with the internalized remnants of dead cancer cells. In the case of tetrac, drug-induced apoptosis is mechanistically extensive, involving multiple intrinsic and extrinsic apoptotic pathways (9). We can assume, then, that the 'find me' and 'eat me' signals identified in a substantial number of reports are ample in tetrac-treated cancer cells that are undergoing apoptosis. However, except for nucleotides and chemokines, these signaling molecules have not been specifically sought in tetrac-induced apoptotic cells. Further, tetrac has been shown to decrease transcription of a variety of chemokines, and we do not know whether genes such as that for CX3CL1 can be expressed in tetrac-exposed cells.
The shrinkage of tetrac-exposed xenografts testifies to the extent of 'eat me' signaling from dying cancer cells. The multiple functions of the thyroid hormone analogue receptor on the extracellular domain of αvβ3 suggest that changes in the conformation and functions of this integrin (8,10) play the major role in the activation of phagocytosis-capable cells that have been attracted to the dying tumor. Are the interactions of phagocyte αvβ3 with apoptotic cell bleb proteins, for example, growth factor receptor proteins, 'eat me' signals?

FIGURE 4 | Schematic overview of extrinsic and intrinsic apoptosis pathways in the cell and points at which thyroid hormone in these pathways is anti-apoptotic. The pathways converge at the mitochondrion and cause its permeabilization, with release of cytochrome c and consequent apoptosis. Genes or proteins circled in red or green identify loci of differential actions of thyroid hormone on the multiple factors in apoptosis that are discussed in the current work; red and green identify downregulation and upregulation of factors, respectively. The Fas receptor, by its interaction with Fas ligand, activates the extrinsic pathway. An intrinsic pathway activator is DNA damage from factors such as radiation or chemotherapy. Bcl-2, B cell lymphoma-2; Bcl-xL, Bcl-2-related gene, long form; Bad, Bcl-2/Bcl-xL-associated death domain protein; Bak, Bcl-2 homologous antagonist killer protein; Bax, Bcl-2-associated X protein; IAPs, inhibitors of apoptosis; MCL-1, myeloid leukemia cell-1; XIAP, X-linked inhibitor of apoptosis.

The absence of tumor cell debris in xenografts exposed to tetrac-based agents indicates that the process of 'engulfment' must proceed efficiently in phagocytes exposed to tetrac. This issue has not been specifically examined in any model. We raise the possibility that phagocytes activated in the presence of tetrac and apoptotic tumor cells may not only metabolize the internalized cancer cell debris, but also serve to export the tumor tissue debris; this reflects CXCR4-promoted egress from the tumor mass of macrophages containing such debris. We know that cell migration in response to extracellular matrix protein cues may be regulated through thyroid hormone actions at αvβ3 (18). However, the initial observations of this phenomenon showed that T4 enhances the effects of cue proteins and tetrac decreases this effect.
Studies are needed to further define the signaling mechanisms activated by tetrac and chemically modified tetrac to advertise the apoptotic process and to stimulate phagocytosis by nearby macrophages. P-bi-TAT is a chemically modified thyroid hormone analogue that may join glucocorticoids (26,27) and PTH (28) in a group of hormones that can stimulate phagocytosis and efferocytosis.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
All animal studies were conducted at the animal facility of the Veteran Affairs (VA) Medical Center, Albany, NY, USA in accordance with current institutional guidelines for humane animal treatment and approved by the VA IACUC. | 2022-03-04T14:17:31.679Z | 2022-03-04T00:00:00.000 | {
"year": 2022,
"sha1": "42a420a8ac10728adb1c7509361dc126b457e3cd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "42a420a8ac10728adb1c7509361dc126b457e3cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9234799 | pes2o/s2orc | v3-fos-license | An Analysis of Reproducibility of Dai and Iotn Indexes in a Brazilian Scene
The aim of this study was to evaluate the ability of the DAI and IOTN indexes to predict the need for orthodontic treatment based on one property: reproducibility. The DAI index was developed in the USA in 1989 and identifies 10 occlusal alterations that result, mathematically, in scores with weights based on their relative importance according to the judgment of laypeople. The IOTN was developed in England, also in 1989, and incorporates an aesthetic component (AC) and a dental health component (DHC). The AC consists of a scale illustrated with 10 photos, divided into bands of degrees according to a hierarchic scale, and classifies patients by degree of treatment need. The instruments of data collection were the plastic ruler of the DHC component and the aesthetic visual scale of the AC component recommended by the IOTN, and a WHO periodontal probe recommended by the DAI. The sample comprised 60 patients. The results indicated that both indexes were highly reproducible according to the Pearson and Spearman coefficients, which were strengthened by Student's t-tests and Wilcoxon tests, respectively. The correlation results between the examiners varied between r = 0.85 and r = 1.00.
Introduction
Nowadays, the existing health services are deficient in supplying basic oral health care to the great majority of the population, and coverage becomes even smaller when specialized oral services are considered. Regarding occlusal problems, where an orthodontic intervention is needed, the situation is even more serious; in fact, such care is becoming an ever more distant reality for lower-income patients 1.
One way to address such problems would be to know the epidemiologic situation of a given population. Services could then be planned and executed with fairness, overcoming indiscriminate care driven by free or biased demand 2. For this reason, indexes that assess malocclusion at the collective level were developed. Based on scientific parameters, such indexes allow an evaluation of the treatment need of a given population group, identifying the individuals with the greatest treatment need; they can be applied in a simple way, allowing fairer access to the services [3][4][5].
Because of aspects such as the great variety of occlusal indexes, variables related to sample size, and the socioeconomic status of the schoolchildren evaluated, among others, the literature contains many studies with specific characteristics that make comparison difficult, not only in relation to the prevalence of malocclusion 3 but also in relation to reproducibility [6][7][8], consistency [8][9][10][11], and the benefits brought by orthodontic treatment 12.
Therefore, in line with the problems already described, Emrich et al. 13 emphasized that the explanation for the great variability of results of studies on the prevalence of malocclusion was the absence of unanimity among professionals as to what really constitutes a malocclusion; in their view, only the Angle classification 14 would achieve that degree of homogeneity.
Given the availability of several indexes and taking their characteristics into account, the need is evident for a study that evaluates reproducibility and the capacity to predict the need for orthodontic treatment 15.
To this end, comparisons were carried out among cases diagnosed thoroughly by two examiners using two indexes, the DAI 16 and the IOTN 17.
Thus, the objectives of this work were: to evaluate the reproducibility of the Dental Health and Aesthetic Components of the IOTN and of the Dental Aesthetic Index, as established by the WHO 18; to compare the two indexes regarding reproducibility; and to compare the reproducibility of the AC and DHC components of the IOTN.
Material and method
The reproducibility of the DAI and IOTN indexes, i.e., the capacity of an index to yield the same values when an individual is re-examined by the same examiner or by different examiners, was evaluated. For this purpose, 60 schoolchildren aged 12 years were examined by two trained and calibrated examiners. To start the data collection, explanatory letters were given to the director of the school to ask permission to carry out this study. In addition, the parents of each selected student signed a consent form for the study, which was approved by the ethics committee.
Index of Orthodontic Treatment Need (IOTN)
This index has an aesthetic component (AC) and a dental health component (DHC).
The DHC registers, through a plastic ruler recommended by the index, the occlusal characteristics of a malocclusion that harm the dentition and adjacent structures. There are five levels, from Degree 1 (no need for treatment) up to Degree 5 (great need for treatment). This index serves as a basic guide for an impartial judgment of the malocclusion. There are two ways of evaluating the DHC data. The first takes into account only the degree (from 1 to 5), and the second indicates the cause for the categorization at that level: each characteristic is represented by a letter besides the degree number, e.g., 5a for an overjet larger than 9 mm.
The aesthetic component (AC) consists of a scale of ten colored pictures showing different levels of smile attractiveness, the same scale adopted by Evans and Shaw 19, denominated SCAN. The objective of the scale is to find a smile similar to, or with a severity level equivalent to, that of the individual evaluated, placing the smile on a scale where number 1 represents the most attractive smile and number 10 the least attractive smile 16.
Dental Aesthetic Index (DAI)
The Dental Aesthetic Index comprises three groups of conditions: dentition (missing teeth), space (crowding, spacing, diastema (mm), anterior maxillary crowding (mm) and anterior mandibular crowding (mm)) and occlusion (maxillary and mandibular overjet (mm), anterior vertical open bite (mm) and molar relationship) 17 .
IOTN index
The dental health component of this index can be subdivided into three levels of severity according to the need for orthodontic treatment. A patient classified within degrees 1 and 2 is considered to have no or little need for orthodontic treatment; at degree 3, a moderate need; and at degrees 4 and 5, a severe need for orthodontic treatment.
The same criterion can be applied to the aesthetic component of this index: degrees 1 to 4 correspond to patients with no or little need for orthodontic treatment, degrees 5, 6 and 7 to moderate need, and degrees 8, 9 and 10 to severe need for orthodontic treatment, as illustrated in the sketch below.
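To make the two banding rules above concrete, here is a minimal Python sketch; it is not part of the original study, and the function name and band labels are illustrative only.

```python
def iotn_need(dhc_grade: int, ac_grade: int) -> dict:
    """Map IOTN grades to treatment-need bands (DHC 1-5, AC 1-10),
    following the banding described in the text above."""
    if dhc_grade in (1, 2):
        dhc_band = "no/little need"
    elif dhc_grade == 3:
        dhc_band = "moderate need"
    else:  # degrees 4 and 5
        dhc_band = "severe need"

    if 1 <= ac_grade <= 4:
        ac_band = "no/little need"
    elif 5 <= ac_grade <= 7:
        ac_band = "moderate need"
    else:  # degrees 8, 9 and 10
        ac_band = "severe need"

    return {"DHC": dhc_band, "AC": ac_band}

print(iotn_need(dhc_grade=4, ac_grade=6))
# {'DHC': 'severe need', 'AC': 'moderate need'}
```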
DAI index
The final DAI score is obtained mathematically: each of its components is multiplied by its respective weight, the results are summed, and the constant 13 of the equation is added to the total. The resulting sum represents the DAI score.
The severity of the malocclusion within a population is classified based on the score obtained through the evaluation of the occlusal characteristics. Patients with scores lower than 25 are considered to have no or little need. Scores between 26 and 30 indicate elective treatment, and scores between 31 and 35, or 36 and above, indicate that treatment is highly desirable or mandatory, respectively.
To evaluate and compare the degree of orthodontic treatment need of the analyzed cases according to both indexes, patients with DAI scores up to 25 were considered to be without need for orthodontic treatment, those with scores of 26-35 to have a moderate need, and those with scores above 36 to have a severe treatment need (see the sketch after this paragraph).
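As an illustration of the scoring and banding just described, the following Python sketch computes a DAI score and classifies it into the three bands used in this study. The regression weights shown are the standard published DAI coefficients (they are an assumption here, since the text above does not list them), so they should be checked against reference 17 before any reuse.

```python
# Standard (rounded) DAI regression weights; assumed from the published
# index, since the text does not list them explicitly.
DAI_WEIGHTS = {
    "missing_teeth": 6,
    "crowding": 1,
    "spacing": 1,
    "diastema_mm": 3,
    "ant_max_irregularity_mm": 1,
    "ant_mand_irregularity_mm": 1,
    "max_overjet_mm": 2,
    "mand_overjet_mm": 4,
    "open_bite_mm": 4,
    "molar_relation": 3,
}
DAI_CONSTANT = 13  # the constant of the equation, added to the weighted sum

def dai_score(measures: dict) -> int:
    total = sum(DAI_WEIGHTS[k] * v for k, v in measures.items())
    return round(total + DAI_CONSTANT)

def dai_band(score: int) -> str:
    # Three-band grouping used in this study.
    if score <= 25:
        return "no/little need"
    if score <= 35:
        return "moderate need"
    return "severe need"

example = {k: 0 for k in DAI_WEIGHTS}
example.update({"crowding": 2, "max_overjet_mm": 5, "molar_relation": 1})
s = dai_score(example)
print(s, dai_band(s))  # 28 moderate need
```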
Statistical analysis
To evaluate reproducibility, the agreement between the data obtained by the examiners was verified in relation to the DHC-IOTN, the AC-IOTN, the characteristics of the DAI, the patients' AC, and the patients' positive or negative answers when asked by each examiner whether there was a need for orthodontic treatment.
To achieve this goal, it was necessary to separate the quantitative variables with and without normal distribution from the qualitative ones (categories and bands), so that the statistical tests could be applied appropriately.
The statistical tests applied in this study are described below for each type of variable, with their respective functions, at a significance level of 5%; a short programmatic sketch follows.
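The test selection just described can be sketched as follows in Python, using SciPy and scikit-learn; the variable names and example arrays are illustrative, not data from the study.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
ex1 = rng.normal(5, 1, 60)          # examiner 1, e.g. maxillary overjet (mm)
ex2 = ex1 + rng.normal(0, 0.2, 60)  # examiner 2, same 60 subjects

# Normally distributed variables: Pearson correlation + paired t test.
r, _ = stats.pearsonr(ex1, ex2)
t, p_t = stats.ttest_rel(ex1, ex2)

# Non-normal variables: Spearman correlation + Wilcoxon signed-rank test.
rho, _ = stats.spearmanr(ex1, ex2)
w, p_w = stats.wilcoxon(ex1, ex2)

# Categorical variables: Cohen's kappa for inter-examiner agreement.
cat1 = rng.integers(1, 6, 60)        # e.g. DHC degrees from examiner 1
cat2 = cat1.copy()
cat2[:5] = np.clip(cat2[:5] + 1, 1, 5)  # examiner 2 mostly agrees
kappa = cohen_kappa_score(cat1, cat2)

print(f"Pearson r={r:.3f} (t-test p={p_t:.3f}); "
      f"Spearman rho={rho:.3f} (Wilcoxon p={p_w:.3f}); kappa={kappa:.3f}")
```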
Results
The data that presented a normal distribution, namely maxillary overjet and the final DAI scores, were very well correlated between the examiners (r = 0.96) according to the Pearson correlation coefficient. According to Student's t test, there was no statistically significant difference (p > 0.05) between the means obtained by the two examiners; in other words, the means obtained were similar (Table 1).
The data that did not present a normal distribution also showed high correlation according to the Spearman correlation coefficient, in decreasing order of correlation: mandibular overjet and open bite (r = 1.00), diastema (r = 0.997), maxillary crowding (r = 0.947), dentition (r = 0.92), and mandibular crowding (r = 0.852).
The results obtained from the Wilcoxon test corroborated the data obtained by the t test and likewise did not demonstrate a significant difference between the medians obtained for these characteristics (Table 2).
The agreement obtained among the categorical data was also high (Table 3), presented in decreasing order of Kappa coefficient: the patient's opinion (r = 0.986); the subcategories of the DHC (letter) (r = 0.936), which are easier to agree on than the DHC number; the DHC (numeric value) (r = 0.889); the professional's AC (r = 0.839); and the patient's AC (r = 0.690). The Kappa coefficients for the components of the DAI, also in decreasing order, were: crowding (r = 0.905), molar relationship (r = 0.866) and spacing (r = 0.850). To evaluate whether there was a significant difference between the results obtained by the two examiners, the Wilcoxon test was used, except for the patient's opinion. For all these variables, the value of p was not statistically significant; therefore, there was no difference between the two examiners. For the patient's opinion, the test of proportions was used, since these are objective answers; here too, the value of p did not show a significant difference between the two examiners.
The degree of orthodontic treatment need according to the two examiners was evaluated by bands (SNT = no treatment need, NMT = moderate treatment need, NST = severe treatment need) in accordance with each index (Table 4). To evaluate the distribution of the data among the bands of the indexes DAI, DHC-IOTN, AC-IOTN and the patient's AC, the chi-square or Fisher test was carried out. The results indicated that the value p = 0.00 was statistically significant for both indexes, showing that a significant difference existed; therefore, the distribution of the data obtained by the examiners was not statistically the same. The classification of the bands by scores can produce this type of result. The frequency of the data obtained by the examiners regarding the degrees of need for orthodontic treatment according to the indexes DAI, DHC-IOTN, AC-IOTN and the patient's AC can be seen in Figures 1, 2 and 3.
Regarding the frequency of the scores of each index, the results indicated that degrees 4 and 5, corresponding to severe need (NST), were the most frequent for the DHC-IOTN index (Figure 4); degrees 2 and 4, corresponding to no need for orthodontic treatment (SNT), for the AC-IOTN (Figure 5); and, for the DAI index, score 33 in the 26-35 band (moderate need of treatment) and score 39 in the 36-100 band (severe need of orthodontic treatment) (Figure 6).
Discussion
Based on the premise that conventional orthodontic diagnosis is qualitative, and therefore a descriptive procedure that hinders quantitative evaluation, many quantitative systems for evaluating malocclusion and the need for orthodontic treatment have been developed over the last fifty years. Each index summarizes a group of occlusal characteristics into numeric values. For each of these indexes there is a cut-off point below which the severity of the malocclusion is considered too small to warrant orthodontic treatment, while cases with values above this point are considered in need of orthodontic treatment. As a consequence, an index with a cut-off point works as a diagnostic test 8 . The correlation results obtained between the examiners were considered high, for the variables with and without normal distribution, in accordance with Eklund et al. 20 . They emphasize the difficulty of specifying a kappa value as a standard for an appropriate calibration, because the kappa value varies according to the total agreement and the characteristics of the distribution. In addition, when evaluating whether there was a statistical difference between the results obtained by the examiners, no difference was observed in the values of p for the DAI and IOTN.
Richmond et al. 21 also demonstrated the reproducibility of the IOTN by evaluating the consensus of opinions about the need for treatment within a group of 74 dentists, orthodontists and general practitioners. According to Otuyemi and Noar 22 , who evaluated the variability among the indexes DAI, OI and HMAR, there is a high level of reproducibility and correlation among them.
Considering the categorical data, it can be observed from Table 3 that, except for the patients' AC, the kappa values obtained were considered high. In addition, when the value of p was evaluated, no significant difference was observed between the two examiners.
When comparing other classification methods with the IOTN and DAI, some points should be emphasized. The classification of Angle 14 has been shown to have low reproducibility 16 and proved unsuitable for analyzing treatment priority. For epidemiological use, the registration techniques described by Björk et al. 23 are considered acceptable with respect to the precision with which they identify the several aspects of the occlusion, with up to 80% agreement; however, they do not evaluate treatment priority. Assigning weights to some aspects of the occlusion reflects a concern to grade their severity and thereby prioritize treatment, and several indexes were created on that principle 24-27 . Perhaps the great diversity of occlusal indexes is explained by the fact that some of them do not register a wide range of occlusal characteristics, present a certain degree of subjectivity, or are used with objectives different from those for which they were recommended, in addition to the great diversity of studies that makes comparison and evaluation of their reproducibility impossible. As a consequence, the DAI and IOTN indexes have the advantage of having been recommended specifically for evaluating the need for orthodontic treatment and of being reproducible. The data obtained in this study are consistent with the results reported by Yeh et al. 28 , who indicated that the two indexes DAI and IOTN are capable of identifying patients' occlusal characteristics, despite the existence of few studies that compare them objectively 29,30 . These indexes have similar objectives but different applications. Although the DAI seems to be easier to use, it does not evaluate some occlusal characteristics, such as crossbite, deep overbite and midline deviation 22 . Another aspect to be taken into account is that the margin of error of the DAI is larger, since the occlusal characteristics are measured with the periodontal probe and can reflect large alterations when the corresponding weights are applied to the measurements. The IOTN allows an easier evaluation, since it possesses previously established classification degrees in both the DHC and the AC. However, concerning the aesthetic component, the aesthetic scale does not include pictures representative of open bite or crossbite, for instance, besides each individual's own subjectivity in analyzing it. A caveat should also be made regarding the DHC-IOTN, which disregards some characteristics, as it only takes into account the most severe one.
The importance of the patient's aesthetic perception regarding orthodontic treatment cannot be underestimated; in other words, patients who receive treatment should also be satisfied with its aesthetic and functional benefits 28 . Based on this, the patients' opinion was also evaluated and compared regarding reproducibility: the patients were questioned at different moments and by different examiners. The data indicated that there was no significant difference between the opinions given by these patients to the two examiners; almost 100% of the patients agreed that there was a need for orthodontic treatment on both occasions on which they were asked. However, reproducibility was not as good when the patients evaluated the visual aesthetic scale.
Regarding the reproducibility of the AC component of the IOTN as scored by the examiners, it was observed that the subjectivity of "perceiving" something similar in the scale of pictures is minimized in the case of professionals, unlike what happens with patients, who often point to the picture representing a situation they would like to have, or that they consider more aesthetic.
As can be observed in Figures 1, 2 and 3, there was a clear tendency of the patients to indicate scores corresponding to no need for orthodontic treatment (from 1 to 4) when compared with the results obtained by the two examiners with the DAI and, mainly, the DHC-IOTN. These results corroborate the findings of Lewit 26 , who emphasized that patients' opinions about the need for treatment are registered mainly through dental and facial appearance and the patients' complaints, which do not always coincide with the professionals' evaluations of treatment need.
Accordingly, subsequent studies should be carried out in different populations, with different needs for orthodontic treatment and with different groups of orthodontists, to allow a validation of these indexes in clinical realities different from those in which they arose, in order to corroborate the findings of the present study.
Conclusions
According to the data obtained, it can be concluded that both indexes were reproducible when comparing the data obtained by the two examiners. When evaluating which index was more reproducible in relation to the bands of treatment need, the result obtained, in decreasing order of reproducibility, was DHC-IOTN, AC-IOTN and DAI; therefore, the IOTN was more reproducible than the DAI. The two components of the IOTN were reproducible, but the DHC more so than the AC; this may be explained by the degree of subjectivity involved in evaluating the visual aesthetic scale of the AC component.
Collaborations
PCA Paiva, ACR Farias and KC Lima have equally participated in every phase of the elaboration of this paper.
H0 = Both indexes are reproducible; H1 = Both indexes are not reproducible.
H0 = The index DAI has better reproducibility than the IOTN; H1 = The index DAI does not have better reproducibility than the IOTN.
H0 = The component DHC of the IOTN is more reproducible than the AC; H1 = The component DHC of the IOTN is not more reproducible than the AC.
Figure 2. Distribution of the data obtained by the examiners for cases with need for moderate orthodontic treatment according to the indexes DAI, DHC-IOTN, AC-IOTN and the patient's AC. Natal (RN), 2006.
Table 2. Spearman correlation coefficient and p value according to the Wilcoxon test for variables without normal distribution. Natal (RN), 2006.
Table 1. Pearson correlation coefficient and p value according to Student's t test for variables with normal distribution. Natal (RN), 2006.
Table 3. Kappa correlation coefficient of the categorical data and p value according to the Wilcoxon or proportions tests. Natal (RN), 2006.
Table 4. Kappa correlation coefficient and p value according to the chi-square test for the degree of need for orthodontic treatment. Natal (RN), 2006. | 2017-04-01T17:14:25.677Z | 2010-05-01T00:00:00.000 | {
"year": 2010,
"sha1": "ad256e0744179f71ee7905958579d241ac6fd92b",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/csc/a/TDpfsw47LB8NCVWbHvdmdgJ/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "ad256e0744179f71ee7905958579d241ac6fd92b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251469668 | pes2o/s2orc | v3-fos-license | Lymphocyte to monocyte ratio and serum albumin changes predict tacrolimus therapy outcomes in patients with ulcerative colitis
Tacrolimus therapy for ulcerative colitis is ineffective in certain patients; these patients require biologics or colectomy. We examined the ability of serum albumin levels and leukocyte subtypes to predict the therapeutic efficacy of tacrolimus. Patients with ulcerative colitis treated with tacrolimus were divided into non-failure and failure (required colectomy or a switch to biologics or systemic steroids) groups. Serum albumin levels and leukocyte subtypes at induction, week 1, and week 2 after reaching high trough levels were retrospectively examined. Tacrolimus therapy failed in 18/45 patients within 3 months. The week 2/week 1 albumin ratio was significantly different between the failure and non-failure groups (P < 0.001). Receiver operating characteristic curve analysis revealed an optimal cut-off value of 1.06 for the week 2/week 1 albumin ratio, with an area under the curve of 0.815. Analysis of leukocyte subtypes revealed a significant between-group difference in the week 1 lymphocyte to monocyte ratio (P < 0.001). Multivariate analysis showed that a week 2/week 1 albumin ratio ≤ 1.06 and a week 1 lymphocyte to monocyte ratio ≤ 3.86 were independent predictors of failure. Therefore, a low week 2/week 1 albumin ratio and a low week 1 lymphocyte to monocyte ratio predicted failure within 3 months of tacrolimus induction; a combination of these markers could accurately predict failure.
Serum Alb levels were compared between the non-failure and failure groups before (week 0) and after (week 1 and week 2) tacrolimus induction (see Supplementary Table S1 online, which shows serum Alb levels and ratios). A significant difference in the serum Alb level between groups occurred only at week 2 (P = 0.019). We also calculated the ratios of the three Alb levels (at week 0, week 1, and week 2) measured per patient and found that the week 2/week 0 and week 2/week 1 Alb ratios were significantly different between the groups (P = 0.013 and P < 0.001, respectively). As the week 2 value and these two ratios (week 2/week 0; week 2/week 1) showed significant differences and could predict failure within 3 months, a subsequent receiver operating characteristic (ROC) analysis was performed (Table 2).
Among these three values, the week 2/week 1 Alb ratio had the largest area under the curve (AUC); the optimal cut-off value and AUC were 1.06 and 0.815 (95% confidence interval [CI]: 0.680-0.950), respectively. Kaplan-Meier analysis was used to compare the fractions of patients with non-failure in the groups with week 2/week 1 Alb ratio > 1.06 and week 2/week 1 Alb ratio ≤ 1.06 (Fig. 1). There were 26 and 19 patients in the > 1.06 group and the ≤ 1.06 group, respectively, of whom 4 and 13 patients experienced failure during the 3-month follow-up period; the rate of failure was thus markedly higher in the ≤ 1.06 group.

Comparison of leukocyte subtype absolute counts and rates between the non-failure and failure groups. Similar to Alb, the leukocyte subtypes were analyzed. Absolute counts and leukocyte subtype rates were compared between the non-failure and failure groups, and significant differences were found for the following values: neutrophil count at week 2 (P = 0.007), neutrophil percentage at week 2 (P = 0.002), lymphocyte count at week 1 (P = 0.013), lymphocyte count at week 2 (P = 0.003), lymphocyte percentage at week 1 (P = 0.002), and lymphocyte percentage at week 2 (P < 0.001) (see Supplementary Table S2 online, which shows the leukocyte subtype absolute counts and rates). We also calculated the pairwise ratios of neutrophils, lymphocytes, and monocytes and compared them between the non-failure and failure groups before and after tacrolimus induction (see Supplementary Table S3 online, which shows the leukocyte subtype ratios). The N/L ratios at weeks 1 and 2 were significantly different (P = 0.006 and P < 0.001, respectively). Although the neutrophil to monocyte ratio (N/M ratio) did not show a significant difference, the lymphocyte to monocyte ratio (L/M ratio) at week 1 and week 2 showed significant differences (P < 0.001 and P = 0.011, respectively). ROC analysis was performed on the 10 values that showed significant differences between the two groups (Table 3); the L/M ratio at week 1 showed the largest AUC, with an optimal cut-off value of 3.86.

Table 3. Receiver operating characteristic analysis for prediction of treatment failure based on absolute count, rate, and subtype ratio during the 3-month follow-up period after reaching the tacrolimus high trough level. AUC, area under the curve; CI, confidence interval; N/L ratio, neutrophil to lymphocyte ratio; L/M ratio, lymphocyte to monocyte ratio.

The number of patients in the week 2/week 1 Alb ratio > 1.06 and week 1 L/M ratio ≤ 3.86 group was 14, with seven patients experiencing failure within 3 months. By contrast, all patients in the week 2/week 1 Alb ratio ≤ 1.06 and week 1 L/M ratio ≤ 3.86 group experienced failure within 3 months. Significant differences between the respective groups were shown using the log-rank test. Additionally, Cox proportional hazard regression analysis was performed for week 2/week 1 Alb ratio ≤ 1.06 and week 1 L/M ratio ≤ 3.86 (Table 4); both were independent prognostic factors for tacrolimus treatment failure within 3 months of induction in univariate and multivariate analyses.
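To illustrate how an optimal cut-off such as 1.06 (or 3.86 for the L/M ratio) can be derived from ROC analysis, here is a Python sketch using scikit-learn; the Youden index is one common criterion (the paper does not state which criterion was used), and the toy arrays below are illustrative rather than the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Toy data: 1 = failure within 3 months, 0 = non-failure.
failure = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
alb_ratio = np.array([0.95, 1.00, 1.12, 1.20, 1.02, 1.10,
                      1.15, 0.98, 1.08, 1.25])  # week 2 / week 1 Alb

# Low ratios predict failure, so use the negated ratio as the score.
fpr, tpr, thresholds = roc_curve(failure, -alb_ratio)
youden = tpr - fpr                      # Youden index at each threshold
best = thresholds[np.argmax(youden)]
print(f"AUC={auc(fpr, tpr):.3f}, optimal cut-off: ratio <= {-best:.2f}")
```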
Discussion
We focused on serum Alb levels and leukocyte subtypes as prognostic factors for tacrolimus therapy outcomes in UC. Regarding prediction of the treatment effect using Alb levels, the change in Alb level within 2 weeks of anti-TNFα treatment has been shown to predict prognosis 15 . We previously applied this idea to tacrolimus therapy and reported that the ratio of Alb at 2 weeks after achieving a high tacrolimus trough to Alb before tacrolimus induction could predict failure within 3 months 16 . In our previous study, we performed the same analysis on CRP, Hb, and WBC (without leukocyte fractions) in addition to Alb, but only Alb showed significant results. Moreover, in our previous study, only the week 2/week 0 and week 1/week 0 Alb ratios were calculated, not the week 2/week 1 Alb ratio.
In this study, we decided to include the week 2/week 1 Alb ratio. Interestingly, the week 2/week 1 Alb ratio was a more accurate prognostic factor by ROC analysis than the week 2/week 0 Alb ratio reported in our previous study 16 . This difference, as described in our previous report, occurred because serum Alb levels were concentrated owing to intravascular dehydration associated with frequent diarrhea and bloody stools before tacrolimus induction, whereas serum Alb levels decreased because of blood dilution by intravenous infusion at week 1. Additionally, there are some cases that do not show sufficient Alb elevation because of a tacrolimus-refractory state, and we believe that the change in Alb between week 0 and week 1 was greater than that between week 0 and week 2.
As a novel approach in this study, the absolute counts, rates, and ratios of leukocyte subtypes before tacrolimus induction and at weeks 1 and 2 were analyzed by comparing the failure and non-failure groups. From these analyses, the L/M ratio at week 1 most accurately predicted failure within 3 months. The L/M ratio has been reported to be a prognostic indicator in the field of malignancy, where a low L/M ratio has been shown to be a poor prognostic factor 18 . The usefulness of the L/M ratio as an activity index in UC was first reported by Cherfane et al., whose study showed a significant difference in the L/M ratio between active and quiescent UC groups that correlated with clinical and endoscopic activity 19 . In the report by Okba et al., the L/M ratio showed a significant difference between the inactive and active UC groups 20 . Xu et al. reported that the L/M ratio in patients with UC showed a significant difference between active and inactive groups, as did the L/M ratio in a cohort of patients with Crohn's disease 21 . As described above, the L/M ratio has been shown to be useful in assessing UC activity, and all these reports have shown that a low L/M ratio is indicative of active UC [19][20][21] . Previous studies of UC and Crohn's disease have shown reduced lymphocyte reactivity at the peripheral and mucosal levels, and lymphocyte reduction can occur with inflammation [22][23][24] . Monocytes differentiate into macrophages and dendritic cells in tissues during inflammation and play a role in innate immunity. Sustained activation of monocytes and incomplete innate immune responses can be involved in IBD development 25 . Furthermore, monocyte hyperplasia increased the risk of worse clinical outcomes in IBD 26 . The L/M ratio is thus a value that indicates UC activity, and it is believed to predict failure by the same mechanism as the aforementioned biomarkers. Furthermore, the reason the L/M ratio predicted failure at week 1, rather than at week 0 (before tacrolimus induction), in the present study is that most patients were equally active before tacrolimus induction; therefore, there were no significant differences in the L/M ratios between the failure and non-failure groups at that time. However, the L/M ratios of the non-failure group, who improved with tacrolimus treatment, increased within one week.
Nishida et al. performed a leukocyte subtype analysis similar to that conducted in our study and reported that the N/L ratio before tacrolimus induction could be a predictor of the therapeutic effect of tacrolimus 17 . In the present study, the N/L ratio before tacrolimus induction was not significantly different between the failure and non-failure groups. This discrepancy may have arisen because the endpoint of our study was set at 3 months, whereas Nishida et al. did not define an endpoint. In addition, the small sample sizes of both studies may have contributed to the differences in the data.
We further analyzed the combination of the week 2/week 1 Alb and week 1 L/M ratios. Such an analysis, using a combination of two markers, has been conducted in studies on relapse prediction using multiple biomarkers, especially studies incorporating fecal calprotectin (FC) and the fecal immunochemical test (FIT). These studies showed that patients with UC in clinical remission who were positive for both FC and FIT had a higher rate of subsequent relapse, demonstrating the usefulness of a combined analysis of markers 27,28 . In the present study, the combination of the week 2/week 1 Alb ratio and the L/M ratio at week 1 likewise predicted failure more accurately.
We focused not only on the values before induction but also on the values, and their changes, after induction. Although it would be ideal to have a test that can predict the effect of UC treatment before induction (and there are reports of such tests), no test can currently predict prognosis with certainty 17,29,30 . Another advantage of the evaluation method suggested in this study is that the examined Alb levels and leukocyte subtypes can be measured easily and inexpensively at any institution.
This study had several limitations. First, it was a single-center, retrospective study with a small sample size. Second, the examined markers were not compared with other biomarkers. Although it was established that these markers could predict the prognosis of UC, a comparison with various biomarkers will need to be conducted in future studies. Furthermore, in this study, it took a median of 4 days from tacrolimus induction to achieve a high trough, and approximately 18 days from tacrolimus induction to determine both the week 2/week 1 Alb ratio and the L/M ratio at week 1; it may not be realistic to start considering the next treatment at this point. We nevertheless hope that these markers will be considered as decision-making factors for tacrolimus therapy for UC 16 .

Table 4. Multivariate analysis for predicting failure 3 months following tacrolimus induction. HR, hazard ratio; CI, confidence interval; Alb, albumin; L/M ratio, lymphocyte-to-monocyte ratio; 5-ASA, 5-aminosalicylic acid.

In conclusion, a low week 2/week 1 Alb ratio and a low L/M ratio at week 1 were predictive of failure of tacrolimus-based therapy for UC in this study. The combination of these markers can provide a more accurate prognosis.
Patients. Patients with UC treated with tacrolimus at our institution between August 2010 and April 2021 were enrolled in this study. The diagnosis of UC was made based on history, clinical features, and endoscopic and histological evaluation according to recent guidelines 31 . We excluded patients with inflammatory bowel disease (IBD) who were not diagnosed with UC, such as those with indeterminate colitis or IBD unclassified. Based on the clinical activity index (also called the Rachmilewitz index), biological data, and endoscopic findings, only patients with moderate-to-severe UC were included in the study. Patients who did not reach an initial high tacrolimus trough level of 10-15 ng/mL were excluded, as were patients who took more than 28 days to attain the high trough level, because changes in biological data before and after induction were examined in this study.
Study design.
This was a retrospective single-center study. Tacrolimus treatment failure, defined as colectomy or a switch to biologics or corticosteroids within 3 months of tacrolimus induction, was the primary outcome measure. The secondary endpoints were predictors of tacrolimus treatment efficacy, based on a comparison of biological data and clinical findings between the "non-failure" and "failure" groups.
Disease assessment. The Rachmilewitz index was used to evaluate clinical disease activity 32 . Serum Alb and leukocyte levels were measured at our facility at admission and more than twice per week. The absolute count and rate of leukocyte subtypes, including neutrophils, lymphocytes, and monocytes, were extracted.
Endoscopic assessment. Endoscopic examination was performed in all patients before the induction of tacrolimus. The Mayo endoscopic subscore (MES) was used to assess the mucosal status 33 . The MES was assigned using the following criteria: 0, normal or inactive disease; 1, mild disease with erythema, decreased vascular pattern, and mild friability; 2, moderate disease with marked erythema, absence of vascular patterns, friability, and erosions; and 3, severe disease with spontaneous bleeding and ulceration.
Tacrolimus therapy and follow-up. After the initial administration, tacrolimus trough levels were measured 2-3 times per week. The dose was increased until it reached a high trough level of 10-15 ng/mL. As effective blood concentrations are reached after varying amounts of time across individuals, day 1 was defined as the time when the tacrolimus dose resulted in a high trough level for an individual. After maintaining a high trough level for 2 weeks, the dose was reduced to a low trough level (5-10 ng/mL), as previously prescribed. In this study, "failure" was defined as undergoing colectomy or switching to biologics or systemic steroids within 3 months from day 1.
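A sketch of how "day 1" and "failure", as defined in the preceding paragraph, could be flagged programmatically; the dates, field names, and the use of 90 days to operationalize "3 months" are illustrative assumptions, not details from the study.

```python
from datetime import date, timedelta

troughs = [  # (measurement date, tacrolimus trough in ng/mL)
    (date(2021, 1, 4), 6.2), (date(2021, 1, 6), 8.9),
    (date(2021, 1, 8), 11.4), (date(2021, 1, 11), 13.0),
]
events = [(date(2021, 2, 20), "switch_to_biologic")]  # hypothetical record

# Day 1: first date on which the trough reaches the high range (10-15 ng/mL).
day1 = next(d for d, level in troughs if 10 <= level <= 15)

# Failure: colectomy, biologics, or systemic steroids within 3 months of day 1.
FAILURE_EVENTS = {"colectomy", "switch_to_biologic", "systemic_steroids"}
window_end = day1 + timedelta(days=90)
failure = any(day1 <= d <= window_end and e in FAILURE_EVENTS
              for d, e in events)
print(day1, failure)  # 2021-01-08 True
```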
Statistical analysis.
Statistical analysis was performed using SPSS version 24 (IBM, Armonk, New York, USA) and SAS version 9.4 (SAS Institute, Cary, NC). Statistical significance was set at P < 0.05. Differences between median values were compared using the Mann-Whitney U test or Fisher's exact test. ROC analysis was conducted to determine the optimal cut-off for each value and rate for predicting failure within 3 months. The cumulative non-failure rate was analyzed using Cox proportional hazard regression and Kaplan-Meier analysis and compared using the log-rank test.
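The survival analyses named above can be sketched as follows using the Python lifelines package (an assumption for illustration only: the authors report using SPSS and SAS, not Python, and the toy data below are not the study data).

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "time_days": [20, 90, 90, 45, 90, 60, 90, 30],
    "failed":    [1, 0, 0, 1, 0, 1, 0, 1],
    "alb_ratio_low": [1, 0, 0, 1, 0, 0, 0, 1],  # week2/week1 Alb <= 1.06
    "lm_ratio_low":  [1, 0, 1, 1, 0, 1, 0, 0],  # week 1 L/M <= 3.86
})

# Kaplan-Meier curve and log-rank test between the two Alb-ratio groups.
km = KaplanMeierFitter()
g = df["alb_ratio_low"] == 1
km.fit(df.loc[g, "time_days"], df.loc[g, "failed"], label="ratio <= 1.06")
res = logrank_test(df.loc[g, "time_days"], df.loc[~g, "time_days"],
                   df.loc[g, "failed"], df.loc[~g, "failed"])

# Cox proportional hazards with both binary markers as covariates;
# a small penalizer stabilizes the fit on tiny toy data.
cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="time_days",
                                     event_col="failed")
print(res.p_value, cph.hazard_ratios_)
```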
Ethics approval. The study protocol was reviewed and approved by the Ethics Committee of Hamamatsu University School of Medicine (approval number: 20-356), and the study was performed in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. All enrolled patients provided written informed consent to participate in the study.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. | 2022-08-11T06:16:09.514Z | 2022-08-09T00:00:00.000 | {
"year": 2022,
"sha1": "2324e76e705979ed3c70a8d6c8b616813b157707",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-17763-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "179e1c52c27256b22f4c665b9254b5178e08e6f7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220968902 | pes2o/s2orc | v3-fos-license | Constructing alternating 2-cocycles on Fourier algebras
Building on recent progress in constructing derivations on Fourier algebras, we provide the first examples of locally compact groups whose Fourier algebras support non-zero, alternating 2-cocycles; this is the first step in a larger project. Although such 2-cocycles can never be completely bounded, the operator space structure on the Fourier algebra plays a crucial role in our construction, as does the opposite operator space structure. Our construction has two main technical ingredients: we observe that certain estimates from [H. H. Lee, J. Ludwig, E. Samei, N. Spronk, Weak amenability of Fourier algebras and local synthesis of the anti-diagonal, Adv. Math., 292 (2016); arXiv 1502.05214] yield derivations that are "co-completely bounded" as maps from various Fourier algebras to their duals; and we establish a twisted inclusion result for certain operator space tensor products, which may be of independent interest.
Background context and our main application
Fourier algebras of locally compact groups have been a fertile source of examples in the study of general Banach function algebras, while also having some important applications to the study of operator algebras associated to group representations (see e.g. [8]). One theme with a long history is the study of how properties of a group G are reflected in properties of its Fourier algebra A(G). For instance, if G is compact and non-abelian and f ∈ A(G), the matrix-valued Fourier coefficients f̂(π) must decay at a certain rate as π "tends to infinity", which intuitively suggests that f should have a degree of differentiability or Hölder continuity. This heuristic underlies a theorem of Johnson 1 that when G is SO(3) or SU(2), there are non-zero derivations from A(G) to its dual; using general restriction theorems for Fourier algebras, it follows that the same is true for any G that contains a closed copy of SO(3) or SU(2).
Johnson's result went counter to some expectations at the time: for, given a Lie group G and some X in its Lie algebra, the Lie derivative along X, viewed as a continuous operator on C_c^∞(G), does not extend to a continuous map A(G) → C_b(G). (If such an extension existed it would yield non-zero continuous point derivations on A(G), contradicting the fact that points of G are sets of synthesis for A(G).) Enlarging the codomain from C_b(G) to A(G)* allows ∂_X f to be a distribution rather than a function, but a separate averaging argument is needed to explain why f ↦ ∂_X f has any chance of being continuous from A(G) to A(G)*. Despite these technical difficulties, Johnson's result has been greatly extended in recent years, by using the operator-valued Fourier transform for certain Type I groups to make explicit calculations and estimates: see the papers [2], [3] and [12]. These papers were motivated by a conjecture posed by Forrest and Runde in [7], which predicted exactly which groups G allow non-zero derivations A(G) → A(G)*; the conjecture was confirmed for all Lie groups in [12], and a recent preprint of Losert [14] contains a solution in full generality.
For any commutative Banach algebra A, the space of derivations A → A * coincides with the first Hochschild cohomology group H 1 (A, A * ). The higher-degree groups H n (A, A * ) potentially capture more information about A, but have proved to be extremely difficult to calculate except in degenerate cases. A more promising approach to finding computable invariants, which was first pursued systematically in [10], is to consider the alternating part of H n (A, A * ); alternating cocycles are more tractable than general ones since they are built from derivations (and there is also conceptual motivation for singling out this class, see Remark 3.4 below). Nevertheless, the existence of non-zero alternating cocycles on a Banach function algebra is very sensitive to properties of the given norm, and is not guaranteed by simply having "enough derivations", as illustrated in Example 3.7 below.
Given the recent progress in studying derivations on Fourier algebras, it is natural to turn our attention to alternating cocycles on A(G). The present paper is the first step in getting this larger programme off the ground, by producing the first examples of groups whose Fourier algebras support non-zero alternating 2-cocycles. In fact, we show that not only do such groups exist, but they occur in abundance among the classical Lie groups.
Theorem 1.1. Let n ≥ 4 and let G be one of the groups SU(n), SL(n, R) or Isom(Rⁿ). Then there is a non-zero, continuous, alternating 2-cocycle on A(G).

Theorem 1.1 is a special case of a much more general result, which in turn follows by combining Theorems 5.7 and 6.3 below. The starting point for the proof of Theorem 5.7 is the canonical identification of A(H × L) with the operator space projective tensor product of A(H) and A(L), valid for any locally compact groups H and L. However, it should be emphasised that our proof requires more than a merely formal use of operator spaces and completely bounded maps; one can show that there are no completely bounded, non-zero, alternating cocycles on Fourier algebras.
The key extra ingredient in the proof of Theorem 5.7 is the following surprising phenomenon, which may have independent interest for those working in operator space theory.

Theorem 1.2 ("Twisted inclusion"). Let X and Y be operator spaces, and let Ỹ denote Y equipped with its opposite operator-space structure. Let ⊗ and q ⊗ denote the projective and injective tensor products of operator spaces. Then

X ⊗ Ỹ ⊆ X q ⊗ Y  (contractively).

That is: the identity map on X ⊗ Y extends to a contractive linear map θ_{X,Y} : X ⊗ Ỹ → X q ⊗ Y.
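As a quick sanity check of Theorem 1.2 (this remark is added here for orientation and is not part of the source's proof), take X = C; writing \hat\otimes and \check\otimes for the projective and injective tensor products, one has

```latex
\mathbb{C} \mathbin{\hat\otimes} \widetilde{Y} = \widetilde{Y},
\qquad
\mathbb{C} \mathbin{\check\otimes} Y = Y,
```

and the identity map from Ỹ to Y is isometric at the first matrix level, since the norms on M₁(Ỹ) and M₁(Y) coincide by definition of the opposite structure. So the asserted contraction exists in this degenerate case, while complete boundedness of such inclusions already fails in general, which is consistent with the theorem asserting only a bounded map.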
The opposite operator space structure may be thought of, intuitively, as a "mirror image" of the given one. One of the background aims of this paper is to argue that in applying operator-space methods to the study of Fourier algebras, it may be useful to work simultaneously with both the canonical operator space structure on A(G) and its mirror image.
An overview of the main technical difficulties
We now sketch why the natural attempt to prove Theorem 1.1, by merely following algebraic recipes in an appropriate functional-analytic category, does not work, and indicate the new ideas needed to overcome these difficulties. None of the material here is logically necessary for the proof of Theorem 1.1 or Theorem 1.2, and it may be skipped if the reader wishes to get straight to the precise mathematical details.
We start with a simple example from commutative algebra: on the polynomial algebra C[x, y] ≅ C[x] ⊗ C[y], the Jacobian

F(f, g) = (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x)

is an alternating bilinear map which is a derivation in each variable, and hence an alternating 2-cocycle with values in C[x, y].
This is a special case of a general algebraic construction: given two commutative C-algebras A and B, a symmetric A-bimodule X and a symmetric B-bimodule Y, and derivations D_A : A → X and D_B : B → Y, we can "wedge together" the two amplified derivations D_A ⊗ id_B : A ⊗ B → X ⊗ B and id_A ⊗ D_B : A ⊗ B → A ⊗ Y to produce an alternating 2-cocycle on A ⊗ B with values in X ⊗ Y. Therefore, in cases where we have non-zero derivations from A(H) and A(L) into appropriate modules, one could try to obtain alternating 2-cocycles on A(H × L) by applying the natural analogue of this construction for the category of Banach spaces. Unfortunately this would only yield a non-zero 2-cocycle on the Banach space projective tensor product A(H) ⊗ γ A(L), and the natural map A(H) ⊗ γ A(L) → A(H × L) is never surjective in such cases. On the other hand, A(H × L) can be identified with the operator-space projective tensor product A(H) ⊗ A(L). But in the natural analogue of the algebraic construction for the category of operator spaces, one must start with completely bounded derivations from A(H) and A(L) into symmetric "c.b.-bimodules", and it was shown independently by Spronk and Samei that the only such derivations are identically zero: see [17, Theorem 5.2] or [18].
To resolve this apparent stalemate we need a new approach, which is to examine what occurs if we start from a pair of non-zero derivations D_H : A(H) → A(H)* and D_L : A(L) → A(L)* that become completely bounded after changing the operator space structure on the codomains. Although this breaks certain aspects of the algebraic construction, something survives if D_H and D_L are completely bounded when A(H)* and A(L)* are equipped with their opposite operator space structures (it is an unrecorded observation of the author and Ghandehari that this property holds for the derivations constructed in [2]). For then, as we shall show in Proposition 5.2, combining D_H and D_L and doing some careful book-keeping yields a bounded (but not completely bounded) bilinear map

A(H × L) × A(H × L) → A(H)* ⊗ (A(L)*)~   (1.1)

which behaves like an alternating 2-cocycle on the dense subalgebra A(H) ⊗ A(L). This is still not enough to obtain Theorem 1.1, since the right-hand side of (1.1) is not an A(H × L)-bimodule in the cases where D_H and D_L exist with the required properties. The saving grace is Theorem 1.2, which allows us to embed this space continuously (but not completely boundedly!) into A(H × L)*. Note that in general X ⊗ Ỹ and X q ⊗ Y are incomparable as operator spaces (just take X = C), and so Theorem 1.2 does not seem to follow just from the universal/extremal properties of ⊗ and q ⊗ in the category of operator spaces. Instead, our proof proceeds by embedding X and Y into B(E) and B(F) for Hilbert spaces E and F, which allows us to calculate or bound various tensor norms on B(E) ⊗ B(F) by viewing elements of this space as elementary operators on Schatten classes.
It is striking that to prove Theorem 1.1, which on the face of it makes no reference to operator spaces and completely bounded maps, we are driven to make substantial use of such techniques.
Structure of the paper
Let us now describe the organization of the rest of this paper. In Section 2 we establish some global conventions for our notation, and set up the definition of A(G) that is most suitable for this paper. In Section 3 we give the key definitions of derivations and alternating 2-cocycles for commutative Banach algebras, illustrating the general definitions with some key examples that will motivate our proof of Theorem 5.7. We also record some basic constructions that were not mentioned in [10], as they may be useful for subsequent work. Section 4 has two purposes. We introduce the key notion of a co-completely bounded map (co-cb for short) between two given operator spaces; and we collect some results concerning the canonical operator space structure on A(G), some of which are only stated implicitly in the literature. In doing so, we spend time on the crucial notion of the opposite operator space structure and its functorial properties; this requires us to set out some basic properties that do not seem to be mentioned explicitly in [5] or [15].
Section 5 contains the main work needed to establish Theorem 1.1. En route, we give the proof of Theorem 1.2, reducing the problem to a special case which is handled by means of an interpolation argument. The section ends by stating and proving the main technical theorem of this paper, Theorem 5.7, which says, loosely speaking, that we can construct non-zero 2-cocycles given enough non-zero co-cb derivations.
For some groups where non-zero derivations have been constructed, the co-cb property can be read out of the explicit formulas in [2]; but in fact those cases and more besides can be obtained by repurposing some technical results from [12]. Details are given in Section 6, culminating in Theorem 6.3 which provides co-cb derivations in all cases needed to establish Theorem 1.1. Finally, in Section 7 we make some remarks and pose some questions, with a view to possible avenues for future work.
We have attempted to make this paper accessible to workers in the general area of Banach algebras, or functional analysts interested in structural properties of particular Banach algebras. In particular, we have not assumed any prior familiarity with either Fourier algebras of locally compact groups, or the Hochschild cohomology groups of Banach algebras, and have tried to include a small amount of extra motivation for these objects. On the other hand, we do assume some previous exposure to the basic language of operator spaces and completely bounded maps.
Conventions and notation
Throughout this article, all derivations and cocycles from Banach algebras into Banach bimodules are tacitly assumed to be norm-continuous. This is the convention adopted, for instance, in [10], and this article will not be concerned with any issues of automatic continuity.
The algebraic tensor product of two complex vector spaces E and F is denoted by E ⊗ F . The term "map" is our short-hand for "linear map" or "linear operator".
All Banach spaces are defined over complex scalars. For a Banach space E, B(E) denotes the algebra of bounded linear operators on E. The adjoint of a bounded linear map f : E → F between Banach spaces is denoted by f * : F * → E * , with one important exception: if E and F are Hilbert spaces, then we shall denote the adjoint map F * → E * by f # , to avoid confusion with the adjoint in the sense of operators between Hilbert spaces.
One slight departure from usual conventions is that when E is a Hilbert space, we shall formulate various constructions in terms of the dual space E* rather than the conjugate space Ē; of course the two spaces are canonically isomorphic as Banach spaces via the Riesz-Fréchet theorem. (This decision is motivated by issues concerning operator space structures, but is ultimately only a matter of notational preference.) When E = L²(Ω, µ) for some measure space (Ω, µ) and η ∈ L²(Ω, µ), we shall write ev_η for the functional

ev_η(ξ) = ∫_Ω ξ η̄ dµ   (ξ ∈ L²(Ω, µ)).

The projective tensor product of Banach spaces E and F is denoted by E ⊗ γ F; the Hilbertian tensor product of Hilbert spaces V and W is denoted by V ⊗ 2 W.
Our notational conventions for operator spaces and completely bounded maps will be set out in Section 4, since they are not needed until Section 5.
Fourier algebras of locally compact groups
Fourier algebras originated from the study of L¹-group algebras of locally compact abelian (LCA) groups. Given a LCA group Γ with Pontrjagin dual G = Γ̂, the Fourier transform F : L¹(Γ) → C₀(G) is an injective algebra homomorphism, and so one may study the convolution algebra L¹(Γ) by examining the function algebra F(L¹(Γ)) equipped with the norm pushed forward from L¹(Γ). This function algebra, denoted by A(G), is now known as the Fourier algebra of G.
Using Bochner's theorem, one can characterize A(G) in terms of positive-definite functions on G, without reference to the group Γ. Guided by this philosophy, and following work of Godement and Stinespring in the unimodular case, Eymard [6] gave a definition of A(G) that is valid for any locally compact group G; details of the foundational results from Eymard's original paper, and much more besides, may be found in the recent book [11]. However, for most of this paper, it is more convenient to work with an alternative description of A(G). The one we use is also standard, and can be found in e.g. [19, Defn. VII.3.8], but our presentation has some cosmetic differences from the usual one since we wish to work with duals of Hilbert spaces rather than their conjugates. Fix a choice of left Haar measure on G, denoted by ds, and let λ : G → U(L²(G)) denote the left regular representation of G on L²(G), defined by [λ(x)f](s) = f(x⁻¹s). Given ξ, η ∈ L²(G) and x ∈ G, let

Ψ(ξ ⊗ ev_η)(x) := ev_η(λ(x)ξ) = ∫_G ξ(x⁻¹s) η̄(s) ds.

This defines a contractive linear map Ψ : L²(G) ⊗ γ L²(G)* → C₀(G), whose range we denote by A(G). We equip A(G) with the quotient norm of L²(G) ⊗ γ L²(G)*/ker(Ψ) (see [11, Proposition 2.3.3]). Note that with our definition, if G is a LCA group then we recover the isomorphism L¹(Ĝ) ≅ A(G) using Parseval's theorem.
Given Banach algebras A and B and a continuous homomorphism A → B, derivations and cocycles on B can be pulled back to give derivations and cocycles on A (a precise statement will be given in Lemma 3.5). It is therefore useful to identify homomorphic images of A(G) which have a simpler form, so that we can build derivations or cocycles on those algebras instead. The following result was proved by Herz in a more general setting, and is usually known as Herz's restriction theorem.

Theorem (Herz's restriction theorem). Let G be a locally compact group and let G₁ be a closed subgroup of G. Then the restriction map A(G) → A(G₁), f ↦ f|_{G₁}, is a surjective quotient homomorphism.
For a detailed proof and some historical comments, see [11, §2.6 and §2.10]. It is worth noting that if we wish to exploit the restriction theorem to construct derivations or nontrivial cocycles, we have to take G 1 to be non-abelian; Fourier algebras of abelian groups are amenable (in the Banach-algebraic sense) and hence all cohomology with dual-valued coefficients vanishes.
In concrete cases, such as those in Theorem 1.1, once we have constructed a well-defined cocycle on a particular Fourier algebra it will be obvious from context that this cocycle is not identically zero. However, in order to state our main technical theorem (Theorem 5.7) in its greatest generality, it will be convenient to use the following lemma as a "soft" work-around.

Lemma. For any locally compact group G, both lin{a⁴ : a ∈ A(G)} and lin{b² : b ∈ A(G)} are dense in A(G).

Proof. Let A₀ denote the set of compactly supported elements of A(G); this is a dense subalgebra of A(G) (this is immediate from Eymard's original definition, but also follows easily from the one given above). Moreover, for each f ∈ A₀ there exists g ∈ A₀ such that fg = f; see [6, Lemme 3.2] or [11, Prop. 2.3.2]. Hence, by the usual polarization identity

x₁x₂x₃x₄ = (1/(2⁴ · 4!)) Σ_{ε ∈ {±1}⁴} ε₁ε₂ε₃ε₄ (ε₁x₁ + ε₂x₂ + ε₃x₃ + ε₄x₄)⁴,

applied with x₁ = f and x₂ = x₃ = x₄ = g (so that the left-hand side is fg³ = f), we obtain A₀ ⊆ lin{a⁴ : a ∈ A₀}. (The converse inclusion also holds.) Similarly, combining the identity 4fg = (f + g)² − (f − g)² with the same "absorption trick" as above, we obtain A₀ ⊆ lin{b² : b ∈ A₀}. (Once again the converse inclusion is trivial.) Thus, both lin{a⁴ : a ∈ A(G)} and lin{b² : b ∈ A(G)} contain A₀ and hence are dense in A(G).
Definitions and preliminaries
We assume familiarity with the basic language of Banach algebras and Banach bimodules over them. Let A be a Banach algebra and X a Banach A-bimodule. For n ≥ 0 let C n (A, X) be the space of bounded n-multilinear maps A×· · ·×A → X, with the convention that C 0 (A, X) = X.
There are maps δⁿ : Cⁿ(A, X) → Cⁿ⁺¹(A, X), called the Hochschild coboundary operators, which satisfy δⁿ⁺¹ ∘ δⁿ = 0 for all n ≥ 0. We only need the cases n = 1 and n = 2, which have the following explicit form:

(δ¹ψ)(a, b) = a · ψ(b) − ψ(ab) + ψ(a) · b,
(δ²T)(a, b, c) = a · T(b, c) − T(ab, c) + T(a, bc) − T(a, b) · c.

Let Zⁿ(A, X) := ker δⁿ; elements of this space are called n-cocycles. Note that 1-cocycles are the same as derivations. Let Bⁿ(A, X) := im δⁿ⁻¹; elements of this space are called n-coboundaries, and since δⁿ ∘ δⁿ⁻¹ = 0 every n-coboundary is an n-cocycle. The nth Hochschild cohomology group of A with coefficients in X is Hⁿ(A, X) := Zⁿ(A, X)/Bⁿ(A, X).

For the rest of this section, we only consider commutative Banach algebras A, and those Banach A-bimodules X which are symmetric in the sense that a · x = x · a for all a ∈ A and x ∈ X. Note that if A is commutative, then both A itself and its dual A* are symmetric Banach A-bimodules.
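For the reader's convenience, here is the routine verification, not spelled out in the text, that δ² ∘ δ¹ = 0, using the two formulas displayed above:

```latex
\begin{aligned}
(\delta^2\delta^1\psi)(a,b,c)
 &= a\cdot(\delta^1\psi)(b,c) - (\delta^1\psi)(ab,c)
    + (\delta^1\psi)(a,bc) - (\delta^1\psi)(a,b)\cdot c \\
 &= \bigl[ab\cdot\psi(c) - a\cdot\psi(bc) + a\cdot\psi(b)\cdot c\bigr]
  - \bigl[ab\cdot\psi(c) - \psi(abc) + \psi(ab)\cdot c\bigr] \\
 &\quad + \bigl[a\cdot\psi(bc) - \psi(abc) + \psi(a)\cdot bc\bigr]
  - \bigl[a\cdot\psi(b)\cdot c - \psi(ab)\cdot c + \psi(a)\cdot bc\bigr]
  = 0.
\end{aligned}
```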
A multilinear map T ∈ Cⁿ(A, X) is alternating if T(a_{σ(1)}, …, a_{σ(n)}) = sgn(σ) T(a₁, …, a_n) for every permutation σ of {1, …, n}; for n = 2 this simply means T(b, a) = −T(a, b). We write Zⁿ_alt(A, X) for the space of all alternating n-cocycles.

Remark 3.1. In the case n = 2, every T ∈ C²(A, X) is the sum of a symmetric part and an alternating part. Also, every 2-coboundary is symmetric (since A is commutative and X is symmetric). Thus the natural map Z²_alt(A, X) → H²(A, X) is actually an injection.

Following [10, Definition 2.2], we say that T ∈ Cⁿ(A, X) is an n-derivation if it is a derivation in each variable separately. If T is either symmetric or alternating, then to verify the n-derivation property it suffices to check that T is a derivation in the first variable, i.e. to check the identity T(bc, a₂, …, a_n) = b · T(c, a₂, …, a_n) + T(b, a₂, …, a_n) · c for all b, c, a₂, …, a_n ∈ A.
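To make the middle claim of Remark 3.1 explicit, here is the one-line check (a routine computation added for convenience) that every 2-coboundary is symmetric when A is commutative and X is symmetric:

```latex
(\delta^1\psi)(a,b) = a\cdot\psi(b) - \psi(ab) + \psi(a)\cdot b
                    = b\cdot\psi(a) - \psi(ba) + \psi(b)\cdot a
                    = (\delta^1\psi)(b,a),
```

using ab = ba in A and the symmetry relations a · ψ(b) = ψ(b) · a in X.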
Given our earlier definitions, a straightforward calculation shows that every 2-derivation is a 2-cocycle. (It is crucial here that A is commutative and X is symmetric.) In particular, every alternating 2-derivation defines an element of Z²_alt(A, X). The alternating 2-cocycles that we shall construct when proving Theorem 1.1 are created as alternating 2-derivations. For the sake of completeness, we mention that every alternating 2-cocycle turns out to be a derivation in the first variable, and hence (by the remarks above) is an alternating 2-derivation. We omit the details, since this result is the n = 2 case of [10, Theorem 2.5]. Note also that in the introduction, and in the statement of Theorem 1.1, we restricted ourselves to cocycles on A taking values in A*. This is no loss of generality: by the n = 2 case of the general results of [10], non-vanishing of Z²_alt(A, X) for some symmetric Banach A-bimodule X entails non-vanishing of Z²_alt(A, A*). The following example illustrates what the abstract definitions above mean in practice, and provides motivation for later constructions.
Example 3.2. (i) Define a linear map D : C¹(T) → C¹(T)* by

⟨D(f), g⟩ = ∫_T f′(p) g(p) dp   (f, g ∈ C¹(T)),

where p = e^{2πiθ} and dp denotes the usual uniform measure on T. It follows immediately from the product rule that D is a derivation.
(ii) Define a bilinear map F : C¹(T²) × C¹(T²) → C¹(T²)* by

⟨F(f, g), h⟩ = ∫_{T²} ( (∂f/∂θ₁)(∂g/∂θ₂) − (∂f/∂θ₂)(∂g/∂θ₁) )(p) h(p) dp,

where p = (e^{2πiθ₁}, e^{2πiθ₂}) and dp denotes the usual uniform measure on T². Clearly F is alternating as a bilinear map, and a similar calculation to part (i) shows it is a derivation in the first variable; thus it is an alternating 2-cocycle.
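As a worked check of the derivation property in part (i), using the formula as displayed above (a routine calculation supplied here for convenience), the product rule gives

```latex
\langle D(f_1 f_2), g\rangle
  = \int_{\mathbb{T}} (f_1 f_2)'\, g \, dp
  = \int_{\mathbb{T}} f_1'\,(f_2 g)\, dp + \int_{\mathbb{T}} f_2'\,(f_1 g)\, dp
  = \langle f_2\cdot D(f_1) + f_1\cdot D(f_2),\, g\rangle ,
```

which is exactly the derivation identity D(f₁f₂) = f₁ · D(f₂) + f₂ · D(f₁) for the symmetric dual module C¹(T)*.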
Remark 3.3. We noted earlier that an alternating 2-cocycle is a coboundary if and only if it is identically zero. This is true for all n ≥ 1 by [10, Proposition 2.9], and so there is a natural injection of the vector space Z n alt (A, X) into the nth Hochschild cohomology group H n (A, X). The range of this injection is one summand in a canonical decomposition of H n (A, X) into n pieces; for further discussion of this decomposition, see e.g. [1, §3] and [20, §9.4].
Remark 3.4. We briefly leave the world of Banach algebras to mention some important context from algebraic geometry. If R is the coordinate ring of a smooth complex variety, every cocycle on R with symmetric coefficients is equivalent to an alternating one; this is one version of the Hochschild-Kostant-Rosenberg theorem. For instance, this applies to the complex co-ordinate ring of the algebraic group SL 2 , which occurs naturally as a dense subalgebra of A(SU(2)). While the HKR theorem itself does not seem to extend to the Banach-algebraic setting, it suggests that the alternating cocycles on commutative Banach algebras have some deeper meaning, rather than being ad hoc definitions, and hence deserve further study.
Tools for constructing alternating cocycles
Following a strategy analogous to those used in [2,3,12], we shall prove Theorem 1.1 by establishing the non-vanishing of Z 2 alt (A(G 1 ), A(G 1 ) * ) for some judiciously chosen closed subgroup G 1 ⊂ G. We record the following lemma for later reference.
Lemma 3.5. Let A and B be commutative Banach algebras and let θ : A → B be a continuous homomorphism. Then for any F ∈ Z²_alt(B, B*), the induced map θ*F defined by

(θ*F)(a₁, a₂) := θ*(F(θ(a₁), θ(a₂)))   (a₁, a₂ ∈ A),

where θ* : B* → A* denotes the adjoint of θ, belongs to Z²_alt(A, A*). If F ≠ 0 and θ has dense range, then θ*F ≠ 0.
The proof follows easily from the definitions and we omit the details. As mentioned in Section 1.2, in commutative algebra there is a standard procedure for constructing alternating 2-cocycles on a tensor product of two algebras, given a pair of derivations on the respective algebras. With minor modifications, one can do the same in the setting of commutative Banach algebras and symmetric Banach bimodules. This observation is surely known to specialists in the cohomology of Banach algebras, but we have not found an explicit statement in the literature; this is somewhat surprising, since it provides a natural converse to a special case of [10, Theorem 3.6].

Lemma 3.6 (possibly folklore). Given commutative Banach algebras A and B, symmetric Banach bimodules X and Y over A and B respectively, and derivations D_A : A → X and D_B : B → Y, the formula

F(a₁ ⊗ b₁, a₂ ⊗ b₂) := D_A(a₁) · a₂ ⊗ b₁ · D_B(b₂) − a₁ · D_A(a₂) ⊗ D_B(b₁) · b₂   (3.1)

determines an alternating 2-cocycle F ∈ Z²_alt(A ⊗ γ B, X ⊗ γ Y).

Since some of the relevant calculations will recur when we come to prove Theorem 5.7, we provide a detailed proof.
Proof. Since D_A and D_B are bounded, F extends to a continuous bilinear map (A ⊗ γ B) × (A ⊗ γ B) → X ⊗ γ Y. On elementary tensors, a direct calculation gives

F(a₂ ⊗ b₂, a₁ ⊗ b₁) = −F(a₁ ⊗ b₁, a₂ ⊗ b₂)

(using the fact that X and Y are symmetric bimodules) and

F((a ⊗ b)(a₁ ⊗ b₁), a₂ ⊗ b₂) = (a ⊗ b) · F(a₁ ⊗ b₁, a₂ ⊗ b₂) + F(a ⊗ b, a₂ ⊗ b₂) · (a₁ ⊗ b₁)

(using the fact that D_A and D_B are derivations into symmetric bimodules). Hence, by linearity and continuity, F is both alternating and a derivation in the first variable, so by the earlier remarks in this section it is an alternating 2-cocycle as required.
The formula (3.1) should be compared with Example 3.2. In that example, F was obtained by "wedging together" two derivations defined on the dense subalgebra C¹(T) ⊗ C¹(T), one being a copy of D acting in the θ₁ direction and the other being a copy of D acting in the θ₂ direction. Lemma 3.6 may be regarded as an abstract analogue of this construction. However, since C¹(T) ⊗ γ C¹(T) ≠ C¹(T²), the lemma does not suffice on its own to construct alternating 2-cocycles on C¹(T²). Indeed, the next example shows that for some function algebras on T², the approach suggested by Lemma 3.6 cannot possibly work.
Lemma 3.6 can still be applied to produce alternating 2-cocycles on A(H) ⊗ γ A(L). However, this is not enough to produce cocycles on A(H × L), because of the following two facts.
1) The natural map A(H) ⊗ γ A(L) → A(H × L) is surjective if and only if either H or L has an abelian subgroup of finite index [13]. (See also [11, §3.6], with the caveat that they write ⊗ instead of ⊗ γ.)

2) Even on the dense image of this map, the norm of A(H) ⊗ γ A(L) is in general strictly stronger than that of A(H × L); so a 2-cocycle that is bounded for the projective tensor norm need not extend continuously to A(H × L).
Co-cb maps and Fourier algebras
This section is devoted to the infrastructure needed for the proof of Theorem 5.7. We pay particular attention to issues of functoriality; the reason for introducing operator space tensor products and the opposite operator space structure is not just to equip Banach spaces with extra structure, but to be able to combine linear maps that respect this extra structure.
Operator spaces, tensor products, and co-cb maps
All concepts not defined explicitly here can be found in standard sources, such as the early chapters of [5] or [15]. Henceforth, we abbreviate the phrase "operator space structure" to o.s.s. Given operator spaces X and Y, CB(X, Y) denotes the space of completely bounded maps $X \to Y$; note that this space has a canonical o.s.s., defined via the identification $M_n(CB(X, Y)) \cong CB(X, M_n(Y))$.
Whenever H is a Hilbert space and we refer to B(H) as an operator space, we assume (unless explicitly stated otherwise) that it is equipped with its usual, canonical o.s.s.; note that if we do this, then there is a natural and completely isometric identification of B(H) with $CB(COL_H)$, where $COL_H$ denotes H equipped with the column o.s.s.
Tensor products and tensor norms. The projective and injective tensor products of operator spaces are denoted by $\widehat\otimes$ and $\check\otimes$ respectively (this is the notation of [5], rather than that of [15]). Note that if E and F are Hilbert spaces then the underlying Banach space of

Given operator spaces V, W and X, we say that a bilinear map $V \times W \to X$ is jointly completely bounded (j.c.b.) if it extends to a completely bounded map $V \widehat\otimes W \to X$. This is equivalent to saying that the "curried map" $V \to L(W, X)$ extends to a completely bounded map $V \to CB(W, X)$, or the same with V and W interchanged. Indeed $V \widehat\otimes W$ may be characterized, up to completely isometric isomorphism, as the completion of $V \otimes W$ that satisfies (♦): every j.c.b. bilinear map $V \times W \to X$ extends uniquely to a completely bounded linear map $V \widehat\otimes W \to X$, for every operator space X.

If $f \in CB(E, X)$ and $g \in CB(F, Y)$ then by tensoring we obtain completely bounded maps $E \widehat\otimes F \to X \widehat\otimes Y$ and $E \check\otimes F \to X \check\otimes Y$; for extra emphasis, these maps will be denoted by $f \widehat\otimes g$ and $f \check\otimes g$ respectively.

We shall make passing use of the Haagerup tensor norm, but only "at level 1", and we only require the following facts: 2) if A and B are C*-algebras and $w \in A \otimes B$, then $\|w\|_h = \inf \big\| \sum_{j} a_j a_j^* \big\|^{1/2} \big\| \sum_{j} b_j^* b_j \big\|^{1/2}$, where the infimum is over all representations of w as $\sum_{j=1}^n a_j \otimes b_j$.
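As a short sanity check on fact 2) (the displayed infimum is itself a reconstruction of the standard level-1 identity), consider an elementary tensor $a \otimes b$: taking the one-term representation gives
\[
\|a \otimes b\|_h \;\le\; \|a a^*\|^{1/2}\, \|b^* b\|^{1/2} \;=\; \|a\|\,\|b\|,
\]
while the reverse inequality holds because the Haagerup norm dominates the injective tensor norm, which is a cross norm; hence $\|a \otimes b\|_h = \|a\|\,\|b\|$.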
The opposite operator space structure. Given an operator space W, one may define a new sequence of matrix norms on W by $\|[x_{ij}]\|_{M_n(\widetilde W)} := \|[x_{ji}]\|_{M_n(W)}$; the resulting operator space is denoted by $\widetilde W$. It is easily checked that if $f : X \to Y$ is completely bounded, then so is the same map viewed as $\widetilde X \to \widetilde Y$, with the same cb-norm. To emphasise the functorial behaviour we write this as $\widetilde f : \widetilde X \to \widetilde Y$. The same calculation gives, with some book-keeping, a more precise result: we omit the details.

Lemma 4.2. Given operator spaces X and Y, the assignment $f \mapsto \widetilde f$ defines a completely isometric isomorphism $\widetilde{CB(X, Y)} \cong CB(\widetilde X, \widetilde Y)$. In particular, we can identify $\widetilde{(X^*)}$ with $(\widetilde X)^*$.
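To spell out the "easily checked" claim (a sketch, assuming the transposed matrix norms written above, which are our reading of the elided definition): for $f \in CB(X, Y)$ and $[x_{ij}] \in M_n(\widetilde X)$,
\[
\big\| [\,f(x_{ij})\,] \big\|_{M_n(\widetilde Y)} \;=\; \big\| [\,f(x_{ji})\,] \big\|_{M_n(Y)} \;\le\; \|f\|_{cb}\, \big\| [x_{ji}] \big\|_{M_n(X)} \;=\; \|f\|_{cb}\, \big\| [x_{ij}] \big\|_{M_n(\widetilde X)},
\]
so $\|\widetilde f\|_{cb} \le \|f\|_{cb}$; applying the same estimate to $\widetilde f$ in place of $f$ yields the reverse inequality, hence equality of cb-norms.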
Note that for any operator spaces V and W, the identity map on $V \otimes W$ extends to a completely isometric isomorphism $\widetilde V \widehat\otimes \widetilde W \cong (V \widehat\otimes W)^\sim$. One can show this using the explicit definition of the matrix norms associated to $\widehat\otimes$, but it can also be deduced from the characterization in (♦), combined with repeated application of Lemma 4.2.
Co-cb maps between operator spaces. The next definition is non-standard (though it has some precedent in [16]) but will be extremely useful for statements and calculations later on. A linear map $f : V \to W$ is completely bounded as a map $\widetilde V \to W$ if and only if it is completely bounded as a map $V \to \widetilde W$; in either case we say that $f : V \to W$ is co-completely bounded (co-cb for short). Similarly, f is a complete isometry from $\widetilde V$ to W if and only if it is a complete isometry from V to $\widetilde W$; we then say that $f : V \to W$ is a co-complete isometry.
The notion of co-completely bounded map seems to have gone largely unmentioned or unstudied in the literature. One notable exception is [16], which sets up some general machinery and obtains interesting results in connection with Schur multipliers.
Lemma 4.4. Let F be a Hilbert space. The $\mathbb{C}$-linear map $B(F) \to B(F^*)$ that sends an operator $b \in B(F)$ to its Banach-space adjoint $b^\# : F^* \to F^*$ is a co-complete isometry.
Warning: as we will see in the proof, our chosen notational conventions are important: $B(F^*)$ is given the o.s.s. of $CB(COL_{F^*})$ rather than $CB((COL_F)^*) = CB(ROW_{F^*})$.
Proof. For any operator spaces X and Y, the map $CB(X, Y) \to CB(Y^*, X^*)$ defined by taking adjoints is a complete isometry. Thus $b \mapsto b^\#$ defines a complete isometry $B(F) = CB(COL_F) \to CB((COL_F)^*) = CB(ROW_{F^*})$. Taking opposites and applying Lemma 4.2, the result follows.
The operator space structure on a Fourier algebra
The results in this section are all known to specialists, but are included here for the reader's convenience, and to ensure that we have consistent notation and conventions.

Remark 4.6 (Tomato, tomato). Care is needed when combining parts of the literature. Some sources, following the general framework of locally compact quantum groups, define A(G) to be the subspace of $C_0(G)$ obtained by identifying a vector functional $\omega_{\xi,\eta} \in VN(G)_*$ with the function $s \mapsto \langle \lambda(s^{-1})\xi, \eta \rangle_{L^2(G)}$. While this gives the same Banach function algebra $A(G) \subset C_0(G)$ as in this paper, it yields the opposite o.s.s. to the one we have just defined. This is related to Proposition 4.8(d) below.
For any $\xi, \eta \in L^2(G)$ and $x \in G$ there is a corresponding identity for the vector functionals. It follows that the map $f \mapsto f^\vee$, where $f^\vee(x) := f(x^{-1})$, defines a contractive involution on A(G) (which must therefore be isometric). This is known as the flip map or check map on A(G).
Proposition 4.8. (a) The natural inclusion of $VN(G_1) \otimes VN(G_2)$ in $VN(G_1 \times G_2)$ extends to an injective embedding, whose range is the minimal C*-tensor product $VN(G_1) \otimes_{\min} VN(G_2)$.
Constructing 2-cocycles from co-cb derivations
We start in some generality, since the preliminary results may be useful in subsequent work. The following terminology is not entirely standard, but is analogous to the more familiar notions of completely contractive Banach algebra and completely contractive Banach (bi)module that have appeared in the literature. By a cb-Banach algebra, we mean an operator space A equipped with a bilinear, j.c.b. and associative map A × A → A. Given such an A, we define a cb-Banach A-bimodule to be an operator space X, equipped with an A-bimodule structure such that the left action A × X → X and the right action X × A → X are both j.c.b.
Clearly A itself is a cb-Banach A-bimodule; it is also routine to check that if X is a cb-Banach A-bimodule, so is $X^*$ when equipped with the dual o.s.s. These notions also interact well with the "opposite o.s.s. functor": if A is a cb-Banach algebra then so is $\widetilde A$; and if X is a cb-Banach A-bimodule, then $\widetilde X$ is a cb-Banach $\widetilde A$-bimodule.
Remark 5.1. Given a cb-Banach algebra A, the class of cb-Banach A-bimodules is usually not closed under the operation of taking opposites. For instance, suppose A is unital. If $\widetilde A$ were a cb-Banach A-bimodule, then for each $x \in A$ the orbit map $a \mapsto ax$ would be completely bounded from A to $\widetilde A$. Taking $x = 1_A$ we conclude that the identity map on A is co-cb. In particular, if A = A(G) for G compact, this would force G to be virtually abelian (combine parts (c) and (d) of Proposition 4.8).
Proposition 5.2. Let A and B be cb-Banach algebras; let X be a cb-Banach A-bimodule and Y a cb-Banach B-bimodule. Let $T_A \in CB(A, X)$ and $T_B \in CB(B, Y)$. Then, if we define

Proof. We will only give the proof for $F_1$; the proof for $F_2$ is very similar. Since $T_A$ and $T_B$ are completely bounded, we obtain a complete contraction; also, since X is a cb-Banach A-bimodule and Y is a cb-Banach B-bimodule, combining the left and right module actions defines a complete contraction R.

In view of the earlier formula (3.1), one would like to apply Proposition 5.2 with $T_A$ and $T_B$ being co-cb derivations into symmetric cb-bimodules. However, this stops short of producing genuine 2-cocycles: the resulting bilinear map merely takes values in $X \widehat\otimes Y$, and in view of Remark 5.1 there is no reason to suppose that this is even a Banach $A \widehat\otimes B$-bimodule. (It is a Banach $A \otimes_\gamma B$-bimodule, but that does not help us.) To go further, we need to move from $X \widehat\otimes Y$ to $X \check\otimes \widetilde Y$, and this is where we require Theorem 1.2, whose proof we now turn to. For convenience we recall the statement of the theorem.

Theorem 1.2 (reprise). Let X and Y be operator spaces. Then $\|w\|_{X \check\otimes \widetilde Y} \le \|w\|_{X \widehat\otimes Y}$ for all $w \in X \otimes Y$, and so the identity map on $X \otimes Y$ extends to a contraction $\theta_{X,Y} : X \widehat\otimes Y \to X \check\otimes \widetilde Y$.
Proof of Theorem 1.2. We start by reducing to a special case.
Step 1. Given a pair of operator spaces X and Y, let $j_X : X \to B(E)$ and $j_Y : Y \to B(F)$ be completely isometric embeddings, for some Hilbert spaces E and F. Note that $\widetilde{j_Y} : \widetilde Y \to \widetilde{B(F)}$ is also a complete isometry. Suppose we know Theorem 1.2 holds for the particular operator spaces B(E) and B(F). Then we have a diagram as shown in Figure 2, in which the left-hand vertical arrow is a (complete) contraction, while the right-hand vertical arrow is a (complete) isometry (since $\check\otimes$ respects complete isometries). Moreover, the diagram in Figure 2 "commutes on elementary tensors". Hence, for any $z \in X \otimes Y$, we have
\[
\|z\|_{X \check\otimes \widetilde Y} \;=\; \|(j_X \check\otimes \widetilde{j_Y})(z)\|_{B(E) \check\otimes \widetilde{B(F)}} \;\le\; \|(j_X \widehat\otimes j_Y)(z)\|_{B(E) \widehat\otimes B(F)} \;\le\; \|z\|_{X \widehat\otimes Y}.
\]

Step 2. Observe that if E and F are Hilbert spaces and $w \in B(E) \otimes B(F)$, the norm of w in $B(E) \check\otimes \widetilde{B(F)}$ coincides with the norm of the associated elementary operator on $S_2(F, E)$, the space of Hilbert-Schmidt operators $F \to E$. This is a variation on a well-known fact in C*-algebra theory that can be found in various sources; to avoid any notational ambiguity, we give a precise statement in the following lemma.
(To justify the second point in a little more detail: let $\alpha : E \otimes_2 F^* \to S_2(F, E)$ be the Hilbert-space isomorphism which sends $x \otimes \phi$ to $y \mapsto \phi(y)x$; then $\alpha$ intertwines the natural *-representation of the incomplete algebra $B(E) \otimes B(F^*)$ on $E \otimes_2 F^*$ with the map $\theta$. See also [15, Prop. 2.9.1] or the calculations preceding [5, Eqn. (3.5.1)].)

Step 3. Combining Steps 1 and 2, we see that Theorem 1.2 will follow if we can prove the following claim: given E, F and $\Phi_2$ as in Step 2, the function $\Phi_2$ extends to a contractive linear map $B(E) \widehat\otimes B(F) \to B(S_2(F, E))$. (We remind the reader that if E and F are infinite-dimensional, one cannot expect this map to be completely bounded.) Our proof of the claim is based on an interpolation argument. For $p \in [1, \infty]$ let $S_p(F, E)$ denote the space of Schatten-p operators from F to E, equipped with its standard norm. We adopt the convention that $S_\infty(F, E) = K(F, E)$, the space of all compact operators $F \to E$, equipped with the operator norm. Define $\Phi_p(w) \in B(S_p(F, E))$ for $w = \sum_{j=1}^n a_j \otimes b_j \in B(E) \otimes B(F)$ by $\Phi_p(w)(d) = \sum_{j=1}^n a_j d b_j$ for $d \in S_p(F, E)$. When p = 2 this is consistent with our earlier notation.
Lemma 5.5. Let E and F be Hilbert spaces, and let $w \in B(E) \otimes B(F)$. Let $\sigma : B(E) \otimes B(F) \to B(F) \otimes B(E)$ denote the flip map, determined by $\sigma(a \otimes b) = b \otimes a$. Then: (i) $\|\Phi_\infty(w)\| \le \|w\|_h$; (ii) $\|\Phi_1(w)\| \le \|\sigma(w)\|_h$; (iii) $\|\Phi_2(w)\| \le \|w\|_h^{1/2}\, \|\sigma(w)\|_h^{1/2}$.

Proof. Part (i) follows immediately by quoting Haagerup's theorem that $\Phi_\infty$ extends to a (complete) isometry from $B(E) \otimes_h B(F)$ to $CB(S_\infty(F, E))$. However, there is also a direct easy proof: given $w = \sum_{j=1}^n a_j \otimes b_j$, it suffices to show that $\|\Phi_\infty(w)\| \le \|\sum_j a_j a_j^*\|^{1/2} \|\sum_j b_j^* b_j\|^{1/2}$. This follows from standard calculations with "row" and "column" block matrices: for details, see e.g. [15, Remark 1.13], in particular the formula (1.12) in [15].

Part (ii) follows from part (i) and duality. In more detail: given $w = \sum_{j=1}^n a_j \otimes b_j \in B(E) \otimes B(F)$, consider the elementary operator $\Phi_\infty(\sigma(w))$ defined on $S_\infty(E, F)$ by $d \mapsto \sum_{j=1}^n b_j d a_j$. By part (i), applied with the roles of E and F reversed, $\|\Phi_\infty(\sigma(w))\| \le \|\sigma(w)\|_h$. On the other hand, consider the standard trace pairing between $S_1(F, E)$ and $S_\infty(E, F)$, where $s \in S_1(F, E)$ acts as the functional $t \mapsto \mathrm{Tr}(st)$. Straightforward calculations show that with respect to this pairing, the Banach-space adjoint of $\Phi_\infty(\sigma(w)) : S_\infty(E, F) \to S_\infty(E, F)$ is $\Phi_1(w)$. Since a linear map and its adjoint have the same norm, (ii) is proved.
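The "straightforward calculations" with the trace pairing can be spelled out as follows (a sketch, using the pairing conventions above together with cyclicity of the trace; the statements of parts (i)-(iii) are as reconstructed above): for $s \in S_1(F, E)$ and $t \in S_\infty(E, F)$,
\[
\langle \Phi_1(w)s,\, t\rangle \;=\; \mathrm{Tr}\Big( \sum_{j=1}^n a_j s b_j\, t \Big) \;=\; \sum_{j=1}^n \mathrm{Tr}\big( s\, b_j t a_j \big) \;=\; \big\langle s,\, \Phi_\infty(\sigma(w))t \big\rangle,
\]
which identifies the Banach-space adjoint of $\Phi_\infty(\sigma(w))$ with $\Phi_1(w)$, as claimed.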
Finally, note that the elementary operator defined by w acts simultaneously on all $S_p(F, E)$ for $p \in [1, \infty]$. Since $S_\infty$ and $S_1$ form an interpolation couple with $(S_1, S_\infty)_{1/2} = S_2$, part (iii) now follows from parts (i) and (ii) by applying the Riesz-Thorin interpolation theorem.
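In symbols (a sketch, assuming the endpoint estimates of parts (i) and (ii) as stated above): since $\Phi_p(w)$ is the restriction of a single elementary operator to each $S_p(F, E)$, complex interpolation at parameter $\tfrac12$ gives
\[
\|\Phi_2(w)\|_{B(S_2(F,E))} \;\le\; \|\Phi_\infty(w)\|_{B(S_\infty(F,E))}^{1/2}\; \|\Phi_1(w)\|_{B(S_1(F,E))}^{1/2} \;\le\; \|w\|_h^{1/2}\, \|\sigma(w)\|_h^{1/2}.
\]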
To finish off, note that the os-projective tensor norm dominates both the Haagerup tensor norm and the "reversed" Haagerup tensor norm. More precisely: for arbitrary operator spaces $V_1$ and $V_2$ and $w \in V_1 \otimes V_2$, we have $\|w\|_h \le \|w\|_{V_1 \widehat\otimes V_2}$ and $\|\sigma(w)\|_h \le \|w\|_{V_1 \widehat\otimes V_2}$. Combining these inequalities with Lemma 5.5(iii), we have verified the claim at the beginning of Step 3, and this completes the proof of Theorem 1.2.
Remark 5.6. We can strengthen the conclusion of Theorem 1.2, although at present we do not have applications of the stronger version. Going back to Step 1, we can run the same argument with the os-projective tensor product replaced by either the Haagerup tensor product or its reversed version. Combining this with Step 2 and Lemma 5.5(iii), we conclude that for arbitrary operator spaces X and Y and $w \in X \otimes Y$, $\|w\|_{X \check\otimes \widetilde Y} \le \|w\|_h^{1/2}\, \|\sigma(w)\|_h^{1/2}$.

We now have the necessary ingredients for our main technical theorem.
Theorem 5.7. Let H and L be locally compact groups admitting non-zero co-cb derivations $D_H : A(H) \to A(H)^*$ and $D_L : A(L) \to A(L)^*$, and let $F_0$ be the corresponding bilinear map on $A_H \otimes A_L$ given by the wedge formula (3.1). Then $F_0$ extends to a non-zero alternating 2-cocycle $F : A_{H \times L} \times A_{H \times L} \to A_{H \times L}^*$. Consequently, for any locally compact group G which contains a closed isomorphic copy of $H \times L$, we have $Z^2_{\mathrm{alt}}(A(G), A(G)^*) \neq \{0\}$.

Proof. As in the proof of Lemma 3.6, the defining formula for $F_0$ shows that F is an alternating 2-derivation on the dense subalgebra $A_H \otimes A_L \subset A_{H \times L}$. By the usual continuity argument we deduce that $F \in Z^2_{\mathrm{alt}}(A_{H \times L}, A_{H \times L}^*)$.
We now show that F is not identically zero. Since $F_0$ takes values in $V_H \otimes V_L$ and the natural map $V_H \otimes V_L \to V_{H \times L}$ is injective, it suffices to show that $F_0$ is not identically zero. Observe that if $a \in A_H$ and $b \in A_L$ we have $F_0(a^3 \otimes b, a \otimes b) = \tfrac14\, D_H(a^4) \otimes D_L(b^2)$ (see the computation following this proof). By Lemma 2.2, elements of the form $a^4$ span a dense subspace of $A_H$, and elements of the form $b^2$ span a dense subspace of $A_L$. Therefore, since $D_H$ is continuous and non-zero, there exists $a \in A_H$ such that $D_H(a^4) \neq 0$; similarly, there exists $b \in A_L$ such that $D_L(b^2) \neq 0$. We conclude that $F_0(a^3 \otimes b, a \otimes b) \neq 0$, as required. This proves the first part of the theorem. The second part follows by pulling back the non-zero 2-cocycle $F \in Z^2_{\mathrm{alt}}(A_{H \times L}, A_{H \times L}^*)$ along the restriction homomorphism $A_G \to A_{H \times L}$ (see Lemma 3.5).
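For completeness, here is the computation behind the displayed identity (a sketch, assuming the wedge convention for (3.1) recorded after Lemma 3.6; a different sign convention changes only the overall sign, which is irrelevant for the non-vanishing argument). Using the derivation identities $D_H(a^4) = 4a^3 D_H(a)$ and $D_L(b^2) = 2b\, D_L(b)$:
\[
F_0(a^3 \otimes b,\, a \otimes b) \;=\; a\, D_H(a^3) \otimes b\, D_L(b) \;-\; a^3 D_H(a) \otimes b\, D_L(b) \;=\; 2\, a^3 D_H(a) \otimes b\, D_L(b) \;=\; \tfrac14\, D_H(a^4) \otimes D_L(b^2).
\]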
To use Theorem 5.7 effectively, we need to know examples of H for which such a $D_H$ exists. It turns out that the very first non-zero derivation constructed from a Fourier algebra to its dual, which was produced by Johnson in [9], can be shown with hindsight to be co-cb! In fact, during the writing of [2], the present author and Ghandehari had already observed that if H is one of the groups (a) SU(2), (b) the real ax+b group, or (c) the reduced Heisenberg group, then in each case, the explicit non-zero derivation $D_H : A(H) \to A(H)^*$ that is described in [2] turns out to be co-cb. Showing this requires some work, but is mostly just a matter of composing $D_H$ with (the adjoint of) the check map and using the Plancherel theorem for each group, c.f. the formulas and remarks in [2, §7]. However, this observation was never written down, since at the time we did not have any applications of it.
Let us see how these facts, combined with Theorem 5.7, yield the first two cases of Theorem 1.1. For $n \ge 4$, we may embed $GL_2(\mathbb{C}) \times GL_2(\mathbb{C})$ as a closed subgroup of $GL_n(\mathbb{C})$ by sending $(g_1, g_2)$ to the block-diagonal matrix $\mathrm{diag}(g_1, g_2, I_{n-4})$; the same construction works with $\mathbb{C}$ replaced by $\mathbb{R}$. If $H_n$ denotes any of SU(n), SL(n, $\mathbb{R}$) or Isom($\mathbb{R}^n$), then our embedding maps $H_2 \times H_2$ onto a closed subgroup of $H_n$. Note also that the real ax+b group is isomorphic to the standard parabolic subgroup of SL(2, $\mathbb{R}$). Therefore, we may combine Theorem 5.7 with the examples (a) and (b) mentioned above.
To obtain the remaining case of Theorem 1.1, it would suffice to exhibit a non-zero co-cb derivation from A(Isom($\mathbb{R}^2$)) to its dual. For this, the results of [2,3] are insufficient and we require results from the subsequent paper [12]. In fact, one can use results from that paper to obtain alternative proofs for the cases (a), (b) and (c), and so for clarity of exposition we devote the next section to summarizing and making use of the relevant parts of [12].
Remark 5.8. An alternative proof that the Fourier algebra of the real ax + b group supports a non-zero co-cb derivation, independent of both [2] and [12], will appear as part of the forthcoming work [4].
Obtaining co-cb derivations
In this section, we show in Theorem 6.3 that there is a plentiful supply of non-zero co-cb derivations from Fourier algebras to their duals. For the strongest results in this direction, we make use of the hard work done by the authors of [12] in proving the Lie case of the Forrest-Runde conjecture. While Theorem 6.3 is not hard to invent if one reads [12] in its entirety, it is never actually stated in that paper. We shall therefore extract some of the components which are used to prove [12,Theorem 3.2], and reassemble them into a "black box" that will be more suitable for our purposes.
For G a Lie group, let $C^\infty_c(G)$ denote the space of compactly supported smooth functions on G. This is contained in A(G) by [6, (3.26)] and is easily seen to be dense in A(G) (since $C^\infty_c(G)$ is dense in $L^2(G)$ and is closed under convolution).
Proposition 6.1 (Lee-Ludwig-Samei-Spronk, [12]). Let H be any one of the following (connected, real) Lie groups: (e) the "Grélaud groups" $G_\theta$ (certain semidirect products $\mathbb{R}^2 \rtimes_\theta \mathbb{R}$, where $\theta$ parametrizes the eigenvalues of the corresponding action of $\mathbb{R}$ on the Lie algebra of $\mathbb{R}^2$). Then there exist a weight function $v \in L^1(H)$, not identically zero, and an element X of the Lie algebra of H, such that, when we take the corresponding Lie derivative $\partial_X$:

Proof. In each case, there is a calculation in [12] that provides suitable X and v. (Strictly speaking, X and v are chosen together with $S = S_{X,v} \in VN(H \times H)$ such that the integral in (6.1) agrees with $\langle S, u \rangle$ for all $u \in C^\infty_c(H \times H)$. By density and continuity arguments, if such an S exists it is uniquely determined, and by rescaling the weight function v we can always arrange that $\|S\| \le 1$.) For (a), see [12, Theorem 2.4] - strictly speaking, the cited result only proves this for SU(2), but the same calculation using representation theory and orthogonality relations goes through for SO(3).

As indicated by the previous remark, the authors of [12] did not pursue Proposition 6.1 with the goal of constructing explicit derivations on Fourier algebras, as they aimed to establish stronger structural properties for a wider class of groups. For the present paper, what matters is the following consequence of Proposition 6.1, which appears to be a new observation.
Avenues for further work
In future work, we intend to set out a more systematic study of the higher-degree alternating cocycles on Fourier algebras, with the intention of exploring an associated numerical invariant that can be viewed as a kind of "dimension" associated to such algebras. Since one would like to calculate or estimate this numerical invariant for as many small examples as possible, progress on the following natural question could be a useful guide for future work.
Currently our guess is that the answer is negative for SU(2) and SL(2, R), and positive for SU(3), SL(3, R), Isom(R 2 ) and Isom(R 3 ), but there is insufficient evidence to support any firm conjectures at this stage.
Turning to Theorem 1.2: one would like to understand better the comparison map $\theta_{X,Y} : X \widehat\otimes Y \to X \check\otimes \widetilde Y$, perhaps by making greater use of the sharper result outlined in Remark 5.6. Indeed, a natural next step is to repeat the (complex) interpolation argument used in Lemma 5.5 at the level of operator spaces and cb-norms of elementary operators, to see what $\theta_{X,Y}$ looks like at higher matrix levels.
The co-cb derivations that are crucial to proving Theorem 1.1 provide natural examples of c.b. maps from $\widetilde{A(G)}$ to VN(G) that behave like noncommutative Fourier multipliers (this is not immediately apparent from what is stated in Section 6, but can be seen by inspecting the details in [2] and [12]).

Question 2. Given that $\widetilde{A(G)}$ and VN(G) are the endpoints of the scale of noncommutative $L^p$-spaces associated to VN(G), are there other Fourier multipliers $L^p(VN(G)) \to L^r(VN(G))$ which satisfy some form of the Leibniz identity? For fixed p and r, what can we say about the space of such multipliers?
We finish with some natural questions concerning co-cb derivations on Fourier algebras, which are all aimed at strengthening or sharpening the conclusion of Theorem 6.3.
Question 3. The derivations constructed in [2] for SU(2), the real ax+b group and the reduced Heisenberg group are all cyclic and co-cb (c.f. the construction in [4]). Is every cyclic derivation on a Fourier algebra automatically co-cb?

Question 4. Let G be the 3-dimensional real Heisenberg group. The results of [3] construct a non-zero derivation D from A(G) to a certain symmetric Banach A(G)-bimodule W. Can W be made into a cb-Banach A(G)-bimodule in such a way that $D : A(G) \to W$ is co-cb?

Question 5. It was shown in [12] that the property (AD) for a Lie group G, mentioned in Remark 6.2, ensures that there is a non-zero derivation $D : A(G) \to A(G)^*$. Does it also guarantee that one can choose D to be co-cb?
In view of the good hereditary properties of (AD), a positive answer to Q5 would allow us to transfer co-completely bounded derivations between Fourier algebras of Lie groups which have the same universal cover, and hence by using the strategy outlined in Remark 6.2 one could strengthen Theorem 6.3 to the following result: every non-abelian connected Lie group H has non-zero co-cb derivations from A(H) to A(H) * .
Question 6. Can the explicit derivations constructed by Losert [14] on connected groups that are not necessarily Lie, be made into co-cb derivations from Fourier algebras into cb-Banach bimodules?
The constructions in [14] are closer in spirit to [3] than to [12], and so Question 4 would serve as a warm-up for Question 6.
Acknowledgments
A preliminary announcement of some of these results, with a different emphasis and less comprehensive results, was circulated in 2016 as an unpublished preprint; the author thanks M. Daws and A. Skalski for several comments and corrections on that document. He also thanks the authors of [12] for useful discussions about technical aspects of their paper, and V. Losert for sharing a copy of the preprint [14].
Less direct, but no less important, thanks are due to M. Ghandehari and E. Samei for their interest and encouragement over several years concerning alternating cocycles on Fourier algebras, and to M. Whittaker for a conversation at the British Mathematical Colloquium 2019 in Lancaster which prompted the author to finally write up this work as a proper paper.
Some of these results were presented in conference talks at the Abstract Harmonic Analysis meeting in Kaohsiung, 2018, and the International Workshop on Harmonic Analysis and Operator Theory, Istanbul, 2019. The author thanks the organizers of these meetings for their respective invitations to present this work, and for bringing together speakers from a broad range of specialist interests for enjoyable discussions.
Most of the writing of this article was done during the COVID-19 pandemic, under a period of lockdown conditions in England. The author would therefore like to thank various colleagues at Lancaster University for an online mixture of camaraderie, commiseration, complaints, and computer support during these months, which has gone some way towards replicating a normal working environment. He hopes to one day visit Barnard Castle. | 2020-08-06T01:01:10.469Z | 2020-08-05T00:00:00.000 | {
"year": 2021,
"sha1": "129b708d278f5a834a1b11c150984aec39c06264",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2008.02226",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "94e60c946ef30375cd3f6ea0f2e19d0a6506dbf2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
238838905 | pes2o/s2orc | v3-fos-license | Design Governance, Austerity and the Public Interest: Planning and the Delivery of ‘Well-Designed Places’ in West Dunbartonshire, Scotland
ABSTRACT This paper considers how planning authorities can achieve urban design ambitions in the context of deepening neoliberalism and fiscal austerity. Based upon a case study of West Dunbartonshire, Scotland, the paper reveals the innovative steps taken by the local authority to introduce new design governance tools in the face of significant resource constraints. The paper critically examines the role that the private sector plays in the governance of design and argues for a reconceptualisation of design governance that more rigorously attends to the challenge of delivering well-designed places in the public interest.
Introduction
During the first decade of the 21st century design-led planning was in the ascendancy across the United Kingdom. The influential Towards an Urban Renaissance report of the Urban Task Force (1999) proved something of a watershed moment. As the property market boomed during the early 2000s, significant investment was targeted towards design-led urban regeneration schemes. The advocacy of design-literate planning was widely supported and well-funded, particularly by the newly-established Commission for Architecture and the Built Environment (CABE) in England (Punter, 2011), and by Architecture and Design Scotland, the Design Commission for Wales, and the Ministerial Advisory Group for Architecture and the Built Environment in Northern Ireland in the recently devolved nations (Carmona, 2019a; Ministerial Advisory Group for Architecture and the Built Environment, 2017; Punter, 2010a, 2010b; White & Chapple, 2018).
The political and economic upheaval triggered by the 2007-2008 financial crisis unravelled many of these urban design initiatives and shone a light on the social inequities of urban design investment during the New Labour years (Porter & Shaw, 2009; Punter, 2010c). Austerity measures introduced by the Coalition Government from 2010 and subsequent Conservative Governments from 2015 restructured and downsized local government in the UK (Hastings et al., 2017; Lowndes & Gardner, 2016). Although public sector austerity has been widely deployed in response to the financial crisis (Ponzini, 2016; Tulumello et al., 2020), it has manifested differently according to existing political relations, path-dependencies and cultures of governance and regulation (Savini & Raco, 2019). Many European nations and cities have resisted austerity better than the US . . . sustainable development goals (Scottish Government, 2020a). In this paper our aim is to understand how public sector austerity has impacted the delivery of Scotland's urban design policy agenda at the local level by examining how one particular local authority has adapted its practice in pursuit of more design-sensitive planning outcomes. Our findings are drawn from a case study of West Dunbartonshire Council, a small, fiscally-constrained authority located in the west of Scotland, which has established a tentative authority-wide design programme since 2017. We argue that West Dunbartonshire Council has identified urban design as a strategic policy priority and introduced a series of innovative 'design governance' (Carmona, 2016) tools, but warn that its approach to delivery provides evidence of the deepening neoliberalisation of Scottish planning, and raises questions about the extent to which urban design decisions can be made in the public interest.
The remainder of the paper is structured as follows. First, we unpack the term 'design governance' (Carmona, 2016) and highlight the public sector's creeping reliance on private sector planning and design consultants in the era of austerity. Next, we present our research methodology, and share the results from our West Dunbartonshire case study. The paper concludes with a series of critical reflections on local authority urban design practice that challenge the notion that design governance operates with the aim of addressing "a defined public interest" (Carmona, 2016, p. 705).
Design Governance, the Public Interest and Austerity
Design governance is defined by Carmona as "the process of state-sanctioned intervention in the means and processes of designing the built environment in order to shape both processes and outcomes in a defined public interest" (2016, p. 705). It is typically practised through a range of familiar planning tools, including 'formal' statutory mechanisms and controls like design guidance and planning permission, as well as more 'informal' instruments like evidence of best practice, award schemes or education and skills training (Carmona, 2017). Although the 'public interest' is a contested concept within planning, it has long provided justification and legitimacy for state intervention in land and property markets (Campbell & Marshall, 2002) including on matters of urban design, and continues to underpin the stated aim of planning within the profession's Code of Conduct (Royal Town Planning Institute, 2016). Hack (2017) argues that the concept of design governance neglects the significant role played by private sector consultants in the urban design process, while the assertion that public and private sector actors share "responsibilities for delivery" (Carmona, 2016, p. 726) suggests that the operation of design governance demands further scrutiny.
The concept of the 'public interest' is often the subject of critical enquiry because, by its nature, it is difficult to define (Campbell & Marshall, 2002). Questions about how the 'public' is defined at different spatial scales and across generations continue to cause tension. Recent research on planning authorities across the UK by Slade et al. (2019) found that the 'public interest' is still an accepted rationale among planners seeking to improve planning and design outcomes for communities. Yet, the research found that this justification was complicated by its application in a diverse society and in response to government policy agendas focused on economic growth.
On the ground, planners' notions of the 'common good' are increasingly wrapped up in negotiations among stakeholders (Murphy & Fox-Rogers, 2015) and serving the 'customers' of the planning system (Clifford & Tewdwr-Jones, 2013). For some, this characterises a set of post-political planning practices which make any claim to the public interest superficial at best (Allmendinger & Haughton, 2012). In Slade et al.'s (2019) research, the notion of the "public interest" was sometimes used as a "partisan Trojan horse" (p. 13) that stakeholders, including housebuilders, communities and elected councillors, use to justify their private concerns. For Savini and Raco (2019) the very purpose of planning is being reshaped from one focused on "input-centred forms of deliberation, place-making and social justice to an enhanced concern with output-centred agendas premised on expedited development and growth" (pp. 3-4). As yet, limited attention has been paid to how these evolving values are interpreted and enacted by the numerous actors responsible for delivering urban design.
Descriptions of urban design practice have typically drawn a clear distinction between the 'policy' (public) and 'development' (private) functions of urban design (Linovski, 2015). This assumes that urban design consultants working for private sector developers interpret and implement the policies and plans produced by public sector officials. Bentley (1999) has depicted the 'push and pull' between these actors as a 'battlefield' where each actor is constrained by a set of resources and rules that collectively produce 'opportunity space' for action. Developing this further, Tiesdell and Adams (2011) describe how the boundaries between the opportunity space of different actors are continually negotiated, meaning that the interpersonal and negotiation skills of public sector planners are crucial in determining design outcomes. To some extent the distinction between the 'policy' and 'development' functions of urban design (Linovski, 2015) and the 'battlefield' metaphor (Bentley, 1999) remain valid, but the boundaries between the public and private roles assumed by urban design actors have become increasingly blurred as fiscal austerity has accelerated the evolution of neoliberal governance practices and forced the privatisation of many public sector planning and design functions (Linovski, 2015;Lovering, 2010).
If, as Campbell (2002) argues, planners should use situated judgment to make decisions on behalf of others about what makes good places, the competing understandings of 'good places' and the 'public interest' introduced by different actors within design governance are deserving of further scrutiny. Although there is significant evidence of the wide-ranging health, social, economic and environmental benefits of well-designed places (Carmona, 2019b), inequities in delivery have been observed, with urban design frequently associated with 'spectacle' architecture and economic development (Gospodini, 2002;Lovering, 2010;Punter, 2010c). Financially-motivated commercial developers who deliver much of the UK's new development are often reluctant to invest in urban design (Gulliver & Tolson, 2014;White et al., 2020), and the onus falls on public authorities to use the policy tools at their disposal to compel developers to contribute to making better places (Bentley, 1999;Tiesdell & Adams, 2011). As Linovski argues in a recent edited discussion in this journal, this renders public sector capacity crucial for ensuring oversight of market-led planning (in Parker et al., 2020).
The combined effects of the block grant issued to Scotland, and budgetary allocation decisions made by the devolved Scottish Government, have meant that cuts to public spending have been less significant in Scotland than in England, although they have still been "substantial and severe" (Gray & Barford, 2018, p. 554). An era of "super-austerity" following the election of a majority Conservative UK government in 2015 has continued this trajectory (Lowndes & Gardner, 2016).
From a planning and urban design perspective, one of the most significant impacts of austerity has been a reduction in local authority staffing capacity, through job cuts. Despite accounting for only 0.63% of total local authority spending (Beveridge et al., 2016), planning departments in Scottish local authorities have faced some of the most significant cuts of any local government service in the years following the financial crisis. The Royal Town Planning Institute in Scotland (2019) recorded that local authorities experienced a 40.8% decrease in their planning budgets and a 25.7% reduction in planning staffing levels between 2009 and 2018. Public sector urban design services were particularly hard hit. A 2018 survey by the Heads of Planning Scotland, the representative organisation for senior planning officers in Scotland, found that 15 of the responding 35 planning authorities named design as one of their top five skills shortages (Birrell, 2018).
The mantra of "doing more with less" (Hambleton & Howard, 2013, p. 48) has forced local authorities to develop new ways of delivering public services (Lowndes & Gardner, 2016). Planning authorities increasingly enter "into a relationship of critical dependence with the private sector" (Wargent et al., 2020, p. 193) and must correspondingly behave more like a business and, often reluctantly, outsource work to private consultants (Slade et al., 2019). For example, the survey by Heads of Planning Scotland found that 86% of Scottish planning authorities paid for technical expertise, in areas such as public consultation, development appraisal, environmental impact assessment, and design and placemaking (Birrell, 2018). In Scotland, and elsewhere, private sector urban design consultants now produce many of the 'formal tools' of design governance (Carmona, 2017) such as design policy and guidance that, in the past, would have been written by public sector officials. This means that many urban design consultants simultaneously undertake work for private sector developers operating within the regulatory and policy context they have helped to shape (Cuthbert, 2017;Linovski, 2015;Wargent et al., 2020).
This blurring of lines raises questions about the transparency, accountability, and efficiency of the planning system (Parker et al., 2018) and, by extension, the role of design governance. While Carmona (2016, p. 720) argues that "the governance of design . . . will always be ideological in that it aims at achieving a set of aspirational public interest outcomes, namely 'better design' than would otherwise be achieved", Linovski's (2015, 2019) research on design and planning practice in Canada and the United States suggests that the growing role played by private urban design consultants on behalf of the public sector weakens the ability of local planning authorities to operate in that public interest. She argues that "[o]utside consultants . . . are often able to foster debate and push public actors in new and productive directions", but cautions that, "without strong public sector involvement, there is the tendency to view urban design as value-free and universal" (2015, p. 462). This proposition challenges us to treat Carmona's definition of design governance, focused as it is on shaping "processes and outcomes in a defined public interest" (2016, p. 705), with some caution.
In the UK, only 55% of planners are now employed by the public sector, compared to 70% in 2010 (Kenny, 2019). The close quarters within which public and private urban designers now operate arguably obscures the different interests they represent and creates the conditions for the "convergence of development and public interests" (Linovski, 2019, p. 1694). This shift of expertise and power from planning authorities to non-state actors represents the "cumulative incapacitation of the state" (Peck, 2012, p. 630), driven by public sector austerity within the sustained neoliberalisation of state governance. In this context, local authority reserves of knowledge are being hollowed out, while many public sector planners are motivated, not by the hope of making a positive impact, but by the fear of losing control over development, being blamed for unsatisfactory development, or of losing their job (Sturzaker & Lord, 2018). Slade et al. (2019) conceptualise this cohort of local authority "austerity planners" as practitioners who feel they have little room for executing independent professional judgment, which erodes their job satisfaction and contributes to a highly mobile profession. The austerity planner's role in local authorities is also increasingly characterised by proceduralism and box-ticking, hampering their ability to "think proactively and strategically about how to meet the public interest" (p. 32).
Practices associated with the 'austerity planner' should be viewed alongside the "considerable depletion" in design leadership since the global financial crisis (Gulliver & Tolson, 2014, p. 13). A UK-wide study on design and housing quality recommends that better national-level ministerial leadership needs to occur hand-in-hand with place and design leadership within local authorities (White et al., 2020). Successful design governance initiatives can often be traced back to inspirational leadership and judicious management (White, 2015), such as the example of Larry Beasley's leadership as Co-Director of Planning during the design-led urban transformation of downtown Vancouver, Canada in the 1990s and early 2000s (Grant, 2009). As Dobson and Platts argue, planning must have a central role in ensuring place-making, and, by extension, design, is a corporate priority within local authorities (in Parker et al., 2020). Critical questions must therefore be asked about how the tools of design governance are deployed in planning practice, by whom, and crucially, whose interests they legitimise.
Research Methodology: The Case of West Dunbartonshire Council
We conducted our research as a single qualitative case study of West Dunbartonshire Council, a small local planning authority in Scotland which embarked upon an ambitious strategic urban design agenda in 2017. Informed by Flyvbjerg (2001), who argues that richer information tends to be found in atypical case studies, West Dunbartonshire Council was purposefully selected for its 'uniqueness' as a potential exemplar of design governance at the local level in Scotland. Our contention was that the case could provide empirical evidence on the challenges of practising design governance under conditions of widening neoliberalism and fiscal austerity in a local planning authority with limited resources and a weak local economy. Funding received by the Council from the Scottish Government reduced in real terms by 22% between 2010-11 and 2015-16 (West Dunbartonshire Council, 2015), and the Council had to make nearly £29.3 million in savings between 2015 and 2020 (West Dunbartonshire Council, 2020a), requiring the planning service to "look at new ways of working, the use of new technology and bringing in additional income" (West Dunbartonshire Council, 2016a, p. 2). Without yet taking the impacts of the Covid-19 pandemic into account, a funding gap of £5 million is anticipated in the 2021-22 budget year, and over £12 million the following year (West Dunbartonshire Council, 2020b).
West Dunbartonshire, the second smallest of Scotland's 32 local authorities by land area, is located in the west of Scotland between Glasgow and Loch Lomond and the Trossachs National Park. Its population of 89,610 (West Dunbartonshire Council, 2018a) is mostly distributed between the urban settlements of Clydebank, Dumbarton and the Vale of Leven, and the area's identity is rooted in its proud Clydeside industrial and cultural heritage (Madgin, 2013). Clydebank and Dumbarton grew around several shipyards, including John Brown's in Clydebank, which built many famous ocean liners including RMS Queen Mary, and Denny's in Dumbarton, where the Cutty Sark was built and launched.
Today West Dunbartonshire is a very different place and its more recent history has been shaped by tragedy and decline. Bombing raids on the area's industrial infrastructure during the 1941 Blitz killed 1,200 people and made 35,000 homeless, leaving just seven buildings in Clydebank unscathed (Finlay, 2006). Shipbuilding and manufacturing then fell into decline in the post-war years, leading to rapid deindustrialisation and high unemployment. West Dunbartonshire's urban challenges have been exacerbated by poor quality post-war housing estates and tower blocks (Finlay, 2006) which co-exist alongside 163 hectares of vacant and derelict land within the local authority boundary (West Dunbartonshire Council, 2020c). A weak local economy means that the (pre-Covid 19) unemployment level was 4.8%, compared to the Scottish average of 4.1%, and West Dunbartonshire faces strong competition for jobs and investment from neighbouring areas, particularly Glasgow (West Dunbartonshire Council, 2018a). Scottish Government data on 'multiple deprivation' places 44% of West Dunbartonshire's data zones in the top 20% most deprived nationally, the fourth highest local share of any Scottish local authority area (Scottish Government, 2020c).
We collected the primary data for this case study from two sources. First, semi-structured interviews were conducted in person with eight key informants, in 2019. They included two Council planning managers, two Council planning officers, an elected councillor, two developers, and one private sector consultant. All participants had first-hand experience of planning and design governance within West Dunbartonshire. We audio-recorded all of the interviews and fully transcribed them before conducting a thematic analysis. All participants have been anonymised and are referred to using generic role descriptors (e.g. 'planning manager', 'developer', etc.). Second, we collected and analysed documents including planning applications, committee reports, meeting minutes and national and local planning policy. All of the documents we analysed were publicly available online. The College of Social Science Ethics Committee at the University of Glasgow granted the necessary ethical approval for the research.
The Origins of a Design Governance Agenda
Since 2017, West Dunbartonshire Council has introduced a series of design governance initiatives to strengthen its control over the quality of new development and address the widely held perception among developers that, as a small local authority operating in a weak economy, planning decisions are largely made on the basis of economic development, without much critical attention paid to design (see Table 1). A local authority planning manager explained this modus operandi in the following terms: "it's employment, it's economic regeneration, and it's easy to get sucked into that and think 'right this is going to bring jobs, going to bring homes' . . . we refuse very little here because we always have to try and make it work" (Planning Manager, interview 2019). A developer who had recently sought planning permission for a housing scheme in West Dunbartonshire agreed with this characterisation, noting, "as an authority who haven't had an awful lot of private residential development in recent years, I think they were quite welcoming" (Developer, interview 2019).
Public sector austerity has exacerbated this challenge. Amid significant resource constraints, planning authorities across Scotland have struggled to maintain workforce capacity and retain a sufficiently skilled workforce (Royal Town Planning Institute Scotland, 2019). In West Dunbartonshire, this has directly impacted planning and design decision-making, particularly when the Scottish Government's performance indicators encourage efficiency - the perception amongst planners being "right, get this through the system" (Planning Manager, interview 2019). A private sector developer operating in the local area explained that, in his view, public sector staffing cuts have made the planning application process "slower than it ever was" because local authorities simply "don't have the staff" (Developer, interview 2019).
This lack of resources meant that, prior to 2017, West Dunbartonshire had no dedicated in-house urban design staff and only a very limited amount of policy on design matters. The current Local Development Plan, adopted in 2010, also offers very little guidance on urban design (West Dunbartonshire Council, 2010). A revised Local Development Plan was produced in 2014 and does pay closer attention to urban design through a policy on "successful places and sustainable design" (West Dunbartonshire Council, 2016b, p. 44). Unfortunately, however, a dispute between West Dunbartonshire Council and the Scottish Government over the allocation of a controversial site for housing development meant that it was never fully adopted (West Dunbartonshire Council, 2016a), and only serves as a 'material consideration'. The Council also produced supplementary design guidance on residential development in 2014, which aims to support innovative and context-sensitive design, based on a checklist of criteria covering character and setting, layout and plot size, house design, landscaping and open spaces, creating streets, and community safety (West Dunbartonshire Council, 2014). It also serves as a material consideration for housing development applications.

[Table 1, continued: a national event attended by stakeholders from across Scotland formed part of the Council's endeavours to widely promote its evolving design governance regime; March 2020 - funding for the Place and Design Panel and design officer was made permanent through approval of the 2020/21 budget by Council, a commitment forthcoming despite a change in administration following the May 2017 local elections.]
The emergence of an urban design agenda at West Dunbartonshire Council in 2017 was spearheaded by a former local councillor called Patrick McGlinchey, who was a member of the Planning Committee between July 2013 and May 2017, and chair of the Infrastructure, Regeneration and Economic Development Committee between May 2014 and May 2017. McGlinchey had been working with senior planning officers to forge a more design-sensitive approach to planning, and to address the fact that West Dunbartonshire Council did not always appear to be particularly interested in design quality. He wanted to avoid the Council repeating the mistake of approving poor quality development. Explicitly recognising that pushing developers on design matters can be daunting for planning officers and councillors alike, McGlinchey nevertheless sought to build consensus by "seeing design as a means to achieve economic benefit" (The Improvement Service, 2017, p. 4). In a budget speech to Council in February 2017, McGlinchey outlined his intention "to develop [governance] structures to oversee the enhancement of design quality in the built environment" and proposed that the 2017-18 Budget included £75,000 per annum for three years to employ a design officer and establish a design review panel (McGlinchey, 2017).
In his speech, McGlinchey focused on the need to approve this funding expeditiously because West Dunbartonshire was "on the cusp of so many major developments" (McGlinchey, 2017), including the ambitious 'Queens Quay' regeneration project on the former John Brown's shipyard site in Clydebank. In approving McGlinchey's proposal within the budget, a majority of councillors agreed that Queens Quay was a crucial 'catalyst' that had the potential to transform the fortunes of Clydebank. One of the planning managers at West Dunbartonshire reported that, while the investment in design governance did gain majority support from Council, it was not without resistance, and one opponent argued that the funding could pay for two social workers (Planning Manager, interview 2019). The same manager reflected that the decision was thus "a really bold step by the Council, in an area of deprivation, where that money could have been used elsewhere" (Planning Manager, interview 2019), demonstrating the significant role of leadership in securing support for investment in design governance among councillors and senior officers.
A subsequent report to the Planning Committee, submitted in March 2017, sought to highlight how improvements to the built environment could enhance the lives of local residents while simultaneously addressing the area's struggling economy. The report stated that: "Good quality urban design is important to making successful places. This in turn will assist the area's future economic vitality and its well-being" (West Dunbartonshire Council, 2017a, p. 16). This dual motivation signalled West Dunbartonshire's ambition to enforce design quality standards 'in the public interest', and meant that Council planning officers could engage more robustly with urban design issues and begin to shift the culture of planning decision-making. One planning officer reflected that "it's right that we have our standards, and say 'actually this is the standard, West Dunbartonshire deserves better'", (Planning Officer, interview 2019), while a planning manager stated that the challenging social and economic conditions in West Dunbartonshire meant that the local authority had "an even more important role in creating great places" (Planning Manager, interview 2019).
The Council's funding for design governance kick-started a series of initiatives that are outlined in Table 1 and discussed in the following paragraphs. A new Local Development Plan 2, initially written between October 2017 and September 2018 and currently under review by the Scottish Government, provides the basis for an increasingly proactive design policy framework premised on the links between economic regeneration and health, but also reflecting the wider turn towards 'placemaking' at the national level in Scotland (West Dunbartonshire Council, 2020c). The new plan presents a spatial strategy predicated on delivering a number of specified regeneration sites, which are identified by a policy called "delivering our places" and are deemed necessary for "creating places which strengthen our existing communities" (p. 6). A range of policies focussed on 'creating places' are included, one of which requires development to take a design-led approach and demonstrate the Scottish Government's six qualities of successful places (Scottish Government, 2013). Another concerns green infrastructure, and links design to health by aiming to integrate the area's green network with new development. The new plan places significantly greater emphasis on the importance of urban design within decision-making than previous policy, presenting an implementation strategy which is "focussed on placemaking" (West Dunbartonshire Council, 2020c, p. 15).
The plan anticipates that several design governance tools will be used on a regular basis, including a new design review panel called the Place and Design Panel, discussed in more detail later. Also, through a policy on masterplanning and development briefs (p. 77), developers will be expected to produce masterplans for significant projects, including named regeneration sites and those within a sensitive spatial context, such as a Conservation Area. Encouragingly, the plan gives planners a stronger policy basis for the preparation or commissioning of these masterplans and development briefs. Several of these have been produced in parallel to the plan making process since 2014, and the majority have been written by private sector consultants on the Council's behalf (see Table 2).
Two of the planners we interviewed felt that the enhanced commitment to urban design contained in the emerging plan expanded their 'opportunity space' (Tiesdell & Adams, 2011). One reported how "it's given a lot more scope for us, a lot more power to the planners" (Planning Officer, interview 2019), while a manager reflected that without "a clear framework of planning policy, that sets out what the expectations are, and clear guidance . . . you're leaving yourself open" (Planning Manager, interview 2019).
The Place and Design Panel
The cornerstone of the Council's evolving design governance regime is the aforementioned Place and Design Panel. It was launched in August 2017 alongside the recruitment of a full-time design officer who is tasked with running the panel and providing both strategic and case-specific advice to planning officers (West Dunbartonshire Council, 2017b). Design review panels usually involve a group of experienced professionals providing 'peer-to-peer' advice on planning proposals in parallel to the standard application process (White & Chapple, 2018). Before setting up its panel, West Dunbartonshire Council gathered existing research on design review panels, including guidance produced in the 2000s by CABE, and the 2010s by its successor, the Design Council (West Dunbartonshire Council, 2019a). It also sought advice on the nature and operationalisation of design review from government, third sector and industry stakeholders across Scotland, including the Scottish Government, Architecture and Design Scotland, Glasgow and Strathclyde universities, nationally-recognised architects and planners, as well as developers operating in the area and Homes for Scotland, the powerful housebuilding industry lobbyist (West Dunbartonshire Council, 2017b, 2019a). The terms of reference that were produced, following this consultation process, envisaged that the Place and Design Panel would act as "an enabler not an obstacle maker" and "work collaboratively with developers, architects and contractors" (West Dunbartonshire Council, 2019a, p. 9).
The Place and Design Panel reviews pre-application proposals that are classified as a 'major development' or identified as key regeneration sites, and the Council's development management officers meet regularly with the design officer to decide which forthcoming proposals should be reviewed (Planning Manager, interview 2019). The Panel also comments on draft planning policy, design guidance and masterplans produced in-house by Council officers or by external private consultants on the Council's behalf.

The format of Panel meetings is semi-formal. Typically, a presentation is made by the proponents, usually the project architect or a Council officer if the topic of discussion is a draft policy or plan, before panellists are invited to discuss the scheme with the proponent in a workshop-like setting. The panellists comprise a pool of over 70 built environment professionals from a range of sectors and disciplines who responded to an advertising campaign in early 2018 (West Dunbartonshire Council, 2019a), or who are subsequently added to the pool to fill a recognised skills gap. They include practising architects and landscape architects, planning and urban design consultants, conservation and heritage consultants, civil and environmental engineers, university academics, and developers. The panellists are invited to attend the panel in rotation on a voluntary basis, and each panel meeting is usually attended by a minimum of four panellists. In addition to the presenting design team, Council officers involved in the project may attend to highlight relevant policy considerations or topics which, from the Council's perspective, require the Panel's attention. The Council's design officer acts as a facilitator during Panel sittings (West Dunbartonshire Council, 2019a). Although it is conventional for design review panels to comprise design experts, notable in the context of this paper is the fact that panellists do not include members of the public. A "tension between 'expert' advice and local 'democracy'" (Punter, 2011, p. 185) remains a feature of design review panels, in West Dunbartonshire and elsewhere, and brings into focus the influence of private actors within the sphere of design governance.

The remit of the Place and Design Panel is advisory only and usually no votes are taken on the outcome of the Panel's discussions, although a vote can be held if necessary for forming a consensus view (West Dunbartonshire Council, 2019a). A report is produced by the Council's design officer soon after each meeting. This is shared with the presenters who attended the Panel, the Panel members and local authority staff responsible for the project. It is also submitted to the elected members of the Council's Planning Committee when a corresponding planning application is to be considered. The Council's development management officers are not obliged to follow the Panel's advice when determining a planning application or submitting a recommendation to the Planning Committee, but the written summary of the Panel's deliberations is treated as a 'material consideration' in the assessment of planning applications. This is written into the proposed Local Development Plan 2 through a dedicated policy on the Place and Design Panel, which also states that applicants will be expected to show how they have responded to the Panel's recommendations (West Dunbartonshire Council, 2020c).
Although the Panel is still in its infancy, its impact is already being felt on some of the strategic projects underway in West Dunbartonshire. One member of a presenting team explained that the Panel's advice was an important early intervention in the design process that enabled changes to be made before too many design decisions had been taken (Private Sector Consultant, interview 2019). In particular, the participant noted that the Panel's comments led to a more prominent gateway being established on the project and an additional means of pedestrian access being introduced.
The Place and Design Panel offers the Council access to expertise that is otherwise unavailable in-house, thus increasing the 'opportunity space' (Tiesdell & Adams, 2011) for discussions about design, not only on specific planning applications but also on Council policy and guidance. West Dunbartonshire Council's approach is an effective model in an area which does not have the development pressures to support the introduction of fees for design advice, as has occurred in more affluent parts of England, for example (Carmona, 2019c; White et al., 2020). One of the planning managers reflected that being able to hold debates about design at a panel, before the planning application process started in earnest, allowed for difficult issues to be resolved in advance of 'high-risk' decisions being made by the Planning Committee. "If we can influence it, before it hits that . . . ", the planning manager explained, " . . . it can make a difference" (Planning Manager, interview 2019). At the same time, the Panel also provides the panellists, many of whom are private sector actors, with a more powerful voice within the planning decision-making process. Another planning manager suggested that this increased the Council's awareness about 'viability' from "people that work in the real world" (Planning Manager, interview 2019).
A New Culture of Design-Aware Decision-Making
Despite the ongoing challenges associated with fiscal austerity and a change from a Labour majority to an SNP-led minority coalition during the May 2017 local elections, West Dunbartonshire Council voted in March 2020 to make its £75,000 per annum funding for design governance permanent (West Dunbartonshire Council, 2020b). As a result, the roles and responsibilities of the Place and Design Panel have been written into the Local Development Plan 2 (West Dunbartonshire Council, 2020c, p. 77) and the Council's 2020/21 budget commits to establish "the Place and Design Panel as a permanent feature to help deliver regeneration and increase economic vitality" (West Dunbartonshire Council, 2020b, p. 22). This economic justification hints at a rhetorical shift in priority since the Panel's inception, when it had been more widely characterised as delivering both economic and health and wellbeing benefits. Nevertheless, the fact that the budget was extended with a new political party in power suggests that design governance has been accepted into the culture of planning decision-making in a relatively short period of time.
The acceptance was due, in part, to the work of planning officers who acted as 'design champions' in meetings with other local authority officers and senior managers. This research demonstrates that such design leadership, which is lacking in many UK local authorities (Gulliver & Tolson, 2014;White et al., 2020), is a key part of successful design governance. As one planning manager explained, there was a focus on "breaking down barriers with other departments and working with them, to say, 'yes you need to deliver 'x' amount of Council houses, but why can't that be excellent quality housing?'" (Planning Manager, interview 2019). Another planning official reflected that the Council's relatively small size facilitated this approach because of the myriad opportunities there are to influence the work of other departments and, crucially, access senior managers (Planning Officer, interview 2019). For instance, gaining support from the Housing team was said to be critical because it has an active role in delivering new development. The Council plans to supply 356 new Council homes directly between 2018 and 2022 (West Dunbartonshire Council, 2020d), and a planning manager reported how "we've really noticed the difference in the attitude of our housing colleagues, they're actually coming to us and asking for things to go to the Panel" (Planning Manager, interview 2019).
Another important facet of the Council's design governance agenda has been securing the support of local councillors who ultimately have the power to decide whether to grant planning permission or approve design policy and supplementary design guidance. Recognising that they "had to do a bit of work at the beginning to get them on board" (Planning Officer, interview 2019), one of the planning officers explained that the Council organised training sessions to address the design skills gap among elected councillors. As a further capacity building mechanism, a group of planning officers and councillors participated in a February 2018 study visit to the well-known regeneration project at Kings Cross in London (West Dunbartonshire Council, 2018b). Despite initially being sceptical about the justification for such a trip, one councillor who participated felt that "there's actually a really good reason why . . . . you can say 'there's things from that we can bring to our own area'" (Councillor, interview 2019). While the parallels between a multi-billion pound regeneration project in London and a small local authority in the west of Scotland might be questionable, the visit nevertheless appeared to be inspiring and also led to a subsequent visit to an arguably more comparable series of housing-led regeneration projects in Manchester and Liverpool a year later in February 2019 (West Dunbartonshire Council, 2019b).
To further improve the ways in which elected members receive information about urban design, West Dunbartonshire Council has also started to hold detailed design briefings for councillors about major development proposals at the pre-application stage. This allows councillors to highlight any issues they might have at an early stage of the planning process (West Dunbartonshire Council, 2018b). It also provides officers with an opportunity to offer further advice or education on design matters on a case-by-case basis, in addition to the reports of the Place and Design Panel. The briefing tool was initially trialled in late 2014 and has been used regularly since 2017. In 2018, the local authority won a Scottish Award for Quality in Planning, a programme run by the Scottish Government, for the contribution that the briefings have made to shaping design quality and delivery (West Dunbartonshire Council, 2019b). One of the councillors explained how the briefings had not only enhanced their understanding of design but also provided an early opportunity to ask questions: "you can say 'what about this, how does that work, why is that there?' You get it in before it all goes to committee. I think it's a great thing, it educates councillors as well" (Councillor, interview 2019). Developers also saw value in the briefing process, and one explained that "it was good for us because it flagged up things that were important to elected members" (Developer, interview 2019).
In addition to the steps taken within West Dunbartonshire Council to provide planning staff and elected members with better information and education on matters of urban design, the Council has also sought to alter how its planning and development decision-making practices are perceived by developers. A culture change is underway that encourages development management officers to negotiate more assertively with developers rather than allowing poorly-designed proposals to pass smoothly through the planning application process, as frequently happened in the past. A development management officer confirmed that "the culture is you can be quite candid with developers. Not to the point where you're rude, but if something doesn't work, you should be able to say so" (Planning Officer, interview 2019). A planning manager also explained that the Council's efforts to change the culture of planning and design decision-making have led to changes in the way recruitment decisions are made. The planning team now look for "the right personalities, people with drive, enthusiasm and energy", and the "strength of character to say 'no, we're not accepting that'" (Planning Manager, interview 2019), to support the delivery of the Council's urban design ambitions. Driven by champions within the planning service and supported by emerging local planning policy, the Council has tried to counteract the impact of a broader trend towards proceduralism within planning (Slade et al., 2019) by empowering its planners and providing the space for public interest concerns around urban design to be considered and debated.
The same planning manager spoke of their confidence that the culture change at West Dunbartonshire Council is beginning to impact the attitudes of developers and, in particular, is causing them to think more carefully about design quality when submitting a planning application. The manager stated that "I'm definitely seeing the difference now and I think that's reflected in the quality of applications that we're having in for some of our bigger sites" (Planning Manager, interview 2019). A developer who has experienced how the Panel and associated design governance processes work first-hand remarked that "[West Dunbartonshire Council are] very robust in their statements about what they wanted to see in order to get planning permission" (Developer, interview 2019).
Finally, West Dunbartonshire Council's planning officers have actively promoted their new 'design-aware' approach to planning within the close-knit professional built environment community in Scotland. Social media has played a key role in this, as one planning manager explained: "we're doing lots on social media about raising the profile of planning and what we do, and selling that we're all about quality" (Planning Manager, interview 2019). In 2019, the Council also hosted a well-attended event entitled Place and Design: Interventions to Create Successful Places (Partners in Planning, 2019). This involved a variety of speakers reflecting on aspects of the Council's design governance agenda for an audience of stakeholders from across Scotland, including practitioners from other local authorities, government agencies, consultancy firms, developers and academics.
Discussion: Design Governance in the Public Interest?
The findings presented in this paper illustrate the innovative ways in which West Dunbartonshire Council has identified urban design as a strategic priority during a time when the impacts of austerity mean local authorities are "doing more with less" (Hambleton & Howard, 2013, p. 48). This is being executed via the design governance tools described in previous paragraphs and summarised in Tables 1 and 3 (below). To deliver this agenda, West Dunbartonshire Council has prioritised the need for better collaboration, both internally within the Council, between professional staff and elected councillors, and externally with private sector professionals. In this respect, West Dunbartonshire Council has been able to gain access to largely cost-free skills and capacity while, at the same time, challenging developers to 'up their game' when seeking planning permission.
We argue that the evolution of West Dunbartonshire's design governance agenda signifies a significant culture change which, as White (2015) has argued elsewhere, is necessary if urban design is to be established as a long-term local priority. On the one hand, the authority's collaborative ethos has increased the capacity for more informed and design-aware decision-making, and has given planning staff the confidence to assert greater influence over development outcomes. This arguably stands in contrast to the experience of planners elsewhere in Scotland (James & Tolson, 2020) amid a UK-wide trend towards proceduralism and away from the execution of independent judgment (Slade et al., 2019). On the other hand, it has meant that private sector developers and consultants operating in West Dunbartonshire now have more opportunities to influence the planning decision-making process directly, especially through the Place and Design Panel but also through the plan and policy-making process. As stated earlier, and like many local authorities in the UK struggling under the conditions of fiscal austerity (Parker et al., 2018; Slade et al., 2019; Wargent et al., 2020), West Dunbartonshire has outsourced the production of local masterplanning initiatives associated with the new Local Development Plan 2 to private consultants, some of whom also act as panellists for the Place and Design Panel (see Table 2). The rationale for employing private consultants for their expertise or inviting them to volunteer their time on a design panel is undoubtedly well-intentioned. Nevertheless, a comment by one of the local authority's planning managers, quoted earlier in the paper - that private sector actors provide insights into the 'real world' - suggests that, despite the authority's powerful regulatory function, the planner still felt that the final decisions about design are ultimately determined by economic viability and therefore rest with developers. With this in mind, we contend that the public-private relationship in design governance demands more scrutiny than it has hitherto received.

Table 3. The tools of design governance used by West Dunbartonshire Council, categorised in accordance with Carmona's (2017, p. 31)
The composition of West Dunbartonshire's Place and Design Panel is a case-in-point and aptly demonstrates the 'fuzzy' boundary between public and private actors and the professional activities within which they are engaged. Design review is a contested process and is heavily shaped by its participants, as reviewers display a variety of attitudes and perceptions towards both design and process (Black, 2019). Panellists fulfil a quasi-public role while participating in design review as, strictly speaking, their remit is to act in the public interest by providing constructive and independent design advice. This role is complicated, however, by the differing motivations of each individual panellist, which may include a combination of business interests, personal development and altruism. By way of example, a planning manager noted that several developers currently involved in pre-application discussions with the Council had also expressed an interest in sitting on the Panel (Planning Manager, interview 2019). In another example, over the last five years, one local architectural practice has organised public engagement events for the Council, produced a masterplan for a key regeneration site and been commissioned to write the design codes and design one of the first projects on another major regeneration site. Asked about this, a Council planning manager told us that "working with [company name] is an experience in itself, because they're all about quality. That empowered us to say, 'we need to get something of equal quality'" (Planning Manager, interview 2019).
The planning manager's reflections point to the Council's desire to deliver better quality development in the public interest. Yet, their reflections do nevertheless demonstrate that public sector planners are developing ever-closer working relationships with private sector partners. This echoes Linovski's finding that authorities form close ties with "friendly firms" (2019, p. 1694) that move fluidly between contracts for the public and private sector. The case of West Dunbartonshire Council therefore illustrates not only the innovative ways that a small local authority has adopted a more design-aware approach to planning decision-making, but also the growing influence that non-state actors have acquired during this process.
The privatisation of planning practices, whether well-intentioned or not, has significant repercussions for how public values are understood and negotiated (Linovski, 2019;Slade et al., 2019). This challenges us to question the interpretation of design governance as shaping "processes and outcomes in a defined public interest" (Carmona, 2016, p. 705) (our italics). Yet, while we should view this public-private realignment through a critical lens, we must also recognise that it is through closer collaboration with the private sector that West Dunbartonshire Council has been able to introduce tools like the Place and Design Panel and foster a wider culture change within the local authority. With this in mind, we propose that the case study raises three key implications for local planning authorities pursuing 'the public interest' through design governance.
First, the growing prominence of non-state actors within design governance has significant accountability and transparency implications (Parker et al., 2018). The weight that West Dunbartonshire Council places on the views of Place and Design Panel members, the outsourcing of policy writing, and close relationships forged with private sector consultants, demonstrates how non-state actors can exert influence on the public functions of the local authority. Given the corporatist culture of the small and highly professionalised national planning community in Scotland (Inch, 2018), this is a cause for concern. While West Dunbartonshire's Place and Design Panel operates a code of conduct that includes provisions for managing conflicts of interest (West Dunbartonshire Council, 2019a, p. 31), the blurring of the boundaries between public and private activities means conflicts are perhaps inevitable if not overt, especially when a limited number of private consultants regularly work for local authorities and developers simultaneously. The scenario of a private planning consultant producing policy outsourced by a local authority, while that very same consultancy continues to seek business from local developers and eventually designs a project for a site covered by policy of its own making, might prove difficult to prevent (Cuthbert, 2017;Wargent et al., 2020).
Second, planning authorities are increasingly reliant upon external design expertise in a way that is likely to further undermine public sector capacity in the long term, resulting in a "relationship of critical dependence with the private sector" (Wargent et al., 2020, p. 193). As design review panels are increasingly used by local planning authorities to fill design skills gaps (Carmona & Giordano, 2017), the voluntary role of the panellists introduces an uncomfortable power dynamic whereby local planning authorities are beholden to the assistance of private sector actors to fulfil their regulatory duties. The motivations of these actors to volunteer their time, as we have suggested earlier, might very well be driven by commercial interests rather than notions of the wider 'public good'. Such dependence is likely to be reinforced as the role of the private sector in planning continues to grow across the sector (Kenny, 2019) and the recruitment of specialist public sector planning staff, such as urban designers, remains a low budgetary priority in the context of austerity. This also raises a further question about how the public sector can ensure the supply of leaders (Gulliver & Tolson, 2014; Hambleton & Howard, 2013) necessary for delivering well-designed places in the future.
Third, while an emphasis on collaboration and negotiation with private sector actors provides potentially constructive opportunities for shaping developer behaviour (Inch, 2018;Linovski, 2015), an over-reliance on external stakeholders can also have the opposite effect and leave local authorities vulnerable when the public and private parties fundamentally disagree. Reflecting the trajectory of planning in Scotland and the UK, Ryan argues that achieving urban design goals will always be challenging in societies where "urban form is governed by the negotiation of powerful capital interests with comparatively weak local government" (2017, p. 148). Discretionary design governance tools, such as a design review panel, thus rely heavily on the mutual cooperation of stakeholders who, as noted earlier, might have differing motivations. For example, if a developer or their design consultants believe that issues of urban design addressed by a design review panel are likely to be downplayed when a decision is required from elected councillors, the developer is much less likely to engage positively with the local authority on matters of urban design. In this context, West Dunbartonshire Council's emphasis on improving the design awareness skills of its elected councillors may therefore prove to be well targeted. Carmona (2016, p. 722) argues that, for the purpose of conceptualising design governance, it can be assumed that "private organizations engaged in such processes effectively assume the role of a pseudo-public authority within their realms of influence". Yet, as Linovski (2015) found and our own research confirmed, "with little public sector capacity, the ideal of design as a collaborative process was weighted heavily toward private sector expertise" (p. 462). Although practices of collaboration, negotiation and partnership are now common within planning, the concept of partnership "presumes common ground where public and private interests may converge" (Lovering, 2010, p. 239). This, of course, cannot be taken for granted given the existence of competing and evolving understandings of how planning should fulfil the public interest (Murphy & Fox-Rogers, 2015;Slade et al., 2019). Indeed, the evidence presented in this paper suggests that the complexity of the relationships between public and private actors requires a deeper consideration of their respective motivations to fully appreciate how a "defined public interest" (Carmona, 2016, p. 705) is understood and ultimately served by design governance.
Conclusion
In this paper we have explored the emergence of a design governance agenda in West Dunbartonshire, a small local authority located in the west of Scotland. The paper began with an introduction to the concept of 'design governance' (Carmona, 2016) and considered the challenges associated with delivering a local authority urban design agenda in an era of fiscal austerity and expanding neoliberal governance practices that have deepened the private sector's influence over planning in the UK. We argued that producing well-designed places at the local level in Scotland remains a challenge, as it does in many other jurisdictions, where ambitious national objectives are difficult for fiscally-constrained and resource-starved local planning authorities to deliver upon. We then introduced our methodology and the single case study of West Dunbartonshire Council, before tracing the evolution of a design governance agenda which, we contend, has generated a new culture of design-aware planning decision-making, albeit one that is threatened by the growing privatisation of planning and its diminution of the public interest.
Our findings highlight the potential of micro-scale cultural changes to support local authorities in their pursuit of higher quality design outcomes through discretionary planning practices. By using design-oriented policy tools, working collaboratively with internal and external stakeholders on design matters, and empowering planners to negotiate for higher quality design, West Dunbartonshire Council has sought to change the way it is perceived by developers in the long term. Initial findings indicate that the desired changes are beginning to take place. Our research also suggests, however, that the overlapping roles of public and private sector actors within urban design practice necessitate greater scrutiny (Linovski, 2015). Although the commitment of private urban design consultants to creating well-designed places might, in many instances, be genuine, these actors are also faced with the requirement to secure future business from developers whose priorities are driven more singularly by economic viability. Given the business interests of private actors, common ground with the public sector should not be taken for granted. The differing and sometimes competing interests of each party preclude any straightforward and shared understanding of the meaning of good design or a well-designed place (Linovski, 2015).
It is our contention that the concept of design governance and future research on urban design practice must approach the interests and influence of the various actors responsible for design processes and outcomes more critically, and ask how the public interest is being upheld while the capacity and power of local planning authorities appear to be weakening. Our case study demonstrates how the leadership of design champions has been crucial in securing wider support for design governance within a local authority. To avoid the positive outcome we observed in West Dunbartonshire being a 'one off', it is our view that the range of skills that planners and local councillors bring to their work needs widening (White et al., 2020), whether that is better awareness of good design principles and process or a better understanding of property markets and their influence on the planning system (Adams & Tiesdell, 2010). We share Parker et al.'s (2020) contention that to achieve such outcomes, stronger relationships must be forged between professional bodies, the practice community and universities, to support the planning profession to fulfil its duty to the public. While this paper has been primarily concerned with design governance and new processes that are still in their infancy in West Dunbartonshire, further research is necessary to understand the outcomes from these new practices in Scotland and within other planning contexts, while recognising that the pursuit of the public interest is never straightforward.
The Design Commission for Wales and the Ministerial Advisory Group for Architecture and the Built Environment in Northern Ireland also continue to operate.
"year": 2021,
"sha1": "584d3d9438999468c534b88bf9e9b3b42ccde5e3",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/14649357.2021.1958911?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "c9acf46f9096607ac1f6937c1f523d45be452073",
"s2fieldsofstudy": [
"Political Science",
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Intensive training induces longitudinal changes in meditation state-related EEG oscillatory activity
The capacity to focus one's attention for an extended period of time can be increased through training in contemplative practices. However, the cognitive processes engaged during meditation that support trait changes in cognition are not well characterized. We conducted a longitudinal wait-list controlled study of intensive meditation training. Retreat participants practiced focused attention (FA) meditation techniques for three months during an initial retreat. Wait-list participants later undertook formally identical training during a second retreat. Dense-array scalp-recorded electroencephalogram (EEG) data were collected during 6 min of mindfulness of breathing meditation at three assessment points during each retreat. Second-order blind source separation, along with a novel semi-automatic artifact removal tool (SMART), was used for data preprocessing. We observed replicable reductions in meditative state-related beta-band power bilaterally over anteriocentral and posterior scalp regions. In addition, individual alpha frequency (IAF) decreased across both retreats and in direct relation to the amount of meditative practice. These findings provide evidence for replicable longitudinal changes in brain oscillatory activity during meditation and increase our understanding of the cortical processes engaged during meditation that may support long-term improvements in cognition.
The successful execution of adaptive, goal-directed behavior depends critically on the ability to focus and sustain attention. Individuals vary considerably in their ability to flexibly manage attention in accord with changing situational demands, and even brief lapses of attention can substantially disrupt performance of routine daily tasks (Padilla et al., 2006;Weissman et al., 2006). Moreover, chronic or long-term deficits in attentional control can significantly impair social and emotional functioning by contributing to dysfunctional patterns of affect regulation (Wadlinger and Isaacowitz, 2011) and inhibitory self-control (Sarter and Paolone, 2011). The potential salutary effects of stable attention on mental well-being have long been acknowledged by contemplative traditions, which have developed various mental training techniques thought to increase attentional capacity. However, despite accumulating evidence for the behavioral effects of such training (Hölzel et al., 2011b;Slagter et al., 2011), it is relatively unknown how the engagement of specific cognitive processes during meditative practice may support trait-like improvements in cognition and well-being.
Within many Buddhist contemplative traditions, the term meditation connotes a process of familiarizing (Tibetan: gom) oneself with subjective mental experience and cultivating (Sanskrit: bhavana) balanced qualities of mind (Wallace, 2005). Meditative training includes methods for developing enduring psychological traits through deliberate application of awareness to the contents of subjective experience, including thoughts, sensations, intentions, and emotions. Repeated practice of attending to one's experience is thought to cultivate familiarity with and insight into transient phenomenological experiences, leading to lasting changes in cognitive and psychological functioning (Ekman et al., 2005; Wallace and Shapiro, 2006). Among the practices derived from the Buddhist tradition is a class of attention-regulatory techniques designed to foster attentional stability and vividness, traditionally known as shamatha (calm abiding; Wallace, 2005, 2006). During shamatha practice, attention is directed to and sustained on an external or internal object of focus (e.g., sensations of the breath). While attention is focused on the meditative object, one continuously monitors the quality of attention and remains vigilant for distracting thoughts and impulses. When concentration inevitably lapses, attention is gently returned to the initial object of focus. These traditional descriptions of the mental processes employed during shamatha and other focused attention (FA) meditation techniques share considerable theoretical overlap with contemporary cognitive and neuroscientific theories of attention (Lutz et al., 2008; Slagter et al., 2011).
Recent longitudinal studies of intensive FA meditation have demonstrated training-related improvements in attentional stability (Lutz et al., 2009;MacLean et al., 2010) and alerting (Jha et al., 2007), sustained response inhibition (Sahdra et al., 2011), and information processing efficiency (van Vugt and Jha, 2011). In the same cohort of participants detailed in the current study, MacLean et al. (2010) observed behavioral improvements in perceptual discrimination and sustained attention during a 32-min visual discrimination task following three months of FA meditation. Additionally, there is evidence that these general behavioral improvements may be accompanied by functional and structural changes in cortical regions associated with attention and sensory processing. In a cross-sectional study of meditative adepts, Brefczynski-Lewis et al. (2007) reported increased activity in a broad network of attention-related brain regions, including frontoparietal, temporal, and posterior occipital cortical areas during meditation. Meditation experience has also been linked to increased cortical thickness in areas associated with attention and sensory processing, including regions within frontal and somatosensory cortex (Lazar et al., 2005;Grant et al., 2010;Hölzel et al., 2011a).
Taken together, the available evidence suggests that FA training leads to increased recruitment of attention-related brain networks during practice. Furthermore, recruitment and training of these attentional processes during meditation appear to generalize to improvements in the ability to focus and sustain attention across a wide variety of novel tasks. Slagter et al. (2011) characterized this kind of generalizable training as process-specific learning (cf. task-specific learning; Green and Bavelier, 2008). However, the nature of the sensory and cognitive processes invoked and trained during FA meditation is not yet well characterized, and relatively little is known about how repeated engagement of these processes may lead to long-term, trait-level improvements. Given that these potential improvements likely reflect training-induced changes in practice-associated brain networks, a detailed characterization of patterns of cortical activation during meditation is critical to understanding the role of meditative states in trait-level improvements. One avenue for inferring process-specific training of attention is by examining the modulation of task-specific cortical oscillations during meditation (Cahn and Polich, 2006).
Intrinsic rhythmicity in ongoing electrical cortical activity is traditionally organized into standard spectral frequency bands, ranging from slow- to fast-wave oscillations (Steriade, 2006). It is well-established that patterns of ongoing oscillatory activity across these frequency bands reflect functional state changes in the brain. For example, specific mental states such as wakefulness (e.g., drowsy vs. alert) and attention (e.g., distracted vs. focused) are known to be associated with the pattern of ongoing oscillatory activity in predictable ways. Oscillatory activity has also been linked to local- and large-scale synchronization of neuronal assemblies across brain regions (Varela et al., 2001; Fries, 2005; Siegel et al., 2012; Tallon-Baudry, 2012), which may facilitate processes dependent on the integration of information across distributed brain networks. There is increasing evidence that attentional modulation of neuronal oscillations may serve to influence selective sensory processing (Womelsdorf and Fries, 2007). Thus, patterns of oscillatory activity may be used to infer activation of process-specific cognitive processes due to task-specific attentional modulation during meditation.
Shamatha techniques such as mindfulness of breathing require repeated spatial and temporal orienting of attention to subtle tactile sensations and perceptions. Attentional modulations of ongoing oscillations in the alpha- (∼8-12 Hz) and beta-bands (∼13-30 Hz) have been functionally implicated in the perceptual processing of somatosensory information and may therefore serve as potential physiological markers of the capacity to focus attention on the breath. Activity in the alpha- and beta-bands is inversely related to cortical excitability (Tamura et al., 2005; Ploner et al., 2006; Ritter et al., 2009), speed of visual and sensorimotor processing (Thut et al., 2006; van Ede et al., 2011), stimulus discriminability (van Dijk et al., 2008), target detection accuracy (Linkenkaer-Hansen et al., 2004; Romei et al., 2010), and attentional suppression of distracting information (Snyder and Foxe, 2010; Haegens et al., 2012). Observed reductions in oscillatory activity in these bands may reflect the desynchronization of underlying neuronal populations (Pfurtscheller and Lopes da Silva, 1999), leading to an enhanced signal-to-noise ratio in preparation for upcoming sensory processing. Beta-band activity, for example, exhibits spatially and temporally specific modulations in anticipation of tactile stimulation. Attentional orienting to upcoming tactile stimuli induces hemisphere-specific suppression of beta oscillations in sensorimotor cortex (Dockstader et al., 2010; Jones et al., 2010; van Ede et al., 2010, 2011), suggesting a functional role for beta in spatially oriented attention. Prestimulus beta reductions over sensorimotor regions are also associated with enhanced conscious perception of subtle tactile stimuli (Linkenkaer-Hansen et al., 2004; Schubert et al., 2009). Schubert et al. (2009), for example, reported that individuals with lower absolute magnitudes of prestimulus beta synchronization showed reduced interference from perceptual masking while detecting tactile stimuli.
Reductions in alpha oscillations over sensorimotor cortex have also been linked to sensory detection and attentional orienting to tactile stimuli (Linkenkaer-Hansen et al., 2004; Jones et al., 2010; Haegens et al., 2012). Although a number of studies have reported modulations of ongoing oscillatory activity in the alpha-band during meditative states, alpha activity may partly reflect non-specific effects related to general arousal or to experimental task order (Cahn et al., 2010). Furthermore, because a wide array of contemplative practices and techniques have been linked to changes in alpha activity, the potential functional significance of alpha-band activity during FA meditation remains unclear (Cahn and Polich, 2006). Kerr et al. (2011) have provided initial support for the hypothesis that meditation training produces changes in cortical activation during attentional orienting to task-relevant stimuli. Compared to a control group, participants who completed a non-intensive eight-week course in mindfulness meditation demonstrated greater alpha-band suppression in response to anticipatory cues requiring the spatial orienting of attention to anticipated tactile stimuli. Notably, however, no evidence was found for changes in beta-band oscillations.
There are a number of methodological considerations that limit the conclusions that can be drawn from the existing literature. First, outcomes from most previous studies were based on cross-sectional comparisons of self-selected participants (i.e., experienced meditators) with only demographically matched controls, making it difficult to draw conclusions about the causal role of meditative practice in any reported differences in cortical activation patterns. The use of longitudinal designs and more rigorous control conditions is clearly necessary to rule out factors unrelated to meditation training. Second, there are methodological limitations in the spectral analysis of ongoing cortical activity. Power spectrum estimates are susceptible to contamination by non-cortical sources of noise (e.g., muscle and ocular artifacts). In addition, the designation of frequency bands may not be sensitive to intra- and inter-individual differences that may influence the distribution of spectral frequencies. The understanding of cortical oscillatory activity during meditation may benefit from methodological advances in signal processing and the use of individualized frequency bands for the classification of spectral power (e.g., Klimesch, 1999).
In the present study, we addressed these issues by conducting a longitudinal wait-list controlled study of intensive meditation training using methodological advances in signal processing. Participants were randomly assigned to either an initial training group or a wait-list control group. Participants in the initial training group engaged in a three-month residential retreat wherein they received instruction and training in FA meditation (shamatha). During this period, participants assigned to the wait-list control condition served as a non-training comparison group. Subsequently, the wait-list control participants received formally identical training in a separate three-month residential retreat. To characterize training-related changes in cortical activity during meditation (i.e., state-dependent changes), we examined ongoing oscillatory activity while participants engaged in 6 min of mindfulness of breathing. Ongoing cortical oscillatory activity was assessed using spectral analysis of dense-array scalp-recorded electroencephalogram (EEG) data at three assessment points during each retreat. We used second-order blind source separation (SOBI; Belouchrani et al., 1997), along with a novel artifact removal tool (Saggar, 2011), to identify and remove signal sources of putative non-neural origin. Each participant's individual alpha frequency (IAF; Klimesch, 1999) was used to define spectral frequency bands.
We hypothesized that intensive training in FA meditation would promote increased recruitment of cortical regions associated with tactile sensory processing during meditation, because the meditation training specifically involved focusing attention on the breath and oral-nasal facial regions. Specifically, we predicted training-related changes in areas involved in attention and somatosensory processing, as evidenced by reductions in alpha- and beta-band power across central and parietal areas of the scalp. We also investigated activity across the remaining spectral frequency bands based on previous reports of meditation-state-dependent activity in the delta, theta (Cahn and Polich, 2006), and gamma bands (Lutz et al., 2004; Cahn et al., 2010). Additionally, we predicted that intensive FA practice would lead to training-related changes in IAF during meditation. Because several meditation studies have revealed an overall slowing in oscillations within the alpha frequency, both as a trait and as a state effect (Cahn and Polich, 2006), we predicted a similar downward shift in IAF following training. Finally, we predicted that increases in cortical activity (reductions in beta- and alpha-band power) and decreases in IAF would vary in direct relation to the amount of time individuals spent engaging in FA meditation during the retreat.
PARTICIPANTS
Participants were recruited through advertisements in various Buddhist magazines and email lists. Sixty participants were selected (out of 142 applicants) based on age, availability, physical and mental health, and previous retreat experience. These participants were then assigned to either an initial retreat (N = 30) or wait-list control (N = 30) group through stratified matched assignment. Groups were matched on age, sex, meditation experience, and ethnicity (see MacLean et al., 2010;Sahdra et al., 2011, for full assignment and matching criteria). During the initial retreat (Retreat 1), retreat group participants underwent three months of training at a remote mountain meditation center (Shambhala Mountain Center, Red Feather Lakes, CO). Participants lived and practiced onsite for the duration of training. Wait-list control group participants were flown to the retreat center for testing at each assessment point during Retreat 1 (data were collected after acclimatization for 72-96 h). Approximately three months after completion of Retreat 1, these same wait-list control group participants underwent formally identical training in a second three-month retreat (Retreat 2). The experimental staff was not blind to group membership. All study procedures were approved by the institutional review board of the University of California, Davis. Participants gave informed consent and were debriefed following training. Training participants paid for their room and board during each retreat (∼$5300) but were compensated $20 for each hour of data collection.
EEG data from 22 participants in each group (retreat and wait-list control) were included in the analysis. Participants were excluded from analysis if their data at any single assessment point were not usable (due to very poor EEG signal quality, missing experimental event timing, or electrode location information). Initial retreat and wait-list control participants included in the analyses did not differ (all ps > 0.05). Training during each retreat centered on shamatha meditation techniques taught by Dr. B. Alan Wallace (Wallace, 2006; Sahdra et al., 2011). Shamatha techniques included mindfulness of breathing, in which attention is directed toward the breath; observing mental events, in which attention is directed toward the whole field of mental experience (thoughts, images, sensations); and observing the nature of consciousness, in which attention is directed toward the experience of being aware. Beneficial aspirations included practices that cultivated lovingkindness, compassion, empathic joy, and equanimity (Wallace, 2006). Participants also met with Dr. Wallace privately once a week for individual advice, clarification, and guidance. Dr. Wallace was not present during any data collection procedures.
DATA COLLECTION
Participants were assessed at three points over the course of each retreat: at the beginning (pre), in the middle (mid), and at the end (post). At each assessment point, participants completed a battery of cognitive and affective measures over a two-day testing period (reported elsewhere, e.g., MacLean et al., 2010; Sahdra et al., 2011). At the conclusion of the second day of testing, participants engaged in a 12-min period of silent, eyes-closed mindfulness of breathing. The meditation began with approximately 50 s of audio instructions recorded by Dr. Wallace, and a recorded chime signaled the end of the meditation period. Continuous EEG was recorded over the entire period. However, due to errors in data acquisition, only the first 6 min of data were recorded for some subjects. To allow for comparisons across all time points and groups, analyses were restricted to the first 6 min of meditation. Prior to beginning the meditation, participants were instructed to rest quietly without engaging in formal meditation with their eyes closed for 1 min while baseline EEG was recorded. These baseline data were subsequently used to calculate the IAF values for each participant at each assessment.
Data acquisition and filtering
EEG was acquired at a sampling rate of 2048 Hz using the BioSemi ActiveTwo system (http://www.biosemi.com) and FMS electrode caps (http://www.easycap.de) fitted with BioSemi electrode holders in an 88-channel equidistant montage. Individual electrode locations were localized in three-dimensional space using a Polhemus Patriot digitizer (http://www.polhemus.com). Due to participant discomfort (e.g., pressure on the forehead) some channels were not inserted in the cap; data from these channels were discarded. Inspection also revealed channels with poor signal quality (intermittent connectivity or extreme amplitude), which were not included in the analysis. The data were then filtered at 0.1-200 Hz (zero-phase; roll-off: 12 dB LP/24 dB HP) and referenced to the average of all remaining channels.
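As a rough illustration of these two steps, the sketch below applies a zero-phase band-pass filter and an average reference to a multichannel recording. This is a minimal sketch in Python with SciPy, not the authors' BioSemi/BESA pipeline, and the Butterworth filter order is an assumption, as the text specifies only the pass band and roll-off.

```python
from scipy.signal import butter, sosfiltfilt

FS = 2048.0  # sampling rate reported above

def preprocess(eeg):
    """eeg: (n_channels, n_samples) raw recording."""
    # Zero-phase (forward-backward) 0.1-200 Hz band-pass; the 4th-order
    # Butterworth design is an illustrative assumption.
    sos = butter(4, [0.1, 200.0], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, eeg, axis=1)
    # Re-reference to the average of all remaining channels.
    return filtered - filtered.mean(axis=0, keepdims=True)
```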
Separating neural and non-neural signal sources
Second-order blind source identification (SOBI; Belouchrani et al., 1997) was used to separate sources of contaminating signal from ongoing brain electrical activity. SOBI uses joint-diagonalization of correlation matrices at multiple temporal delays (41 delays were used, τ = [1:1:10, 12:2:20, 25:5:100, 120:20:300] ms, as described in Tang et al., 2005) to derive signal components that have a continuous time course and fixed spatial projections, referred to as sources. These sources can be used to generate power spectra for frequency domain analyses or as inputs for source modeling methods to estimate the probabilistic locations of the underlying neural generators. SOBI has two main advantages over other methods (e.g., ICA) for blind source identification: (a) it uses average statistics over multiple temporal delays and hence is less susceptible to outliers; and (b) it uses second-order statistics such that short segments of data are sufficient for estimating components (for further discussion of blind source separation methods see Joyce et al., 2004;Tang et al., 2005;Congedo et al., 2008;Tang, 2010).
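Full SOBI jointly diagonalizes the correlation matrices at all 41 lags, which requires an iterative approximate joint diagonalization routine. The sketch below instead shows AMUSE, a single-lag relative of SOBI that conveys the core idea of second-order separation: whiten the data (decorrelation at lag 0), then eigendecompose one symmetrized lagged covariance matrix. It is a simplified illustration under that assumption, not the SOBI implementation used in the study.

```python
import numpy as np

def amuse(x, lag=1):
    """x: (n_channels, n_samples) EEG. Returns (sources, mixing)
    such that x is approximately mixing @ sources."""
    x = x - x.mean(axis=1, keepdims=True)
    # Lag-0 step: whiten the channels (decorrelate and equalize variance).
    d, e = np.linalg.eigh(np.cov(x))
    d = np.maximum(d, 1e-12)               # guard against rank deficiency
    w = e @ np.diag(1.0 / np.sqrt(d)) @ e.T
    z = w @ x
    # Lagged step: symmetrized covariance of the whitened data at one lag.
    n = z.shape[1] - lag
    c_tau = z[:, lag:] @ z[:, :-lag].T / n
    c_tau = 0.5 * (c_tau + c_tau.T)
    # Its eigenvectors define the rotation that separates the sources.
    _, v = np.linalg.eigh(c_tau)
    sources = v.T @ z                      # continuous source time series
    mixing = np.linalg.pinv(v.T @ w)       # fixed spatial projections
    return sources, mixing
```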
The 6 min of continuous meditation EEG data were divided into 6 one-minute segments. This was done to facilitate future within-session longitudinal analyses beyond the scope of the present paper. Each segment was divided into non-overlapping 1-s epochs and submitted to SOBI. SOBI generated an array of maximally uncorrelated spatial sources along with their corresponding time series. For each separate participant, assessment, and 1-min segment, the number of generated sources was equal to the number of scalp channels (∼104,000 individual sources). To distinguish the neural or non-neural origin of these sources, signal components must be identified and evaluated. Although it is possible to evaluate these sources manually using quantitative features such as topography, time series, and power spectrum, the amount of data makes a manual approach infeasible. On the other hand, fully automatic solutions are harder to validate. Thus, a novel semi-automatic artifact removal tool (SMART; Saggar, 2011) was constructed to maximize the likelihood that only non-neural (i.e., artifactual) sources were rejected and that neural sources were retained.
SMART uses a combination of features to perform an initial classification of signal component sources for inclusion or exclusion in the subsequently analyzed data. Component sources are classified according to scalp voltage topography, power spectrum, autocorrelation, time series characteristics, and the impact of each source on the overall power spectrum. SMART provides the user with an html-based interface of initial classifications. The user can quickly review all the sources and, if required, reclassify the initial classifications provided by SMART. Figure 1 illustrates several types of sources (rows) separated by SOBI, along with the features (columns) extracted by SMART. The top row (A) indicates a source classified as neural based on broad topography, decreasing spectral power (with a peak in the alpha-band), gradual fall-off of the lagged autocorrelation with delay, bursting pattern typical of ongoing EEG, and when the source is removed, a reduction in global power spectrum (7-15 Hz range). Row (B) illustrates an ocular source. This is indicated by an extreme anterior topography, lack of peaks in the power spectrum, high autocorrelation independent of lag, and a slow time series. The rightmost column shows that removal of this source affects very low frequencies of the power spectrum (<3 Hz). Row (C) illustrates the profile of an EMG source, with localized right posteriolateral topography, an irregular power spectrum with increased power as frequency increases, a sudden drop in autocorrelation within 10 samples indicating a high degree of noise in the time series (seen in the high frequency content of the time series), and no effect on the global power spectrum within frequencies below 30 Hz. The bottom row (D) illustrates a "peak" source (power line interference at 60 Hz) with localized topography indicating the affected electrode, a large peak at 60 Hz in the power spectrum, a cyclical lagged autocorrelation and uniform high frequency time series typical of a 60 Hz source. Removal of this source affects only the global power spectrum at 60 Hz.
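Two of the SMART-style features described above can be computed in a few lines. The sketch below flags an EMG-like source by its rising power spectrum and rapidly decaying autocorrelation; the thresholds are illustrative assumptions, not the tool's actual decision criteria.

```python
import numpy as np
from scipy.signal import welch

def emg_like(source, fs=2048.0):
    """source: 1-D time series of one separated component."""
    f, pxx = welch(source, fs=fs, nperseg=int(fs))
    band = (f >= 20) & (f <= 100)
    # Slope of log-power vs. log-frequency: ongoing EEG falls off (~1/f),
    # whereas EMG power is flat or rises over this range.
    slope = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)[0]
    # Normalized autocorrelation at a 10-sample lag; noise-like sources
    # decay toward zero almost immediately.
    s = source - source.mean()
    ac10 = float(np.dot(s[10:], s[:-10]) / np.dot(s, s))
    return bool(slope > -0.5 and ac10 < 0.2)  # illustrative thresholds
```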
Reconstruction, quality check, and conversion to standard space
Following application of SOBI and SMART, the putative non-neural sources were treated as artifacts and removed from the data. The original montage of 88-channel data was thus reconstructed using only those signal sources of presumed neural origin. The reconstructed multichannel time series was then scanned for high amplitude transient signals, or signal gaps, that may have been included in the reconstruction because they were correlated with other neural activity. For instance, large movement artifacts account for such a large percentage of the signal at the time they occur that removing the SOBI source that contains such artifacts leaves near-zero signal upon reconstruction. This post-SOBI reconstruction signal check was conducted on each reconstructed data segment. Finally, the reconstructed 88-channel EEG data were transformed into a standard 81-channel montage (international 10-10 system) using spherical spline interpolation (λ = 2 × 10^−6) (Perrin et al., 1989) as implemented in BESA 5.2. This transformation ensured that the number of channels was consistent across participants and that channel locations were standardized. Eight channels (AF9, Fp1, Fpz, Fp2, Nz, AF10, CB1, CB2) from the 81-channel montage were excluded because data from the corresponding nearest electrode sites were not available in the original montage, yielding a final 73-channel montage for the reconstructed EEG. These data were then transformed to a reference-free estimation of scalp current density (CSD) to limit the effects of volume conduction and improve the spatial resolution depicted on the scalp surface (e.g., Kayser and Tenke, 2006). CSD was calculated using the surface Laplacian, estimated as a second derivative of the scalp potential, with CSDToolbox (Kayser and Tenke, 2006; λ = 1 × 10^−6).
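The reconstruction step itself reduces to a matrix product between the retained columns of the mixing matrix and the corresponding source time series. A minimal sketch, reusing the shapes from the separation sketch above; the near-zero threshold in the follow-up check is an assumption:

```python
import numpy as np

def reconstruct(mixing, sources, keep):
    """mixing: (n_channels, n_sources); sources: (n_sources, n_samples);
    keep: boolean mask over sources judged neural by the SMART step."""
    return mixing[:, keep] @ sources[keep, :]

def near_zero_samples(recon, thresh=1e-3):
    # Post-reconstruction check for signal gaps: time points where removing
    # a dominant artifact source left near-zero signal across all channels.
    return np.abs(recon).max(axis=0) < thresh
```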
Power spectrum estimation
The 6 min of continuous reconstructed data were divided into 2-s (4096 point) segments with 75% overlap. Power spectra were then calculated for each of these segments using the multitapered power spectral density estimation method (Mitra and Pesaran, 1999;Oostenveld et al., 2011). Multi-tapered estimation reduces the bias in power spectra estimation by obtaining multiple estimates from each sample.
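A minimal multitaper estimate for a single 2-s (4096-sample) segment can be written with SciPy's DPSS (Slepian) tapers: one periodogram per taper, averaged across tapers to reduce variance. The time-bandwidth product NW = 4 and the taper count 2NW − 1 are common defaults and assumptions here, as the exact taper settings are not stated.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(segment, fs=2048.0, nw=4):
    """segment: one 2-s (4096-sample) window of a single channel."""
    n = segment.size
    tapers = dpss(n, NW=nw, Kmax=2 * nw - 1)   # (n_tapers, n) Slepian windows
    # One periodogram per taper, then average across tapers; absolute
    # scaling is not critical for the band-power comparisons made here.
    spectra = np.abs(np.fft.rfft(tapers * segment, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs
```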
IAF and individual frequency bands
IAF was estimated using the center of gravity method over the frequency range f1 = 7 Hz to f2 = 14 Hz (Klimesch, 1999): IAF = Σi a(fi) · fi / Σi a(fi), where a(fi) denotes the power-spectral estimate at frequency fi and the sums run over all frequencies within [f1, f2]. IAF values were calculated for each channel and were averaged across all channels to obtain a single IAF value per subject. So as not to confound a trait measure with possible task-related effects, we calculated IAF separately during the pre-meditation baseline period and during meditation. IAF values obtained during the 1-min baseline period were used to anchor the frequency range definitions of all EEG bands. The frequency band definitions were computed separately for each participant at each assessment. Frequency ranges based on IAF are provided in Table 1, along with traditional fixed bandwidth definitions. IAF values obtained during the 6-min meditation period served as an outcome measure of possible training-related change. Across groups and retreats, IAF estimates during pre-meditation baseline and FA meditation were strongly correlated at pre- (r = 0.95, p < 0.001), mid- (r = 0.97, p < 0.001), and post-assessments (r = 0.96, p < 0.001), indicating a close correspondence of these two measures.
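Applied to the PSD of each channel, the center-of-gravity estimate above amounts to a power-weighted mean frequency over the 7-14 Hz range, for example:

```python
import numpy as np

def iaf_center_of_gravity(freqs, psd, f1=7.0, f2=14.0):
    """Power-weighted mean frequency over [f1, f2] (Klimesch, 1999)."""
    band = (freqs >= f1) & (freqs <= f2)
    return float(np.sum(psd[band] * freqs[band]) / np.sum(psd[band]))
```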
Non-parametric cluster-based permutation testing
Non-parametric cluster-based permutation testing was used to determine spatiotemporal changes in spectral band power while accounting for the problem of multiple comparisons (Maris and Oostenveld, 2007). In contrast to the traditional method of dividing scalp channels into predefined regions and calculating parametric statistics based on the average value across channels within a region, the non-parametric cluster-based approach is data driven, and may more accurately reflect the scalp topography of cortical activations. For instance, if an effect occurs in a scalp location that crosses the border between two predefined regions, the cluster approach will provide more statistical power to detect significant differences between conditions than the traditional approach.
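The study used FieldTrip's implementation; the sketch below illustrates the underlying logic for a simplified paired pre- vs. post-comparison: threshold electrode-wise t-values, group suprathreshold electrodes into spatially contiguous clusters, and compare each observed cluster's summed t against a null distribution of maximum cluster statistics built by randomly sign-flipping subject difference scores. The adjacency structure, t-threshold, and two-assessment design are assumptions for brevity (the actual tests compared three assessments).

```python
import numpy as np
from scipy import stats

def find_clusters(tvals, mask, adjacency, min_size=3):
    """Group suprathreshold electrodes into spatially contiguous clusters
    and return each cluster's summed t-value. adjacency maps each
    electrode index to a list of its spatial neighbours."""
    seen, sums = set(), []
    for start in np.flatnonzero(mask):
        if start in seen:
            continue
        stack, members = [start], []
        while stack:
            e = stack.pop()
            if e in seen or not mask[e]:
                continue
            seen.add(e)
            members.append(e)
            stack.extend(adjacency[e])
        if len(members) >= min_size:   # minimum cluster size used above
            sums.append(float(np.sum(tvals[members])))
    return sums

def cluster_perm_test(pre, post, adjacency, n_perm=10000, t_thresh=2.0):
    """pre, post: (n_subjects, n_electrodes) arrays of band power."""
    diff = post - pre
    t_obs = stats.ttest_1samp(diff, 0.0).statistic
    observed = find_clusters(t_obs, np.abs(t_obs) > t_thresh, adjacency)
    rng = np.random.default_rng(0)
    null = np.zeros(n_perm)
    for i in range(n_perm):
        # Under the null, pre/post labels are exchangeable per subject,
        # which corresponds to randomly flipping the sign of the difference.
        flip = rng.choice([-1.0, 1.0], size=diff.shape[0])[:, None]
        t_p = stats.ttest_1samp(diff * flip, 0.0).statistic
        perm = find_clusters(t_p, np.abs(t_p) > t_thresh, adjacency)
        null[i] = max(np.abs(perm)) if perm else 0.0
    # Monte Carlo p-value for each observed cluster statistic.
    return [(np.sum(null >= abs(c)) + 1.0) / (n_perm + 1.0) for c in observed]
```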
For each frequency band (delta, theta, alpha, beta, and gamma), retreat (Retreat 1, Retreat 2), and group (retreat, wait-list control), a separate non-parametric cluster-based permutation test (Maris and Oostenveld, 2007) was performed using FieldTrip (Oostenveld et al., 2011) to find contiguous clusters of electrodes that differed in power as a function of assessment (pre-, mid-, and post-retreat testing). The minimum cluster size was set to three electrodes, with no maximum limit. Ten thousand permutations were run to assess the significance of clusters, using a Monte Carlo estimation of significance. Significant clusters indicate changes over assessments for the respective frequency band. False discovery rate (FDR; Benjamini and Hochberg, 1995) was used to control for individual testing of each retreat, group, and frequency band (Figure 2).
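Benjamini-Hochberg FDR control over the resulting family of cluster p-values (15 tests, per the Figure 2 caption) can be sketched as follows:

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure; returns a boolean reject mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Largest rank k with p_(k) <= (k/m) * q; reject hypotheses ranked 1..k.
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[: np.max(np.flatnonzero(below)) + 1]] = True
    return reject
```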
Parametric testing of clusters
A hybrid non-parametric/parametric approach was used to assess training-related changes in spectral band power (Figure 2). After the clusters were identified and FDR-corrected for each group and assessment combination, the spectral power for each cluster was extracted by averaging the values of all identified electrodes in that cluster. These values were log-transformed to normalize their distribution and were used in parametric analyses of training-related changes in EEG spectral power.
SPECTRAL POWER
For Retreat 1, non-parametric cluster-based permutation tests revealed a significant midline-anteriocentral-posterior cluster for beta-band power (1.2 × IAF to 30 Hz) in the initial retreat group. No significant beta-band cluster was found in the control group. Additionally, no significant clusters were found for any other frequency bands for either group during Retreat 1. Separate cluster analyses for Retreat 2 also revealed a significant beta-band cluster over midline-anteriocentral-posterior regions. There was substantial overlap (20 electrodes) between the beta-band clusters found for the retreat group during Retreat 1 (31 electrodes) and the wait-list controls during Retreat 2 (27 electrodes). See Figure 3 for the topographic similarity of change in spectral power for the two independent retreat groups. Significant beta-band clusters indicate spatiotemporal changes in beta-band activity over time for participants who received training in Retreat 1 and Retreat 2, but not for the wait-list controls tested during Retreat 1.
FIGURE 2 | Groups were tested three times during each three-month retreat period: at the beginning (pre-assessment), middle (mid-assessment), and end (post-assessment) of each retreat. After estimating spectral power in each band, non-parametric cluster-based permutation analysis was utilized, followed by FDR correction for 15 high-level non-parametric tests. A parametric approach was then used to examine changes in log-transformed spectral power (in clusters identified during non-parametric cluster analysis) across group and assessment.
To examine the directionality of these training-related changes, multivariate repeated-measures analyses of variance (ANOVA) were used. For Retreat 1, the ANOVA included the within-subjects effect of assessment (pre-, mid-, and post-testing), the between-subjects effect of group (retreat, control), and the interaction between the two. Because no significant beta-band cluster was found for the control group in Retreat 1, data for this group consisted of the log-transformed beta-band power averaged across the same electrode locations as were found for the significant electrode cluster for the Retreat 1 group. The ANOVA revealed significant main effects of group [F(1, 42) = 5.01, p = 0.031] and assessment [F(2, 41) = 13.03, p < 0.001]. Importantly, a significant group × assessment interaction was also found [F(2, 41) = 7.11, p < 0.01], suggesting training-related changes in beta-band power. To further explore this interaction, we conducted separate repeated-measures ANOVAs for each group with assessment as a within-subjects factor. A significant effect of assessment was found for the retreat group [F(2, 20) = 40.12, p < 0.001], and post-hoc pairwise t-tests (all reported p-values are Bonferroni corrected for three comparisons) revealed a significant reduction in beta-band power at the mid- [t(21) = 8.65, p < 0.001] and post-assessments [t(21) = 2.72, p = 0.038]. No significant differences were found in the control group (see Figure 3).
To test the effects of training on beta-band power in Retreat 2, a repeated-measures ANOVA was used to examine the within-subjects effect of assessment (pre-, mid-, and post-) in wait-list participants as they underwent training during the second retreat. Log-transformed beta-band power was averaged across the cluster found for Retreat 2 (discussed above). A repeated-measures ANOVA revealed a significant effect of assessment [F(2, 20) = 17.20, p < 0.001]. Post-hoc tests revealed a significant reduction in beta-band power at mid- [t(21) = 4.43, p < 0.001] and post-assessments [t(21) = 5.89, p < 0.001], compared to the pre-assessment. Thus, the pattern and spatial topography of training-related changes in beta-band power was replicated in Retreat 2 (see Figure 3). Retreat 1 analyses indicated no reduction in beta-band power over time in wait-list controls. Although a lack of a significant beta-band cluster in wait-list controls during Retreat 1 suggests an absence of beta-band change over time, comparisons with retreat group participants were based on the significant cluster for the Retreat 1 group only. Therefore, we conducted a follow-up analysis to further rule out the potential effects of applying the initial retreat group's cluster to assess change in the control group participants (i.e., a cluster not specific to the control participants' scalp activity). A repeated-measures ANOVA was used to test the effect of assessment on beta-band power for wait-list controls during Retreat 1 using the cluster identified for these same participants once they received training in Retreat 2. As was observed when using the initial retreat group cluster, there was no significant effect of assessment [F(2, 20) = 0.18, p = 0.84], suggesting that the lack of beta-band reduction in wait-list controls in Retreat 1 was not dependent on using the Retreat 1 training-group-defined electrode cluster.
Taken together, these analyses suggest that intensive FA training is associated with a reduction in beta-band power during mindfulness of breathing, with effects most reliably observed bilaterally overlying medial prefrontal, central, and parietal brain regions.
INDIVIDUAL ALPHA FREQUENCY
IAF values were calculated during the 6 min of FA meditation at each assessment point for the separate retreats. Changes in IAF during Retreat 1 were examined using multivariate ANOVA in a manner analogous to the beta-band power analyses summarized above. A 3 (assessment) × 2 (group) ANOVA revealed a significant main effect of assessment [F(2, 41) = 23.26, p < 0.001], indicating that IAF values shifted across time. The main effect of group [F(1, 42) = 0.13, p = 0.72] was not significant. As predicted, a significant group × assessment interaction [F(2, 41) = 6.40, p < 0.01] was found, suggesting a training-related shift in IAF across three months of meditation training. To further explore this interaction, we conducted separate repeated-measures ANOVAs for each group. There was a significant main effect of assessment for both the initial retreat [F(2, 20) = 20.89, p < 0.001] and the wait-list control [F(2, 20) = 5.38, p = 0.014] groups. For the retreat group, Bonferroni-corrected follow-up comparisons revealed that IAF decreased at mid- [t(21) = 6.59, p < 0.001] and post-assessments [t(21) = 4.50, p < 0.001], compared to the pre-assessment. In the wait-list control group, IAF also decreased at the mid-assessment [t(21) = 2.61, p = 0.049] as compared to the pre-assessment. Neither group showed significant changes in IAF between mid- and post-assessments (Figure 4).
In order to test changes in IAF during Retreat 2, a repeated-measures ANOVA was used to examine the within-subjects effect of assessment. A significant main effect was found [F(2, 20) = 35.44, p < 0.001]. Similar to the observed pattern for the initial retreat group during Retreat 1, a decrease in IAF was found at both the mid- [t(21) = 7.30, p < 0.001] and post-assessments [t(21) = 6.00, p < 0.001], as compared to the pre-assessment (Figure 4). Again, no reductions in IAF were found between the mid- and post-assessments.
The overall pattern of results suggests training-related decreases in IAF in both retreats. Although these data demonstrate reliable training-related reductions in IAF across three months of meditation training, reductions in IAF were also observed between pre- and mid-assessments for the wait-list controls in Retreat 1. However, the effect size for pre-to-mid change in IAF was nearly three times as large for the training groups in both Retreat 1 (Cohen's d = 1.40) and Retreat 2 (d = 1.56) as in wait-list controls (d = 0.56).
THE EFFECT OF DAILY MEDITATION ON BETA-BAND POWER AND IAF
Hierarchical multiple regression analysis was used to examine whether decreases in beta-band power and/or shifts in IAF were related to the amount of average self-reported daily FA meditation. Because the pattern of training-related change in beta and IAF was similar across both retreats, data from both groups were combined.
In the first step of the regression model of beta change, pre-assessment beta was entered as a predictor of post-assessment beta in order to account for baseline beta-band power prior to training. As expected, pre-assessment beta significantly predicted post-assessment beta [R² = 0.804, F(1, 42) = 171.79, p < 0.001]. The second step included the average daily amount of FA meditation practiced by each participant in order to examine the unique variance in post-assessment beta explained by daily practice, independent of pre-assessment beta. In step 2, the addition of average daily FA hours did not add significantly to the explained variance of the model [ΔR² = 0.003, F(1, 41) = 0.57, p = 0.46]. Thus, collapsed across retreats, changes in beta were not predicted by the amount of participants' daily FA meditation practice.
In a similar step-wise manner, we used a hierarchical regression model to examine whether the amount of FA meditation predicted changes in IAF. In the first step, pre-assessment IAF significantly predicted post-assessment IAF [R² = 0.870, F(1, 42) = 280.134, p < 0.001; see Table 2]. In step 2, the addition of average daily FA meditation hours added significantly to the explained variance of the model [ΔR² = 0.020, F(1, 41) = 7.45, p = 0.009]. This relation was negative (β = −0.142), indicating that the more the participants engaged in FA meditation, the more IAF decreased. In contrast to changes in beta-band power, these results suggest that reductions in IAF are significantly predicted by the amount of daily FA meditation engaged in over the course of meditation training.
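A hedged sketch of this two-step procedure is shown below; it reproduces the ΔR² and F-change logic for a single added predictor, with all variable names illustrative rather than taken from the original analysis.

```python
# Two-step hierarchical regression with an F-change test for the added predictor.
import numpy as np
from scipy import stats

def r_squared(y, X):
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def f_change(y, X_base, X_full):
    """R-squared increment and F-change when one predictor is added."""
    r2_base, r2_full = r_squared(y, X_base), r_squared(y, X_full)
    n, k_full = len(y), X_full.shape[1]
    df2 = n - k_full - 1
    f = (r2_full - r2_base) * df2 / (1 - r2_full)   # one added predictor (df1 = 1)
    return r2_full - r2_base, f, stats.f.sf(f, 1, df2)

# usage (illustrative arrays): step 1 = pre-assessment IAF only,
# step 2 = pre-assessment IAF plus mean daily FA hours
# dr2, f, p = f_change(post_iaf, pre_iaf[:, None],
#                      np.column_stack([pre_iaf, daily_fa_hours]))
```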
THE EFFECT OF ALTITUDE ON BETA-BAND POWER AND IAF
During Retreat 1, the retreat group lived onsite at the retreat center for the full three-month training period, whereas wait-list control participants lived at home and were flown to the retreat center at each assessment point for testing. Because the retreat center was located at a relatively high altitude (∼2500 m), it is possible that traveling to and living at that higher altitude could have influenced changes in spectral power and/or IAF over time (Kaufman et al., 1993; Guger et al., 2005). To rule out this explanation, we used hierarchical multiple regression analysis to examine whether the reductions in beta-band power and IAF could be predicted by the difference in elevation between the testing location and the participant's city of residence. All participants except one resided at a lower elevation than the retreat center (N = 43; elevation difference M = 1770.12 m, SD = 767.21 m); the single participant who resided at a slightly higher elevation than the retreat center was excluded from analysis. As in the analysis of daily FA meditation, the first step of each regression model included the pre-assessment level of beta or IAF, respectively. The addition of altitude change in step 2 did not add significantly to the explained variance of the model for either beta-band power [ΔR² = 0.01, F(1, 41) = 2.26, p = 0.14] or IAF [ΔR² = 0.003, F(1, 41) = 0.85, p = 0.36]. These analyses suggest that changes in IAF and beta-band power were unrelated to changes in altitude.
FIGURE 4 | Change in individual alpha frequency (IAF) across assessments during Retreat 1 (Retreat and Control group) and Retreat 2 (Control group in training). The figure shows IAF values, averaged across participants, in each group and at each assessment. Error bars are reported as standard errors of the mean (SEM). Significant training-related reductions in IAF were found in both retreat groups at mid- and post-assessments (compared to the pre-assessment).
DISCUSSION
We observed training-related changes in oscillatory cortical activity in a longitudinal, wait-list controlled study of three months of intensive meditation training. Spectral analysis of dense-array scalp-recorded EEG was used to characterize training-related changes during 6 min of FA meditation. Second-order blind source identification and a novel semi-automatic artifact removal tool were utilized to identify and remove non-neural signal contaminants. Each participant's resting-state IAF was estimated and used to define frequency bands. Non-parametric cluster-based analysis revealed consistent training-induced changes in the beta band only. In an initial retreat, significant reductions in beta-band power were observed bilaterally in anterior-central and posterior scalp regions with training. This pattern was replicated in a second retreat, in which the wait-list control group received formally identical training. A reduction (a slowing of overall alpha frequency) was also observed in state-related IAF across both retreats. Moreover, the degree of reduction in IAF was predicted by the amount of time participants practiced FA meditation during training. The robustness of these findings, replicated across separate training periods, provides evidence of specific longitudinal changes in characteristic brain oscillatory activity obtained during mindfulness of breathing.
REDUCTIONS IN BETA-BAND POWER
Previous research has demonstrated that beta-band power is inversely related to cortical activity and excitability (Tamura et al., 2005; Ploner et al., 2006), and that beta-band power over somatosensory cortex is inversely correlated with fMRI BOLD activation (Ritter et al., 2009). Suppression of ongoing oscillatory activity in the beta and alpha bands may result from increased cellular excitability in thalamocortical networks (Steriade, 2006; Bollimunta et al., 2011) and may serve to facilitate selective sensory gating, augmenting the signal-to-noise ratio of incoming sensory information (Pfurtscheller and Lopes da Silva, 1999; van Ede et al., 2011). This suggests a functional role for beta-band activity within tasks involving attention to tactile stimuli. Specifically, anticipatory modulations in beta-band power have been associated with spatiotemporal orienting of attention (Dockstader et al., 2010; Jones et al., 2010; van Ede et al., 2010, 2011) and conscious detection of subtle tactile stimuli (Linkenkaer-Hansen et al., 2004; Schubert et al., 2009). For example, when attention is cued to a lateralized tactile stimulus, beta-band power over parietal cortex is suppressed contralateral and increased ipsilateral to the attended stimulus (van Ede et al., 2011). The degree of prestimulus suppression is associated with both faster responding and enhanced stimulus detection (Linkenkaer-Hansen et al., 2004; van Ede et al., 2011).
Suppression of beta-band power may therefore reflect increased cortical activity associated with enhanced sensory processing of ongoing tactile stimuli. Training-related suppression of beta-band power in the present study is in line with the functional role proposed for oscillatory activity in facilitating sensory processing of the attended breath stimulus. During each assessment, participants focused their attention on the dynamic changes in breath sensations over 6 min of eyes-closed meditation. They were instructed to notice subtle tactile sensations at the aperture of the nostrils while regulating attention should it lapse. Thus, reductions in beta-band power over the course of training may reflect increased cortical activation of sensory-related attentional networks and an increased capacity to focus attention on the breath during mindfulness of breathing meditation. Beta suppression may also reflect increased perceptual discrimination and conscious perception of these tactile sensations. In support of this idea, Schubert et al. (2009) found that the absolute magnitude of beta suppression across individuals was associated with a greater ability to perceive target tactile stimuli within a context of similar distracters. In the present cohort, training was previously reported to increase perceptual discrimination of subtle visual stimuli (MacLean et al., 2010). In a similar manner, we speculate that intensive meditative practice may result in increased levels of sensory processing of ongoing tactile stimuli.
The observed reduction in beta-band power following meditative training is also consistent with a cross-sectional study of highly experienced meditative adepts (Brefczynski-Lewis et al., 2007). Brefczynski-Lewis et al. (2007) observed increased BOLD activation in brain regions typically involved in sustained attention during FA meditation for expert meditators with an average of 19,000 h of practice. In contrast, experts with over 40,000 h of lifetime practice showed a decreased amount of activation in the same brain regions during FA meditation. These results
suggest that cortical activation during meditative practice may follow a curvilinear trajectory, such that both novices and highly experienced practitioners show less attention-related activation than practitioners whose lifetime experience falls between these extremes. The average level of lifetime meditation experience among participants in our study was about 2,500 h, a moderate level in comparison to the above groups. Thus, our observed reductions in beta-band power, presumably indicative of an increase in cortical activation, are in line with the trajectory of training-related change proposed by Brefczynski-Lewis et al. (2007).
Finally, the observed pattern of training-related modulation of oscillatory activity may be associated with improvements in behavioral measures of attentional performance. In this same cohort, we previously reported improvements in sustained attention, response inhibition, and perceptual discrimination following training (MacLean et al., 2010; Sahdra et al., 2011). Repeated engagement of attention networks during practice may allow for more efficient resource allocation during demanding external psychophysical tasks. For example, changes in neural signatures of attentional stability have been found following three months of intensive Vipassana meditation, which includes FA meditation as one component of training (Lutz et al., 2009). This link should be examined in future work by directly relating measures of cortical activation during practice to subsequent training-related behavioral outcomes. Furthermore, theoretical approaches, such as computational modeling, can be employed to test targeted hypotheses regarding the underlying cortical dynamics involved in these processes.
REDUCTIONS IN INDIVIDUAL ALPHA FREQUENCY
We also observed training-related reductions in IAF during FA meditation, which were evident by the midpoint of each retreat. IAF has typically been conceptualized as a relatively stable index of an individual's trait-like capacity for cognitive resource allocation (Kondacs and Szabó, 1999;Jann et al., 2010) and cognitive load (Moran et al., 2010). Although the available literature on IAF provides little framework for conceptualizing intraindividual change as a result of training, we believe that diminished IAF may represent an overall reduction in cognitive effort during meditation. Peak alpha frequency has been shown to increase with elevated cognitive load in visuospatial working memory (Moran et al., 2010) and may reflect a greater allocation of resources to the maintenance of information in memory. After intensive training, increased attentional stability may reduce the attentional resources required to sustain attention on the sensations of the breath. Additionally, meditation training may promote greater efficiency in the reorienting of attention to a given stimulus. Although the aforementioned reductions in beta-band power suggest enhanced sensory processing of the tactile sensations of the breath, we speculate that the observed reductions in IAF may indicate that such increases in activity incur fewer global processing costs.
A reduction in IAF was observed in both the training and control groups in the first half of the first retreat, suggesting that these changes in IAF were due at least in part to task-related learning. By the second assessment, both the retreat and the wait-list control groups had previous exposure to the demanding testing procedures (i.e., the perceptual and cognitive tasks that occurred in the 2-3 h before the FA meditation period), and thus reductions in IAF during FA meditation may have been due in part to increased familiarity with and reduced stress caused by data collection procedures. However, the reduction in IAF in controls was of a moderate effect size, whereas the effect size in the training groups was nearly three times larger. In addition, the amount of time participants devoted to FA meditation over the course of three months significantly predicted changes in IAF from pre-to post-retreat training assessments. This evidence suggests that a substantial degree of the change in IAF observed in retreat groups was related to meditation training.
CONTRIBUTIONS AND LIMITATIONS
Contrary to our predictions, we did not find reductions in alpha-band power following training. Although reductions in power in the alpha band have been associated with anticipatory attention to tactile stimuli in both training (Kerr et al., 2011) and non-training (Jones et al., 2010; van Ede et al., 2011) contexts, alpha has been primarily implicated in the visuospatial domain (Thut et al., 2006; van Dijk et al., 2008; Romei et al., 2010). In particular, increased alpha-band activity may serve as a mechanism by which visual selective attention acts to suppress distracting information (Foxe and Snyder, 2011). Thus, it is reasonable to expect that training-related changes in alpha-band power might be more likely to manifest during focused meditation on a visual object (e.g., Brefczynski-Lewis et al., 2007) and/or during behavioral performance of visual attention tasks. In addition, it is interesting to note that our findings do not support a pattern of increased alpha power during meditative states, as has been reported in a number of previous studies (Cahn and Polich, 2006). As alpha may reflect non-specific effects such as general arousal level, we believe this highlights the importance of testing targeted hypotheses concerning the process-specific modulation of task-specific (e.g., tactile processing during mindfulness of breathing) cortical activity during meditation.
Previous studies have reported increases in gamma-band activity during meditation in experienced practitioners (Lutz et al., 2004; Cahn et al., 2010). Analysis of gamma-band power in humans is notoriously challenging due to the contribution of non-neural sources. Scalp-recorded muscle activity generates broadband myogenic electrical "noise" that overlaps substantially with the gamma band, and may also influence the alpha and beta bands (McMenamin et al., 2010, 2011). In the present study, we utilized novel signal processing methods to remove such putatively non-neural signal sources from the ongoing EEG. This may have contributed to the lack of changes in gamma-band activity with training. Furthermore, in contrast to previous studies that used standard ranges for each frequency band, we defined the range of each spectral band according to each participant's IAF during rest. This approach accounts for individual differences in the frequency band boundaries and therefore may provide a more accurate measure of activity. For example, the beta band began at ∼11.6 Hz in the current study, which is lower than the traditional lower boundary of 13 Hz. In the present study, activity in the IAF-defined frequency bands may have cut across the traditional boundaries of fixed spectral bands. Thus, for the majority of participants in our study, the IAF-defined beta band likely included activity from both the traditionally defined alpha (∼8-13 Hz) and beta (∼13-30 Hz) bands.
Certain factors unrelated to the training may have contributed to the present findings. First, the retreats took place in a remote mountain setting at an altitude of approximately 2500 m above sea level, where participants spent most of their time in silent, solitary meditation. Although change in altitude is known to affect EEG recordings, typically by increasing the amplitude of the signal (Kaufman et al., 1993), no statistical relation was found between altitude and either beta-band or IAF change. Furthermore, it is possible that exposure to the natural wilderness setting may have cognitive benefits independent of the training (e.g., Berman et al., 2008). Second, beta-band power has been implicated in the active maintenance of a current motor set (Engel and Fries, 2010), such that beta-band activity increases in preparation for an anticipated postural challenge (Androulidakis et al., 2007). In the present study, training consisted of maintaining meditative posture for lengthy periods of time. Future research should therefore explore the potential effects of posture on state-related meditation effects. Finally, motivational levels may not have been exactly matched across groups. Participants receiving training were not blind to their group assignment, and therefore our results may have been susceptible to demand characteristics resulting from varying levels of influence from teacher expectations and commitment to a general Buddhist worldview. Although our wait-list design likely addressed several important design limitations of prior research, future investigators should attempt to account for social, motivational, and environmental factors by implementing active comparison conditions in which participants complete non-meditative training or activities in a retreat-like setting.
In summary, we found replicable and robust reductions in beta-band power over central-parietal regions and decreased IAF following three months of intensive FA meditation. These findings add to the growing body of literature demonstrating functional brain changes associated with meditation practice, changes that may underlie generalized improvements in cognition and psychological well-being.
"year": 2012,
"sha1": "7e4fdfdadddc4c47e608f0004f0ff51ad3918fed",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2012.00256/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e7c0eac39f13ea1b7f73040917e4cc13693c4ebe",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
117800814 | pes2o/s2orc | v3-fos-license | Breather-like structures in modified sine-Gordon models
We report analytical and numerical results on breather-like field configurations in a theory which is a deformation of the integrable sine-Gordon model in (1+1) dimensions. The main motivation of our study is to test the ideas behind the recently proposed concept of quasi-integrability, which emerged from the observation that some field theories present an infinite number of quantities which are asymptotically conserved in the scattering of solitons, and periodic in time in the case of breather-like configurations. Even though the mechanism responsible for such phenomena is not well understood yet, it is clear that special properties of the solutions under a space-time parity transformation play a crucial role. The numerical results of the present paper give support for the ideas on quasi-integrability, and it is found that extremely long-lived breather configurations satisfy these parity properties. We also report on a mechanism, particular to the theory studied here, that favours the existence of long lived breathers even in cases of significant deformations of the sine-Gordon potential.
Introduction
Solitons play a fundamental role in the study of non-linear phenomena because in many situations they can be considered as the "normal modes" of the physical system in the strong coupling regime. In fact, in special examples of gauge theories in (3+1) dimensions and integrable field theories in (1+1) dimensions, there exist duality relations interchanging the roles of the fundamental excitations of the fields at weak coupling and the solitons at the strong coupling regime [1]. In (1+1) dimensions, where soliton theory is much better understood, the solitons are often described as those classical solutions that propagate without dissipation and dispersion, and when two such solitons are scattered they do not destroy each other. The only effect of the scattering is a shift in their positions relative to the values they would have had, had they not participated in the scattering process. The most widely accepted explanation for such behaviour is that, in practically all known soliton theories, there exists an infinite number of conserved quantities that dramatically constrains the dynamics and leaves no option for the solitons after the scattering but to continue being themselves. This is a remarkable fact, but it certainly has an annoying drawback: it forces solitons to exist only in the realm of the so-called exactly integrable field theories in (1+1) dimensions. Such theories are, however, very special, as they possess highly non-trivial hidden symmetries; they have been used as convenient laboratories to study non-perturbative phenomena and so have led to the development of new and important techniques in field theories.
Recently we have observed that some non-integrable field theories in (1+1) dimensions present properties similar to those of exactly integrable theories [2,3,4]. They have soliton-like field configurations that behave in a scattering process in a very similar way to the true solitons, i.e. they do not destroy each other. We have also found that such theories possess an infinite number of quantities which are not exactly conserved in time but are, however, asymptotically conserved. By that we mean that the values of these quantities do change, and change a lot, during the scattering process, but they return, after the scattering, to the values they had before it. This is a remarkable property since, from the point of view of the scattering, what matters are the asymptotic states, and so such a theory looks a bit like an effective integrable theory. We have also observed that some of such non-integrable theories possess breather-like solutions that are extremely long-lived, i.e. they oscillate without losing much energy through radiation for very long periods of time. In addition, each of this infinite number of 'almost conserved' quantities, when evaluated on these breather-like field configurations, does vary in time, but in a steady way, by oscillating between two fixed values. For these reasons we have named this phenomenon quasi-integrability. The mechanisms responsible for these remarkable properties are not fully understood yet, but we believe they will play a very important role in the study of many non-linear phenomena. Since exactly integrable theories are rare and in general do not describe realistic physical phenomena, the quasi-integrable theories may be very useful in the description of more realistic physical processes.
In this paper we report some results of our numerical study of breather-like field configurations in a (1+1)-dimensional theory of a real scalar field φ subjected to a potential which is a deformation of the sine-Gordon potential. This theory has already been considered in one of our previous papers [4], and the deformed potential depends upon two free parameters ε and γ. The parameter ε measures the deformation of the potential away from the sine-Gordon potential. The parameter γ, when different from zero, makes the potential not symmetric under the reflection φ → −φ. In [2,4] we have argued that the phenomenon of quasi-integrability may be related to some properties of the two-soliton and breather field configurations under a very specific space-time parity transformation. When the field φ, evaluated on such configurations, is odd under this parity transformation, we have an infinite number of quasi-conserved quantities, i.e. quantities which are asymptotically conserved in the case of two-soliton solutions and oscillate in time for breather-like configurations. In order to have this property the potential has to be even under the change φ → −φ. Thus, we would expect the cases with γ ≠ 0 to be less integrable than the cases with γ = 0. Our numerical simulations do confirm this expectation, but we also observe an effect which had not been foreseen. Due to the way we build our initial field configuration for the numerical simulations from the exact sine-Gordon breather solution, the initial kinetic energy decreases with the increase of the parameter ε, and so does the amplitude of oscillations of the resulting breather-like configurations in the deformed theory. This makes the quasi-breather field oscillate in a region where the deformed potential differs little from the sine-Gordon potential. So, we find that we can have very long-lived breather-like fields for theories which are (globally) large deformations of the integrable sine-Gordon model. The paper is organized as follows. In section 2, for completeness, we present our ideas about quasi-integrability based on an anomalous quasi-zero curvature condition (Lax equation), and give the algebraic and dynamical arguments for why the properties of the initial field configurations under this space-time parity transformation are important for the quasi-integrability concept. In section 3 we present the results of our numerical simulations. They have involved using the 4th-order Runge-Kutta method to simulate the time dependence of field configurations, which allowed us to determine and study various properties of breather-like solutions of the full equations of motion of our models for several choices of values of the parameters ε and γ characterizing the potential. In section 4 we present our conclusions.
The model and the concept of quasi-integrability
In this paper we report some results of our numerical simulations studying breather-like solutions of one such quasi-integrable theory, which corresponds to a particular deformation of the sine-Gordon model introduced in [4]. It is a (1+1)-dimensional theory of a real scalar field φ described by the Lagrangian

L = (1/2) ∂_μφ ∂^μφ − V(φ),    (2.1)

where the potential V(φ), given in (2.2), depends on two real parameters ε and γ; it is written in terms of an auxiliary field ψ(φ), defined by the transformation (2.3), and a constant c, given in (2.4). Note that for ε = 0 one gets c = 1 and ψ = φ, and the potential (2.2) reduces to the sine-Gordon potential (2.5). Thus, the theory (2.1), for ε = 0, reduces to the sine-Gordon model defined by the Lagrangian (2.6). The vacua of the sine-Gordon theory are obviously given by the constant field configurations ψ = nπ, with n integer. The theory (2.1) also has infinitely many degenerate vacua, but not equally spaced like in the sine-Gordon case. However, the parameter c, given in (2.4), was chosen to preserve two of these vacua. Indeed, ψ(φ = 0) = 0 and ψ(φ = π) = π. The parameter γ is important in our analysis of the quasi-integrability properties of the theories (2.1). Note that the potential (2.2) is even under the transformation φ → −φ for the case γ = 0, i.e. V_{γ=0}(−φ) = V_{γ=0}(φ), but not otherwise. As we will discuss below, quasi-integrability is favoured in the cases when γ = 0. In figure 1 we show the potential (2.2) for some values of ε and γ.
The potential (2.2) was introduced in [4] using the techniques of [5,6], based on the ideas of self-dual or BPS solutions. Indeed, the static one-soliton solutions of the sine-Gordon model are invariant under the change φ → −φ for the case γ = 0, but not otherwise. In addition, the vacua φ = 0 and φ = π are common to all values of ε and γ. For ε ≠ 0 the peaks of the potential grow in height, compared to those for ε = 0, for |φ| > π, irrespective of the value of γ.
In fact, any static solution of the first-order BPS equation (2.8),

dψ/dx = ±√(2 V_SG(ψ)),

is a solution of the second-order Euler-Lagrange (sine-Gordon) equation following from (2.6). If one now introduces a field transformation ψ(φ), it follows that the new field φ satisfies the BPS equation (2.9),

dφ/dx = ±√(2 V(φ)),

with the potential being given by

V(φ) = V_SG(ψ(φ)) / (dψ/dφ)².    (2.10)

The potential (2.2) has been obtained from (2.10) by the field transformation (2.3). It then follows that the static solutions of (2.9) are solutions of the theory (2.1). Indeed, the static one-soliton solutions of (2.1) are obtained from (2.7) by applying the transformation (2.3). The transformation ψ(φ) maps BPS solutions of the sine-Gordon model (2.6) into BPS solutions of the theory (2.1). Note, however, that in general a given solution of the second-order equation of motion of the sine-Gordon model is not necessarily mapped into a solution of the second-order Euler-Lagrange equation corresponding to (2.1).
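As an illustration of this prescription, the following sympy sketch derives a deformed potential from a toy transformation ψ(φ); both the transformation and the sine-Gordon normalization used here (vacua at ψ = nπ) are stand-ins, not the paper's actual (2.3) and (2.5).

```python
# Deformation construction V(phi) = V_SG(psi(phi)) / (d psi/d phi)^2 with a toy psi(phi).
import sympy as sp

phi, eps = sp.symbols('phi epsilon', real=True)
psi = sp.Symbol('psi')
V_sg = sp.Rational(1, 2) * (1 - sp.cos(2 * psi))   # assumed normalization, vacua at psi = n*pi

psi_of_phi = phi + eps * sp.sin(2 * phi) / 2       # toy transformation: psi(0)=0, psi(pi)=pi
V_def = V_sg.subs(psi, psi_of_phi) / sp.diff(psi_of_phi, phi) ** 2

print(sp.simplify(V_def.subs(eps, 0)))             # recovers the sine-Gordon potential at eps = 0
```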
The concept of quasi-integrability does not really depend upon the fact that the quasi-integrable theories are obtained from the integrable ones by field transformations of the type described above. However, many aspects of our analysis get simplified by using such a connection between integrable and non-integrable theories. In particular, the initial configurations used in our numerical simulations of breathers have been obtained by applying the field transformation (2.3) to the exact breather solutions of the sine-Gordon model.
As described in [2,4], our concept of quasi-integrability involves a connection A_μ satisfying an anomalous zero-curvature condition. Indeed, we consider the connection, or Lax potentials, given in (2.11), written in terms of the light-cone variables defined in (2.12). The quantities b_n and F_n appearing in (2.11) are generators of the sl(2) loop algebra (2.13), constructed out of the so-called spectral parameter λ and the generators T_3 and T_± of the finite sl(2) algebra (2.14). It is then easy to see that the curvature of the connection (2.11),

F_{+−} = ∂_+ A_− − ∂_− A_+ + [A_+, A_−],

is given by (2.15). Note that the Euler-Lagrange equation following from (2.1) for a general potential V is

∂²φ/∂t² − ∂²φ/∂x² + ∂V/∂φ = 0.    (2.17)

Thus, the term in (2.15) proportional to the Lie algebra generator F_0 vanishes when the field configurations satisfy the equation of motion (are 'solutions' of the theory). For the case of the sine-Gordon potential (2.18) the remaining term in (2.15), i.e. the anomaly X given in (2.16), vanishes. In such a case the curvature (2.15) vanishes for sine-Gordon solutions, and this is what makes the sine-Gordon model integrable. For the potential (2.2), however, the anomaly X does not vanish, irrespective of the choice of values of the parameters ω and m (except for the trivial case ω = 0).
The infinite number of asymptotically conserved quantities can be constructed using techniques adapted from those of integrable field theories. This can be done as follows. We perform the gauge transformation [4]

A_μ → a_μ = g A_μ g^{−1} − ∂_μ g g^{−1},    (2.19)

with g an exponentiation of the generators F_n with coefficients ζ_n. The parameters ζ_n in g can then be chosen recursively, starting from n = 1 onwards, in such a way that the component a_− of the transformed connection has only terms in the direction of the generators b_{2m+1}. These generators span an infinite-dimensional abelian subalgebra of the sl(2) loop algebra.
Note that, due to the non-vanishing anomaly X, the component a_+ also has terms in the direction of the generators b_{2m+1} and F_m as well. For an integrable theory, like the sine-Gordon one, the anomaly X does vanish, the terms proportional to F_m in a_+ vanish too, and the whole connection can be made to lie in the abelian subalgebra generated by the b_{2m+1}. In the general case, i.e. when the anomaly does not vanish, the transformed curvature becomes (2.20), where we have used the equation of motion (2.17). Note that the commutator of any b_{2m+1} with any given F_n produces terms proportional to the F_m generators only. Therefore, for every component of the transformed curvature (2.20) in the direction of a given b_{2m+1} we get an equation of the form

∂_+ a_−^{(2n+1)} − ∂_− a_+^{(2n+1)} = X γ^{(2n+1)},    (2.21)

with a_±^{(2n+1)} and γ^{(2n+1)} being the coefficients of the generators b_{2m+1} in the expansion of a_± and g F_1 g^{−1}, respectively, in terms of the elements of the basis of the sl(2) loop algebra.
The relations (2.21) constitute an infinite number of anomalous conservation laws. Indeed, by re-expressing them in the x and t components (see (2.12)) one gets relations of the form

dQ^{(2n+1)}/dt = α^{(2n+1)},   with   α^{(2n+1)} ≡ ∫ dx X γ^{(2n+1)},    (2.22)

where the charges Q^{(2n+1)}, defined in (2.23) as spatial integrals of the appropriate components of the transformed connection, are not conserved due to the non-vanishing anomaly X. They would, of course, be conserved in an integrable theory like the sine-Gordon one, for which X = 0.
Note, however, that for travelling solutions, i.e. those which can be set at rest by a (1+1)-dimensional Lorentz transformation, the charges Q^{(2n+1)} are conserved. To see this we observe that in the rest frame such solutions are x-dependent only, and so from (2.22) one gets α^{(2n+1)} = 0. But from (2.21) one finds that X γ^{(2n+1)} is a pseudo-scalar in (1+1) dimensions, and so α^{(2n+1)} vanishes in any Lorentz frame. Therefore, for travelling solutions like the one-soliton solutions, the charges Q^{(2n+1)} are conserved even in non-integrable theories.
Next we note a striking property that helps us to define what we mean by a quasi-integrable theory. For some very special subsets of solutions of the theory (2.1) the charges Q^{(2n+1)} satisfy what we call a mirror symmetry. For any one of the solutions in such a subset one can find a special point (t_Δ, x_Δ) in space-time and define a parity transformation around this point,

P: (t̃, x̃) → (−t̃, −x̃),   with   t̃ ≡ t − t_Δ,   x̃ ≡ x − x_Δ.    (2.24)

The field φ corresponding to such a solution is odd under this parity, i.e.

P(φ) = −φ.    (2.25)
To find the implications of this observation we combine our parity transformation with an order-two automorphism Σ of the sl(2) loop algebra to build a Z_2 transformation Ω ≡ P Σ, the composition of a space-time and an internal Z_2 transformation. It turns out that the A_− component of the connection (2.11) is odd under this Z_2 transformation, i.e. Ω(A_−) = −A_−. This fact can be used to show that the group element g used to perform the gauge transformation (2.19) is even, i.e. Ω(g) = g. One can then use this fact to show that the factor γ^{(2n+1)} in the integrand of α^{(2n+1)} is odd under the space-time parity, i.e. P(γ^{(2n+1)}) = −γ^{(2n+1)}. More details of this reasoning can be found in [4].
If we now assume that the potential V(φ) in (2.1) is even under the parity when evaluated on the special solutions satisfying (2.25), i.e. P(V) = V, then it follows that the anomaly X, given in (2.16), is also even, i.e. P(X) = X. The integrand X γ^{(2n+1)} is then odd under the parity, and therefore

∫∫ dt̃ dx̃ X γ^{(2n+1)} = 0,

where the integration is performed over any rectangle centered at (t_Δ, x_Δ), i.e. over −t̃_0 ≤ t̃ ≤ t̃_0 and −x̃_0 ≤ x̃ ≤ x̃_0, with t̃_0 and x̃_0 any given fixed values of the shifted time t̃ and space coordinate x̃, respectively, introduced in (2.24). Now, by taking x̃_0 → ∞, we conclude that the charges (2.23) satisfy the following mirror time-symmetry around the point t_Δ:

Q^{(2n+1)}(t_Δ + t̃_0) = Q^{(2n+1)}(t_Δ − t̃_0).

That is a remarkable property of the special subsets of solutions satisfying (2.25) and belonging to a theory of type (2.1) with a potential even under the parity (2.24). Such subsets of solutions define our quasi-integrable theory. For the case of two-soliton solutions one notes that, by taking the limit t̃_0 → ∞, the charges are asymptotically conserved, i.e. they have the same values before and after the scattering.
For the case of the sine-Gordon theory it is true that, for any two-soliton or breather solution, it is possible to find a point in space-time (t_Δ, x_Δ) such that the solution is odd under a parity transformation around this point. Let us now consider the theory (2.1) with the potential (2.2) and expand its solutions in powers of the parameter ε around a given solution φ_0 of the sine-Gordon model,

φ = φ_0 + ε φ_1 + ε² φ_2 + ⋯

We now split the higher-order terms into even and odd parts under the parity,

φ_n = φ_n^{(+)} + φ_n^{(−)},   P(φ_n^{(±)}) = ± φ_n^{(±)}.

It turns out that, for the case γ = 0, the odd part φ_1^{(−)} of the first-order solution satisfies a non-homogeneous equation, while the even part φ_1^{(+)} satisfies a homogeneous one. Since the homogeneous equation always admits the trivial solution, one can always choose a first-order solution which is odd under the parity. If one makes such a choice then it turns out that the second-order solution has similar properties, i.e. φ_2^{(−)} satisfies a non-homogeneous equation and φ_2^{(+)} a homogeneous one. One can then again choose the second-order solution to be odd, and the process repeats in all orders. For a detailed account see [4]. This argument works for the potential (2.2) with γ = 0 but not otherwise. Therefore, we can say that the theory (2.1) with the potential (2.2) with γ = 0 possesses subsets of solutions which constitute a quasi-integrable theory. These are the facts that we want to check with our numerical simulations, which we now describe.
The numerical simulations
Our numerical simulations were performed using the 4th-order Runge-Kutta method to simulate the time evolution. As in [4] we experimented with various grid sizes and numbers of points; most simulations were performed on lattices of 10001 lattice points with a lattice spacing of 0.01 (so they covered the region (−50.0, 50.0)). The time step dt was 0.0001.
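A minimal sketch of such an evolution loop is given below, assuming the equation of motion (2.17) with a second-order spatial Laplacian; the sine-Gordon derivative dV/dφ shown uses an assumed normalization, and the deformed potential of (2.2) would be supplied in its place.

```python
# RK4 time evolution of phi_tt = phi_xx - dV/dphi on the grid quoted above.
import numpy as np

N, DX, DT = 10001, 0.01, 0.0001

def dV_sg(phi):
    return np.sin(2 * phi)        # from V = (1 - cos(2*phi))/2, an assumed normalization

def rhs(state, dV):
    phi, pi_ = state              # state = (field, its time derivative)
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / DX**2
    return np.array([pi_, lap - dV(phi)])

def rk4_step(state, dV):
    k1 = rhs(state, dV)
    k2 = rhs(state + 0.5 * DT * k1, dV)
    k3 = rhs(state + 0.5 * DT * k2, dV)
    k4 = rhs(state + DT * k3, dV)
    return state + DT / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The absorbing edges described in the next paragraphs can then be approximated by scaling the time derivative pi_ by a factor slightly below one in the region 49.5 < |x| < 50 after each step.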
The breather-like structures were placed at x ∼ 0 and stretched up to ±20.0 from their positions; hence at the edges of the grid the fields resembled the vacuum configurations, modified only by the waves emitted during the evolution.
At the edges of the grid (i.e. for 49.50 < |x| < 50.00) we absorbed the waves reaching this region (by decreasing the rate of change of the magnitude of the field there).
As a consequence, the total energy was not conserved, but the only energy absorbed was that of the radiation waves. Hence the total remaining energy was effectively the energy of the field configuration we wanted to study.
To start our simulations we took a breather configuration of the sine-Gordon model (3.1) and then performed the change of variables (2.3) to obtain the corresponding φ field. We then used this field and its time derivative at t = 0 as the initial conditions for the simulations.
The exact breather solution of the sine-Gordon model (2.6) is given by [7]

ψ = 2 arctan[ (√(1 − ν²)/ν) sin(ν t̄) / cosh(√(1 − ν²) x̄) ],    (3.1)

where v is the speed of the breather, ν is its frequency (−1 < ν < 1), and x̄ and t̄ are the Lorentz-boosted coordinates

x̄ = (x − v t)/√(1 − v²),   t̄ = (t − v x)/√(1 − v²).    (3.2)

In all our simulations we looked at the time dependence of breather-like field configurations initially at rest, i.e. with v = 0. Therefore, the initial configuration of the breather at t = 0, with v = 0, is

ψ(x, 0) = 0,   ∂ψ/∂t (x, 0) = 2 √(1 − ν²) / cosh(√(1 − ν²) x).    (3.3)

The input for our program is the initial configuration of the φ-field defined by the transformation (2.3), and so

φ(x, 0) = 0,   ∂φ/∂t (x, 0) = (1/c) ∂ψ/∂t (x, 0),    (3.4)

using dψ/dφ = c at φ = 0. From (2.1) we see that the initial energy of the breather-like configuration of the model was

E = 2 ∫ dx [ (1/2)(∂φ/∂t)² + (1/2)(∂φ/∂x)² + V(φ) ].    (3.5)

The factor 2 in front of the integral was put in to match the definition of the energy in the numerical code. For the initial configuration (3.4) we have dφ/dx|_{t=0} = 0 and V(φ = 0) = 0 (see (2.2)), and so the initial energy was

E_0 = 8 √(1 − ν²) / c².    (3.6)

Note that the energy of the initial configuration has the same ν-dependence as the sine-Gordon breather, but it is rescaled by the factor 1/c², with c given by (2.4), and so it decreases with the increase of the deformation parameter ε. This rescaling factor and its decrease with ε have an interesting effect, as we will demonstrate in the discussion of the simulations.
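The initial data and the energy rescaling above can be sketched as follows; the relation dψ/dφ = c at φ = 0 is inferred from the 1/c² rescaling noted in the text, and the factor 2 matches the code convention quoted there.

```python
# Initial data for the quasi-breather at rest, and its initial (kinetic) energy.
import numpy as np

x = np.linspace(-50.0, 50.0, 10001)
DX = x[1] - x[0]

def initial_data(nu, c):
    k = np.sqrt(1.0 - nu**2)
    phi0 = np.zeros_like(x)                      # phi(x, 0) = 0
    phidot0 = (2.0 * k / c) / np.cosh(k * x)     # d(phi)/dt at t = 0, from (3.4)
    return phi0, phidot0

def initial_energy(phidot0):
    # factor 2 matches the energy convention used in the numerical code
    return 2.0 * np.sum(0.5 * phidot0**2) * DX

# e.g. initial_energy(initial_data(0.5, 1.0)[1]) is ~ 8*sqrt(1 - 0.25) = 6.93
```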
We have performed several simulations for different values of the frequency ν of the breather in (3.1) and for various values of ε and γ. For some of these simulations we have also calculated the anomaly of the first non-trivial quasi-conserved charge given in (2.23), namely α^{(3)} and Q^{(3)}. In the Lax potentials (2.11) we have chosen the parameters ω = 2 and m = 1. The reason for this choice is that these are the values that make the sine-Gordon potential (2.18), for which the anomaly (2.16) vanishes, equal to (2.5) when ε is set to zero. Then from (2.21), using (2.23) and (2.16), we find that

dQ^{(3)}/dt = α^{(3)},   with   α^{(3)} = ∫ dx X γ^{(3)}.    (3.8)

We have also computed the so-called integrated anomaly, given by (see (2.22))

β^{(3)}(t) = ∫_{t_0}^{t} dt′ α^{(3)}(t′),    (3.9)

where t_0 is the initial time of the simulation, usually taken to be zero.
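Numerically, the integrated anomaly is just a running quadrature of the sampled anomaly, e.g.:

```python
# Running time integral of sampled anomaly values via the trapezoidal rule.
import numpy as np

def integrated_anomaly(t, alpha3):
    """t, alpha3: 1-D arrays of sample times and anomaly values alpha^(3)(t)."""
    beta3 = np.zeros_like(alpha3)
    dt = np.diff(t)
    beta3[1:] = np.cumsum(0.5 * (alpha3[1:] + alpha3[:-1]) * dt)
    return beta3
```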
In the table below we summarise the main features of the simulations we performed. We can clearly observe two types of effects playing a role in their outcomes.

First, as predicted by our analytical calculations based on the parity argument, the breathers for the cases with γ = 0 tend to live longer when compared to similar breathers for the cases where γ ≠ 0. In order to see this more clearly, let us look at the simulations shown in Figs. 13, 14 and 15, all corresponding to ε = 0.3 and ν = 0.5. In Fig. 13, corresponding to the case γ = 0, we see that the energy becomes almost constant after the initial stabilization of the solution, and the lifetime of the breather is extremely long. The simulation was stopped after 1.1 × 10⁶ units of time and the energy was still almost constant (in fact it decreased all the time, but extremely slowly). In Fig. 14, corresponding to the case γ = 0.2, we observe that the energy is still decreasing after 1.1 × 10⁶ units of time of simulation. So, despite the fact that the fall of the energy is slow, we see the γ ≠ 0 effect playing an important role in increasing the speed of the decrease. Now, in Fig. 15, corresponding to the case γ = 0.5, we see that the energy drops much faster; the breather starts moving after about 9 × 10⁴ units of time, and it has bounced off the edges of the grid twice during the simulation. So, by increasing γ, the phenomenon we have called quasi-integrability seems to have almost disappeared. Notice also that for smaller values of ε the effect of γ ≠ 0 is not so visible. However, we do notice in Figures 2 and 3, corresponding to ε = 0.01 and ν = 0.1, that the breather for γ = 0.3 is short-lived as compared to that for γ = 0. The same effect is visible in Figures 4, 5 and 6, corresponding to ε = 0.01 and ν = 0.5. As the values of γ are increased from 0.0 to 0.2 and then to 0.5, the lifetimes of the breathers decrease. The same effect is not seen, however, in Figures 7, 8, 9 and 10, corresponding to the case ε = 0.01 and ν = 0.95. We cannot see much difference in the behaviour of their energies as the value of γ is increased. They seem to have stabilized quite well after 3 × 10⁴ units of time. These cases might be feeling the influence of the second effect, which we now describe.
Looking at the table above, one can easily spot a correlation between the values of the energy (3.6) of the initial configuration (3.4) used in the simulations and the lifetime of the breathers: the higher the energy, the shorter the lifetime of the breathers. In fact, we started all our simulations with φ = 0 on all sites of the grid. The energy (3.6) corresponds to the total kinetic energy given to the initial configuration. Note that the potential energy is zero because V(φ = 0) = 0, and the elastic potential energy is also zero because dφ/dx = 0 on all sites of the grid at t = 0. So, the smaller the initial kinetic energy, the smaller the maximal value the φ-field can reach during the oscillations of the breather. Indeed, departing from φ = 0 the potential energy V increases, as seen from the plot of the potential in Fig. 1. But if the value of φ remains small, the breather oscillates only inside the part of the well of the potential around φ = 0 where V varies very little with ε. So, the breather can stay close to the breather solution of the sine-Gordon model, which is integrable. Therefore, one would expect such a breather to live longer. Indeed, looking at Figures 7-16 one observes that for energies smaller than ∼2.3 the amplitude of oscillations of the field φ is never larger than 0.6. For these values of the amplitude one sees from Fig. 1 that the field φ does not reach regions where the potential departs significantly from the sine-Gordon potential. In all these cases the breathers live quite long, except for the case of Figure 15, where γ = 0.5 and, as discussed above, the lack of good parity properties of the solution makes it short-lived. The dependence on γ only affects the initial drop of energy, but is not very significant from the point of view of whether the breather is long-lived or not. This is clearly seen from the cases involving an initially high ν, as shown in Figures 7, 8, 9 and 10. These cases correspond to γ = 0, 0.3, 0.5 and 0.7, respectively. We note that all these breathers are long-lived. Their initial energies increase (but very little) with γ and, in the initial evolution, decrease more for larger γ; but then the decrease slows down and the breathers appear to be very long-lived. It would be interesting to check whether, at some stage, their energies 'cross', but the decrease is so slow and the gaps are still large enough that one would have to wait extremely long to observe this, if it ever happens, so this is not practical.
The only exception to our observation above is the case shown in Fig. 15, where γ = 0.5 (and ν is smaller) and the effect of the lack of parity properties, as discussed above, plays an important role and makes the breather die faster. The fact that the energy (3.6) decreases with the increase of the value of ε makes it possible for us to find very long-lived breathers for large values of ε. Note that the increase of the frequency ν of the initial configuration also plays a role in favouring a long lifetime of the breathers, since it decreases the value of the initial energy (3.6).
We can also observe a correlation between the anomaly α^{(3)}, given in (3.8), and the integrated anomaly β^{(3)}, given in (3.9), and the lifetime of the breather solutions. In Figures 7, 8, 12 and 16, where the breathers are long-lived, we see that the anomaly α^{(3)} oscillates steadily within a fixed interval. The integrated anomaly β^{(3)}, on the other hand, presents a very slow drift of the order of one part in 10 over 10⁴-10⁵ units of time. This is quite a long range of time integration, and the drift could well be within the numerical errors, which are difficult to estimate in these cases. In Figures 2 and 3, where the breathers are short-lived, we see that the anomaly α^{(3)} does not oscillate within a fixed interval and the integrated anomaly β^{(3)} does vary a lot. Thus, there is indeed a correlation between long-lived/short-lived breathers and well/badly behaved anomalies. However, the effect of the γ parameter is not very visible in the behaviour of the anomalies. It seems that the other effect discussed above, connected with the low initial kinetic energy, is predominant in these cases.
We have also observed that the energy of the breather-like solutions, after they have stabilized, seems to depend on the frequency in a way very similar to the exact sine-Gordon breather, i.e. E ∼ √(1 − ν²). In Figure 17 we show, for the case of ε = 0.3, ν = 0.5 and γ = 0.0, the time dependence of the field φ at x = 0: in the left panel at the beginning of the simulation, and in the right panel at a much later time. From these two plots it is very clear that at first the breather oscillates with a period of T ≈ 6.8 and much later this period has decreased to T ≈ 6.628571. Thus the frequency of this quasi-breather has increased from ν ≈ 0.923997839264706 to ν ≈ 0.947894335107758. Initially the energy was E_in = 1.7491569855 and at the end E_fin = 1.409446715. Thus the reduction of energy was roughly E_in/E_fin = 1.7491569855/1.409446715 = 1.2410238476989, which is quite small.
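The quoted periods can be extracted from the recorded time series φ(x = 0, t) by locating successive zero crossings, as in this illustrative sketch:

```python
# Estimate the quasi-breather oscillation frequency from phi(0, t).
import numpy as np

def oscillation_frequency(t, phi0_t):
    """t, phi0_t: 1-D arrays of sample times and phi at x = 0."""
    up = np.flatnonzero((phi0_t[:-1] < 0) & (phi0_t[1:] >= 0))  # upward zero crossings
    frac = -phi0_t[up] / (phi0_t[up + 1] - phi0_t[up])          # linear interpolation
    crossings = t[up] + frac * (t[up + 1] - t[up])
    T = np.diff(crossings).mean()                               # mean period
    return 2.0 * np.pi / T                                      # frequency nu
```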
Note that for genuine breathers of the sine-Gordon model the energy is proportional to √(1 − ν²), so let us check what would have happened had we used this fact to estimate the energies in this case too (as our field configuration resembles the sine-Gordon model's breather so well). We would have had

E_in/E_fin ≈ √(1 − ν_in²)/√(1 − ν_fin²) ≈ 1.20,

thus showing that this approximation is good to within ∼7%.
We have also looked at the energy drop in simulations involving small ε. In such cases we had an initial drop of the energy (like for larger values of ε) followed by a motion of the breather towards the boundaries, with reflections off the boundaries (each reflection producing a further sharp drop of the energy). This is clear from the simulation shown in the corresponding figure. Hence, again, we have results in good agreement with our expectations.
In Figures 5 and 6 we present plots of the time dependence of the energy and of the field at x = 0 for ε = 0.01, but this time for larger values of γ, namely γ = 0.2 (Figure 5) and γ = 0.5 (Figure 6). The results are not that different from what we saw for γ = 0, except that the decrease of energy is progressively greater. In fact, in each case, the energy starts decreasing quite fast, but then the decrease slows down. Again, the breathers start moving and so, again, we could calculate the increase of the frequencies of oscillations and, as before, compare our expressions with the decrease of the energies.
In the case of γ = 0.2 the initial and final energies are 6.3787993275 and 5.0901576543, respectively. Hence

(6.3787993275/5.0901576543)² ≈ 1.57041853413227.   (3.13)

The frequencies are ν_i = 0.584336233551001 and ν_f = 0.75398223684, respectively, and so we get

(1 − ν_i²)/(1 − ν_f²) ≈ 1.526,

showing that, again, both sets of numbers are very close together.
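As a sanity check on these numbers, the short Python sketch below simply repeats the arithmetic of both comparisons from the energies and frequencies quoted above (no new data are involved):

import math

# Energies and frequencies quoted in the text for the two runs discussed above.
cases = {
    "eps=0.3, gamma=0.0":  dict(E_i=1.7491569855, E_f=1.409446715,
                                nu_i=0.923997839264706, nu_f=0.947894335107758),
    "eps=0.01, gamma=0.2": dict(E_i=6.3787993275, E_f=5.0901576543,
                                nu_i=0.584336233551001, nu_f=0.75398223684),
}

for label, c in cases.items():
    measured = c["E_i"] / c["E_f"]                        # observed energy ratio
    predicted = math.sqrt((1 - c["nu_i"]**2) / (1 - c["nu_f"]**2))
    deviation = abs(measured - predicted) / measured
    print(f"{label}: measured {measured:.4f}, "
          f"sine-Gordon prediction {predicted:.4f}, deviation {deviation:.1%}")

Both deviations come out at a few percent, comfortably within the ∼7% accuracy quoted above.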
Further Comments and Some Conclusions
In this paper we have performed further studies of quasi-integrability based on the observation in [2], [3] about the behaviour of the anomaly X of the curvature (2.15) of the Lax potentials, which distinguishes integrable models from non-integrable ones. We have seen that the anomaly integrated in time (see, for instance, (3.9)) also vanishes in some non-integrable models for field configurations which possess the parity symmetries discussed in Section 2.
This observation was originally made in some very specific models, and here we have tried to assess its general validity. So, in [4] we constructed three classes of models: one with the symmetry, one without it, and one, depending on two parameters, which allowed us to interpolate between the two. Our results have confirmed the validity of our assumption (and so extended the class of models in which our observation holds) and have also allowed us to study the way the anomaly varies as we move away from the models with this extra symmetry. These results were first tested in great detail for the scattering of kinks of these models. Of course, in such scatterings the kinks interact with each other only over very short periods of time (when they are close to each other). So we decided to look also at systems involving breather-like structures, in which the kinks and anti-kinks, being bound into breathers, interact with each other at all times. In our previous work [4] we had only glanced at such configurations. As the breather-like configurations depend on many parameters, in this paper we have concentrated on studying them in detail, in particular when the symmetry is present and when it is not. As expected, we have found that the symmetry helps a lot in maintaining the validity of the ideas of quasi-integrability. When the symmetry is present the energy decrease is much reduced and the configurations resemble, in their behaviour, the sine-Gordon breathers. When the symmetry is broken the breathers decay quite rapidly and the range of validity of quasi-integrability is much reduced.
However, the symmetry is only one of many topics to investigate for the breather-like configurations. We have also looked at the way the decay takes place and the dependence of this behaviour on various parameters of the model. One of the most interesting outcomes of our simulations is the understanding of the way the decay takes place: the breathers increase the frequency of their oscillations. This is quite clear from the energy of our initial configuration (3.6), and it is remarkable that, as the breather-like field loses its energy, this formula still holds true, with only ν in it increasing. In our simulations some breather-like configurations started moving and lost their energy also by reflecting from the edges of the grid. For them the frequency increased even more, so that their total energy (consisting of the energy of the oscillation and the energy of the motion) was comparable to the final energy of the field, and much lower than the original energy.
Finally, our work has led to the discovery of many long-lived breather-like fields a long way away (i.e. for large perturbations) from the sine-Gordon model. This was particularly true for the cases with the symmetry, and so it extends the range of validity of quasi-integrability.
"year": 2016,
"sha1": "dc7d56665782d9e30140c81c922367909e576f08",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1404.5812.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dc7d56665782d9e30140c81c922367909e576f08",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Role of Mitogen-Activated Protein Kinases in Peptidoglycan-Induced Expression of Inducible Nitric Oxide Synthase and Nitric Oxide in Mouse Peritoneal Macrophages: Extracellular Signal-Related Kinase, a Negative Regulator
ABSTRACT The expression of inducible nitric oxide synthase (iNOS) and the production of nitric oxide (NO) are important host defense mechanisms against pathogens in mononuclear phagocytes. The objectives of this study were to examine the roles of mitogen-activated protein kinases (MAPKs) and transcription factors (nuclear factor-κB [NF-κB] and activating protein 1 [AP-1]) in peptidoglycan (PGN)-induced iNOS expression and NO production in macrophages. PGN is a cell wall component of Gram-positive bacteria that stimulates inflammatory responses both ex vivo and in vivo. PGN stimulates the activation of all three classes of MAPKs, extracellular signal-related kinase (ERK), c-Jun N-terminal kinase (JNK), and p38 MAPK, in macrophages, albeit with differential activation kinetics. Using a selective inhibitor of JNK (SP600125) and JNK1/2 small interfering RNA (siRNA) knocked-down macrophages, it was observed that PGN-induced iNOS and NO expression was significantly inhibited. This suggested that JNK MAPK plays an essential role in PGN-induced iNOS expression and NO production. In contrast, inhibition of the ERK pathway using PD98059 dose dependently enhanced PGN-induced iNOS expression and NO production. PGN-induced ERK activation was attenuated in ERK1/2 siRNA knocked-down macrophages; however, NO and iNOS expression were significantly enhanced. An electrophoretic mobility shift assay showed that SP600125 inhibited PGN-induced NF-κB and AP-1 activation, whereas inhibition of the ERK pathway enhanced NF-κB activation, but with no effect on AP-1. These results indicate that JNK MAPK positively regulates PGN-induced iNOS and NO expression by activating the NF-κB and AP-1 transcription factors, whereas the ERK pathway plays a negative regulatory role by affecting NF-κB activity.
Sepsis represents a major challenge to the healthcare system, affecting about 751,000 people, causing ~215,000 deaths, and costing nearly $17 billion annually in the United States (35). According to a recent report, the incidence of sepsis is rising at an astonishing annual rate of ~8.7%, despite substantial prevention efforts and advancements in treatment (35,49). Gram-positive bacteria have become the predominant organisms in sepsis cases since 1987 and accounted for >52% of all cases of sepsis in 2000 (35). Staphylococcus aureus is a leading cause of nosocomial pneumonia and wound infections and is one of the bacteria most commonly isolated from patients with sepsis (35,49). Peptidoglycan (PGN), the main cell wall component of Gram-positive bacteria, activates host cells through the pattern recognition receptors Toll-like receptor 2 (TLR2) and CD14 and induces the expression of more than 120 genes in human monocytes (23,43,51). PGN has been reported to stimulate a variety of signal transduction elements such as MyD88, TRAF, protein tyrosine kinases, protein kinase Cδ, and mitogen-activated protein kinases (MAPKs) in macrophages and induces the production of inflammatory cytokines and nitric oxide (5,8,19,48,50,51). However, the roles of MAPKs in PGN-induced activation of macrophages are still not well defined. The MAPKs constitute an important group of serine/threonine signaling kinases that modulate the phosphorylation, and therefore the activation status, of transcription factors, linking transmembrane signaling with gene induction in the nucleus. MAPKs are signaling molecules that play important roles in inflammatory processes. At least three MAPK cascades have been well described: extracellular signal-regulated kinase (ERK), p38, and c-Jun N-terminal kinase (JNK)/stress-activated protein kinase (15,29,42), and PGN has been reported to activate all three pathways (19). However, the physiological relevance of such MAPK signaling to macrophage function remains unclear.
Activated macrophages release oxygen and nitrogen radicals that are important bactericidal and cytostatic molecules (33,38). However, massive production of these mediators can exert detrimental effects on the organism, as occurs during septic shock or persistent local inflammatory processes (7,39). For this reason, the study of the mechanism of action of anti-inflammatory cytokines and drugs has constituted a subject of current interest (7,27,28,31). It is well known that inducible nitric oxide synthase (iNOS) expression is regulated mainly at the transcription level due to the activation of several transcription factors that bind to the promoter region of the iNOS gene, such as nuclear factor-κB (NF-κB), activating protein 1 (AP-1), STAT1, and IRF-1 (32-34, 52, 54, 55). Several data point to NF-κB activation as a critical event in the expression of iNOS (16,46,55), and most studies focused on the analysis of anti-inflammatory mechanisms have suggested a prominent role for the inhibition of this transcription factor in their mode of action (2,3).
The present investigations were undertaken to define the regulatory role of different MAPKs in iNOS expression and nitric oxide (NO) production in mouse peritoneal macrophages activated in vitro with PGN from S. aureus. Our data support the hypothesis that while activation of all three subfamilies of MAPKs occurs in macrophages stimulated with PGN, the activation of JNK MAPK, which in turn activates NF-κB and AP-1, is necessary for iNOS and NO expression. Inhibition of ERK MAPK by PD98059 or small interfering RNA (siRNA) knockdown of ERK1/2 in macrophages results in an upregulation of PGN-induced iNOS expression, mainly through a mechanism that involves an enhanced activation of NF-κB relative to macrophages treated with PGN alone.
MATERIALS AND METHODS
Mice. Inbred strains of BALB/c mice of either sex at 8 to 10 weeks of age were used for obtaining peritoneal macrophages.
Inhibitor studies. Macrophage monolayers were cultured in serum-free medium in 24-well culture plates (10^6 cells/well) with SP600125 (10, 30, and 50 µM), PD98059 (10, 30, and 50 µM), or SB202190 (10 µM) or a vehicle control (dimethyl sulfoxide at a concentration of 0.1%) for 30 min. The monolayers were then washed twice with warm incomplete medium, followed by further culture in complete medium in the presence or absence of PGN in a CO2 incubator for 6 or 18 h. All inhibitors were used at the generally recommended concentrations (14). After inhibitor treatment, cell death was checked by using an MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay (36).
Gene knockdown studies. Macrophage monolayers were cultured overnight in complete medium and transfected with a JNK1/JNK2 or ERK1/ERK2 siRNA cocktail or with scrambled siRNA (Ambion, Austin, TX) using Lipofectamine 2000 (Invitrogen, Carlsbad, CA). siRNA-Lipofectamine 2000 complexes were prepared according to the manufacturer's instructions using 25 µM siRNA and 1 µl of Lipofectamine 2000. The final siRNA concentration was 20 nM. Transfection was performed for 5 h. After 5 h, the medium containing the siRNA-Lipofectamine 2000 complex was removed and replaced by RPMI supplemented with 10% fetal calf serum. The medium was changed 24 h after transfection and, after an additional 48 h, the macrophages were stimulated with PGN for 30 min and 18 h, and the supernatant and cell lysate were used for NO detection and immunoblotting, respectively.
Real-time RT-PCR analysis. Total RNA was isolated from the macrophages by using TRI reagent according to the supplier's instructions. Real-time RT-PCR was done by using a single-step real-time RT-PCR kit in a Bio-Rad iQ5 real-time PCR machine (Bio-Rad Laboratories, Hercules, CA) according to the SYBR green detection protocol. The following gene-specific primers were used to amplify the genes: GAPDH, forward (TGA CCA CAG TCC ATG CCA TC) and reverse (GAC GGA CAC ATT GGG GGT AG); and iNOS, forward (ACA TCG ACC CGT CCA CAG TAT) and reverse (CAG AGG GGT AGG CTT GTC TC). The primers were designed using Beacon Designer software.
RT was performed for 30 min at 50°C, and then the reverse transcriptase was inactivated at 95°C for 15 min. Amplification was performed with cycling conditions of 94°C for 15 s, 56°C for 30 s, and 72°C for 30 s for 35 cycles. After the amplification protocol was complete, the PCR product was subjected to melting-curve analysis using Bio-Rad iQ5 software. The comparative cycle threshold method (the ΔΔC_T method) was used for relative quantitation of gene expression (30). The C_T values were calculated automatically by the software after completion of a run. The ΔΔC_T was calculated using the formula ΔΔC_T = (C_T,GOI − C_T,HG)_treated − (C_T,GOI − C_T,HG)_control, where GOI is the gene of interest and HG is the housekeeping gene.
The fold increase in gene expression was determined by using the formula: fold expression = 2^(−ΔΔC_T).
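As an illustration of this calculation, the short Python sketch below computes a fold change from hypothetical C_T values (the numbers are invented for illustration and are not data from this study):

# Hypothetical CT values illustrating the comparative-CT calculation above.
ct = {
    "control": {"iNOS": 28.0, "GAPDH": 18.0},
    "PGN":     {"iNOS": 23.5, "GAPDH": 18.2},
}

def fold_change(treated, control, goi="iNOS", hg="GAPDH"):
    d_treated = treated[goi] - treated[hg]   # delta-CT, treated sample
    d_control = control[goi] - control[hg]   # delta-CT, control sample
    ddct = d_treated - d_control             # delta-delta-CT
    return 2 ** (-ddct)

print(fold_change(ct["PGN"], ct["control"]))  # -> 2**4.7, i.e. ~26-fold induction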
Measurement of nitrite production. The concentration of nitrite, the stable end product of NO, was determined on the basis of the Griess reaction (17). Nitrite content was quantified by extrapolation from the sodium nitrite standard curve in each experiment.
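The standard-curve extrapolation amounts to a simple linear fit; the sketch below uses hypothetical standard-curve readings (both the concentrations and the absorbances are invented for illustration, and numpy is assumed to be available):

import numpy as np

# Hypothetical sodium nitrite standards (µM) and their absorbance readings.
std_conc = np.array([0.0, 6.25, 12.5, 25.0, 50.0, 100.0])
std_abs  = np.array([0.02, 0.07, 0.12, 0.23, 0.45, 0.88])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear standard curve

def nitrite_um(absorbance):
    """Nitrite concentration read off the fitted standard curve."""
    return (absorbance - intercept) / slope

print(round(nitrite_um(0.31), 1))  # e.g. a supernatant reading of A = 0.31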
Electrophoretic mobility shift assay (EMSA). Biotin-end-labeled single-stranded oligonucleotide probes (Metabion, Martinsried, Germany) were annealed by heating equimolar amounts of complementary strands to 95°C for 5 min in annealing buffer (10 mM Tris-HCl [pH 7.5], 0.1 M NaCl, 1 mM EDTA) and slowly cooling the reaction mixture to room temperature. Macrophages were pretreated with or without SP600125 or PD98059 for 30 min and then treated with PGN (10 µg/ml) for 1 h. Nuclear proteins were isolated by using an NE-PER nuclear and cytoplasmic extraction reagent kit (Thermo Scientific, Rockford, IL). For the binding reaction and detection, a LightShift Chemiluminescent EMSA kit and a Chemiluminescent nucleic acid detection module (both from Thermo Scientific), respectively, were used. Briefly, 40 fmol of labeled probe was incubated with nuclear extract (~5 µg of protein) in a total volume of 20 µl for 20 min at room temperature in 1× binding buffer. To prevent nonspecific binding of nuclear proteins, 100 ng of poly(dI-dC) was added, and the specificity of retarded bands was confirmed by including a 100× excess of unlabeled oligonucleotides. Protein-DNA complexes were separated from unbound DNA by 6% (wt/vol) native PAGE run in 0.5× Tris-borate-EDTA. The sequences of the probes were as follows: NF-κB probe, 5′-biotin-AGTTGAGGGGACTTTCCCAGGC-3′; and AP-1 probe, 5′-biotin-CGCTTGATGACTCAGCCGGAA-3′.
Densitometric analysis was carried out using GeneTools software from Syngene, a division of Synoptics, Ltd., United Kingdom. The density of each band is represented as raw volume.
Western blot analysis. The macrophage monolayers were washed with ice-cold phosphate-buffered saline containing 1 mM Na3VO4 and lysed in 15 µl/cm² of lysis buffer (20 mM Tris-HCl [pH 8], 137 mM NaCl, 10% glycerol [vol/vol], 1% Triton X-100 [vol/vol], 1 mM Na3VO4, 2 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 20 mM leupeptin, 0.15 U of aprotinin/ml) for 20 min at 4°C. The lysates were centrifuged at 15,000 × g for 20 min, and the supernatants were separated on 10% sodium dodecyl sulfate (SDS)-polyacrylamide gels. The separated proteins (40 µg/lane) were transferred to a nitrocellulose membrane (1 h at 100 V) by using a Bio-Rad Mini Transblotter, and each membrane was blocked with 5% serum for 2 h at room temperature, followed by incubation with primary antibody for 1 h at room temperature and then with horseradish peroxidase-labeled secondary antibody for 1 h at room temperature. The blot was developed using ECL reagent. To monitor equal loading of protein, Western blot analysis with an antibody directed against actin was performed.
Statistical analysis. Results are expressed as means ± the standard deviations (SD) of triplicate determinations. Statistical significance was determined by using a Student t test (P < 0.05).
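For readers reproducing this analysis, a minimal Python sketch of the test is shown below; the triplicate values are hypothetical, and scipy is assumed to be available:

from scipy import stats

# Hypothetical triplicate nitrite readings (µM): PGN alone vs. PGN + SP600125.
pgn    = [42.1, 39.8, 44.0]
pgn_sp = [18.3, 20.1, 17.6]

t_stat, p_value = stats.ttest_ind(pgn, pgn_sp)  # two-sample Student t test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")   # significant if P < 0.05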
RESULTS

MAPKs are activated by PGN.
We have previously reported that activation of macrophages with PGN from S. aureus leads to iNOS expression at both the mRNA and the catalytic levels (5). To explore whether PGN-induced iNOS expression is mediated through MAPK activation, the role of the three MAPKs in PGN-induced signal transduction was investigated by detecting their dually phosphorylated (Thr/Tyr) forms by Western blotting. The phosphorylation of ERK and p38 MAPK was detected as early as 5 min and reached a maximum at 30 min (Fig. 1). In contrast, the phosphorylation of ERK was sustained, although slight decreases in phospho-ERK levels were observed after 2 h of PGN stimulation (Fig. 1). These results demonstrate that PGN induces early phosphorylation of ERK and p38, whereas JNK phosphorylation occurs later.

Activation of JNK MAPK is required for NO production and iNOS expression in response to PGN. To further understand the mechanism of PGN-induced iNOS and NO expression, the role of JNK was investigated by using a selective inhibitor, SP600125 (4). Macrophages were pretreated with different concentrations of the JNK inhibitor SP600125 for 30 min and then incubated with PGN for 30 min, 6 h, or 18 h. As shown in Fig. 2A, after 30 min SP600125 dose dependently inhibited the PGN-induced phosphorylation of c-Jun. SP600125 strongly inhibited PGN-induced NO accumulation in a concentration-dependent fashion (Fig. 2B). At 10 µM SP600125, there was ca. 20% inhibition of NO accumulation, and with 30 and 50 µM SP600125 there was 58 and 89% inhibition, respectively, in macrophages treated with PGN for 18 h. Polymyxin B (10 µg/ml) markedly attenuated LPS (10 µg/ml)-induced but not PGN-induced iNOS/NO production (see Fig. S1 in the supplemental material). It was further observed that there was an 18-fold increase in the expression of iNOS mRNA in macrophages activated with PGN for 6 h. However, macrophages pretreated with 10, 30, or 50 µM SP600125 and then incubated for 6 h with PGN showed 12.6-, 4.4-, and 2.78-fold-increased expression of iNOS mRNA, respectively (Fig. 2C). To determine whether this inhibition also occurred at the level of iNOS protein expression, macrophages were similarly pretreated with SP600125 before PGN treatment for 18 h, and whole-cell lysates were separated by SDS-PAGE and immunoblotted with an iNOS polyclonal antibody. Significant inhibition of iNOS expression was observed at 30 µM SP600125, and expression was completely inhibited at 50 µM (Fig. 2D). No inhibition of PGN-induced NO production was observed in macrophages pretreated with different concentrations of the p38 inhibitor SB202190.

FIG. 1. Activation (phosphorylation) of JNK, ERK, and p38 MAPKs on PGN treatment. Mouse peritoneal macrophages were treated with PGN (10 µg/ml) for 0, 5, 15, 30, 60, or 120 min. Macrophages were lysed, and the lysates were analyzed by immunoblotting with antibodies to phospho-JNK, phospho-ERK, and phospho-p38. The total p38 in each sample was used to ensure equal protein loading.
Effects of JNK1 and JNK2 siRNA on PGN-induced JNK activation and iNOS-NO expression in macrophages.
To further confirm the role of JNK MAPK in PGN-induced iNOS and NO expression in macrophages, JNK1 and JNK2 siRNA knocked-down macrophages were used. The molecular masses of 46 and 54 kDa represent the JNK1 and JNK2 isoforms (25). JNK1 and JNK2 siRNAs attenuated the expression of the JNK1 (46-kDa) and JNK2 (54-kDa) proteins in PGN-treated macrophages (Fig. 3A). Significant inhibition of PGN-induced JNK and c-Jun phosphorylation was observed in JNK1/2 knocked-down macrophages compared to scrambled siRNA (Scr) (Fig. 3A and B). PGN-induced iNOS and NO expression was further investigated in JNK1/2 knocked-down macrophages. Figure 3C and D shows that PGN-induced NO production and iNOS expression are significantly inhibited in JNK1/2 knocked-down macrophages compared to scrambled siRNA.
PGN-induced expression of iNOS and NO is inhibited by the ERK pathway.
To determine the role of the ERK pathway in the regulation of iNOS and NO expression, PD98059, a pharmacologic inhibitor of MEK1, the upstream kinase of p42 and p44 ERK, was used (18). Figure 4A shows that PD98059 dose dependently suppressed the PGN-induced phosphorylation of ERK1/2. Macrophages were pretreated with different doses of PD98059 (10, 30, and 50 µM) for 30 min and then further incubated with PGN for 18 h. As demonstrated in Fig. 4B, inhibition of the ERK pathway with PD98059 (at 30 or 50 µM) further augmented PGN-induced NO production in a dose-dependent manner. To determine whether PD98059 was inherently capable of inducing NO production, macrophages were pretreated with 50 µM PD98059 and then further activated with LPS (10 µg/ml) or IFN-γ (100 U/ml). No increase in NO production was observed in macrophages treated with IFN-γ, while LPS treatment resulted in enhanced iNOS expression and NO production (see Fig. S3 in the supplemental material).
It was further observed that macrophages treated with PGN showed significantly increased transcription and translation of iNOS at 6 h (Fig. 4C). As shown in Fig. 4D, a similar increase in iNOS protein expression was observed with ERK pathway inhibition compared to that seen with PGN stimulation alone. Significant augmentation of iNOS protein was observed with 30 and 50 µM PD98059. These results demonstrate that activation of the ERK pathway negatively regulates PGN-induced iNOS expression and NO production. It was also observed in immunoblotting analyses that PD98059 does not inhibit PGN-induced phosphorylation of JNK and p38 (see Fig. S4 in the supplemental material).
Effects of ERK1 and ERK2 siRNA on PGN-induced ERK activation and iNOS-NO production in macrophages.
To further confirm the negative regulatory role of ERK in iNOS expression and NO production, ERK1 and ERK2 knocked-down macrophages were used. The molecular masses of 44 and 42 kDa represent the ERK1 and ERK2 isoforms (6). The PGN-induced expression of both ERK1/2 and phosphorylated ERK1/2 was attenuated in ERK1/2 knocked-down macrophages compared to scrambled siRNA (Fig. 5A). As shown in Fig. 5B and C, ERK1/2 knocked-down macrophages showed enhanced PGN-induced NO production and iNOS protein expression compared to scrambled siRNA. An EMSA showed that SP600125 inhibited PGN-induced NF-κB (Fig. 6A) and AP-1 (Fig. 6B) DNA-binding activity at 1 h, whereas PD98059 (50 µM) further enhanced NF-κB activation (compare lanes 2 and 5 in Fig. 6A) but had no effect on AP-1 DNA-binding ability (compare lanes 2 and 4 in Fig. 6B). These experiments thus demonstrate that both the NF-κB and AP-1 transcription factors are activated in PGN-treated murine macrophages and that SP600125 suppresses NF-κB and AP-1 activation. In contrast, PD98059 enhances PGN-induced NF-κB activity.
DISCUSSION
The signaling mechanisms and trans-acting factors that mediate the induction of iNOS expression during stimulation with PGN have not been very well documented. We have previously reported the role of protein tyrosine kinase, PKCδ, and NF-κB in PGN (S. aureus)-induced iNOS expression and NO production in macrophages (5). In the present study, the roles of MAPKs and transactivating molecules (NF-κB and AP-1) in PGN-induced iNOS expression and NO production have been investigated. This is probably the first time that evidence for the involvement of the JNK MAPK pathway in iNOS and NO expression in PGN-treated macrophages, independent of p38 MAPK, has been reported, whereas the ERK pathway plays a negative regulatory role. Utilization of particular signaling pathways for iNOS induction appears to depend on the cell type and stimulus: a positive role for JNK has been reported in some systems (10) and for induction by IFN-γ plus TNF-α in mouse macrophages (11). In contrast, JNK has been shown to play a neutral role in iNOS induction by TNF-α plus IL-1α in astrocytes (13) and by IFN-γ plus LPS in glioma cells (40). We report here a simultaneous activation of the ERK, p38, and JNK MAPKs by PGN. The PGN-induced activation of ERK can be observed as early as 5 min of treatment, with sustained, though slightly decreased, activation for up to 2 h thereafter. In contrast, p38 and JNK MAPK rapidly attenuated after the initial peak at 30 min. Inhibition of the ERK or JNK signaling pathway by specific inhibitors or siRNAs had distinct effects on PGN-induced iNOS expression and NO production in mouse peritoneal macrophages. The present study showed that SP600125 dose dependently inhibited PGN-induced c-Jun phosphorylation, iNOS expression, and NO production in mouse peritoneal macrophages. This observation was confirmed by using JNK1/2 siRNA knocked-down macrophages, in which PGN-induced iNOS expression and NO production were significantly inhibited. These results indicate that a positive signaling pathway for iNOS expression induced by PGN in mouse macrophages is mediated through JNK MAPK. These data are in accord with the findings of others who have reported that the treatment of Sertoli epithelial cells with SP600125 completely inhibited IL-1-induced iNOS expression and NO production in a dose-dependent manner (26). In addition, JNK MAPK also appears to play a positive regulatory role in transducing the LPS-mediated induction of iNOS in murine macrophages and murine skeletal muscle cells (21,47). To better understand PGN signaling, MAPK involvement in the activation of the NF-κB and AP-1 transcription factors that regulate the expression of the iNOS gene was investigated. The JNK and ERK MAPKs are able to independently and synergistically activate or increase the expression of a number of transcription factors, including c-Fos, c-Jun, NF-κB, activating transcription factor 2, the Ets family, serum response factor, and CREB (41). Our data on PGN-induced activation of NF-κB and AP-1 are in accordance with previous reports (24,50). Sinke et al. and Chen et al. (12,44) reported that the JNK inhibitor SP600125 inhibited induced iNOS expression, with concomitant inhibition of NF-κB and AP-1 activation in astrocytes and murine macrophages, respectively. The observations of these researchers are consistent with our data here, suggesting that JNK MAPK plays a key role in the PGN-stimulated induction of iNOS in macrophages through activation of NF-κB and AP-1 binding to the iNOS promoter.
Several studies have shown that different MAPKs may have opposing effects on gene expression. More specifically, activation of the p38 and JNK MAPKs with simultaneous inhibition of ERK MAPK was found to be critical for apoptosis in PC-12 pheochromocytoma cells stimulated with nerve growth factor (53). In another study, p38 MAPK was shown to enhance the accumulation of IL-12 mRNA, whereas ERK MAPK inhibited IL-12 transcription in murine macrophages stimulated with LPS (20). We observed here that inhibition of the MEK1-ERK pathway with PD98059 suppresses ERK1/2 activation and increases PGN-induced iNOS expression and NO production in mouse peritoneal macrophages. PD98059 also enhanced LPS-induced NO production, whereas it had no effect on IFN-γ-induced NO production. To reinforce these observations, ERK1/2 knocked-down macrophages were used; PGN-induced iNOS expression and NO production were significantly enhanced in the knocked-down macrophages. LPS induces enhanced NO production in murine BV-2 microglial cells compared to RAW 264.7 cells but is unable to activate ERK (52). Aga et al. reported that cotreatment of LPS with a nucleotide receptor agonist enhances NO production and attenuates ERK activation (1). These reports support our hypothesis that ERK activation is involved in the negative regulation of iNOS/NO production. The constitutively active MEK-ERK pathway is known to negatively regulate NF-κB-dependent gene expression (9). It was also observed that inhibition of the ERK pathway increased the transcriptional activity of NF-κB by ~2-fold (Fig. 6A, lanes 2 and 5). It is possible that this enhanced activation of NF-κB might favor signaling by autocrine factors released in response to PGN challenge (TNF-α and several proinflammatory interleukins), and it might lead to an augmentation in iNOS expression. Inhibition of the MEK/ERK pathway has also been reported to enhance cisplatin-induced NF-κB activation via a pathway different from conventional IKK/IκBα/NF-κB signaling (56). The data presented here demonstrate that the ERK pathway negatively regulates PGN-induced iNOS expression through NF-κB.
These observations show that PGN-induced JNK MAPK is required for the enhanced NF-κB and AP-1 activation observed in mouse peritoneal macrophages. NF-κB and AP-1 activation then leads to increased expression of the iNOS gene and the subsequent generation of NO. Finally, the activation of the ERK pathway and the resulting suppression of NF-κB activity and NO production may represent a strategy by which the microbe avoids or delays activation of the immune response, providing a window of opportunity for the bacteria to establish infection. This might contribute to our understanding of the mechanism(s) of action of the anti-inflammatory cytokines (IL-13 and IL-4) that activate ERK MAPK in the course of their intracellular signaling (37). These findings suggest that therapeutic inhibition of ERK activation during bacterial infection may promote NO responses and inhibit microbe replication.
"year": 2011,
"sha1": "3da5d298f043c39557e4b0827c7488d1ac635d3a",
"oa_license": null,
"oa_url": "https://doi.org/10.1128/cvi.00541-10",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c83f9b3b440f000ae5481b072d76ed3fb767eb9e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
High-Throughput Robotically Assisted Isolation of Temperature-sensitive Lethal Mutants in Chlamydomonas reinhardtii
Systematic identification and characterization of genetic perturbations have proven useful to decipher gene function and cellular pathways. However, the conventional approaches of permanent gene deletion cannot be applied to essential genes. We have pioneered a unique collection of ~70 temperature-sensitive (ts) lethal mutants for studying cell cycle regulation in the unicellular green alga Chlamydomonas reinhardtii 1. These mutations identify essential genes, and the ts alleles can be conditionally inactivated by temperature shift, providing valuable tools to identify and analyze essential functions. Mutant collections are much more valuable if they are close to comprehensive, since scattershot collections can miss important components. However, this requires the efficient collection of a large number of mutants, especially in a wide-target screen. Here, we describe a robotics-based pipeline for generating ts lethal mutants and analyzing their phenotypes in Chlamydomonas. This technique can be applied to any microorganism that grows on agar. We have collected over 3,000 ts mutants, probably including mutations in most or all cell-essential pathways, including about 200 new candidate cell cycle mutations. Subsequent molecular and cellular characterization of these mutants should provide new insights into plant cell biology; a comprehensive mutant collection is an essential prerequisite to ensure coverage of a broad range of biological pathways. These methods are integrated with downstream genetics and bioinformatics procedures for efficient mapping and identification of the causative mutations, which are beyond the scope of this manuscript.
Introduction
Phenotypic characterization of systematic mutant collections of model organisms is a proven approach for dissecting cellular complexity. The haploid unicellular green alga Chlamydomonas reinhardtii has a plant-like gene set, but it diverged from land plants before the multiple genome duplications in the land plant lineage 2. In principle, the lack of gene duplication and a mainly haploid life cycle greatly facilitate loss-of-function genetic approaches. However, targeted disruption of genes of interest is nearly impossible due to the lack of efficient homologous genomic integration. A random insertional disruption library is under construction, combined with identification of the disrupted site, so far yielding an arrayed set of 1,935 mapped disruptions representing 1,562 genes 3. However, this approach (expected in general to produce null mutations) is not applicable to essential genes. Temperature-sensitive (ts) mutations can be recovered in essential genes, and recent methods allow efficient identification of the mutated gene and the causative lesion. Phenotypic analysis at high temperature then provides immediate information about the function of the mutated gene. We reported on the isolation and characterization of ts lethal mutations in ~70 essential genes in Chlamydomonas, focusing especially on genes involved in cell cycle progression and control 1,4. Ts lethal screens have been a mainstay of genetic analysis in microorganisms for decades 5,6. In principle, a desirable feature is to approach "saturation," meaning that all genes capable of mutating to ts lethality are identified by at least one mutant, allowing a complete analysis. However, in practice, several factors limit the approach to saturation. First, while almost all genes can be mutated to loss of activity at high temperature, the efficiency of recovery of such mutants varies over at least an order of magnitude 7,8. Therefore, a random screen begins to pick up recurrent hits in "frequent flyers" long before saturation is approached. Second, while ts mutations usually result in reduction of function, they may not be true nulls at a restrictive temperature (and, conversely, are frequently not fully functional at a permissive temperature). This problem can be dealt with to some extent by comparing multiple alleles; if they all share a common phenotype, this is more likely to reflect the result of simple inactivation of the gene. Multiple alleles are also very helpful for definitive molecular identification of the causative lesion 1. However, the "frequent flyer" problem means that multiple alleles in rarely hit genes can be difficult to recover.
For these reasons, we have been developing an enhanced pipeline to isolate and phenotypically characterize ts mutants. We have collected over 3,000 ts mutants so far, including about 200 new candidate cell cycle mutations. Molecular and phenotypic analysis of this collection, which already likely includes mutations in most or all cell-essential pathways, should provide new insights and hypotheses in plant cell biology. Importantly, this pipeline can be applied to any microorganism that grows on agar to efficiently construct ts mutant collections.
UV Mutagenesis
1. Prepare a batch of 100 - 200 rectangular agar plates with Tris-Acetate-Phosphate (TAP) medium 9,10. Prepare these plates a few days ahead and keep them on the bench to dry to ensure rapid absorption of the suspended cells in the next steps.
2. Culture Chlamydomonas cells to an optical density (OD750) of 0.2 - 0.5 (~2 days) in 100 ml of liquid TAP under light, at 25 °C, shaking at 100 rpm.
NOTE: UV mutagenesis is performed independently in two genetic backgrounds: Mat− Hygro-r (confers resistance to Hygromycin B) and Mat+ Paro-r (confers resistance to Paromomycin). Antibiotic choice is arbitrary, provided the two mating types have complementary drug resistances.
3. Check a sample of each of the cultures under the microscope to ensure that the cells are viable, healthy (swimming and intact), and without contamination.
NOTE: Overgrown cultures "crash" and lose viability, appearing as "ghosts" in phase contrast microscopy. Do not keep shaker cultures going once they reach saturation. The "ghost" phenomenon is easily detectable in step 1.3.
4. Dilute the culture to an OD750 of 0.003. Wrap the bottle with aluminum foil to ensure homogeneous density, as the strain is motile and swims directionally in response to light.
   1. Adjust the density of the suspension based on the planned UV dose so that, accounting for cell killing, 200 - 600 survivor colonies will form on the plates (Figure 1); a worked example of this density calculation is sketched after this protocol. See Table 1 for information on UV exposure times.
5. Attach a small-tube cassette that fits a liquid dispenser and perform a series of washes for sterilization, according to the manufacturer's instructions, to prevent contamination.
6. Using an 8 x 12 liquid dispenser, dispense 4 x 96 drops of 2 µl each onto rectangular plates (Figure 1). Tap mildly at the edge of the plate to ensure the merging of all drops into a thin sheet of liquid and immediately cover the plates to prevent exposure to light.
   1. To ensure very even single-cell dispersal, use dry plates, as mentioned above, and quickly cover the plates to prevent the cells from swimming in response to light. Keep the covered plates level and in the dark until all liquid is absorbed.
7. Place the plates under a germicidal UV lamp (30-watt germicidal UV tube) for periods of time determined empirically to give an optimal yield of ts mutants among the survivors. Here, use times of 0.5 - 1.5 min at a distance of 40 - 50 cm, resulting in 90 - 99.9% killing. Survivors contain 100 - 1,000 UV-induced point mutations, of which ~10% change coding sequences.
   1. In order to ensure maximum potency of UV irradiation with no reversion due to light-dependent DNA repair 11,12, work in the dark at this step and immediately pack the plates in a dark box after irradiation.
8. Keep the plates in the dark for 8 - 24 hr at room temperature.
9. Place the plates in a 21 °C incubator with illumination to form colonies. Colony formation takes ~10 days. Make sure to wrap the plates with plastic bags and add absorbing paper inside to blot up liquid condensation. Evaporation and condensation cycles can otherwise provide liquid-film routes for contaminants to enter the plates.
10. Load the plates in the relevant stacks as sources for robotic colony picking (Figure 2). Pick colonies into 384-arrays on rectangular plates, and grow them at 21 °C with illumination (~1 week).
11. Condense the 384-arrays into 1,536-arrays (4:1) using a replica-plating robot (Figure 3), and allow the plates to grow for ~3 days in the 21 °C incubator.
12. Replicate the 1,536-arrays to two plates each and place one in the 21 °C incubator (permissive temperature) and the other in a 33 °C incubator (restrictive temperature). After 24 hr, replicate the plates at 33 °C to a new set of pre-warmed plates, and place them in the 33 °C incubator.
NOTE: The reason for this secondary plating is that ts lethal mutants can in some cases accumulate substantial biomass, even if the cells are arrested in the first cell cycle after plating. The secondary replica eliminates this signal and greatly increases sensitivity.
13. Photograph the plates with a digital camera following 3 days of growth at 33 °C and 5 days of growth at 21 °C. Hold the plates in a fixed frame. Use a "grid-plate" marked with nine alignment indicators that is photographed together with the culture plates (Figure 4). Photograph culture plates as alternating paired 21 °C and 33 °C (21/33) images.
NOTE: Different incubation times are used to equalize growth, since the wild type grows significantly faster at 33 °C than at 21 °C.
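The plating-density adjustment in step 4.1 is simple arithmetic; the Python sketch below works through it for one hypothetical dose (the survival fraction and target counts are illustrative and are not measured values from Table 1):

# Back-of-the-envelope plating density for step 4.1 (illustrative numbers).
target_survivors = 400      # aim for 200-600 colonies per plate
survival_fraction = 0.01    # e.g. 99% killing at the chosen UV dose
drops_per_plate = 4 * 96    # 4 x 96 drops
drop_volume_ml = 0.002      # 2 µl per drop

cells_to_plate = target_survivors / survival_fraction
plate_volume_ml = drops_per_plate * drop_volume_ml          # 0.768 ml total
cells_per_ml = cells_to_plate / plate_volume_ml

print(f"{cells_to_plate:.0f} cells per plate -> {cells_per_ml:.2e} cells/ml")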
Identification of ts Mutant Candidates: First Screen
1. Process the paired 21/33 plate images with custom Matlab image analysis software to eliminate the background and to segment the images into a 1,536-array. The program determines the detected biomass (total pixel intensity) in each position (Figure 5).
NOTE: The software (provided in the S.I. along with instructions) uses the grid-plate image to determine the locations of cells (individual entries) in a 1,536-array at the magnification and plate alignment used and calculates the total pixel intensity in each cell. These values for each mutant are then compared against adjustable parameters to determine the required growth at 21 °C relative to a ts+ standard and the degree of temperature sensitivity, defined as [S(Mut33)/S(WT33)] / [S(Mut21)/S(WT21)], where S is signal (pixel intensity after background subtraction), Mut is an individual mutant, and "WT" is a randomly chosen non-temperature-sensitive colony (a mutagenized strain with a ts+ phenotype). Growth at 21 °C is defined as S(Mut21)/S(WT21). In this first screen, apply relaxed selection criteria (allowing relatively low growth at 21 °C and a relatively low degree of temperature sensitivity) to keep the false negative rate low.
2. Load the list of selected colonies generated by the software as an instruction file for the single-colony picking robotics (typically into a 384-array). Prepare the source and target plates according to the robotics instructions and pick colonies to the array (Figure 6).
NOTE: Conventional pickers require an instruction file in some format, such as .csv, .txt, or .xls. All pickers will have the capacity to be file-driven; different formats will require minor editing of the MATLAB code (source code provided).
3. Place the target plates in the 21 °C incubator for ~5 days to grow a stock plate.
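To make the scoring concrete, here is a minimal Python sketch of the temperature-sensitivity score defined in the note above; the intensity values are hypothetical, and the real pipeline computes them from segmented plate images in MATLAB:

def ts_score(mut21, mut33, wt21, wt33):
    """Degree of temperature sensitivity from background-subtracted pixel
    intensities: [S(Mut33)/S(WT33)] / [S(Mut21)/S(WT21)]. Lower = more ts."""
    return (mut33 / wt33) / (mut21 / wt21)

def growth_21(mut21, wt21):
    """Relative growth at the permissive temperature."""
    return mut21 / wt21

# Hypothetical intensities (arbitrary units) for a ts and a ts+ colony.
print(ts_score(mut21=900, mut33=50, wt21=1000, wt33=1200))    # ~0.05 -> ts
print(ts_score(mut21=950, mut33=1100, wt21=1000, wt33=1200))  # ~0.96 -> ts+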
Identifying ts Mutants: Second Screen
1. Repeat step 2.1 to retest the picked colonies. Typically 30 - 50% of colonies will retest as clear ts lethals, for a yield of 2 - 5% ts lethals relative to the initial colonies surviving UV mutagenesis.
2. Repeat step 2.2 for picking the twice-screened ts lethals, but with a modified instruction file designed to array colonies into blocks of 100 colonies on rectangular plates at 384-density. This is for convenient microscopic examination in the next step. The instruction file format is specified at runtime by the MATLAB code.
3. Make sure to include several WT colonies as a control and place the plates at 21 °C for ~5 days.
Initial Phenotype Determination
1. Replicate the 100-block plates arrayed in step 3.2 and place them in the 21 °C incubator for ~2 days to obtain freshly growing colonies.
2. Replicate the fresh copy of the 100-block plates (4.1) to three copies. Place one copy in the 21 °C incubator and one in the 33 °C incubator as controls.
3. Place the third copy ("screening plates") in a robotic setup, and spot the colonies with long pins touched with sterilized water.
NOTE: Chlamydomonas cells pinned to agar robotically tend to land initially as a rather dense spot of cells, with few single cells available for microscopic inspection. To optimize the colonies for microscopic inspection, spot the initially transferred cells with a drop of water (the volume of water transferred by the robotic long pins is ~100 nl). This results in dispersion of the cells in a small radius (~1 mm) around the initial pinning center, with many isolated single cells.
4. Take photomicrographs of a region of each spot of the screening plates at time 0 (while the cells are still single and undivided) (Figure 7) and place the plates at 33 °C for incubation. Determine the image location by the manual stage control of a modified tetrad dissection microscope set to give stops at the 4.5-mm center-to-center spacing of a 384-density array.
5. Remove the screening plates from the 33 °C incubator and take photomicrographs, as in step 4.4, at varying time points (10 hr, 20 hr, 48 hr).
Make sure the stage, the plate holder, and the stage controller are precisely calibrated so as to image the same cells at every time point. Perform photomicrograph acquisition quickly (~10 min).
6. Use the second copy in the 33 °C incubator to verify the ts phenotype and to make sure that temperature fluctuations during image acquisition have no major effect on the ts phenotype.
   1. Analyze the microscopic images and select mutants based on the desired criteria (Figure 8). Spot the final selected set onto a 96-arrayed agar plate. Make sure that each plate contains mutants of the same mating type and drug resistance.
   2. For each plate, incorporate in the last two columns a positive (query mutation) and a negative (WT) control for the complementation assay.
Complementation and Linkage Testing of New Mutants to "Frequent Flyers"
1. Transfer large amounts of the arrayed colonies to nitrogen-free gamete-induction medium 10.
3. Mix the samples in a target plate in a mating mixture volume of 20 µl (Figure 9). Following ~10 min under the light, spot 5 µl from each well twice: once on a TAP plate for linkage testing and once on TAP + 5 µM Paro + 9 µM Hygro for complementation testing.
4. Incubate the linkage plates in the 21 °C incubator overnight, and then wrap them in foil. Keep them in the dark for 5 days to allow zygospore formation.
5. Incubate the complementation-testing plates for ~10 days in the light at 21 °C.
NOTE: The drug amounts are calibrated to allow survival of doubly heterozygous diploids, but they are still enough to eliminate the mating haploids. These amounts were calibrated for the standard resistance cassette integrations used in this specific project; with new integrations, doses should be recalibrated. Most biomass is verified to be diploid by flow cytometry (Figure 9), although a variable level of haploids (probably from meiosis and growth of doubly resistant segregants) is usually observed as well.
6. Replicate the complementation-testing plates into two copies for ts-phenotype identification. Place one copy at 21 °C and one copy at 33 °C for ~5 days.
7. Test the colonies for the ts phenotype, as described in section 2.1 (Figure 5). Colonies that do not complement the query likely represent alleles of the same gene as the query (diploids hetero-allelic for mutations in the same gene).
8. For linkage testing, following step 5.4, shift the plates back to light to allow meiosis and outgrowth of haploid segregants for ~7 days.
9. Replicate the linkage-testing plates to TAP + 10 µM Paro and 10 µM Hygro to select for double-resistant progeny (predicted to be 25% of the haploid progeny, since the cassettes are unlinked) and incubate at 21 °C for a week.
Discussion
The pipeline described here for high yield isolation of ts lethal mutants ensures that presumably all cellular-essential pathways of the Chlamydomonas genome are represented. The two most critical steps for efficient collection of potential cell cycle genes and for the elimination of repetitive "frequent flyer" alleles are: 1) the coherent definition of arrest phenotype characteristics for incomplete cell cycles and 2) the parallel complementation assay against already-identified query genes to enlarge the collection with newly isolated ones. When synchronized by light-dark cycles, Chlamydomonas grows photosynthetically during daylight hours and increases in cell size > 10x without any DNA replication or cell division 13 . Approximately coincident with the onset of night, cells then undergo multiple cycles of alternating DNA replication, mitosis, and cell division (Figure 8). This regulatory scheme provides a natural distinction between genes primarily required for cell growth and integrity and genes required specifically for the cell division cycle. We found that the 10-hr and 20-hr time points are very informative for an initial rough phenotypic cut 1 . The broad classes of ts lethal mutants that we recognize currently, based on these images (see S.I. in Tulin and Cross, 2014) 1 , are: Notch, Popcorn, Round, Small, Medium, early lysis, and multiple-cycle (Figure 8).
The three most relevant categories we focus on are Notch, Popcorn, and Round. The "Notch" and "Popcorn" phenotypes were shown previously to be characteristic of most cell cycle-specific lesions (e.g., mitotic cyclin-dependent kinase, DNA replication machinery, and Topoisomerase II) 1 .
The appearance of one (Notch) or multiple (Popcorn) apparent planes of incipient but unsuccessful cell division is a convenient morphological indicator of cell cycle initiation. These mutants generally exhibit little or no growth defects, with increases in cell volume similar to WT at the 10-hr mark. The Notch and Popcorn phenotypes are evident at 10 hr and are fully developed (frequently associated with cell lysis) by 20 hr. "Round" cells grow similarly to WT but with much-reduced production of apparent incipient division planes, thus yielding large, round arrested cells. Previous mutants in this category have fallen into components of the anaphase-promoting complex 14 or in genes required for microtubule function (tubulin-folding cofactors, gamma-tubulin ring complex) 1 . At later times, these cells frequently exhibit pronounced cell lysis.
"Small" and "Medium" cells grow either negligibly (Small) or significantly less than WT (Medium). Many of these mutants identified to date have lesions in genes whose annotations suggest roles in basic cellular growth processes (translation or membrane biogenesis). The main microscopic discrimination between Medium and Round rests on the amount of growth at 10 hr (Round: like WT; Medium: reduced). Because the Small and Medium categories are quite large and probably reflect lesions in a great range of cellular pathways, we are not attempting to saturate these categories; however, we do want to molecularly identify representatives of the class to understand phenotypes of loss in diverse pathways. Two unstudied categories are: 1) the early-lysing mutants that lose integrity (loss of green color, loss of refractility) by the 10-hr mark, with little evidence of prior cell growth and 2) the multiple cycles. Cells proliferate similarly to WT at 10 and 20 hr, though they exhibit a complete inability to carry out longer-term proliferation.
We are mostly characterizing "notch," "popcorn," and "round" mutants and exclude small and medium round cells, as well as leaky mutants that complete a few cell divisions. This is principally to ensure that basic cellular features, such as growth and membrane integrity, are functional, enriching the probability of division-related genes. This approach has proven empirically efficient; however, a cell cycle gene may be pleiotropic and have additional roles earlier in G1, before actual division. Such cases, which we expect to be rare, are missed. More generally, we aim for homogeneous arrest, which most likely reflects a single causative mutation producing a completely dysfunctional protein. However, for the same reason as just described, there may be several arrest points, and therefore some flexibility is advisable in choosing candidates.
In order to enrich the collection with newly identified genes, the chosen candidates are assayed for complementation. We require ts− in the positive control (query against query mutation) and ts+ in the negative control (query against WT). New mutants in the same complementation group as the query are ts−. Membership in the same complementation group almost invariably reflects a molecular lesion in the same gene (this has been the case for every such gene we have tested). Therefore, for "frequent flyers," this criterion is exclusionary for further characterization. Mutants that were not in the same complementation groups as the tested queries are candidates for new genes and are further characterized by bioinformatics and experimental tools. Highly variable recovery of ts alleles into different complementation groups is a well-known phenomenon; that is, the variability is markedly greater than Poisson noise, due to the great intrinsic variability in mutability to ts between different genes. Causes could include intrinsic thermolability differences; different protein sizes; the presence of a protein as a monomer versus as a large, stabilized complex; and mutagenic hot spots. This is almost a pure nuisance. However, one resulting favorable outcome is that the "frequent flyer" list is not long (with only a few targets occupying most of the list), so the labor-intensive complementation testing is not a massive undertaking until the later stages of the project.
As a complementary approach, we performed a linkage assay. In this assay, double-resistant progeny are selected and tested for the ts phenotype. A ts phenotype is expected (and observed) for mutants in the same complementation group as the query or for tightly linked mutations. For each of the tested genes, WT progeny are expected to appear (ts+ phenotype) with a probability that depends on the genetic distance. We estimate that there are around 100 zygospores for each mating in these spots. Assuming 100% meiotic efficiency, this will result in around 100 double-drug-resistant progeny from the unlinked drug resistance cassettes (25% of the meiotic progeny, four per meiosis, due to Mendelian inheritance). This would also be the case for ts mutations, where 25% of the progeny will be double-mutant and 25% will be WT if the query and test mutations are unlinked. Therefore, out of the double-drug-resistant progeny, 25% will be WT (around 25 cells). This is the case for completely unlinked mutations; however, moderate linkage (within ~20 cM, about 2 Mb, or 2% of the genome 15) will strongly reduce or eliminate the ts+ signal (see the sketch after this paragraph). In the case of linkage of the tested mutation to the antibiotic cassette, ts+ haploids that are double-drug-resistant are present in very low amounts. This manifests as an apparent failure to recombine with all mutants tested, despite complementing all mutants tested — an aberrant result that is easily noted; in such cases, backcrossing will solve the problem.
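A minimal sketch of this expectation in Python, assuming the ts loci segregate independently of the two resistance cassettes (so the calculation applies directly to the double-resistant progeny):

# Expected ts+ (doubly wild-type) fraction among meiotic progeny when a new
# ts mutant is crossed to a query ts mutant at recombination fraction r
# (r = 0.5 for unlinked loci). Parental classes each carry one ts lesion;
# only the wild-type recombinant class, at frequency r/2, scores ts+.
def ts_plus_fraction(r):
    return r / 2.0

for r in (0.5, 0.2, 0.05):   # unlinked, ~20 cM, tightly linked
    print(f"r = {r:.2f}: expected ts+ fraction = {ts_plus_fraction(r):.0%}")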
Both from prior knowledge and from sequence analysis, we expect there to be around 500 cell cycle genes in Chlamydomonas 2, although most, but probably not all, are essential. We will evaluate the necessity for additional mutagenesis rounds as more mutants are collected and the level of saturation rises. This procedure is uniquely designed for studying essential biological processes and the genes and proteins that carry them out. Other methodologies to generate perturbations in essential genes exist (e.g., transformation of randomly mutagenized alleles 16, conditionally transcribed alleles 17, or hypomorphic alleles 18). However, they all require homologous recombination, which is strongly suppressed in vegetative Chlamydomonas. The clustered, regularly interspaced short palindromic repeat (CRISPR)/Cas9 system has been established as a powerful tool for gene modification 19; however, it has yet to work efficiently in Chlamydomonas 20. Critically, all of these methods require prior knowledge of the target. This is a severe restriction if one wishes to have the possibility of learning something new! Our approach will yield mutations identifying essential genes, independent of any prior knowledge. Therefore, at the present level of technology, isolation of random ts mutations followed by gene identification by deep sequencing may be the most efficient method of gaining rapid entry into microbial cell biology in the plant superkingdom.
Identification of the causative mutations (from among the ~100 coding-sequence-changing mutations in each clone) is beyond the scope of this paper. Deep sequencing of bulked segregant pools 1 is effective but labor-intensive. A combinatorial pool strategy for the determination of all mutations in a large number of strains, after sequencing a small number of pools, is very cost- and labor-effective. A new strategy for combinatorial bulked segregant sequencing is under development that will allow the identification of causative mutations in dozens of mutants simultaneously in a single sequencing run (in preparation). These efficiencies are very important to allow the critical gene identification step to keep pace with the very rapid accumulation of mutants made possible by the procedures described here.
Disclosures
The authors declare no significant financial interests.
"year": 2016,
"sha1": "829fc62ac08cd9757baadff233825b10f147ec52",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jove.com/pdf/54831/high-throughput-robotically-assisted-isolation-temperature-sensitive",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e0be482ece054d8b18fddd664a878d7fe181bf1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Towards a Comprehensive Search of Putative Chitinases Sequences in Environmental Metagenomic Databases
Chitinases catalyze the hydrolysis of chitin, a linear homopolymer of β-(1,4)-linked N-acetylglucosamine. The broad range of applications of chitinolytic enzymes makes their identification and study very promising. Metagenomic approaches offer access to functional genes in uncultured representatives of the microbiota and hold great potential in the discovery of novel enzymes, but tools to extensively explore these data are still scarce. In this study, we develop a chitinase mining pipeline to facilitate the comprehensive search of these enzymes in environmental metagenomic databases and also to explore phylogenetic relationships among the retrieved sequences. In order to perform the analyses, UniprotKB fungal and bacterial chitinases sequences belonging to the glycoside hydrolases (GH) family-18, 19 and 20 were used to generate 15 reference datasets, which were then used to generate high quality seed alignments with the MAFFT program. Profile Hidden Markov Models (pHMMs) were built from each seed alignment using the hmmbuild program of the HMMER v3.0 package. The best-hit sequences returned by hmmsearch against two environmental metagenomic databases (Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis—CAMERA and Integrated Microbial Genomes—IMG/M) were retrieved and further analyzed. The NJ trees generated for each chitinase dataset showed some variability in the catalytic domain region of the metagenomic sequences and revealed common sequence patterns among all the trees. The scanning of the retrieved metagenomic sequences for chitinase conserved domains/signatures using both the InterPro and the RPS-BLAST tools confirmed the efficacy and sensitivity of our pHMM-based approach in detecting putative chitinases sequences. These analyses provide insight into the potential reservoir of novel molecules in metagenomic databases while supporting the chitinase mining pipeline developed in this work. By using our chitinase mining pipeline, a larger number of previously unannotated metagenomic chitinase sequences can be classified, enabling further studies on these enzymes.
Introduction
Enzymes are catalysts that support the development of environmentally friendly industrial processes. At present, most industrial enzymes of major importance are of microbial origin, so the search for novel catalysts of this kind is a key step towards the development of innovative bioprocesses. Chitinases are enzymes responsible for the hydrolysis of chitin, a linear homopolymer of β-(1,4)-linked N-acetylglucosamine, which is the second most abundant biopolymer in nature. A set of different enzymes is needed to drive the complete hydrolysis of chitin to free N-acetylglucosamine (GlcNAc), involving diverse modes of action known to be synergistic and consecutive [1] [2]. The endochitinases (EC 3.2.1.14) randomly cleave the chitin chain at internal sites, whilst the exochitinases (EC 3.2.1.52) catalyze either the successive removal of sugar units from the non-reducing end or the hydrolysis of the terminal non-reducing sugar [3] [4].
The low discovery rate of novel natural products from culturable microorganisms [17], coupled with the fact that only a small portion (estimated at less than 1%) of the microbial community is capable of growing under artificial conditions [18] [19], has brought about the need to explore metagenomic approaches to speed up the discovery of new biomolecules potentially useful in biotechnology [20]. To date, a great number of environmental metagenomic studies have been performed, such as the extensive studies on the Sargasso Sea [21] and the Global Ocean Expedition [22] [23], and as a result, a huge amount of sequence data has been generated but not entirely explored. Different projects have been implemented to provide an open infrastructure for metagenomic sequence data storage and analysis, such as CAMERA ("Community Cyberinfrastructure for Advanced Microbial Ecology Research & Analysis") [24], MG-RAST ("Metagenomic Rapid Annotation using Subsystem Technology") [25], and IMG/M ("Integrated Microbial Genomes") [26]. The current challenge is to fully exploit the metagenomic sequence information using appropriate data-management and data-analysis methods.
Typical metagenomic analyses rely on similarity searches against reference databases, followed by annotation of the output. The most frequently used similarity search tool is BLAST [27], but as it requires significant computational capacity for large datasets, faster searching tools have been developed, such as PatternHunter [28] [29] and BLAT [30]. However, comprehensive searches on specific genes or gene families require more sensitive tools. Therefore, methods are needed to find subtler similarities between sequences and to assign putative structural and functional characterization to new proteins [31]. Pipelines based on Hidden Markov Models (HMMs) [32] are very promising, since an HMM is a statistical representation of a protein family's conservation pattern extracted from a multiple sequence alignment, an approach that has been demonstrated to be very effective in detecting distantly related homologues [33]- [35].
The aim of this work was to develop and validate a data mining strategy based on profile HMMs (pHMMs) in order to broadly explore environmental metagenomic databases for putative chitinase sequences. The results confirmed the efficacy of our pipeline in detecting chitinase sequences and highlighted the power of pHMM-based strategies to identify remote homologues.
Environmental Metagenomic Databases
Two environmental metagenomic databases were selected to test our chitinase mining strategy. The first was CAMERA v2.0 [36], available at http://camera.calit2.net/, which contains 84 unannotated metagenomic datasets with 135,704,056,943 nucleotide sequences. Six-frame translation of the nucleotide sequences was performed using the EMBOSS Transeq tool available at http://www.ebi.ac.uk/Tools/st/, and a total of 75 Gb of sequences was generated. The second database was IMG/M [26], available at http://img.jgi.doe.gov/cgi-bin/m/main.cgi/, which includes 364 automatically annotated metagenomic datasets containing 119,059,610 amino acid sequences, making a total of 20 Gb. Database sequences were downloaded to a local server in June 2011.
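Six-frame translation simply translates each read in all three reading frames on both strands. A minimal Python/Biopython sketch of the operation follows (the authors used EMBOSS Transeq itself; the input file name and frame-naming scheme below are placeholders):

```python
# Six-frame translation of nucleotide reads, analogous to EMBOSS Transeq
# output (illustrative sketch; "reads.fasta" is a placeholder file name).
from Bio import SeqIO

def six_frame(record):
    seq = record.seq
    for strand, s in (("+", seq), ("-", seq.reverse_complement())):
        for frame in range(3):
            # Trim the tail so the translated span is a multiple of 3.
            sub = s[frame: len(s) - (len(s) - frame) % 3]
            yield f"{record.id}_{strand}{frame + 1}", sub.translate()

for rec in SeqIO.parse("reads.fasta", "fasta"):
    for frame_id, protein in six_frame(rec):
        print(f">{frame_id}\n{protein}")
```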
Construction of Profile HMMs and Search for Putative Chitinase Homologues
First, multiple sequence alignments were generated for each chitinase reference set (seed alignments) using the default settings ("--auto") of the MAFFT v6.717b program [37] [38]. Alignment visualizations were carried out in Jalview version 2 [39]. The quality of each seed alignment was controlled by manual checking and, in a few cases, manual editing was necessary. Profile HMMs (pHMMs) were then built from each seed alignment using the hmmbuild program of the HMMER v3.0 package (http://hmmer.janelia.org/). The 15 pHMMs generated were used to perform sequence database searches with the hmmsearch program, also of the HMMER v3.0 package, using an e-value threshold of 1.0E−05, against the two environmental databases CAMERA and IMG/M.
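The alignment-to-search workflow above reduces to three tool invocations. A minimal driver sketch follows; the MAFFT "--auto" and HMMER3 "hmmbuild"/"hmmsearch --tblout -E" options are the standard ones named in the text, while the file names and helper function are illustrative:

```python
# Sketch of the seed-alignment -> pHMM -> database-search steps driven from
# Python (illustrative; assumes mafft and HMMER v3 are on the PATH).
import subprocess

def build_and_search(seed_fasta: str, db_fasta: str, tag: str) -> str:
    aln, hmm, tbl = f"{tag}.aln", f"{tag}.hmm", f"{tag}.tbl"
    # 1. Seed alignment with MAFFT default ("--auto") settings.
    with open(aln, "w") as out:
        subprocess.run(["mafft", "--auto", seed_fasta], stdout=out, check=True)
    # 2. Build a profile HMM from the seed alignment.
    subprocess.run(["hmmbuild", hmm, aln], check=True)
    # 3. Search the (translated) metagenomic database, e-value cutoff 1.0E-05.
    subprocess.run(["hmmsearch", "--tblout", tbl, "-E", "1e-5", hmm, db_fasta],
                   check=True)
    return tbl  # tabular per-sequence hits, parsed downstream
```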
Mining Strategy Validation
The results of the sequence database searches (described in detail in Section 2.3) were used to extract the best-hit sequences of each metagenomic dataset, that is, the hits with the lowest e-value among all the sequences of a metagenomic project. Best-hit sequences were retrieved in fasta format using the fastacmd program of the BLAST package [27] [40] and then scanned for the occurrence of chitinase conserved domains/signatures using both InterPro v4.7 (http://www.ebi.ac.uk/interpro/) and RPS-BLAST v2.2.21, with an e-value threshold of 1.0E−05. InterPro v4.7 combines predictive models and protein signatures from 10 member databases (Gene3D, PANTHER, Pfam, PIRSF, PRINTS, ProDom, PROSITE, SMART, SUPERFAMILY and TIGRFAMs) [41], and RPS-BLAST v2.2.21 integrates seven conserved domain databases (CDD v2.25, Pfam v24.0, Smart v5.1, COG v1.0, KOG, TigrFam v9.0 and Prk v5.0). These conserved domain and protein signature databases were downloaded from EBI and NCBI in October 2010. InterPro and RPS-BLAST search results were parsed into spreadsheets using an in-house Ruby script, and the frequency of the different chitinase conserved domains/signatures was calculated.
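Best-hit extraction amounts to keeping, for each metagenomic project, the hit with the lowest full-sequence e-value. A minimal sketch over an hmmsearch --tblout file; the project-prefixed sequence naming convention shown is hypothetical, for illustration only:

```python
# Keep the lowest-e-value hit per metagenomic project from hmmsearch --tblout.
def best_hits_per_project(tblout_path: str) -> dict[str, tuple[str, float]]:
    best: dict[str, tuple[str, float]] = {}
    with open(tblout_path) as fh:
        for line in fh:
            if line.startswith("#"):          # skip comment/header lines
                continue
            fields = line.split()
            target, evalue = fields[0], float(fields[4])  # col 5 = full-seq e-value
            project = target.split("_read")[0]  # hypothetical ID convention
            if project not in best or evalue < best[project][1]:
                best[project] = (target, evalue)
    return best
```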
Phylogenetic Analysis of Putative Chitinase Sequences
Best-hit sequences (described in detail in Section 2.4) were selected for phylogenetic reconstruction using the Neighbor-Joining (NJ) algorithm in MEGA 5.05 [42], with the p-distance model and 1000 bootstrap tests. Catalytic domain amino acid sequences from the chitinase reference sets and the selected best-hit sequences were concatenated to generate a multiple sequence alignment using MAFFT v6.717b [37], which was used as the query to build the NJ trees with MEGA 5.05.
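As an illustration of the reconstruction step, the sketch below builds an NJ tree from an aligned fasta file using Biopython's "identity" (p-distance-like) model; the authors used MEGA 5.05, and the 1000 bootstrap replicates are not reproduced here:

```python
# NJ reconstruction on aligned catalytic-domain sequences (illustrative;
# "catalytic_domains.aln" is a placeholder for the MAFFT alignment).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("catalytic_domains.aln", "fasta")
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)
Phylo.draw_ascii(tree)
```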
Results
The construction of the chitinase reference sequence sets was a key step in the success of the mining strategy applied in this work. The collection and grouping of chitinase sequences into subsets allowed the generation of 15 chitinase groups covering all three chitinase GH families, of which 9 were fungal GH family-18, three were bacterial GH family-18, one was bacterial GH family-19, one was fungal GH family-20 and one was bacterial GH family-20 (Figure 1). The use of these chitinase reference subsets enabled the production of high quality multiple sequence alignments and, consequently, the proper construction of chitinase pHMMs.
The hmmsearch analysis performed against the CAMERA and IMG/M metagenomic environmental databases retrieved a total of 708, 104 and 256 best-hit sequences putatively belonging to GH family-18, 19 and 20, respectively. Scanning these sequences with an RPS-BLAST search revealed the presence of chitinase conserved domains in 74.6%, 97.1% and 97.7% of the GH family-18, GH family-19 and GH family-20 sequences, respectively (Figures 2(a)-(c)). Only a small portion of the sequences presented hits with conserved domains other than the chitinase ones (4.8% of GH family-18 and 0.8% of GH family-20). Sequences with no hits accounted for 20.6% of GH family-18, but just 2.9% of GH family-19 and 1.6% of GH family-20 (Figures 2(a)-(c)). The InterPro search inferred the occurrence of chitinase signatures in 81.7%, 89.4% and 98.8% of the metagenomic sequences belonging to GH family-18, 19 and 20, respectively (Figures 2(d)-(f)). Compared to the RPS-BLAST search, the InterPro analysis revealed a higher percentage of sequences hosting protein signatures other than the chitinase ones (10.3% of GH family-18, 8.7% of GH family-19 and 0.4% of GH family-20) and a smaller percentage of sequences presenting no hits against the databases examined (8.0% of GH family-18, 1.9% of GH family-19 and 0.8% of GH family-20) (Figures 2(d)-(f)).
A large difference in diversity among the three chitinase GH families was revealed by the RPS-BLAST and InterPro analyses. GH family-19 and GH family-20 presented no more than 12 types of conserved domains, and most of the sequences shared the same conserved domain hits (Tables 1 and 2). In contrast, GH family-18 displayed up to 34 different sorts of conserved domains, and there was no predominant set of conserved domains shared by the majority of the sequences (at most, half of the sequences shared the same conserved domain hits) (Tables 1 and 2). In addition, the scanning of IMG/M sequences showed that some sequences annotated as hypothetical proteins exhibited chitinase conserved domain hits, demonstrating the sensitivity of our mining pipeline.
The phylogenetic analysis generated NJ trees corresponding to each chitinase dataset. All datasets showed some variability in the amino acid sequence of the catalytic domain region, except for the two active site residues (aspartate and glutamate in GH families-18 and 20, and two glutamates in the case of GH family-19), which were conserved in almost all sequences examined (data not shown). In addition, the NJ tree analysis also revealed two common sequence patterns: all the trees presented metagenomic sequences phylogenetically related to characterized chitinases, and all the trees also displayed metagenomic sequences which did not cluster with any characterized chitinase (Figures 3-6). Interestingly, some metagenomic sequences annotated as "hypothetical protein" in the IMG/M database were retrieved by our mining pipeline and were grouped with chitinase GH family-18 reference sequences in the NJ phylogenetic analysis (Figure 4), indicating that they are putative chitinase sequences.
Discussion
The broad range of applications of chitinolytic enzymes makes their identification and study very promising. Metagenomic approaches offer access to functional genes in uncultured representatives of the microbiota and hold great potential for the discovery of novel enzymes, but tools to extensively explore these data are still scarce. This study aimed to develop a chitinase mining pipeline to facilitate the comprehensive search for these enzymes in metagenomic databases. The use of a pHMM-based strategy allowed sensitive and efficient detection of putative chitinase sequences.
The generation of representative seed alignments and the selection of the homology detection method are key steps in sequence mining pipelines. The quality of an alignment is critical to its utility in different approaches, such as functional analysis, evolutionary studies and structure prediction [43]. For instance, the quality of the query-template sequence alignment is a major determinant of model quality in comparative modeling studies [44]. In fact, the higher the alignment quality, the higher the sensitivity in detecting homologous sequences [43]. However, obtaining a high quality alignment depends on the relatedness of the sequences being aligned. Alignments of sequences sharing high levels of similarity, about 50% identity or more, are generally unambiguous and easy to generate automatically, but alignments of more distant sequences, as in some protein families (sharing 30% identity or less), usually need to be manually checked to reach higher quality. For most alignment methods, quality deteriorates markedly below about 20% identity [45]. The algorithm implemented in the MAFFT program is considered to be faster, yet still accurate, compared to other methods such as ClustalW and T-Coffee [38], making it one of the best global alignment tools currently available [46] [47] and justifying the decision to use it in our mining pipeline. In this study we devoted particular effort to generating chitinase reference sets representative of the different subgroups of sequences belonging to GH families-18, 19 and 20. Briefly, well-characterized chitinase sequences were chosen and organized into subsets of at least five sequences. Seed alignments were generated and manually checked, and then used to build reliable pHMMs.
pHMMs are statistical models that use multiple alignments of homologous sequences to quantify amino acid frequencies and the position-specific probabilities of insertions and deletions along the alignment [32] [48]. They are broadly used for modeling conserved motifs of protein families, since they contain more information about the sequence family than a single sequence does [32] [48] [49]. pHMMs have been described as very efficient at detecting conserved patterns in multiple sequences [35] [50] [51] and as performing better than simple profile-sequence methods such as PSI-BLAST [48] [49]. This higher sensitivity of pHMMs is very promising when performing comprehensive searches for remote homologues, as in our study. Two software packages are frequently used to build pHMMs and to perform profile-sequence searches, SAM [33] and HMMER [52], but the latter has been reported to be more suitable for large sequence dataset searches [53] and was therefore used in the analyses of the present work.
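A toy example of the core statistic a pHMM match state encodes, the per-column residue frequencies of a multiple alignment, may help fix ideas; real pHMMs additionally model insert/delete states, state transitions and background-frequency priors:

```python
# Per-column amino acid frequencies of a gapped alignment (toy illustration
# of the information a pHMM match state captures).
from collections import Counter

def column_frequencies(aligned_seqs: list[str]) -> list[dict[str, float]]:
    profile = []
    for column in zip(*aligned_seqs):
        residues = [aa for aa in column if aa != "-"]   # ignore gap characters
        counts = Counter(residues)
        total = sum(counts.values())
        profile.append({aa: n / total for aa, n in counts.items()})
    return profile

print(column_frequencies(["DGKE", "DGRE", "DARE"]))
# -> column 1 is invariant D (freq 1.0); columns 2 and 3 are variable
```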
The scanning for chitinase conserved domains and motifs/signatures in the best-hit sequences (those retrieved by the hmmsearch analysis) was carried out in order to evaluate the performance of our chitinase mining pipeline in detecting true putative chitinase sequences. Many annotation pipelines use searches against conserved domain databases, since these regions are evolutionarily conserved units in proteins [54]. The recognition of a conserved domain footprint in a protein sequence usually indicates its cellular or molecular function [55] and provides more reliable protein classification than sequence similarity analysis. The RPS-BLAST and InterPro searches performed in this work found high percentages of chitinase-related domains and motifs in the best-hit metagenomic sequences, validating our chitinase mining pipeline. Best-hit metagenomic sequences showing no hits to any conserved domain may represent putative novel chitinases that would probably not be identified using sequence-sequence similarity searches. Furthermore, some IMG/M metagenomic sequences annotated as hypothetical proteins produced hits with chitinase conserved domains in our analysis, indicating that our pipeline has high sensitivity and is able to detect remote homologues.
The results obtained in the RPS-BLAST and InterPro analyses emphasized the large differences in diversity among the three chitinase GH families-18, 19 and 20. As described in previous reports, GH family-18 shows higher variability in evolutionary terms and contains the greatest number of protein members [4] [7]. The diversity observed in GH families-18, 19 and 20 was also assessed in the phylogenetic reconstructions for the metagenomic and chitinase reference sequences. Indeed, interpreting phylogenetic relationships among sequences is particularly important since it allows gene function [56], genetic variability and protein evolution to be inferred. Phylogeny-based classification systems have been used before to identify enzymes in metagenomic sequence datasets [57] [58]. Based on the phylogenetic relationships observed in the NJ trees generated in this study, two common sequence patterns were identified: one including metagenomic sequences phylogenetically related to characterized chitinases, which may help to understand their origin and classification; and the other comprising metagenomic sequences which did not cluster with any characterized chitinase, suggesting a great reservoir of putative new chitinases to be exploited in these metagenomic databases. Our results reinforce the sensitivity and efficiency of our mining pipeline in detecting putative chitinase sequences from metagenomic databases.
Conclusion
Traditional sequence search pipelines frequently cannot extensively exploit metagenomic databases. The current flood of sequence data from metagenomic studies and the wide range of applications of chitinases brought about the need to develop a new data search pipeline. The chitinase mining pipeline developed in this work was based on the generation of high quality seed alignments from reliable chitinase reference sets, which were then used to construct chitinase pHMMs. The searches using these pHMMs retrieved high percentages of putative chitinase sequences, which were confirmed in silico by scanning for chitinase conserved domains and motifs/signatures and by NJ phylogenetic reconstructions. The results confirmed the efficacy of our pipeline in detecting chitinase sequences and highlighted the sensitivity of pHMM-based strategies in identifying remote homologues. These analyses provide insight into the potential reservoir of novel molecules in metagenomic databases, supporting the in silico chitinase mining pipeline developed in this work and the phylogenetic relationships identified among the chitinase sequences. By using our chitinase mining pipeline, a larger number of previously unannotated metagenomic chitinase sequences can be classified, enabling further exploration of these enzymes.
Figure 1 .
Figure 1. Workflow of the methodology applied in this study. The first step was to generate fungal and bacterial chitinase reference sets for the glycoside hydrolase (GH) families 18, 19 and 20. Fifteen subsets were created, of which 9 were fungal GH family-18, 3 were bacterial GH family-18, one was bacterial GH family-19, one was fungal GH family-20 and one was bacterial GH family-20. The second stage consisted of the generation of profile Hidden Markov Models (pHMMs) for each chitinase reference sequence subset, followed by a sequence database search against CAMERA and IMG/M. The best-hit sequences of each metagenomic project were retrieved and used in the last step of our analysis. The validation of the mining strategy was carried out by performing both an InterPro and an RPS-BLAST search against protein signature, conserved domain and motif databases. The phylogenetic analysis of the metagenomic sequences together with the chitinase reference sequences generated NJ trees for each chitinase subset.
Figure 2 .
Figure 2. Pie charts representing the percentage of metagenomic sequences (the hmmsearch best-hit sequences) which exhibited chitinase-related domains and/or signatures after RPS-BLAST ((a), (b), (c)) and InterPro ((d), (e), (f)) searches against different conserved domain databases. The plots represent each GH family separately: GH family-18 results are presented in (a) and (d); GH family-19 in (b) and (e); and GH family-20 in (c) and (f). * Percentage of metagenomic sequences showing conserved domains other than the ones found in the representative chitinase sequences; ** Percentage of metagenomic sequences which did not find any hit in these searches against conserved domain databases.
Table 1 .
Conserved domain hits recovered after an RPS-BLAST search using the metagenomic sequences (hmmsearch best-hit sequences) against seven conserved domain databases (CDD, COG, KOG, Pfam, Prk, SMART and TIGRfam).
a Only the conserved domain hits found in more than 10% of the sequences analyzed are displayed in the table; b Percentage of sequences that showed a hit with that conserved domain.
a Only the conserved domain hits found in more than 10% of the sequences analyzed are displayed in the table; b Percentage of sequences that showed a hit with that conserved domain. | 2018-12-08T17:12:20.649Z | 2014-03-05T00:00:00.000 | {
"year": 2014,
"sha1": "a539ca597e6cc9acaf3ca3dd78723359dd50e4cf",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=43726",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6110fdac908611ed4e3b9de6e47a3b062c62d22f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
261314518 | pes2o/s2orc | v3-fos-license | Disease-specific therapy for the treatment of the cardiovascular manifestations of Fabry disease: a systematic review
Objective The cardiovascular manifestations of Fabry disease are common and represent the leading cause of death. Disease-specific therapy, including enzyme replacement therapy (ERT) and chaperone therapy (migalastat), is recommended for patients exhibiting cardiovascular involvement, but its efficacy for modulating cardiovascular disease expression and optimal timing of initiation remains to be fully established. We therefore aimed to systematically review and evaluate the effectiveness of disease-specific therapy compared with placebo, and to no intervention, for the cardiovascular manifestations of Fabry disease. Methods Eight databases were searched from inception using a combination of relevant medical subject headings and keywords. Randomised, non-randomised studies with a comparator group and non-randomised studies without a comparator group were included. Studies were screened for eligibility and assessed for bias by two independent authors. The primary outcome comprised clinical cardiovascular events. Secondary outcomes included myocardial histology and measurements of cardiovascular structure, function and tissue characteristics. Results 72 studies were included, comprising 7 randomised studies of intervention, 16 non-randomised studies of intervention with a comparator group and 49 non-randomised studies of intervention without a comparator group. Randomised studies were not at serious risk of bias, but the others were at serious risk. Studies were highly heterogeneous in their design, outcome measurements and findings, which made assessment of disease-specific therapy effectiveness difficult. Conclusion It remains unclear whether disease-specific therapy sufficiently impacts the cardiovascular manifestations of Fabry disease. Further work, ideally in larger cohorts, with more standardised clinical and phenotypic outcomes, the latter measured using contemporary techniques, is required to fully elucidate the cardiovascular impact of disease-specific therapy. PROSPERO registration number CRD42022295989.
INTRODUCTION
Fabry disease is a rare X-linked lysosomal storage disorder characterised by deficiency of alpha-galactosidase and the accumulation of its substrate, globotriaosylceramide (Gb3). 1 The cardiovascular manifestations, which include myocardial hypertrophy, fibrosis and ischaemia, and arrhythmia, are common, affecting more than 50% of males and females, and represent the leading cause of death. 2 3 Disease-specific therapy, including enzyme replacement therapy (ERT) or chaperone therapy (migalastat), is recommended for patients exhibiting cardiovascular involvement, but its efficacy, optimal timing of initiation and cost-effectiveness are yet to be fully elucidated.
Previous systematic reviews are limited by the exclusion of chaperone therapy, which has been part of routine clinical practice for over 5 years, and the exclusion of non-randomised studies, which account for much of the published data. 4 5 Furthermore, by evaluating effectiveness in Fabry disease in general, previous reviews have provided relatively limited evaluation of the cardiovascular impact of disease-specific therapy.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Disease-specific therapy, including enzyme replacement therapy and chaperone therapy (migalastat), is recommended for patients exhibiting cardiovascular involvement, but its efficacy for modulating cardiovascular disease expression, optimal timing of initiation and cost-effectiveness remain to be fully established.
WHAT THIS STUDY ADDS
⇒ Studies that evaluate the effectiveness of disease-specific therapy, compared with placebo, and to no intervention, for the cardiovascular manifestations of Fabry disease are highly heterogeneous in their design, outcome measurements and findings, which makes assessment of disease-specific therapy effectiveness difficult.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ It remains unclear whether disease-specific therapy sufficiently impacts the cardiovascular manifestations of Fabry disease. Further work, ideally in larger cohorts, with more standardised clinical and phenotypic outcomes, the latter measured using contemporary techniques, is required to fully elucidate the cardiovascular impact of disease-specific therapy.
Objective
This study aimed to systematically review and evaluate the effectiveness of disease-specific therapy, compared with placebo, and to no intervention, for the cardiovascular manifestations of Fabry disease.
METHODS
This review was conducted in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines. 6 The search strategy was constructed by a multidisciplinary team (cardiologists, metabolic physicians, specialist librarians). Eligible studies were grouped into randomised studies of intervention, non-randomised studies of intervention with a comparator group and non-randomised studies of intervention without a comparator group (uncontrolled before and after studies). The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO); CRD42022295989. Further information is available in the online supplemental materials.
Study selection
The study selection process is reported in figure 1. The review included 72 studies: 7 randomised studies of intervention, 16 non-randomised studies of intervention with a comparator group and 49 non-randomised studies of intervention without a comparator group (online supplemental tables 1-3, figure 1). Study characteristics are detailed in the online supplemental materials.
Assessment of risk of bias
The bias assessment of the randomised studies of intervention (figures 2 and 3) showed two studies were at 'low risk' of bias and five studies had 'some concerns' due to a risk of bias in the randomisation process (domain 1) or a risk of bias in selection of the reported result (domain 5). The bias assessment of the non-randomised studies of intervention with a comparator group (figures 4 and 5) demonstrated that all studies were at serious risk of bias. All non-randomised studies of intervention without a comparator group were found to be at serious risk of bias because, for example, it is not possible to determine whether study findings are secondary to the intervention. 7 Additional information is available in the online supplemental materials.
Results of synthesis
In the presence of substantial heterogeneity in study design and study outcome, as here, statistical pooling and meta-analysis are not recommended. A narrative synthesis was therefore conducted, which first considers ERT and subsequently chaperone therapy.
Systematic review
A 5-year, non-randomised study observed no change in time to first complication in 58 patients in an ERT cohort compared with 42 patients in a natural history cohort (p=0.69), although the risk of developing a first complication declined with longer treatment duration (OR 0.81 (0.68-0.96) per year; p=0.015). Cardiovascular events were not specifically reported. 9 Weidemann et al 10 found no difference in clinical outcome (stroke, end-stage renal disease, dialysis and death) in patients treated with ERT compared with a natural history cohort (untreated adult patients matched by year of birth, gender, previous transient ischaemic attack and chronic kidney disease stage) (HR 1.48; 95% CI 0.72 to 3.06; p=0.284), although cardiovascular events were not reported. Beck et al, 11 who compared 79 patients derived from a registry with 31 patients from a double-blind placebo RCT, found a 16% risk of a composite morbidity outcome after 24 months in patients receiving ERT versus 45% in the placebo group. Age at first event and at death was also higher in the ERT group. Cardiovascular events were not reported. 11 Many single-arm studies report the frequency of clinical outcomes but without a comparator group, precluding evaluation of the impact of ERT.
Systematic review
LV mass and wall thickness
Some studies have reported a significant reduction in left ventricular (LV) mass with ERT. For example, a 6-month double-blind placebo RCT of agalsidase alfa in 15 patients with left ventricular hypertrophy (LVH) at baseline demonstrated a reduction in left ventricular mass index (LVMI), measured using cardiovascular magnetic resonance (CMR) (−6.4 g/m 2 vs +12 g/m 2 ; p=0.02), although there was no further reduction in LV mass in a 24-month open-label extension (n=10). 12 Moreover, in a study including 181 participants with 5-year echocardiographic data taken from a larger cohort of 1428 patients, Mehta et al 13 demonstrated a significant reduction in LVMI during the first 3 years
DISCUSSION
This systematic review evaluated the effectiveness of diseasespecific therapy compared with placebo, and to no intervention, for the cardiovascular manifestations of Fabry disease.
The included studies were heterogeneous in design, size, comparator, risk of bias and outcome. The ERT RCTs (n=5) and chaperone therapy RCTs (n=2) were small and, in the case of ERT, under-represented females (ERT: 183 males and 13 females; migalastat: 49 males and 75 females), but were not at serious risk of bias. The 16 non-randomised studies of ERT with a comparator group were all at serious risk of bias. Comparator groups included patients not requiring disease-specific therapy, who presumably had milder phenotypes although treatment criteria varied considerably, previously published datasets, 'natural history' cohorts and interrupted time-series analyses. There were no non-randomised studies of chaperone therapy with a comparator group. The remaining 49 studies (ERT 45; chaperone therapy 4) were non-randomised studies of intervention without a comparator group, and all were at serious risk of bias.
Studies were predominantly small and single-centre, although the non-randomised ERT studies included large international registries. Inclusion criteria (eg, all-comers vs exclusively males, 24 exclusively females, 30 or patients with pre-existing cardiac or renal disease), 8 12 duration (20 weeks to 10 years), and analysis methodology (eg, within-group comparisons, between-group comparisons, stratification by variables such as age, 24 sex or LVH, 13 open-label extensions) were highly variable, and many studies reported salient levels of missing data.
Outcome measurements were particularly heterogeneous. Clinical cardiovascular events were seldom reported, and were infrequent when they were reported, precluding meaningful analysis. LV mass assessments comprised the most common endpoints (65 studies), although these were also inconsistent, including LV mass, LV mass indexed to body surface area, LV mass indexed to height 12 13 19 and surrogates of LV mass including septal, posterior wall and segmental thickness (online supplemental reference 1). 30 The level of heterogeneity made the effectiveness of disease-specific therapy difficult to assess.
In a series of publications with increasing follow-up duration relating to a 20-week placebo RCT, ERT was consistently associated with a reduction in cardiac endothelial Gb3 (online supplemental references 2 and 3), although this is the only study to evaluate the impact of disease-specific therapy on cardiac endothelial Gb3. ERT was not associated with a reduction in myocardial Gb3 (online supplemental reference 2). 12 Data regarding the impact of disease-specific therapy on cardiac phenotype are inconsistent. A number of studies found ERT and chaperone therapy to reduce or slow the progression of measurements of LV mass, predominantly in patients with LVH at baseline. However, other studies did not identify a significant impact of disease-specific therapy on measurements of LV mass. The reasons for the variable results are unclear, but may include differences in participant characteristics, duration of therapy and differential treatment response. In most studies, assessment of LV mass was made using echocardiography. The limited accuracy and high variability of echocardiography-derived LV mass, particularly in the context of variable ventricular geometry, such as is evident in Fabry disease, is well described (online supplemental reference 4), and measurement of LV wall thickness, even when performed using CMR images, is highly variable (online supplemental reference 5). Together with the small sample sizes and often short duration, many studies may not have been sensitive to relatively small changes in LV mass, especially considering the slowly progressive nature of Fabry disease.
In other conditions, tissue characterisation with CMR has identified myocardial injury and disease expression in advance of changes in 'macro' structure and function, such as LV mass or ejection fraction (online supplemental reference 6). In line with this, myocardial T1 relaxation time is a putative non-invasive biomarker of Gb3 accumulation. The available studies suggested that ERT may possibly be associated with an improvement in T1 relaxation time; however, data were very limited and somewhat inconsistent. 16 21 Data regarding the impact of disease-specific therapy on LGE, a measure of focal myocardial fibrosis, were also very limited (online supplemental references 7 and 8). 16 Previous systematic reviews focus exclusively on ERT and have been variable in their conclusions. A Cochrane review of RCTs concluded that the long-term influence of ERT on risk of morbidity and mortality remains to be established. 5 Two recent systematic literature reviews incorporating observational data concluded that in males: 'data published in adult male patients with Fabry disease demonstrates that the effect of ERT on plasma Gb3 levels, eGFR, and cardiac outcomes is strongest and substantiated by a wide range of publications, showing consistent, dose-dependent reductions in Gb3 accumulation, a reduced decline in eGFR, and improvements in cardiac outcomes'. Whereas in females: 'ERT in adult female patients with Fabry disease has a beneficial effect on Gb3 levels and cardiac outcomes' (online supplemental references 9 and 10). A meta-analysis concluded that ERT did have a beneficial effect on the course of LV mass when compared with untreated groups (online supplemental reference 11). Specifically, in males with LVH at baseline, LV mass remained stable, whereas in males without LVH at baseline the rate of LVH progression was lower than in untreated patients. In females with LVH at baseline, LV mass decreased, and in females without LVH at baseline, LV mass remained stable compared with an increase in untreated patients. Importantly, however, this meta-analysis included data from only 6 of 64 studies that report LV mass (online supplemental reference 11).
Given the marked heterogeneity of study design and outcome measurements, the relatively small sample size of most studies and low reported clinical event rates, the risk of study bias and the rare and slowly progressive nature of the condition, it remains unclear whether disease-specific therapy sufficiently impacts the cardiovascular manifestations of Fabry disease, particularly in the context of the not inconsiderable associated cost.
In future, large, statistically powered, multi-centre, prospective studies assessing the efficacy of therapy using standardised outcomes are required. The impact of disease-specific therapy on pre-defined 'hard' clinical cardiovascular outcomes, such as sudden cardiovascular death or malignant ventricular arrhythmia, must be prioritised, and additional secondary phenotypic outcomes derived from imaging, circulating biomarker and heart rhythm assessments should be measured using contemporary techniques. Moreover, if these secondary phenotypic outcomes are to be meaningful, their prognostic significance must be understood. In particular, the prognostic significance of a change in LVMI and native myocardial T1
Figure 1
Figure 1 PRISMA diagram. PRISMA flow diagram including searches of databases, registers and other sources. PRISMA, Preferred Reporting Items for Systematic Review and Meta-Analysis.
Figure 2
Figure 2 Traffic light plot of the risk of bias assessment, evaluated using the RoB2 and robvis tools. Traffic light plot displaying risk of bias in multiple domains, generated using the revised Cochrane risk of bias tool for randomised trials (RoB2) and the Risk of bias VISualisation tool (robvis).
Figure 3
Figure 3 Summary plot of the risk of bias assessment, evaluated using the RoB2 and robvis tools. Summary plot displaying risk of bias in multiple domains, generated using the revised Cochrane risk of bias tool for randomised trials (RoB2) and the Risk of bias VISualisation tool (robvis).
Figure 4
Figure 4 Traffic light plot of the risk of bias assessment, evaluated using the ROBINS-I tool. Traffic light plot displaying risk of bias in multiple domains, generated using the tool for assessing Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I) and the Risk of bias VISualisation tool.
observed a significant reduction in LVMI in males with LVH after 10.8 years (range 9.6-12.5) of treatment with agalsidase alfa (−13.55 g/m 2.7 (−23.05 to −4.06), n=15; p=0.0061). There was no change in males without LVH and no change in females.
During a 12-month open-label extension in which all patients received migalastat, there was a further reduction in LVMI in patients with LVH originally randomised to migalastat (n=11; −11.3 g/m 2 (−16.6 to −3.3 g/m 2 )). 27
Figure 5
Figure 5 Summary plot of the risk of bias assessment, evaluated using the ROBINS-I tool. Summary plot displaying risk of bias in multiple domains, generated using the tool for assessing Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I) and the Risk of bias VISualisation tool. 7 | 2023-08-30T15:12:53.631Z | 2023-08-28T00:00:00.000 | {
"year": 2023,
"sha1": "7f32ab66ae8c72f5570f6c279f20018a2f40805d",
"oa_license": "CCBYNC",
"oa_url": "https://heart.bmj.com/content/heartjnl/early/2023/08/28/heartjnl-2023-322712.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ba638bb13cac8e77894d52f87c29a64708c0b0b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243352420 | pes2o/s2orc | v3-fos-license | C-Reactive protein and SOFA scale: A simple score as early predictor of critical care requirement in patients with COVID-19 pneumonia in Spain
Objective To identify potential markers at admission predicting the need for critical care in patients with COVID-19 pneumonia. Material and methods An approved, observational, retrospective study was conducted between March 15 and April 15, 2020. 150 adult patients aged less than 75 with a Charlson comorbidity index ≤6 diagnosed with COVID-19 pneumonia were included. Seventy-five patients were randomly selected from those admitted to the critical care units (critical care group [CG]) and seventy-five hospitalized patients who did not require critical care (non-critical care group [nCG]) represented the control group. One additional cohort of hospitalized patients with COVID-19 was used to validate the score. Measurements and main results Multivariable regression showed increasing odds of in-hospital critical care associated with increased C-reactive protein (CRP) (odds ratio 1.052 [1.009-1.101]; P = 0.0043) and higher Sequential Organ Failure Assessment (SOFA) score (1.968 [1.389–2.590]; P < 0.0001), both at the time of hospital admission. The AUC-ROC for the combined model was 0.83 (0.76-0.90) (vs AUC-ROC SOFA P < 0.05). The AUC-ROC for the validation cohort was 0.89 (0.82–0.95) (P > 0.05 vs AUC-ROC development). Conclusion Patients with COVID-19 presenting at admission with a SOFA score ≥ 2 combined with CRP ≥ 9.1 mg/dl could be at high risk of requiring critical care.
Introduction
A new atypical pneumonia appeared in December 2019 that was later found to be caused by a new coronavirus (SARS-CoV-2). By April 2020, the virus had spread to 212 countries worldwide, infecting more than 1,607,467 people and causing more than 98,866 deaths 1 . By 9 May, 223,578 cases of infection had been diagnosed in Spain, with 26,478 related deaths 2 . In the early stages of infection patients may experience low-grade fever or flu-like symptoms, but this can also be followed by severe respiratory failure 3 . Patients with SARS-CoV-2 infection have high rates of hospitalization and admission to the intensive care unit (ICU) 4 . A wide range of parameters have been associated with morbidity and mortality 3,5–8 , ranging from sex 9 to increased plasma D dimer levels 3 , which may suggest endothelial activation. Many reports have also described an association between serious clinical deterioration and cytokine storm, characterized by the release of IL-6 and IL-1 type cytokines, and also by traditional inflammatory markers, such as C-reactive protein (CRP) and ferritin 10,11 . Others suggest that severity can be predicted by different serum inflammatory profiles.
Some patients with SARS-CoV-2 infection may require critical care, and this then becomes one of the main problems faced by healthcare systems during such pandemics 12 . It is still unclear which comorbidities, laboratory test results, or severity features can predict the potential need for these resources, the demand for which can lead to the collapse of the healthcare system. Good triage criteria and correctly identifying the profiles of high-risk patients could be the cornerstones of individualized management 13 .
In this context, we decided to evaluate our series of COVID-19 patients in order to identify markers observed at admission that could predict the need for critical care.
Material and method Patient selection
We conducted a retrospective study collecting data from 150 patients diagnosed with COVID-19 pneumonia. All had a confirmed diagnosis of SARS-CoV-2. Seventy-five patients were randomly selected from among those admitted to the critical care units of the Salamanca University Hospital (critical care [CC]) between 15 March and 15 April 2020. The results of an earlier pilot study in 10 patients showed that patients admitted to the critical care unit were aged under 75 years and had a Charlson comorbidity index 14 of under 6. Therefore, we selected 75 patients admitted to the same hospital during the same period of time who did not require critical care, aged under 75 years and with a Charlson comorbidity index equal to or less than 6 (non-critical group [nCC]) as a control group. Parameters from the nCC were compared with those from the CC.
Exclusion criteria were patients younger than 18 or older than 76, patients with a Charlson comorbidity index greater than 6, and patients in whom essential laboratory results were missing.
ICU admission criteria were: severe refractory respiratory failure secondary to COVID-19 pneumonia, with or without respiratory distress 15 , and the need for intubation and mechanical ventilation. Patients were only admitted to the ICU after weighing up the benefits of admission against their underlying comorbidities and frailty.
The study was approved by the Research Ethics Committee of the Hospital Clínico de Salamanca (PI 2020 05 487). The Ethics Committee waived the requirement for informed consent.
Data collection
All study data were collected from the medical records of patients in each group, and included clinical and anthropometric variables and laboratory results on arrival at the hospital emergency unit, prior to admission. We recorded patient-reported symptoms, onset of symptoms, date of hospital admission, date of admission to the critical care unit, and the treatment received. The Sequential Organ Failure Assessment (SOFA) score on arrival at the hospital was calculated to evaluate the patient's severity 16 . All data were reviewed by 2 physicians (PAP and ESB), and discrepancies were resolved by a third investigator (LMVR).
CRP and ferritin were chosen as biomarkers of the activation of different inflammatory pathways that could be associated with worsening clinical status 10,11 . Four inflammatory profiles were created on the basis of blood test results.
Laboratory procedures
All patients underwent a SARS-CoV-2 rRT-PCR (real-time reverse transcriptase polymerase chain reaction) diagnostic test on admission using a nasopharyngeal swab. Blood tests included complete blood count, coagulation profile, serum biochemical tests (including kidney and liver function, creatine kinase, lactate dehydrogenase, and electrolytes), CRP, interleukin-6 (IL-6), serum ferritin, and procalcitonin. Chest X-rays and computed tomography (CT) scans were performed on all patients if required during their stay.
Definitions
Fever was defined as an axillary temperature of at least 38 °C. COVID-19 pneumonia was described as respiratory symptoms (fever, dry cough, dyspnoea, etc.) plus infiltrates on the chest image 23 . Acute respiratory distress syndrome (ARDS) was defined according to the interim WHO guidelines for the new coronavirus 15 . Hypoxaemia was defined as a partial pressure of oxygen (PaO 2 )/inspired fraction of oxygen (FIO 2 ) ratio of less than 300 mmHg 17 , or an SpO 2 /FIO 2 ratio of less than 220 18,19 . Severe hypoxaemia was defined as PaO 2 /FiO 2 less than 150 mmHg 20 . Severe refractory respiratory failure was defined as failure (increased work of breathing or hypoxaemia) of standard oxygen therapy even after administration of O 2 through a non-rebreather face mask (flow rates 10-15 l/min and FiO 2 of 0.60-0.95) 15 .
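The oxygenation definitions above can be written as a simple classifier. The sketch below encodes only the PaO2/FiO2 thresholds quoted in this paragraph (the SpO2/FiO2 < 220 criterion would be an analogous rule):

```python
# Classifier for the PaO2/FiO2 definitions given in the text
# (PaO2 in mmHg, FiO2 as a fraction; thresholds 300 and 150 mmHg).
def oxygenation_category(pao2: float, fio2: float) -> str:
    pf = pao2 / fio2
    if pf < 150:
        return "severe hypoxaemia"
    if pf < 300:
        return "hypoxaemia"
    return "no hypoxaemia"

print(oxygenation_category(80, 0.6))   # PF = 133 -> "severe hypoxaemia"
```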
Statistical analysis
Data were summarised using descriptive statistics. Missing data were not imputed. Continuous variables were tested for normality using the Kolmogorov-Smirnov test. Quantitative variables are expressed as mean and standard deviation or median and interquartile range (IQR: 25–75), and qualitative variables are expressed as percentages and whole numbers. Quantitative variables were compared using non-parametric tests when the distribution was not normal (Mann-Whitney U test) and parametric tests when it was normal (Student's t test). Qualitative variables were compared using the χ² test or Fisher's exact test.
We calculated the area under the curve (AUC-ROC) of the biomarkers with a p of <0.05 between the two groups in the univariate analysis. Of these, we selected only the 4 parameters with the best AUC in order to include the smallest possible number of covariates for the sample size 21 . The parameters chosen were used as covariates in a multivariate model to predict the primary outcome. We excluded variables from the multivariate analysis when the differences between groups were not significant, when the number of events was too small to calculate the odds ratios, and when the variable was co-linear with another variable.
To differentiate the groups, we also calculated the cut-off points for the selected variables by calculating their sensitivity and specificity and determining the best Youden index. We then used a forward stepwise approach (logistic regression) to create a model for the selected biomarkers. Only those with a p-value < 0.05 were included in the model, and those with a p-value < 0.10 remained in the model, which was developed using simulated replicated data sets, calculating a mean difference in the AUC-ROC between these models and the best biomarkers, with a 95% CI. The DeLong test was used for this purpose. The Hosmer-Lemeshow test was used to assess the calibration of the model. To validate the generalizability of the critical risk scale we used data from a hospital (Hospital Universitario de León) that were not included in the development cohort. The 97 patients in this validation cohort had to fulfil the same inclusion criteria as the patients in the development cohort.
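As an illustration of these two steps, the sketch below computes a Youden-optimal cut-point from a ROC curve and fits a logistic model combining SOFA and CRP. It uses Python/scikit-learn with toy data; the study itself used SPSS and Stata, and the DeLong comparison and Hosmer-Lemeshow calibration are not reproduced:

```python
# Youden cut-point and a combined SOFA + CRP logistic model (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

def youden_cutoff(y: np.ndarray, marker: np.ndarray) -> float:
    fpr, tpr, thresholds = roc_curve(y, marker)
    return float(thresholds[np.argmax(tpr - fpr)])  # maximises sens + spec - 1

# y: 1 = required critical care; X columns: SOFA score, CRP (mg/dl) at admission
y = np.array([0, 0, 1, 1, 0, 1])
X = np.array([[1, 4.0], [0, 2.1], [3, 12.5], [2, 9.8], [1, 7.0], [4, 15.2]])

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"combined-model AUC = {auc:.2f}, CRP cut-off = {youden_cutoff(y, X[:, 1]):.1f}")
```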
Finally, we obtained the best Youden index cut-off points (CPm) for ferritin and CRP from the ROC curves of these variables by calculating the sensitivity and specificity of these points. We used these data to define 4 inflammatory profiles based on lab results: profile 1 (if CRP > CPm and ferritin > CPm), profile 2 (if CRP > CPm and ferritin < CPm), profile 3 (if CRP < CPm and ferritin > CPm), and profile 4 (if CRP < CPm and ferritin < CPm), which were then studied in the two groups. The level of significance was established at p < 0.05.
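The four profiles map directly onto a small decision rule. A sketch using the cut-points reported later in the Results (9.1 mg/dl for CRP and 969 ng/mL for ferritin):

```python
# Inflammatory profile (1-4) from CRP and ferritin relative to their
# best Youden cut-points (CPm values taken from the Results section).
def inflammatory_profile(crp: float, ferritin: float,
                         crp_cpm: float = 9.1, fer_cpm: float = 969.0) -> int:
    if crp > crp_cpm:
        return 1 if ferritin > fer_cpm else 2
    return 3 if ferritin > fer_cpm else 4

print(inflammatory_profile(crp=14.2, ferritin=1500))  # -> 1 (highest risk)
```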
Statistical analysis was performed on SPSS 21 ® and Stata 15 ® .
Results
Between 15 March and 15 April, 150 patients were included in this study: 75 patients who required admission to the ICU and formed the group of critical patients (CC) and 75 patients who did not, and were included in the control group (nCC). All CC patients were intubated and connected to mechanical ventilation. Four patients were excluded from this group due to missing data, so the final sample consisted of 146 patients.
The baseline characteristics and reasons for admission to the ICU are shown in Table 1. There were no significant differences between groups in terms of symptoms.
The mean time from onset of symptoms to hospital admission was 7 days (3–9) in both critical and non-critical patients. The time from disease onset to start of antiviral treatment was 8 days (7–11) in both groups. In the CC group, the mean time from onset of symptoms to admission to the ICU was 9 (7–14) days.
The overall in-hospital mortality rate was 27.4%, and was significantly higher in the CC group (40.8%) than in the nCC group (14.7%).
The different treatments administered to patients during their hospitalization are shown in Table 1. Although we observed a trend towards more hydroxychloroquine and azithromycin administration in the CC group (p = 0.05), more significant differences were observed in terms of treatment with high-dose heparin, IL-6 receptor inhibitors and corticosteroids, which were administered in 43%, 69% and 77.1% of patients in the CC group compared to 13.3% (p = 0.000), 50.7% (p = 0.024) and 40% (p = 0.000) of patients in the nCC group, respectively. Table 2 shows the results of the haemogram, coagulation and biochemical tests performed at the time of hospital admission, including pro-inflammatory markers. In the blood count, neutropaenia and lymphocytopaenia were more frequently observed in the group of patients who required admission to critical care units. Regarding the biochemical tests, higher baseline levels of creatinine, creatine kinase and lactate dehydrogenase were found in the CC group. Patients who required admission to the ICU also had higher baseline levels of fibrinogen, IL-6, CRP, procalcitonin, and ferritin as pro-inflammatory markers. Finally, the SaO 2 /FiO 2 ratio was significantly lower in the CC vs the nCC group. It is interesting to note that patients requiring admission to the ICU presented a considerably higher SOFA score on admission (Table 1). The SOFA score was calculated from 6 different organ system scores. In both groups, particularly in the CC group, the most important scores were obtained from the respiratory item (Tables 1 and 3). Table 4
In our multivariate logistic regression model, a high SOFA score gave an odds ratio of 1,968 for the need for admission to the ICU, and a high CRP value during admission was associated with an odds ratio of 1,052 for the need for ICU admission ( Table 5). The presence of high levels of procalcitonin was also associated with an odds ratio of 1.152 for the need for critical care, although it was finally eliminated due to low specificity in the AUC.
The validation cohort included 97 patients with a mean age of 65 years (57–70), of which 71 (73%) were male and 96 (96%) had a Charlson index ≤ 4. The SOFA variables collected for the validation cohort are shown in Table 3. The accuracy of the SOFA score was similar in both the validation and development cohorts, with an AUC of 0.89 in the validation cohort (95% CI: 0.82–0.95), a sensitivity of 86%, and a specificity of 72% (p = 0.236 compared to the development cohort). Table 6 shows the distribution of patients in the nCC and CC groups according to their inflammatory profiles on arrival at the hospital. It is interesting to note that 67.7% of patients with inflammatory profile 1 (CRP > 9.1 mg/dl and ferritin > 969 ng/mL) on admission required critical care, while only 16.1% of those admitted with inflammatory profile 4 (CRP < 9.1 mg/dl and ferritin < 969 ng/mL) required critical care.
Discussion
We developed and validated a clinical risk scale to predict progression to critical illness among patients hospitalized for COVID-19. The model performed well, with an AUC of 0.8 or higher in both the development and validation cohorts.
The results of this retrospective study show that the presence of CRP levels ≥9.1 mg/dl and a SOFA score ≥2 in COVID-19 + patients at the time of hospital admission are independent predictors, with a sensitivity and specificity of 77%, of the need for admission to the ICU. This is the first time the combination of both these markers has been used to predict admission to intensive care in COVID-19 patients.
The association of the SOFA score with severity in COVID-19 patients has already been reported in other studies 5,6 . However, the score of 2 observed in our series gave an acceptable AUC of 0.78 (0.70–0.86), which is in fact the cut-off point for distinguishing between septic and non-septic patients 16 . Although this applies to bacterial infections, viral infections can lead to sepsis, and nearly 40% of adults with community-acquired pneumonia due to viral infection develop sepsis 22 . In our series, the SOFA score was shown to be a predictor of the need for critical care after correcting for the remaining main covariates, such as CRP, procalcitonin, and sex. Tables 1 and 3 show the importance of the respiratory item in the SOFA score at the time of admission, as expected.
Regarding CRP, other authors have reported that high values are related to prognosis and severity in COVID-19 23,24.
In our series, the role of CRP as a predictor of the need for critical care maintained its significance in the univariate analysis, and the cut-off point of 9.1 mg/dl proved to be the most sensitive and specific value for discriminating between patients who will require critical care and those who will not. The role of CRP as a predictive factor was maintained in the multivariate analysis. As CRP is synthesised mainly in response to pro-inflammatory cytokines 10, particularly IL-6 and, to a lesser degree, IL-1 and tumour necrosis factor alpha (TNF-α), it should be determined at the time of admission, since it is more accessible than other proxies of pro-inflammatory cytokines. In our series, the combination of CRP and SOFA yielded an AUC of 0.83 (0.76–0.90). This is excellent for predicting the need for intensive care during admission and, to our knowledge, had not been previously reported. It is also plausible on a biological level since, as in other infectious processes 25, the combination of one of the main markers of inflammation (CRP) with a validated organ failure scale (SOFA) can provide a more accurate diagnosis/prognosis in patients with COVID-19.
Another important finding of this study is that almost 70% of patients admitted with CRP levels above 9.1 mg/dl and ferritin above 969 ng/mL required critical care. This suggests that these parameters can also accurately predict prognosis. Conversely, an inflammatory profile consisting of CRP < 9.1 mg/dl and ferritin < 969 ng/mL would help identify patients who are unlikely to require critical care; in fact, only 16% of patients in our series with this profile required admission to the ICU.
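A minimal helper illustrating the four admission profiles implied by these two thresholds is sketched below; the numbering of the two mixed profiles (2 and 3) is an assumption, since the text only details profiles 1 and 4.

```python
# Illustrative helper assigning the inflammatory profiles described above
# from admission CRP (mg/dl) and ferritin (ng/mL). Profile numbers 2 and 3
# for the mixed cases are assumed, not taken from the paper.
def inflammatory_profile(crp: float, ferritin: float) -> int:
    high_crp = crp > 9.1
    high_ferritin = ferritin > 969
    if high_crp and high_ferritin:
        return 1   # ~68% of these patients required critical care
    if not high_crp and not high_ferritin:
        return 4   # only ~16% required critical care
    return 2 if high_crp else 3

print(inflammatory_profile(12.3, 1500))  # -> 1
```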
As mentioned previously, CRP is involved in the IL-6 pathway. IL-1 and IL-6 have been shown to trigger acute activation of endothelial cells 24, causing high levels of these and other cytokines in critically ill patients 26. Ferritin is also involved in the IL-1 pathway 10,11; therefore, the combination of CRP and ferritin may constitute an inflammatory pattern with prognostic potential.
Procalcitonin has been associated with prognosis in patients with an inflammatory response similar to that found in our series 27. However, its role as a predictor of the need for critical care was not maintained in the multivariate analysis, perhaps due to its poor diagnostic performance in viral infections 28.
Sex has been described in other series as a prognostic factor 8,9 for overall mortality. However, in our study, male sex did not predict an increased risk of requiring critical care. The same was true of time to hospital admission or to the start of treatment. Comorbidity from underlying diseases on arrival at the hospital was similar in both groups, except for asthma, which has also been identified in other studies 29. Regarding treatment, the group of critically ill patients received more high-dose heparin, interleukin receptor inhibitors, and corticosteroids than expected, owing to their severity.
Other authors have suggested that LDH, CK, and white blood cells can be prognostic factors 3,5,7,30, particularly as markers of viral load (LDH and CK) 30. In our case, they were not included in the multivariate analysis because they presented a lower AUC than that required for diagnosis.
Our study has certain limitations due to its retrospective nature, and some laboratory data were missing for some patients. We used a small sample to construct the risk scale and a relatively small sample for validation purposes. Despite this small sample size, the inclusion of randomly selected adult COVID-19 patients is representative of the number of cases treated in critical care units.
Finally, as expected, the in-hospital mortality rate in patients who required admission to the ICU was more than double that of patients not requiring critical care. The mortality rate reported here is consistent with that published in other series 3 and shows that it is crucial to identify this group of patients at the time of admission. As patients with a SOFA score ≥ 2 and high CRP levels appear more likely to require critical care, additional therapeutic actions could be taken to reduce the need for such care and thus reduce the in-hospital mortality rate. However, further studies and prospective trials are needed to support this finding.
Conclusion
COVID-19 patients with a SOFA score ≥ 2 and CRP ≥ 9.1 mg/dl could constitute the population most likely to require critical care. | 2021-11-05T13:12:57.097Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "fbbcfddd025b8788189681507de7bf140c8b71ec",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.redare.2020.11.008",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9ecb48bc1f05bb750e5cf1ad3a374d75cafce55",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53116156 | pes2o/s2orc | v3-fos-license | Purification of the Recombinant Green Fluorescent Protein Using Aqueous Two-Phase System Composed of Recyclable CO2-Based Alkyl Carbamate Ionic Liquid
The formation of aqueous two-phase systems (ATPSs) with environmentally friendly and recyclable ionic liquids has been gaining popularity in the field of protein separation. In this study, ATPSs comprising N,N-dimethylammonium N′,N′-dimethylcarbamate (DIMCARB) and thermo-responsive poly(propylene) glycol (PPG) were applied for the recovery of recombinant green fluorescent protein (GFP) derived from Escherichia coli. The partition behavior of GFP in the PPG + DIMCARB + water system was investigated systematically by varying the molecular weight of PPG and the total composition of the ATPS. Overall, GFP was found to partition preferentially to the hydrophilic DIMCARB-rich phase. An ATPS composed of 42% (w/w) PPG 1000 and 4.4% (w/w) DIMCARB gave the optimum performance in terms of GFP selectivity (1,237) and yield (98.8%). The optimal system was also successfully scaled up by 50 times without compromising the purification performance. The bottom phase containing GFP was subjected to rotary evaporation to remove the DIMCARB. The stability of GFP was not affected by the distillation of DIMCARB, and the DIMCARB was successfully recycled in three successive rounds of GFP purification. The potential of the PPG + DIMCARB + water system as a sustainable protein purification tool is promising.
INTRODUCTION
Green fluorescent protein (GFP) has been widely applied in cellular and molecular biology research due to its unique properties, such as its intense fluorescence, high thermal stability, and adjustable fluorescence intensity via proper manipulation of the protein structure (Skosyrev et al., 2003; Li et al., 2009; Quental et al., 2015). Additionally, GFP can be easily quantified via spectrofluorimetric assay, making it a good candidate as a biosensor (Wouters et al., 2001) and biomarker (Gerisch et al., 1995) in biotechnological applications. Recombinant GFP has been successfully expressed in various organisms, including Escherichia coli (E. coli) (Lo et al., 2018), zebrafish (Amsterdam et al., 1996), Drosophila (Wang and Hazelrigg, 1994), and yeast (Amsterdam et al., 1996). Nevertheless, the purification of GFP with conventional chromatographic techniques generally involves complex and tedious operations, resulting in a higher purification cost (Deschamps et al., 1995; Cabanne et al., 2005; McRae et al., 2005; Zhuang et al., 2008).
Aqueous two-phase systems (ATPSs) have been widely viewed as a potential alternative method for the separation of biomolecules. The advantages of ATPS include high extraction efficiency, cost effectiveness, and simplicity of operation. This type of liquid-liquid extraction is commonly exploited for the primary recovery and purification of valuable biological products such as proteins (Merchuk et al., 1998), enzymes (Kroner et al., 1982), nucleic acids (Gomes et al., 2009), and viruses (Liu et al., 1998). ATPS is conventionally made of two types of incompatible polymers, or a polymer coupled with a salting-out-inducing salt; the concentrations of the phase-forming components in the aqueous solution must exceed a threshold value. ATPS has been widely perceived as a biocompatible medium for preserving the biological properties of biomolecules, owing to the large proportion of water in both phases (Yao et al., 2018). The extraction of GFP has been successfully achieved using traditional ATPSs consisting of phase-forming components such as polymers, surfactants, alcohols, and inorganic salts (Jain et al., 2004; Johansson et al., 2008; Li and Beitle Robert, 2008; Samarkina et al., 2009; Lopes et al., 2011; Lo et al., 2018). However, the limited polarity range of the coexisting phases and the poor recyclability of the conventional phase-forming components have constituted a major bottleneck that hampers the widespread use of these conventional ATPSs (Hatti-Kaul, 2000).
Over the past decade, ionic liquids (ILs) have been envisaged as alternative phase-forming components of ATPS due to their highly tunable properties (Freire et al., 2012). An IL is a type of molten organic salt with a melting point below 100 °C. By properly selecting the cation and anion counterparts, the resultant ILs possess the polarity and affinity suitable for the separation of proteins in ATPS. In comparison to conventional polymer-based ATPS, the flexibility of IL-based ATPS allows a better design of the separation system for the target protein in a highly complex crude mixture. Nonetheless, the wide implementation of ILs in liquid-liquid extraction is still restricted by the synthesis cost of ILs, which is generally higher than that of conventional phase-forming components (Plechkova and Seddon, 2008). Moreover, some of the conventionally used ILs (i.e., imidazolium- and pyridinium-based ILs) are reported to be highly toxic (Docherty and Kulpa, 2005). With the rising environmental consciousness of the public, the application of environmentally benign ILs (e.g., cholinium- and amino acid-based ILs) in forming ATPS has been on the rise (Song et al., 2015, 2017, 2018a). The CO2-based alkyl carbamate IL, which is formed by the combination of CO2 with dimethylamine (Bhatt et al., 2006; Chowdhury et al., 2010; Idris et al., 2014; Vijayaraghavan and MacFarlane, 2014), has recently emerged as a potential phase-forming component of IL-based ATPS. In general, the synthesis of the alkyl carbamate IL is considerably simpler and cheaper than that of conventional ILs (Kreher et al., 2004). It has been reported that the alkyl carbamate IL possesses the characteristics of biodegradability and biocompatibility (Stark et al., 2009). More importantly, the CO2-based alkyl carbamate IL can be distilled at a relatively low temperature under vacuum, thereby allowing a simple recovery of the IL for the subsequent extraction process. Recently, our group reported a novel type of IL + polymer ATPS comprising N,N-dimethylammonium N′,N′-dimethylcarbamate (DIMCARB, i.e., the simplest form of CO2-based alkyl carbamate IL) and the thermo-responsive poly(propylene) glycol (PPG) (Song et al., 2018b). Both the DIMCARB and the PPG used in this ATPS can be recovered via rotary evaporation and thermo-separation, respectively, for a viable recycling of the phase-forming components.
Here, the application of DIMCARB + PPG + water systems for separating a target biomolecule from a real crude protein mixture is reported for the first time. The purification of recombinant GFP from the clarified lysate of a microbial feedstock was performed using ATPSs made of DIMCARB and PPG. The stability and partition behavior of GFP in the ATPSs were studied, and the composition of the ATPS was optimized for the purification of GFP. To evaluate the sustainability of this ATPS for practical use, the DIMCARB was also recycled over several rounds of GFP purification.
Production of Recombinant GFP
The GFP was expressed by E. coli strain BL21(DE3)pLysS transformed with the pET28a-GFP plasmid. The cells were cultured at 30 °C in LB broth medium containing 50 µg/ml kanamycin and 50 µg/ml chloramphenicol. When the optical density (OD600) of the cell culture reached 0.7-0.9, 0.5 mM IPTG was added to the culture for the induction of GFP expression. The cell culture was incubated in an orbital shaker for another 12 h at 30 °C and 200 rpm. Then, the culture broth was centrifuged at 4,000 rpm and 4 °C for 20 min. The harvested cell pellets were resuspended in 50 mM Tris-HCl (pH 8) buffer, and the concentration of biomass was adjusted to 10% (w/v). The disruption of the cells was performed using an ultrasonic homogenizer (Cole-Parmer, U.S.A.) equipped with a horn tip of 3 mm diameter (Model KH-04710-42, Cole-Parmer, U.S.A.) and operated at a frequency of 20 kHz and 40% amplitude for 40 min in pulse mode (Lo et al., 2016). Finally, the ultrasonicated solution was centrifuged for 10 min at 14,000 rpm and 4 °C. The supernatant containing the soluble GFP was collected and used as the feedstock for the ATPS.
Protein Quantification
The concentration of GFP was determined spectrofluorometrically using a standard curve of pure GFP. The preparation of the pure GFP is described elsewhere (Lo et al., 2018). In brief, 100 µl of the sample was first loaded into a black microtiter plate. The relative fluorescence units (RFU) of the sample were measured using a spectrofluorometer (Infinite® 200 PRO, Tecan) at an excitation wavelength of 448 nm and an emission wavelength of 512 nm. The concentration of total protein in the sample solution was estimated from the polyacrylamide gel using a densitometric method as previously described (Lee et al., 2015).
In the gel, the protein bands in a sample lane were evaluated individually based on the intensity ratio of a single band to the total bands in the lane. The intensity of the control band (i.e., pure GFP) was used in the calculation of the protein concentration.
Sodium Dodecyl Sulfate Polyacrylamide Gel Electrophoresis (SDS-PAGE)
Prior to the SDS-PAGE analysis, the protein samples were subjected to ethanol precipitation for the removal of phase-forming components (e.g., ionic DIMCARB) that could interfere with the electrophoresis. A mixture containing one part of sample solution and nine parts of chilled absolute ethanol was first prepared. The mixture was vortexed vigorously before being stored at −20 °C for 60 min. Next, the solution was centrifuged at 15,000 rpm for 10 min. After discarding the supernatant, the pellet was resuspended in chilled ethanol and the solution was centrifuged again. This washing step was performed twice, and the pellet was then dried at room temperature for 10 min. Lastly, the pellet was re-solubilized in Tris-HCl buffer (pH 8; 50 mM). The SDS-PAGE was conducted using a 12% (w/v) resolving gel in combination with a 5% (w/v) stacking gel (Laemmli, 1970). The thickness of the polyacrylamide gel was 1 mm. The electrophoresis was conducted at 180 V for 60 min using an electrophoresis unit (Mini Protean™ 3, Bio-Rad, U.S.A.). After the electrophoresis, the gel was stained with Coomassie Brilliant Blue R-250 for 45 min. Subsequently, the gel was destained with a destaining solution made of 10% (v/v) methanol and 10% (v/v) acetic acid until a clear background was obtained. The protein bands on the gel were visualized and analyzed using a gel imaging system (Gel Doc™ XR+, Bio-Rad).
Partitioning of GFP in ATPSs
ATPS was prepared in a 2-ml micro-centrifuge tube by adding the appropriate amounts of PPG, DIMCARB, buffer solution (50 mM Tris-HCl) and 10% (w/w) crude feedstock, with the pH of the final mixture adjusted to the optimum value. Next, the system was mixed thoroughly using a vortex mixer prior to settling for 3 h to attain phase equilibrium. The temperature of the system was maintained at 25 °C during the incubation in a thermostatic bath. Subsequently, the system was centrifuged at 4,000 rpm for 10 min to achieve complete phase separation. The concentrations of PPG, DIMCARB, and water in both the top and bottom phases of the investigated ATPSs were analyzed using the methods described in a previous study (Song et al., 2018b). The partition coefficient of GFP (K_GFP) or total protein (K_protein) was determined using Equation (1); the orientation shown here follows the partitioning behavior stated in the Results, where a positive log K corresponds to enrichment in the bottom phase:

$$K_{GFP} = \frac{C_{B(GFP)}}{C_{T(GFP)}} \tag{1}$$

where C_B(GFP) and C_T(GFP) are the concentrations of GFP in the bottom and top phases, respectively. Selectivity (S) was defined as the ratio of K_GFP to K_protein, as shown in Equation (2):

$$S = \frac{K_{GFP}}{K_{protein}} \tag{2}$$

The yield of GFP partitioned to a specific phase of the system (Y) was calculated using Equation (3):

$$Y(\%) = \frac{C_{P(GFP)}}{C_{C(GFP)}} \times 100 \tag{3}$$

where C_P(GFP) and C_C(GFP) represent the concentration of GFP in the top or bottom phase of the ATPS and in the crude feedstock, respectively.
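As a minimal illustration of Equations (1)-(3), the sketch below computes the three quantities from phase concentrations. The bottom/top orientation of K is inferred from the partitioning behaviour described in the Results, and the yield expression is the simplified concentration-ratio form given above, ignoring any phase-volume correction.

```python
# Sketch of the partitioning metrics in Equations (1)-(3); concentration values
# are arbitrary placeholders, not measured data from the study.
import math

def partition_coefficient(c_bottom: float, c_top: float) -> float:
    """K = concentration in the bottom phase / concentration in the top phase."""
    return c_bottom / c_top

def selectivity(k_gfp: float, k_protein: float) -> float:
    """S = K_GFP / K_protein, Equation (2)."""
    return k_gfp / k_protein

def yield_percent(c_phase_gfp: float, c_crude_gfp: float) -> float:
    """Y (%) = GFP concentration in the collected phase relative to the crude feed."""
    return 100.0 * c_phase_gfp / c_crude_gfp

k_gfp = partition_coefficient(9.5, 0.2)
print(math.log10(k_gfp) > 0)   # True -> GFP enriched in the bottom phase
```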
Stability Test for GFP
The effects of pH and temperature on the stability of GFP were evaluated by incubating the feedstock solution of GFP under different pH and temperature conditions for 60 min. The GFP suspended in Tris-HCl buffer (pH 8; 50 mM) at 25 °C for 60 min was used as the control. The stability of GFP was assessed using the indicator "percentage relative concentration of GFP," which is calculated as the RFU of the sample as a percentage of the RFU of the control. The results are shown in Figures 1 and 2. At pH values ranging from 4 to 10, the percentage relative concentration of GFP was >91.8%, showing that the stability of the protein was well preserved. A previous study (Johansson et al., 2008) reported that GFP is stable over a broad pH range (5.0-11.5). Extreme pH conditions affect the structural stability and solubility of a protein, leading to irreversible denaturation.
As shown in Figure 2, a low percentage relative concentration of GFP (i.e., 24.7%) was observed when the incubation temperature was raised to 70 °C. Thus, it can be concluded that GFP was stable when incubated at temperatures below 70 °C. The GFP was mostly denatured when the incubation temperature increased to 80 °C, as indicated by a relative concentration of 0.89%. Generally, matured GFP is relatively stable and is able to fluoresce at temperatures up to 65 °C (Tsien, 1998). An increase in incubation temperature promotes the unfolding of the native secondary and tertiary structures of GFP (Penna et al., 2005). Bokman and Ward (1981) reported that the native secondary structure of GFP is essential to maintain the fluorescent form of the chromophore.
GFP Partitioning in PPG + DIMCARB + Water System
The partitioning of proteins in ATPSs is typically governed by the interactions between the phase-forming components and the biomolecules (Tubio et al., 2004). In an ATPS, a protein interacts with the surrounding molecules through hydrophobic interactions, hydrogen bonding, electrostatic interactions, steric effects, and van der Waals forces (Dreyer et al., 2009). To design an ATPS for the efficient separation of a protein, it is important to understand the factors governing the partitioning of the protein between the phases. The partitioning of GFP in PPG + DIMCARB + water systems was investigated, and the results are presented in Table 1 and Figure 3. The compositions of the phase-forming components were selected based on the corresponding phase diagrams reported in our previous work (Song et al., 2018b). The concentrations of the phase-forming components were varied according to the tie-line length (TLL). To exclude the effect of volume ratio on GFP partitioning, the volume ratio of the top and bottom phases was fixed at 1:1. As shown in Table 1, the majority of the investigated systems showed a positive value of log K_GFP, indicating that GFP was preferentially partitioned to the DIMCARB-rich bottom phase. Among the investigated ATPSs, the PPG 1000 + DIMCARB + water system at TLL = 97.4% (sample number 15) gave the highest S value (1237) and Y% of GFP in the bottom phase (98.8%).
However, an inverse trend in the partition behavior of GFP was noted in some of the systems (sample numbers 6, 7, 11, and 12), as reflected by the negative values of log K_GFP (see Table 1). The compositions of the systems that underwent phase inversion were analyzed, and the results are summarized in Table 2. From the liquid-liquid equilibrium data of these systems, the top phase mainly consisted of DIMCARB, whereas PPG was predominantly concentrated in the bottom phase. Prior to the addition of feedstock to these systems, the concentration of PPG 700/1000 was in the range of 44-50% (w/w) and the concentration of DIMCARB was ≤3% (w/w). The phase inversion occurred upon the addition of feedstock to these systems. As shown in Table 2, the addition of feedstock reduced the concentrations of PPG 700/1000 in both phases. The phase inversion may be associated with the temperature-responsiveness of PPG; the lower critical solution temperature (LCST) of PPG 700 and PPG 1000 may increase with decreasing polymer concentration. At room temperature and a lower concentration of PPG 700/1000, the hydrophobic moieties along the polymer chains are suspected to undergo desolvation, resulting in polymer aggregates and a denser polymer-rich phase. Therefore, the inversion of phases occurred in the system and rendered the DIMCARB-rich fraction the top phase. Figure 3 shows that the volumes of the top phases of the PPG 400 + DIMCARB + water + feedstock systems decreased with increasing TLL. This indicates that the equilibrium of these PPG 400-based systems was affected by the addition of feedstock. In contrast, the volume ratio of the PPG 700/1000 + DIMCARB + water + feedstock systems remained mostly constant. Furthermore, the amount of noticeable protein precipitate at the interphase of the PPG 400 + DIMCARB + water + feedstock systems increased as the TLL of the system increased. This may be attributed to a higher degree of salting-out of host cell proteins in these systems. The presence of protein debris was also found at the interphase of ATPSs composed of PPG 700/1000. Nonetheless, the precipitation of the host cell proteins in these systems also served as a means of removing protein contaminants, thereby assisting the purification of GFP by ATPS.
Recovery of Phase-Forming Components
Despite the promising potential of IL-based ATPSs for application in protein separation, these systems are yet to be widely adopted in industrial operations due to the high cost of ILs. In contrast to conventional ILs, DIMCARB is relatively cheap because CO2 is used as the raw material.
Nevertheless, the cost of DIMCARB is still about 2 to 5 times higher than that of conventional phase-forming components (e.g., inorganic salts, alcohols and carbohydrates). Therefore, the recycling of the IL is highly desirable for the practical application of ATPS.
The recyclability of DIMCARB was evaluated using the optimized ATPS composed of 42% (w/w) PPG 1000 and 4.4% (w/w) DIMCARB. First, the scale of the ATPS was increased from 2 g to 100 g. The compositions and purification efficiencies of the scaled-up system are presented in Table 3. Regardless of the scale of the ATPS, the composition of the system and the purification efficiencies remained nearly identical. These results affirm that the performance of GFP purification was not compromised by scaling up this IL-based ATPS. DIMCARB can be easily distilled and recovered via evaporation (Song et al., 2018b). In this study, the DIMCARB-rich bottom phase of the 100-g ATPS containing the partitioned GFP was subjected to rotary evaporation at 45 °C and 85 mbar for 1 h. During the process, DIMCARB dissociated into gaseous dimethylamine and CO2. As the gases passed through a condenser unit, re-association occurred and the DIMCARB was re-formed in the liquid state.
The DIMCARB recovered from the distillation was characterized by Fourier transform infrared (FT-IR) and carbon-13 nuclear magnetic resonance (13C NMR) spectroscopies. The results of the FT-IR and 13C NMR analyses are presented in Figure 4. In the FT-IR spectra, the symmetric carbamate (~1408 cm−1) and carbamate C-O stretching (~1621 cm−1) peaks were noted in both the pure and the recycled DIMCARB samples (see Figure 4A). Similarly, the carbamate signal at ~162 ppm was observed in the 13C NMR spectra of the pure and recovered DIMCARB samples (see Figures 4B,C). These analyses confirmed the successful recovery of DIMCARB from the bottom phase of the ATPS. Moreover, the percentage relative concentration of GFP before and after the distillation of DIMCARB was found to be 99.4% (data not shown), indicating that the stability of GFP was well preserved during the distillation of DIMCARB. The DIMCARB recovered from the distillation was used to prepare a new batch of ATPS for GFP purification. As illustrated in Figure 5, this recycling step was performed for three successive rounds of GFP purification, denoted Recycling 0, Recycling 1, and Recycling 2, respectively. Overall, the phase compositions of the recycled ATPSs were almost identical to that of the primary ATPS. Moreover, the partition behavior of GFP in the recycled ATPSs did not deviate significantly (as indicated by the S values shown in Figure 5).
SDS-PAGE Analysis of Purified GFP Using PPG + DIMCARB + Water Systems
SDS-PAGE analysis was performed to assess the purity of the protein and the performance of protein separation by the ATPSs. The results are shown in Figure 6. In Lane P, the GFP standard was identified as a single thick band at approximately 27 kDa. In Lane C (crude feedstock sample), multiple protein bands were detected along with a thick band of GFP at 27 kDa, indicating the presence of protein impurities in the harvested culture broth prior to the purification process. For the optimized ATPS made of 42% (w/w) PPG 1000 and 4.4% (w/w) DIMCARB (sample number 15), the top phase of the system (Lane T15) did not exhibit any protein band. This suggests that the protein impurities had been mostly precipitated and concentrated at the interphase of the system (see Figure 3). On the other hand, the bottom phase of the system (Lane B15) showed a single dark band at ~27 kDa (i.e., GFP) and some faint bands representing minor impurities. Similarly, the bottom phase from the scaled-up ATPS (Lane S) had a protein band profile similar to that of Lane B15. Lane D, representing the distillate recovered from the rotary evaporation of the DIMCARB-rich phase, did not show any protein band.
CONCLUSIONS
The PPG + DIMCARB + water systems were successfully applied for the purification of GFP from clarified E. coli lysate. In general, GFP has a higher affinity toward the DIMCARB-rich phase. The optimal purification of GFP was attained with an ATPS composed of 42% (w/w) PPG 1000 and 4.4% (w/w) DIMCARB. The optimized ATPS was also successfully scaled up by 50 times. Moreover, DIMCARB was successfully recovered from the IL-rich phase and was reused for three successive rounds of GFP purification. Overall, the PPG + DIMCARB + water system demonstrated satisfactory performance in the purification of protein from microbial lysate. The ease of recycling DIMCARB via distillation makes the ATPS even more sustainable and environmentally benign for application in protein purification.
AUTHOR CONTRIBUTIONS
The experiments were designed by CS, PL, and CO. The experiments were carried out by ZT and SL. The manuscript was written by CS, PL, PS, and CO. All authors have read and approved the final manuscript. | 2018-10-31T13:02:09.476Z | 2018-10-31T00:00:00.000 | {
"year": 2018,
"sha1": "6ca92d4f6efbb336ed8ba2506311cad86cbb92ea",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fchem.2018.00529/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ca92d4f6efbb336ed8ba2506311cad86cbb92ea",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
259357284 | pes2o/s2orc | v3-fos-license | Large‐scale pathogenicity prediction analysis of cancer‐associated kinase mutations reveals variability in sensitivity and specificity of computational methods
Abstract Background Mutations in kinases are the most frequent genetic alterations in cancer; however, experimental evidence establishing their cancerous nature is available only for a small fraction of these mutants. Aims Prediction analysis of kinome mutations is the primary aim of this study. A further objective is to compare the performance of various software tools in the pathogenicity prediction of kinase mutations. Materials and methods We employed a set of computational tools to predict the pathogenicity of over forty-two thousand mutations and deposited the kinase-wise data in the Mendeley database (Estimated Pathogenicity of Kinase Mutants [EPKiMu]). Results Mutations are more likely to be drivers when present in the kinase domain (vs. the non-kinase domain) and at hotspot residues (vs. non-hotspot residues). We identified that, while predictive tools have low specificity in general, PolyPhen-2 had the best accuracy. Further efforts to combine all four tools by consensus, voting, or other simple methods did not significantly improve accuracy. Discussion The study provides a large dataset of kinase mutations along with their predicted pathogenicity that can be used as a training set for future studies. Furthermore, a comparison of the sensitivity and specificity of commonly used computational tools is presented. Conclusion Primary-structure-based in silico tools identified more cancerous/deleterious mutations in the kinase domains and at hotspot residues, while having higher sensitivity than specificity in detecting deleterious mutations.
KEYWORDS
kinase, pathogenicity prediction, point mutations, prediction accuracy, sensitivity and specificity

1 | INTRODUCTION

Mutated kinases are highly sought-after therapeutic targets. Next-generation sequencing (NGS) methods have identified a variety of somatic mutations in kinases across multiple cancers; however, the functional significance and pathogenic nature of less-frequent mutations are largely unknown. 1 It is important to understand the role of mutations in tumorigenesis in order to determine the treatment strategy using targeted therapeutics. 2 Therefore, classification of many rare kinase mutations as either driver or passenger mutations is an important step for precision oncology. It is difficult, however, to experimentally determine the cancerous nature of each of these rare mutations, 3 thus stressing the need for the prediction of their pathogenicity. Importantly, experimental validation of about 3400 mutations that were predicted as pathogenic in a pan-cancer and pan-software study concluded that 60-85% are likely drivers, highlighting the utility of pathogenicity prediction analyses for less-frequent mutations. 4 While several studies have reported computational methods for predicting the pathogenic nature of rare mutations, studies on predicting the pathogenicity of kinase mutants are scant. A previous study using the SIFT and PolyPhen-2 tools reported the enrichment of somatic mutations in phosphorylation sites as affecting kinase-substrate interactions and indicating deleteriousness. 5 However, it remains important to predict the extent of pathogenicity of all kinase mutations irrespective of their ability to be phosphorylated. Another recent study, by Rodrigues et al., reported a computational approach in which the pathogenicity of kinase mutations was predicted with high accuracy, although only for a small set of mutations. 6 It was also reported that tumorigenic activating mutations tend to occur in the kinase domain with a slightly higher selection pressure than those in non-kinase domains. 7 With the routine use of NGS testing of tumor tissue in both the clinic and the laboratory, identifying the functional significance of many of these mutations is a major hurdle. In light of this, we collected kinase mutants reported in the Catalogue of Somatic Mutations in Cancer (COSMIC) database, predicted their pathogenic nature using multiple in silico tools, performed comparative analysis across the kinome, and identified factors that determine the likelihood of a mutation being a driver. Further, we have estimated the accuracy of the individual primary structure tools to help clinicians and research scientists interpret their own results when using these tools.
2 | MATERIALS AND METHODS
Four widely used in silico tools were selected to predict the pathogenicity of mutations in the kinome. PolyPhen-2 (Polymorphism Phenotyping v2, a Naïve Bayes classifier-based method) combines sequence-based and structure-based features. 8 SIFT (Sorting Intolerant From Tolerant) is a position-specific scoring matrix-based method that predicts the deleterious nature of mutations according to the sequence homology derived from the PSI-BLAST method. 9 PredictSNP is a consensus classifier based on eight prediction tools, including PolyPhen-2 and SIFT. 10 FATHMM (Functional Analysis Through Hidden Markov Models) predicts the cancerous nature of mutations based on sequence homology. 11 Our cancer kinome dataset included 42,165 point mutations belonging to 248 kinases from the COSMIC database. In total, 248 Excel files were deposited in the Mendeley database (10.17632/xn3xrppsyy.1), each containing prediction data for kinase mutations based on SIFT, PolyPhen-2, PredictSNP, and FATHMM.
To estimate the performance of each tool, we evaluated accuracy, sensitivity, and specificity 12 on a ground-truth dataset composed of 141 kinase mutations (99 cancerous and 42 inert) with experimentally proven activity. As there are very few examples of inert kinase mutations in the literature, there was no alternative to a ground-truth dataset with a class imbalance. For this reason, we emphasize the balanced accuracy metric in our evaluation. In terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), these metrics take their standard forms:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\text{Sensitivity} = \frac{TP}{TP + FN}$$

$$\text{Specificity} = \frac{TN}{TN + FP}$$

$$\text{Balanced Accuracy} = \frac{\text{Sensitivity} + \text{Specificity}}{2}$$

We further analyzed whether the balanced accuracy of the individual in silico tools could be improved by leveraging either a consensus or a voting method derived from all four tools. Additionally, a total of 12,544 stability predictions for mutations spanning 100 kinases were gathered using the tertiary structure-based tools CUPSAT, 13 SDM, 14 mCSM, 15 and DynaMut. 16 We tested the accuracy, sensitivity, and specificity of these tertiary tools against 119 kinase mutations (99 cancerous and 20 inert) with known activity. Influenced by the variability in protein structures reported in the Protein Data Bank (PDB), we used protein structures from two different PDB entries for each of the 119 kinase mutations. Concordance (count of pairs with the same result/total count of paired results) for the paired results reported by each tool was then calculated to understand the behavior of each tool. Kinome trees depicting our analyses were built using KinHub software as described previously. 17
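As a concrete illustration, the metrics above can be computed from binary calls (1 = deleterious/cancerous, 0 = neutral/inert) as sketched below; this is a generic implementation of the standard definitions, not the authors' evaluation code.

```python
# Standard classification metrics over binary labels and predictions.
def confusion_counts(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def balanced_accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    sensitivity = tp / (tp + fn)   # fraction of cancerous mutations called correctly
    specificity = tn / (tn + fp)   # fraction of inert mutations called correctly
    return (sensitivity + specificity) / 2

print(balanced_accuracy([1, 1, 0, 0], [1, 0, 0, 1]))  # -> 0.5
```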
3 | RESULTS AND DISCUSSION
Of the four primary structure tools, FATHMM could predict the cancerous nature of 38,483 of the 42,165 total mutations; for the rest, wild-type inconsistency was the reason for the lack of a prediction. The proportion of mutations predicted to be deleterious was highest for SIFT at 61.8%, with PolyPhen-2 following closely at 55.6%. On the contrary, FATHMM predicted the least at 39.4%, while PredictSNP predicted marginally more at 44.9% (Figure 1A). Using a dataset of 1036 cancer-associated somatic mutations in kinases, Torkamani et al. reported nearly half (49.42%) of the mutations to be drivers. 18 Similarly, the in silico tools used in the present study also predicted approximately 40-60% of mutations to be pathogenic, albeit on a much larger dataset, further corroborating that less-frequent mutations can be drivers too. 19 Previously, annotation of mutations as drivers or passengers (neutral) based on a domain mutational landscape approach proved superior to the traditional gene landscape approach in colorectal and breast cancers. 20 Here, we find that a higher percentage of mutations in the kinase domain (KD) (Figure 1B) than in the non-kinase domain (NKD) (Figure 1C) were predicted to be deleterious unanimously by all four primary structure tools (Figure S1). This observation also confirms a previous report wherein 66.67% of driver mutations belonged to the catalytic domain. 18 Further analysis of sub-regions within the KD revealed a higher percentage of deleterious/cancerous mutations in the DFG motif (Figure 1D), followed by the p-loop (Figure 1E) and the a-C helix (Figure 1F). Notably, mutations occurring in the p-loop were previously shown to exhibit high selection pressure. 7 These results thus indicate a correlation between pathogenicity and the functional modularity of kinases.
The proportion of predicted deleterious/cancerous mutations relative to the total mutations per kinase varied among the kinases (Figure 1G). Among the 248 kinases, more than half of the mutations were predicted to be deleterious in 60 (24.1%), 181 (72.9%), and 235 (94.7%) individual kinases by PredictSNP, PolyPhen-2, and SIFT, respectively (Figure 1G). FATHMM predicted more than half of the mutations as cancerous in 53 out of 223 (23.7%) individual kinases (Figure 1G). Domain-specific analysis revealed a significantly higher proportion of the KD mutations to be deleterious/cancerous (Figure 1H) than the NKD mutations (Figure 1I). If the mutations at a residue involved more than one amino acid substitution, we considered the residue a hotspot (HS) and the mutations at that residue as HS mutations. A total of 127 kinases that harbored HS mutations were further analyzed; HS mutations accounted for 6.6% (1957 mutations) of the total number (29,415) of mutations within these kinases. The percentages of predicted deleterious/cancerous mutations were higher for HS than for non-hotspot (NHS) mutations (Figure S2). The distribution of HS mutations among the 127 kinases varied both in number and in the percentage of predicted pathogenicity (Figure S2). Domain-specific analysis revealed a higher percentage of HS mutations in the kinase domain (14.4%) than in the non-kinase domain (5.5%). The percentages of deleterious mutations were predicted to be higher among HS than among NHS mutations in both the KD and the NKD (Figure S3). The distribution of HS mutations between the KD and the NKD varied both in number and in the percentage of predicted pathogenicity (Figure S3).
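The hotspot definition used here is simple to operationalize; a sketch is given below, where the (position, substitution) input format is an illustrative assumption rather than the paper's actual data layout.

```python
# Sketch of the hotspot definition above: a residue is a hotspot (HS) if more
# than one distinct amino acid substitution is observed at that position.
from collections import defaultdict

def hotspot_residues(mutations):
    """mutations: iterable of (residue_position, substitution) pairs."""
    subs_at = defaultdict(set)
    for pos, sub in mutations:
        subs_at[pos].add(sub)
    return {pos for pos, subs in subs_at.items() if len(subs) > 1}

muts = [(858, "L858R"), (858, "L858M"), (790, "T790M")]
print(hotspot_residues(muts))  # {858} -> residue 858 is a hotspot
```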
Analysis of the agreement between all four primary-structure-based tools revealed that about one-fifth of the mutations were predicted unanimously to be deleterious/cancerous (7435 mutations) and about one-fifth to be neutral/passenger (8894 mutations). Importantly, the agreement rate was higher among PolyPhen-2, SIFT and PredictSNP than with FATHMM (Figure 2 and Figure S4). A higher percentage of mutations with 100% consensus was seen for tyrosine kinases belonging to the RTK, NRTK, and TKL families (Figure S5). However, significant variability was observed between individual members within each kinase family (Figure S5). Among individual kinases, the highest percentages of mutations with 100% consensus were observed in GSK3A (48.5%), EGFR (47.1%), and JNK1 (46%) (Figure S5). Further, to assess the accuracy of the individual primary tools, we analyzed a panel of 99 oncogenic mutants belonging to eight kinases, which were previously shown to transform Ba/F3 cells to cytokine independence, as well as 42 inactive mutants/variants (Figure S6). The measured sensitivity was 90.4% for FATHMM, 86.9% for PolyPhen-2, 75.7% for SIFT, and 63.6% for PredictSNP (Table S1). FATHMM exhibited by far the lowest specificity at 11.9%, compared to 45.2%, 52.4%, and 59.5% for PolyPhen-2, SIFT, and PredictSNP, respectively (Table S1). Based on the COSMIC dataset, PolyPhen-2 (79%) was previously shown to have higher sensitivity than SIFT (70%), but SIFT (82%) showed higher specificity than PolyPhen-2 (75%). 21 However, based on the COBR dataset, SIFT (73%) showed higher sensitivity than PolyPhen-2 (63%), and PolyPhen-2 (66%) showed higher specificity than SIFT (55%). Therefore, the sensitivity and specificity of prediction software varied depending on the dataset and the gene under consideration. In another example, PolyPhen-2 displayed higher sensitivity and FATHMM higher specificity with both the VariBench and SwissVar benchmarking datasets. 11 Interestingly, SIFT showed maximum sensitivity for mutations in the BRCA1 and MSH2 genes, PolyPhen-2 for the MSH2 and TP53 genes, and FATHMM for the MLH1 and TP53 genes. 11 On the contrary, PolyPhen-2 displayed maximum specificity for mutations in the MLH1 and TP53 genes, while FATHMM did so for the BRCA1 and MSH2 genes. 11 Given the imbalance in sample sizes between the active and inactive kinases, we calculated the balanced accuracy as 66.1%, 64.1%, 61.6%, and 51.1% for PolyPhen-2, SIFT, PredictSNP, and FATHMM, respectively. We further tested consensus and voting methods over all four tools (Table S2) in the hope of achieving higher prediction accuracy, but neither method significantly improved the balanced accuracy over PolyPhen-2. Considering the limited amount of experimentally confirmed kinase mutation data and the implied complexity of performing a consensus or voting approach, PolyPhen-2 seems to be the best choice, followed by PredictSNP and SIFT. FATHMM had the lowest balanced accuracy of all four tools. Overall, our observations with the kinome dataset are in line with a previous study that ranked PolyPhen-2 among the best prediction tools, SIFT among the medium, and FATHMM among the low-performing tools. 22
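A simple majority-voting scheme of the kind tested here can be sketched as below; the threshold values and the 0/1 encoding of tool outputs are illustrative assumptions, as the paper does not specify the exact voting rule.

```python
# Hypothetical voting rule across the four tools: a mutation is called
# deleterious if at least `threshold` tools agree; threshold = 4 corresponds
# to full consensus.
def vote(predictions: list[int], threshold: int = 3) -> int:
    return 1 if sum(predictions) >= threshold else 0

# e.g. PolyPhen-2, SIFT, PredictSNP, FATHMM calls for one mutation
print(vote([1, 1, 0, 1], threshold=3))  # -> 1 (called deleterious)
print(vote([1, 0, 0, 1], threshold=4))  # -> 0 under strict consensus
```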
We additionally used four tertiary structure-based tools to predict the stabilizing/destabilizing effects of kinase mutants. A total of 12,544 predictions for about 6626 mutations spanning the kinase domains of 100 kinases for which (co-)crystal structures are available were considered for this analysis. The percentage of mutations predicted as destabilizing varied widely between the four tertiary tools: CUPSAT (67.1%), mCSM (78.2%), SDM (80.9%), and DynaMut (32.2%). We identified several limitations in the utility of tertiary structure-based tools: (1) several mutations did not map to a solved crystal structure; (2) structural differences between multiple PDB entries can result in discordant pathogenicity predictions, and currently there is no guiding principle for choosing a particular PDB entry for a specific protein; (3) several instances were identified where different PDB entries for the same gene mutation gave discordant results; and (4) a prediction of "destabilizing" by the tertiary tools may not necessarily mean deleterious/cancerous. In the test for concordance, mCSM had the highest concordance rate (91.6%) between the two PDB structures for a given protein, followed by SDM (76.5%), CUPSAT (71.4%), and DynaMut (57.1%). For the subsequent assessment of accuracy, we tested mCSM alone, given the lower concordance of the other three tertiary tools. The sensitivity of mCSM was high at 85.8%, while its specificity was low at 20%. Using a dataset of 384 mutations within 42 kinases, the web server Kinact was shown to perform better than mCSM. 6 However, Kinact again relies on the tertiary structure of the protein, and many kinases still lack a solved structure. 6 A comparative study of 989 mutations concluded that the accuracy of prediction software varied considerably and suggested that combinations of methods might improve prediction performance. 23 Therefore, we combined the best-performing primary-structure-based tool, PolyPhen-2, with mCSM and observed that the sensitivity improved to 97.9% and the specificity to 54.8%.
Taken together, our study indicates that the likelihood of a kinase mutant being cancerous increases if it is located in the kinase domain (vs. the non-kinase domain) and at a hotspot residue (vs. a non-hotspot residue). Among all the in silico tools, PolyPhen-2 provides the best balance of accuracy, sensitivity, and specificity in identifying deleterious/cancerous mutations. Combining mCSM with PolyPhen-2 shows promising results in our small testing cohort, but further studies are needed to confirm the validity of this combination. The large compendium of kinase mutations with predicted pathogenicity collected by this study may be used as a training dataset to validate future in silico prediction tools. Additionally, we hope the above conclusions may be of help to clinicians who identify rare kinase mutants in cancer patients.
FIGURE 1
Predicted pathogenicity of kinase mutants by the four primary structure-based tools. (A) Pathogenicity predictions of kinase mutants using primary structure-based tools. (B-F) Region/sub-region-specific analysis: kinase domain (B), non-kinase domain (C), DFG motif (D), p-loop (E) and alpha-C helix (F). The percentage of mutations predicted to be deleterious/cancerous in all kinases is shown in red, while the percentage of predicted neutral mutations is shown in green. The distribution of the total number of mutations (bubble size) as well as the percentage of deleterious/cancerous mutations (bubble color) for the complete kinase (G), the kinase domain (H), and the non-kinase domain (I) are shown. Bubble size represents the number of mutations studied for that particular kinase. Large: >250 mutations/kinase; intermediate: 101-250 mutations/kinase; small: 50-100 mutations/kinase; very small: <50 mutations/kinase. Bubble color represents the percentage of deleterious mutations for that particular kinase. Red: 75.1-100% deleterious mutations; yellow: 50.1-75%; green: 25.1-50%; blue: ≤25%. PS: PredictSNP; PP2: PolyPhen-2; FAT: FATHMM.
FIGURE 2
Agreement rates between individual primary tools. Heatmap showing the agreement of predictions made on the 42,165 point mutations belonging to 248 kinases by the individual primary structure-based tools. | 2023-07-07T22:15:49.602Z | 2023-07-06T00:00:00.000 | {
"year": 2023,
"sha1": "fd05cfd9c9143a79cdb1479d880b427eb76c52ac",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.6324",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "839c701eceae184b2df23e58336797823aa82d5e",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236266969 | pes2o/s2orc | v3-fos-license | Abnormal Behavior Monitoring By Using RNN For 5G Networks In Smart Cities.
I. ABSTRACT The use of mmWave (millimeter wave) bands for next-generation mobile and wireless communication systems is a promising alternative with the capability of supporting continuous data rates of tens of gigabits per second (Gbps). Although the advantages of using a large available spectrum are clear, there are challenges related to the propagation characteristics of the higher frequency bands that need to be addressed. The existing system uses an adaptive transmission scheme that can reduce the outage probability of the secrecy capacity, primarily when the mean signal-to-noise ratio of the eavesdropper is comparatively lower than the legitimate receiver's mean signal-to-noise ratio. As more devices are connected to the network, security threats are also becoming a real concern. To ensure the normal operation of a smart city, monitoring of abnormal behaviors becomes essential. A smart city requires a reliable guarantee mechanism for guaranteeing QoS. The network's QoS, or quality of service, can be affected by various abnormalities in the network, such as abnormal flows, link faults and DDoS attacks, which cause packet loss and interruption of service. Also, as more IoT devices are connected to the network, security threats become a major issue, since more personal information becomes vulnerable. In the proposed system, a resilient (spatial) correlation design is evaluated; this kind of spatial correlation enables the physical layer to perform an iterative search N times until communication is established without blockage by objects. To reduce the disconnection of data transfer due to line-of-sight issues, an adaptive resilient correlation scheme is newly proposed here.
II. INTRODUCTION
5G is the fifth generation of mobile networks. After the 1G, 2G, 3G, and 4G networks, 5G is the new global standard. 5G also enables a new kind of network that is designed to connect virtually everything together, including objects, devices and machines. [1] 5G wireless technology is envisioned to deliver multi-Gbps data speeds, very low latency, greater reliability, massive network capacity, increased availability, and a better experience for a larger number of users. Higher performance and improved efficiency enable new user experiences and connect new industries. [2] In a cellular architecture, the land territory is partitioned into cells, each with its own radio service. In AMPS the cell area is large, while in digital services the area is comparatively small. Cells are ordinarily hexagonal in shape. Every cell utilizes a frequency range that is not used by its adjacent cells; however, frequencies may be reused in non-adjacent cells. [3] In the cell-less design, a mobile terminal can choose to access one or more base stations or access points through multiple downlinks and uplinks, taking into account the status and demands of the wireless channel, or choose not to access any base station when the mobile terminal is inactive. That is, a mobile terminal does not associate with any particular base station before transmitting data. In this case, base stations do not need to keep a list of associated mobile terminals; instead, the SDN controller decides which base station or stations perform the data transmission for the mobile terminal through the control link. [4] As the number of breaches continues to increase, the security of 5G has become more important than ever. IoT devices pose a huge threat to a network. The use cases of 5G, such as healthcare devices, autonomous driving and smart homes, make more personal data accessible to attackers than ever. 5G networks should be architected to evolve with increasing security needs. 5G requires end-to-end security that uses its software-defined architecture to automatically identify and mitigate threats. [5] To get the best performance out of current wireless technologies like Wi-Fi, Bluetooth, Zigbee, 3G, 4G, etc., IoT services make trade-offs in performance. Unlike these, the design of 5G networks will bring the required performance level for the massive Internet of Things. This will enable a truly universally connected world.
An RNN, or recurrent neural network, is a class of artificial neural networks in which the connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, recurrent neural networks can use their internal state (memory) to process input sequences of variable length. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.

A recurrent neural network is a generalization of the feed-forward neural network that has an internal memory. It is recurrent in nature because it performs the same function for every data input, while the output for the current input depends on the previous computation. After being produced, the output is copied and fed back into the recurrent network. To make a decision, it considers the current input and the output it has learned from the previous input (a minimal numerical sketch of this recurrence is given after this paragraph). [6] Dijkstra's shortest path algorithm is used to find the shortest path between two nodes. In this proposed project, Dijkstra's algorithm is used to establish the connection between the nodes and their corresponding shortest-distance nodes. If any blockage is found, a new route is selected and Hilbert correlation is performed; the new route is then selected as the final route between the given two nodes.
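To make the RNN recurrence described above concrete, a minimal numpy sketch is shown below: the same function is applied at every step, and the hidden state carries information from past inputs forward. The weights are random placeholders, not a trained model.

```python
# Toy RNN step: output depends on the current input and the fed-back hidden state.
import numpy as np

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(8, 4))   # input -> hidden
W_hh = rng.normal(size=(8, 8))   # hidden -> hidden (the recurrent memory)
W_hy = rng.normal(size=(1, 8))   # hidden -> output

def rnn_step(x, h):
    h_new = np.tanh(W_xh @ x + W_hh @ h)   # same function applied at every step
    y = W_hy @ h_new                       # output uses current x and past state h
    return y, h_new

h = np.zeros(8)
for x in rng.normal(size=(5, 4)):          # a toy sequence of 5 inputs
    y, h = rnn_step(x, h)
```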
A generalization of the feed-forward neural network which consists of an internal memory is the recurrent neural network. It performs the same function for each and every data input and hence is recurrent in nature whereas the current input's output is dependent on past one computation. The output after being produced is copied and directed back into the recurrent network. It considers the current input and the output that has been learned from the last input, for making a decision. [6] For finding the shortest path for establishing a connection between two nodes, Dijkstra's Shortest Path Algorithm is used. Dijkstra's shortest path algorithm is used in this proposed project for establishing the connection between the nodes and their corresponding shortest distance node. If any blockage is found, a new route is selected and Hilbert Correlation is performed. Then the new route is selected as the final route between the given two nodes. Spatial autocorrelation's measure define to what degree values at spatial locations, are similar to one another. Hence, two things are needed by us: locations and observations. A smart city is a city that utilizes innovation to offer services and take care of city issues. A smart city does things like improve transportation and availability, improve social administrations, advance manageability, and give its residents a voice. [7] Smart cities utilize a blend of the Internet of things or IoT gadgets, solutions regarding software, and user interfaces (UI), and networks of communication. In any case, they depend above all else on the IoT. The IoT is an organization of connected gadgets -like vehicles, sensors, or appliances of home -that can communicate and exchange information. Information gathered and conveyed by the IoT sensors and gadgets is put away in the cloud or on servers. [8] Smart cities are the development patterns of future urban areas, which include numerous parts of everyday life in urban communities, including e-business, intelligent systems of transportation, telemedicine, management of metropolis, security surveillance, management of logistics, community services, social networks, etc. To prepare for services mentioned above, smart cities are utilizing different networks and technologies of wireless communication, which includes ZigBee, Bluetooth, RFID wireless innovations, remote cell organizations, remote neighborhood (WLANs), and networks for radio broadcast, networks of wireless sensors and body area networks, with numerous more. The given technologies related to wireless communication alongside networks of fiber communication and networks of cable, structure the pervasive smart city's networks.
A smart city's essential objective is to create an environment that provides a high quality of life for its residents while also generating overall economic growth. Accordingly, a major benefit of smart cities is their capacity to deliver more services to citizens with less infrastructure and cost. [9] Although carriers for the transmission of smart city services have already been found, a smart city also needs a reliable guarantee mechanism to guarantee QoS. Network QoS is affected by various abnormalities in the network. Abnormality detection, also called outlier detection, is the identification of unexpected events, observations, or items that differ significantly from the norm. [10]

III. LITERATURE SURVEY

Tao Han et al. [4] proposed converged cell-less communication networks for 5G smart cities, bearing in mind the deployment of ultra-dense 5G wireless networks. SDN controllers are configured for the management of traffic scheduling and resource allocation. The simulation results show that the coverage probability is improved in the converged cell-less communication network, and the energy saving at both the mobile terminals and the BSs is enhanced.
Chao Li et al. [10] present a technology for the monitoring of abnormal behaviors in 5G networks based on a recurrent neural network and the Spearman correlation coefficient. A smart city requires a reliable guarantee mechanism for guaranteeing QoS and thus ensuring the smart city's stable operation. The simulation shows the parts of the network where the true value differs from the predicted value, and those are the parts where some abnormality is present.
Jose Santos et al. [11] present an anomaly-detection solution for smart-city applications that focuses on low-power fog-computing solutions and is evaluated in the City of Things testbed in Antwerp. The results show that both BIRCH clustering and RC outlier detection can run on fog resources close to the IoT sensors, so that timely alerts are sent when something unusual is detected.
Christophe Croux et al. [12] study the robustness of non-parametric correlation estimators such as the Kendall and Spearman correlations by means of their influence functions and gross-error sensitivities. At the normal distribution the Pearson correlation estimator has the highest efficiency, but the statistical efficiency of the Kendall and Spearman estimators always remains above 70 percent.
IV. ABNORMALITY DETECTION DUE TO LINE OF SIGHT BLOCKAGE
In the proposed system, a resilient (spatial) correlation design is evaluated: the spatial correlation enables the physical receiver to search iteratively, up to N times, until communication is established without any blockage by objects. To reduce disconnection of data transfer due to line-of-sight issues, an adaptive resilient correlation scheme is newly proposed here.
Dijkstra's shortest-path search algorithm is used here to find the shortest-distance nodes. If any blockage is present on the route between two nodes, a new route is selected; Hilbert correlation is performed on the new route, and this new route is then made the final route.
The simulations are organized into three modules:
Module 1: Design of wireless sensor networks
This module covers the random creation of nodes in free space and the allocation of energy and distance costs to the nodes. Finding the cluster head under different environments is the next task. Once the nodes are created and aligned, the next job is to establish the connections in the network: after clustering, the established nodes connect with their shortest-distance nodes and attempt to detect any blockage in the path. A minimal sketch of this module is given below.
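The following Python sketch illustrates this module; the field size, node count, energy range and the highest-residual-energy rule for picking the cluster head are illustrative assumptions rather than the paper's MATLAB settings:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 30                                     # number of sensor nodes

# Random deployment in a 100 m x 100 m free space.
xy = rng.uniform(0.0, 100.0, size=(N, 2))

# Random initial energy per node (arbitrary units).
energy = rng.uniform(0.5, 2.0, size=N)

# Pairwise Euclidean distances used as the link (distance) cost.
diff = xy[:, None, :] - xy[None, :, :]
dist_cost = np.sqrt((diff ** 2).sum(axis=-1))

# Simple cluster-head choice: the node with the highest residual energy.
cluster_head = int(np.argmax(energy))

# Each node's shortest-distance neighbour (exclude the zero self-distance).
np.fill_diagonal(dist_cost, np.inf)
nearest = dist_cost.argmin(axis=1)
print(cluster_head, nearest[:5])
```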
V. RESULTS AND ANALYSIS
The simulations were performed in MATLAB R2017b.
Figure 1. Process of Node Connectivity
The nodes are deployed in free space. Figure 1 shows the process of node connectivity after the creation of the nodes and the assignment of distance, cost and energy to them. The base network is created when all the nodes in a given space are connected to some other nodes at random. The figure also shows the distance-to-origin ordering, in which the node nearest the origin is labelled 1, the next nearest is labelled 2, and so on. The established nodes then connect with their shortest-distance nodes, and the updated network after a complete iteration is shown; Dijkstra's shortest-path algorithm is used to connect each node to its shortest-distance node. Figure 2 shows the number of operational network nodes per transmission. Some nodes may have the energy required for data transfer and some may not; with more iterations the number of operational nodes decreases, so more and more nodes become non-operational.
Figure 3. Energy Consumed per Transmission
Each node is assigned a random energy when it is created, before any connection is established. Figure 3 shows the network energy consumed per transmission; with each transmission a random amount of energy is consumed.
Figure 4. Enable Data Transfer
After distance and random energy have been assigned to the nodes, the nodes that are ready transfer data. Some nodes may have the energy required for data transfer and some may not; Figure 4 shows the enabling of data transfer based on node energy. Figure 5 shows the Hilbert correlation overlap plot. The correlation is performed between the input signal and the signal reflected from a node; it enables the receiver to search iteratively, up to N times, until communication is established without any blockage by objects. The input signal is shown by the green curve and the reflected signal by the red curve. Figure 6 shows routing with the blockage removed. Whenever a blockage is found on the route between two nodes, a new route free of any line-of-sight (LOS) blockage is selected by the nodes; the Hilbert correlation is calculated for the new route, and if it is found to be good, the new route is made final.
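The paper does not give a formula for its Hilbert correlation; one plausible reading, assumed in the sketch below, is the correlation between the Hilbert-transform envelopes of the input and reflected signals. The signal shapes, delay and noise level are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_correlation(tx, rx):
    """Correlate the Hilbert-transform envelopes of input and reflected signals."""
    env_tx = np.abs(hilbert(tx))     # instantaneous amplitude of the input
    env_rx = np.abs(hilbert(rx))     # instantaneous amplitude of the reflection
    return np.corrcoef(env_tx, env_rx)[0, 1]

t = np.linspace(0.0, 1.0, 2000)
tx = np.sin(2 * np.pi * 40 * t) * np.exp(-2 * t)   # input signal (green curve)
rx_clear = 0.6 * np.roll(tx, 25)                   # delayed, attenuated echo
rx_blocked = 0.05 * np.random.default_rng(1).standard_normal(t.size)

print(hilbert_correlation(tx, rx_clear))    # high -> route accepted
print(hilbert_correlation(tx, rx_blocked))  # low  -> blockage, reroute
```

A high envelope correlation suggests a clear line of sight, while a low value flags a blockage and triggers the rerouting described above.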
VI. CONCLUSION
As the demand for high data rates grows along with the number of subscribers, present technologies such as 3G and 4G are unable to support it, creating the need to develop the next generation of mobile networks — the 5G network. As more devices connect to the network, security threats become a major issue because more personal data becomes vulnerable. Abnormality detection is necessary to ensure the normal operation of a smart city. The proposed system solves the line-of-sight problem through spatial correlation: it uses Dijkstra's algorithm for the shortest-path search, and an adjustable routing model through spatial correlation together with abnormality detection through an RNN is developed. Because the line-of-sight problem is evaluated and the propagation of data packets is rerouted, the proposed technique reduces disconnection of data transfer, and the results show that the proposed rerouting incurs less delay.
Availability of data and material-Not Applicable
Code Availability-Not Applicable | 2021-07-26T00:06:35.350Z | 2021-06-03T00:00:00.000 | {
"year": 2021,
"sha1": "8bd9ae9383d45f1511c2d2584a3b33e796c0f0cb",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-498283/v1.pdf?c=1631883518000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e061e44b5e418b8f55cbdf12ee68c039325fa75e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
214519466 | pes2o/s2orc | v3-fos-license | New research findings on non-proportional low cycle fatigue
One of the challenges in multiaxial fatigue damage prediction is non-proportional loading. Relevant studies have shown that such multiaxial loadings cause significant additional hardening and a reduction in durability due to non-proportionality. Fatigue life predictions under non-proportional loadings are based on an equivalent non-proportional strain range that incorporates a material constant related to the additional hardening and a non-proportionality factor. In this paper, an analysis of the non-proportionality factor is carried out for three multiaxial loadings forming a square in γ/√3 – ε coordinates. One observation revealed by this analysis is the sensitivity of the non-proportionality factor to a variable shear strain rate.
Introduction
The stress or strain state at a material point is generally multiaxial. This multiaxiality is influenced by the external load, the geometry, residual stresses, etc. The problem becomes more complicated when the stress/strain components vary over time, and even more so when they vary independently or at different frequencies. The way the components of the stress/strain state vary over time introduces effects that complicate fatigue damage prediction. Multiaxial loadings are non-proportional if the principal directions and the corresponding principal stresses/strains change over a loading cycle. Such loadings are very common in the operation of mechanical components, can be complex in their time variation, and their effect on fatigue life is difficult to quantify. Nevertheless, considerable efforts have been made to develop methods, algorithms and criteria for analysing complex stress/strain histories and assessing non-proportional effects on fatigue damage. Based on an experimental study of low-cycle fatigue tests under combined axial and torsional strains with different phase angles, Kanazawa, Brown and Miller [1][2][3] highlighted a number of significant points, which can be summarized as follows: a) during out-of-phase biaxial tests, fatigue cracks initiate on the plane of maximum shear strain; b) fatigue life is governed by the shear strain range and the amplitude of the normal strain on the plane of maximum shear strain; c) the out-of-phase loading with a phase angle of 90° and strain ratio λ = γa/εa = 1.5 (where γa is the shear strain amplitude and εa is the normal strain amplitude) gave the lowest lifetime. Another important observation of their studies is the additional hardening phenomenon due to the rotation of the principal directions during a cycle; by quantifying this phenomenon through a parameter, it was possible to correlate the out-of-phase data on a single stress–strain curve. Very important contributions were made by Socie and his collaborators, who experimentally identified two cracking modes of materials (Mode II, shear cracking, and Mode I, tensile cracking) that need to be considered in lifetime predictions of components subjected to multiaxial fatigue [4,5]. Itoh et al. [6,7] explain the rotation of the principal directions through the degree of non-proportionality, which is found to be maximum for a load that forms a square in γ/√3 – ε coordinates. They also define an equivalent non-proportional strain range that correlates the non-proportional experimental data; this strain range includes a material constant describing the additional cyclic hardening under 90° out-of-phase loading and a factor, obtained directly from the strain history, that expresses the severity of the non-proportional loading. Non-proportional multiaxial loadings initiate a larger number of microcracks than proportional loadings; these microcracks propagate in different directions because of the rotation of the principal directions and the activation of multiple slip systems, and their large number is responsible for reducing the fatigue life [8]. Shamsaei et al. [9,10] found that the fatigue life of 1050 QT steel, which shows no additional non-proportional hardening, is more sensitive to non-proportional multiaxial loadings than that of 304L stainless steel, which shows significant non-proportional cyclic hardening.
This observation supports the idea that additional non-proportional hardening is not the only effect responsible for reducing fatigue life. An alternative method for evaluating non-proportional loading effects was proposed by Freitas et al. [11]: they evaluate the effective shear stress/strain amplitude throughout a complex loading based on the geometric characteristics of the minimum ellipse enclosing the multiaxial load path. Recent studies suggest a link between the microstructure of the material and the non-proportional loading effects: non-proportional hardening may be related to stacking faults and dislocation structures, which take different forms under different loading paths [12]. A new non-proportionality damage factor, based on the accumulative path length traversed within a half cycle, is proposed in [13] and [14]; this factor varies from zero for a proportional load to 1 for a semi-circular load. These studies focus on the overall correlation of non-proportional experimental data through an equivalent parameter that includes a factor corresponding to the additional cyclic hardening and a factor corresponding to the loading path.
Based on an analysis of low-cycle multiaxial fatigue data, a slight inconsistency was observed between experimental data and the correlations used for non-proportional loadings that describe a square in γ/√3 – ε coordinates. In most cases these loadings also give the lowest fatigue life — even lower than 90° out-of-phase non-proportional loadings.
An analysis of the non-proportionality factor for square loadings is carried out in this paper. Three non-proportional loadings that describe a square in γ/√3 – ε coordinates are proposed, and for these loadings the non-proportionality factor — and implicitly its effect on fatigue damage — is analysed.
Material and methods
The analysis performed in this paper starts from three sets of low-cycle multiaxial fatigue data in which square non-proportional loadings were considered among others: tests conducted by Socie [5], Itoh et al. [6] and Nobah et al. [15]. The experimental analysis by Itoh et al. reveals a maximum degree of non-proportionality in the case of square loading, which gave the lowest fatigue life; continuous loading paths similar to the square loading are more damaging than discontinuous loading paths.
In the tests of Nobah et al., the same axial and shear strain amplitudes used for the 90° out-of-phase and square loadings gave lower fatigue lives in the case of the square loading, and the same trend holds for Socie's tests. This experimental evidence indicates the need for a detailed analysis of non-proportional square loadings.
On the other hand, the current approach uses an equivalent non-proportional strain range to correlate the non-proportional fatigue data with the proportional data. This parameter is defined by the following relationship:

ΔεNP = (1 + α·fNP)·ΔεI (1)

where ΔεI is the principal strain range and α is a parameter related to the additional hardening under non-proportional loading, defined as the ratio of the stress amplitude in 90° out-of-phase loading to that in in-phase loading. The non-proportionality factor fNP expresses the severity of the non-proportional loading and is obtained directly from the strain history. In practice, the correlation of non-proportional and proportional fatigue data depends on the product α·fNP. Since α is a material constant determined experimentally under the same conditions for each tested material, attention should be directed to the non-proportionality factor. Also, considering that some materials do not exhibit additional non-proportional hardening yet are sensitive to non-proportional fatigue loadings, the analysis in this paper focuses on the non-proportionality factor for square loadings. Therefore, three non-proportional loadings that describe a square in γ/√3 – ε coordinates are proposed; they are shown in Figs. 1–3. Although the three loadings have different time variations of the axial and shear strains, in γ/√3 – ε coordinates they describe a square with the same limits (Fig. 4).

Fig. 4. The square loading described by the three non-proportional loadings

Two techniques have been used in this paper to analyse the non-proportionality factor for the three loadings. On the one hand, the method developed by Itoh et al. [6], based on the rotation of the principal strain directions, was used to analyse the severity of each non-proportional loading. The second method, developed by Mei and Dong [13] and based on the accumulative Moment of Load Path (MLP) concept, was applied to analyse the dimensionless non-proportionality-induced damage factor gNP.
According to Itoh's approach, the severity of non-proportional low-cycle fatigue is calculated by measuring the rotation of the maximum principal strain direction and the strain amplitude after rotation (Fig. 5). This severity is expressed by a non-proportionality factor fNP, defined in integral form as

fNP = (1.57 / (T·εI,max)) · ∫₀ᵀ |sin ξ(t)|·εI(t) dt (2)

where εI(t) is the maximum absolute value of the principal strain at time t, given by equation (3), εI,max is the maximum value the principal strain reaches during the loading path, T is the period of a cycle and ξ(t) is the angle between εI,max and εI(t). The constant 1.57 is chosen to make fNP unity for 90° out-of-phase loading. The Mei and Dong approach is based on a normalized integral form of the load-path non-proportionality-induced fatigue damage (Fig. 6), given by the relationship

gNP = DNP / Dmax

where DNP is the total load-path non-proportionality-induced fatigue damage and Dmax is the maximum possible non-proportional fatigue damage, induced by a semi-circular load path with radius R.
The term √βε is a fatigue equivalency parameter between pure cyclic tensile strain and pure shear strain; for this analysis βε = 1/3. Two points are clear from the definitions of the two non-proportionality factors. The dimensionless factor gNP is calculated with respect to the ε – γ√βε strain plane, while Itoh et al.'s factor fNP is defined with respect to a polar-coordinate-based maximum principal strain as a function of its direction (angle). Also, gNP is defined with respect to a reference semi-circular load path, which corresponds to the conditions yielding the maximum possible damage among all paths between any two positions forming one half cycle in the ε – γ√βε plane, whereas Itoh et al.'s factor is calculated with respect to the maximum principal strain within a loading event.
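As a numerical illustration of equation (2), the sketch below evaluates a simplified fNP for a 90° out-of-phase (circular) path. Representing the strain state as the vector (ε, γ/√3), taking εI(t) as its magnitude and ξ(t) as its angle to the direction of the maximum is an approximation of the principal-strain construction, adopted here as an assumption for brevity:

```python
import numpy as np

def f_np(eps, gam, k=1.57):
    """Simplified Itoh-type non-proportionality factor, after eq. (2).

    The strain state is represented as s(t) = (eps, gam/sqrt(3)); eps_I(t)
    is taken as |s(t)| and xi(t) as the angle of s(t) to the direction at
    which |s| is maximal (an approximation of the principal-strain scheme).
    """
    s = np.column_stack([eps, gam / np.sqrt(3.0)])
    eps_i = np.linalg.norm(s, axis=1)
    i_max = int(np.argmax(eps_i))
    u = s[i_max] / eps_i[i_max]                       # direction of eps_I,max
    cos_xi = np.clip((s @ u) / np.maximum(eps_i, 1e-12), -1.0, 1.0)
    sin_xi = np.sqrt(1.0 - cos_xi ** 2)               # |sin xi(t)|
    # Mean over one uniformly sampled cycle equals (1/T) * integral dt.
    return k * np.mean(sin_xi * eps_i) / eps_i[i_max]

t = np.linspace(0.0, 1.0, 4000, endpoint=False)       # one normalized cycle
a = 0.005
eps = a * np.cos(2.0 * np.pi * t)                     # axial strain
gam = np.sqrt(3.0) * a * np.sin(2.0 * np.pi * t)      # 90 deg out-of-phase shear
print(f_np(eps, gam))                                 # ~1.0, per the 1.57 calibration
```

The printed value is close to 1, consistent with the 1.57 calibration for 90° out-of-phase loading noted above.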
Results and discussion
For each loading path, the variation of the principal strain according to Itoh's method was determined, as shown in Figs. 7 and 8, from

εI(t) = ((1 − ν)/2)·ε(t) + (1/2)·√[(1 + ν)²·ε(t)² + γ(t)²] (4)

where ε is the applied axial strain, γ is the applied shear strain and ν is Poisson's ratio. Applying the Mei and Dong method mainly involves computing the r′·sin θ term and integrating it over the loading path; for the three analysed load paths, the variations of the r′·sin θ term during a loading cycle are shown in Fig. 9. Table 1 gives the values of the non-proportionality factor for the analysed loadings obtained with the two methods. The model proposed by Kanazawa et al. [2] was used in this paper to estimate the equivalent stress amplitude for a given equivalent strain amplitude for the three analysed loading paths. This model incorporates the additional non-proportional hardening into the cyclic behaviour and is defined as

Δσ̄/2 = K′·(1 + α·F)·(Δε̄p/2)^n′ (5)

where Δσ̄/2 is the equivalent stress amplitude, Δε̄p/2 is the equivalent plastic strain amplitude, K′ is the proportional (uniaxial) cyclic strength coefficient, n′ is the proportional (uniaxial) cyclic strength exponent, α is the additional non-proportional hardening and F is the non-proportionality factor. The model was applied to two materials analysed by Shamsaei et al. [10]: one that does not show additional non-proportional hardening (a medium-carbon 1050 steel in the quenched and tempered condition) and one that does (a 304L stainless steel). The material properties used are given in Table 2. Considering the materials loaded with the three non-proportional square loadings at an equivalent plastic strain amplitude of 0.00566, Figures 10 and 11 present the estimated equivalent stress amplitudes. A first observation, which supports Shamsaei's findings on 1050 QT steel, is the low sensitivity of the stress response to the loading path; in addition, the non-proportionality factor from the Mei and Dong method indicates an equivalent stress amplitude only 1.4% higher than the Itoh method. For 304L stainless steel, by contrast, the Itoh method estimates an equivalent stress amplitude 15% higher than the Mei and Dong method, and for Path I the difference reaches 24%.
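A worked illustration of equation (5) is given below. The cyclic properties K′ and n′ and the hardening constant α are stand-in values, not the Table 2 properties of the two steels; only the plastic strain amplitude (0.00566) is taken from the text:

```python
def equivalent_stress_amplitude(K_cyc, n_cyc, alpha, F, eps_p_amp):
    """Kanazawa-type model, eq. (5): stress amplitude with non-proportional hardening."""
    return K_cyc * (1.0 + alpha * F) * eps_p_amp ** n_cyc

# Illustrative (hypothetical) cyclic properties, not the Table 2 values.
K_cyc, n_cyc = 1250.0, 0.2   # MPa, cyclic strength coefficient / exponent
eps_p_amp = 0.00566          # equivalent plastic strain amplitude from the paper

for alpha in (0.0, 0.9):     # no additional hardening vs. strong hardening
    for F in (0.0, 1.0):     # proportional vs. fully non-proportional path
        s = equivalent_stress_amplitude(K_cyc, n_cyc, alpha, F, eps_p_amp)
        print(f"alpha={alpha:.1f}  F={F:.1f}  stress amplitude = {s:6.1f} MPa")
```

Sweeping α and F makes the earlier point explicit: the stress response depends on the product α·F, so a material with α ≈ 0 shows little sensitivity to the loading path.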
Although the three loadings form the same square in γ/√3 – ε coordinates, the law of variation of the axial and shear strains influences the severity of the non-proportional loading. Moreover, the severity of non-proportional loading proves to be more sensitive to variable shear strain rates.
Loading Path I presents two important features. First, holding the shear strain constant for a half-cycle while the axial strain is superimposed could affect ductility and, implicitly, low-cycle fatigue life. In a study conducted by Shenguyan Ji et al. [16], it is reported that ductility decreases gradually with increasing shear pre-strain, while the low-cycle fatigue life evolves in three stages: as the shear pre-strain increases, the fatigue life first decreases monotonically, then increases monotonically, and finally decreases again. Depending on the amplitude of the shear pre-strain, any of the three stages in the evolution of low-cycle fatigue life can be reached, determined by a combination of material ductility and microstructural transformations. A second characteristic affecting low-cycle fatigue life is the very high strain rate: the transition between the maximum and minimum axial and shear strains is almost instantaneous. A recent study on structural steel [17] has shown that the yield strength and the strain hardening exponent increase with increasing strain rate, and that increasing the strain amplitude decreases the strain-rate sensitivity.
Loading Path II is easier to achieve experimentally and is probably the one most often used in experimental programmes that generate non-proportional square loadings. It is characterized by constant strain rates of both the axial and shear strains.
Loading Path III is of interest through the sinusoidal variation of the shear strain, superimposed on an axial strain held constant for a half-cycle; the sinusoidal variation implies a variable shear strain rate.
Conclusion
Multiaxial fatigue loadings continuously generate non-proportional loads with significant effects on component durability. Current fatigue life prediction approaches are cycle-based, but the evolution of fatigue damage within a loading cycle is also important: time-dependent effects or stress relaxation may occur during a loading cycle and require attention. In this paper, three non-proportional square loadings are analysed for their severity of non-proportionality. Two methods were used to analyse the non-proportionality factor, and both prove sensitive to different variations of the axial and shear strains. The principal-strain-based method yields a higher non-proportionality factor than the non-proportionality-induced fatigue damage method. This study requires further analysis of the stress response and, respectively, experimental validation. | 2019-12-05T09:06:14.088Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "7ce630428b7a590c04803c15047e820211ea695a",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/49/matecconf_icmff1218_08003.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "057738450e2ed46f90234a836873d46a9b4c6109",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
260162246 | pes2o/s2orc | v3-fos-license | Multifocal Langerhans' Cell Histiocytosis in an Adult
Introduction: Multifocal Langerhans' cell histiocytosis is a rare condition that can affect multiple organs and manifest in various scenarios. While the condition is more commonly found in children, it can also occur in adults. Case Report: A 43-year-old female presented with refractory otorrhea, a rubbery mass in the left mid-cervical area, and an itchy eczematoid lesion in the left parietal area. The otic lesion was eventually resected, and histopathologic examination confirmed the diagnosis of Langerhans histiocytosis. Conclusions: Although rare in adults, Langerhans histiocytosis should be considered among the differential diagnoses for ear canal polyps. If diagnosed, medical treatment should be pursued.
Introduction
Since 1987, the term histiocytosis has been used to describe a group of diseases previously known as eosinophilic granuloma, Hand-Schüller-Christian disease, and Letterer-Siwe disease. These three conditions were first linked in 1953 by the American pathologist Louis Lichtenstein, who observed an abnormal accumulation of histiocytes in patients' tissues.
These histiocytes were characterized by abnormal cytoplasmic contents, which led to the name histiocytosis X, the X denoting the unknown etiology. Today these cells are known as Langerhans cells, and their abnormal accumulation and proliferation is the hallmark of the disease's pathogenesis. Collectively, these conditions are referred to as Langerhans cell histiocytosis (LCH), which represents a spectrum of clinical severity of a single major disorder.
Eosinophilic granuloma is the benign form of the disease and is characterized by single or multiple osteolytic lesions. Hand-Schüller-Christian disease is a chronic, multifaceted condition involving lesions of the skin and soft tissues, including the mucous membranes.
In about 10% of cases, the classic "Christian triad" of diabetes insipidus, bony lesions of the skull, and exophthalmos may present. Letterer-Siwe disease, on the other hand, is an acute and catastrophic illness marked by extensive skin eruptions, lung infiltration, and hepatosplenomegaly.
Histopathologically, all three conditions represent the benign, acute, and chronic forms of a single systemic disease process (1).
Although LCH occurs mainly in children, few reports describe auditory involvement in adults (2). In this case report, we present LCH as an educational example and a reminder to consider it in the differential diagnosis of auditory canal polyps.
Case Report
The patient is a 43-year-old white female who had been suffering from chronic left-ear otorrhea for the previous 20 months, despite various topical antibiotic treatments.
Additionally, she had developed a painless mass in her left neck six months earlier and had been experiencing a seborrheic rash on her left parietal region (Figures 1-3). On otoscopic examination, both ear canals were occluded with polypoid tissue, which also obscured the eardrums (Figure 2).
There was no tenderness on palpation of the mastoid areas. Audiometric studies revealed type B tympanograms on both sides, with moderate conductive hearing loss. A temporal CT scan showed left mastoid sclerosis and soft-tissue density in the mastoid, middle and external ear, as shown in Figure 4. During mastoidectomy, a hypertrophic mastoid membrane was observed, along with obvious destruction of the ossicles and the external auditory canal. Pathologic studies revealed chronic inflammation and diffuse monocyte infiltration with oval, cleaved nuclei and acidophilic cytoplasm reacting to CD1a and S100 markers, confirming the diagnosis of Langerhans histiocytosis (Figure 5). A neck biopsy also confirmed the diagnosis. On the basis of these findings, the patient was referred to the oncology ward for further treatment.
The medical oncology department initiated systemic treatment, a combination of prednisone and vinblastine. Following completion of the first session of chemotherapy, the patient returned to her home country.
Fortunately, the patient's response to treatment was excellent: noticeable improvement was observed within four weeks of starting induction therapy.
Over the next three months of induction therapy, the otologic manifestations continued to improve. At the end of induction therapy, examination revealed a normal tympanic membrane without any perforation or aural polyps.
Discussion
LCH predominantly affects children, with an average age of onset of three years, but it can also occur in adults, typically between 30 and 39 years of age (3). Temporal bone involvement is observed in approximately 14% to 61% of children with LCH, but only limited adult cases have been reported (2)(3)(4).
Otological manifestations of LCH include otorrhea, polyps of the outer canal, postauricular swelling, conductive hearing loss, and rarely facial paralysis and vertigo (5).
Ear involvement is bilateral in 30% of cases, while in 25% of patients, the only symptom is ear disease (6).
Physicians should consider LCH in patients with refractory otorrhea and radiological evidence of soft-tissue density in the middle and outer ear canal. A typical manifestation of LCH is a homogeneous soft-tissue lesion with sclerotic margins that enhances homogeneously after intravenous contrast injection (7).
Biopsy is the gold-standard test for diagnosis, revealing Langerhans cells along with infiltration of plasma cells, lymphocytes, and eosinophils on immunohistochemistry. In cases where LCH is suspected, frozen-section biopsy during surgery can aid the diagnosis and prevent the surgeon from resorting to radical surgery (8). The treatment of LCH depends largely on the pattern of the disease. Localized LCH is treated with surgery and steroid injections into the lesion, while immunosuppression is recommended for systemic LCH; chemotherapy with vinblastine and steroids is also used as a treatment option for systemic disease (9). Most patients with ear involvement have the systemic form of LCH, so surgical treatment is recommended for diagnostic purposes only. Surgery cannot completely eliminate the lesion, and complications such as facial nerve palsy, postoperative fistula, and deafness may follow; surgical interventions are therefore considered high-risk procedures in LCH patients with otological manifestations and are avoided when possible (10). If ear surgery is performed, close surveillance is crucial to detect disease recurrence early after initial treatment. Surveillance imaging with MRI and/or PET every 6 months is recommended, since up to 50% of patients may experience recurrence.
These imaging modalities allow detection of residual or new lesions, which can be treated promptly to prevent further complications. Regular follow-up visits with an oncologist and an otolaryngologist are also necessary to monitor the patient's overall health and response to treatment. Early detection of recurrent LCH is critical to prevent disease progression and reduce the risk of permanent damage to affected organs (11).
Conclusion
Although LCH usually occurs in children, it should be considered in any adult patient with refractory otorrhea or mastoiditis. If diagnosed, treatment should be medical. | 2023-07-27T05:09:52.965Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "6026142a8e0ec98324874080077c55b07d227b40",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6026142a8e0ec98324874080077c55b07d227b40",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
197775627 | pes2o/s2orc | v3-fos-license | Measuring and Analyzing the Impact of the Relationship between Poverty Phenomenon and Disparities in the Distribution of Income Iraq a Standard Study for the Period (2003-2016)
The purpose of this paper is to explore, measure and analyze the impact of unequal income distribution on poverty in Iraq. Poverty is a multidimensional concept with various antecedents and consequences. Its importance as a social issue is undoubted, and eradicating poverty is among the United Nations' Millennium Development Goals. The present study focuses on determining the possible link between inequality in the distribution of income in Iraq and its aftermath on the phenomenon of poverty. A detailed review of the literature was performed to establish the conceptual foundations of poverty and of disparities in the distribution of income in Iraq, supported by secondary data. The findings of the present study can be used by government officials, policy makers and organizations working for socioeconomic development through poverty reduction in Iraq; moreover, the study presents a model that less-developed countries like Iraq can use as a benchmark. The paper concludes with suggestions and policy guidelines to reduce the gap caused by disparities in the distribution of income in Iraq.
The present social, financial and economic issues of Iraq have been reported by various researchers working on development- and poverty-related areas (Series, 2012). Furthermore, Series (2012) proposed a model of interrelated social issues that cause poverty and inequality of income distribution in Iraq; these factors operate in a vicious circle. Please refer to Figure 1 for the proposed model of interrelated social issues in Iraq.
Figure No-1. Interrelated Social Issues in Iraq
Source: (Series, 2012)

According to the World Bank (2018), the SWIFT survey conducted in 2014 reported that the labour market was performing better and income levels were rising, but the most recent survey reported that incomes had fallen back to their 2012 level (figure source: World Bank, 2018). There are various tools for measuring poverty. Alkire and Foster (2007; 2011) proposed the Multidimensional Poverty Index (MPI), which measures poverty along multiple dimensions; please refer to Figure 3 for further details. The MPI measures poverty on health, education and living-standard parameters: health is divided into nutrition and child mortality; education into years of schooling and school attendance; and living standard is measured through cooking fuel, sanitation, water, electricity, flooring and assets (Alkire and Foster, 2007; 2011). These dimensions are examined and explored in order to determine the level of poverty or prosperity in any country. The Alkire and Foster model is presented in Figure 3; its dimensions are reported, cited and used with minor changes according to the scope and location of each study, to make the MPI more relevant to the context. According to Ismail et al. (2017), the MPI score of Iraq is on the higher side compared with other Arab countries; the MPI score is one of the most reliable and most widely used measures of poverty. Please refer to Figure 5 for the MPI score comparison of Iraq with other Arab countries. Farrington and Gilling (1998) described poverty holistically and argued that a more complex measurement is needed, since no single approach can comprehend, analyze and measure poverty; they proposed an approach rich in both quantitative and qualitative aspects. The poverty index is used in many countries to measure and report poverty; however, Watts (1968) raised reservations about it, arguing that it does not measure and analyze the differences in income level within the poor community (Farrington and Gilling, 1998).
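To make the Alkire-Foster calculation behind the MPI concrete, the sketch below computes MPI = H × A (incidence times intensity) for a toy deprivation matrix. The nested weights follow the standard scheme described above; the household data and the 1/3 poverty cutoff are illustrative assumptions:

```python
import numpy as np

# Indicator weights: health and education indicators 1/6 each,
# living-standard indicators 1/18 each (three equally weighted dimensions).
weights = np.array([1/6, 1/6, 1/6, 1/6] + [1/18] * 6)

# Rows = households, columns = indicators; 1 marks a deprivation.
# Columns: nutrition, child mortality, schooling, attendance,
# cooking fuel, sanitation, water, electricity, floor, assets.
deprivations = np.array([
    [1, 0, 1, 1, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1, 1, 1, 1, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
])

k = 1/3                                    # standard MPI poverty cutoff
score = deprivations @ weights             # weighted deprivation score per household
poor = score >= k
H = poor.mean()                            # incidence (headcount ratio)
A = score[poor].mean() if poor.any() else 0.0   # intensity among the poor
print(f"H = {H:.3f}, A = {A:.3f}, MPI = {H * A:.3f}")
```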
The rest of the paper is organized as follows: the next section discusses poverty in Iraq, its various dimensions reported in previous literature, and some of the barriers hindering poverty reduction; the following section discusses inequality and disparities in the distribution of income in Iraq and their impact on poverty; the next section offers recommendations and suggestions for policy makers, government officials and non-governmental organizations working for socioeconomic development through poverty reduction in Iraq; and the final section concludes the paper.
Poverty in Iraq
Iraq is a lower-middle-income country rich in oil reserves, yet it faces significant challenges in socioeconomic development owing to ongoing conflicts, heavy economic dependence on oil and oil-related industries, poor governance, weak private-sector development and a poor law-and-order situation (UN Iraq, 2014; UNDP, 2013). It is important to note that Iraq achieved the first Millennium Development Goal target between 1990 and 2015 (United Nations Development Programme, 2014); however, despite various government initiatives, the issue of poverty persists in Iraq and calls for more holistic remedial measures. According to UNDP (2013), a majority of the Iraqi population suffers from poverty, lack of employment opportunities, low participation of women in the workforce, inequality, and shortages of food and other basic necessities of life. Multiple government programmes aim at reducing poverty and inequity of income distribution, but these programmes are still under way and have yet to reap full benefits. The National Strategy for Poverty Reduction (NSPR) is a key programme undertaken by the Iraqi government in collaboration with the World Bank and the International Monetary Fund (IMF) for the identification and possible eradication of poverty. The government has also developed the National Development Plan (NDP) 2013-2017, with ambitious targets for reducing poverty and inequity, supported by the United Nations Development Assistance Framework (UNDAF).
According to the Oxford Poverty and Human Development Initiative (OPHI) (2015), 3.9 percent of the Iraqi population is classified as extremely poor, earning less than $1.25 per day, and 21.2 percent of the population earns less than $2 per day. Moreover, approximately 23 percent of the Iraqi population lives below the poverty line (Humanitarian Country Team, 2014), although Hasim reported a figure of 35 percent. There is also a divide within the population below the poverty line: the rate and intensity of poverty differ considerably between urban and rural locations (Hasim, 2014).
Various factors underlie the current poverty situation in Iraq. According to UN Iraq (2014), the present humanitarian, security and financial crises have increased poverty, unemployment and vulnerability. The World Bank has gathered quantitative evidence indicating that developmental efforts by the government and other stakeholders are mitigating the negative impact of the current security and financial crises. The World Bank (2014) study also indicated that the present poverty situation is partly a legacy issue: almost three decades of poor law and order, insecurity and violence have greatly hampered human capital development and the provision of basic health and education facilities, and economic development has been slow, resulting in a higher rate of poverty. Dependence on the oil sector and its related industries favours labour-intensive, low-skill industries; alternative policies that reduce dependence on the oil sector could also reduce poverty in Iraq (IAU, 2012).
Sassoon highlighted that, despite Iraq being greatly blessed with oil, the country has not flourished socioeconomically as it could have. Among the possible factors behind the high rate of poverty are corruption and bureaucratic officialdom, which also contribute to the low rate of economic growth. Moreover, after the violence that started in 2003, many highly educated and skilled Iraqis migrated to other countries for a better and safer future, which calls for serious remedial measures from the government to stop the brain drain. The government has also imposed various trade barriers, such as bureaucratic regulatory practices, which cause low levels of foreign and local investment in indigenous industries and low levels of trade in Iraq (Sassoon, 2012).
The absence of a well-defined budget law has also restricted the development of the oil sector and the government's ability to generate employment opportunities, deliver services (such as education, sanitation, health, transport and electricity), and initiate the infrastructure development projects that are essential for socioeconomic development and poverty reduction in Iraq (UN Iraq, 2014). Lack of employment opportunities is another issue leading to higher levels of poverty: the level of employment is among the lowest of the neighbouring countries (UN Iraq, 2014; United Nations Development Programme, 2014). Unsuccessful and inefficient governments have tried to solve the issue of unemployment through a short-term, narrow approach, creating marginal and part-time jobs — paving, bridge painting, guarding — to show a reduction in unemployment and poverty. A World Bank study covering 2007 to 2012 reported that eight percent of the poor population has less than primary education (Krishnan et al., 2014); moreover, the Iraq Human Development Report (2014) identified lack of education as a key factor of poverty in Iraq (Shlash, 2014).
Various studies have indicated that poverty has been increasing in Iraq (UN Iraq, 2014), owing to the current security, humanitarian and financial crises, unemployment and vulnerability. Figure 5 shows the same trend as reported in the existing literature.
The trend was downward from 2007 to 2012 but started to rise again from 2014 onwards. This is an important signal for government officials and policy makers to explore the factors that caused the rise, especially since the apparent circumstances suggested that Iraq was becoming more stable and a democratic government was taking control of the country's affairs after the withdrawal of the US-led forces. The next section of the paper provides insight into the fact that disparities and inequality in the distribution of income can lead to a higher rate of poverty in general, and in the context of Iraq in particular.
Disparities in the Distribution of Income and Poverty
There is a need to study and explore the structure of inequality and disparities in the distribution of income before devising any policy to reduce poverty. Whether inequality and disparities in the distribution of income have implications for economic development or poverty reduction has long been under investigation in the social sciences. Aghion et al. (1999) explain: "The absence of data on the distribution of wealth for a sufficient number of countries forces researchers to use proxies in empirical studies. The most common approach is to use data on income inequality as a proxy for wealth inequality". Whether inequality in income distribution or poverty affects economic growth is still being established; they also proposed that inequality in the distribution of income is the more relevant concept for poverty. The role of inequality in the distribution of income and of poverty in overall economic growth has been explored and established (Ravallion and Martin, 2012). Various arguments surround this relationship; Easterly (2007, p. 756) attempted a reconciliation:
"One confusion in the theoretical and empirical analysis of inequality is between what we could call structural inequality and market inequality. Structural inequality reflects such historical events as conquest, colonization, slavery, and land distribution by the state or colonial power; it creates an elite by means of these non-market mechanisms. Market forces also lead to inequality, but just because success in free markets is always very uneven across different individuals, cities, regions, firms and industries. So the recent rise in inequality in China is clearly market based, while high inequality in Brazil or South Africa is just as clearly structural. Only structural inequality is unambiguously bad for subsequent development in theory market inequality has ambiguous effects"
Disparities in the distribution of income and poverty result from two factors: first, the level of income earned and, second, how income is distributed among the population of a country (Kanbur, 2005). There is a tendency for inequality in the distribution of income to fall as the economy grows and people become more prosperous (Fosu, 2011); White (2001) likewise remarked, after an extensive review of the poverty literature, that economic progress results in a more equitable income distribution.
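The paper does not name the inequality metric it relies on; the Gini coefficient is the standard choice for measuring disparities in income distribution and is sketched below on hypothetical income data, using the standard sorted-income formula:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # Standard formula based on the ordered incomes.
    return (2.0 * np.sum(ranks * x)) / (n * np.sum(x)) - (n + 1.0) / n

equal = np.full(1000, 500.0)                            # everyone earns the same
rng = np.random.default_rng(7)
skewed = rng.lognormal(mean=6.0, sigma=1.2, size=1000)  # heavy-tailed incomes
print(gini(equal))   # 0.0
print(gini(skewed))  # noticeably higher, reflecting concentrated income
```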
The United Nations Development Programme (2014) also proposed various factors causing poverty in Iraq, together with the weight each factor carries in enhancing poverty. It is important to realize that inequality in the distribution of income contributes the most significant portion of poverty in Iraq, which calls for great attention from policy makers to craft policies that ensure a justified distribution of income across the population. Please refer to Figure 6 for the factors contributing to poverty in Iraq. The next section of the paper presents recommendations and suggestions for policy makers and development-sector organizations working on poverty reduction. The suggestions can be applied to countries with similar socioeconomic conditions in order to reap benefits from this study; such replication would also help establish whether the results of the present study generalize to a larger population, or whether different socioeconomic conditions call for different strategies to explore disparities and inequality in the distribution of income and to eradicate poverty.
Recommendation and Suggestions
After a comprehensive review and examination of the existing literature from secondary data sources, several observations, recommendations and suggestions emerge for policy makers seeking to eradicate poverty in Iraq and to reduce the disparities and inequality in the distribution of income. These observations, suggestions and recommendations are supported by extensive research conducted by development-sector organizations working for the eradication of poverty in less-developed countries like Iraq.
1. Iraq is presently under serious financial and law-and-order crises. To prosper and to reduce inequality and the unequal distribution of income, policy makers and government officials need to make both short-term and long-term policies that ensure a justified distribution of income across the board in Iraq.
2. Iraq's recent history has been full of violence and wars. The government needs a well-defined foreign policy based on coexistence, with all disputes settled through mutual dialogue; the prevalence of peace will also help the economy grow and sustain itself.
3. Corruption and bureaucratic officialdom allow poverty to persist in Iraq. Corruption hampers Iraq on various fronts: it causes inefficiency in the civil service and leaves a bad mark on society as a whole. Serious and immediate remedial measures are required to counter corruption and bureaucratic officialdom.
4. Policy makers and government officials need to take measures that attract the best human capital to Iraq. In the fourth industrial revolution, natural resources are not the only source of economic prosperity and national competitive advantage; it is the quality of human capital that shapes a country's future in the knowledge economy.
5. Various trade and investment barriers in Iraq restrict local and foreign investment in local businesses and industries. Following the economic liberalization initiated through the World Trade Organization (WTO), Iraq should relax these barriers; an efficient economy aided by trade and investment can reduce poverty and make the distribution of income more justified.
6. The budget laws of Iraq need to be redefined; some aspects of budgeting do not even exist. To reduce poverty, the government must ensure that best practices of budgeting, governance and corporate finance are in place in Iraq.
7. Unemployment is another great socioeconomic issue in Iraq, leading to higher levels of poverty. The government needs to devise short-term and long-term mechanisms and policies to reduce unemployment and, subsequently, poverty. Skill-based technical and vocational training can be adopted instead of formal education; skill-based education can help reduce unemployment and extreme poverty in Iraq.
8. The government of Iraq should make informed, data-backed policies to reduce poverty and the unequal distribution of income.
9. There is a need to restructure the economy of Iraq, which is presently heavily dependent on oil and oil-related industries. The government should enhance the share of the service sector in the overall economy; with the recent diffusion of technology, the service sector has been growing globally and constitutes the major portion of developed nations' economies.
10. The government of Iraq, in collaboration with development-sector organizations, has initiated various programmes to reduce poverty, such as the Social Safety Net (SSN), the Public Distribution System (PDS) and the Social Protection Net (SPN). These programmes have helped reduce poverty and inequality, but they need to be made more comprehensive and efficient, and their scope should be extended across the board in Iraq to reap optimum results.
11. Microfinance can be used to promote entrepreneurship and small-scale business, enhancing income levels and reducing poverty; interest-free, shari'ah-compliant loan schemes can be introduced for small businesses. It is globally accepted that small-scale businesses can contribute to economic prosperity very quickly.
12. Income from diversified sources can also help reduce poverty and increase income at the household level. The government can start initiatives that encourage poor people to develop diversified sources of income; the cottage industry is one possible option.
13. The agriculture sector can also be used to reduce poverty, helping people living below the poverty line earn a reasonable income. The sector can be aided through various government subsidies, interest-free loans for raising crops, and the purchase of agricultural output at rates defined by the government, so that exploitation by middlemen is minimized.
14. Since capitalistic economic principles are in place across the globe, and Iraq is no exception, an efficient internal management of these principles is needed so that the gap between the haves and have-nots does not exceed prescribed boundaries.
15. The participation of women in the labour force and the overall economy of Iraq is low. The government should take measures ensuring that employment opportunities are offered on the principles of equality and merit; enhancing women's participation in the labour force will also increase the country's overall economic output.
The next section of the paper will provide concluding remarks of the present study.
Conclusion
Despite being an oil-rich country, Iraq has faced various financial, economic, security and ethnic issues and has not reaped the benefits of its great natural reserves as other comparably endowed countries have; it still faces the problem of poverty. The present study was conducted to explore and examine the various factors causing poverty in Iraq, with special emphasis on the unequal distribution of wealth. Poverty in Iraq is caused by various factors, which are discussed in detail with the support of the existing research literature; disparities and inequality in the distribution of income are among the contributing factors. Various observations, recommendations and suggestions are proposed for the policy makers and government officials of Iraq in order to eradicate poverty and ensure that inequality in income distribution is restricted. | 2019-07-21T18:03:47.047Z | 2018-12-25T00:00:00.000 | {
"year": 2018,
"sha1": "49bb61984620ad3f4b30aa48dd7fdbefcdb01664",
"oa_license": "CCBY",
"oa_url": "https://arpgweb.com/pdf-files/spi5.9.396.399.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7c7696492b109db3fdbf0fe0ae10f9e37b2fb5ef",
"s2fieldsofstudy": [
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |