Immune Infiltration Represents Potential Diagnostic and Prognostic Biomarkers for Esophageal Squamous Cell Carcinoma

Background Immune infiltrates in the tumor microenvironment have established roles in tumor growth, invasion, and metastasis. However, the diagnostic and prognostic potential of the immune cell signature in esophageal squamous cell carcinoma (ESCC) remains unclear.

Results The proportions of 22 subsets of immune cells from 331 samples, including 205 ESCC and 126 normal esophageal mucosa samples retrieved from the TCGA, GEO, and GTEx databases, were deciphered by CIBERSORT. Nine overlapping subsets of immune cells were identified as important features for discrimination of ESCC from normal tissue in the training cohort by the LASSO and Boruta algorithms. A diagnostic immune score (DIS) developed by XGBoost showed high specificities and sensitivities in the training cohort, the internal validation cohort, and the external validation cohort (AUC: 0.999, 0.813, and 0.966, respectively). Furthermore, the prognostic immune score (PIS) was developed based on naive B cells and plasma cells using the Cox proportional hazards model. The PIS, an independent prognostic predictor, classified patients with ESCC into low- and high-risk subgroups in the internal validation cohort (P = 0.038) and the external validation cohort (P = 0.022). In addition, a nomogram model comprising age, N stage, TNM stage, and PIS was constructed and performed excellently (HR = 4.17, 95% CI: 2.22-7.69, P < 0.0001) in all ESCC patients, with a time-dependent 5-year AUC of 0.745 (95% CI: 0.644 to 0.845), compared with PIS or TNM stage as a prognostic model alone.

Conclusion Our DIS, PIS, and nomogram models based on infiltrated immune features may aid diagnosis and survival prediction for patients with ESCC.

Introduction Esophageal cancer (EC) remains one of the most common malignant tumors worldwide and ranks seventh and sixth among all malignant tumors in morbidity and mortality, respectively [1,2]. Although esophageal adenocarcinoma (EAC) is dominant in the United States, Europe, and other western countries, esophageal squamous cell carcinoma (ESCC) comprises more than 90% of EC cases in China [3]. Despite recent advances in diagnostics and therapeutics, the 5-year overall survival rate of EC remains 15-25%, largely due to the lack of screening measures for early diagnosis and of effective therapeutic regimens [4,5]. The majority of ESCC patients have metastatic disease at initial diagnosis, leading to futile clinical management [6]. As such, it is of utmost importance to identify biomarkers for early detection, diagnosis, prognosis, and therapeutic intervention of ESCC. Genomic, epigenomic, and proteomic alterations intrinsic to cancer cells have been extensively investigated and identified as driver agents in the development and progression of ESCC [7][8][9]. Notwithstanding the clinical relevance of these molecular features, few of these dysregulations are targetable in the clinical care of ESCC. Instead, accumulating evidence also demonstrates the tumorigenic role of cancer-cell-extrinsic factors [10]. Esophageal tumor tissues are populated with a great variety of stromal and immune cell types that exert both pro- and anti-tumorigenic effects. In ESCC, multiple studies have reported correlations of prognosis with immune cells and stromal components [11].
Furthermore, autoantibodies against a panel of 6 tumor-associated antigens show good performance for discrimination of early-stage ESCC from normal controls, indicating an ESCC-specific immune response arising in the setting of ESCC [5]. Tumor-infiltrating immune cells are correlated with invasion, metastasis, tumor stage, poor prognosis, and therapeutic outcomes. The commonly used methods, including immunohistochemistry, immunofluorescence, flow cytometry, and cytometry by time-of-flight mass spectrometry, can only characterize limited types of immune cells based on preselected cellular markers, such as CD8 cells, Treg cells, Th17 cells, and tumor-associated macrophages (TAMs), thus providing limited knowledge of the collective effects of these heterogeneous immune cells [12]. The tumor fate, however, is dictated by numerous specialized cell types that interact in a highly coordinated manner [13][14][15][16]. Although information on individual cell types is retained in bulk transcriptomic data from tumor tissues, it is challenging to decipher the individual cellular identities mingled together. To estimate the immune cell proportions from bulk tumor samples, multiple computational methods have been developed [12,17]. For example, CIBERSORT is a computational algorithm to enumerate the relative proportions of immune cells using transcriptional data from bulk tumor tissues [18]. In the present study, CIBERSORT was used to enumerate the immune cellular composition of ESCC based on bulk transcriptome data of ESCC and normal esophageal mucosa samples from Gene Expression Omnibus (GEO), Genotype-Tissue Expression (GTEx), and The Cancer Genome Atlas (TCGA). Diagnostic and prognostic models were developed and validated with good performance.

Patients and Datasets. This study used data in the public domain. The transcriptome data GSE53625/GSE23400, which comprise 179/53 human ESCC samples together with adjacent normal tissue samples and clinical data, were downloaded from GEO (https://www.ncbi.nlm.nih.gov/geo/). The gene expression data of 92 ESCC samples from TCGA and 338 normal esophageal mucosa tissue samples from GTEx were derived from UCSC Xena (https://xena.ucsc.edu/). The mRNA expression levels in TCGA and GTEx were normalized to log2(TPM + 0.001) (TPM: transcripts per kilobase of exon model per million mapped reads) to improve the representation. For prognostic analysis, eligible subjects were recruited according to the following criteria: (1) histologically confirmed diagnosis of ESCC and (2) available follow-up of ≥3 months and prognostic information. As such, 101 ESCC patients from GSE53625 and 51 patients from TCGA were included in the present study. The 101 patients with ESCC from GSE53625 were randomly divided into the training cohort (71 patients) and the internal validation cohort (30 patients). Patients with ESCC from TCGA were used as an external validation cohort.

To quantify the proportions of immune cells, the current study utilized the CIBERSORT algorithm (http://cibersort.stanford.edu/), a deconvolution algorithm that estimates the proportions of 22 immune cell phenotypes based on a gene expression signature matrix of 547 genes representing each of the 22 cell types.
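The preprocessing described earlier in this subsection can be summarized in a few lines. The sketch below is a hedged illustration only: the authors performed their analysis in R, so the Python calls, file names, and random seed here are assumptions rather than their actual pipeline.

```python
# Minimal sketch of log2(TPM + 0.001) normalization and the 7:3 cohort split.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

expr_tpm = pd.read_csv("escc_expression_tpm.csv", index_col=0)   # hypothetical file
expr_log = np.log2(expr_tpm + 0.001)                             # log2(TPM + 0.001), as in the paper

clinical = pd.read_csv("gse53625_clinical.csv")                  # hypothetical file of 101 eligible patients
train, internal_val = train_test_split(clinical, train_size=0.7, random_state=0)
print(len(train), len(internal_val))                             # roughly 71 and 30 patients
```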
These 22 infiltrating immune cell types include naive B cells, memory B cells, plasma cells, CD8 T cells, naive CD4 T cells, resting memory CD4 T cells, activated memory CD4 T cells, follicular helper T cells, regulatory T cells, gamma delta T cells, resting NK cells, activated NK cells, monocytes, M0 macrophages, M1 macrophages, M2 macrophages, resting dendritic cells, activated dendritic cells, resting mast cells, activated mast cells, eosinophils, and neutrophils. CIBERSORT yields a P value for each sample using Monte Carlo sampling, providing a measure of confidence in the results. Generally, a sample with P < 0.05 indicates that the inferred immune cell fractions calculated by CIBERSORT are reliable, and such samples were considered eligible for further analysis.

Estimation of Immune Cell Infiltration. The ESTIMATE (Estimation of Stromal and Immune Cells in Malignant Tumor Tissues using Expression data) algorithm generates an immune score that represents the infiltration of immune cells in tissue, based on single-sample gene set enrichment analysis of gene expression data [19]. Several reports have demonstrated that the immune scores and stromal scores produced by the ESTIMATE algorithm can separate normal cells from tumor cells through analysis of the specific gene expression signatures of immune and stromal cells [20][21][22]. The ESTIMATE outputs were reduced by a factor of 1000 to be comparable with the CIBERSORT outputs.

2.3. Feature Selection. The objective of feature selection is to identify the specific factors that are most effective in discriminating normal from cancerous tissues. Reducing the number of features can alleviate the problem of overfitting. Another important advantage of feature selection over other dimensionality reduction techniques, such as principal component analysis and wavelet transform, is that the original features are maintained. Eligible samples were randomly separated into the training and validation cohorts (7 : 3) using the "sample" function in R. Two feature selection approaches, LASSO and Boruta, were used to assess the importance of intratumoral infiltrated immune cells [23,24]. LASSO minimizes the sum of squared errors subject to an L1 penalty, shrinking coefficients and thereby ranking and selecting variables in statistical models. Boruta selects all relevant features on the basis of a random forest (RF) classifier.

Classifier Development. XGBoost (eXtreme gradient boosting) [25,26] is an ensemble learning algorithm based on gradient-boosted trees that provides state-of-the-art results for many bioinformatics problems. It uses the gradient boosting framework and provides a parallel tree boosting technique, which can solve a variety of problems with high accuracy. The hyperparameters of XGBoost were determined by grid search and 10-fold cross-validation, including the number of iterations (nrounds = 200), the step size shrinkage used to prevent overfitting (eta = 0.15), the maximum depth of a tree (max_depth = 3), the minimum loss reduction required to make a further partition on a leaf node of the tree (gamma = 0.25), the column subsampling ratio (colsample_bytree = 0.2), and the minimum sum of instance weight (hessian) needed in a child node (min_child_weight = 0.7). Compared with other machine learning algorithms, XGBoost has certain unique advantages.
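As a concrete illustration of the hyperparameters listed above, the following hedged sketch shows how they map onto the Python xgboost interface together with a 10-fold grid search. The authors used the R xgboost package, so this is an assumed equivalent rather than their code, and the data arrays are placeholders.

```python
# Hedged sketch: reported XGBoost hyperparameters and a 10-fold grid search.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder data: 154 training samples x 9 selected immune features,
# with labels 1 = ESCC and 0 = normal esophageal mucosa.
rng = np.random.default_rng(0)
X = rng.random((154, 9))
y = rng.integers(0, 2, 154)

base = XGBClassifier(
    n_estimators=200,       # nrounds = 200
    learning_rate=0.15,     # eta = 0.15
    max_depth=3,
    gamma=0.25,
    colsample_bytree=0.2,
    min_child_weight=0.7,
)

# Grid search with 10-fold cross-validation around the reported optimum.
grid = GridSearchCV(
    base,
    param_grid={"max_depth": [2, 3, 4], "learning_rate": [0.10, 0.15, 0.20]},
    scoring="roc_auc",
    cv=10,
)
grid.fit(X, y)
dis_scores = grid.best_estimator_.predict_proba(X)[:, 1]   # diagnostic immune score (DIS)
```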
The most important strengths are that XGBoost performs a second-order Taylor expansion of the objective function and uses the second derivative to accelerate the convergence of the model during training. Subsequently, we performed survival analysis to obtain robust survival-associated immune cells from those selected by the diagnostic classifier. In this study, the predictive model was implemented with the R package XGBoost (version 1.3.2.1), available from https://cran.r-project.org. The parameters of XGBoost were optimized by grid search with cross-validation in the training cohort.

Performance Evaluation Metrics. To objectively evaluate the performance of the classifier, the following metrics, including sensitivity (Sn), specificity (Sp), and overall accuracy (Acc), are used in this study and calculated as follows: Sn = TP/(TP + FN), Sp = TN/(TN + FP), and Acc = (TP + TN)/(TP + TN + FP + FN), where TP, TN, FP, and FN indicate the true positives, true negatives, false positives, and false negatives, respectively. The values of Acc, Sn, and Sp reflect the robustness of the classifiers. In addition, the receiver operating characteristic (ROC) curve plots the true positive rate (TPR = sensitivity) against the false positive rate (FPR = 1 − specificity). The area under the ROC curve (AUC) is also used for performance evaluation in this study, as it quantitatively and objectively measures the performance of the proposed classifier. A perfect predictor has an AUC of 1, and random performance corresponds to an AUC of 0.5.

2.6. Statistical Analysis. For each immune cell fraction, we calculated the 25th percentile, median, and 75th percentile of the normal and tumor groups. Group comparisons for continuous variables were performed using the Wilcoxon signed-rank test or Student's t-test. Correlation analysis was performed with the R package "corrplot". The LASSO analysis was carried out using the "glmnet" package. Survival ROC curves were plotted using the "survivalROC" package. Decision curve analysis was carried out with the "rmda" package. A nomogram and calibration plots were developed with the "rms" package. Kaplan-Meier survival analyses with log-rank tests were applied using the "survival" package. Time-dependent ROC (survival ROC) curves were applied to assess the prognostic power of the nomogram risk score. The above statistical analyses were conducted using R software 4.1.0. All statistical tests were two-tailed, and P < 0.05 was considered statistically significant.

Eligible Samples. The present study involved 4 source datasets: GSE53625, GSE23400, TCGA-ESCC, and GTEx. The overall proportions of immune versus nonimmune cells were estimated by the CIBERSORT algorithm. As the CIBERSORT P values anticorrelate with the abundance of immune cells in bulk tissues, P < 0.05 was used as a threshold to select eligible samples for further analysis. As such, 101 ESCC and 53 normal samples from GSE53625, 53 ESCC and 48 normal samples from GSE23400, 51 ESCC samples from TCGA, and 25 normal esophageal mucosa samples from GTEx were eligible for subsequent analysis. Demographic and clinicopathological characteristics of patients with ESCC in GSE53625 and TCGA-ESCC (the corresponding information was unavailable for subjects from GSE23400 and GTEx) are shown in Table 1. The proportions of 13 immune cell subsets differed significantly between ESCC and normal tissues (Table 2 and Figure 1(a)). In addition, ESTIMATE was used to calculate the StromalScore, ImmuneScore, and ESTIMATEScore for each sample. We found that the StromalScore was significantly increased in ESCC samples, whereas the ImmuneScore was significantly decreased in ESCC samples (Table 2 and Figure 1(b)).
Furthermore, the correlation between StromalScore and ImmuneScore was significant, with a correlation coefficient of 0.60. Correlations among all 13 immune cell types as well as StromalScore and ImmuneScore are shown in Figure 1(c). We observed positive correlations between gamma delta T cells and ImmuneScore, and between StromalScore and ImmuneScore, with correlation coefficients greater than 0.3. The negatively correlated pairs comprised M0 macrophages and activated memory CD4 T cells, M0 macrophages and gamma delta T cells, and M0 macrophages and ImmuneScore, with coefficients less than −0.3. In addition, increased proportions of plasma cells and resting mast cells, a decreased proportion of follicular helper T cells, and increased StromalScore and ESTIMATEScore were found in male patients. In the category of alcohol use, the proportions of memory B cells, plasma cells, and gamma delta T cells were significantly decreased in ESCC patients with alcohol exposure (Supplemental Figure 2). With regard to T stage, higher T stages were associated with higher StromalScore, ImmuneScore, and ESTIMATEScore in ESCC. In the categories of N stage and TNM stage, the percentage of gamma delta T cells was positively correlated with disease status (Supplemental Figure 3).

LASSO regression identified 9 features contributing to the classification of ESCC (Figures 2(a) and 2(b)). Figure 2(c) shows that 13 features identified by the Boruta algorithm (shown in green) contributed significantly to the classification of ESCC. These 13 candidate features comprised naive B cells, memory B cells, plasma cells, activated memory CD4 T cells, regulatory T cells (Tregs), gamma delta T cells, monocytes, M0 macrophages, M1 macrophages, resting mast cells, neutrophils, StromalScore, and ImmuneScore. The 9 common features identified by both the LASSO and Boruta algorithms were deemed candidate biomarkers for ESCC in this study; these were exactly equivalent to the features selected by LASSO. The correlations of these 9 features are shown in Figure 1(c), in which M0 macrophages were negatively correlated with activated memory CD4 T cells, with a correlation coefficient of −0.32. The correlations between the other features were not significant.

Diagnostic Signature for ESCC. In the training cohort of 101 ESCC tissues and 53 normal tissues, we calculated the diagnostic immune score (DIS) using the XGBoost method. For differentiation of ESCC from normal tissues, the cutoff score of the DIS was determined from the ROC curve. Using a DIS cutoff score of 0.603, an AUC of 0.999 was attained for classification of the 154 tissue samples in the training cohort, with Sn and Sp of 0.981 and 0.999, respectively (Figure 2(d), Table 3). In the internal validation cohort of 53 ESCC tissues and 48 normal tissues from GSE23400, the DIS also showed robust performance for discrimination of ESCC from normal tissue samples, with an AUC of 0.813 (Figure 2(e)). Consistently, in the external validation cohort (51 ESCC tissues from TCGA-ESCC and 25 normal samples from GTEx), the AUC was 0.966 (Figure 2(f)). Our data highlight that both immune infiltrates and nonimmune stromal components have clinical implications for ESCC diagnosis.

3.5. Prognostic Signature for ESCC. All 101 ESCC patients with survival data were randomly assigned to the training (70%, 71 samples) and validation (30%, 30 samples) cohorts in the present study.
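The paper reports a DIS cutoff of 0.603 determined from the ROC curve but does not state the selection criterion. The hedged sketch below uses Youden's J statistic, a common choice, purely as an assumed illustration; y and dis_scores are taken from the classifier sketch above.

```python
# Hedged sketch: reading an operating cutoff off the ROC curve with Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def roc_cutoff(y_true, scores):
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1.0 - fpr[best], roc_auc_score(y_true, scores)

cutoff, sn, sp, auc = roc_cutoff(y, dis_scores)
print(f"cutoff={cutoff:.3f}, Sn={sn:.3f}, Sp={sp:.3f}, AUC={auc:.3f}")
```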
In the training cohort, naive B cells, M0 macrophages, resting mast cells, and StromalScore were significant prognostic factors among the 9 candidate biomarkers for ESCC by univariate Cox proportional hazards regression analysis (Supplemental Table 1). Multivariate Cox proportional hazards regression analysis showed that naive B cells and plasma cells were independent prognostic factors for patients with ESCC after adjusting for potential confounding factors (Figure 3(a)). To calculate a risk score integrating the independent prognostic factors, the prognostic immune score (PIS) for each individual was calculated using the Cox proportional hazards model in the training cohort. The formula for PIS calculation was as follows: PIS = (naive B cells × 27.71) + (plasma cells × (−5.50)). The cutoff value of the PIS for prognostic prediction of patients with ESCC was determined using the "survminer" package. Using a PIS cutoff of −0.357, ESCC patients in the training cohort were divided into high- and low-PIS groups. Kaplan-Meier survival analysis showed that the median survival times of the high-PIS and low-PIS subgroups were 28.6 months and >60 months, respectively (Figure 3(b)). The log-rank test showed that the survival times of ESCC patients in these two groups were significantly different, with a hazard ratio of 2.0 for patients with high PIS (95% CI: 1.06 to 3.70, P = 0.028, Figure 3(b)); similar results were observed in the internal validation cohort (P = 0.038, Figure 3(c)) and in the external validation cohort (HR = 8.33, 95% CI: 1.03 to 50.0, P = 0.022, Figure 3(d)). Kaplan-Meier survival analysis of the esophageal adenocarcinoma (EAC) cohort in TCGA showed no significant difference between the low- and high-PIS groups (HR = 2.04, 95% CI: 0.564 to 7.69, P = 0.26, Supplemental Figure 4(a)), consistent with the distinctive molecular phenotypes manifested by ESCC and EAC.

3.6. Prognostic Nomogram for ESCC. Candidate prognostic factors for overall survival of ESCC among the clinicopathological features (gender, tobacco use, alcohol use, T stage, N stage, and TNM stage) as well as the PIS were assessed by univariate Cox regression analyses, and N stage and TNM stage were identified as independent factors associated with the prognosis of ESCC (Supplemental Table 2). Seeking to improve the accuracy of prognostic classification, a prognostic nomogram model was constructed that incorporates the clinicopathological features with prognostic relevance together with the PIS (Figure 4(a)). The calibration curves for the nomogram at 2, 3, and 5 years showed good agreement between prediction and actual observation in all samples (Figures 4(b)-4(d)). The mean standard errors of the 2-year, 3-year, and 5-year survivals were 0.146, 0.220, and 0.276, respectively. The Kaplan-Meier survival curves demonstrated that ESCC patients in the high-risk group had significantly worse overall survival than those in the low-risk group (HR = 4.17, 95% CI: 2.22 to 7.69, P < 0.0001, Figure 5(a)). The time-dependent 5-year AUC of the nomogram was 0.745 (95% CI: 0.644 to 0.845; Table 4). The 5-year AUC of the nomogram model outperformed TNM stage or PIS as a prognostic model alone (P = 0.006 and P = 0.693, respectively), indicating that this nomogram model is a more reliable prognostic index.

Discussion Esophageal cancer, including ESCC, which is more prevalent in China, is clinically challenging and requires multidisciplinary care comprising surgery, chemotherapy, radiotherapy, and immunotherapy [4].
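A brief hedged sketch of the PIS computation and risk stratification reported above: the authors used the R survival and survminer packages, so lifelines is an assumed Python equivalent here, and the patient values are hypothetical.

```python
# Hedged sketch: computing the PIS from the published coefficients and
# comparing high- and low-PIS groups with a log-rank test.
import pandas as pd
from lifelines.statistics import logrank_test

def pis(naive_b, plasma):
    # PIS = (naive B cells x 27.71) + (plasma cells x (-5.50)), as reported
    return 27.71 * naive_b - 5.50 * plasma

# Hypothetical CIBERSORT fractions and follow-up data (months, 1 = death)
df = pd.DataFrame({
    "naive_b": [0.010, 0.002, 0.035, 0.004],
    "plasma":  [0.020, 0.150, 0.010, 0.090],
    "time":    [18.0, 60.0, 24.0, 55.0],
    "event":   [1, 0, 1, 0],
})
df["pis"] = pis(df["naive_b"], df["plasma"])
df["group"] = (df["pis"] > -0.357).map({True: "high", False: "low"})   # reported cutoff

hi, lo = df[df["group"] == "high"], df[df["group"] == "low"]
result = logrank_test(hi["time"], lo["time"],
                      event_observed_A=hi["event"], event_observed_B=lo["event"])
print(result.p_value)
```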
Despite these efforts, recurrence and metastasis still ensue and leave ESCC patients with dismal clinical outcomes [4,11]. Therefore, identification of a novel biomarker signature for diagnosis and prognosis, as well as for therapeutic intervention, holds promise for tailored care of ESCC. Based on the deconvolution of bulk transcriptome data of ESCC, 9 overlapping immune features identified by the LASSO and Boruta algorithms were used for DIS calculation, enabling effective discrimination of ESCC from normal esophageal mucosa tissue. Two immune cell types, namely naive B cells and plasma cells, identified by univariate and multivariate Cox proportional hazards regression analyses as independent prognostic factors, were used to construct a prognostic model that classifies ESCC patients into high- and low-risk groups with significant differences in clinical outcomes. Furthermore, a nomogram model integrating age, N stage, TNM stage, and PIS shows robust predictive accuracy for the prognosis of ESCC. The initiation and development of malignancy are closely linked to inflammation, which fosters proliferation, survival, and migration during neoplastic progression [27,28]. For example, elevated plasma levels of C-reactive protein are associated with reduced disease-free survival of breast cancer patients [29]. Furthermore, current smoking or prior heavy smoking, which is linked to chronic lung inflammation, is significantly associated with an increased risk of recurrence and mortality in breast cancer patients [30,31]. In mice, lung inflammation induced by either tobacco smoke exposure or nasal instillation of lipopolysaccharide awakens dormant cancer cells [32]. Notably, chronic inflammation is an integral component of the tumor microenvironment of ESCC, evidenced by local infiltration of multiple immune cells and elevated circulating C-reactive protein [33]. In the tumor microenvironment (TME), there exists a variety of immune and stromal cells that restrain or accelerate tumor growth. Mounting evidence indicates that infiltrating immune cell populations are associated with tumor growth, cancer progression, and clinical outcome in multiple cancers [34,35]. Among the tumor-infiltrating immune cells, immune suppressor cells (T regulatory cells and M2 macrophages) are generally associated with poor prognosis, whereas cytotoxic T cells (CD8+ T cells, NK cells, and γδ T cells) are correlated with improved survival [11, 13-16, 28, 34, 35]. In recent years, various computational algorithms have been developed to estimate the immune components within the TME using bulk transcriptome data [12,17,[36][37][38][39]. The present study employed CIBERSORT to estimate the fractions of 22 immune cell subsets using transcriptome data from the public domain. Among the ESCC and normal esophageal tissue samples with CIBERSORT P < 0.05, the proportions of 13 individual immune cell fractions in tumor tissues were significantly different from those in normal tissues. In the TME of ESCC, the immunostimulating cells, including γδ T cells, mast cells, and B cells, were underrepresented, whereas the immunosuppressive cells, including M0 and M1 macrophages and neutrophils, were more abundant in ESCC compared with normal esophageal tissues. By leveraging these differential immune features with XGBoost, we built a DIS that distinguished ESCC from normal squamous mucosa with reliable accuracy.
Our results demonstrate that tumor immune infiltrates play oncogenic roles in the pathogenesis and progression of ESCC and serve as novel potential biomarkers for the detection and diagnosis of ESCC. Although CIBERSORT P values reflect the total infiltrated immune cells in the TME, no prognostic effect was observed in the highly infiltrated subset (CIBERSORT P < 0.01) compared with the subset lacking immune infiltration (CIBERSORT P > 0.05) in the context of ESCC (Supplemental Figure 4(b)). Nevertheless, a total of 8 immune cell types, including naive B cells, CD8 T cells, regulatory T cells, gamma delta T cells, activated NK cells, M0 macrophages, resting dendritic cells, and resting mast cells, were significantly correlated with the clinical outcomes of ESCC patients by univariate Cox regression analysis. Notably, ESCC patients with higher fractions of M0 macrophages showed poorer overall survival (Supplemental Figure 4(c)). Tumor-associated macrophages (TAMs), the major component of the TME, are functionally classified into two subtypes, i.e., M1 and M2 macrophages, which display distinct effector molecules on the plasma membrane. Generally, M1 and M2 macrophages assume tumoricidal and protumor functions, respectively, in the evolution of malignancy [40,41]. Higher proportions of M2 macrophages contribute to an immunosuppressive microenvironment and have been associated with therapeutic resistance and poor prognosis in multiple cancers, including both ESCC and EAC [11,42,43]. Targeting macrophages, in particular M2 macrophages, improved antitumor immunity through reprogramming of immune cells [44]. In line with this, we also found a negative correlation between ESCC prognosis and M0 and M2 macrophages, suggesting that skewing the differentiation of M0 macrophages towards M1 polarization may be beneficial. Additionally, higher proportions of resting memory CD4 and γδ T cells, in addition to M0 and M2 macrophages, were also found to be negative prognostic markers of clinical outcome. In contrast, greater infiltration of plasma cells, CD8 T cells, activated NK cells, and resting mast cells was correlated with improved prognosis. In immuno-oncology, the cytotoxicity exerted by T and NK cells has been well recognized and has received the greatest attention. On the other hand, the role of B lymphocytes in the context of host-tumor interaction has begun to be appreciated over the last decade. Through antibody-dependent cellular cytotoxicity and complement cascade activation, B and plasma cells can kill cancer cells and are correlated with improved cancer outcomes. In contrast, tumor-promoting roles have also been found in multiple cancers [42,45,46]. Based on the prognostic relevance of the immune features generated by CIBERSORT, a multivariable Cox regression approach was used to select the key prognostic features for building the PIS. ESCC patients with low PIS had favorable outcomes compared with those with high PIS. To further improve the predictive accuracy for prognosis, we also established a nomogram model, which integrates age, N stage, TNM stage, and PIS, with improved performance compared with PIS or TNM stage as a prognostic model alone. In aggregate, our data argue that the infiltrated immune populations in the TME are heterogeneous in terms of phenotype and function and play divergent roles in the development and progression of ESCC. This dichotomy in immune functions was supported by the evidence of negative or positive correlations with cytolytic activity, which was closely correlated with the expression of GZMA, GZMK, and PRF1 [47].
Thus, the functional state of immune cells in the TME, rather than their abundance per se, is the determinant of the immune response against cancer. Our study also has limitations. First, ESCC is prevalent in China, in contrast with EAC, which is more frequent in western countries. Genetic aberrations are remarkably distinct between ESCC cases from the USA and southern China [50][51][52]. Furthermore, American ESCC, which occurs more commonly in Black than in white patients in the United States [53], shares only 30% of differentially expressed genes with Chinese ESCC, indicating that demographic factors such as genetic ancestry could account for variation in genetic phenotype. Therefore, our PIS derived from Chinese ESCC may not be applicable to ESCC from other populations, especially western countries. This is the main limitation of this study. Second, all data in this work were from public databases, and the clinical utility of our DIS and PIS was not verified in independent clinical ESCC samples. Third, some factors, including living environment, drinking habits, family history, and microbial infection, were incompletely recorded for the ESCC patients in this study, which might lead to underestimation of the value of our diagnostic and prognostic models.

Conclusion In summary, the present study demonstrates the diagnostic and prognostic potential of our DIS and PIS based on the differential distribution of infiltrated immune cells enumerated by deconvolution of the transcriptome. A nomogram model integrating clinicopathological characteristics and the immune signature shows improved accuracy for prognostic classification over TNM stage or PIS as a prognostic model alone, which warrants validation in prospective studies.
Stratigraphic Control of Petrography and Chemical Composition of the Lower Gondwana Coals, Ib-Valley Coalfield, Odisha, India

The Ib-valley coalfield of Odisha, India contains five coal seams, viz. the Ib seam at the bottom, overlain successively by the Rampur, Lajkura, Parkhani and Belpahar seams. Twenty-one representative samples were collected from the three major seams (Ib, Rampur and Lajkura) and petrographic and chemical studies were carried out on them. Samples were not collected from the Parkhani and Belpahar seams as these are very small seams, exposed only locally and having no regional correlation. The macroscopic study shows the dominance of durain, which imparts a dull appearance to these coals. The maceral analysis reveals that the vitrinite percentage varies from 4.5% to 80.2%, the exinite from 3.30% to 22.2% and the inertinite from 12.5% to 92.2% in different samples of the Ib valley coalfield. The very high proportion of inertinite suggests shallower-water deposition of plant materials followed by a prolonged period of exposure and repeated cycles of weathering. The proximate analysis results show that the top Lajkura seam is comparatively lower in rank than the underlying Ib and Rampur seams. The ultimate analysis shows that C varies from 77.88% to 85.79%, H from 4.4% to 5.91% and O from 7.26% to 15.3%. The H/C and O/C ratios, together with the calorific value (C.V.), show distinct variations from the bottom to the top seam in this coalfield. The analytical results indicate that the petrographic and chemical characters of the Ib valley coals are stratigraphically controlled.

Introduction The Ib-valley coalfield derives its name from the river Ib, a tributary of the river Mahanadi, and represents a part of the NE-SW trending master basin belt of the Son-Mahanadi valley. The river flows in a generally southerly direction through the coalfield and discharges into the Hirakud reservoir, which has submerged the southern fringe of the coalfield. The Ib-valley comprises the Hingir basin in the north and the Rampur basin in the south. Though the Gondwana sediments spread further north-west into the adjoining parts of Chhattisgarh state and comprise the Mand-Raigarh and Korba coalfields, the limits of the Ib-valley coalfield are defined by the political boundary, and the coalfield covers parts of the Sambalpur, Jharsuguda and Sundargarh districts of Odisha. The coalfield extends over an area of 1460 sq km and is bounded by latitudes 21˚31'N and 22˚14'N and longitudes 83˚32'E and 84˚10'E. Only a few researchers have so far carried out research on this coalfield of the Mahanadi valley. Pareek (1958) [1] was the first to study the microstructure and petrological composition of the Rampur seam and found that these coals comprised fibrous durain and fine-grained durain, vitrinite being abundant in the former and occurring as long parallel strips, while in the latter it was of a micro-fragmental nature and sporadic. Exinite and fusinite occur interbedded with vitrinite sheets. Semi-fusinite, sclerotinite, micrinite, resinite and cutinite occur next in abundance. Subsequently, Navale (1967) [2] published the micro-constitutional analysis of the coals of the Rampur coalfield, and Navale and Tiwari (1968) [3] undertook the palynological correlation of coal seams and the study of their nature and formation in the Rampur coalfield. The spontaneous combustion of the Ib-valley coalfield was studied by Behera and Chandra (1995) [4] and correlations were drawn between macerals and crossing point temperature (CPT). Goswami et al.
(2006) [5] studied the floristic assemblage during the deposition of the Barakar and Kamthi Formations and suggested a palaeoclimatic shift from temperate warm-moist to warm-dry conditions during the upper Barakar Formation and warm and humid conditions during the Kamthi Formation. Based on a study of the petrochemistry of the coals of the Basundhara block of the Ib-valley coalfield, Singh et al. (2010) [6] suggested that these coals originated from plant communities of a highly fluctuating oxic and anoxic moor to an oxic (dry) moor with sudden high flooding conditions. Mohanty et al. (2011) [7] studied the petrographic signature of marine inundation in the Barakar coals of the Ib-valley and opined that this inundation was caused by a rise in the mean sea level of the Tethys sea following a phase of deglaciation, until isostatic equilibrium was achieved. A study of trace elements in the coal indicates that the seam belonging to the Karharbari Formation is more enriched in trace elements than the seam belonging to the Barakar Formation (Senpaty and Behera, 2012) [8]. Singh et al. (2013) [9] studied the petrology of the coals of the Rampur seam-IV and the Lajkura seam and found the dominance of the inertinite group among the macerals and a high mineral matter content. In this research paper, an attempt has been made to examine the variation of chemical and petrographic parameters in relation to stratigraphy.

Geology The first geological map of the area was prepared by V. Ball in 1875. Drilling operations were initiated by W. King in 1884-86. The area was subsequently remapped by G.F. Reader (1901), G.C. Chatterjee (1943), E.R. Gee (1947), and DRS Mehta and M.A. Anandalwar (1954-55). Large-scale mapping on 1:31,680 scale, with the aid of aerial photographs on 1:42,240 scale, was carried out by B.C. Pandey and S.N. Chakraborty during 1961-63. However, regional exploration by the GSI started only in 1964-65. The Directorate of Geology, Government of Orissa has been engaged in detailed exploration in this coalfield on behalf of CMPDI since 1974-75. The geological map of the Ib-valley coalfield area is shown in Figure 1. The stratigraphic succession (Table 1) starts with Precambrian rocks at the base. The Gondwana formations, consisting mainly of the Talchir, Karharbari, Barakar, lower Kamthi (Raniganj) and upper Kamthi Formations, overlie the Precambrian rocks, and recent deposits are found at the top. The upper Kamthi sediments are of Triassic age and the others are of Permian age.

Petrography With the advancement of coal technology, petrography plays an important role in the determination of coal quality and in its application in various sectors such as carbonization, hydrogenation, coal gas production and other industrial uses. Therefore, a petrographic study was carried out on the Ib valley coals.

Macroscopic Description Macroscopic observation shows that the Ib River coals are banded and grayish black to dull black in appearance. Durain is the most common macroscopic ingredient, followed by clarain. The dominance of durain imparts a dull appearance to these coals. Based on the macroscopic study, the Lajkura seams are constituted of "dull coal", "banded dull coal", and "banded coal" (Singh et al., 2013; Diessel, 1965) [9] [16].

Microscopic Study/Maceral Analysis The results of the maceral studies are shown in Table 3.
Vitrinite group: Vitrinite is the most common constituent of the Ib valley coal samples. The percentage of vitrinite rarely exceeds 77% (Table 3). The vitrinites are of massive and cellular types and in general the colour is dark gray to light gray, exhibiting moderate to low reflectivity. These are considered to be telocollinite. Vitrinite is frequently intermixed with exinite and fragmental bits of fusinite. A few highly reflective grains resemble pseudo-vitrinite. In some samples discrete grains of pyrite and siderite are found to be embedded in vitrinite. Vitrinite, telocollinite and fusinite are the dominant microlithotype constituents. In some samples vitrinites are seen to preserve resinous bodies. Excluding mineral matter, the vitrinite varies from 4.5% to 80.2% (Table 3). Seam-wise, the youngest Lajkura seam shows the highest percentage of vitrinite, whereas the Rampur and Ib seams show comparatively less vitrinite. In reflected light, the duroclarite microlithotype is seen containing macrinite, vitrinite and sporinite with specks of pyrite (Figure 2(a)). Under fluorescence, sporinite fluoresces while vitrinite, macrinite and mineral grains are non-fluorescing (Figure 2(b)). In Figure 3(a), vitrinite is found occurring alternately with exinite. A few fragments of fusinite are admixed with vitrinite. When this field is observed under fluorescent light, the vitrinite is non-fluorescing while the exinite fluoresces (Figure 3(b)).

Exinite: The exinite group macerals observed in the Ib valley coals are mainly megasporinite, sporinite, cutinite and resinite. Megasporinite and sporinite are the major exinite macerals. In some samples the cell sacks of megaspores are filled with secondary resinous material. In most of the samples exinites are admixed with fragmental bits of vitrinite and vice versa. Vitrinites admixed with exinites contain small oval bodies of resinite. In some samples of the Rampur seam, microsporinite and resinite are intimately mixed with fragments of vitrinite and inertinite and aggregate into the duroclarite or clarite microlithotype. The percentage of exinite in the Ib valley coals varies from 3.3% to 22.2%. Normally Indian coals show a low exinite content, but interestingly the Ib valley coals are high in exinite content, suggesting a reducing environment. The percentage of exinite is comparatively low in the youngest Lajkura seam, whereas it is higher in the Rampur and Ib seams. A high percentage of exinite was reported by Chandra and Taylor (1975) [17] in Talchir coals. Niyogi (1989) [18] also supported the views of Chandra and Taylor. The present authors likewise find a high percentage of exinite in the Ib valley coals. In reflected light, durite is seen comprising fusinite and exinite (Figure 4(a)). When observed under fluorescent light, the exinite fluoresces while the fusinite is non-fluorescing (Figure 4(b)).
Inertinite group: Inertinites are the group of macerals which show the highest reflectivity and are very bright in incident light. The micro-components of the inertinite group include semifusinite, fusinite, macrinite, micrinite, sclerotinite, and inertodetrinite. Fusinite is by far the most dominant maceral in the Ib valley coals. Semifusinite, macrinite, micrinite and inertodetrinite are the other macerals, found in decreasing order of abundance. Sclerotinite occurs as a rare component. Micrinite occurs in granular form and is opaque in transmitted light. Fusinite cell lumens are filled with inorganic minerals such as pyrite, quartz and siderite. In many samples, the fusinites are highly crushed. In some cases fusinite shows well-preserved woody structure. Fusinite and semifusinite occur as lensoid bodies and are crushed at many places. Fusinite, vitrinertite-I and clarodurite are the dominant microlithotypes. The percentage of inertinite in the Ib valley coals varies from 12.5% to 92.2% on a mineral-matter-free basis (Table 2) and no definite trend in the variation of inertinite is seen from the bottom seam to the top seam. The very high proportion of inertinite obviously suggests shallower-water deposition of plant materials followed by a prolonged period of exposure and repeated cycles of weathering.

Mineral matter: The Ib valley coals are found to contain a high proportion of mineral matter. The percentage of mineral matter in different samples varies from 1% to 16% (Table 2). The mineral matter mainly includes clay minerals, siderite, pyrite, limonite, and quartz. Pyrite and siderite are found as inclusions in vitrinites, and in some samples they fill up the cell lumens of fusinite along with silicate minerals. In some cases siderite replaces semifusinite bodies. Pyrite and siderite are also found ubiquitously distributed as discrete grains. In the samples of the Ib valley coalfield, the pyrite content varies from 1% to 7%. The authors also find a number of colloidal mineral forms which are suspected to be melnikovite (a variety of the pyrite group).

Chemical Analysis The chemical analysis of the coals of the Ib-valley coalfield was carried out with a view to understanding the chemical behaviour of these coals in relation to stratigraphy. For this purpose, both proximate and ultimate analyses were done. To obtain a clear picture of the whole coalification (reaction) process, the atomic H/C versus O/C ratios were plotted on standard diagrams. Proximate analysis of the coal samples was carried out by the Indian Standard Method (I.S. 1977) [19] to determine the percentages of moisture, volatile matter and ash. Ultimate analysis of the samples was done following the Indian Standard Method (I.S. 1974) [20]. In this analysis, the weight percentages of carbon, hydrogen and nitrogen were determined, and the calorific value was also measured for all the samples.

Proximate Analysis The proximate analysis of the Ib-valley coals (Table 4) shows that the percentage of moisture varies from 5.5% to 16.1%. Seam-wise, the youngest Lajkura seam shows a higher percentage of moisture than the Rampur and Ib seam coals. Similarly, the percentage of volatile matter varies from 17.2% to 32.9%, the highest value being observed in the bottom-most seam (Ib seam). The ash percentage varies from 9.9% to 41.5%. Comparatively, the lower seams contain less ash than the upper seams. Thus, the percentage of fixed carbon, as determined from the above analysis, varies from 26.8% to 50.2%.
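A short worked sketch of the arithmetic behind the chemical interpretation: converting the ultimate-analysis weight percentages into the atomic H/C and O/C ratios plotted on the van Krevelen diagrams, and the fixed-carbon to volatile-matter fuel ratio discussed in the next paragraph. The paired input values are illustrative end-members from the reported ranges, not a single measured sample.

```python
# Hedged sketch: atomic H/C and O/C ratios from ultimate analysis (wt%),
# and the fuel ratio from proximate analysis.
def atomic_ratios(c_wt, h_wt, o_wt):
    c = c_wt / 12.011    # moles of C per 100 g of coal
    h = h_wt / 1.008     # moles of H
    o = o_wt / 15.999    # moles of O
    return h / c, o / c

def fuel_ratio(fixed_carbon, volatile_matter):
    return fixed_carbon / volatile_matter

h_c, o_c = atomic_ratios(85.79, 4.40, 7.26)
print(round(h_c, 2), round(o_c, 3))        # ~0.61 and ~0.064: H/C < 1, type III trend
print(round(fuel_ratio(45.0, 30.0), 2))    # ~1.5, within the reported 1.2-1.8 range
```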
The fuel ratio is the ratio between the fixed carbon percentage and the volatile matter percentage. With the help of the fuel ratio, the rank of a coal can be determined. The fuel ratio of the samples analysed varies from 1.2 to 1.8, as shown in Table 4. The seam-wise variation of the fuel ratio shows that the rank varies from the bottom to the top. The top Lajkura seam is comparatively lower in rank than the Ib and Rampur seams.

Ultimate Analysis The results of the ultimate analysis of the Ib valley coals are shown in Table 5. In these coals, the percentage of carbon ranges from 77.88% to 85.79% and the hydrogen percentage varies from 4.40% to 5.91%. The carbon and hydrogen values indicate that the coals of the Ib valley are perhydrous to subhydrous in nature. There is not much variation in the nitrogen content; the values for all samples vary within a narrow range of 1.44% to 1.94%. The oxygen content ranges from 7.26% to 15.3%. The elementary constitution of any member of the coalification series may be represented graphically by plotting the H/C versus O/C ratio, which gives an insight into the course of the processes operating during coalification. From the ultimate analysis data, the atomic H/C and O/C ratios were calculated for the different samples of the Ib-valley coals and the results are shown in Table 5. The values have been plotted on standard diagrams (Figure 6(a) and Figure 6(b)) after Van Krevelen (1961) [21]. The plot in Figure 6(a) clearly indicates that these coals have a low H/C atomic ratio (<1.0) and a high O/C atomic ratio. This is suggestive of type III kerogen formation in a terrestrial environment. The organic matter was derived from continental higher plants and contains much identifiable vegetal debris. Microbial degradation in the basin of deposition is usually limited owing to substantial sedimentation and rapid burial. Figure 6(b) shows the evolution paths of the maceral groups of coals. The Ib valley coals plot in the vitrinite field; hence these are vitrinite-rich coals of terrestrial origin.

Conclusions
1) The petrographic study reveals that the vitrinite percentage varies from 4.5% to 80.2%, the exinite from 3.30% to 22.2% and the inertinite from 12.5% to 92.2% in different samples of the Ib valley coalfield. Normally, Indian coals are low in exinite content, but interestingly the exinite content of the Ib valley coals is higher (Table 3).
2) The very high proportion of inertinite obviously suggests shallower-water deposition of plant materials followed by a prolonged period of exposure and repeated cycles of weathering.
3) Seam-wise, the top Lajkura seam contains more vitrinite, and vitrinite gradually decreases towards the bottom seam, whereas the exinite and exinite + vitrinite contents are low at the top seam and gradually increase towards the bottom. Inertinite does not show any variation from top to bottom.
4) Pyrite and other mineral matter vary from 1% to 16%, of which pyrite alone contributes 1% to 7%. A number of colloidal mineral forms, suspected to be melnikovite (a variety of the pyrite group), are also found in these coals.
5) On fluorescence studies, the Ib valley coals might have been expected to show an abundance of fluorescing macerals, but in the present study only a few fluorescing macerals have been observed. This may be due to a paucity of exinite or liptinite materials. Among the exinites, sporinite and cutinite are the only dominant constituents, intermingled with (non-fluorescing) vitrinite. This also indicates that the vegetal tissues have not undergone extensive lignification, which restricted the development of the fluorescing property.
6) The proximate analysis revealed that the contents of ash, moisture, volatile matter and fixed carbon were in the ranges of 9.9% to 41.5%, 5.5% to 16.1%, 17.2% to 32.9% and 26.8% to 50.2%, respectively. Seam-wise, the youngest Lajkura seam shows a higher percentage of moisture and volatile matter than the older seams. Similarly, the lower seams were observed to contain less ash than the upper seams. Thus, the variation of moisture, ash, volatile matter and fixed carbon in the coal seams shows a definite trend with stratigraphy.
7) The percentage of C varies from 77.88 to 85.79, H from 4.4 to 5.91 and O from 7.26 to 15.3. The H/C and O/C ratios, together with the calorific value, show distinct variations from the bottom to the top seam in this coalfield, thus indicating a relation with the stratigraphy of the seams.
8) The chemical analysis indicates that the younger Lajkura seam is lower in rank compared with the Ib and Rampur seams. The proximate analysis shows that the Ib valley coals are sub-bituminous in rank and consist of high volatile matter and increased levels of inorganics (Table 4).
9) The coals of the Ib valley coalfield have low H/C and high O/C ratios. This is suggestive of type-III kerogen formation in a terrestrial environment. The organic matter is derived from continental higher plants and contains much identifiable vegetal debris. Microbial degradation is limited due to sedimentation and rapid burial.
10) The environment of deposition and kerogen formation is shown in Figure 6(a), and the sample cluster indicates the type II and type III kerogen fields.
11) The H/C vs O/C diagram also indicates that the evolution paths of the macerals fall in the vitrinite field because of the terrestrial depositional environment (Figure 6(b)).

Figure 6. (a) Plotting of the Ib-valley coals for depositional environment and kerogen formation (after Van Krevelen, 1961); (b) Plotting of the Ib-valley coals on the evolution paths of the maceral groups (after Van Krevelen, 1961).

Table 5. Ultimate analysis of Ib-valley coals.
An Explainable Evaluation Model for Building Thermal Comfort in China

The concentration of atmospheric greenhouse gases is being amplified by human activity. Building energy consumption, particularly for heating and cooling purposes, constitutes a significant proportion of overall energy demand. This research aims to establish a smart evaluation model to understand the thermal requirements of building occupants based on an open-access dataset. This model is beneficial for making reasonable adjustments to building thermal management based on factors such as different regions and building user characteristics. Employing Bayesian-optimized LightGBM and SHAP (SHapley Additive exPlanations) methods, an explainable machine learning model was developed to evaluate the thermal comfort design of buildings in different areas and with different purposes. Our developed LightGBM model exhibited superior evaluation performance on the test set, outperforming other machine learning models such as XGBoost and SVR (Support Vector Regression). The SHAP method further helps us to understand the internal evaluation mechanism of the model and the interactive effects among input features. An accurate thermal comfort design for buildings based on the evaluation model can benefit the carbon-neutral strategy.

Introduction The rapid increase in global temperature and its associated detrimental impacts have made climate change one of the most pressing challenges of the 21st century [1]. A central aspect of this escalation in global temperatures is the increasing concentration of atmospheric greenhouse gases, notably amplified by human activities [2]. As per recent studies, urban regions are major contributors to greenhouse gas emissions, predominantly due to activities such as transportation, industrial operations, energy production and consumption, waste management, and the functioning of residential and commercial buildings [3][4][5][6][7]. Consequently, addressing urban carbon emissions has become imperative in the fight against global climate change [8]. Among the various factors contributing to urban carbon emissions, building energy consumption, especially for heating and cooling purposes, plays a predominant role [9]. It is estimated that buildings account for nearly 40% of global energy consumption [10], with a significant fraction of this energy being expended on maintaining thermal comfort [11,12]. Thermal comfort, a state of mind expressing satisfaction with the surrounding thermal environment, is crucial for ensuring the health, productivity, and well-being of building occupants [13][14][15]. Thermal comfort is a field of study that has garnered considerable attention, with research standards playing a pivotal role in establishing uniform testing protocols. Pioneering standards, such as ASHRAE Standard 55 [16] and ISO 7730 [17], provide comprehensive methodologies for assessing thermal comfort in various environments. These standards define the thermal environmental conditions for human occupancy and prescribe a range of factors, including temperature, humidity, airflow, and clothing insulation, which contribute to individual thermal satisfaction [18,19]. The quantification of comfort parameters has been further refined through the Predicted Mean Vote (PMV) and Predicted Percentage Dissatisfied (PPD) indices, which are now widely accepted benchmarks for evaluating thermal environments in relation to human satisfaction [18]. Such standards not only guide experimental design but also facilitate the comparison of
findings across different studies, ensuring that assessments of thermal comfort are both reliable and replicable. However, achieving optimal thermal comfort in a manner that is both energy-efficient and aligned with occupants' preferences is a formidable challenge [20,21]. The challenge is compounded by the diversity in regional climates, building designs, and occupant preferences [22]. Different regions, influenced by their geographical positioning and topographical attributes, experience different temperature ranges and climatic conditions. Similarly, buildings, based on their design, materials used, and purpose (whether commercial, residential, or industrial), have varying energy needs and thermal characteristics [23]. Additionally, the preference for thermal comfort can differ significantly among occupants, influenced by factors such as age, health, clothing, and activities. This diversity necessitates a detailed, data-driven understanding of the thermal requirements of buildings and their occupants. With the advent of the digital age, vast amounts of data are being generated and made available through open-access datasets, providing an unparalleled opportunity to harness this information for understanding and addressing the thermal comfort needs of building occupants. Employing machine learning methodologies, researchers can now model complex relationships between multiple variables, offering insights that were previously elusive [24,25].

Recent advances in interpretability of machine learning models have emphasized the importance of explanation techniques that provide insight into model predictions. Global explanation methods, like permutation feature importance, offer an overall perspective on feature relevance across the entire dataset, but they do not account for the complex interactions between features within individual predictions. In contrast, local explanation techniques, such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), provide granular insights into the contribution of each feature to individual predictions, reflecting the conditional interaction effects within the model [26]. SHAP, in particular, employs a game-theoretic approach to attribute the prediction output to its input features, thereby offering a cohesive and theoretically grounded method for local explanations [27]. The SHAP technique not only elucidates feature contributions but also enhances transparency and trust in complex models, a crucial aspect in fields like energy management where model decisions have significant impacts [28]. By incorporating local explanation methods, researchers can more effectively communicate model behavior, providing stakeholders with understandable and actionable insights into model predictions [29,30].
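To make the idea of local attribution concrete, the following hedged sketch shows how SHAP values are typically obtained for a gradient-boosted tree model such as LightGBM. The feature names and synthetic data are illustrative assumptions and do not correspond to the study's dataset or final model.

```python
# Hedged sketch: SHAP local explanations for a LightGBM regressor.
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((300, 4)),
                 columns=["indoor_air_temp", "clothing_insulation",
                          "metabolic_rate", "relative_humidity"])
# Synthetic target standing in for a thermal sensation vote (TSV)
y = 1.5 * X["indoor_air_temp"] - 0.8 * X["clothing_insulation"] + rng.normal(0, 0.1, 300)

model = lgb.LGBMRegressor(n_estimators=200).fit(X, y)

explainer = shap.TreeExplainer(model)      # game-theoretic attribution for tree ensembles
shap_values = explainer.shap_values(X)     # one additive contribution per feature per sample

# For any single sample, the contributions plus the expected value recover the prediction
i = 0
print(explainer.expected_value + shap_values[i].sum(), model.predict(X.iloc[[i]])[0])
```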
This paper delves into the above context, with the primary aim of establishing a smart evaluation model that leverages an open-access dataset [31] to understand the thermal requirements of building occupants. The emphasis is on creating a model that not only accurately predicts thermal comfort needs but also offers explanations for its predictions. The latter is particularly important as it offers architects, urban planners, and policy makers actionable insights into the factors influencing thermal comfort, facilitating informed decision-making. Furthermore, with China being the world's most populated country and undergoing rapid urbanization, the focus of this research on China offers timely insights. China's urban areas, characterized by their diverse climates ranging from the cold northeast to the hot and humid southeast, present a unique challenge and opportunity. An effective and efficient approach to ensuring thermal comfort in Chinese buildings can significantly contribute to the country's carbon-neutral strategy, echoing its commitments to global climate change mitigation efforts.

The organization of this paper is as follows. Section 2 delves into the data processing methodologies employed, from filtering the raw data to encoding categorical variables. In Section 3, the LightGBM model's establishment is detailed, along with insights into the hyperparameter optimization and training processes. Section 4 presents the results, critically analyzing the model's performance. Section 5 interprets the evaluation model's prediction mechanism through the SHAP (SHapley Additive exPlanations) method. Finally, Section 6 concludes the research, highlighting its contributions and implications. In essence, this research sits at the nexus of urban development, thermal comfort, and sustainable energy consumption, providing a roadmap for future urban planning efforts aimed at achieving carbon neutrality while ensuring the well-being of occupants.

Dataset Filtering The open-access Chinese thermal comfort dataset [31], spearheaded by Xi'an University of Architecture and Technology in collaboration with seven other institutions, encompasses 41,977 data entries gathered from 49 cities spanning five climatic zones in China over the last two decades. Rigorous quality control measures were implemented on the raw data, involving systematic organization to guarantee its dependability. Each data entry encompasses environmental parameters, occupants' subjective feedback, building specifications, and individual details. In the raw dataset, certain non-essential features have a substantial amount of missing data. We first deleted these features, and subsequently removed samples with incomplete data to derive a filtered dataset. The features in the filtered dataset are shown in Table 1. A total of 11,899 samples are retained after dataset filtering. The details of the subjective thermal comfort indicators are delineated below:

• Clothing Insulation: Respondents were prompted to select the clothing type that matched their attire at the time of taking the survey. In instances where their specific clothing type was not listed, they were guided to choose the closest alternative. The insulation value for individual clothing items was determined based on ASHRAE 55-2020 [32]. For outfits composed of multiple garments, the total insulation value was computed by aggregating the insulation values of each individual piece.
• Metabolic Rate: The dataset features metabolic rate values for the Chinese population across various activity states. These values were ascertained in [33] using indirect calorimetry. The participants' activities at the time of completing the questionnaire were documented and subsequently translated into metabolic rate values. The corresponding values are sitting (0.9 met), sitting while typing (1.0 met), sitting with document filing (1.2 met), standing in an office setting (1.1 met), standing with document filing (1.3 met), and walking at a pace of 2 km/h (2.1 met).

Feature Selection

Feature selection is a critical step in the development of a robust and efficient model. Properly selecting the right features not only enhances the model's performance but also provides insights into the underlying processes governing the system. With the growing dimensions of data, especially in the age of big data, pruning irrelevant or redundant features becomes imperative to prevent models from becoming overly complex and to reduce the computational overhead associated with training. In this study, we employ a two-pronged criterion for feature selection, aiming to streamline the input dataset while retaining the most informative predictors.

• Exclusion of Irrelevant Features: The primary objective of any modelling endeavor is to capture the underlying patterns in the data that are pertinent to the prediction or classification task at hand. Hence, the first step in our feature selection process is to remove any feature that does not have a direct or meaningful relationship with the evaluation indicators. Features that do not contribute significant information or might introduce noise into the system are systematically identified and excluded. This ensures that our model remains focused on pertinent information and is not swayed by irrelevant data.

• Addressing Feature Collinearity: The presence of highly correlated or collinear features can introduce instability in certain models and can also make the model's interpretations more challenging. When two or more features convey similar information, they are, in essence, redundant, and the inclusion of all of them does not necessarily improve the predictive power of the model but certainly increases the computational burden. In our methodology, if a set of features exhibits high collinearity (i.e., they are highly related), we adopt a conservative approach by retaining only a few representative features from that set and discarding the rest. This approach ensures that our model remains efficient without a compromise in its predictive capability.

Upon examining the data, we observe that indoor physical parameters have been gauged at three distinct heights above the floor: 0.1 m, 0.6 m, and 1.1 m. Figure 1 illustrates the significant correlation between these parameters across the three levels, as evidenced by their Pearson correlation coefficients. Guided by the principle of "Addressing Feature Collinearity", it is judicious to select a single set of indoor physical parameters from one specific height, given the strong interrelation between measurements from different heights. We have chosen the parameters measured at 0.6 m above the floor, as this height consistently exhibits the most robust correlation with the other two levels. Subsequently, the Spearman correlation coefficients (SCC) for the remaining features are shown in Figure 2.
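A minimal sketch of the collinearity-pruning step described above, assuming a pandas DataFrame whose column names (e.g. air_temp_0.6m) are hypothetical placeholders for the dataset's actual field names:

```python
import pandas as pd

def keep_single_height(df: pd.DataFrame, params, heights=("0.1m", "0.6m", "1.1m"), keep="0.6m"):
    """For each indoor physical parameter measured at several heights, inspect the
    Pearson correlation between the height-specific columns and retain only the
    measurement taken at the chosen height (0.6 m in this study)."""
    to_drop = []
    for p in params:
        cols = [f"{p}_{h}" for h in heights]
        corr = df[cols].corr(method="pearson")  # Pearson correlation between the three heights
        print(f"{p}: minimum pairwise correlation = {corr.values.min():.2f}")
        to_drop += [c for c in cols if not c.endswith(keep)]
    return df.drop(columns=to_drop)

# Hypothetical usage on the filtered dataset:
# df = pd.read_csv("chinese_thermal_comfort_filtered.csv")
# df = keep_single_height(df, params=["air_temp", "rel_humidity", "air_velocity"])
```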
SCC is a rank correlation coefficient, and its calculation is based on the ranking of the sample values of two variables in the data. SCC is agnostic to the numerical type and distribution of variables, thus exhibiting a broad scope of applicability. The formula for SCC is expressed as follows:

SCC = 1 − 6 Σ d_i^2 / [n(n^2 − 1)], with d_i = R(x_i) − R(y_i)    (1)

where x and y are the variables to be studied, R(x_i) is the rank of sample x_i, R(y_i) is the rank of sample y_i, and n is the number of samples. The value of SCC ranges from −1 to +1, and a greater absolute value indicates a stronger correlation between the two studied variables. In the analysis presented within Figure 2, we focus solely on the absolute value of the SCC, emphasizing the strength of correlations between variables. Given the intricate internal dynamics observed in large sample sets, a mere reliance on significance might lead to misconceptions. Thus, the magnitude of the SCC holds primary importance in our approach. For the scope of this study, we designate thermal sensation (TSV), thermal comfort (TCV), and thermal acceptability (TAV) as evaluation outputs. Adhering to the principle of "Exclusion of Irrelevant Features", any feature demonstrating an SCC below 0.1 with these evaluation criteria is excluded from the modelling process. As tree-based learning models inherently yield a single output, separate models are necessitated for TSV, TCV, and TAV. Consequently, each model autonomously selects its most pertinent input features. The feature selection results are:

• For the TSV evaluation model, the related input features are building type, building function, thermal operation mode, clothing insulation, metabolic rate, and indoor air temperature.

• For the TCV evaluation model, the related input features are seasons, city, building type, building function, thermal operation mode, clothing insulation, metabolic rate, indoor air temperature, indoor relative humidity, and indoor air velocity.

• For the TAV evaluation model, the related input features are city, climate zone, weight, clothing insulation, metabolic rate, indoor air temperature, and indoor air velocity.
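A minimal sketch of the SCC screening step described above, again using hypothetical column names; the 0.1 threshold follows the criterion stated in the text, and categorical features are assumed to have been numerically encoded before the screening:

```python
import pandas as pd

THRESHOLD = 0.1  # |SCC| cutoff used to exclude irrelevant features

def select_features(df: pd.DataFrame, candidate_features, target):
    """Keep only candidate features whose absolute Spearman correlation with the
    given evaluation output (TSV, TCV or TAV) is at least 0.1."""
    scc = df[candidate_features].corrwith(df[target], method="spearman").abs()
    return scc[scc >= THRESHOLD].index.tolist()

# Hypothetical usage: one feature list per evaluation output
# features_tsv = select_features(df, candidates, target="TSV")
# features_tcv = select_features(df, candidates, target="TCV")
# features_tav = select_features(df, candidates, target="TAV")
```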
In pursuit of broader applicability, this research seeks to formulate a versatile evaluation model tailored for cities not encompassed within the current dataset. As such, the "city" variable is excluded from the previously identified factors. All the selected features are shown in Table 2. In conclusion, the feature selection process adopted in this study is rigorous and is designed to produce a streamlined, informative, and non-redundant set of predictors. This not only facilitates efficient model training but also aids in deriving meaningful and interpretable results from the model. After feature selection, the data distribution of each feature (including the evaluation results TSV, TCV, and TAV) is shown in Figure 3.

It should be noted that, in the context of many traditional machine learning algorithms, preprocessing steps, like data normalization for numerical features and one-hot encoding for categorical variables, are essential to ensure optimal model performance. However, when working with the LightGBM model, such transformations are not required. This is due to the inherent design and mechanism of LightGBM, which can naturally handle different scales of numeric data and internally manages categorical variables through its histogram-based algorithm. Specifically, LightGBM applies a binning process to sort numerical values into discrete bins and utilizes a special algorithmic approach for categorical attributes, negating the necessity for manual one-hot encoding. This not only simplifies the preprocessing pipeline but also often results in faster training times and reduced memory usage without compromising model accuracy. However, in this study, the categorical variables are unordered, and thus it is preferable to employ one-hot encoding rather than label encoding. The encoding function can be expressed as follows:

one-hot(X) = [e_1, e_2, ..., e_i, ..., e_n]    (2)

where n denotes the total number of categories of variable X, and e_i is the element of the one-hot vector whose value equals 1 only in the categorical position that variable X indicates and equals 0 in the rest of the positions.
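As a brief illustration of this encoding step, assuming the unordered categorical columns named below (a toy example rather than the actual dataset schema):

```python
import pandas as pd

# Toy example; the real dataset uses the categorical columns selected above.
df = pd.DataFrame({"season": ["winter", "summer"], "building_type": ["office", "residential"]})
df_encoded = pd.get_dummies(df, columns=["season", "building_type"], dtype=int)
print(df_encoded)
```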
LightGBM Model

LightGBM serves as an enhancement of the XGBoost and Gradient Boosting Decision Tree (GBDT) models [34]. It integrates the Exclusive Feature Bundling (EFB) and Gradient-based One-Side Sampling (GOSS) algorithms, positioning LightGBM as a leading model for tabular data prediction, boasting rapid training speeds and elevated prediction accuracy [35,36]. Typically, tabular data, characterized by rows representing samples and columns denoting features, often contain sparse categorical features abundant in zero elements, particularly when subjected to the one-hot encoding method. Such feature sparsity can detrimentally impede the efficacy of machine learning models. Addressing this, LightGBM leverages the EFB algorithm to amalgamate specific sparse features. Given that many sparse features frequently display mutual exclusivity, preventing them from being concurrently non-zero, the EFB algorithm consolidates these features into a single new feature, thereby curtailing the feature dimension [34]. This approach efficiently mitigates training complexity while retaining commendable accuracy. Moreover, as an ensemble model of the Classification and Regression Tree (CART), LightGBM encapsulates the decision manifold inherent in the Decision Tree (DT), ensuring it remains impervious to discrepancies in value type and distribution. Consequently, LightGBM emerges as an apt choice for evaluating building thermal comfort based on the selected tabular features.
Bayesian-Optimized Hyperparameters

A total of 20% of the entire dataset was randomly allocated as the test set, providing a basis for evaluating the performance of the model. The remaining 80% of the data was designated for hyperparameter tuning and model training. Hyperparameter optimization was undertaken using 5-fold cross-validation, i.e., the dataset was divided into five equal parts, with each part used as a validation set while the remaining four parts were combined to form a training set, in a rotational manner to ensure comprehensive evaluation. Subsequently, for model training, the remaining 80% of the data was further partitioned into a training set and a validation set in a 4:1 ratio, facilitating the iterative refinement of the model parameters. It is imperative that the test set remains completely separate from and uninvolved in the model establishment process, encompassing both the hyperparameter optimization and model training phases, to preclude any potential for data leakage and ensure the integrity of the model's evaluation.

Bayesian optimization is a probabilistic model-based approach for global optimization of black-box functions that are expensive to evaluate. It operates by constructing a posterior distribution over the objective function and then selecting points to evaluate by balancing exploration and exploitation. The method is particularly well suited for the optimization of hyperparameters in machine learning algorithms. In this research, Bayesian optimization is employed to fine-tune the hyperparameters of a LightGBM regression model. Here, f(x) represents the cross-validated root mean squared error (RMSE) of the model predictions, with x denoting the vector of hyperparameters: x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8], where each x_i represents a hyperparameter of LightGBM (shown in Table 3). The goal is to find the hyperparameter vector x* that minimizes f(x), which in this scenario translates to the optimal model performance. A Gaussian process is used to model the distribution over functions p(f|D), where D represents the set of points (x, f(x)) already evaluated. Acquisition functions, such as Expected Improvement (EI) in this research, are then used to select the next query point by maximizing the expected utility. The Gaussian process posterior is updated with the new observations, and this process is repeated for a predefined number of iterations or until convergence criteria are met. This iterative process allows for adaptive refinement of the search space, leading to more efficient optimization when compared to traditional grid or random search methods. This research initiates the optimization with 50 starting points and continues for an additional 500 iterations, progressively refining the model's hyperparameters towards the optimal configuration. The boosting type was set as "GBDT", and all other parameters of LightGBM, such as the learning rate, were left at their defaults. Since we need to build three different evaluation models for TSV, TCV, and TAV, respectively, the above optimization process is conducted independently for each model. The search space and optimal value of each hyperparameter are shown in Table 3.
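A minimal sketch of how this tuning loop could be set up with the bayes_opt package and scikit-learn's cross-validation; the eight tuned hyperparameters and their bounds are assumptions standing in for Table 3, the data variables are placeholders, and the acquisition function is left at the library default rather than the Expected Improvement used in the paper:

```python
import lightgbm as lgb
from bayes_opt import BayesianOptimization
from sklearn.model_selection import cross_val_score

def make_objective(X, y):
    def objective(num_leaves, max_depth, min_child_samples, n_estimators,
                  subsample, colsample_bytree, reg_alpha, reg_lambda):
        model = lgb.LGBMRegressor(
            boosting_type="gbdt",
            num_leaves=int(num_leaves),
            max_depth=int(max_depth),
            min_child_samples=int(min_child_samples),
            n_estimators=int(n_estimators),
            subsample=subsample,
            colsample_bytree=colsample_bytree,
            reg_alpha=reg_alpha,
            reg_lambda=reg_lambda,
        )
        # 5-fold cross-validated RMSE; negated because BayesianOptimization maximizes
        rmse = -cross_val_score(model, X, y, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        return -rmse
    return objective

# Assumed search bounds; the actual ranges are those listed in Table 3.
pbounds = {
    "num_leaves": (8, 256), "max_depth": (3, 12), "min_child_samples": (5, 100),
    "n_estimators": (50, 1000), "subsample": (0.5, 1.0), "colsample_bytree": (0.5, 1.0),
    "reg_alpha": (0.0, 10.0), "reg_lambda": (0.0, 10.0),
}

# Hypothetical usage, repeated once per target (TSV, TCV, TAV):
# optimizer = BayesianOptimization(f=make_objective(X_train, y_train), pbounds=pbounds, random_state=0)
# optimizer.maximize(init_points=50, n_iter=500)  # 50 starting points + 500 iterations
# best_params = optimizer.max["params"]
```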
Model Training

The training progression of our LightGBM-based evaluation models is captured in Figure 4, which delineates the RMSE as the chosen loss metric over successive iterations for both the training and validation datasets. The loss curves of the Thermal Sensation Vote (TSV), Thermal Comfort Vote (TCV), and Thermal Acceptability Vote (TAV) models, shown in Figure 4a-c, respectively, demonstrate a sharp decline in training RMSE. This illustrates the models' rapid learning and their ability to quickly assimilate the patterns within the training data. Concomitantly, the validation RMSE for each model converges to a low value, indicating effective generalization to the validation data, which is pivotal in preventing overfitting, a phenomenon where a model exhibits high accuracy on training data yet fails to predict accurately on unseen data. Remarkably, the models achieve their best validation performance within the first 50 iterations, suggesting a swift convergence indicative of the efficiency of the LightGBM algorithm. The ongoing reduction in training RMSE post-convergence points to the potential for additional fine-tuning, should it be necessary. The depicted validation loss curves reinforce the balance attained by the models, which encapsulate sufficient complexity to learn from the training data while maintaining the ability to generalize to new datasets. This balance is vital, affirming the models' robustness and ensuring their applicability to a broader range of data, consistent with the objectives of the validation phase.

Model Performance

To quantitatively evaluate the accuracy of our developed models for evaluating building thermal comfort, we employed three widely accepted evaluation metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Error. Generally, for these metrics, lower values signify superior model performance. MAE and RMSE are defined as follows:

MAE = (1/n) Σ |y_i − ŷ_i|

RMSE = sqrt( (1/n) Σ (y_i − ŷ_i)^2 )

where y_i and ŷ_i denote the true value and the predicted value, and ȳ and the average of ŷ denote the averages of the true and predicted values, respectively.
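A small sketch of how the two error metrics defined above can be computed on the held-out test set (the third metric reported in the paper is not reproduced here); the model and data names are placeholders:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

def report_errors(y_true, y_pred):
    """Print MAE and RMSE for a set of predictions."""
    mae = mean_absolute_error(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")

# Hypothetical usage with a fitted TCV model:
# report_errors(y_test, model_tcv.predict(X_test))
```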
Figure 5 provides a visual representation of the performance metrics for the evaluation models (TSV, TCV, and TAV) when applied to the testing set. The depicted box charts summarize the error distributions for each model, with the interquartile range (IQR) capturing the middle 50% of the data, delineated by the box's extent from the 25th to the 75th percentile. The central tendency of the models' errors is indicated by the median line and the mean symbol within the boxes, offering a dual perspective on the models' predictive accuracy. Notably, the span of the whiskers, extending to 1.5 times the IQR, illustrates the variability within the majority of the predictions, with the outliers marked as diamonds highlighting instances of significant deviation from the typical error range. Such graphical analysis aids in the comparative evaluation of model robustness and error consistency. The TSV model exhibits a slightly wider interquartile range, suggesting more variability in predictions compared to the TCV and TAV models. The latter models demonstrate a more compressed IQR, indicative of a tighter clustering of errors and, potentially, a more consistent predictive performance.

In the TSV evaluation model, outliers are symmetrically distributed beyond the whiskers, whereas for the TCV model outliers are predominantly found below the lower whisker, and for the TAV model outliers are primarily above the upper whisker. This indicates that the TCV and TAV models tend to produce anomalously low and high results, respectively. It is imperative to consider the range span of TSV, TCV, and TAV, as a broader span implies a more challenging prediction task. Specifically, the spans for TSV, TCV, and TAV are 6, 5, and 2, respectively. In this context, as illustrated in Figure 6, the TCV model outperforms the others in terms of prediction across its range, which is also deemed the most crucial metric for assessing thermal comfort in buildings.

In Figure 7, we present a comparative analysis of the LightGBM-based evaluation models against a suite of established machine learning algorithms, namely KNN (k-Nearest Neighbor), RF (Random Forest), XGBoost, GBDT (Gradient Boosting Decision Tree), and SVR (Support Vector Regression). The KNN, RF, GBDT, and SVR models were constructed using the Scikit-learn library, while XGBoost was implemented via its dedicated library. All models, including LightGBM, were established with default parameter settings to exclude the impact of hyperparameter optimization. The comparative outcomes suggest that RF, XGBoost, and GBDT exhibit comparable levels of accuracy, likely attributable to their shared foundation in tree-based methodologies. Conversely, SVR and KNN appear less adept at managing the tabular dataset's large-scale nonlinearity, as evidenced by their respective error metrics. Although LightGBM demonstrates a marginal superiority in assessing TSV and TAV, it is distinctly more proficient in evaluating TCV. The consistent performance across multiple evaluation metrics shows the robustness of LightGBM, confirming its potential as a reliable tool for thermal comfort evaluation.
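A sketch of how such a default-parameter comparison could be run; the model choices mirror the algorithms named above, while the data variables are placeholders:

```python
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

# All models use library-default hyperparameters, as in the comparison described above.
baselines = {
    "KNN": KNeighborsRegressor(),
    "RF": RandomForestRegressor(),
    "GBDT": GradientBoostingRegressor(),
    "SVR": SVR(),
    "XGBoost": XGBRegressor(),
    "LightGBM": LGBMRegressor(),
}

# Hypothetical usage on a given target (TSV, TCV or TAV):
# for name, model in baselines.items():
#     model.fit(X_train, y_train)
#     mae = mean_absolute_error(y_test, model.predict(X_test))
#     print(f"{name}: MAE = {mae:.3f}")
```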
Instead of the development of a novel model attuned to extensive thermal datasets, a core objective of this study is to elucidate the relative impact weights, marginal effects, and interplay among all pertinent factors, which we discuss comprehensively in Section 5.
Model Interpretation

The interpretive analysis of the established LightGBM model was conducted using the SHAP (SHapley Additive exPlanations) method, which is grounded in game theory and relies on conditional expectations to elucidate the model's decision-making process [26,37-42]. The SHAP approach delineates the marginal contribution of each input feature to the predictive outcomes and helps understand the model's operational tendencies when evaluating thermal comfort. This interpretative process exclusively employed the test dataset to reveal the model's explanatory insights. Particular attention was devoted to the TCV (Thermal Comfort Vote) evaluation model, attributed to its exceptional predictive accuracy and its acknowledged importance in gauging thermal comfort within building environments. By scrutinizing the TCV model, we discerned the influence weights, marginal effects, and interactive mechanisms of its contributing factors. This detailed examination enables a deeper comprehension of the factors that predominantly affect thermal comfort evaluations, guiding both the design of intelligent thermal regulation systems and the formulation of strategies for enhancing occupants' comfort and well-being.

Influence Weights

Figure 8 delineates the influence weights of various factors on the Thermal Comfort Vote (TCV) model through mean absolute SHAP values. It should be noted that category features, including building type, building function, thermal operation mode, and season, have been one-hot encoded; therefore, each category feature's mean absolute SHAP value indicates the sum of its one-hot features' mean absolute SHAP values. SHAP (SHapley Additive exPlanations) values provide a profound understanding of feature contributions by assigning each feature an importance value for a particular prediction. A higher mean absolute SHAP value signifies a greater impact of the feature on the model's output. The bar chart reveals that 'Indoor Temperature' possesses the most significant influence on TCV, as evidenced by its highest mean absolute SHAP value. This suggests that variations in indoor temperature are the most substantial predictor of thermal comfort levels perceived by occupants. 'Building Type' also demonstrates a notable impact, implying that the structural and architectural characteristics encapsulated by this factor are critical in determining thermal comfort. 'Metabolic Rate' and 'Building Function' follow closely, indicating their substantial roles in influencing the thermal comfort outcomes, likely due to their direct relationship with human thermal regulation and the activities conducted within the building space. Conversely, 'Clothing Insulation', 'Indoor Humidity', 'Indoor Velocity', and 'Thermal Mode' display comparatively lower influence weights. Nonetheless, their contributions are non-negligible, suggesting a complex interplay of environmental conditions and personal factors that collectively shape the thermal comfort experience. It is noteworthy that 'Season' is the factor with the lowest mean absolute SHAP value, playing an inconsequential role in the evaluation model. The presence of multiple factors with varied influence weights reinforces the multifaceted nature of thermal comfort, which cannot be attributed to a singular environmental or personal characteristic. Instead, it emerges as an aggregate outcome of multiple interacting variables. The quantification of influence weights via SHAP values facilitates a nuanced understanding of the TCV model, allowing practitioners to prioritize interventions based on the factors most predictive of thermal comfort. Such insights can drive informed decisions in the design and management of building environments, optimizing occupant comfort while potentially enhancing energy efficiency.
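A minimal sketch of how the mean absolute SHAP importances could be obtained for the fitted TCV model; the aggregation of one-hot columns back to their parent categorical feature follows the convention described above, and all column and variable names are assumptions:

```python
import numpy as np
import pandas as pd
import shap

def mean_abs_shap(model, X: pd.DataFrame) -> pd.Series:
    """Mean absolute SHAP value per input column, computed on the test set."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)
    return pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)

def aggregate_onehot(importance: pd.Series, parents) -> pd.Series:
    """Sum the importances of one-hot columns sharing a categorical prefix,
    matching the aggregation convention described in the text."""
    agg = {}
    for p in parents:
        cols = [c for c in importance.index if c == p or c.startswith(p + "_")]
        agg[p] = importance[cols].sum()
    return pd.Series(agg).sort_values(ascending=False)

# Hypothetical usage with the fitted TCV model and its test features:
# importance = mean_abs_shap(model_tcv, X_test)
# ranked = aggregate_onehot(importance, parents=["indoor_temperature", "building_type",
#                                                "metabolic_rate", "building_function",
#                                                "clothing_insulation", "indoor_humidity",
#                                                "indoor_velocity", "thermal_mode", "season"])
```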
Marginal Effects

Figure 9 illustrates the marginal impacts of the various factors of the TCV model. Each dot within the figure symbolizes an independent data point. The hue of each dot corresponds to the specific factor's value for that data point. The SHAP value associated with each dot quantifies the marginal influence of the data point on the outcome, namely, the Thermal Comfort Voting (TCV) assessment. A positive SHAP value suggests that the respective feature value of the data point contributes to an increase in the output. The TCV scale ranges from 0, denoting 'very comfortable', to 5, indicating 'very uncomfortable'. Consequently, an elevated SHAP value denotes that the feature value of the data point adversely affects thermal comfort. The SHAP scatters depicted in the figure offer a granular view of feature importance and impact on the predictive model.
The distribution of SHAP values for 'Indoor Temperature' underlines the critical balance required in maintaining temperatures within a range that maximizes comfort while minimizing energy consumption for cooling systems. Conversely, the 'Metabolic Rate' feature is characterized by a diverse spread of SHAP values, reflecting its complex relationship with thermal comfort. Notably, higher metabolic rates, indicated by red dots, contribute to a higher TCV, which in this context translates to a reduction in comfort levels. This is in line with the understanding that increased activity levels lead to higher internal heat production, which, if not offset by the thermal environment, can cause discomfort. This finding emphasizes the importance of designing building environments that are adaptable to the varying activity levels of occupants, suggesting that spaces should be versatile enough to accommodate different metabolic rates while still ensuring comfort. The insights derived from analyzing 'Indoor Temperature' and 'Metabolic Rate' highlight the interplay between environmental conditions and occupant activities in the context of thermal comfort. Effective thermal comfort design must therefore account for these factors, aiming to create an adaptive environment that can respond to both the dynamic nature of indoor temperatures and the diverse metabolic rates of occupants. This approach not only enhances occupant comfort but also promotes energy efficiency by aligning the building's climate control strategies with the actual needs of its users. The remaining value-type features, such as clothing insulation and indoor humidity, did not show a clear mode of influence on the output, which might be revealed through interactive influence analysis.
Within the categorical variables assessed, particular attention is given to each category's relative impact on thermal comfort. For the variable 'Building Type', the category 'Residential' exhibits a pronounced detrimental influence on thermal comfort. In contrast, other categories, such as 'Dormitory', 'Office', and 'Educational', appear to exert negligible effects. This observation suggests that occupants may have less stringent thermal comfort expectations within public edifices, or that these structures may inherently possess superior thermal regulation capabilities compared to private dwellings. In addition, this disparity may be attributed to the economic aspects of thermal consumption costs and payment responsibility. Specifically, the cost of thermal energy in public spaces, which is not borne directly by individuals, potentially reduces thermal comfort concerns among users of these buildings. As for 'Building Function', spaces with public utility, including offices and dormitories, demonstrate no significant impact on thermal comfort, whereas private spaces such as 'Bedroom' and 'Living Room' are associated with the poorest thermal comfort levels. The influence patterns for 'Building Function' align with those observed for 'Building Type'. Regarding 'Thermal Mode', 'Radiator Heating' emerges as the most conducive to thermal comfort. Alternative modes, such as 'Natural Ventilation' and 'Convection Cooling', tend to negatively affect comfort margins. Seasonally, individuals report optimal thermal comfort in the winter, with the 'Transition Season' being the least comfortable period. Summer does not display a clear trend in thermal comfort preferences.

Interactive Mechanism

The interactive mechanism shows the combined effects of two features on building thermal comfort. In this part, we took the most relevant indicator, "indoor temperature", as the base index, and conducted four groups of interactive analysis, as shown in Figure 10. In the SHAP dependence graphs, the scales of the color bars do not include the outliers. Each point in these graphs represents how the interaction of the two features at that specific data point influences the TCV score, offering insights into the complex interplay of environmental factors on thermal comfort. To help better understand the SHAP dependence graph, we summarise two essential aspects of it: (a) feature value-prediction impact relationship: the horizontal axis usually represents the value of a specific feature (i.e., indoor temperature in Figure 10), while the vertical axis shows the SHAP value, indicating the impact of that feature value on the model's prediction (i.e., the TCV value); (b) color of data points: data points can be colored to represent the values of other features (i.e., clothing insulation, metabolic rate, indoor humidity, and indoor velocity in Figure 10a-d, respectively), revealing the interaction effects between different features.

Figure 10a indicates a nonlinear relationship between indoor temperature and the SHAP values for this temperature, with a color gradient representing clothing insulation levels. As indoor temperature increases, SHAP values initially show a decline and then rise, suggesting a U-shaped relationship. Lower SHAP values, indicating higher thermal comfort, are predominant at moderate temperatures, while extreme temperatures, both low and high, correspond to higher SHAP values, reflecting reduced thermal comfort. At lower temperatures, increased clothing insulation (as indicated by a gradient from blue to red) seems to mitigate the discomfort to some extent, as evidenced by the cluster of points with higher insulation levels associated with lower SHAP values. However, as the temperature rises beyond a certain threshold, even higher levels of clothing insulation cannot counteract the discomfort caused by high temperatures. In the mid-range of temperatures, there is a spread of SHAP values at varying levels of clothing insulation, implying a more complex interaction, where factors other than clothing and temperature may play a significant role in thermal comfort. This could include individual metabolic rates, the presence of direct sunlight, or other environmental factors not captured in this two-dimensional graph. At higher temperatures, the trend of increasing SHAP values regardless of clothing insulation suggests a limit to the compensatory role of clothing in managing thermal comfort. In these conditions, the physiological limits of heat dissipation might be reached and the discomfort becomes more pronounced, regardless of clothing insulation. Overall, the SHAP dependence graph reveals that, while clothing insulation can moderate the impact of indoor temperature on thermal comfort, this effect is bounded by the limits of physiological adaptation to temperature extremes. This suggests the importance of maintaining indoor temperatures within a moderate range to optimize thermal comfort, particularly in environments where clothing insulation cannot be easily adjusted [43].
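A minimal sketch of how such a dependence graph could be reproduced with the shap package, reusing the SHAP values computed earlier; the column names are again assumptions:

```python
import shap

def plot_temperature_dependence(shap_values, X, feature="indoor_temperature",
                                color_by="clothing_insulation"):
    """SHAP value of indoor temperature on the y-axis, colored by clothing
    insulation, mirroring the feature pairing shown in Figure 10a."""
    shap.dependence_plot(feature, shap_values, X, interaction_index=color_by)

# Hypothetical usage, reusing the SHAP values computed for the TCV model:
# plot_temperature_dependence(shap_values, X_test)
```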
Figure 10b suggests that individuals with a higher metabolic rate (represented by red dots) tend to achieve thermal comfort more easily at lower indoor temperatures. This observation implies that the inherent heat generation from a higher metabolic rate may compensate for lower ambient temperatures, thus aligning with the body's thermoregulatory needs to maintain a sensation of comfort. This phenomenon can be attributed to the body's endogenous thermal regulation system, where metabolic heat production plays a critical role. At lower temperatures, a higher metabolic rate can help maintain core body temperature, reducing the need for external heating sources and potentially leading to a more energy-efficient state of comfort. The concentration of red dots at the lower end of the indoor temperature spectrum on the SHAP graph indicates that, as the ambient temperature decreases, the thermal contribution of metabolic heat becomes increasingly significant. This aligns with thermoregulatory principles, where the human body's metabolic heat generation helps to offset heat loss to the environment. The SHAP dependence graph indicates that the influence of metabolic rate on thermal comfort is attenuated at temperatures exceeding 27 °C, beyond which thermal comfort significantly declines with further increases in temperature, regardless of the metabolic rate. The implications of individual metabolic differences on thermal comfort are profound. They indicate that personalized comfort models could be beneficial in designing HVAC (Heating, Ventilation and Air Conditioning) systems and in developing building energy management strategies that take into account the metabolic diversity of occupants. Adaptive thermal regulation systems that respond to individual metabolic rates can optimize energy consumption by reducing the reliance on artificial heating or cooling when the occupants' metabolic heat production is sufficient to achieve comfort.

Figure 10c delineates the interaction between indoor temperature and humidity, elucidating their collective effects on thermal comfort. At lower indoor temperatures, the contribution of humidity to Thermal Comfort Voting (TCV) appears to be ambiguous; conversely, at elevated indoor temperatures, humidity levels predominantly register as high, hinting at a homogeneity within the filtered dataset. This homogeneity notably underscores the dearth of observations from hot and arid climates [44], thereby limiting the model's capacity to accurately reflect the variations in comfort perceptions associated with such conditions. To foster the creation of comprehensive thermal comfort models, it is imperative to procure a dataset that is both diverse and representative, spanning the full gamut of climatic scenarios.
Figure 10d illustrates the relationship between indoor air velocity and thermal comfort across various temperature ranges. It can be observed that higher air velocities, which are predominantly prevalent during the summer months, correspond to lower TCV values, suggesting an increase in thermal comfort. This phenomenon is likely attributable to the prevalent cooling and ventilation strategies employed during these warmer periods. Conversely, during winter, instances of high indoor air velocity are comparatively scarce, thereby rendering the impact of air movement on thermal comfort less discernible. The lack of significant data points under cold conditions suggests that ventilation strategies may be less aggressive, possibly due to heating requirements and the desire to minimize energy loss. This analysis underscores the importance of considering the seasonal context when evaluating the influence of air velocity on thermal comfort. Airflow, often a crucial factor in thermal comfort during hot conditions, might play a nuanced role in colder climates. Such insights are vital for the design of HVAC systems that are responsive to the thermal needs of occupants while balancing energy efficiency across seasonal variations.

Conclusions

This research presents an innovative approach for evaluating building thermal comfort in China, utilizing a smart evaluation model underpinned by an open-access dataset. Through the integration of Bayesian-optimized LightGBM and SHAP methodologies, we have developed an explainable machine learning model that accurately predicts thermal comfort requirements across different regions and building types. The following key insights have been distilled from our study:

(1) Our model has demonstrated commendable accuracy in evaluating thermal comfort, with SHAP analysis providing granular insights into the model's internal workings. The ability of the model to generalize across the test set with high precision suggests its potential for widespread application in smart building management systems.

(2) The study underscores the paramount influence of indoor temperature on thermal comfort voting, reiterating the necessity for precise temperature control in the pursuit of occupant comfort. The notable impacts of building type and metabolic rate highlight the significance of architectural design and human physiological activity in thermal comfort perception.

(3) The insights gleaned from our analysis have significant policy implications. They can inform the development of energy-efficient thermal comfort standards and regulations that are sensitive to regional climatic diversity and personalized occupant needs. Accurate predictions of thermal comfort can aid substantially in the optimization of energy usage, aligning with the objectives of sustainable development and carbon neutrality. The model's ability to delineate the influence of distinct factors enables the design of energy-efficient and occupant-centric thermal environments.

(4) The research paves the way for future studies to incorporate additional variables, such as clothing adaptability, occupant behaviour, and building occupancy patterns. Such expansions could yield a holistic thermal comfort model that is both predictive and prescriptive, aiding stakeholders in creating energy-efficient, comfortable, and health-promoting built environments.
In conclusion, our study contributes a sophisticated, data-driven evaluation model to the field of building thermal comfort. This model not only serves as a tool for optimizing thermal comfort but also acts as a guide for sustainable building design and operation, ultimately supporting the global endeavor to mitigate climate change through improved energy stewardship in the building sector. With its capacity to elucidate complex relationships within large datasets, our research exemplifies the potential of machine learning to revolutionize building science and urban planning.

Figure 1. Pearson correlation coefficients between indoor physical parameters with different measured heights.
Figure 2. Spearman correlation coefficients for all features.
Figure 3. The distribution of each feature after feature selection.
Figure 4. Loss curves in the training process.
Figure 5. Performance of evaluation models on testing set.
Figure 6. Performance of evaluation models on testing set considering the indicator ranges.
Figure 7. Overall performance of all machine learning models on testing set.
Figure 8. Mean absolute SHAP values for factors of TCV model.
Figure 10. SHAP dependence graph to show the interactive effects of different factors.
Table 1. The features in the filtered dataset for building thermal comfort.
Table 2. The selected features in the dataset for the evaluation model of building thermal comfort.
2023-12-17T16:18:27.390Z
2023-12-14T00:00:00.000
{ "year": 2023, "sha1": "53ebba0c1a6ab4be9e778a313c3dc2665c4b8f58", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-5309/13/12/3107/pdf?version=1702555734", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d452116a8a3c5ca2a71819d77c89142ab0851cba", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
6933025
pes2o/s2orc
v3-fos-license
Synergetic Toxic Effect of an Explosive Material Mixture in Soil

Explosive materials are stable in soil and recalcitrant to biodegradation. Different authors report that TNT (2,4,6-trinitrotoluene), RDX (hexahydro-1,3,5-trinitro-1,3,5-triazine) and HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) are toxic, but most investigations have been performed in artificial soil with individual substances. The aim of the presented research was to assess the toxicity of forest soil contaminated with these substances both individually and in combinations. TNT was the most toxic substance. Although RDX and HMX did not have adverse effects on plants, these compounds did cause earthworm mortality, which has not been reported in earlier research. Synergistic effects of the explosives mixture were observed.

Trinitrotoluene is highly toxic to terrestrial plants. In most cases the toxic effects (germination rate, decrease of plant biomass, and abnormal growth) are directly related to an increase in TNT concentration (Krishnan et al. 2000; Vila et al. 2008). Hexogen, even in high concentrations, does not affect seed germination; however, many adverse developmental effects have been detected in plants exposed to this substance. Some of the effects (e.g. atypical bilateral symmetry, bifurcated and fused leaves, irregular and curved leaf margins, and underdeveloped roots) were indicative of teratogenicity (Winfield et al. 2004; Vila et al. 2007b). Octogen, even in very high concentrations, is not toxic to higher plants (Rocheleau et al. 2003, 2008). Trinitrotoluene acute toxicity was observed in tests with the earthworm Eisenia andrei (Lachance et al. 2004) and other oligochaete species (Enchytraeus crypticus and Folsomia candida; Schäfer and Achazi 1999). Concentrations causing 50% mortality (LC50) ranged from 143 to 365 mg/kg and depended on soil type (Best et al. 2006). Trinitrotoluene has also exhibited adverse effects on earthworm biomass and has been shown to cause reductions in reproduction parameters (decrease in cocoon and juvenile numbers; Schäfer and Achazi 1999; Robidoux et al. 2002). Hexogen and octogen were not lethal to earthworms, but as with TNT, they caused biomass and fertility decreases (Robidoux et al. 2002; Simini et al. 2003). Even though there is significant information about the toxicity of explosives, there is still a lack of reports about the influence of different kinds of soil and their properties on the toxic effects. Moreover, most of the tests conducted thus far have been based on determining the toxic effects of individual substances. Yet in the environment, explosives usually appear as a mixture, which can cause difficulty in assessing their individual effects, as can be observed with other groups of contaminants (Kalka 2012). The main purpose of this research was to assess the phyto- and zootoxicity of mixtures of explosives (and individual explosives) in forest soil.

Materials and Methods

The soil used in this research originated from the Panewnickie Forests situated near Katowice (Poland). It was sandy soil (sand content >74.7%), with low pH (3.8 ± 0.5) and low organic carbon content (<5.24%). Explosives used to spike the soil were technical grade and in crystal form. Prior to addition to the soil, they were dissolved in acetone. TNT, RDX, and HMX solutions were added to a small sample of the soil (100 g). Following acetone evaporation, additional soil was added to each sample.
Nominal concentrations of explosives measuring 100, 180, 360, 540, and 1,000 mg/kg were identical in both tests. In the samples with the explosives mixture (MIX), equal amounts of each compound were used to achieve the same total concentrations as in the tests with individual substances (e.g. in the sample with 100 mg/kg, nominal concentrations were 33.3 mg/kg TNT, 33.3 mg/kg RDX, and 33.3 mg/kg HMX). Unspiked soil was used for control samples. Explosives concentrations in soil after spiking were measured according to US EPA Method 8330 (with some modifications in the procedure). This method provides high performance liquid chromatographic conditions for the detection of explosives. Prior to using this method, extraction in an ultrasonic bath with the use of acetonitrile must be conducted. In this research, samples were sonicated for 2 h (instead of 18, which is recommended in Method 8330) and then shaken for 18 h (rpm = 100). Other stages of sample preparation were identical to US EPA Method 8330. The separation of explosives in liquid extracted samples was conducted with the use of a Thermo Scientific Hypersil Gold C18 250 × 4.6 mm chromatographic column (filling granulation 5 µm) preceded by a Hypersil Gold 10 × 4 mm precolumn (filling granulation 5 µm). The mobile phase during the separation was 50/50 methanol/organic-free reagent water. The flow rate through the column was 1 ml/min (lower than recommended by Method 8330, which is 1.5 ml/min); the injection volume was 20 µl (in Method 8330, 100 µl). Method validation was conducted with the use of the analytical standard EPA 8330 MIX A produced by Supelco, containing 8 explosives dissolved in acetonitrile. Explosives concentrations were measured in each sample type after spiking the soil and are presented in Table 1. Nominal concentrations are used in the figures and in the text to make the results clear and easy to compare.

Phytotoxicity tests were conducted according to PN-ISO 11269-2:2001 "Effects of chemicals on the emergence and growth of higher plants." One monocotyledonous plant, bread wheat (Triticum aestivum), and one dicotyledonous plant, red clover (Trifolium pratense), were chosen for the test. Only untreated seeds with a germination ability greater than 90% were used in the test. Twenty seeds were sowed in each pot (containing 500 g of soil). The soil was watered with distilled water; the soil moisture was established at the level of 40%. Each sample was prepared in quadruplicate. The experiment was carried out in a plant growth chamber at a temperature of 21°C/18°C (day/night). Air humidity in the chamber was kept at the level of 80%; light intensity was 25,000 lm/m² of surface in an hourly cycle of 14/10 (day/night). Seven days after the beginning of the test, germinated seeds were counted in each sample. Subsequently, the 5 most representative seedlings in each sample were chosen; the rest were removed. After 14 days, plants were collected, lengths of shoots and roots were measured, and the biomass of fresh shoots was weighed.

Acute earthworm toxicity tests were conducted on the basis of PN-ISO 11268-1:1997 "Effects of chemicals on earthworm (Eisenia fetida)." Plastic pots were filled with 750 g of contaminated soil, whose moisture was established at the level of 40% using a manure and water solution (manure served as food for the earthworms). Control samples were unspiked and comprised soil with the manure and water solution. Samples were prepared in quadruplicate.
Ten washed earthworms (100-600 mg weight) were introduced into each container; the containers were covered with gauze to prevent the earthworms from escaping. The test was conducted for 14 days at room temperature and at stable soil moisture. After 2 weeks, living organisms were counted (mortality assessment) and weighed (effect of chemicals on biomass). Toxicity endpoints such as the LC50 were calculated using the best-fitting linear regression model. Statistically significant differences between the results were evaluated on the basis of standard deviations and Dunnett's multiple comparison test (p ≤ 0.05). Results and Discussion In the phytotoxicity evaluation of soil contaminated with explosives, inhibition of seed germination, biomass weight, and root growth for the two plants (T. pratense and T. aestivum) were determined. A concentration of 180 mg/kg of each explosive (except HMX) and of the explosives mixture in soil caused significant biomass loss of red clover seedlings (Fig. 1). For wheat seedlings, significant fresh biomass loss was observed at each TNT concentration, while HMX and RDX caused increased growth in comparison with the control samples (at each analyzed concentration). In samples with the explosives mixture, growth stimulation was noticed at the lower concentrations, while at concentrations of 360 mg/kg and higher, significant plant biomass loss (in comparison with control samples) was observed (Fig. 1). In the samples spiked with TNT at the greatest concentrations (540 and 1,000 mg/kg), chlorosis was visible on the surface of the red clover leaves. In the last 5 days of the experiment, drying and death of the seedlings were observed. The analysis of root length demonstrated that 2,4,6-trinitrotoluene caused a significant decrease of root length in red clover and wheat at all applied concentrations. Hexogen and octogen did not cause a significant decrease in red clover root length, but did cause a very strong increase in wheat root length. Significant inhibition of red clover root length was only observed in soil samples spiked with the highest concentration of the explosives mixture; low concentrations caused wheat growth stimulation. Root lengths at the 1,000 mg/kg concentration were significantly lower in comparison with the control samples (Fig. 2). Among the analyzed compounds, only TNT had a significant effect on seed germination. At the highest concentration, only about 30 % of the red clover seeds germinated. Some results obtained in the phytotoxicity tests are in agreement with other researchers' findings. The increase of toxic effects observed with increasing TNT concentration in soil has been reported in many papers (Krishnan et al. 2000b; Vila et al. 2008). The scale of the toxic effects depended on the plant species. Wheat biomass loss was significant in comparison with control samples even at the lowest concentration; for clover, it became significant at the concentration of 180 mg/kg. At the same time, 2,4,6-trinitrotoluene did not have an adverse effect on wheat germination at any of the analyzed concentrations, while in samples with TNT concentrations of 540 and 1,000 mg/kg only about 30 % of red clover seeds germinated. Previous research has shown that sensitivity to the presence of TNT in soil depends on the plant species. Alfalfa (Medicago sativa) could not grow in soil contaminated with TNT at a concentration of 100 mg/kg (Scheidemann et al. 1998).
Similarly, cress (Lepidium sativum) and cabbage (Brassica rapa) germination was inhibited at the TNT concentration of 200 mg/kg in the soil (Gong et al. 1999). Oats (Avena sativa) demonstrates significant resistance to the presence of trinitrotoluene in soil, for which no toxic effects were observed even at a concentration of 1,600 mg/kg (Gong et al. 1999). RDX did not affect the germination of red clover and wheat. These observations are similar to the results obtained by Best et al. (2006), who conducted tests with ryegrass (Lolium perenne) and alfalfa (Medicago sativa), and by Winfield et al. (2004), who analyzed the effects of hexogen on 16 different plants. In this research, red clover and wheat biomass loss and morphological changes did not appear in plants grown in the soil spiked with hexogen. In soil contaminated with RDX (138 mg/kg concentration), Vila et al. (2007b) observed necrosis and bleaching on the surfaces of leaves of wheat (Triticum aestivum), rice (Oryza sativa), and soybean (Glycine max). Alternatively, in research conducted by Rocheleau et al. (2008) no adverse effects in ryegrass (Lolium perenne) were observed at RDX concentrations up to 10,000 mg/kg. Differences between obtained results may be connected with different test durations: toxic effects appeared in tests conducted for 42 days (Vila et al. 2007b). Presumably, 14 days (this research) or 21 days (Rocheleau et al. 2008) may be a too short period to notice in plants the appearance of morphological changes caused by RDX exposure. Despite the fact that during the tests no adverse changes in wheat were observed, the effects of increased growth are difficult to predict. Tests of greater duration would provide a response if any changes were to appear. The presence of octogen in soil did not cause adverse effects on wheat and red clover germination and growth (it caused only growth stimulation). This is similar to research conducted by Rocheleau et al. (2008), in which lettuce (Lactuca sativa), barley (Hordeum vulgare), and ryegrass (Lolium perenne) tolerated very high HMX concentrations in artificial soil (lettuce and barley to 3,320 mg/kg; ryegrass up to 10,000 mg/kg). In the evaluation of zootoxicity, the impact of explosives contamination in soil on earthworm (Eisenia fetida) mortality was determined. The highest earthworm mortality was observed in soil spiked with TNT. In samples where TNT concentration was 360 mg/kg oligochaete lethality was 70 %; in the higher concentrations, 100 %. In samples spiked with RDX, mortality was 60 % in concentrations of 540 mg/kg and LC 50 was evaluated at the level of 585.7 mg/kg. Among the analyzed compounds, HMX showed the lowest level of earthworm toxicity. Lethality at the level of 60 % appeared only at the highest concentration (1,000 mg/kg) and LC 50 was 841.5 mg/kg. In samples with the mixture of explosives, 100 % mortality appeared at the concentrations of 180 mg/kg and higher. Interestingly, at the lowest concentration (100 mg/kg) all the earthworms stayed alive. The obtained results indicate that in all probability it was the synergistic effect of explosives on oligochaetes that was observed. In the sample where the concentration of sum of the explosives was 180 mg/kg (60 mg TNT ? 60 mg RDX ? 60 mg HMX), complete mortality was observed; in the tests with individual explosives, it was only higher concentrations that caused a lethal effect. The test was repeated with the lower concentrations (100, 130, 170, 220, 290 mg/kg) to more precisely observe mortality changes. 
The effects were surprising: again, at 100 mg/kg almost all earthworms stayed alive (39 out of 40), while at the next concentration, 130 mg/kg, only one living organism was observed after 7 days of incubation. The LC50 was estimated at 115 mg/kg. After calculating toxicity units (TUs), which are defined as 100 divided by the EC50 or LC50 (Kalka 2012), the effect of the explosives mixture was compared with that observed in the tests with individual substances. The result was positive (Table 2), which indicates a synergistic effect of the explosives in the mixture. The TNT zootoxicity test results are in agreement with other researchers' reports. A 50 % mortality rate of the earthworm (Eisenia fetida) population was estimated at a concentration of 276.7 mg/kg. According to other reports, the LC50 evaluated for different forest soils was 143-325 mg/kg (Lachance et al. 2004). LC50 values obtained in artificial soil are considerably higher, which indicates that the scale of the toxic effect depends on soil composition (Lachance et al. 2004). It has been stated in many investigations that RDX and HMX do not cause lethal effects, even in high concentrations (Robidoux et al. 2002; Best et al. 2006). In this research, earthworm mortality was observed in soil spiked with hexogen and octogen. These compounds were lethal for oligochaetes at higher concentrations than TNT: the LC50 value estimated for RDX was 585.7 mg/kg and for HMX it was 841.5 mg/kg. This unexpected effect may be connected with the physical-chemical properties and composition of the soil, especially its organic matter content. In soil with a low organic matter content, only a small amount of the explosives binds to the soil matrix; almost all of the introduced substances remain bioavailable to organisms. This is also the first time that a synergistic toxic effect of the mixture of TNT, RDX, and HMX on earthworm survival has been observed. The concentration of explosives that caused lethality of Eisenia fetida was lower in the samples with the explosives mixture than in the samples with individual substances. The analyzed compounds are toxic, and the scale of the effects they cause is difficult to predict. Comparing the results obtained in this research with data reported by other researchers, it can be stated that toxic effects depend not only on the concentration of explosives, but also on the kind of soil in which they occur. Furthermore, mixtures of explosives can cause different and/or stronger effects than individual substances. That is why more research is needed to assess the effects of different explosive compositions on various organisms.
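The LC50 and toxicity-unit calculations referred to above can be sketched in a few lines of Python; the mortality figures below are invented placeholders (only the reported mixture LC50 of 115 mg/kg is taken from the text), and the simple linear fit merely mirrors the regression approach mentioned in the Methods rather than reproducing the authors' exact model.

import numpy as np

def lc50_linear(conc_mg_kg, mortality_pct):
    # LC50 from a straight-line fit of % mortality against concentration:
    # the concentration at which the fitted line crosses 50 %.
    slope, intercept = np.polyfit(conc_mg_kg, mortality_pct, deg=1)
    return (50.0 - intercept) / slope

# Illustrative (invented) single-substance dose-response data.
conc = np.array([100.0, 180.0, 360.0, 540.0, 1000.0])   # mg/kg
mortality = np.array([0.0, 10.0, 40.0, 60.0, 95.0])      # % dead after 14 days

lc50_single = lc50_linear(conc, mortality)
lc50_mix = 115.0                       # mixture LC50 reported in the text (mg/kg)

tu_single = 100.0 / lc50_single        # toxicity units, TU = 100 / LC50
tu_mix = 100.0 / lc50_mix
print(f"single substance: LC50 = {lc50_single:.0f} mg/kg, TU = {tu_single:.2f}")
print(f"mixture:          LC50 = {lc50_mix:.0f} mg/kg, TU = {tu_mix:.2f}")
# A mixture TU clearly exceeding the TUs of its components at comparable doses
# points to a more-than-additive, i.e. synergistic, interaction.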
2018-04-03T00:50:38.551Z
2013-09-05T00:00:00.000
{ "year": 2013, "sha1": "08af18741fbb26f96ef47df2da7c472c6561e726", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00128-013-1090-8.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "08af18741fbb26f96ef47df2da7c472c6561e726", "s2fieldsofstudy": [ "Engineering", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
237343532
pes2o/s2orc
v3-fos-license
Paracrine study of adipose tissue-derived mesenchymal stem cells (ADMSCs) in a self-assembling nano-polypeptide hydrogel environment: To research the paracrine role of adipose tissue-derived mesenchymal stem cells (ADMSCs) in promoting angiogenesis under three-dimensional culture conditions consisting of a functionalized self-assembling peptide nanofiber hydrogel. ADMSCs were isolated, extracted, and then identified. Three kinds of peptides (RADA16-I, RGD, and KLT) were prepared, and a functionalized self-assembling peptide nanofiber hydrogel was produced by mixing RADA16-I, RGD, and KLT in a volume ratio of 2:1:1. AFM was used to observe RADA16-I, RGD, KLT, and the functionalized self-assembling peptide nanofiber hydrogel. Then, ADMSCs were cultured under three-dimensional conditions consisting of the peptide nanofiber hydrogel, and AFM was used to observe cell migration. ADMSCs in the common culture group (37°C, 5% CO2 cell culture incubator) and the hypoxic culture group (37°C, 10% CO2, and 1% O2 hypoxic incubator) acted as controls. ADMSCs were three-dimensionally cultured in situ for 1 day, and then the concentrations of HGF and VEGF in the supernatant were determined by ELISA. Cells were extracted from the peptide nanofiber hydrogel, and HO-1 expression was detected by western blotting. ADMSCs have high expression levels of CD29, CD90, and CD105 and low expression levels of CD34 and CD45. In addition, they can differentiate into adipocytes and osteocytes. The diameters of the fibers of RADA16-I, RGD, KLT, and the functionalized self-assembling peptide hydrogel are 17.34 ± 1.82, 15.50 ± 1.41, 13.77 ± 1.18, and 20.26 ± 1.25 nm, respectively. AFM indicated that cells in the functionalized self-assembling peptide nanofiber hydrogel migrated farther than those in RADA16-I. The concentrations of HGF under common, hypoxic, and three-dimensional culture conditions were 47.31 ± 6.75, 247.86 ± 17.59, and 297.25 ± 17.95 pg/mL, respectively, while the concentrations of VEGF were 218.30 ± 3.03, 267.13 ± 4.27, and 289.14 ± 3.11 pg/mL, respectively. Both HGF and VEGF were expressed more in the presence of the functionalized self-assembling peptide nanofiber hydrogel than in its absence (P < 0.05). Using western blotting, ADMSCs cultured under hypoxic and three-dimensional conditions were found to have high expression levels of HO-1. Culturing ADMSCs under three-dimensional conditions consisting of functionalized self-assembling peptide nanofiber hydrogels can promote their paracrine secretion of pro-angiogenic factors such as HGF and VEGF, and hypoxia is one of the important contributing factors. Introduction Mesenchymal stem cells (MSCs), adult stem cells of mesodermal origin with high self-renewing ability and multidirectional differentiation potential, are extensively present in connective tissues and organ mesenchyme throughout the body. They were first identified by Friedenstein in adherent bone marrow cell cultures [1], and subsequently similar MSCs were found in tissues such as cord blood, peripheral blood, muscle, and fat. By virtue of convenient acquisition, extensive sources, the high number of obtained cells, and minor damage to the donor site [2], adipose tissue-derived mesenchymal stem cells (ADMSCs) have rapidly become widely accepted seed cells in tissue engineering.
Cells are in a three-dimensional microenvironment in vivo and are influenced by physical signals and bioactive signals, while it is difficult to provide a three-dimensional environment for ordinary cultures. In recent years, nanopolypeptide materials have made progress as biological scaffolds for three-dimensional cell culture. Liu et al. used a functional self-assembling nano-polypeptide hydrogel as a scaffold to culture ADMSCs and found that the expression of paracrine cytokines would increase during three-dimensional ADMSC culture conditions [3]; however, the reasons for this effect were not sufficiently explained. This experiment aims to explore the influencing factors of increasing the expression of ADMSC paracrine angiogenic growth factors under three-dimensional culture conditions in the presence of a functional self-assembling nano-polypeptide hydrogel. Separation and culture of ADMSC The animal procedures and human participation/tissues in this study were carried out in accordance with the National Institutes of Health (NIH) Guidelines for the Care and Use of Laboratory Animals, with approval from the Animal Ethics Committee of Shandong Academy of Medical Sciences. Isolated fresh fat samples from patients with benign diseases (patients signed an informed consent form before operation) were transferred onto an ultraclean worktable within 30 min, megascopic blood vessels and connective tissues were washed and removed, I-type collagenase (m/v 0.1%) was added after the fats were cut into pieces (<1 mm 3 ), and the samples were centrifuged at 1,500 rpm for 10 min after oscillation and digestion in a 37°C thermostatic water bath. Sediments at the substratum were resuspended using 10% FBS complete culture solution (10% FBS, 1% 100 µg/mL penicillin, and 100 µg/mL streptomycin) and were filtered by passing through a 100-mesh cell screen, and a complete culture solution was added until 5 mL. Then, the sediments were placed into a 25 cm 2 cell incubator. The solution was replaced for the first time after 48 h, and then it was replaced once every 2-3 days. After primary cells were coated to approximately 90% density at the bottom, subculturing was carried out. ADMSC osteoinductive differentiation and Alizarin Red S and alkaline phosphatase staining Cell density was adjusted to 2 × 10 4 /mL, and 2 mL of the cell suspension was inoculated into a six-well plate precoated with 0.1% gelatin. When the cell density was fused to 60-70%, the culture solution was removed, and the cells were washed. A total of 2 mL of osteoinductive differentiation complete culture solution was added to adult MSCs, the solution was replaced once every 3 days, and observation was conducted after 4 weeks. Alizarin S staining and alkaline phosphatase staining were also implemented. ADMSC adipogenic differentiation and oil red O staining The cell density was adjusted to 2 × 10 4 /mL, and 2 mL of the cell suspension was added to a 6-well plate for culturing. The solution was replaced once every 2-3 days, and the culture medium was removed when the cell density was 100% or the cells were in the fusion state. Then, 2 mL of ADMSC adipogenesis-induced differentiation solution A was added. Three days later, the solution was removed, 2 mL of ADMSC adipogenesis-induced differentiation solution B was added, and the B solution was replaced by the A solution 24 h later. 
After 5 days of this alternate culturing, the B solution was continuously used to maintain culturing for 7 days, and the culture solution was removed until the fat droplets became large and round. The cells were washed, and fixation for 30 min using 4% paraformaldehyde and staining with 1 mL of oil red O solution for 30 min were performed. Then, the cells were washed and observed under a microscope. ADMSC ordinary culture and cell supernatant and protein extraction under anaerobic culture conditions The cell density was adjusted to 1.0 × 10 5 /mL, and 200 µL of cells were placed in a 24-well plate. A total of 200 µL of the complete culture medium was added to each well, and after culturing in a 5% CO 2 cell incubator at 37°C for 24 h, the cell culture supernatant was collected, centrifuged at 3,000 g/min for 10 min in a 4°C environment and preserved in an −80°C refrigerator for standby use. The cell total protein was extracted for the follow-up experiment. The cell supernatant and total protein under anoxic conditions (37°C, 10% CO 2, and 1% O 2 anoxic incubator) were obtained using the same method. Preparation and detection of polypeptide solution Ten milligrams each of RADA16-I, RGD, and KLT were dissolved in 1 mL of sterile deionized water in the ultraclean worktable, namely, they were completely dissolved through ultrasonic treatment for 30 min. The polypeptides were placed in a 4°C environment for standby use after sterilization using a 0.22 µm filter membrane. RADA16-I:RGD:KLT were blended in a proportion of 2:1:1 and subjected to ultrasonic blending, and the functional self-assembling polypeptide solution was obtained. After the above four types of polypeptide solutions were diluted 20 times, 5 µL of solutions were diluted and dropped on newly peeled mica sheets. They were gently washed using 100 µL of distilled water after standing for 10 s and then were observed under an atomic force microscope after airing. Three-dimensional in situ culture of ADMSC and cell supernatant and protein extraction First, 10% sterile sucrose solution was used to adjust the ADMSC density to 1.0 × 10 6 /mL. A total of 20 µL of cell glucose solution was rapidly blended with 100 µL of functional self-assembling polypeptide solution, and the mixture was dropped into a Transwell chamber (the Transwell chamber was preplaced in a 24-well plate with each holder containing 400 µL of the complete culture medium). Then, 200 µL of the complete culture medium was gently dropped along the diagonal direction of the chamber, and the chamber was placed in a 5% CO 2 cell incubator at 37°C. The solution was replaced after 15 min of culturing, and the culturing was continued for another 30 min. The chamber was transferred to a 12-hole incubator with 800 µL of the complete culture medium in each hole. After 1 day, the cell culture supernatants inside and outside the chamber and intracellular proteins in the hydrogel were collected. Determination of VEGF and HGF concentrations using ELISA ELISA was used to determine VEGF and HGF concentrations in cell supernatants in the common culture group, hypoxic culture group, and three-dimensional culture group. Determination of the expression of intracellular heme oxygenase-1 (HO-1) under various conditions using a western blot method ADMSCs cultured under common culture conditions, hypoxic culture conditions, and three-dimensional culture conditions for 1 day were extracted, and total proteins were obtained after pyrolysis, and protein concentration was determined. 
Equivalent amounts of proteins were taken and loaded, after which they were subjected to PAGE. After being transferred to polyvinylidene fluoride membranes, they were sealed for 1.5 h, and HO-1 primary antibody (1:1,000) was added and incubated overnight at 4°C. The secondary antibody (1:5,000) was added after washing the membranes, and then they were incubated at room temperature for 2 h, developed using an enhanced chemiluminescent agent, and finally imaged using a gel-imaging system. Statistical method Among the experimental results, all data were expressed as the mean ± standard deviation ( ± x s ). SPSS20 statistical software was used for data analysis. One-way ANOVA was used for intergroup comparison. P < 0.05 indicated that a difference was statistically significant. Results ADMSC primary cells obtained through separation had slow growth and proliferation; they adhered to walls during the growth process and presented a fusiform shape. The cells increased about 7 days later and presented a long fusiform shape, and the cell growth presented a vortex shape with sizes of 30-50 µm (as shown in Figure 1). After osteoinductive differentiation of ADMSC, cells were transformed from the original long fusiform shape into triangular and polygonal shapes; particular matters were sedimented in the cells, and sediments increased continuously with time and finally presented a linear shape (as shown in Figure 2). After staining with alizarin red S, calcified nodes were seen in the cells, and black particular minerals were sedimented in cells by alkaline phosphatase staining. The above changes appeared in the control group. After adipogenesis induced differentiation of ADMSCs, cells were transformed from the original long fusiform shape into triangular or rectangular shapes, and circular vacuoles structures could be seen in cells with regular morphologies but unequal quantities and sizes. The spherical fat droplets were colored red in cells after oil red O staining with regular morphologies but were of unequal sizes. A similar change did not appear in the control group (as shown in Figure 3). ADMSCs were placed in an anoxic incubator for 24 h culturing. Then they were observed under a microscope, and it was found that cells still presented "vortex"shaped growth with morphologies of long fusiform shape retained. A small number of dead cells were floating on the surface of the complete culture solution, and there was no obvious difference in morphologies from cells in the ordinary culture group (as shown in Figure 4). RADA16-I, RGD, KLT, and the self-assembly polypeptide solution were all colorless transparent liquids, and good self-assembly of the functional polypeptide solution could be observed under an atomic force microscope. As shown in Figure 5, RADA16-I, RGD, and KLT were all nanofiber shapes as observed under an atomic force microscope, with fiber diameters of 17.34 ± 1.82, 15.50 ± 1.41, and 13.77 ± 1.18 nm, respectively. The functional self-assembly polypeptide was also of nanofiber shape and the fiber diameter was 20.26 ± 1.25 nm. By the statistical analysis, as in Figure 6, the diameter of the self-assembly polypeptide increased when compared with those of the original three types of polypeptide fibers (P < 0.05). The ELISA method was used to determine VEGF and HGF concentrations in the cell supernatant. 
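Because the group comparisons reported next rest on ELISA read-outs, a short sketch of one common way such concentrations are interpolated from a standard curve is given here; the four-parameter logistic model, the calibrator values, and the sample optical densities are all illustrative assumptions and are not taken from the study.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # Four-parameter logistic curve: a = response at zero concentration,
    # d = response at saturating concentration, c = inflection point, b = slope.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibrators (pg/mL) and their optical densities.
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])
std_od = np.array([0.08, 0.15, 0.27, 0.49, 0.85, 1.40, 2.05])

(a, b, c, d), _ = curve_fit(four_pl, std_conc, std_od,
                            p0=[0.05, 1.0, 500.0, 2.5], maxfev=10000)

def od_to_conc(od):
    # Invert the fitted curve to read a sample concentration from its OD.
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = np.array([0.33, 0.78])                 # made-up sample wells
print(np.round(od_to_conc(sample_od), 1))          # interpolated pg/mL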
The HGF concentrations in the common culture group, hypoxic culture group, and three-dimensional culture group were 47.31 ± 6.75, 247.86 ± 17.59, and 297.25 ± 17.95 pg/mL, respectively; the VEGF concentrations were 218.30 ± 3.03, 267.13 ± 4.27, and 289.14 ± 3.11 pg/mL, respectively. Thus, the VEGF and HGF concentrations in the cell supernatant under both the hypoxic and the three-dimensional culture conditions were markedly increased compared with the common culture group (P < 0.05). The details can be seen in Figure 7. Under hypoxic culture conditions, expression of intracellular heme oxygenase-1 (HO-1) increased markedly. According to the western blot analysis, the HO-1 content expressed in cells in both the hypoxic culture group and the three-dimensional culture group was clearly increased compared with the common culture group, and the protein level in the three-dimensional culture group was the highest (P < 0.05) (as shown in Figure 8). (Figure 7 caption: HGF and VEGF concentrations measured by ELISA after 1 day under common (CCG), hypoxic (HCG), and three-dimensional (TDCG) culture conditions; both cytokines were highest in TDCG, closely followed by HCG; *P < 0.05.) Discussion By virtue of convenient acquisition, extensive sources, the high number of obtained cells, and minor damage to the donor site [2], ADMSCs have rapidly become widely accepted seed cells in tissue engineering. Currently, it is believed that MSCs are a heterogeneous cell population, and their specific antigen phenotype has not yet been found. The International Society for Cellular Therapy (ISCT) proposed test criteria for MSCs in 2005, including adherent growth features, multidirectional differentiation features, and phenotyping features [4]. Adherent fusiform cells were obtained by the "adherence screening method," and the cell growth pattern presented a "vortex shape" [5]. ADMSCs are nonhematopoietic stem cells of mesodermal origin with self-renewal and multidirectional differentiation potential, and they can differentiate into multiple mesodermal cell types, such as bone, cartilage, and fat. Upon induced osteogenic differentiation, the fusiform cells obtained through separation changed in morphology, calcified nodules could be observed in the cells by alizarin red S staining, and black particulate minerals were deposited in the cells after alkaline phosphatase staining. During induced adipogenic differentiation, quasi-circular vacuole structures could be observed in the cells, and spherical fat droplets were stained red through oil red O staining. Thus, the obtained cells could undergo favorable osteogenic and adipogenic differentiation, demonstrating multidirectional differentiation potential. Based on the combined features of the adherence screening method and the multidirectional differentiation potential, the fusiform, "vortex-shaped" cells obtained by separation were identified as ADMSCs derived from fat.
For the treatment of ischemic diseases, including ischemic heart disease and ischemic limb disease, stem cell therapy is currently a promising therapeutic approach [6,7]. However, after treatment by inoculating stem cells, the stem cells proliferate in the damaged tissue at a low rate, so the expected therapeutic effect is often not achieved, which is the main disadvantage restricting stem cell treatment [8,9]. Moreover, existing studies have found that a tumor risk exists after stem cell treatment [10,11]. One of the solutions to overcome the above disadvantage of stem cell treatment is to use conditioned culture medium from stem cell cultures to treat ischemic diseases. Stem cells can express, synthesize, and secrete cytokines and growth factors and regulate multiple types of bioactive factors such as polypeptides [12][13][14][15]; ADMSCs in particular can secrete pro-angiogenic and anti-apoptotic factors, including VEGF, HGF, bFGF, and TGF-β. Anoxia is an important factor influencing the secretion of bioactive factors by stem cells [16,17]. The concentration of bioactive factors secreted by stem cells under ordinary culture conditions is low, so it is difficult to exert a therapeutic effect. Recent studies have found that the VEGF concentration in the supernatant under ordinary culture conditions is 217 ± 97 pg/mL [18], while the concentration of VEGF with therapeutic significance is approximately 5,000 pg/mL [19,20]. Therefore, culturing MSCs in vitro and enhancing the ability of stem cells to secrete bioactive factors constitute the key to solving the problems of stem cell treatment. Three-dimensional culturing of ADMSCs by the "microsphere method," and the use of supernatant obtained from culturing with the three-dimensional conditioned culture medium, can significantly ameliorate acute ischemic kidney disease [21]. With three-dimensional culturing of ADMSCs using the "microsphere method," the bioactive factors secreted by stem cells increase, which is closely related to the anoxic environment in which the cells are located [22]. Moreover, three-dimensional culture conditions are similar to the in vivo cellular environment, so the cells can be better influenced by physical and biological cues. Therefore, a functional self-assembling nano-polypeptide hydrogel was used in this experiment as a biological scaffold for the three-dimensional stem cell culture. According to previous literature reports, the self-assembling polypeptide RADA16-I can spontaneously form nanofibers with diameters of 10 nm, pore diameters of 5-200 nm, and abundant moisture [23][24][25]. The formed polypeptide nanofibers are extremely similar to the extracellular matrix and can act as a biological scaffold for three-dimensional cell culture. The polypeptide RGD is a key integrin-binding motif for cell adhesion and can facilitate cell adhesion [26,27]. (Figure 8 caption: HO-1 expression was high in both the three-dimensional (TDCG) and hypoxic (HCG) culture groups, and higher in TDCG than in HCG; *P < 0.05; CCG: common culture group; HO-1: heme oxygenase-1.) The polypeptide KLT is a stimulatory factor of VEGF [28]. In this study, the above three types of polypeptide solutions were thoroughly blended in a volume ratio of 2:1:1, and a colorless and transparent liquid was obtained.
By observing the self-assembling polypeptide solution obtained after blending under an atomic force microscope, the solution still consisted of nanofiber structures. Compared with RADA16-I, the fiber diameter obtained through self-assembly was larger (thick). Thus, it can be seen that the self-assembling nano-polypeptide obtained by blending three types of polypeptides in a volume ratio of 2:1:1 could be very well assembled into a nanofiber structure with abundant moisture and could be used as a biological framework for the three-dimensional cell culture. In this experiment, the in situ three-dimensional culture of ADMSCs was carried out using a functional selfassembling nano-polypeptide hydrogel, and the ELISA method was used to determine the VEGF and HGF concentrations in the supernatant of the culture medium under three-dimensional culture conditions and ordinary culture conditions. It was found that ADMSCs secreted pro-angiogenic factors more significantly after threedimensional culture than under ordinary culture conditions. When cells are beyond the oxygen diffusion distance (generally 150-250 μm), they will be in an anoxic state [29]. Therefore, we believe that when a functional self-assembling polypeptide hydrogel is used for the three-dimensional culture, the anoxic state of cells is the influencing factor for increasing the secretion of pro-angiogenic factors. The results for the anoxic group in this study showed that under anoxic conditions, even an ordinary two-dimensional culture would result in an increase in the expression of proangiogenic factors by ADMSCs; however, the magnitude of this increase was obviously lower than that under threedimensional culture conditions. Extensively existing in in vivo microsomal enzyme systems, HO, including three types of isozymes, HO-1, HO-2, and HO-3, participates in multiple philological and pathological processes in vivo. HO-1 can be activated by multiple oxidative stress factors in vivo and has important in vivo effects, such as antioxidation, anti-inflammatory reactions, and immune adjustment [30]. We found that HO-1 expression levels in cells in the three-dimensional culture group and the anoxic group were obviously higher than those in the ordinary culture group and so it is believed that the protein kinase (Akt) signal pathway is activated under three-dimensional culture conditions [16]. The expression of anoxic genes was upregulated inside ADMSCs, and as a result, the number of pro-angiogenic factors secreted by ADMSCs increased. MSCs were obtained through isolated culture from fat tissues, and ADMSCs were validated by combining adherence screening and multidirectional differentiation potential. The mediated three-dimensional culture was carried out for ADMSCs with a functional self-assembling nano-polypeptide hydrogel, and it was found that the level of proangiogenic factors secreted by ADMSCs increased under three-dimensional culture conditions. Moreover, anoxia was validated as one of the important factors in increasing secretion. However, a further in-depth study is needed regarding whether the expression of anoxic genes in ADMSCs is upregulated by activating the Akt signal pathway during mediated three-dimensional ADMSC culture conditions using a functional self-assembling nanopolypeptide hydrogel. Conclusion In this experiment, MSCs were isolated and cultured from adipose tissue, and ADMSCs were confirmed by adherent screening and multidirectional differentiation potential. 
Three-dimensional culture of ADMSCs mediated by functionalized self-assembled nano-polypeptide hydrogel showed that ADMSCs secreted increased growth of vascular growth factors such as HGF and VEGF under three-dimensional culture conditions and confirmed that hypoxia was one of the important factors for its secretion. However, it is still necessary to further study the function of functional peptide nanometer hydrogels on ADMSC-mediated three-dimensional culture in order to upregulate the expression of Akt in ADMSCs. Acknowledgement: The authors would like to express their gratitude to Dr Qian Wang for their assistance in performing the experiments. Funding information: This study was supported by grants from the Natural Science Foundation of Shandong Province (ZR2017MH072), the Natural Science Foundation Program of Shandong Province (2016GSF201219), and the Livelihood Technology Project of Qingdao (18-6-1-90-nsh). Author contributions: Ling Jianmin performed the experiments, analyzed, and interpreted the data, and wrote the manuscript; Tian Ailing and Yi Xin performed the analysis and interpretation of the data; Sun Nianfeng designed the study and revised the manuscript. All authors read and approved the final manuscript. Conflict of interest: The authors state no conflict of interest. Data availability statement: The data that support the findings of this study are available from the corresponding author upon reasonable request. Ethical approval: The procedures and participation/tissues of humans in this study were carried out in accordance with the National Institutes of Health (NIH) Guidelines for the Care and Use of Laboratory. All experimental protocols were approved by the Ethics Committee of Shandong Academy of Medical Sciences. All participants provided a statement of written informed consent.
2021-08-29T13:17:12.010Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "f35620edb6c30a49b82c337062bc01854fb599c4", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/gps-2021-0053/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "83d791e7010363d3a8e768d1417b0dadad1db437", "s2fieldsofstudy": [ "Biology", "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
204811378
pes2o/s2orc
v3-fos-license
Junin Virus Triggers Macrophage Activation and Modulates Polarization According to Viral Strain Pathogenicity The New World arenavirus Junin (JUNV) is the etiological agent of Argentine hemorrhagic fever (AHF). Previous studies of human macrophage infection by the Old-World arenaviruses Mopeia and Lassa showed that while the non-pathogenic Mopeia virus replicates and activates human macrophages, the pathogenic Lassa virus replicates but fails to activate human macrophages. Less is known in regard to the impact of New World arenavirus infection on the human macrophage immune response. Macrophage activation is critical for controlling infections but could also be usurped favoring immune evasion. Therefore, it is crucial to understand how the JUNV infection modulates macrophage plasticity to clarify its role in AHF pathogenesis. With this aim in mind, we compared infection with the attenuated Candid 1 (C#1) or the pathogenic P strains of the JUNV virus in human macrophage cultures. The results showed that both JUNV strains similarly replicated and induced morphological changes as early as 1 day post-infection. However, both strains differentially induced the expression of CD71, the receptor for cell entry, the activation and maturation molecules CD80, CD86, and HLA-DR and selectively modulated cytokine production. Higher levels of TNF-α, IL-10, and IL-12 were detected with C#1 strain, while the P strain induced only higher levels of IL-6. We also found that C#1 strain infection skewed macrophage polarization to M1, whereas the P strain shifted the response to an M2 phenotype. Interestingly, the MERTK receptor, that negatively regulates the immune response, was down-regulated by C#1 strain and up-regulated by P strain infection. Similarly, the target genes of MERTK activation, the cytokine suppressors SOCS1 and SOCS3, were also increased after P strain infection, in addition to IRF-1, that regulates type I IFN levels, which were higher with C#1 compared with P strain infection. Together, this differential activation/polarization pattern of macrophages elicited by P strain suggests a more evasive immune response and may have important implications in the pathogenesis of AHF and underpinning the development of new potential therapeutic strategies. INTRODUCTION Junin virus (JUNV) is the etiological agent of Argentine hemorrhagic fever (AHF), an endemoepidemic disease mainly affecting agricultural workers in Argentina. The infection is usually acquired through small abrasions in the skin or through aspiration of particles contaminated with urine, saliva, or blood from carrier rodents. The AHF incubation period ranges from 6 to 12 days, ending with the onset of fever. During first 7 days, patients are commonly associated with a flu-like syndrome and the fever persists until the second week, when hemorrhagic or neurological signs of varied severity may be present. The 80% of patients improve after the second week. AHF diagnosis is based on clinical and laboratory data, the latter mainly based on platelet counts below 100,000/mm 3 in combination with white blood cell counts under 2,500/mm 3 . Early diagnosis is important, because the early use of immune plasma from convalescent patients reduces mortality rates from 20 to 1%. Candid #1 (C#1) is a live attenuated vaccine against AHF, which is licensed in Argentina and has been administered to several hundred thousand persons in endemic areas for more than 20 years, with a major impact on the incidence of the disease. 
However, since the first description of the disease in the 1950s, uninterrupted annual outbreaks have been observed in a progressively expanding region in north-central Argentina, to the point that more than 5 million individuals are today considered to be at risk of AHF (1). JUNV belongs to the clade B New World (NW) of genus mammarenavirus that together with genus reptarenavirus form the Arenaviridae family (2). Most mammarenavirus are associated with rodent infections. The Old World (OW) choriomeningitis lymphocytic virus (LCMV) infects Mus musculus, and this explains its global distribution. In contrast, other strains of mammarenavirus infect rodents with a circumscribed geographical distribution that explains their endemicity (3). In their natural rodent host, mammarenavirus usually produce a persistent asymptomatic infection that may occasionally be transmitted to humans where it can cause severe hemorrhagic fever (HF). In addition to JUNV, other strains of mammarenavirus associated with HF are the NW Machupo (MACV) and Chapare (CHPV) viruses in Bolivia, Sabiá (SABV) in Brazil and the OW Lassa virus (LASV) in Africa. In contrast, other members such as the NW Tacaribe (TCRV) and Pichindé (PICV) or the OW Mopeia (MOPV) viruses do not cause disease (4). The mammarenavirus are etiological agents of emergent diseases because human activity facilitates contact with wild rodents in new ecological niches and, therefore, new isolates should be expected in the future (3). Like other members of the same family, JUNVs are enveloped virions, ∼120 nm in diameter, with a capsid of helicoidal symmetry that includes a variable number of ribosomes. The virions contain a bi-segmented single-stranded RNA genome, with both segments employing an ambisense coding strategy. The L segment contains genes encoding the RNA-dependent RNA polymerase (L) and the matrix protein (Z). However, the smaller S segment encodes the nucleoprotein (N) and the glycoprotein precursor (GPC) which, after post-translational cleavage, yields mature virion glycoproteins (G) G1, G2 and the stable signal peptide SSP that together will constitute the spikes that decorate the virus surface (5). Macrophages are the most functionally diverse (plastic) cells of the hematopoietic system. Macrophages are found in all tissues and their main function is to respond to pathogens and modulate the adaptive immune response through antigen processing and presentation (6,7). Macrophage activation has emerged as a key area of study in immunology, tissue homeostasis, disease pathogenesis and inflammation resolution (8). To accomplish such diverse functions, they mature under the influence of signals from the local microenvironment into either classical M1 or alternatively M2 activated macrophages. M1 macrophages are characterized by the production of high levels of proinflammatory cytokines, an ability to mediate resistance to pathogens, strong microbicidal properties, high production of reactive nitrogen and oxygen intermediates and promotion of Th1 responses. In contrast, M2 activated macrophages are characterized by their involvement in parasite control, resolution of inflammation, tissue remodeling, immune regulation, and Th2 promotion responses (6,9). In this study, we aimed to characterize the infection of macrophages using two strains of the same arenavirus with different pathogenic properties. 
For this purpose, we studied the infection of human macrophages by the attenuated C#1 and the pathogenic P strains of JUNV, using an in vitro model represented by human macrophage cell cultures. Cells BHK-21 and Vero-76 cells (ATCC, USA) were maintained as monolayers, as previously described (10). Peripheral blood mononuclear cells (PBMCs) were obtained from healthy volunteer donors who had not taken any non-steroidal antiinflammatory drugs for 10 days prior to sampling as previously described (11). This study was approved by the Institutional Ethics Committee, National Academy of Medicine, Argentina. Written consent was obtained from all subjects. Human monocyte-derived macrophages (HMDM) were obtained as previously reported (12). Briefly, PBMCs from healthy donors were isolated by Ficoll-Hypaque (GE, Chicago, IL, USA) density gradient centrifugation, and positive selection of CD14 + monocytes was performed using an EasySep TM Human CD14 Positive Selection Kit (StemCell Tech, Vancouver, Canada). Macrophage differentiation was carried out by plating 2.5 × 10 5 CD14 + monocytes in 48-well plates containing 500 µL of RPMI 1640 plus 10% Fetal Bovine Serum (FBS) and 1% penicillin/streptomycin (PS) in the presence of rM-CSF (40 ng/ml) and cultured for 7 days. In selected experiments, 24well plates were used with a double quantity of cells and medium. Virus A virulent strain of JUNV, originally isolated from an AHF patient (P3441), as well as the attenuated Candid 1 (C#1) have been already described (13). The preparation of viral stocks in BHK-21 cells and infectivity titration using the Vero-76 cell line has been previously described (13). All work with the infective P strain was performed in a BSL/3 facility by vaccinated personal. Cell Infection For viral infection, cells were washed with PBS twice before incubating with the virus at a multiplicity of infection (MOI) of 1 in serum free medium. After 1 h of incubation at 37 • C, cells were washed with PBS twice again and supplemented with a complete culture medium. Mock infection was performed by adding the same volume of BHK-21 cell culture supernatant, instead of JUNV, to the cell monolayer. Cells were observed daily using an inverted microscope with an Olympus SP-320 camera and images were further processed with Photoshop 6.0 software. Plaque Formation Assay Ten-fold dilutions of the macrophage-JUNV infected culture supernatants were added to 24-well plates with a 40-50% confluence monolayer of Vero E6 cells. The plate was then incubated at 37 • C for 1 h with gentle rocking. Following adsorption, the inoculum was removed and overlaid with 2 ml of MEM containing 0.8% methylcellulose and 2% FBS and further incubated at 37 • C in a humid atmosphere with 5% CO 2 . Plaques were allowed to develop for either 4-6 days before being fixed (4% w/v paraformaldehyde) and stained with a 1% Crystal Violet in 20% ethanol and d H 2 O. Indirect Immunofluorescence Studies Cells were cultured on 12 mm diameter glass inserts before viral infection. At the indicated time-point after infection, the inserts were fixed with 4% paraformaldehyde (PFA) for 20 min and permeabilized with 0.1% Tween for 10 min. The slides were incubated overnight at 4 • C with a pool of specific monoclonal antibodies against JUNV (13). FITC-conjugated anti mouse Igs were then applied to the PBS-washed slides for 30 min at room temperature (RT). Antibodies were diluted with PBS containing 5% FBS and 5% goat serum as blocking reagents. 
The slides were counterstained with DAPI and examined under a Nikon E200 microscope equipped with fluorescence filters and a 100-W mercury lamp. Images were acquired with a Tucsen TCC 5.0 refrigerated camera under the control of IS listen software and further processed using Photoshop 6.0 software. Flow Cytometry Analysis The viability assay on macrophages culture was performed after 72 h of JUNV infection using Annexin V (AnnV) (Immunotools, Gladiolenweg, Friesoythe, Gemany) together with Fixable viability dye (eBioscience, USA). Briefly, cell harvesting was performed by a 20-min incubation with PBS plus 2% FBS (PBSF) and 1 mM EDTA on ice, followed by up and down pipetting. The harvested cells were washed once with PBSF and then stained with fixable viability dye efluor 780 diluted in PBS for 30 min. After washing cells, they were stained with AnnV following manufacturer's instruction. After final washing, the cells were fixed with a Cytofix/Cytoperm Kit (BD Biosciences, USA). The surface staining for CD11b, CD64, CD206, CD14 (phenotypic characterization of macrophages) or CD11b, HLA-DR, CD86, and CD80 (activation status) was performed following a standard protocol. Briefly, the harvested cells were washed with PBS and blocked in PBSF on ice for 30 min. The cells were washed with PBS and the respective antibody cocktails (prepared in PBSF) were added to the cell pellet and incubated for 30 min on ice. A fixable viability dye was used according to the manufacturer's instructions to gate on live cells. After washing, the cells were fixed with a Cytofix/Cytoperm Kit, washed again and analyzed in a FACS Canto I (Becton Dickinson, Franklin Lakes, NJ, USA) or Partec-Sysmex CyFlow flow cytometer (Görlitz, Germany). All analysis was carried out with FlowJo software (Tree Star). Intracellular staining was performed following manufacturer's recommendation for the Cytofix/Cytoperm Kit. The preparation of blockage and cocktail antibodies was performed with PBSF. We have used fluorescent minus one (FMO) to set the threshold for each marker. RNA Isolation, RT-PCR, and Real-Time PCR For gene expression analysis, cultured cells were washed and then harvested with Trizol (Life Technologies, Carlsbad, CA, USA) and total RNA was obtained following the manufacturer's instructions. Reverse transcription was performed using 100 ng of RNA by employing an iScript cDNA synthesis kit (Bio-Rad, Hercules, CA, USA). qPCR reactions were assessed using 1 µl of cDNA and using Sso Advanced Universal mix with Sybr Green and CFX-Connect equipment (Bio-Rad). Primers used in this study are listed in Table S1. The reaction was normalized to housekeeping gene expression levels and the specificity of the amplified products was checked through analysis of dissociation curves. Statistical Analysis Each experiment was performed with 3-7 different donors. All results are graphed as the median (min-max, horizontal line indicates the median) and non-parametric one-way analysis of variance (ANOVA) (Kruskal-Wallis) followed by Dunn's multiple comparison test was used to detect significant differences between groups. In all cases, P-values lower than 0.05 were considered statistically significant. All statistical analyses were performed using Prism 6 software (GraphPad). JUNV Strains Replicate Similarly in Human Macrophages Human monocyte-derived macrophages (HMDM) cells were infected at a multiplicity of infection (MOI) = 1 with the attenuated C#1 or the pathogenic P strains of JUNV. 
HMDM cells showed clear morphological changes, such as becoming more flattened and extended, as early as 24 h post-infection (hpi) with both JUNV strains ( Figure 1A). Infectious virus released to the cell culture supernatants were measured over 6 days by plaque formation assay. Infection with both viral strains led to similar levels of infectious viruses, peaking at 3 days post-infection (dpi) and declining until the end of the study (Figure 1B). Viral antigen was studied at 3 dpi by immunofluorescence (IF) and flow cytometry (FC) analysis. Viral protein staining was similarly positive with both strains ( Figure 1C). As expected, FC analysis confirmed these results showing that 58% of HMDM cells were infected with C#1 strain meanwhile 57.6% were positive for the P strain ( Figure 1D). JUNV Strains Differentially Enhances the Expression of CD71 Viruses exploit fundamental cellular processes to gain entry into cells and deliver their genetic cargo. Virus entry pathways are largely defined by the interactions between virus particles and their receptors at the cell surface. These interactions determine the mechanisms of virus attachment and, ultimately, penetration into the cytosol. In contrast to LASV and other OW arenaviruses, which use α-dystroglycan to infect cells, the NW arenaviruses, including JUNV, use human transferrin receptor 1 (hTFR1 or CD71) (14). We have previously shown that JUNV infection enhances the expression of hTFR1 in the precursor CD34 + cells, suggesting that JUNV infection promotes its own dissemination (15). Compared with other cell types, mature macrophages may be atypical regarding the requirements for hTFR1 expression levels (16), and for that reason, we explored the expression pattern in HMDM infected cells. In this sense, our results showed that JUNV infection enhances CD71 expression in human macrophages, but with the highest value associated with P strain infection (Figures 2A-C). JUNV Strains Differentially Activate Macrophages and Cytokine Production We have analyzed the expression pattern of co-stimulatory markers such as CD80 and CD86, and the antigen presentation surface marker (HLA-DR). Our results indicate a differential expression when infected with one or other viral strain. A significantly higher percentage of CD14 + CD86 + cells were observed after C#1 strain infection, while CD80 did not show significant differences between infected cells. On the other hand, P strain-infected macrophages showed the highest percentage of CD14 + HLA-DR ++ cells revealing a differential expression pattern after infection with C#1 or P strain (Figures 3A,B). Considering the observed macrophage activation induced by JUNV infection, we next analyzed the level of several cytokines in the supernatants of HMDM at 3 dpi. We found a clear distinctive profile, since higher levels of TNF-α, IL-10, and IL-12 were detected in the supernatants of C#1 strain-infected macrophages, but only IL-6 was significantly increased using the P strain. Interestingly, no difference in IL-1 production was observed compared with mock conditions, suggesting no activation of the inflammasome pathway (Figures 3C-G). The percentage of viable cells does not showed significant differences comparing Mock, C#1 and P (90.2, 88.25, and 83.5, respectively) although small increase in AnnV+ cells were observed with P when compared to C#1 and Mock (Supplementary Information S1). 
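As a brief aside on the titration arithmetic behind the growth curves above, the sketch below shows how plaque counts from the plaque formation assay are usually converted into PFU/mL and how an inoculum volume for MOI = 1 could be back-calculated; the plaque count, dilution, and inoculum volume are hypothetical, and only the 2.5 × 10^5 cells per well figure is taken from the Methods.

def titer_pfu_per_ml(plaques, dilution, inoculum_ml):
    # Plaque-forming units per mL estimated from a single countable well.
    return plaques / (dilution * inoculum_ml)

def inoculum_for_moi(cells, moi, titer):
    # Volume (mL) of virus stock needed to infect `cells` at the given MOI.
    return cells * moi / titer

# Hypothetical example: 42 plaques in the 1e-5 dilution well, 0.2 mL inoculum.
titer = titer_pfu_per_ml(42, 1e-5, 0.2)          # ~2.1e7 PFU/mL
vol = inoculum_for_moi(2.5e5, 1.0, titer)        # macrophages per well, MOI = 1
print(f"titer ≈ {titer:.2e} PFU/mL, inoculum ≈ {vol * 1000:.1f} µL per well")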
JUNV Selectively Skews Macrophage Polarization Taking into account the fact that JUNV modulates macrophage activation depending on which strain was used, we next evaluated different surface markers to distinguish M1/M2 polarization in HMDM after JUNV infection. The percentage of CD64 + (M1), CD206 + , and CD163 + (M2) cells expressed in CD11b + cells were analyzed by flow cytometry. CD11b + CD64 + CD206 − cells were increased when cells were infected with C#1 strain as compared to Mock and the P strain with an average of 18 vs. 8% and 3.2%, respectively. However, the M2 phenotype CD11b + CD206 + CD64 − was significantly higher after P infection as compared C#1 strain or to Mock with an average of 34 vs. 10.1%, and 17.5%, respectively. This indicates that JUNV modulates polarization according to viral strain pathogenicity (Figures 4A,B). Additionally, another M2 phenotype analyzed as a double positive, CD206 + /CD163 + cells, showed a clear tendency toward an increased percentage after infection with P strain as compared to Mock and C#1 strain infection (44.2 vs. 29.9% and 33.9%, respectively, see Supplementary Information S2). The Expression of MERTK Was Differentially Modulated With JUNV Variants The TAM family tyrosine kinase receptors TYRO3, AXL, and MERTK (TAM) receptors have been assigned to have a prominent role in the following: regulating the innate immune response (17); phagocytosis and macrophage polarization by acting in coordination with cytokine signaling (18); and in several aspects of the host response to viral infection (19,20). Considering our observation that JUNV modulates macrophage polarization and that AXL and MERTK are differentially expressed in pro-inflammatory M1 and anti-inflammatory M2 macrophages, respectively (21), we next evaluated TAM expression in HMDM after infection with both strains. Our results showed that while TYRO3 + or AXL + macrophages showed a similar response to infection with the two strains, the percentage of MERTK + cells was down-regulated by C#1 strain and up-regulated by P strain infection, highlighting again a differential macrophage response depending on virus strain (Figures 5A,B). Since activation of MERTK triggers the induction of the suppressor of cytokine signaling 1 (SOCS1) and SOCS3, we next analyzed the transcription level of these genes by RT-qPCR. We also analyzed interferon regulatory factor 1 (IRF-1) as a target gene of infection and a member of the interferon regulatory factor family (22). As expected, we observed higher transcription levels of IRF-1, SOCS1, and SOCS3 concomitant with lower levels of IFN-β in macrophages infected with P strain as compared with mock conditions after 24 hpi (Figures 6A-D). DISCUSSION In the present study, we showed that both attenuated C#1 and pathogenic P JUNV strains induced a phenotypic change in primary human macrophages as early as 1 dpi, that was interpreted as macrophage maturation and/or activation. In addition, we observed similar infectivity titers in the supernatants and a comparable percentage of infected monolayer cells. This stands in contrast with the reported minimal replication of JUNV-XJ (pathogenic) and XJ-Cl3 (attenuated) strains in macrophage cells from adult rats (23), a fact that may be attributed that macrophages were from a different donor species. 
Previous studies of human macrophage infection by mammarenaviruses have shown that the non-pathogenic MOPV both replicates in and activates macrophages (24), whereas pathogenic LASV replicates but fails to activate macrophages (25). The lack of activation of LASV-infected macrophages was later associated with sequence differences in the viral protein N (26) and involved CXCL10 (27). On the other hand, it has been reported that the non-pathogenic TCRV replicates less efficiently in macrophages than the pathogenic JUNV, but induces a cytokine release not observed in JUNV-infected cells (28). More recently, a differential inhibition of macrophage activation by LCMV and PICV, mediated by the N-terminal domain (NTD) of the viral Z protein, has been reported (29). Moreover, the LCMV Z NTD leads to increased viral replication and inhibition of IFN responses in macrophages (30), a property that has more recently been assigned to the Z protein of pathogenic arenaviruses only (31). Our results partially support the hypothesis that OW and NW arenaviruses may have different pathogenic mechanisms, at least in macrophages (4,32). We have previously shown that JUNV infection up-regulates TfR1 in CD34+ hematopoietic stem cells (15). Here, we show that both JUNV strains also increased the expression of CD71 in infected HMDM, with the P strain showing higher values. This supports the hypothesis that JUNV promotes its own dissemination not only in undifferentiated hematopoietic cells but also in a differentiated lineage, and that the P strain exploits this mechanism to a greater extent. The differences observed in maturation and activation markers and in the cytokine expression profile, depending on which JUNV strain infects the macrophages, strongly support the notion that the C#1 and P strains are able to elicit differential immune responses. In this sense, a higher level of pro-inflammatory cytokines (IL-12 and TNF-α) together with an increased level of a co-stimulatory marker (CD86) demonstrates the ability of the C#1 strain to mount an adequate inflammatory response. This allows the generation of protective immunity against the virus in the absence of disease in the host. By contrast, the pathogenic P strain elicits a more attenuated activation state of macrophages by decreasing the prototypical pro-inflammatory cytokines. However, it also remarkably induces higher levels of IL-6, a cytokine also associated with immunomodulation (33), and increases the percentage of CD14+HLA-DR++ cells, two signals that indicate an anti-inflammatory response that might allow early immune evasion, facilitating viral dissemination in the host and subsequent disease. It has been demonstrated that most acute infections by pathogenic viruses are associated with macrophage activation to an M1 status promoting inflammation (34). Regarding M2, the first studies were carried out with viruses associated with chronic infections, and the first accepted paradigm was that viral infection activates macrophages toward an M1 profile during the acute phase, with an M2 profile emerging during the eventual chronic phase of the disease (34,35). In many of these studies, the M2-prone response was related to an enhanced production of IL-10, which indirectly exerts potent immunosuppressive effects (36-38). Moreover, some viruses, such as herpesviruses and poxviruses, encode functional orthologs of IL-10 (vIL-10s) (39) or IL-6 (33). The viral IL-6 could
also inhibit antiviral immunity through inhibition of type I IFN, which allows HHV8 to evade immune detection (40). [Figure 3 legend, continued: percentages of CD14+HLA-DR++ cells are graphed; the threshold for each marker was set based on the FMO control; TNF-α (C), IL-12 (D), IL-10 (E), IL-6 (F), and IL-1β (G) were measured in the supernatants of infected macrophages at 72 h using commercial ELISA kits; non-parametric one-way ANOVA followed by Dunn's multiple comparison test, *P < 0.05, **P < 0.01; results are graphed as the median (min-max) of at least four independent donors per assay.]

Our results clearly show that a potent pro-inflammatory response was elicited when macrophages were infected with the C#1 strain, concordant with an M1 phenotype. However, the P strain elicited a more anti-inflammatory M2 response, associated with a higher level of IL-6, but not IL-10, and an increase in CD11b+CD206+ and HLA-DR expression, suggesting that the P strain shifts the macrophage response to a regulatory program (41). In macrophages, IL-10 as well as pro-inflammatory cytokines (TNF-α, IL-12, and IL-6) are produced in response to activation of TLRs 2, 3, 4, 7, and 9, via the MyD88 or TRIF, NF-κB, and MAPK pathways (42). The P strain stimulated the production of neither IL-10 nor prototypical pro-inflammatory cytokines such as IL-12 or TNF-α; instead, a large amount of IL-6, a cytokine also associated with immunomodulation, was specifically induced by the P strain and not by the C#1 strain. Interestingly, in a different model, IL-6 has recently been associated with the promotion of the M2 phenotype (43) and described as a potent inducer of SOCS3 (33). Furthermore, the generation of human immunosuppressive myeloid cell populations in human IL-6 transgenic NOG mice has been demonstrated (44). Very little is known about M2 macrophage polarization during acute viral infection, since the early antiviral response is normally associated with a pro-inflammatory immune response (45). In this regard, a recent transcriptomic analysis of macrophages infected with attenuated or virulent influenza virus strains showed an early and clear profile of genes associated with the M2 phenotype triggered by the pathogenic influenza virus (46). It has been shown that the AXL and MERTK receptors are differentially modulated in cytokine-induced M1 and M2 macrophages, where enhanced levels of MERTK were associated with M2 polarization (21). IRF-1 was initially described as a regulator of type I IFN and MHC-I expression by binding to regulatory regions of their promoters (47). However, IRF-1 is one of the most important IFN-stimulated genes for innate and adaptive antiviral immunity, forming a complex network with other transcription factors that ultimately shapes a specific response (48-50). In this sense, IRF-2 and IRF-8 can both inhibit IRF-1-mediated induction of transcription, either by competing for promoter binding sites (51,52) or by blocking protein:protein interactions (53,54), supporting the hypothesis that viruses can manipulate the induction of IFN and ISGs to enhance their replication. [Figure 5 legend, continued: non-parametric one-way ANOVA followed by Dunn's multiple comparison test was used to detect significant differences between groups, **P < 0.01; results are graphed as the median (min-max) of seven independent donors; the threshold for each receptor was set based on the FMO control.] In this sense, the selective modulation of MERTK, as
well as higher levels of IRF-1, SOCS1, and SOCS3 during P strain infection highlight not only the skewing property of this strain, but also its ability to usurp immunomodulatory pathways (TAM/SOCS1 and 3) and potentially use them for immune evasion. Our results show that JUNV triggers differential macrophage activation and modulates polarization according to viral strain pathogenicity, inducing distinct cell responses that might facilitate correct immune surveillance or viral evasion and dissemination events that end in disease. Thus, the results provide important mechanistic insights into the understanding of JUNV pathogenesis and the multi-faceted host immune responses in arenavirus infection. Taking our results together with the above-mentioned recent findings by others, it may be speculated that in some acute viral infections, subversion of the conventional M1 proinflammatory response to a M2 anti-inflammatory response by acute pathogenic viruses will be more frequent than previously described, and thus is deserving of further study since this may allow the development of potential new candidates for therapeutic targets. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding authors. AUTHOR CONTRIBUTIONS MF did most of the experiments. PT performed IF studies. AL and NC made qPCR analysis. AE participated in the generation of macrophages and flow cytometry analysis. VR, JG, and MR edited the manuscript. EC and RG designed the experiments, discussed results, and wrote the manuscript. Técnicas (CONICET), Argentina. The funders had no role in study design, data collection and interpretation, or in the decision to submit the work for publication.
The role of PS ability and RC skill in predicting growth trajectories of mathematics achievement

There are relatively few studies in Australia and the South-East Asian region that combine investigating models of math growth trajectories with predictors such as reasoning ability and reading comprehension skills. Math achievement is one of the major components of overall academic achievement, and it is important to determine what factors (especially domain-general factors) predict levels of achievement over time. This study presents large-scale data (N = 5,886) from Australia to examine the trajectories of growth in math achievement and how problem solving (PS) ability and reading comprehension (RC) skills predict this growth among government school students (grades 3-8) in Victoria. Latent growth modelling showed that PS ability predicts growth in math achievement and that this relationship is partially mediated by RC skill. Both PS ability and RC skill predict initial status in math achievement, but only PS ability predicts growth. The data also fit a model in which an improvement in the general reasoning ability of students allows those with lower initial levels of math achievement to catch up. This can be interpreted as evidence that improving growth rates in PS ability may lift growth rates in math achievement.

Subjects: Educational Psychology; Educational Research; Research Methods in Education

ABOUT THE AUTHOR

Alvin Vista was professionally engaged as part of a research team at the Assessment Research Centre, University of Melbourne, in conducting the study for this paper, and he completed his PhD in Educational Measurement there. Alvin is currently a research fellow at the Australian Council for Educational Research. His main research interests are in test development and educational measurement in cross-cultural settings. The data for this study were collected as part of a broader research project called the Assessment and Learning Partnerships (ALP) conducted by the Assessment Research Centre. The collaboration for the data collection process and the development of the research instruments included the contributions of M. Pavlovic, N. Awwal and P. Robertson as the research team, under the supervision of Patrick Griffin and Esther Care as the principal investigators.

PUBLIC INTEREST STATEMENT

This paper looks at the relationship between maths achievement, reading comprehension (RC) skill and problem solving (PS) ability. An increasing linear trend has been established for maths achievement. There is statistically significant variation in both the initial maths achievement levels and the growth rates in maths achievement. RC skill affects the initial status in maths achievement more than it affects the trajectory of growth. The rate of growth in PS ability significantly predicts the rate of growth in maths achievement. However, growth in PS ability is less affected by time-related factors than growth in maths achievement.

Background

Measuring growth in achievement is central to educational measurement. Single measures may be useful when the goal is simply to assess the level at which a student stands in any particular area of study. But in order to assess improvement and rate of learning, we need measures of growth that allow us to determine not just the current level but also the rate of growth, so that we can project the level of achievement over time. In conjunction with measuring growth, it is also important to determine whether certain factors predict this growth.
Mathematics achievement is one of the major components of overall academic achievement (Duncan et al., 2007) and predicts real-life attainment beyond what is attributable to general intelligence (Rivera-Batiz, 1995). It is therefore not surprising that a substantial number of studies on academic achievement focus on math as one of its most important domains. There have been Australian studies on the links between math achievement and demographic factors (e.g. Clarkson (2006), Clarkson and Leder (1984), Hemmings, Grootenboer, and Kay (2011)), as well as the link between these factors and more general cognitive abilities (e.g. Carstairs, Myors, Shores, and Fogarty (2006), Dandy and Nettelbeck (2002)), but most of these studies are based on limited samples. More importantly, there is only a handful of Australian studies that look at changes in math achievement over time. Two notable studies with large-scale national data are by Marks and Ainley (1997), who looked at Australian literacy and numeracy data among grades 7 to 9 students over 20 years, and by Rothman (2002), who extended the study by including younger students and increasing the time span to almost 25 years. Both of these studies looked at the effects of demographic variables (including language background) on achievement trends. However, they did not examine the trajectory of math growth in a way that would enable theoretical models to be developed nor did they look at what could predict the trajectory of such growth. There has been no Australian study of comparable scale that has focussed on math achievement growth trajectory, and in particular, no study on the association between this growth trajectory and general cognitive abilities. Even internationally, there are relatively few studies that combine investigating models of math growth trajectories with predictors such as reasoning ability and reading comprehension (RC) skills (Lee, 2010). This gap in research is what this study attempts to fill. It is well established that problem solving (PS) and general cognitive abilities strongly predict math achievement (Deary, Strand, Smith, & Fernandes, 2007;Geary, 1994;Jensen, 1998;Taub, Keith, Floyd, & McGrew, 2008). This study, therefore, extends what is known about this association between general cognitive abilities and math achievement by investigating the dynamics of how reasoning ability affects the trajectory of growth. The inclusion of RC skill as predictor in models of growth trajectories bridges the gap in research by providing factors that can explain the mechanisms of growth. Studies on growth trajectories for math, reading or both have been reported (Guglielmi, 2012;Lee, 2010). However, as in the case of Lee's study, the models are useful in formalising the mechanisms of math and reading growth over time but they do not explore the association between these two constructs, nor do they look at possible external factors that can affect the mechanisms of growth. Primi, Ferrão, and Almeida (2010) address this need to include predictors that can explain growth trajectories, but their study is still limited in that no covariates were included in their models. Guglielmi (2012) did explore the association between math and reading achievement while also including relevant covariates that can inform on the association between the constructs of interest, but it is focussed on English-language learners. 
This paper aims to combine the general research thrust of modelling the trajectory of math growth with a more general and nationally representative sample, in a framework that includes both direct predictors and possible covariates based on naturally occurring groups. This is important because the inclusion of this reasoning component in this study allows measurement of a construct that affects both numeracy and literacy components of academic performance while being relatively independent of schooling. A study by Parrila, Aunola, Leskinen, Nurmi, and Kirby (2005) elucidated the mechanisms of growth for reading development, but they did not extend their models to include other domains of learning (such as math) or domain-general cognitive processes. There is limited understanding of what factors might influence how math achievement gaps narrow and to what extent these factors influence that rate. Exploring the mechanism of these relationships could provide a better understanding of how growth in math achievement is affected by important cognitive processes such as reasoning and language skills. In turn, a better understanding of these mechanisms could inform targeted interventions in areas that result in optimal growth. It is important to investigate the link between language proficiency and performance in math, yet research on the association between language proficiency and trends in math achievement is rare. A comprehensive review of trends in math achievement in the US by Tate (1997), spanning some 15 years, found only one study that looked at language proficiency as a predictor. There is a similar lack of studies on the predictors of math achievement in Australia, especially studies that look at predictors of math achievement growth (Hemmings et al., 2011). The measurement of RC skill is part of literacy assessment, which involves both reading and writing as the two main components of literacy. From an assessment perspective, literacy is defined in the curriculum as consisting of two main processes: comprehension and composition of texts (Australian Curriculum, Assessment & Reporting Authority, 2012). RC is the component and more delimited construct that is used in this study. As a construct, RC can be defined as a "conceptualisation of skills and knowledge that comprise the ability to make meaning of text" (Morsy, Kieffer, & Snow, 2010, p. 3). In order to develop models of growth trajectories that take into account Australia's classroom diversity, this study includes an analysis of how the dynamics of this association are influenced by proficiency in the test language. Important language skills (e.g. decoding and comprehension, phonological processing) either directly predict math performance or at least covary with it (Hart, Petrill, Thompson, & Plomin, 2009; Vilenius-Tuohimaa, Aunola, & Nurmi, 2008); thus the test language has direct implications for non-native speakers of this language. This effect of test language on the math performance of non-native speakers is not uniform because of variation in language loadings for the different types of math skills being measured (for example, arithmetic calculation may carry lower language loads than more complex word problems) (see, for example, findings in Fuchs et al. (2005, 2006)).
Regardless of differential effects that depend on the skills being measured and the type of test items, it can be argued that skills in the test language would have some influence on the performance in these tests, just as reading skills overlap with general cognitive ability in their loading on math ability (Hart et al., 2009).

General reasoning ability and mathematics ability

General reasoning ability is broadly defined and involves both inductive and deductive reasoning, as well as divergent and convergent thinking skills. While these skills are essential in curricular domains such as math, tests of general reasoning ability are usually constructed to be domain-independent and generally do not require specific math knowledge. In this study, the PS test used measures general reasoning ability and creative thinking skills; the test does not require any specialised math skills. However, it is reasonable to expect that performance on a general PS test and a math ability test will be highly correlated. This association between general reasoning ability and math performance is influenced by a number of factors. For example, performance on measures that test visuospatial working memory correlates with math achievement, but the effect is mediated by fluid intelligence (Kyttälä & Lehto, 2008). Fluid intelligence (Gf) is one of two general factors of intelligence put forward more than 60 years ago by Cattell (1941, 1963), the other being crystallised intelligence (Gc). In what is now known as the Gf-Gc Theory (Carroll, 1984; Cattell, 1963), the two factors differ mainly in how they depend on specific knowledge. Gc is entangled with school-based skills and areas of learning, making the measurement of Gc difficult to separate from the measurement of achievement (Cattell, 1963), while Gf is conventionally accepted to be domain-general even if it strongly correlates with specific school-based skills (e.g. mathematical reasoning; Floyd, Evans, and McGrew (2003)). Gf and working memory predict multi-tasking performance (Konig, Buhner, & Murling, 2005), which is crucial to math achievement. This study, however, is not focused solely on Gf as a predictor of math achievement. By using a measure that assesses general reasoning ability, this study takes into account both fluid and crystallised intelligence. Consequently, the measures of reasoning ability used in this study are not strictly nonverbal, and in fact a considerable proportion carries significant verbal loads. This dual and inclusive approach may offer unique benefits compared with using language-free measures of Gf in examining mediating effects of language proficiency. An Australian study that examined the effect of linguistic background on performance in an intelligence test found that language could be just one of two independent factors that have an effect on cognitive abilities and that English proficiency operates independently of sociocultural factors (Carstairs et al., 2006). In their study, group differences existed between English-speaking background (ESB) and non-English-speaking background (NESB) participants even if the NESB participants spoke English as their first language. This shows that language proficiency can influence performance on the language-dependent portions of an intelligence test while sociocultural factors also exert an influence on the nonverbal portions (Carstairs et al., 2006).
Measuring math achievement Benchmarks for measuring math achievement in the levels of schooling that are within the context of this study are based on standardised and national measures of math both in Australia and across the world. There is much debate on how math achievement should be defined, what measures are appropriate and how levels of achievement should be delineated. However, these wide range of issues are beyond the scope of this study and as such, discussions will be limited to what is the current de facto standard of measuring math achievement and how these measures are used within the context of each country's governing institutions to define the levels of achievement. In other words, this study does not seek to redefine the measurement and structure of the construct that is math achievement, but is instead focussed on the actual measurement data. This means that the study utilises test data from well-established, preferably large-scale and standardised measures of math achievement. In Australia, national testing is comparatively recent and the main measure of math ability is the numeracy test within a much larger system of national testing called the National Assessment Program-Literacy and Numeracy (NAPLAN). NAPLAN is the main instrument for Australia's assessment programme that evaluates educational outcomes at the national level (Ministerial Council for Education, Early Childhood Development & Youth Affairs, 2009). Beginning in 2008, NAPLAN is administered annually for students in years 3, 5, 7 and 9 in four domains of learning: reading, writing, language conventions (spelling, grammar and punctuation) and numeracy (Australian Curriculum Assessment & Reporting Authority, 2011). Although the available data and scope of population being tested in NAPLAN are comprehensive, there is an important limitation that needs to be considered in using it as a main source of data for this study. As mentioned earlier, in any given year, NAPLAN is only administered to years 3, 5, 7 and 9. This is a major logistical limitation and thus, for this study, a parallel and concurrently validated measure of math achievement was used-the Assessment and Learning Partnerships (ALP) tests (for details, see Assessment Research Centre, (2004,2011)). Measuring general reasoning ability There is a need to delineate the measurement of general reasoning ability in the curricular and school-level contexts by framing it as a measure of PS ability because reasoning ability is an abstract and relatively broad construct. By framing the test as a measure of PS ability, we limit the scope of the construct that we need to measure and are able to take advantage of an existing framework that specifies the curriculum standards. The instrument for measuring PS ability in this study is a multiple choice test but it maximises the information about reasoning processes by applying findings from modern test development. In mathematical PS, for example, a task that is constructivist in approach may allow or even encourage the students to employ their own strategies based on their collection of math concepts, in contrast to tasks that require a specific mathematical approach to be demonstrated (Clarke, 1992). The PS items in the ALP tests used for this study were designed such that they are open-ended in terms of strategies needed to solve them. In other words, they do not require fixed concepts or rigid itemspecific processes to solve. Another important issue is that RC skill should not interact with PS ability. 
As the main predictor in the study design, the measure of PS ability (and thus the scope-limited measure of reasoning skills) must not be systematically biased with respect to RC skill. To address this issue, the instrument needs to have English reading loads that do not impair performance on it. A purely language-free instrument would be ideal, but it would not be feasible because the instrument has to fit within the Victorian curricular specifications. Nevertheless, the study design incorporates statistical methods to check whether the instrument exhibits systematic bias for any of the groups. This is done through a differential item functioning analysis on the PS instrument and by designing the instrument such that it does not require any specialised math skills that are specific to the grade level of students.

Research questions

Given the framework that math learning is linked with general reasoning ability (e.g. Casey, Pezaris, and Nuttall (1992), Hart et al. (2009), Primi et al. (2010)), this paper extends this by seeking to establish that the mechanism of this association between PS ability and math achievement is not uniform longitudinally. Further, this paper also aims to define the role of RC skill in this framework. Therefore, the two major research questions are: "What is the rate of growth in math achievement level as measured at three timepoints?" and "Do PS ability and RC skill predict the initial status and rate of growth in math achievement?", with sub-questions on the type of growth (linear or not) and the dynamics of growth (slowing or increasing over time). Results from a comparable study, but one that focuses on reading development (Parrila et al., 2005), showed that growth can be nonlinear. The number of timepoints in this study is limited, but a linear model can still be explicitly tested for significant deviations from linear trajectories. Findings from the literature show a positive association between initial level and subsequent level of math achievement (Jordan, Hanich, & Kaplan, 2003; Jordan, Kaplan, Ramineni, & Locuniak, 2009; Lerkkanen, Rasku-Puttonen, Aunola, & Nurmi, 2005), and we predict that the same association will be supported by results from this study. Positive correlations between reasoning ability and math (Fuchs et al., 2010; Geary, 1994, 2011), reasoning ability and RC (Evans, Floyd, McGrew, & Leforgee, 2002; Riding & Powell, 1987; Vellutino, 2001), and RC and math (Jordan, Kaplan, & Hanich, 2002; Lerkkanen et al., 2005) are well reported in the literature. We expect these positive associations to be supported by our results, but we also aim to put these three constructs together and elucidate the relationships among them. The answers to the research questions in this study can have important implications for educational interventions that depend on a better understanding of how growth in maths achievement is affected by important cognitive processes such as reasoning and language skills.

Main design for modelling latent growth

The longitudinal study design and the way the data were collected have features that are suitable for an analysis of change to be conducted (see Singer and Willett (2003)): (a) multiple waves of data (at least three to enable testing for linear growth), (b) a time-structured data-set with uniform and equally spaced timepoints, and (c) continuous interval-scale outcome measures.
Among the more sophisticated ways of analysing longitudinal data is linear growth modelling (LGM) with latent variables. This LGM approach can be traced back to seminal works by Rao (1958), Tucker (1958) and, under the SEM perspective as implemented in this study, works by Bollen and Curran (2006), McArdle (1987, 2009), McArdle and Epstein (1987), Meredith and Tisak (1990), and Muthén (1991). The two main advantages of using LGM under an SEM approach over OLS regression to analyse longitudinal data are greater flexibility in the treatment of measurement error variances, and the capability to analyse change over time for individuals rather than only considering group means (Kline, 1998). The simplest longitudinal growth model is that of linear change over time, based on three or more observations of the sample cases across time periods. This model is described in detail elsewhere (e.g. Bollen and Curran (2006), Duncan, Duncan, Strycker, Li, and Apert (2006), Kline (1998), Muthén (1991), Singer and Willett (2003)) but is summarised below in the manner in which it was implemented in this paper. Suppose we want to model linear growth for each individual i = 1, ..., N measured over a certain time period (t = 1-3 for this study). The unconditional model trajectory equation in scalar expression is:

y_it = α_i + λ_t·β_i + ε_it  (1)

The two main factors in this equation are α and β, which represent the random initial status and the random rate of growth, respectively. These two main factors are the latent variables in the growth model. The growth term is multiplied by a coefficient, λ_t, representing intervals of time, and is typically a constant where λ_1 = 0, λ_2 = 1, λ_3 = 2, and so on. The unobserved error of measurement is represented by ε_t. The equations for the two factors of interest are:

α_i = μ_α + ζ_αi  (2.1)
β_i = μ_β + ζ_βi  (2.2)

where μ_α and μ_β are the means of the initial status and growth rate, respectively, while each ζ_i represents the residual term associated with its particular endogenous variable. Combining the intercept and slope equations with the trajectory equation yields a combined model with both fixed and random components known as the reduced-form equation (Bollen & Curran, 2006):

y_it = (μ_α + λ_t·μ_β) + (ζ_αi + λ_t·ζ_βi + ε_it)  (3)

Translating the trajectory equation to a path diagram and adopting the conventional nomenclature (Jöreskog, 1970), Figure 1 presents the basic growth model. This unconditional model can be extended to incorporate exogenous (independent) predictors for the growth and initial status factors. The matrix form of the intercept (Equation (2.1)) and slope (Equation (2.2)) equations is:

η = μ_η + ζ  (4)

where η is a k × 1 vector of latent factors (e.g. η_α and η_β) and μ_η is a k × 1 vector of latent variable means. Including predictor variables in the model results in an expansion of Equation (4) into:

η = μ_η + Γx + ζ  (5)

where Γ is a k × p matrix of fixed regression parameters and p is the number of predictors in x. Figure 2 similarly extends the visualisation in Figure 1 to include two predictors (x_1 and x_2). Here we see that the latent factors (η_α and η_β) from Figure 1 become latent dependent (or endogenous) variables that are regressed on x_1 and x_2. These exogenous predictors are therefore explanatory variables for the initial status of the process (η_α) and the rate of growth (η_β). However, this conditional model treats the covariates as time invariant (because they were only measured once) and as error-free indicators that are supposed to represent an underlying factor (Kline, 1998).
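To make the unconditional model in Equations (1)-(3) concrete, the sketch below simulates individual trajectories from a random intercept and a random slope and recovers the implied timepoint means. The latent means and (co)variances plugged in are the values reported later in the Results (initial status mean 0.36, growth mean 0.26, variances 0.60 and 0.03, covariance −0.08); the residual standard deviation and the normality of the latent factors are assumptions made purely for illustration, and this is not the AMOS estimation actually used in the study.

```python
# Illustrative simulation of the unconditional linear latent growth model,
# y_it = alpha_i + lambda_t * beta_i + eps_it, with lambda_t = 0, 1, 2.
import numpy as np

rng = np.random.default_rng(42)
n = 5886                               # sample size reported in the study

# Latent factor means and (co)variances as reported in the Results section
mu = [0.36, 0.26]                      # mean initial status, mean growth (logits)
cov = [[0.60, -0.08],
       [-0.08, 0.03]]                  # var(alpha), cov(alpha, beta), var(beta)
alpha, beta = rng.multivariate_normal(mu, cov, size=n).T

lambdas = np.array([0, 1, 2])          # equally spaced six-month timepoints
resid_sd = 0.3                         # assumed measurement error SD (not reported)
eps = rng.normal(0.0, resid_sd, size=(n, 3))

# Observed scores for each individual at each timepoint (reduced-form equation)
y = alpha[:, None] + np.outer(beta, lambdas) + eps

print("Timepoint means:", y.mean(axis=0).round(2))              # ~0.36, 0.62, 0.88
print("Corr(initial status, growth):",
      np.corrcoef(alpha, beta)[0, 1].round(2))                  # ~ -0.61
```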
Treating the covariates as time-invariant, error-free indicators is a limitation that can be addressed by extending the conditional model to transform the predictor variables into more accurate latent factors, adapting an approach by Walker, Acock, Bowman, and Li (1996) and applying it to a framework similar to simultaneous growth models (Curran, Harford, & Muthén, 1996). Instead of using a single measure of PS ability and using that indicator to predict both initial status and growth of math achievement, PS ability can be treated as a latent factor and modelled for growth. The latent initial status and latent growth for PS ability can then be used as covariates for the respective latent variables for math achievement. This extension into a simultaneous growth model is described further in the methodology.

Participants

Participants in this study are government school students who participate in the ALP testing every October and March. The ALP tests are designed to be administered to grades 3-10 students across the participating Department of Education and Early Childhood Development (DEECD) regions in Victoria. Testing commenced in October 2009, and tests were administered twice a year (March and October) every year thereafter. However, this study is focused only on grade levels 3-8, and only on participants from the 2010 and 2011 test administrations. The participants for the ALP project within the scope of this study come from 61 government schools in 6 DEECD regions in Victoria and involve around 5,886 students of diverse linguistic backgrounds and a wide range of English-language proficiency. The sample characteristics follow the school population characteristics in terms of gender, language background and other demographic data that are collected by schools in Victoria. The sample distribution is also reasonably close to state and national distributions for the general population within the age range of this study (for detailed demographic tables, see Vista, 2013).

Instruments and methods

The main instruments are the numeracy, literacy and PS tests from the ALP project (Assessment Research Centre, 2011). The development process for the ALP tests and the validation analysis (with NAPLAN as the validating measure) were described in detail elsewhere (Vista, 2013). As an overview, the tests are multiple-choice type, Adobe Flash™-based and administered online. The PS test items fall into three broad groups based on the type of reasoning process involved: spatial, symbolic or verbal reasoning. These items were matched to the Victorian Essential Learning Standards (VELS) under the general capabilities domain (Victorian Curriculum & Assessment Authority, 2012). The numeracy items have the following broad content areas: number, geometry, measurement, chance and data. The literacy items primarily measure RC skill (used in this study as the main RC measure) and were developed based on a previous large-scale literacy assessment study (see Assessment Research Centre (2004)). The ALP tests are on a single scale calibrated using the Rasch model. As such, the "scores" reported are actually weighted likelihood estimates (WLE) of student ability in logits (centred at 0 with a standard deviation of 1). The numeracy and literacy tests were validated using the nationally administered NAPLAN (for details, see Vista, 2013). Results show significant correlations between ALP and NAPLAN scores 1 within the same test administrations, implying concurrent validity.
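Because the reported "scores" are Rasch-model ability estimates in logits, a one-line version of that model may help interpret them: the probability of a correct response depends only on the difference between student ability and item difficulty on the same logit scale. The snippet below is a generic illustration of this relationship; the item difficulties are hypothetical and are not values from the ALP item bank.

```python
# Minimal sketch of the Rasch model behind the reported logit scores.
# Item difficulties below are hypothetical, chosen only to show the scale.
import math

def p_correct(theta: float, difficulty: float) -> float:
    """Rasch model: probability of a correct response given ability and difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

student_ability = 0.36                       # logits (e.g. the mean initial Num status)
item_difficulties = [-1.0, 0.0, 0.36, 1.5]   # hypothetical item locations (logits)

for b in item_difficulties:
    print(f"item difficulty {b:+.2f} -> P(correct) = {p_correct(student_ability, b):.2f}")
```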
The validation results also show that when the same cohorts were administered the ALP and NAPLAN tests two years apart, the scores correlated significantly, implying predictive validity. Regression analysis further supported this finding of predictive validity from the correlation analysis (see Vista, 2013). The measurement reliabilities of all subtests 2 for the Num, PS and RC test measures are reported in Table 1, with colours indicating the targeted ability levels for each subtest, arranged in order of relative difficulty (easy to difficult from top to bottom). The test reliability measures used are Cronbach's α and the EAP/PV index (the average expected a posteriori/person variance).

The SEM approach adopted for this study is strictly confirmatory and concerned mainly with the confirmation of a predetermined structural model. The structural models being tested are simple unconditional latent growth models and conditional models with well-defined covariates, following frameworks that have been well established in the literature (e.g. Bollen and Curran (2006), McArdle and Epstein (1987), Meredith and Tisak (1990), Muthén (2004)). The research questions were statistically tested by constructing models that were constrained to control specific model components. By comparing these constrained models with freely estimated models, null hypotheses on those specific model components can be tested for statistical significance. For example, in testing for linear growth, the baseline model has no restrictions while the constrained model has path coefficients that reflect a linear trend. If the fit of the constrained model is acceptable, this suggests that there is no evidence for us to reject the model where linear growth is assumed.

Simultaneous growth model

Latent growth for PS ability was modelled using three test administrations for PS ability, matching the three timepoints of the Num test administrations. Each test administration is one semester apart (timepoint 1 = 1st semester 2010, timepoint 2 = 2nd semester 2010 and timepoint 3 = 1st semester 2011). Linear growth for PS ability was not the main focus of this analysis, so the path loadings from the latent PS ability growth variable were not constrained to be linear. Further, due to evidence that RC skill may be partially mediating the association between PS ability and growth in math (see Vista, 2013), this mediating effect is included in the extended model. Modelling predictors that are latent variables themselves is straightforward and involves using the latent variables of one growth model to predict the latent variables of another (see, for example, Kline (1998), Muthén (1991)). Thus, the latent initial status and growth variables of an LGM for PS ability became predictors for both latent variables in the growth model for math achievement. This model (designated as LGM-E1) is visually presented in Figure 3.

Dealing with missing data

Latent growth modelling was done using AMOS, and missing data were dealt with by computing maximum likelihood estimates through a procedure called full information maximum likelihood (FIML) (Anderson, 1957). Simulation studies by Wothke (2000) show that, when applied to growth curve modelling, FIML is superior to pairwise and listwise deletion, providing parameter estimates with the least bias. FIML also tends to produce the least bias among comparative methods used in SEM (Arbuckle, 1996; Enders, 2001; Enders & Bandalos, 2001).
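The FIML idea can be illustrated with a deliberately simple case: each student contributes the likelihood of whatever scores they actually have, rather than being dropped for having an incomplete record. The sketch below uses a bivariate normal toy example with one variable partially missing; it is a conceptual illustration under assumed parameter values, not the AMOS implementation used in the study.

```python
# Toy illustration of the FIML principle: incomplete cases contribute the
# likelihood of their observed values instead of being deleted.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)

# Simulate two correlated scores, then delete 30% of y at random
n = 500
true_mu, true_cov = [0.4, 0.6], [[1.0, 0.5], [0.5, 1.0]]
data = rng.multivariate_normal(true_mu, true_cov, size=n)
missing = rng.random(n) < 0.3
x_obs = data[:, 0]
y_obs = np.where(missing, np.nan, data[:, 1])

def neg_loglik(theta):
    mu_x, mu_y, log_sx, log_sy, z = theta
    sx, sy = np.exp(log_sx), np.exp(log_sy)
    rho = np.tanh(z)                                   # keep correlation in (-1, 1)
    cov = [[sx**2, rho*sx*sy], [rho*sx*sy, sy**2]]
    complete = ~np.isnan(y_obs)
    # Complete cases: bivariate normal density
    ll = multivariate_normal([mu_x, mu_y], cov).logpdf(
        np.column_stack([x_obs[complete], y_obs[complete]])).sum()
    # Incomplete cases: marginal density of the observed variable only
    ll += norm(mu_x, sx).logpdf(x_obs[~complete]).sum()
    return -ll

fit = minimize(neg_loglik, x0=[0, 0, 0, 0, 0], method="Nelder-Mead")
print(f"FIML estimates of the means: {fit.x[0]:.3f}, {fit.x[1]:.3f}")
```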
More important, apart from producing unbiased estimates in missing completely at random (MCAR) and missing at random (MAR) types of data missingness (Rubin, 1976), FIML procedure is also robust to slight deviations from maximum likelihood estimation assumptions (e.g. multivariate normality) (Arbuckle, 1996). The type of missing values in this study is a scale-level missingness (e.g. students might take the Num and Lit tests but not the PS test). Further, since the subsequent administrations included new participating schools and there is no way to predict in the initial test administration which schools will decide to participate in the future, this type of missingness is closer to what is defined as MAR 3 rather than MCAR or missing not at random (MNAR) (refer to Little and Rubin (1987) for details on the mechanisms of missingness). Main LGM results The basic unconditional model for latent growth is presented in Figure 4. The two latent variables are defined as representing initial status and growth. These latent variables in turn load into the observed variables, which measure the levels of math achievement in three time periods. The path loadings from the initial status latent variable are fixed to 1, indicating that this latent variable is an intercept variable that sets the initial level of math achievement for the model. To test for linear growth, the default baseline model freely estimates the regression weight from the latent growth variable to the observed time 2 variable (denoted in the path diagram as b2), while a second model constrains this regression weight to 1. The baseline model is saturated (i.e. zero degrees of freedom), however by fixing b2 = 1, the constrained model can now be tested using model χ 2 among other fit indices. If this model fits, meaning that regression weights of 0, 1 and 2 are acceptable, this implies that the latent growth variable is a linear parameter representing an amount of increase that is uniform over time (Duncan et al., 2006). Comparative fit indices showed that the constrained model has good fit, χ 2 (1) = 10.08, RMSEA = 0.039, p (RMSEA≤0.05) = 0.75, CFI = 0.997, suggesting that linear growth is tenable. The parameter estimates for this linear model are presented in Figure 5. The mean initial status is 0.36, indicating that the average 2010 time 1 WLE estimates of student ability in math is 0.36 logits. The variance for the initial status mean is 0.60, SE = 0.03, p < 0.01, suggesting that there is statistically significant variation in the initial math achievement levels across all participants. The average growth is 0.26 logits per time period, again with significant variation of 0.03 across all participants, SE = 0.01, p = 0.02. Thus, the linear growth model based on these parameter estimates is y = 0.36 + 0.26t, for t = 0, 1, and 2. The implied means for each timepoint therefore are: Num 2010_1 = 0.36, Num 2010_2 = 0.62 and Num 2011_1 = 0.88. There is a negative and statistically significant covariance of −0.08 (r = −0.61) between initial status and growth, SE = 0.02, p < 0.01 suggesting that students with higher initial Num scores have a lower rate of growth, although this association has to be interpreted in the context of the small amount of variation in growth rates. This is unexpected, especially in comparison with results from studies that showed positive association between initial levels and subsequent levels of performance (e.g. Lerkkanen et al. 
(2005)), but not contradictory, since our results showed a negative association between initial level and the rate of growth in math achievement.

Simultaneous growth model

The simultaneous growth model did not achieve exact fit, χ 2 (11) = 77.30, p < 0.01, but is still tenable, RMSEA = 0.032, CFI = 0.993, and the parameter estimates can still be interpreted with meaning. This allows us to examine the path loadings and the regression weights to answer the research questions. Proceeding to evaluate the regression weights, latent PS ability growth does not significantly predict initial status in math, B = 0.17, SE = 0.16, p = 0.29. The direct effect of the mediating variable, RC skill, on growth in math is also nonsignificant, B = 0.02, SE = 0.02, p = 0.32. The covariance between initial status and growth for PS ability is nonsignificant, Cov(PS init , PS growth ) = −0.01, SE = 0.01, p = 0.26. For this extended model, the predictors account for 82% of the variability in initial level of math achievement and 53% of the variability in latent growth. Examining the factors that load onto these latent dependent variables, the main results with substantive relevance are the statistically significant predictors of the rate of growth in math achievement (Table 2). Students with higher initial PS scores tend to also have higher initial levels, B = 0.66, SE = 0.03, p < 0.01, but a slower rate of growth, B = −0.12, SE = 0.02, p < 0.01, in Num scores. This association becomes slightly weaker if the partial mediating effect of RC skill is taken into account, reducing the total effect of latent initial status of PS on latent growth of Num to −0.10 (indicating an indirect effect of 0.02). The rate of growth in PS ability also significantly predicts the rate of growth in math achievement, this time positively, B = 0.53, SE = 0.14, p < 0.01. This can be interpreted as meaning that, for every 1 logit increase from the mean growth in PS ability over the time period in this model, there is a corresponding increase of 0.53 logits in math ability. The mean rate of growth for PS ability is 0.18 logits, with statistically significant variability among the students in the data (s 2 = 0.04, SE = 0.01, p < 0.01). The latent means, covariances and path loadings are presented in Figure 6. The summarised means and variances are reported in Table 3.

Discussion

Preliminary analyses established that there is growth in Num scores across the three time periods that were observed. The data fitted a regression model in which this growth is assumed to be linear. Fitting a latent growth model to the data resulted in good fit, with results indicating that, in an unconditional model, Num scores grow at an average of 0.26 logits per time period (i.e. around six months). The variation in the rate of growth is significant; some students have faster rates of growth while others have slower rates of growth. A statistically significant variation is also present in the initial status of math achievement. There is statistically significant covariance between a student's initial status and the rate of growth but, given that the variance in growth rates is small, the practical implications may not be substantial. After establishing that a latent growth model with linear growth parameters is tenable, the next step is to include the predictors specified previously and to analyse whether or not they account for the latent growth. This simultaneous growth model (LGM-E1) has good fit, although relatively worse compared with the unconditional model.
This could be due to added model complexity, given that this model has comparatively more estimated parameters and is thus less parsimonious (Fan, Thompson, & Wang, 1999; Schumacker & Lomax, 2004). Because model fit remains good, the parameter estimates can still be meaningfully interpreted. Initial PS ability predicts initial level of math achievement, supporting previous findings (Fuchs et al., 2010; Geary, 1994, 2011; Primi et al., 2010). This is expected, so we are more interested in the dynamics of growth in math achievement and PS ability. Results showed that the average rate of growth in PS ability is 0.18 logits per time period. The magnitude of this rate of growth is almost double the mean rate of growth for math achievement, which could be interpreted as an indicator that PS ability is more amenable to change than math achievement. This rate of growth significantly predicts the rate of growth for math achievement such that, on average, those who are growing in PS ability at a rate 1.0 logit faster tend to grow in math achievement at a rate 0.53 logits faster as well. This is somewhat offset by the inverse association between latent initial status in PS and latent growth in math, such that those who are 1.0 logit higher in initial PS ability have a rate of growth in math that is lower by around 0.10 logits per time period (direct effect of −0.12 plus indirect effect of 0.02). But because the loading on math growth from PS growth is larger than the loading from PS initial status, those with higher rates of growth in PS ability will still end up with higher rates of growth in math regardless of their initial PS scores. The results from the extended model also allow for the comparison of the growth rates for both math achievement and PS ability. According to the model, students have an average rate of growth in PS ability that is higher than their rates of growth in math achievement (M Num = 0.10, SE = 0.05, p = 0.03 vs. M PS = 0.18, SE = 0.04, p < 0.01). Interestingly, there is no covariation between latent initial status and growth of PS ability, but the variance in PS ability latent growth is significant (see Table 3). This suggests that, for whatever reason, one's growth rate in PS ability does not appear to be associated with one's level of PS ability in the beginning. Whatever is causing some rates of growth in PS to be higher than others lies outside the model. This finding could have important implications for pedagogy and the developmental framework approach to learning, to be discussed in more detail later on. The association between PS ability growth rate and trajectory in math achievement is presented in Figure 7. 4 Here we can see that, as the growth rate in PS ability increases, the growth rate in math achievement also increases (i.e. the slope of the trajectory in math achievement over time becomes steeper). A simplified plot that takes into account the effect of initial PS ability status is presented in Figure 8. In this plot, trajectories for three levels of initial status are shown, each with a specific rate of PS ability growth in relation to the mean values for both latent variables. Here we see that those who have an initial PS ability 1 SD below the mean but with a rate of growth 1 SD above the mean (low-start, high-growth or LSHG group) are catching up in terms of math achievement with those who started 1 SD above the mean but have growth rates in PS ability 1 SD below the mean (high-start, low-growth or HSLG group). [Figure 8 caption: Comparison of math trajectories between HSLG and LSHG groups, relative to those with average PS ability initial status and growth rates.]
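The Figure 8 comparison can also be approximated numerically from the reported coefficients (initial Num ≈ 0.66 per logit of initial PS; Num growth ≈ −0.12 per logit of initial PS plus 0.53 per logit of PS growth, around the reported means). The sketch below is only a rough extrapolation: the mean-centred additive parameterisation, the 1-logit SD assumed for initial PS ability, and the omission of residual variation are all simplifying assumptions, so the exact crossover period should not be read as a finding of the study.

```python
# Back-of-the-envelope extrapolation of the Figure 8 comparison.
# Coefficients and means come from the reported model; the PS initial-status SD
# (1.0 logit) and the mean-centred additive form are simplifying assumptions.
MEAN_NUM_INIT, MEAN_NUM_GROWTH = 0.36, 0.10        # logits, reported
MEAN_PS_GROWTH, SD_PS_GROWTH = 0.18, 0.04 ** 0.5   # reported variance 0.04 -> SD 0.2
SD_PS_INIT = 1.0                                    # assumed (logit scale, SD ~ 1)

def num_trajectory(ps_init_dev, ps_growth_dev, periods=8):
    """Predicted Num score per six-month period for given PS deviations (logits)."""
    init = MEAN_NUM_INIT + 0.66 * ps_init_dev
    growth = MEAN_NUM_GROWTH - 0.12 * ps_init_dev + 0.53 * ps_growth_dev
    return [init + growth * t for t in range(periods)]

# Low-start/high-growth (LSHG) vs. high-start/low-growth (HSLG) groups (+/- 1 SD)
lshg = num_trajectory(-SD_PS_INIT, +SD_PS_GROWTH)
hslg = num_trajectory(+SD_PS_INIT, -SD_PS_GROWTH)

for t, (lo, hi) in enumerate(zip(lshg, hslg)):
    marker = "  <- LSHG at or above HSLG" if lo >= hi else ""
    print(f"t={t} ({t/2:.1f} yr): LSHG={lo:.2f}, HSLG={hi:.2f}{marker}")
```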
The available data for this study do not show those with lower initial status reaching the level of those who started higher, but if we extrapolate the growth curves, we see the LSHG group eventually overtaking the HSLG group within two years (i.e. within four six-month time periods). The results also show an indirect effect of PS ability on math initial status of around 0.02 that can be attributed to RC skill. There is no evidence of any similar mediating effect on the latent growth variables. It is possible for a student's initial RC skill to be mediating between PS ability and math achievement early on, although the effect is weak. This is not surprising, as it is certainly possible that a portion of the latent difficulty in the math items may be attributable, but not exclusively, to reading loads (Guglielmi, 2012; Jordan et al., 2003; Landerl, Bevan, & Butterworth, 2004). In particular, Guglielmi's results based on large-scale data suggest that RC skill may be contributing to math achievement independent of general reasoning ability (Guglielmi, 2012). However, the nonsignificant path loading from RC skill to latent growth in math suggests that over time, as the student becomes more immersed in the classroom, initial levels of RC skill may have increasingly less effect on subsequent progress in math achievement compared with the influence from PS ability. From a pedagogical perspective, this does not lessen the importance of improving RC and related language skills among students. On one hand, there is evidence in the main LGM model that RC skill covaries with PS ability. This highlights the need to focus more on language skills, especially for NESB students, because the teaching of reasoning skills in Australia is essentially carried out in English. On the other hand, the results show that while RC skill is associated with initial status in math, growth in math achievement becomes independent of initial RC skill from that moment forward. This implies that the disadvantage in initial math achievement due to lower levels of RC skill can be compensated for by other factors. In addition, improving the PS ability of students can lift the math achievement of those who have relatively lower initial levels and allow them to catch up. This result confirms findings from the literature that math learning involves considerable general reasoning ability (e.g. Casey et al. (1992), Hart et al. (2009), Primi et al. (2010)). However, the strength of the association between PS ability and math achievement is not uniform longitudinally, such that the amount of variance in math growth explained by PS ability (and RC skill) decreases in magnitude over time for any individual regardless of their grade level at the start of this study. This means that time as a variable tends to have a negative effect on growth in math achievement. Because student year is related to levels of PS ability (i.e. those in the higher grade levels have higher PS scores), this effect is captured in the latent growth models such that PS ability has a negative loading on the latent growth factor for math achievement. The causes for this effect of time on the trajectory in math achievement are complex and may await future analyses with more comprehensive data on both within- and between-individual factors.
For example, future data on classroom changes over the course of a student's school life may provide insight on why growth appears to slow down in the secondary levels relative to early primary levels. Curricular shifts, changes in student motivation, even cognitive developmental maturation may all play very important roles in further explaining the shifts in math growth. In contrast to this time effect on math growth, growth in PS ability does not appear to be related to initial levels. It can therefore be argued that growth in PS ability is less affected by time-related factors compared with growth in math achievement. Another interesting result is that the average rate of growth for PS ability is almost double that of math achievement. This could have important implications for pedagogy because general reasoning skills have long been found to have strong associations with math learning (McGrew & Hessler, 1995;Taub et al., 2008) but have only been recently included as a formal component 5 of the curriculum in Australia. Once firmly established in the curriculum, targeted teaching of reasoning skills may be able to lift or enhance math learning if we follow the logical consequences of experimental evidence based on such interventions (Nunes et al., 2007). Implications and future directions The growth models allow the trajectory of math achievement to be extrapolated beyond the time span in this study within a certain reasonable extent. While both regression and latent growth models suggest an eventual levelling off in the trajectories of those who have high initial levels of PS ability, the models also imply that manipulating the rates of growth in PS ability can have a direct positive impact on math achievement growth. These findings have important implications for teaching under a developmental framework where assessment data are used primarily to improve learning regardless of the initial status in a particular domain of learning (Griffin, 2007). When applied to the teaching of problem solving and general reasoning skills, the initial status in PS ability would indicate the initial level of a student's readiness to learn (Griffin, 2007) so that teachers can scaffold content and approaches for optimal learning. This study shows that facilitating a higher growth rate in PS ability can translate into increased math achievement over time, regardless of initial status in either math or PS ability. In the extended latent growth model, predictors of math growth account for as much as 53% of the variance in latent growth in math achievement, with PS ability growth rate as the strongest predictor. The factors that account for the growth rate in PS ability are beyond the scope of this study and thus were not examined here. These could come for example from home environment changes or developmental maturity, as well as from a diversity of other factors. However, it is natural to consider that growth in PS ability may be facilitated through school intervention. A study by Nunes et al. (2007) found that training children in logical reasoning can improve their math learning more than children who were not given logical reasoning instruction. Their results are confirmed in this study's findings where growth in PS ability predicts growth in math achievement. The implications could be far-reaching because if we relate the findings in this study with the findings from Nunes et al. 
(2007), the beneficial effects of growth in PS ability through specific intervention could last for a substantial length of time and affect math learning across a wide area of the curriculum even if the general reasoning instruction does not target any specific math domains.

Limitations and recommendations

The main limitations of this study regarding scope (government schools in Victoria), range of academic year levels included in the study, and management of missing data are discussed in an earlier and related publication (Vista, 2013). Specific limitations include the relatively low number of timepoints in this study. Three timepoints enabled testing for linear growth, but more timepoints over a longer period of time would provide more data to describe the latent growth curves better. Extending the study has logistical implications as well as methodological challenges, especially concerning the heightened chances of missing data. Perhaps a balance between sample size and length of study could be considered for future study designs. Longer duration and more timepoints also need to be weighed logistically with the option to include additional independent cohorts for more robust cross-validation.

It is also recommended that the trajectory models in this study be validated by future research. Independent samples with either similar or different scope in terms of sample characteristics will allow a theoretical validation of the growth trajectories that were fitted to the data in this study. For example, country-level representative samples, samples that include a wider age-range, or cohorts based on longer time-spans are recommended. The inclusion of other demographic variables in the models, such as SES or parental education, may also be useful. These validation studies can help strengthen the findings or, if future results are contradictory, provide a basis for re-examination of the conclusions put forward here.

Finally, it is hoped that the issues and challenges tackled in this study provide some insight to future studies of similar nature. It is recommended that the results and implications find their way to policymakers so that they can be translated into operational terms. The research possibilities on the dynamics of math growth and factors that affect it remain exciting. It is up to future researchers to validate the findings as well as extend this study to look at growth dynamics in other areas of student learning.

Funding
The author declares no direct funding for this research.

Author details
Alvin Vista 1
E-mail: alvin.vista@acer.edu.au
1 Melbourne Graduate School of Education, University of Melbourne, Victoria, Australia; Australian Council for Educational Research, Camberwell, Australia.

Citation information
Cite this article as: The role of PS ability and RC skill in predicting growth trajectories of mathematics achievement, Alvin Vista, Cogent Education (2016), 3: 1222720.

Notes
1. NAPLAN scores are also on a uniform scale that spans from year 3 to year 9, and represent the same ability level over time (Australian Curriculum Assessment & Reporting Authority, 2011).
2. That is, the instruments composing each test before they have been horizontally and vertically calibrated into a single scale under the Rasch model.
3. This is only an assumption because we can only test whether or not the missing values are MCAR and not whether they are either MAR or MNAR (Newman, 2003; Schafer & Graham, 2002).
4. Because growth rate is not associated with initial status in PS ability, this graph is fixed at the mean level for initial status. In other words, this graph shows the association for those with the mean initial status of PS ability (M_PS = -0.36).
SCORPION ENVENOMATION IN BAGH-E MALEK, IRAN – A 5-YEAR STUDY

Scorpionism is a major health problem in many tropical countries including Iran. The aim of this study was to describe the epidemiological and demographic characteristics of people stung by scorpions in Bagh-E Malek, Iran. In this retrospective cross-sectional study, the information was gathered through evaluation of the records of stung patients referred to Shahid Tabatabai hospital of Bagh-E Malek from April 2008 to April 2012. A total of 7236 cases stung by scorpions were recorded, including 3115 males (43%) and 4121 females (57%). Approximately 42.1 percent of the sting cases occurred in the summer, followed by spring with 35.9% of stings. About 59.8% of stings happened in people aged 15-44 years. Most of the stings happened on exposed extremities (78.5%), mostly on the upper limbs (41.8%). The scorpion species were unknown, but 60.4% of them were yellow, 34.0% black, and 5.6% "other colors". Since the highest rate of scorpionism cases was reported in rural areas (74.2%), it is suggested that the main focus should be on education of rural people, especially women, who play a major role in the family. Additionally, evaluation of residential houses and the surrounding environment, together with information on how to clear away the equipment and other objects that scorpions may use as shelter, can also be effective in reducing the incidence of scorpionism.

INTRODUCTION

Scorpions (Arthropoda: Arachnida) are medically important arthropods distributed around the world, but they are usually abundant in warm and dry weather conditions (1). So far, about 1,500 species of scorpions have been reported from all over the world, of which only 30 species are medically important (2-5). According to scientific reports, about 51 species of scorpions belonging to four families (Buthidae, Scorpionidae, Hemiscorpiidae, and Diplocentridae) and classified in 14 genera have been reported from Iran. These scorpions cause 40,000 to 50,000 scorpionism cases and approximately 19 deaths each year in Iran, which is the highest reported in the Middle East (6-9). In a study (2015) in Fars, 58.6% of stung people were women, which was the highest rate (2). Most scorpionism cases and resulting deaths have been reported in Khuzestan province in southwestern Iran (1563 per 100,000 people), and the cities of Masjed Soleiman, Rāmhormoz, Izeh, Susa, and Bagh-E Malek were ranked first to fifth in this field, respectively (11). Since a greater percentage of the population of the studied city lives in rural areas, and since, due to the mountainous terrain and climate of the city, the dangerous scorpion Hemiscorpius lepturus is certainly present in this region, demographic and epidemiological data of stung people in this area are needed to carry out targeted preventive measures. For this reason, the present study was conducted to assess these characteristics in 7236 stung people in this city over a period of 5 years.

MATERIALS AND METHODS

The present study is a retrospective cross-sectional study, which was conducted during a 5-year period in a population of ... people according to the 2006 census.
The required data for the research were collected using a questionnaire completed for each patient in the hospital. The questionnaire consisted of demographic information, gender, geographic location of residence, the interval between the sting and serum receiving, serum injection method (intramuscular or intravenous), and the situation of the injured person after serum injection. All data were analyzed using SPSS software, and results less than 0.05 (P < 0.05) were considered significant.

RESULTS

... and July (15%), and the lowest numbers were observed in winter (4%) and in January and March, each with 2.1% (Table 1).

DISCUSSION

Given that approximately 1,230,000 scorpionism cases and 3,250 resulting deaths have occurred all over the world, and with the knowledge that treatment of scorpionism is complex, especially in terms of the use of anti-venom and the required systemic treatments, scorpionism is one of the major public health problems (1, 2, 6, 12).

In the present study, most people stung by scorpions were female (57% vs. 43% men). There are many studies about scorpionism with similar results (1, 11, 13-15). However, a number of studies have reached different results from those of the present research and reported that most of the people stung by scorpions were men (6, 16-20). In the present study, most people were in the age group of 15-44 years, which suggests that most of the stung people were young people and the workforce of the community; similar studies confirm our finding that the active force of our communities is at risk (6, 13, 16-19, 21).

The study of Shahbazzadeh et al. 2009 (11) shows that 90% of reports were presented in summer (from April to October). In this study it was observed that stings occur throughout the year, but the majority of scorpionism cases occur in the summer (42%). This study and many other studies confirm that the majority of stings happen in June, July, and August (6, 13-16, 18, 22), but there are also studies that report higher numbers of stings in the rainy seasons, especially May (7, 20, 21). The results of this study demonstrated that the majority of stung subjects were rural residents. The collected data showed that most stings occurred in the upper limbs; studies with similar results can be found (6, 11, 14, 20), but there are also studies whose results contradict this finding and report that most stings were in the lower limbs and organs (16, 17). The major attackers were reported as yellow scorpions (1, 11, 13), but in two studies conducted in Turkey (15) and Saudi Arabia (18) the attacking scorpions were reported as black scorpions; this difference may be due to the different scorpion fauna of each region.

In general, and based on the results obtained from this study, programs could be planned to educate people at risk in order to prevent stings. Due to the higher frequency of stings in rural than in urban areas, the main focus should be on education of rural people, especially women, who play a major role in the family. Additionally, evaluation of residential houses and the surrounding environment, together with information on how to clear away the equipment and other objects that scorpions may use as shelter, can also be effective in reducing the incidence of scorpionism.
An Empirical Study on the Performance of Cost-Sensitive Boosting Algorithms with Different Levels of Class Imbalance Cost-sensitive boosting algorithms have proven successful for solving the difficult class imbalance problems.However, the influence of misclassification costs and imbalance level on the algorithm performance is still not clear. The present paper aims to conduct an empirical comparison of six representative cost-sensitive boosting algorithms, including AdaCost, CSB1, CSB2, AdaC1, AdaC2, and AdaC3. These algorithms are thoroughly evaluated by a comprehensive suite of experiments, in which nearly fifty thousands classification models are trained on 17 real-world imbalanced data sets. Experimental results show that AdaC serial algorithms generally outperform AdaCost and CSB when dealing with different imbalance level data sets. Furthermore, the optimality of AdaC2 algorithm stands out around the misclassification costs setting: C N = 0.7, C P = 1, especially for dealing with strongly imbalanced data sets. In the case of data sets with a low-level imbalance, there is no significant difference between the AdaC serial algorithms. In addition, the results indicate that AdaC1 is comparatively insensitive to themisclassification costs, which is consistent with the finding of the preceding research work. Introduction Classification is an important task of knowledge discovery and data mining.A large number of classification algorithms have been well developed, such as decision tree, neural network, the Bayesian network, and support vector machine.These algorithms always assume a relatively balanced class distribution.However, class imbalance problems are frequently encountered in many real-world applications including medical diagnosis [1], fraud detection [2], fault diagnosis [3], text categorization [4], and DNA sequences analysis [5].The class imbalance problem has emerged as an intractable issue due to the difficulty caused by the imbalanced class distribution. The imbalanced class distribution is characterized as having many more instances of some classes than others.Particularly for the two-class task that we consider in this paper, it occurs when samples of the majority class representing the negative concept outnumber samples of the minority class representing the positive concept.It has been reported that conventional classifiers exhibit serious performance degradation for class imbalance problems, since they show a strong bias towards the majority class.However, the correct classification of the minority class is more preferred than the majority class.For example, the recognition goal is to provide a higher identification rate of rare diseases in medical diagnosis. In view of the importance of this issue, a great deal of research work has been carried out in recent years [6][7][8].The main research can be categorized into three groups.The first group focuses on the approaches for handling the imbalance both at the data and algorithm levels.The second group explores proper evaluation metrics for imbalanced learning algorithms [9,10].The third one is to study the nature of the class imbalance problem; that is, what data characteristics aggravate the problem, and whether there are other factors that lead to performance reduction of classifiers [11,12]. 
Data level techniques add a preprocessing step to rebalance the class distribution by resampling the data space, including oversampling positive instances and undersampling negative instances [13,14].There are also some methods that involve a combination of the two sampling methods [15,16].When discussing what is the best data level solution for this issue, Van Hulse et al. [17] suggested that the utility of each particular resampling strategy depends on various factors, including the imbalance ratio, the characteristics of data, and the nature of classifier.Recently, García et al. [18] significantly extended previous works by deeply investigating the influences of the imbalance ratio and the classifier on the effectiveness of the most popular resampling strategies.Their experimental results showed that oversampling consistently outperforms undersampling for strongly imbalanced data sets, whereas there are no significant differences for data sets with a low-level imbalance. At the algorithm level, the objective is to adapt existing learning algorithms to bias towards the minority class.These methods require special knowledge of both the corresponding classifier and the application domain [19,20].In addition, cost-sensitive learning algorithms fall between data and algorithm level approaches.They incorporate both data level transformations (by adding costs to instances) and algorithm level modifications (by modifying the learning process to accept costs).Interested readers can refer to the relevant literature [21][22][23][24][25][26]. In recent years, ensemble-based learning algorithms have arisen as a group of popular methods for solving class imbalance problems.The modification of the ensemble learning algorithm includes data level approaches to preprocess the data before learning each classifier [27,28].Besides, some proposals consider the embedding of the cost-sensitive framework in the ensemble learning process, which is also known as cost-sensitive boosting [29][30][31][32][33][34].For this kind of algorithms, the proper misclassification costs are essential for their good performance.When handling imbalanced classification problems, the imbalance level of the data set will undoubtedly impact the optimal misclassification costs of these algorithms.To the best of our knowledge, it is still not clear how the misclassification costs and imbalance level affect the performance of these cost-sensitive boosting algorithms so far. Motivated by the previous analysis, we made a thorough empirical study to investigate the effect of both imbalance level and misclassification costs on the performance of some popular cost-sensitive boosting algorithms, including AdaCost [29], CSB1, CSB2 [30], and AdaC serial algorithms (AdaC1, AdaC2, and AdaC3) [31].To this end, we carry out a comprehensive suite of experiments by employing 17 realworld data sets, four performance metrics and fifty thousands training models, providing a complete perspective on the performance evaluation.The comparison results are tested for statistical significance via the paired -test and visualized by the multidimensional scaling analysis. 
The rest of this paper is organized as follows. Section 2 reviews several cost-sensitive boosting algorithms based on AdaBoost. In Section 3, we describe the experimental framework, including experimental data sets, cost setups, performance measures, and experimental approaches. In Section 4, we discuss experimental results to obtain some valuable findings. Finally, conclusions and some future work are outlined in Section 5.

AdaBoost and Its Cost-Sensitive Modifications

Ensemble methods have emerged as meta-techniques for improving the generalization performance of existing learning algorithms. The basic idea is to construct several classifiers from the original data and then combine them to obtain a new classifier that outperforms each one of them. Boosting and bagging are the most widely used ensemble learning algorithms, which have led to significant improvements in many real-world applications.

AdaBoost Algorithm. As the first applicable approach, AdaBoost [35] has been the most representative algorithm in the family of boosting. In particular, AdaBoost has been appointed as one of the top ten data mining algorithms [36]. AdaBoost uses the whole data set to train base classifiers serially and gives each sample a weight reflecting its importance. At the end of each iteration, the weight vector is adjusted so that the weights of misclassified instances are increased and those of correctly classified ones are decreased. Furthermore, another weight vector is assigned to the individual classifiers depending on their accuracy. When a test instance is submitted, each classifier gives a weighted vote, and the final predicted class label is selected by majority. The pseudocode for the AdaBoost algorithm is shown in Algorithm 1.

The sample weighting strategy of AdaBoost is equivalent to resampling the data space combining both undersampling and oversampling. When dealing with imbalanced data sets, AdaBoost tends to improve the identification accuracy of the positive class since it focuses on misclassified samples. Hence, this makes AdaBoost an attractive algorithm for class imbalance problems.

Cost-Sensitive Boosting Algorithms. However, since AdaBoost is an accuracy-oriented algorithm, its learning strategy may be biased towards the negative class, as that class contributes more to the overall accuracy. Moreover, reported works show that the improved identification performance on the positive class is not always satisfactory. Hence, the AdaBoost algorithm needs to adapt its boosting strategy towards the cost-sensitive learning framework.

Cost-sensitive learning assigns different costs to different types of misclassification. Let C(i, j) denote the cost of predicting an example from class i as class j. For the two-class case, the cost of misclassifying a positive instance is denoted by C_P, and the contrary one is denoted by C_N. The recognition importance of positive instances is higher than that of negative instances. Hence, the cost of misclassifying the positive class is greater than that of the negative class; that is, C_P > C_N. Cost-sensitive learning adds the cost matrix into the model building process and generates a model that minimizes the total misclassification cost.

In the present paper, we focus on several representative algorithms in the family of cost-sensitive boosting, including AdaCost [29], CSB1, CSB2 [30], and the AdaC serial algorithms (AdaC1, AdaC2, and AdaC3) [31]. They differ in how they introduce cost items into the weighted distribution in the AdaBoost framework.
where Z_t is the normalization constant chosen so that D_{t+1} will be a distribution; (iv) Output: the final hypothesis H(x) = sign(∑_t α_t h_t(x)).

Algorithm 1: AdaBoost algorithm.

AdaCost. In this algorithm, the weight update rule increases the weights of misclassified samples more aggressively but decreases the weights of correctly classified samples more conservatively. This is accomplished by introducing the cost adjustment function β into the weight update formula:

D_{t+1}(i) = D_t(i) exp(-α_t y_i h_t(x_i) β_{sgn(h_t(x_i), y_i)}) / Z_t,

where sgn(h_t(x_i), y_i) denotes "+" when h_t(x_i) equals y_i, that is, when x_i is correctly classified, and "-" otherwise. Fan et al. [29] provided the recommended setting β_+ = -0.5 c_i + 0.5 and β_- = 0.5 c_i + 0.5, where c_i is the cost of misclassifying the i-th example and β_+ (β_-) denotes the output of the adjustment function for correctly (incorrectly) classified samples, respectively. Since C_P > C_N, we have β_+(C_P) < β_+(C_N) and β_-(C_P) > β_-(C_N). Hence, a false negative receives a greater weight increase than a false positive, and a true positive loses less weight than a true negative. The weight updating parameter α_t is then computed from the cost-adjusted weighted error.

CSB1 and CSB2. CSB1 modifies the weight update formula of AdaBoost to

D_{t+1}(i) = C_{δ(i)} D_t(i) exp(-y_i h_t(x_i)) / Z_t,

and CSB2 changes it to

D_{t+1}(i) = C_{δ(i)} D_t(i) exp(-α_t y_i h_t(x_i)) / Z_t,

where C_{δ(i)} is the cost factor attached to the classification outcome of sample i. The difference between CSB1 and CSB2 mainly lies in the weight parameter α_t: CSB1 does not use any α_t factor, and CSB2 updates α_t in the same way as AdaBoost. Even though the weight update formula of CSB2 is similar to that of AdaC2, CSB2 does not take cost items into consideration in the update rule of the parameter α_t during the learning process.

AdaC1. This algorithm is one of the three cost-sensitive modifications of AdaBoost proposed by Sun et al. [31]. These three algorithms derive different weighted distribution update formulas depending on where they introduce cost items. In AdaC1, cost items are embedded inside the exponent part of the weight update formula:

D_{t+1}(i) = D_t(i) exp(-α_t c_i y_i h_t(x_i)) / Z_t,

where c_i ∈ [0, +∞) is the cost item associated with the i-th sample. The weight parameter α_t can be induced in a similar way as for AdaBoost, with the cost items carried into the corresponding sums. Note that AdaCost is a variation of AdaC1, introducing the cost adjustment function instead of the cost items inside the exponent part. All the AdaC serial algorithms reduce to AdaBoost when all the cost items are equally set to 1, but AdaCost cannot be reduced to AdaBoost.

AdaC2. Unlike AdaC1, AdaC2 embeds cost items in a different way, outside the exponent part:

D_{t+1}(i) = c_i D_t(i) exp(-α_t y_i h_t(x_i)) / Z_t.

Accordingly, the computation of the parameter α_t is changed so that the cost items enter the weighted error sums directly.

AdaC3. This modification combines the ideas of AdaC1 and AdaC2 simultaneously. Namely, the weight update formula is modified by introducing cost items both inside and outside the exponent part:

D_{t+1}(i) = c_i D_t(i) exp(-α_t c_i y_i h_t(x_i)) / Z_t.

Thereby, the weight parameter α_t is computed accordingly, with the cost items appearing both inside and outside the corresponding sums.
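To make the differences between these update rules concrete, the following is a minimal sketch of a single boosting round in Python. It assumes binary labels y_i in {-1, +1}, scikit-learn decision stumps as base learners, and per-sample cost items c_i as described above; for simplicity, α_t is computed as in plain AdaBoost, whereas the original derivations of AdaC1-AdaC3 also fold the cost items into α_t. Function and variable names are mine, not the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boosting_round(X, y, D, c, variant="AdaC2"):
    """One cost-sensitive boosting round (illustrative sketch).

    X: feature matrix; y: labels in {-1, +1}; D: current sample weights
    (a distribution); c: per-sample cost items. alpha is the plain
    AdaBoost value here, not the cost-adjusted one of the original papers.
    """
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
    h = stump.predict(X)                          # weak hypothesis outputs in {-1, +1}
    err = np.clip(np.sum(D * (h != y)), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    margin = y * h                                # +1 if correct, -1 if misclassified

    if variant == "AdaC1":                        # cost inside the exponent
        D_new = D * np.exp(-alpha * c * margin)
    elif variant == "AdaC2":                      # cost outside the exponent
        D_new = c * D * np.exp(-alpha * margin)
    elif variant == "AdaC3":                      # cost both inside and outside
        D_new = c * D * np.exp(-alpha * c * margin)
    else:                                         # plain AdaBoost
        D_new = D * np.exp(-alpha * margin)

    return stump, alpha, D_new / D_new.sum()      # renormalize: Z_t is the sum
```

With the cost setup used later in the paper (C_P = 1 for the positive class and a smaller C_N for the negative class), the cost vector would simply be, for example, c = np.where(y == 1, 1.0, 0.7).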
Experimental Framework

In this section, we present the experimental framework used to carry out the empirical study to evaluate the above cost-sensitive boosting algorithms. The aim of this study is to investigate how the algorithm performance is affected when different cost settings and different imbalance levels are considered in the experiment.

Experimental Data Sets. In the experiment, we employed 17 public imbalanced data sets from the UCI machine learning repository, which have also been used in [18]. The chosen data sets vary in sample size, class distribution, and imbalance ratio in order to ensure a thorough performance assessment. We discuss only binary classification problems in this paper. Some data sets with multiclass labels were transformed into two-class ones by keeping one class as the positive class and joining the remaining classes into the negative class. Table 1 summarizes the properties of the used data sets: for each data set, the total number of samples, the indicia, and the sample sizes of the minority and majority classes. The last column is the imbalance ratio, which is defined as the sample size of the majority class divided by that of the minority class. The table is ordered according to the imbalance ratio in descending order.

A large imbalance ratio means a high-level imbalance, and a small one denotes a low-level imbalance. The experimental data sets were divided into two groups according to the imbalance ratio. The first group is deemed strongly imbalanced, including LetterA, Cbands, Pendigits, Satimage, Optidigts, Mfeat kar, Mfeat zer, Segment, and Scrapie, whose imbalance ratios are larger than 4. The second group consists of the low-level imbalanced data sets: Vehicle, Haberman, Yeast, Breast, Phoneme, German, Pima, and Spambase.

Cost Setups. In the experiment, we study the influence of different cost settings on the performance of these cost-sensitive boosting algorithms. In these algorithms, misclassification costs are used to characterize the recognition importance of different samples. The proper misclassification costs are often unavailable and can be ascertained using an empirical method.

In our experiments, the misclassification costs for samples in the same category are set to the same value: C_P denotes the misclassification cost of the positive class, and C_N denotes that of the negative class. The ratio between C_P and C_N represents the deviation of the learning importance between the two classes. The larger the ratio, the more the weighted sample size of the positive class is boosted to strengthen learning. A range of ratio values is tested to search for the most effective cost setting, and we use the cost setup C_P = 1, with C_N varying from 0.1 to 1 with step 0.1. When C_P = C_N = 1, AdaC1, AdaC2, and AdaC3 are all reduced to the original AdaBoost algorithm. For CSB1 and CSB2, the cost setup is the one suggested by Ting [30]: if a sample is correctly classified, its cost factor is C_P = C_N = 1; otherwise, C_P > C_N ≥ 1. According to the proposal of [31], we fix the cost factor for false negatives as 1 and set the cost factors for true positives, true negatives, and false positives accordingly.

Performance Measures. The evaluation metric plays a crucial role in both the guidance of the classifier modeling and the assessment of the classification performance. Traditionally, the total accuracy is the most commonly used performance metric. However, accuracy is no longer a proper measure in imbalanced domains, since the positive class makes little contribution to the overall accuracy. For example, a classifier that predicts all samples to be negative in a data set with an imbalance ratio value of 9 may lead to erroneous conclusions although it achieves a high accuracy of 90%.

In the confusion matrix, all samples can be categorized into four groups, as described by Table 2.
The accuracy evaluates the effectiveness of a classifier by the percentage of correct predictions:

Accuracy = (TP + TN) / (TP + FN + FP + TN).

When dealing with the class imbalance problem, there are other appropriate metrics instead of accuracy. In particular, we can obtain four metrics from the confusion matrix to measure the classification performance of the positive and negative classes independently.

True Positive Rate. The percentage of positive instances correctly classified, also known as sensitivity or recall in the information retrieval domain:

TPrate = TP / (TP + FN).

True Negative Rate. The percentage of negative instances correctly classified, also known as specificity:

TNrate = TN / (TN + FP).

False Positive Rate. The percentage of negative instances misclassified:

FPrate = FP / (FP + TN).

False Negative Rate. The percentage of positive instances misclassified:

FNrate = FN / (TP + FN).

On the other hand, in the case that high classification performance on the positive class is demanded, the precision metric is often adopted:

Precision = TP / (TP + FP).

When good quality performance for both classes is required, none of these metrics alone is adequate by itself. Hence, some more complex evaluation measures have been devised.

(1) F-Measure. If only the performance of the positive class is considered, TPrate and precision are important metrics. The F-measure integrates these two metrics as follows:

F-measure = 2 · Precision · TPrate / (Precision + TPrate).

Evidently, the F-measure represents the harmonic mean of precision and recall, and it tends to be closer to the smaller of these two measures. Hence, a higher F-measure value ensures that both recall and precision are reasonably high.

(2) G-Mean. When the performance of both classes is concerned, both TPrate and TNrate are expected to be high at the same time. The G-mean metric is defined as

G-mean = sqrt(TPrate · TNrate).

The G-mean represents the geometric mean of TPrate and TNrate, and so it measures the balanced performance of a learning algorithm between the two classes.

Experimental Approaches. For each data set, we performed 5 independent runs of a stratified 10-fold cross-validation to partition the whole data and obtained a 50-dimensional score vector for each algorithm. Moreover, we had 60 models for each data set, which came from 6 cost-sensitive boosting algorithms (AdaCost, CSB1, CSB2, AdaC1, AdaC2, and AdaC3) and 10 cost settings. For comparison purposes, we also included the best results of these models, and thus got a total number of 61 models in the experiment. Similar to the statistical analysis method used in [18], we adopt the paired t-test to determine whether one algorithm is significantly better than another one, and then use multidimensional scaling to visually compare the performance of the different cost-sensitive algorithms.
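The metrics defined above can be computed directly from the confusion-matrix counts; the following is a small sketch (function and variable names are mine, not the paper's):

```python
import numpy as np

def imbalance_metrics(y_true, y_pred):
    """Compute the imbalance-aware metrics above for binary labels (1 = positive, 0 = negative)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))

    tp_rate = tp / (tp + fn)                      # recall / sensitivity
    tn_rate = tn / (tn + fp)                      # specificity
    precision = tp / (tp + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "TPrate": tp_rate,
        "TNrate": tn_rate,
        "precision": precision,
        "F-measure": 2 * precision * tp_rate / (precision + tp_rate),
        "G-mean": np.sqrt(tp_rate * tn_rate),
    }
```

For a degenerate classifier that predicts every sample as negative, TPrate and therefore the G-mean collapse to zero even though accuracy stays high, which is exactly the failure mode discussed above.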
(1) Paired t-Test. Given two paired sets of measured values, the paired t-test determines whether they differ from each other significantly, under the assumptions that the paired differences are independent and identically normally distributed. In light of the central limit theorem, the sampling distribution of any statistic will be approximately normally distributed if the sample size is large enough. As a rough rule of thumb, a sample size of 30 is large enough. In this paper, we conduct the statistical comparison between each pair of 50-dimensional score vectors, which can be regarded as approximately normally distributed, and so it is reasonable to employ the parametric paired t-test for statistical comparisons. Based on pairwise comparisons of these algorithms, we computed the index of performance as the difference between wins and losses, where wins (losses) denotes the total number of times that an algorithm has been significantly better (worse) than the others at a significance level α = 0.05.

(2) Multidimensional Scaling. The second complementary analysis tool is multidimensional scaling (MDS) [37,38], which aims at giving a visual comparison of classifier performance with respect to multiple metrics. We built a 61 × D table for each performance metric, where D denotes the number of data sets used in the experiment. Each element (i, j) represents the average score of model i on data set j, calculated over the 5 runs of 10-fold cross-validation. Then, we computed Euclidean distances between each pair of rows in the table and performed multidimensional scaling on the distance matrix in order to obtain a projection onto a 2-dimensional space. We can determine the effectiveness of the various algorithms through the dispersal trend of their performance scores towards the optimal point in the MDS space.

Experimental Results

Aiming to study the influence of the imbalance ratio and misclassification costs on the performance of the different cost-sensitive boosting algorithms, we performed both the statistical test and the MDS analysis for data sets with high-level and low-level imbalances separately.

Results on Severely Imbalanced Data Sets. First of all, we perform the significance test of the different cost-sensitive boosting algorithms to show whether there exist significant differences among them. To this end, we use the paired t-test on the combinations of different algorithms and different cost settings. For each combination, we obtain 50-dimensional score vectors from the results of 5 runs of 10-fold cross-validation on all the training data sets. Then, we conduct the paired t-test between each pair of these score vectors. The wins over all the data sets are added into the final wins, and the final losses are obtained likewise. The index of performance is calculated as the difference between wins and losses. Tables 3 and 4 provide the indices of performance for severely imbalanced data sets using the F-measure and G-mean metrics, respectively. Note that the best result of each cost setting is marked with framed boxes.

From the results on severely imbalanced data sets, we can observe that in most cases AdaCost is always significantly the worst, while CSB1 and CSB2 are second worst, with clearly negative index values. Among the AdaC serial algorithms, AdaC2 is more preferred than the others, especially when using the G-mean metric. From the point of view of the F-measure, AdaC2 is slightly better than the others except for the first three cost settings. The AdaC serial algorithms have similar performances when C_N = 1, since they are all reduced to AdaBoost.
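The comparison pipeline used here (pairwise paired t-tests aggregated into a wins-minus-losses index, and an MDS projection of the per-data-set average scores) can be sketched as follows; scipy and scikit-learn provide the two building blocks, and the array shapes and names are my assumptions, not the paper's:

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

def win_loss_index(scores, alpha=0.05):
    """scores: (n_models, 50) paired score vectors for one data set.
    Returns wins - losses for each model from pairwise paired t-tests."""
    n = scores.shape[0]
    index = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            t_stat, p_val = ttest_rel(scores[i], scores[j])
            if p_val < alpha:
                index[i] += 1 if t_stat > 0 else -1
    return index

def mds_projection(avg_scores):
    """avg_scores: (n_models, n_datasets) table of mean scores.
    Projects the models onto 2-D from their pairwise Euclidean distances."""
    dist = pairwise_distances(avg_scores, metric="euclidean")
    return MDS(n_components=2, dissimilarity="precomputed",
               random_state=0).fit_transform(dist)
```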
Owing to the obvious advantage of the AdaC serial algorithms over the other two groups, we only include the AdaC serial algorithms in the MDS analysis. For each data set, we have 30 models, which come from 3 algorithms (AdaC1, AdaC2, and AdaC3) and 10 cost settings. For each model, we obtain the average score as the mean of the 50-dimensional score vectors. We build a 31 × 9 table whose element (i, j) represents the average score of model i on data set j. Then, we perform multidimensional scaling on the distance matrix to obtain the projection onto a 2-dimensional space.

Figure 1 illustrates the MDS plots of the AdaC serial algorithms on severely imbalanced data sets. The number associated with each point denotes the cost setting. For example, the number 2 with the blue circle denotes the model of AdaC2 at the cost setting C_N = 0.2, C_P = 1; the meaning of the other points can be read in the same way.

As we might imagine, when TPrate is considered, the points of AdaC2 and AdaC3 move closer and closer to the optimal point as C_N decreases. Moreover, the points of AdaC2 are always closer to the optimal point than those of AdaC3, and the points of AdaC1 are much farther than the others. Conversely, TNrate apparently presents the opposite behavior. On the other hand, AdaC1 is relatively insensitive to misclassification costs, which is consistent with the results of Sun et al. [31].

Focusing on the G-mean metric in Figure 1, we can see that AdaC2 is much better than AdaC3 and AdaC1, which is consistent with the result of Table 4. However, there are few differences between the AdaC serial algorithms for the F-measure metric, and the most proper cost of the negative class lies in [0.7, 0.9]. These findings are further confirmed by Table 5.

As a further confirmation of the previous findings, Table 5 reports the Euclidean distances to the optimal point in the MDS space on the severely imbalanced data sets. As expected, the closest points, marked with framed boxes, generally appear for the AdaC2 algorithm for both the F-measure and G-mean metrics. We also find that AdaC1 is relatively insensitive to the cost settings. Hence, the average results of AdaC1 tend to be lower than the others. However, this does not imply that AdaC1 is superior to the other two algorithms. Experimental results show that the AdaC serial algorithms are much better than the other two groups, and AdaC2 is better qualified for handling severely imbalanced classification, especially when the cost of the negative class lies in [0.7, 0.9].

Results on Data Sets with a Low-Level Imbalance. Similarly, we perform the significance test of the different cost-sensitive boosting algorithms for the data sets with a low-level imbalance. Tables 6 and 7 provide the indices of performance for the F-measure and G-mean metrics, respectively. As can be seen, both AdaCost and the CSB serial algorithms are inferior to the AdaC serial algorithms, which is very similar to the case of the strongly imbalanced data sets. However, when comparing the AdaC serial algorithms, it seems that the differences between them are marginal in the sense that they generally achieve similar indices of performance, unlike the outstanding behavior of AdaC2 for severely imbalanced data sets.
Figure 2 illustrates the MDS plots of the AdaC serial algorithms on the slightly imbalanced data sets over the four evaluation measures. The results for TPrate and TNrate are very similar to the case of the strongly imbalanced data sets: AdaC2 and AdaC3 become closer to the optimal TPrate as C_N decreases, whereas they are nearer to the optimal TNrate as C_N increases. AdaC1 is relatively insensitive to misclassification costs, similar to the case of strongly imbalanced data sets.

When analyzing the results for the G-mean and F-measure, we can observe that the trend of AdaC2 along with the change of C_N is very similar to that of AdaC3. Moreover, most of the best performances are achieved by the AdaC2 algorithm around C_N = 0.7. These findings are also clear in Table 8.

From the previous analysis on the data sets with a low-level imbalance, we can conclude that the AdaC serial algorithms consistently outperform the other two groups, but it is difficult to advise the best strategy among AdaC1, AdaC2, and AdaC3. This means that the effectiveness of a particular cost-sensitive boosting algorithm depends on the class imbalance as well as on other factors, such as the data characteristics and the algorithm itself.

Conclusions and Future Work

In this paper, we presented a thorough empirical study on the performance of the most popular cost-sensitive boosting algorithms when dealing with different levels of class imbalance. We used 17 real-world imbalanced data sets (9 severely imbalanced and 8 slightly imbalanced), 6 cost-sensitive boosting algorithms (AdaC1, AdaC2, AdaC3, AdaCost, CSB1, and CSB2), and 10 cost settings in the experiment. Besides, the performance of the algorithms has been evaluated by means of four different evaluation metrics, that is, TPrate, TNrate, F-measure, and G-mean.

Experimental results show that the AdaC serial algorithms (AdaC1, AdaC2, and AdaC3) consistently outperform the other two groups (AdaCost and CSB), both for strongly and slightly imbalanced data sets. Moreover, AdaCost has been demonstrated to be worse than the CSB algorithms. When comparing the AdaC serial algorithms, AdaC2 is observed to perform better than the other two for severely imbalanced data sets, especially when using the G-mean metric. In the case of data sets with a low-level imbalance, however, the difference between the AdaC serial algorithms is negligible. It is necessary to make a further data complexity analysis to choose a suitable algorithm for a particular imbalanced data set.

On the other hand, we have given some guidance in choosing the proper misclassification costs for these cost-sensitive boosting algorithms. Summarizing the experimental results, we found that the most proper cost setting is located in the neighbourhood of the point C_N = 0.7, C_P = 1.

Based on the present work, there are some interesting future research directions with regard to the class imbalance problem: (1) to utilize other parameter selection techniques for the confirmation of the proper misclassification costs; (2) to take other cost-sensitive learning algorithms into consideration within the present framework, such as the proposed algorithms of [22-24, 39]; (3) to compare these cost-sensitive algorithms in terms of other performance metrics, such as AUC [9] and IBA [40].

Figure 1: MDS plots of severely imbalanced data sets over four performance metrics.
Figure 2: MDS plots for data sets with low-level imbalance over four performance metrics.
Table 1: Summary of characteristics for the used data sets.
Table 3: Index of performance using the F-measure for severely imbalanced data sets.
Table 4: Index of performance using the G-mean for severely imbalanced data sets.
Table 5: Euclidean distances to the optimal point in the MDS space for data sets with a high imbalance.
Table 6: Index of performance using the F-measure for data sets with a low-level imbalance.
Table 7: Index of performance using the G-mean for data sets with a low-level imbalance.
Table 8: Euclidean distances to the optimal point in the MDS space for data sets with a low imbalance.
Reliable Time Propagation Algorithms for PMF and RBPMF This paper addresses the reliable time propagation algorithms for Point Mass Filter (PMF) and Rao–Blackwellized PMF (RBPMF) for the nonlinear estimaton problem. The conventional PMF and RBPMF process the probability diffusion for the time propagation with the direct sampled-values of the process noise. However, if the grid interval is not dense enough, it fails to represent the statistical characteristics of the noise accurately so the performance might deteriorate. To overcome that problem, we propose time propagation convolution algorithms adopting Moment Matched Gaussian Kernel (MMGK) on regular grids through mass linear interpolation. To extend the dimension of the MMGK that can accurately describe the noise moments up to the kernel length, we propose the extended MMGK based on the outer tensor product. The proposed time propagation algorithms using one common kernel through the mass linear interpolation not only improve the performance of the filter but also significantly reduce the computational load. The performance improvement and the computational load reduction of the proposed algorithms are verified through numerical simulations for various nonlinear models. Introduction Recursive Bayesian filtering recursively predicts and corrects an unknown Probability Density Function (PDF) using a mathematical model and incoming measurements. However, it is almost impossible to obtain a closed-form solution for all estimation problems by applying Bayesian filtering technique. As a very limited case, if the model is linear and all the random variables included in the model follow normal distributions, there exists an explict solution for Bayesian filtering, and that solution is well-known Kalman Filter (KF) [1,2]. For other general problems, two groups of techniques are applied to obtain approximate solutions. The first group is based on the assumption that, despite the nonlinearity of the model, PDFs to be estimated follow normal distributions approximately. Among them, the most representative nonlinear filter widely applied in various fields is Extended Kalman Filter (EKF) [1,2], which applies KF after linearizing the model. EKF has the advantage of showing excellent asymptotic convergence characteristics despite linearization errors [3,4]. Methods which do not apply model linearization include Unscented Kalman Filter (UKF) and Ensemble Kalman Filter (EnKF) which approximate PDFs to normal distributions [5,6]. Gaussian Filters (GFs) apply Gaussian quadrature integration rules to calculate expectations of nonlinear functions for normal distributions [7]. The second group directly estimate PDFs by applying numerical approximation techniques to solve Bayesian filtering. Particle Filter (PF) is the most representative method. The concept of PF was first proposed in the 1950s, but it was not popular until Gordon proposed the bootstrap algorithm that the PF established itself as a representative nonlinear-/non-gaussian estimation technique with high accuracy [1,2,5,[7][8][9]. PF directly estimates non-gaussian PDFs through the combination of positions and weights of random sampling points. It is known that the estimation error of PF is independent of the dimension of the state variable [10,11]. In the practical application of PF, the increase of the computational load with dimension is still a problem to be tackled [2,9]. 
Rao-Blackwellized PF (RBPF), which divides the state variable into a nonlinear part and a linear part, and applies PF to the nonlinear part and KF to the linear part, is a state-of-the-art computational load reduction technique [12,13]. PMF is another method obtaining an approximate Bayesian filtering solution [2,5,14]. PMF estimates PDFs using masses of equally spaced grids on a state space. PMF was introduced in the early 1970s because of its conceptual simplicity [15], and the advantage of the PMF over the PF is its deterministic nature of the algorithm. However, it has not received as much attention as the PF due to the lack of efficient grid design methods or a problem of excessive computation. However, developments of computing technologies led to the introduction of a PMF-based TRN algorithm in the 1990s. Since then, there have been active researches for performance improvements of PMF [16,17]. Most of the studies have been conducted for the purpose of improving TRN performance, but the results are applicable to general estimation problems. Numerous research results for efficient grid design or reselection have been presented to improve PMF performance, and there are the Anticipative Grid Design (AGD) algorithm and the Boundary-based Grid Design (BGD) algorithm [18], the grid resolution and support design algorithm considering a noise level in TRN [19], the grid support design algorithm using mutual information [20], the density specific grid design algorithm assuming two different grids [21], and the density difference grid design algorithm based on the differentiation of the PDF in a sparse grid [22]. In implementing the filter, it is important to process probability diffusion through time propagation for the filter performance. In the case of PMF, time propagation is conceptually a convolution operation between masses of grids and the kernel of the process noise. The kernel is a set of the directly sampled values of the process noise PDF with arguments of differences between irregular grids passing a system model and newly defined regular grids. However, if the grid interval is not dense enough, the conventional kernel cannot accurately represent the statistical characteristics of the noise. This causes a problem in which the probability diffusion is not properly handled so the filter performance might deteriorate [23]. To resolve this problem, Variance Adjusted Gaussian Kernel (VAGK) or Moment Matched Gaussain Kernel (MMGK) have been proposed [23,24]. As another countermeasure, the Density-Weighted Convolution (DWC) algorithm, dealing with a model that only the measurement model is nonlinear, has been proposed recently [25]. Among them, MMGK exactly matches the moments of the noise up to the effective kernel length. However, those are kernel generation techniques applicable only for the model whose system model is linear. That is, it cannot be directly applied to irregular grid intervals due to the nonlinearity of the system model. In this paper, as a first result, we propose the PMF algorithm with indirect time propagation using MMGK through a mass redefinition process for general nonlinear estimation problems. The proposed algorithm has the advantage of not only improving the performance but also reducing the burden of calculating the direct-sampled values of the process noise. Furthermore, we propose the dimension extended MMGK by applying outer tensor product. Like PF, PMF cannot be free from the problem of the extensive computational burden. 
In particular, PMF is limited to low-dimensional problems due to the high-dimensional convolution operation. In order to reduce the computational load of PMF, RBPMF, analogous to RBPF, has been proposed relatively recently [26]. RBPMF algorithm for easy implementation of measurement validity check logic, which is essential for maintaining the filter stability in practical application, and RBPMF for TRN estimation problems were proposed [27,28]. Recently, Rao-Blackwellized Particle-Point Mass Fusion Filter (RBPPFF), which combines RBPF and RBPMF, has been introduced for robust TRN [29]. However, the proposed algorithms have the same abnormal probability diffusion problem in the time propagation operation as the aforementioned PMF. In this paper, as a second result, we propose the RBPMF algorithm applying MMGK without a mass redefinition. In RBPMF, the weight of each grid for the nonlinear part is paired with the normal distribution for the linear part. Since the linear part acts as an artifact noise to the nonlinear part, it is impossible to apply a common kernel to the time propagation of the nonlinear part. However, if model terms related to the linear part are not functions of the nonlinear part like TRN, all covariances of the linear part approximately have the same value. So, for these specific models, we can apply the time propagation operation to RBPMF through grid and mass redefinition, like the proposed PMF. To complete this scheme, a linear part redefinition procedure following the nonlinear part mass redefinition is required. However, until now, only the index-based adaption algorithm of simply copying the state of the neighboring linear part for the TRN problem has been proposed [30]. Therefore, for the constant linear model case, we propose the RBPMF algorithm with indirect time propagation, which includes the redefinition process of the linear part, as the third result of this paper. The composition of this paper is as follows. First, we introduce Bayesian filtering in Section 2, then describe the conventional PMF and the proposed PMF algorithm with the indirect time propagation algorithm, and then show the simulation results comparing their performances. Section 4 describes the conventional RBPMF and two proposed RBPMF algorithms, simulation results for them, and concludes in Section 5. Bayesian Filtering Let us consider the following nonlinear discrete-time stochastic dynamic system. where x ∈ R n is the state variable to be estimated and y k ∈ R m is the measurement. It's assumed that the process noise w k and the measurement noise v k are white noise and mutually independent of each other, and follow known normal distributions, p w k (w k ) and p v k (v k ), respectively. The nonlinear mappings f k : R n → R n and h k : R n → R m represent the system model and the measurement model. The model in Equation (1) is an input-free model, but can be easily extended to an input-driven one. The conditional PDFs of the state variable given measurements, which is the estimation target of Bayesian filtering, is p(x k |Y k ). Here, Y k = {y 0 , y 1 , · · · , y k } is a set of all measurements up to time t k . Two PDFs to be estimated are the priori PDF p(x k |Y k−1 ) which is one step ahead prediction and the posterori PDF p(x k |Y k ) which is filtering. To solve a recursive Bayesian filtering problem, PDF models are required. First, the transition PDF of the state variable for the system model in Equation (1) is as follows. 
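For concreteness, a standard additive-noise instance of such a model (this specific form is an assumption on my part, though it is consistent with how the process-noise kernel is sampled later in the paper) would be

```latex
x_{k+1} = f_k(x_k) + w_k, \qquad y_k = h_k(x_k) + v_k,
\qquad w_k \sim p_{w_k}, \quad v_k \sim p_{v_k},
```

so that the transition PDF and the measurement PDF take the form

```latex
p(x_{k+1} \mid x_k) = p_{w_k}\!\left(x_{k+1} - f_k(x_k)\right), \qquad
p(y_k \mid x_k) = p_{v_k}\!\left(y_k - h_k(x_k)\right).
```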
The transition PDF satisfies p(x k |x k−1 , Y k−1 ) = p(x k |x k−1 ) from the Markovian characteristic. The PDF model for the measurement equation is as follows. p(y k |x k , Y k−1 ) = p(y k |x k ) is satisfied because y k is dependent only on x k from the measurement model. Bayesian filtering is a process of recursively obtaining the posteriori PDF p(x k |Y k ) for the current time under the assumption that the posteriori PDF p(x k−1 |Y k−1 ) for the previous time is known. To solve this, first, apply Bayes' theorem to the posteriori PDF, then p(x k |Y k ) becomes as in Equation (5). If p(y k |x k , Y k−1 ) = p(y k |x k ) is applied to Equation (5), p(x k |Y k ) can be rewritten as in Equation (6). The first term of the nominator represents the likelihood as the measurement PDF of Equation (4), and the second term is the priori PDF, which can be obtained by the Chapman-Kolmogorov equation. The denominator p(y k |Y k−1 ) of Equation (6) is called the evidence and can be calculated by integrating the nominator. Conventional PMF with Direct Time Propagation To obtain the posteriori PDF in closed-form from Bayesian filtering , the integrals of Equations (7) and (8) must be explicitly calculated. However, it is almost impossible to solve such integrals for general nonlinear models. Therefore, numerical techniques are usually used to obtain approximate solutions, and PMF is one of those approximation methods. The basic concept of PMF is to discretize a state space into equally spaced grids and to calculate a mass (or a weight) in each grid to discretely express the PDF. That is, suppose that the mass ω i k−1|k−1 of the each grid ξ i k−1 from the grid set Ξ k−1 (N k−1 ) = {ξ i k−1 ; i = 1, · · · , N k−1 } defined at time t k−1 is as follows. Then, the discretely approximated posteriori PDF at time t k−1 can be expressed as Equation (10).p where δ(·) is dirac delta function. Previous PMF-related literatures considered the mass ω i k−1|k−1 as the value of the pdf at ξ i k−1 . Therefore the rectangular volume term ∆ξ i around ξ i k−1 should be included in Equation (10). However, if assuming equal grid intervals and ∑ i ω i k−1|k−1 = 1 for the masses, then the volume is a common term for all masses so the volume can be viewed as included in the mass. This can reduce unnecessary multiplication operations when implementing the algorithm. When the PDF of Equation (10) is applied to recursive Bayesian filtering, the integral equations of Equations (7) and (8) are converted to discrete summation equations, so that the discretized posteriori PDF at time t k can be obtained in a similar manner. To apply PMF to general nonlinear models, an adaption procedure for irregular grid intervals, grid support, and grid resolution due to nonlinearity of the model must be included. The conventional PMF algorithm including that procedure is as follows [18]. Algorithm 1 Conventional PMF 1: Initialization Define the initial grid set and the masses for the initial priori PDF p(x 0 |y −1 ) 2: Measurement Update Calculate the measurement updated masses for all i = 1, · · · , N k 6: Update k := k + 1 and repeat (2)-(5) Various algorithms can be applied to the grid redefinition of Step (4). Among proposed algorithms, the algorithms suitable for general estimation problems are AGD and BGD proposed by Šimandl [18]. AGD is an algorithm that selects grid support and grid resolution considering the performance of time propagation calculation. 
AGD assumes Gaussian distribution when selecting grid support, whereas BGD is a grid support selection algorithm that considers non-Gaussian distribution. Suitable algorithms for TRN include the grid resolution/support adaption algorithm considering noise magnitude [19], and the grid support adaption algorithm using mutual information [20]. The density specific grid design algorithm that assumes two different grids [21] and the density difference grid design algorithm based on the differentiation of the PDF in a sparse grid [22] have been presented recently for general estimation problems. PMF is a global approximation nonlinear filter, so its application range is very wide. However, to apply PMF, we need to know exactly the time evolution model of Equations (1) and (2) or the probabilistic model of Equations (3) and (4). If there are uncertainties in the model or disturbances that cannot be modeled, such terms can be considered as process noise, but the estimation performance may be degraded. The Takagi-Sugeno (T-S) fuzzy affine model is known to be very effective in dealing with such uncertainties and disturbances. As the state-of-the-art result in that field, a sampled-data filtering design technique for Ito stochastic T-S fuzzy affine system has been proposed recently [31]. PMF with Indirect Time Propagation If the probability diffusion of the mass through time propagation is not properly processed, only the contraction of the probability distribution by the measurement update is repeated over time. This eventually causes the filter to diverge because the measurement can be no longer reflected in the error correction. Conversely, the excessive diffusion of the probability is the same as losing estimation information so far, which might degrade the estimation performance. That is, in the implementation of filtering, the proper diffusion process is very important for the stability and the performance of the filter. If the interval between grids is greater than about 1.5 times the standard deviation of the process noise, the kernel for probability diffusion by sampling does not adequately reflect the statistical characteristics of the original noise [23]. VAGK, MMGK, and DWC have been proposed to resolve this problem [23][24][25]. VAGK generation method creates the kernel according to the conventional method and scales up it to match only the variance. So it cannot deal with the process noise whose mean is not zero. MMGK is the structured kernel obtained through linear equations where kernel K t of length I matches moments up to I − 1. The DWC method uses the difference between the Cumulative Distribution Functions (CDFs) at the upper and lower limits of the volume near the grid. In order to find a kernel generation technique that accurately expresses the variance in the sense of probability diffusion, we compared the variances of the various kernels while varying the variance of the noise. Figure 1 shows the comparison result. The x-axis and y-axis of the graph represent the ratio of the standard deviation of the process noise to the grid interval and the ratio of the variance of the generated kernel to the variance of the noise, respectively. The variance ratio of the conventional kernel gradually decreases as the standard deviation ratio becomes smaller than 0.67. The DWC kernel has a slightly larger value than the original variance when the standard deviation ratio is greater than 0.29 but gradually decreases in the interval less than 0.29. 
On the other hand, unlike the previous two kernels, MMGK has a variance that exactly matches the original variance regardless of the standard-deviation ratio, so we adopt MMGK as the time-propagation kernel. Adopting MMGK significantly alleviates the PMF's inconsistency problem, especially when the standard deviation of the process noise is very small. However, MMGK was originally proposed for the TRN problem, in which the system model is an identity matrix. In this paper, to resolve the inadequate probability-diffusion problem, a general PMF algorithm adopting MMGK is proposed. MMGK can be applied only when the grid interval is regular, whereas the kernel in Step (5) of Algorithm 1 consists of the process-noise PDF sampled at the differences between the elements η^i_{k+1} of the irregular grid set H_{k+1}(N_k) and the elements ξ^j_{k+1} of the newly defined regular grid set Ξ_{k+1}(N_{k+1}). Therefore, it is impossible to apply MMGK directly to the conventional PMF algorithm described in the previous section. However, if the convolution operation is performed only on the new grid set Ξ_{k+1}(N_{k+1}), MMGK can be applied immediately. That is, after redefining a mass γ^i_{k|k} for ξ^i_{k+1} from ω^i_{k|k}, the posteriori mass associated with grid point η^i_{k+1}, the convolution operation is performed indirectly on γ_{k|k}. The calculation of γ_{k|k} can easily be implemented with various well-known multivariate interpolation algorithms for irregular grids [32]. Figure 2 compares the concepts of the probability-diffusion process in the previous method and in the proposed one (to illustrate the probability diffusion in a 2D state space, the grid indices are expressed in 2D). In the conventional method, the probability values of all η^i_{k+1} within the effective range of the process noise around ξ^j_{k+1} (for example, three times the standard deviation of the process noise), weighted in proportion to the sampled process-noise PDF, are diffused and accumulated at ξ^j_{k+1}. In the proposed method, by contrast, the new mass γ^i_{k|k} is first computed for each ξ^i_{k+1}. A variety of interpolation algorithms can be applied, but in this paper the mass γ^i_{k|k} is computed by linear interpolation as the linear combination of the transformed masses at the points η^s_{k+1} surrounding ξ^i_{k+1}, where λ_{s,i} is the linear-combination coefficient of η^s_{k+1} = f_k(ξ^s_k) with respect to ξ^i_{k+1}, satisfying ξ^i_{k+1} = Σ_{s∈S_i} λ_{s,i} η^s_{k+1}, and the transformed mass defined in Equation (12) represents the nonlinear transformation of the probability ω^s_{k|k} from ξ^s_k to η^s_{k+1}. Equation (12) follows from the relationship f_Y(y)dy = f_X(x)dx between two random variables x and y, where the determinant of the Jacobian ∂f_k(x_k)/∂x_k of the system model η_{k+1} = f_k(x_k) plays the role of dη_{k+1}/dξ_k [33]. The condition Σ_{s∈S_i} λ_{s,i} = 1 on the coefficients λ_{s,i} means that ξ^i_{k+1} is a value generated by interpolation. If the interpolation is unavailable, the corresponding mass is set to 0. The mass transformation of Equation (12) is applicable only when f_k(x_k) is an invertible function, so that ξ^i_{k+1} = f_k(x_k) has a unique solution. If there are multiple solutions, the mass must be computed for each solution and all transformed masses summed. The indirect time-propagation operation using the new mass γ^i_{k|k} obtained by linear interpolation is shown in Equation (13). The time-propagation equation in Step (5) of Algorithm 1 and Equation (13) are mathematically identical (a one-dimensional sketch of this indirect step is given below).
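The sketch below is a one-dimensional analogue of the indirect step just described, assuming f_k is monotone (hence invertible) on the grid so that the change of variables of Equation (12) reduces to a scalar Jacobian; the model, the posterior, and the stand-in kernel are placeholders.

```python
import numpy as np

def redefine_mass_1d(xi_old, w_post, f, xi_new):
    """1-D analogue of the mass redefinition: transform the posteriori masses by the
    Jacobian of eta = f(xi) (Equation (12)) and linearly interpolate them onto the
    new regular grid xi_new; f is assumed monotone on the grid."""
    eta = f(xi_old)                                   # irregular propagated points eta^s_{k+1}
    dens = w_post / np.abs(np.gradient(eta, xi_old))  # change-of-variables mass transformation
    gamma = np.interp(xi_new, eta, dens, left=0.0, right=0.0)
    total = gamma.sum()
    return gamma / total if total > 0 else gamma

def diffuse(gamma, kernel):
    """Probability diffusion on the regular grid as a single convolution (Equation (13))."""
    out = np.convolve(gamma, kernel, mode="same")
    return out / out.sum()

xi = np.linspace(-10.0, 10.0, 201)
w_post = np.exp(-0.5 * (xi - 1.0) ** 2); w_post /= w_post.sum()
gamma = redefine_mass_1d(xi, w_post, lambda x: 0.9 * x + 0.5, xi)
w_pred = diffuse(gamma, np.array([0.05, 0.25, 0.4, 0.25, 0.05]))   # stand-in for an MMGK
```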
The only difference is that the sampling in Equation (13) is performed on equally spaced grids, while the sampling interval of the process noise in Step (5) is irregular. Therefore, it is possible to design a structured kernel that accurately reflects the statistics of the process noise. In this paper, MMGK is adopted as the kernel. The MMGK generation method is as follows [24]. First, let M_{m,k} be the k-th moment of the process noise whose mean and variance are μ_m and σ²_m, respectively. For the grid set Ξ_{k+1}(N_{k+1}), where the grid spacing of the m-th state variable is Δξ_m, and considering the effective support of the noise only up to the ±3σ range, it is sufficient to generate a discrete kernel on the index range L_m ≤ s ≤ U_m covering that support. Let K_m be the kernel to be determined; then Σ_{s=L_m}^{U_m} x_s^k K_m(s) = M_{m,k} has to be satisfied. Therefore, writing the moments from the 0-th to the (I_m − 1)-th in vector form gives Equation (14), where I_m = U_m − L_m + 1 is the length of the kernel and K_m(s) is an element of the kernel. This is the one-dimensional kernel for the process noise. If a two-dimensional problem is considered, a vector outer product is enough to adopt the MMGK. In this paper, we propose the dimension-extended MMGK obtained by an outer tensor product. The extended MMGK is a combination of the kernels K_m generated for each process-noise component, as shown in Equation (15), where ∘ represents the tensor product (in the sense of the outer product) [34]. When the generated kernel Ker is applied to the time propagation, the time propagation or probability diffusion of Step (5) for the newly defined mass γ^i_{k|k} is as shown in Equation (16), where s = j − i and s ∈ Ker means that the index s is within the valid range of Ker. Here the mass is represented by a one-dimensional index, but the kernel Ker is an n-dimensional tensor. Therefore, to implement the above equation, an appropriate transformation must be included between the one-dimensional mass index and the n-dimensional kernel index. If the mass is expressed and processed with an n-dimensional index like the kernel, Equation (16) can be rewritten as Equation (17). The proposed time-propagation algorithm has the following advantages and disadvantages. In the proposed method, the time propagation performs the convolution with MMGK on grids of equal spacing, so the probability diffusion is more accurate than in the conventional method. The conventional method has to calculate the distance between each new grid point and all previous grid points and examine whether it is within the effective range of the noise probability distribution. The proposed method, on the other hand, uses a kernel that already accounts for the effective length of the noise on the new grid set, so such a process is unnecessary. However, the proposed method must additionally perform the interpolation operation for the masses. Bergman's grid adaptation algorithm for TRN is a special case of indirect time propagation that performs only interpolation and decimation by a factor of two in the grid interval [17]. The PMF algorithm including the new indirect time-propagation procedure proposed in this section (Algorithm 2) additionally requires that, if ξ^i_{k+1} = f_k(x_k) has two or more solutions, the linear interpolation be repeated for each solution and the results summed, and that the total kernel Ker be calculated as a tensor product after finding the MMGK K_m for each process noise. The consistency between the estimation error and the covariance of the filter is very important to ensure the reliable operation of the filter.
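The moment-matching construction of Equations (14) and (15) above can be sketched directly: build the power-sum (Vandermonde) system over the kernel support, solve it against the raw Gaussian moments, and combine the per-axis kernels by an outer product. The ±3σ support rule and the noise parameters below are illustrative, and the linear solve does not constrain the weights to be non-negative.

```python
import numpy as np

def mmgk_1d(mu, sigma, dxi, support_sigmas=3.0):
    """Moment-matched kernel on a regular grid of spacing dxi: solve
    sum_s x_s**k * K(s) = M_k for k = 0..I-1 (cf. Equation (14)),
    where x_s = s*dxi and M_k are the raw moments of N(mu, sigma**2)."""
    lo = int(np.floor((mu - support_sigmas * sigma) / dxi))
    hi = int(np.ceil((mu + support_sigmas * sigma) / dxi))
    x = np.arange(lo, hi + 1) * dxi              # kernel support points x_s
    I = x.size
    M = np.zeros(I)
    M[0] = 1.0
    if I > 1:
        M[1] = mu
    for k in range(2, I):                        # raw-moment recursion for a Gaussian
        M[k] = mu * M[k - 1] + (k - 1) * sigma ** 2 * M[k - 2]
    A = np.vander(x, I, increasing=True).T       # row k holds x_s**k
    return np.linalg.solve(A, M)

def mmgk_nd(kernels_1d):
    """Dimension-extended kernel as the outer (tensor) product of the per-axis
    1-D kernels (cf. Equation (15))."""
    ker = kernels_1d[0]
    for k in kernels_1d[1:]:
        ker = np.multiply.outer(ker, k)
    return ker

k1 = mmgk_1d(mu=0.0, sigma=0.3, dxi=0.5)
k2 = mmgk_1d(mu=0.0, sigma=0.1, dxi=0.5)
ker_2d = mmgk_nd([k1, k2])                       # 2-D kernel for two process-noise components
```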
Returning to the consistency of the filter: PMF, which estimates the PDF by masses on a discrete grid, requires consistent handling of the probability-diffusion process in the time propagation. However, if the grid interval is not dense relative to the variance of the process noise, the variance of the conventional kernel turns out smaller than designed. Thus, over time, the filter behaves as if it only performs measurement updates without time propagation. Eventually the covariance of the filter becomes smaller and smaller, so the measurement update no longer works. The proposed PMF, however, adopts MMGK for the probability diffusion. Because MMGK handles at least the noise variance accurately, the filter's consistency can be reliably maintained, and therefore the performance of the proposed PMF can be improved. The time propagation of the conventional PMF (Step (5) of Algorithm 1) uses values obtained by directly sampling the process noise for the probability diffusion, and this sampling must be performed N_{k+1} × N_k times. The proposed PMF, in contrast, uses MMGK, which is independent of the number of grid points, and the size of the MMGK is much smaller than the total number of grid points, so the computation time of the time propagation can be drastically reduced. However, the proposed PMF needs to perform the mass interpolation on the new grid set before performing the probability diffusion. If the nonlinearity of the system model is too strong, the interpolation has to be repeated several times, which can increase the computation time.

One-Dimensional Growth Model

To verify the performance improvement of the proposed PMF algorithm, a simulation was performed on the non-stationary growth model used by Gordon et al. in the paper that proposed the bootstrap filter [2,8,26]. The measurement noise is v_k ~ N(0, 1²), and several cases of the initial error and the process noise are considered for performance comparison. With the given parameters, the state variable of the model does not exceed the range [−25, +25] in spite of the process noise, so the grid-adaptation process is not mandatory for this problem. Therefore, the grid set can be fixed according to a predetermined grid interval. In this paper, simulations were performed for a total of four grid intervals. In addition, the same simulation was performed with a bootstrap filter of 1000 particles, whose performance does not depend on the grid interval. First, we compared the PDF outputs estimated by the PMFs and the PF (here, the bootstrap filter) for several cases, and Figure 3 shows the results. The PDF of the PMF is simply a sampled function on the grid set. In the case of the PF, the PDF can be obtained by applying a window function in the form of a small-variance normal distribution to each particle and then summing all the window functions. The result at k = 12 for the case in which the initial error and the process-noise variance are x_0 ~ N(1, 5²) and Q_k = 2², respectively, with grid interval Δξ = 0.1 (so that the grid interval is sufficiently small compared with the process noise), is shown in (a). As shown, all three algorithms produce similar PDFs; in this case, the conventional PMF gives results closer to the PF. However, in case (b), where the process-noise variance is reduced to 0.3², the conventional PMF shows a completely different PDF from the proposed PMF and the PF (the benchmark model and the simulated cases are sketched below).
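Because the model equations are not reproduced in the text, the sketch below uses the standard form of this benchmark as it is commonly written in the literature, i.e., x_k = 0.5 x_{k−1} + b x_{k−1}/(1 + x_{k−1}²) + d cos(1.2(k−1)) + w_k with b = 25 and d = 8, and y_k = x_k²/20 + v_k; this form is an assumption, and the noise levels correspond to one of the cases described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_f(x, k, b=25.0, d=8.0):
    # State transition (standard benchmark form; assumed, since the paper's equation is not shown here).
    return 0.5 * x + b * x / (1.0 + x ** 2) + d * np.cos(1.2 * (k - 1))

def growth_h(x):
    # Quadratic measurement function of the same benchmark.
    return x ** 2 / 20.0

def simulate(T=50, x0_mean=1.0, x0_std=5.0, q_std=2.0, r_std=1.0):
    """Generate one trajectory and its measurements for the case x_0 ~ N(1, 5^2), Q_k = 2^2."""
    x = x0_mean + x0_std * rng.standard_normal()
    xs, ys = [], []
    for k in range(1, T + 1):
        x = growth_f(x, k) + q_std * rng.standard_normal()
        ys.append(growth_h(x) + r_std * rng.standard_normal())
        xs.append(x)
    return np.array(xs), np.array(ys)

xs, ys = simulate()
```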
In particular, in the range of −5 to +15 in case (b), the probability distributions of the PF and the proposed PMF show a moderate decreasing trend, whereas the conventional PMF tends to fluctuate. When Δξ is set to 0.5 for Q_k = 0.3², the PDF estimates for k = 11 and k = 12 are shown in (c) and (d). At k = 11 the three algorithms show a similar trend, whereas at the next step the conventional PMF produces a completely different probability-distribution estimate. Next, for the case x_0 ~ N(5, 5²), the results of 100 Monte Carlo simulations with varying grid interval Δξ and process-noise variance Q_k are shown in Figure 4. The x-axis of each graph is the ratio of the process-noise standard deviation to the grid resolution, so the smaller the value, the larger the grid interval relative to the effective range of the process noise. The y-axis represents the root mean square (RMS) of the estimation error over the simulations. From the results, if the grid interval is sufficiently small (Δξ = 0.1), there is no difference in performance regardless of the ratio. For the other grid intervals, if the grid is not sufficiently dense (√Q_k/Δξ < 1), the performance of the proposed PMF is superior to that of the conventional PMF. In particular, the estimation error of the new PMF tends to decrease like that of the PF as the ratio decreases, whereas the estimation error of the conventional PMF increases again when √Q_k/Δξ < 0.15. Table 1 shows the numerical results of the simulations. The numbers of grid points of the PMFs for grid intervals 1.0, 0.5, 0.3, and 0.1 were 51, 101, 167, and 501, respectively. To verify the computational-load reduction of the proposed algorithm, the computation time was measured at every epoch. We performed the simulation in a Matlab 2020b single-thread environment on the Windows 10 operating system with an Intel i7-10750H 2.6 GHz CPU and 32 GB of DDR4 memory. Furthermore, we used vectorized operations and functions to shorten the execution time as much as possible. Figure 5 shows the measured calculation times for various grid intervals and process noises. The result in Figure 5a is the average calculation time over the various process noises. For both algorithms, the computational load increases as the grid interval becomes narrower. However, while the time of the proposed algorithm does not change significantly, the time of the conventional one increases exponentially. In particular, when the grid interval is 0.1, the proposed algorithm is about 12.5 times faster than the conventional one. On the other hand, when the grid spacing is 1, the proposed algorithm is about 1.42 times slower. This is because a total of five linear interpolations were performed for the mass redefinition, given the strong nonlinearity of the growth model. Since the mass-redefinition process occupies about 70% of the computation time of the proposed algorithm, reducing the number of mass redefinitions can significantly reduce the overall execution time. Figure 5b shows the calculation time versus the process noise for grid intervals of 0.3 and 0.5. As mentioned earlier, the time of the proposed algorithm does not vary much with the grid interval, while the time of the conventional algorithm increased by a factor of 2.2 when the grid interval changed by a factor of 1.67. The time of the proposed algorithm also hardly changes when the process noise changes, whereas for the conventional algorithm the calculation time decreases as the process noise decreases.
This is because an exponential-function evaluation is required when performing the probability diffusion due to the process noise, and in typical numerical implementations the result is treated as 0 if the exponent of the exponential function is below a certain value.

Two-Dimensional Body Fall Problem

As another numerical example for PMF, we performed a simulation of the two-dimensional body-fall problem. The mathematical model of the body-fall problem is as follows [1]. As usual, w_i is the process noise and v_k is the measurement noise. The two state variables x_1 and x_2 represent the altitude and the velocity of the body, respectively. ρ_0 is the air density at sea level, k is a constant describing the relationship between air density and altitude, g is the gravitational acceleration, and b_c is the ballistic coefficient. We use a discretized system model with a step size of 100 ms, and the measurement is obtained every 0.5 s. A range-measuring device is located at an altitude a, and the horizontal range between the device and the body is M. The constants used are: ρ_0 = 105 kg·s²/m⁴, g = 9.8 m/s², k = 5100 m, b_c = 6.24 × 10⁻⁵ m³/(kg·s²), M = 10,000 m, and a = 10,000 m. The initial conditions of the system and the filter are x_0 = [40,000, −3000]^T, P_0 = diag(100², 5²), Q_k = diag(1², 0.1²), and R_k = 10². The grid size of the PMF is 101 × 21 and the simulation time is 30 s. For the first few seconds, the velocity decreases slowly; then the air density increases and drag slows the falling body. Toward the end of the simulation, the body reaches a constant terminal velocity. Figure 6 shows the altitude and velocity RMS errors over 100 Monte Carlo simulations. The proposed algorithm shows excellent performance over the entire time span. The altitude error of the proposed algorithm tends to increase slightly in the 5-20 s interval, in which the velocity changes rapidly, but after 20 s the error decreases stably again. The altitude estimation error of the conventional algorithm, on the other hand, tends to keep increasing gradually even after 20 s. Likewise, the velocity error increases slightly from 5 s, when the velocity starts to change rapidly, but stabilizes again after about 12 s. If the grid interval is sufficiently small, it is enough to apply the conventional PMF algorithm. However, the real-time application of a nonlinear estimation filter may be severely limited in computational power depending on the system payload, and in that case the performance is likely to deteriorate when the conventional PMF is applied. On the other hand, particularly when the process noise is small but the grid is not sufficiently dense relative to it, the simulations confirm that the proposed algorithm improves the performance and reduces the computational load compared with the conventional method. Recently, many studies on accurate position determination for small unmanned vehicles such as drones have been conducted, and the estimation models of localization techniques using radio waves, vision, and distance information are generally nonlinear. Therefore, in terms of estimation performance and computational load, the proposed algorithm is particularly effective when applied to small vehicles whose payload is limited by power consumption. Nevertheless, according to the simulation results, the PF has better performance than the PMFs.
However, owing to its deterministic nature, PMF is known to have superior robustness compared with PF [35].

Rao-Blackwellized PMF with Reliable Time Propagation

The algorithmic complexity of PMF is O(N²) because of the convolution operation in the time-propagation step, where N is the number of grid points. Furthermore, N is generally an exponential function of the dimension n of the state variable, so the computational load of PMF increases exponentially as n increases. The Rao-Blackwellization technique is a representative method for reducing the computational complexity of high-dimensional nonlinear estimation problems based on PF [12,13]. In this technique, when the estimation model can be separated into a nonlinear part and a linear part, nonlinear filtering is applied only to the nonlinear part, and Kalman filtering is applied to the linear part, making the dimension of the nonlinear filter as small as possible. It was developed and applied to PF first, and Šmídl was the first to propose the RBPMF, relatively recently [26]. However, since all previous RBPMFs are based on the conventional PMF, they suffer from degraded filter stability due to the abnormal probability diffusion of the nonlinear part. Therefore, in this paper, as with the PMF proposed in the previous section, we propose an RBPMF algorithm using MMGK that processes the probability diffusion more accurately, and we describe simulation results that verify the effectiveness of the proposed RBPMF. Finally, an RBPMF algorithm with the same indirect time propagation as the PMF proposed in the previous section is presented for a special case.

Conventional Rao-Blackwellized PMF

Let us consider the following model, in which the state variable can be separated into a nonlinear part and a linear part [12], with the state variable, the process noise, and the measurement noise partitioned accordingly. Then, the posteriori PDF for the decomposed state variable can be divided into two conditional PDFs, as shown in Equation (22). Here, an important assumption is that the conditional PDF p(x^l_k | x^n_k, Y_k) for the linear part x^l_k given the nonlinear part x^n_k approximately follows a normal distribution. Therefore, KF is applied to the linear part, and PMF is applied only to the PDF estimation of the nonlinear part. Two aspects of this model are important in constructing the RBPMF algorithm. First, when the nonlinear part is processed, the two terms F^{nl}_{k−1}(x^n_{k−1}) x^l_{k−1} and H_k(x^n_k) x^l_k contributed by the linear part are regarded as additional noises. This makes the variances of the effective process and measurement noises larger when estimating the nonlinear part. Furthermore, whereas these two noises are usually treated as zero-mean, in the case of RBPMF the effective process and measurement noises become normal distributions with nonzero mean because of the influence of the linear part (see Equations (A4) and (A8)). The second aspect is that the system model for the nonlinear part is processed as an artifact measurement for the linear part, and this artifact measurement model must be processed before the system model for the linear part. To show how these two aspects arise, the PDF update procedure of the RBPMF is described in six steps in Appendix A.
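The conditionally linear structure and the artifact-measurement idea can be sketched concretely. The instance below assumes the growth model with its coefficients b and d treated as the linear part (the case revisited later in the paper), written in the commonly used benchmark form; the concrete functions, noise levels, and initial values are illustrative, and the correction needed when w^n_k and w^l_k are correlated is omitted.

```python
import numpy as np

# Illustrative conditionally linear model in the spirit of Equation (20) (placeholder choice):
#   x^n_{k+1} = f_n(x^n_k) + F_nl(x^n_k) x^l_k + w^n_k,   x^l_{k+1} = F_l x^l_k + w^l_k
#   y_k       = h(x^n_k) + H x^l_k + v_k
f_n  = lambda xn: 0.5 * xn
F_nl = lambda xn, k: np.array([[xn / (1.0 + xn ** 2), np.cos(1.2 * k)]])  # 1 x 2 coupling row
F_l, H = np.eye(2), np.zeros((1, 2))   # constant linear dynamics, no direct linear term in y
Q_n = np.array([[0.3 ** 2]])           # process-noise variance of the nonlinear part

def artifact_measurement_update(xl_mean, P_l, xn_next, xn, k):
    """KF update of the linear part using the nonlinear-part transition as an artifact
    measurement: z = x^n_{k+1} - f_n(x^n_k) = F_nl(x^n_k) x^l_k + w^n_k."""
    C = F_nl(xn, k)
    z = np.atleast_1d(xn_next - f_n(xn))
    S = C @ P_l @ C.T + Q_n
    K = P_l @ C.T @ np.linalg.inv(S)
    xl_new = xl_mean + K @ (z - C @ xl_mean)
    P_new = (np.eye(2) - K @ C) @ P_l
    return xl_new, P_new

xl, P = np.zeros(2), np.diag([10.0, 10.0])
xl, P = artifact_measurement_update(xl, P, xn_next=2.1, xn=1.0, k=3)
```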
For the time-propagation step of the linear part, if w^n_k and w^l_k are correlated with each other, the artifact measurement must be handled carefully, and Schön proposed a well-established algorithm for this [12]. Figure 7 shows an example of a PDF estimated by a PMF that treats the state variable as a two-dimensional nonlinear part for a two-dimensional estimation problem, and a PDF estimated by an RBPMF composed of a one-dimensional nonlinear part and a one-dimensional linear part. The conventional RBPMF algorithm based on the Bayesian framework summarized in Appendix A is as follows.

Rao-Blackwellized PMF with MMGK

In the time propagation of the RBPMF, the probability diffusion is calculated by sampling, at the grid points, the effective process-noise normal distribution whose covariance includes the term contributed by the linear part in addition to Q^n_k. Therefore, if the new grid spacing is not small enough, the diffusion may not be handled properly, just as in the conventional PMF. To improve the abnormal probability-diffusion process of the RBPMF, the MMGK technique can be applied as in PMF. In PMF, to apply MMGK, the masses on the regular grid were generated by interpolating the masses on the irregular grid. In the case of RBPMF, however, each grid point of the nonlinear part is paired with the normal distribution of an individual linear part. That is, even if the nonlinear state space is relocated to a regular grid and the mass interpolation is performed, a common kernel cannot be applied because each grid point has an individual process-noise PDF. Nevertheless, adopting MMGK in RBPMF improves the performance, because MMGK better represents the statistical characteristics of the process noise when the grid is not dense enough. In addition, the process noise in the time propagation of the nonlinear part follows a normal distribution whose mean is no longer zero but is determined by F^{nl}_k and the linear-part estimate at ξ_k. However, this mean affects the probability diffusion at the same location as η_k, which corresponds to the nonlinear transformation of the previous grid point. Therefore, the process noise can be regarded as a zero-mean normal distribution, and the mean can instead be reflected in the nonlinear mapping, i.e., absorbed into η_{k|k}. Moreover, it is advantageous to account for the movement of the probability distribution caused by the linear part when redefining the grids (note that although the process noise is then zero-mean, the time propagation is still performed by sampling the normal distribution of the difference between the new grid points ξ and the transformed points η), where j ∈ Ker[i] means that ξ^j lies within the valid support of the kernel associated with grid point i.

Rao-Blackwellized PMF with Indirect Time Propagation for the Constant Linear Model Case

Since the RBPMF algorithm for the general nonlinear model has its own covariance matrix for each linear part, the indirect time-propagation algorithm of Algorithm 2, which uses one common kernel, could not be applied. However, if F^{nl}_k(x^n_k), F^l_k(x^n_k), and H_k(x^n_k) of the model for the linear part are not functions of the nonlinear part but constant, then the linear-part covariances are approximately equal, as follows. Suppose P^{l,[i]}_{k|k−1} are equal for all i = 1, ..., N_k. Then the calculations of P^{l,[i]}_{k|k} in Step (3) of Algorithm 4 give the same value because H_k(x^n_k) = H_k. Likewise, the calculations of P^{l,[i]}_{k+1|k} in Step (7) give the same value because F^{nl}_k(x^n_k) = F^{nl}_k and F^l_k(x^n_k) = F^l_k. The last covariance operation for the linear part is the adjustment by moment matching in Step (8), as shown in Equation (25).
Equation (25) contains the common covariance term P^l_{k+1|k} and covariance-adjustment terms arising from the N_k normal distributions scattered around the means x̂^{l,[j]}_{k+1|k}. Since the covariance-adjustment terms act in the direction of enlarging the covariance, there exists a positive diagonal matrix Λ_k that bounds them (in the sense of positive semidefiniteness). Then, by applying the maximum covariance adjustment, the linear-part covariance can be set to the same value P^{l,[j]}_{k+1|k} for all j = 1, ..., N_{k+1} (in the element-wise sense). Alternatively, Λ_k can be regarded as a tuning parameter of P_{k+1|k}. Therefore, by applying the maximum covariance adjustment in the moment matching together with the conditions F^{nl}_k(x^n_k) = F^{nl}_k, F^l_k(x^n_k) = F^l_k, and H_k(x^n_k) = H_k, the covariance matrices of the linear part become equal. In other words, only one covariance-matrix operation is required, which not only reduces the computational burden significantly but also makes it possible to apply the indirect time-propagation algorithm of Algorithm 2 to the nonlinear part, which was previously not applicable because the covariance matrices differed for each linear part. The indirect time-propagation algorithm of the PMF includes the mass-redefinition procedure for the nonlinear state variables. Therefore, to apply it to the RBPMF, the linear-part redefinition for the new grid must also be performed (the covariance matrix is common, and only the mean corresponding to the state estimate of the linear part is redefined). Linear-part redefinition means redefining the pairs of grid points and their associated linear-part conditional distributions according to the nonlinear-part redefinition procedure. It was shown above that γ^{[i]}_{k|k} can be calculated as a linear combination associated with ξ^{[i]}_{k+1}, and the same linear combination can be applied to the linear-part redefinition. To show this, consider the PDF of the linear part conditioned on x^n_{k+1} = ξ^{[i]}_{k+1} and Y_k, which is given by Equation (26). The denominator of Equation (26) is p(x^n_{k+1} = ξ^{[i]}_{k+1} | Y_k) p(Y_k). The numerator can be calculated through the following approximation: if x^l_k is held fixed, it does not affect the approximation of x^n_k. Therefore, using the linear-interpolation coefficients of ξ^{[i]}_{k+1}, Equation (27) can be approximated in the form of a linear interpolation, as shown in Equation (28). Figure 8 illustrates the concept of approximating the joint probability of the nonlinear part and the linear part by linear interpolation. Substituting the resulting expressions for the numerator and the denominator into Equation (26), the conditional PDF of the linear part given x^n_{k+1} = ξ^{[i]}_{k+1} and Y_k becomes Equation (29). If the errors introduced by the redefinition have to be accounted for, they can be modeled as additional process noise and reflected in the time propagation of the linear part. Although the estimation performance then deteriorates slightly because of the redefinition, the robustness of the linear-part filter against model errors can be improved. Figure 8 also shows the concept of the state-estimate redefinition of the linear part. The RBPMF algorithm applying the indirect time-propagation scheme to the model in which F^{nl}_k(x^n_k) = F^{nl}_k, F^l_k(x^n_k) = F^l_k, and H_k(x^n_k) = H_k are satisfied is as follows.
3: Measurement Update for the Linear State - KF measurement update for the linear part with the measurement y_k − h_k(x^n_k)
...
5: Grid, Mass, and Linear-State Distribution Redefinition - redefine the grid set, the masses γ^{[j]}_{k|k} for ξ^{[j]}_{k+1}, and the linear-state means, where the index i is limited to the range of K_{j,m}
...
7: Time Propagation for the Linear State - KF time propagation for the linear part with the artifact measurement x^n_{k+1} − f^n_k(x^n_k), where Γ_k is a positive diagonal matrix and the summation is performed only for α^{[j,i]}_{k+1|k} ≠ 0
...
9: Update - set k := k + 1 and repeat (2)-(8)

As a representative example of such a model, there is terrain-referenced navigation (TRN), in which the two-dimensional horizontal position error and the one-dimensional altitude error are state variables. For this problem, Peng proposed an interpolation technique for the height errors under changing grid intervals. However, his method is an index-based adaptive method that simply copies the neighboring height error intuitively [30]; it lacks a theoretical basis and also has the disadvantage that it cannot be applied to changes in grid position. The proposed linear-part redefinition algorithm, on the other hand, is more systematic and can handle both grid-position and grid-interval changes.

Growth Model with Unknown Parameters

The RBPMF algorithms described in Sections 4.1 and 4.2 were applied to the simulation model used in Section 3.3. Although the model is the same, the parameters b and d are now treated as unknown quantities to be estimated; that is, there are three states to be estimated: x_k, b, and d. To apply the RBPMF, the nonlinear and linear state variables are set to x^n_k = x_k and x^l_k = [b, d]^T, respectively. Then the model can be rewritten as follows. The RMS estimation errors over 100 Monte Carlo simulations up to 100 s are shown in Figure 9. If all four parameters were considered as estimation targets, the contribution of the linear part to the effective process noise of the nonlinear part through F^{nl}_k(x^n_k) in Equation (20) would become large compared with the grid interval, depending on the value of x^n_k. Therefore, in this paper, only the parameters b and d were estimated, and the simulation was performed only for the case in which the influence of the linear part on the process noise is small. Figure 10 shows the change in the kernel length used for MMGK creation over time for one of the previous simulations. Except for the initial transient region, the kernel length within the range |ξ| < 15 stays at 3 to 4 almost everywhere, and in this case the performance is improved by applying MMGK.

Tightly-Coupled INS/TRN Integration

TRN is the most representative application field of PMF. The basic concept of TRN is to find the position by comparing the difference between the absolute altitude from the inertial navigation system (INS) output and the relative altitude from the radio altimeter (RA) with a terrain elevation database. TERCOM, the first TRN system, is based on a batch-processing algorithm that intermittently estimates the position by accumulating measurements over a certain period, but in recent years TRN has gradually been developing into filter-based sequential processing. Since the measurement model of TRN is the terrain itself and the terrain is highly nonlinear, a nonlinear filter is mandatory for the sequential-processing TRN algorithm. In general, TRN is integrated with INS, and there are three ways to integrate INS and TRN, as follows.
• No integration: a single TRN filter structure without any integration;
• Loosely coupled: a cascaded structure in which an INS-aiding filter follows the TRN filter;
• Tightly coupled: a single filter structure combining the TRN filter and the INS-aiding filter.

In this paper, the proposed RBPMF algorithm is applied to the tightly-coupled method, which is known to have the best performance among the three. To apply the RBPMF, the mathematical model of TRN must first be constructed in the form of Equation (20). Here, a 15th-order model including the position errors, altitude errors, velocity errors, attitude errors, accelerometer bias errors, and gyro bias errors is considered. Among the 15 state variables, only the two horizontal position errors, which are the independent variables of the terrain-elevation function, are selected as the nonlinear-part state variables, and the rest are the linear-part state variables. Based on this, the tightly-coupled INS/TRN system model is represented as follows. A detailed explanation of the INS error model F^l_{k−1} is provided in many other pieces of literature, so it is omitted in this paper. The horizontal position errors are generally expressed as angular errors of latitude and longitude, but in that case the system model for the nonlinear part is not an identity matrix because it is affected by the Earth radii. Therefore, in this paper, the horizontal position errors are expressed as distance errors instead of angles. The simulation configuration is as follows. First, the PMF grid size is 51 × 51. Time propagation and measurement update are performed every 1 s. Because of the characteristics of the RBPMF, the nonlinear part is changed by the velocity error of the linear part, so the grid redefinition is performed at every time propagation. The initial grid interval is set to cover the 3-sigma region of the initial position error of the INS, and the mass is initialized in the form of a normal distribution. The various initial errors and sensor errors are summarized in Table 2. Terrain elevation data with a resolution of about 30 m are used for the simulation. We conducted 50 Monte Carlo simulations and obtained the position-error RMS at every time step. The flight trajectory is assumed to be straight at constant speed for 200 s. Figure 11 shows the ground trajectory over the terrain elevation. The flight altitude is 300 m higher than the highest ground altitude below the flight trajectory, and it is assumed that there are no INS altitude errors or vertical velocity errors. Table 3 summarizes the execution times of the three algorithms. The proposed algorithms operate several times faster than the conventional algorithm; in particular, Algorithm 5 is found to be about 5.67 times faster. Algorithm 4 computes as many linear-part covariances as there are grid points, whereas Algorithm 5 computes only one covariance. Owing to this, the execution time of Algorithm 5 is reduced by 29.3 ms in the time propagation and 16.5 ms in the measurement update compared with Algorithm 4; in other words, Algorithm 5 runs approximately 1.35 times faster than Algorithm 4.

Conclusions

In this paper, we proposed several algorithms that improve the reliability of the time propagation of PMF.
First, we proposed a PMF algorithm that performs the probability diffusion indirectly through the mass redefinition and the dimension-extended MMGK, as opposed to the conventional PMF, which performs the probability diffusion by directly sampling the process noise. The proposed PMF outperforms the conventional one while requiring less computation. To verify the performance of the proposed algorithm, simulations were performed on the growth model and the body-fall problem. The simulation results show that the proposed PMF improves the performance under most conditions and reduces the computational load by up to a factor of 12. RBPMF is one remedy for the excessive computational burden of PMF, which increases exponentially as the dimension of the state variable increases. However, an RBPMF based on the conventional PMF has the same problem in the probability-diffusion process. Therefore, as the second result of this paper, we proposed an RBPMF algorithm adopting MMGK but without mass redefinition. The third result is an RBPMF algorithm that includes the redefinition step of the linear part for indirect time propagation in the case of a constant linear model such as TRN. Simulation results for the growth model with two unknown parameters and for the tightly-coupled INS/TRN integration with a 15th-order state variable verify that the proposed algorithms show better performance with less computation than the conventional RBPMF. When generating the extended MMGK, we ignored correlations between the process noises; taking these correlations into account would require handling higher-order moments of the multivariate normal distribution. The proposed algorithms perform linear interpolation of the masses, which requires the nonlinear transformation of the PDF; its implementation may not be easy if the system dynamics model is complicated or has multiple solutions for a given target value.
Recent advancement in vascularized tissue-engineered bone based on materials design and modification

Bone is a highly vascularized connective tissue that depends on the interaction between blood vessels and bone cells for its development [1]. It has been shown that 10 %-15 % of the mammalian cardiac output goes to the skeletal system [2]. The skeleton's circulatory system performs a crucial supportive function in skeletal development. Intramembranous osteogenesis and endochondral osteogenesis are two types of skeletal development. Blood vessels proliferate where the bone is to be created during intramembranous osteogenesis, and blood vessels invade the cartilage during endochondral osteogenesis, signaling the start of ossification [3]. The processes of osteogenesis and angiogenesis are coupled during bone regeneration, and the communication between osteoblasts and endothelial cells is crucial for the bone remodeling process [4]. Arteries supply not only oxygen and nutrition during fracture healing, but also osteogenic stem cells and the ions required for mineralization in the latter phases of fracture healing. Bone defects produced by tumors, injuries, and geriatric diseases are widespread in clinical practice [5]. The bone tissue can heal itself when the bone defect is small, such as greenstick fracture or crack fracture [6]; however, a bone substitute is needed to assist in the treatment when the bone defect exceeds the critical value (a defect involving 50 % of the cortical diameter with a minimum length of 1 cm) [7,8]. Autologous bone grafts with vascular pedicles are the gold standard for the treatment of large bone defects caused by bone tumors [9]. Nevertheless, there are significant drawbacks to autologous bone grafting, such as the restricted size and quantity of bone blocks available and the trauma caused by bone extraction to the donor site [10]. Bone tissue engineering techniques have been proposed to address these disadvantages.
Tissue engineered bone is a significant tool in the treatment of bone defects in clinical practice.The creation of new bone and blood vessels, as well as tissue integration and functional bearing, are the characteristics of successful artificial bone implantation [11].The construction of a functional vascular system is central to the application of tissue engineered bone [12].Whereas the skeletal vascular system is heterogeneous, its characteristic endothelial phenotype is closely related to the formation of functional blood vessels [13].Functional vascularization must first be induced after bone substitute implantation in order to provide the basis for the subsequent repair of bone defects [14].Previously, the vascular network formed after implantation of the bone substituted into the recipient site decreases gradually from the periphery of the scaffold to the center [15].The blood supply to the core of the scaffold is deprived, which will eventually lead to failure of the scaffold implantation [16]. To enable the construction of functional vascularization in bone tissue engineered grafts that rapidly communicate with the host vascular system, researchers have proposed the use of pre-vascularization techniques to enhance blood vessel formation within the implant [17,18].The pre-fabrication technique allows the scaffold to be enriched with a functional vessel before implantation and enables instantaneous anastomosis of the scaffold vessel to the host vessel after implantation [19].This will significantly increase the likelihood of scaffold activation, but patients will have to wait an excessive amount of time for a premade donor.Enhancing the angiogenic capability of bone tissue engineered scaffolds will not only shorten the time to the pre-production of scaffolds by pre-vascularization techniques, but will also allow exploration of the construction of immediately implantable tissue engineered scaffolds with vascularization capability. Biomaterial-derived scaffolds play an important role in promoting vascularized bone regeneration in bone tissue engineering [3,20].Materials that have excellent biological characteristics, such as calcium phosphate, natural polymers, and synthetic polymers, have been widely used in the construction of vascularized scaffolds for bone tissue engineering [21,22].Aside from the material, the scaffold's structure influences blood vessel formation and osteogenesis.Porous scaffolds favor cell migration, tissue development, nutrition delivery, and waste elimination more than solid scaffolds [23].The properties of porous scaffolds, such as pore size [24], porosity [25], porous interconnectivity [26], and internal morphology [27], will determine whether peripheral vessels can successfully grow into the scaffold and induce new tissue formation.Furthermore, fracture repair is closely related to the role of cytokines, with vascular endothelial growth factor (VEGF) and bone morphogenetic protein 2 (BMP-2) having a decisive role in vascular repair after fracture [28,29].Loading cytokines onto scaffolds has become an effective strategy to induce vascularized bone tissue production [30]. 
Skeletal vascular network The skeletal vascular system plays an important role in bone development, regeneration, and remodeling [17].At the macroscopic level, the skeletal vascular system displays a classical hierarchical arrangement of afferent arteries, capillary networks, and efferent veins [31].In the long bones, for example, the afferent arteries are mainly composed of the central nutrient artery, the metaphyseal-epiphyseal artery, and the periosteal artery [32].Its capillaries were categorized into type-H vessels (CD31hiEmcnhi) and type-L vessels (CD31loEmcnlo) due to the differential expression of platelet endothelial cell adhesion molecule-1 (PECAM-1/CD31) and salivary glycoproteins (Emcn) [33].Type-H vessels are highly expressive of CD31 and Emcn markers, mainly in the metaphysis and subperiosteum, and are interconnected by distal vascular rings or arches [34].In comparison, type-L vessels, which are low in CD31 and Emcn markers, are distributed in the backbone and form a dense, perforated, and highly branched sinusoidal network in the bone marrow lumen [33].At the microscopic level, the periosteal arteries, the small branches flowing from the central nutrient artery, and the small distal arteries in the epiphysis flow into specialized type-H vessels [35,36].The type-H vessels are connected to the type-L vessels, which eventually join the outflow vein [37].These two types of vessels are closely connected at the epiphyseal-diaphyseal junction and form a complete vascular bed in the marrow cavity [38](Fig.1g).Notably, Langen et al. reported that type-H vessels can transition to type-L vessels, and that this process is accompanied by the rapid developmental expansion of bone and none marrow, suggesting that type-H endothelial cells may be upstream endothelial cells in bone [13,39].In addition, E-type vessels (CD31hiEmcnlo) also exist, which have high expression of CD31 and low expression of Emcn.These E-type vessels are predominantly found in embryonic and early postnatal bone and have the ability to differentiate into type-H vessels [39].In addition to the classical vascular system described above, Grüneboom et al. recently reported a transcortical vasculature (TCV), also known as the small periosteal vessels [40].The heterogeneity and uniqueness of the bone vascular system plays an important role in the regulation of bone metabolism. Skeletal vascular system influences bone regeneration In the skeletal system, the vascular system is closely related to bone metabolic activity, i.e., angiogenesis, vascular secretory signaling, and osteogenesis are coupled with each other [41].Schmidt-Bleek et al. 
reported that revascularization is essential for the regenerative bone healing process [42].Immunosuppressive agents with anti-angiogenic properties can inhibit new blood vessel formation in fracture healing tissue and delay fracture healing if used [43].Vascular invasion is an important step in all modes of osteogenesis, and type-H vessel formation is thought to play a key role in regulating the bone formation and repair processes [44].During the healing of mammalian bone defects, the proliferation of type-H vessels shows spatiotemporal specificity.On day 3 after bone injury, type-H vessels proliferate and are widely distributed throughout the repair area.At 7 and 14 days after injury, type-H vessels began to shrink and were concentrated around the growing trabeculae in the vicinity of the front growth area [45].This is mainly due to the close molecular communication between type-H endothelial cells and osteoblasts [46].Kusumbe et al. reported that type-H endothelial cells mediate local growth in the vascular system and provide ecological niche signaling for Osterix + bone progenitor cells, type 1α+ collagen osteoblasts, Runx2+ bone progenitor cells, and PDGFRβ+ pericytes [33].In contrast, type-L vessels are barely surrounded by osteoblast progenitor cells [47].This is closely related to type-H endothelial cells with high expression of platelet-derived growth factor A (PDGF-A), PDGF-B, and fibroblast growth factor 1 (FGF1) [33,39].The above findings further confirm the critical role of type-H vessels in bone regeneration. In humans, the type-H vessel content decreases progressively with aging, and improving vascularized bone regeneration by inducing type-H vessel formation is a potential target for tissue engineering [48].Hypoxia-inducible factor-1 (HIF-1α), VEGF, BMP-2, Notch, and Slit3 signaling pathways can exert control over type-H vessel formation and osteogenesis [31,34,44](Fig.1f).Hypoxia-inducible factor-1 (HIF-1α) expression and activity are regulated by hypoxia and are important regulators of angiogenesis in physiological or pathological conditions [49].HIF-1α plays a crucial role in the angiogenic-osteogenic cascade reaction [50].Kusumbe et al. demonstrated that HIF-1α is an important promoter of metaphyseal type-H vessel formation [33].High levels of HIF-1α expression in cells in the region of bone defects leads to the significant expansion of type-H endothelial cells and metaphyseal vascular columns [51,52].Bone tissue engineering scaffolds can activate HIF-1α signaling by carrying hypoxia-inducing cobalt (Co) ions to promote type-H vessel and bone generation [53].Sun et al. prepared a hydrogel that allows slow-release control of Co2+ release.The hydrogel sustained Co2+ release over 21 days and stably activated HIF-1α signaling, which matched type-H vessel formation during bone repair [54].VEGF is a classical factor that promotes angiogenesis, and regulates the formation of type-H blood vessels [55].Liang et al. found that Panax ginseng saponin (PQS) could promote type-H vessel formation by stimulating VEGF expression [56].In addition, BMP-2, a more widely used growth factor in orthopedic surgery, can also regulate type-H vessel formation [57].Yao et al. 
implanted a hydrogel loaded with BMP-2 in a rat mandibular defect model, and animal experiments showed that the BMP-2-triggered region of new bone was rich in type-H blood vessels and associated Osterix + bone progenitor cells [58].There is evidence that blood flow is also closely related to type-H vessel generation.Ramasamy et al. found that reducing blood flow by ligating the femoral artery or using a blood flow-lowering drug resulted in a significant reduction in the number of type-H blood vessels in the epiphysis of mice, whereas increasing blood flow promoted an increase in the number of type-H vessels and thus promoted bone formation.This is because high blood flow velocity (0.98 ± 0.1 mm s − 1 ) stimulates Notch signaling and thus promotes type-H vessel formation [59,60].In addition, Xu et al. determined, by transcriptome analysis, that SLIT3 is an osteoblast-derived, SHN3-regulated factor that promotes H-type formation.They found that mice with mutations in SLIT3 showed reduced type-H endothelial cells and corresponding defects in fracture repair, whereas mice with mutations in SHN3 showed enhanced fracture repair [47].Zhai et al. processed polycaprolactone, collagen, and nanohydroxyapatite into a composite scaffold with vascularized osteogenesis using electrostatic spinning technology.The composite scaffold could significantly promote SLIT3 gene expression in bone marrow mesenchymal stem cells (MSCs) and could effectively repair cranial defects in mice [61]. The vascular system in the skeletal system is more than a passive conduit system; it is intimately and actively involved in the complex processes of bone metabolism.Among the multiple cellular activities and various signaling pathways involved in bone regeneration, type-H vessels have a great capacity to couple osteogenesis with angiogenesis.Currently, the lack of vascularization and functional vasculature in tissue engineered bone has become the biggest obstacle in its clinical application, and the use of type-H vessels as a target for stimulating vascularized bone regeneration holds great promise. Pre-vascularization strategy The scaffold pre-vascularization approach provides the implant with a vascular pedicle before placement.These vascular pedicles can be anastomosed with the blood vessels at the recipient location after implantation, allowing the scaffold to be perfused instantly and bypassing the vascularization phase.As a result, it can be employed to solve the vascularization problem [62].Pre-vascularization techniques for scaffolds in bone tissue engineering may be separated into in vivo pre-vascularization procedures and in vitro pre-vascularization strategies. 
In vivo pre-vascularization strategies The in vivo pre-vascularization strategy for bone tissue engineering takes advantage of the body's self-regenerative capacity by using a part of the body as an in vivo bioreactor (IVB) for in situ or ex situ culturing of bone tissue engineering scaffolds [63].Depending on the scaffold and implantation location, vascularization of the graft can take weeks or months [64].Vascularized scaffolds for the reconstruction of bone defects are obtained immediately after the end of culture.The preparation of the scaffold and the selection of the implantation site are two key elements of the in vivo pre-vascularization strategy for bone tissue engineering [65].Currently, 3D printing technology allows the construction of scaffolds or culture chambers with different shapes according to individual patient variability [66].Secondly, the choice of scaffold or chamber filling materials, seed cells, and growth factors needs to be determined according to the implantation site [67].Furthermore, the choice of implantation site determines the subsequent steps and clinical outcomes.Constructing an IVB at the defect site can directly repair bone defects without secondary surgery.If the culture conditions of the defect site are poor, then ectopic construction of the IVB can be chosen.Currently, in bone tissue engineering, in vivo pre-vascularization strategies can be classified into tissue flap-based IVB, axial vascular tip-based IVB, and periosteal flap-based IVB, depending on the implantation location and approach. IVB based on muscle bag. The muscle pouch preformation technique utilizes the self-regenerative ability and safe induction of differentiation of living muscle bags to create vascularized tissue engineered bone [68].The technique can be traced back as far as 1995 when Fujimura et al. successfully induced ectopic osteogenesis in rat thigh muscles using BMP-2 and type I collagen [69].The traditional approach Fig. 2. a) Titanium mesh cage filled with bone mineral blocks, recombinant bone morphogenetic protein-2 (rhBMP-2), and human bone marrow aspirate.b) Computed tomography (CT) image of the titanium mesh cage 10 weeks after implantation in the greater omentum.c) CT image of the prefabricated titanium mesh cage 1 week after implantation into the mandible.d) Bone density and bone metabolic activity during transplantation.e) Histological evaluation of bone biopsies 3 months after transplantation [76].f-h) Schematic diagram of arteriovenous (AV) loop repair of a bone defect.f) After the creation of the bone defect.g) Placement of osteogenic material into the defect.h) Anastomosis of the radial artery and cephalic vein forms an AV loop and penetrates the osteoid structure.i) Preoperative magnetic resonance imaging (MRI) showed a cystic lesion of the distal radius.j-k) Angiography at 14 months after surgery showed AV loop patency and complete healing of the bone defect [81]. H. Liu et al. 
is to create a non-degradable chamber based on the shape of the bone defect and fill the chamber with osteoinductive material.The chamber is then surgically implanted into a blood-rich muscle pocket for culture.A few weeks later, the personalized vascularized bone is transferred to the recipient site for microvascular anastomosis [70].The technique enables the vascular pedicle to be transferred along with the vascularized bone mass, which allows for better repair in areas where the vascular bed is significantly damaged [71].Importantly, the role of BMP-2 in ectopic preformation is to induce ectopic osteogenesis, which is, therefore, an essential cytokine in the preformation process.Cao et al. prefabricated 3D printed β-tricalcium phosphate (TCP) scaffolds (with or without a recombinant bone morphogenetic protein-2 (rhBMP-2) coating) using monkey latissimus dorsi.After prefabrication, they found that only the TCP scaffold with the rhBMP-2 coating would have new bone generation.They then implanted the prefabricated rhBMP-2/TCP scaffold together with the myocutaneous flap into the rhesus mandibular defect.The scaffold demonstrated strong vascularized osteogenesis compared to the unprepared control group [72].In addition,muscle bag prefabrication techniques have been reported to be successfully applied in clinical practice.Kokemueller et al. used the patient's latissimus dorsi muscle for ectopic vascularization cultures to obtain artificial bone blocks rich in blood vessels.A personalized titanium mesh filled with the vascularized bone block was then implanted into the patient's segmental mandibular defect.After a period of recovery, the patient's mandibular defect healed well and the temporomandibular joint was stabilized [73]. IVB based on omentum. In comparison to muscle tissue, the gastrocolic omentum is thin, flexible, rich in vascular tissues, and has precursor cells that support osteogenic differentiation [74].The porcine omentum was employed as a bioreactor for prefabricated vascularized scaffolds by Naujokat et al. [75].Custom-made titanium chambers containing pig bone marrow aspirate, bone mineral blocks, and BMP-2 were implanted into the porcine omentum for culture, and customized bone blocks with vascular pedicles were obtained after 8 weeks.Additionally, the omental prefabrication technique has been successfully shown in clinical situations.Wiltfang et al. [76] inserted titanium mesh cages containing human bone marrow aspirate, rhBMP-2, and bone mineral blocks into the patient's gastrocolic omentum for prefabrication.Three months later, the team removed the prefabricated vascularized bone block from the titanium cage and reconstructed the patient's mandibular defect.Grafts increased in density and metabolic activity both before and after transplantation, suggesting adequate vascular supply and survival of induced bone tissue.Three months after implantation of the defect, histological evaluation showed that most of the graft was covered by a bone-like matrix, and the patient's quality of life was significantly improved (Fig. 2a-e).The large amount of space in the omentum makes it possible to prefabricate vascularized scaffolds of different sizes, including large bone substitutes of various shapes.Moreover, the omentum is rich and regenerative, and removal of part of it does not affect the body.Therefore, the application of omentum for the prefabrication of vascularized bone has a relatively broad application prospect. 
Bone marrow aspirates used for muscle pouch and greater omentum pre-vascularization may contain CD31hiEmcnhi subpopulations, bone progenitor cells, and bioactive factors.The addition of BMP-2 during preparation can promote type-H vessel formation and ectopic osteogenesis [57,58].In addition, scaffolding raw materials, such as bone mineral blocks and calcium phosphate, also have osteoinductive and conductive properties [77,78].Furthermore, the specially designed culture chambers provide a hypoxic microenvironment that facilitates the secretion of HIF-1α from the surrounding tissues. Moreover, the muscle pouch and peritoneal environment in which the culture chambers are located can provide the appropriate stress conditions for the growth of vascularized scaffolds [79].These conditions provide a favorable microenvironment for the ectopic culture of vascularized bone.Evidence suggests that neonatal bone tissue obtained by pre-vascularization in culture is more capable of restoring bone defects than autologous bone.Dai et al. obtained highly vascularized juvenile bone within 3-5 weeks after subcutaneous implantation of gelatin scaffolds loaded with BMP-2 in mice.Compared with autologous bone, this vascularized juvenile small bone showed fewer senescent MSCs, and abundant type-H vessels and bone progenitor cells.Compared with the non-pre-vascularized treatment group, the juvenile small bone could completely repair critical-sized cranial defects in young and old mice 2 weeks after transplantation [80]. IVB construction based on axial vascular tip 3.1.2.1.Arteriovenous (AV) loop model.The arteriovenous (AV) loop model is also an IVB that induces the generation of axially vascularized tissue.In animal models, superficial arteries and veins are usually anastomosed to form an AV loop, after which the AV loop is placed in an implantation chamber with bioactive material, and vascularized bone tissue is obtained over the period of culture [82].AV loop cultured grafts feature more vasculature and are more densely packed than the muscle pocket and omentum, making them safer and more promising for clinical use [83].Horch et al. [81] applied the AV loop model to successfully treat a 3 × 9 × 4 cm bone defect in the distal radius of one patient.Vascularization of this defect was achieved by inserting a segment of a lower arm vein graft into the arteriovenous loop between the palmar radial artery and the dorsal cephalic vein.The filler in the defect was composed of clinically approved β-tricalcium phosphate/HA, fibronectin, and direct autograft bone marrow taken from the right iliac crest.At the postoperative follow-up, imaging showed the presence of an open arteriovenous loop as well as a fully healed radial bone defect (Fig. 2f-k).This clinical example fully demonstrates the feasibility of the AV loop technique for clinical application. The immune response, hemodynamic alterations, and hypoxia may be involved in the process of the AV loop model supporting vascular development and bone replacement remodeling.Hessenauer et al. 
[84] observed the presence of rolling and firmly adherent leukocytes at the site of the vein graft and microsurgical anastomosis in the rat AV loop model. White blood cell recruitment is the first step in the integration of any biomaterial into tissue. Leukocytes at these sites of inflammation produce angiogenic factors, including VEGF, angiopoietin-1, PDGF, transforming growth factor (TGF), and epidermal growth factor (EGF) [85]. VEGF and PDGF can significantly promote the expression of type-H vasculature [31,34]. Furthermore, the venous segment in the AV loop is critical for initiating flow-mediated angiogenesis [86]. In the mouse AV loop model built by Wong et al., the venous side of the loop generated more vascularized tissue than the arterial side [87]. They further discovered that as the culture period increased, the vascularized tissue in the lumen infiltrated the matrix around the chamber and spread radially from the arteriovenous anastomosis to the surrounding region [87]. This is primarily due to the hemodynamic effects of the AV loop's specific structure, namely the high flow rate and shear stress on the thin vein wall [88]. This high shear stress and blood flow effect promotes vascular Notch signaling, thereby promoting type-H vessel formation [60]. In addition, there is a direct correlation between the high-flow state and the expression of connexin (Cx) genes. Connexins (Cxs) are four-helix transmembrane proteins through which endothelial cells (ECs) and vascular smooth muscle cells (VSMCs) can communicate. This exchange of information between cells enables vascular networks to adapt to changes, such as short-term changes in vascular tension to regulate blood flow [89], and long-term adaptation processes, including angiogenesis and wound repair [90]. Schmidt et al. investigated the hemodynamic effects on the formation of axially vascularized tissue using a rat AV loop model. They discovered that marked hemodynamic alterations resulted in a significant increase in connexin Cx43 expression in venous segments [91]. Furthermore, in vitro studies show that high shear stress and blood flow acting on ECs can increase Cx40 expression via the PI3K/Akt pathway [92]. Cx40 is thought to be essential for arterial identity and plays a role in angiogenesis [93]. Hypoxia is also thought to be an important driver of vascularization in the AV loop. A previous study found that AV loops in isolation chambers experience hypoxia, characterized by the upregulation of HIF [94]. Yuan et al. observed in a rat AV loop model that vessels began to grow rapidly as HIF-1α levels increased [95]. This suggests that HIF-1α promotes vascularized bone formation by facilitating type-H vessel formation [50,94].

AV bundle model. The AV bundle model requires a section of an unbranched AV bundle to pass through a custom chamber filled with osteoinductive material to obtain vascularized artificial bone. Compared to the AV loop model, the AV bundle model does not require anastomosis of the vessels, which reduces the risk of thrombosis and angioma formation. The customized chamber filled with biomaterials can then be prefabricated around the AV bundle [96]. This technique simplifies the AV loop technique by eliminating the need to graft additional vein segments; however, it produces less vascularized tissue [97]. There are currently successful applications of vascular bundle technology. Ismail et al.
placed chambers filled with allogeneic inactivated bone matrix, autologous stromal vascular fraction (SVF) cells, and BMP-2 within the patient's latissimus dorsi muscle and cultured AV bundles from the thoracodorsal vasculature through the chambers. After 32 weeks of pre-culture, the mature vascularized bone was successfully transplanted into the patient's maxillary defect and enabled the functional reconstruction of the patient's maxilla [98]. The addition of AV bundles within prefabricated chambers provides a well-defined vascular axis and improves vascularization and bone formation in prefabricated structures [96]. In addition, Charbonnier et al. reported the use of a single vein as a vascular axis to successfully induce vascularized osteogenesis in a porous bioceramic scaffold loaded with autologous bone marrow. In the avascular control group, induced bone levels reached 9%-26.6%, whereas osteogenic levels of up to 66 ± 6% were achieved after a single venous axis was applied for induction [99]. This osteogenic effect, which is no less than that of the AV bundle, may be related to the fact that the thin-walled vein is subjected to pressures an order of magnitude higher than physiological pressures, resulting in sprouting from the vessel lumen [100,101].

IVB construction based on periosteal flaps

When inducing ectopic osteogenesis, all the above approaches require the injection of cytokines to stimulate osteogenic differentiation, whereas periosteal osteogenesis does not. Periosteal induction is a method that uses the periosteum's intrinsic osteogenic and angiogenic properties as an IVB to induce vascularized bone growth in a specified shape [102]. This approach works by removing a portion of a rib while leaving the empty periosteal sleeve in place, followed by implanting a reaction chamber filled with bone-induction material into the rib periosteum for in vivo culture. After prefabrication is finished, the construct can be removed and implanted into the defect site [103]. This is a newer model, and its viability has been demonstrated in animal models. Tatara et al. [66] successfully repaired a large mandibular defect model in sheep using periosteal bioreactor technology. In their periosteal bioreactor, engineered bone with good vascularization can be obtained by using calcium phosphate cement or crushed sheep autogenous bone as the bone-induction material. In contrast to the above methods, the periosteal bioreactor can induce ectopic osteogenesis without exogenous BMP-2. However, this technique requires surgical osteotomy to obtain the empty periosteum and is more traumatic to the prefabrication site. In the future, if in vitro biomimetic construction of the periosteum becomes possible, the treatment of defect areas could be personalized.
These in vivo pre-vascularization methods face some problems compared with autogenous bone transplantation, such as the need for multiple operations, possible infection, hemangioma, and the risk of thrombosis. In addition, the in vivo prefabrication procedure may be interfered with by soft tissue when soft tissue proliferates faster than the inducers can stimulate new bone formation [104]. The soft tissue contained in prefabricated scaffolds can be counterproductive to bone regeneration and can even lead to serious problems with soft tissue entrapment at the defect site after implantation. Soft tissue formation can be limited in the following ways. Firstly, the formation of functionalized bone blocks can be controlled by adjusting the amount of inducer, such as BMP-2, thus avoiding soft tissue ingrowth into the implant [80]. Secondly, cytokines that inhibit the growth of soft tissues such as muscle can be added to the constructs during prefabrication [105]. Moreover, the period of ectopic culture of vascularized bone in the human body must be investigated further. When the culture period is too short, the bone does not develop an adequate vascular pedicle and volume, and when the culture time is too long, bone resorption might occur. According to Wolff's law, if the stress on bone tissue is insufficient, the bone will be resorbed [106]. In an IVB, engineered bone is subjected to only limited mechanical loading. The prefabricated vascularized bone may therefore have a better therapeutic effect if some stress is applied to the bone mass during the ectopic culture process.

In vitro pre-vascularization strategies

In vitro pre-vascularization procedures are less invasive than in vivo pre-vascularization strategies and can avoid additional injury to patients. In recent years, in vitro pre-vascularization techniques, which focus on pre-vascularizing the graft in vitro prior to implantation, have become increasingly sophisticated. These techniques primarily involve the use of tissue-specific cells in culture to generate vascularized tissue structures that can be implanted into the defect with improved vascular connectivity and osseointegration [107]. The co-culture technique and the cell sheet technique are the two main types of cell-based in vitro pre-vascularization strategies.

Co-culture techniques

The co-culture of MSCs and ECs is one of the most straightforward co-culture approaches in bone tissue engineering. Co-culture techniques can better mimic the in vivo situation because signals between cells are transmitted via connections between different cell types, exosomes, and paracrine activity [108]. Bo et al. co-cultured dental follicle-derived stem cells (DFSCs) with human umbilical vein endothelial cells (HUVECs). They discovered that the cultures had significantly higher expression of angiogenesis-related genes and proteins, as well as improved osteogenic potential [109]. Such co-culture systems can be maintained over long periods to produce tissue with vascular structures; however, these structures form only in a 2D culture environment and the resulting vasculature is not functional [19]. The solution to this problem is the use of scaffolds, which can be tailored to the shape of the defect so that grafts with a 3D vascular network can be formed in co-culture. By covering hydrogels packed with human adipose-derived MSCs (ADMSCs) and HUVECs on 3D printed polycaprolactone/hydroxyapatite scaffolds, Kuss et al.
successfully created composite scaffolds with vascularized osteogenic potential [110]. In vitro immunohistochemistry and qPCR analyses showed that the co-culture system promoted capillary network formation and vascularization-related gene expression. Microvessel formation was visible on the scaffold in the gross view and on histological analysis 4 weeks after subcutaneous implantation. Immunohistochemical analysis further showed that these lumen structures were formed by HUVECs (Fig. 3a-c). Furthermore, Smirani et al. discovered that pre-vascularized grafts had more vascular network connections to the recipient site, as well as a higher proportion of erythrocytes in the lumen and greater blood flow, when compared to the control group [111]. These findings demonstrate the potential of pre-vascularized scaffolds in healing large bone defects.

Cell sheet technology

Cell sheet technology typically involves growing tissue-specific cells on temperature-sensitive polymers that allow cells to adhere and proliferate at 37 °C. After lowering the temperature, cell sheets can be detached without the use of trypsin. Controlling the culture temperature yields a single cell sheet with intact cell-cell connections and an established extracellular matrix [113]. EC sheet layers can be included in a multilayer cell sheet to create vascularized tissue structures using this technique [114]. Xu and colleagues seeded bone marrow-derived stem cell (BMSC)-derived ECs on BMSC cell sheets. Lumen-like structures and osteoblast layers formed on the sheet tissue after co-culture. The vascularized tissue was then implanted into a large skull defect in rats, and the results revealed that the sheet tissue developed functional perfused vessels and new bone tissue [115]. Similarly, Zhang et al. [112] used human amniotic mesenchymal stem cell (hAMSC) sheets as the basis for constructing vascular cell sheets, osteoblast sheets, and bicellular sheets, respectively. They then used a rat cranial defect model to validate the regeneration of bone defects treated with the different cell sheets. Micro-computed tomography (micro-CT) at 8 and 12 weeks postoperatively showed that the bicellular sheets exhibited significantly better bone repair. Histological analysis showed that bone defects covered by bicellular sheets formed more regularly arranged bone covered with cuboidal osteoblast-like cells compared to the other groups (Fig. 3d-f). In addition, cell sheet technology can be combined with metal scaffolds, enhancing their vascularization ability. MSC sheets were adhered to the surface of titanium implants by Yan et al.
to form a tightly connected MSC sheet-titanium composite.In the experiment, the composite scaffold promoted osteogenesis and angiogenesis very well [116].The cell sheet technique is predicted to improve metal transplant biocompatibility and the ability to generate vascularized bone.Although the cell sheet technology has excellent advantages, the number of layers accumulated by the cell sheet can never break through the limit of 12 layers.This may be due to the limited diffusion distance of oxygen and nutrients, thus, EC slices will not form more complete blood vessels [117].Nonetheless, this technology has promise, and some researchers may be able to push it beyond the 12-layer limit in the future by combining cell sheets with other cytokines and materials for better application in bone tissue engineering.In addition, no researchers have reported that functional type-H vessels can be obtained using co-culture techniques and cell sheet techniques, so the exploration of targeting type-H vessels may be a potential direction for in vitro pre-vascularization studies. The two pre-vascularization techniques discussed above appear to be a promising approach for improving graft integration into the host bone defect, particularly when the graft is large.It has a positive impact on ongoing neovascularization after implantation because it allows for prevascularization of the graft, enabling good integration with the host receptor site microenvironment.Nevertheless, the pre-vascularization procedure takes a long time to prefabricate a prosthesis and cannot be employed immediately in fresh bone defects.If the preformation period could be greatly decreased, this approach would offer tremendous potential for the correction of segmental bone defects. Vascularization scaffold materials in bone tissue engineering Given the extended duration needed for the prefabrication process of the pre-vascularization technique, the emergence of filling tissue engineering scaffolds that possess immediate vascularization potential presents a promising alternative for managing segmental bone defects.A scaffold is a biomaterials-based 3D structure that offers a surface milieu for cell adhesion, development, reproduction, and function, as well as structural and mechanical support for cellular interactions [118].It has been demonstrated that the chemical makeup of the scaffold material influences the angiogenic process at the implantation site [119].Therefore, the choice of scaffold material is crucial to vascularized bone tissue engineering.The following requirements should be carefully considered while creating a bone scaffold with vascularization potential.Various natural and synthetic materials, biodegradable and non-biodegradable, have been employed in the fabrication of bone scaffolds through various techniques.Each of these materials has distinct features.Based on these requirements, we outlined two types of materials for vascularized bone tissue engineering applications: calcium phosphate and polymers. Calcium phosphate 4.1.1. 
Advantages of calcium phosphate in vascularized bone tissue engineering

The biocompatibility, osteogenic induction, and osteoconductivity of calcium phosphate materials make them one of the important biomaterials in the field of bone tissue engineering [120]. The chemical composition and hierarchical multilevel structure are generally considered to be factors in osteogenesis induced by calcium phosphate materials [121]. Recently, with deeper exploration of calcium phosphate materials, researchers have suggested that the osteogenesis they induce may be related to activation of the BMP/Smads, Wnt, and Notch signaling pathways. Tang et al. [122] examined the expression of BMSC signaling molecules on calcium phosphate ceramics and on culture plates, respectively. They found that Smad1, 4, 5, and Dlx5, the major molecules in the BMP/Smads signaling pathway, were significantly upregulated by the calcium phosphate ceramic plate. In addition, Wang et al. [123] evaluated the expression of Wnt and Notch signaling pathway genes and osteogenesis-related genes in BMSCs on CaP ceramics with and without the Wnt pathway inhibitor DKK1. Without the inhibitor, the expression of Wnt, Notch, and osteogenesis-related genes increased and then decreased, whereas overall expression showed a decreasing trend after the inhibitor was added. The above studies suggest that the BMP/Smads, Wnt, and Notch signaling pathways play important roles in calcium phosphate-induced osteogenic differentiation, but the mechanism of their synergistic effects needs to be explored further.

In addition, the effect of calcium phosphate materials on angiogenesis is manifested in several ways. First, the solid-phase calcium phosphate material implanted at the trauma site is degraded to liquid-phase calcium and phosphate by solution-, protein-, and cell-mediated mechanisms [124]. In vivo, these increased calcium and phosphate levels were negatively correlated with apoptosis of VSMCs, oxidative stress, and EC apoptosis [125]. Angiogenesis is tightly regulated by pro-angiogenic factors, and the expression of VEGF-promoting signaling pathways correlates with increased intracellular calcium concentrations [126]. Nadège et al. demonstrated that a calcium-rich environment facilitates the generation of vascular tissue in the initial phase [127]. Second, the physical properties of the calcium phosphate material can have an impact on angiogenesis. The degradation of a calcium phosphate scaffold produces stress changes in the surrounding tissues, and ECs respond to the mechanical pulling of the surrounding tissues [128]. Thus, changes in ambient stress may affect angiogenesis [129]. Furthermore, the porous nature of calcium phosphate materials facilitates the invasion of vascular tissues [130]. The ability of calcium phosphate materials to induce angiogenesis and osteogenesis varies with their physicochemical properties [131]. Therefore, it is important to control these properties and select the right calcium phosphate for a specific application. The most commonly used calcium phosphate materials in bone tissue engineering today are hydroxyapatite (HAp), β-tricalcium phosphate (β-TCP), and biphasic calcium phosphate (BCP).

HAp, β-TCP, and BCP

Natural bone contains HAp as its principal mineral component. It is the most stable form of calcium phosphate known [132]. HAp is commonly employed in bone tissue engineering because of its great biocompatibility and osteoinductivity [133,134]. Burgio et al.
described a miniature scaffold made of HAp, which displayed typical expression of angiogenic marker genes in their experiment, indicating that HAp has excellent angiogenic effects [135]. Because of its hard and brittle nature, HAp has mostly been used in coatings and nano-HAp delivery systems rather than in situations where high stresses are applied [136,137]. HAp implants synthesized by high-temperature sintering or hydrothermal conversion have a much higher crystallinity than bone mineral, making HAp essentially non-degradable after implantation [138]. In contrast to HAp, β-TCP is biodegradable and can be completely replaced by new bone mineral [139,140]. Under physiological conditions, the solubility of β-TCP is similar to that of bone mineral; it is normally dissolved by osteoclasts and forms an apatite layer on its surface [141]. On the one hand, the newly formed surface apatite layer adsorbs proteins from the surrounding tissue and encourages osteoblasts and angiogenic cells to adhere, proliferate, and differentiate, leading to the healing of bone defects [142]. On the other hand, the calcium released by β-TCP dissolution is closely related to neovascularization at the fracture site. In particular, free calcium ions may affect the structure of the blood clot at the fracture site and thereby influence fracture healing [143]. In addition, β-TCP significantly promoted neovascularization at the defect, as demonstrated by Anghelescu et al. in a tibial healing model [144].

Calcium phosphate ceramics composed of β-TCP and HAp are known as BCP, which is highly similar in composition to bone mineral [145]. The HAp/β-TCP ratio is the primary determinant of the solubility of BCP ceramics, and the resorbability and stability of BCP can be managed by adjusting the material ratio [146]. Shao et al. constructed BCP scaffolds with various HAp/β-TCP ratios (HAp30/β-TCP70, HAp50/β-TCP50, and HAp70/β-TCP30) and examined their biological and degradation characteristics. The findings demonstrated that BCP scaffolds with a HAp/β-TCP ratio of 30/70 were capable of regenerating bone with excellent efficiency and a degradation rate consistent with bone formation [147]. BCP scaffolds have been demonstrated to induce osteogenesis, increase cell adhesion, and adsorb growth factors in BMSC cultures [148]. In addition, BCP has been shown to promote angiogenesis [149]. Chen et al. evaluated the ability of BCP ceramics to stimulate the differentiation of BMSCs into ECs in a physiological environment by using an in vivo diffusion chamber model. They prepared diffusion chambers containing BCP ceramics seeded with BMSCs and implanted the chambers into subcutaneous pockets on the backs of New Zealand rabbits for culture. Results of in vitro and in vivo experiments show that, at the gene level, BCP ceramics significantly stimulate the differentiation of BMSCs towards ECs [150] (Fig. 4a-c). Additionally, to investigate the angiogenic and osteogenic potential of BCP scaffolds, Zhang et al. implanted them into mandibular defects in miniature pigs. An osseointegration transcriptional study indicated that genes associated with angiogenesis were upregulated in the BCP group [151]. The preceding research establishes the vascularized osteogenic capacity of BCP ceramics.
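To make the HAp/β-TCP ratio discussion more concrete, the short Python sketch below converts a chosen mass ratio into the overall Ca/P molar ratio of the blend, using the stoichiometry of Ca10(PO4)6(OH)2 and Ca3(PO4)2. This is an illustrative calculation only; the compositions evaluated are examples and not a formulation protocol from the cited studies.

```python
# Illustrative stoichiometry check for a BCP blend (not a formulation protocol):
# given the HAp mass fraction of a HAp/beta-TCP powder mixture, compute the overall
# Ca/P molar ratio from the formulas Ca10(PO4)6(OH)2 (HAp) and Ca3(PO4)2 (beta-TCP).

HAP_MOLAR_MASS = 1004.6   # g/mol, Ca10(PO4)6(OH)2
TCP_MOLAR_MASS = 310.2    # g/mol, Ca3(PO4)2

def blend_ca_p_ratio(hap_mass_fraction: float) -> float:
    """Overall Ca/P molar ratio of a HAp/beta-TCP blend, computed per gram of powder."""
    hap_mol = hap_mass_fraction / HAP_MOLAR_MASS
    tcp_mol = (1.0 - hap_mass_fraction) / TCP_MOLAR_MASS
    calcium = 10 * hap_mol + 3 * tcp_mol      # mol Ca per gram of blend
    phosphorus = 6 * hap_mol + 2 * tcp_mol    # mol P per gram of blend
    return calcium / phosphorus

# The HAp30/beta-TCP70 composition highlighted by Shao et al. falls between the
# ratios of pure beta-TCP (1.50) and pure HAp (1.67).
for fraction in (0.0, 0.3, 0.5, 0.7, 1.0):
    print(f"HAp {fraction:.0%}: Ca/P = {blend_ca_p_ratio(fraction):.3f}")
```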
The role of calcium phosphate in vascularized bone tissue engineering The hard and brittle qualities of calcium phosphate materials currently limit their usage in load-bearing applications.They are primarily employed in drug delivery systems, biological coatings, and doping with other materials to generate composite scaffolds in bone tissue engineering [153].The nano calcium phosphate drug delivery system is an advanced drug delivery method.The significant surface area to volume ratio of calcium phosphate nanoparticles allows for greater diffusion drive and particle solubility.This high surface-to-volume ratio can influence the adhesion of specific proteins, making it particularly suitable for the delivery of therapeutic factors [154].Calcium phosphate nanoparticles have now been successfully employed to provide bone repair therapeutic components.Kim et al. assembled BMP-2 and VEGF into large (300-500 μm) and small (100-200 μm) microcarriers, respectively.Scanning electron microscopy revealed that both sizes of microcarriers contained hollow internal structures and no differences in surface morphology were observed.Histological results of critical-size cranial defects in rats showed that a calcium phosphate delivery system loaded with growth factors can lead to substantial new bone formation compared with controls [152] (Fig. 4d-f).Calcium phosphate nano-delivery systems have also shown promise in combination with gene therapy for bone repair [155].Schlickewei and colleagues developed an injectable DNA-loaded nano calcium phosphate paste.They loaded the paste with transfected BMP-7 and VEGF-A DNA and implanted it into critical-size bone lesions in rabbits.The transfected DNA group healed bone more quickly and for a longer period of time than the control group [156]. The use of calcium phosphate coating on scaffolds is intended to increase biocompatibility and vascularization [157].Research on a titanium plate with a calcium phosphate coating was published by Khlusov et al. [158], who reported microvascular invasion of the calcium phosphate layer after implanting the scaffold subcutaneously in mice for 3 weeks.The calcium phosphate coating modification considerably improved the vascularization of titanium grafts.This is a relatively new way of drug delivery.Prosolov et al. describe a method for fabricating a drug delivery system based on a calcium phosphate coating that can carry several different drugs and can appropriately regulate their release [159].Drug delivery systems based on calcium phosphate coatings are currently considered to be beneficial drug delivery systems.In complex bone disease cases, calcium phosphate coatings can release Ca, P, and drugs available for bone growth in a localized area and can induce new bone production under multiple effects.However, there are some limitations in its native province, such as the loose manner in which the coating adsorbs the drug and the limited drug-loading capacity of the coating [160].Secondly, the burst release of calcium phosphate carrier drugs is a major problem [161].Therefore, continuous improvement is still needed in terms of more controlled drug release and precise administration of high concentrations of drugs for topical application. 
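As a rough, generic illustration of the burst-release problem noted above, the sketch below compares cumulative release under a simple first-order model for a fast-releasing coating and a slow-releasing carrier. The model form and the rate constants are illustrative assumptions, not measurements from the calcium phosphate coatings or nanoparticles cited in this section.

```python
import math

# Generic release-kinetics sketch: fraction_released(t) = 1 - exp(-k * t).
# A large rate constant k concentrates release in the first day ("burst"),
# while a small k spreads it over weeks. The k values are arbitrary examples.

def first_order_release(k_per_day: float, t_days: float) -> float:
    """Cumulative fraction of drug released at time t for a first-order profile."""
    return 1.0 - math.exp(-k_per_day * t_days)

for label, k in [("burst-prone coating", 1.5), ("sustained carrier", 0.05)]:
    day1 = first_order_release(k, 1.0)
    day28 = first_order_release(k, 28.0)
    print(f"{label}: {day1:.0%} released by day 1, {day28:.0%} by day 28")
```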
Calcium phosphate can be combined with other materials to create composite scaffolds.These scaffolds can be modified to attain the right mechanical strength and elastic modulus by modifying the material ratio and structure.They also have strong biological qualities because they are made of calcium phosphate.A polycaprolactone-poly (lactic-ethanolic acid)-tricalcium phosphate composite scaffold was described by Kumar et al. [162].This scaffold is biocompatible and has the proper porosity to encourage vascularized bone tissue to grow inward.Furthermore, throughout its resorption by the body, the scaffold can maintain acceptable mechanical characteristic.As previously stated, using calcium phosphate in conjunction with other biomaterials allows for greater control and improvement of their characteristics, hence better fulfilling the duty of stimulating vascularized bone tissue formation. Polymers Natural biopolymers and synthetic polymers are the most common organic materials used in bone tissue engineering [163].Proteins (collagen, elastin, fibronectin, filamentous protein) and polysaccharides are among the natural polymers that can be employed as scaffold materials (chitosan and alginate).They offer good biocompatibility, biodegradability, cell adhesion, and growth-promoting qualities [164].However, they also have some disadvantages such as immunogenic reactions, poor mechanical properties, and uncontrollable degradation rates [165].Synthetic polymers have lesser bioactivity and fewer cell recognition sites than natural polymers, but they degrade at a controlled rate [166].Polycaprolactone (PCL), polylactic acid (PLA), and polylactic acid-glycolic acid copolymers are the most utilized synthetic polymers in bone tissue engineering scaffold fabrication (PLGA). Natural polymers 4.2.1.1. Collagen. Collagen is a key component of normal bone formation.It is abundant in the bone matrix secreted by osteoblasts and plays a significant role in matrix mineralization [164].More than 20 different types of collagen have been identified, of which the most common is type I collagen [167].Patients with type I collagen defects develop osteogenesis imperfecta, which is characterized by bone fragility and skeletal deformities [168,169].Mizuno et al. found that when BMSCs were cultured in contact with type I collagen, BMSCs differentiated towards osteoblasts and expressed an osteoblastic phenotype [170].Furthermore, type VII collagen enhances the osteogenic potential of human MSCs via the ERK-dependent pathway [171].Collagen has been reported to have the potential to induce the differentiation of BMSCs into ECs.Kulakov et al. co-injected collagen and MSCs into the subcutis of rats and discovered considerable vascularization in the newly produced tissue after 7 days [172].This study found that collagen can promote the formation of vascularized tissue.Researchers have attempted to overcome collagen's poor mechanical qualities to fully use the material's osteoinductivity and angiogenesis capabilities.One approach would be to use collagen as the soft matrix component of the hard scaffold.For example, Culla et al. 
first prepared calcium phosphate cement (CPC) scaffolds and then injected mineralized collagen matrix (MCM) into the CPC scaffolds, resulting in composite MCM-CPC scaffolds. The team used a mouse model of a critical femoral bone defect to validate the scaffold's osteogenic properties. Histological analysis showed more new bone tissue growing deeper within the hybrid scaffold compared to the CPC-only group. The combination of CPC and MCM accelerates the growth of new bone in the scaffold while enhancing biomechanical stability [173] (Fig. 5a and b). Another approach is to combine collagen in a low ratio with other inorganic materials to create a hybrid scaffold with a certain mechanical strength. Baheiraei et al. [174] constructed a collagen/β-TCP scaffold by freeze-drying β-TCP powder into a porous collagen matrix at a β-TCP/collagen weight ratio of 4. Compression tests showed that the collagen scaffold had a compression modulus of 0.8 ± 1.82 kPa, while the composite scaffold had a compression modulus of 970 ± 1.20 kPa. In addition, the composite scaffold exhibited good angiogenic effects. A mixture of collagen and elastin has been shown to enhance the proliferative potential of ECs. Scaffolds built from a combination of these two proteins with low porosity are well suited for the production of small-diameter vessels and are expected to be employed in vascularized bone tissue engineering.

Fibrin. Fibrin is a type of extracellular matrix protein similar to collagen [177]. It is a high-density protein with the potential to keep transplanted cells alive. Oh et al. found that fibrin promotes greater proliferation of MC3T3-E1 preosteoblasts and promotes intracellular osteogenic gene expression and calcium deposition under low cell density conditions [178]. Because fibrin scaffolds lack substantial mechanical strength [179], fibrin is mainly used as an auxiliary component in bone tissue engineering [180]. To more effectively encourage bone tissue regeneration, Siddiqui et al. modified the surface of genipin cross-linked chitosan/nano-TCP composite scaffolds with fibrin [181]. During wound healing, the blood clot formed by fibrin can interact specifically with VEGF, in addition to providing attachment sites for ECs, which can effectively promote angiogenesis [182,183]. Dohle et al. combined ECs and osteoblasts with platelet-rich fibrin (PRF) medium in mixed culture. Immunohistochemical analysis after 7 days showed the formation of lumen and microvessel-like structures in this culture system. In addition, the genes and proteins related to angiogenesis were also highly expressed [175] (Fig. 5c and d). These favorable biological properties make fibrin one of the most promising cofactors in the field of vascularized bone tissue engineering.

Silk protein.
Collagen and fibrin are both proteins found naturally in the human body, whereas silk protein (SF) is a biological material produced by farmed silkworms, spiders, and scorpions [184]. SF is a biocompatible protein with an RGD cell-binding domain that allows cells to attach and proliferate [185]. It has excellent mechanical properties; for example, the tensile strength of SF is in the range of 360-530 MPa, and the elastic modulus is in the range of 10-15 GPa, which is similar to that of cortical bone [164]. Furthermore, the degradation rate of SF in vivo matches the repair cycle of bone defects [186]. Based on the above advantages, SF has been widely used in the field of bone tissue engineering. SF scaffolds (ET scaffolds) were prepared by Panda et al. by mixing Eri (Philosamia ricini) and Tasar (Antheraea mylitta) silk in a 70:30 ratio. The osteogenic properties of the ET scaffolds were verified using scaffolds derived from gelatin and Bombyx mori (BM) sericin as control groups. The experimental results showed that the ET scaffolds significantly promoted the expression of osteogenic markers in human MSCs compared with the control groups [187]. In addition, SF can promote angiogenesis. Fan et al. [188] reported that after cultivating BMSCs on an SF scaffold, the BMSCs had a tendency to develop into ECs. When the SF scaffold was implanted subcutaneously in rats, they observed high-density angiogenesis in the scaffold region. SF proteins with pro-angiogenic effects can also be combined with other materials to prepare composite scaffolds with even better performance. Liu et al. prepared four SF/BCP scaffolds with different SF contents (0, 20, 40, and 60% SF). The 40% SF group had the highest compressive strength (40.80 ± 0.68 MPa) and showed good bone regeneration and integration in a rat model [189].

Fig. 5. a-b) Results for the MCM-CPC composite scaffolds [173]. c) Lumina and microvessel-like structure formation in co-cultures mixed with platelet-rich fibrin (PRF) medium. d) Effect of PRF medium on angiogenic gene and protein expression in two cell types [175]. e) The preparation process of the deferoxamine (DFO)-loaded SFH-HA composite hydrogel. f-g) Neovascularization and bone regeneration of the defects treated with different hydrogels. h) Bone and vascular neogenesis in the histological analysis at week 12 after implantation [176].

Chitosan. In addition to proteins, natural polysaccharide polymers are among the candidates for vascularized bone tissue engineering. Chitosan, derived from chitin, is a unique natural polysaccharide with excellent biodegradability, biocompatibility, non-antigenicity, and cellular affinity [190]. Chitosan and its derived materials enhance the differentiation of bone progenitor cells and promote new bone formation [191]. Liu et al. compared the biological performance of titanium rod prostheses coated and uncoated with carboxymethyl chitosan (CMC) in New Zealand rabbits after total knee arthroplasty. They found that CMC could reduce the inflammatory response around the rabbit knee prosthesis and promote osteogenesis by affecting the OPG/RANKL/RANK signaling pathway [192]. In addition, chitosan-based materials can also promote angiogenesis. When Han et al. examined the effects of three sulfated chitosans (SCSs) on HUVECs, they discovered that all SCSs stimulated HUVEC growth and proliferation. In particular, 2,6-SCS encouraged capillary development and intracellular nitric oxide release [193]. Gniesme et al.
demonstrated that compounding chitosan with other materials can significantly enhance the angiogenic performance of composite scaffolds. The team prepared PCL-chitosan scaffolds with significantly greater pro-angiogenic properties than PCL scaffolds alone [194]. In addition, chitosan and its derivatives have good antifungal and bacteriostatic effects and are often used as antibacterial materials, carrier materials, and film-forming materials [195]. Based on the above examples, chitosan is considered one of the excellent materials for vascularized bone tissue engineering. Due to its poor mechanical stability and osteoconductivity, it is usually used in combination with other natural polymers or bioceramics to create more stable scaffolds [166]. Maji et al. reported a chitosan composite scaffold with matched mechanical strength and porosity, obtained by adjusting the ratio of the gelatin, chitosan, and β-TCP scaffold components [196].

Alginate. Alginate is a polysaccharide polymer similar to chitosan that is commonly utilized in vascularized bone tissue engineering. Alginate is a natural polymer of algal origin that forms gels with divalent cations under normal physiological conditions, which is one of its most essential features, along with its biocompatibility and biodegradability [197]. Although alginate is non-toxic to host tissues and cells, it lacks cell adhesion qualities [198]. It can be modified by incorporating adhesion ligands and growth factors that promote cell attachment to enhance osteogenesis [199]. By combining BMP-2 and recombinant peptide (RCP) microspheres in alginate gel, Fahmy-Garcia and colleagues created a new natural polymer gel sustained-release system. This system balanced inflammatory cell infiltration, BMP-2 release, and angiogenesis in their experiments, resulting in more fully vascularized new bone tissue [200].

Although natural polymers have good biological properties, their mechanical properties are poor, and they cannot withstand the stress transmitted by bone when used as scaffolds alone [201]. As a result, they can be combined with other materials to improve the mechanical properties while still exerting their bone inductivity and cellular affinity [202].

Synthetic polymers

Synthetic polymers are often used as scaffolds for vascularized bone tissue engineering because of their good mechanical properties. At present, the commonly used synthetic polymers are polycaprolactone (PCL), polylactic acid (PLA), and poly(lactic-co-glycolic acid) (PLGA) [203].

PCL. PCL is non-toxic, resorbable, and cost-effective, and the Food and Drug Administration (FDA) has authorized it for biomedical engineering applications [204]. PCL's toughness and mechanical stiffness under physiological conditions make it well suited for bone tissue engineering [205]. Based on these advantages, PCL has been explored as a potential delivery scaffold for stem cells to support bone regeneration. Xue et al. found that PCL nanofiber scaffolds were able to promote osteogenic differentiation of human MSCs through activation of Wnt/β-catenin signaling and Smad3-related signaling pathways [206]. Ji et al. further found that PCL scaffolds can regulate cell proliferation and differentiation by promoting cell senescence, cell cycle, and deoxyribonucleic acid (DNA) replication pathways, accelerating endochondral ossification and healing tissue formation [105]. PCL could also induce angiogenesis; Sekula et al.
found that PCL can stimulate human umbilical cord-derived mesenchymal stem cells (hUC-MSCs) to differentiate toward an angiogenic phenotype in the absence of additional chemical stimulation [207]. Electrospun PCL nanofibers are cytocompatible by virtue of their nanometer size and facilitate the rapid growth of vascularized bone tissue [208]. PCL nanofibers of different diameters have different pro-vascular differentiation potentials. According to Reid et al., PCL nanofibers with a diameter of 4.83 μm were better at promoting the expression of the angiogenic marker CD31 in HUVECs [209].

4.2.2.2. PLA. PLA, like PCL, is a low-cost, biodegradable polyester [197]. The degradability and mechanical properties of PLA are related to the molecular weight of the polymer. Low-molecular-weight PLA has a faster degradation rate and mechanical properties that better match the tissue growth rate during degradation than higher-molecular-weight PLA [210]. Therefore, low-molecular-weight PLA is the preferred choice for constructing vascularized bone tissue engineering scaffolds. Sekula et al. investigated the impact of an FDA-approved PLA on the biological characteristics of hUC-MSCs. According to genetic analyses, PLA increases the angiogenic differentiation capability of hUC-MSCs [207]. The degradation product of PLA is lactic acid, which can be metabolized by the human body. However, during rapid degradation of PLA, the large amount of lactic acid by-product creates a local acidic environment, leading to tissue inflammation and cell death [211]. To deal with this issue, calcium phosphate can be employed as a buffer to keep the pH steady. Barbeck and colleagues created a biphasic scaffold by combining PLA with biodegradable calcium phosphate glass. The experimental results revealed that the biphasic scaffold not only has a greater compression modulus and more lasting mechanical qualities than the plain PLA scaffold, but also a better capacity to stimulate vascularized bone growth [212].

PLGA. PLGA is a copolymer created by the polymerization of lactic acid and glycolic acid monomers. It is one of the most extensively used biodegradable polymers [213]. By controlling the ratio of lactic acid to glycolic acid, the mechanical properties and degradability can be flexibly tuned [214]. The metabolites of PLGA can be eliminated safely in vivo [215]. Regarding the relationship between cells and PLGA in osteogenic differentiation, Calvert et al. found that in the presence of osteoinductive factors, BMSCs attach to PLGA scaffolds and secrete a mineralized matrix [216,217]. Additionally, PLGA scaffolds can induce blood vessel growth. When Jehn et al. implanted PLGA scaffolds into the dorsal skin folds of mice, they discovered that the scaffold's peripheral area contained a significantly branched vascular network.

The above-mentioned scaffold materials can mimic the composition of natural bone tissue and can induce limited vascularized bone tissue. However, the use of these single materials does not meet the need to construct an ideal bone substitute. No single material possesses good biocompatibility, biodegradability, a porous 3D structure, osteoconductivity, osteoinductivity, and angiogenic capacity at the same time, except for autologous bone, which is available only in limited volume [218]. Combining the benefits of several materials to create a composite scaffold with a variety of good qualities is another way to induce scaffold vascularization. Wang et al.
[176] created an injectable high-performance composite hydrogel by combining deferoxamine (DFO), silk fibroin nanofibers, and HAp.The composite scaffold can achieve the stable release of pro-angiogenic substances at the critical defect of the rat skull for more than 2 months.Immunohistochemical results at 2 and 4 weeks after implantation showed that the DFO silk nanofiber-HAp composite hydrogels (SFH-HA-DFO) group achieved optimal angiogenesis.In addition, micro-CT and histological analysis of the defect showed more and denser bone tissue formation in the SFH-HA-DFO group.The composite scaffold provides a stable stimulation ecological niche for vascularized bone tissue regeneration (Fig. 5e-h).In addition, composite scaffolds designed to target type-H vessel formation may be more promising.He et al. created a PCL/fibronectin/human umbilical vein endothelial cell-derived decellularised extracellular matrix (HdECM) (PFE) composite scaffold, which showed early vascularization infiltration and enhanced bone regeneration following implantation of PFE into a femoral defect in rats.Immunofluorescence analyses revealed that the PFE was able to regenerate the bone through a variety of channels, including PFE-mediated endogenous angiogenesis and osteogenesis, through a large number of type-H vessels and bone progenitor cells [219].Composite scaffolds may be the best alternative material for scaffolding major bone lesions in the future [220].Kumar et al. constructed a composite scaffold out of PCL and β-TCP.During the degrading phase, the composite scaffold could preserve correct porosity and mechanical characteristics [221].This time-graded porosity structure is predicted to aid in the growth of blood vessels into the scaffold, facilitating the development of vascularized new bone. Structural design of vascularized bone tissue engineered scaffolds Currently, orthopedic scaffolds are mainly divided into solid scaffolds and porous scaffolds.Compared to solid scaffolds, porous scaffolds can significantly accelerate cell and protein penetration and are the preferred scaffolds for promoting the growth of vascularized bone tissue [222].Kuboki et al. implanted solid, porous, and lamellar structured hydroxyapatite scaffolds subcutaneously into rats for comparison of the angiogenic and osteogenic abilities.Significant bone and angiogenesis were observed within the porous and lamellar structure of HAp scaffolds with a BMP mixture supplied to all scaffold groups [223].Appropriate pore size, pore distribution, and connectivity between pores provide the microenvironment for cell infiltration, migration, blood vessel formation, and metabolism [224].Therefore, when manufacturing porous bone scaffolds, the pore size, porosity, internal porous morphology, and overall bionic design of the scaffold are critical criteria to consider. Pore size and porosity The point-to-point distance in normal bone tissue between adjacent osteocyte centers is 24.1 ± 2.8 μm [225].The interior pore size of porous scaffolds is often bigger than this distance to permit osteoblastic, endothelial, and inflammatory cell infiltration and proliferation [224].Scaffolds with various hole diameters affect vascularization differently.Gupte et al. 
[24] discovered that poly(l-lactic acid) (PLLA) scaffolds with pore sizes ranging from 60 to 125 μm hindered endochondral ossification in BMSCs by blocking inward vascular development. The 125-250 μm pore size facilitated inward capillary development, which dramatically improved cartilage differentiation in human BMSCs but had little effect on mineralization. The 425-600 μm pore size allowed microvessels to develop in the scaffold, promoting bone tissue vascularization. Similarly, Swanson et al. demonstrated that a PLLA scaffold with pore diameters less than 125 μm prevented vessel penetration into the scaffold, while pore diameters greater than 250 μm promoted vessel formation [226]. It is clear that scaffolds with pores larger than 250 μm support vessel ingrowth, and vascularization generally improves as the pore size increases. Wang et al. [227] examined the angiogenic and osteogenic properties of titanium scaffolds with pore sizes of 350 μm, 450 μm, and 550 μm, respectively. They discovered that as the pore size of the titanium scaffold increased, so did the number of vessels within the tissue section, with the 550 μm pore size scaffold having the most vessels. However, Feng et al. compared the angiogenic capacity of β-TCP scaffolds with four different pore sizes (300-400, 400-500, 500-600, and 600-700 μm) and found no significant difference in the neovascularization area when the scaffold pore size was larger than 400 μm [228]. There is an ongoing debate regarding the optimal pore size for angiogenesis. The two studies above report different angiogenic performance at the same pore size. We suspect that the primary cause is that different scaffold materials have different capacities to support vascularization. The internal pore size of a biodegradable scaffold enlarges and becomes more favorable to vascular development over time following implantation, whereas non-degradable scaffolds have constant pore sizes and consistent vascularization-affecting variables. Therefore, studies on the effect of pore size on vascularization should be conducted using scaffolds of the same material while systematically controlling for other variables.

Porosity, the proportion of void space in the scaffold, is a significant element influencing the growth of vessels into porous scaffolds. Increasing the porosity of the porous scaffold within a particular range promotes cell migration and proliferation, allowing blood vessels to extend into the porous interior [229]. However, excessive porosity will lower the scaffold's mechanical strength, making it unsuitable for load bearing [230]. Therefore, when constructing scaffolds, the porosity should be adjusted to both encourage vessel growth and provide the scaffold with a sufficiently robust support base.
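For orientation, the sketch below collects the pore-size observations summarized above into a rough decision aid. The thresholds are taken from the specific PLLA, titanium, and β-TCP studies cited in this subsection; they are material- and model-dependent assumptions, not universal design rules.

```python
# Rough decision aid encoding the pore-size observations summarized above
# (Gupte et al. and Swanson et al. for PLLA, Wang et al. for titanium,
# Feng et al. for beta-TCP). Thresholds are study-specific, not universal.

def vascularization_outlook(pore_size_um: float) -> str:
    """Qualitative expectation for vessel ingrowth at a given mean pore size."""
    if pore_size_um < 125:
        return "vessel penetration largely blocked (PLLA studies, <125 um)"
    if pore_size_um < 250:
        return "capillary ingrowth with limited mineralization (PLLA, 125-250 um)"
    if pore_size_um <= 400:
        return "microvessel ingrowth supported (>250 um)"
    return "vessel ingrowth supported; gains may plateau above ~400 um (beta-TCP data)"

for size in (100, 200, 350, 550):
    print(size, "um ->", vascularization_outlook(size))
```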
Porous morphology

Cells can adapt the cytoskeleton to the surrounding geometry during tissue growth [231]. Therefore, the porous morphology inside the scaffold affects the growth of vascular and bone tissues [232]. The morphologies of porous scaffolds that have been widely studied can be broadly classified into two categories. The first category is the branched rod-like structures represented by cubic morphology, which is the conventional porous morphology in 3D printed scaffolds [233]. The second category is triply periodic minimal surface (TPMS) structures with curved surface morphology, which have excellent potential for clinical applications [234]. The interior surface of a TPMS structure is very smooth and the pores are well interconnected, with none of the sharp edges or junctions seen in cubic and hexagonal supports [235]. In addition, the mean curvature of a TPMS structure is zero, as is the mean curvature of bone trabeculae, which gives TPMS scaffolds a natural biomimetic advantage [236]. Wu et al. used a rabbit dorsal muscle embedding system to investigate the in vivo angiogenic potential of three scaffold structures (cube, gyroid, and hexagon). SEM images of the scaffolds show that the pore geometries are in excellent agreement with the designed pore models, with no obvious defects or deformations, and the pore walls exhibit similar microstructure and densification. They discovered that, compared to the cubic and hexagonal structures, the TPMS structure promoted a denser and thicker vascular network. The team analyzed the results and found that scaffolds with curved and less angular pore morphology were more suitable for mediating angiogenesis [27] (Fig. 6a-c). This conclusion is consistent with previous studies demonstrating that curvature affects tissue growth [237].

Fig. 6. a) Three-dimensional modeling and preliminary characterization of scaffolds with different hole geometries. b) Scanning electron microscopy (SEM) images of the surface morphology of bioceramic scaffolds with different pore geometries. c) Microangiographic images at 2 and 4 weeks after implantation [27].

The roughness of the scaffold surface can also affect angiogenesis. Duan et al. observed that tricalcium phosphate with submicron surface morphology induced macrophages to polarize toward the M2 phenotype. The M2-polarized macrophages then enhanced tube formation in HUVECs. In contrast, tricalcium phosphate with micron-scale surface morphology did not have this effect [131]. In addition, Hou et al., using a gradient roughness interface, demonstrated that human MSC adhesion is bi-directionally regulated by interfacial roughness [238]. At moderate roughness, cells have better adhesion properties [239], and adhesion is necessary for angiogenesis [240,241]. Importantly, Partida et al. demonstrated that roughness-modified titanium scaffolds have better angiogenic properties than smooth titanium scaffolds [242].

The studies mentioned above show that pore morphology is an important factor affecting the growth of blood vessels into the scaffold. An effective porous structure design can promote the growth of vascularized bone tissue inside the scaffold.
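For readers interested in how TPMS architectures such as the gyroid are typically parameterized for scaffold design and 3D printing, the minimal Python sketch below builds a voxel mask from the standard gyroid level-set equation, sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = c, where shifting the iso-offset c changes the solid volume fraction and hence the porosity. The unit-cell size, grid resolution, and offset used here are illustrative assumptions, not parameters from the studies cited above.

```python
import numpy as np

# Minimal sketch of a gyroid TPMS pore architecture generated as a voxel mask.
# Unit-cell size, resolution, and iso-offset are illustrative assumptions.

def gyroid_mask(shape=(64, 64, 64), cell_size_um=500.0, voxel_um=10.0, iso=0.0):
    """Boolean voxel array: True where material is placed, False along the pore.

    The gyroid surface is the level set
        sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = iso,
    with coordinates scaled so one trigonometric period spans one unit cell.
    Shifting `iso` away from 0 changes the solid volume fraction (porosity).
    """
    nx, ny, nz = shape
    coords = [np.arange(n) * voxel_um * 2.0 * np.pi / cell_size_um for n in (nx, ny, nz)]
    x, y, z = np.meshgrid(*coords, indexing="ij")
    field = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
    return field > iso

mask = gyroid_mask()
porosity = 1.0 - mask.mean()
print(f"solid fraction {mask.mean():.2f}, porosity {porosity:.2f}")
```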
Bionic design

Mimicking bone structure and bone function is one of the important vascularization strategies for bone tissue engineering. Barati et al. constructed a bionic cortical bone scaffold with a microtubule-like porous interconnected structure similar to natural cortical bone [243]. The bionic scaffold promoted human MSCs and endothelial colony-forming cells to form new vessels and bone tissue without the addition of cytokines. Constructing structures in the scaffold that mimic the natural microvascular network is an important means of promoting vessel growth. The bionic hollow-tube scaffold prepared by Duan et al. using polycaprolactone has the advantage of being highly interconnected and permeable [244]. In comparison to scaffolds without hollow tubes, the specially integrated hollow-tube scaffold dramatically improved cell adhesion, spreading, and proliferation, as well as osteogenic differentiation and angiogenesis, in in vitro studies. In vivo experiments further substantiated that the bionic scaffold conferred a marked improvement in both bone regeneration and angiogenesis within rabbit femoral defects.

The periosteum is a highly vascularized thin tissue with excellent osteogenic capacity [245]. Constructing bionic bone by mimicking periosteal tissue has become a new strategy for bone defect repair and regeneration. Dai et al. induced periosteum-like tissue by implanting gelatin scaffolds loaded with BMP-2 and chondroitin sulphate (CS) under the skin of mice. The composite scaffold induced a bionic periosteum with a structure similar to that of natural periosteum that was rich in functional periosteum-like tissue-derived cells (PTDC), type-H vessels, and osteochondral progenitor cells. The bionic periosteum showed strong osteogenic repair ability in a cranial defect model [246].

Furthermore, mimicking plant structures found in nature is another focus of the combination of bionics and vascularized bone tissue engineering. Inspired by the lotus structure, Han et al. prepared hydrogel microspheres encapsulating DFO liposomes as "lotus seeds" using microfluidics and combined them with 3D-printed bioceramic scaffolds with a biomimetic structure to prepare scaffolds with a biomimetic lotus structure (TGL). The team investigated the bone repair ability of the bionic scaffold in a rat distal femur defect model. In vivo micro-CT and quantitative protein analysis showed that the TGL group integrated new bone faster and better than the other groups and had high expression of osteogenic and angiogenic proteins [247] (Fig. 7a-c). These bionic scaffolds take advantage of complementary materials and technologies to exhibit excellent osteogenic and angiogenic capabilities and are highly instructive for exploring multifunctional vascularized scaffolds that promote bone tissue regeneration.

Cytokines in the vascularization of bone tissue engineering

Sufficient angiogenesis at the fracture site is a prerequisite for efficient bone regeneration [248], and the release of a number of angiogenic factors plays a vital role in early angiogenesis [249]. One of the most active research areas in bone tissue engineering is the creation of bone substitutes by combining artificial scaffolds with cytokines. The two most commonly employed cytokines in bone tissue engineering are VEGF and BMP.
VEGF The VEGF family is the most specific promoter of angiogenesis and the most important activator, which is widely used in bone tissue engineering [250].VEGF-A is the most abundant type in the organism [251].During bone formation, VEGF can mediate osteogenesis by regulating H-type formation [55].Inhibition of VEGF has been shown in several investigations to form bone discontinuity models [252].VEGF expression reaches its peak in the early stage of fracture healing, which can promote the migration and proliferation of ECs to form tubular vessels [248].Importantly, VEGF-loaded artificial scaffolds for bone tissue engineering can efficiently stimulate the fast creation of new blood vessels and offer metabolic support for the production of new bone [253].Liu et al. created a VEGF-loaded PCL/HAp scaffold, which could significantly induce the formation of microvessels and promote the rapid bone regeneration of the rat skull defect model [254].Similarly, Li et al. used thermosensitive collagen hydrogel as a carrier for VEGF, which was compounded with a porous titanium alloy scaffold.This complex system was shown to strongly promote angiogenesis-mediated bone regeneration and significantly promote more osseointegration in a rabbit lateral femoral condyle bone defect model [255]. However, VEGF seems to have a dose-dependent range of action.Therefore, if the dose of VEGF is too high, it may lead to vascular malformation, and if too low, it will be ineffective [256,257].Wang et al. developed a mechano-chemical coupling model of a biodegradable polymer scaffold loaded with VEGF, which showed that there is an optimal range of VEGF doses that can promote the efficiency of bone regeneration [256].Indeed, Dreyer et al. evaluated the literature from the previous 10 years, indicating that the lowest single dosage of VEGF used was 0.2 μg, the highest was 24 μg, and the highest single dose with a consistently positive response was 2.6 μg [252].This interval can be used as a reference for future studies of single VEGF dose loading.Furthermore, the dosing interval of the two-factor loading system needs to be investigated further.Walsh et al. loaded a collagen/HAp scaffold with 2.5 μg of VEGF and 2.5 μg of BMP-2; experimental models of bone defects showed that this dose of the two-factor system significantly promoted vascularized bone production [258]. Furthermore, loading VEGF by direct adsorption may lead to the burst release of VEGF, which may have adverse effects [259].Inducing the expression of VEGF by indirect means is an alternative way of sustained release.Pekozer et al. reported a study of VEGF-inducing agents loaded onto PLGA scaffolds for vascularized osteogenesis.The inducer-loaded scaffold has been shown to support EC recruitment, vascularization, and bone repair in vivo [260].In addition, genetic engineering-based delivery strategies can achieve the precise release of cytokines without worrying about the short half-life of exogenous cytokines.Moreira et al. delivered the gene encoding VEGF to human dermal fibroblasts; this strategy allows cells to produce functional VEGF, which induces the formation of capillary-like structures faster and for longer periods of time in experiments [261].Similar to this, Yao et al. 
[262] created a composite scaffold that could regulate VEGF regeneration and release by combining exosomes containing VEGF genes with PCL porous scaffolds using the exosome anchor peptide CP05.Engineered exosome-transfected rat BMSCs exhibit significant angiogenic and osteogenic differentiation in vitro experiments.Histological evaluation and immunofluorescence staining analysis of healed radial defects in rats further confirmed that the composite scaffold was effective in inducing massive vascularized bone regeneration (Fig. 8a-d).VEGF is also closely associated with bone growth factors (e.g., BMP) and exhibits synergistic stimulation in the formation of bone and angiogenesis signaling pathways.Dashtimoghadam et al. encapsulated VEGF and BMP-2 into microcarriers; the dual-factor microcarrier device was discovered to have significant osteogenic and angiogenic potential as well as a sustained growth factor release profile with prolonged bioavailability [263].The above results suggest that scaffold-released VEGF can induce vascularized new bone regeneration.However, the translational application of VEGF for the treatment of bone defects currently needs to be further explored.VEGF administered in direct form is susceptible to biodegradation.Various techniques for cytokine transport and release are constantly evolving.Cytokine control strategies based on genetic engineering and microcarrier technology show great promise.However, the ethical implications of therapeutic gene therapy techniques must also be considered. BMP-2 BMP-2 is one of the most popular growth factors in vascularized bone tissue engineering.It regulates the repair and regeneration of segmental bones by promoting the formation of bone, cartilage, angiogenesis, and fibrotic tissue [264].Among the BMP family, BMP-2 plays an integral role in fracture healing.Mice with defective Bmp-2 expression in the extremities may have normal skeletal development throughout growth but poor fracture repair.Moreover, other forms of BMPs cannot fill this crucial role [29].Due to its critical nature, BMP-2 has been used clinically for the treatment of open tibial fractures, non-healing bone injuries, and spinal fusion, and is included in the FDA-approved bone regeneration system [265,266].BMP-2 is a trigger/signal molecule in fracture repair.It primarily aids bone regeneration by causing bone progenitor cells to develop into osteoblasts and attracting BMSCs to the wounded area [267].Furthermore, BMP-2 is the cytokine that promotes the production of alkaline phosphatase and osteocalcin, which is important for fracture healing [268].Importantly, BMP-2 can promote angiogenesis via chemotaxis of circulating endothelial progenitor cells in peripheral blood and increased secretion of paracrine angiogenic growth factor by MSCs [269,270].This may be related to BMP-regulated type-H vessels formation [57,58].As a result, applying BMP-2 alone at the site of bone defects may have a pro-angiogenic effect.In addition, BMP-2 can synergize with VEGF to further enhance vascularized bone tissue production [271]. The dual-factor loading system with VEGF and BMP-2 has more angiogenic and osteogenic capabilities than the system loaded with VEGF alone.Jie et al. 
investigated the effects of calcium phosphate scaffolds loaded with VEGF and BMP-2 on osteogenesis. They discovered that combining VEGF and BMP-2 was more efficient than either VEGF or BMP-2 alone in inducing osteogenesis and vascularization of the composite scaffold [272]. Similarly, the HAp/PLGA scaffold prepared by Wang et al. and loaded with both VEGF and BMP-2 also showed a stronger ability to promote bone tissue maturation than the control group. They further found that the two factors may promote vascularized osteogenesis by activating the p38 MAPK pathway to promote the nuclear translocation of osterix proteins [273].

Fig. 8. a) General idea of engineered exosome-enhanced therapies for osteogenesis and angiogenesis. b) Expression of VEGF and its pro-angiogenic effect after transfection of engineered exosomes into rat bone marrow-derived mesenchymal stem cells (BMSCs). c) Immunofluorescence staining of the osteogenic marker OCN, nucleus, and cytoskeleton. d) Histological evaluation of healed radial defects and immunofluorescence staining for the angiogenesis marker CD31 in rats [262].

Despite its benefits, systemic BMP-2 administration has been linked to side effects such as prevertebral swelling, airway edema, carcinogenesis, and ectopic bone growth [274,275], whereas local BMP-2 injections are not successful in promoting local bone production [276]. To manage the osteogenic impact of BMP on the target region, it is vital to employ a good delivery vehicle and an optimum dosage to optimize the effect of BMP-2. Datta et al. encapsulated BMP-2 into a chitosan microsphere system to achieve spatiotemporal control and sustained release of BMP-2 and demonstrated good osteogenic and angiogenic effects in their experiments [277]. In addition to using microcarriers to achieve a rational release of BMP-2, gene transfection techniques can be used to better control the optimal dose of BMP-2 in bone repair [278]. Geng et al. modified BMSCs with mRNAs encoding the human BMP-2 and VEGF-A genes and inoculated them onto a collagen scaffold. They used a rat cranial defect model to validate the bone healing ability of the two-factor system. Reconstructed images of 3D micro-CT scans at 4 and 12 weeks after treatment show outstanding bone regeneration in the bifactorial group. In addition, quantitative assessment of OCN- and CD31-positive cells at the site of bone regeneration showed more osteoblastic and angiogenic cells clustered around the bifactor group. This system exhibited excellent precision control of cytokines and the ability to synergistically drive osteogenesis and angiogenesis in a murine cranial defect model [279] (Fig. 9a-c).

One of the most promising approaches for creating vascularized tissue engineered bone is the development of a cascade system of spatiotemporally adjustable multiple factors inside tissue engineered grafts. Direct adsorption, multifactor adsorption, hydrogel delivery, microcarrier transport, and gene editing approaches are all being used by researchers to further this concept. Importantly, the ethical and practical concerns associated with the aforementioned solutions must also be taken into account.
Conclusions and prospects for vascularized bone engineering The skeletal vascular system plays a crucial role in the process of bone regeneration and healing.The blood vessels in and around bone tissue provide oxygen, nutrients, growth factors, and cells for the survival, proliferation, and differentiation of bone formation.Currently, the lack of functional vasculature in tissue engineered bone has become the biggest obstacle in its clinical application.Among all cellular and signaling pathways, type-H vessel formation is a key factor in coupling angiogenesis to osteogenesis.Therefore, the use of type-H vessels as a target for stimulating vascularized bone regeneration in bone tissue engineering may be a promising therapeutic approach.However, there are some challenges to research on type-H vessels.Deletion of the gene encoding integrin β1, a cell-surface transmembrane glycoprotein, results in a loss of normal morphological and functional properties of H-type ECs.However, the expression of CD31 and Emcn, markers characteristic of H-type ECs, is unaffected.This mutation leads to the formation of dysfunctional H-type vessels, which can impair normal bone metabolism and lead to bone loss [39].In addition, abnormal accumulation of type-H vessels may lead to osteoarthritis [280].We believe that because the coupling of type-H vessel formation and osteogenesis involves multiple factors, inducing functional type-H vessels while avoiding aberrant type-H vessel formation is necessary for vascularized bone tissue engineering. In vivo pre-vascularization strategies have been a hot topic of research in the field of regeneration.Different pre-vascularization sites will exhibit different rates of bone formation.In general, an increased blood supply means faster bone formation.The use of muscle pouches and the greater omentum as bioreactors to pre-vascularize bone has achieved initial clinical results.However, there are a number of challenges that are currently being faced.Firstly, achieving proper coordination between blood vessel formation and bone tissue regeneration is crucial.If the culture process is delayed or too fast, it may lead to insufficient bone or more soft tissue in the prefabricated scaffolds, which can ultimately lead to suboptimal tissue integration.The current solution is to control the prefabrication process by adjusting the dose of inducers, such as BMP-2 [80].However, individual patient variability, such as age, health status, and underlying medical conditions, can affect the determination of the optimal rhBMP-2 dose.Therefore, the optimal rhBMP-2 dose for humans needs to be explored and demonstrated.Secondly, the optimal stage of implantation of pre-vascularized bone should be determined.Pre-vascularized bone at different developmental stages will show different proportions of type-H vessels and bone progenitor cells, which is important for the regenerative capacity of the determined pre-vascularized bone.Thirdly, the stability and longevity of pre-vascularized bone should be guaranteed prior to clinical application, and the stability and long-term function of the vascularized network after implantation is crucial for sustained tissue regeneration and functional rehabilitation.In addition, regulatory requirements and safety issues need to be carefully considered before translating pre-vascularization strategies into clinical applications.Scaffold design, material selection, and loading drugs have been the focus of research in vascularized bone tissue engineering. 
Currently, single materials do not fully meet the conditions for scaffolds to promote vascularized bone formation, and composite scaffolds prepared from multiple materials may be one of the solutions to the difficulties in vascularizing scaffolds. Each material of a composite scaffold has unique properties, resulting in scaffolds with improved structural, mechanical, and biological properties. Composite scaffolds achieve mechanical strength and stability by combining biodegradable polymers and ceramics to match the defect area. This allows for better weight-bearing capacity and support during bone regeneration. Composite scaffolds can also incorporate osteoconductive materials (e.g., HAp) that provide scaffolding for bone growth, osteoinductive materials (e.g., BMPs) that induce the differentiation of stem cells into bone-forming cells, and vasoinductive materials (e.g., VEGF) that induce angiogenesis. Although the current status of composite scaffolds in vascularized bone tissue engineering is promising, there are still a number of challenges that need to be addressed. The primary challenge is to ensure compatibility between the composite scaffold materials and overall scaffold biocompatibility; it is desirable to design composite scaffolds in which the components synergize with each other to promote vascularized osteogenesis. Secondly, the optimal ratio and distribution of the different materials within the composite scaffold must be determined. This is necessary for the composite scaffold to achieve matched mechanical properties, excellent vascularization induction, and osteogenesis. In addition, it is important to consider whether the composite scaffold can achieve long-term stability after implantation, which is essential for subsequent vascularized bone regeneration and functional recovery.

In conclusion, future work needs to further explore the interactions and mechanisms between the skeletal vascular system and bone regeneration. Based on this, research into safe and effective pre-vascularization strategies and exploration of new materials, fabrication techniques, and optimized composite scaffold designs would be effective solutions to the challenges of vascularization in bone tissue engineering.

Fig. 1. Vascularization strategies in bone tissue engineering include in vivo and in vitro pre-vascularization strategies and the construction of immediately implantable scaffolds with vascularization capabilities. In vivo pre-vascularization strategies can be divided into tissue flap IVB (a), axial vascular tip IVB (b) and periosteal flap IVB (c). In vitro pre-vascularization strategies can be categorized into co-culture techniques (d) and cell sheet techniques (e). f) Factors affecting type-H vessel formation. g) Skeletal vasculature system. Immediately implantable scaffolds with vascularization capability are constructed by cytokine loading (h), scaffold material selection (i), and scaffold structure design (j).

Fig. 3. a) Preparation of polycaprolactone/hydroxyapatite (PCL/HAp) scaffolds loaded with encapsulated adipose-derived mesenchymal stem cell (ADMSC) and human umbilical vein endothelial cell (HUVEC) hydrogels. b) The co-culture system promotes capillary network formation and vascularization gene expression. c) In vivo angiogenesis within the composite scaffold [110]. d) Experimental procedure of different cell sheets for bone defects. e) Micro-computed tomography (CT) analysis of in vivo performance at 8 and 12 weeks. f) Hematoxylin and eosin staining and Masson staining at 12 weeks [112].

Fig. 4. a) Schematic diagram of the in vivo model of biphasic calcium phosphate (BCP)-induced differentiation of bone marrow-derived mesenchymal stem cells (BMSCs). b) Expression of angiogenic genes in BMSCs when cultured on BCP and coverslips in vitro. c) Expression of angiogenic genes in BMSCs when implanted on BCP and coverslips [150]. d) Schematic of the application of two bioactive factors incorporated in the calcium phosphate microcarrier for bone regeneration. e) Scanning electron microscopy (SEM) micrographs of calcium phosphate microcarriers. f) Implantation of dual growth factor-loaded calcium phosphate microcarriers into a mouse cranial bone model resulted in substantial new bone formation [152].

Fig. 5. a) Preparation process of the calcium phosphate collagen (CPC) composite scaffold. b) Tissue sections 8 weeks after in vivo implantation show a tight pincer-like structure between the receptor site and the scaffold [173]. c) Lumina and microvessel-like structure formation in co-cultures mixed with platelet-rich fibrin (PRF) medium. d) Effect of PRF medium on angiogenic gene and protein expression in two cell types [175]. e) The preparation process of the deferoxamine (DFO)-loaded SFH-HA composite hydrogel. f-g) Neovascularization and bone regeneration of the defects treated with different hydrogels. h) Bone and vascular neogenesis in the histological analysis at week 12 after implantation [176].

Fig. 7. a) Schematic diagram of a vascularized bionic scaffold and its biomechanical role in the repair of bone defects in a rat model. b) Three-dimensional reconstruction of micro-computed tomography images of the implant site showing the regenerative effect. c) Differential expression of relevant proteins in de novo tissues after implantation of bionic scaffolds into defects [247].

Fig. 9. a) Experimental procedure of modified mRNA-treated bone marrow-derived mesenchymal stem cell (BMSC) composite biomaterials to promote bone healing in a rat cranial defect model. b) Micro-computed tomography (CT) reconstruction of bone regeneration after treatment. c) Quantification of OCN- and CD31-positive cells at the sites of bone regeneration [279].
Characterization of Agricultural and Food Processing Residues for Potential Rubber Filler Applications Large volumes of agricultural and food processing residues are generated daily around the world. Despite the various potential uses reported for this biomass, most are still treated as waste that requires disposal and negatively impacts the environmental footprint of the primary production process. Increasing attention has been paid toward the use of these residues as alternative fillers for rubber and other large-scale commodity polymers to reduce dependence on petroleum. Nevertheless, characterization of these alternative fillers is required to define compatibility with the specific polymer, identify filler limitations, understand the properties of the resulting composites, and modify the materials to enable the engineering of composites to exploit all the potential advantages of these residue-derived fillers. Introduction Valorization of agricultural and food processing residues is not only an environmental trend nowadays, but also an important economic goal. These residues represent a widely and continuously available source of renewable raw materials. However, most of them lack a valuable application and are treated as wastes that require costly disposal and have large negative environmental impacts. These waste streams have been considered as potential sources of high-value chemicals, biomass for energy production and animal feed [1][2][3], but utility depends on particular chemical composition and the economic feasibility of the extraction and transformation processes. In practice, the consumption of agricultural and food residues is still very low. By 2050, we will need to generate 60-70% more food than is currently produced to feed the expected population of more than 9 billion people, by combining increased production with reduced loss and waste [4]. This implies a concomitant increase in already abundant crop and processing residues. Hence, diversified applications of the vast amounts of agricultural and food processing residues daily generated worldwide are required to consume these poorly exploited resources effectively. In the last two decades, increasing research has focused on the use of agricultural and food processing residues as alternative sources of fillers for rubber composites. This area of research is driven by the need to decrease dependence on petroleum derivatives, concerns about environmental footprint and sustainability of the rubber industry and the need to secure long-term, high volume, supplies of raw materials. Composites consist of two or more primary materials combined to produce one material with properties not possessed by the individual constituents [5]. Some agricultural and fillers and bio-based fillers that affect rubber reinforcement, characterization of fillers obtained from agricultural and food processing residues is needed to identify other attributes that can impact rubber reinforcement and predict composite performance. In this review, we outline important characteristics to consider when selecting residue-derived fillers and describe characterization methods that can help elucidate their interaction with the rubber matrix and predict and explain composite performance. Filler Characterization The reinforcement of elastomers by fillers is the result of a combination of physical and chemical interactions [39]. 
These complex interactions allow material flexibility to be maintained while enhancing strength and resistance to deformation [16]. Morphological and physicochemical properties of fillers determine the type and strength of the interactions between the polymer and the filler and, hence, the final composite properties [16,18,40,41]. In conventional carbon black and silica fillers, which have standardized production methods and very defined chemical composition, the primary filler characteristics that affect material performance are filler surface area, structure and surface activity [18,19]. Although these characteristics also are important for bio-based fillers, they are not the only characteristics that can affect rubber composite performance. Fillers with a wide diversity of particle size, shape, structure, chemical composition and crystallinity can be achieved from agricultural and food processing residues depending on the source of the material and the extraction method used [10,42]. A single method will not provide all the information needed to characterize all the materials, but a combination of multiple techniques, selected based on the type of filler, polymer application and processing conditions, can be effective.

Surface Area
Surface area is arguably the most important morphological characteristic affecting filler reinforcing potential [16,19,43]. This filler characteristic directly impacts its interfacial contact area with the polymer. Larger surface area and higher filler loading (amount of filler in the composites) facilitate more interfacial contact between the filler and the polymer, thus increasing reinforcing potential [18,40,44]. For conventional, non-renewable fillers, particle size is inversely proportional to particle surface area, and so this parameter is commonly used as a simple classification criterion for conventional fillers (Figure 1).

Figure 1. Classification of fillers according to average particle size. Adapted from reference [19].

Based on this classification, reinforcing filler particles have at least one dimension in the nanoscale (<100 nm), and are known as nanoparticles. The superior reinforcement achieved with these small particles compared to larger sized particles relies on there being greater numbers of particles per volume of rubber, and a greater interfacial contact area [20,44]. Big particles also may act as localized stress-raising inclusions, generating flaws within the composite that can initiate failure [20,40]. Despite the large surface area offered by nanofillers, important limitations have been identified, particularly for non-carbon black nanofillers like those obtained from agricultural and food processing residues. Nanoparticles are often much more expensive to produce than macro and micro size particles, and good filler dispersion within the polymer matrix is challenging [40,45-48]. Depending on the material, extensive hours of milling, harsh chemicals, high temperatures, and high pressures may be required to prepare nanoparticles (Figure 2) [46,49-51]. Furthermore, the higher surface area of nanoparticles increases the attraction between the particles, leading to their agglomeration and reduced composite performance [16,52,53]. To achieve homogeneous dispersion of nanoparticles in the rubber, complex mix protocols are required that often involve high power consumption, increasing processing costs [44,48].

Particle size is not the only characteristic affecting rubber reinforcement by bio-based fillers. Filler surface activity determines the strength and nature of the polymer-filler interaction [41], and structural features, such as material porosity, also play a role in reinforcement and can increase the surface area of the filler. Hence, fillers with similar particle size may reinforce differently. Barrera and Cornish [55] identified bio-based rubber composites made with micro sized fillers that have similar or better performance than composites made with a nanosized version of the same material. Moreover, composites made with micro sized fillers had a much lower energy consumption than nanofillers during the mixing of the materials.

Nitrogen, cetyl triethyl ammonium bromide (CTAB) and iodine adsorption are used to estimate filler surface area [16,40]. However, these methods involve molecular adsorption, which means that the results are affected by both the surface area and the surface activity of the filler. Furthermore, iodine is highly reactive, while CTAB requires calibration curves made using different standard carbon blacks.

Nitrogen Adsorption
Multilayer gas adsorption behavior, based on the Brunauer, Emmett and Teller (B.E.T.) method, is the most commonly used technique to estimate filler surface area [16]. The B.E.T. theory is based on the physical adsorption of gas molecules onto the surface of materials. The amount of gas adsorbed at a constant temperature (adsorption isotherm) is proportional to the surface area in contact with the gas, and is dependent on its relative vapor pressure [56]. Filler surface area is determined from the linear region of the adsorption isotherms of nitrogen given by the B.E.T. equation [57,58]:

$$\frac{P}{V_a\,(P_0 - P)} = \frac{1}{V_m C} + \frac{C - 1}{V_m C}\cdot\frac{P}{P_0}$$

where P is the manometer pressure in kPa, P_0 is the saturation vapor pressure of nitrogen in kPa, V_a is the volume of nitrogen adsorbed per gram of sample, V_m is the volume of nitrogen per gram that covers one monomolecular layer in standard cm³/g, and C is the B.E.T. constant, whose numerical value depends on the heat of adsorption by the monomolecular layer [58]. Although this method is the most widely accepted, it assumes that the filler surface is energetically homogeneous [27], which is rarely the case for bio-based materials. The surface of most bio-based fillers can be highly polar and possess functional groups, exposed ions and a mix of amorphous and crystalline areas. Surface area using nitrogen adsorption is calculated assuming the molecular cross-sectional area of the adsorbate is known. However, due to the quadrupolar nature of nitrogen molecules, interaction of nitrogen with the polar surface of bio-based fillers can change the orientation and micropore filling pressure, which leads to a miscalculation of the true surface area of bio-based fillers [59]. For instance, bio-based fillers evaluated at two different particle sizes showed lower surface area for micro sized cellulosic material than for macro sized particles of the same material [60]. This could be due to a decrease in the aspect ratio of the particles or changes in the surface chemistry caused by prolonged grinding. The same study found that particles obtained from different residues with similar particle size distribution showed considerable differences in surface area and pore volume [60]. Other adsorbates like argon, which does not have polar interactions with surface functional groups, may deliver a more accurate measurement. However, interpretation of argon isotherms is not as simple as for nitrogen isotherms. Furthermore, nitrogen is cheaper than argon and, despite the uncertainty of the true surface area of bio-based fillers, nitrogen adsorption surface area has been standardized as a predictive tool of performance of conventional fillers.

In addition to the total surface area, nitrogen adsorption also provides information about porosity and surface treatment in bio-based fillers. Micro-pores measured by nitrogen adsorption are not accessible to many rubber polymers, due to the large size of the polymer chains, and so they are not considered important for the reinforcing efficiency of carbon black. However, in bio-based fillers, pores in polar particles can contain moisture and other small molecules that could negatively affect reinforcement of the rubber and, hence, this is useful information for the characterization of the material. Moreover, these pores can have active sites for coupling agents [61]. Nitrogen adsorption by calcium carbonate before and after different levels of surface treatment with stearic acid showed a decrease in the equilibrium concentration as the degree of surface coverage increased [62].
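As a concrete illustration of how the linear B.E.T. region of an isotherm is converted into a specific surface area, the short sketch below fits the transformed isotherm by least squares and converts the monolayer capacity V_m into an area using the cross-sectional area of the nitrogen molecule. The isotherm points are synthetic and purely illustrative; the cross-sectional area (0.162 nm²) and the molar gas volume at STP are the values conventionally used for nitrogen.

```python
import numpy as np

def bet_surface_area(p_rel, v_ads_cm3_per_g):
    """Estimate the B.E.T. specific surface area (m^2/g) from a nitrogen isotherm.
    p_rel: relative pressures P/P0 (ideally within the ~0.05-0.30 linear range)
    v_ads_cm3_per_g: adsorbed volume at STP per gram of sample (cm^3/g)
    Fitted linear form: (P/P0) / [V(1 - P/P0)] = 1/(Vm*C) + [(C-1)/(Vm*C)]*(P/P0)
    """
    x = np.asarray(p_rel, dtype=float)
    y = x / (np.asarray(v_ads_cm3_per_g, dtype=float) * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)        # least-squares fit of the B.E.T. line
    v_m = 1.0 / (slope + intercept)               # monolayer volume, cm^3/g at STP
    c_const = slope / intercept + 1.0             # B.E.T. constant C
    n_avogadro = 6.022e23                         # molecules per mole
    sigma_n2 = 0.162e-18                          # m^2, N2 cross-sectional area
    v_molar = 22414.0                             # cm^3 per mole of gas at STP
    ssa = v_m / v_molar * n_avogadro * sigma_n2   # specific surface area, m^2/g
    return ssa, c_const

# Synthetic isotherm points in the linear B.E.T. range (illustrative only).
p_rel = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
v_ads = [2.1, 2.6, 3.0, 3.4, 3.8, 4.3]            # cm^3/g at STP
area, c = bet_surface_area(p_rel, v_ads)
print(f"B.E.T. surface area ~ {area:.1f} m^2/g (C = {c:.0f})")
```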
For bio-based fillers, a high surface area can result from materials having a broad particle size distribution and complex structure. Thus, particle size analysis using laser diffraction and/or microscopy is recommended to complement the information provided by adsorption methods.

Surface Activity
Filler surface activity reflects the abundance and concentration of high energy sites on the material surface and influences filler dispersibility and filler compatibility within the rubber. This filler characteristic determines the type and strength of the polymer-filler interaction and, hence, the degree of composite reinforcement [63,64]. For instance, fillers with high amounts of active surface hydrogen ions generate strong filler-filler networks that can lead to agglomeration problems during processing and limit the reinforcement efficiency of the filler in most non-polar elastomers. Although high energy sites are mainly associated with functional groups [18,40,65], the surface activity also depends on the accessibility of these sites, which is determined by the arrangement and orientation of surface chemical groups [66,67]. Furthermore, high energy sites also can arise at structural heterogeneities, such as boundaries between crystallites and amorphous regions [19,67,68]. Therefore, highly active filler surfaces can result in a variety of interactions ranging from van der Waals forces to chemical interactions.

Filler surface activity is measured as surface free energy, a parameter that describes the interactive potential of a given surface [69,70]. The surface free energy (γ_S) of a solid surface is the sum of a dispersive (γ_S^D) and a specific (γ_S^SP) component [71,72]:

$$\gamma_S = \gamma_S^{D} + \gamma_S^{SP}$$

The dispersive component represents the surface's ability to interact through London-type interactions [64,73]. These weak intermolecular forces play the main role in the interaction of fillers with non-polar molecules, such as most general purpose rubbers. In contrast, the specific component represents the interactions due to all other types of forces, such as acid-base, magnetic, metallic, and hydrogen bonding [74]. Therefore, fillers that have a high specific component and a low dispersive component are associated with weak polymer-filler and strong filler-filler interactions [19]. Surface energy characterization is particularly important for fillers obtained from agricultural and food processing residues. Varied and complex material composition causes variations in surface energy, as do different feedstock sources and production methods. For instance, milling particles can increase surface energy by disrupting crystalline structures and exposing high energy sites [75]. Moreover, surface energy characterization is required to evaluate the efficiency of filler surface modifications. Unlike surface area measurement, there are no standardized methods for the quantification of surface energy. Currently, the most commonly used methods are contact angle and inverse gas chromatography [64,73].

Contact Angle
The measurement of the liquid-solid contact angle is a commonly used method for the characterization of solid surfaces. The dispersive and specific components of the surface energy are obtained from contact angle measurements through the Young equation, in which the superscripts s and l refer to the surface energy of the solid and the liquid, respectively, and θ is the contact angle [50,72].
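In practice, the two components are commonly extracted by measuring contact angles with at least two probe liquids whose dispersive and specific surface tension components are known, and then solving the resulting system of equations with a geometric-mean (Owens-Wendt) working form derived from the Young equation. The sketch below illustrates this two-liquid approach; the probe-liquid values and contact angles are illustrative assumptions, not data from any particular filler.

```python
import numpy as np

def surface_energy_components(probes, angles_deg):
    """Estimate the dispersive and specific surface energy components of a solid
    (mJ/m^2) from contact angles of two or more probe liquids, using the
    geometric-mean (Owens-Wendt) working form of the Young equation:
        gamma_l * (1 + cos(theta)) = 2*sqrt(gd_s*gd_l) + 2*sqrt(gsp_s*gsp_l)
    The relation is solved by linear least squares in sqrt(gd_s) and sqrt(gsp_s).
    probes: list of (dispersive, specific) surface tension components per liquid
    angles_deg: measured contact angles for each liquid, in degrees
    """
    A, b = [], []
    for (gd_l, gsp_l), theta in zip(probes, angles_deg):
        gamma_l = gd_l + gsp_l
        A.append([np.sqrt(gd_l), np.sqrt(gsp_l)])
        b.append(gamma_l * (1.0 + np.cos(np.radians(theta))) / 2.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    gd_s, gsp_s = x[0] ** 2, x[1] ** 2
    return gd_s, gsp_s, gd_s + gsp_s

# Illustrative probe liquids (dispersive, specific components in mJ/m^2) and
# hypothetical contact angles measured on a compacted bio-based filler.
probes = [(21.8, 51.0),   # water, a commonly cited dispersive/specific split
          (50.8, 0.0)]    # diiodomethane, essentially fully dispersive
angles = [72.0, 38.0]     # hypothetical measured angles, in degrees
gd, gsp, total = surface_energy_components(probes, angles)
print(f"dispersive = {gd:.1f}, specific = {gsp:.1f}, total = {total:.1f} mJ/m^2")
```

A solid for which the computed specific component is large relative to the dispersive component would be expected, as discussed above, to show strong filler-filler and weak filler-rubber interactions in non-polar elastomers.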
Although contact angle offers a simple way to characterize solid surfaces, these methods were designed for macroscopically flat surfaces, not for small particles, and are not very effective on particulates, rough surfaces and chemically heterogeneous materials, such as those of bio-based fillers [65,76,77]. Despite adaptations like the compression of samples to form planar surfaces and adherence of particles to glass slides or tapes (Figure 3), it is difficult to obtain reliable measurements of surface energy for bio-based fillers, especially when comparing different materials. Attempts to obtain quantitative information on surface energy report high scattering of surface energy values, due to the heterogeneity among samples, even after averaging values from multiple measurements [72]. The contact angle is, however, an excellent tool to generate qualitative information about successful surface treatment and different levels of surface coverage by comparing the affinity of the bio-based material (before and after treatment) with liquids of different polarities [62].

Inverse Gas Chromatography (IGC)
Inverse gas chromatography (IGC) is a versatile and robust adsorption method to characterize the surface properties of a solid. Contrary to contact angle, IGC is independent of sample morphology, and solids in any form, including powders, fibers and materials with different crystalline and amorphous content, can be evaluated. This is particularly suitable for surface characterization of small particles and porous materials with different chemistries, such as those of bio-based materials [66,73,74]. Moreover, IGC allows tight control of experimental conditions, including humidity and temperature, that is not possible with other methods like contact angle, and which can significantly impact the measurement of surface activity. Hence, IGC provides more reliable quantitative information that can be used to predict the performance of filler reinforcement. In IGC, solid particles and fibers are packed into a chromatography column as the stationary phase (Figure 4) [74,78]. The surface energy of fillers is determined by analyzing the retention times of probes, with known characteristics, which are injected as the mobile phase. These probes are injected at very low concentrations ("infinite dilution") to eliminate probe-to-probe interaction, so that interaction occurs only with the high-energy sites on the particle surface [74,78,79].
The probe retention times depend on the type and strength of the interaction between the filler material (stationary phase) and the specific probe [74,78,80], and are directly related to the thermodynamic interaction between the probes and the material surface [71,80,81]. IGC has been extensively used for the characterization of complex and energetically heterogeneous materials like pharmaceutical carriers. Also, some studies have reported the use of IGC on bio-based materials, including mineral bone and eggshell particles and cellulosic materials [60,65,83]. These studies showed how IGC can effectively quantify differences in surface characteristics of materials caused by physicochemical changes resulting from various grinding and drying conditions, and from surface treatments. Moreover, the versatility of this method allows comparison with more conventional fillers like carbon black and silica [73,80], which can lead to a better understanding of differences in rubber reinforcement. Nevertheless, the main drawbacks of IGC are the need for a more complicated setup and for multiple, more expensive chemicals than other surface characterization techniques. Packing of the chromatographic columns can be time-consuming and can introduce problems during measurements, such as pressure drops across the column due to agglomeration of the particles.

Filler Chemistry
Chemistry-related variables can greatly impact the reinforcing effect of filler particles, directly or indirectly, but are often overlooked. The chemical composition of bio-based fillers defines their surface activity and their chemical and thermal stability, and hence, composite performance [80]. The presence of active chemical groups, like hydroxyl groups, on the filler surface impedes interfacial adhesion of the filler to the rubber, resulting in poor reinforcement [40]. The chemical composition of agricultural and food processing residues is highly diverse (Table 1) but, in general, these residues possess more polar surfaces than carbon black, due to their high surface concentration of active chemical groups.
Although most research that has explored these residues as potential fillers for rubber has focused on the isolation of a single component, mainly calcium carbonate, cellulose, starch or chitin [8,84,85], the natural complex array of components in these residues could provide better reinforcement and improve other material properties. For instance, the presence of hydrophobic residual lignin and waxes in cellulose fibers may promote better adhesion than cellulose alone [42]. Moreover, high amounts of lignin can result in lower water absorption by the composite [31]. Unsaturated resins and proteins could behave as active ingredients in the vulcanization of the rubber or processing aids. Filler surface chemistry also affects the vulcanization behavior of filled compounds. Alkaline fillers can cure more quickly and lead to higher crosslink density unless the curing package is optimized to control the curing rate [10,40]. Active chemical groups on the surface of the filler can react with the compounding ingredients required to efficiently crosslink rubber molecules, reducing crosslink density and performance [40]. Likewise, active filler surfaces may absorb water, due to hydrogen bonding with water molecules [102]. In rubber composites, water adsorption may cause filler swelling compromising dimensional stability. Drying material to remove the adsorbed water may further weaken interfacial adhesion between the filler and the polymer and introduce flaws [38,89]. Nevertheless, chemically-active surfaces allow surface modification through grafting of molecules, or other physico-chemical treatments, to generate composites with unique properties [34,37,49,69]. For instance, grafting or coupling of fillers chemically attaches them to the rubber resulting in stronger polymer-filler interfaces and reduces filler-filler attraction which can lead to lower hysteresis in the material. Surface modifications often improve filler compatibility with the rubber, reduce reactivity with compounding ingredients, and inhibit moisture adsorption [40,89]. Chemical composition of the filler also defines its thermal stability, which is an important consideration in processing and aging of the composite. Low molecular weight components can degrade at rubber processing temperatures or operating conditions and adversely affect composite performance. Filler chemistry can be characterized by spectroscopic techniques, including Fourier transform infrared spectroscopy (FTIR), X-ray fluorescence spectroscopy (XRF), Energy-dispersive X-ray spectroscopy (EDX) and X-ray photoelectron spectroscopy (XPS) [38,73,93,103,104], to provide chemical group information and serve as a tool to evaluate the effectiveness of surface treatments. A complete analysis of the material chemical composition can be performed using thermogravimetric analysis and chromatography. Fourier Transform Infrared Spectroscopy FTIR is the most commonly used filler surface characterization technique, due to its short characterization time, high signal-to-noise ratio, high accuracy in frequency, simplicity and because it can be used on almost any material [105]. Infrared spectroscopy uses the absorption of infrared radiation by a chemical bond in a molecule at a specific frequency (wave numbers) to provide information about functional groups and molecular structure. Absorption occurs when the bond vibrational frequency matches that of the infrared radiation. The vibration pattern is unique for a given molecule [106][107][108]. 
Peak intensity in a spectrum is proportional to the concentration of the corresponding bond or molecule. Therefore, IR spectroscopy can be used to quantify a particular component based on the Lambert-Beer law [106,107]:

$$A_{\nu} = \varepsilon_{\nu}\, b\, c$$

where A_ν is the absorbance at wavenumber ν, ε_ν is the molar absorption coefficient, b is the path length, and c is the concentration of the material. Nevertheless, due to structural complexities, it is unusual to find a single absorption frequency that can be used to quantify any single component. For instance, in bio-based materials, a large portion of organic components like lignin and cellulose can have overlapping bands with mineral materials. Hence, this technique is mostly used to obtain qualitative information. Quantitative analysis requires the use of standards, appropriate software and calibration with regression approaches [107]. Characterization of bio-based fillers with FTIR analysis identifies active chemical groups, or the lack thereof, resulting from surface treatment by comparison with untreated materials. For example, analysis of hemp fibers treated with acetic and propionic anhydride resulted in absorbance increments in the regions of 1737 and 1162-1229 cm⁻¹, associated with stretching vibration of the carbonyl (C=O) group and C-O stretching of the ester carboxyl group, due to the esterification of the fibers [38]. However, FTIR does not provide quantitative information on the extent of surface treatment-induced changes.

Although FTIR can be performed in transmission or reflection mode, recently the attenuated total reflection (ATR) mode has become the most commonly used method for the surface characterization of fillers [38,69,107,109]. ATR-FTIR is done by bringing the filler into direct contact with a crystalline material containing prisms that act as an internal reflection element. Although this is an easy and fast way to characterize a material surface, it does not account for changes in functional groups caused by heat during processing of the composite. Another important parameter to consider when using ATR-FTIR, or any surface characterization technique, is the depth of penetration. In ATR-FTIR, the depth of infrared radiation penetration depends on the wavelength, the incident angle, and the refractive indices of the prism and the sample [108,110]:

$$d_p = \frac{1}{2\pi \upsilon\, n_1 \sqrt{\sin^2\theta - (n_2/n_1)^2}}$$

where d_p is the penetration depth, υ is the infrared radiation frequency (wavenumber), n_2 and n_1 are the refractive indices of the sample and the prism, respectively, and θ is the incident angle. Crystalline materials used as prisms include diamond (C), germanium (Ge), silicon (Si), zinc selenide (ZnSe) and thallium bromoiodide (KRS-5) [107,108]. These materials have different indices of refraction (Table 2), as well as different robustness and cost. Prism selection depends on the type of material to be characterized. For instance, samples that strongly absorb infrared radiation, like carbon black, need a lower depth of penetration to avoid overabsorption. In general, the lower the prism refractive index, the higher the penetration depth [108]. In addition, if the samples to be characterized are abrasive, a more robust prism may be desired.

X-ray Spectroscopy
Different X-ray spectroscopy techniques, including XPS, XRF and EDX, can be used to evaluate the elemental composition of fillers. X-ray spectroscopic methods are non-destructive techniques based on the principle that each element has a unique response to a high-energy beam.
X-ray Spectroscopy
Different X-ray spectroscopy techniques, including XPS, XRF and EDX, can be used to evaluate the elemental composition of fillers. X-ray spectroscopic methods are non-destructive techniques based on the principle that each element has a unique response to a high-energy beam. These techniques are particularly useful for the chemical characterization of bio-based fillers with high mineral content, such as rice husk or mollusk shells [111,112], but they can also be used in other compositionally diverse materials to identify trace elements, oxidation states and variations in the carbon/oxygen ratio resulting from the filler extraction method, purification or surface treatments. For instance, XPS can quantify differences in the O/C ratio of cellulosic fibers from different sources and the variation of the O/C ratio as a result of acetylation [38]. Interfacial interaction between non-polar rubbers and more polar bio-based fillers is a well-known variable affecting the reinforcement of rubber composites. However, the performance of the rubber has not been correlated quantitatively to the polarity of different bio-based fillers, and X-ray spectroscopy is an important tool to further understand and quantify these differences. Although all these techniques are very sensitive and provide qualitative and quantitative information about the elemental composition of a material surface, as well as information about associated functional groups, each technique has its own limitations in terms of spatial resolution (electron penetration depth) and the information it provides [110,113]. Statistical analysis has been used to classify different carbon blacks based on surface chemistry information obtained from XPS, thermal analysis-mass spectroscopy (TGA-MS) and IGC [73]. X-ray spectroscopy techniques require very sophisticated instrumentation. Moreover, data interpretation becomes more difficult as the complexity of the material increases; bio-based materials are often multiphasic and compositionally complex. Furthermore, the reliability of the results is highly dependent on sample preparation.

Thermogravimetric Analysis (TGA)
Thermal analysis is a crucial test for bio-based fillers, due to the high temperatures required for vulcanization and encountered during the service life of finished rubber composites. Thermal decomposition of the filler can negatively affect the performance of the material by creating voids or simply by not achieving the expected reinforcement. Although TGA is mainly used to determine the thermal stability of the filler, it can also provide information about the chemical composition of the material by separating different constituent fractions within the material, for instance, the moisture, organic and inorganic fractions. TGA measures the mass loss upon heating, and each step of mass loss marks changes in the sample due to decomposition and chemical reactions [114]; such changes can be used to evaluate polymer-filler interactions and surface modifications. For example, TGA has been used to correlate the weight loss in a specific temperature range to the amount of condensation water lost from silanol groups, which, in turn, was used to estimate silanol group density [61]. In this study, rice hull ash had a lower silanol density than commercial silica, which reduced the efficiency of surface treatments intended to improve its interaction with rubber. In bio-based fillers, small organic molecules and impurities can quantitatively affect both the processing and the performance of composites. For instance, resin in lignocellulosic materials has been associated with low composite modulus [9].
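As a rough sketch of how the constituent fractions mentioned above can be read off a TGA trace, the snippet below splits a mass-loss curve into moisture, organic and inorganic (ash) fractions using assumed temperature cut-offs; both the cut-offs and the synthetic curve are illustrative and would need to be adapted to the actual material.

```python
import numpy as np

def tga_fractions(temp_C, mass_pct, moisture_cutoff=150.0, organic_cutoff=600.0):
    """Estimate moisture, organic and inorganic fractions (wt%) from a TGA curve.
    Cut-off temperatures are illustrative assumptions and depend on the material."""
    m0 = mass_pct[0]
    m_after_moisture = np.interp(moisture_cutoff, temp_C, mass_pct)
    m_after_organic = np.interp(organic_cutoff, temp_C, mass_pct)
    return {
        "moisture_wt%": m0 - m_after_moisture,
        "organic_wt%": m_after_moisture - m_after_organic,
        "ash_wt%": mass_pct[-1],
    }

# Synthetic example trace: ~5% moisture loss, ~70% organic decomposition, 25% ash
temp = np.array([25, 150, 300, 450, 600, 800], dtype=float)
mass = np.array([100, 95, 80, 45, 25, 25], dtype=float)
print(tga_fractions(temp, mass))
```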
TGA can be coupled to additional gas analysis techniques, such as FTIR and gas chromatography with mass spectroscopy (GC-MS), to provide both quantitative and qualitative information about the decomposition and chemical make-up of the filler [114][115][116]. TGA coupled with MS was used to detect and quantify the presence of sulfate groups in cellulose nanocrystals. Moreover, the grafting of molecules on the surface of the filler and changes in crystallinity can also be determined by thermal analysis [117]. Nevertheless, despite the multiple advantages of using TGA alone or coupled with other analytical tools, exact chemical characterization in some materials can be challenging. In TG/FTIR or TG/MS, identification of specific components is complicated when gases generated in TGA have overlapping spectra. Also, although TG/GC-MS allows the composition of the evolved gases from the organic fraction of a material to be separated and identified, it is not possible to assign a specific mass loss to each component because of the retention in the chromatographic column [118]. Elemental analysis of the ash must be performed to characterize the inorganic fraction.

Shape and Structure
Conventional fillers have well-defined shapes, including spheres (carbon black and silica) and plates (mica, talc and kaolin), while fillers obtained from bio-based sources may have a wider variety of shapes, including elongated rod-like shapes (cellulosic fibers or crystals) and undefined irregular shapes (particles resulting from grinding) (Figure 5) [40,44]. Nevertheless, similar to conventional fillers, small primary particles can aggregate into complex three-dimensional objects, due to bonding forces between the filler particles [19]. The random spatial arrangement of primary particles generates different degrees of irregularity that define the effective filler structure [18]. Moreover, for bio-based fillers, complex structure can also result from naturally occurring pores, intermeshed fibers and surface roughness.
Filler structure contributes to composite reinforcement by mechanically interlocking the polymer chains, which restricts their mobility when subject to deformation [18,41]. Branching of filler aggregates defines the effective filler volume fraction in the polymer and, therefore, contributes to the hydrodynamic effect of the filler in the polymer [16]. Particle shape and structure are easy to describe qualitatively, but are difficult to measure quantitatively. This is particularly true for fillers obtained from agricultural and food processing residues, in which shape and structure are diverse and can vary depending on the source and the method used to prepare the particles [49,119]. Furthermore, different structures may coexist in the same filler, due to random aggregation and source heterogeneity [16,49]. The structure of conventional fillers, like carbon black, can be characterized by the volume of dibutylphthalate (DBP) absorbed [16,18]. However, this measurement only represents the empty volume between particles and agglomerates and does not describe the primary particle shape or structure. Furthermore, DBP absorption is sensitive to the filler surface chemistry, so it is an unreliable measurement for most non-black fillers [16]. Other methods include microscopy techniques like scanning electron microscopy (SEM), transmission electron microscopy (TEM), and atomic force microscopy (AFM).

Electron Microscopy
Electron microscopy techniques are widely used to characterize filler shape and structure, due to their high spatial resolution [120]. However, these techniques provide mainly qualitative information, are very time-consuming, are highly dependent on sample selection and preparation, and only a few particles/aggregates can be observed at one time [16,41], which is a problem when 200 measurements may be needed [117]. Despite the lack of quantitative data, electron microscopy is heavily relied upon to assess filler dispersion and polymer-filler interactions [120,121].
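Because each micrograph captures only a few particles, dimensions measured on many images are usually pooled before size or shape is reported; the sketch below turns a set of hypothetical length/width measurements into the summary statistics typically quoted. The numbers are placeholders, not data from the cited work.

```python
import statistics

# Hypothetical particle measurements (micrometres) pooled from several micrographs
lengths = [12.4, 8.7, 15.1, 9.9, 11.3, 14.0, 10.5, 13.2]
widths = [3.1, 2.4, 3.8, 2.9, 3.0, 3.5, 2.7, 3.3]

aspect_ratios = [l / w for l, w in zip(lengths, widths)]

print("n =", len(lengths))
print("mean length (um):", round(statistics.mean(lengths), 2),
      "+/-", round(statistics.stdev(lengths), 2))
print("mean aspect ratio:", round(statistics.mean(aspect_ratios), 2),
      "+/-", round(statistics.stdev(aspect_ratios), 2))
```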
Scanning electron microscopy (SEM) uses an electron beam to scan a material surface and visualize morphological features, which can be filler dependent. Electron interaction with the surface generates specific electron signals that are detected and converted into magnified two-dimensional images [87,120,122]. SEM resolution is limited to approximately 10 nm, so SEM is generally used to characterize particles at the micron scale [120]. SEM is also used to characterize fracture surfaces of rubber composites and to analyze filler dispersion and the presence of voids and aggregates [120,121]. SEM requires conductive surfaces; hence, fillers and rubber composites must be sputter coated with a layer of conductive material before analysis [120,123]. Like SEM, transmission electron microscopy (TEM) uses an electron source, electron lenses and electron detectors. However, in TEM the electrons pass through the sample [110,122]; TEM operates at higher electron energies and smaller focal lengths and, hence, achieves higher resolution than SEM. Sections must be less than 100 nm thick to allow free passage of electrons through the sample with relatively little loss of energy [120,122]. Staining is generally used to improve contrast and highlight different components in particles obtained from agricultural and food processing residues. TEM is preferred for nanosized filler characterization [122]. TEM allows characterization of the shape of individual nanoparticles, particle dispersion within the composite, and the filler network. Multiple imaging of a sample at various angles can produce a three-dimensional representation of the sample [120,124].

Atomic Force Microscopy (AFM)
AFM is mainly used to analyze topographical features of composite surfaces, but it has also been used to evaluate the structure, shape and elastic modulus of single particles [11,42,54,120,125]. This technique uses a sharp tip (radius of 10-100 nm), supported on a cantilever, to scan the surface of a sample. A laser beam focused on the cantilever monitors and records its deflection resulting from topographical variation in the sample surface [110,120]. AFM has a higher resolution than SEM and can be used to evaluate nanosized particles. However, similar to SEM, this analysis is limited to surfaces. AFM offers three-dimensional surface images and does not require sample sputter coating [124]. Nevertheless, limitations of AFM include its lower scanning speed compared to SEM and tip artefacts like tip/sample broadening, which may overestimate the size of particles obtained from agricultural and food processing residues [11,126].

Filler Crystallinity
Crystallinity is not a characteristic typically evaluated in conventional fillers, but it can have both direct and indirect effects on rubber reinforcement, so it is important to evaluate it for mineral and lignocellulosic fillers. Filler crystallinity can affect the filler surface activity and, hence, its interaction with rubber. Amorphous regions can concentrate structural defects that translate into high-energy sites [19,67]. Plant fibers and calcium carbonate particles have consistently displayed lower dispersive components of surface energy, typical of materials with lower crystallinity [66,67]. Crystallinity impacts particle tensile strength, modulus and water resistance, and so affects their reinforcing potential [11,31,54,127]. Furthermore, crystalline materials have a lower tendency to undergo physical and chemical changes than amorphous materials [128], leading to better composite stability. Changes in crystallinity can also indicate compound purity, such as in the purification of cellulose nanocrystals. X-ray diffraction (XRD) is the most commonly used technique to quantify the amount of crystallinity [49,119,128,129]. The percentage of crystallinity is obtained from the ratio of the crystalline peak area to the total XRD intensity profile [130,131]: crystallinity (%) = A_c/(A_c + A_a) × 100, where A_c is the crystalline area and A_a is the amorphous area on the X-ray diffractogram.
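A minimal sketch of the area-ratio calculation above: once a diffractogram has been decomposed into crystalline peaks and an amorphous halo (for example by the curve-fitting approaches discussed next), the crystallinity follows directly from the integrated areas. The synthetic profiles below are placeholders.

```python
import numpy as np

def integrate(y, x):
    """Simple trapezoidal integration of y over x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def crystallinity_percent(two_theta, crystalline, amorphous):
    """Percent crystallinity = A_c / (A_c + A_a) * 100 from separated components."""
    a_c = integrate(crystalline, two_theta)
    a_a = integrate(amorphous, two_theta)
    return 100.0 * a_c / (a_c + a_a)

# Placeholder profiles: a sharp crystalline peak over a broad amorphous halo
two_theta = np.linspace(5.0, 40.0, 701)
crystalline = 900.0 * np.exp(-((two_theta - 22.6) / 0.8) ** 2)
amorphous = 300.0 * np.exp(-((two_theta - 21.0) / 6.0) ** 2)
print(f"crystallinity: {crystallinity_percent(two_theta, crystalline, amorphous):.1f} %")
```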
Calculation of crystallinity depends on the decomposition of the total XRD intensity profile into the amorphous and the crystalline components. This separation of components can be challenging and represents the main limitation of XRD characterization of bio-based fillers [128]. Different methods have been used to quantify the crystalline and amorphous components of total XRD intensity profiles. One method decomposes the total XRD profile by curve fitting, using Gaussian, Lorentzian and Voigt functions to separate the crystalline and amorphous components [123,132,133]. Another method uses the Ruland-Vonk or amorphous contribution subtraction method, in which the profile obtained from a standard material is subtracted from the total XRD profile [123,133]. The Segal or peak height method is particularly used for lignocellulosic materials. In this method, crystallinity is calculated from the equation [123,133,134]: CrI (%) = (I_200 − I_am)/I_200 × 100, where I_200 is the maximum peak intensity at 2θ = 22.6° and I_am is the minimum peak intensity between the (2 0 0) and (1 1 0) peaks at 2θ = 18° [123,135]. These different methods may produce considerably different results [132]. Characteristic diffraction peaks can also help identify differences in crystal structure between conventional mineral fillers and fillers obtained from agricultural and food processing residues. The crystal phase of calcium carbonate obtained from seashells is mostly aragonite and calcite, whilst the vaterite crystal phase is only seen in synthetic materials [111].

Conclusions
Agricultural and food processing residues offer a wide variety of materials to explore as potential sustainable fillers for rubber composites. As these alternative filler sources are considered, we need to better understand and quantify the filler characteristics that affect their reinforcement efficiency in rubber composites. Given the diversity among these materials, comprehensive selection criteria beyond particle size and surface polarity must be developed. Therefore, appropriate characterization techniques are needed to fully understand these materials and their potential performance and cost advantages over traditional fillers. Although some characterization methods applied to conventional fillers can be used for alternative fillers, new and modified methods are required, due to the inherent differences in chemistry and morphology of these residue-derived materials. Furthermore, the efficacy of such fillers should always be compared with that of conventional fillers, and filler combinations hold considerable promise.
Well-posedness and numerical algorithm for the tempered fractional ordinary differential equations
Trapped dynamics appears widely in nature, e.g., the motion of particles in viscous cytoplasm. The famous continuous time random walk (CTRW) model with a power law waiting time distribution (having a diverging first moment) describes this phenomenon. Because of the finite lifetime of biological particles, sometimes it is necessary to temper the power law measure such that the waiting time measure has a convergent first moment. Then the time operator of the Fokker-Planck equation corresponding to the CTRW model with tempered waiting time measure is the so-called tempered fractional derivative. This paper focuses on discussing the properties of the time tempered fractional derivative, and on studying the well-posedness and the Jacobi-predictor-corrector algorithm for the tempered fractional ordinary differential equation. By adjusting the parameter of the proposed algorithm, any desired convergence order can be obtained, and the computational cost increases linearly with time. The effectiveness of the algorithm is numerically confirmed.

Introduction
Fractional calculus has a long history; its origin can be traced back to the correspondence between Leibniz and L'Hôpital in 1695. In the past three centuries, the theory of fractional calculus has been developed by many mathematicians and physicists, and from the last century books covering fractional calculus began to emerge, such as Oldham and Spanier (1974), Samko, Kilbas and Marichev (1993), Podlubny (1999), and so on. In recent years, more theories and experiments have shown that a broad range of non-classical phenomena appearing in the applied sciences and engineering can be described by fractional calculus [33,26,35]. Because of its good mathematical features, fractional calculus has nowadays become a powerful tool in depicting the anomalous kinetics that arise in physics, chemistry, biology, finance, and other complex dynamics [26]. In practical applications, several different kinds of fractional derivatives, such as the Riemann-Liouville fractional derivative, the Caputo fractional derivative [33,35], the Riesz fractional derivative [35], and the Hilfer fractional derivative [19,42], have been introduced. One of the typical applications of fractional calculus is the description of the anomalous diffusion behavior of living particles; tempered fractional calculus describes the transition between normal and anomalous diffusion (or anomalous diffusion in finite time or bounded physical space). In the continuous time random walk (CTRW) model, for a Lévy flight particle, the scaling limit of the CTRW with a jump distribution function φ(x) ∼ x^{-(1+α)} (1 < α < 2) exhibits superdiffusive dynamics. The corresponding stable Lévy distribution for the particle displacement contains arbitrarily large jumps and has divergent spatial moments. However, the infinite spatial moments may not be realistic for some physical processes [8]. One way to overcome the divergence of the moments of Lévy distributions in transport models is to exponentially temper the Lévy measure; the space fractional operator is then replaced by the spatially tempered fractional operator in the corresponding models [7,8,36]. This paper concentrates on the time tempered fractional derivative, which arises in the Fokker-Planck equation corresponding to the CTRW model with a tempered power law waiting time distribution [34,17].
Tempering the power law waiting time measure makes its first moment finite and the trapped dynamics more physical. Sometimes it is necessary/reasonable to make the first moment of the waiting time measure finite, e.g., the biological particles moving in viscous cytoplasm and displaying trapped dynamical behavior just have finite lifetime. The time tempered diffusion dynamics describes the coexistence/transition of subdiffusion and normal diffusion phenomenon (or the subdiffusion in finite time) which was empirically confirmed in a number of systems [8,27]. More applications for the tempered fractional derivatives and tempered differential equations can be found, for instance, in poroelasticity [18], finance [7], ground water hydrology [27,28], and geophysical flows [29]. Tempered fractional calculus can be recognized as the generalization of fractional calculus. To the best of our knowledge, the definitions of fractional integration with weak singular and exponential kernels were firstly reported in Buschman's earlier work [4]. For the other different definitions of the tempered fractional integration, see the books [39,35,28] and references therein. This work continues previous efforts [25] to explore the time tempered fractional derivative. The well-posedness, including existence, uniqueness, and stability, of the tempered fractional ordinary differential equation (ODE) is discussed, and the properties of the time tempered fractional derivative are analyzed. Then the Jacobi-predictor-corrector algorithm for the tempered fractional ODE is provided, which has the striking benefits: 1. any desired convergence order can be obtained by simply adjusting the parameter (the number of interpolation points); the computational cost increases linearly with the time t instead of t 2 usually taken place for nonlocal time dependent problem. And extensive numerical experiments are performed to confirm these advantages. In Section 2, we introduce the definitions and show the properties of the tempered fractional calculus, including the generalizations of the tempered fractional derivatives in the Riemann-Liouville and Caputo sense, and the composite property. More basic properties are listed and proved in Appendix A; the expression and properties of the tempered fractional calculus in Laplace space are proposed and proved in Appendix B. In Section 3, we discuss the initial value problem of the tempered fractional ODE: first derive the Volterra integral formulation of the tempered fractional ODE; then prove the well-posedness of the considered problem. The Jacobi-Predictor-Corrector algorithm for the tempered fractional ODE is designed and discussed in Section 4, and two numerical examples are solved by the algorithm to show its powerfulness. Preliminaries In this section, we first give the definitions and some properties of the tempered fractional calculus. Let [a, b] be a finite interval on the real line R. Denote L([a, b]) as the integrable space which includes the Lebesgue measurable functions on the finite interval [a, b], i.e., And let AC [a, b] where a I σ t denotes the Riemann-Liouville fractional integral Obviously, the tempered fractional integral (1) reduces to the Riemann-Liouville fractional integral if λ = 0. In practical applications, sometimes the fractional integral (1) is represented as a D −σ,λ t u(t). 
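To make the tempered fractional integral concrete, the sketch below evaluates aI_t^{σ,λ}u(t) = (1/Γ(σ)) ∫_a^t e^{−λ(t−s)} (t−s)^{σ−1} u(s) ds, i.e., the convolution form recalled in Appendix B, by a crude midpoint-rule quadrature. This is only an illustration of the definition, not the Jacobi-predictor-corrector scheme developed later; the test function is an arbitrary choice with a known closed form.

```python
import math

def tempered_fractional_integral(u, t, sigma, lam, a=0.0, n=20000):
    """Midpoint-rule approximation of the Riemann-Liouville tempered fractional
    integral (1/Gamma(sigma)) * int_a^t exp(-lam*(t-s)) * (t-s)**(sigma-1) * u(s) ds.
    Crude quadrature for illustration; the singular endpoint s = t is avoided
    by evaluating the integrand at interval midpoints."""
    h = (t - a) / n
    total = 0.0
    for k in range(n):
        s = a + (k + 0.5) * h
        total += math.exp(-lam * (t - s)) * (t - s) ** (sigma - 1) * u(s)
    return total * h / math.gamma(sigma)

# Check against a case with a known closed form: for u(s) = exp(-lam*s),
# the tempered integral equals exp(-lam*t) * t**sigma / Gamma(sigma + 1).
sigma, lam, t = 0.5, 1.0, 2.0
approx = tempered_fractional_integral(lambda s: math.exp(-lam * s), t, sigma, lam)
exact = math.exp(-lam * t) * t ** sigma / math.gamma(sigma + 1.0)
print(approx, exact)
```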
Remark 1 ([3]) The variants of the Riemann-Liouville tempered fractional derivatives are defined as Definition 3 (fractional substantial derivative [16,40,6]) For n−1 < α < n, n ∈ N + , and λ(x) being any given function defined in space domain. The Riemann-Liouville fractional substantial derivative is defined by where a I n−α,λ(x) t denotes the Riemann-Liouville fractional integral and Remark 2 The fractional substantial derivative (6) is equivalent to the Riemann-Liouville tempered fractional derivative (3) if λ(x) is a nonnegative constant function. In fact, using integration by parts leads to The tempered n-th order derivative of u(t) equals to d dt + λ n u(t), which can be simply/resonably denoted as D n,λ u(t). Proof. Take v(t) = e λt u(t) in the equation for the Riemann-Liouville and Caputo fractional derivatives [33,23,35] Multiplying both sides of the above equation by e −λt , we obtain Furthermore, using the definitions of Riemann-Liouville and Caputo tempered fractional derivatives, we get that Using the linearity properties presented in Proposition 4 and the formula of power function we deduce the desired result from (11). and (2) Let u(t) ∈ AC n [a, b] and n − 1 < α < n. Then the Caputo tempered fractional derivative and the Riemann-Liouville tempered fractional integral have the com-posite properties and Proof. From the definitions of Riemann-Liouville tempered fractional integral and derivative, we have Thanks to the composition formula [33,23,35] we get Inserting the above formula into (16) leads to (12). Again from the definitions of Riemann-Liouville tempered fractional integral and derivative, there exists Furthermore, using the composite properties of Riemann-Liouville fractional integral and derivative [33,23,35] we get Similarly, using the composite properties of Caputo fractional derivative [33,23,35] we can get (14) and (15). Remark 4 For a constant C, Obviously, a D α,λ Well-posedness of the tempered fractional ordinary differential equations In this section, we consider the ODEs with Riemann-Liouville and Caputo tempered fractional derivatives, respectively, i.e., and The Cauchy problems (20) and (21) can be converted to the equivalent Volterra integral equations of the second kind under some proper conditions. Lemma 1 If the function f (t, u(t)) and u(t) belong to L([a, b]), then u(t) is solution of the initial value problem (20) if and only if u(t) is the solution of the Volterra integral equation of the second kind In particular, if 0 < α < 1, then u(t) satisfies the Cauchy problem (20) if and only if u(t) satisfies the following integral equation Proof. For the linear Cauchy problems (20) and (21), the conclusion is directly reached by the Laplace transform given in Appendix B. Now we prove the more general case. Necessity. Performing the integral operator a I α,λ t on both sides of the first equation of (20), we have where we use the composite property (1) given in Proposition 2. Then Eq. (22) is obtained. where we use the fact and the composite property (13). Now we show that the solution of (22) satisfies the initial condition given in Eq. (20). Multiplying e λt and then performing the operator where the formula is utilized. Taking a limit t → a in the above equation, we obtain with the second term in the right hand side being equal to zero; and for its first term, we have By the similar technique in proving Lemma 1, we obtain the following conclusion for the Cauchy problem (21). 
Lemma 2 If the function f (t, u) is continuous, then u(t) is the solution of the initial value problem (21) if and only if u(t) is the solution of the Volterra integral equation of the second kind In particular, if 0 < α < 1, then u(t) satisfies the Cauchy problem if and only if u(t) satisfies the following integral equation For the global existence and uniform asymptotic stability results of fractional functional differential equations corresponding to (23), one can see [24,2]. In the following, we discuss the existence and uniqueness of the solutions of the nonlinear tempered fractional differential equations based on the equivalent Volterra equations presented in Lemmas 1 and 2. We shall employ the Banach fixed point theorem to for any u ∈ B, being an open set in R. In the following, we always suppose that f (t, u) satisfies the Lipschitz type condition with respect to the second variable where C Lip is constant. We shall use the following space . Proof. The proof of this theorem is similar to the references [32,14,20,42]. First we prove the existence of a unique solution u(t) ∈ L( [a, b]). In accordance with Lemma 1, it is sufficient to prove the existence of a unique solution u(t) ∈ L([a, b]) to the nonlinear Volterra integral equation (22). We rewrite the integral equation (22) in the form of operator and We first prove that P is a contraction operator in the subinterval [a, holds.To apply the Banach fixed point theorem in the complete metric space L([a, t 1 ]), we have to prove the following facts: (ii) For all u 1 , u 2 ∈ L([a, t 1 ]) the following inequality holds In fact, since f (t, u(t)) ∈ L([a, t 1 ]) and Lemma 6 in Appendix A, the integral in the right-hand side of (31) belongs to L([a, t 1 ]); obviously u 0 (t) ∈ L([a, t 1 ]), hence ). Now, we prove the estimate (34). From Lemma 6 in Appendix A, we obtain Since the function u(t) is uniquely defined on the interval [a, t 1 ], the last integral can be considered as the known function. Then the above equation can be rewritten as where is the known function. With the same contraction factor W 1 , we can prove that there exists a unique solution u * (t) ∈ L(t 1 , t 2 ) to Eq. (22) with the choice of certain u m on each [a, taking the limit of (40) as m → ∞, gives and hence a D α,λ t u(t) ∈ L([a, b]). This completes the proof of the theorem. By almost the same idea, we can prove the following existence and uniqueness result for the Cauchy type problem (21). Theorem 2 If n − 1 < α < n, n ∈ N + , λ ≥ 0, then there exists a unique solution u(t) to the Cauchy type problem (21) in the space AC n [a, b]. Stability To prove the stability of the solutions of the Cauchy type problems (20) and (21), we need the following generalized Gronwall's Lemmas. Lemma 3 ([9]) Let x, y, φ be real continuous functions on interval , and Then holds If, in addition, y(·) is a nondecreasing function defined on [a, b], we have Then there exists a constant C = C(α) such that for all t ∈ [a, b]. (20) with different initial conditions. Then Theorem 3 Under the assumptions given in Theorem 1, let u(t) and v(t) be the solutions of the Cauchy type problem where ϕ(t) = C(α) Similarly, under the assumptions given in Theorem 2, for the problem (21), there exists where Proof. Suppose that u(t) and v(t) are any two solutions of the Cauchy type problem (20) with different initial conditions. 
From the equivalent integral formulation (22), we have For 0 < α < 1, using the generalized Gronwall's inequality with weak singular kernel given in Lemma 4, we get which implies For n − 1 < α < n, n ≥ 2, taking where C(α) = max With the similar method, we can prove the stability results for the problem (21). Lemma 5 Assume that B is an open set in R m and f ( Suppose that f (t, u 1 , u 2 , ..., u m ) is a continuous function satisfying the Lipschitz type condition Proof. Similar to Theorems 1 and 2. We begin our proof from the integral equations given in Lemma 5. We only prove the Cauchy type problem (53). Let t 1 belong to (a, b) such that the inequality then u(t) is the solution of the initial value problem (54) if and only if u(t) is the solution of the Volterra integral equation of the second kind holds.The operator corresponding to (55) takes the form From the Lipschitz condition (56) it directly follows that Furthermore, using the composition formula (12), we have where n j is the smallest integer larger than or equal to α j . From the given initial conditions, it can be checked that a D for any t ∈ [a, b]. Taking t = t 1 in above the formula and applying (A.2), we get It follows that there exists a unique solution u * (t) to Eq. (55) in L([a, t 1 ]). This solution is obtained as a limit of the convergent sequence (T j u * 0 )(t) = u j (t), and holds lim i.e., lim With the same fashion of proving Theorem 1, we can show that there exists an unique solution u(t) ∈ L α,λ ([a, b]) to Eq. (55). In addition, Numerical algorithm for the tempered fractional ODE The well-posedness of the tempered fractional ODE has been carefully discussed in the above sections. Usually, it is hard to find the analytical solutions of the tempered fractional ODE, especially for the nonlinear case. Efficient numerical algorithm naturally becomes an urgent topic for this type of equation. Now, we extend the so-called Jacobi-predictor-corrector algorithm [44] to the the tempered fractional ODE; and its striking benefits are still kept, including having any desired convergence orders and the linearly increasing computational cost with the time t. Now we turn to describe the computational scheme for Eq. (63). For this purpose, we define a grid in the interval [a, b] with M + 1 equidistant nodes t j , given by where τ = (b − a)/M is the stepsize. Suppose that we have got the numerical values of u(t) at t 0 , t 1 , · · · , t n , which are denoted as u 0 , u 1 , · · · , u n ; now we are going to compute the value of u(t) at t n+1 , i.e., u n+1 . From Eq. (64), we have where {f P n+1, j } N j=0 in (68) means that all the values off n+1 at the Jacobi-Gauss-Lobatto nodes are got by using the interpolations based on the values of { f (t i , u i )} n i=0 ; whereas {f n+1, j } N−1 j=0 in (67) are obtained by using the interpolations based on the values of { f (t j , u j )} n j=0 and f (t n+1 , u P n+1 ). From the computational scheme (67)-(68), it can be clearly seen that the computational cost linearly increase with n (or time t). With the similar methods given in [44], we can get the following estimates for the Volterra integral system (62). Theorem 5 If g(t, u(t)) is Lipschitz continuous with respect to the second variable, and has the form where w k (t), 1 ≤ N I ≤ µ 1 ≤ ... 
≤ µ m are sufficiently smooth and m can be +∞, δ is a constant, then there exists a constant C being independent of n, τ, N, such that This theorem shows that the scheme (67)-(68) potentially have any desired convergence order by adjusting the number of interpolation points N I . Numerical test In this subsection, we consider two simple numerical examples to show the numerical errors and convergence orders of the Jacobi-predictor-corrector method. The two examples are Caputo tempered ODEs; solving the Riemann-Liouville tempered ODEs can be done in the same way, so is omitted here. Example 2 In this example, we examine the following initial value problem The initial values are given as e λt u(t)| t=0 = 1 and d dt (e λt u(t)) t=0 = 0 for α ∈ (1, 2), and as e λt u(t)| t=0 = 1 for α ∈ (0, 1). Using the Laplace transform presented in Appendix B, we have Then Employing the Laplace transform involving the derivative of the Mittag-Leffler function [33] L we can check that the exact solution of this initial value problem is Here the generalized Mittag-Leffler function E α,β (·) is given by [33] In this example, the solution u(t) does not have a bounded first (second) derivative at the initial time t = 0 as 0 < α < 1 (1 < α < 2). To improve the convergence order, we employ the technique given in our previous work [44]. We separately solve the equation in subintervals [0, T 0 ] and [T 0 , T ] of the interval [0, T ]. More specifically, we modify the formula (63) as Here, we suppose that the smoothness of g is weaker on the subinterval [0, T 0 ] and sufficiently smooth on [T 0 , T ]. For the integral on the subinterval [0, T 0 ], the Gauss-Lobatto quadrature with the weight function w(s) = 1 is used; and for the one on [T 0 , T ], we compute it as (64), i.e., [31] and the analysis above, we can see that ifÑ is a big number then the accuracy of the total error can still be remained. The numerical results are reported in Tables 4 and 5. And it can be seen that the desired numerical accuracy is obtained. Concluding remarks Currently, it is widely recognized that fractional calculus is a powerful tool in describing anomalous diffusion. Because of the bounded physical space and the finite lifetime of living particles, in the CTRW model, sometimes it is necessary to truncate (temper) the measures of jump length and waiting time with divergent second and first moments, respectively. Exponential tempering offers technique advantages ) and/or the space tempered fractional derivative (tempered jump length). This paper focus on discussing the properties of the time tempered fractional derivatives as well as the well-posedness and numerical algorithm for the time tempered evolution equation, i.e., the tempered fractional ODEs. The proposed so-called Jacobi-predictor-corrector algorithm shows its powerfulness/advantages in solving the tempered fractional ODEs, including the one of easily getting any desired convergence orders by simply changing the parameter of the number of the interpolating points and the other one of linearly increasing computational cost with time t rather than quadratically increasing more often happened for numerically solving fractional evolution equation. Proof. If u(t) has continuous derivative for t ≥ a, then using integration by parts to (1), there exists and if the function has n continuous derivatives,then integrating by parts, we get Proof. By simple calculation, we have L([a, b]) and σ 1 , σ 2 > 0, λ ∈ R. Then for all t ≥ a, Proposition 3 (Semigroup properties) Let u(t) ∈ Proof. 
The linearity of fractional integral and derivatives follows directly from the corresponding definitions. We omit the details here. Proof. After simple argument, we have From the above analysis, we get the desired result. Appendix B: Laplace transforms of the tempered fractional calculus In this subsection, we discuss the Laplace transforms of the tempered fractional calculus. Define the Laplace transform of a function u(t) and its inverse as We start with the Laplace transform of the Riemann-Liouville tempered fractional integral of order σ. Proposition 6 The Laplace transform of the Riemann-Liouville tempered fractional integral is given by Proof. First, we rewrite the Riemann-Liouville tempered fractional integral as the form of convolution In view of the Laplace transform of the convolution [37] L{u(t) * v(t); s} = u(s) v(s), we have Recalling the Laplace transform L{e −λt t σ−1 ; s} = Γ(σ)(λ+s) −σ , we have the Laplace transform of Riemann-Liouville tempered fractional integral L{ 0 I σ,λ t u(t); s} = (λ + s) −σ u(s). (B.7) Next, we turn to consider the Laplace transform of tempered fractional derivative. Proposition 7 The Laplace transform of the Riemann-Liouville tempered fractional derivative is given by The Laplace transform of the Caputo tempered fractional derivative is given by . Combing the first translation Theorem [37] L{e −λt u(t); s} = u(λ + s), λ ∈ R, Re(s) > λ, (B.10) we have L e −λt d n dt n v(t); s = (s + λ) n v(s + λ) − From the Laplace transform of tempered fractional derivatives, we observe that different initial value conditions are needed for fractional differential equations with different fractional derivatives. From (B.9), it can be noted that the Laplace transform for the Caputo tempered fractional derivative involves the values of the function u(t) and its derivatives at the lower terminal t = 0, which are easily specified in physical. So the Caputo type fractional derivatives are more popularly used in time direction [33]. The Laplace transform for the variants of the Riemann-Liouville tempered fractional derivatives are given as
Optimal Power Planning of Wind Turbines in a Wind Farm Wind energy is attractive in the presence of climate concerns and has the potential to dramatically reduce the dependency on nonrenewable energy resources. With the increase in wind farms there is a need to improve the efficiency in power allocation and power generation among wind turbines. In this paper, a hierarchical algorithm including a cooperative level and an individual level is developed for power coordination and planning in a wind farm. In the cooperative level, a constrained quadratic programming problem is formulated and solved to allocate the power to wind turbines considering the aerodynamic effects of wake interaction and the power generation capabilities of wind turbines. In the individual level, a method based on the local pursuit strategy is studied to connect the cooperative level power allocation and the individual level power generation using a virtual leader-follower scheme. The stability of individual wind turbine power generation is analyzed. Simulations are used to show the advantages of the method. Introduction Wind energy is considered to be an important player in the renewable energy market, enabling a reduction in carbon pollution from conventional energy with a rapid growth at the rate of around 27% per year between 2005-2009 [1] [2]. The US government has plans to produce 20% of the country's energy via wind by 2030 [3]. Although promising in its potential, wind farms arranged in arrays suffer in power output due to aerodynamic interaction between the wind turbines. This requires wind farm control schemes that can improve the power production output and handle the aerodynamic interactions better [4]. It is shown that around 10% to 40% of wind energy output and profit is lost as a result of the interaction among wind turbines, particularly due to wake interactions [5] [6]. Wind energy control research is normally focused on either individual wind turbine control or wind farm cooperative control. In individual wind turbine controls, work has been done on using linear/nonlinear feedback control techniques to track the power to be produced. An example of this can be seen in [7] where the researchers proposed an adaptive control strategy using neural network to control rotor speed and blade pitch angle. Another popular direction is the study of wind availability and the stability analysis of the system while switching between different operation regimes [8]. The approach of maximizing the power of an individual turbine renders suboptimal in terms of wind farm power production due to coupled aerodynamic effects and mechanical loadings [9]. This beckons a scheme of coordination of individual wind turbine actions to increase the overall efficiency of the plant and reduce fatigue and loads on wind turbines [10]. With an increased responsibility in power generation, wind farms have other tasks to perform such as regulation and stabilization of power plants and may not be required to run at a full capacity at all times [9]. Many researchers have tackled cooperative wind farm control problems. The two broad categories of approaches include (i) maximizing the total power output, and (ii) power optimization schemes to distribute the power demand in terms of load reduction, e. g. in [10]. In [5], wind farm output is maximized by finding optimal combination of yaw angles and induction factor using a steepest decent method. 
In [11], power demand is met by dynamically coordinating and varying wind turbine power such that the addition of their power outputs matches the total requirement. Although there have been many works in recent years focusing on the cooperative control of wind turbines, there is still plenty of room for improvement in this field. For example, most of the recent work focuses on the use of linearized wind turbine models [12] for optimization purposes; this can lead to errors as some wind turbine operating modes in linearized models do not match well with real nonlinear phenomena. Furthermore, the work in coordinated wind farm control often ignores structural deflection constraints of individual wind turbines [13]. In some wind farm cooperative control work as seen in [14], the algorithm has a high computational cost when applied to larger wind farms and is not scalable with increase in the number of wind turbines. The optimal cooperative power planning in this work is divided into a hierarchical structure which consists of two levels, cooperative and individual. The cooperative level algorithm handles the objective of optimally allocating power to the wind turbines while considering the coupled constraint of wake interaction between wind turbines, as well as uncoupled constraints of power production limits of individual wind turbines based on wind turbine properties and available wind speeds. The individual level algorithm is to minimize the differences between the actual power generated and the allocated power demand while considering individual wind turbine constraints such as thrust and torque on the rotor, the rotor speed, and the tower deflection. A leader follower arrangement is used in connecting the cooperative and individual level algorithms. The recently studied cooperative control strategy [15] motivated by the local pursuit phenomenon seen in foraging ants [16] will be further enhanced to govern the relationship between the power generation in virtual leader power and individual wind turbine. The following aspects of algorithm are studied: (1) the asymptotic stability of power allocation formulation, 2) the equilibrium point and the stability of wind turbine rotor speed dynamics, 3) the ability to handle nonlinear wind turbine dynamics, and 5) the scalability of algorithm with increase in wind farm size. The paper presented is divided into the following parts. Section 2 introduces the adopted individual wind turbine model and the wake interaction model. In Section 3 the cooperative level and individual level optimization problems are defined. Section 4 describes the local pursuit based individual wind turbine optimal power control, and Section 5 discusses the coordinated wind farm power allocation algorithm. Simulation results are shown in Section 6. Lastly conclusion is drawn in Section 7. Individual Wind Turbine Model The nonlinear wind turbine model adopted from [7] consists of the blade pitch actuator dynamics and the rotor dynamics as, Here the state variables [ ] x ω β = T r are the rotor angular velocity and the collective blade pitch angle, and the control variable r β is the blade pitch angle reference input. ( ) , P C λ β is the rotor power coefficient. , and T β are the air density, rotor radius, average wind speed, the equivalent shaft inertia, gear box ratio, and time constant of the pitch servo system, respectively. r J and g J are the inertia of the rotor and generator. 
The data and coefficients used in this model are selected from a 3 blade, horizontal axis, 5 MW capacity offshore wind turbine [17]. The constants a and b are the parameters in the linearized generator torque model g g T a b ω = + [7], in which the generator speed is It is worth noting that the input matrix in Eq. (1) is non-square. The outputs of the model ( ) h x include the power extracted from the wind P , the torque experienced by the low speed shaft T , and the thrust experienced by the rotor F as follows In the above equation, To make the extraction of pitch angle easy from known P C and λ , an equation is adopted from [18] as ( ) ( ) . The P C value calculated using Eqs. (4) and (5) matches well with the values obtained from the FAST and Aero Dyn packages of NREL [17]. The Jensen wake model [19] is used to calculate the downstream velocity between wind turbines in the farm, which permits fast calculations and is commonly used in commercial wake calculation programs. The wind speed at a distance x is given as follows. ( ) ( ) Here 0 V and k are the incoming wind speed and the entrainment constant, and R is rotor radius. The thrust force acting on the rotor plane of the wind turbine causes the oscillation of the tower, and the tower deflection in the fore-aft direction is depicted in the second order system [20] as, mz dz cz F + + = ɺɺ ɺ (7) in which z is the displacement of tower top along the direction of the wind, the thrust force F is assumed to be concentrated in the center of the rotor hub. In Eq. (7), parameter m is the modal mass, d is the modal damping, and c is the modal stiffness of the tower. The displacement of the tower top is constrained by max z z ≤ . Power Generation Optimization in Individual Wind Turbine The performance index to be optimized in each wind turbine is, We assume that there are w N wind turbines in the farm, and , 1, 2,3 (3) and the tower deflection limitations are regarded as the inequality constraints. Power Allocation in Wind Farms In the wind farm cooperative level power allocation, the wind speed available to upwind turbines and the distances between the upwind and downwind wind turbines are known. At a particular time, the power grid network needs a total of tot P from this farm, and the performance index in the cooperative level is The power allocated to wind turbine i is limited by its power generation capability , ,  , which depends on the ranges of its incoming wind, pitch angle, and tip speed ratio. Power Output Regulation The power output of each wind turbine is proposed to be driven by a modified local pursuit strategy [15], in which VL P is the power output of leader that can be a virtual wind turbine, and the value can be the average power generated by w N wind turbines in the farm as The constant term i ∆ in a planning horizon is the power output bias of wind turbine i from VL P . There are different approaches to drive the power of each wind turbine towards its allocated value. Following Eq. (10) is just one approach. In this approach, the actual wind turbine power will follow a first order trajectory without an overshoot. Additionally, the speed control parameter (SCP) i v determines how fast the power output i P will converge to its desired value VL i P + ∆ . Let us define the output power tracking error of wind turbine i to be , 1, , Lemma 1: As t → ∞ , the power output of wind turbine i will asymptotically converge to its allocated value if 0 i v > . 
Also under this guidance law, the power output is ( ) Proof: It is proven in [15] that the error signal will asymptotically converge to zero as t → ∞ if ( ) 0 Thus the proof of this part of Lemma 1 is omitted. According to Eq. (10) and Eq. (11) (13) which leads to Eq. (12). Equilibrium Point and Stability of Rotor Lemma 2: If the power generation follows Eq. (10), the equilibrium points of the rotor speed , in which the coefficients are defined in the following proof. Here a negative rotor speed represents the case that the rotor will spin in the opposite direction if allowed. Proof: The rotor dynamics from Eq. (1) can be written as, , the rotor dynamics can be rewritten as Therefore, the equilibrium point , As t → ∞ , the steady state equilibrium point of the rotor speed is derived as Eq. (17). Remark 1: In reality, a wind turbine may only have one equilibrium point according to its wind blade pitch angle installation. Lemma 3: If the power generation for each wind turbine follows the modified local pursuit equation (Eq. 10), the equilibrium point of the rotor speed in Eq. (17) is asymptotically stable if the perturbation from its equilibrium point , Proof: Let us assume the rotor speed is perturbed to be , , , is the error around the equilibrium point. Then Eq. (17) can be rewritten as which can be simplified as Remove the equilibrium part in Eq. (19), the error dynamics is derived to be For any 0 If , , coefficients in both terms of the error dynamics are negative, which means the rotor speed error will decay to zero as t → ∞ , and the error is bounded by its initial error. Therefore, according to [21], the rotor speed equilibrium point is asymptotically stable. There is a singular value at , 0 Note that the equilibrium condition is applied in deriving Eq. (23). Therefore, if 10), the rotor speed in the individual wind turbine will reach its equilibrium point depending on its initial condition, which is asymptotically stable. Remark 4: Based on Eq. (13), ( ) ( ) ( ) This equation can provide information on how fast roughly the power generated by wind turbine i will approach the allocated power. Remark 5: It is worth noting the asymptotically stability of the equilibrium rotor speed assumes that the model is perfectly known and there is no sensor or actuator noises or uncertainties. When the noise and/or uncertainties cannot be neglected or the wind turbine is not perfectly modeled, the planning algorithm proposed here can be put in a receding horizon framework and the power generation in individual wind turbine will be replanned at the beginning of each planning horizon. Dynamic Model Propagation To solve the optimization problem for individual wind turbine listed in Section 3.1, we need to know the state and control variables at each instance. Since the input matrix of model Eq. (1) is non-square, instead of finding those variables through fast collocation methods such as those used in [15], we will directly propagate the dynamic model here. Since our goal is to plan individual level wind turbine's power regulation optimization, assumption is made that the relation between rotor speed, collective blade pitch angle, tip speed ratio, coefficient of power and allocated wind turbine power are perfectly modeled. The detailed steps involved are listed in the following algorithm. Step 1 Based on the allocated power i P for the th i wind turbine using Eq. (10), the rotor power coefficient ( , ) P i i C λ β can be calculated using Eq. (2). 
Step 2 The result from step 1 can be used to propagate the angular speed dynamics , i r ω using the first equation in Eq. (1). Step 3 The tip speed ratio is then calculated by Step 4 The tip speed ratio calculated in the previous step along with the known ( , ) P i i C λ β can help us reversely solve for the pitch angle i β using Eqs. (4) and (5) Step 5 The control variable (i. e. the reference pitch angle , Step 6 The output variables, i. e. the thrust and torque on the rotor, can be calculated using Eq. (2). The tower deflection ( i z ) is propagated using Eq. (7) based on the calculated thrust i F on the rotor. Individual Wind Turbine Power Generation Optimization The optimization of the power generation in each wind turbine is shown in Algorithm 2 listed below. The "fmincon" solver in MATLAB is applied here. As proven in Lemma 3, the closed-loop system is asymptotically stable. Table 2. Algorithm 2 -Power Generation Optimization. Step 1 Using the known virtual leader power VL P and the allocated power bias i ∆ , guess the optimizable variable (i. e. the speed control parameter i v ) at each time node. Step 2 The power i P to be generated is propagated using the guessed i v . Step 3 Algorithm 1 is followed and the results are used in evaluating the performance index as defined in Eq. (8) and the equality and inequality constraints as described in Section 3.1. Step 4 If the performance index does not converge to the minimum or a feasible solution, go back to Step 1. Else, the optimization is accomplished. Wind turbines in a wind farm can be optimized using Algorithm 2 in a decentralized manner. Power Generation Allocation in Cooperative Level The performance index in the cooperative level is given in Eq. (9). Expanding this performance index, we get . Expanding the first term in (25), Therefore, the performance index can be written as the form of a quadratic programming as where the optimizable parameters powers to be allocated. The matrices H and f are defined as The constraint in the optimal power allocation is [ ] P , P min max . To know the range of the available power for each wind turbine, the range of possible wind speed needs to be calculated. The algorithm to calculate the lower and upper bounds of the available power [ ] P , P min max is listed next as Algorithm 3. Table 3. Algorithm 3 -Range of Available Power for each Wind Turbine. Step 1 Receive the total wind farm power demand ( tot P ) Step 2 Follow Algorithm 3 to find min The MATLAB quadratic programming solver "quadprog" is used to solve the formulated power allocation problem (Eqs. 27 and 28 and [ ] P P , P ∈ min max ). The algorithm used to optimally allocate the power to each wind turbine is summarized in the following table. Step 1 Receive the total wind farm power demand ( tot P ) Step 2 Follow Algorithm 3 to find min P and max P Step 3 Solve the formulated quadratic programming problem (Eqs. 27 and 28 and [ ] P P , P ∈ min max ) Step 5 Compute the virtual leader power ( / VL tot w P P N = and 0 VL P = ɺ ) Step 6 Send the allocated power ( i VL i P P = + ∆ ), virtual leader power ( VL P ), and bias information ( i ∆ ) to Algorithm 2 for lower level optimization. This step is decentralized. Coordinating Power Allocation and Planning Algorithm Algorithms 1 through 4 are put together in Algorithm 5 as the overall power allocation and optimal power planning algorithm for a wind farm. Table 5. Algorithm 5 -Summary of the Algorithm. 
Coordinating Power Allocation and Planning Algorithm

Algorithms 1 through 4 are put together in Algorithm 5 as the overall power allocation and optimal power planning algorithm for a wind farm.

Table 5. Algorithm 5 – Summary of the Algorithm.
Step 1: The grid sends a total desired power output P_tot at the beginning of each planning horizon.
Step 2: Algorithm 4 (including Algorithm 3) is used to find P_VL, P_i and Δ_i at the cooperative level, which are sent to the individual wind turbines (centralized).
Step 3: Algorithm 2 (including Algorithm 1) is used to find the optimized v_i and the optimal reference pitch angle β_{r,i} (decentralized).
Step 4: The overall operation and power production information is sent back to the central computer. Each individual wind turbine executes the β_{r,i} command.

Simulation Settings

The simulation is carried out on a laptop running an Intel Core i7-2620M processor at 2.7 GHz with 6 GB of RAM. The constrained nonlinear programming problem in Algorithm 2 is solved using the MATLAB "fmincon" function, while the linearly constrained quadratic programming problem in Algorithm 4 is solved with the "quadprog" function. The properties of the wind turbine are adopted from [13] as shown in Table 6 (including a gearbox ratio n_gb of 97 and the generator inertia J_g). It is worth mentioning that although all the wind turbines in the simulated wind farm are assumed to be identical, non-homogeneous dynamic models can be used in the proposed cooperative control algorithm. The maximum tower deflection constraint z_max is kept at 5% of the tower height. As one case, the weights in the performance index of Eq. (8) are set to W_1 = 1, W_2 = 0, and W_3 = 0. All quantities in the optimization are nondimensionalized to help the optimization converge. For brevity, only the plots of the state and control variables in Case A are shown, since all the other cases exhibit similar state and control variable behavior.

Individual Wind Turbine Optimization

Three scenarios are simulated to test the robustness of Algorithm 2, i.e., the power planning optimization of an individual wind turbine: A) varying wind speed, B) varying allocated power, and C) varying initial power condition. During the planning horizon, the wind speed is presumed to remain constant.

Varying Wind Speeds (Case A)

Table 7 summarizes the optimization results for varying wind speeds with a fixed set of allocated and initial wind turbine powers, as well as an invariant virtual leader power. The obtained steady-state values for rotor speed, pitch angle, rotor torque, and rotor thrust are in agreement with those in similar scenarios for a 5 MW NREL wind turbine [17]. The minor differences in those performances are due to the fact that the generator torque values (i.e., the values of a and b) chosen for the simulation differ from the NREL data. Our strategy is to tune the generator torque to keep the tip speed ratio between 7 and 8, near the optimal tip speed ratio of 7.55 [17]. Optimal solutions are attained in reasonable time, as shown in Table 7, ranging between 1.8 and 2.8 seconds. Figure 1 shows the time history of the wind turbine state and output variables for the five varying wind speed cases. In Figure 1(a) and Figure 1(b), the torque and thrust force remain within their limits. The rotor speed (Figure 1(c)) is stabilized at its equilibrium point based on the power output and blade pitch angle. In all cases, the power generation reaches its allocated value of 1 MW (Figure 1(d)). The pitch angle (Figure 1(e)) follows the commanded reference pitch angle (Figure 1(f)) well.
It is worth noting that all five cases have different initial pitch angles, because there are only two independent variables among the initial power, initial blade pitch angle, and initial rotor speed settings.

Varying Allocated Power (Case B)

For the cases in Table 8, the allocated power varies while the wind speed is kept constant. As expected, with an increase in power demand the pitch angle decreases. The maximum tower deflection, force, and thrust experienced by the turbine increase in a general trend. The CPU time is between 1.79 and 2.82 seconds. The rotor speed is maintained at its equilibrium point according to the power output, wind speed, and blade pitch angle.

Varying Initial Power (Case C)

For all five Case C runs, the initial power condition is varied while the wind speed and the allocated power are kept at the rated values. For the same commanded power at the same (rated) wind speed, the steady-state values for all five cases reach the same value, as expected. The maximum tower deflection differs because of the different initial power outputs, which affect the transient stage of the power generation; however, it remains within the limit.

Coordinated Wind Turbine Optimization

The overall cooperative optimal power planning algorithm (Algorithm 5) is tested on three offshore wind farms of different sizes.

A 2 by 2 Wind Farm Array

In this case, an array of 4 wind turbines is selected (Figure 2). The distance between rows of wind turbines is 504 m. A total power demand of 10 MW is requested from the farm. A rated wind speed of 11.4 m/s is available at the first row of wind turbines. Following Algorithm 5 and the algorithms within it, the downwind wind speed at the second row is 10.13 m/s and the CPU time used in allocating the power to the wind turbines is 0.33 s. The individual-level algorithm then minimizes the performance index in Eq. (8) and determines the pitch angle references for the individual wind turbines.

A 4 by 4 Wind Farm Array

In the second case a larger array is used (Figure 3). Again, a rated wind speed of 11.4 m/s is available at the first row of wind turbines. For a total power demand of 30 MW, the cooperative-level algorithm rapidly allocates power to each wind turbine. The calculated velocities at the 2nd, 3rd and 4th rows are 9.83 m/s, 8.74 m/s and 7.54 m/s. The CPU time of the cooperative power allocation is 0.35 s.

A 5 by 5 Wind Farm Array

For similar upwind conditions, in this case with 25 wind turbines (Figure 4), the total power demand from the wind farm is 45 MW. The wind speeds calculated by the cooperative-level algorithm at downwind rows 2, 3, 4 and 5 are 9.84 m/s, 8.75 m/s, 7.55 m/s, and 5.91 m/s, respectively. The CPU time of the cooperative power allocation is 0.36 s.

The table below demonstrates the scalability of the cooperative power planning algorithm proposed in this paper. As the farm size increases, the computational cost remains at a similar level. The CPU time for the cooperative level increases only slightly, from 0.33 s to 0.36 s, and the CPU time increase for the individual level is very small. Power allocation and planning optimization in a typical wind farm is performed at no more than 0.1 Hz [7]; the CPU times achieved here therefore meet that need. Furthermore, with a more efficient solver implemented in C, the CPU time is expected to be much lower.
Conclusion

In this paper, a new hierarchical method for cooperative control of wind turbines in a wind farm is presented. The power allocation among wind turbines is obtained by solving a formulated quadratic constrained programming problem that takes into account coupled and uncoupled constraints. The local pursuit strategy is customized for each wind turbine to optimally track the allocated power command while taking into account realistic wind turbine constraints. Some benefits of the algorithms are that the wind turbine rotor dynamics under the planned power generation strategy are guaranteed to be asymptotically stable, the computational cost is low, and the algorithm is scalable in terms of CPU time.
A stepwise drug treatment algorithm to obtain complete remission in depression: a Geneva study

Questions under study/principles: We describe the proportion of severely depressed outpatients reaching complete remission at the different stages of a drug treatment algorithm. We compare several treatment options for SSRI (selective serotonin reuptake inhibitor) non-responders and test the feasibility of the algorithm in clinical conditions.
Methods: Patients with severe depressive disorders (ICD-10; MADRS ≥25) admitted to an academic outpatient clinic were enrolled in this algorithm-guided sequential treatment protocol (starting with an SSRI and ending with a tricyclic, lithium and triiodothyronine combination). The general principle of the algorithm was to boost the drug therapy in the event of non-response.
Results: 135 patients entered the study and 131 were eligible for analysis. From this group, 86 patients dropped out (65.6%), 40 reached complete remission (30.5%) and 5 patients did not reach remission at all (3.8%). In the 117 patients to whom a last observation carried forward approach was applied, the median improvement of the MADRS score was 48.0% (range −20.7% to 100%), with 48.7% of patients considered responders, 23.1% partial responders and 28.2% non-responders. Median retention time was 8 weeks (range 2–34).
Conclusions: This algorithm-guided antidepressant treatment was acceptable for clinicians and resulted in an elevated final response rate among study completers. However, the dropout rate was high, mainly due to treatment interruption or non-observance.

Treatment protocols are now widely used in many medical specialities such as obstetrics, paediatrics and cardiology. In psychiatry as well, a great deal of attention has been given to the development of practice guidelines and medication algorithms for the management of mental health disorders. Depression is a major public health problem for which effective pharmacological treatment is now widely available in outpatient and inpatient settings [1]. The aim of treatment is symptomatic remission and functional recovery [2], with maintenance treatment to prevent relapse [1]. Symptomatic improvement (i.e. response, defined as a ≥50% reduction of the initial score on a depression scale) is distinguished from remission (i.e. minimal or no symptoms) because remission, in contrast to a response with residual symptoms, is associated with better functioning and a better prognosis [3, 4]. Failure to achieve remission is frequently due to inadequate dosage, too short a duration of treatment or insufficient use of the available therapeutic options in cases of partial remission [2].

Because treatment success is never guaranteed with any antidepressant, clinicians often use a sequence of treatment steps (either monotherapies or combinations) to increase the likelihood of remission. Recent efforts have aimed to define algorithms to operationalise these different steps [5–7].
In spite of a general consensus among experts concerning the pharmacological strategies to be used, a certain number of questions remain with little or no answer. Few data have been published on the effects of increased dosage in cases of non-response or partial response [8–10]. Moreover, with the selective serotonin reuptake inhibitors (SSRIs) now being the first-line antidepressants, the best strategy for obtaining remission in cases where partial or complete remission is not achieved with such compounds has not been clearly defined. Suggested methods have included a further increase in SSRI dosage, administration of another antidepressant with a broader spectrum of action (a tricyclic or a serotonin noradrenaline reuptake inhibitor [SNRI]) or the addition of lithium or triiodothyronine [11–13].

On the basis of these considerations we developed an algorithm-guided treatment plan aimed at obtaining complete remission. The general principle of this systematic treatment algorithm is to progressively reinforce drug therapy where the clinical response in outpatients with depressive episodes is wholly or partially lacking. The strategy selected can be called "semi-naturalistic", since it aims at combining a strict treatment algorithm with the daily complexity of clinical reality [8].

The main objective of the study was to describe the proportion of patients reaching complete remission at the different steps of a treatment algorithm, starting with the usual first-intention treatment (daily defined dose of an SSRI) and ending with a tricyclic antidepressant and a double potentiation (lithium and triiodothyronine) for the most resistant patients. The second objective was to compare, at an intermediate level of the algorithm, several available options for SSRI non-responders. The third objective was to test the global feasibility of an algorithm of this kind in clinical conditions.

This paper reports the final findings of this semi-naturalistic study, which we have named the Geneva Outpatient Depression Study (GODS).

Patient evaluation and selection

Over a 4-year period (1999–2002), all male and female patients admitted to our outpatient clinic in a university department of psychiatry (an outpatient clinic occupying a secondary rather than tertiary position in the local health system) with probable clinical diagnoses of depressive episodes were screened for inclusion and exclusion criteria. The diagnosis of the depressive episode and the comorbidities were screened by use of the M.I.N.I. (Mini International Neuropsychiatric Interview), ICD-10 version [14]. The potential presence of a borderline personality disorder was investigated by means of a checklist based on the corresponding DSM-IV-R diagnostic criteria. Severity of depression was assessed by trained senior residents or clinical research nurses using the Montgomery and Asberg Depression Rating Scale, MADRS [15].
To be included in the study, outpatients had to meet the following requirements: age between 18 and 65 years; a moderate or severe depressive episode without psychotic characteristics as per the ICD-10 [16] (F31: depressive episode, bipolar affective disorder; F32: depressive episode; F33: depressive episode, recurrent depressive disorder); and a minimum score of 25 on the MADRS scale [15]. For women of child-bearing age, information was provided on the need for contraception. The exclusion criteria included the presence of one of the following diagnoses or criteria: (1) an organic illness (in particular cardiovascular, renal, hepatic or cerebral) contraindicating the use of the antidepressants or lithium; (5) borderline personality disorder; (6) dependence on alcohol or other substances, as per ICD-10, during the preceding year; (7) hypersensitivity to one of the antidepressants used or to lithium; (8) failure of a previous treatment at minimum dosage over at least a 2-week period with one of the antidepressants used in the study (paroxetine 20 mg/day; venlafaxine 75 mg/day; clomipramine 150 mg/day); (9) MAOI or fluoxetine treatment during the previous two weeks; (10) mood-stabilising or antipsychotic treatment. If any antidepressants had been taken previously, the clinician evaluated the time needed after discontinuing this treatment before inclusion in the study.

The study was in conformity with the recommendations of the Declaration of Helsinki and received the approval of the Ethics Committee of the Geneva Department of Psychiatry. Patients gave their written informed consent.

The GODS treatment algorithm

The primary feature of the GODS treatment algorithm is a stepwise medication change based on the results of clinical evaluation with the MADRS at 2- or 4-week intervals, according to the procedures for advancing from one step to the next (see below). The GODS consisted of up to 7 sequential treatment steps (step 1 to step 7) (figure 1).

The GODS algorithm defines no response to treatment as a reduction by 25% or less of the initial MADRS score, partial response 1 as a reduction by 26% to 40%, partial response 2 as a reduction by 41% to 50%, response as a reduction by 50% or more, and complete remission as a MADRS score of 8 or less.

The procedures specified that a move to the next step was warranted if a 25% decrease in the initial MADRS score (partial response 1) was not observed. Patients were assessed for possible progression to the next step every two weeks for steps 1–4 and every four weeks for steps 5–7. Once the 25% reduction was obtained, the goal shifted to a 40% reduction (partial response 2). If this goal was not achieved at the next patient evaluation, the treatment was stepped up. If it was achieved, the treatment remained unchanged and the goal shifted to a 50% reduction (response).

This approach was based on our clinical experience, supported by research [17, 18] showing a close correlation between the results obtained after one or two weeks and those obtained after 4 weeks.

Once the response was obtained (a reduction of 50% or more of the initial score on the MADRS), the goal was to reach complete remission (MADRS score of 8 or less). With this in view, the rules were modified to allow a slower progression. Patients remained on the same treatment if they continued to improve (i.e. a one-point mean decrease of the MADRS score between two consecutive visits).
However, if there was a clear worsening (i.e. initial MADRS score/2 + 5 points), the patient moved to the next step.

Step 1 was based on the minimum effective dose of an SSRI, paroxetine 20 mg in the evening. If the response was inadequate, patients moved on to step 2, which corresponded to a higher dosage of 30 mg/day paroxetine. If complete remission was not obtained with paroxetine after steps 1 and 2, a crossroad (step 3) determined the therapeutic reinforcement by randomised allocation of three treatments: an increased paroxetine dosage of 40 mg/day (step 3A), addition of lithium to paroxetine 30 mg/day treatment (step 3B), or a switch to venlafaxine (extended release form) 75 mg/day and, after two days, 150 mg/day in the evening (step 3C). After one week, the lithium (lithium sulphate slow release) doses were adapted according to plasma levels to target blood lithium levels of 0.6–0.8 mEq/L. If the clinical evolution was unsatisfactory, patients moved on to step 4. For those in step 3A, lithium was added to paroxetine 40 mg/day (step 4A); for those in step 3B, lithium was continued and paroxetine increased to 40 mg/day (step 4B); for those in step 3C, venlafaxine was increased to 225 mg/day and after two days to 300 mg/day (150 mg b.i.d.) (step 4C). Paroxetine 40 mg/day was considered a sufficient dose, as no significant benefit was shown with dose escalation from 20 mg to 40 mg/day [8].

In the absence of clinical improvement, the next step (step 5) was tricyclic antidepressant treatment with clomipramine (slow release form). Progressive titration by 37.5 mg/day increments every two days continued until the final dose of 150 mg/day, taken once in the evening, was reached. After two weeks of treatment, the clomipramine dosage was adjusted according to the results of therapeutic drug monitoring (TDM). The targeted therapeutic windows for clomipramine and desmethylclomipramine were 50–150 and 50–300 ng/ml respectively. If the response to this treatment was unsatisfactory, the following steps allowed for the addition of lithium to clomipramine (step 6) and finally, for patients still resistant to treatment, the addition of triiodothyronine (T3: 37.5 mg/day taken in the morning) to the lithium and clomipramine regimen (step 7).

The comedications allowed were clorazepate, maximum 30 mg/day, for anxiety and zolpidem, maximum 20 mg/day, for insomnia. In addition to the psychological support provided during the fortnightly visits, a number of psychosocial services were offered to reduce dropouts and encourage patient participation in treatment. These included group support sessions for depressive patients (daily then weekly), psychoeducation groups on depressive disorders and their treatment, and regular discussions with the nurse about the importance of medication compliance. When, after inclusion of the first 50 patients, a high rate of non-compliance was observed, nurse phone calls were added to these services during the week to remind patients of appointments and treatment.

TDM was carried out after the first two weeks of therapy with paroxetine, venlafaxine and clomipramine and was repeated two weeks after a change in dosage and after prescription of lithium for adaptation of plasma levels. Thyroid function (TSH and T4L) was assessed as routine screening at inclusion. When lithium was added, further tests were carried out (creatinine, Na+, K+, T4L, TSH and ECG). Before and after the addition of triiodothyronine a further evaluation of thyroid function was performed, including T3L, T4L and TSH.
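As a compact restatement of the progression rules above, the sketch below encodes the response categories and the basic step-up decision in Python. It is illustrative only: it simplifies the boundary percentages to continuous cut-offs and does not model the slower progression rules that apply once a response is reached, nor the 2- and 4-week assessment intervals.

```python
# Illustrative encoding of the GODS response categories and step-up rule.
# Simplifications: continuous cut-offs approximate the 25/26%, 40/41% and 50%
# boundaries, and the post-response rules (one-point improvement, worsening
# threshold) are not modelled.

def classify(baseline_madrs: float, current_madrs: float) -> str:
    reduction = 100.0 * (baseline_madrs - current_madrs) / baseline_madrs
    if current_madrs <= 8:
        return "complete remission"
    if reduction >= 50:
        return "response"
    if reduction > 40:
        return "partial response 2"
    if reduction > 25:
        return "partial response 1"
    return "no response"

def should_step_up(baseline_madrs: float, current_madrs: float, goal_pct: float) -> bool:
    """Step up when the current reduction goal (25%, then 40%, then 50%) is missed."""
    reduction = 100.0 * (baseline_madrs - current_madrs) / baseline_madrs
    return reduction < goal_pct

# Example: baseline MADRS 33, current 27 -> 18% reduction; 25% goal missed -> step up.
print(classify(33, 27), should_step_up(33, 27, goal_pct=25))
```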
Statistical analysis

Descriptive statistics included frequencies and percentages for categorical variables and median and range for continuous variables. Subgroups of patients considered to be dropouts and study completers were compared with the Fisher exact test for categorical variables and the Mann-Whitney U-test for continuous variables. The MADRS decrease at each step of the treatment algorithm was tested using the Wilcoxon signed-ranks test. Statistical analysis was performed with the SPSS package, version 11 (SPSS Inc., Chicago, IL). The significance level was set at 0.05 (two-sided tests).

A total of 135 patients gave informed consent and entered the study. Of these, 4 patients were subsequently excluded because of major protocol violations. The descriptive and efficacy analyses were thus conducted with a sample of 131 patients. Sociodemographic and clinical characteristics of the study population are presented in table 1 (clinical and sociodemographic characteristics of the patients, n = 131). Median numbers of previous depressive episodes and suicide attempts were 1 (range 0–8, n = 118) and 0 (range 0–3, n = 105) respectively. Median duration of the current depressive episode at inclusion was 8 weeks (range 2–52). The comedications allowed (clorazepate and zolpidem) were used by 79 of the 121 patients for whom data were available.

Figure 1 presents an overview of the number of complete remitters, responders, partial responders 1 and 2, non-responders and dropouts during the different steps of the GODS treatment algorithm. The dropouts were divided into two subgroups according to the reasons for non-completion: withdrawals were patients who spontaneously decided to withdraw from the study (mainly through repeated missed appointments) and exclusions were patients excluded by the investigators because of non-compliance (as shown by TDM), adverse effects or other reasons (table 2).

Overall attrition

Overall, 66% (n = 86) of the patient sample dropped out of the study, with 44% (n = 57) excluded by the investigators and 22% (n = 29) withdrawn because of patient decisions to discontinue (table 2). The prevalence of non-compliance, as documented by TDM, is shown in table 2.

Table 2 Distribution of dropouts from the study (n = 131).
                                                               N    %
Total dropouts                                                 86   66
Exclusions (investigator's decision to exclude the patient)    57   44
  For non-compliance documented from TDM                       18   14
  For adverse effects                                          27   21
  For other reasons                                            12    9
Withdrawals (patient's decision to interrupt treatment
  and follow-up)                                               29   22
  With non-compliance documented from TDM                      12    9
  Without non-compliance documented from TDM                   17   13

From an exploratory perspective, factors possibly associated with dropout were investigated by comparing dropouts (n = 86) with study completers (n = 45) who either achieved remission (n = 40) or completed the 7 steps without remission (n = 5). The only factor significantly associated with dropout was a shorter duration of the depressive episode at admission (median, range: 5 weeks, 2–52 versus 12 weeks, 4–34; Mann-Whitney U-test, p <0.001). Groups did not significantly differ with respect to gender, age, history of depression or associated diagnoses of anxiety disorders or substance abuse.
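The study performed these group comparisons in SPSS 11; for readers who want to reproduce the same kinds of tests, the following Python/SciPy sketch runs a Mann-Whitney U-test and a Fisher exact test on invented data of the same shape (86 dropouts versus 45 completers). The numbers are illustrative only and are not the study data.

```python
import numpy as np
from scipy import stats

# Illustrative dropout-versus-completer comparisons with SciPy (the paper used SPSS 11).
rng = np.random.default_rng(0)
episode_weeks_dropouts = rng.integers(2, 53, size=86)      # continuous variable (weeks)
episode_weeks_completers = rng.integers(4, 35, size=45)

# Mann-Whitney U-test for a continuous variable (duration of the depressive episode).
u_stat, p_cont = stats.mannwhitneyu(episode_weeks_dropouts, episode_weeks_completers,
                                    alternative="two-sided")

# Fisher exact test for a categorical variable (e.g. female sex yes/no), 2x2 counts.
#                  dropouts  completers
table = np.array([[55,        30],        # female
                  [31,        15]])       # male
odds_ratio, p_cat = stats.fisher_exact(table)

print(f"Mann-Whitney U p = {p_cont:.3f}; Fisher exact p = {p_cat:.3f}")
```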
In an attempt to reduce attrition and non-compliance rates, nurse phone calls were introduced to remind patients of treatment and appointments. However, when comparing patients contacted by phone (n = 81) with those who were not (n = 50), dropout rates did not significantly differ (64.2% versus 68.0%, Fisher exact test, N.S.). Rates of non-compliance also remained similar (24.7% versus 20.0%, Fisher exact test, N.S.).

Overall response

Of the 131 patients entering the protocol at step 1, 45 were considered to be study completers. Forty patients were complete remitters (30.5%), with a median time before achieving remission of 10 weeks (range 4–34). Five patients had not reached remission at the end of the 7 steps of the protocol.

The 117 patients who had at least one MADRS assessment after inclusion were further considered in a last observation carried forward approach for the overall treatment algorithm. The median MADRS score decreased from 33 (range 25–49) at inclusion to 16 (range 0–40) at discharge from the study. Median improvement of symptom severity was 48.0% (range −20.7% to 100%), with 48.7% of patients considered to be responders, 23.1% partial responders and 28.2% non-responders. Median study retention time was 8 weeks (range 2–34). MADRS scores at the end of each treatment step, the change within a given step and the improvement from the baseline score are documented in table 3 (MADRS scores and changes at each step).

Discussion

The three main aims of this study were to evaluate, among severely depressed outpatients, the proportion of patients obtaining complete remission at the different steps of a medication algorithm; to compare, at an intermediate level of the algorithm, several of the options available when a patient did not respond to the SSRI that was used as a first-intention antidepressant; and to test the feasibility of the algorithm in clinical practice and in the conditions of an open semi-naturalistic design.

In the light of the results we must, before considering these three aims, emphasise the fact that the most significant finding in this trial is the high attrition rate, with 66% of the sample not completing the study. Non-compliance, as measured by plasma levels, was the most common factor associated with dropout (23%), including patients who withdrew and those who were excluded. The other common causes of dropout were treatment-induced, unbearable side effects (21%) and self-withdrawal without TDM evidence of non-compliance (13%). A review of controlled therapeutic studies suggests a dropout rate of up to 33% irrespective of antidepressant drug class [11], but higher rates are observed in clinical practice [19]. In general, between 30% and 60% of all patients fail to take medication they have been prescribed, and compliance in psychiatric patients seems comparable with that of other patient populations [20]. Consequently, adherence to treatment with antidepressant drugs is an issue of major clinical relevance.

The factors leading patients to discontinue therapy, as well as the issue of what specific interventions contribute to improving adherence, are not fully understood [19]. The two reasons that have been most frequently examined in clinical trials are lack of efficacy and adverse events. However, a small study by Maddox et al. [21] revealed that the reasons for dropping out are different in naturalistic settings. In this study, feeling better was the most frequent reason given, followed by adverse events, other reasons, physician's instructions and non-response to medication. In our study, the only factor affecting attrition was a shorter length of the current depressive episode, which has already been shown to affect adherence [20].

It is worth noting that placebo response is also associated with shorter episode duration in placebo-controlled trials [22–25]. Thus, spontaneous improvement in individuals with short episode duration may have contributed to the high attrition rate. However, no data are available about the reasons behind non-compliance in our trial. Moreover, our intervention (nurses' phone calls) aiming to reinforce and improve patient adherence and reduce dropouts had no effect. An intervention of this kind was probably not forceful enough to improve treatment adherence [26]. Even if patients were encouraged to take advantage of several supportive and psychoeducational activities in our protocol, no formal structured psychotherapy was offered. This may have contributed to the present study's significant dropout rate, as psychotherapy may have a possible adherence-enhancing role [27].

The first consequence of the high attrition rate is that only 30.5% of the patients who entered the GODS study reached full remission. These findings correspond to those of other naturalistic and general effectiveness studies [5, 28–30]. However, as a result of this high attrition rate, the first aim of the study (evaluation of the proportion of patients obtaining complete remission at the different steps of a medication algorithm) was only partly accomplished. Nevertheless, it is worth noting that two-thirds of the patients in this trial achieved complete remission with paroxetine 20 mg and 30 mg in monotherapy, during steps 1 (n = 13: 9.9%) and 2 (n = 15: 17.2%) respectively. Furthermore, although there is overwhelming evidence that lithium augmentation of antidepressants is an effective strategy for treating non-responders and/or resistant patients [31], the remission rates did not greatly increase after lithium augmentation of paroxetine in our trial, since only 1 out of 21 patients improved. This is much lower than the response rate reported for lithium addition in SSRI non-responders [13, 32], even if this rate is usually lower than for lithium addition in tricyclic non-responders. Moreover, only one of the 10 clomipramine non-responders reached complete remission after lithium addition. However, it should be stressed that most of the patients who received clomipramine were already resistant to lithium augmentation (addition to paroxetine at steps 4A, 3B and 4B).

When investigating which treatments provided the greatest number of complete remissions (in relation to the number of patients entering the step), the best results were obtained with venlafaxine (300 mg) (2/9) and clomipramine 150 mg (5/20), which are potent multi-action antidepressants at such doses. This finding may provide further evidence of the relative advantage of such agents in achieving remission among both in- and outpatients [33–35].

It is worth noting that the 40 complete remitters participated in the study for a relatively long period of time (median 10 weeks; range 4–34), especially considering the fairly aggressive treatment plan. Moreover, when considering the GODS study completers, 88% (40/45 patients) reached complete remission. The main reason for the very favourable outcome among the study completers may be the fact that the study period was not limited, as in most RCTs, but continued until either full remission or the end of the algorithm. Another reason for these favourable results may be the exclusion of patients presenting a borderline personality disorder, who may have a more treatment-refractory course of illness [36]. Finally, we cannot exclude the possibility that individuals with potential drug resistance were over-represented among the dropouts.

The high attrition rate also significantly affected the second objective of the study, since the number of subjects who reached steps 3 and 4 was too small for a statistical comparison of the three therapeutic arms: venlafaxine 150 and then 300 mg, paroxetine 40 mg, and paroxetine 40 mg + lithium. Moreover, at step 3 randomisation was only partial. For practical reasons (time pressure in clinicians' daily activities), a subgroup of patients received a paroxetine dose increase (corresponding to the 3A arm) without being randomised. This resulted in an unbalanced, incompletely randomised arm allocation with a clear excess of patients in the 3A arm. Consequently, the second objective (i.e. to compare three treatment options for SSRI non-responders) was not attainable.

The third objective of this trial was to determine the feasibility of an algorithm of this kind in clinical practice. Overall, patient acceptance of the GODS protocol was fairly satisfactory. Our experience with the clinicians was more mixed, since acceptance was only partial among the young clinicians. After one year we selected a more limited number of clinicians. As a result, their acceptance of and compliance with the protocol improved. Furthermore, seven steps were probably too many, and randomisation could not always be appropriately carried out in this naturalistic clinical context. On the other hand, the clinicians easily assimilated the relatively complex rules guiding decisions. Such rules greatly contributed to homogenisation of therapeutic decisions in the course of treatment.

In conclusion, the present study has confirmed that many patients do not continue their treatment and/or take the prescribed drug treatment, and that an SSRI as first-intention treatment, followed by the possible use of a multi-action antidepressant (including tricyclics), may be useful in the event of resistance. Moreover, simply exposing physicians to a treatment algorithm may not be sufficiently effective in the treatment of a major depressive disorder. It may be of particular interest to study the long-term outcomes of patients who dropped out of treatment (with special attention to the risk of suicide, chronicity, and inability to work, a highly sensitive topic in Switzerland). This may make it possible to evaluate the needs of such patients and design appropriate strategies to reduce the number of such dropouts, including patient education programmes and integration of structured elements of psychotherapy.
Notes to table 3:
a Number of patients with at least 2 MADRS evaluations two weeks apart at a given step.
b MADRS change from baseline = 100 × [MADRS at baseline − MADRS at step exit] / MADRS at baseline.
c MADRS change at each step = 100 × [MADRS at step entry − MADRS at step exit] / MADRS at step entry.
LOCF = last observation carried forward approach.
Assessment of the Potential for Veverimer Drug-Drug Interactions

Veverimer is a polymer being developed as a potential treatment for metabolic acidosis in patients with chronic kidney disease. Veverimer selectively binds and removes hydrochloric acid from the gastrointestinal tract, resulting in an increase in serum bicarbonate. Veverimer is not systemically absorbed, so potential drug-drug interactions (DDIs) are limited to effects on the absorption of other oral drugs through binding to veverimer in the gastrointestinal tract or increases in gastric pH caused by veverimer binding to hydrochloric acid. In in vitro binding experiments using a panel of 16 test drugs, no positively charged, neutral or zwitterionic drugs bound to veverimer. Three negatively charged drugs (furosemide, aspirin, ethacrynic acid) bound to veverimer; however, this binding was reduced or eliminated in the presence of normal physiological concentrations (100–170 mM) of chloride. Veverimer increased gastric pH in vivo by 1.5–3 pH units. This pH elevation peaked within 1 hour and had returned to baseline after 1.5–3 hours. Omeprazole did not alter the effect of veverimer on gastric pH. The clinical relevance of in vitro binding and the transient increase in gastric pH was evaluated in human DDI studies using the two drugs with the most binding to veverimer (furosemide, aspirin) and two additional drugs with pH-dependent solubility affecting absorption (dabigatran, warfarin). None of the four drugs showed clinically meaningful DDIs with veverimer in human studies. Based on the physicochemical characteristics of veverimer and results from in vitro and human studies, veverimer is unlikely to have significant DDIs.

Veverimer is being developed as a once-daily treatment for metabolic acidosis in patients with CKD. Veverimer is an orally administered, non-absorbed, insoluble, free-amine polymer that combines high capacity and selectivity to bind and remove hydrochloric acid (HCl) from the gastrointestinal (GI) tract, resulting in an increase in serum bicarbonate (Bushinsky et al., 2018; Wesson et al., 2019b; Wesson et al., 2019a). Acid binding and removal using a non-absorbed polymer is a novel approach to treating metabolic acidosis. Within the GI tract, the polymer restores the ability to excrete acid from the body. This mechanism of action is fundamental to the effectiveness of veverimer in treating metabolic acidosis and is distinct from the mechanisms of other drugs, such as proton pump inhibitors (PPIs) and histamine H2-receptor antagonists, that affect gastric pH but do not have an effect on systemic acid-base balance. The effect of veverimer on acid binding, as assessed by a change in gastric pH, was evaluated in this study both in the presence and in the absence of a PPI. The effect of veverimer on a background of PPI use was a relevant question because these drugs affect gastric pH and are commonly used. Veverimer was designed to bind HCl with high capacity and specificity.
After ingestion, veverimer is protonated, and the positively charged polymer selectively binds the smallest anion in the GI tract, chloride, with little or no binding of other anions (Klaerner et al., 2020). The high HCl binding capacity is a function of the amine content of the polymer, while the high specificity for chloride binding is the result of extensive crosslinking within the polymer beads that excludes anions larger than chloride (e.g., phosphate, citrate, bile acids and short-chain and long-chain fatty acids) and minimizes interaction between the polymer and other concomitantly administered oral drugs (Klaerner et al., 2020). Polypharmacy is common in CKD and drug-drug interactions (DDIs) are a continuing clinical concern (Rama et al., 2012; Sommer et al., 2020). We used a directed approach based on the known physical and chemical characteristics of veverimer to analyze its potential for DDIs (Figure 1). Veverimer is too large to be systemically absorbed (Klaerner et al., 2020); therefore, its potential for DDIs is limited to effects on the absorption of other orally administered drugs either through 1) binding to veverimer or 2) transient increases in gastric pH caused by veverimer binding to HCl. In this study we tested the hypothesis that the potential for veverimer to interact with other orally administered drugs is low, consistent with the physical and chemical properties of this novel polymer.

MATERIALS AND METHODS

The protocols for all five clinical trials reported herein were approved by the relevant US institutional review boards (Chesapeake Research Review, Inc. [Columbia, MD] for the furosemide, aspirin and warfarin DDI studies and the gastric pH study, and Advarra IRB [Columbia, MD] for the dabigatran DDI study). All participants provided written, informed consent prior to trial initiation. The trials were conducted at Celerion in accordance with the principles of Good Clinical Practice and the Declaration of Helsinki. The furosemide, warfarin and dabigatran DDI studies and the gastric pH study were conducted in Tempe, AZ, and the aspirin DDI study was conducted in Lincoln, NE.

Anionic Probes

Since the free-amine veverimer polymer becomes positively charged upon binding to hydrogen ions, negatively charged probe molecules of increasing molecular weight (36.5–234.2 Da; Supplemental Table S1) were used to assess the role of size exclusion in binding of molecules to veverimer. Veverimer (450 mg) was incubated in 20 mM aqueous solutions (100 mL) of individual probe molecules for 0.5, 2, 4, 6 and 24 hours. Phosphoric acid and HCl incubation periods were limited to 6 hours and 4 hours, respectively. At each time point, a 400 µL aliquot was filtered, diluted 10-fold and analyzed by ion chromatography to calculate the number of millimoles of acid bound per gram of veverimer.
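As a small illustration of the bookkeeping behind the probe experiment, the sketch below converts a measured free-probe concentration into millimoles bound per gram of polymer, using the incubation conditions stated above (450 mg polymer in 100 mL of a 20 mM probe solution). The measured free concentration in the example is invented for illustration and is not a value from the paper; the 10-fold dilution step before analysis is not modelled.

```python
# Illustrative calculation: probe removed from solution, normalized to polymer mass.

def mmol_bound_per_gram(c0_mM, c_free_mM, volume_mL, polymer_mg):
    """Acid bound by the polymer, in mmol per gram of polymer."""
    mmol_removed = (c0_mM - c_free_mM) * volume_mL / 1000.0   # mM * mL / 1000 = mmol
    return mmol_removed / (polymer_mg / 1000.0)               # per gram of polymer

# Example: free chloride falls from 20 mM to 10 mM after incubation (made-up value).
print(mmol_bound_per_gram(c0_mM=20.0, c_free_mM=10.0, volume_mL=100.0, polymer_mg=450.0))
# -> approximately 2.2 mmol bound per gram of polymer (illustrative)
```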
In Vitro Assessment of Direct Drug Binding to Veverimer: Test Drug Panel

An in vitro test system was designed to assess the potential for binding interactions between veverimer and a set of test drugs that included 14 oral medications used in patients with CKD, as well as two water-soluble vitamins (Table 1). The test drug panel included prototypical drugs from 14 distinct drug classes that ranged in size from 129 to 482 Da, were positively charged (N = 5), neutral/zwitterionic (N = 4) or negatively charged (N = 7), and comprised all four Biopharmaceutics Classification System (BCS) classes, covering a range of solubilities and permeabilities. In vitro binding assays were conducted using seven matrices mimicking the pH and ionic conditions of the GI tract: simulated gastric fluid (SGF) with or without additional 60 mM HCl and without pepsin (pH 1 to 1.2); 50, 100 or 200 mM acetate buffer (pH 4.5); and simulated intestinal fluid (SIF) with or without an additional 50 mM PO4 and without pancreatin (pH 6.8). The matrices with higher buffering capacity (SGF + 60 mM HCl, 100 mM and 200 mM acetate buffer, SIF + 50 mM PO4) were used to maintain the pH of the incubation mixture around the target pH values of 1.2, 4.5 and 6.8, respectively, in the presence of 9.0 mg/mL veverimer. The likelihood of identifying a drug interaction would be greatest at a high concentration of veverimer (4.5 g or 9.0 g) and a low concentration of the test drug. The maximum clinical dose of veverimer is anticipated to be 9 g QD. A volume of one liter was used for dispersion of an orally administered drug in the upper GI tract (Read et al., 1980; Metcalf et al., 1987; Thelen et al., 2011). Binding to veverimer was assessed in six replicates by measuring the free test drug concentration after a 3-hour incubation on a benchtop shaker at 37°C in the presence or absence of veverimer (4.5 mg/mL or 9.0 mg/mL). After incubation, samples were allowed to settle for 5 minutes and were filtered with a 0.45 micrometer polyvinylidene fluoride (PVDF) filter plate unit (allopurinol, aspirin, gliclazide, metoprolol tartrate, lisinopril, riboflavin, thiamine hydrochloride, and trimethoprim) or a 0.45 micrometer PVDF syringe filter (amlodipine besylate, ethacrynic acid and the remaining test drugs).

The difference between mean free drug concentrations in the presence and absence of veverimer was evaluated with a two-sided 90% confidence interval of the form (difference in means) ± t(DF) × s_diff, where t(DF) is the t-value for a two-sided 90% confidence interval with DF degrees of freedom. Assuming equal variances between the two populations, the standard deviation of the difference between the means is s_diff = s_p × sqrt(1/n1 + 1/n2), with pooled variance s_p² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2), where s1² is the variance of the n1 observations from population 1, s2² is the variance of the n2 observations from population 2, and DF = n1 + n2 − 2. Results from these in vitro studies informed the selection of drugs for human DDI studies.

Effect of Veverimer on Gastric pH in Healthy Volunteers with and without Omeprazole

An open-label, in-patient, randomized, crossover, 2-stage study in healthy volunteers was conducted to assess the effect of veverimer on gastric pH (Supplemental Figure S1). The study comprised Stage 1 and Stage 2; subjects who withdrew from the study after Stage 1 were replaced to ensure that the same number of subjects (N = 40) participated in each stage. The sample size was estimated based on the variability of integrated acidity when no drug is administered, in order to show no difference and to ensure that the effect observed with veverimer was due to the drug administration and not the natural variability of pH levels in healthy subjects. The number of subjects enrolled was based on in-house data and conservative assumptions regarding intra- and inter-subject coefficients of variation in order to obtain a power of at least 80%, which was defined as the probability of having a 90% confidence interval (CI) for a treatment ratio within 80.00–125.00%.
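To illustrate the 80.00–125.00% no-effect bounds referred to above, the sketch below computes a two-sided 90% CI for a within-subject treatment ratio on the log scale and checks whether it lies entirely inside those bounds. This is a simplified paired calculation, not the formal crossover model used in the studies, and the data are invented for illustration.

```python
import numpy as np
from scipy import stats

def ratio_90ci(values_test, values_ref):
    """Two-sided 90% CI (as %) for a paired test/reference ratio, computed on the log scale."""
    log_ratio = np.log(np.asarray(values_test, dtype=float) / np.asarray(values_ref, dtype=float))
    n = len(log_ratio)
    se = np.std(log_ratio, ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.95, df=n - 1)            # 95th percentile -> two-sided 90% CI
    lo = 100.0 * np.exp(log_ratio.mean() - t * se)
    hi = 100.0 * np.exp(log_ratio.mean() + t * se)
    return lo, hi

# Made-up paired measurements (e.g., integrated acidity or exposure, test vs reference).
lo, hi = ratio_90ci([1.9, 2.3, 2.1, 2.4, 2.0, 2.2], [2.0, 2.2, 2.1, 2.3, 2.1, 2.2])
within_bounds = (lo >= 80.0) and (hi <= 125.0)
print(f"90% CI for ratio: {lo:.1f}% to {hi:.1f}%; within 80-125% bounds: {within_bounds}")
```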
Both Stage 1 and Stage 2 had a randomized (1:1:1:1), 4-period, 4-way crossover design. One of four study drug treatments was administered on Day 1 of each of four treatment periods in each stage. Periods 1 and 2, and Periods 3 and 4, were each conducted over two consecutive days; Period 2 and Period 3 were separated by a 1-day rest period. As prespecified in the protocol, after completion of Period 4 in Stage 1 and review of the preliminary intragastric pH data, the decision was made to conduct Stage 2. Stage 2 began with a 6-day Run-in Period (Days -6 to -1), during which subjects received omeprazole once daily (QD). In Stage 1 of the study, subjects received one of four treatments in each period; in Stage 2, subjects likewise received one of four treatments on the background of once-daily omeprazole, one of which (Treatment E) was water administered in the fasted state. Safety was monitored throughout the study with clinical laboratory evaluations, reporting of AEs, physical examination, vital signs, and 12-lead electrocardiograms (ECGs).

Information on subject disposition is provided in Supplemental Table S2. Demographics and key baseline information for the study population are presented in Supplemental Table S3. Most subjects were white (80.0% in Stage 1; 77.5% in Stage 2) and approximately half were of Hispanic or Latino ethnicity (57.5% in each stage). Most subjects were female (67.5% in Stage 1; 65.0% in Stage 2), and the mean age was 33.4 years in Stage 1 and 35.5 years in Stage 2. Results from this study informed the selection of additional drugs for human DDI studies.

Human DDI Studies

To characterize the potential for clinically relevant DDIs due to binding of other orally administered drugs to veverimer, human DDI studies were conducted with the two drugs that showed the highest binding with veverimer in vitro: aspirin and furosemide. Given the transient effect of veverimer on gastric pH, human DDI studies were also conducted to evaluate the potential for veverimer to affect the bioavailability of drugs with pH-dependent solubility; the victim drugs evaluated in these studies were the weak acids furosemide and warfarin, and the weak base dabigatran (Supplemental Table S4). Human DDI studies were open-label, in-patient, randomized, crossover studies that examined the effect of veverimer on the pharmacokinetic (PK) profile of each of the four victim drugs. The treatments administered comprised victim drug alone, victim drug coadministered with veverimer, and victim drug separated from veverimer by 1-3 hours, and subjects were randomly assigned to treatment sequence (Supplemental Figure S2 and Figure S3). Key entry criteria for the studies are provided in Supplemental Section 3.1.2.1. In order to maximize the possibility of observing binding to veverimer, the highest anticipated daily dose of veverimer (9 g) and the lowest feasible clinical dose of each victim drug were administered (20 mg furosemide, 81 mg aspirin, 2 mg warfarin, 150 mg dabigatran). Study treatments were administered during treatment periods in which subjects were confined in a CRU and fed a standardized diet, with an out-patient washout period between treatment periods and a 14-day follow-up period after the final study treatment. In each treatment period, serial blood samples for PK analysis were collected following administration of the victim drug; the sampling time periods and washout periods between treatments were based on the half-lives of the victim drugs.

All DDI studies included assessment of the safety and tolerability of veverimer when coadministered with the victim drug. Safety was evaluated from 12-lead ECGs, measurements of vital signs and clinical laboratory parameters, assessment of AEs, and physical examinations.

Information on the disposition of subjects in the human DDI studies is summarized in Supplemental Table S6. A total of 52 subjects were randomized, enrolled, and received at least one dose of study drug in the furosemide DDI study. The furosemide PK analysis set comprised 51 subjects; one subject vomited during administration of the first dose of veverimer in Treatment Period 1 and was withdrawn from the study. Per protocol, because the subject did not have ≥2 consecutive time points with measurable furosemide concentrations, he was excluded from the furosemide PK analysis set. In the aspirin DDI study, a total of 51 subjects were randomized, enrolled, and received at least one dose of study drug and were included in the aspirin PK analysis set. Forty-eight (48) subjects completed the study; 3 subjects withdrew early for diverse reasons (i.e., mild headache, difficulty with study blood draws, personal reasons). In the warfarin DDI study, a total of 15 subjects were randomized, enrolled, received at least one dose of study drug, and completed the study. The warfarin PK and PD analysis sets comprised 15 subjects who complied sufficiently with the protocol and displayed evaluable PK and PD profiles, respectively. A subset analysis was prespecified that excluded obviously aberrant PK profiles based on the presence of outlier values in one or more of the key PK parameters, defined as those falling below the first quartile or above the third quartile by more than 1.5 times the interquartile range, in the event that outlier values were noted for the PK parameters AUC0-t, AUC0-inf or Cmax for R- or S-warfarin. The initial PK and statistical assessments included a PK profile for one subject (Treatment A [warfarin alone]) that diverged dramatically from the remainder of the data. It was readily apparent on visual inspection of the concentration-time profiles that there was a unique issue in this individual on this dosing occasion (i.e., the subject did not appear to have ingested the drug), and that this profile should be excluded from the analysis. A cause for the effective absence of warfarin blood levels in this subject on this occasion could not be identified. The decision to perform subset analyses with the aberrant Treatment A profile removed from the warfarin PK analysis set was supported by the statistical assessment described above.

In the warfarin DDI study, plasma R- and S-warfarin levels were below the limit of quantitation (BLQ) (i.e., <1.00 ng/mL) in all samples collected prior to dosing in Period 1. Quantifiable R- and S-warfarin concentrations were observed in all predose plasma samples in Periods 2 and 3, consistent with carryover from the prior dose. This suggests that the 21-day washout period was insufficient. However, individual predose concentrations were low, ranging from approximately 1.5% to 6.8% of the subsequent Cmax for R-warfarin, with nine instances exceeding 5%. For S-warfarin, predose concentrations ranged from approximately 2.0% to 8.6% of Cmax, with four instances exceeding 5%. Therefore, it is highly improbable that the measurable predose concentrations had a discernible impact on the PK data.

In the dabigatran DDI study, a total of 84 subjects were randomized, enrolled, and received at least one dose of study drug. Of the 84 subjects in the dabigatran PK analysis set, 81 subjects (96.4%) completed the study, receiving all doses of each study drug; three subjects had incomplete PK data: 1 subject was discontinued for a protocol violation and had no PK data for Treatment D, and 2 subjects withdrew early, causing one to miss Treatment D and the other to miss Treatments C and D. The available data for all subjects were included in the PK analyses. Demographic and baseline characteristics of the subjects enrolled in the DDI studies are summarized in Supplemental Table S7. The average age of the subjects in the studies ranged from 33 to 39 years and the study populations ranged from 47% to 80% male. The majority (80-90%) of the subjects studied were white; 2-79% were Hispanic or Latino; and 6-13% were black or African American.

RESULTS

Binding of Veverimer to Anionic Probes and Representative Test Drugs In Vitro

The binding kinetics of the anionic probe molecules to veverimer are shown in Figure 2A, and binding of the probe molecules to veverimer as a function of size is illustrated in Figure 2B. The rate of binding, as well as the total amount of probe bound to the polymer, was inversely proportional to the size of the probe, suggesting that smaller negatively charged molecules preferentially bound to the polymer over larger negatively charged molecules. Anionic probe molecules >200 Da did not bind to veverimer. In in vitro binding experiments using a panel of 16 test drugs, none of the five test drugs that are positively charged across the physiological pH range of the GI tract (i.e., amlodipine, metformin, metoprolol, thiamine and trimethoprim) bound to veverimer under any condition (Figure 3). Similarly, none of the four test drugs that are neutral or zwitterionic (i.e., allopurinol, riboflavin, spironolactone and lisinopril) bound to veverimer under any condition (Figure 3). The remaining seven test drugs (aspirin, ethacrynic acid, furosemide, valsartan, rosuvastatin, warfarin and gliclazide) are weak acids; of these, only furosemide, aspirin and ethacrynic acid bound to veverimer, and they did so only in the acetate buffer (pH 4.5) matrix. Under the pH conditions of the SGF matrix (pH 1-1.2), the weak acids are neutral (Table 1), and none bound to veverimer (Figure 3). Although the weak acids are negatively charged in the SIF matrix (pH 6.8), none bound to veverimer, likely because 50-100 mM of the small anion, phosphate, was available to compete for veverimer binding sites (Figure 3). In the presence of physiologically relevant concentrations of chloride (100 mM), the three test drugs (aspirin, ethacrynic acid, furosemide) that bound to veverimer in acetate buffer were unable to bind, likely due to preferential binding of the small chloride ion (Figure 4). Drugs previously shown not to bind veverimer (allopurinol, trimethoprim) also did not bind to the polymer in the presence of 100 mM chloride.
Effects of Veverimer on Gastric pH with and without a Proton Pump Inhibitor

The magnitude and duration of the effect of veverimer on gastric pH were measured continuously in vivo in healthy volunteers using a microelectrode pH probe. The effect was also evaluated when veverimer was administered on a background of the proton pump inhibitor omeprazole (Figure 5C and Figure 5D). Supplemental Figure S4 and Figure S5 show the independent effects of food or omeprazole (in the absence of veverimer) on gastric pH.

A total of 12/40 subjects (30%) in Stage 1 and 13/40 subjects (32.5%) in Stage 2 had one or more AEs during the study. No deaths or other SAEs were reported, and no subject discontinued the study due to an AE. The most common AEs observed in the veverimer treatment periods (i.e., reported by more than one subject in either study stage in the Veverimer Fasted and Veverimer Fed treatment periods combined) were headache, nausea and oropharyngeal pain. All AEs were mild (22.5% of subjects in Stage 1 and 30% of subjects in Stage 2) or moderate (7.5% of subjects in Stage 1 and 2.5% of subjects in Stage 2); there were no severe AEs reported for any treatment group. There were no treatment effects noted in this study on clinical laboratory or vital signs parameters, physical examination findings or ECG intervals.

Human Drug-Drug Interaction Studies

Human DDI studies were conducted with drugs that demonstrated the greatest potential for direct interaction with veverimer in vitro (furosemide, aspirin) and those with susceptibility to gastric pH changes (furosemide, warfarin, dabigatran). No DDIs were observed between veverimer and any of the drugs tested, either with concomitant administration (Figure 6) or with 1-3-hour dosing separation intervals (Table S10). In the furosemide DDI study, there were no deaths, serious AEs or severe treatment-emergent AEs. Three subjects were withdrawn because of treatment-emergent AEs (mild postural dizziness; mild eosinophil count increase and mild white blood cell count increase; mild vomiting). In the aspirin DDI study, there were no deaths, serious AEs or severe treatment-emergent AEs. All treatment-emergent AEs were mild. One subject withdrew because of a mild headache. In the warfarin DDI study, there were no deaths or treatment-emergent AEs leading to discontinuation of study drug. One subject had a serious AE (jaw fracture [due to trauma]) 10 days after study treatment in Period 1; this event was assessed as unrelated to study drug. All other treatment-emergent AEs that occurred were mild or moderate and non-serious. In the dabigatran DDI study, there were no deaths, serious AEs or severe treatment-emergent AEs. There were no treatment-emergent AEs that led to discontinuation of study drug.

DISCUSSION

Reported here are results from in vitro and in vivo studies that evaluated potential interactions of veverimer with other orally administered drugs. We considered the physicochemical and biopharmaceutical properties and method of use of veverimer to design a rational, sequential and tailored approach that examined possible mechanisms of interaction. Because the polymer is not absorbed from the GI tract, it is unlikely that it would alter the pharmacokinetics of concomitantly administered drugs through inhibition and/or induction of drug-metabolizing enzymes or transporters. Potential interactions with veverimer are restricted to those that could affect absorption of other drugs from the GI tract, such as through direct binding or indirect effects on bioavailability resulting from transient increases in gastric pH.
The mechanism by which veverimer transiently reduces acidity in the GI tract after ingestion involves protonation of the polyamine polymer with subsequent binding of chloride and the removal of HCl from the GI tract in the feces (Bushinsky et al., 2018). The highly cross-linked structure of veverimer confers a marked size exclusion selectivity to the negatively charged moieties that bind to the protonated polymer, strongly favoring binding of the smallest anions and restricting binding of larger anions (Klaerner et al., 2020). These properties were affirmed in studies reported here evaluating the in vitro binding of a range of anionic probe molecules and a diverse panel of test drugs clinically relevant to the CKD population. The results of these studies illustrated the predisposition of veverimer to bind negatively charged molecules, with an inverse relationship between size of the anion and its propensity for binding to the polymer (Figure 2); data on furosemide, gliclazide, and warfarin suggested that size is secondary to negative charge in this regard. While binding of aspirin, ethacrynic acid and furosemide to veverimer was significant at pH 4.5 in acetate buffer, it was reduced or eliminated in the presence of This article has not been copyedited and formatted. The final version may differ from this version. physiologically relevant concentrations of chloride (100-170 mM) (Figure 4). This is consistent with results from previous in vitro studies in matrices mimicking the lower GI tract demonstrating that veverimer preferentially bound chloride in the presence of competing organic and inorganic anions (e.g., acetate, phosphate, citrate, taurocholate, oleic acid (Klaerner et al., 2020). In the current study, chloride effectively competed for veverimer binding sites with the three negatively charged test drugs at chloride concentrations normally present in the GI tract, which predicted a low likelihood of clinically meaningful DDIs mediated by direct binding of even the smallest co-administered anionic drugs. Consistent with the binding properties of veverimer, administration of the drug did not impact absorption of fat-soluble vitamins from the GI tract in rat and dog chronic toxicity studies. Since the essential nutrient requirements of rats and dogs are qualitatively similar to humans, the absence of clinically relevant binding of veverimer to any essential dietary substance in the chronic toxicology studies is expected to extrapolate to humans. The pharmacodynamic effects of veverimer on gastric acidity were evaluated in human volunteers to elucidate the extent and time course of gastric pH changes mediated by the intended removal of HCl from the GI tract by the polymer. Continuous monitoring of stomach acidity demonstrated a modest and transient increase in gastric pH following oral administration of veverimer. The magnitude of the gastric pH increase observed after ingestion of a meal or when the proton pump inhibitor (PPI) omeprazole was given without food were similar to those seen with veverimer given in the fasted state. All three factors individually appeared to increase mean gastric pH by 2-4 pH units, although the time course of the effect on gastric pH differed. While omeprazole, which had been dosed to steady-state prior to the test, increased gastric pH throughout the 22-hour monitoring period, the effects of food and veverimer were short-lived This article has not been copyedited and formatted. The final version may differ from this version. 
(i.e., disappearing within 1-4 hours). In this way, the effect of veverimer on gastric pH more closely resembled the transient effect of food than the long-lasting effect of a PPI. Thus, the mechanism of action of veverimer (i.e., increasing serum bicarbonate by binding and removing HCl from the GI tract), as assessed by its effect on gastric pH, was unaffected by proton pump inhibition. These findings are consistent with clinical trials in patients with CKD and metabolic acidosis which showed that the effect of veverimer on serum bicarbonate was similar in patients who were and were not receiving proton pump inhibitors or H-2 receptor blockers (Wesson et al., 2019b). ). With both free and total dabigatran, GLSM ratio point estimates were reduced by less than 18% when coadministered with veverimer, the effect lessening slightly with a widening of the dose separation interval (e.g., 1 and 2 hours). This minimal reduction is substantially less than that observed when dabigatran was coadministered with the PPI pantoprazole, which diminished dabigatran C max and AUC by 40% and 28%, respectively, changes that are not clinically meaningful (Zhang et al., 2014). Notably, the lack of effect on aspirin, furosemide, warfarin and dabigatran exposures indicates that veverimer does not interact in a clinicallyrelevant manner with small, negatively charged drugs or drugs that have pHdependent solubility, demonstrating the lack of two potential mechanisms by which the veverimer polymer might foster DDIs. In summary, the potential for DDIs with veverimer was evaluated based on the known site of action and physicochemical structure of the polymer, which restricts the compound to the GI particularly vulnerable to DDIs (Sommer et al., 2020). Based on the physicochemical characteristics of veverimer and the findings from the in vitro and human studies presented here, we conclude that veverimer is unlikely to have clinically significant DDIs. This article has not been copyedited and formatted. The final version may differ from this version. FOOTNOTES This work was funded by Tricida, Inc. The work was partially published in abstract form at American Society of Nephrology, Kidney Week 2020 Strategy to Assess Potential for Veverimer Drug-Drug Interactions We used a directed approach based on the known physical and chemical characteristics of veverimer to analyze its potential for DDIs. Veverimer is too large to be systemically absorbed; therefore, its potential for DDIs is limited to effects on the absorption of other orally administered drugs either through 1) binding to veverimer or 2) transient increases in gastric pH caused by veverimer binding to HCl. To identify candidate drugs for testing with veverimer in human DDI studies, we conducted in vitro studies to identify the characteristics most likely to lead to binding to veverimer and we evaluated the effect of veverimer on gastric pH in healthy volunteers. Results from these studies informed the selection of drugs for human DDI studies. The potential for binding interactions with veverimer was assessed in vitro using a set of test drugs that included 14 oral medications and two water-soluble vitamins and test matrices mimicking the pH of various GI compartments. DDI = drug-drug interaction This article has not been copyedited and formatted. The final version may differ from this version. Pump Inhibitor The magnitude and duration of effect of veverimer on gastric pH was measured continuously in vivo in healthy volunteers using a microelectrode pH probe. 
Ingestion of veverimer or water occurred at ~Hour 0, just after initiation of pH monitoring. In experiments assessing the fed condition, breakfast was eaten within 15 minutes prior to Hour 0. Other meal/snack times occurred at ~ Hour 4, Hour 9 (or 10) and Hour 13. Omeprazole was administered at Hour 9. Figure 1 Veverimer is an orally administered, nonabsorbed, insoluble, free-amine polymer Does veverimer have potential to affect absorption of coadministered drugs? Does veverimer have potential to affect metabolism and excretion of coadministered drugs? Need to conduct DDI studies to evaluate potential effect on drug absorption
2021-05-27T06:19:21.328Z
2021-05-24T00:00:00.000
{ "year": 2021, "sha1": "ba12d9df264b1ce7849b15fbe6e76024771ac027", "oa_license": "CCBY", "oa_url": "https://dmd.aspetjournals.org/content/dmd/49/7/490.full.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "eee62d39935adb856863bc62e3414017e283a36c", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
238114107
pes2o/s2orc
v3-fos-license
The voices of parents whose children hospitalized with chronic kidney disease: A qualitative study Background Parents play an important role in the treatment of children with chronic kidney disease (CKD) and their dissatisfaction may result in negative impacts on children’s health outcomes as well as their medical treatment. Thus, exploring parents' experience and identifying and addressing challenging issues could be helpful in managing the patients’ chronic conditions during their hospitalization. This study aimed to explore parents’ experiences during the hospitalization of their children with CKD. Methods This study was a qualitative study with the content analysis approach. Participants were 15 parents of children with CKD who were selected by purposive sampling. Data were collected using in-depth, semi-structured, face-to-face interviews. Data were analyzed using conventional content analysis. Results Two overarching categories of “improper behavior of personnel” and “unprofessional performance of personnel” were extracted from the data. The first category included sub-categories of ‘staff aggression’ and ‘staff indifference’. ‘Disturbed interaction’, ‘poor patient care, and ‘poor skills of personnel’ were considered as the sub-categories of "unprofessional performance of personnel". Conclusion The results indicated that improper behavior and unprofessional performance of the healthcare personnel can intensify the child’s and parents’ problems, and make it more difficult for them to deal with these difficulties. The medical team can significantly help parents by establishing appropriate communication and behavior, providing them the required information about their child’s disease and the necessary care to mitigate or eliminate their problems. Also, health care authorities can develop and implement educational and practical guidelines for healthcare personnel to improve their knowledge and skills. Two overarching categories of "improper behavior of personnel" and "unprofessional performance of personnel" were extracted from the data. The rst category included sub-categories of 'staff aggression' and 'staff indifference'. 'Disturbed interaction', 'poor patient care, and 'poor skills of personnel' were considered as the sub-categories of "unprofessional performance of personnel". Conclusion The results indicated that improper behavior and unprofessional performance of the healthcare personnel can intensify the child's and parents' problems, and make it more di cult for them to deal with these di culties. The medical team can signi cantly help parents by establishing appropriate communication and behavior, providing them the required information about their child's disease and the necessary care to mitigate or eliminate their problems. Also, health care authorities can develop and implement educational and practical guidelines for healthcare personnel to improve their knowledge and skills. Background Chronic kidney disease (CKD) is one of the major health issues and it is on the rise worldwide [1]. The prevalence of CKD is 1.5-3 cases per 10,000 among children under 16 years old [2]. This disease has destructive effects on different systems of the body [3]. Providing continuous care can improve the quality of life in these children [4], and it requires the involvement of health systems and families [5]. Parents are primary caregivers to a child suffering from CKD [6]. 
The relationship of the health care team and their Support have a positive impact on the caregiving ability of parents during hospitalization and the improvement of the child's recovery after discharge [7]. Moreover, improving children's health or reducing their symptoms, adhering to their treatment regimen, and parents' understanding of medical information is linked to the parents' satisfaction with health care. Parents' satisfaction with health care may be used as an appropriate variable to assess the quality of patient care [8]. Growing the number of chronic diseases, higher hospitalization rates, and huge health care costs have encountered health care systems with major challenges such as paying less attention or ignoring patients' rights and not treating the patients with proper respect [9]. The number of patients who are disappointed with medical services is in the rise globally and it damages the link between patient and healthcare providers [10]. The majority of patients are dissatis ed with the quality of medical services [11]. The quality of medical services is equal to a sense of satisfaction in patients regarding care and treatment [12]. Satisfaction with medical and nursing care is a multidimensional concept and depends on the degree of meeting expectations of the patients and their families [13]. When the patient is a child, the satisfaction with medical care is evaluated by parents [14], who have the right to be part of the medical team members and participate in the medical decision-making process [15]. Parents play an important role in the treatment of children with CKD and their dissatisfaction may result in negative impacts on children's health outcomes as well as their medical treatment [16]. Thus, exploring parents' experience and identifying and addressing challenging issues could be helpful in managing the patients' chronic conditions during their hospitalization [17]. Qualitative study is the best approach to reach the lived experiences of parents and understand the situation from their perspective [18]. This study aimed to explore parents' experiences during the hospitalization of their children with CKD by using a qualitative research approach. Study Design And Setting This qualitative study was conducted using the content analysis approach in Shahid Motahari teaching hospital of Urmia in Iran. Participants Participants were 15 parents (9 mothers and 6 fathers) of children with CKD who were hospitalized in the Nephrology department. Participants were selected using purposive sampling and a snowball approach was used to nd informant participants. Similar to other qualitative studies, the sample size was based on data saturation. In this study, data saturation occurred when previously collected data were repeated during interviews and no other new codes were obtained. Inclusion criteria included willing to participate in the study, having the ability to express experiences, more than 6 months had passed since a child's diagnosis, not being a single parent, and not having a mental illness in the parents. Data Collection Data collection was performed through interview and eld notes by the lead researcher (Ph.D. candidate) from September 2018 to September 2019. She had passed 6 credit hours of qualitative research methodology class before the interview. She owned a master's degree and worked in the Nephrology unit in the pediatric hospital. 
She was interested in the research topic and she performed the rst two interviews with her advisor after obtaining permission from the participants. Upon agreement of the participants, meetings were held in the classroom of the Nephrology unit. The researcher explained the research process, objective, and the roles of the participants in the study. She obtained written consent from participants and noti ed them about recording their voice during interview sessions. She performed in-depth, semi-structured, face-to-face interviews, and started with a general, open-ended question, followed by exploratory, deepening questions. The research team discussed and developed primary general questions. The questions had open and interpretive answers and participants' responses guided the research process. The main questions of the interview were as follows: "how were the treatment and care provided to your child?" or "what experiences did you have during the hospitalization of your children?" Based on parents' answers, probing questions were asked: "what do you mean by that?" or "would you please explain more about that?" Each interview lasted for approximately 20-60 minutes. A digital voice recorder was utilized to record each interview. Data collection continued until data saturation was reached [19]. Data analysis Each interview was transcribed verbatim and analyzed by using MAXQDA 10 software. Simultaneous analysis of the interviews provided access to the key informant participant for the next interview and lead to obtaining richer information. Data analysis was carried out using the conventional content analysis approach suggested by Graneheim and Lundman [20]. This approach includes 6 stages: 1) becoming familiar with transcribed data through immersion and identifying primary code by reading, 2) generating primary codes in the transcript by reviewing line by line, 3) searching for and recognizing categories and sub-categories, 4) reviewing categories to nd the relationship between categories and sub-categories, 5) naming and labeling categories and sub-categories, 6) preparing the nal report of the analysis. In this study, the leading researcher performed coding and the other research team members monitored the coding process. The research team member spent exchanging ideas until they reached an agreement about coding and categorization processes. Rigor Lincoln and Guba's criteria were applied to determine the precision and accuracy of the data [21]. Prolonged engagement with data, peer checking (expert review), and member checking (participants' feedback) were performed to increase the credibility of the data. The researchers tried to provide a detailed report of the research process and prevent the researcher's presumption from interfering with data collection and analysis in order to achieve Conformability. Dependability was achieved by step by step repetition the research process, and allowing external reviewers to audit and critique the data and documents. To assure transferability, the steps and activities performed during the research process were described precisely and con rmed by external reviewers. Demographic pro le of participants The results were obtained from interviewing 15 parents of children with CKD. Nine of them were mothers and six were fathers. Seven parents had two children and eight of them had one child. Four parents had primary education, two held a high school diploma and nine of them had bachelor's degrees. 
The duration of a child's illness ranged from 7 months to 12 years. Categories Two overarching categories of "improper behavior of personnel" and "unprofessional performance of personnel" were extracted from data. The category of improper behavior of personnel" had sub-categories of 'staff aggression" and "staff indifference". "Disturbed interaction", "poor patient care", and "poor skills of personnel" were considered as the sub-categories of "unprofessional performance of personnel" (Table 1). No attention to parents' concerns A few of the participants noted that some doctors and nurses refused to answer parents' questions and ignored their concerns. One of the parents stated: "… The pediatric resident told us to take lab results to my professor urgently. He should see this immediately… The professor was at a conference. It was winter. I was really worried and sat in front of the conference hall in the yard for one and a half hours. After the conference was over, the professor came out. I said, 'dear professor, this is my kid lab report … the resident said you should see this…' He didn't take a look at the report and said, 'The resident should read the lab report for me and I'll give the answer to him, not to you…" (P.7). Unprofessional performance of the personnel Disturbed Interaction Lack of providing information to parents In this study, some participants stated that they were not provided the necessary information about their child's disease and their treatment process. This unawareness made it di cult for parents to make decisions about their child's treatment and caused them great anxiety and concerns. A mother shared her experience as follows: "They gave me a brochure and it's just some simple information about the disease. I asked my child nurse to give me some information about her diet, activities, etc. The nurse told me to check out the internet, you can nd everything online" (P.6). Lack of easy access to the doctor According to the participants, it was extremely di cult to visit the attending doctors in the hospital. Some of them voiced that the residents and interns come and visit the patients. Sometimes, if we have questions about our child's condition, we have to wait several days to see the doctor. One of the parents stated: "It's been 3 days now I haven't seen her doctor yet. Some other doctors came and saw my kid. When I ask them 'how I can visit her doctor. They just say, he will be here any minute'..." (P.13) Participant 7 shared her experience as follows: "His doctor referred us to a doctor in Tehran. We passed through a Herculean task to see the doctor. At rst, a resident visits my child, then a surgeon saw him. It took us about 2 hours until we nally reached the doctor…" (P.7). Poor Patient Care Ignoring patients' problems Some of the parents experienced poor quality of patient care in the hospital. They complained that their child problems were ignored by some of the healthcare providers. A mother stated: "… I said to the nurses that my son wasn't feeling all right at all and asked them to come and check on him. Once my son felt extremely bad and started having a seizure… I shouted 'someone please help me…'" (P.1) A father shared his experience as follows: "… My wife shouted 'oh, God, my child died, someone please help him.' I fought the security man, he didn't let me in. I rushed into the department. They checked his blood pressure; it was 4. I said, 'for God's sake, we are here since this morning; Sister, you didn't even check on him.'" (P. 
5) Lack of providing care in a timely manner Some participants complained about the lack of timely care services and medical interventions based on previous scheduling, as one of the parents stated: "… They told us to be here at 8 in the morning for surgery. We stayed in a hotel the night before and went to the hospital early in the morning. We sat in the waiting room until 1 PM. My child was sitting there with an empty stomach all that time. The doctor nally showed up at 1…" (P.7). Poor Skills Of Some Healthcare Personnel Improper diagnosis and treatment by some physicians Some parents were dissatis ed with the primary inappropriate diagnosis and treatment by some physicians. The parents talked about their experience as follows: "… She was being under her doctor's supervision for two years. The other day, I saw that her eyes had become sunken and her body had been terribly dried. We are here now and the doctor said the dose of her medications has to be adjusted before she discharges home…" (P.6). "… His feet were swollen little by little, his hands, his face, and every part of his body was swollen; he had become like a ball. Then, we took him to the clinic. They said 'it is because of his cold and you don't have to worry about it. It will be OK'. He got worse and hospitalized…" (P.11) Poor skills of some nurses Some parents were disappointed with the poor performance of nurses that had caused pain and discomfort in their children. Regarding this issue, Participant 13 stated: "When we were in the emergency department. A nurse tried to insert her IV catheter, she tried several times but didn't succeed. She still wanted to try one more time. I said, Oh! ma'am, you made a lot of holes in my child's hand. Please, ask someone else to come.'" (P. 13) Poor skills of some non-clinical personnel Some parents complained that incorrect laboratory test results and poor performance of radiology staff caused them extreme anxiety. Participant 5 and 7 describe their experience: "… After I got the lab results. Laboratory technician told me, one of his results was too high. We were under great stress because of this test result. We saw his doctor and re-checked the test...It was 83 and the doctor said it wasn't high at all…" (P.5). "After surgery, they moved him to ICU. Then, the doctor came and ordered radiology imaging. After the imaging was done, they gave the image to the doctor. He said it didn't show anything. He said they had used -I don't know-too little or too much of rays. He got upset and said imaging should be repeated…" (P.7). Discussion This study was carried out to explore parents' experiences during the hospitalization of their children with CKD. Results indicated that the parents had experienced inappropriate behavior and unprofessional performance of the healthcare personnel. Staff indifference and aggression were the behaviors that parents faced during their children's hospitalization. Our ndings are con rmed by the following studies. Reader and Gillespie (2013) reported that patients and their families believed that healthcare personnel ignored their feelings and physical health [22]. Another study showed that 37 cases per 1000 patients experienced indifference during their hospitalization [23]. Robinson et al (2014) indicated that patients complained about the lack of healthcare personnel's sympathy [24]. According to Schnitzer et al study (2012), patients complained about not being involved in medical decision making and lack of personnel's empathy [25]. 
Montini et al (2008) showed that one of the patients' complaints was "patient not taken seriously" [26]. Indifference to the patients and their family members is a problem related to the attitude of the health care providers. This problem has been reported frequently in healthcare facilities and can hurt patients or cause unpleasant clinical consequences. Thus, health policymakers need to investigate this problem and take the necessary steps to solve it. Staff aggression was another behavioral problem mentioned by the participants in our study. The following studies are in line with our study. According to a study by Schnitzer et al (2012), health care unfriendly behavior and disrespect were as patients' complaints [25]. Harrison et al (2016) showed that lack of respect and dignity was the hospitalized patients' complaints [27]. Skålén et al (2015) claimed that 34.6% of patients' complaints were about the communication and attitude of health care providers [28]. Henderson et al (2009) found that many ethical principles and patients' dignity were not met during healthcare activities [29]. When caring for patients, observing ethical principles, and treating the patients with respect, dignity, and compassion determine the quality of care in health care facilities. Aggressive and unethical behavior of the health care team can cause problems for both patients and healthcare providers. The medical team must consider ethical principles when providing care for patients. To reach this goal, constant training should be offered to the medical team members. Disturbed interaction was another nding of our study. The interactive problems between the medical team and parents cause issues in health care providers and the treatment process. If the medical team has defective interaction with parents and provides them insu cient information about their children, it can lead to complications and delayed treatment in their children. Limited studies have been conducted on this topic. Asadi Noghabi et al (2004) claimed that the vital mission of patient education is not performed well, and because of this issue, not only the patient but also their family and society suffer [30]. Mazor et al (2012) reported that patients attributed 47% of the problems to communication di culties, including information exchange issues [31]. In Robinson et al study (2014), patients were dissatis ed with an inadequate explanation about the treatment options [24]. Schnitzer et al (2012) showed that the patients' dissatisfaction was due to the lack of diagnostic and medical information [25]. Montini et al (2008) reported that patients were disappointed with insu cient information, staff's inattention, and inappropriate responses to their questions [26]. Proper interaction of the medical team with patients can lead to positive treatment outcomes. The inability of the medical team to properly interact with patients caused problems, including the negative attitude of patients toward the health care providers, increased patient dissatisfaction, patient noncompliance with treatment, and even change of the doctor. Healthcare staff can help expedite the successful treatment of patients through improving proper interactions with patients. Parents had experienced poor patient care in our study. They voiced that their patients' problems were ignored and the care was not provided on time. 
Similar to our ndings, delayed diagnosis and treatment [27,31], late medical procedures [27], long waiting times for medical procedures [24,26,27], and incorrect test results/analysis [26] were among the patients' complaints in various studies. Poor patient care may lead to non-compensable consequences for patients. As a result, the healthcare system must identify the underlying causes and take necessary actions to avoid this issue. Lack of the required skills in health care providers was another issue as discussed by the participants in our study. The ndings indicated that late and incorrect diagnosis and malpractice by some physician, surgical misoperation by some inexperienced residents, and poor performance of procedures by some nurses and other department technicians caused complications in children, and intensi ed parents' distress and dissatisfaction. Parallel to our nding, a recent study reported that misdiagnosis causes about 40,000-80,000 deaths in US hospitals, annually [32]. In the study by Robinson et al (2014), patients of a fertility center had complaints about incorrect semen analysis reports [24]. Skålén et al.'s study (2015) also indicated that 59.1% of the complaints were related to healthcare/medical treatment [28]. In addition, misdiagnosis, poor skills, lack of knowledge and work experience, carelessness, and negligence of health care providers were reported in previous studies [27,33,34]. These results are in line with our ndings. Lack of competence and skill in healthcare providers can have negative impacts on patients' treatment. Medical team members should make an effort to acquire and enhance their knowledge and skills needed to care for patients in order to avoid harm to themselves and the patients. Study limitation Conclusion The results of this study indicated that improper behavior and unprofessional performance of the healthcare personnel had intensi ed the child's and parents' problems, and making it more di cult for them to deal with these di culties. The medical team can signi cantly help parents by establishing appropriate communication and behavior, providing them the required information about their child's disease and the necessary care to mitigate or eliminate these problems. Also, health care authorities can develop and implement educational and practical guidelines for healthcare personnel in order to improve their knowledge and skills. Declarations Ethics approval and consent to participate This study was registered and approved by the ethics committee of Urmia University of Medical Sciences (IR.UMSU.REC.1397.138). The researcher informed all participants about the voluntary nature of their participation and that they have the right to terminate their cooperation with the researcher at any time during the research process. They were also given an explanation about the purposes of the research and assured about their privacy and con dentiality of their information. All participants signed informed consent before participation. Consent for publication Written informed consent was obtained from the patient for publication. Availability of data and materials The datasets used and analyzed during the current study are available from the corresponding authors on reasonable request. Competing interests The low number of fathers' participants compared with mothers was one of the limitations of this study. 
Possibly because of the signi cant role of mothers in caring for the child in the hospital, fathers' workload and not being able to present to the hospital. Thus, it is recommended to engage a higher number of fathers in similar future studies.
2020-05-21T00:13:07.910Z
2020-05-11T00:00:00.000
{ "year": 2020, "sha1": "d9603dccb127a562ffa72baef029d55dd798902f", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-26956/v1.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "2995a07655b97c4288f08d9aa0b6b924efdfa1dc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233611508
pes2o/s2orc
v3-fos-license
4.0. Determination of sources, spatial variability, and concentration of polycyclic aromatic hydrocarbons in surface water and sediment of Imiringi https://doi.org/10.30574/wjarr.2021.9.3.0121 Abstract Polycyclic aromatic hydrocarbons (PAHs) are very toxic and persistent environmental micro-contaminants that possess health-impacting tendencies. Environmental levels of PAHs are mainly exacerbated by anthropogenic activities. At elevated concentrations, PAHs become toxic and readily bio-magnify across the food chain. This study was undertaken to determine the concentration and identify possible sources of PAHs in Imiringi River. PAH concentrations depicted the following ranges; Oswan-1 (0.00046 – 0.05010 mg/L and 0.00002 – 0.01812 mg/kg); Olem-1 (0.02428 - 2.86264 mg/L and 0.00151 - 3.96536 mg/kg); Oswan-4 (0.00041 - 0.30012 mg/L and 0.00143 - 0.04530 mg/kg) for water and sediment samples respectively. PAHs mostly exceeded the recommended maximum contaminant levels (MCL) stipulated by United States Environmental Protection Agency (USEPA), while high molecular weight PAHs (4 – 6 ring PAHs) are prevalent in the environment. The applied diagnostic ratio (fluoranthene/pyrene) values for surface waters at Oswan-1 (0.8364) and Olem-1 (0.7337), and sediment at Olem-1 (0.4894) were less than 1, thereby reflecting petrogenic PAHs (from gasoline and diesel). On the other hand, Fluoranthene/Pyrene ratio of sediments from Oswan-1 (2.4558), Oswan-4 (2.3565) and surface water at Oswan-4 (2.0252) depicted values greater than 1, indicating pyrogenic PAHs (from coal combustion). Results further showed Fluoranthene/(Fluoranthene + Pyrene) ratio for all sampling locations at values greater than 0.4 for both surface water and sediment. Hence, revealing pyrogenic source PAHs (from combustion of fossil fuel, coal, grass, wood, etc). Overall, the water body showed reasonable hydrocarbon contamination. As such, it is unsuitable for consumption, as well as recreational and agricultural activities. The application of One-way ANOVA statistics showed spatial variability (p < 0.05) for different PAH species across different sections of river, while principal component analysis (PCA) revealed discrete similarities for most PAHs, excluding anthracene and Introduction The Polycyclic Aromatic Hydrocarbons (PAHs) have been extensively studied to understand their distribution, fate and effects in the environment [1]. The PAHs are of special interest because of their carcinogenicity, mutagenicity and teratogenicity. This is because their significant importance lies on the awareness of the biochemical and toxicological roles they play in humans and animals [2,3]. Because of their sources, they are wide spread in the environment. Depending on their volatility, PAHs may be transported far from their original source, ending up in various environmental compartments. Although, their main environmental sink is the organic fraction of soils and sediments [4,5,6]. PAHs are naturally hydrophobic, lipophilic and exhibit great tendency of adsorption to suspended particulates in the aquatic system. Hence, PAHs are majorly deposited in river or seabed sediments [6,7]. This in turn accumulates to levels high enough to exacerbate toxic effects within the environment [8,9]. PAHs are also bio available to aquatic animals and consequently find their way into dietary sources [3,10,11]. 
They are assumed to have potentials for endocrine system disruption [12,13,14] and are also listed as priority organic pollutants with their photo-oxidation products and alkylated derivatives on account of their tendency to be carcinogenic, teratogenic and mutagenic [10,13,14]. These aromatic compounds (PAHs) are naturally present in fossil fuels and find their way into the environment as byproducts of incomplete combustion of organic materials (oil and gas, coal, biomass, firewood, garbage, tobacco or charbroiled meat), by way of incineration, vehicular exhaust emissions, oil exploration, power generation and various industrial production practices [15,16]. Abdel-Shafy [17] observed that PAHs are emitted into the atmosphere and other receiving environmental compartments mainly via the processing of crude petroleum (refining of crude oil and synthetic fuels), use of fossil fuels (thermal power generation, domestic heating and burning of organic wastes at unregulated dumpsites, vehicle exhaust emissions) and fallouts (wild/forest fires and volcanic activities). Larger quantity of these compounds arrive the marine environments from coastal regions as urban run-off, domestic wastes, river run-off, industrial discharges and emissions from engine and bilge pumping. Other times, they infiltrate the aquatic bodies in the form of leachates from bulwarks and docksides [16]. There are two Sources of PAHs in the environment, namely; Pyrogenic and Petrogenic sources [4]. Pyrogenic PAHs are formed whenever organic substances are exposed to high temperatures under low oxygen or no oxygen conditions. As such, the destructive distillation of coal into coke and coal tar, or the thermal cracking of petroleum residuals into lighter hydrocarbons results in intentionally occurring pyrolytic PAHs. Meanwhile, other unintentional processes occur during the incomplete combustion of motor fuels in cars and trucks, the incomplete combustion of wood in forest fires and fireplaces, and the incomplete combustion of fuel oils in heating systems [18]. PAHs formed during crude oil maturation and similar processes are called Petrogenic [19]. Such Petrogenic PAHs are common due to the widespread transportation, storage, and use of crude oil and crude oil products. Some of the major sources of Petrogenic PAHs include oceanic and freshwater oil spills, underground and above ground storage tank leaks, and the accumulation of vast numbers of small releases of gasoline, motor oil, and related substances associated with transportation. The aim of this study is to determine the sources, spatial distribution, and compare the concentration levels of polycyclic aromatic hydrocarbons in the Imiringi River, whilst assessing the status of the environment with respect to stipulated United States Environmental Protection Agency (USEPA) regulatory guidelines. To achieve the above aim, the following objectives were set thus; to quantify the levels of PAHs in sediments and water samples of Imiringi River at different sampling points, as well as decipher the sources of PAHs by calculating the (Fluoranthene/pyrene), and (Fluoranthene/Fluoranthene + pyrene) ratios. Finally, the study is aimed at assessing environmental compliance viz: USEPA maximum contaminant level (MCL) of PAHs. Equipment Specification HP 6890 series gas chromatograph coupled with an HP 5973 mass selective detector (MSD) was used for the analysis. The capillary column used for separation of PAH components is the DB-5 type (30 m length x 0.32 internal diameter wide-bore). 
This column is lined with a stationary phase material (95% dimethyl -5% phenyl polysiloxane). Also, the carrier gas utilized for this test is 99.999% helium. Study Area The study was carried out along the Imiringi River stretch, which is located in Imiringi town, Ogbia Local Government Area of Bayelsa State. This water channel runs across the entire length of Imiringi community. For the purpose of this study, three sampling points were geo-referenced along the river course (Oswan-1, Olem-1 and Oswan-4) ( Table 1). Sample Collection Sampling was done on the 30th of August, 2019 at Oswan-1, Olem-1 and Oswan-4 sections of the Imiringi River. Surface water samples were collected and transferred into 250 mL transparent glass bottles with glass corks. A grab sampler was deployed to acquire seabed sediments from each geo-referenced sampling point and transferred into Ziploc bags. With a masking tape, all sampling containers were distinctly labeled. Afterwards, water samples were fixed with sulphuric acid (H2SO4). Meanwhile, all samples were stored in a cool box containing ice packs, before being immediately transported to the laboratory for sample pre-treatment and analysis. PAH extraction and clean up For each water sample, exactly 250 mL of was extracted with 25 mL of dichloromethane/n-hexane solvent (1:3 v/v) in a separating funnel. The aqueous phase was collected and extraction was carried out with a second aliquot of dichloromethane/n-hexane solvent (25 mL) in order to improve the percentage recovery of extraction (procedure is as extensively described in USEPA Method 3510C [20]. On the other hand, 1 g of air-dried and homogenized sediment sample was extracted via sonication, using duplicate 15 mL aliquots of dichloromethane/n-hexane solvent (1:3 v/v) [7]. The respective extracts from the different sample matrices were then evaporated to 1 mL using a temperature regulated water bath at 40 o C. The concentrated extracts were cleansed by solid phase extraction method. Afterwards, 1 µL portion of the cleansed extract was injected into the GC-MSD. GC-MSD Analysis Following the extraction of PAHs from water and sediment samples, 1 µL of the reconstituted sample extract was injected into the gas chromatograph -mass selective detector (GC-MSD). Separation of each PAH fraction is known to separate as the vapour constituent partitions between the mobile (helium) and stationary (95% dimethyl -5% phenyl polysiloxane) phases. In order to obtain optimum resolution of the calibrated PAH components, the following equipment conditions (starting oven temperature = 65ºC, final oven temperature = 320 ºC, detector temperature = 310ºC, injector temperature = 275ºC, helium flow rate = 30 mL/min. Statistical Analysis of Data/Spatial Variability One-way ANOVA and principal component analysis were used to study variabilities of PAHs in the different sites and among PAHs. SPSS statistical package (Windows version 18) and software Excel 2007 were used for data analysis. PAH Distribution The concentration of PAH components identified in surface water and sediment samples from three (3) site locations (Oswan-1, Olem-1 and Oswan-4) of Imiringi River are presented in Tables 2 and 3. For the surface water samples, naphthalene, acenaphthene, acenaphthylene, fluorene were not detected in all samples, while phenanthrene was not detected in Olem 1 and Oswan 4. Meanwhile, anthracene was not detected in Olem 1. 
On the other hand, sediment samples were void of naphthalene, acenaphthene, acenaphthylene components in Olem 1 and Oswan 4 locations, while fluorene was not detected in Olem 1. PAH concentrations ranged from (0.00E+00 to 5.07E-02 mg/L), (0.00E+00 to 2.96E+00 mg/L), and (0.00E+00 to 2.99E-01 mg/L) for Oswan-1, Olem-1 and Oswan-4 surface water samples respectively. Most noticeably, the readily volatile light molecular weight polycyclic aromatic hydrocarbons (LPAHs) (Naphthalene, Acenaphthene, Acenaphthylene, and Fluorene) were found below measurable detection limit (MDL) in water samples. These fractions may have been lost due to environmental attenuation factors such as elevated temperature, low humidity, and tidal nature of the river, amongst others. Aigberua [21] had reported loss in light molecular weight PAHs in surface waters of the Imiringi River. In addition, Table 3 depicted a total of 14, 10 and 11 PAH species in the sediment samples collected from Oswan-1 Olem-1 and Oswan-4 field locations respectively. The higher number of PAH species observed at Oswan locations may be due to its upstream location and the effect of redistribution towards the downstream river section at Olem-1. Generally, high molecular weight fractions (HPAHs) are the most important species found in the environment. Hence, Dibenz(a,h)anthracene was observed with the highest concentration at upstream locations of Oswan-1 (2.09E-02 mg/kg) and Oswan-4 (4.50E-02 mg/kg) respectively. On the other hand, Benzo(b)fluoranthene (3.97E+00 mg/kg) was the most important PAH fraction at Olem-1 downriver location. Like the surface water trend observed, the downriver sediment location at Olem-1 was recorded as being most contaminated. Apart from the unidentified PAH compounds, all other observed PAHs had concentrations exceeding the maximum allowable concentration (0.0002 mg/L) as given by Agency for Toxic Substance and Disease Regulation (ATSDR). Similarly, Aigberua [21] had reported the prevalence of high molecular weight PAHs (4-6 ring PAHs) in surface waters of Imiringi River. In the same vein, Yang et al. [22] had reported the prevalence of 4 to 6 ring PAHs in suspended particulate dust around the Niger Delta region. In contrast, Wu et al. [6] had reported 2 to 4 ring PAHs as the most important fractions of river Chaohu, China, thereby reflecting the predominance of petrogenic PAHs. The standards for maximum contaminant level (MCL) of PAHs as reported by USEPA (United States Environmental Protection Agency) are provided in Table 6. Table 4 Standards and regulations for PAHs in water [23]. Table 6 shows the maximum contaminant level (MCL) reported by USEPA. The comparative evaluation of PAHs obtained from test water sample shows concentrations exceeding USEPA standards. Based on the results obtained from this study (Tables 2 and 3), water from Imiringi River is not suitable for drinking, domestic and recreational purposes. This is owing to the risk of ingestion and biomagnifications across the food chain. Results obtained from this study is similar to the findings of Jack and Abiye [24] where PAH concentrations of the Eleme and Okrika creeks of Niger Delta were reported to exceed USEPA regulatory limits. Applied diagnostic ratios and delineation of PAH sources From the concentration values presented in Tables 2 and 3 respectively, the source of elevated PAHs can be deduced and likely reasons for exacerbated levels can be predicted. 
The two (2) diagnostic ratios that were applied for this source apportionment include: Fluoranthene/Pyrene (Flu/Py) ratio, and Fluoranthene/(Fluoranthene + Pyrene) (Flu/(Flu + Py)) ratio as described in Table 4. Table 5 Diagnostic ratio values and PAH emission sources [25]. Diagnostic Ratio Petrogenic Pyrogenic Flu/Py < 1 (gasoline, diesel) > 1 (coal combustion) Flu/Flu + Py < 0.4 (petrol) > 0.4 (fossil fuel, grass, wood) Flu represents Fluoranthene, while Py represents Pyrene Table 5 gives a direction on the sources of distribution of PAHs in the water and sediment samples. The values obtained from the applied ratios are an indication of likely originating sources of PAHs. The diagnostic ratio values for the surface water and sediments of Oswan-1, Olem-1 and Oswan-4 sample points along the Imiringi River are presented in Table 6. Table 5 indicates that diagnostic ratio Flu/Py for the surface water samples at Oswan-1 (0.8364), Olem-1 (0.7337) and sediment sample at Olem-1 (0.4894) were less than 1 (< 1), thereby depicting petrogenic source of PAHs (gasoline and diesel). This could have emanated from spill-over effects of industrial wastes and oil spillages from nearby oil installations along the river tributary. While the Flu/Py ratio of sediments at Oswan-1 (2.4558), Oswan-4 (2.3565) and surface water at Oswan-4 (2.0252) were observed at concentrations greater than 1 (> 1), indicating a pyrogenic source of PAHs (coal combustion). Results highlighted in Table 5 further revealed most Flu/(Flu + Py) ratios for all sample sites at values greater than 0.4 (> 0.4). This is an indication that the sources of PAHs are pyrogenic (pyrolysis or combustion of Fossil fuel, Grass, wood, among others). This may be due to intensive agricultural practice of bush burning. In addition, the sediment sample at Olem-1 depicted a Flu/(Flu + Py) ratio of less than 0.4 (< 0.4), this being an indication of petrogenic-sourced PAHs (petrol combustion). Similar results were obtained when Zhi, et al. [26] studied the fate of PAHs in water columns from Poyang lake. Statistical Data Analysis for the Identification of Spatial Variability Using the principal component analysis multivariate statistical tool, PAH species variability was determined for surface water and sediment across the three river sections. Figures 1 and 2 highlight the associated trend/pattern. (14) PAH species to be discretely located with close similarity in both surface water and sediment samples. However, two PAH fractions each (Anthracene, Dibenz(a,h)anthracene) and (Benzo(b)fluoranthene, Dibenz(a,h)anthracene) showed the most dissimilarity for Imiringi surface water and sediment respectively. For surface water, the spatial variability may have been due to outlier concentrations of Anthracene (Ant) at Oswan-4 upstream and Dibenz(a,h)anthracene at Olem-1 midstream section. The elevated level of LPAH (Anthracene) in surface water, especially within the upriver section (Oswan-4) may have stemmed from oil seepages that stem from illegal bunkering activities and faulty underlying oil pipelines. Conversely, the aggravated concentration of HPAH (Dibenz(a,h)anthracene) at Olem-1 section (downstream) is probably due to open combustion of biomass from waste dumps and bush burning occurrences ( Figure 1). Meanwhile, the spatial variability in sediment may have resulted from outlier concentrations of Benzo(b)fluoranthene at downstream location (Olem-1) and Dibenz(a,h)anthracene within upstream locations (Oswan-1 and Oswan-4). 
In addition, the high level of HPAHs (Benzo(b)fluoranthene and Dibenz(a,h)anthracene) is solely an indication of biomass combustion (Figure 2). A spatial variability trend similar to the current study was reported for surface water samples collected from Imiringi River [21]. Also, due to oil bunkering activities along the Azuabie creek, the affected water ways recorded outlier concentrations of hydrocarbons (TPH and PAHs) as compared to control location. As such, PCA of contaminated water and sediments showed significant variability from the control location [14]. Conclusion This study established that PAHs are present in the Imiringi River at significantly diverging concentrations across geospatial sampling points. PAHs depicted concentrations exceeding USEPA recommended contamination levels in water. As such, the Imiringi water body can be considered unsuitable for drinking purposes, or domestic and recreational activities. The PAHs found within the environment are linked to both petrogenic (petrol combustion) and pyrogenic (forest fires, coal combustion and burning of fossil fuel) sources. However, there is a comparatively higher distribution of pyrogenic HPAHs. This may likely be due to the poor volatility of HPAHs.
2021-05-04T22:06:38.595Z
2021-03-30T00:00:00.000
{ "year": 2021, "sha1": "d6973643c6323da0ae6bd1bba175ed7ceefd9f6b", "oa_license": "CCBYNCSA", "oa_url": "https://wjarr.com/sites/default/files/WJARR-2021-0121.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "679031cb80cebd23f731b620d11b833cf2998392", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
271034741
pes2o/s2orc
v3-fos-license
Polish Mother and (Not) Her Children: Intersectional State-Violence against Minors in Poland : This article seeks to explain the political responsibility that Polish right-wing female politicians directly associated with the 2015–2023 Polish government and the then-ruling Law and Justice Party bear in the state-sanctioned violence against minors in the context of LGBT-and immigration-related issues. Its main assumption is that, in times of the nationalist surge that has been sweeping Poland, women using anti-LGBT and anti-immigration discourses helped to legitimize discriminatory state practices and, consequently, made a significant contribution to the enactment of white, Christian, and heteronormative identity on Polish children. Drawing upon Critical Discourse Analysis, this work examines the anti-LGBT and anti-immigration political talk by female politicians who, in their narrative strategies, adopt the position of a “Polish mother” on a mission to save a “child in danger”. Through my analysis, I aim to demonstrate that anti-LGBT and anti-immigration discourses are equally significant areas of women’s political engagement. Despite the prevalent cultural norms of caring motherhood, women do exercise their agency in political struggles as supporters of discriminatory state policies directed against minors by re-politicizing a symbolic figure of the “Polish mother”. Introduction The aim of this article is to account for the political responsibility that Polish right-wing female politicians closely linked to the 2015-2023 Polish government and the then-ruling Law and Justice Party assume in the state-sanctioned violence against minors that happens in the context of LGBT-and immigration-related matters.The study supposes that, in times of nationalist revival that has been spreading across Poland, women who employ anti-LGBT and anti-immigration discourses play a prominent role in sanctioning discriminatory state practices by enacting white, Christian, and heteronormative identity on Polish children.The text builds on the existing body of literature that deals with the intersections of gender and nationalism in Europe to address yet another important manifestation of discursive crossroads between sexuality and race.It engages critically with the theoretical underpinnings of biopolitics and reproductive justice to explain the effect that the use of anti-LGBT and anti-immigration discourses has on social reality.Drawing upon Critical Discourse Analysis, this work scrutinizes parliamentary debates during which Polish rightwing female politicians adopt the position of a "Polish mother" on a mission to save a "child in danger" in support of anti-LGBT and anti-immigration political initiatives.I associate the symbolic figure of the "Polish mother" with the collective imaginary in which a woman represents a bearer of the national identity.An emblematic image of the "child in danger" is referred to as representing a child that bears the nation's future.I analyze the interaction of these two symbols to show a discursive paradox of selective protection of minors in two different contexts.The first context is anti-LGBT campaigns.Since 2016, as a result of numerous reports on psychological problems among LGBT teenagers, schoolchildren in Poland have been participating in an annual "Rainbow Friday" to show solidarity with and support for their LGBT peers.In response to this initiative, various politicians started fueling anti-LGBT sentiment that transformed into anti-LGBT state-sanctioned proposals in the 
area of public education.At the same time, Members of Parliament (MPs) engaged in anti-abortion activities with the purpose of protecting an "unborn child", which led to the successful restriction of abortion laws in 2021.Although right-wing female politicians claim that both anti-LGBT and anti-abortion campaigns aim to "save children in danger", the contrast between a prejudiced discourse in anti-LGBT crusades that scapegoated LGBT school children and a rhetoric of love towards the "unborn child" that accompanied antiabortion campaigns sparked wide social outrage.The second context concerns immigration.Since 2021, as a consequence of Russian-Belarussian hybrid warfare against the European Union, tens of thousands of non-European migrants, including children, have been seeking to cross the Polish border with Belarus.To resolve the crisis, Poland passed a series of laws allowing the immediate expulsion of illegal border crossers, which culminated in accusations of pushback.In contrast, after the Russian invasion of Ukraine on 24 February 2022, Poland welcomed millions of Ukrainians with children, granting them the same healthcare and education services as Polish citizens.On the one hand, the way Poland helped integrate Ukrainians built up a welcoming self-image of the government (then in power), but on the other, it exposed injustice towards non-European migrants at the Polish-Belarusian border.Through my analysis, I intend to show that anti-LGBT and antiimmigration discourses are of equivalent importance when it comes to women's political activity.Despite the dominant stereotypes of caring motherhood, women weaponize the symbolic figure of the "Polish mother" to endorse the xenophobic state-sanctioned policies that affect minors and, thereby, practice their agency in political struggles. 
Literature Review
Studying women's political engagement in anti-LGBT and anti-immigration campaigns in the Polish context, I want to enrich existing research on state-sanctioned xenophobia in Europe with a geographically specific illustration of women's role in the legitimization of exclusionary politics. Prior academic research has shed light on various forms of women's active participation in right-wing political developments across Europe [1][2][3][4][5][6]. Also, over the past years, Polish scholars have produced a number of significant studies on prejudiced rhetoric in different contexts. A seminal contribution has been made by Maciej Duda [7], who investigated right-wing female politicians' support for discriminatory state policies, concentrating exclusively on anti-genderism. Another insightful exploration has been offered by Monika Bobako [8], who studied examples of Islamophobia amongst liberal feminists. Although Agnieszka Graff and Elżbieta Korolczuk [9] conducted a thorough analysis of discursive entanglements of anti-gender and anti-immigration narratives as an expression of anti-EU sentiment, they left the importance of right-wing female politicians understudied. Enlarging on my previous article, which investigated the centrality of women's rights in anti-immigration discourses endorsed by Polish right-wing female politicians [10], I attempt to cover the existing gap in the literature and inform the wider debate on the prominence of right-wing women in xenophobic state practices. More specifically, I aspire to challenge a long tradition of Polish scholarship, which has claimed that the figure of the "Polish mother" is devoid of any emancipatory potential [11][12][13][14]. I intend to shift the attention from the cultural and social aspects of motherhood onto its emergent political dimensions that the current studies seem to have failed to address. I try to demonstrate that women who appropriate a symbolic identity of the "Polish mother" and weaponize it to act as defenders of the "child in danger" transgress stereotypical femininity and exercise their agency in new political contexts. To do so, I build on Anna Zawadzka's argument [15] that "feminized" (or, for the purpose of this article, "maternalized") narrative strategies help to forge an idea of Polishness as a superior morality and, thereby, sanction discrimination of non-white, non-Christian and non-heteronormative minors. I hope for this article to provide a novel perspective on women's support for state-sanctioned violence against minors and its consequences not only for the well-being of individual children but also for the social reality that surrounds them.
Theoretical Framework
To tackle the shortcomings of the available scholarship, this text engages critically with the theoretical underpinnings of biopolitics and reproductive justice. In Society Must Be Defended, Michel Foucault argued that "biopolitics deals with the population as political problem, as a problem that is at once scientific and political, as a biological problem and as power's problem" [16] (p.245) and discussed disciplinary and regulatory mechanisms that a state can use to exercise its power over people. For the philosopher, "sexuality represents the precise point where the disciplinary and the regulatory, the body and the population, are articulated" [16] (p.252). A political system that is founded upon biopower-that is, a set of norms that "can be applied to both a body one wishes to discipline and a population one wishes to regularize" [16] (p.253)-can exercise its right to "make live or let die" [16] (p.241). Moreover, according to Foucault, racism is "a mechanism that allows biopower to work" [16] (p.258). In Foucauldian terms, through a hierarchy of species, racism establishes "a biological relationship" between different races that perceive each other as existential threats to their respective populations; that is, for a superior "us" to live, an inferior "they" must vanish. Therefore, racism is "the indispensable precondition" [16] (p.256) to render "killing" a permissible political tool, predominantly through its indirect forms: "the fact of exposing someone to death, increasing the risk of death for some people, or, quite simply, political death, expulsion, rejection, and so on" [16] (p.256).
In addition, feminist theorizing on reproduction provides a useful framework for understanding the link between women and state biopolitical practices. As Floya Anthias and Nira Yuval-Davis [17] illustrated, women can affect and be affected by a broad spectrum of state policies. Firstly, due to their role as biological reproducers, women have influence over and are subject to policies aiming at population control (e.g., forced sterilization, birth control campaigns, and child benefit systems). Secondly, as guardians of ethnic/national boundaries, women act on as well as fall victim to religious and social gatekeeping that cultivates the symbolic group identity. Thirdly, being ideological reproducers, it is predominantly women's activity to socialize children into national collectives. Fourthly, the figure of a woman carries a symbolic meaning and transmits a national spirit in ideological discourses (e.g., wars are supposedly fought for the sake of "women and children"). Finally, women are present in the military, both through active involvement in warfare and as support to men in combat.
Last but not least, recent theories of reproductive justice that take new directions on state discriminatory practices in Europe and North America offer a vibrant array of theoretical lenses to be used in the analysis of women's contribution to state-sanctioned violence. This study is informed by (a) a theoretical conceptualization of state-driven population control efforts in the United States [18]; (b) theoretical approaches to situated and intersectional bordering processes [19], the racialized-gendered logic of "crimmigration" [20] as well as the role of street-level bureaucrats in the racialization and criminalization of asylum seekers [21] in the United Kingdom; and (c) an emergent theory of reproductive racism that stems from an analysis of a "birth-rate agenda" present in Hungarian, Polish, Italian and Greek right-wing politics [22].
Materials and Methods
One of the main roles of the parliament is to debate issues of public importance. Parliamentary debates are a particular type of political discourse characterized by a set of rules and norms that apply to deliberation on any topic [23][24][25]. First and foremost, the debates take place in a controlled and highly regulated environment. They start with an official address and are formally closed. Agendas of parliamentary sessions are set in advance, and parliamentarians take turns to speak for as much time as defined by parliamentary procedures. Secondly, MPs are aware that their appearances are for the record, and they normally prepare their speeches in advance. Thirdly, by design, parliamentary debates are supposed to be argumentative so parliamentarians can present their political stands, interact with their opponents, and argue for or against bills. Therefore, analysis of argumentation strategies employed during parliamentary debates gives insights into social representations constructed by MPs that translate into legislation and influence public opinion.
Over the past decade, the parliament of Poland has increasingly dealt with topics related to, on the one hand, immigration and refugee crises and, on the other, LGBT and reproductive health (e.g., abortion and in vitro fertilization). Therefore, the latest social and political developments posed many opportunities for parliamentarians to display various forms of elite prejudice. This article studies parliamentary debates as a specific genre of political talk with a focus on the objectives and beliefs of MPs who take part in these communicative events. I analyze parliamentary debates as a form of social and political interaction that can serve, amongst others, to reproduce discriminatory practices. Polish right-wing female politicians directly associated with the 2015-2023 Polish government and the then-ruling Law and Justice Party are the subject of this analysis. The excerpts studied in this article come from parliamentary appearances of the following politicians: Agata Katarzyna Wojtyszek, Anna Dąbrowska-Banaszek, Anna Kwiecień, Anna Maria Siarkowska, Anna Milczanowska, Anna Paluch, Barbara Bartuś, Barbara Dziuk, Barbara Socha, Beata Strzałka, Bożena Borys-Szopka, Dominika Chorosińska, Elżbieta Duda, Elżbieta Płonka, Ewa Szymańska, Iwona Kurowska, Joanna Borowiak, Józefa Szczurek-Żelazko, Katarzyna Sójka, Lidia Burzyńska, Maria Kurowska, Marzena Machałek, Mirosława Stachowiak-Różecka, Teresa Glenc, Teresa Pamuła, and Teresa Wargocka. Unlike the majority of Polish right-wing female politicians studied in my previous article on Muslim immigration as a threat to women's rights, who advanced their political careers after their active support for anti-Muslim crusades [10], the women examined in this study have become prominent party representatives only recently, thanks to the vacancies created by their predecessors and the efforts they have been making to gain visibility with their contentious political talk. Some are very experienced politicians with long-standing political careers in both local government and the parliament, and some began their political careers in the last three terms.
The following study of parliamentary debates employs Norman Fairclough's take on Critical Discourse Analysis, in which the scholar illuminates how relations of power and ideologies shape discourse and, at the same time, how discourse forms social identities, social relations as well as systems of knowledge and belief [26] (p.12). The model of discourse put forward by Fairclough links language analysis to social theory and encompasses three key dimensions: "discursive practice", "text", and "social practice". In line with this paradigm, critical discourse analysis of a speech act covers three aspects: the production and interpretation of a piece of text, language examination of this text as well as the institutional and societal context of the speech act itself [26] (p.4), [27] (p.94). Drawing upon Fairclough's definition of ideologies as "constructions of reality (the physical world, social relations, social identities), which are built into various dimensions of the forms/meanings of discursive practices, and which contribute to the production, reproduction or transformation of relations of domination" [26] (p.87), I start with the study of the interdiscursivity that Polish right-wing female politicians incorporate in their political talk to overpower alternative constructions of meanings and, consequently, sustain hegemony through discourse [26] (p.92), [28] (p.17), [29] (p.56), [30] (p.76). Then, I look into the linguistic tools used by Polish right-wing female politicians in their parliamentary appearances to discover how different social and political phenomena are framed and relationships between various participating figures produced. Finally, I explore the impact that Polish right-wing female politicians have on the intersectional state violence against minors in Poland through the social practice of discourse when they "produce and reproduce social realities through either maintaining or transforming social beliefs" [31] (p.115).
I started my research by gathering data on issues related to immigration, refugee crises, LGBT, gender, and reproductive health that had become dominant topics on the parliamentary agenda due to the latest political and social developments in Poland. I decided to study the 9th term of the Sejm (the lower house of the Polish parliament that plays a governing role in the legislative process [32]) that started on 12 November 2019 and ended on 12 November 2023 because of the abundance of data available as well as the overall research gap on this period. Throughout the 9th term, there were 81 sessions of the Sejm, which amounted to 195 parliamentary transcripts (available online on the Sejm's website) for analysis. I uploaded all the electronic text files into the MAXQDA 2022 Version 22.7.0 software that I chose as my data analysis tool and started indexing and categorizing the transcripts to identify the final data sample, which comprised a set of 50. My sampling logic [33] was purposive (driven by the research questions) and followed an iterative process (I collected and analyzed data successively). To find the relevant exemplars of parliamentary speeches, first, I had to sample the right parliamentary debates and then, to identify the most illustrative excerpts, sample inside these debates. Because my intention was to create a heterogeneous data sample to integrate both variety and variation into the study, I looked for both typical and extreme examples of text. The resulting collection of deliberately selected passages constituted the corpus of empirical data for further qualitative analysis. To examine this volume of text, I defined a coding system [34] and coded all the extracts accordingly. Through systematic coding of passages that covered the same issue, I created a framework of themes that I gradually developed into a more elaborate coding frame-a structured list of different codes with a set of rules to apply consistently throughout my textual analysis. I arranged the codes hierarchically: the first level of coding marked a related electronic text file (e.g., a code "reproductive health"); the second level was used to code a relevant debate in a given file (e.g., codes "abortion" and "in vitro fertilization" contained under the first-level code "reproductive health"); and, then, I applied the third-level code to selected excerpts (e.g., a code "abortion seen as killing" nested under the second-level code "abortion"). Once I coded my data sample, I systematically retrieved passages with the same codes assigned and performed the textual analysis.
Discursive Practice
Before scrutinizing the linguistic tools that Polish right-wing female politicians use in their parliamentary speeches, I examine the diverse discourse types included in their rhetoric strategies to provide a more nuanced context for the following textual analysis. The study of interdiscursivity embedded in political talk is meant to demonstrate how right-wing women manage to suppress the construction of alternative meanings and, thereby, arrive at discursive hegemony.
First and foremost, anti-LGBT and anti-immigration sentiments are framed in the discourse of love. The emotive language used by right-wing women in their narrative strategies creates a social reality founded upon a set of particular social beliefs. As Sara Ahmed argues in her book entitled The Cultural Politics of Emotion, "emotions are social and cultural practices" [35] (p.9) that serve to "align some bodies with the nation, and against those others who threaten to take the nation away" [35] (p.12) and, thereby, make language work as "a form of power" [35] (p.195). In this article, I focus exclusively on the emotion of love and study "a narrative of love as protection" [35] (p.123) that Polish right-wing female MPs employ in their discourses. Sara Ahmed proposes a concept of love as an example of an affective economy in which emotions of love are attributed to certain figures, move between them, and, as a result, unite them against the Other. According to Ahmed, "love functions as the promise of return" [35] (p.131). Acting out of love means investing in the nation. Therefore, "the return of the investment in the nation is imagined in the form of the future generation" [35] (p.124)-that is, children who will reproduce a national ideal. Nevertheless, the desired return is at risk due to the presence of the Other [35] (p.123). The affective dynamics of Polish right-wing women's political engagement create an affective alignment that is crucial to understanding the emotionality of the texts studied. The resulting alignment against the Other that will not reproduce the national ideal becomes a security relationship between two figures, namely the "Polish mother" and the "child in danger".
Secondly, the cultural significance of the emotive language of love is materialized through the discourse of motherhood. For right-wing female politicians, displaying their identity as the "Polish mother" is a means to manifest their active participation in shaping the national community [13]. The skillfully applied figure of the "Polish mother" helps female parliamentarians move beyond women's natural capacity and cultural responsibility to give birth and socialize children into the national community [36]. Acting upon motherhood as the fundamental component of their identity that determines their role in biological and cultural reproduction, women make themselves credible in politics. When right-wing female MPs apply the maternal frame to their political agenda, they blur the public/private division as they assume roles of protectors, traditionally assigned to men in the patriarchal family, and, thereby, transfer the agency from men to women. Therefore, enacting a social identity of a mother that protects her risk-exposed child may become a political statement.
Consequently, anti-abortion discourse appears to be the backbone of state child protection policy. Framed as a fight for fundamental human rights, such as the right to life, the need to protect the "unborn child" is of paramount importance to the Law and Justice Party. Consequently, parliamentary debates on abortion have always been an opportunity to confront the opposition and construct polarized identities (i.e., pro-life vs.
pro-choice). In 2016, the Stop Abortion pro-life coalition (steered by the ultraconservative Ordo Iuris Institute) attempted to introduce a total ban on abortion through a citizens' initiative. The resulting bill proposal triggered a massive public outcry and unprecedented street demonstrations that, at the time, made the ruling party withdraw their support for the proposed legislation [9] (pp. 78-79). Nevertheless, on 27 January 2021, Poland's Constitutional Tribunal (an institution widely considered dependent on the Law and Justice Party), chaired by a woman, Julia Przyłębska, issued a ruling that eliminated abortion for fetal abnormalities and effectively introduced a near-total ban on pregnancy terminations [37]. According to the statistics published by the National Health Fund, the number of legal abortions decreased by 65% within one year of the ruling [38]. On 23 June 2022, after a heated debate, the Sejm rejected a bill proposal submitted as a citizens' initiative that would have significantly liberalized the current abortion law [39].
Furthermore, the anti-LGBT discourse that is manifested in the analyzed political talk by opposing sex education at schools is the successor of and perhaps an upgrade to the anti-gender discourse that has been present in the Polish public sphere since 2012. The intensification of anti-LGBT initiatives coincides with numerous disclosures of pedophilia scandals in the Catholic Church. Many claim that it is a strategy to transfer the accusations of child abuse from actual offenders linked to the Catholic Church to international institutions that "stigmatize the traditional family model" and "sexualize children" [9,40]. The Law and Justice Party members take an active part in fueling anti-LGBT sentiments in support of state-sanctioned anti-LGBT initiatives-for example, local authorities adopting anti-LGBT resolutions to create LGBT-free zones across the country or the president publishing a "Family Charter" to ban the promotion of LGBT ideology in public institutions [41]. The widespread anti-LGBT attitude in Poland has an impact on non-heteronormative children. Since 2016, in response to alarming data on suicidal thoughts and suicide attempts among LGBT teenagers, the Campaign Against Homophobia has been encouraging schoolchildren in Poland to participate in an annual "Rainbow Friday" to show solidarity with and support for their LGBT peers [42]. To prevent pro-LGBT campaigns from gaining further popularity, the parliament passed three bills that ban organizations that "promote sexualization of children" from schools. The first two acts were vetoed by the president on the grounds of not having received enough social acceptance [43,44]. The third proposal was approved as a citizens' initiative on 17 August 2023 [45]. If the president does not veto the legislation this time, empowering non-heteronormative children will become even more difficult.
Finally, anti-immigration discourse has been part of parliamentary debates since 2015. In the Polish case, matters related to immigration are multifaceted because the great majority of refugees and immigrants who arrived in Poland before 2015 came primarily from Ukraine, Russia (mostly Chechens), Belarus, and other post-Soviet states, not from the Middle East or Africa. The 2015 relocation proposal put forward by the European Commission that would have obliged Poland to admit 9287 refugees from the Middle East or Africa [46] resulted in public disapproval and significantly increased reluctance towards refugees in 2016 [47]. Since 2021, tens of thousands of non-European migrants, including children, have been seeking to cross the Polish border with Belarus. In response to the crisis, in September 2021, Poland introduced a 90-day state of emergency along the Polish-Belarusian border, including a ban on the media and NGOs entering the area [48]. Subsequently, in October 2021, the Polish parliament passed a law allowing border guards to immediately expel illegal border crossers and maintaining the ban on the media and NGOs entering the area [49]. The new legislation triggered accusations of pushbacks, including unlawful treatment of minors [50,51]. In the summer of 2022, the Polish government completed a border wall to keep migrants out [52]. In stark contrast, after Russia invaded Ukraine on 24 February 2022, Poland hosted millions of Ukrainians, including a large number of children. By passing a bill that equipped Ukrainians with the same access to healthcare and education services that every Polish citizen receives, the Law and Justice Party created a very positive self-image of a welcoming, refugee-friendly state. The warm welcome offered to Ukrainians backgrounded the ongoing crisis at the border with Belarus and, simultaneously, exposed the prejudice towards the non-European migrants and their children [53].
Text
Having examined the diverse discourse types present in the rhetoric strategies employed by Polish right-wing female politicians, I now turn to the vast range of linguistic tools that are used to argue either for or against LGBT- and immigration-related bills during parliamentary sessions. To find out how certain social and political phenomena are situated and relationships between discourse participants are constructed, I focus on the following features of text: clauses used to construct identities and social relations [26] (pp.185-190); denoted (explicit) and connoted (implicit) meanings of clauses with which the relationships between participating figures and their roles in the respective processes are determined [26] (pp.177-185); modality that expresses politicians' affinity with the statements made [26] (pp.158-162); and the ways in which parliamentarians manage their interaction with the opposition [26] (pp.152-158). To allow for a logical flow, the following analysis is divided into sub-sections guided by content-related themes identified in the studied political talk.
Protecting Citizens in the Prenatal Stage
The discourse of protecting children remains at the center of the right-wing political agenda. In fact, for Polish right-wing female politicians, children are subject to state protection from the moment of conception: "A child is a human being from the moment of conception" (Elżbieta Płonka) [54] (p.173); "A human being is created as a result of the fusion of female and male sex cells" (Anna Dąbrowska-Banaszek) [55] (p.132); "From the beginning, a child is a separate being with individual rights, including the fundamental right to life" (Anna Dąbrowska-Banaszek) [55] (p.132). The use of objective modality combined with medical terminology makes these claims sound universal and unquestionable. Moreover, parliamentarians frequently accentuate their declarative statements by alluding to their social identities: "I will repeat as a doctor" (Anna Dąbrowska-Banaszek) [55] (p.132); "I have been a doctor for 45 years, I have seen many things and I know a lot about the human life" (Elżbieta Płonka) [54] (p.173). Nevertheless, the debate on abortion is framed predominantly in legal (thus authoritative) terms: "The law should primarily protect citizens, with particular emphasis on the most vulnerable ones. Children in the prenatal stage are undoubtedly the most innocent and vulnerable beings" (Teresa Glenc) [56] (p.99); "The current legal status is proof that abortion in Poland is forbidden" (Elżbieta Płonka) [56] (pp.91-92); "We defend the constitutional right to life from conception to natural death" (Elżbieta Płonka) [57] (p.167). Therefore, in line with this argumentation, the "unborn child" is constructed as a rightful citizen who is subject to state protection.
Killing "Unborn Children"
In general, the collected material shows that right-wing female MPs depict abortion as a crime against "unborn children". They do so by demonstrating categorical and authoritative assertiveness about what abortion is: "killing a child and killing a human being", "an intentional deprivation of a not-yet-born child's life" (Anna Dąbrowska-Banaszek) [55] (p.132); "an attempt on life, on a conceived child's life, on a vulnerable child's life", "activities that enable killing of future generations" (Elżbieta Płonka) [55] (pp.139-140). Even if there is a change in the grammatical mood and speakers use interrogative instead of affirmative sentences, the questions are purely rhetorical and include references to common sense: "What is abortion, if not killing? [...] Abortion is the termination of pregnancy. And what is pregnancy? It is a child. It is all very logical" (Maria Kurowska) [55] (p.139). Also, the word "abortion" is a nominalization that obfuscates agency and causality: we do not know who the agent behind the alleged killing is or why it happened. This means that the process behind abortion (e.g., why pregnancies are terminated) is put out of sight, and the outcome (e.g., no new citizens born) is exposed. Consequently, a 2022 bill proposal submitted as a citizens' initiative that would have liberalized the abortion law in Poland was called "a project about killing unborn children" (Elżbieta Płonka) [55] (pp.139-140) and deemed illegal.
De-Medicalizing In Vitro Fertilization (IVF)
Apart from abortion, right-wing female politicians see in vitro fertilization (IVF) as another threat to the "unborn child". They claim that IVF should not be state-funded as it does not treat infertility and, similarly to abortion, it kills "unborn children". Just like with the discussion on abortion, right-wing female parliamentarians use nominalization: "The beginning of today's debate concerned the right to subsidize human production (i.e., IVF). This is not a method of fighting infertility. This is human production" (Barbara Bartuś) [58] (p.44). Using medical terms and the objective modality that dominates their declarative statements, right-wing women disregard IVF as a treatment procedure: "Unfortunately, IVF does not cure infertility. It is not a treatment. It does not improve women's health in any way" (Elżbieta Płonka) [59] (p.160) and compare it to eugenics: "As part of the IVF procedure, a large number of spare embryos is created and then those conceived children are subject to eugenic selection in order to decide which ones will be born. For one child to be born, others must be destroyed, not to say killed" (Elżbieta Płonka) [59] (p.161). Right-wing female MPs consider IVF not only as a way to deny children their fundamental right to life but also as a means to objectify them: "The child is treated in an utterly objectifying way [...] as if it was a commodity or a product and not a fully-fledged person" (Elżbieta Płonka) [59] (p.161). Moreover, the Law and Justice female representatives worry that anonymous sperm donation, an allegedly frequent part of the IVF procedure, deprives children of their remaining rights: "A child conceived in this way will neither know the biological father nor even have access to key health- and life-related information, such as the history of genetic diseases in the family" (Elżbieta Płonka) [59] (p.161).
Ridiculing Feminist Postulates
Right-wing female politicians do not shy away from directly interacting with their female counterparts in the opposition. One of the common strategies in discussions on reproductive health is to reverse charges and accuse their opponents of discrimination in a very explicit manner: "The ruling by the constitutional court [that introduced a near-total ban on abortion] abolished discrimination against conceived children [with serious birth defects diagnosed in the fetus] whom you allowed to kill only because they are sick" (Anna Maria Siarkowska) [59] (p.73). The Law and Justice female party members tend to discredit their opponents' understanding of feminism: "Women's rights? [...] Half of the children killed are also women. And where are the rights of these women who are being killed?" (Maria Kurowska) [55] (p.139). In addition, they mock the importance of feminist postulates as regards reproductive health: "I have a feeling that you ladies have only one solution to all programs and all social challenges.
[...] This solution is total abortion" (Katarzyna Sójka) [60] (p.42); "You reduced the fight for women's rights, dear ladies, to the fight for contraception, for the right to abortion, that is, for the right to kill children that have already been conceived" (Barbara Bartuś) [58] (p.44); "I am very sorry that you ladies treat women as objects and that when you talk about women's health or children's health, you only talk about abortion and the morning-after pill" (Józefa Szczurek-Żelazko) [60] (p.45); "I am very sorry to hear that the Civic Platform and the Left female representatives reduce the quality of women's life to IVF and abortion" (Joanna Borowiak) [58] (p.46).
Reappropriating Feminism
The Law and Justice female representatives enact their own identity as "conservative feminists" to produce, on the one hand, a positive self-presentation and, on the other, a negative Other-presentation in the area of women's rights: "Women in Poland are not only left-wing feminists. [...] I consider myself a conservative feminist and I disagree with you ladies. As a conservative feminist together with my colleagues, I run government programs that address women's situation. [...] What is more-we do not fight with men. We love men and they love us, dear left-wing feminists" (Teresa Wargocka) [58] (p.42). Overall, a lot of effort is made to antagonize men in the opposition with their female colleagues: "Ladies from the Left consider men as their opponents because they think men do not have the right to express their opinion on the matters related to a child that they conceived. Only the woman should have this right. A man, a husband, a partner has no right because he is unrelated" (Teresa Wargocka) [55] (p.137); "Pregnancy is not only a woman's work, so I am surprised that you fight for women's rights. Women's rights should be the same as men's because men and women should support each other" (Ewa Szymańska) [58] (p.45); "Your feminism is a façade and hypocrisy. You really do not know what you are fighting for because you want to deprive your partners of responsibility" (Elżbieta Płonka) [58] (p.49).
Shaming Fellow Mothers
Most importantly, however, right-wing female parliamentarians shame women in the opposition for being bad mothers: "You want to love selected children and you choose those who should be born and those who should not. [...] This is not true love" (Elżbieta Płonka) [58] (p.49); "You will not be happy if you kill your children in your wombs" (Teresa Wargocka) [55] (p.137); "Abortion is women's hell. There is nothing worse than a woman killing her own child. [...] Being a mother is the greatest happiness and every mother who has experienced giving birth to a child knows it. [...] You fight for the right to kill your own children" (Maria Kurowska) [56] (p.102); "[A mother] cannot want to give birth to one child and not the other. How do you love your children?" (Elżbieta Płonka) [55] (pp.139-140); "The termination of human life before birth is a brutal interference with the maternal instinct. Yes, dear ladies, you are mothers, and you should know that" (Teresa Glenc) [56] (p.99). Furthermore, they also accuse their female opponents of conflicting interests, juxtaposing IVF and abortion, for example, when the opposition proposed a bill on IVF to increase fertility rates: "So do you want IVF or abortion because I am lost by now?"
(Joanna Borowiak) [61] (p.94). Right-wing female parliamentarians address the opposition in a very direct manner with transitive clauses (subject-verb-object) that describe directed actions (an agent acts upon a goal). These active constructions clearly attribute agency and responsibility. Women in the opposition are presented as explicit agents who can be held accountable for their actions (i.e., allegedly killing children). This strategy serves to construct a negative image of the Other and develop positive self-presentation.
Promoting Family-Friendly Policies
In addition, right-wing female party representatives create a positive self-image by portraying the current government as family-friendly: "Every life should be cared for and respected, just like a family. The Law and Justice government has been doing it from the very beginning" (Elżbieta Płonka) [55] (pp.139-140). The centrality of family well-being is justified by the role that family plays in society: "Family is a priceless value. The positive impact of family upbringing is associated primarily with the family values implemented, reciprocal emotional bond, roles performed and patterns of communication. No one can replace a good family in the process of raising children and youth. [...] Family is the fundamental environment for the functioning and development of a child" (Agata Katarzyna Wojtyszek) [62] (p.57). From the nationalistic point of view, family is important because it ensures the nation's future: "It is necessary to guarantee the conditions for the creation and functioning of families that will give birth to and raise the next generations" (Beata Strzałka) [62] (p.58); "In the future, children from large families will join the job market and work for those seniors and their pensions that you [the opposition] worry about" (Iwona Kurowska) [63] (p.119). To strengthen positive self-presentation, right-wing female MPs reaffirm the importance of family-friendly initiatives or programs already in place: "We have introduced the 500+ program as a basic, flagship program, [...] which is aimed at supporting families in raising children who are Poland's future. They are the potential that we must take care of, nurture and support. [...] Children, our national treasure" (Teresa Wargocka) [63] (p.105); "Sociological research clearly shows that a significant proportion of Poles consider family to be the highest value. Family happiness is synonymous with individual happiness. [...] It is important to disseminate good quality knowledge about the fundamental importance of marriage and parenthood for society. [...] Therefore, the establishment of the Polish Institute of Family and Demography seems to be an extremely important matter" (Dominika Chorosińska) [62] (p.46); "The 'For Life' program must result in the creation of a stable assurance of care for all children who are born with a disease or are not fully able, as well as for their families. [...] There is no responsibility for social life without the responsibility for the life of a vulnerable child. [...] We must not stop to serve in defense of humanity" (Anna Milczanowska) [64] (p.57). While enacting a positive self-image in the area of family-friendly policies, right-wing female politicians add to a negative Other-presentation and question "liberal" demands that women in the opposition purportedly make to increase fertility: "It is very difficult to increase the fertility rate by promoting a liberal lifestyle at pride parades and encouraging abortion on demand.
[...] I do not think we are going to increase fertility this way" (Iwona Kurowska) [63] (pp.118-119).
De-Stigmatizing Traditional Values
Furthermore, right-wing women reverse any possible accusation of discrimination by claiming that initiatives that supposedly combat domestic violence aim at stigmatizing the traditional Polish family: "[The Istanbul Convention] contains a number of dubious, even harmful provisions; amongst others, it stigmatizes families [...] as a source of violence" (Dominika Chorosińska) [65] (p.118); "The National Program for Counteracting Violence in the Family should be renamed since the term 'violence in the family' is ideologically charged and basically stigmatizes the family [...] so it should be replaced with 'domestic violence' instead" (Anna Maria Siarkowska) [66] (p.42). Apart from transferring the charges to others, they provide a discriminatory justification for their defense strategies, imposing a conservative and exclusive definition of a family (i.e., a married heterosexual couple): "The essence of the problem is that this violence happens in the privacy of a household, and therefore takes place between people who have personal relationships-they are not always a family. They can be cohabiting relationships. They can be same-sex relationships" (Anna Maria Siarkowska) [66] (p.42). To refute arguments that violence happens predominantly within families and that family members, overwhelmingly women and children, need more state protection, the Law and Justice female party members claim the opposite: "Good and permanent family ties are one of the best safeguards against violence" (Dominika Chorosińska) [65] (p.118). The reason why right-wing female politicians claim that the traditional family model is "under attack" is linked to their perspective on the role of the Catholic Church in the state: "The shoddy, disgusting attacks on John Paul II stem from the need to weaken and even destroy the Catholic Church in Poland. Because the Catholic Church has a specific position on bioethical matters, on abortion, euthanasia, it has a specific vision of the family. The Catholic Church cares about our tradition and national identity, and this stands in the way of centralizing the European Union. Because we, Poles, are to be cut off from our roots to easily impose a new vision of Europe with the capital in Berlin" (Anna Kwiecień) [67] (p.111). Such narratives help demonstrate the moral superiority of the traditional family model promoted by the Law and Justice female representatives.
Empowering Parents
Apart from the Catholic Church acting to safeguard Polish national identity, a special role is assigned to parents, who are seen as irreplaceable in cultivating the "right" values in children: "Parents are to have a voice; they are to decide with what values and content their children are to be brought up" (Joanna Borowiak) [68] (p.38); "I think that every responsible parent knows how to take care of their child, knows their capabilities and expectations, so they should have the right to decide about what is in line with their beliefs in order for the child to feel safe.
[...] That is why I am glad that parents will decide about the education of their children, including sex education" (Beata Strzałka) [68] (p.36). Therefore, the support for initiatives that would ban sex education from schools, such as the "Protect Children" bill, is allegedly based on the parental right to decide what children are taught at school: "Parents have the most sacred and absolute right to decide on the upbringing of their child" (Mirosława Stachowiak-Różecka) [68] (p.26); "Let us protect children from inappropriate content, let us support parents in their upbringing" (Joanna Borowiak) [69] (p.53); "Ultimately, this project gives parents the right to decide what content accompanies their children's upbringing" (Marzena Machałek) [69] (p.56). Similarly to the traditional family model, where the accusation of discrimination was reversed, parents who want to raise their children in line with traditional values are depicted as being victimized: "Those parents who do not want sex education are stigmatized, pointed out at school and really have no chance to defend their right to raise their own children" (Teresa Wargocka) [70] (p.63).
Saving Children from Sexualization
While supporting parents in bringing up children in line with their beliefs and, thus, building a positive self-image, right-wing female politicians attack the opposition for allegedly wanting to indoctrinate children with inappropriate content: "This law might not have been needed if it weren't for the fact that when you [the opposition] were in power, LGBT-related organizations entered schools through the back door and tried to distribute so-called research based on unsubstantiated data. Yes, this bill is needed precisely because you pose a risk" (Iwona Kurowska) [68] (p.37). Some would put forward radical arguments. For example, during one of the speeches given by a representative of the opposition who declared that, when they are in power, sex education will be taught by competent tutors, Joanna Borowiak shouted in response: "by a pedophile" [58] (p.42). Moreover, right-wing women equate sex education with LGBT-related topics and claim that both phenomena pose a threat to children: "Therefore, the deep internal disintegration is the cause of our children's tragedy, troubles, and suicides. There is a problem of family breakdown and there is also a problem of ideologizing children. Children really need proper care; they need love that no one or few talk about here. [...] And if it was not for the Law and Justice Party, we would only be talking about suicides of LGBT children here. But it is LGBT that is the cause of children committing suicide, this very ideology" (Elżbieta Płonka) [54] (p.173). To support their arguments, they use categorical statements and refer to the constitution: "The constitution talks about promoting the family. [...] not homosexuality and LGBT movements" (Barbara Bartuś) [71] (p.15). Being strongly against "children's sexualization" but acknowledging the importance of "appropriate" sex education, right-wing female MPs offer an alternative: "We reject sexualization, but we teach about sexuality, and we do it as part of the core curricula of various subjects, including family life education. And now we will start the 'For Life' program-if the parents agree.
[...] In fact, teaching about sexuality is related to shaping pro-family, pro-social and pro-health attitudes, developing an ability to make the right choices, select a lifestyle that is good for reproductive health and preparing the youth to assume future marital and parental roles" (Marzena Machałek) [69] (p.55).
Taking a Test on Humanity
It is interesting to see how the family-friendly image is manifested in matters related to immigration. Since the Russian invasion of Ukraine on 24 February 2022, the parliamentary debates on Ukrainian refugees have been dominated by the language of empathy, hospitality, care, and kindness that emphasizes the need to help women and children fleeing war: "The situation that we are in is the sudden arrival of many friends, we have a big family and we have to face it" (Elżbieta Płonka) [72] (p.85); "Millions of people will come to us and we must take care of them. These millions expect our support. I saw women with babies in their arms. I helped them comfort the crying ones" (Teresa Pamuła) [73] (p.39); "We are currently hosting refugees in Poland, and these refugees are mainly women and children" (Barbara Dziuk) [74] (p.43); "Poland welcomes [...] these refugees as if they were family" (Barbara Bartuś) [74] (p.38). Additionally, the Law and Justice female representatives argue for various bills in support of Ukrainian children, highlighting the horrors of war that they are fleeing, very often separated from their parents who stayed in Ukraine to fight: "There are a lot of children among the refugees. [...] Problems related to the safety of children are extremely important and need to be clarified" (Józefa Szczurek-Żelazko) [74] (p.105); "There is a great need for the Polish government to undertake the task of registering all minor refugees who have arrived in Poland. [...] These are very important changes that will protect these children" (Teresa Wargocka) [75] (p.40); "The goal is to regulate the legal situation of children coming to Poland from Ukraine, because a large number of them are without their parents, i.e., the only legal guardians under Polish law. [...] A temporary guardian is a person who will take care of the child, who will be able to represent the child, and therefore enroll them in school, go to the doctor with them or collect the benefits due to these children" (Barbara Socha) [76] (p.66). The fact that Poland welcomes refugees from Ukraine is an opportunity for nationalist self-glorification widely expressed by many right-wing politicians: "When around 2 million immigrants arrived in Europe in 2014, the EU countries raised the alarm about the refugee crisis and demanded their relocation. We, as the Polish government, as local governments, as non-governmental organizations, act, support each other and deal with it" (Józefa Szczurek-Żelazko) [76] (p.70); "We took on a huge responsibility, taking in over 7 million people fleeing the war, the vast majority being women and children. Accepting such a number of people without having refugee camps is a phenomenon on a global scale" (Elżbieta Duda) [77] (p.54); "I am proud of Poles, I am proud of the Polish government, I am proud of Polish local government officials that we have opened to Ukrainians" (Ewa Szymańska) [76] (p.80); "Poles once again passed the test, the test on humanity" (Barbara Dziuk) [74] (p.33).
Denying Empathy
The empathy that dominates the debate on Ukrainian refugees is clearly missing in the discussion on refugees and immigrants trying to cross the Belarusian border. While the situation at the border with Ukraine is portrayed as an obligation to provide shelter for people fleeing war, the crisis at the border with Belarus is presented as a danger that requires taking immediate measures to protect the nation: "Poland is safe with us. Recall that when the hybrid war was proclaimed at the Polish-Belarusian border, the Polish government immediately started to build a wall between Poland and Belarus. We should act together so that our children, our fathers, our future generations are safe in Poland" (Lidia Burzyńska) [78] (p.49). To deny that Ukrainians and people at the border with Belarus are treated differently, the Law and Justice female representatives use categorical statements: "Refugees are treated the same across the border" (Teresa Pamuła) [79] (p.14); "Poland is a state of law, Poland has border crossings and every refugee, every immigrant who appears at the border crossing and fills in an appropriate application, can count on it to be considered. And if we have an attack on a border where there is no crossing, then we should speak with one voice" (Barbara Bartuś) [80] (p.68). Nevertheless, the double standards are salient: "War refugees fleeing Ukraine will receive assistance regardless of where they cross the border" (Anna Dąbrowska-Banaszek) [75] (p.34). Moreover, it is common for right-wing parliamentarians to deny the responsibility to protect the immigrant children who attempt to cross the Belarusian border. For example, in response to accusations of forcibly returning children to Belarus, right-wing female MPs launch a counter-attack against their political opponents by changing the topic to reproductive issues: "These are children, not a clump of cells?" (Anna Paluch) [81] (p.161); "Do you also care for the unborn children?" (Joanna Borowiak) [81] (p.148); "Will you also defend the conceived children?" (Joanna Borowiak) [81] (p.149); "Where are the parents of these children?" (Joanna Borowiak) [81] (p.157). When the opposition asks for a minute of silence to commemorate people who died at the Polish-Belarusian border, the Law and Justice female politicians start praying for "all the aborted babies" (Joanna Borowiak) [81] (p.158). In addition, right-wing female party members tend to insinuate that the immigrant minors camping at the border with Belarus pose a threat. For instance, when confronted with questions about a 16-year-old teenager who was allegedly forced back over the border, Bożena Borys-Szopka retorted: "Invite him to your home!" [81] (p.138). Overall, right-wing female politicians do not describe the immigrants trapped at the border with Belarus as victims; they present them as "intruders" (Joanna Borowiak) [82] (p.34). While building a positive self-image, right-wing female parliamentarians try to discredit the opposition by implying that they do not represent Polish national interests: "Whose interests are you representing?" (Joanna Borowiak) [81] (p.141). They deny the pushbacks and delegate the responsibility for the ill-treatment of immigrants trying to get to the European Union through Belarus: "This is what Belarusians do, not us" (Joanna Borowiak) [81] (p.137).
Social Practice
This section explores the contribution that Polish right-wing female politicians make to reinforcing the intersectional state violence against minors in Poland and demonstrates how the social practice of discourse shapes social dogmas. Having scrutinized the interplay of different discourse types in their speeches as well as the linguistic toolkit, I argue that right-wing women bear a lot of political responsibility in the enactment of white, Christian, and heteronormative identity on Polish children. By glorifying the traditional family model, allegedly under attack and in decline, they operate on a myth that supports their desired social reality. The myth, symbolized by a patriarchal society with its religious and authoritarian norms and crowned with women's sacred roles as mothers, is depicted as seemingly under threat from secularism and liberalism. The traditional family model is used, therefore, to impose a moral authority that determines what today's society should be like and what future generations should be socialized into. This analysis reveals that the politics of sexual anxiety is the basic mechanism to exercise power over people [83][84][85]. The threats posed by the Other help secure the role of the traditional family as the guardian of morality. Furthermore, these imagined and sexualized threats stand as examples of liberal values. The right to abortion is seen as a menace because it frees a woman from her dependence on a man while pregnant and during maternity leave. Sexual minorities are allegedly dangerous because LGBT people exercise their freedom to love and marry whomever they desire, possibly seeking "unconventional" ways of biological reproduction. Migrants pose an apparent threat because they corrupt the "purity" of the national stock through potential intermarriage. People who utilize their right to self-determination, who escape "tradition" and live their lives according to their own paradigms, and who remain intellectually independent generate a perceived loss of patriarchal hierarchy. To endorse a shared understanding of social reality, it is necessary to make the national community members stop deviating from the imposed ideal. And, as this article intends to demonstrate, the ideal is white, Christian, and heteronormative.
Finally, anti-LGBT and anti-immigration attitudes promoted through various speech acts need to be put in the context of Polish-EU relations. Having established that secular and liberal values are the main hazards for the traditional family model, the EU is considered their embodiment. Therefore, anti-EU sentiment is a frequent theme in Polish political discourse. The animosity towards the Union stems from its apparent image as an imperialistic endeavor by Western elites who plot to control Poland politically and culturally. The anti-LGBT and anti-immigration policy proposals are, therefore, state-driven manifestations of Euroscepticism that many academics regard as symbolic attempts to change Poland's semi-peripheral status inside the Union [86][87][88][89]. Moreover, due to concerns about the rule of law, media freedom, and minority rights, while the Law and Justice Party was in power from 2015 to 2023, the relationship between Warsaw and Brussels steadily deteriorated. Even though the government had never stated any intention to exit the EU, the ruling by the Polish Constitutional Tribunal in October 2021 that declared the primacy of national legislation over the EU treaties [48] incited a public debate on the erosion of democratic norms in Poland and the possibility of the country planning a "Polexit".
Discussion
As this article seeks to explain, anti-LGBT and anti-immigration discourses are important areas of right-wing women's political activity. The interplay of sexual and racial prejudice that female parliamentarians employ in their narrative strategies enables the enactment of white, Christian, and heteronormative identity on Polish children. Acting as supporters of state-sanctioned policies directed against certain groups of minors, the Law and Justice female party members legitimize discriminatory practices and, consequently, add to a xenophobic social reality.
Women's political emancipation could be seen as acknowledging their ability to participate in the public sphere on equal terms with men. The wide array of speeches made by right-wing female MPs during parliamentary debates on LGBT- and immigration-related issues is a good indication of women's active engagement in the political domain. This study shows that by framing their anti-LGBT and anti-immigration political talk in a maternal discourse and acting upon their social identity of the "Polish mother" in their narrative strategies, the Law and Justice female representatives successfully exercise their political agency in a novel way and in a new political context. Their rhetoric relies on a concept of motherhood that conventionally confines women to the private sphere. Instead of limiting women's possibility to practice agency, the emblem of the "Polish mother" escapes the domestic domain and becomes a means to political emancipation. Women who assume responsibility to protect the "child in danger" transgress the cultural norms of passive femininity. The discursive transfer of agency traditionally assigned to men not only empowers women but also emancipates the "Polish mother" as a political subject. The strategic re-politicization and weaponization of motherhood entrusts women with a new political role in the defense of the nation's future and, as a result, allows for the "feminization" of nationalism.
With this study having scrutinized the prominence of Polish right-wing female politicians in endorsing intersectional state violence against minors, future studies could investigate attempts to "feminize" the nationalistic discourse on accepting large numbers of refugee women fleeing the war in Ukraine. Contesting the universality of women's rights from a "conservative feminist" perspective in view of emerging anti-Ukrainian attitudes might prove an important area for future research.
Conclusions
The rhetoric of maternal love filled with denoted and connoted meanings that Polish right-wing female politicians use to create a positive self-image and a negative Other-presentation adds to the codification of the difference between "us" and "them". While "we" is legitimized as "normal", "they" is constructed as "deviant" and, hence, a threat to be mitigated. As we demonize the difference between "us" and "them", we forget universal human dignity and start to valorize human worth instead. Once our society is organized hierarchically, we normalize the marginalization and exclusion of others only because "they" are not "us". While a white, Christian, and heteronormative "we" is offered protection, a disabled, non-heteronormative, and non-European migrant "they" is abandoned. The resulting delusional superiority, wrapped in the warmth of a mother's womb, makes us immune to the disabled children not having access to high-quality healthcare, to the steadily rising rates of suicide attempts among LGBT teenagers or to non-European migrant minors brutally pushed back to a country where they would face further mistreatment. Perhaps exposing the strategic re-politicization and weaponization of motherhood and the subsequent "feminization" of public permission to socialize future generations to become indifferent to the suffering of the Other could offer at least a partial remedy to the ongoing xenophobic trends.
Using thermodynamic parameters to calibrate a mechanistic dose-response for infection of a host by a virus

Highlights

• Dose-response model for pathogen infection developed on basis of thermodynamic association constants.
• Fraction of pathogen oral dose surviving mucus barrier to reach intestinal epithelium modelled.
• Fraction of host intestinal epithelial cells with bound pathogen modelled as function of dose.
• Theory of parameterising thermodynamic association constants from molecular data developed.
• Thermodynamics provides link between sequencing data and risk of infection.

Introduction

Microbiological risk assessment (MRA) requires a dose-response relationship to translate the exposure (i.e. number of pathogen particles entering the host through a given route) into the probability of infection. Infection by an oral pathogen is defined as the multiplication of organisms within the host, followed by excretion (Haas et al., 1999) and, for the purpose of the work here, does not include progression of disease or the host acquired immune response. Obtaining dose-response data for humans has generally relied on volunteer challenge experiments, e.g. Cryptosporidium parvum in students (Okhuysen et al., 1998), or using outbreak data to back-calculate the relationship between measured exposures and infection rates (Teunis et al., 2004). There are limitations to both approaches, particularly with emerging pathogens for which the exposure routes may not be fully elucidated, and for pathogens with serious clinical outcomes, e.g. Zaire ebolavirus (EBOV). Furthermore, zoonotic viruses emerge through jumping the species barrier from an animal source to humans, e.g. Nipah virus (NiV) and EBOV, and in this respect the dose-response would be for a one-off event that may be inefficient and difficult to reproduce without large numbers of animals. An additional complication is that the pathogen may adapt to the new host, such that its infectivity increases. This is well established for filoviruses in laboratory animals where the infectivity per plaque-forming unit (pfu) may change by several orders of magnitude with passaging (Gale et al., 2016), and has recently been demonstrated for EBOV Makona adapting to humans through an amino acid substitution in its glycoprotein during the recent catastrophic outbreak in West Africa (Diehl et al., 2016; Urbanowicz et al., 2016). That outbreak also raised many questions regarding the unknown potential for companion animals (cats and dogs) to serve either as a reservoir or vector for the virus and so be involved in transmission of EBOV to humans and other animals. The absence of dose-response data for EBOV in humans limits development of MRAs for the risk of infection of citizens in the EU, for example from EBOV in illegally imported bushmeat. Indeed, it has been proposed that the infectivity to humans of an EBOV pfu may differ not only between bushmeat samples from different wildlife species (e.g. fruit bats and non-human primates) but also between different individuals of the same species depending on the degree of host adaptation (Gale et al., 2016). In effect no two pieces of bushmeat from EBOV-infected wildlife may be the same in terms of infectivity to humans, although this remains to be proved. There is clearly a need for novel approaches to calibrate dose-response relationships for the purposes of MRA for emerging pathogens.
The infection process of a host cell can be broken down into the component steps and modelled mathematically (Handel et al., 2014) and the probability of infection can be expressed as a function of the combined probabilities of each step (Gale et al., 2014). These steps include overcoming the initial host defenses, binding of the virion to its host cell receptor, entry to the host cell (i.e. internalisation and uncoating of the virion), and replication, capsid assembly and budding (Gale et al., 2014). Previously it was demonstrated that a dose-response model could, in part, be parameterized using thermodynamic data for some of the key molecular interactions in the infection process (Gale, 2017). The beauty of thermodynamic data is that they can be measured experimentally by biochemists (in some cases just using molecular components e.g. cloned virus protein and host receptor protein (Wang et al., 2016)) and do not involve live animal or human volunteer studies, which is a major advantage for dangerous pathogens. Furthermore the effect of amino acid substitutions in the host receptors on binding affinity can be measured directly (Yuan et al., 2015). The possibility of applying thermodynamics is further developed here for two of the key steps in the infection process of a host by a virus. The first step modelled is the probability of the virus overcoming the innate host defenses posed by mucin protein molecules and the pathogen recognition receptors (PRRs) produced by the host. Mucins have sugar units on their surface which bind to components on the surface of the virus, for example the haemagglutinnin (HA) glycoprotein molecules of influenza virus (de Graaf and Fouchier, 2014) or the VP1 of norovirus (NoV) (de Rougemont et al., 2011). Mucus present in the respiratory tract hampers influenza virus infection and in the case of humans predominantly contains α2,3-sialic acid receptors. Indeed influenza viruses with α2,3 specificity were inhibited by human mucins (de Graaf and Fouchier, 2014). The PRRs include the mannose binding protein (MBP) which has carbohydrate-recognition domains (CRD) which bind to regularly repeating sugar units on pathogen surface (Taylor and Drickamer, 2006). The second step modelled here is the binding of the virus to its specific receptors on the host cell surface. The approach here is developed for a generic faecal/oral virus such as NoV and rotavirus which infects epithelial cells lining the intestine (Boshuizen et al., 2005;de Rougemont et al., 2011), but could be applied to influenza A viruses which are inhaled and infect cells of the trachea and lung (de Graaf and Fouchier, 2014). This paper first gives an overview of a mechanistic dose response model to introduce two probability parameters, namely the fraction, F v , of virus escaping the mucin defense barrier and the fraction, F c , of host cells with bound virus. The Methods section sets out a difference equation method to model F v and F c as a function of the mucin: virus ratio and virus dose in the intestine, respectively. Central to determining F v and F c is the strength of binding of the virus to the mucin and host cell as defined by the equilibrium constants K mucin and K a respectively. In the Theory section, the application of published data on the binding of the virus surface envelop glycoprotein (GP) to host cell receptor (Cr) molecules or to mucin molecules is reviewed in terms of determining K mucin and K a in order to parameterize the dose-response. 
Particular reference is made to using the dissociation constant K d which is routinely determined experimentally for virus GPs binding to Cr molecules (Gambaryan et al., 2005; Raman et al., 2014; Yuan et al., 2016). It is then shown how the strength of virus/host cell binding (i.e. the magnitude of K a ) may be predicted from changes in two thermodynamic parameters, namely enthalpy (H) and entropy (S). The effects of amino acid changes at the contact surfaces of the virus GP and Cr on the enthalpy are considered with a view to the future parameterization of dose-response models based on genetic sequencing data. Entropy changes are also considered both in terms of virus binding and also in the interpretation of K d data.

Overview of the development of a mechanistic dose-response model for infection in the intestine

The model parameters and variables are summarised in Table 1. On ingestion of the initial virus challenge dose, V initial , by the host there are a number of immediate host defences in the mouth and gastrointestinal (GI) tract including the mucus barrier, decoy receptors and the innate immune system that selectively bind and hence remove the virus (McGuckin et al., 2011). For example the histoblood group antigens (HBGAs) are genetically determined glycans to which NoV selectively binds and are present on both decoy receptors in the saliva and on mucin, the main protein component of mucus (Shanker et al., 2011). The total number of viruses surviving the mucin barrier and getting through to the intestine, V intestine , is given by

V intestine = F v × V initial (1)

where F v is the fraction of free virus, i.e. that not bound to mucin. As shown in Fig. 1, F v can be modelled by two parameters, namely the total number, Muc total , of mucin molecules in the mucus in the saliva and GI tract and an association constant, K mucin (defined below), that quantifies the strength of binding of the virus to a mucin molecule. Thus by inserting F v from Fig. 1 for a given mucin concentration into Eq. (1), the total number of free virus particles in the intestine and available to initiate infection of the epithelium may be modelled. On reaching the intestinal epithelium, a free virus particle binds to the surface of a host cell. The probability of infection of the host, p host , equals the probability of successful infection of at least one cell and is related to the number of cells (C.V) with bound virus by:-

p host = 1 − (1 − p cell )^C.V (2)

where p cell is the probability of successful infection of a host cell given a virus has bound to its surface. Thus the more cells with bound virus, the greater the chance that infection will be successful in at least one of them. The probability p cell depends on the ability of the bound virus to enter the cell, replicate and bud (Gale et al., 2014; Gale, 2017) and is not discussed further here. Now

C.V = F c × C total (3)

where F c is the fraction of cells with bound virus, and C total is the number of cells in the host intestinal epithelium. As shown in Fig. 2, F c is directly proportional to the total number of virus particles in the intestine, V intestine , and is also dependent on the strength of the binding interaction between the virus and the epithelial cell surface as quantified by the association constant, K a , which is now defined.
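Before defining the association constants, it may help to see how Eqs. (1)-(3) chain an ingested dose through to a probability of host infection. The following is a minimal sketch for orientation only: the function name and the example values for F v , F c and p cell are assumptions made for this illustration and are not taken from the model runs reported below.

```python
# Minimal sketch of the dose-response chain in Eqs. (1)-(3); names and the
# example input values are illustrative assumptions, not values from the paper.

def p_host_from_dose(v_initial, f_v, f_c, c_total, p_cell):
    """Chain Eqs. (1)-(3): challenge dose -> probability the host is infected."""
    v_intestine = f_v * v_initial          # Eq. (1): virus surviving the mucin barrier
    cv = f_c * c_total                     # Eq. (3): number of host cells with bound virus
    p_host = 1.0 - (1.0 - p_cell) ** cv    # Eq. (2): infection of at least one bound cell
    return v_intestine, cv, p_host

# Example with assumed inputs: 10^6 virions ingested, 19% escape the mucin,
# one cell in 10^7 ends up with bound virus, and p_cell = 0.1.
v_int, cv, p = p_host_from_dose(1e6, 0.19, 1e-7, 4.15e8, 0.1)
print(f"V_intestine = {v_int:.3g}, C.V = {cv:.3g}, p_host = {p:.3g}")
```

Any consistent pair of F v and F c values, for example those read off Figs. 1 and 3, could be substituted for the assumed inputs.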
Expressing the fraction, F v , of viruses not bound by mucus and the fraction, F c , of host cells with bound virus in terms of the association constants, K mucin and K a

Each binding process is represented by a dynamic equilibrium. Thus, within the given volume of the intestine, free virus (V free ) and host cells with no bound virus (C free ) are in dynamic equilibrium with cells with bound virus (C.V) as represented by:-

C free + V free ⇌ C.V (4)

The association constant, K a , is expressed in terms of the concentrations (Gale, 2017) as:-

K a = [C.V]/([C free ][V free ]) (5)

where the square brackets, [], represent the concentration in moles dm −3 (M). The term "in dynamic equilibrium" means the process is reversible (Handel et al., 2014) such that free virus binds to free host cells to form C.V with a rate, k on , and the C.V complexes then dissociate into C free and V free at a slower rate, k off , depending on the strength of binding. Thus the association constant, K a (and similarly K mucin ), may also be written in terms of the association/dissociation rates as:-

K a = k on /k off (6)

Visualising K a (or K mucin ) in terms of on/off rates may be easier conceptually and has implications when host or virus factors selectively change k on or k off (see below) thus affecting K a (or K mucin ) according to Eq. (6).

Fig. 1. Fraction, F v , of virus not bound to mucin plotted as a function of the total mucin: total virus ratio. The virus challenge dose in the 0.314 dm 3 volume of intestine was fixed at 4.15 × 10 8 virus particles and the number of mucin molecules was increased from 10 3 to 4.15 × 10 12 molecules as represented by the symbols. Binding of all the virus is achieved below the horizontal dotted line which represents 1 unbound virus remaining in the intestine. Solid lines with symbols represent points calculated with the difference equation approach (see text) with K mucin values of 10 22 (x), 10 20 (•), 10 18 (▲), 10 15 (■), 10 13 (Δ), 10 11.7 (♦) and 10 9 (□) (M −1 ). Dashed lines represent Eq. (11) assuming [Muc total ] ∼ [Muc free ] with K mucin values from left to right of 10 20 , 10 18 , 10 15 , 10 13 , 10 11.7 , 10 9 , 10 7 , and 10 5 (M −1 ). Eq. (11) fails for high K mucin values at mucin: virus ratios of < 1:1 (arrow).

As shown by Gale (2017), the fraction, F c , of host cells with bound virus is given by:-

F c = [C.V]/([C.V] + [C free ])

Substituting [C.V] with Eq. (5) and rearranging gives F c in terms of the free virus concentration [V free ] and K a :-

F c = K a [V free ]/(1 + K a [V free ]) (7)

Similarly, when the virus enters the host there will be a dynamic equilibrium between virus that is not bound to mucin and hence reaches the intestinal epithelium (V intestine ), free mucin (Muc free ) and virus bound to mucin (V.Muc). The strength of the binding is reflected by the association constant, K mucin , between virus and mucin and is expressed as:-

K mucin = [V.Muc]/([V intestine ][Muc free ]) (9)

The fraction of free virus (F v ) is given by:-

F v = [V intestine ]/([V intestine ] + [V.Muc])

Replacing [V.Muc] with Eq. (9) and rearranging gives F v in terms of the free mucin concentration and K mucin :-

F v = 1/(1 + K mucin [Muc free ]) (11)
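For readers wishing to experiment with these binding expressions, Eqs. (7) and (11) can be evaluated directly. The sketch below is illustrative only; the function names are invented here, and the example inputs simply echo values quoted elsewhere in the text (a K mucin of 10 9 M −1 with [Muc free ] = 4.25 × 10 −9 M gives F v ≈ 0.19).

```python
# Sketch of Eqs. (7) and (11): fractions bound/free as functions of the
# association constants (M^-1) and free-ligand concentrations (M).
# Assumes the hyperbolic forms given in the text; names are illustrative.

def fraction_cells_bound(k_a, v_free_molar):
    """Eq. (7): fraction of host cells with bound virus."""
    return k_a * v_free_molar / (1.0 + k_a * v_free_molar)

def fraction_virus_free(k_mucin, muc_free_molar):
    """Eq. (11): fraction of virus not bound to mucin."""
    return 1.0 / (1.0 + k_mucin * muc_free_molar)

# Example: K_a = 1e13 M^-1 with [V_free] ~ 1e-20 M, and K_mucin = 1e9 M^-1
# with [Muc_free] = 4.25e-9 M (the demonstration value used later for Table 2).
print(fraction_cells_bound(1e13, 1e-20))     # ~1e-7
print(fraction_virus_free(1e9, 4.25e-9))     # ~0.19
```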
Modelling how F c varies with dose of virus in the intestine

The objective is to construct a plot of F c as a function of V intestine as shown in Fig. 2. Eq. (7) may be used with [V free ] representing [V intestine ] when the total number of virus particles, V intestine , greatly exceeds the number of host cells (C total ) such that [V free ] is relatively unaffected by virus binding and therefore [V free ] approximates [V intestine ]. This is also acceptable for low binding affinity viruses as in the species barrier model of Gale (2017) such that very little virus is bound even at high virus doses. However, in many natural infection processes (e.g. through drinking water), the host may be challenged by very low numbers of pathogen which bind with high affinity to host cells such that [V free ] is greatly diminished compared to [V intestine ] and tends to zero as all the virus is bound. The problem is that Eq. (7) cannot be expressed mathematically in terms of [V intestine ]. The solution adopted here to model F c in terms of [V intestine ] comprises three steps. The first step involves setting up a model for a host intestine so that the concentrations of virus and host cells may be defined in order to calculate K a using Eq. (5). In the second step, a difference equation approach is used to produce a range of [V free ], [C free ] and [C.V] combinations at six different total virus (V intestine ) doses. From these concentrations, K a values and F c values are calculated and are plotted in Fig. 3. In the third step, for each of the six V intestine doses, the F c is read off Fig. 3 for given K a values. F c is then plotted against V intestine for each K a in Fig. 2.

The concentration of host cells in the model intestine

Developing a dose-response model based on K a and K mucin requires an estimation of the volume in dm 3 (i.e. litres) within which the challenge dose of pathogen has access to susceptible cells in the host. This is needed to convert particle numbers in moles into concentrations for use in Eq. (5). The volume into which the pathogen enters within the host is for example the lumen of the intestine or even a drop of body fluid (e.g. blood) on a mucosal surface in the case of EBOV. For the purpose of the model, a 100 cm length of host intestine with a radius of 1 cm is simulated. The total surface area is therefore 628.3 cm 2 and the volume inside the intestinal lumen is 0.314 dm 3 (1 dm 3 = 1000 cm 3 ). According to Rosen and Misfeldt (1980) the density of cells in dog kidney epithelia is 6.6 × 10 5 cells per cm 2 . The total number, C total , of epithelial cells over the 628.3 cm 2 surface of the model intestine is thus 4.15 × 10 8 , and the density of cells by volume is 1.32 × 10 9 per dm 3 . Dividing by the Avogadro number, L, of 6.02 × 10 23 particles per mole (Price and Dwek, 1979) gives a mole concentration by volume [C total ] of total host cells of 2.19 × 10 −15 M. It is reassuring to note that this is similar to the cell concentration of 4.1 × 10 −15 M reported for the canine kidney cells in the avian influenza virus binding experiments of Nunes-Correia et al. (1999). Handel et al. (2014) in their simulation for influenza virus infection use a concentration which is ∼100-fold higher although they simply assumed a packed collection of cells each with a volume of 20 µm 3 . The model here therefore takes into account the large void volume of the intestine within which a faecal-oral pathogen is diluted.

2.2.3. Difference equation approach to model F c against K a for each virus dose (Fig. 3)

For low virus doses (i.e. V intestine < C total ), K a s were calculated using [C.V], [V free ] and [C free ] in Eq. (5) over the full range of virus binding, i.e. from one bound virus to all V intestine viruses being bound. According to Eq. (4), each cell can only bind one virus, and the number of cells with bound virus therefore equals the number of viruses bound to cells, which was calculated as:-

C.V = V intestine − V free (12)

over the range of V free from 0 (all viruses bound) to V intestine (no viruses bound). For high virus doses (i.e.
V intestine > C total ) the simulation was run for V free from 0 to C total (such that C.V is always positive) because the model in Eq. (4) assumes each cell can only bind one virus. For each C.V, the number of free cells, C free , was calculated as:-

C free = C total − C.V

Values for [C free ], [V free ] and [C.V] were calculated as C free , V free and C.V respectively, divided by the volume of the intestine (in dm 3 ) and L, and then used to calculate K a in Eq. (5) over the range of C.Vs. For each virus challenge dose, values of F c are plotted as a function of K a in Fig. 3.

Modelling how F v varies with ratio of mucin to virus: surviving the mucin defence

The fraction of free virus, F v , represents the probability that virus is not bound to the mucin and in effect escapes the mucin barrier. The objective is to model F v as a function of the mucin to virus ratio over a range of K mucin values as shown in Fig. 1. The simple approach to model F v is to use Eq. (11) with [Muc free ] representing the total concentration of mucin in the intestine, [Muc total ], which can be measured in a host experimentally. As discussed below, this fails at mucin: virus ratios of <1:1. Therefore a difference equation approach was used. For this, the challenge dose, V initial , in the host intestine was fixed at 4.15 × 10 8 virions and the fraction of virus not bound to mucin, F v , was then calculated for ten values of Muc total ranging from 10 3 mucin molecules to 4.15 × 10 12 mucin molecules, representing mucin: virus ratios ranging from 2.4 × 10 −6 :1 to 10,000:1. This was done using the same stages as described above for virus binding to host cells for seven K mucin values ranging from 10 9 to 10 22 M −1 . Thus for the number of free mucin molecules, Muc free , ranging from 0 (i.e. all mucin molecules bound to virus) to Muc total (i.e. all mucin molecules free of virus), values of V.Muc and V intestine were calculated as:

V.Muc = Muc total − Muc free
V intestine = V initial − V.Muc (14)

By dividing by the volume of the intestine and L, the values of V.Muc, V intestine and Muc free were converted to corresponding concentrations, namely [V.Muc], [V intestine ] and [Muc free ], from which K mucin values were calculated using Eq. (9). For each K mucin value, F v was calculated as:

F v = V intestine /V initial (15)

and a plot (not shown) of F v versus K mucin constructed in the same way as for virus binding to host cells in Fig. 3. From that plot, values of F v for each of the ten Muc total s were read off for a given K mucin and plotted against the mucin molecule: virus ratio (i.e. Muc total :V initial ) in Fig. 1.

Effects of stochasticity

Stochasticity in the challenge dose (V initial ) would be addressed in the exposure calculation and is outside the scope of this work. The mechanistic dose-response model developed here using the difference equation approach is not affected by stochasticity because the values of C.V and V intestine calculated in Eqs. (12) and (14) respectively are "given" integers, and thus F c and F v calculated by Eqs. (13) and (15) respectively are exact for each integer C.V and V intestine (C total and V initial being constant integers in the simulation). The probability of infection of the host according to Eq. (2) is calculated for each integer value of C.V from 0, 1, 2, … to C total or V intestine (depending on which is lower) and is not affected by stochasticity. The effect of stochasticity is considered here for the use of Eq. (7) to calculate F c and where C.V is a fraction.
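The difference equation procedure described above lends itself to a short script. The sketch below is one possible reading of that procedure for a single dose of virus reaching the intestine: it assumes the 0.314 dm 3 model intestine with 4.15 × 10 8 cells and single occupancy per cell, steps over a coarse grid of C.V values rather than every integer, and uses variable names invented for this illustration.

```python
# Illustrative sketch of the difference equation approach for one dose:
# sweep the binding state, compute C.V and C_free, convert to molar
# concentrations, and obtain (K_a, F_c) pairs as used to build Fig. 3.
# Assumes the 0.314 dm^3 model intestine with 4.15e8 cells, one virus per cell.

AVOGADRO = 6.02e23          # particles per mole (L in the text)
VOLUME_DM3 = 0.314          # lumen volume of the model intestine
C_TOTAL = 4.15e8            # epithelial cells in the model intestine

def molar(n_particles):
    """Convert a particle count in the lumen to a concentration in M."""
    return n_particles / (AVOGADRO * VOLUME_DM3)

def sweep_fc_vs_ka(v_intestine, n_points=200):
    """Return (K_a, F_c) pairs over the range of possible binding states."""
    pairs = []
    max_bound = min(v_intestine, C_TOTAL)   # single occupancy caps C.V
    for i in range(1, n_points):
        cv = max_bound * i / n_points       # number of cells with bound virus
        v_free = v_intestine - cv           # Eq. (12) rearranged
        c_free = C_TOTAL - cv               # free host cells
        if v_free <= 0 or c_free <= 0:
            continue
        k_a = molar(cv) / (molar(c_free) * molar(v_free))   # Eq. (5)
        f_c = cv / C_TOTAL                                   # fraction of cells bound
        pairs.append((k_a, f_c))
    return pairs

# Example: a dose of 1e6 virions reaching the intestine.
for k_a, f_c in sweep_fc_vs_ka(1e6)[::50]:
    print(f"K_a = {k_a:.2e} M^-1  ->  F_c = {f_c:.2e}")
```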
Spatial heterogeneity in the model of the intestine The model assumes that the free pathogens and mucins are homogeneously distributed within the lumen of the intestine, such that the concentrations at equilibrium are constant along the 1 m length of the simulated intestine. The spatial heterogeneity of the pathogen in the lumen is not known and could vary depending on the pathogen distribution in the food, water or faecal/vomit contamination ingested by the host. For example, ingestion of a small amount of faeces laden with virus could give much higher virus concentrations at certain parts of the lumen, depending on the degree of mixing within the lumen. In contrast ingestion of a 0.314 dm 3 volume of water contaminated with pathogen could give a homogeneous (Poisson) distribution along the 1 m length. This is not considered further here. Spatial heterogeneity will exist in the mucin concentration because the mucus forms a layer lining the intestine wall (McGuckin et al., 2011). This needs consideration in the further development of this mechanistic approach. Results Here an intestine is simulated with an internal volume of 0.314 dm 3 and a total of 4.15 × 10 8 susceptible cells in the intestinal epithelium. In Fig. 1, an oral challenge dose of 4.15 × 10 8 viruses is administered, and the fraction of these escaping the mucin barrier and reaching the epithelium is modelled. Fig. 3 shows the fraction of epithelium cells with bound virus as a function of K a for six doses of virus that have got through the mucin barrier ranging from 1000 virions to 4.15 × 10 11 virions. In Fig. 2, the points from Fig. 3 are replotted in the form of a dose-response which relates the fraction of host cells with bound virus to the dose of virus that has got through the mucin barrier. These doseresponse relationships are presented for a range of K a values from 10 5 to 10 20 M −1 in Fig. 2. Assessing the magnitude of K mucin needed for an effective mucus barrier The fraction, F v , of virus escaping the mucus barrier is plotted as function of the mucin: virus ratio for a range of K mucin values from 10 9 to 10 22 (M −1 ) in Fig. 1. The horizontal dotted line in Fig. 1 represents just one free virus remaining in the 0.314 dm 3 volume of the simulated intestine. At values of F v below this line there is < 1 free virus in the simulated intestine and in effect all of the 4.15 × 10 8 virions in the initial challenge dose (V initial ) are bound to mucin. Thus, the effectiveness of the mucin barrier can be assessed simply in terms of K mucin and the mucin: virus ratio required to bring F v to below this line in Fig. 1. At the very high K mucin value of 10 22 M −1 all of the virus is bound as the mucin: virus ratio exceeds 1:1. However, at progressively lower K mucin values, less and less of the virus is bound at a given mucin: virus ratio. Thus even at K mucin values as high as 10 18 and 10 20 M −1 large numbers of viruses (20,330 and 180 respectively) are still free at mucin: virus ratios of 10:1. According to the model in Fig. 1, in the 0.314 dm 3 volume of simulated intestine, large excesses of mucin over virus are needed to make a significant impact on the fraction of free virus at the lower K mucin values. For example, at K mucin values of 10 13 M −1 and 10 11.7 M −1 4% and 50% of the virus is free, respectively, at mucin: virus ratios of 1000:1. Thus mopping up virus is relatively inefficient and, except at very high K mucin values, large numbers of virus may break through. 
This reflects the dilution in the large volume of the simulated intestine. The main conclusion from Fig. 1 is that as the mucin: virus ratio increases from < 1:1 through 1:1 to > 1:1 (i.e. going from left to right along the x-axis) for high K mucin values ( > 10 15 M −1 ), then the fraction of free virus falls dramatically and non-linearly, with all of the virus being mopped up at K mucin of ∼10 22 M −1 . The virus binding capacity of the mucin is finite As expected at mucin: virus ratios < 1:1, the fraction of free virus approaches 100%, even with very high K mucin values (Fig. 1). This simply reflects the fact that there is not enough mucin capacity to mop up all the virus such that very high virus challenge doses overwhelm the mucin barrier. With very high K mucin values (10 22 M −1 ), a very slight excess of mucin over virus is sufficient to mop up all the virus. At intermediate K mucin values (10 18 -10 20 M −1 ) mopping up the last remaining viruses is less efficient and proportionately higher mucin: virus ratios are required at lower K mucin values. 3.1.2. Simplifying the approach: using Eq. (11) to model F v in MRA The total mucin: total virus ratio is also expressed as the concentration of total mucin, [Muc total ], on the x-axis scale of Fig. 1. This is relative to the virus concentration in the intestine which is fixed at 2.2 × 10 −15 M. Thus 4.15 × 10 8 virus particles in a volume of 0.314 dm 3 on dividing by L represent a concentration of 2.2 × 10 −15 M. The dashed lines in Fig. 1 show F v as calculated by Eq. (11) for each K mucin value using [Muc total ] as an approximation for [Muc free ]. This is an appropriate approximation at high mucin to virus ratios. However, at mucin: virus ratios of < 1:1, [Muc free ] becomes much less than [Muc total ] as all the mucin is bound, particularly at high K mucin values, and Eq. (11) is not applicable. As an example, the arrow in Fig. 1 shows that Eq. (11) predicts that 4.4% of virus is free (i.e. 95.6% of virus is bound) when there is 10,000-fold more virus particles than mucin molecules. This is clearly not possible. At lower K mucin values, F v decreases linearly with increasing mucin: virus ratio and may be modelled by Eq. (11) in good agreement with F v calculated by the difference equation approach (symbols in Fig. 1). Thus Eq. (11) holds at low K mucin values or at high mucin: virus ratios (irrespective of K mucin ). This is important because high mucin: virus ratios might be expected in a natural infection situation even with the high virus challenge dose used in the simulation here. From the practical point of developing MRA methodology, applying Eq. (11) is easier than the complicated difference equation approach developed for the symbols in Fig. 1. Thus understanding the limitations of Eq. (11) is important. In summary Eq. (11) fails at mucin: virus ratios of < 1:1 at the higher K mucin values, as represented by the arrow, but is generally applicable at lower K mucin values particularly at higher mucin: virus ratios. The results show that the value of K mucin is critical not only in determining whether Eq. (11) can be used in MRA but also in assessing how effective the mucin barrier is in mopping up viruses. F c is related to both K a and the virus dose in the intestine The fraction of host cells with bound virus increases linearly with the dose of virus in the simulated intestine for all K a values until at high virus doses the host cells become saturated (F c → 1) with virus ( Fig. 2). 
At low K a values (<10 13 M −1 ) unrealistically high virus doses (>10 12 ) are required for saturation of the host cells. From Fig. 3, F c increases linearly with K a at a given virus dose but, not surprisingly, is limited by the virus dose at V intestine : C total ratios of < 1:1. Thus, there are 4.15 × 10 8 cells in the simulated intestine, and for the virus doses of 10 3 and 10 6 virions, the maximum values achievable for F c are 2.4 × 10 −6 and 2.4 × 10 −3 respectively when all those virions are bound. At K a values of > 10 15 M −1 , the binding is so strong that all the virions in the dose are bound (Fig. 3) and F c reaches its plateau. At V intestine : C total ratios > 1 (i.e. the 4.15 × 10 10 and 4.15 × 10 11 doses in Fig. 3 representing V intestine : C total ratios of 100:1 and 1000:1, respectively) saturation of the host cells occurs at high K a values with progressively lower K a values required for saturation of the host cells as the virus dose increases. Thus increasing K a above certain values has no effect on F c either because all the viruses are bound at low virus doses (i.e. V intestine : C total ratios < 1) or because all the host cells are saturated at high virus doses (i.e. V intestine : C total ratios > 1). This is borne out in Fig. 2 which shows that for K a values > 10 15 M −1 , the dose-response curves are superimposed such that F c is limited by available dose and not by K a . The main conclusion of Figs 2 and 3 is that the value of K a over the range 10 5 to 10 15 M −1 is critical in affecting F c . 3.2.1. Simplifying the approach: using Eq. (7) to model F c in MRA The total virus dose in the intestine (V intestine ) may be approximated to V free , converted to a concentration and used as [V free ] in Eq. (7) to calculate F c as an alternative to the difference equation approach used to construct Fig. 2. However, this simplification fails at K a values greater than 10 15 M −1 (results not shown) for which Eq. (7) overestimates F c by orders of magnitude at low virus doses. The results show that Eq. (7) is appropriate to calculate F c up to K a values of at least 10 13 M −1 irrespective of dose. Demonstration of the potential application of the model The application of the model is demonstrated in Table 2 for four scenarios representing different combinations of initial challenge dose and affinity of the virus for the cell (as determined by K a ). Thus the K a is 10 10 M −1 for the low affinity binding virus and 10 13 M −1 for the high affinity binding virus. These broadly reflect those measured experimentally (Nunes-Correia et al., 1999) for low binding and high affinity binding sites for influenza A virus H5N1 binding to canine kidney cells (Table 3). Also as discussed below, this difference of 1000-fold in K a could reflect the result of a single amino acid change in the GP or Cr molecule affecting a salt bridge at the GP/Cr interface. For the purpose of the demonstration, p cell is assumed to be 0.1, i.e. a cell with bound virus has a 10% chance of successful infection. For these scenarios, the predicted probability of the host being infected, p host , ranges from 4.4 × 10 −5 to 0.988. F v is constant at 0.19 (Table 2) and is calculated using Eq. (11) with a K mucin of 10 9 M −1 (at which Eq. (11) is appropriate) and a [Muc free ] of 4.25 × 10 −9 M. 
This value of [Muc free ] is calculated using a mucin mass concentration of 1.7 mg/ml for saliva (Kejriwal et al., 2014), a dilution factor of 100-fold in the food or water matrix and a mucin protein molecular weight of 4,000,000 Daltons (Kesimer and Sheehan 2012). It is solely calculated for demonstration purposes in Table 2 in the absence of published data. F c in Table 2 varies for each scenario and is calculated using Eq. (7) with [V free ] calculated from V intestine (in the 0.314 dm 3 volume of the simulated intestine) and L. The values of K a used in these scenarios are low enough such that Eq. (7) is applicable (see above) as can be seen for the K a = 10 13 M −1 line from the difference equation approach in Fig. 2 where F c ∼ 1.0 × 10 −7 for a V intestine of 1904 virions in agreement with Table 2.

Table 2. Demonstration of the application of the mechanistic dose-response model: predicting the probability of infection of the host (p host ) from the initial challenge dose (V initial ) for a low and a high affinity binding virus.

Effect of stochasticity

The approach using F c from Eq. (7) predicts fractions of a C.V (Table 2) for both low dose scenarios and also for the high dose, low affinity scenario. In reality C.V must be an integer as in the difference equation approach. The issue of stochasticity arises in the model in Table 2 when C.V is a fraction. To investigate the effect of stochasticity, the mechanistic model developed in Table 2 is used to determine the infectious dose 50% (ID 50 ), i.e. the value of V initial for which p host = 0.5. It was found that for the high affinity virus (K a = 10 13 M −1 ), the ID 50 is 1574 viruses and for the low affinity virus (K a = 10 10 M −1 ) the ID 50 is 1,574,000 viruses (Table 4). The C.V value for the ID 50 is 6.6 and, being much greater than 1, is in the range where stochasticity is less of an issue. The classic dose-response model for infection of a host is written as:-

p host = 1 − (1 − p 1 )^V initial (16)

where p 1 is the risk of infection from ingestion of a single virion by the host. By rearranging Eq. (16) and setting V initial = ID 50 and p host = 0.5, p 1 is calculated as 4.4 × 10 −4 and 4.4 × 10 −7 for the high affinity and low affinity viruses respectively. These values may be used directly in the conventional dose-response model in the form of Eq. (16). Furthermore, setting V initial = 1 in the mechanistic model in Table 4 predicts the same probability values of 4.4 × 10 −4 and 4.4 × 10 −7 for the high affinity and low affinity viruses respectively, suggesting that stochasticity is not a problem as expected for dose-response models which are linear at low doses.

Applying the outputs of the mechanistic model to the MRA

The values of p host are also calculated in Table 2 by using p 1 (obtained from the mechanistic model as described above) directly in Eq. (16) to estimate the risk from the initial challenge dose (V initial ). The results are identical with those predicted by the mechanistic method.
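A compact way to see how the pieces fit together is to script the scenario calculation that underlies Table 2. The sketch below chains Eqs. (11), (1), (7), (3) and (2) and then back-calculates p 1 for use in Eq. (16); the challenge doses and the p cell of 0.1 are assumptions made here for illustration, while the mucin parameters are the demonstration values discussed in the text, so the outputs should be read as indicative rather than as a re-derivation of the published table.

```python
import math

# Sketch of the Table 2-style demonstration: initial dose -> p_host, using the
# demonstration parameters quoted in the text and assumed example doses.

AVOGADRO = 6.02e23            # particles per mole (L in the text)
VOLUME_DM3 = 0.314            # lumen volume of the model intestine (dm^3)
C_TOTAL = 4.15e8              # epithelial cells in the model intestine
P_CELL = 0.1                  # assumed probability a bound virus infects its cell
K_MUCIN = 1e9                 # M^-1, demonstration value from the text
MUC_FREE = 4.25e-9            # M, demonstration value from the text

def p_host(v_initial, k_a):
    """Chain Eqs. (11), (1), (7), (3) and (2) for one scenario."""
    f_v = 1.0 / (1.0 + K_MUCIN * MUC_FREE)                 # Eq. (11), ~0.19
    v_intestine = f_v * v_initial                          # Eq. (1)
    v_free = v_intestine / (AVOGADRO * VOLUME_DM3)         # [V_free] ~ [V_intestine], in M
    f_c = k_a * v_free / (1.0 + k_a * v_free)              # Eq. (7)
    cv = f_c * C_TOTAL                                     # Eq. (3)
    return 1.0 - (1.0 - P_CELL) ** cv                      # Eq. (2)

# Assumed illustrative doses for a high affinity (1e13 M^-1) and a low
# affinity (1e10 M^-1) binding virus.
for k_a in (1e13, 1e10):
    for dose in (1e2, 1e6):
        print(f"K_a = {k_a:.0e} M^-1, V_initial = {dose:.0e}: p_host = {p_host(dose, k_a):.2e}")

# Back-calculate p_1 for the classic model (Eq. 16) from the ID50 of 1574
# virions quoted in the text for the high affinity virus: ~4.4e-4.
p_1 = 1.0 - math.exp(math.log(0.5) / 1574)
print(f"p_1 (high affinity) = {p_1:.2g}")
```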
Parameterising the model: determining the virus/host cell association constant, K a

Three approaches for parameterising K a are set out in Table 3. In a few cases, the K a is determined experimentally by directly measuring virus binding to cells (Table 3).

Table 3 (excerpt). ΔS a : large and negative (see below), hence lowering the magnitude of K a . K a : calculated from K d , n and ΔS a using Eq. (17); for n = 3 and K d = 10 −4 M, K a = 10 12 M −1 ; for n = 1 and K d = 10 −12 M, K a = 10 12 M −1 (assuming ΔS a = 0 J/mol/K). Approach 3: calculation of K a from enthalpy and entropy terms. Enthalpy term, ΔH a : favourable interactions (e.g. formation of salt bridges) between amino acid residues at the GP/Cr contact interface and good spatial fits give a large negative value driving virus binding; limited by lack of specific data for GP/Cr contacts or for virus/host cell binding; −56.5 kJ/mol for antibody binding to its antigen (Bostrom et al., 2011); −4958 kJ/mol for VSV binding to PL bilayers (Carneiro et al., 2002). Entropy term, ΔS a (component entropy terms): ΔS solvent — likely to be positive as disordering and hence entropy of water solvent molecules displaced from GP/Cr contact surfaces during binding may increase; +401 J/mol/K for antibody binding to its antigen (Bostrom et al., 2011). ΔS conf — negative due to reduction in conformational mobility; estimated to be −6.1 J/mol/K per amino acid for ordering of disordered regions of proteins (Rajasekaran et al., 2016); −301 J/mol/K for antibody binding to its antigen (Bostrom et al., 2011). ΔS rt — negative for all interactions when a particle is immobilised on a surface.

Approach 2 developed here is to estimate K a from published K d values which are routinely measured experimentally. Thus, the magnitude of K a for binding of a virus to its host cell is determined by the strength of the interaction(s) between viral surface components, typically the viral glycoprotein (GP) in the case of enveloped viruses such as EBOV, and the receptors (Cr), e.g. T-cell immunoglobulin and mucin domain protein 1 (TIM-1) for EBOV, on the human host cell surface (Yuan et al., 2015). Biochemists quantify the binding affinity between molecules in terms of the dissociation constant (Price and Dwek, 1979):

K d = [GP][Cr]/[GP.Cr]

for the reversible dissociation process given by:-

GP.Cr ⇌ GP + Cr

The magnitude of K d in units of mol/dm 3 (M) may be determined experimentally between purified parts of the virus GP and the host Cr using surface plasmon resonance (SPR) in which one protein "partner" is immobilized to a chip surface and changes in refractive index are used to measure binding of the other "partner" which is free in solution. Thus, Yuan et al. (2015) immobilized EBOV-GP to a chip and used SPR to measure a K d of 2.67 × 10 −5 (M) for binding of soluble human TIM-1 from solution. EBOV entry involves a second GP/Cr binding step within the endosome in which the GP binds to the host Niemann-Pick C1 (NPC1) protein. Using SPR, Wang et al. (2016) reported a K d of 1.58 × 10 −4 (M) for EBOV GP binding to human NPC1. These binding affinities for EBOV GP are much weaker than that for MERS-CoV binding to its cellular receptor CD26 for which the K d was 1.67 × 10 −8 (M) as measured by SPR (Lu et al., 2013). Raman et al. (2014) summarise K d s for influenza virus HA binding to host cell glycans on human tracheal and alveolar sections. Some HAs bind to human receptors with K d s in the 10 −12 M range, while others show K d s of ∼10 −9 M. Gambaryan et al. (2005) present K d s for binding of H1 and H3 influenza viruses to glycans, typically in the range of 10-100 × 10 −9 M.

Taking into account multiple interactions between the virus and host cell

For a single GP/Cr interaction, K a is the reciprocal of K d . However, the reciprocal K d values measured by SPR for the GP/Cr interaction cannot directly be used as the K a values in Figs. 2 and 3.
This is because K d as measured by SPR does not take into account the number of GP/Cr contacts per virus/cell interaction and also does not allow for any changes in the entropy (S) on the binding and immobilization of a whole virus particle to a cell (discussed below). For example, Beniac et al. (2012) calculated that an EBOV virion filament of 982 nm in length would have about 1888 copies of the GP spike protein and could therefore make multiple contacts with Cr molecules on the host cell surface. The change in Gibbs free energy (ΔG d ) for dissociation of one mole of GP/Cr contacts is related to K d (Price and Dwek, 1979; Kastrits and Bonvin, 2013) by:-

ΔG d = −RT ln K d

where T is the temperature and R is the ideal gas constant (8.31 J/mol/K (Price and Dwek 1979)). Since the Gibbs free energy (G) is a state function in thermodynamics, the overall change in G for dissociation of n moles of GP/Cr contacts is given by

ΔG = n × ΔG d

Since the change in Gibbs free energy (ΔG a ) for association of one mole of virus with one mole of cells is related to K a by ΔG a = −RT ln K a (Gale, 2017), and the change in Gibbs free energy for association of n moles of GP/Cr contacts is ΔG a = −n × ΔG d (Kastrits and Bonvin, 2013), then K a = (1/K d )^n . Taking into account the change in entropy on whole virus binding, ΔS a (discussed below), K a may be expressed as:-

K a = (1/K d )^n × e^(ΔS a /R) (17)

The value of G is affected by temperature and pressure, and for this reason G is used by biochemists. Thus providing the temperature and pressure are constant, as assumed here for virus in the host intestine, then any changes in G relate only to those changes in energy within the system of interest, namely from the molecular interactions between virus and host during infection within the simulated intestine.

Estimating K a through the thermodynamic parameters, enthalpy and entropy

Approach 3 in Table 3 is to calculate the association constant K a through the changes in two thermodynamic parameters, namely enthalpy (H) and entropy (S) on binding. Thus K a is related to ΔG a for association of the virus with the cell (Gale, 2017; Kastrits and Bonvin, 2013) according to:

ΔG a = ΔH a − T × ΔS a = −RT ln K a (18)

which on rearranging gives

K a = e^(−ΔH a /RT) × e^(ΔS a /R) (19)

where ΔH a and ΔS a are the changes in enthalpy and entropy, respectively, of the virus/host cell system on association (Carneiro et al., 2002; Gale, 2017). Therefore in principle knowing ΔH a and ΔS a would enable calculation of K a and hence F c for a given dose of virus from Fig. 2. Bostrom et al. (2011) demonstrated how the binding affinity of an antibody could be broken down into the enthalpy and entropy terms. How this approach could be applied to a virus during infection with the aim of calibrating dose-response models is summarized as Approach 3 in Table 3. To the author's knowledge there are no data for ΔH a and ΔS a for viruses binding to host cells. However, Carneiro et al. (2002) measured the forces between vesicular stomatitis virus (VSV) and an artificial phospholipid (PL) bilayer (as opposed to a host cell) and obtained values for ΔH a and ΔS a of −4958 kJ/mol and −16,062 J/mol/K respectively. Approaches 2 and 3 in Table 3 are dependent on estimating ΔS a . As shown by Carneiro et al. (2002) for VSV binding to PL bilayers, the decrease in entropy is huge (Table 3), demonstrating the importance of understanding ΔS a in estimating K a .
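Eqs. (17) and (19) are straightforward to evaluate, which also allows the worked values in Table 3 to be checked. The short sketch below assumes R = 8.31 J/mol/K and T = 310 K and uses invented function names; it reproduces the Table 3 examples in which n = 3 contacts with K d = 10 −4 M, or a single contact with K d = 10 −12 M, give K a = 10 12 M −1 when ΔS a is set to zero.

```python
import math

R = 8.31      # ideal gas constant, J/mol/K
T = 310.0     # K (37 degrees C)

def ka_from_kd(k_d, n, delta_s_a=0.0):
    """Eq. (17): K_a from the per-contact K_d, the number of contacts n,
    and the whole-virus entropy change on binding (J/mol/K)."""
    return (1.0 / k_d) ** n * math.exp(delta_s_a / R)

def ka_from_enthalpy_entropy(delta_h_a, delta_s_a):
    """Eq. (19): K_a from the enthalpy (J/mol) and entropy (J/mol/K) of association."""
    return math.exp(-delta_h_a / (R * T)) * math.exp(delta_s_a / R)

print(ka_from_kd(1e-4, n=3))        # ~1e12 M^-1, as in Table 3
print(ka_from_kd(1e-12, n=1))       # ~1e12 M^-1, as in Table 3
# Making ΔH_a more negative by 17.8 kJ/mol (one salt bridge) raises K_a ~1000-fold:
print(ka_from_enthalpy_entropy(-17.8e3, 0.0) / ka_from_enthalpy_entropy(0.0, 0.0))
```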
Unlike ΔH a which is based on molecular contacts at the binding faces, ΔS a involves changes in order and mobility and is difficult to visualise, but may be broken down into a number of components which act additively (Bostrom et al., 2011). Thus

ΔS a = ΔS solvent + ΔS conf + ΔS rt + ΔS mem

where ΔS solvent is the change in entropy of the water solvent molecules, ΔS conf is the change in the internal conformational freedom of the proteins, ΔS rt is the change in rotational and translational freedom of the virus, and ΔS mem is an entropic pressure associated with bringing two membranes close together, as for example, when the virus envelope approaches the host cell membrane prior to fusion (Sharma, 2013). The likely contributions from the four entropy component terms are summarized in Table 3.

The change in conformational entropy (ΔS conf ): intrinsically disordered proteins in viruses and their role in ligand binding

Many ligand binding proteins have intrinsically disordered regions (IDRs) which undergo a disorder-to-order transition on or near the interface on binding the ligand (Fong et al., 2009), decreasing the entropy such that ΔS conf is negative (Rajasekaran et al., 2016). Viral proteins are rich in intrinsic disorder (Dolan et al., 2015) and intrinsically disordered proteins serve as host cell receptors for viruses. An example is the ephrin receptor ligand binding domain (Fong et al., 2009) which also serves as receptor for Hendra virus (HeV) and NiV (Xu et al., 2012). Thus the unbound ephrin receptor contains partially disordered loops. In the complex (bound to ephrin) these loops are ordered to form the ligand-binding channel (Fong et al., 2009). This raises the question of whether the ephrin receptor undergoes a disorder-to-order transition on HeV/NiV binding. Interestingly Xu et al. (2012) show from the X-ray crystal structure that the binding of HeV G protein involves the movement of a tryptophan "latch" on the ephrin-B2 receptor. There would be a change in the conformational entropy ΔS conf associated with this during formation of the HeV G protein/ephrin-B2 receptor complex. According to Eq. (17), increasing the number of GP/Cr contacts during virus binding increases K a and hence the infectivity according to Fig. 3. Counterintuitively, in the case of EBOV GP pseudoviruses, increasing the surface density of GP actually decreases the infectivity by about 10-fold, perhaps reflecting steric hindrance from tightly packed GP proteins (Mohan et al., 2015) blocking a conformational change that gives an increase in ΔS conf required for binding of GP to TIM-1 (Gale, 2017). This is consistent with the observation that EBOV GPs are well separated in space on the EBOV filament surface (Beniac et al., 2012) thus allowing plenty of space for conformational changes without steric hindrance. A demonstration of how changes in ΔS conf could explain the 10-fold increase in infectivity reported by Mohan et al. (2015) on decreasing the GP density may be explored with Eq. (19). Thus increasing ΔS a by 20 J/mol/K (irrespective of the values of ΔH a and T) increases K a by 11-fold. A decrease in conformational entropy (ΔS conf ) due to steric hindrance of 20 J/mol/K represents the ordering of just four amino acid residues on the basis of the −6.1 J/mol/K per residue proposed by Rajasekaran et al. (2016) for disordered proteins. Therefore blocking the disordering of just four amino acids in the EBOV GP could explain the 10-fold decrease in infectivity of EBOV GP pseudoviruses with high GP densities. Booth et al.
(2013) show that filovirus filaments are very flexible, more flexible than filamentous paramyxoviruses and much more flexible than rhabdoviruses. Thus a flexible filament, making multiple GP/ Cr contacts, can adopt many more conformations in 2D-space when bound to a host cell surface than a rigid rod of the same length. A bound filament will therefore have a higher entropy than a stiff rod, and will have a higher K a . A flexible filament can also test out multiple GP/Cr contacts, thus maximising contacts and making ΔH a more negative in magnitude which increases K a according to Eq. (19). Indeed Booth et al. (2013) suggest that the extreme aspect ratio of filaments may be an adaptation that enhances cellular attachment. The change in rotational and translational entropy (ΔS rt ) The association of two species to form a bound complex, e.g. the binding of a ligand to a protein or the adsorption of a peptide on a lipid membrane, always involves an entropy loss (Ben-Tal et al., 2000). This no doubt applies to the binding of viruses to cells. Consideration of ΔS rt may be of particular importance for binding of filoviruses (Gale, 2017) which comprise repeating modular units linked together into single very long filaments (Beniac et al., 2012;Booth et al., 2013). Thus Gale (2017) argued that the prior immobilisation of multiple virus units through linking together into a polyploid filament enhanced cell binding compared to that of single virus units in the case of filoviruses through a less unfavourable ΔS rt term. Furthermore for this reason, natural polyploid filovirus filaments may bind more strongly to cells than suggested by results from experiments to determine filovirus infectivity using EBOV GP-expressing pseudoviruses, which are spherical and single. 4.3. Using enthalpy to relate changes in amino acid sequence at the binding faces of the virus GP and host receptor protein to a change in K a While lack of data on ΔH a limits the use of Approach 3 in Table 3 to determine K a , Eq. (19) may be applied to determine the impact of amino acid changes in GP and Cr on K a and hence link genetic sequencing data to the risk of infection. The ultimate aim would be to predict how such sequence changes affect infectivity, host range and jumping the species barrier from the perspective of the dose-response. This could include assessing the relative risk of novel strains of virus with mutations in the GP and also assessing the risk of a "standard" virus jumping the species barrier into a novel host which may have amino acid differences in its Cr relative to the "standard" host. The starting point is having information on the infectivity of the "standard" virus in the "standard" host and also crystal structure data of the "standard" virus GP bound to the "standard" host Cr. Crystal structure data are available for a number of viral GPs docked to their human host cell Cr at atomic detail including EBOV and MERS-CoV (Zhao et al., 2016;Wang et al., 2016;Lu et al., 2013). Knowing the crystal structure of the GP/Cr protein complex assists in understanding how changes in amino acids at the binding interface of GP and Cr affect ΔH a and hence K a through Eq. (19). Thus although ΔH a itself may not be known, it may be possible to predict the change in ΔH a (i.e. ΔΔH a ) for an amino acid substitution from basic biophysics knowledge of the energies of the intermolecular forces that hold proteins together. 
These are typically salt bridges and hydrogen bonds as shown for MERS-CoV bound to its Cr (Lu et al., 2013). An example of how this could be applied is demonstrated in Table 5 for changing amino acids involved in salt bridges. In a salt bridge a negatively charged amino acid residue on one protein interacts electrostatically with a positively charged residue on the other protein resulting in a strong attraction. Salt bridges, e.g. between histidine 30 (positively charged) and aspartate (Asp) 70 (negatively charged) in bacteriophage T4 lysozyme contribute −11 to −21 kJ/mol to ΔG (Anderson et al., 1990;Dong and Zhou, 2002). For the purpose of the demonstration in Table 5 it is assumed that the ΔH a for a salt bridge formation is −17.8 kJ/mol as this gives a 1000-fold change in K a (Price and Dwek 1979) for each salt bridge removed or added through mutation. Proximity of two residues of the same charge results in electrostatic repulsion as in Scenarios A and B which represent viruses with very weak affinities for the host. Scenarios D and E represent viruses which have adapted to the host with hugely increased affinities. Scenarios A and E are only two mutations away from each other. Thus changing two positively charged amino acids (Scenario A) into negatively charged amino acids (Scenario E) could increase K a by 10 12 -fold. EBOV adaptation to humans through changes in GP Amino acid changes in the EBOV GP affect its host specificity. Thus, EBOV Makona in the 2013 West Africa outbreak adapted to humans (and in doing so became less infectious to bats) through substituting alanine (A) with valine (V) at residue 82 of the GP (Diehl et al., 2016;Urbanowicz et al., 2016). This is referred to as the A82V substitution. However, the changes in infectivity to humans (as measured by EBOV GP pseudoviruses infecting human cell lines) appear to be relatively small and in the range of 2-fold to 4-fold. A 4-fold increase in infectivity due to a 4-fold increase in F c in Fig. 3 would reflect a 4-fold increase in K a , and a change in ΔG a according to Eq. (18) of −3.6 kJ/mol. This would probably reflect changes in ΔH a (and not the T.ΔS a term) since residue 82 on the EBOV GP is between residues 80 and 83 which make multiple atom contacts directly with six surface residues on the NPC1 protein (Wang et al., 2016). Thus, the A82V substitution in the GP may change ΔH a by ∼−3.6 kJ/mol on the basis of measured changes in infectivity. This relatively small change (compared to the −11 to −21 kJ/mol for a salt bridge) is consistent with the chemical similarity of alanine and valine. Changes in the host receptor (NPC1) that protect Eidolon helvum fruit bats from EBOV infection In the human NPC1 protein (Cr), an Asp at residue 502 protrudes from the surface and contacts the EBOV GP in the docked complex (Zhao et al., 2016). The natural presence of phenylalanine (Phe) at residue 502 (instead of Asp) in the NPC1 of the African Straw-coloured fruit bat (Eidolon helvum) appears to protect this species from EBOV (Ng et al., 2015). Indeed, replacing Phe at residue 502 of E. helvum NPC1 with Asp completely restored its binding to EBOV GP (Ng et al., 2015). From the X-ray structure, the presence of Phe at residue 502 of the NPC1 in E. helvum would cause severe clashes on binding EBOV GP (Zhao et al., 2016) such that ΔH a in the E. helvum NPC1/EBOV GP complex would be significantly less negative than that for binding to human NPC1 and perhaps even positive (reflecting a repulsion). 
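The conversions used in this section and in Table 5 between fold changes in K a and changes in the thermodynamic terms all follow from ΔΔG a = −RT ln(fold change). The sketch below, with invented helper names, recovers the ∼−3.6 kJ/mol quoted for a 4-fold change in K a and the ∼1000-fold change per salt bridge assumed for Table 5; the final line assumes, for illustration only, that the swing from Scenario A to Scenario E corresponds to four salt-bridge equivalents.

```python
import math

R = 8.31     # J/mol/K
T = 310.0    # K (37 degrees C)

def ddg_from_fold_change(fold):
    """Change in association free energy (J/mol) for a given fold change in K_a (Eq. 18)."""
    return -R * T * math.log(fold)

def fold_change_from_ddh(ddh_a):
    """Fold change in K_a for a change ΔΔH_a (J/mol), entropy terms held fixed (Eq. 19)."""
    return math.exp(-ddh_a / (R * T))

print(ddg_from_fold_change(4) / 1000)    # ~ -3.6 kJ/mol for an A82V-type 4-fold change in K_a
print(fold_change_from_ddh(-17.8e3))     # ~1000-fold per salt bridge gained
# Assuming four salt-bridge equivalents (~ -71.2 kJ/mol) for the Scenario A -> E swing:
print(fold_change_from_ddh(-71.2e3))     # ~1e12-fold, consistent with the value quoted above
```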
Indeed, determining ΔΔH a would enable the effect of the Asp to Phe substitution at residue 502 on K a and hence on the infectivity of EBOV to E. helvum to be assessed through Eq. (19) as in Table 5. Hoffmann et al. (2016) developed VSV pseudoviruses with filovirus GPs expressed on their surfaces to test the efficiency of GP-mediated cell entry into cell lines of different bat species. Hoffmann et al. (2016) demonstrated that the efficiency of entry of EBOV-GP VSV pseudovirus into E. helvum cell lines was markedly reduced compared to that with GPs from other filoviruses and at standard virus levels no infection was detected. However, at high levels of (i.e. undiluted) virus, EBOV-GP VSV pseudovirus did show significantly increased infection compared to the negative controls. Thus EBOV GP is capable of mediating entry into E. helvum cells, albeit with low efficiency. This is entirely consistent with the dose-response model simulated in Fig. 2 which predicts for low K a s (10 5 -10 7 M −1 ) that very small fractions (10 −7 to 10 −5 ) of the host cells have bound virus at very high virus doses, i.e. 10 11 to 10 12 . Thus some infection would be expected in E. helvum on the basis of Eqs. (2) and (3) and the probability of infection which, although low at low K a values, would never be zero because there is no threshold K a below which binding does not occur (Fig. 3). Supporting this, Hayman et al. (2010) reported EBOV antibodies in one of 256 E. helvum bats tested in Ghana and suggested this resulted from EBOV infection. Parameterising the model: determining values for K mucin Values for K mucin may be derived from published data on the strength of interactions between lectins and glycans. Lectins are proteins which bind selective glycan groups (Taylor and Drickamer, 2006). Viruses such as influenza virus, NoV and rotavirus behave as lectins by binding selectively to glycans. Indeed, K mucin is related to the number of lectin/glycan interactions and their respective K d s in the same way as K a according to Eq. (17). Dam and Brewer (2010) report K d s of 2.0 × 10 −10 M for the lectin soy bean agglutinin binding to porcine submaxillary mucin which has multiple GalNAc sugars to bind. Similarly Vataira macrocarpa lectin has a K d of 1.0 × 10 −10 M with porcine submaxillary mucin. Acting singly (i.e. using n = 1 in Eq. (17)), these K d s would translate into K mucin values of ∼10 10 M −1 at which very high excesses of mucin would be needed to achieve removal of the virus according to Fig. 1. There are two mechanisms by which the magnitude of K mucin could be increased:-1. Through multiple binding interactions; and 2. Through irreversible binding. The effect of multiple contacts on the strength of binding has been shown for binding of the mannose binding protein (MBP) to carbohydrate structures on the surfaces of pathogens. Each carbohydrate-recognition domain (CRD) of the MBP interacts only with the terminal sugar residue in an oligosaccharide chain (Taylor and Drickamer, 2006). The K d for interaction with a high mannose oligosaccharide is ∼10 −3 M, i.e. low affinity. High-affinity binding of MBP requires interaction of multiple CRDs with multiple terminal mannose residues. The trimeric structure of MBP presents a cluster of CRDs for interaction with appropriately spaced terminal mannose residues on the pathogen surface. 
Thus the arrays of terminal sugar residues on the surfaces of microorganisms can interact with multiple sites simultaneously.

Table 5. Theoretical consideration of how changes in ΔH a (ΔΔH a ) through mutations affecting two amino acid residues involved in salt bridges at the contact interface between GP and Cr could affect the binding affinity K a at 310 K (37 °C). Columns: virus/host scenario; effect of mutation on the number of charged amino acid residues opposite each other at the contact surface of GP and Cr; representation (+, positive amino acid residue; −, negative amino acid residue; 0, electrostatically neutral amino acid residue); comments; ΔΔH a (kJ/mol) relative to Scenario C, calculated assuming one salt bridge contributes 17.8 kJ/mol (Anderson et al., 1990); effect.

As shown in Eq. (17), a three way interaction involving three binding sites with K d s of 1.0 × 10 −3 M can result in an overall K a of up to 1.0 × 10 9 M −1 for a multivalent ligand (i.e. pathogen surface) with appropriately spaced terminal sugar residues (Taylor and Drickamer, 2006). In the case of influenza virus A, the binding site of each HA polypeptide is relatively shallow and interacts primarily with the terminal sialic acid residues linked to galactose (Taylor and Drickamer, 2006). The affinity for monomeric sialosides is weak (K d ∼10 −3 M), but binding to cell surfaces is enhanced by the simultaneous interaction of multiple HA-binding sites with multiple sialic acid residues on the target host cell (Taylor and Drickamer, 2006). The cumulative effect of n binding sites each with the same K d is given by the n th power (Eq. (17)). Thus seven interactions with K d ∼10 −3 M, for example, would give a K a ∼10 21 M −1 . It is concluded that mucins and MBP PRRs of the innate immune system could bind virus with sufficient affinity such that slight mucin excesses could remove all the virus (Fig. 1). Furthermore, K mucin could become higher if the virus were irreversibly bound, for example by being "folded in" to the inside of mucous droplets such that it can no longer exchange with free virus on the outside. In effect k off would tend to 0 in Eq. (6) and hence K mucin would tend to infinity. It should be noted that the magnitude of K d (and hence K a and K mucin ) is constant for a given molecular interaction at a given temperature. However, the magnitude of K mucin may change during the infection process through both host mechanisms and virus mechanisms as is now discussed.

Changes in K mucin through host effects

Mucin glycosylation changes during infection such that mucins isolated from rotavirus-infected mice at 4 days post infection were more potent at inhibiting rotavirus infection than mucins from control mice (Boshuizen et al., 2005). Interestingly there are also age-dependent differences in mucin quantities, composition, and/or structure which alter the antiviral capabilities of the small intestine mucins (Boshuizen et al., 2005). Furthermore, mucin production by the host increases during infection by rotavirus with mucin-coding mRNA levels peaking at 1 day post infection (Boshuizen et al., 2005). Thus both the mucin: virus ratio and the magnitude of K mucin could change during progression of the infection, hence affecting F v in Fig. 1.

Changes in K mucin through virus effects

There are viral mechanisms that remove the virus from the mucin so increasing the concentration of free virus.
Changes in K_mucin through virus effects

There are viral mechanisms that remove the virus from the mucin, so increasing the concentration of free virus. For example, in the case of influenza virus, the neuraminidase removes sialic acids from glycans, which enables virus particles to be removed from the cell surface after assembly and from decoy receptors, e.g. in mucus (Guo et al., 2017). This has two effects on virus binding in Fig. 1. First, it increases the rate of removal of the virus (k_off in Eq. (6)) and hence decreases K_mucin. Second, it decreases the concentration of mucin, thus decreasing the mucin:virus ratio. From Fig. 1, both of these act synergistically to increase the fraction of free virus. In the case of influenza viruses, the HA protein binds to sialoside receptors on the host cell surface, preferentially binding to sialic acids linked to a penultimate galactose (Guo et al., 2017). The balance between the activities of the HA and neuraminidase proteins has a critical role in optimal viral fitness, tropism and transmission (Guo et al., 2017). The opposing effects of HA and neuraminidase on influenza A virus infectivity could be modelled through their effect on K_mucin (through k_on and k_off in Eq. (6)) and the mucin:virus ratio in Fig. 1. Handel et al. (2014) use differential equations to model how sticky an influenza virus should be to maximize fitness, with stickiness representing the balance between attachment and detachment. Shanker et al. (2011) proposed that local flexibility in part of the NoV surface protein could allow the virus to dissociate from salivary mucin-linked HBGA in the changing microenvironment (pH, for example) during its passage through the GI tract, to subsequently associate with HBGAs linked to intestinal epithelial cells. In effect, the k_off rates increase in Eq. (6) such that K_mucin varies in different parts of the host.

Discussion

This is a concept paper which presents a generic approach to using data from biochemistry and molecular biology to parameterize dose-response models for MRA through thermodynamic equations. In the model developed here, a host intestine is simulated with 4.15 × 10^8 cells in a volume of 0.314 dm^3. The initial infection process is broken down into two stages, namely escape of the virus from the innate host defense barriers (Fig. 1) and the subsequent binding of any remaining virus to its specific receptor on the host cell (Fig. 2). The two parameters, namely F_v and F_c, which quantify these stages represent probabilities which, together with p_cell, can be used with Eqs. (1)-(3) to give a complete dose-response model which calculates the risk of infection of the host from a given challenge dose (V_initial). (Although V_intestine from Eq. (1) does not appear in Eqs. (2) or (3), it is used in Fig. 2 to estimate F_c for Eq. (3).) The difference equation approach developed here is tedious to apply to MRA, and applying Eq. (7) and Eq. (11) would greatly simplify the calculation of F_c and F_v, respectively, for MRA purposes. However, using Eq. (7) for F_c fails at low doses of a virus which has a high binding affinity for the host cells (K_a > 10^15 M^-1), and the difference equation approach then has to be used, as in Fig. 2. Using Eq. (11) for F_v is appropriate for MRA if the mucin concentration exceeds the virus concentration in the host by a factor of >10 or if K_mucin < 10^13 M^-1 (Fig. 1). The mechanistic approach developed here may be used to determine the ID_50 (see Table 4), from which it is easy to calculate p_1, the risk to the host from ingestion of a dose of one pathogen. The parameter p_1 may then be used directly in conventional dose-response models (Eq. (16)), thus linking the application of the mechanistic model developed here to MRA.
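As a worked illustration of that link, the sketch below converts an ID_50 from the mechanistic model into p_1 and into the risk at an arbitrary dose. Eq. (16) itself is not reproduced in this section, so the sketch assumes the standard exponential dose-response model commonly used in MRA, p_host = 1 − exp(−r × dose) with pathogens acting independently; the ID_50 values shown are placeholders rather than the Table 4 results.

```python
import math

def p1_from_ID50(id50: float) -> float:
    """Single-pathogen risk p_1 from the ID_50, assuming the exponential
    dose-response model p_host = 1 - exp(-r * dose) widely used in MRA
    (a stand-in for Eq. (16), which is not reproduced here)."""
    r = math.log(2.0) / id50        # per-pathogen rate parameter
    return 1.0 - math.exp(-r)       # approximately r when the ID_50 is large

def p_host(dose: float, id50: float) -> float:
    """Risk of infection for an ingested dose under the same assumed model."""
    return 1.0 - math.exp(-math.log(2.0) / id50 * dose)

# Illustrative ID_50 values only (e.g. as they might emerge from Table 4).
for id50 in (1e2, 1e5, 1e8):
    print(f"ID_50 = {id50:.0e}: p_1 = {p1_from_ID50(id50):.2e}, "
          f"risk at a dose of 1e4 = {p_host(1e4, id50):.3g}")
```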
The work here appears to demonstrate that stochasticity in the mechanistic model is not an issue. If stochasticity issues were to arise for low doses of very high affinity pathogens (i.e. those with very high K_a), the approach should be to use the difference equation approach to determine the ID_50, from which p_1 can then be calculated and used directly in Eq. (16). Thus, depending of course on the value of p_cell, C.V is much greater than 1 for the ID_50 (Table 4) and stochasticity issues are minimised. Although more information on the concentrations of mucin proteins in the saliva and intestine is needed, the lack of data on the thermodynamic parameters is the main limiting factor. Approach 3 (Table 3), which uses Eq. (19) to calculate K_a, is limited by the current lack of data for ΔH_a and ΔS_a for the virus/host cell interaction, and Approach 2 using published K_d data should therefore be pursued in the absence of experimental K_a data from Approach 1. While data are available for the K_d of GP/Cr binding for several viruses, Approach 2 using Eq. (17) to estimate K_a is also limited by the lack of information on ΔS_a for binding of the whole virus to cells. This is important because the magnitude of ΔS_a for immobilization of a whole virus is likely to be huge, as demonstrated for VSV binding to artificial PL bilayers (Table 3). Lack of information on ΔS_a is therefore a major data gap, and furthermore may also have wider implications in affecting our ability to interpret measured K_d values in terms of the actual infectivity of the virus. Thus it is the K_a as determined by Eq. (17), taking into account ΔS_a for whole virus binding, which defines infectivity (see results from Table 2), and not the K_d alone. This is because the ΔS_a for binding of the whole virus to cells will be much greater than the ΔS_a for binding of a soluble protein fragment as in K_d determinations by SPR. Indeed, the ΔS_a for binding of VSV is a huge −16,062 J/mol/K compared to the ΔS_rt and ΔS_conf terms for binding of an antibody to its antigen (Table 3), which is comparable to GP binding to its Cr in terms of molecular sizes. The model developed here based on Eq. (4) does not allow for multiple virus particles to bind to a single host cell. However, the cell membrane of the host cell typically contains many copies of Cr, and in theory each host cell could therefore bind multiple copies of the virus. Furthermore, the virus particles themselves could be ingested as a clump. Indeed, in the case of EBOV, many genome copies may be linked together into long polyploid filaments (Beniac et al., 2012), which could affect the thermodynamics of binding (Gale, 2017). Eq. (4) could be modified to accommodate multiple viruses binding to a single host cell (as described for ligands binding to a protein by Price and Dwek (1979)) if the number of Crs per host cell is known. It should be stressed that homogeneity in both the mucin and the pathogen is assumed here for simplicity. In further developing this mechanistic model, data would be needed not only on the heterogeneity of the mucin concentration but also on the statistical distribution of the number of viruses bound per cell. For example, the distribution could be Poisson, such that for a ratio of one virus bound per host cell (representing a high V_intestine), some host cells have zero, most have one, while a few have two, three or even four viruses bound.
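For the Poisson case just described, the per-cell probabilities are simple to tabulate. The sketch below assumes a Poisson distribution with a mean of one bound virus per host cell, which is the scenario given in the text; the mean is the only assumption and can be changed freely.

```python
import math

def poisson_pmf(k: int, mean: float) -> float:
    """Probability that a host cell has exactly k bound viruses,
    assuming a Poisson distribution with the given mean."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

mean_bound = 1.0   # one virus bound per host cell on average
for k in range(5):
    print(f"P({k} viruses bound) = {poisson_pmf(k, mean_bound):.3f}")
# 0: 0.368, 1: 0.368, 2: 0.184, 3: 0.061, 4: 0.015, i.e. some cells have
# none, most have one, and a few have two, three or even four bound viruses.
```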
Alternatively, the distribution could be over-dispersed, such that a few host cells have thousands of bound virus while most have none. Clearly, over-dispersion would decrease C.V compared to the Poisson distribution. In the model presented here for demonstrating proof of principle, the modelling of F_c as a function of V_intestine assumes that each host cell binds just one virus through Eq. (4). This is appropriate for two reasons. First, exposure to faecal/oral pathogens through routes such as water may involve very low numbers of pathogen per person (see Gale, 2017), such that it would be unlikely that more than one pathogen would bind to the same host cell. Second, it maximises C.V and hence p_host. Thus, the binding of more than one virus to a host cell may be a waste of virus resource because C.V in Eq. (2) is not maximised. Binding of two viruses to the same cell will not affect p_cell, unless there is co-operation between the two viruses in some way, such that one facilitates infection by the other. In this sense, the value of p_host predicted through the mechanistic dose-response model here is a worst case. It should be noted that Gale (2017) proposed that formation of polyploid filaments in the case of EBOV did indeed enhance cell binding by optimising ΔS_a, in effect representing a co-operation between viral particles in the infection process. This demonstrates the importance of considering not only the statistical distribution of virus clumps but also the thermodynamic implications for their binding. As shown in Fig. 1, the probability of the virus escaping the mucin barrier (as represented by F_v) is controlled by the mucin:virus ratio and the mucin/virus binding affinity as represented by K_mucin. The range over which K_mucin is biologically significant is <10^22 M^-1, 10^22 M^-1 being the K_mucin value at which there is no free virus at mucin:virus ratios >1:1 according to the simulation in Fig. 1. Values for K_mucin could be high due to multiple contacts with repeating units and "folding in" of bound virus into the interior of the mucus. Reading the x-axis from right to left in Fig. 1 suggests a threshold effect for virus dose, as the binding capacity of the mucin is exceeded at high virus doses. This is not considered further here, but is of interest for the development of MRA, which generally assumes there is no threshold dose. The approach developed here in Fig. 1 for mucins could also be applied to modelling the probability of the virus being inactivated by other components of the host innate immune system such as the PRRs, which include the MBP, and which recognize repeating units on the pathogen surface. Once bound to the MBP, the virus would be taken up (phagocytosis) through the complement system by macrophages and destroyed (Taylor and Drickamer, 2006). The probability of a host cell having bound virus (as represented by F_c) increases linearly with dose (Fig. 2). The biologically significant range for K_a is <10^15 M^-1, within which F_c increases linearly with increasing K_a (Fig. 3). K_d values measured by SPR are in the 10^-9 M to 10^-12 M range for GP/Cr interactions for MERS-CoV and influenza virus, suggesting very strong binding (Lu et al., 2013; Raman et al., 2014). Even for EBOV GP, with K_d values in the 10^-4 to 10^-5 M range (Yuan et al., 2015; Wang et al., 2016), high K_a values could be achieved through multiple contacts (Eq. (17)), although this may be offset by the ΔS_a term (as discussed above).
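The dose and K_a dependence of F_c summarised above can be reproduced qualitatively with a simple equilibrium calculation. The sketch below treats each of the 4.15 × 10^8 host cells as a single binding site in a 1:1 mass-action equilibrium with the virus; this is a simplification standing in for the paper's Eq. (4) difference-equation scheme, and the doses shown are illustrative.

```python
import math

AVOGADRO = 6.022e23
N_CELLS = 4.15e8        # host cells in the simulated intestine
VOLUME_L = 0.314        # intestinal volume (dm^3)

def fraction_cells_bound(dose: float, K_a: float) -> float:
    """Fraction of host cells with bound virus (F_c), treating each cell as
    one binding site in a 1:1 mass-action equilibrium (a stand-in for the
    paper's Eq. (4) scheme, not a reproduction of it)."""
    C = N_CELLS / (AVOGADRO * VOLUME_L)   # "concentration" of host cells (M)
    V = dose / (AVOGADRO * VOLUME_L)      # concentration of virus (M)
    a = K_a
    b = -(K_a * (C + V) + 1.0)
    c = K_a * C * V
    # Numerically stable smaller root = concentration of occupied cells.
    big_root = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    bound = c / (a * big_root)
    return bound / C

for K_a in (1e6, 1e12, 1e16, 1e19):
    print(f"K_a = {K_a:.0e} M^-1: "
          f"F_c at dose 1e4 = {fraction_cells_bound(1e4, K_a):.2e}, "
          f"at dose 1e12 = {fraction_cells_bound(1e12, K_a):.2e}")
# F_c rises with dose and, below ~1e15 M^-1, roughly linearly with K_a; the
# K_a = 1e16 and 1e19 rows nearly coincide because once K_a*[cells] >> 1
# essentially every virus particle (or, at very high doses, every cell) is
# bound, so further increases in K_a no longer raise F_c.
```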
Increasing the K_a above 10^15 M^-1 does not further increase the fraction (F_c) of host cells with bound virus (Fig. 3), such that the F_c versus virus dose curves are superimposed in Fig. 2 at K_a > 10^15 M^-1. At such high K_a values, receptor binding is not likely to be rate-limiting in the infection process. Thus Xu et al. (2012) found that the stability of the HeV-G/ephrin-B2 association does not strongly correlate with the efficiency of viral entry, suggesting that, for ephrin-B2-expressing cells, viral attachment is not the rate-limiting step in the viral entry process. This is consistent with highly efficient binding (i.e. large K_a) such that further increasing K_a does not increase the fraction of cells with bound virus, F_c, according to Fig. 3. Hence C.V (through Eq. (3)) is constant and not rate-limiting, while a subsequent part of the infection process accommodated in p_cell is less efficient than F_c, such that p_cell is very small and controls the overall value of p_host according to Eq. (2). Indeed, the magnitude of K_a is much more important when receptor binding is inefficient, such as in the jumping of the species barrier into a novel host by an emerging virus, as suggested by Gale (2017). This is confirmed by Fig. 3, with F_c increasing linearly with K_a at lower K_a values. Gale (2017) demonstrated that virus binding to its receptor is only important in controlling the probability of infection of a host cell if it is a major barrier, i.e. relatively inefficient. The K_a for a virus jumping the species barrier into a novel host species would be expected to be very low, perhaps due to electrostatic repulsion as for Scenarios A and B in Table 5. The subsequent adaptation of the virus to its new host through mutation would increase K_a, as suggested here in Table 5 and reported for EBOV in humans (Diehl et al., 2016; Urbanowicz et al., 2016). The magnitude of ΔH_a is largely controlled by how good the molecular fit is between the virus GP surface and the Cr surface and depends on hydrogen bonds and salt bridges. As shown in Table 5, changes in ΔH_a (ΔΔH_a) may be used to predict changes in K_a, providing the molecular basis of the GP/Cr interaction is available at atomic detail to enable some assessment of ΔΔH_a. The infectivity of the new virus/host combination relative to the "standard" virus/host would be directly proportional to the change in K_a (as demonstrated from results in Table 2). Thus, thermodynamics provides the link between changes in the virus/host identified through sequencing data and the risk of infection. This in theory allows predictions of the infectivity of emerging virus strains or the susceptibility of novel hosts to be assessed on the basis of the effects of changes in sequence data. It is concluded that thermodynamic approaches have a major contribution to make in developing dose-response models for emerging viruses.

Conflict of interest

None declared.

Disclaimer

The views expressed in this paper are those of the author and not necessarily those of any organisations.
Mammalian E-type Cyclins Control Chromosome Pairing, Telomere Stability and CDK2 Localization in Male Meiosis Loss of function of cyclin E1 or E2, important regulators of the mitotic cell cycle, yields viable mice, but E2-deficient males display reduced fertility. To elucidate the role of E-type cyclins during spermatogenesis, we characterized their expression patterns and produced additional deletions of Ccne1 and Ccne2 alleles in the germline, revealing unexpected meiotic functions. While Ccne2 mRNA and protein are abundantly expressed in spermatocytes, Ccne1 mRNA is present but its protein is detected only at low levels. However, abundant levels of cyclin E1 protein are detected in spermatocytes deficient in cyclin E2 protein. Additional depletion of E-type cyclins in the germline resulted in increasingly enhanced spermatogenic abnormalities and corresponding decreased fertility and loss of germ cells by apoptosis. Profound meiotic defects were observed in spermatocytes, including abnormal pairing and synapsis of homologous chromosomes, heterologous chromosome associations, unrepaired double-strand DNA breaks, disruptions in telomeric structure and defects in cyclin-dependent-kinase 2 localization. These results highlight a new role for E-type cyclins as important regulators of male meiosis. Introduction Cyclins are key cell cycle regulatory subunits that bind, activate, and provide substrate specificity for the cyclin-dependent kinases (CDKs). Although the role of cyclins in the somatic mitotic cycle has been extensively studied, their function in the meiotic cycle is poorly understood. Several cyclins have been identified with unique patterns of expression during spermatogenesis [1]. For example, the testis-specific A-type cyclin, cyclin A1, is restricted to spermatocytes during prophase I from the pachytene to diplotene stages [2]. Cyclin A1 was the first cyclin shown to be essential for meiosis: cyclin A1-deficient male mice are sterile due to an arrest in meiotic prophase at the diplotene stage, just prior to the first meiotic division [3]. In contrast, cyclin A2, which is generally considered to be the mammalian S-phase cyclin, is expressed in mitotically dividing spermatogonia but not in meiotic prophase spermatocytes [4]. Not surprisingly, deletion of the ubiquitously expressed cyclin A2 results in embryonic lethality shortly after implantation [5]. There are also two members of the mammalian E-type family, cyclin E1 and E2, which play important roles in mitoticallydividing cells. Cyclin E1 and E2 exhibit high homology within their protein sequence (70% identity between the cyclin box and 47% between the overall sequences) and it has been proposed that they have overlapping functions during the cell cycle. Indeed, Ccne1 (to be designated as E1 for simplicity in the rest of the text) or Ccne2 (designated as E2) single knockout mice are viable but double-knockout mice die during embryonic development due to placental abnormalities [6,7]. Interestingly, while both male and female E1 knockout mice were fertile as were E2 knockout females, E2 knockout males exhibited reduced fertility, decreased testis size, reduced sperm counts and apparently abnormal meiotic spermatocytes [6]. However, neither the cellular nor molecular basis for this phenotype has been elucidated. Moreover, it is unknown which cells in the testis express the E-type cyclins, or the function that E1 and E2 might have in germ cells. 
In the present study, we provide evidence for distinct functions of the E-type cyclins during spermatogenesis and novel regulation of their expression. We demonstrate that the E-type cyclins function in the progression of spermatocytes through meiotic prophase I, influencing homologous chromosome pairing, synapsis and DNA repair and, in particular, function at the chromosome ends. Further, in the absence of E-type cyclins, the proper localization of CDK2 on telomeres during male meiotic prophase I is disrupted and there is concomitant chromosome instability. These results reveal a critical role for the E-type cyclins during male mammalian meiosis and underscore their function in regulating spermatogenesis and hence, male fertility. Cyclins E1 and E2 have distinct patterns of expression during spermatogenesis To elucidate the role of E-type cyclins during spermatogenesis, we first identified their normal pattern of expression in the testis at the cellular and sub-cellular levels. Quantitative PCR analysis showed a robust E1 mRNA expression in spermatocytes as compared to the mitotically dividing spermatogonia and Sertoli cells that predominate in post-natal day 10 testes [8] ( Figure 1A). However, while E1 protein was readily detectable by immunohistochemistry in Sertoli cells in the adult testis, it was not evident in spermatocytes ( Figure S1a,b). In fact, expression of E1 protein in spermatocytes was only detected in immunoblots of purified pachytene spermatocytes ( Figure 1C) and in immuno-staining of chromosome spreads (Figure 1Da-e). To precisely examine the developmental stage and sub-cellular localization of cyclin E1 and E2 proteins in meiotic prophase, we undertook a detailed immunolocalization analysis of spermatocyte spreads. Concomitant immunolocalization of SYCP3, a component of the axial element (AE) of the synaptonemal complex (SC), facilitated identification of meiotic chromosomes and classification of specific stages of prophase I. Cyclin E1 protein signal was barely detectable if at all in early pachytene spermatocytes, but was detected in mid-pachytene to diplotene spermatocytes in most of the chromatin (Figure 1Dd,e). However, in the sex chromosomes, the distribution of E1 was distinct, being present as foci along the AE (Figure 1Dd,e). The specificity of detection of E1 was confirmed by the absence of signal in spreads from E12/2 E2+/+ testes (Figure 1Dd,e, lower right insets). Cyclin E2 protein was clearly detected as early as the leptotene stage ( Figure 1Df) and its expression increased throughout most of the chromatin during prophase I progression (Figure 1Dg-j). In contrast with E1, E2 was absent from the chromatin and AE of the X and Y (Figure 1Di,j). These results show clear differences between cyclin E1 and E2 proteins in their temporal appearance and levels of expression as well as their distinct nuclear distribution patterns during meiotic prophase. Altered cyclin E expression patterns in E12/2E2+/+ and E1+/+E22/2 testes As functional redundancy between E1 and E2 had been suggested in mitotic cells [9] and both E12/2E2+/+ and E1+/ +E22/2 mice are viable but E2-deficient male mice exhibit reduced fertility, we asked whether the pattern of expression of the remaining E type cyclin was altered, specifically in the testis. In E12/2E2+/+ testes, E2 mRNA levels were not significantly changed ( Figure 1B), but the levels of E2 protein were increased in spermatocytes and E2 protein was now also found in round spermatids ( Figure 1C). 
However, the temporal appearance and distribution of E2 protein, including its exclusion from the X and Y, were unchanged in E1-deficient spermatocytes (Figure 2Aa-e). In contrast, E1 mRNA levels were increased (Figure 1B) and E1 protein was elevated in E1+/+E2−/− spermatocytes and was also detected in round spermatids by immunoblotting (Figure 1C). By immunohistochemistry, E1 protein was now also readily detected at low levels in early-pachytene spermatocytes (Figure 2Ba), increasing in mid-pachytene (Figure 2Ba,b), and was also detected in late-pachytene and diplotene spermatocytes (Figure 2Ba). No detectable levels of cyclin E1 protein were observed in spermatogonia (Figure 2Ba,b), preleptotene spermatocytes (Figure 2Bb) or round spermatids by immunohistochemistry (Figure 2Ba,b). In E1+/+E2−/− spermatocyte spreads, the temporal appearance of cyclin E1 protein now resembled that of cyclin E2, being detected as a faint signal in leptotene and increasing at zygotene (Figure 2Bc).

Additional E-type cyclin deficiency causes progressive loss of advanced spermatogenic cells resulting in decreased fertility

We next examined the effects of additional deletions of E-type cyclin alleles on male fertility, using constitutive knockout E1−/−E2+/+ and E1+/+E2−/− mice [6], conditional E1 floxed mice [10], and mice expressing Cre under the Stra8 promoter [11] (deleted allele designated E1Δ). Mating studies of the resulting progeny with various combinations of deleted E-type cyclins showed that, as previously reported [6,7], both E1−/−E2+/+ and E1−/−E2+/− males were fertile (data not shown) but E1+/+E2−/− males exhibited reduced fertility. Detailed assessment of the nature of the reduced fertility revealed variably reduced sperm counts and variable numbers of fetuses/pups produced (Table S1A). Nonetheless, all E1+/+E2−/− males assessed (n = 10) were able to produce at least one fetus/pup. Importantly, the additional loss of one E1 allele on the E1+/+E2−/− background had a striking effect on fertility: all E1+/−E2−/− males were completely infertile (n = 10), with a significant reduction in testis size (p < 0.001) and complete azoospermia (Table S1B). As anticipated, the removal of the second E1 allele in germ cells (E1−/ΔE2−/− or E1Δ/ΔE2−/−) also yielded sterile and azoospermic males.

Author Summary

Understanding the control of meiosis is fundamental to deciphering the origin of male infertility. Although the mechanisms controlling meiosis are poorly understood, key regulators of mitosis, such as cyclins, appear to be critical. In this regard, male mice deficient for cyclin E2 exhibit subfertility and defects in spermatogenesis; however, neither the stages of germ cell differentiation affected nor the responsible mechanisms are known. We investigated how E-type cyclins control male meiosis by examining their expression in spermatogenesis and the consequences that multiple deletions of Ccne1 and Ccne2 alleles produce. Loss of Ccne2 expression increases cyclin E1 levels as a compensatory effect, but there are still meiotic defects and subfertility. Further, loss of one Ccne1 allele in the absence of cyclin E2 results in infertility, as does loss of the remaining Ccne1 allele, but with even more severe meiotic abnormalities. We further found that cyclin E1 is involved in sex chromosome synapsis while E2 is involved with homologous pairing and chromosome and telomere integrity. These processes and structures were severely disrupted in the absence of both cyclin E1 and E2, uncovering new roles for the E-type cyclins in regulating male meiosis.

To begin to elucidate which spermatogenic cell types were affected by additional loss of E-type cyclin function, histological analysis of testes from adult mice of the various genotypes was performed (Figure 3a-f). E1−/−E2+/+ and E1−/−E2+/− testes appeared morphologically normal (Figure 3b and data not shown, respectively), as compared to wild type (WT) testes (Figure 3a). As shown in Figure 3c, E1+/+E2−/− testes displayed testicular abnormalities, as noted in earlier studies [6], but spermatogenesis was not arrested at a unique stage. The histological abnormalities became more severe with additional loss of E1 alleles. That is, E1+/−E2−/− adult testes (and similarly E1+/ΔE2−/− testes) contained a few tubules with spermatogenesis arrested at the spermatid stage, but such spermatids were sparsely populated and mostly degenerating (Figure 3d,e). Abnormally elongated spermatids (Figure 3d, bracket) or a few step 9 spermatids were the most advanced spermatogenic cell types (Figure 3d). Most mice displayed very severely disrupted spermatogenesis, with tubules typically containing only pachytene spermatocytes (Figure 3e), while others contained preleptotene-leptotene spermatocytes with a few step 9 spermatids (Figure 3e). Complete deletion of cyclin E function in the male germline (E1Δ/−E2−/− or E1Δ/ΔE2−/−) yielded profound disruption of spermatogenesis, with testicular tubules containing spermatocytes mostly arrested at early pachytene stages (Figure 3f). However, there were also some "Sertoli cell- and spermatogonia-only" tubules in adult testes (Figure 3f). It is interesting to note that most of the spermatogonia in these tubules are B-type, suggesting spermatogenesis may be delayed at the entry of B-type spermatogonia into preleptotene spermatocytes (Figure 3f).

Figure 1. Cyclin E1 and E2 have differential mRNA and protein expression patterns during spermatogenesis. (A) Relative mRNA expression (compared to the expression of Arbp and normalized to expression levels obtained from pnd 10 testes) was determined using RT-qPCR in pnd 10 testes and in purified populations of pachytene spermatocytes and round spermatids. Results are the mean ± SEM of four independent experiments. *p < 0.05, **p < 0.01, ***p < 0.001. (B) Relative E1 and E2 mRNA expression (compared to the expression of Arbp and normalized to expression levels obtained from WT testes) using RT-qPCR of whole adult testes of different genotypes. Results are the mean ± SEM of three independent experiments. ***p < 0.001. (C) Immunoblot analysis of E1 and E2 protein expression from tissues or purified populations of cells as noted. Immunodetection of α-tubulin was used as the loading control. (D) Localization of E1 (a-e) or E2 (f-j) (green) and SYCP3 (red) during prophase I in WT spermatocyte spreads. Right upper insets show the X and Y chromosomes (XY) outlined in d, e, i and j. E1 and E2 are totally absent in the XY of E1−/−E2+/+ spermatocytes (lower right insets with asterisk in d,e). doi:10.1371/journal.pgen.1004165.g001

Additional cyclin E depletion increases apoptosis of pachytene cells

To determine whether the cells in the abnormal testicular tubules of the various genotypes were undergoing apoptosis, TUNEL staining of adult testicular sections was used.
In WT adult testis, a few TUNEL-positive spermatogonia and early meiotically dividing spermatocytes can be seen ( Figure S2a) [12]. In E12/2E2+/+ testes ( Figure S2b), the pattern of TUNEL-positive germ cells was similar to that of WT. However, in E1+/+E22/2 testes ( Figure S2c), TUNEL-positive pachytene spermatocytes were observed. Such TUNEL-positive cells were also detected in E1+/2E22/2 testes ( Figure S2d), regardless of whether the testicular abnormalities (as reflected in the loss of advanced spermatogenic cells) were relatively modest (Figure 3d) or extensive in severity (Figure 3e). Given the greatly reduced cellularity in the E1D/DE22/2 tubules, the number of cells that could be detected actively undergoing apoptosis was comparatively low. However, as above, any detectable TUNEL-positive cells were apparently early pachytene spermatocytes ( Figure S2e). E-type cyclin-deficient spermatocytes exhibit aberrant progression through prophase I To evaluate the effects that additional depletion of E-type cyclin alleles produce in prophase I progression, we quantified the number of spermatocytes in each stage of prophase I among the various genotypes, identifying the respective stages by immunolocalization of SYCP3 to identify the AE of the SC, chromosome morphology, and the behavior of the X and Y [13]. Immunolocalization of SUMO-1 to the sex body served as a marker for the transition of early to mid-pachytene and late prophase stages [13]. The number of cells at various stages of prophase I were similar between E12/2E2+/+ and WT spermatocytes, with the highest proportion of cells being in the pachytene and diplotene stages ( Figure S3a). In E1+/+E22/2 testes, there was a slight increase in the proportion of spermatocytes in earlier stages, such as leptotene through early pachytene ( Figure S3b). Strikingly, in E1+/DE22/ 2 testes, we observed an increase in the proportion of spermatocytes in early stages of prophase I: most of the cells were in the zygotene and early pachytene stages with only 12.661.16% comprising the later stages as compared with control ( Figure S3c). This suggests that spermatocytes accumulate at the zygotene/early pachytene stages, thus reducing the number of cells that can progress through later stages. This was even more apparent in E1D/DE22/2 testes, where most spermatocytes were in late zygotene or early pachytene and only 1.560.4% progressed into a mid pachytene-like stage, albeit highly aberrant ( Figure S3d). No diplotene spermatocytes were observed. Cyclin E1-deficiency associates with altered synapsis of the sex chromosomes and E2-deficiency with defects in pairing and synapsis of autosomes In mammalian meiosis, arrest at the pachytene stage and the subsequent induction of apoptosis are often triggered by abnormalities in homologous chromosome pairing and synapsis. We therefore analysed the pattern of chromosome pairing and synapsis in the different genotypes by immunolocalization of SYCP3 along with SYCP1, the main component of the central element (CE) of the SC. In normal meiosis, SYCP3 is initially loaded onto chromosomes during the leptotene stage and AE formation is completed in zygotene (Figure 4a). At this stage, the CE begins to form between the two AEs ( Figure 4a) and SC formation is complete by the beginning of the pachytene stage (Figure 4b). At the diplotene stage, homologous axes separate and remain attached by chiasmata (Figure 4c). 
During later stages of prophase I, it is common to observe a conical thickening of the AEs at the chromosome ends, called synaptonemal complex attachment sites (SCAS) (Figure 4c). The SCAS are crucial for attaching the chromosomes to the nuclear envelope [14]. In E1-deficient (E1−/−E2+/+) spermatocytes, the formation of the AEs and the loading of the CE (Figure S4a,b), as well as the structure of the SCAS (data not shown), were similar to WT spermatocytes. Although fertility was not affected in these mice, we nonetheless observed some abnormalities in sex chromosome synapsis. Total asynapsis of the X and Y was seen in 25.5 ± 2% of pachytene spermatocytes (n = 85) (Figure S4c). Of these asynapsed sex chromosomes, 8.2 ± 1.7% exhibited the Y chromosome in self-synapsis or in a ring configuration (Figure S4a,e and d,f, respectively). An additional 12 ± 5.2% (n = 85) showed an incomplete synapsis of the pseudoautosomal region (PAR) of the X and Y that was restricted to a small portion of the sub-distal region of both chromosomes (Figure S4b). These results suggest that although the absence of cyclin E1 did not result in overtly impaired fertility, progression of synapsis of the PAR was altered.

(From the Figure 3 legend: Arabic numerals indicate the step of spermatid differentiation; Roman numerals indicate the stage of the tubules. Although abnormal cell associations complicate staging, an attempt was made using the acrosomal system [45], and tubules are labeled with a Roman numeral followed by an asterisk (e.g. stage IX*). doi:10.1371/journal.pgen.1004165.g003)

In E2-deficient (E1+/+E2−/−) spermatocytes, wherein fertility was affected, spermatocytes progressed until the diplotene stage (Figure 4d-f). However, in some chromosomes of pachytene spermatocytes, the SC appeared interrupted. Although AEs were formed and aligned, the CE was not continuously assembled, as indicated by interrupted regions of SYCP1 (Figure 4e). These defects were observed in single or multiple chromosomes within the same cell and were detected in 22.8 ± 2.7% of total pachytene spermatocytes (n = 100). Furthermore, we observed that 19 ± 3.1% of pachytene spermatocytes had one or more chromosomes with heterologous associations that involved autosomes or autosomes with the X chromosome (Figure 4e). Almost all (96 ± 1.4%) of these associations involved the telomeric ends in a "one to one" chromosome connection (Figure S5Ae) that persisted in diplotene spermatocytes (Figure 4f; S5Af), suggesting that spermatocytes with abnormal chromosome associations can progress through the pachytene stage. In addition, the SCAS at the chromosome ends were on average 12.5% reduced in length compared to WT spermatocytes (Figure 4f; S5B). The frequency of synapsis defects increased with loss of E1 alleles in an E2-deficient background. In E1+/ΔE2−/− pachytene spermatocytes (Figure 4g-i), the frequency of intermittent SYCP1 localization on the SC increased to 54.9 ± 7% (n = 57) and affected almost all the chromosomes. In E1+/ΔE2−/− testes, 29.6 ± 4.1% (n = 57) of pachytene spermatocytes carried heterologous associations, which frequently involved chromosome ends with SYCP1 at the association sites (Figure 4h; S5Ah,h′). Notably, in E1+/ΔE2−/− spermatocytes, all chromosome ends exhibited thinner SCAS compared with WT chromosomes (on average, 25% reduced) (Figure 4i; S5B). The defects in AEs and SC formation were more severe in E1Δ/ΔE2−/− spermatocytes and were detected at earlier stages (Figure 4j,k).
E1D/DE22/2 spermatocytes exhibited chromosome configurations that resembled those characteristics of leptotene and zygotene stages and a pachytene-like stage, but later stages of meiotic prophase were never observed. In the few spermatocytes that reached a pachytene-like stage, SYCP3 localization revealed small fragmented filaments that rarely formed continuous AEs. We also observed small fragments of SC and aberrant synapsis in the majority of the chromosomes (Figure 4k). In addition, most of the heterologous associated chromosomes formed complex chromosome chains (Figure 4k; S5Ak). In the very few E1D/DE22/2 spermatocytes observed in a mid-pachytene-like stage (characterized by the presence of SUMO-1 and cH2AX restricted to a defined region of the chromatin), the SCAS were 37.5% narrower compared with WT SCAS ( Figure S5B). E-type cyclins are necessary for the normal progression of DNA double strand break (DSB) repair as well as for protecting chromosome ends To analyze whether the repair of DSBs was affected by depletion of E-type alleles, we analyzed spermatocyte spreads using cH2AX as a marker of DSBs [15]. WT leptotene-early zygotene spermatocytes exhibit cH2AX distributed throughout the entire nucleus (Figure 5a). In early pachytene spermatocytes, cH2AX was present only as small foci in the chromatin adjacent to the SC in the autosomes and throughout the chromatin of the X and Y (Figure 5b). In mid-pachytene to late diplotene spermatocytes, cH2AX was solely restricted to the XY body (Figure 5c,d). In all the E-type cyclin-deficient genotypes, cH2AX was distributed throughout the chromatin during the leptotene-early zygotene stages, similar to WT spermatocytes (Figure 5e,i,m). In E12/2E2+/+ spermatocytes, the cH2AX pattern was similar to WT throughout prophase ( Figure S4c). In contrast, in E1+/ +E22/2 and E1+/DE22/2 early and mid-late pachytene spermatocytes, cH2AX was not only present in the sex chromosomes, but also persisted as foci in the chromatin adjacent to the SC in autosomes, particularly at the chromosome ends (Figure 5f,g,j,k). These telomeric foci were more prominent during the diplotene stages of E2-deficient spermatocytes (Figure 5h,l). Moreover, these defects were exacerbated upon loss of the remaining E1 allele. In the few E1D/DE22/2 pachytene-like spermatocytes, all unsynapsed chromosomes (Figure 5n,o) had cH2AX signal in the chromatin (Figure 5n), indicating defects in DSB repair. Notably, the rare mid-pachytene-like spermatocytes that could be observed also contained chromosomes with cH2AX in the telomeric regions (Figure 5o). Meiotic sex chromosome inactivation (MSCI) is not overtly affected in the absence of the E-type cyclins To begin to explore whether MSCI was compromised with loss of cyclin E function, we studied the pattern of distribution of SUMO-1, a marker of unsynapsed chromosomes [13,16] and RNA pol II, a marker of transcriptional activity [13,16]. In all the genotypes except where all E-type cyclin function is lost, SUMO-1 appeared in mid-pachytene to diplotene spermatocytes in the X and Y, similar to its distribution in WT spermatocytes (Figure 6a,c,e). Interestingly, in E12/2E2+/+ mice, all (100%) of the mid/late pachytene spermatocytes expressed SUMO-1 in the X and Y, even if there was asynapsis in the PAR ( Figure S4e). In E1+/ +E22/2 and E1+/DE22/2 spermatocytes, SUMO-1 appeared not only in the sex chromosomes but also in unsynapsed autosomes (Figure 6c,e), indicating that asynapsis is properly recognized in mutant spermatocytes. 
That is, 9364.2% of E1+/ +E22/2 and 6966.1% of E1+/DE22/2 pachytene spermatocytes (n = 80 and 67, respectively) had SUMO-1 in the sex chromosomes, indicating that the majority of spermatocytes of these genotypes were able to progress through mid/late pachytene. Interestingly, in E1+/+E22/2 and E1+/DE22/2 pachytene spermatocytes, we also observed SUMO-1 signal at the ends of a few chromosomes (Figure 6c,e). The complete absence of both Etype cyclins resulted in .98% of spermatocytes arresting in early prophase, thus, SUMO-1 was absent from the chromatin (Figure 6g). In the rare mid pachytene-like spermatocytes that were found in E1D/DE22/2 testes, a hint of SUMO-1 signal was observed but there was no clearly sex body formation (data not shown). To determine whether the E-type cyclins were involved in transcriptional silencing of the sex chromosomes, we examined the general transcriptional status in spermatocytes by immunolocalization of RNA pol II in chromosome spreads. In WT spermatocytes, RNA pol II appeared at the beginning of the pachytene stage as a very low signal (data not shown) that increased in intensity as prophase I progressed. From mid-pachytene through diplotene, RNA pol II was detected as a bright signal distributed throughout almost all chromatin but was excluded from the sex body (Figure 6b), which reflects the transcriptional silencing of the sex chromosomes. This temporal and distribution pattern of RNA pol II was not affected by depletion of E1 ( Figure S4d) or E2 (Figure 6d) nor in the spermatocytes that reached late prophase I stages in E1+/DE22/2 testes (Figure 6f). This suggests that depletion of E-type cyclins does not affect the transcriptional silencing of the sex chromosomes during prophase I. However, in the complete absence of both E-type cyclins, most of the spermatocytes never reach a pachytene-like stage and therefore, these nuclei lack RNA pol II signal (Figure 6h). In exceptional cases, although their chromosome morphology was completely altered, few spermatocytes exhibited low levels of RNA pol II in the chromatin (data not shown) but no recognizable sex body was formed. E1 and E2 associate with CDK2 and are involved in its proper localization in meiotic prophase I spermatocytes Cyclin E1 and E2 are known to interact with CDK2 in mitotic cells. We performed co-immunoprecipitation analysis of WT pachytene spermatocytes to confirm that both cyclin E1 and E2 indeed interacted physically with CDK2 in spermatocytes in vivo ( Figure 8A). Interestingly, CDK2 has previously been shown to localize at telomeres, late recombination nodules (LRN), and in the sex body in prophase I spermatocytes [22] Therefore, we next examined CDK2 immunolocalization during prophase I in all Edeficient genotypes and performed a semi-quantitative analysis of the distribution of CDK2 localization. In WT spermatocytes, CDK2 first appeared in leptotene/zygotene spermatocytes ( Figure 8Ba) as a faint signal at the telomeres. During the pachytene stage, CDK2 increased dramatically at the telomeres and was also observed in the LRN and in AEs of the X and Y (Figure 8Bb). At this stage, 90.866.5% of telomeres exhibited an average of 39.5 arbitrary units (au) (defined in Materials and Methods) of the intensity of CDK2 signal (n = 120 telomeres) ( Figure 8C). At the diplotene stage, CDK2 was present in autosomal telomeres and was still associated with the X and Y chromosomes (Figure 8Bc). 
A similar pattern of CDK2 localization was observed in E1-deficient (E12/2E2+/+) spermatocytes, interestingly, even when the X and Y were fully asynapsed ( Figure S4f). In E2-deficient spermatocytes (E1+/+E22/2), however, the intensity of the telomeric signals from zygotene to diplotene spermatocytes were weaker compared to WT (Figure 8Bd-f), particularly during the pachytene stage, where telomeres exhibited a high heterogeneity in the intensity of CDK2 signal (Figure 8Be; 8C). Specifically, 59.069.6% of telomeres exhibited a CDK2 intensity similar to wild type spermatocytes ( x = 44.5 au) (Figure 8Be; 8C). However, 41.066% of telomeres exhibited lower intensities of CDK2 signal compared to WT (Figure 8Be; 8C). Occasionally, in individual chromosomes (Figure 8Bf) or chromosomes involved in an ''end-to-end association'' (Figure 8Be), CDK2 was diffusely distributed in the chromosome ends. This suggested that the loading of CDK2 upon telomeres was compromised in the absence of cyclin E2, but there were not significant alterations in the localization of CDK2 in the LRN and in the sex chromosomes, even when these chromosomes were aberrantly associated with other autosomes (Figure 8Be). That is, similar to WT spermatocytes, in E1+/+E22/2 spermatocytes, CDK2 localized mainly in the X chromosome and less intensely in the Y chromosome (Figure 8Be,f). The defective localization of CDK2 in the telomeres became more severe with the additional depletion of E1. In E1+/DE22/2 spermatocytes, CDK2 signal was reduced in 92.766.2% of the telomeres (Figure 8Bh; 8C) and was almost completely absent from the ends of chromosomes that were associated or fused (Figure 8Bh,i). Similarly, weaker CDK2 signals in LRN and sex chromosomes were observed in E1+/DE22/2 spermatocytes (Figure 8Bh). The telomeric signal of CDK2 in E1+/DE22/ 2 diplotene spermatocytes also exhibited either a heterogeneous intensity or was not localized at the end of the chromosomes (Figure 8Bi), revealing that in the absence of cyclin E2, reduced levels of E1 had dramatic effects on CDK2 telomeric localization during prophase I. In the absence of all E-type cyclin proteins, the pattern of CDK2 localization was totally disrupted and only a diffuse signal throughout the nucleus was observed (Figure 8Bj). Discussion In this study, we describe an essential role for the E-type cyclins in the regulation of mammalian male meiotic prophase I, controlling prophase I progression and regulating telomere and chromosome integrity. Surprisingly, expression of both E1 and E2 is not detected in most mitotic spermatogonia but is rather characteristic of meiotic spermatocytes and exhibits distinct expression patterns. E1 protein is expressed at low levels mainly in later stages of prophase I (pachytene and diplotene spermatocytes) while E2 protein can be detected as early as preleptotene, increasing throughout prophase I until the diplotene stage. When present, both E1 and E2 localize to the chromatin of autosomes and thus may co-localize in late prophase. However, localization in the chromatin of the XY body is strikingly different: E2 is never associated with the X or Y while E1 localizes as foci along the AEs of the sex chromosomes. Co-expression of the two E-type cyclins has been widely observed in mitotic cells and it has been suggested that they exert overlapping functions during G1/S progression [9]. Support for this idea was obtained from the viability of both E12/2E2+/+ and E1+/+E22/2 single knockout mice and the lethality of E12/2E22/2 mice [6,7]. 
It has also been shown that cyclin E2 depletion in the liver induces up-regulation of E1 expression at both the mRNA and protein levels and increases E1-CDK2 complex activity [23]. We found that depletion of E1 or E2 protein in spermatocytes induces an upregulation of E2 or E1 protein, respectively. This potential compensatory mechanism is most striking in the increase in levels of mRNA and protein expression of E1 upon E2 depletion and the ectopic presence of E1 protein in meiotic stages where E2 is normally expressed. However, the elevated levels of E1 do not fully compensate for loss of E2, as E1+/+E22/2 mice exhibit reduced fertility. This could be due to the changes in E1 protein expression incompletely mimicking normal E2 expression during prophase I. The normal presence of low levels of E2 in preleptotene cells and dividing B-type spermatogonia raises the possibility of a pre-meiotic function as well, which could contribute to the early meiotic defects. However, although depletion of E2 induces the expression of E1 in earlier stages of prophase I, it did not alter the expression pattern of E1 in non-meiotic cells, lessening the likelihood of an important premeiotic function for the E-type cyclins. Alternatively, and not mutually exclusively, the different expression pattern that cyclin E1 and E2 have during prophase I may hint to distinct functions for the two E cyclins during meiosis. Indeed, while loss of E2 in the germline results in abnormal synapsis, heterologous chromosome associations, defects in CDK2 localization, and late cH2AX foci on autosomes, there is increased severity of the spermatogenic defects and complete sterility upon additional deletion of E1 alleles, suggesting that E1 must have important functions as well. Loss of cyclin E1 function principally affected synapsis of the PAR and structural modifications that occur in the AEs of the sex chromosomes. Although these defects were noticeable in only a subset of pachytene spermatocytes, the defective synapsis of the PAR could account in part for the increased severity in meiotic abnormalities and enhanced apoptosis upon additional loss of E1 alleles on an E2-deficient background. Such pairing and synapsis defects of the PAR, as seen in spermatocytes that specifically lack the Spo11a isoform (but containing Spo11b) were proposed to trigger the spindle checkpoint during metaphase followed by apoptosis [24]. Based on the localization of E1 in the AEs of the X and Y and the pairing defects produced by its depletion, it appears that E1 could be involved in maintaining the stabilization of the PAR synapsis during the pachytene stage. The interdependence between pairing, synapsis and DNA repair during mammalian meiosis makes it difficult to discriminate specifically which (or all) processes are affected by E2-deficiency and further reduction of E1 protein. Alternatively, it is possible that they are secondary effects of the disruption of other processes, such as telomere anchoring in the nuclear envelope and chromosome movement. Regardless of the underlying mechanism, pairing and synapsis appear to be more compromised than DNA repair in E1+/+E22/2 and E1+/2E22/2 spermatocytes. That is, at the leptotene stage, chromatin-wide cH2AX staining appeared normal in all mutant spermatocytes, suggesting that generation of DSBs is not affected by depletion of E1 and/or E2. 
Furthermore, most of the γH2AX foci disappear during the zygotene stage, similar to WT spermatocytes, suggesting that most DSB repair is not compromised in E1 and E2 mutant spermatocytes. However, a few γH2AX foci remain in the chromatin adjacent to the AEs and, more obviously, in the telomeric/subtelomeric regions during late prophase. This implies that DNA damage signaling is occurring at chromosome ends and that telomere integrity is affected by loss of E-type cyclins.

Figure 7. Telomere stability and chromosome integrity are increasingly disrupted by loss of E-type cyclin function. Chromosome spreads from adult WT, E1+/+E2−/− (a,d), E1+/ΔE2−/− (b,e) and E1Δ/ΔE2−/− pachytene spermatocytes (c,f) stained to detect telomeres (Telomere-FISH, red) and SYCP3 localization (green). Additional E-type cyclin deletion produces extended bridges between telomeres (a-c,e, white arrows and white insets), leading to telomeric fusions and chromosome rearrangements (asterisks): heterologous associations (d,e,f) and complex chromosome chains (d,e,f, yellow insets). A schematic representation of each inset is shown below the original, using one color for each chromosome (a-f). doi:10.1371/journal.pgen.1004165.g007

Among the more striking aspects of the phenotypes exhibited by the various E-type cyclin knockouts were the defective localization of CDK2 at telomeres, the concomitant loss of telomere structural integrity, and the presence of frequent telomere fusions, all of which increased with further loss of E1 alleles. Although neither cyclin E2 nor E1 was located specifically at the telomeres, we propose that formation of cyclin E-CDK2 complexes is necessary for the localization of CDK2 to the telomeres and the subsequent protection of the telomere ends. In support of this model, it should be recalled that CDK2-deficient spermatocytes have a similar, but not identical, meiotic phenotype to E1Δ/ΔE2−/− spermatocytes [25,26]. That is, in the absence of CDK2, spermatocytes also exhibited abnormal chromosome rearrangements, non-homologous pairing and defective telomeres that were not attached to the nuclear envelope. However, E1Δ/ΔE2−/− spermatocytes exhibited a more severe phenotype in terms of the pairing and synapsis defects and spermatocyte progression throughout prophase I. It was proposed that CDK2 may play a role in proper telomere dynamics during prophase I [26]; herein we further propose that the E-type cyclins likely regulate the telomere-specific activity of CDK2. Alternatively, and not mutually exclusively, it is also possible that the E-type cyclins can exert a function in telomere protection in a CDK-independent manner. Such kinase-independent functions have previously been demonstrated for cyclin E during G(0)/S phase progression [27] and for cyclin D during regulation of cell growth and cancer [28], and have been suggested for cyclin B3 during spermatogenesis [29]. Loss of CDK2 localization and activity at the telomeres could trigger the loss of telomere positioning and function, which in turn can explain in part the defects observed in chromosome pairing and synapsis exhibited by E-type cyclin mutant spermatocytes. That is, loss of telomere end protection could affect their proper anchoring to the nuclear envelope, an event that is fundamental for accurate pairing and synapsis of the chromosomes [30].
Thus, depletion of E cyclins could affect telomeric anchoring to and movement through the nuclear envelope and subsequently trigger the meiotic defects in the mutant spermatocytes. Similar pairing and synapsis defects were observed when telomeres are unprotected, as in SMC1b-deficient spermatocytes [31], or when telomere dynamics and anchoring are altered, as seen in spermatocytes lacking LMNA, SUN1, and SUN2 [32][33][34]. The presence of telomeric bridges between different chromosomes, together with the appearance of cH2AX and the generation of chromosome fusions are indicative of dysfunctional telomeres and thus, telomeric instability [17][18][19][20][21]35]. Therefore, our results showed that E-type cyclins are required for normal telomere and chromosome stability during male meiosis and suggested that telomere homeostasis (i.e. telomere length and capping) are severely compromised [36]. Indeed, telomere uncapping could also explain the presence of cH2AX foci in the telomeric/subtelomeric regions and the thin SCAS observed in E1D/DE22/2 spermatocytes. That is, the inability to form functional cyclin E2-and E1-CDK2 complexes could result in telomere uncapping that could trigger an abnormal DNA damage checkpoint response, as demonstrated by the presence of cH2AX foci. Thus, possible targets of cyclin E-CDK2 complexes could also include proteins involved in telomere protection [37]. In summary, our findings indicate a critical requirement for cyclin E function in meiosis rather than mitosis in the male germline. The meiotic defects that are observed highlight E-type cyclins as essential regulators of male meiosis and strongly point to a mechanistic role for E-type cyclins in the maintenance of telomere integrity. The observations further provide evidence for distinct functions of the mammalian E-type cyclins, interestingly, in non-classical cell cycle regulatory events. Specific primers were designed as follows: Statistical analyses Results represent mean 6 SEM from at least three independent experiments. Statistical analyses between two parameters were performed using a non-parametric Mann Whitney U test (Prism4, Graphpad Software, Inc., San Diego, CA). The threshold of significance was set at 0.05. Cell separation, immunoblot Preparation of enriched populations of pachytene spermatocytes and round spermatids was carried out according to our laboratory's established protocol [41,42]. Purity of cell populations was assessed by flow cytometric analysis on a Becton Dickinson FACScan Flow Cytometer. Results were analyzed using CellQuest Pro software. Proteins were extracted from purified cell populations from adult testes as previously described [43]. Rabbit anticyclin E2 1:500 (Abcam, ab32103), rabbit anti-cyclin E1 1:3000 (provided by Dr. Jim Roberts, Fred Hutchinson Cancer Research Center) and mouse anti-a tubulin 1:5000 (Sigma T6199) were used for immunoblot analysis according to our standard procedures [43]. Co-immunoprecipitation Cell lysates were prepared from adult WT testes as previously described [43]. Lysates were pre-cleared with protein A agarose beads (Roche, cat#11134515001) at 4uC for 1 h. Pre-cleared lysates were then incubated with mouse anti-CDK2 1:30 (D-12) (Santa Cruz sc-6248) or IgG control for 4 h with gentle agitation at 4uC. Protein A agarose beads were added and incubated overnight. The beads and immunoprecipitated complexes were pelleted by a 10 s centrifugation at 500 g, and then washed in wash buffer (20 mM Tris, pH 8.0, 150 mM NaCl, 0.1% NP40) four times at 4uC. 
Final pellets were resuspended in 1XSDS loading buffer and boiled for 5 min. The supernatant was run on 10% SDS-PAGE and immunoblotting was performed as described above using antibodies specific for cyclin E1 and cyclin E2. Clean-Blot IP detection reagent HRP (Thermo Scientific, cat# 21230) was used as the secondary antibody at 1:500. Immunohistochemistry, immunofluorescence (IF), immuno-FISH and image quantifications For histology and immunohistochemistry, testes from different mice genotypes were dissected, fixed in Bouin's solution or 4% paraformaldehyde, respectively, and paraffin embedded, as previously described [44]. Primary antibodies against E1 or E2 were used at dilutions of 1:100 and 1:125, respectively. For combined immuno-FISH, we first performed immunofluorescence (IF) on chromosome spreads followed by telomere FISH. After counter-staining with DAPI and PBS rinsing, slides were incubated in 2X sodium saline citrate (SSC) 15 min at room temperature (RT). Slides were then dehydrated and air-dried. The slides were then denatured in 75% formamide/2X SSC at 85uC for 7 min, dehydrated in an ethanol series at 4uC and incubated with a Human Chromosome Pan-Telomeric probe (1696-CY3-01, CAMBIO, UK) overnight at 37uC. Finally, we performed three washes at 42uC (3 washes with 50% formamide/2XSSC and 3 washes in 2X SSC) followed by three washes in 4X SSC/0.1% Tween-20. For light microscopy, observations were made in a Nikon Eclipse E800 using a 20X/NA: 0.5 or 40X/NA:0.75 Plan Fluor objective, equipped with a RTtm KE color 3-shot digital camera. Photographs were taken using Spot Advance software. For immunofluorescence, observations were made in a Nikon Eclipse 80i using a 100X/NA: 1.4 oil immersion objective, equipped with a QImaging Retiga EXi Fast 1394 digital camera. Images were captured with QCapture Pro software. All images were processed using Adobe Photoshop CS5 software. SCAS length was quantified by measuring the width of each chromosome end in spermatocyte spreads, using the length measurement plugin in ImageJ (NIH). The images used for the measurements were improved using an inverted LUT to avoid potential pitfalls at the border of the SCAS. ANOVA and t-test were used to determine the differences between the values and the threshold of significance was set at 0.05. CDK2 intensity was quantified by selecting the area of CDK2 signal in each telomere and measuring the intensity of the signal using the measurement/mean value tool in ImageJ (NIH). Both CDK2 signal and background were measured and the final CDK2 intensity was calculated by subtracting the background from the CDK2 signal. Two tailed t-test was used to determine the differences between the genotypes and the threshold of significance was set at 0.05. Terminal deoxynucleotidyltransferase-mediated deoxy-UTP nick end labelling (TUNEL) staining TUNEL staining was performed on tissue sections using the in situ cell death detection kit (Roche Diagnostics, Indianapolis, IN) as previously described [12]. Only clearly stained cells were considered as apoptotic and only tubules cut perpendicular to the length of the tubule (round tubules in section) were evaluated. Figure S1 Cyclin E2 protein but not E1 was consistently found in pachytene spermatocytes at all stages. Histological sections of Functions of E-type Cyclins in Mammal Male Meiosis testes from adult wild type (WT) were immunostained with anticyclin E1 (a,b) and anti-cyclin E2 (c-f) antibodies. Magnification: a,c 620; b, d-f 640. 
B, B-type spermatogonia; Bm, dividing B-type spermatogonia; PL, preleptotene spermatocytes; L, leptotene spermatocytes; Z, zygotene spermatocytes; D, diplotene spermatocytes; P, pachytene spermatocytes. Arabic numerals indicate the step of spermatid differentiation; Roman numerals indicate the stage of the tubules. (TIF)
Figure S2 TUNEL-positive pachytene spermatocytes were found in the E-type cyclin-deficient germline. Representative stages of seminiferous tubules containing TUNEL-positive cells in wild type (WT, a), E1−/−E2+/+ (b), E1+/+E2−/− (c), E1+/−E2−/− (d) and E1Δ/ΔE2−/− (e) testes. The most striking observation was the presence of TUNEL-positive pachytene spermatocytes in both E1+/+E2−/− and E1+/−E2−/− testes (c,d), regardless of the severity of the testicular abnormalities (as reflected by the loss of advanced spermatogenic cells). In addition, TUNEL-positive spermatids were not detected. Magnification: a–e ×40. PL/L/Z, preleptotene-leptotene-zygotene spermatocytes; P, pachytene spermatocytes; early P, early pachytene spermatocytes; mid-P, mid pachytene spermatocytes; RS, round spermatids. Arabic numerals indicate the step of spermatid differentiation; Roman numerals indicate the stage of the tubules. (TIF)
Figure S3 Percentage of spermatocytes present in each stage of prophase I. Wild type (WT) (white bars) and mutant (black bars) spermatocytes: E1−/−E2+/+ (a), E1+/+E2−/− (b), E1+/ΔE2−/− (c) and E1Δ/ΔE2−/− (d) spermatocytes. Each bar represents the mean number of spermatocytes obtained from one testis each from three mice per genotype. Per animal, a total of 400 (in E1+/ΔE2−/− and E1Δ/ΔE2−/− testes) and 500 spermatocytes (in all other genotypes) were counted. Error bars represent SEM. (TIF)
Figure S4 E1 depletion solely disrupts the synapsis of sex chromosomes. Chromosome spreads from E1−/−E2+/+ spermatocytes immunostained with SYCP3 (red) and SYCP1 (a–b); γH2AX (c), RNA pol2 (d), SUMO-1 (e) and CDK2 (f) (green). Insets represent magnifications of the areas selected in (a–f) (white squares), shown above their schematic representations. The X and Y chromosomes were frequently observed in total asynapsis (a,c,d,f, insets) or in a peculiar synapsis that comprised only a small area in the pseudo-autosomal region (PAR) (b, green arrow). Y chromosome self-synapsis (insets in a,e, white arrows) or telomeres of the X or Y chromosome close together in a ring configuration (insets in c,e,f) were observed. (TIF)
Figure S5 A) Schematic representations of the white insets shown in Figure 4 and B) SCAS measurements. e) Insets and their respective schemes of Figure 4e showing the association of an autosomal end with the X chromosome. f) Inset and schematic of Figure 4f. Two heterologous autosomes are associated through their chromosome ends (white arrow). h–h′) Insets and schematics of Figure 4h. Three heterologous autosomes are associated through their chromosome ends. k) Inset and schematic of Figure 4k. Two heterologous autosomes are partially synapsed. Each color represents a different chromosome in the insets. B) SCAS measurements. *** p ≤ 0.001, n = 6 cells per genotype. (TIF)
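The background-subtracted CDK2 intensity quantification and the two-tailed t-test described in the Methods above could be scripted roughly as follows; the per-telomere values are invented placeholders, not measurements from this study.

```python
# Minimal sketch of the CDK2 intensity quantification described in the Methods:
# per-telomere mean signal minus local background, then a two-tailed t-test
# between genotypes. All numbers are invented placeholders, not study data.
import numpy as np
from scipy import stats

def corrected_intensity(signal_means, background_means):
    """Background-subtracted CDK2 intensity per telomere (arbitrary units)."""
    return np.asarray(signal_means, dtype=float) - np.asarray(background_means, dtype=float)

# Hypothetical per-telomere mean grey values (e.g. exported from ImageJ)
wt = corrected_intensity([112, 98, 105, 120, 101, 109], [20, 18, 22, 19, 21, 20])
mutant = corrected_intensity([64, 71, 58, 69, 75, 66], [21, 19, 20, 22, 18, 20])

t_stat, p_value = stats.ttest_ind(wt, mutant)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```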
2016-05-12T22:15:10.714Z
2014-02-01T00:00:00.000
{ "year": 2014, "sha1": "7f44d96fe5782c859ce020bf864de3316477323b", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1004165&type=printable", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "743261e3ced2517b55d21cd9b6781fa3b0cb8441", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
59334062
pes2o/s2orc
v3-fos-license
Effect of arbuscular mycorrhizal fungi on survival and growth of micropropagated Comanthera mucugensis subsp. mucugensis (Eriocaulaceae)
Micropropagation has been used as an alternative for the conservation of the endangered species Comanthera mucugensis subsp. mucugensis (popularly known as sempre viva de Mucugê); however, there is no information on the effect of arbuscular mycorrhizal fungi (AMF) on the acclimation process of micropropagated plants. This study evaluated the survival, growth and nutritional aspects of C. mucugensis subsp. mucugensis inoculated with native AMF under greenhouse conditions. The experiment initially consisted of 80 sampling units divided into four treatments: plants inoculated with native AMF, with microbiota filtrate from soil, with AMF plus filtrate, and non-inoculated controls. At three and eleven months old, the plants were collected for evaluation of growth, nutrition and mycorrhizal colonization. After eleven months of the experiment, the survival rates of AMF and AMF plus filtrate plants were 62.5 and 87.5%, respectively, whereas only one microbiota filtrate plant and one control plant survived. AMF inoculation also increased the dry matter of rosettes and allowed flowering plants to be obtained at ten months of growth. Rates of mycorrhizal colonization were high at three (approximately 64.9%) and eleven (approximately 94.5%) months for AMF and AMF plus filtrate plants. The number of spores in the rhizosphere soil of mycorrhizal plants was also high (1599 per 100 dm³ of soil), and seven different species of AMF were identified at the end of the experiment. The data set evidenced the mycotrophic character of C. mucugensis subsp. mucugensis and the importance of AMF inoculation for the acclimation and survival of micropropagated plants, which is essential for the conservation of this endangered plant.
INTRODUCTION
The Eriocaulaceae family comprises eleven genera and ca. 1200 species, has a pantropical distribution (Echternacht et al., 2010), and has its center of diversity in the Espinhaço Range between Minas Gerais and Bahia (Giulietti and Hensold, 1990; Sano, 2004). About 70% of the Brazilian species of Eriocaulaceae occur in the Espinhaço Range, 85% are endemic, and they are often restricted to a single mountain (Giulietti et al., 2005; Costa et al., 2008). Comanthera mucugensis subsp. mucugensis is one of these microendemic Eriocaulaceae species, occurring in the municipality of Mucugê (Bahia), on the eastern side of the Chapada Diamantina region. This species is popularly known as sempre viva de Mucugê (evergreen of Mucugê), and its inflorescences retain the same color and shape when the scapes, capitula and flowers are collected for making dried floral arrangements. In the rupestrian field region of Mucugê where these plants occur naturally, they were one of the main sources of income for local inhabitants in the mid-twentieth century, and tons of flowers were sold each year, especially to Europe and the United States (MMA/PNMA, 1996), which reduced the natural population, since the flowers are still at anthesis when collected to be sold as ornamentals (Giulietti et al., 1988; Cerqueira et al., 2008). Recently, C.
mucugensis subsp. mucugensis was prohibited from being collected because its exploitation has been carried out without planning and without any control or cultivation (Lima-Brito et al., 2016), and currently this plant is on the Official List of Endangered Species of the Brazilian Flora (MMA, 2008). Some plant management initiatives are already under way at the Parque Municipal de Mucugê to protect C. mucugensis subsp. mucugensis populations and to promote propagation and cultivation, seeking alternative sources of income for the population of the municipality (Paixão-Santos et al., 2003; Ramos et al., 2005; Teixeira and Linsker, 2005). With the aim of increasing C. mucugensis subsp. mucugensis populations in the Mucugê region, micropropagation has been used as a viable option for the production of seedlings of this species (Lima-Brito et al., 2011; Pêgo et al., 2013). Despite the advantages of this technique, there are still some obstacles to its wide application, especially as regards acclimation, that is, the transition of in vitro plants to the greenhouse, since the mortality rate of micropropagated C. mucugensis subsp. mucugensis plants is high. The absence of beneficial soil microorganisms can have negative effects on the plant acclimation process owing to poor adaptation to the new environmental conditions imposed (Borkowska, 2002). Studies on the association of arbuscular mycorrhizal fungi (AMF) with some agronomic and ornamental plants demonstrate the benefits of these microorganisms as plant growth regulators and their importance for management and acclimation (Rocha et al., 2006; Yadav et al., 2013; Moreira et al., 2015; Villarreal et al., 2016). Arbuscular mycorrhizal fungi are an important microbial group of the soil that form a mutualistic symbiosis with the roots of plants, affecting several processes and functions in the ecosystem, such as nutrient cycling, plant productivity and competition (Hazard et al., 2013). These microorganisms have been used as an alternative to increase the resilience of many species during the acclimation process, stimulating the autotrophic stage of the transition from in vitro to soil and influencing root morphogenesis and architecture, ensuring healthy formation and development of the root system after transplanting (Zemke et al., 2003; Kapoor et al., 2008; Stancato and Silveira, 2010). In addition, AMF can act as biological control agents against some pathogens and can reduce the nutritional, water-availability and salinity stresses involved in micropropagation (Schubert et al., 1990; Jaizme-Vega and Azcón, 1991). In the present study, the authors evaluated native AMF and microbiota inoculation on the acclimation of micropropagated C. mucugensis subsp. mucugensis plants, analysing survival and nutritional status with the goal of contributing to the population restoration of this endangered plant.
In vitro culture
In the experiment, 120-day-old micropropagated plants of C.
mucugensis subsp. mucugensis, obtained from the Vegetable Tissue Culture Laboratory of the Horto Florestal Experimental Unit, belonging to the Biological Sciences Department of the Feira de Santana State University, in the municipality of Feira de Santana, Bahia, were used. The chemical characterization of the in vitro plants was carried out at the Laboratory of Analysis of Vegetable Tissues of the Cocoa Research Center (CEPEC) of the Executive Committee for Cocoa Plantation Planning (CEPLAC). The results were: N = 42.18 g kg⁻¹; P = 1.98 g kg⁻¹; K = 22.92 g kg⁻¹; Ca = 2.29 g kg⁻¹; Mg = 1.28 g kg⁻¹; Cu = 2.33 mg kg⁻¹; Fe = 38.12 mg kg⁻¹; Mn = 44.3 mg kg⁻¹; Zn = 46.84 mg kg⁻¹.
Obtaining plant material
The experiment was conducted in a greenhouse at the University of Santa Cruz (Ilhéus, BA) under natural conditions of temperature and luminosity. Micropropagated seedlings of C. mucugensis (Giul.) L.R. Parra & Giul. subsp. mucugensis were provided by the Tissue Culture Laboratory of the State University of Feira de Santana (UEFS) and grown in plastic pots containing 0.4 dm³ of soil collected in the rupestrian field of the Parque Municipal de Mucugê (Mucugê, Bahia, Brazil; 12°59'27"S, 41°20'11"W, 980 m a.s.l.). This native soil was previously sterilized at 121°C for two cycles of 1 h with a 48 h interval; after it reached ambient temperature, the resulting pH (measured in water) was 2.8 and was not adjusted. Previous liming experiments on soil and substrate (coarse and fine sand) indicated that this plant does not tolerate (dies at) soil pH approaching 5.
Experimental design
The experimental design was completely randomized, initially with 80 sampling units divided among the control and three treatments: plants inoculated with native AMF, with microbiota filtrate from soil, or with AMF plus microbiota filtrate. Of the 20 replicates in each treatment, 12 were collected at three months of plant growth to investigate initial mycorrhiza establishment during the acclimatization phase. The remaining eight plants were collected at 11 months of plant growth. The spores of native AMF used as inoculum were obtained from a multiplication pot using C. mucugensis subsp. mucugensis as the host plant, since there was low sporulation in a previous attempt using a conventional host plant (Brachiaria decumbens). Spores were isolated from 100 g of soil using the wet-sieving technique of Gerdemann and Nicolson (1963) and centrifugation in 50% sucrose following Jenkins (1964). To simulate the natural microbial composition of the soil, a filtrate was prepared from a suspension of field soil in autoclaved distilled water (1:10 m/v), which was stirred for 24 h (Sudová and Vosátka, 2008). Subsequently, the material was passed through a glass funnel containing filter paper (Whatman no. 1) with the aid of a vacuum pump, which retained the solid fraction and the mycorrhizal propagules. After transplantation from the in vitro condition to the plastic pots in the greenhouse, the micropropagated plants (85 days old) received, according to the treatment, a 10 ml suspension containing mycorrhizal inoculum with 470 spores, microbiota filtrate, or mycorrhizal inoculum plus filtrate.
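As a small worked illustration of the completely randomized design described above (80 pots allocated evenly among the control and the three inoculation treatments), an allocation script might look like the sketch below; the pot labels and the seed are hypothetical.

```python
# Sketch of a completely randomized allocation of 80 pots to four treatments
# (20 replicates each), mirroring the design described above. Labels are illustrative.
import random

treatments = ["control", "AMF", "filtrate", "AMF+filtrate"]
pots = [f"pot_{i:02d}" for i in range(1, 81)]

random.seed(42)            # fixed seed only so the sketch is reproducible
random.shuffle(pots)
allocation = {t: sorted(pots[i * 20:(i + 1) * 20]) for i, t in enumerate(treatments)}

for treatment, group in allocation.items():
    print(treatment, len(group), "replicates")
```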
Dry biomass and nutritional analysis
For the analysis of biomass, rosettes (leaves) were dried at 60°C in an oven with forced air circulation until constant weight. Dry matter was obtained and, owing to the small volume of plant material, only one sample (the sum of all replicates) per treatment was sent to the Laboratory of Mineral Nutrition of Plants, USP–ESALQ, for nutritional analysis. The methodologies used in this analysis were: P, colorimetry (ammonium metavanadate method); S, colorimetry (turbidimetric barium sulfate); K, Ca and Mg, atomic absorption spectrophotometry; Cu, Fe, Mn and Zn, atomic absorption spectrophotometry; total N, sulfuric digestion; B, colorimetry (azomethine-H method).
Assessment of AMF colonization
To estimate the percentage of mycorrhizal colonization, C. mucugensis subsp. mucugensis roots were cleared in 10% KOH and stained with trypan blue according to the methodology described by Phillips and Hayman (1970). The estimate of colonization of root segments was based on the magnified intersections method (McGonigle et al., 1990).
Extraction and quantification of spore production
Spores in rhizosphere soil samples were extracted following the decanting and wet-sieving technique of Gerdemann and Nicolson (1963) combined with centrifugation in 50% sucrose solution (Jenkins, 1964). The isolated spores were quantified in a Petri dish and stored in tubes kept in the refrigerator until analysis of the taxonomic characteristics needed for identification.
Taxonomic identification of AMFs
The spores were first sorted into separate groups of morphotypes under a stereomicroscope and then mounted on slides with permanent PVLG resin and Melzer's reagent (Morton et al., 1996). Spores preserved on slides were observed under an optical microscope (magnification of 1000×), and morphological characters such as size (in µm), shape, color, wall structure and ornamentation, type of hyphae and spore germination mode were recorded for comparison with the related literature. The identification was carried out using the manual of Schenck and Perez (1988) and the current available literature.
Statistical data analysis
The data obtained for rosette dry mass, spore number and percentage of mycorrhizal colonization were compared by one-way ANOVA with Tukey's multiple comparison or by a t-test when appropriate. The analyses were performed in the statistical package STATISTICA 8.0 (StatSoft, 2002).
RESULTS
Of the 12 sample units per treatment collected after three months of growth in the greenhouse (Figure 1A, B and C), 100% of the C. mucugensis subsp. mucugensis plants inoculated with native AMF or with AMF plus microbiota filtrate survived. Three plants from the microbiota filtrate treatment and four from the control died. At nine months of growth, plants began to produce scapes (flowering) (Figure 1D), and some flowers were obtained at the end of eleven months of growth in the greenhouse (Figure 1B). At the end of the experiment, of the eight remaining sampling units, five plants from the mycorrhiza treatment and seven plants from the mycorrhiza plus filtrate treatment survived. In contrast, seven plants from the filtrate treatment and seven plants from the control died.
Aboveground biomass
Rosette dry mass from three-month-old plants of C.
mucugensis subsp. mucugensis presented significant differences (p ≤ 0.05) between the mycorrhiza treatments and the non-mycorrhizal (control and microbiota filtrate) plants, evidencing the strong influence of AMF on biomass production (Table 1). The mean rosette biomass of eleven-month-old plants from the AMF and AMF plus microbiota filtrate treatments did not differ statistically (t-test, p ≤ 0.05); because only one plant survived in each of the control and microbiota filtrate treatments, statistical analysis of those groups was not carried out, but the difference from the mycorrhiza treatments was evident (Table 1).
Mycorrhizal colonization
The mean values of mycorrhizal colonization in AMF and AMF plus filtrate plants did not differ significantly from each other at either collection time; however, the percentage of colonization in these two treatments increased between three and eleven months (Table 1). AMF-inoculated plants showed the highest percentages of colonization in root fragments of plants evaluated at three and eleven months of growth. In the roots of non-inoculated control plants and of plants inoculated with microbiota filtrate, no sign of mycorrhizal structures was observed in either period (Table 1). During the qualitative evaluation under the microscope, intraradical hyphae (Figure 2A) and vesicles (Figure 2B) were observed; however, arbuscules were the structures observed most frequently (Figure 2C and D).
Nutritional diagnosis
The levels of macro- and micronutrients observed in composite samples of rosette dry biomass of C. mucugensis subsp. mucugensis at three and eleven months of growth are presented in Table 2. The yield of dry matter of the filtrate and control plants at eleven months was insufficient for chemical analysis and is therefore not presented in Table 2. In general, there was no large variation in nutrient levels among plants from the different treatments.
Quantification of spores
The evaluation of the number of AMF spores in the rhizosphere soil demonstrated, as expected, that mycorrhiza and mycorrhiza plus microbiota filtrate plants presented significant differences when compared with filtrate and control plants, but did not differ significantly from each other (Table 1). Quantification performed for eleven-month-old plants gave mean values that did not differ statistically between the filtrate plus mycorrhiza and mycorrhiza plants (Table 1). Claroideoglomus etunicatum and Glomus macrocarpum were the only species found in both treatments. These spores are shown in Figure 3.
DISCUSSION
High rates of root colonization by native AMF were observed in micropropagated C. mucugensis subsp. mucugensis plants. These rates influenced the growth responses of the plants and showed the mycorrhizal dependence (mycotrophism) of C. mucugensis subsp. mucugensis, since plants not inoculated with AMF, even with frequent nutrient-solution fertilization on the natural soil, did not grow but died. Our results clearly indicate that C. mucugensis subsp. mucugensis is a mycotrophic plant, with a rate of mycorrhizal colonization at eleven months higher than those observed by Pagano and Scotti (2009) on Paepalanthus bromelioides and Aristizabal et al. (2004) in roots of Paepalanthus sp., two Eriocaulaceae species. This rate of colonization by AMF is also seen in other studies with plants of semi-arid environments (with low water availability), which showed a high symbiotic effectiveness between AMF and plant species (Yamato et al., 2008; Estrada et al., 2013). The effectiveness of the symbiosis between the micropropagated plants of C.
mucugensis subsp. mucugensis and native AMF was also verified by the production of extensive arbuscules, hyphae and spores (completing the life cycle of the fungus). The spore density in the soil of three-month-old mycorrhizal plants of C. mucugensis subsp. mucugensis grown in the greenhouse was similar to that observed by Borba and Amorim (2007) in rhizosphere soil (1014 spores per 100 g of soil) from a natural population of the same plant collected in Mucugê. The number of spores observed for mycorrhizal plants of C. mucugensis subsp. mucugensis can be considered high, demonstrating the dependence of this plant species on AMF for its development. Pagano and Scotti (2009), studying Paepalanthus bromelioides, reported 139 spores per 100 g of rhizosphere sandy soil collected in the field. It was possible to isolate and identify seven species of native AMF from mycorrhizal plants of C. mucugensis subsp. mucugensis, and with the exception of Scutellospora spiniosissima, all other AMF identified were reported in the comprehensive study of Carvalho et al. (2012), which identified and listed 49 species of AMF collected in rupestrian fields of Minas Gerais. Nutrient analyses of C. mucugensis subsp. mucugensis rosettes demonstrated that mycorrhizal plants presented concentrations of macro- and micronutrients similar to those of non-AMF-inoculated plants, despite the marked difference in plant growth. To our knowledge, this is probably the first report on the nutrient status of an Eriocaulaceae plant, so it is difficult to compare the nutrient concentrations in leaves of micropropagated C. mucugensis subsp. mucugensis plants with those of other Poales, for example. When leaf nutrients of field-collected plants are compared with those from the greenhouse experiment, the concentrations of some nutrients such as N and P were higher (three-fold and ten-fold, respectively) in greenhouse plants than in field-collected plants, owing to frequent irrigation with nutrient solution. The presence of dark septate fungi (DSF) in the roots of C. mucugensis subsp. mucugensis observed in the AMF treatments possibly originated during inoculation, these fungi having adhered to the AMF spores isolated from the soil samples. Reports of the coexistence of DSF and AMF in the roots of plants from stressed environments (arid environments, acidic and nutrient-poor soils) have become increasingly common in studies involving symbiotic associations with fungi (Lingfei et al., 2005; Porras-Alfero et al., 2008; Schmidt et al., 2008). The filtrate of soil microorganisms combined with the native AMF also produced favorable responses in the survival and dry-matter acquisition of micropropagated C. mucugensis subsp. mucugensis plants. However, when inoculated alone, the microbiota filtrate did not promote plant growth, and survival was reduced, as observed in the control plants. The influence of the soil microbiota on plant development, as well as possible interactions between the microbial communities present in the rhizosphere and their consequent contribution to plant productivity, are widely discussed in the literature (Walker et al., 2003; Artursson et al., 2006; Bonfante and Anca, 2009; Smith and Smith, 2011). Native AMF inoculated in C.
mucugensis subsp. mucugensis were essential for plant survival and growth, permitting acclimatization in the greenhouse on natural soil. The establishment of in vitro-grown seedlings in soil is hampered by a weak root system at the beginning of acclimation; however, the symbiotic association between AMF and plant roots increases the survival rate of the plant by strengthening the root system (Yadav et al., 2012). This strengthening may reflect the importance of AMF for nutrient and water uptake in poorly fertilized environments, defense against pathogens and decreased water stress, improving some important characteristics for plant acclimation (Joshee et al., 2007; Pindi, 2011; Singh et al., 2012; Yadav et al., 2013). In this study, a relatively high amount of organic matter was observed in the soil (76 g dm⁻³), one soil characteristic that may have influenced the number of AMF species found. Borba and Amorim (2007) attributed an increased number of mycorrhizal fungal species in rhizosphere soil to a greater accumulation of soil organic matter. Moreover, the species richness in the rhizosphere soil of potted C. mucugensis subsp. mucugensis may have been influenced by soil type and growing conditions. According to Carvalho (2012), the high diversity of AMF in rupestrian fields can be explained by the heterogeneity of habitats in this environment and by the occurrence of AMF species influenced by soil physical properties, as well as the tolerance of these species to low humidity, as shown in some quantitative studies (Conceição and Pirani, 2005).
Conclusion
In this study, the authors reported on native AMF populations inoculated onto C. mucugensis subsp. mucugensis plants, but the influence of individual fungal species was not tested; evaluating this is a subsequent step in assessing mycorrhiza inoculation. The study shows that AMF inoculation is undoubtedly an important biotechnological tool and encourages the use of these microorganisms in conservation programs for the endangered C. mucugensis subsp. mucugensis.
Figure 1. Partial view of the experiment and mycorrhizal colonization in roots of C. mucugensis subsp. mucugensis plants inoculated with AMF. (A) Partial view of the experiment with C. mucugensis subsp. mucugensis plants inoculated with AMF after three months of growth in the greenhouse. (B) Partial view of the experiment showing plants with their floral scapes (arrows) developed after eleven months of growth. (C) Detail of a rosette from an AMF-inoculated plant at three months of growth. (D) Detail of a rosette from an AMF-inoculated plant with some initial flower scapes developing (nine months of growth in the greenhouse).
Figure 2. Partial view of the experiment and mycorrhizal colonization in roots of C. mucugensis subsp. mucugensis plants inoculated with AMF. (A) Mycorrhizal colonization in roots of C. mucugensis subsp. mucugensis plants inoculated with AMF plus microbiota filtrate. The arrow indicates an extraradical hypha; (B) detail of some vesicles in the cortex of an AMF plus microbiota filtrate plant; (C) general view of cortical cells densely occupied by arbuscules (arrows); (D) detail of an arbuscule (arrow) in a cortical cell of an AMF-inoculated C. mucugensis subsp. mucugensis root segment.
Figure 3.
Morphological characterization of AMF spores. (A) Spore of the species Glomus etunicatum found in the soils of treatments M and M + F; (B) spore of the species Glomus macrocarpum found in the soils of both treatments, M and M + F; (C) characteristic spore of the species Glomus microaggregatum, found in the soil of the M + F treatment; (D) spore of the species Glomus microcarpum found in the soil of treatment M; (E) spore of the species Glomus sp. found in the soil of the M + F treatment; (F) spore of the species Scutellospora dispurpurescens found in the soil of treatment M; (G) spore of the species Scutellospora spiniosissima found in the soil of treatment M. L, wall layer; Lo, ornamental layer; HS, support hyphae; BS, suspensoroid bulb; 1, illustration of the wall layer; arrow, characteristic structure of the species.
Table 1. Biomass of rosettes and mycorrhiza of C. mucugensis subsp. mucugensis micropropagated plants inoculated with native AMF, microbiota filtrate, or AMF plus microbiota filtrate, and of control plants, after three and eleven months of growth under greenhouse conditions.
Table 2. Macronutrient and micronutrient concentrations in rosettes (leaves) of C. mucugensis subsp. mucugensis plants inoculated with native AMF, microbiota filtrate, or AMF plus microbiota filtrate, and of control plants, after three and eleven months of growth under greenhouse conditions. Statistical analysis was not performed on the control and filtrate plants owing to the death of plants.
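For readers who want to reproduce the comparison described under "Statistical data analysis" outside STATISTICA, a minimal sketch of a one-way ANOVA followed by Tukey's HSD in Python is shown below; the dry-mass values are invented placeholders, not the study's measurements.

```python
# Hedged sketch of the one-way ANOVA / Tukey HSD comparison of rosette dry mass
# across the four treatments. All dry-mass values are invented placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":      [0.11, 0.09, 0.12, 0.10],
    "filtrate":     [0.12, 0.10, 0.13, 0.11],
    "AMF":          [0.35, 0.41, 0.38, 0.44],
    "AMF+filtrate": [0.37, 0.43, 0.40, 0.46],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([np.asarray(v) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```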
2018-12-22T11:11:41.601Z
2017-05-18T00:00:00.000
{ "year": 2017, "sha1": "43b0c2398e2b46b39e412322313c6a7486a61b74", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/E659F0764245.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "43b0c2398e2b46b39e412322313c6a7486a61b74", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
4395711
pes2o/s2orc
v3-fos-license
Hidden genomic MHC disparity between HLA-matched sibling pairs in hematopoietic stem cell transplantation Matching classical HLA alleles between donor and recipient is an important factor in avoiding adverse immunological effects in HSCT. Siblings with no differences in HLA alleles, either due to identical-by-state or identical-by-descent status, are considered to be optimal donors. We carried out a retrospective genomic sequence and SNP analysis of 336 fully HLA-A, -B, -DRB1 matched and 14 partially HLA-matched sibling HSCT pairs to determine the level of undetected mismatching within the MHC segment as well as to map their recombination sites. The genomic sequence of 34 genes locating in the MHC region revealed allelic mismatching at 1 to 8 additional genes in partially HLA-matched pairs. Also, fully matched pairs were found to have mismatching either at HLA-DPB1 or at non-HLA region within the MHC segment. Altogether, 3.9% of fully HLA-matched HSCT pairs had large genomic mismatching in the MHC segment. Recombination sites mapped to certain restricted locations. The number of mismatched nucleotides correlated with the risk of GvHD supporting the central role of full HLA matching in HSCT. High-density genome analysis revealed that fully HLA-matched siblings may not have identical MHC segments and even single allelic mismatching at any classical HLA gene often implies larger genomic differences along MHC. Matching human leukocyte antigen (HLA) alleles between the donor and recipient of hematopoietic stem cell transplantation (HSCT) is crucial to reduce the risk of graft-versus-host disease (GvHD), a major life-threatening complication of HSCT. The donor search usually begins by HLA genotyping of immediate family members to find an HLA-identical sibling that is considered to be an optimal donor in the HSCT setting. Due to the low rate of intra-major histocompatibility complex (MHC) recombination, a patient and a sibling donor have a 25% chance to be HLA identical, i.e., they share the same two HLA haplotypes. The segregation of all four parental HLA haplotypes in a family can often be defined on basis of HLA-A, -B, -DRB1 typing when either parents or sufficient number of siblings are available. As there is a strong linkage disequilibrium (LD) between HLA-B and -C genes and HLA-DRB1 and -DQB1 genes 1 , respectively, HLA-A,-B and -DRB1 matching usually implies also HLA-C and -DQB1 matching. However, due to the limited number of siblings and ever-older HSCT patients, all four haplotypes are not always distinguished in a family. Many HLA matched siblings can therefore be classified merely as HLA identical-by-state, rather than identical-by-descent. A chance for selecting only HLA matched but not HLA identical donor increases especially when similar haplotypes with the same HLA alleles in different combinations segregate in a family. To overcome these problems some histocompatibility laboratories now type larger than the minimal set of HLA genes. In the absence of an HLA identical sibling donor, i.e. 6/6 HLA-matched donor, HLA-mismatched related donor with a single mismatch at HLA-A, -B or -DRB1 is an option and requires typing of at least one additional classical HLA gene, HLA-C or -DQB1 or -DPB1. However, these 5/6 HLA-matched sibling pairs may have allelic disparities at other classical HLA genes and/or elsewhere in the major histocompatibility complex (MHC). 
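As a minimal illustration of the 6/6 versus 5/6 match bookkeeping discussed above, allele matching at HLA-A, -B and -DRB1 can be counted per locus as in the sketch below; the genotypes are hypothetical examples, not patient data.

```python
# Sketch of HLA-A, -B, -DRB1 allele-match counting between a donor and a recipient.
# Genotypes below are invented examples, not patient data.
from collections import Counter

def matched_alleles(donor_alleles, recipient_alleles):
    """Matched alleles (0-2) at one locus, treating each genotype as a multiset."""
    return sum((Counter(donor_alleles) & Counter(recipient_alleles)).values())

donor = {"A": ("A*02:01", "A*03:01"),
         "B": ("B*07:02", "B*08:01"),
         "DRB1": ("DRB1*15:01", "DRB1*03:01")}
recipient = {"A": ("A*02:01", "A*03:01"),
             "B": ("B*07:02", "B*44:02"),
             "DRB1": ("DRB1*15:01", "DRB1*03:01")}

score = sum(matched_alleles(donor[locus], recipient[locus]) for locus in ("A", "B", "DRB1"))
print(f"{score}/6 match at HLA-A, -B, -DRB1")  # 5/6 in this invented example
```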
There is emerging evidence that the role of the MHC in GvHD susceptibility may be more complex than merely HLA match status and imputation. Based on the clinical HLA typing of five classical HLA genes a total of 255 of 261 pairs were 6/6 HLA-matched (HLA-A, -B, -DRB1). Six pairs were known to be 5/6 HLA-matched prior to the present study. Antigen mismatching occurred at HLA-A in three pairs, at HLA-B in two pairs and at HLA-DRB1 in one pair. One of the two pairs with mismatching at HLA-B had mismatching also at HLA-C and the pair with an antigen mismatch at HLA-DRB1 was mismatched also at HLA-DQB1 (Table 1). To estimate the level of identity-by-state in seven classical HLA genes HLA-A, -B, -C, -DRB1, -DQA1, -DQB1 and -DPB1 in each 261 HSCT pair, we used an Immunochip array to screen SNP mismatches within the MHC. We explored the actual HLA match status of the six 5/6 HLA-matched HSCT pairs by imputing HLA alleles to the resolution level of unique amino acid chains (four-digit resolution). Imputed four-digit HLA alleles were in 100% concordance with two-digit clinical HLA types. The number of imputed alleles varied between 48 and 1646 depending on the specific locus. Absolute posterior probability Q2 for HLA-A, -B, -C, -DRB1, -DQA1, -DQB1 and -DPB1 varied between 0.54-1.00, 0.19-1.00, 0.17-1.00, 0.19-1.00, 0.35-1.00, 0.46-1.00 and 0.12-1.00, respectively. The differences of Q2 values within HLA-mismatched pairs were relatively low: 0.00-0.01 for HLA-A, 0.00-0.28 for HLA-B, 0.17 for HLA-C, 0.07 for HLA-DRB1, 0.44 for HLA-DQA1, 0.01 for HLA-DQB1 and 0.01-0.46 for HLA-DPB1. Metrics of the imputation and imputed HLA alleles for each pair are described in more detail in Supplementary Table S1. Imputation revealed additional mismatching at those classical HLA genes not typed prior to transplantation: pair 2329, which was known to have allelic mismatches at HLA-DRB1 and -DQB1, was also mismatched at HLA-DQA1, and four pairs with allelic mismatching at HLA-A or -B were also mismatched at HLA-DPB1 (pairs 2329, 3205, 4426 and 5366 in Table 1 and Supplementary Table S2). Furthermore, mismatching at HLA-DPB1 was revealed in four out of 255 fully (6/6) HLA-matched pairs (pairs 1812, 3450, 4152 and 5236 in Table 1). Subsequent HLA typing by the reference technique SSO confirmed allelic mismatching at HLA-DPB1 in all the pairs from which DNA was available, that is, five out of eight DPB1-mismatched pairs. In total, 10 of the 261 (3.8%) HSCTs in study cohort 1 were performed between pairs with at least one allelic mismatch in the classical HLA genes. MHC segment identity. We next scrutinized the level of overall SNP matching within the entire MHC region in each of the 261 HSCT pairs based on the matching of the 5137 SNPs that had been mapped to that segment. Figure 1a shows the matched (dark green) and mismatched (yellow) SNPs in HSCT pairs with large mismatching fragments. The mismatched fragments covered those HLA genes that were known to have allelic mismatches by clinical HLA typing or HLA imputation. Thus, the 10 HSCT pairs with identified mismatching at the classical HLA had mismatched SNP clusters especially at the particular gene with an allele mismatch. The mismatched regions, however, covered larger areas than only the exact HLA gene with allelic mismatching. For example, pair 2329, which was originally known to be mismatched at HLA-DRB1 and -DQB1 alleles, showed SNP mismatches encompassing the entire segment from complement C4 gene to the telomeric side of HLA-DPB1. In Supplementary Table S3. 
In addition to the 10 mismatched pairs described above, four other HSCT pairs had clusters of mismatched SNPs within the MHC. In other words, they were found to be not identical-by-state. These SNP mismatches were located telomeric to HLA-A or centromeric to HLA-DPB1 (pairs 2934, 3446, 4706, 3803 in Fig. 1a) covering the genomic area where non-classical HLA genes or HLA pseudogenes reside: HLA-G and -F genes at the telomeric end of the MHC and HLA-DPB2 at the other end. Surprisingly, pair 4205, which had no HLA allele disparities in the seven classical HLA genes (14/14 match), was found to have clusters of mismatched SNPs in the MHC. Large regions telomeric to HLA-A, between HLA-A and HLA-C and between HLA-B and DRB1, encompassing the entire TNF-C4 fragment in the MHC class III, were covered with SNP mismatches. Taken those mismatches in the MHC region into account that were not known before the transplantation, altogether 3.5% (9/255) of the HSCT pairs in study cohort 1 were mismatched. The remaining 246 fully 6/6 HLA-matched HSCT sibling pairs had no known HLA allelic mismatches or SNP mismatches, barring some apparently sporadic single SNP differences ( Supplementary Fig. S1). Study cohort 2. Study cohort 2 consisted of 89 HLA-matched sibling HSCT pairs, of which five were known to have a single allelic HLA-A mismatch and three to have HLA-B and -C mismatch. None of them had mismatching at HLA-DRB1 (Table 2). Furthermore, the alleles of 27 additional genes in the MHC region were assigned in pairs with known HLA mismatching. These genes included non-classical HLA genes, HLA pseudogenes and non-HLA genes: HLA-F, -V, -G, -H, -K, -J, -L, -E, -DRA, -DRB2-9, -DOB, -DMB, -DMA, -DOA, -DPA1, -DPB2, MICA, MICB, TAP1 and TAP2. In the 5/6 HLA-matched pairs, the number of additional genes with allelic mismatches leading to a change at the amino acid level varied between three and seven ( Table 2). Thus, the minimum number of genes with allelic disparities at four-digit resolution was four (pair 2468) and the maximum number was as high as eight (pair 5782). The matching status including all 34 MHC genes varied between 26/34 and 30/34. As expected, the number of mismatching was higher at the DNA level (six-digits), varying between five and nine ( Fig. 2). All five pairs with mismatching at HLA-A had also mismatching at HLA-F, -G, -H, and/or -K located telomeric to HLA-A gene. Likewise, all pairs with HLA-B mismatching also had MICA mismatching and one pair also had MICB mismatching. In addition, some extra mismatching in these pairs occurred at other genes of the MHC class I. Interestingly, many pairs with allelic mismatching in the MHC class I also had allelic disparities in MHC class II genes centromeric to HLA-DQB1 gene. In this study cohort, no allelic HLA-DQA1 or -DQB1 mismatching was found, most probably due to the tight LD of the matched HLA-DRB1 alleles. Consistent with the results of study cohort 1, not all of the 6/6 HLA-matched pairs were fully matched for the entire MHC. Pair 5672 had allelic mismatching at HLA-DPB1, -DOB and TAP2, and pair 6907 had mismatching at HLA-DPB1 together with HLA-DPA1 and -DOA (Table 2 and Fig. 2). In this cohort 8.9% (10/89) of the HSCT pairs were mismatched at least for one classical HLA allele. MHC segment identity. The extent of the mismatched fragments in the 89 HSCT pairs was examined further at the genomic sequence level (Fig. 1b). 
Again, the mismatched genomic regions covered larger segments around the particular gene with observed allelic HLA mismatching. Pairs 5426, 6245, and 6768 mismatched both at HLA-B and HLA-C showed mismatching at genomic segment starting from the telomeric side of HLA-C gene and ending centromeric to complement C4 gene in the MHC class III. The mismatched region in the pairs with allelic HLA-A disparity (pairs 2468, 5624, 5728, 5782, 6385) encompassed relatively long segments flanking the entire HLA-A gene. Two samples, 5559 and 7086, showed MHC fragment mismatching despite of having no mismatching at any of the 34 genes (Table 2 and Fig. 1b). The observed mismatching located centromeric to HLA-DPB2 gene in pair 5559 and telomeric to HLA-F gene in pair 7086. It is of note that approximately the same segments were also mismatched in some of the 5/6 HLA-matched pairs as well (5782, 6245, 6768, 7086). Altogether 4.9% (4/81) of the HLA matched HSCT pairs in study cohort 2 were found to have unexpected disparities in the MHC region. The remaining 77 fully HLA-matched HSCT sibling pairs were identical-by-state along the entire 4.5 Mbp MHC region, barring some sporadic single mismatched nucleotides ( Supplementary Fig. S1). Taking the two cohorts together, 7.7% (27/350) of the putative HSCT pairs were found to have mismatches in the MHC region of which 3.9% (13/336) were observed in fully HLA-A, -B, -C, -DRB1, -DQB1 matched pairs. matched, respectively. We found a weak but consistent trend between the number of mismatched nucleotides within the MHC segment and the risk of both acute and chronic GvHD as estimated by odds ratio vs. mismatch threshold (Fig. 3a,b). The trend remained significant after excluding 5/6 HLA-matched pairs (Fig. 3c,d). Discussion In the present study, we used SNP genotyping and genomic sequencing to investigate the matching of the entire MHC region in 350 fully or partially HLA-A, -B and -DRB1 matched sibling donor-recipient HSCT pairs. The basic questions addressed in the study were: how frequently do HLA-A, -B, -DRB1 matched pairs have mismatching in the MHC region, how large genomic fragments do them cover and does the level of mismatching correlate with the occurrence of graft-versus-host disease? Altogether, 7.7% of all HSCT pairs and 3.9% of those pairs without a prior mismatch at HLA-A, -B or -DRB1 were found to have genomic differences in the MHC segment. Hence, hidden mismatching at HLA and non-HLA regions in the MHC were uncovered not only in 5/6 HLA-matched pairs but also in 6/6 HLA-matched pairs. The mismatched genomic fragments were larger than just a single HLA gene with allelic mismatch, sometimes covering many flanking genes. It is of note that our material was retrospective and HLA typing and matching were done as recommended during the years 1993-2011. Currently, the technological advance, in particular, has led to typing of a wider HLA profile in some laboratories. Importantly, despite of the limited sample size in this study, the number of mismatched SNPs showed a positive association with the risks of acute and chronic GvHD even after excluding the cases with known prior mismatching at HLA-A, -B, or DRB1. This result supports the primary role of matching the HLA segment in HSCT. We determined the allelic variation in 34 genes located in the MHC and found that some pairs were only matched for 26 of these genes. Allelic mismatching at HLA-A usually resulted in mismatching at the non-classical HLA-F and -G as well as at pseudogenes HLA-H and -K. 
Likewise, mismatching at HLA-B encompassed mismatching at HLA-C and MICA, and in one sample also at MICB. Allelic mismatching at the HLA-DPB1 gene was relatively common, independently or together with mismatching elsewhere, and, interestingly, usually also included the TAP1 or TAP2 genes. In addition, we found a few cases with isolated allelic mismatching at the HLA-DOB, TAP1, TAP2, HLA-DMA and HLA-DOA genes, without detectable mismatching at other MHC class II genes, indicating either highly similar but different haplotypes or intra-MHC recombination. It seems that the functional consequences of a single allelic HLA mismatch can be wider than assumed. If all haplotypes in a family are not known, genotyping of the seven most important classical HLA genes in every sibling pair setting should be performed to ensure HLA identity between a transplant pair. This action would only ensure HLA identity but not haplotype identity, as mismatches were also observed in large areas of the intervening sequence. To ascertain haplotype identity, a high-density SNP panel covering the entire MHC would be needed. There is evidence that variations in the HLA-DPB1, -G, -E and MICA genes are associated with immunological diseases, including also reports on graft-versus-host disease 5,22-26 and graft rejection 27. The role of allelic mismatching at HLA-DPB1 in HSCT appears to be complex as it is related to expression level 23 and so-called permissive and non-permissive mismatch groups 22,27,28. It is also known that even a single allelic mismatch at HLA can induce an alloimmune response 29,30, including the graft-versus-leukemia effect, a favourable phenomenon that reduces the risk of relapse 31,32. Hence, our finding that the number of mismatched nucleotides correlates with the GvHD risk fits well with these findings.
Figure 3. Trend toward higher risk for graft-versus-host disease (GvHD) along increasing number of nucleotide mismatches between haematopoietic stem cell transplantation (HSCT) donor-recipient pairs in study cohort 1. The donor-recipient pairs are divided into low and high mismatch groups according to the total number of MHC region genotype differences between each pair. Similarly, each pair is also assigned to either the aGvHD positive or negative group according to the recipient's clinical GvHD grade. The mismatch and GvHD categorized data are then arranged into a contingency table to calculate the odds ratio (OR). The mismatch threshold value, defined as the natural logarithm of the number of total SNP genotype differences per HSCT sibling pair, is varied from 0 to 6 and the corresponding odds ratio is calculated for each threshold value. Thus, each data point represents an odds ratio at a particular threshold value. Moreover, two alternate definitions of GvHD-negative status are used: grade 0 (no) or grades 0-2 (no/limited) in acute GvHD, and no GvHD or no/limited GvHD in chronic GvHD. The plots show the odds ratios (y-axis) against the varying mismatch threshold values (x-axis). (a) Acute (n = 91) and (b) chronic (n = 62) GvHD for cohort 1, and (c) acute (n = 85) and (d) chronic (n = 56) GvHD for fully HLA-A, -B, -DRB1 matched cohort 1 pairs. Pairs with zero MHC mismatches are omitted. Linear regression lines are shown for both GvHD negative groups in their corresponding colours. Correlation is calculated by Kendall's rank correlation. aGvHD, acute graft-versus-host disease; cGvHD, chronic graft-versus-host disease; OR, odds ratio.
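A rough sketch of the odds-ratio sweep summarized in Figure 3 — splitting pairs at a varying log-mismatch cutoff, tabulating GvHD status in a 2×2 table, and testing the trend with Kendall's rank correlation — is given below; all values are simulated placeholders, and the continuity correction is an assumption of this sketch, not necessarily part of the original analysis.

```python
# Hedged sketch of the odds-ratio sweep over mismatch thresholds and the Kendall
# rank-correlation test of the trend. Data are simulated placeholders; the 0.5
# continuity correction is an assumption of this sketch, not the original analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs = 90
mismatches = rng.integers(1, 400, n_pairs)       # total MHC mismatches per pair (>0)
gvhd = rng.random(n_pairs) < 0.3                 # simulated GvHD status per recipient

log_mm = np.log(mismatches)
thresholds = np.arange(0.5, 6.0, 0.5)
odds_ratios = []
for t in thresholds:
    high = log_mm > t
    a = np.sum(high & gvhd) + 0.5
    b = np.sum(high & ~gvhd) + 0.5
    c = np.sum(~high & gvhd) + 0.5
    d = np.sum(~high & ~gvhd) + 0.5
    odds_ratios.append((a * d) / (b * c))

tau, p = stats.kendalltau(thresholds, odds_ratios)
print(f"Kendall tau = {tau:.2f}, p = {p:.3f}")
```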
The MHC region is known to have relatively conserved genomic blocks of a few to hundreds of thousands of kilobases in length. These blocks are flanked by recombination sites or hot spots 33,34 resulting in the sharing of these segments by many unrelated individuals or different haplotypes. The borders of the mismatched segments observed in 27 pairs of this study were in agreement with the published recombination sites within the MHC class I, II and III regions 15,16. Sometimes MHC blocks may span several megabases of DNA, covering almost the entire MHC fragment; these are referred to as ancestral or conserved extended MHC haplotypes (AH or CEH) 10-13. These long and fixed haplotypes are remarkably conserved, having a high level of allele identity across the MHC. For example, the most frequent North European HLA haplotype, 8.1 AH (HLA-A1-B8-DR3), is 92-98% congruent, but some polymorphisms are found telomeric to HLA-A and centromeric to HLA-DQB1 35. The level of congruence can vary greatly depending on the haplotype group, especially if they are not AHs 14. It is therefore possible that there are many different copies of the same haplotype in a family and that fully HLA-matched siblings are not haplotype identical 36. This may explain the SNP mismatching in the non-HLA MHC regions of the seven 6/6 HLA-matched pairs, although meiotic recombination cannot be excluded. As no parents or children were available for HLA typing, owing to the clinical practice in Finland, the sibling pairs could only be classified as identical-by-state but not identical-by-descent. It is therefore not possible to know whether mismatching between 5/6 HLA-matched siblings was due to meiotic recombination or because the siblings had inherited different but very similar haplotypes with only one allelic difference at HLA. For example, two similar haplotypes that are very frequent in the Finnish population, A*02:01-C*07:02-B*07:02-DRB1*15:01-DQB1*06:02 and A*03:01-C*07:02-B*07:02-DRB1*15:01-DQB1*06:02, can readily occur within a single family. Alternatively, parental chromosomes may by chance carry the same HLA alleles, but in completely different haplotypes, i.e., siblings have inherited different chromosomes from their parents. This may be the case in pair 4205, which was identical at the classical HLA-A, -B, and -DRB1 genes, but large segments with mismatched SNPs were observed in the sequence between them. The frequencies of the putative haplotypes that can be formed from the pair's HLA combinations (HLA-A*02:01, 11:01; C*03:03, 03:03; B*15:01, 55:01; DRB1*13:01, 14:01; DQB1*05:03, 06:03) vary between 0.014 and 0.0003 in the Finnish population according to our unpublished results, which ranks 10th-412th in haplotype frequency. To our knowledge, this type of study, in which MHC sequences between sibling pairs are scrutinised in detail, has not been performed before. The discovery of hidden mismatches within the MHC that can encompass many genes, even in fully HLA-matched HSCT pairs, could encourage HLA laboratories to screen the MHC region more thoroughly with new DNA techniques, in particular as there is a trend towards a higher risk of GvHD with the number of mismatched nucleotides. The present study also provides useful information on the extent of MHC variation, which is important as haploidentical HSCTs are now coming into wider practice.
Material and Methods
Study cohorts and DNA extraction. Two different study cohorts were examined.
Study cohort 1 was composed of 261 HSCT sibling pairs and cohort 2 of 89 possible HSCT sibling pairs. The actual transplantations of the cohorts were done between the years 1993 and 2011. Genomic DNA from the white blood cell fraction of whole blood was extracted with QiaAmp Blood minikit columns (Qiagen GmbH, Germany). This study was carried out in accordance with the recommendations of the Ethical Committee of Helsinki University Hospital with written informed consent from living subjects. The authority operating under the Ministry of Social Affairs and Health, Valvira, approved the study for deceased subjects. Demographic details and clinical outcomes including GvHD grading of HSCT pairs of cohort 1 are described in Supplementary Table S4 and in our previous studies 37,38 . Clinical HLA typing. Clinical HLA typing was performed at the HLA Laboratory of the Finnish Red Cross Blood Service using procedures accredited by the European Federation for Immunogenetics (EFI). All patients and donors of cohort 1, whose HSCT were done between the years 1993-2006, were typed for HLA-A, -B and -DRB1 either by the serological method (Lymphotype HLA-AB and Lymphotype HLA-DR, Bio-Rad Medical Diagnostics, Dreieich, Germany) or by PCR-based typing methods at two-digit resolution level (a LIPA reverse dot blot kit, Innogenetics Group, Gent, Belgium or Pel Freez HLA-SSP kits, Dynal Biotech LLC, Oslo, Norway). Depending on a given time period HLA-C and/or -DQB1 genes were also genotyped at four-digit resolution level in the pairs with antigen mismatching at any of HLA-A, -B or -DRB1 (5/6 HLA-matched pairs) conforming to the requirements given by EFI effective at the time. Study cohort 2, whose HSCT were done between the Depending on the recommendations by EFI at any given time period, the clinical HLA typing did not result in the homogeneous set of HLA genes typed and/or the assignation of the alleles at the same resolution. To simplify the analyses, antigenic matching at HLA-A, -B, -C, -DRB1 and -DQB1 was set as the starting point. Immunochip array. Study cohort 1 was genotyped at the Institute for Molecular Medicine Finland, University of Helsinki, by using Immunochip array (Illumina, Inc., CA, USA). The Immunochip included 8215 SNPs within the DNA segment from the telomeric side of HLA-F gene to the centromeric side of HLA-DPB2 gene at positions 29-33.5 Mbp (Genome Reference Consortium human [GRCh]37/hg19). The initial quality control identified samples with discordant sex information, duplicate samples, call rate <97%, and heterozygosity excess <−0.3 (not X chromosome) or >0.2, and >0.1 for the X chromosome. The data were quality-filtered according to Anderson et al. 39 . After quality controls, 5137 SNPs were included in the study. The alleles of the classical HLA-A, -B, -C, -DRB1, -DQA1, -DQB1 and -DPB1 genes were imputed at four-digit resolution level using the software HLA*IMP:02 40 (The Oxford HLA Imputation Framework, UK). Missing data threshold was set to 0.20. SNPs were aligned and genotypes phased against HapMap (CEU) reference panel (hapmap3_r27_b36_fwd.consensus.qc.poly.chr6_ceu.). The absolute posterior probability scores Q2 for each imputed genotype were produced by using threshold T = 0.00. The generated reads were aligned to the GRCH37/hg19 reference genome. Base quality score recalibration and SNP and INDEL discovery were performed using GATK v.3.6-0 VariantRecalibrator and ApplyRecalibration tools with the default settings. The used ts_filter_level setting was 99.0 [42][43][44] . 
The data were filtered using the hard cutoffs of the total depth of coverage per sample >7 and genomic quality >19 45 . For HLA typing, the fastq read data were quality checked using FastQC 46 , and adapters were trimmed using Cutadapt 47 . The Omixon Explore program version 1.2.0 (Omixon, Budapest, Hungary) was used for allele assignment of 30 genes at six-digit resolution level comprising both classical HLA genes, non-classical HLA genes and pseudogenes in the MHC region using the IMGT/HLA database's HLA nomenclature release 3. MHC mismatch analysis. The genotypes of each HSCT sibling donor-recipient pair were compared position-by-position over the segment 29-33.5 Mbp of chromosome 6p21.3 by classifying each position into to one of three categories: the diploid genotype in a given position between pairs was identical, not identical or missing. Genotype positions with missing calls of over 5% were removed before the comparison. For study cohort 1, which was genotyped with Immunochip, the data were first transformed into tped format using plink v.107 48 and then managed with R v.3.3.3 49 . For study cohort 2, the VCF data file was read into R using the seqminer library v.5.7 50 and then managed similarly to study cohort 1. Positions that were identical over all of the included pairs were removed before analysis and plotting. The final pair comparison result matrix was plotted with the R library lattice v.0.20-35 function levelplot 51 . Recombination segments were identified based on visual inspection of pairwise comparison of genotype similarity plot. Statistical analysis. Study cohort 1 was used for analysing the association of GvHD clinical status with the extent of pairwise MHC mismatching. The analysis was performed by first counting the natural logarithm of total number of MHC mismatches in each HSCT sibling donor-recipient pair and then dividing the pairs into 'high' and 'low' mismatch groups. Due to the log transform, pairs with zero mismatches were omitted from analysis. Similarly, each pair was assigned into either a GvHD positive or negative group according to the recipient's clinical GvHD gradus. The mismatch and GvHD categorized data were arranged into a contingency table to calculate the odds ratio (OR). The cutoff in defining the 'high' and 'low' mismatch pairs was varied from 0 to 6. The odds ratio was then calculated over the varying cutoff values and correlation between the cutoff and odds ratio was calculated with Kendall's rank correlation. The correlation p-value was calculated with the R function cor.test. The linear relationship was visualized using regression line with its 95% confidence intervals on the odds ratio vs. cutoff plots. Furthermore, to estimate the robustness of the correlation at different definitions of GvHD positive and GvHD negative groups, either grade 0 subjects or grades 0-2 subjects were included in the negative acute GvHD group and either no GvHD or no/limited GvHD subjects were included in the negative chronic GvHD group. To estimate the possible effect of HLA-A,-B,-DRB1 mismatching that was known before the HSCT, the mismatched pairs were removed and the same analysis procedure was carried out for only the fully matched pairs. The analysis was done with R v.3.3.3 49 . Data availability statement. Data reported in this study is not available due to the limitations set by Ethical committee. Ethical approval and informed consent. 
All experiments were carried out in accordance with relevant guidelines and regulations defined in the Finnish legislation. The Ethical Committee of the Helsinki University Hospital and Valvira, the authority operating under the Ministry of Social Affairs and Health, have approved the study.
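The position-by-position genotype comparison described in the "MHC mismatch analysis" section could be expressed roughly as follows; the donor and recipient genotype lists are invented placeholders.

```python
# Rough sketch of the pairwise MHC genotype comparison described in the Methods:
# each position is classified as identical, not identical, or missing between the
# donor and the recipient. Genotype strings below are invented placeholders.
def compare_pair(donor, recipient, missing="NN"):
    """Classify each genotype position as 'identical', 'mismatch' or 'missing'."""
    classes = []
    for d, r in zip(donor, recipient):
        if d == missing or r == missing:
            classes.append("missing")
        elif set(d) == set(r):          # diploid genotype treated as unordered
            classes.append("identical")
        else:
            classes.append("mismatch")
    return classes

donor     = ["AA", "AG", "GG", "NN", "CT"]
recipient = ["AA", "GA", "AG", "CC", "CC"]
classes = compare_pair(donor, recipient)
print(classes, "-> total mismatches:", classes.count("mismatch"))
```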
Evidence on a link between the intensity of Schumann resonance and global surface temperature

A correlation is investigated between the intensity of the global electromagnetic oscillations (Schumann resonance) and the planetary surface temperature. The electromagnetic signal was monitored at Moshiri (Japan), and temperature data were taken from surface meteorological observations. The series covers the period from November 1998 to May 2002. The Schumann resonance intensity is found to vary coherently with the global ground temperature in the latitude interval from 45° S to 45° N: the relevant cross-correlation coefficient reaches the value of 0.9. It slightly increases when the high-latitude temperature is incorporated. Correspondence among the data decreases when we reduce the latitude interval, which indicates the important role of the middle-latitude lightning in the Schumann resonance oscillations. We apply the principal component (or singular spectral) analysis to the electromagnetic and temperature records to extract annual, semiannual, and interannual variations. The principal component analysis (PCA) clarifies the links between electromagnetic records and meteorological data.

Correspondence to: M. Hayakawa (hayakawa@whistler.ee.uec.ac.jp)

Introduction

A correspondence was demonstrated by Williams (1992) between the long-term Schumann resonance amplitude and the temperature anomaly in the tropical belt. The global electromagnetic or Schumann resonance is a phenomenon that takes place in the spherical Earth-ionosphere cavity (Nickolaenko and Hayakawa, 2002). Oscillations are maintained by the global lightning activity, which radiates at extremely low frequencies (ELF). The intensity of the resonance reflects the instantaneous level of thunderstorm activity, and it must vary with the global temperature. Williams (1992) used the Schumann resonance records collected by Charles Polk at the Rhode Island field site; the duration covered six years (Polk, 1969, 1982). We show in Fig. 1 the plot adopted from Williams (1992), where the electromagnetic data are compared with the monthly mean fluctuation in the surface (dry-bulb) temperature for the entire tropics. Time is shown on the abscissa, the global temperature anomaly in the tropical region is shown along the right ordinate in centigrade (the heavy line), and the amplitude of the horizontal magnetic field at the fundamental Schumann resonance mode is plotted on the left ordinate (the line with open squares). As one may see, Fig. 1 shows that the Schumann resonance amplitude follows the temperature anomaly quite closely for the long-period variation.
An interest in the Schumann resonance reappeared in the 90s. Many observatories were set at work (Sentman, 1995; Satori and Zieger, 1996; Füllekrug and Fraser-Smith, 1997; Nickolaenko, 1997; Nickolaenko et al., 1998, 1999; Heckman et al., 1998; Hobara et al., 2000, 2001, 2003; Hayakawa et al., 2004), and noticeable progress was made in the Schumann resonance modeling (Kirillov et al., 1997; Hayakawa and Otsuyama, 2002; Mushtak and Williams, 2002; Otsuyama et al., 2003; Simpson and Taflove, 2004; Yang and Pasko, 2005). The present work is a further development of the idea by Williams (1992), Price (1993), Füllekrug and Fraser-Smith (1997), and Williams (1994). We investigate the relationship between the Schumann resonance intensity and the global soil temperature, the latter measured in symmetric latitudinal intervals. The driving idea is rather straightforward. Electromagnetic radiation from the global lightning strokes is the source of the electromagnetic energy detected in the Schumann resonance band (from a few Hz to some tens of Hz). The planetary thunderstorm activity is concentrated in the tropics. The lightning flash rate is collected from space, using the optical transient detector (OTD) (Orville and Henderson, 1986; Christian et al., 2003; Hayakawa et al., 2005). The lightning flash rate is high in the tropics, remains relatively high in the mid-latitudes, and becomes rare in the polar and sub-polar regions. Therefore, one may expect that the Schumann resonance intensity is connected with the surface temperature within a certain latitudinal interval. We are going to test this idea and establish the width of such an interval.

By following the Sun, global thunderstorms cyclically drift along the meridian to the Northern Hemisphere during boreal summer, and drift southward from the equator in winter. Since the continents occupy a greater area in the Northern Hemisphere, an annual variation develops in the thunderstorm activity: the global flash rate is smaller during the winter months (Christian et al., 2003; Hayakawa et al., 2005). Therefore, annual variations must also be present in the Schumann resonance intensity. The goal of our investigation is a formal comparison of trends present in the Schumann resonance intensity and the ground temperature itself.

We correlate the long-term seasonal variations of the cumulative intensity of three Schumann resonance modes with the changes in the median global land temperature relevant to different symmetric latitudinal belts. We look for the interval where a close correlation appears between the climatological and electromagnetic data. Afterwards, to remove the random oscillations always present in the data sets, we turn to the principal components (Troyan and Hayakawa, 2003), namely, to the annual, semiannual and interannual trends, which are extracted from the raw temperature and Schumann resonance data. Finally, the current results are discussed and explained, and the areas of future work are outlined.

Acquisition of Schumann resonance data and signal processing

The natural ELF signal is monitored at the Moshiri observatory (Japan, 44° N). The notation H_NS corresponds to the magnetic coil antenna aligned to the local meridian; such a sensor is sensitive to the radio waves arriving from the east (thunderstorms in America) or from the west (African activity). The ELF measurement system is periodically calibrated.
The power spectra of the Schumann resonance are computed during the processing of a record with the FFT algorithm. Spectra are averaged over segments 10 min long. These data are additionally averaged over each hour of the day, which allows us to reduce the impact of data gaps on the output results. Finally, the monthly mean is computed for every hour. Thus, we obtain the monthly averaged spectra as a function of universal time. These sets were ultimately averaged to produce the median Schumann resonance spectra relevant to each month of a year.

Schumann resonance is observed as the peaks (modes) in the power spectrum of the given field component. Each mode corresponds to a particular spatial distribution of the field. Thunderstorms move around the globe during the day, the distance from an observatory to the lightning sources varies, and the amplitude of individual Schumann resonance modes also varies. Alterations caused by the source position are superimposed on the changes connected with the contemporary intensity of the global thunderstorm activity. So, when obtaining the estimates of thunderstorm intensity, we have to reduce the impact of distance variations. The cumulative intensity of three Schumann resonance modes is used for this purpose, being less sensitive to the source-observer distance (see Polk, 1969; Sentman and Fraser, 1991; Nickolaenko, 1997; Nickolaenko et al., 1999; Nickolaenko and Hayakawa, 2002):

I = A_1² + A_2² + A_3²   (1)

Here, A_i is the magnetic field amplitude of the i-th mode (i = 1, 2, 3). We use the ±0.5-Hz bandwidth for each mode. This approach was exploited in the experimental studies of Schumann resonance when obtaining estimates for the instant level of global thunderstorm activity (Clayton and Polk, 1997; Heckman et al., 1998; Nickolaenko et al., 1998, 1999).

Figure 2 illustrates the dynamics of the monthly averaged daily variation of the Schumann resonance intensity (1) during the whole period of observations. The abscissa indicates the month and year, while the ordinate indicates the universal time (UT). The measuring equipment is fully calibrated (Hobara et al., 2000), and the presentation in Fig. 2 is given in the absolute intensity (pT²) and in dB relative to 1 pT²: I [dB] = 10 lg(I/I_R), where I_R = 1 pT². The absolute intensity observed is in accord with the published data (see Sentman, 1995). The strong American thunderstorm activity is observed in the plots of Fig. 2 around midnight (23:00-00:00) UT. A smaller African activity is visible at 14:00-15:00 UT, being somewhat reduced by the angular pattern of the magnetic antenna. An impact of the nearby powerful storms in Southeast Asia is present at 07:00-10:00 UT, even in the H_NS field component.

The initial data sets are presented in Fig. 3. The upper panel in Fig. 3 depicts the absolute power (in pT²) of the three Schumann resonance modes (1) as a function of time recorded at Moshiri station from November 1998 to May 2002. The second frame repeats the same plot translated to dB relative to the 1 pT² scale. The bottom panel presents seasonal variations of the global land surface temperature in different latitudinal intervals. Here the time (in months) is plotted on the abscissa. Vertical dotted lines mark the half-year intervals. As one may see from Fig. 3, the annual variation of the Schumann resonance intensity is close to 5 dB (amplitude variation by a factor of two).

Surface temperature data

The bottom panel of Fig. 3 depicts seasonal variations of the surface temperature computed within intervals ranging from 20° S to 20° N; 40° S-40° N; 60° S-60° N; and 80° S-80° N. The data were selected from the National Climate Data Center (USA), which provides monthly mean temperatures for the land area based on the surface meteorological observations (see Petersen and Vose, 1997). Initial data correspond to the grid 15° × 5° (longitude times latitude, respectively) covering the entire Earth. The grid mean temperature was calculated for each interval: the mean temperature at given latitudes was obtained by averaging the grid data in accordance with the ground area found in the interval.
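To make the data reduction above concrete, the following is a small Python sketch of the cumulative modal intensity (Eq. 1 as reconstructed here), its dB form, and a belt-mean temperature. The paper weights the grid temperatures by the ground area in each interval; a cosine-latitude weight is used below as a simple stand-in, and all inputs are synthetic rather than the Moshiri or NCDC data.

```python
# Sketch of the processing described above: cumulative Schumann resonance
# intensity of the first three modes, its dB form, and latitude-belt averaging
# of a gridded temperature field. Synthetic inputs; not the authors' pipeline.
import numpy as np

def cumulative_intensity(amplitudes_pT):
    """Sum of squared modal amplitudes A_i (i = 1..3), in pT^2."""
    a = np.asarray(amplitudes_pT, dtype=float)
    return float(np.sum(a[:3] ** 2))

def intensity_db(intensity_pT2, reference_pT2=1.0):
    """I[dB] = 10 lg(I / I_R) with I_R = 1 pT^2."""
    return 10.0 * np.log10(intensity_pT2 / reference_pT2)

def belt_mean_temperature(temp_grid, lat_centers_deg, half_width_deg):
    """Cosine(latitude)-weighted mean over the belt |lat| <= half_width.

    temp_grid: 2-D array (lat x lon) of monthly mean land temperatures.
    The cosine weight approximates the surface area of each latitude row.
    """
    lat = np.asarray(lat_centers_deg, dtype=float)
    in_belt = np.abs(lat) <= half_width_deg
    weights = np.cos(np.radians(lat[in_belt]))
    row_means = np.nanmean(temp_grid[in_belt, :], axis=1)   # average over longitude
    return float(np.average(row_means, weights=weights))

# Example with synthetic numbers
I = cumulative_intensity([0.9, 0.6, 0.4])        # pT^2
print(intensity_db(I))                            # about 1.2 dB re 1 pT^2
lat = np.arange(-87.5, 90, 5.0)                   # 5-degree latitude rows
grid = 28.0 - 0.006 * lat[:, None] ** 2 + np.zeros((lat.size, 24))
print(belt_mean_temperature(grid, lat, 45.0))
```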
To reduce the impact of the seasonal north-south drift, we averaged the temperature data in symmetric latitude intervals centered at the equator. Thus, we consistently include the surface temperature of the most important tropical belt, corresponding to the narrow central peak in the latitudinal distribution of lightning flashes observed from space (Christian et al., 2003; Hayakawa et al., 2005). As might be concluded from Fig. 3, this zone is associated with the strong semiannual variation. The north-south seasonal drift and the asymmetry in the global land distribution appear as the annual variation in the surface temperature (see the lower plots in Fig. 3 for wide latitudinal gaps). The semiannual variation is always present; however, it seems to diminish relative to the annual variation pertinent to the middle latitudes. The median temperature decreases when the width of the latitudinal interval increases, reflecting the general decline of the temperature with latitude.

Correlation between Schumann resonance intensity and global temperature

We calculated the cross-correlation coefficients for variations of the Schumann resonance intensity and the global ground temperature averaged in different latitudinal belts; see Fig. 4 (the widths of ±5°, ±10°, ±15°, etc., were used). The width of the latitude interval is shown on the abscissa of Fig. 4 in degrees, and the cross-correlation coefficient is indicated on the ordinate. A systematically stronger correlation is apparent between the temperature and the Schumann resonance intensity measured in dB (crosses) rather than the absolute intensity (circles). This property implies that a link between the global temperature and the electromagnetic intensity I is of the following form:

I = I_0 · exp(ζ),   (2)

where the exponent ζ varies with the global ground temperature. It is clear that the field level in dB is a measure of the exponent ζ.

One may see from Fig. 4 that the cross-correlation coefficient becomes saturated when the latitude interval exceeds ±45°. Such a behavior might be expected, since the lightning activity is concentrated in the tropics and sub-tropics. On the other hand, a contribution from the mid-latitude temperature into the Schumann resonance intensity (and hence the role of the mid-latitude lightning activity) is significant. The conclusion is supported by the upper frames of Fig. 3, showing a strong annual variation in the Schumann resonance intensity, while the semiannual component prevails in the tropics and sub-tropics. This is the reason why the two processes have an insignificant cross-correlation until the latitude interval becomes sufficiently wide.

The connection between the two periodic variables, including their mutual phase shift, is distinctly demonstrated by the Lissajous plots shown in Fig. 5. Here, the seasonal variations of the global soil temperature are plotted on the abscissa. The ordinate shows the Schumann resonance intensity in dB. We use various colors to show different seasons: green marks spring, red marks summer, yellow is autumn, and black is winter. It is easy to see from Fig. 5 that the Schumann resonance intensity and the global ground temperature tend to vary coherently. However, their links are complicated by the presence of higher harmonics and the phase shifts.
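The belt-width scan behind Fig. 4 can be sketched as follows. The monthly series below are synthetic (the real analysis used the Moshiri intensity record in dB and the NCDC belt-mean temperatures), and the function name is illustrative only.

```python
# Sketch of the scan behind Fig. 4: cross-correlation between the monthly
# Schumann resonance intensity (in dB) and the mean land temperature of
# symmetric latitude belts of increasing width. All inputs are synthetic.
import numpy as np

def correlation_vs_belt_width(intensity_db, temp_by_belt):
    """temp_by_belt: dict {half_width_deg: monthly temperature series}."""
    out = {}
    x = np.asarray(intensity_db, dtype=float)
    for half_width, temp in sorted(temp_by_belt.items()):
        y = np.asarray(temp, dtype=float)
        out[half_width] = float(np.corrcoef(x, y)[0, 1])   # Pearson r
    return out

months = np.arange(43)                                   # roughly Nov 1998 - May 2002
annual = np.cos(2 * np.pi * (months - 7) / 12.0)
semiannual = np.cos(4 * np.pi * months / 12.0)
sr_db = 2.0 * annual + 0.2 * np.random.default_rng(1).normal(size=43)
temps = {20: 26 + 0.8 * semiannual,                      # tropics: semiannual dominates
         40: 22 + 1.5 * annual + 0.5 * semiannual,
         60: 18 + 2.5 * annual + 0.5 * semiannual,
         80: 15 + 3.0 * annual + 0.5 * semiannual}
print(correlation_vs_belt_width(sr_db, temps))           # r grows with belt width
```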
Periodic variations of Schumann resonance intensity and of the global temperature

The pattern of the seasonal variation of the surface temperature depends on the width of the latitude interval. It consists of two major components: one is the semiannual variation coupled to the tropics, and the other is the annual component, corresponding to a wider latitude band. There are both annual and semiannual components present in the long-term variations of Schumann resonance parameters. A strong semiannual variation was found in the frequency of the first resonance mode observed in the vertical electric field component E_Z, which clearly reflects the seasonal variations of the area covered by the global lightning activity (Nickolaenko et al., 1998). A semiannual variation was also found in the cumulative field intensity recorded in the late 60s in Japan (Nickolaenko et al., 1999). It is interesting to use a similar process for the modern Schumann resonance records and compare the results with the previous data and with concurrent variations in the global temperature. We apply the Principal Component Analysis (PCA), which is also regarded in the literature as singular spectral analysis (SSA), to extract periodic components from the Schumann resonance intensity (dB) and from the net global temperature in different latitudinal belts. The software is called the "Caterpillar" algorithm (Troyan and Hayakawa, 2003). We extract the unknown regular variations which are hidden in the record, and the PCA algorithm is an appropriate tool for this purpose (Danilov and Zhiglyavsky, 1997; Troyan and Hayakawa, 2003). From the physical point of view, the PCA algorithm is similar to filtering, although its main distinction is that the basic functions are found automatically right from the original data.

The algorithm works in the following way. Consider a finite time series of initial data x_k = x(t_k) with 1 ≤ k ≤ N. An integer number L < N is chosen, called the "caterpillar" length, and the linear set x_k is transformed into a 2-D matrix in the following way. The first L points of the x_k series (we use L = 12, corresponding to the annual period) occupy the first line of the matrix. The elements from x_2 to x_{L+1} are placed in the second line, etc. The process continues until we reach the end of the data. The last line of the matrix contains the elements with indices starting from N − L + 1, so that the last L samples are x_{N−L+1}, x_{N−L+2}, ..., x_N.

The eigenvalues and eigenvectors are found for the 2-D matrix in the second step of processing. These parameters depend on the signal structure and they allow us simultaneously to construct the "internal basis" relevant to the data: the so-called principal components. This stage of the procedure is equivalent to composing a bank of linear filters. Each filter corresponds to a particular principal component. We emphasize that the filters are found from the data itself rather than postulated beforehand.

In the third step, we visualize, survey, and select the desired principal components (PC) revealed in the previous step. The periodic principal components split into pairs: one of them resembles the "sine" wave, while the other is the "cosine" wave. We usually take PC #1 and PC #2 to obtain the complete annual variation of the lowest frequency.
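A compact sketch of this "Caterpillar" (SSA) decomposition is given below; the reconstruction step described in the next paragraph is included for completeness. This is a generic SSA implementation written for illustration, not the authors' software, and the 43-month record is synthetic.

```python
# Generic SSA ("Caterpillar") sketch: trajectory matrix, SVD, and diagonal
# averaging to reconstruct selected principal components. Illustrative only.
import numpy as np

def ssa_decompose(x, L=12):
    """Build the L x K trajectory matrix and return its SVD."""
    x = np.asarray(x, dtype=float)
    N = x.size
    K = N - L + 1
    traj = np.column_stack([x[i:i + L] for i in range(K)])   # Hankel (trajectory) matrix
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    return U, s, Vt

def ssa_reconstruct(U, s, Vt, components, N):
    """Sum selected rank-1 terms and average anti-diagonals back to a series."""
    L, K = U.shape[0], Vt.shape[1]
    traj = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    series = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            series[i + j] += traj[i, j]
            counts[i + j] += 1
    return series / counts

# Example: extract the annual pair (PC#1, PC#2) from a synthetic 43-month record
t = np.arange(43)
x = (2.0 * np.cos(2 * np.pi * t / 12) + 0.3 * np.cos(4 * np.pi * t / 12)
     + 0.1 * np.random.default_rng(2).normal(size=43))
U, s, Vt = ssa_decompose(x, L=12)
annual = ssa_reconstruct(U, s, Vt, components=[0, 1], N=43)
semiannual = ssa_reconstruct(U, s, Vt, components=[2, 3], N=43)
print(np.round(s[:4] ** 2 / np.sum(s ** 2), 3))   # share of variance per component
```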
The final step implies the signal reconstruction by selecting and combining the desired principal components. This step is similar to extracting a set of harmonics in the conventional Fourier transform. We must add that the PCA procedure turns into an ordinary Fourier transform when the initial succession x_k is a sinusoidal signal of infinite duration. In this case, the matrix constructs authentic sine and cosine basic functions, and the result coincides with the well-known procedures (Danilov and Zhiglyavsky, 1997).

To obtain both annual and semiannual components of the variations of the global temperature and the Schumann resonance intensity, we apply the PCA procedure with the length L = 12 (one year) and concentrate our attention on PC#1 and PC#2 (annual variation) and PC#3 and PC#4 (semiannual term). The annual component is present in both the temperature and the Schumann resonance intensity. A substantial semiannual variation is found only in the global temperature of the tropical belt.

The PCA processing also shows that the initial data contain only these two major components, plus insignificant random fluctuations, and we summarize the results in Table 1: columns 2 and 3 refer to the annual variation, and columns 4 and 5 correspond to the semiannual component. The numbers indicate the contributions of particular principal components to the energy of the initial variation. Table 1 indicates that the semiannual term is very small in the Schumann resonance intensity: its contribution is below 3%, while the annual variation is responsible for 95%. The semiannual contribution is considerably smaller than in our previous study (about 20%), based on the old resonance data collected in Japan (Nickolaenko et al., 1999). In contrast, Table 1 shows that the annual pattern in the land temperature is of minor importance within the ±20° latitude gap: its contribution is 17% against 68% from the semiannual trend.

We depict temporal variations of the major principal components in Fig. 6. Time in months is shown on the abscissa. The ordinate depicts the resonance intensity in dB (upper plots) and the temperature in centigrade (lower plots). By comparing Figs. 6 and 3, one may observe how the PCA processing "rectified" the variations. Figure 6 presents the annual trends (the left frames), semiannual terms (the second column of plots), the composition of annual and semiannual variations (the third plot), and the interannual variation (the right frames). As is shown in the lower plots of Fig. 6, the amplitude of the annual temperature variation increases when a wider latitude interval is analyzed. This is explained by an increase in the north-south asymmetry of the land area in wider latitude intervals. Variations in the ±40° and ±60° belts occur in phase, while the temperature pattern of the ±20° latitudinal belt leads in phase by approximately 3 months. The contribution from the polar latitudes to the annual variations of the global temperature is insignificant (not shown), as was expected. However, the middle latitudes (from 40 to 60 deg) are responsible for a substantial fraction of the annual variation of the global land temperature. The amplitude and phase of the semiannual component remain unchanged when we extend the interval of latitudes, as is seen in the second lower plot of Fig. 6. This means that only the tropical region contributes to this semiannual signal.

Plots of Fig. 6 enable us to observe the following important features:

1. Variations of electromagnetic intensity and land temperature on the annual scale are similar in the wide latitude belts.

2.
The beating mode is observed in the semiannual Schumann resonance data, which agrees with the published results.

3. Semiannual temperature variations coincide for all the belts, which means that this term originates from the tropical region.

4. General variations of the Schumann resonance intensity occur in a way similar to the temperature observed in the middle latitudes. In contrast, the interannual variations (see the right plots in Fig. 6) exhibit a similarity between the Schumann resonance data and the temperature trends of the tropical region. Unfortunately, the 3.5-year duration of the data is somewhat short to extract the interannual alterations with confidence and to make decisive conclusions; however, the similarity must be noted.

We may expect that the Lissajous figures will vary when we compare the electromagnetic energy with the land temperature of different latitude belts. We show in Fig. 7 the Lissajous patterns of the initial data (the upper line of plots), the annual trends (the middle plots), and the composition of annual and semiannual components (the lower plots). The Schumann resonance intensity is shown on the ordinate of each frame in dB. Variations of temperature are plotted on the abscissa. The left column of panels relates the variations of the Schumann resonance intensity with the temperature in the tropical ±20° region. The central plots correspond to a ±40° latitude interval, and the right column depicts the data for a ±60° belt.

The left frame in the middle row shows that the annual component of the temperature in ±20° latitudes is in advance by about three months with respect to the annual variations of the Schumann resonance intensity. The plot is practically a circle. Annual temperature variations in wider belts occur in phase with the electromagnetic data, and we observe the straight line in the central and right frames of the middle row. The lower plots depict the sums of the annual and semiannual components extracted from the records. The left plot reminds us of a typical Lissajous figure for two sinusoidal signals with the frequency ratio 2:1. The central and right plots are just the same in-phase variations of the middle row, but slightly "spoiled" by the "second harmonic". We processed the modern long-term records of the Schumann resonance, which have a duration comparable with the former data collected by Prof. K.
Sao's group at the Tottori observatory (Japan) in the 70s (Nickolaenko et al., 1999). The same software was applied to the new data set. The annual component of the Schumann resonance intensity in the modern records is in good correspondence with that found in the old records. The complete annual excursion of the Schumann resonance intensity is about 5 dB for the H_NS field observed in Japan. The characteristic variation extracted by the PCA algorithm is somewhat smaller, about ±2 dB. The semiannual component is smaller by an order of magnitude, and this is the main distinction of the new record from the previous data. We cannot indicate the reason for the deviation; however, the data of the 70s contained a strong temporal modulation of the semiannual pattern, the "beating mode". It might be that the recent records coincide with a period of small amplitude in the semiannual term, owing to some unknown natural factor. Variations of the Schumann resonance intensity occur in a way similar to the temperature within the middle-latitude interval. In contrast, the interannual variation of the Schumann resonance resembles that of the temperature in the tropics (see the right plots in Fig. 6). Unfortunately, the 3.5-year duration does not allow us to make decisive conclusions.

A comparison of the electromagnetic and temperature data indicated that there is a link between the annual variation of the Schumann resonance intensity and the global temperature. The cross-correlation coefficient reaches 0.05, 0.85, 0.92, and 0.95 when we extend the latitude intervals from ±20° to ±40°, ±60°, and ±80°, correspondingly.

The data presented in this paper allow us to formulate the following conclusions.

1. The Schumann resonance intensity of the recent records, 43 months long in duration, made in Japan, is characterized by a strong annual variation (about ±2 dB), while the semiannual trend is smaller by an order of magnitude.

2. Variations in the global land temperature are characterized by two seasonal patterns associated with different latitude intervals. The semiannual variation dominates in the tropics, and the annual trend prevails in the middle and high latitudes.

3. Intensity variations of the Schumann resonance oscillations correspond to alterations of the global land temperature in the mid-latitude interval of about ±45°.

Finally, we mention the areas of future work in the Schumann resonance band. Simultaneous observations at a few sites (including those positioned in the Southern Hemisphere) are desirable for a reduction of source proximity effects and for separating contributions from thunderstorms in different latitude zones. Measurements in the Southern Hemisphere are useful, since thunderstorms drift away from the observer in the Southern Hemisphere during boreal summer, and the source distance varies in anti-phase there. The combination of records performed in both hemispheres will compensate for the meridional drift of global thunderstorms, while the variations associated with the level of global thunderstorm activity itself are enhanced. Simultaneous observations at a series of stations allow for deducing the global distribution of thunderstorm activity (Shvets, 2001; Ando et al., 2005), which can also help to extract alterations in the thunderstorm activity and thus to improve the quality of the initial data.

Fig. 1. Correlation between the global temperature and the intensity of Schumann resonance oscillations (adopted from Williams, 1992).
Fig. 2. Monthly averaged diurnal variation of the cumulative energy of Schumann resonance. The color code indicates the absolute intensity (in pT²) and also in dB.

Fig. 3. Seasonal variations of electromagnetic and temperature data. Top and middle panels show the Schumann resonance intensity in pT² and in dB, correspondingly. Bottom panel presents the global mean ground temperature in symmetric latitudinal belts ±20°, ±40°, ±60°, ±80°.

Fig. 4. Cross-correlation coefficient as a function of the latitude interval, for the intensity measured in absolute values (line connected by crosses) and for the intensity measured in dB (line connected by circles). The error bar is also indicated (as the level of confidence of 0.95).

Fig. 5. Lissajous figures of the Schumann resonance intensity versus global temperature. Green markers denote spring, red is summer, yellow is autumn, and black shows winter.

Fig. 6. Principal components: annual, semiannual, annual + semiannual, and interannual. The upper plots refer to the variations of the Schumann resonance intensity, and the lower plots demonstrate the variations of the global soil temperature in different latitude intervals.

Fig. 7. Lissajous figures of global land temperature versus Schumann resonance intensity. The color of the plot indicates season (green: spring, red: summer, yellow: autumn, black: winter).
Table 1. Annual and semiannual components extracted from the data series by the PCA processing.
Infection by Rickettsia felis in Ctenocephalides felis felis Fleas from Northern Colombia

Background: Rickettsia felis is an emergent rickettsial agent whose main vector is Ctenocephalides felis, but ticks, mites and lice are also infected. We aimed to search for molecular evidence of Rickettsia spp. in fleas collected from dogs and wild rodents (Heteromys anomalus) from three villages of the Córdoba and Antioquia provinces (northern Colombia), where outbreaks of rickettsioses have occurred, and to discuss the possible role of fleas in regions that are endemic/enzootic for rickettsiae.

Methods: During 2010 and 2012, 649 Ctenocephalides felis felis and 24 Pulex irritans fleas were removed from dogs and wild rodents (Heteromys anomalus), respectively, in 3 locations from the Córdoba and Antioquia provinces (Colombia). These fleas were tested in pools for rickettsial infection by PCR, targeting the gltA, ompB, and ompA rickettsial genes.

Results: Almost 20% (30/153) of C. felis felis pools contained rickettsial DNA. The fragments of the ompB gene showed high identity values between the sequences from Necocli and Los Cordobas and the R. felis strain from Senegal (100% and 99.7%, respectively), and all were highly related by phylogenetic analyses. Rickettsial DNA was not detected in pools of P. irritans.

Conclusion: Our findings highlighted the endemicity of the infection by R. felis in fleas from northern Colombia and showed the likely importance of dogs as hosts of C. felis felis fleas and their potential role as reservoirs of R. felis.

Introduction

The genus Rickettsia comprises arthropod-associated intracellular and gram-negative bacteria. It is divided into 4 groups based on their genotypic characteristics: the spotted fever group (R. rickettsii, R. conorii, R. parkeri, and several others), the typhus group (R. prowazekii and R. typhi), the transitional group (R. felis, R. akari, and R. australis), and the nonpathogenic ancestral group (R. bellii and R. canadensis) (1). Rickettsia felis is globally distributed and is the etiological agent of flea-borne spotted fever. The main vector is the flea Ctenocephalides felis, but ticks, mites and lice have also been found infected (2). Rickettsia felis in C. felis populations is principally maintained by transstadial and transovarial transmission (3). In colonized C. felis fleas, vertical transmission of R. felis is thought to be the primary route of maintenance, since the reported prevalence of R. felis in C. felis colonies ranged from 43-100% (4-6). In nature, fleas feeding on R. felis-infected mammalian hosts likely amplify the prevalence of R. felis in a flea population. Studies on the ecology of R. felis identified a role for opossums in the transmission cycle (7-9). Furthermore, a role for companion animals, rodents, and, specifically, their fleas as the potential source of human exposure has been suggested (4,9,10). Infected fleas have been reported in many American countries, and human cases of spotted fever by R. felis have been recently described in the United States, Mexico, and Brazil (11). Clinical manifestations of flea-borne spotted fever are variable and similar to other rickettsial diseases. In Colombia, a transversal serological study was performed in seven municipalities of Caldas Province, and a human seroprevalence of 25.2% and 17.8% against R. typhi and R. felis, respectively, was found (12). Additionally, the infection by R. felis in C. felis, C. canis, and P. irritans fleas was reported in the province of Caldas (13).
Three important spotted fever group (SFG) rickettsiosis outbreaks occurred in Colombia, in the municipalities of Turbo and Necoclí (Antioquia Province) and Los Cordobas (Córdoba Province), between 2006 and 2008 (14,15). Consequently, these areas have been described as endemic for rickettsioses in this country. The purpose of this study was to search for molecular evidence of Rickettsia spp. in fleas collected from dogs and wild rodents (Heteromys anomalus) from the three zones where the outbreaks of rickettsioses occurred and to discuss the likely role of fleas in the epidemiology of Rickettsia spp. in this region of Colombia.

Study area and sampling

The study was conducted in 3 neighboring municipalities: Turbo (8°8.272´N, 76°33.009´W), located at 400 m above sea level (masl); Necocli (8°32.892´N, 76°34.429´W), at 182 masl, both located in the Antioquia Province; and Los Cordobas (8°50.195´N, 76°20.252´W), located at eight masl, in the Córdoba Province (Fig. 1). All of these municipalities are placed on the Colombian Atlantic Coast. These three sites comprise part of the natural Caribbean region and have a tropical humid climate characterized by a dry period from Jan to Mar and a rainy season from Apr to Dec, with an annual average temperature of 28 °C and relative humidity of 85% (Fig. 1).

During 2010 and 2012, a total of 649 fleas were removed from 92 dogs at all studied locations (194 from Turbo, 225 from Necocli and 230 from Los Cordobas), and 24 fleas were removed from three Heteromys anomalus rodents captured in Turbo and Necocli. They were obtained using tweezers or by combing wild and domestic animals, and care was taken to avoid damaging structures essential for taxonomic classification. The fleas collected from each animal were placed in one or more vials with 95% alcohol (depending on the number collected per animal) and were transported to the laboratory. Because the population of dogs in the study zones was unknown, a sample size was not determined. However, we were able to estimate the number of animals that lived with people at each site. Ethical, technical, scientific and administrative standards for research on animals were taken into consideration according to national regulations for the procedures of collection, management and conservation of samples (Resolution No. 008430 of 1993 and Law 84 of Dec 27th, 1989).

Molecular detection of Rickettsia spp.

Fleas were classified according to morphological keys (16-18). They were grouped into pools of at most 10 individuals, according to host and sampling site: 153 pools of C. felis felis were collected from dogs and six pools of Pulex irritans fleas were collected from rodents. DNA from the pools was extracted by using the QIAamp DNA Mini-Kit (Qiagen®, Valencia, CA, USA), according to the manufacturer's instructions. Samples were stored at -20 °C until they were used for PCR assays. Samples were tested by PCR assay with primers CS-78 (forward GCAAGTATCGGTGAGGATGTAAT) and CS-323 (reverse GCTTCCTTAAAATTCAATAAATCAGGAT), which amplify a 401-bp fragment of the citrate synthase gene (gltA), previously reported as appropriate for the screening of Rickettsia spp. (19).
Samples that came up positive for gltA were tested with the primers Rr190.70p (forward: 5´ATGGCGAATATTTCTCCAAAA) and Rr190.701 (reverse: 3´GTTCCGTTAATGGCAGCATCT), which amplify a 632-bp fragment of the ompA gene (20), and with the primers 120.M59 (forward 5´AAACAATAATCAAGGTACTGT) and 120.807 (reverse 3´TACTTCCGGTTACAGCAAAGT), which amplify an 812-bp fragment of the ompB gene, as previously described (21). Negative (molecular grade water) and positive controls (DNA of R. amblyommii) were included for each reaction. Positive products were purified by using a Quick Gel Extraction kit (PureLink™, Invitrogen) and were subsequently sequenced by a commercial facility (Macrogen). The sequences were assembled and edited with the Seqman program from the DNAstar package (Lasergene®, Madison WI, USA), and phylogenetic analysis was performed with the MEGA 6 (22) and MrBayes 3.2 programs (23).

Results

Of the 153 pools of C. felis felis (54 from Turbo, 65 from Necocli and 34 from Los Cordobas), rickettsial DNA was detected in 30 (19%) pools by the gltA gene. Four pools amplified for the ompB gene and none amplified for ompA. Pulex irritans pools were negative by PCR. The prevalence of Rickettsia in fleas, expressed as a percentage, and the minimum infection rate (MIR) of fleas were calculated. We made this assessment on the assumption that a PCR-positive pool contains only one positive specimen. The overall MIR of infected fleas was 4.45 (30/673). By site, the MIR of C. felis was 4.6% (9/194) in Turbo, 5.7% (13/225) in Necocli and 3.5% (8/230) in Los Cordobas (Table 1). Nucleotide sequences of the ompB gene from Necocli and Los Cordobas were 99.9% identical to each other (Fig. 2). The sequence homologies obtained from Necocli and Los Cordobas were 100% and 99.7%, respectively, with the R. felis strain Senegal. The evolutionary history of the gltA gene was inferred by using the Neighbor-Joining method (not shown), and the Bayesian method was used for the ompB gene (Fig. 2). The sequences generated in this study have been submitted to GenBank under the accessions KP870106 to KP870109.

Discussion

We report the infection by R. felis in C. felis felis fleas collected from dogs from areas of the Cordoba and Antioquia provinces, northern Colombia, that are endemic for rickettsioses. The values of the minimum infection rate (MIR) reported herein for C. felis are similar to the one previously reported in the province of Caldas, Colombia (5.3% MIR) (13). However, they are lower than the MIRs shown in other countries, such as Brazil (14.3%) (24), the United States (13.3%) (25) and Taiwan (8.2%) (26). The proportion of C. felis felis positive pools in our study was 19% (30/153), and the proportion obtained in Caldas (Colombia) was 41% (54/132) (13). The MIRs of these studies were calculated based on the assumption that only one flea from each positive pool was positive for the Rickettsia gene evaluated. This may underestimate the frequency of R. felis in pools, possibly because of the greater amount of DNA in pools or other contaminants that may inhibit PCR assays (27). Otherwise, in Brazil, differences in the percentage of infection between regions were related to the environmental and climatic conditions (28). Higher rates of R. felis infection in fleas were significantly related to regions with temperate climates, and lower rates were linked with dry climates. Several studies highlight the broad distribution of infection by R. felis in C. felis. Different proportions of infection have been reported in other American countries. For example, in Mexico, 20% of 54 pools of C.
felis collected from dogs were reported infected (29). Sixty-four percent (55/86) and 58% (47/81) of pools of C. felis removed from cats and dogs were infected in Guatemala and Costa Rica, respectively (30); and 41% of infected pools (25/62 C. felis and 2/4 C. canis) collected from 15 cats and dogs were reported in Uruguay (31). In our study, R. felis was detected in 30/153 (19%) C. felis pools removed from dogs, which is very similar to the Mexican report and suggests the likely relevance of this host in maintaining C. felis and possibly R. felis in the studied areas. Moreover, some studies have detected R. felis by PCR in the blood of dogs, suggesting that dogs may have the potential to act as an important reservoir of infection (32,33). In the present study, the sequences obtained from Necocli and Los Cordobas were identical to each other and showed extremely high sequence homology to a R. felis strain from Senegal (100% and 99.7%, respectively, Fig. 2). In the province of Caldas (Colombia), authors have described a high homology (>98%) between several R. felis sequences obtained from C. felis and R. felis URRWXCal2 (GenBank accession CP000053). Likewise, they showed a very close monophyletic relationship of these sequences with the R. akari group (13). Sequences of R. felis from Necoclí (KP870109) and Los Cordobas (KP870106) obtained in the present study were compared with the sequences obtained in Caldas (Colombia), called Colombia5 and Colombia7 (Fig. 2). Human infection with R. felis and its clinical implications have been controversial. This microorganism may be an emerging human pathogen; meanwhile, other authors consider that its casual appearance in human samples and vectors is a proof of endosymbiosis (11,35,36). Before we can determine whether human beings in this region of Colombia are at real risk of becoming ill with R. felis, further studies are necessary to establish the seroprevalence in humans and animals and to demonstrate its presence in other human cases compatible with rickettsiosis.

Conclusion

In the present study, we reported the infection by Rickettsia felis in C. felis felis fleas collected from dogs from areas of the Cordoba and Antioquia provinces (Colombia) that are endemic for rickettsioses. Almost 20% (30/153) of C. felis felis pools contained rickettsial DNA. Our findings highlighted the endemicity of the infection by R. felis in fleas from northern Colombia and suggest the importance of dogs as hosts of C. felis felis fleas and their potential as reservoirs of R. felis. Human infection with R. felis and its clinical implications have been controversial. Before we can determine whether human beings in this region of Colombia are at real risk of becoming ill with R. felis, further studies are necessary to establish the seroprevalence in humans and animals and to demonstrate its presence in other human cases compatible with rickettsiosis.
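As a small worked illustration of the minimum infection rate used in the Results (one infected flea assumed per PCR-positive pool), the sketch below recomputes the per-site and overall values; the helper function is ours and the counts are the ones quoted in the text.

```python
# Minimal sketch of the minimum infection rate (MIR) calculation: each
# PCR-positive pool is assumed to contain exactly one infected flea.
def minimum_infection_rate(positive_pools, total_fleas):
    """MIR expressed per 100 fleas examined."""
    return 100.0 * positive_pools / total_fleas

sites = {"Turbo": (9, 194), "Necocli": (13, 225), "Los Cordobas": (8, 230)}
for name, (pos, n) in sites.items():
    # compare with the 4.6, 5.7 and 3.5 reported in the text (rounding differs slightly)
    print(f"{name}: MIR = {minimum_infection_rate(pos, n):.1f}")
print(f"Overall: MIR = {minimum_infection_rate(30, 673):.2f}")   # the text reports 4.45
```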
Subthalamic nucleus deep brain stimulation alleviates oxidative stress via mitophagy in Parkinson’s disease Subthalamic nucleus deep brain stimulation (STN-DBS) has the potential to delay Parkinson’s disease (PD) progression. Whether oxidative stress participates in the neuroprotective effects of DBS and related signaling pathways remains unknown. To address this, we applied STN-DBS to mice and monkey models of PD and collected brain tissue to evaluate mitophagy, oxidative stress, and related pathway. To confirm findings in animal experiments, a cohort of PD patients was recruited and oxidative stress was evaluated in cerebrospinal fluid. When PD mice received STN stimulation, the mTOR pathway was suppressed, accompanied by elevated LC3 II expression, increased mitophagosomes, and a decrease in p62 expression. The increase in mitophagy and balance of mitochondrial fission/fusion dynamics in the substantia nigra caused a marked enhancement of the antioxidant enzymes superoxide dismutase and glutathione levels. Subsequently, fewer mitochondrial apoptogenic factors were released to the cytoplasm, which resulted in a suppression of caspase activation and reservation of dopaminergic neurons. While interfaced with an mTOR activator, oxidative stress was no longer regulated by STN-DBS, with no neuroprotective effect. Similar results to those found in the rodent experiments were obtained in monkeys treated with chronic STN stimulation. Moreover, antioxidant enzymes in PD patients were increased after the operation, however, there was no relation between changes in antioxidant enzymes and motor impairment. Collectively, our study found that STN-DBS was able to increase mitophagy via an mTOR-dependent pathway, and oxidative stress was suppressed due to removal of damaged mitochondria, which was attributed to the dopaminergic neuroprotection of STN-DBS in PD. 
Yingchuan Chen 1,2, Guanyu Zhu 1,2, Tianshuo Yuan 1,2, Ruoyu Ma 1,2, Xin Zhang 2,3, Fangang Meng 2,3, Anchao Yang 1,2, Tingting Du 2,3 & Jianguo Zhang 1,2,3

Parkinson's disease (PD), one of the most common neurodegenerative disorders, affects 2-3% of the world's population over the age of 65 1 . The cardinal motor symptoms of the disease are the result of the selective degeneration of dopaminergic neurons in the substantia nigra (SN) pars compacta and other brain areas 2 . This results in motor symptoms, which include resting tremors, bradykinesia, rigidity, and postural instability, and non-motor symptoms, such as dementia and depression 3 .

The long-term use of levodopa, the gold standard of such therapies, is associated with severe side effects several years after the 'honeymoon' 2 . Despite decades of research into finding a pharmacological cure, drugs for PD are merely symptomatic and do not halt the progressive loss of neurons 2 . Subthalamic nucleus deep brain stimulation (STN-DBS) has been identified as a very effective surgical therapy against the cardinal motor symptoms in patients with advanced PD 4 . Furthermore, early DBS reduces the need for and complexity of PD medications while providing long-term motor benefits over standard medical therapy and delaying disease progression 5-7 , which suggests that STN-DBS may have a neuroprotective effect 8 .
The pathogenic mechanism of PD is not fully understood; however, genetic factors and environmental exposures can contribute to its pathological progression, and the mechanisms include changes in dopamine metabolism, mitochondrial dysfunction, endoplasmic reticulum stress, impaired autophagy, oxidative stress, and immunity 9-12 . Among the mechanisms mentioned above, oxidative stress plays an indispensable role. The generation of reactive oxygen species (ROS) is a normal physiological process and is essential for the survival of aerobic organisms. Nevertheless, the excessive production of ROS results in oxidative stress, causing damage to biomolecules, such as DNA and proteins, and apoptosis. A previous study found that levels of superoxide dismutase (SOD), an important antioxidant enzyme, were lower in PD patients; furthermore, the SOD level is negatively correlated with the severity of PD symptoms 13 . The intrinsic apoptotic pathway can be triggered by oxidative stress via generalized and irreversible mitochondrial outer membrane permeabilization 14 . Additionally, CuxO nanoparticle clusters, an inorganic nanomaterial that functionally mimics the activities of peroxidase and SOD, further eliminated ROS and inhibited neurotoxicity in a model of PD 15 . Recent studies have found that damaged mitochondria undergo mitophagy to prevent the further spread of oxidative damage 16 . Deferiprone, an iron chelator, has undergone Phase II clinical trials, with observed improvements in motor function and quality of life without significant side effects. Iron chelators can reduce oxidative stress, and this may be attributed to the triggering of mitophagy 17 .

Due to the positive therapeutic effects of STN-DBS, many studies have focused on its mechanism. A recent study observed that DBS could inhibit or reverse the reduction in mitochondrial volume and numbers caused by PD 18 . Moreover, STN stimulation could rescue the loss of dopaminergic SN neurons in a PD model 8 . Our previous study found that STN-DBS could exert neuroprotective effects against 6-OHDA-induced cell injury in PD by inducing autophagy 19 . However, the mechanism behind STN-DBS remains unclear and, specifically, whether it enhances neuroprotection through mitophagy-mediated control of oxidative stress needs to be illustrated. Several important structures in the basal ganglia are implicated in PD progression, and these structures show major differences between non-human primates (NHP) and rodents 20 . Given that NHPs are more similar to humans, in addition to rodents, monkeys and human specimens were included in the present study. It was found that STN-DBS was able to increase mitophagy via an mTOR-dependent pathway, and oxidative stress was suppressed. Consequently, apoptogenic factors released from mitochondria were reduced, which was attributed to the dopaminergic neuroprotection of STN-DBS in the SN of PD brains.
Time point selection of STN-DBS

To select the suitable time point for the lead implantation in mice, we first evaluated the change in dopaminergic neuron loss over time in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) model. Motor impairment was measured with a rotarod test, which showed that the PD animals did not display an obvious reduction in time on the rod at one, two, and three weeks after the first MPTP injection. However, over time, a significant decrease was obtained. Additionally, a progressive decline was observed between four and five weeks. These results indicated that motor impairment could only be obtained four weeks after the MPTP injection, with a progressive impairment during weeks four to five (Fig. 1a) (n = 8). To further investigate the change in the dopaminergic neurons of the basal ganglia, the tyrosine hydroxylase (TH) levels in the striatum and SN were evaluated. We found that TH levels were significantly decreased at four, five, and six weeks after the injection, with a progressive decline between the four- and five-week time points (Fig. 1b-d) (n = 6). The TH+ neurons and neurites in the SN and striatum showed results similar to the above (Fig. 1e-g) (n = 6). Therefore, we performed STN-DBS four weeks after the first MPTP injection to mimic the clinical course of PD, with a stimulation duration of one week.

The accurate location of STN-DBS

The lead location in mice was verified by using Nissl staining. The results indicated that the lead correctly targeted the STN region in all included mice, and these animals were used in the subsequent experiments (thirteen mice were removed from the experiments due to misplaced electrodes) (Fig. 2a, b).

STN-DBS relieved motor impairment and loss of dopaminergic neurons in the PD model

As mentioned above, the MPTP rodent PD model exhibited a motor impairment. We found that STN-DBS could extend the time spent on the rods in the PD mouse model (Fig. 2c) (n = 8).

We assessed dopaminergic neuron loss in the different groups to investigate whether the reduction in motor performance was attributed to a change in dopaminergic neuron loss. The PD and PD-sham-DBS groups showed a severe reduction in TH+ cell number, but STN-DBS relieved this tendency. Nevertheless, TH+ neurites in the striatum were only increased slightly, without a significant P value (P = 0.1430 vs the PD group; P = 0.1263 vs the PD-sham-DBS group) (Fig. 2d-f) (n = 6). Additionally, the TH levels in the SN were evaluated via western blotting; the PD-DBS group showed an increase in the TH level in the SN by comparison with the PD and PD-sham-DBS groups (Fig. 2g, h) (n = 6). These results indicate that STN-DBS could reduce dopaminergic neuron loss in the SN in the PD rodent model.
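The excerpt reports group comparisons such as the rotarod latencies without naming the statistical test used. Purely as an illustration of how such a multi-group comparison might be run, the sketch below assumes a one-way ANOVA with uncorrected pairwise follow-up t-tests on synthetic latency data; group names and numbers are hypothetical.

```python
# Hypothetical sketch of a multi-group comparison like the rotarod analysis
# above. The paper does not state its test; this is an illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = {
    "control":     rng.normal(220, 20, size=8),   # synthetic seconds on the rod
    "PD":          rng.normal(120, 25, size=8),
    "PD-sham-DBS": rng.normal(125, 25, size=8),
    "PD-DBS":      rng.normal(180, 25, size=8),
}
f_stat, p_overall = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_overall:.3g}")
for name in ("PD", "PD-sham-DBS"):
    t, p = stats.ttest_ind(groups["PD-DBS"], groups[name])
    print(f"PD-DBS vs {name}: p = {p:.3g}")   # uncorrected; a real analysis would adjust
```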
The predominant effect of mTOR activation is to suppress mitophagy 16 .We found that phospho-mTOR (p-mTOR, Ser2448) and mTOR levels were both elevated in PD and PD-sham-DBS groups due to the MPTP injection.Rapamycin, also known as sirolimus, forms a complex with FK506-binding protein 12 and in this form inhibits the activity of mTOR 12 .In the present study, rapamycin could successfully decline the p-mTOR and mTOR expression in the PD model.Interestingly, the mTOR pathway was suppressed by STN-DBS in the PD mouse model, and this phenomenon disappeared when intervened with 3BDO (mTOR activator) (Fig. 3c-e) (n = 6).The results above suggested STN stimulation regulated the activation of the mTOR pathway in the PD mice, however, different types of cells exist in SN, and whether it involved dopaminergic neurons still needs to be further measured.Therefore, immunofluorescence (IF) staining was conducted to study the colocalization of p-mTOR and the dopaminergic neuron marker.It was found that p-mTOR expression was elevated in the dopaminergic neuron of PD, but decreased by STN-DBS, compared with the model without stimulation.With the injection of 3BDO, the mTOR activation in dopaminergic neurons was not further influenced by STN-DBS (Fig. 3h, i) (n = 6). Similar to the mice treated with STN-DBS, mice that received an injection of rapamycin also showed a better performance in the rotarod test (Fig. 2c) (n = 8), accompanied by preserved dopaminergic neurons (Fig. 2d-h) (n = 6).The therapeutic efficacy of STN-DBS on motor symptoms and neuroprotection of dopaminergic neurons dispersed when the mTOR activator was applied.One PD mice group was treated with both STN-DBS and 3BDO-mTOR activator and did not show any improvement in motor performance or preserved dopaminergic neurons, indicating a blocking effect on the STN-DBS (Fig. 2c, d-h) (n = 8; n = 6). The expressions of LC3 II, an autophagosome marker reflecting autophagy activity 12,19 , were reduced in the SN of the PD and PD-sham-DBS groups, whereas STN-DBS was found to reverse the tendency, a similar result was obtained in PD mice treated with rapamycin (Fig. 3c, f).SQSTM1/ p62, an autophagic adapter, is recruited to mitochondrial clusters and is essential for the clearance of mitochondria 21 .The increased p62 expression in PD models was reversed by STN stimulation, which was inhibited in the PD-DBS + 3BDO group (Fig. 3c, g) (n = 6).Mitophagy, the selective autophagy of mitochondria, needed to be further evaluated by IF and TEM.Significantly, the STN stimulation was able to increase the co-localization of TOMM20 (mitochondria marker) and LC3 in the TH + cells in the SN of the PD model, nevertheless, this could be interrupted by treating with the mTOR activator (Supplementary Fig. 1a, b) (n = 6).Additionally, the number of mitophagosomes was also measured by TEM.Fewer mitophagosomes were observed in the PD and PD-sham-DBS groups compared with the control group.Nevertheless, STN-DBS and rapamycin induced a marked increase in the number of mitophagosomes (Fig. 3j, k) (n = 6).These results indicated that STN-DBS could exert an improvement in the mitophagy of dopaminergic neurons via an mTOR-dependent pathway. 
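The mitophagy readout above rests on the co-localization of TOMM20 (mitochondria) and LC3 (autophagosome) signals within TH-positive cells. The paper does not specify its quantification pipeline; as one common way to score such overlap, the sketch below computes Manders' coefficients on thresholded two-channel images, using synthetic arrays in place of real micrographs.

```python
# Illustrative co-localization scoring (Manders' coefficients) for two
# fluorescence channels such as TOMM20 and LC3. Not the authors' pipeline.
import numpy as np

def manders_coefficients(ch1, ch2, thr1, thr2):
    """Fraction of each channel's intensity located where the other channel exceeds its threshold."""
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    m1 = ch1[ch2 > thr2].sum() / ch1.sum()   # e.g. TOMM20 signal overlapping LC3
    m2 = ch2[ch1 > thr1].sum() / ch2.sum()   # e.g. LC3 signal overlapping TOMM20
    return m1, m2

# Synthetic 64x64 "images": a shared punctum plus channel-specific background noise
rng = np.random.default_rng(4)
base = np.zeros((64, 64))
base[20:28, 30:38] = 1.0
tomm20 = 200 * base + rng.poisson(5, (64, 64))
lc3 = 150 * base + rng.poisson(5, (64, 64))
print(manders_coefficients(tomm20, lc3, thr1=50, thr2=50))
```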
STN-DBS alleviates oxidative stress and mitochondrial dysfunction via mitophagy

Mitophagy is essential for the elimination of damaged mitochondria and alleviates oxidative stress 16. Antioxidant levels were measured to reflect the degree of oxidative stress. Both SOD and glutathione (GSH) were down-regulated in the SN of PD mice, whereas rapamycin was able to rescue the decreased expression of these antioxidants. Meanwhile, STN-DBS exerted a rapamycin-like effect, increasing the expression of SOD and GSH in the SN compared with the mice without stimulation, indicating that STN-DBS played an antioxidant role in PD mice; this effect was disrupted by the mTOR activator (Fig. 4a, b) (n = 6).

Complex I activity can reflect mitochondrial function. Increased complex I activity was detected in PD mice that had received STN stimulation. Nevertheless, treating the mice with the mTOR activator suppressed the activity of complex I (Fig. 4c) (n = 6).

STN-DBS regulates mitochondrial homeostasis

Mitochondrial homeostasis is maintained by mitochondrial fission and fusion, which are mediated by Drp1 and Opa1, respectively 22. Opa1 protein expression in mitochondria was decreased in the PD and PD-sham-DBS groups, whereas elevated expression was detected in PD mice treated with STN stimulation, and this effect could be interrupted by 3BDO injection. The opposite trend was observed for Drp1 protein expression in SN mitochondria (Fig. 4d-f) (n = 6). These observations indicate that STN-DBS stabilizes mitochondrial homeostasis by up-regulating mitochondrial fusion and down-regulating mitochondrial fission.

Mitochondria-mediated apoptosis is suppressed by STN-DBS

As mentioned above, TEM was used to evaluate mitochondrial injury. PD mice injected with rapamycin showed alleviated mitochondrial injury, similar to that in PD mice that received STN stimulation, whereas serious injury was observed in the PD-DBS + 3BDO group. This phenomenon was scored and confirmed using the criteria of mitochondrial injury (Fig. 3a, b) (n = 6). Mitochondria-related apoptogenic factors, including apoptosis-inducing factor (AIF) and cytochrome c, can leak from damaged mitochondria 23. Significantly more AIF was detected in the cytoplasm of PD mice; however, this was reduced by STN stimulation or rapamycin injection, and the effect of STN stimulation on cytoplasmic AIF was inhibited by 3BDO. The trend of AIF expression in mitochondria among the different groups was opposite to that in the cytoplasm. Moreover, cytochrome c showed a similar trend in expression in both the cytoplasm and the mitochondria (Fig. 5a-e) (n = 6). Our results suggest that STN-DBS could reduce the leakage of AIF and cytochrome c from the mitochondria to the cytoplasm through an mTOR-dependent pathway. Cytochrome c release induces caspase activation, which in turn promotes cell apoptosis. Cleaved caspase-9 and -3 were up-regulated in PD mice, leading to dopaminergic neuron loss. STN stimulation and rapamycin suppressed this apoptosis-associated caspase activation, and this suppression was inhibited by the mTOR activator (Fig. 5a, f, g) (n = 6).

Chronic STN-DBS exerts an "antioxidant" role and stabilizes mitochondrial homeostasis in the PD monkey model

Given the differences in basal ganglia structure between NHPs and rodents, whether a much longer period of stimulation could exert a neuroprotective role via anti-oxidative stress effects still needed to be explored. Hence, we performed a monkey experiment (Supplementary Fig. 2a, b).
The accuracy of neurosurgical robot-assisted DBS implantation has been confirmed by our previous study 24. By comparison with the surgical plan via image fusion, the Euclidean errors were 1.15 ± 0.50 and 1.28 ± 0.49 mm in the PD-sham-DBS and PD-DBS groups, respectively, with no obvious difference between groups (Fig. 6a, b) (n = 6). Apomorphine (APO)-induced rotation was used to test motor impairment. After APO injection, rapid single-sided rotation was observed in the PD model. After treatment with STN-DBS for two months, monkeys showed a significantly reduced rotation number (Fig. 6c) (n = 6). Additionally, a score of zero was obtained in control animals on the hemiparkinsonism rating scale because no symptoms were observed. The monkeys in the PD and PD-sham-DBS groups achieved relatively high scores, with symptoms including fixed posture, reduced arm movements, etc. However, monkeys that received STN stimulation showed fewer tremors and more arm movement, which resulted in an obvious decrease in the score (Fig. 6d) (n = 6). Moreover, we examined whether the reduced rotation number and hemiparkinsonism rating score were accompanied by neuroprotection from STN-DBS, and observed that TH expression was up-regulated by STN stimulation, indicating dopaminergic neuroprotection by long-term STN stimulation in the PD monkey (Fig. 6e, f) (n = 3).

Besides the behavioral tests, we evaluated the mechanism at the molecular level by measuring the levels of antioxidant enzymes. We found that SOD expression in the SN was significantly increased by chronic STN stimulation compared with the PD monkeys (Fig. 6m) (n = 3). Similarly, GSH was down-regulated in the SN of the PD and PD-sham-DBS groups and was restored by STN-DBS (a significant increase vs the PD group; an elevated tendency vs the PD-sham-DBS group, P = 0.0835) (Fig. 6n) (n = 3). We also evaluated mitochondrial homeostasis in the SN. Similar to the mouse experiment, Opa1 expression in mitochondria was increased by STN-DBS, while Drp1 was down-regulated by this treatment, indicating that mitochondrial homeostasis was maintained by long-term STN stimulation (Fig. 6e, i, g). Less AIF and cytochrome c were detected in the cytoplasm of monkeys receiving STN stimulation, with a contrary tendency in mitochondria, which indicated that lower levels of these apoptogenic factors were released from mitochondria (Fig. 6e, g, h, k, l). Most of these results were similar to those of the mouse experiments, suggesting that long-term STN stimulation could alleviate oxidative stress, stabilize mitochondrial homeostasis, and play a neuroprotective role in the NHP.

STN-DBS suppressed oxidative stress in PD patients

Although the mechanism of STN-DBS had been confirmed not only in rodents but also in NHPs, whether the anti-oxidative stress effects could be observed in the clinic remained unknown. Therefore, we recruited PD patients treated with STN-DBS and collected cerebrospinal fluid (CSF) pre- and post-operation.

The lead positions in PD patients were confirmed with the MATLAB toolbox Lead-DBS (v2.1.8, https://www.lead-dbs.org) via fusion of preoperative magnetic resonance imaging (MRI) and postoperative computed tomography (CT) 25; the results showed that the leads were accurately implanted into the STN (Fig. 7a). We compared SOD and GSH pre-operation and six months post-operation. The SOD level was clearly increased after patients received the treatment; nevertheless, GSH expression did not seem to change over the same period (Fig. 7b, c) (n = 8).
We were curious whether the changes in oxidative stress were correlated with the alleviation of symptoms. In this cohort of patients, the Movement Disorder Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS III) scores were significantly reduced after the operation, suggesting therapeutic efficacy on motor symptoms (Fig. 7d, e) (n = 7). However, we did not observe a significant relationship between the change in antioxidant enzymes (SOD or GSH) and MDS-UPDRS III scores (all Ps > 0.05) (Fig. 7f, g) (n = 7). Moreover, we divided the MDS-UPDRS III into several dimensions to score the different symptoms of PD, according to a previous study 26. Still, no obvious correlation was observed between the change in levels of SOD or GSH and the alteration in tremor, rigidity, bradykinesia, and axial symptoms (all Ps > 0.05) (Supplementary Fig. 3a-h) (n = 7).

Discussion

STN-DBS is commonly indicated for PD and has become a standard of care for this neurodegenerative disease 27. However, the mechanism of this neuromodulation technique at the molecular level still needs to be further illustrated. Here, we showed that STN-DBS could play an "antioxidant" role in PD, which depends on an increase in mitophagy via an mTOR pathway. The "antioxidant" effect of STN-DBS reduced the apoptosis of dopaminergic neurons and exerted a neuroprotective effect, which inhibited disease progression (Fig. 8).

These findings were first confirmed in the rodent experiment with a short period of stimulation. However, results from a rodent experiment alone were not convincing enough. Several important structures in the basal ganglia are implicated in specific neural circuits and in the disease progression of PD 28,29, and these structures show major differences between NHPs and rodents. For instance, in rodents the dorsal striatum is called the neostriatum and can be divided into a dorsomedial and a dorsolateral part, while in monkeys and humans it is divided into the caudate nucleus and putamen. In parallel, in both humans and NHPs, the internal and external segments of the globus pallidus are structurally divided by the internal lamina and lie close to each other; rodents lack such structural separation 30. Additionally, only short-term stimulation was delivered in the rodent experiment; whether long-term stimulation could exert neuroprotection, and through a similar mechanism, remained unknown. A previous study also addressed the limitations of rodents in DBS research 31. Therefore, we addressed these issues and confirmed our findings using tissue from NHPs and CSF from patients after long-term stimulation.

Even though the majority of STN fibers terminate at the level of the reticulate part, a few fibers ascend along the dopaminergic cell columns of the SN pars compacta (SNc), thereby exerting an influence on both dopaminergic and non-dopaminergic cells 32. Cortical information received at the input stations is transmitted to the output nuclei via three distinct pathways: (1) direct pathway: striatal neurons expressing substance P receive cortical inputs and project directly to the internal globus pallidus (GPi)/SN pars reticulata (SNr); (2) indirect pathway: striatal neurons expressing enkephalin receive cortical inputs and transmit to the GPi/SNr via a polysynaptic route involving the external globus pallidus (GPe) and STN; (3) hyperdirect pathway: STN neurons receive direct cortical inputs and project directly to the GPi/SNr 33. Therefore, the function of the SN may be influenced by modulating the STN.

DBS was approved by the Food and Drug Administration in 2002 as one of the therapies for PD. Randomized controlled clinical trials of STN-DBS have been performed and showed a marked reduction in motor severity and an increase in the quality of daily life 34. DBS has an immediate effect on the firing rate and pattern of individual neurons and on neurotransmitters in the basal ganglia 35. The major unmet need in the management of PD is to slow down the progression of the disease and to reduce or prevent key disability milestones; however, no medication has been confirmed to inhibit the progression. Fortunately, studies involving patients with early-stage PD treated with STN-DBS and followed long term could give important insights into the neuroprotective potential of DBS 7,36,37. Moreover, in neuroimaging observations in PD patients, STN-DBS showed a specific pattern of changes in the motor circuit, including increased ligand uptake in the basal ganglia 38. Our previous study showed that rats treated with STN-DBS had increased survival of dopaminergic neurons in the SN 19. Several studies, including our own, have tried to investigate how STN-DBS exerts a neuroprotective effect. Interestingly, only high-frequency stimulation delivered to the STN could increase the fractional anisotropy value of the SN and exert a neuroprotective effect 39.

Fischer et al. found that STN-DBS activates BDNF-trkB signaling, accompanied by phosphorylation of Akt in SN neurons 40. Moreover, STN stimulation has been found to protect against α-synuclein-induced dopaminergic neuron loss 8. Our former findings indicated that the neuroprotective effect was attributable to activated autophagy 19. Another study of ours showed a reduction of inflammatory mediators in microglia via modulation of CX3CL1/CX3CR1 and ERK signaling, which finally alleviates the apoptosis of SN neurons 41. Some studies found that neuroprotection was achieved by influencing mitophagy. For instance, celastrol has been confirmed to exert neuroprotection in PD by activating mitophagy to degrade impaired mitochondria and further inhibit dopaminergic neuronal apoptosis, and Artemisia leaf extract can exert neuroprotective effects by stimulating TRPML1 and rescuing neuronal cells by boosting autophagy/mitophagy and up-regulating a survival pathway 43. A post-mortem study indicated that STN-DBS inhibited or reversed the reduction in mitochondrial volume and numbers caused by PD 18. Similarly, in the present study, our results also indicated that STN-DBS could reduce mitochondrial injury. Our former study revealed that STN-DBS can increase autophagy in the SN 19.
Nevertheless, some issues remained unknown, for instance, whether mitophagy was influenced and through which pathway. PD patients display decreased activity of complex I of the respiratory chain in the SN, along with mitochondrial damage and mitochondrial DNA deletions 14. We showed that STN stimulation preserved complex I activity in PD, which convinced us that mitochondrial function is preserved by this treatment. Mitophagy, a key process in keeping the cell healthy, promotes the turnover of mitochondria and prevents the accumulation of dysfunctional mitochondria, which can lead to cellular degeneration. Therapies enhancing mitophagy and promoting the removal of damaged mitochondria are therefore of interest for the disease management of PD. Small-molecule activators of PINK1 and parkin, or inhibitors of mTOR, USP30, and pSer65-Ub phosphatases, are promising therapeutic targets for enhancing mitophagy in PD, and several of them have been found efficacious 44. In particular, mTOR-dependent mitophagy has been investigated widely. Consistent with other studies, we observed that MPTP injection could elevate the expression of mTOR 45.

In some studies, both mTOR and p-mTOR were affected, whereas in other studies only p-mTOR was affected 46,47. We found that STN-DBS could suppress both mTOR and p-mTOR levels, suggesting that both the phosphorylation and the expression of mTOR were affected, which resulted in inhibition of the mTOR pathway. Meanwhile, an elevation of LC3-II with a decrease in p62 was observed in PD models treated with STN stimulation. Combining these findings with the results of TEM and IF staining, we believe that STN-DBS enhanced mitophagy in dopaminergic neurons in an mTOR-dependent manner.

Mitochondria are highly dynamic, undergoing fusion and fission ceaselessly, and damaged mitochondria are removed by mitophagy. Before fission, damaged DNA and proteins are segregated to one side of the mitochondrion, which is targeted for mitophagy, while the other daughter mitochondrion remains pristine 48. Mitochondrial homeostasis, including the fusion and fission cycles, is affected in PD and could be mediated by a set of energy-sensing factors (e.g., mTOR, AMPK, sirtuins) 49. In our study, a decrease in Drp1 and an increase in Opa1 were observed in PD models with stimulation of the STN, showing that mitochondrial fission and fusion were suppressed and elevated, respectively, compared with PD without stimulation. In other words, mitochondrial homeostasis was stabilized by STN-DBS. Some studies have focused on the effect of genetic variability in PD patients with DBS. A comparative study claimed that LRRK2 G2019S patients have greater improvement following STN-DBS surgery than idiopathic patients 50. Twelve months postoperatively, patients with parkin mutations had a significantly lower levodopa-equivalent daily dose than mutation-negative patients 51, and patients with parkin mutations could be suitable for early surgery 52. We found that STN-DBS could increase mitophagy in dopaminergic neurons in PD, which may help explain why patients with mitophagy-related mutations show positive outcomes to some degree.
Oxidative stress refers to "an imbalance between the generation of oxidants and their elimination systems, e.g., antioxidants in favor of oxidants, leading to disruption of redox signaling and control and/or molecular damage" 53. Mitochondria are a major source of ROS as a by-product of electron transport chain activity 49. Impaired mitochondrial function results in a reduction in cellular energy and excessive ROS production in neurons, which in turn exacerbates mitochondrial damage 14. Several markers of oxidative stress are elevated in the SN, CSF, and blood of PD patients, including oxidized DNA bases, coenzymes, and lipids, with a concomitant reduction of antioxidant molecules 54. Overexpression studies of α-synuclein indicate that it interacts with several outer mitochondrial membrane components, including TOMM20 and VDAC, causing opening of the mitochondrial permeability transition pore and oxidative stress 44,55,56. In fact, dopaminergic neurons are extremely susceptible to oxidative stress, because numerous oxidants are produced when dopamine is released from synaptic vesicles into the synaptic cleft or the cytosol 14. Antioxidants may act as scavengers of oxidants to maintain the biological redox steady state. Vitamin C and SOD, major antioxidants, are significantly lower in patients with severe PD 57. The LRRK2 G2019S mutation can reduce mitophagy in PD, which can promote ROS-induced dopaminergic neuronal death, and the application of truncated LRRK2 reverses ROS accumulation and prevents neuronal injury 58. We observed that antioxidants, including SOD and GSH, were elevated by stimulation of the STN, suggesting that oxidative stress was relieved. However, this phenomenon disappeared when the mTOR activator was applied, which demonstrates that the "antioxidant" role of STN-DBS relied on the mTOR signaling pathway.

Once permeabilization of the mitochondrial outer membrane is increased, several mitochondrial apoptogenic factors, including cytochrome c, the second mitochondrial activator of caspases, and AIF, are released into the cytosol. In the cytosol, a multimeric structure composed of cytochrome c, apoptotic peptidase activating factor 1, and procaspase-9 is formed, which activates caspase-9 and then caspase-3, and this process finally results in apoptosis 14,54,59. Researchers have tried to develop novel therapies that could exert an antioxidative effect. For instance, CuxO nanoparticle clusters functionally mimic the activities of peroxidase, SOD, catalase, and glutathione peroxidase, which inhibits neurotoxicity and rescues memory loss in a PD model 15. In the present study, oxidative stress was suppressed by STN-DBS, with elevated levels of antioxidants, and the mitochondrial apoptogenic factors AIF and cytochrome c were released less from damaged mitochondria because mitochondrial function and structure were preserved, which resulted in lower expression of cleaved caspase-3 and -9. These findings convinced us that STN-DBS suppresses the release of mitochondrial apoptogenic factors, which reduces cell apoptosis and exerts a neuroprotective effect. ROS bursts also act as a pro-inflammatory stimulus via activation of nuclear factor κB 60.
Hou et al. found that inhibition of NADPH oxidase with apocynin alleviated impairments of learning and memory through suppression of oxidative stress and neuroinflammation in PD 61. Our former study showed that STN-DBS could exert an anti-neuroinflammatory effect via a nuclear factor κB-dependent pathway 41. It is therefore hypothesized that the antioxidative effect of STN-DBS may also reduce neuroinflammation.

In the present study, we only investigated the effect of a single stimulation frequency (130 Hz) on motor performance and oxidative stress. Some researchers found that a higher stimulation frequency (>130 Hz) may produce a higher number of rotations in rats 62. The stimulation frequency (130 Hz) in this study was selected based on former studies [63][64][65]. In addition, Isaac R. Cassare et al. found that sporadic spike failure at high frequencies limits the efficacy of the informational lesion, yielding a parabolic profile with optimal effects at 130 Hz. We hope that the effects of higher (>130 Hz) and lower (<100 Hz) stimulation frequencies on motor symptoms and oxidative stress will be investigated in future studies.

In conclusion, our study demonstrated that STN-DBS was able to increase mitophagy in dopaminergic neurons via an mTOR-dependent pathway, and oxidative stress was suppressed owing to the removal of damaged mitochondria. Consequently, the apoptogenic factors released from mitochondria were reduced, which ultimately contributed to the dopaminergic neuroprotection of STN-DBS in the SN in PD. These findings were confirmed in a variety of experiments in rodents, NHPs, and humans. Although an antioxidative effect was observed in PD patients with STN-DBS, a relationship between antioxidant levels and the therapeutic outcome was not found. Our study further contributes to understanding the mechanism of STN-DBS in controlling symptoms and inhibiting the disease progression of PD.

Methods

Animals, participants, and ethics

Adult (n = 379, 10 weeks old) male C57BL/6J mice and rhesus monkeys (n = 24, 6-9 years old) were provided by Beijing HFK Bioscience Co. Ltd. (Beijing, China) and the Laboratory Animal Center of the Military Medical Science Academy of China (Beijing, China), respectively. This study was approved by the Ethics Committee of Beijing Neurosurgical Institute (Process No. 202101017; 201704005) and was consistent with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. To evaluate motor impairment and dopaminergic neuron loss in the PD model and to select the lead implantation time point, mice were assigned to control, PD-one (PD-1w), -two (PD-2w), -three (PD-3w), -four (PD-4w), -five (PD-5w), and -six (PD-6w) week groups. To investigate the effect of STN-DBS on mitophagy-mediated oxidative stress, mice were divided into control, PD, PD-sham-DBS, PD-DBS, PD+Rap, and PD-DBS + 3BDO groups. The monkeys were assigned to control, PD, PD-sham-DBS, and PD-DBS groups.

Eight patients (six females and two males; aged 65.9 ± 6.0 years) with PD who were scheduled to undergo DBS surgery at Beijing Tiantan Hospital were recruited prospectively from 2019 to 2021, after approval by the institutional review board of Beijing Tiantan Hospital (KY 2018-008-01) 66. All patients provided written informed consent.

PD model establishment

The PD model was established using MPTP.
For the mouse PD model, MPTP hydrochloride (25 mg/kg in saline, subcutaneous; Sigma-Aldrich, St. Louis, MO, USA) and probenecid (250 mg/kg in Tris-HCl buffer; Sigma-Aldrich) were administered over 4 weeks at 3.5-day intervals 67. Animals in the control group received normal saline injections of the same volume. The mice in the PD+Rap and PD-DBS + 3BDO groups were injected with rapamycin (7.5 mg/kg/day for 7 days; Selleck Chemicals, Houston, TX, USA) and 3BDO (80 mg/kg/day for 7 days; Selleck Chemicals), respectively, four weeks after the first MPTP injection. For the monkey PD model, all animals were given general anesthesia with intramuscular injections of Zoletil (5 mg/kg; Virbac, Alpes-Maritimes, France) and Dexdomitor (20 μg/kg; Zoetis, NJ, USA) before being fixed in a supine position on the bed of a digital subtraction angiography (DSA) unit. The left femoral artery was punctured using the Seldinger method. The left internal carotid artery was catheterized, and 20 ml of saline containing MPTP (0.4 mg/kg) was infused at a constant rate over a 20-min period 68,69. The monkeys in the control group received a normal saline injection of the same volume.

STN-DBS implantation

Four weeks after the first MPTP injection, mice in the PD-sham-DBS, PD-DBS, and PD-DBS + 3BDO groups were anesthetized (isoflurane, inhalation; RWD Life Science, Shenzhen, China) and prepared for STN-DBS implantation. A concentric bipolar stimulation electrode was implanted in the left STN (AP = −2.06 mm, ML = +1.5 mm, DV = −4.5 mm) in these groups 70. The electrodes of the PD-DBS and PD-DBS + 3BDO groups were connected to a stimulator (Master-8 Programmable Stimulator; AMPI, Jerusalem, Israel) that delivered electrical pulses (frequency = 130 Hz, pulse width = 90 μs, intensity = 100 μA) for one week, whereas the electrodes in the PD-sham-DBS group were not connected to the stimulator. The other groups of animals did not receive electrode implantation. After one week of stimulation, all mice were sacrificed. The SN (only the ipsilateral SN; owing to technical difficulties, the SNpc and SNr regions were collected together) was collected from some mice and stored at −80 °C, and the remaining mice were perfused with normal saline followed by 4% paraformaldehyde in 0.1 mol/L phosphate-buffered saline (PBS).

For NHP STN-DBS implantation, details were described in our previous study 24. Briefly, the monkeys in the PD-sham-DBS and PD-DBS groups were anesthetized and underwent MRI (including 3D T1- and T2-weighted imaging and magnetic resonance angiography) with a 3-Tesla MRI scanner (SIGNA; GE Healthcare, Waukesha, WI, USA). Six weeks after the first MPTP injection, DBS leads (L301; Beijing PINS Medical Co. Ltd., Beijing, China) were used to target the left STN, according to the individual MRI and the atlas of the rhesus monkey brain 71. Electrode implantation was conducted with a neurosurgical robotic system; the accuracy of this new method was confirmed by our previous studies 24,72. An extension was tunneled subcutaneously from the neck to the abdomen, where the implantable pulse generator (IPG, G102; Beijing PINS Medical Co. Ltd.) was located. Surgical complications and the accuracy of lead placement were assessed via postoperative CT. Two weeks later, in the monkeys of the PD-DBS group, CT (fused with MRI) was performed to select the optimal contact within the STN, and electrical pulses (1.5 V, 90 μs, 130 Hz) were delivered through the selected contact (contact−, IPG+). The monkey tissues (SN: SNpc + SNr) were collected two months after stimulation.
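To make the lead-placement accuracy analysis referred to above more concrete (planned versus actual lead coordinates summarized as Euclidean errors and compared between the PD-sham-DBS and PD-DBS groups with a two-sample t-test, as noted in the statistical analysis section), a minimal sketch is given below. The coordinates, group sizes, and random seeds are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy import stats

def euclidean_error(planned, actual):
    """Per-lead Euclidean distance (mm) between planned and actual tip coordinates."""
    planned = np.asarray(planned, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.linalg.norm(actual - planned, axis=1)

# Illustrative (x, y, z) coordinates in mm; replace with measured values.
rng = np.random.default_rng(0)
planned_sham = rng.normal(0.0, 1.0, size=(6, 3))
actual_sham = planned_sham + rng.normal(0.0, 0.7, size=(6, 3))
planned_dbs = rng.normal(0.0, 1.0, size=(6, 3))
actual_dbs = planned_dbs + rng.normal(0.0, 0.7, size=(6, 3))

err_sham = euclidean_error(planned_sham, actual_sham)
err_dbs = euclidean_error(planned_dbs, actual_dbs)

# Two-sample t-test comparing placement accuracy between the two groups.
t, p = stats.ttest_ind(err_sham, err_dbs)
print(f"PD-sham-DBS: {err_sham.mean():.2f} ± {err_sham.std(ddof=1):.2f} mm")
print(f"PD-DBS:      {err_dbs.mean():.2f} ± {err_dbs.std(ddof=1):.2f} mm")
print(f"t = {t:.2f}, p = {p:.3f}")
```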
The patients with PD underwent bilateral STN-DBS through surgical procedures that were described in our previous studies 66,73. Before surgery, the patients underwent preoperative MRI and CT scans. Bilateral STN implantations were performed using a Leksell G frame system (Elekta Instrument AB, Stockholm, Sweden) under the guidance of the preoperative images. Micro-electrode recording as well as macro-stimulation was applied during surgery. The quadripolar DBS electrodes (Model L301; PINS Medical Co. Ltd., Beijing, China) were implanted, fixed, and connected to the IPG. Beginning 4-5 weeks after surgery, the patients returned to the hospital and received regular programming to achieve a satisfactory clinical outcome. The CSF of patients was collected preoperatively and 6 months after surgery and centrifuged at 4000 × g for 10 min at 4 °C.

Behavior test

The rotarod test was used to evaluate motor deficits in the mice. Animals were pre-trained on an automated four-lane rotarod (Panlab, Harvard Apparatus, Barcelona, Spain) with a 3-cm diameter rod and an acceleration of 4 to 40 rpm over a period of 5 min, prior to MPTP injection. The length of time that each animal was able to stay on the rod was recorded as the latency to fall, which was automatically registered by a trip switch under the floor of the rotating drum. To evaluate motor impairment in the monkeys, the monkey hemiparkinsonism rating scale was used according to a previous study (Supplementary Table 1) 69. Furthermore, contralateral rotation was measured following subcutaneous injection of APO (0.3 mg/kg) and recorded for 5 min, with a higher rotation number indicating more severe motor impairment. The motor symptoms of PD patients were measured pre-operation and 6 months post-operation via the MDS-UPDRS III.

Nissl staining

To evaluate the lead position in mouse brains, frozen serial coronal sections (20 μm) of the brain containing the STN were cut and subjected to Nissl staining as previously described 8. Briefly, slides containing the sections were sequentially immersed for 5 min in xylene followed by 100%, 95%, and 70% ethanol, then dipped in distilled water and stained with 0.5% cresyl violet solution for 15-30 min. After rinsing in water for 3-5 min and dehydrating in 70%, 95%, and 100% ethanol, the slides were placed in xylene for 10 min and the sections were covered with a coverslip.

Enzyme-linked immunosorbent assay (ELISA)

Levels of SOD (Cu/Zn cytosolic and Mn mitochondrial SOD), GSH, and complex I activity in brain tissue and CSF were evaluated using ELISA according to the manufacturer's instructions (SES134, CEA294Ge, Uscn Life Science Inc., Wuhan, China; ab136809, Abcam). Briefly, tissue homogenates were centrifuged for 5 min at 10,000 × g, and the supernatants were collected. Standards and diluted samples were added to separate wells of the reaction plates. After washing, a developing solution (provided with the ELISA kit) was added and the reaction was terminated with a stop solution. The optical density (OD) at 450 nm was measured with a microplate reader.
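As a rough illustration of how 450-nm OD readings from a plate reader can be converted to analyte concentrations, the sketch below fits a four-parameter logistic standard curve and inverts it for background-corrected sample ODs. The standard concentrations, OD values, and the choice of a 4PL model are assumptions made for illustration only; the kits used here may specify a different curve model, so this is not the kit protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic: OD as a function of concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def invert_four_pl(od, a, b, c, d):
    """Solve the 4PL for concentration given a background-corrected OD."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

# Placeholder standard curve (concentration units follow the kit insert).
std_conc = np.array([0.31, 0.62, 1.25, 2.5, 5.0, 10.0, 20.0])
std_od = np.array([0.08, 0.15, 0.27, 0.48, 0.83, 1.35, 1.95])

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.2, -1.2, 5.0, 0.02], maxfev=10000)

sample_od = np.array([0.42, 0.95, 1.10])   # background-corrected sample ODs
sample_conc = invert_four_pl(sample_od, *popt)
print(np.round(sample_conc, 2))
```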
IF staining and immunohistochemistry

For immunohistochemistry staining, the brain was removed and dehydrated in 20% and 30% sucrose solutions, then cut into sections at a thickness of 40 μm, rinsed in PBS, and incubated in 3% H2O2 to quench endogenous peroxidase activity. After washing with PBS, sections were incubated in 10% goat serum followed by 0.3% Triton X-100 in PBS, and subsequently overnight at 4 °C with anti-TH antibody (T2928, Sigma-Aldrich, 1:2000) in a humidified chamber. Immunoreactivity was detected with a biotinylated secondary antibody (Zhongshan Golden Bridge Biotechnology Co., Beijing, China) and diaminobenzidine.

The number of TH+ neurons in each section was counted via unbiased stereology with Stereo Investigator software (MBF Biosciences, Williston, VT, USA). Consecutive sections from each brain (every sixth section, 40 μm) were selected throughout the entire rostrocaudal extent of the SNc for examination. Using the optical fractionator principle, the SN was outlined on each section at 5× magnification and TH+ neurons on the left side were counted at 20× magnification under a bright-field microscope (Olympus, Tokyo, Japan). The weighted section thickness was used to correct for variations in tissue thickness at different sites.

The OD of TH+ neurites in the striatum (whole striatum) was determined using ImageJ software. The left striatum was selected as the measurement area. The OD of this area was measured and corrected by subtracting the non-specific background signal, and the average OD (OD/area) value was calculated.

TEM

TEM was performed according to a previous study 74. The SN was washed in 0.1 M PBS and stored in 2.5% glutaraldehyde in 0.1 M PBS until processed. The slices were washed in 0.1 M PBS, post-fixed in 1% osmium tetroxide in 0.1 M PBS for 2 h, and washed again in 0.1 M PBS. Ultrathin sections were examined using an electron microscope (JEM-2100; JEOL, Tokyo, Japan); ten random cells and visual fields from each sample were observed, scored according to the criteria, and recorded as described previously (Supplementary Table 2) 74. When the condition was considered to fall between two scores, an increment of 0.5 points was added to the score. If more than one neuron or mitochondrion was observed in one field, the average grade was recorded.

Statistical analysis

Data are expressed as means ± standard deviations (SD). One-way ANOVA followed by Tukey post-hoc correction was used to analyze the statistical significance of differences among multiple groups. Two-way ANOVA was used to analyze the rotarod test at different time points and across groups, followed by Tukey post-hoc correction for multiple comparisons. A two-sample t-test was used to compare the Euclidean errors. A paired t-test was used to evaluate the changes in PD patients over different time points. The Pearson correlation test was used to investigate the correlation between changes in antioxidant enzymes and symptoms. Data were analyzed and plotted using GraphPad Prism version 9.5 (GraphPad Software, La Jolla, CA, USA). A P < 0.05 was considered significant.
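A minimal sketch of the statistical workflow described above (one-way ANOVA with Tukey post-hoc comparisons across the six mouse groups, a paired t-test for pre- versus post-operative patient measures, and a Pearson correlation between changes in antioxidant levels and scores) is shown below using SciPy and statsmodels. The paper used GraphPad Prism; all numbers here are synthetic placeholders.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)

# One-way ANOVA with Tukey post-hoc across the six mouse groups (synthetic values).
groups = ["control", "PD", "PD-sham-DBS", "PD-DBS", "PD+Rap", "PD-DBS+3BDO"]
values = {g: rng.normal(loc, 0.1, size=6)
          for g, loc in zip(groups, [1.0, 0.4, 0.4, 0.8, 0.8, 0.45])}
F, p = stats.f_oneway(*values.values())
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.2g}")
print(pairwise_tukeyhsd(np.concatenate(list(values.values())),
                        np.repeat(groups, 6), alpha=0.05))

# Paired t-test: patient CSF SOD pre- vs. 6 months post-operation (synthetic).
pre, post = rng.normal(1.0, 0.2, 8), rng.normal(1.3, 0.2, 8)
t, p = stats.ttest_rel(pre, post)
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")

# Pearson correlation: change in SOD vs. change in MDS-UPDRS III (synthetic).
d_sod, d_updrs = post - pre, rng.normal(-15, 5, 8)
r, p = stats.pearsonr(d_sod, d_updrs)
print(f"Pearson: r = {r:.2f}, p = {p:.3f}")
```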
Fig. 1 | Time point selection of STN-DBS implantation. a The rotarod test of PD and normal animals at different time points. An obvious motor impairment was observed four weeks after model establishment, with progressive impairment at four to five weeks. ****P < 0.0001, PD vs control group; #P < 0.05, comparison between time points within the PD group (n = 8 per time point per group; F_Time(5,84) = 4.210, P = 0.0018; F_Group(1,84) = 112.6, P < 0.0001; two-way ANOVA followed by a Tukey post-hoc correction). b Western blot analysis of TH expression at different time points. c, d TH levels were significantly decreased four weeks after model establishment, with a progressive decline (n = 6 per group; striatum: F(3,20) = 143.6, P < 0.0001; SN: F(3,20) = 17.84, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). e Immunohistochemistry staining of TH at different time points. f, g TH+ neurons and neurites in the SN and striatum were significantly reduced four weeks after model establishment, and a progressive decline was found between four and five weeks after model establishment (n = 6 per group; TH+ neurons: F(3,20) = 79.56, P < 0.0001; TH+ neurites: F(3,20) = 165.2, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). *P < 0.05.

Fig. 2 | The neuroprotection of STN-DBS in a PD mouse model. a Experimental design of the mouse STN-DBS. b Nissl staining of lead implantation. The lead accurately targeted the STN region. The red area indicates the STN region of the mouse brain in the atlas 70. c The rotarod test of different groups. STN-DBS and rapamycin extended the time on the rod of the PD mouse model; however, 3BDO inhibited this effect (n = 8 per group; F(5,42) = 15.08, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). d Immunohistochemistry staining of TH in different groups. e, f STN-DBS, as well as rapamycin, relieved the tendency of reduction of TH+ cells in the SN. Only rapamycin significantly elevated the TH+ neurites in the striatum, and STN-DBS merely showed a tendency to increase them (n = 6 per group; TH+ neurons: F(5,30) = 197.3, P < 0.0001; TH+ neurites: F(5,30) = 69.51, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). g Western blot analysis of TH in the SN of different groups. h The PD-DBS and PD+Rap groups showed an increase in TH level in the SN by comparison with the PD and PD-sham-DBS groups; however, 3BDO obstructed the preservation of TH by STN-DBS (n = 6 per group; F(5,30) = 27.7, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). I: control group; II: PD group; III: PD-sham-DBS group; IV: PD-DBS group; V: PD+Rap group; VI: PD-DBS + 3BDO group. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001. Error bars: standard deviation of the mean. STN-DBS subthalamic nuclei deep brain stimulation, PD Parkinson's disease, SN substantia nigra, TH tyrosine hydroxylase, MPTP 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine, ns not significant.

Fig. 3 | In contrast, mitochondria in the PD-DBS and PD+Rap groups showed less injury, which disappeared after treatment with 3BDO (n = 6 per group; F(5,30) = 20.9, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). c Western blot analysis of p-mTOR, mTOR, LC3, and p62 in different groups (shared with the same mice in Fig. 2g). d, e The mTOR and p-mTOR expressions were suppressed by STN-DBS in the PD mouse model, and this phenomenon disappeared when the mice were treated with 3BDO (n = 6 per group; mTOR: F(5,30) = 14.1, P < 0.0001; p-mTOR: F(5,30) = 10.3, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). f STN-DBS reversed the decrease in LC3-II, and a similar result was also obtained in PD mice treated with rapamycin (F(5,30) = 22.3, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). g The increased p62 expression in PD models was reversed by STN stimulation, and this reversal was inhibited by 3BDO injection (n = 6 per group; F(5,30) = 42.9, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). h Co-localization of TH and p-mTOR via IF staining. i The p-mTOR expression was elevated in the dopaminergic neurons of PD mice but was decreased by STN-DBS and rapamycin. With the injection of 3BDO, the p-mTOR expression in dopaminergic neurons was not further influenced by STN-DBS (n = 6 per group; F(5,30) = 19.1, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). j Mitophagosomes (neuron) observed via TEM in different groups (shared with the same mice in Fig. 3a). Magnification: 20,000×. k Fewer mitophagosomes (white arrows) were observed in the PD mice compared with the control. However, STN-DBS and rapamycin induced a marked increase in the number of mitophagosomes in the SN of the PD model (n = 6 per group; F(5,30) = 24.8, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction).

Fig. 4 | STN-DBS alleviates oxidative stress and stabilizes mitochondrial homeostasis. a, b SOD and GSH levels in the SN measured via ELISAs. STN-DBS increased the expression of SOD and GSH in the SN compared with the mice that received no stimulation or rapamycin (SOD only) injection; the mTOR activator 3BDO inhibited this effect (n = 6 per group; SOD: F(5,30) = 22.6, P < 0.0001; GSH: F(5,30) = 13.6, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). c Complex I activity in the SN measured via ELISAs. Complex I showed higher activity in PD mice that received STN stimulation or rapamycin treatment; however, the mTOR activator could suppress the effect of STN stimulation on complex I activity (n = 6 per group; F(5,30) = 44.0, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). d Western blot analysis of Drp1 and Opa1 in mitochondria of the SN. e, f Opa1 expression in mitochondria was decreased in PD mice, whereas elevated expression of Opa1 was detected in PD mice treated with STN stimulation (n = 6 per group; F(3,20) = 26.3, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). In contrast, the opposite trend of Drp1 expression was observed in the mitochondria of the SN (n = 6 per group; F(3,20) = 74.6, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). I: control group; II: PD group; III: PD-sham-DBS group; IV: PD-DBS group; V: PD+Rap group; VI: PD-DBS + 3BDO group. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001. Error bars: standard deviation of the mean. STN-DBS subthalamic nuclei deep brain stimulation, PD Parkinson's disease, SN substantia nigra, ELISA enzyme-linked immunosorbent assay, SOD superoxide dismutase, GSH glutathione, ns not significant.

Fig. 5 | b-e In the cytoplasm, significantly more AIF was detected in PD mice; however, this was reduced by STN-DBS and rapamycin injection. The efficacy of STN stimulation on cytoplasmic AIF expression was blocked by 3BDO (n = 6 per group; F(5,30) = 28.2, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). The trend of AIF expression in mitochondria among the different groups was opposite to that in the cytoplasm (n = 6 per group; F(5,30) = 17.5, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). Moreover, cytochrome c showed a similar expression trend in both the cytoplasm (n = 6 per group; F(5,30) = 24.4, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction) and the mitochondria (n = 6 per group; F(5,30) = 22.6, P < 0.0001; one-way ANOVA followed by a Tukey post-hoc correction). f, g Cleaved caspase-9 (F(5,30) = 26.1, P < 0.0001) and -3 (F(5,30) = 18.5, P < 0.0001) were up-regulated in PD mice and, on the contrary, were suppressed by STN stimulation and rapamycin treatment (n = 6 per group; one-way ANOVA followed by a Tukey post-hoc correction).

Fig. 8 | Schematic illustration of the neuroprotective effects of STN-DBS. STN-DBS was able to increase mitophagy in dopaminergic neurons via an mTOR-dependent pathway, and oxidative stress was suppressed owing to the removal of damaged mitochondria. As a consequence, the apoptogenic factors released from mitochondria were reduced, which was finally attributed to the dopaminergic neuroprotection of STN-DBS in the SN in PD. STN-DBS subthalamic nuclei deep brain stimulation, PD Parkinson's disease, SN substantia nigra, AIF apoptosis-inducing factor.
Use of a bronchial blocker in the prone position

Sir,

A 28-year-old, previously healthy gentleman was incidentally found to have a large spinal tumor at the mid-thoracic level on a chest X-ray examination [Figure 1] performed when he was applying for a job. He had no pain and was neurologically intact. Further investigation by computerized tomography (CT) scan and magnetic resonance imaging (MRI) revealed a well-circumscribed cystic lesion arising from the costovertebral junction and involving the T7 vertebra and the adjacent rib [Figure 2]. It had multiple sclerotic margins with internal septations and calcification. The surgical plan was a two-staged technique. The first stage was a posterior approach with the patient in the prone position to stabilize the spine and resect the tumor. The second stage was to reposition the patient into the lateral decubitus position for a thoracotomy. Accordingly, the plan for general anesthesia was to insert a bronchial blocker rather than rely on a plain single-lumen tube. Managing double-lumen tubes in the prone position is challenging; hence, we did not intend to use one. Therefore, in this case, we used a bronchial blocker (BB; Univent tube),[1] and we placed the blocker under vision into the target bronchus using a fiber-optic bronchoscope (FOB). We kept the blocker pilot cuff deflated.
As surgery proceeded, the tumor was completely resected via the posterior approach, with instrumented fusion from T6 to T8. There was no need for a thoracotomy. The patient recovered well postoperatively and was discharged home in good condition. Pathologic examination of the tumor revealed an enchondroma.

We believe that use of a BB in the prone position is another good indication when the patient is to undergo a two-stage surgical procedure, as in our case, and that this should be added to the indications for using a BB.

Financial support and sponsorship

Nil.

Dear Editor,

We read with great interest the case presented by Bellapukonda et al. entitled: "Can intubate but cannot ventilate! An unexpected event in a child with stridor after accidental aspiration of potassium permanganate solution."[1] The report highlights airway and respiratory complications that occurred after the ingestion of a caustic solution. Despite uncomplicated endotracheal intubation, ventilation and oxygenation could not be provided, and this led to rapid deterioration of respiratory and hemodynamic status, which necessitated a futile emergency tracheostomy. Despite the tracheostomy, oxygenation and ventilation could not be established, and the cause of the difficulty in ventilation was eventually noted to be airway debris occluding the tracheobronchial tree. Potassium permanganate (KMnO4) is a known caustic solution that is used clinically as an antiseptic and antifungal agent. It is not meant for enteral or systemic administration. We are not aware of its use or indications in the treatment of accidental ingestions. As a caustic solution, ingestion or aspiration of potassium permanganate may result in damage
Are rare rogue fluctuations generic to strongly nonlinear and non-integrable systems?

Extensive dynamical simulations are used to explore the possible existence of sudden, sufficiently large energy or rogue fluctuations (RF) at late times and across short time windows in the strongly nonlinear regime of β-Fermi-Pasta-Ulam-Tsingou (FPUT) type systems. Our studies build on a study of RF in the non-dissipative granular chain system and suggest that rare RF may be generic to non-integrable, strongly nonlinear systems at late enough times. We comment on the role of initial conditions and the intriguing influence of harmonic forces on these strongly nonlinear systems. The RF under focus here are distinct from the well known Peregrine solitons used to describe rogue waves via the weakly nonlinear Schrödinger equation.

Introduction

When exploring many particle systems with nonlinear interactions, we often learn about continuum nonlinear equations of motion that are integrable [2,3] and yield solutions in the form of solitons or localized excitations such as breathers and/or bound solitons (see, for example, [4]). Among these we have the Korteweg-de Vries equation, the nonlinear Schrödinger equation and others. Then there is an exactly solvable discrete system, the monodispersed Toda lattice [5,6,7]. A feature of the Toda system is that the solitary waves (SWs) typically suffer a phase shift when they interact with one another. Other than the phase shift they remain unscathed [6,1,8,9]. However, the vast majority of equations of motion for many body systems with nonlinear forces are not necessarily accurately represented by integrable equations of motion. Hence there is a knowledge gap that needs to be addressed by examining the dynamics of non-integrable, strongly nonlinear systems at both early and late times. It turns out that the study of SW interactions with each other and in systems with boundaries offers insights into the consequences of non-integrability. Further, this is a mature subject that is well over 40 years old [10,11,12,13,14,15,16]. For these non-integrable systems one often finds that SWs of finite spatial expanse, localized excitations and acoustic-like oscillations are the energy carriers [17,18]. Further, one may encounter unknown (neither SWs nor localized excitations but some combination of both) and unstable nonlinear objects that are none of the three just noted [18,19,20,21,22]. These objects interact in nontrivial ways [23,24,25,26,27]. The SW-SW and SW-wall interactions may eventually drive the system to an equilibrium state. The equilibrium state is characterized by Maxwell's Gaussian distribution of velocities, and the system's energy is approximately equally distributed among the available degrees of freedom, thereby leading to equipartitioning of energy and the natural introduction of an equilibrium temperature of the system [28,29]. Additionally, in equilibrium there is no memory of the initial perturbation that was used to remove the system from the original equilibrium state, and the system is presumed to be ergodic [30,31,32,33,34,35]. The approach to some form of equilibrium-like state of an interacting many body system is a subject of considerable fundamental interest in the context of the work presented below [36,37,38,39,40].
In a recent study [41] we pushed the idea of strongly perturbing a system and considered an intrinsically nonlinear many particle system represented by an alignment of elastic beads in gentle contact interacting via the one-sided Hertz potential (i.e., no interaction upon contact breaking) and held between two fixed end walls [42]. We ignored the role of dissipative losses in our studies and considered a conservative system. We gave each bead a random velocity at initiation. These strong perturbations led to the development of an early phase of the system that may be viewed as one with many interacting SWs. Such a problem would be hard to approach analytically. The equations of motion were hence carefully and numerically integrated forward in time to probe the time evolution of the system. The results of the high accuracy simulations were as follows. It was found that, due to the persistent interactions between the SWs, the system ended up for extended times in a phase that was characterized by large kinetic energy fluctuations. These large kinetic energy fluctuations manifested themselves as what we called hotspots (HS) and rogue fluctuations (RF) [18,41]. The state of the system with large kinetic energy fluctuations has been earlier referred to as the quasi-equilibrium (QEQ) state [43,44,45,46]. Upon continued simulation one would expect, based on our earlier work [27,47,48,49], that the system would eventually reach a state with energy equipartitioning and hence an equilibrium state. It is worth noting that ideas akin to that of the QEQ state appear to have been independently developed in studies of small quantum systems and are often referred to as the prethermalization phase [50,51]. In this context, we ask if RF are generic to nonlinear many-body systems and whether they play a role in the dynamics of the system in QEQ. In seeking to answer these questions we examine the nature of HS and RF in the β-Fermi-Pasta-Ulam-Tsingou (β-FPUT) system, and this work is described below [52]. In closing here, we should mention that formalisms such as Kubo's linear response theory [53,34] have been very successful in describing relaxation processes to the equipartitioned state in a great many systems. However, it is our understanding that applying such an approach to study relaxation in these strongly perturbed systems in an analytic manner is still not practical, and this is why the current work is based purely on dynamical simulations using the well tested and openly accessible PULSEDYN [27] code.

The paper is organized as follows: the model and the simulational details are addressed in Sec. 2, the results are discussed in Sec. 3, and the conclusion and discussion are presented in Sec. 4. Sec. 3 is split into four subsections: Secs. 3.1 and 3.2 discuss the HS and the more nuanced behavior of the RF in the β-FPUT system, Sec. 3.3 addresses the studies in which different initial conditions are used, and Sec. 3.4 contends that the RF enter late in the QEQ phase.

Model System and Simulational Details

We consider systems with the β-FPUT-like Hamiltonian below, where the potential V(x_i − x_{i+1}) is given by [52]

H = Σ_i p_i²/(2 m_i) + Σ_i V(x_i − x_{i+1}),   V(x_i − x_{i+1}) = (α/2)(x_i − x_{i+1})² + [β/(2n)](x_i − x_{i+1})^{2n}.    (1)

Here, p_i and m_i are the momentum and mass of the ith particle, respectively, x_i is the displacement of the ith particle, α controls the strength of the harmonic term, and β controls the strength of the nonlinear term in the potential. We control the exponent of the nonlinear potential term by varying n. We use n = 2, 3, 4, 5, 6 and 7.
Increasing n beyond 7 proves to be too expensive computationally. N is the system size. For the studies reported here we set N = 100, which is sufficiently large to observe RF without making the simulations too expensive. It is important to note that with the current model we are unable to explore potentials with a nonlinear power less than 4, and hence the results shown here cannot be easily connected to those seen in the granular chain system, where the nonlinearity in the potential is 5/2 [41]. We also do not address here the consequences of asymmetry of the potential for the realization of RF, which could have interesting consequences and will be addressed in future work.

To integrate the force equations, we use the velocity Verlet algorithm [27,54,55,56]. The issue of error accumulation over long simulation times sets limits on the time step of integration and the extent of nonlinearity we can consider. We observe that increasing n increases the computational expense and the error associated with the dynamical simulations. For this reason, we use n = 2, 3 and 4 for most of our simulations, while n = 5, 6 and 7 are used less often. We have used a time step δt = 10⁻⁵. With this time step, we achieve energy conservation of up to 1 part in 10⁹ on average per time step across extended times. We record data every 1/δt time steps and the simulations are run to 10¹¹ time steps, i.e., N_t = 10⁶, where N_t is the number of recorded snapshots of the system in time. From our simulations, we find that N_t = 10⁶ is a long enough simulation time that we recover results for RF that are not influenced by the run times. This length of run time is also computationally feasible.

While it may be possible to observe RF from any velocity perturbation as an initial condition, the simpler the condition or the weaker the magnitude of the perturbation, the longer it would likely take the RF to form and the weaker they are likely to be. Hence, from an intuitive standpoint it makes sense to start from as disordered a state as possible to observe a large number of RF within reasonable times after initiating the system. With this in mind we assign uniformly distributed random velocity perturbations to each particle within the bounds v_o and −v_o. All the results shown in this paper have been obtained with v_o = 0.6, though studies with a range of values of v_o have been done. Values much larger than 0.6 are not recommended as they can incur unnecessary calculational errors. While we have also used the Gaussian and beta distributions to explore the role of initial conditions, all results shown are obtained using the uniform distribution unless stated otherwise. As we shall see in Sec. 3.3, the details of the distributions do not influence the statistics of the RF we observe in our studies. With the chosen v_o and N values, the system has a total energy in our dimensionless units of ∼10⁻² and relaxes to the early-stage QEQ phase within the first few hundred recorded time steps. The relaxation process of the system to QEQ and within QEQ itself is discussed in more detail in Section 3.4.

Results

We discuss below the dynamical-simulation-based observations of HS and RF for our system. For our studies, the kinetic energy fluctuations in the system turn out to be important to understand. We define the kinetic energy of particle i and the scale of its fluctuations as

k_i = p_i²/(2 m_i),   δ_K = [ (1/N) Σ_i (k_i − E_K/N)² ]^{1/2}.    (2)

Here, E_K and E_K/N denote the average kinetic energy of the system and the average kinetic energy per particle in the QEQ phase, respectively. By the virial theorem, for α = 0, E_K = [2n/(2n + 2)] E = [n/(n + 1)] E, where E is the total system energy.
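As a sketch of the type of integration described above, the snippet below implements a velocity Verlet update for a chain with the potential as reconstructed in Eq. (1), assuming unit masses, fixed end walls, and uniform random initial velocities. It is a minimal illustration and not the PULSEDYN implementation; the boundary treatment and the 1/(2n) normalization of the nonlinear term are assumptions carried over from Eq. (1).

```python
import numpy as np

def forces(x, alpha, beta, n):
    """Forces for the chain of Eq. (1) with fixed walls (zero-displacement padding)."""
    xp = np.concatenate(([0.0], x, [0.0]))           # fixed boundaries
    d = xp[1:] - xp[:-1]                              # bond stretches
    f_bond = alpha * d + beta * d ** (2 * n - 1)      # dV/dd for V = a d^2/2 + b d^(2n)/(2n)
    return f_bond[1:] - f_bond[:-1]                   # net force on each interior particle

def velocity_verlet(x, v, alpha, beta, n, dt, steps):
    a = forces(x, alpha, beta, n)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = forces(x, alpha, beta, n)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

N, v0 = 100, 0.6
rng = np.random.default_rng(1)
x = np.zeros(N)
v = rng.uniform(-v0, v0, size=N)                      # uniform random velocity perturbation
x, v = velocity_verlet(x, v, alpha=0.0, beta=1.0, n=2, dt=1e-5, steps=100_000)
print("kinetic energy per particle:", 0.5 * np.mean(v ** 2))
```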
By the virial theorem, for α = 0 in Eq. (2), E_K = \frac{2n}{2n+2}\,E, where E is the total system energy. To identify high-energy regions in the chain, we search for sites in the space-time lattice with kinetic energy greater than E_K/N + 6δ_K, where δ_K denotes the standard deviation of the kinetic energy per site. We call these sites HS. However, since HS are typically fleeting, we associate RF with fluctuations that last across a small window of time. Therefore, we define a RF as a set of contiguous HS on the space-time lattice. Here we have set the minimum number of contiguous spots to be 6. The criterion for choosing 6 as the threshold is arbitrary and is discussed below. A larger number than 6 would reduce the number of RF seen, while a smaller number would increase it, without significantly affecting the findings reported here. In our experiments, varying this threshold does not affect the trends in any significant manner. We report the results of our calculations in Sec. 3 below.
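The bookkeeping just described (flagging sites whose kinetic energy exceeds E_K/N + 6δ_K and grouping contiguous flagged sites into RF) might be sketched as follows. This is an illustration under our own assumptions: the array layout and the use of nearest-neighbour connectivity on the space-time lattice are ours, and the sketch is not the code used for the calculations reported here.

```python
import numpy as np
from scipy import ndimage

def find_hotspots(ke, n_sigma=6.0):
    """HS mask for a (N_t, N) array of per-site kinetic energies."""
    mean_ke = ke.mean()     # E_K / N averaged over the recorded QEQ window
    sigma_ke = ke.std()     # delta_K, the per-site fluctuation scale
    return ke > mean_ke + n_sigma * sigma_ke

def count_rogue_fluctuations(ke, n_sigma=6.0, min_sites=6):
    """Number of RF: clusters of at least `min_sites` contiguous HS."""
    hs = find_hotspots(ke, n_sigma)
    # Connected clusters of HS on the space-time lattice
    # (nearest-neighbour connectivity in space and time is assumed here).
    labels, n_clusters = ndimage.label(hs)
    sizes = ndimage.sum(hs, labels, index=np.arange(1, n_clusters + 1))
    return int(np.sum(sizes >= min_sites))
```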
Hotspots

In all of our simulations, we find that HS are abundant at sufficiently late times. This observation is fully consistent with the findings on the Hertz system reported earlier [41]. To the extent we can accurately study these systems up to late times, we note that the number of HS does not depend in any significant manner on the total energy of the system (set by v_o), on the nonlinearity of the system, which is controlled by n, or on the strength of the harmonic forces, which is set by α in Eq. (1) (see Fig. 1). Typical variations in the number of HS seen are ∼5-10%. The calculations reported here have been run significantly deeper into QEQ and for a system which is 1/5th of the system size compared to that reported in Ref. [41]. This is why the typical number of HS is larger by a factor of 10 in the studies reported here compared to that reported in Ref. [41].

Rogue Fluctuations

We now turn our attention to the nature of RF in the β-FPUT chain. In Fig. 2(a) we show the contour plots of the kinetic energy of the system versus space and time, with lighter regions representing higher kinetic energy. The HS and the RF can be seen in Fig. 2(a). The simulations reveal that there are two kinds of high kinetic energy footprints in Fig. 2(a). We see fast moving regions (shown inside ellipses) and slow moving or nearly localized regions (shown inside rectangular boxes). Fig. 2(b) shows the contour map of just the HS in the chain. We should mention that localized excitations that persist for significant times are rare in QEQ (see Ref. [18] for details). While the fast-moving excitations are identified as RF (shown inside ellipses), the semi-localized excitations (shown in rectangular boxes) are rejected based on our definition. We also note that the RF we report here appear to be ubiquitous for all strongly nonlinear systems and appear to be distinct from the rogue waves alluded to in studies of Peregrine solitons of the weakly nonlinear Schrödinger equation [57]. We further note here that while there is extensive evidence of the existence of rogue waves, the oceanographic analysis of the origins of these waves in the open ocean is very much an evolving subject [58,59,60]. In earlier work on granular systems it has been shown that as n → ∞, the width of a SW should shrink [61]. We anticipate that with increasing n, the width of the SWs would be smaller and hence the number of SWs that can be accommodated in a system ought to increase. Such an increase is expected in due course to increase the number of RF in the system. This can be seen clearly in Fig. 3 for the case α = 0. For Fig. 3 we have used a linear fit of the form

n_{RF} = a\,n + b

to the data obtained from the dynamical simulations, where n_RF is the total number of RF found in the system. The parameters of the fit are calculated to be a = 550 ± 32 and b = -680 ± 130. We observe that the growth behavior of RF with increasing n for 1 < n ≤ 1.3 in the Hertz potential (i.e., greater than quadratic and less than quartic) is exponential in nature, as reported in our earlier work [41]. The 1.3 < n < 2 region is not readily accessible in the studies reported here for the β-FPUT chain. The nonlinear regimes with 1 < n < 2 and the n ≥ 2 regimes cannot be easily connected in this work. However, our earlier study in Ref. [41] and the current work together suggest that strongly nonlinear systems are generically prone to RF in QEQ. Harmonic forces can be introduced in the system by setting α > 0. Typically, harmonic oscillations tend to progressively disperse SWs in a system [19,20,62,63]. Therefore, we expect that increasing α would decrease n_RF. Our dynamical simulations strongly suggest that n_RF decreases exponentially with increasing α as expected, except for a region of α where n_RF shows an unexpected rise followed by a fall back to the exponential decay with increasing α, as can be seen in Fig. 4. Initially this unexpected increase in n_RF over a window of increasing α may seem like an error. However, after extensive analyses of the results, such as exploring whether there are errors in energy conservation over extended time simulations and repeating the calculations with slightly changed parameters, we found that the effect showed up in every study and hence is real. What is remarkable is that the behavior is observed for all values of n that we are able to explore while maintaining the energy conservation accuracy over long time simulations. The simulations also suggest that the maximum number of RF is realized for progressively larger values of α as n increases (see Fig. 4(a)). In earlier work, we have reported strongly nonlinear behavior when the linear and nonlinear parts of the potential become highly competitive, leading to exceedingly long-lived system dynamics and absence of relaxation [64,65,66,67]. Further, the system behaves almost like an integrable system [68], with the SW-SW interactions being much weaker than what is seen in the β-FPUT system [67]. Hence, in retrospect, the observed behavior is perhaps not entirely unexpected. In summary, our studies suggest that the system behaves as a nearly integrable system in this special regime where the acoustic oscillations and the solitary waves become weakly interactive. This is possibly because the length scales and the velocities associated with these typical acoustic-like waves are comparable to those of the SWs [64,65,66,67]. Excluding the co-existence regime, we find that an exponential function provides a suitable fit, as shown in Fig. 4(b) for n = 2. The fit uses the following function:

n_{RF} \sim \exp(-\rho\,\alpha).

We obtain ρ = 3.07 ± 0.49, 3.90 ± 0.35 and 5.1 ± 0.46 for n = 2, 3 and 4, respectively. Interestingly, our fits suggest that ρ ≈ (n + 1). Therefore, for progressively increasing α, the RF get increasingly suppressed as n increases. The results in Ref. [41] were for various values of v_o rather than for various values of n as discussed here.

Role of Initial Conditions

We now address the role of initial conditions in the formation of RF. In QEQ the system loses memory of its initial conditions [34,37,38,39]. The formation of RF, therefore, must not depend on the initial conditions.
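The initial-velocity ensembles compared in this subsection can be generated along the following lines. This is only a sketch; in particular, the way the beta-distributed draws are mapped onto the interval [-v_o, v_o] is our assumption, since only the shape parameters (α_D, β_D) are specified in the text.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N, v_o = 100, 0.6

# Uniformly distributed random velocities in [-v_o, v_o] (the default case).
v_uniform = rng.uniform(-v_o, v_o, size=N)

# Beta-distributed velocities; stretching the unit interval onto [-v_o, v_o]
# is an illustrative assumption.
alpha_D, beta_D = 0.5, 0.5
v_beta = v_o * (2.0 * rng.beta(alpha_D, beta_D, size=N) - 1.0)

# Gaussian velocities with the quoted mean and standard deviation.
mu, sigma = 0.0, 0.01
v_gauss = rng.normal(mu, sigma, size=N)

# Single seeded solitary wave: only particle 50 receives a velocity.
v_sw = np.zeros(N)
v_sw[49] = 0.6   # particle 50 in 1-based numbering
```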
While we have used the uniform random distribution of particle velocities as the initial condition for the results discussed in this paper until now, we also explored cases where the initial velocities of the particles were drawn from the beta and Gaussian distributions to test whether our expectation was correct. Fig. 5(a) shows n_RF plotted against total system energy E for various initial conditions with random velocities. In Fig. 5(b), the dependence of the onset of QEQ on the initial conditions is explored. When the system is initiated by a single SW we see that the system eventually reaches the QEQ phase, characterized by small values of E_{K,max}/E (see the caption of Fig. 5(b)), where RF are eventually observed. We thus show that the QEQ phase is reached regardless of the initial conditions used, and this happens to be the case for the α = 0 and α > 0 cases with n = 2 as shown in Fig. 5(b). As we shall see below, many more interactions are needed when a SW is seeded at t = 0 as opposed to when some random distribution of velocities is used to seed the dynamics. This explains why the case where the SW is seeded takes nearly a decade (a factor of ten) longer in time to reach the QEQ phase, as seen in Fig. 5(b). RF are seen in all the cases we have probed. The beta distribution used to explore a case of an initial random distribution of velocities is given by

P(u) = \frac{u^{\alpha_D - 1}\,(1 - u)^{\beta_D - 1}}{B(\alpha_D, \beta_D)}, \quad 0 \le u \le 1,

where B(α_D, β_D) is a normalization constant. The parameters α_D and β_D change the shape of the probability distribution. The Gaussian distribution, which is also used to explore a separate case of a random distribution of initial velocities, is given by

P(v) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\frac{(v - \mu)^2}{2\sigma^2}\right],

where µ and σ are the mean and the standard deviation of the distribution, respectively. Although we have studied several cases of beta and Gaussian distributions, we show results from a selected study where we set (α_D, β_D) = (0.5, 0.5) and (µ, σ) = (0, 0.01) for the simulations corresponding to the beta and Gaussian distributions. For the two simulations with initial velocities sampled from the uniform distribution, we have set v_o = 0.6. For the simulation with a single SW, we have set the initial velocity of particle 50 in the chain to 0.6 while the rest of the particles in the chain are unperturbed. As expected, we observe that the occurrence of RF is independent of initial conditions. Further, the occurrence of RF does not depend on the total energy of the system in a significant manner, as seen in Fig. 5(a). We note that the number of RF we see in the studies shown here is ∼10^2 in simulations across 10^{11} time steps. A question that will be addressed in future work has to do with the minimal conditions needed to realize RF in these strongly nonlinear systems. RF are rare events, and to understand how rare and why are daunting challenges for future analyses.

Fig. 5: Subfigure (a) shows n_RF as a function of the total energy of the system E, for three different distributions of the initial conditions: uniform, Gaussian and the beta distribution. Subfigure (b) shows the kinetic energy of the most energetic SW, E_{K,max}, normalized by the total system energy E, for a range of initial conditions and system parameters. The inset shows the early-time transition of the system to QEQ for all the cases except the case of the single seeded SW. Both plots show data for α = 0 and n = 2.

Our preliminary studies along these lines suggest that the birth of RF presumably requires 5-6
energetic SWs coming together within a small time window, and it is necessary to understand what controls such events.

The RF in the Late QEQ Phase

It is important to examine whether the RF recorded occur late in the QEQ phase (as shown in Fig. 2) or past it, in an energy-equipartitioned state. If RF were indeed seen in the energy-equipartitioned state, it could be possible to see them when systems are in equilibrium as well. In earlier work we have studied how the β-FPUT system with n = 2, α = 0 at late enough times relaxes beyond QEQ to an energy-equipartitioned state [27]. A key test used to determine the establishment of the equipartitioned state was to examine whether the simulations yielded a value of C_v that can only be obtained when equilibrium prevails. Hence, it is reasonable to examine whether the system in our study relaxes past the QEQ state to an energy-equipartitioned state by calculating the specific heat C_v across various time windows of relaxation. The key idea here is that if the equilibrium C_v is attained then the system must have gone past the QEQ phase to the equilibrium phase. We followed the approach outlined recently by Przedborski et al. and presented in Eq. (16) in Ref. [49] to carry out our calculations. For n = 2, 3 and 4, the theoretical values of C_v when the systems reach the energy-equipartitioned state were calculated to be 0.725, 0.64 and 0.608, respectively [49]. The corresponding C_v values from our simulations using Eq. (24) in Ref. [27] were found to be 0.741, 0.657 and 0.615. While these values are close to the values in the energy-equipartitioned state, our experience with these calculations (see [27]) suggests that they are not close enough to infer that the equipartitioned stage has been reached. Further, for the strongly nonlinear systems under the various initial conditions being studied, the simulation times needed to reach the equipartitioned state turn out to be too long to simulate. Our results hence suggest that our study of RF is for systems that have not quite reached the equipartitioned state and that our calculations have been performed in QEQ.

Summary and Conclusions

The story of RF in non-integrable 1D systems with strongly nonlinear interactions is an interesting as well as an evolving one. Given that RF emerge late into the system dynamics, and that nonlinear dynamical simulations need to be done with high precision, it is challenging to resolve even relatively simple questions within a short time. Our studies in this work were done on RF as the systems evolved through the time steps outlined in Sec. 2. Hence, in this second paper on RF, we lay out our progress and close with questions that we wish to address in future work. We note here that, depending on the nature of the interactions, these systems when subjected to a perturbation typically exchange energy between the particles using energy carriers that are characteristic of strongly nonlinear systems. Depending upon the details of the interactions, the carriers can be some or all of propagating SWs, localized nonlinear excitations, and objects that have SW and localized excitation-like features along with acoustic-like oscillations [18]. Interactions between these objects are non-trivial. Eventually these systems evolve into the QEQ phase, characterized by no memory of initial conditions, a Gaussian distribution of velocities and no equipartitioning of energy.
Our contention is that RF seem to appear in the QEQ phase and possibly are characteristic of strongly nonlinear systems. In earlier work [41], we showed that for granular chains held within fixed boundaries characterized by Hertz and Hertz-like potentials with 1 < n < 1.3, n RF ∼ exp(γ2n), where γ = 6.41 ± 0.81. Here we report that for β-FPUT systems and similar systems with sextic, octic, etc potentials (see Eq. (2)), i.e., for n ≥ 2, where n is an integer, n RF ∼ an + b, where a = 550 ± 32 and b = −680 ± 130 are constants. The region between 1.3 < n < 2.0 where the cross-over happens from exponential to linear growth of n RF with respect to n cannot be explored using the β-FPUT like systems and will be the subject of future work using the Hertz-like potential. Our studies show that increasing α in the β-FPUT chain suppressed n RF as n RF ∼ exp(−ρα), where ρ ≈ (n+1). This result is similar to the suppression of n RF reported with increasing precompression in the Hertz and Hertz-like chains (see Fig. 3 in [41]). However, in the Hertz chain we found that n RF decayed with precompression in a way that seemed consistent with a doubleexponential function. While the Hertz-like and FPUT potentials are different, and there is no reason to expect that the suppression of n RF in all models would be the same, it would be interesting and important to understand more about how n RF gets suppressed by the presence of harmonic interactions for various model systems. The co-existence regime is where the harmonic and nonlinear pieces of the potential become competitive as alluded to in Refs. [64] and in [67] and the system shows behavior akin to that of a strongly nonlinear system. In our β-FPUT-like systems we see a similar co-existence phase when the harmonic and nonlinear forces become competitive, i.e., for a range of values of α given β = 1 and the power of n in Eq. (2). Our investigations in [64] suggest that the system dynamics in this co-existence phase is similar to that in an integrable system where the SW never gets destroyed. Typically, in this state, we find that the SW interacts very weakly with the background oscillations of the particles in the chain. We contend that the SW seen in this regime for the Hertz chain problem is best described by Nesterenko's solitary wave solution for the strongly nonlinear Hertz chain under weak loading [68]. It is conceivable that Nesterenko's solution could hold the key to a better understanding of the co-existence phase in the β-FPUT like system. Future work may need to address the following outstanding questions -(1) exactly when do the RF appear in the relaxation process of a perturbed chain and how long do they last, and how does their number distribution change as the system evolves? (2) Do RF exist in QEQ phases only or do they appear infrequently in equilibrium as well? (3) Are RF special to 1D systems or can they happen in higher dimensions? Acknowledgments SS has been partially supported by a Fulbright-Nehru Academic and Professional Excellence R-Flex Fellowship while at IIESTS, India where a part of this work was completed. The authors declare that they have no conflict of interest associated with publishing this work.
Dose rate effect on mortality from ischemic heart disease in the cohort of Russian Mayak Production Association workers For improvement of the radiation protection system it is crucial to know the factors that modify the radiation dose–response relationship. One of such key factors is the ionizing radiation dose rate. There are, however, very few studies that examine the impact of the dose rate on radiogenic risks observed in human cohorts exposed to radiation at various dose rates. Here we investigated the impact of the dose rate (in terms of the recorded annual dose) on ischemic heart disease (IHD) mortality among Russian nuclear workers chronically exposed to radiation. We observed significantly increased excess relative risks (ERR) of IHD mortality per unit of external gamma-ray absorbed dose accumulated at higher dose rates (0.005–0.050 Gy/year). The present findings provide evidence for the association between radiation dose rate and ERRs of IHD mortality in occupationally chronically exposed workers per unit total dose. IHD mortality risk estimates considerably increased with increasing duration of uninterrupted radiation exposure at high rates. The present findings are consistent with other studies and can contribute to the scientific basis for recommendations on the radiation protection system. • the follow-up period was complete for the resident subcohort, which started on a date of hire and continued until a date of death with annual routine health check-ups; • the quality of clinical verification for causes of death in residents was higher due to a large number of autopsy examinations performed for members of the resident subcohort (52.0%) than the migrant subcohort (12.0%); • the number of workers who had potentially received internal radiation exposure and for whom bioassay alpha activity measured was higher in the resident subcohort (72.2%) than the migrant subcohort (6.0%); • the number of workers for whom multiple bioassay measurements of alpha-activity were above the detection limit was higher in the resident subcohort (mean 6.32, standard deviations 6.41) than in the migrant subcohort (mean 3.16, standard deviations 5.27); and • as a consequence, more data on organ and tissue absorbed alpha doses with lower uncertainties were available for the resident subcohort than for the migrant subcohort. Results of the analysis for the entire cohort of Mayak PA workers are summarized in Supplementary Information (Tables S1-13, Figure S1). Dosimetry. The dosimetry system for the Mayak PA worker cohort has been updated several times over recent decades and this study is based on the Mayak Worker Dosimetry System 2013 (MWDS-2013) 14,15 that provides improved individual estimates of annual gamma-ray, neutron and alpha-particle doses from external and internal exposures. The majority of Mayak PA workers (76.1% of the entire cohort and 76.2% of the resident subcohort) received combined (both external and internal) radiation exposures and the rest of the workers received only external exposures to gamma-rays and/or neutrons. As noted earlier 16 , some Mayak workers were internally exposed to radionuclides other than plutonium, but the contribution of plutonium to the alpha dose in the Mayak worker cohort was the largest (> 90%). Consistent with the previous study 16 , the present analyses considered doses from external and internal exposures absorbed in liver because MWDS-2013 provides no dose estimates for the circulatory system organs such as the heart. 
Statistical analysis.

The analysis considered separate datasets for the entire cohort and for the resident subcohort. The data for the analyses were compiled as Table S14. Doses considered in all analyses were lagged for 10 years. At the first stage both sexes were considered together, except that while performing the baseline analysis the heterogeneity between sexes was checked. Then all the analyses considered males and females separately. In this study, a lag period refers to a period of time just before death when it is thought that exposure can have no further effect on its occurrence. To conduct analyses considering lagged doses from external gamma rays, person-years were included in the analyses beginning from the start date of employment, with the first x years included in the zero gamma dose category when the radiation dose was lagged for x years. The estimates of excess relative risk per unit absorbed dose (ERR/Gy) were based on Poisson regression and computed with the AMFIT module of the EPICURE software 17. 95% confidence intervals (CI) and p values demonstrating the statistical significance were computed with the AMFIT module using likelihood techniques. All statistical significance criteria were two-sided. The differences were considered significant at p < 0.05. First, similarly to the previous study 16, the ERR/Gy estimates were obtained using the conventional linear model that did not consider the dose rate. Adjustments via stratification were made for the following non-radiation factors: sex, attained age (< 20, 20-25, …, 80-85, > 85), calendar period (1948-1950, 1951-1955, 1956-1960, …, 2011-2015, 2016-2018), smoking status (never smoker, ever smoker, unknown), alcohol consumption (seldom drinker, moderate drinker, heavy drinker, unknown) and migration status (when the entire cohort was considered in the analyses), and for alpha dose from internal exposure. The analysis with the adjustment for alpha dose did not exclude from the dataset those workers who had not been monitored for internal exposure to alpha particles; instead they were assigned to an "unknown" dose category (all workers with unmeasured bioassay alpha activity). So, the Poisson regression model used was

λ = λ_0(s, aa, ct, smok, alc, mig, d_α) · (1 + β · D_γ),

where λ denotes the IHD mortality in the study cohort; λ_0 denotes the background IHD mortality assuming the zero radiation dose; s denotes sex; aa denotes attained age; ct denotes calendar period; smok denotes smoking status; alc denotes alcohol consumption; mig denotes migration status (in the analysis considering the entire cohort); d_α denotes a categorical variable for the cumulative liver absorbed alpha dose from internal exposure (Gy); β denotes ERR/Gy; and D_γ denotes the cumulative liver absorbed gamma-ray dose from external exposure (Gy). Then the analysis considering the dose rate, based on annual doses recorded with individual film badges (as a sum of individual daily doses measured with a film badge dosimeter), was carried out using the following model:

λ = λ_0(s, aa, ct, smok, alc, mig, d_α) · (1 + β_L · D_γL + β_H · D_γH),

where D_γL denotes the total dose accumulated at a dose rate lower than a dose rate cutpoint, and D_γH denotes the total dose accumulated at a dose rate higher than a cutpoint (illustration of two dose-rate windows shown in Table 1) 18; β_L and β_H denote the ERR/Gy estimates based on D_γL (ERR_L/Gy) and D_γH (ERR_H/Gy), respectively. The dose rate cutpoints were examined from 0.005 to 0.050 Gy/year with a 0.005 Gy/year interval.
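As an illustration of how the two dose-rate windows and the ERR model above could be handled in practice, the following sketch splits a worker's annual film-badge doses at a chosen cutpoint with a lag and fits (β_L, β_H) by Poisson maximum likelihood on grouped person-year data. It is not EPICURE/AMFIT; the data layout, variable names and the simple optimizer are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def split_cumulative_dose(annual_doses_gy, cutpoint_gy_per_year, lag_years, at_year):
    """Cumulative dose (Gy) accumulated below/above the annual dose-rate
    cutpoint, counting only years at least `lag_years` before `at_year`."""
    d_low = d_high = 0.0
    for year, dose in annual_doses_gy.items():
        if year <= at_year - lag_years:          # lagged exposure only
            if dose <= cutpoint_gy_per_year:
                d_low += dose
            else:
                d_high += dose
    return d_low, d_high

def fit_err(pyr, deaths, stratum, d_low, d_high):
    """pyr, deaths, d_low, d_high: arrays over grouped cells;
    stratum: integer index per cell encoding the stratification factors."""
    n_strata = int(stratum.max()) + 1

    def neg_loglik(theta):
        log_lam0 = theta[:n_strata]              # background rate per stratum
        beta_l, beta_h = theta[n_strata:]
        rr = 1.0 + beta_l * d_low + beta_h * d_high
        if np.any(rr <= 0):                      # keep relative risk positive
            return np.inf
        mu = pyr * np.exp(log_lam0[stratum]) * rr
        return np.sum(mu - deaths * np.log(mu))  # Poisson deviance (up to const.)

    theta0 = np.concatenate([np.full(n_strata, np.log(deaths.sum() / pyr.sum())),
                             [0.0, 0.0]])
    fit = minimize(neg_loglik, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
    beta_l, beta_h = fit.x[-2:]
    return beta_l, beta_h                        # ERR_L/Gy and ERR_H/Gy
```

In the actual analyses the baseline rates are stratified over sex, attained age, calendar period, smoking, alcohol consumption and alpha-dose categories, so the stratum index in such a sketch would encode all of these factors jointly.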
The comparison was made between the conventional model and the model considering the dose rate, using maximum likelihood techniques. Deviations from the conventional (linear) model of the dose-response were tested by fitting the dataset using alternative (linear-quadratic) models:

λ = λ_0(s, aa, ct, smok, alc, mig, d_α) · (1 + β_L1 · D_γL + β_L2 · D_γL^2 + β_H1 · D_γH),

λ = λ_0(s, aa, ct, smok, alc, mig, d_α) · (1 + β_L1 · D_γL + β_H1 · D_γH + β_H2 · D_γH^2), and

λ = λ_0(s, aa, ct, smok, alc, mig, d_α) · (1 + β_L1 · D_γL + β_L2 · D_γL^2 + β_H1 · D_γH + β_H2 · D_γH^2).

The comparison between the linear and the linear-quadratic models was based on the difference between the corresponding maximum likelihoods. While assessing the ERR/Gy, the following sensitivity analyses were carried out:

• various lag periods (0, 5, 20 and 30 years) for external and internal occupational radiation doses were examined;
• an adjustment (via stratification) for internal alpha dose was excluded;
• an alternative adjustment for internal alpha dose was made using the approach in which those workers who had not been monitored for internal alpha exposure were considered in two categories: one category included reactor workers exposed only externally, the other category included all the remaining workers with unmeasured alpha activity;
• the linear trend with the weighted cumulative gamma-neutron dose (with a radiation weighting factor of 10 for the absorbed neutron dose 19) was analyzed. The radiation weighting factor for neutrons was chosen in accordance with ICRP Publication 103, taking into account the energy of the neutron spectrum in the Mayak PA workplace 19,20. To assess the weighted cumulative gamma-neutron dose, the unmeasured neutron dose was assigned a value of 0.00;
• adjustments (via stratification) for additional factors were included: period of hire (1948-1958, 1959-1972, 1973-1982), age at hire (< 20, 20-30, ≥ 30);
• the dataset considered in the analysis was limited to workers who had been employed for > 1 year.

Results

At the end of the follow-up period, 3824 deaths from DCS were registered as the main cause of death in the resident subcohort over 622,199 person-years of the follow-up, among which there were 2267 (59.3%) deaths from IHD. Tables S15 and S16 summarize distributions of person-years and numbers of deaths from IHD within various gamma dose categories by dose rates. Tables 2 and 3 summarize the main characteristics of the Mayak worker cohort and the resident subcohort. At the end of follow-up, the means (standard deviations) of cumulative liver absorbed gamma-ray doses from external exposure were 0.43 (0.63) Gy for both sexes, 0.45 (0.65) Gy for males and 0.37 (0.56) Gy for females in the entire cohort, and 0.42 (0.60) Gy for both sexes, 0.45 (0.63) Gy for males and 0.33 (0.53) Gy for females in the resident subcohort. Figure 1 demonstrates the distribution of the entire cohort workers and resident subcohort workers by external gamma-ray dose. The distributions of workers by the cumulative liver absorbed gamma-ray dose from external exposure did not significantly differ between the entire cohort and the resident subcohort (p = 0.09). Means (standard deviations) of annual liver absorbed gamma-ray doses from chronic external exposure were 0.053 (0.110) Gy for both sexes, 0.054 (0.114) Gy for males and 0.050 (0.096) Gy for females in the entire cohort, and 0.030 (0.076) Gy for both sexes, 0.030 (0.078) Gy for males and 0.030 (0.068) Gy for females in the resident subcohort. The changes in mean annual gamma-ray doses over the whole employment period are demonstrated in Fig. 2.
It should be noted that in early years of Mayak PA operation mean annual gamma-ray doses from external exposure were the highest. In 1951 the mean dose rate was 0.25 Gy/year; during the following decade the dose sharply declined down to 0.05 Gy/year by 1960. The annual doses continued leveling down gradually in the 1960s-1980s and thereafter remained stable at approximately 0.008 Gy/year. It should also be noted that 4083 (18.2%) workers of the entire cohort and 2583 (19.6%) workers of the resident subcohort who had been working at reactors and at some departments of the radiochemical and the plutonium production plants had been exposed to neutrons. Means (standard deviations) of cumulative liver absorbed neutron doses were 0.0011 (0.0042) Gy for both sexes, 0.0011 (0.0044) Gy for males and 0.0013 (0.0048) Gy for Table S14. In accordance with MWDS-2013, bioassay alpha activity due to incorporation of plutonium (24-h urine) was measured only in 44.8% (42.0% of males/52.6% of females) of the entire cohort workers and in 72.2% (70.3% of males/76.9% of females) of the resident subcohort workers who had been exposed to combined radiation. Means (standard deviations) of cumulative liver absorbed alpha doses from internal exposure to incorporated plutonium were 0. 25 Table S17. In the previous studies of DCS mortality, including IHD mortality 16 , while using a conventional linear model that included adjustments for non-radiation factors (sex, attained age, calendar period, smoking status and alcohol consumption status, migration status for the analysis considering the entire Mayak worker cohort) and for alpha dose from internal exposure, there was no significant association of IHD mortality with the cumulative liver absorbed gamma-ray dose from external exposure: the ERR/Gy was 0.06 (95% CI − 0.04; 0.18) in males and 0.14 (95% CI − 0.07; 0.45) in females. The baseline analysis. In this study the ERR/Gy of gamma-ray dose for IHD mortality was assessed with another model that took into account the dose rate. The results of the baseline analysis are presented in Table 4 and Fig. 3. IHD mortality risks estimated with the conventional model (without considering dose rate cutpoints) were significantly different from the corresponding risks estimated with the model used in this study (considering dose rate cutpoints) for both sexes (except for cutpoints of 0.045 and 0.050 Gy/year) and for males (except for cutpoints of 0.040, 0.045 and 0.050 Gy/year) in the resident subcohort. Females of the resident subcohort showed no significant differences in the risk estimates between the models (Table 4). It should be noted that for all dose rate cutpoints, the estimates of ERR H /Gy (due to higher dose rates) were higher than ERR L /Gy (due to lower dose rates) for both sexes, males and females in the resident subcohort. IHD mortality risks in the resident subcohort significantly increased at dose rates of > 0.015, > 0.020, > 0.025, > 0.030, > 0.035, > 0.040, > 0.045, and > 0.050 Gy/year when compared to exposures at dose rates below the specified cutpoints. There was no significantly increased risk in females of the resident subcohort, but there was no significant difference between sexes at any dose rate cutpoint (Table 4). Sensitivity analyses. We performed a sensitivity analysis to assess the effect of uninterrupted duration of high dose-rate exposure over 5 years on the risk estimate. Table 5 summarizes the results of this analysis. 
The analysis demonstrated that high dose rate exposure during 5 years notably increased the IHD mortality risk (ERR H5 /Gy, Table 5) compared to the risk estimate due to high dose rate exposure during 1 year (ERR H /Gy, Table 4). Tables 6, S18 and S19 summarize IHD mortality risks analyzed for associations with the dose rate lagged for various periods (0, 5, 20, 30 years). First, it should be noted that differences in IHD mortality risks for the resident subcohort between the conventional model and the alternative model were significant at every cutpoint with any of the lag period (except at some certain cutpoints with some certain lag periods) ( Table 6). The same tendency was observed separately for males of the resident subcohort (Table S18), and there were significant differences in females only at two cutpoints with a zero lag-period (0.020 and 0.025 Gy/year) (Table S19). However, the IHD mortality risk estimates at higher dose rates for both sexes (Table 6) and for males (Table S18) in www.nature.com/scientificreports/ the resident subcohort while being lagged for more than 10 years decreased down to the non-significance level with the increasing lag period. The sensitivity analysis of the IHD mortality that considered the weighted cumulative gamma-ray and neutron liver absorbed dose (weighting factor of 10) provided similar results for both sexes (Table 7), males (Table S20) and females (Table S21) in the resident subcohort. Meanwhile the exclusion of the adjustment for alpha dose from the model resulted in the decrease of the ERR H /Gy due to higher dose rates (by 30-35%) and even in the loss of significance at two dose rate cutpoints (0.045 and 0.050 Gy/year). In contrast, such exclusion of this adjustment resulted in the increase of risk estimates due to lower dose rates at all cutpoints (> 10%) without changes in significance of the risk estimates (the resident subcohort, Table 7). In males of the resident subcohort the exclusion of the adjustment for alpha dose from the model did not change markedly the magnitude of the IHD mortality risk due to higher dose rates but at certain cutpoints (0.010, 0.025, 0.040, 0.045 and 0.050 Gy/year) the risk gained significance (Table S20). In females of the resident subcohort the exclusion of the adjustment for alpha dose considerably changed the magnitude of the risk estimate due to higher dose rates and the risk became negative, but non-significant, at every dose rate cutpoint (Table S21). The sensitivity analysis performed with the model that included the alternative adjustment for alpha dose demonstrated the increase in IHD mortality risks due to both higher and lower dose rates at every cutpoint for both sexes (Table 7) and for males in the resident subcohort (Table S20). In females the risk estimate remained stable when analyzed with the model including the alternative alpha dose adjustment (Table S21). Table 4. Excess relative risk per Gy of IHD mortality in relation to 10-year lagged cumulative liver absorbed doses from external radiation exposure, adjusted for various non-radiation factors and alpha absorbed dose to the liver (main analysis, residents). Numbers in bold indicate significant differences. The dataset for the analysis was stratified by sex, attained age, calendar period, smoking status, alcohol consumption, alpha dose. ERR/Gy excess relative risk per unit gray of gamma-ray dose, IHD ischemic heart disease (ICD-9 codes: 410-414). a Test for heterogeneity between sexes. 
b Likelihood ratio test comparing the models with and without cutpoint. The consideration of the limited dataset that included only those workers who had worked at the Mayak PA for > 1 year (the sensitivity analysis for which workers with duration of employment < 1 year were excluded from the analyzed dataset) did not affect considerably the result for both sexes (Table 8), males (Table S22) and females (Table S23) in the resident subcohort. Inclusion of an additional adjustment for the hire period in the model resulted in modest changes in IHD mortality risk estimates (with widening of the corresponding confidence intervals) both due to lower and higher dose rates at all cutpoints for both sexes (Table 8), males (Table S22) and females (Table S23) in the resident subcohort. This sensitivity analysis revealed significant differences in risk estimates between the conventional model and the alternative model at cutpoints 0.045 Gy/year for both sexes, and 0.040 and 0.045 Gy/year for males in the resident subcohort. The sensitivity analysis that included the adjustment for age at hire in the model demonstrated significant differences between the conventional model and the alternative model at all cutpoints, whereas significantly increased IHD mortality risks were observed only due to higher dose rates at all cutpoints (excluding 0.005 Gy/ year). Moreover, ERR H /Gy considerably increased (by 30-80%) for both sexes in the resident subcohort (Table 6), and the similar results were observed also in males (Table S21). In females the inclusion of the adjustment for age at hire in the model resulted in a considerable (three-fourfold) increase in ERR H /Gy due to higher dose rate and the risks gained significance at every cutpoint (Table S23). The comparison between the IHD mortality risk in relation to dose rate provided with the linear model and corresponding estimates provided with alternative models did not demonstrate significant differences at any cutpoints for both sexes (Table 9), males (Table S24) and females (Table S25) in the resident subcohort. Discussion For improvement of the radiation protection system it is essential to consider factors that modify the dose-response relationship 9,10 , and dose rate is among such factors. This study examined the impact of dose rate (in terms of annual dose rate) on IHD mortality among chronically exposed Russian nuclear workers. Significantly increased excess relative risks of IHD mortality per unit of total external gamma-ray dose accumulated at higher dose rates were observed for both sexes at 0.015-0.050 Gy/year and for males at 0.020-0.035 Gy/year in the resident subcohort. In females the estimates of ERR/Gy of the cumulative dose for IHD mortality were nonsignificantly increased due to higher dose rates compared to lower dose rates at all cutpoints of annual doses. There were, however, no significant differences between sexes. Table 5. Excess relative risk of IHD mortality per Gy in relation to 10-year lagged cumulative liver absorbed doses from external radiation exposure, adjusted for various non-radiation factors and alpha absorbed dose to the liver (sensitivity analysis, residents). Numbers in bold indicate significant differences. The dataset for the analysis was stratified by sex, attained age, calendar period, smoking status, alcohol consumption, and alpha dose. Assessment of the effect of the uninterrupted high dose rate exposure over 5 years. 
ERR/Gy excess relative risk per unit gray of gamma-ray dose, IHD ischemic heart disease (ICD-9 codes: 410-414). a Test for heterogeneity between sexes. b Likelihood ratio test comparing the models with and without cutpoint. The ERR L /Gy and ERR H /Gy for IHD mortality increased with increasing dose rate cutpoints from 0.005 through 0.050 Gy/year and the confidence intervals became narrower due to the increment of the number of person-years of the follow-up that corresponded to high dose rates that exceeded occupational annual dose limits 19,21,22 . It should be noted that uninterrupted external high gamma-dose rate exposure over 5 years resulted in a notable (3-4.5 fold) increase in the IHD mortality risk compared to exposure at similar dose rates over 1 year (EER H5 /Gy > ERR H /Gy, p < 0.001). In this study lagging of the gamma-ray dose affected the risk estimate, consistent with the previous study 16 . The increase in the lag period resulted in the decrease (almost two-fold) in the IHD mortality risk due to higher dose rates (ERR H /Gy) at all cutpoints except for 0.005 Gy/year, and even to the loss of significance with 20 and 30-year lagging. In our opinion, the observed result was not attributable to the loss of higher dose rates since Mayak workers had been exposed at higher dose rates in early years after hire (Fig. 2). In this study the conventional 10-year lag was used; however, there is ongoing discussion on an appropriate lag period for certain causes of death from non-cancer diseases including IHD. Neither the conventional model (without considering dose rate) nor the alternative model (considering dose rate) revealed the effect of adjusting for neutron dose on the IHD mortality risk following chronic external gamma-ray exposure. In contrast, the adjustment for alpha dose (exclusion from the model and the alternative adjusting) changed the ERR/Gy estimates for IHD mortality regardless of whether the conventional model or the alternative model was used. This is why in order to provide more precise and less uncertain risk estimates all radiation types should be considered in analyzing radiogenic risks in individual cohort members exposed to combined radiation. Table 6. Excess relative risk per Gy of IHD mortality in relation to cumulative liver absorbed doses from external gamma-ray exposure, adjusted for various non-radiation factors and alpha absorbed dose to the liver (sensitivity analyses-various lag periods, both sexes, residents). Numbers in bold indicate significant differences. The dataset for the analysis was stratified by sex, attained age, calendar period, smoking status, alcohol consumption, alpha dose. ERR/Gy excess relative risk per unit gray of gamma-ray dose, IHD ischemic heart disease (ICD-9 codes: 410-414). a Likelihood ratio test comparing the models with and without cutpoint. www.nature.com/scientificreports/ Experimental studies have shown both sparing and enhancing (inverse) dose protraction effects of radiation exposure on the circulatory system [23][24][25][26][27][28][29] , and consensus has not yet been reached regarding dose rate effectiveness 30,31 . Recently Kloosterman and colleagues 32 have developed a biophysical mathematical model to describe the radiation-promoted atheroslerotic plague development. The authors state that with the adequate experimental data available this model could be further elaborated to take into account the dose rate effect. 
In the meantime, studies of dose rate effects on risks of radiation-related health outcomes in human cohorts are very limited [5][6][7] . On the one hand, there are indications of larger risks per unit dose for lower dose rate and fractionated exposures 33,34 . On the other hand, it should be noted that the results and conclusions of this study of IHD mortality in the cohort of the Russian nuclear Mayak workers are overall in good agreement with those observed in the study of UK nuclear workers of the Hanford site 7 that gives evidence for an increase in the ERR/Gy estimates at higher dose rates. This is why to improve the radiological protection system it is highly important to continue studies of cancer and non-cancer risks taking into account a dose rate in addition to nonradiation confounding factors and cumulative dose, as well as mechanistic studies for outcome development due to exposures at different dose rates. This study has a number of strengths: the large size of the Mayak worker cohort (22,377 individuals) and the resident subcohort (13,156 individuals); availability of individual annual gamma-ray doses from external exposure measured with individual film badges over the whole follow-up period; the long follow-up period Table 7. Excess relative risk per Gy of IHD mortality in relation to 10-year lagged cumulative liver absorbed gamma-ray doses from external exposure (sensitivity analyses-various parameters of the adjustment for alpha and neutron dose, both sexes, residents). Numbers in bold indicate significant differences. ERR/Gy excess relative risk per unit gray of gamma-ray dose, IHD ischemic heart disease (ICD-9 codes: 410− 414). a Unmonitored for plutonium alpha activity workers divided into two subgroups: only workers of reactors and the rest of unmonitored workers. b For all workers. c Likelihood ratio test comparing the models with and without cutpoint. www.nature.com/scientificreports/ (70 years); the available vital status (96%) of cohort members, high quality of data on causes of death; available information on acknowledged confounders (e.g., smoking, alcohol consumption that were taken into account in the present study, hypertension, high body mass index); available biological specimens including heart tissues that enable investigation of outcome mechanisms due to chronic radiation exposure 35,36 . The limitation of this study includes the lack of data on temporal radiation dose distributions in the MWDS-2013 precisely enough to calculate hourly or daily dose rate, wherefore we employed annual dose rate. In addition, it should be noted that alpha actvity was measured in bioassays for only 44.8% of Mayak workers who could have been affected by aerosols containing alpha particles (workers of the radiochemical and plutonium production plants). Despite the fact that dosimetry systems for Mayak PA workers have been updated and improved over many years within the Russian-American cooperation 37 , considerable uncertainties remain in the dose estimates from external and internal exposures. This study used point dose estimates provided by MWDS-2013, and did not consider uncertainties in external gamma and neutron or internal alpha particle dose estimates. 
The limitations of this study were the small number of migrants in the Mayak worker cohort whose complete medical information or data on confounding factors were unavailable, and also the low statistical power of the analysis that considered females separately due to the smaller number of females in the Mayak worker cohort and even smaller in the resident subcohort. Table 8. Excess relative risk per Gy of IHD mortality in relation to 10-year lagged cumulative liver absorbed gamma-ray doses from external exposure (sensitivity analyses-dataset restricted and additional inclusion of the adjustment, both sexes, residents). Numbers in bold indicate significant differences. ERR/Gy excess relative risk per unit gray of gamma-ray dose, IHD ischemic heart disease (ICD-9 codes: 410-414). a Likelihood ratio test comparing the models with and without cutpoint. Conclusions The results of this study provide evidence supporting associations of dose rate and duration of uninterrupted high dose rate exposure with the ERR/Gy estimates for IHD mortality in chronically exposed workers. The observed findings are in good agreement with findings of other studies and considerably contribute to the scientific basis for recommendations of the radiation protection system.
Severe Periodontitis Is Inversely Associated with Coffee Consumption in the Maintenance Phase of Periodontal Treatment This cross-sectional study addressed the relationship between coffee consumption and periodontitis in patients during the maintenance phase of periodontal treatment. A total of 414 periodontitis patients in the maintenance phase of periodontal treatment completed a questionnaire including items related to coffee intake and underwent periodontal examination. Logistic regression analysis showed that presence of moderate/severe periodontitis was correlated with presence of hypertension (Odds Ratio (OR) = 1.99, p < 0.05), smoking (former, OR = 5.63, p < 0.01; current, OR = 6.81, p = 0.076), number of teeth present (OR = 0.89, p < 0.001), plaque control record ≥20% (OR = 1.88, p < 0.05), and duration of maintenance phase (OR = 1.07, p < 0.01). On the other hand, presence of severe periodontitis was correlated with smoking (former, OR = 1.35, p = 0.501; current, OR = 3.98, p < 0.05), coffee consumption (≥1 cup/day, OR = 0.55, p < 0.05), number of teeth present (OR = 0.95, p < 0.05), and bleeding on probing ≥ 20% (OR = 3.67, p < 0.001). There appears to be an inverse association between coffee consumption (≥1 cup/day) and prevalence of severe periodontitis in the maintenance phase of periodontal treatment. Introduction Periodontitis is a chronic oral disease affecting all populations all over the world [1,2]. It is characterized by inflammation of the gingiva and/or destruction of the connective tissue and alveolar bone that support the teeth. Subgingival microorganisms that adhere to and grow in the periodontal pocket, along with excessive and aggressive immune response against these microorganisms, are considered to cause periodontitis. Therefore, the primary purpose of periodontal treatment is to control subgingival microorganisms. In addition to removal of the etiological agent of periodontitis, the control of risk factors of periodontitis is also essential to maintain periodontal health. Several studies have established that factors such as tobacco use [3,4], excessive alcohol consumption [5,6], diabetes mellitus [7], and dyslipidemia [8] are risks for periodontitis. Other studies have also suggested that nutritional modulation, which is influenced by lifestyle, environmental and genetic exposure, may be involved in periodontitis [9,10]. For instance, an epidemiological study suggested a modest inverse association between the intake of green tea and periodontitis [11]. Also, recent research has reported that a high intake of fruits and vegetables may be inversely associated with periodontitis progression [12]. Furthermore, it is reported that higher daily intakes of milk and fermented foods may be protective against periodontitis [13]. However, the association between nutritional factors and periodontitis is still not completely understood. Coffee is one of the most consumed drinks in the world. Several studies have suggested that coffee polyphenols including chlorogenic acid are potent chemopreventive agents [14,15]. Epidemiological studies demonstrate that a higher intake of coffee is associated with lower grade of nonalcoholic fatty liver disease [16], metabolic syndrome [17], liver cancer [18], and lower prevalence of oral, pharyngeal, and esophageal cancers [19]. Moreover, coffee consumption was reported to be inversely associated with markers of inflammation and endothelial dysfunction [20]. Therefore, it is possible that coffee consumption affects periodontitis. 
The maintenance phase of periodontal treatment is important for maintaining the periodontal condition after initial preparation therapy, periodontal surgical therapy or therapy for recovery of oral function. It is necessary to determine the factors associated with periodontitis to maintain the periodontal condition, even in the maintenance phase of periodontal treatment. It has been shown that higher coffee consumption was associated with a significant reduction in the number of teeth with periodontal bone loss in men [21]. However, it remains unclear whether coffee consumption modulates periodontal condition during the maintenance phase of periodontal treatment. In the present study, we hypothesized that habitual coffee consumption was associated with periodontal condition during the maintenance phase of periodontal treatment. Therefore, the purpose of this cross-sectional study was to investigate the relationship between coffee consumption and periodontal condition in the maintenance phase of periodontal treatment. Study Population Four hundred and thirty chronic periodontitis patients (mean ± standard deviation (SD), 66.4 ± 9.9 years) were recruited at the Department of Preventive Dentistry, Okayama University Hospital from June 2013 to December 2013. Chronic periodontitis was defined as ≥1 tooth sites with probing pocket depth (PPD) ≥4 mm [22]. The entrance criteria of maintenance therapy were absence of acute inflammation and no use of antibiotics for over 6 months after initial preparation. All participants received comprehensive dental care that included non-surgical periodontal therapy consisting of oral examination, oral hygiene instructions, supra/sub-gingival debridement and scaling and root-planing of all pockets (≥4 mm) every 3 to 4 months. At the onset of the study period, they had already entered the maintenance phase for over 1 year. Period of maintenance phase (mean ± SD) was 10.9 ± 6.7 years. Exclusion criteria were pregnancy and use of antibiotic drugs within 3 months; 16 participants were excluded based on these criteria. Data of 414 participants were therefore analyzed. This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human participants were approved by the Ethics Committee of Okayama University. After obtaining written informed consent, a detailed medical questionnaire was completed by the dentists and participants who fulfilled the study requirements were enrolled. Oral Examination PPD and clinical attachment level (CAL) were determined at six sites (mesio-buccal, mid-buccal, disto-buccal, mesio-lingual, mid-lingual and disto-lingual) on all teeth using a color-coded probe (Hu-Friedy, Chicago, IL, USA). Sites that bled upon gentle probing with 25 g of probing force were recorded, and the percentage of sites with bleeding on probing (BOP) versus total sites was measured in each participant. Plaque levels (plaque control record (PCR)) were measured after staining with erythrosine, and were recorded in terms of presence or absence adjacent to the gingival margin at four sites (mesial, distal, buccal and lingual) around each tooth [23]. All clinical procedures were performed by six trained and calibrated dentists (Takaaki Tomofuji, Daisuke Ekuni, Tetsuji Azuma, Noriko Takeuchi, Takayuki Maruyama, and Tatsuya Machida). 
In order to check the intra-and inter-examiner agreement, measurements of PPD and CAL were recorded and repeated within a 2-week interval in eight randomly selected chronic periodontitis patients. Data were analyzed with the non-parametric κ test and intra-class correlation was determined. The κ coefficients for intra-and inter-examiner and intra-class correlation coefficients were >0.8. Physical Assessment The weight and height of participants were recorded from the questionnaire. Body mass index (BMI) (weight in kilograms per height in meter 2 ) was calculated for each participant. Questionnaire Dietary intake was assessed by a self-administered food frequency questionnaire. The questionnaire provided four categories of response to describe participants' frequency of vegetable and fruit consumption: ≤2 times/month; 1 to 2 times/week; 3 to 4 times/week; or every day [24]. We combined the categories of vegetable and fruit consumption into two categories: ≤3 to 4 times/week and every day. The questionnaire provided three categories of responses to describe participants' green tea and coffee consumption frequency: <1 cup/day; 1 to 3 cups/day; and ≥4 cups/day [19]. We further combined the categories of coffee consumption and green tea consumption into two categories: <1 cup/day and ≥1 cup/day [11,25]. The volume of a typical cup of green tea and coffee was 150 mL. The questionnaire also included details of the coffee typically consumed, details of alcohol drinking (never, former, or current) and smoking (never, former, or current), exercise habits (2 categories divided by median (120 min/week)), and tooth brushing frequency. We also collected data via questionnaire twice at an interval of more than 1 month from the present participants (n = 48) and performed Spearman's rank coefficient in test-retest method. In addition, we collected data in two different seasons to assess the seasonal changes in coffee, green tea, vegetable, and fruits consumption (n = 15). Assessment of Periodontitis Severity Periodontitis severity was determined using the consensus definitions published by the joint Center for Disease Control/American Association of Periodontology (CDC/AAP) working group [26]. Severe periodontitis was defined as '≥2 interproximal sites with CAL ≥6 mm (not on same tooth) and ≥1 interproximal site with PPD ≥5 mm'. Moderate periodontitis was defined as '≥2 interproximal sites with CAL ≥4 mm (not on same tooth), or ≥2 interproximal sites with PPD ≥5 mm (not on same tooth)'. Mild periodontitis was defined as '≥2 interproximal sites with CAL ≥3 mm (not on same tooth), and ≥2 interproximal sites with PPD ≥4 mm (not on same tooth) or one site with PPD ≥5 mm'. No periodontitis was defined as 'no evidence of mild, moderate, or severe periodontitis'. These criteria were applied to all permanent teeth except for third molars. Statistical Analysis Chi-square test or unpaired t-test were used to explore potential confounders for periodontitis severity [30][31][32]. Logistic regression analyses were performed with moderate/severe periodontitis and severe periodontitis as dependent variables. Independent variables were selected when the P value was <0.20 for the chi-square test or unpaired t-test in each variable and based on previous studies because it has been suggested that potential confounders should be eliminated only if P > 0.20 in order to prevent residual confounding [33]. The significance level was two-sided for each statistical comparison. 
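As a concrete reading of the CDC/AAP case definitions given above, the classification could be coded roughly as follows; the per-site data layout (tooth identifier, CAL and PPD in mm for interproximal sites of permanent teeth excluding third molars) and the helper names are our assumptions and were not part of the study protocol.

```python
def teeth_with_site_at_or_above(sites, measure, threshold):
    """Distinct teeth with >=1 interproximal site where `measure` >= threshold;
    each site is a (tooth_id, cal_mm, ppd_mm) tuple."""
    idx = {"cal": 1, "ppd": 2}[measure]
    return len({site[0] for site in sites if site[idx] >= threshold})

def classify_periodontitis(sites):
    """CDC/AAP severity category for one participant."""
    cal6_teeth = teeth_with_site_at_or_above(sites, "cal", 6)
    cal4_teeth = teeth_with_site_at_or_above(sites, "cal", 4)
    cal3_teeth = teeth_with_site_at_or_above(sites, "cal", 3)
    ppd5_teeth = teeth_with_site_at_or_above(sites, "ppd", 5)
    ppd4_teeth = teeth_with_site_at_or_above(sites, "ppd", 4)
    ppd5_sites = sum(1 for _, _, ppd in sites if ppd >= 5)

    # >=2 teeth with interproximal CAL >= 6 mm and >=1 site with PPD >= 5 mm
    if cal6_teeth >= 2 and ppd5_sites >= 1:
        return "severe"
    # >=2 teeth with CAL >= 4 mm, or >=2 teeth with PPD >= 5 mm
    if cal4_teeth >= 2 or ppd5_teeth >= 2:
        return "moderate"
    # >=2 teeth with CAL >= 3 mm and (>=2 teeth with PPD >= 4 mm or one PPD >= 5 mm)
    if cal3_teeth >= 2 and (ppd4_teeth >= 2 or ppd5_sites >= 1):
        return "mild"
    return "none"
```

The checks run from the most to the least severe definition, so each participant receives the highest category whose criteria are satisfied.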
Reported P values in logistic regression analyses were considered statistically significant if less than 0.05. We assessed the model fit using Hosmer and Lemeshow test. Analyses were performed using a statistical package (IBM SPSS statistics version 20, IBM Japan, Tokyo, Japan). Results The Spearman's rank coefficients in the test-retest method of all variables were more than 0.8. In addition, concordance rates of frequency for coffee, green tea, vegetable, and fruits consumption in different seasons were 0.722, 0.431, 0.452, and 0.185, respectively. Table 1 presents the characteristics of the participants. The prevalence of never alcohol drinker, vegetable consumption every day, green tea consumption ≥1 cup/day was more than 60% and that of coffee consumption ≥1 cup/day was more than 50%. On the other hand, the prevalence of the participants with BOP ≥20% was less than 15%. All of the participants' brushing frequency were ≥2 times/day. Results of the comparisons of the participants with different periodontitis severity are shown in Table 2. Variables with p values less than 0.2 for the chi-square test or unpaired t-test comparing the participants who had no/mild periodontitis with the participants who had moderate/severe periodontitis were gender, age, BMI, hypertension, smoking, number of teeth present, PCR, and duration of maintenance phase. Similarly, variables with p values less than 0.2 for the chi-square test or unpaired t-test comparing the participants who had no/mild/moderate periodontitis with the participants who had severe periodontitis were age, hypertension, smoking, alcohol consumption, vegetable consumption, coffee consumption, feature of drinking coffee (consumed with sugar), number of teeth present, and BOP. Table 3 represents the results of the logistic regression analysis with moderate/severe periodontitis as dependent variable. Moderate/severe periodontitis was related with presence of hypertension (Odds ratio (OR) = 1.99, p < 0.05), smoking (former, OR = 5.63, p < 0.01; current, OR = 6.81, p = 0.076), number of teeth present (OR = 0.89, p < 0.001), PCR ≥20% (OR = 1.88, p < 0.05), and duration of maintenance phase (OR = 1.07, p < 0.01) after adjusting for gender, age, BMI, hypertension, smoking, number of teeth present, PCR, and duration of maintenance phase. Table 4 represents the results of the logistic regression analysis with severe periodontitis as dependent variable. Severe periodontitis was related with smoking (former, OR = 1.35, p = 0.501; current, OR = 3.98, p < 0.05), coffee consumption (≥1 cup/day, OR = 0.55, p < 0.05), number of teeth present (OR = 0.95, p < 0.05), and BOP ≥ 20% (OR = 3.67, p < 0.001) after adjusting for age, hypertension, smoking, alcohol consumption, vegetable consumption, coffee consumption, feature of drinking coffee (consumed with sugar), number of teeth present, and BOP. Discussion This cross-sectional study assessed the relationship between habitual coffee consumption and periodontal condition in the maintenance phase of periodontal treatment. We found that the group drinking ≥1 cup of coffee/day had lower prevalence of severe periodontitis than the group drinking <1 cup of coffee/day after adjusting for the independent variables. This indicates that habitual coffee consumption was related to severe periodontitis. On the other hand, there was no significant difference between the group drinking ≥1 cup of coffee/day and <1 cup of coffee/day in the prevalence of moderate periodontitis. 
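As a rough illustration of the workflow behind the odds ratios in Tables 3 and 4 — univariable screening at P < 0.20 followed by multivariable logistic regression whose exponentiated coefficients give adjusted ORs — a minimal sketch is shown below. The data frame and variable names are invented, and the study itself used IBM SPSS rather than Python.

```python
# Hypothetical sketch of the screen-then-adjust workflow (the study used SPSS).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 414
df = pd.DataFrame({
    "severe": rng.integers(0, 2, n),        # outcome: severe periodontitis (0/1)
    "coffee_daily": rng.integers(0, 2, n),  # >=1 cup/day vs <1 cup/day
    "age": rng.normal(66, 10, n),
    "smoker": rng.integers(0, 2, n),
    "teeth_present": rng.integers(10, 29, n),
    "bop_high": rng.integers(0, 2, n),      # BOP >= 20%
})

# Step 1: univariable screening; keep candidates with P < 0.20.
candidates = []
for var in ["coffee_daily", "age", "smoker", "teeth_present", "bop_high"]:
    if df[var].nunique() == 2:
        table = pd.crosstab(df[var], df["severe"])
        p = stats.chi2_contingency(table)[1]           # chi-square test
    else:
        g0, g1 = df.loc[df.severe == 0, var], df.loc[df.severe == 1, var]
        p = stats.ttest_ind(g0, g1)[1]                  # unpaired t-test
    if p < 0.20:
        candidates.append(var)

# Step 2: multivariable logistic regression; exponentiate coefficients to get ORs.
formula = "severe ~ " + " + ".join(candidates if candidates else ["coffee_daily"])
fit = smf.logit(formula, data=df).fit(disp=0)
odds_ratios = pd.DataFrame({"OR": np.exp(fit.params),
                            "CI_low": np.exp(fit.conf_int()[0]),
                            "CI_high": np.exp(fit.conf_int()[1]),
                            "p": fit.pvalues})
print(odds_ratios.round(3))
```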
Although habitual coffee consumption could prevent the progression of periodontitis, it may have little effect in the early stage of periodontitis. Coffee contains some chemical compounds; the phenolic compounds of coffee (chlorogenic acid, ferulic acid, and p-coumaric acid) are known to have a strong protective antioxidant property [20]. A previous study reported that dihydrocaffeic acid, which is detected in human plasma following coffee ingestion, scavenges intracellular reactive oxygen species (ROS) [34]. These findings suggest that the systemic increase in anti-oxidative property following coffee consumption contributes to a decrease in ROS-induced damage at the local level. ROS is involved in the pathology of periodontitis [35]. Therefore, the anti-oxidative property of coffee may correlate with severe periodontitis in our findings. However, further studies are needed to clarify this point. In this study, we divided coffee consumption status into <1 cup/day and ≥1 cup/day. A previous study [25] reported that coffee (≥1 cup/day) consumption reduced the risk of cardiovascular disease because of its antioxidant activities. Consumption of coffee (≥1 cup/day) was associated with lower risk of upper gastrointestinal cancer in a Japanese population [19]. Therefore, we estimated that consumption of ≥1 cup of coffee/day could affect the periodontitis severity because of its antioxidative properties. In our population, the concordance rate of green tea consumption in different seasons was moderate (n = 15). This indicates that frequency of green tea consumption varied according to the seasons. It is reported an inverse correlation between green tea consumption and periodontitis [11]. However, since the present investigation was performed in seasons different from the previous study, similar relationship might not be observed. All participants in this study had already received supportive periodontal care for over 1 year. Moreover, all of the participants' brushing frequency was ≥2 times/day and the low percentage of sites with BOP reflected the periodontal status of the well-maintained patient population. This suggests that the lower prevalence of severe periodontitis in the present population was associated with coffee consumption, even when local disease activity of periodontitis was little. Although the main purpose of periodontal treatment is to eliminate subgingival microorganisms, systemic therapeutic approaches including dietary consultation may also offer clinical benefits in the prevention of severe periodontitis in patients during the maintenance phase of periodontal treatment. Investigators have studied the association between coffee consumption and inflammation. High caffeinated coffee consumption was associated with lower plasma inflammatory parameters including E-selectin and C-reactive protein (CRP) in women with type 2 diabetes [20]. It is also known that the serum CRP concentration is progressively lower with higher intake of coffee in men with high serum γ-glutamyltransferase [36]. On the other hand, a cross-sectional study found that coffee consumption showed significant positive associations with adiponectin and total and low-density lipoprotein cholesterol, and inverse associations with leptin, high sensitivity CRP, triglycerides and liver enzymes [37]. Furthermore, a 30-year longitudinal study conducted between 1968 and 1998 [21] reported that coffee consumption might be protective against periodontal bone loss in adult men. 
These findings are consistent with the present concept that coffee consumption was related to severe periodontitis in the maintenance phase of periodontal treatment. In this study, 58.9% of participants drank ≥1 cup of coffee/day. According to a previous study [24], among 38,701 participants (18,867 men and 19,834 women, aged 40-64 years) who participated in a large-scale prospective cohort study in Japan, 45.5% of participants drank ≥1 cup of coffee/day. In addition, prevalence of the participants with mild, moderate, and severe periodontitis in this study was 2.9%, 49.3%, and 21.5%, respectively. The prevalence of severe periodontitis in this study may be high, because it is reported that the prevalence of the participants with mild, moderate, severe periodontitis in the non-institutionalized United States population aged ≥65 years were 5.9%, 53.0%, and 11.2%, respectively [38]. The discrepancy of these percentages might depend on the characteristics of participants. Therefore, the limitation of the present study is that all participants were recruited at the Okayama University Hospital, and this may limit the ability to extrapolate these findings to the general population. We used two cut-off points, namely moderate/severe periodontitis and severe periodontitis, because the relationship between coffee consumption and periodontitis severity was unclear. In the two logistic regression models (Tables 3 and 4), different confounding variables were selected. Effects of these variables on moderate/severe periodontitis and severe periodontitis were different. The effect of nutritional status on moderate periodontitis may be weak because other variables including age and PCR (%) have a strong effect on it. In contrast, although dental plaque can cause periodontitis, PCR (%) may not sufficiently reflect the subgingival plaque and dental calculus in deep periodontal pockets and it may have a weak effect on severe periodontitis. A further study is needed to clarify the relationship between these confounding variables and the periodontitis severity during the maintenance phase of periodontal therapy. This study has other limitations. We did not record sociological factors, which are known to influence periodontal condition. Also, the reliability and validity of the questionnaire used in our study were not tested sufficiently. The above additional information would increase the validity of the presently established relationship between habitual coffee consumption and severe periodontitis. Additionally, the present study was a cross-sectional study. Further longitudinal studies are needed to clarify the relationship between coffee consumption and periodontitis progression. Conclusions There appears to be an inverse association between drinking ≥1 cup of coffee/day and severe periodontitis during the maintenance phase of periodontal therapy.
Limited T Cell Receptor Repertoire Diversity in Tuberculosis Patients Correlates with Clinical Severity Background The importance of CD4+ and CD8+ T cells in protection against tuberculosis (TB) is well known, however, the association between changes to the T cell repertoire and disease presentation has never been analyzed. Characterization of T-cells in TB patients in previous study only analyzed the TCR β chain and omitted analysis of the Vα family even though α chain also contribute to antigen recognition. Furthermore, limited information is available regarding the heterogeneity compartment and overall function of the T cells in TB patients as well as the common TCR structural features of Mtb antigen specific T cells among the vast numbers of TB patients. Methodology/Principal Findings CDR3 spectratypes of CD4+ and CD8+ T cells were analyzed from 86 patients with TB exhibiting differing degrees of disease severity, and CDR3 spectratype complexity scoring system was used to characterize TCR repertoire diversity. TB patients with history of other chronic disease and other bacterial or viral infections were excluded for the study to decrease the likely contribution of TCRs specific to non-TB antigens as far as possible. Each patient was age-matched with a healthy donor group to control for age variability. Results showed that healthy controls had a normally diversified TCR repertoire while TB patients represented with restricted TCR repertoire. Patients with mild disease had the highest diversity of TCR repertoire while severely infected patients had the lowest, which suggest TCR repertoire diversity inversely correlates with disease severity. In addition, TB patients showed preferred usage of certain TCR types and have a bias in the usage of variable (V) and joining (J) gene segments and N nucleotide insertions. Conclusions/Significance Results from this study promote a better knowledge about the public characteristics of T cells among TB patients and provides new insight into the TCR repertoire associated with clinic presentation in TB patients. Introduction Tuberculosis (TB) is a common, worldwide contagious infection caused by infections with Mycobacterium tuberculosis (Mtb) [1]. Mtb is an intracellular bacterium that can be cleared following the elicitation of effective cell-mediated immune responses [2,3]. Control of Mtb infections is mediated primarily by the development of T helper 1 (Th1) type immune response that involves the participation of CD4 + , CD8 + T lymphocytes, and macrophages [4][5][6]. The central role of CD4 + T cells in protection against TB has been clearly demonstrated by the increased susceptibility to Mtb infections in HIV/AIDS patients in correlation with diminished CD4 + T cell counts [7,8]. However, strong CD4 + T cell responses cannot alone resolve TB, that is, CD8 + T cells are also necessary based on in vivo experiments carried out in murine Mtb infection models [9][10][11]. Mtb-reactive CD8 + T cells have also been reported to be present at high frequencies in the circulation of patients with active tuberculosis, further supporting a role for CD8 + T cells in mediating immunity to Mtb [12]. To date, the importance of CD4 + and CD8 + T cells in protection against TB appears clear [13,14], clonally expanded CD8 + T cells has been found in granuloma lesions and PBMCs in Mtb-infected individuals [15,16]. 
However, the heterogeneity compartment of the CD4 + and CD8 + T cells in TB patients as well as the influence of TB disease severity on the CD4 + and CD8 + T cell receptor (TCR) repertoire diversity has never been analyzed. The Mtb antigen specific TCR gene modified CD4 + and CD8 + T cells has been developed for autologous adoptive T cell immunotherapy study [17]. In an attempt to identify and characterize Mtb antigen specific T cells among the vast numbers of TB patients [18], it is important to detect Mtb antigen specific T cells which share TCR structural features, i.e., biased utilization of particular TCR variable gene fragment or contain a public complement determining region 3 (CDR3) motif [19,20]. But the common TCR structural features of Mtb antigen specific T cells among the vast numbers of TB patients are rarely reported. In this study, we analyzed the CD4 + and CD8 + TCR repertoire from 86 TB patients with differing levels of disease severity. Many factors influence the TCR repertoire of each individual including antigen stimulation, age, and chemotherapy etc. The extent of the likely contribution of TCRs specific to non-TB antigens is not known. In order to reduce the impaction of non-TB antigens to the TCR repertoire as far as possible, TB patients had chronic disease history or with other bacterial or viral (include HIV) infections, and patients accepted chemotherapy or other biological treatment in the past five years were excluded for the study. Our results showed that age-matched healthy controls had a normally diversified TCR repertoire while TB patients represent with restricted TCR repertoire. Patients with mild disease had the highest diversity of TCR repertoire while severely infected patients had the lowest, suggest TCR repertoire diversity had a significantly inversely correlates with disease severity, which was independent of other clinical parameters. In addition, TB patients showed preferred usage of certain TCR types and sequencing of the TCRs showed a high frequency usage of certain V and J gene segments and CDR3 motif. Data presented in this report promote a better understanding of the common characteristics of T cells among TB patients and provides new insight into the TCR repertoire associated with clinic presentation in TB patients. Study Population 86 TB patients recruited from the Nanfang Hospital were assessed for eligibility (26 female, 60 male, mean age 43.77614.36 years, range 18-74 years). No patient had a history of chronic lung disease or other diseases such as cancer, heart, kidney failure or autoimmune diseases, nor was they diagnosed with other bacterial or viral (include HIV) infections, and none of patients accepted chemotherapy or any other biological treatment in the past five years. TB diagnosis was made based on clinical signs and symptoms as well as chest X-ray (CXR) and acid-fast smear examination. Informed written consent was obtained from each patient before the start of the study, and the study was approved by the Ethics Committee at Southern Medical University. Two radiologists, blinded to patient's clinical details, classification patients into mild, moderate and severe tuberculosis groups based on results of posteroanterior CXR (performed at the time of TB diagnosis). 
The classification criteria were as follows: percentage of lung affected ≤25%, effusion ≤25% and no cavitation were graded as mild lesions; 25% < percentage of lung affected ≤50%, 25% < effusion ≤50% and cavitation ≤2 cm were graded as moderate lesions; and percentage of lung affected >50%, effusion >50% and cavitation >2 cm were graded as severe lesions. Each patient's clinical severity was also checked by two experienced pulmonologists based on an integrated assessment of the sputum smear grade, radiographic result, clinical symptoms, and PPD skin test result. The 86 patients were then divided into 3 groups: i) mild (n = 25, 16 males, 9 females, mean age of 44.0±14.04), ii) moderate (n = 28, 20 males, 8 females, mean age of 44.96±14.72) and iii) severe (n = 33, 24 males, 9 females, mean age of 42.58±14.63). T cells were obtained from blood samples before chemotherapy for the CDR3 spectratype analysis. A group of 36 age-matched healthy blood donors was used to control for age, and each patient was matched with a healthy group consisting of 3 healthy donors age-matched within 3 years. These donors had a BCG vaccination and a positive PPD skin test (5–15 mm). All of the healthy blood donors had no abnormal chest radiographs and no laboratory evidence of latent TB infection (determined by the antigen-specific IFN-γ assay, T-SPOT.TB). All controls also had no clinical or laboratory evidence of connective tissue diseases or immunological disorders and were enrolled and consented as described above for the study population. Forty-three of the 86 patients and 11 of the 36 healthy controls were smokers, with a mean cumulative cigarette consumption of 564 pack-months and 362 packs/month, respectively. Analysis of CDR3 Spectratype by GeneScan Peripheral blood mononuclear cell (PBMC) isolation, CD4 + and CD8 + T cell separation, CDR3 spectratype analysis, and use of the GeneScan CDR3 spectratype complexity scoring system were performed as previously described [21,22]. To assess the reliability of the CDR3 spectratype complexity scoring system, samples drawn from the same individual at least two times were analyzed; the results demonstrated that the scoring system is reliable. We used the relative complexity score (patient's CDR3 spectratype complexity score divided by the mean complexity score of the age-matched healthy donor group) to control for age variability when comparing TCR repertoire diversity among the 3 patient groups. Sequencing the CDR3 Corresponding to TCR Va and Vb Families The TCR Va or TCR Vb families showing CDR3 spectratypes with single peaks following GeneScan analysis were selected for sequencing, since single peaks of defined CDR3 length often indicate monoclonal T cell expansions according to the specific characteristics of the hypervariable NDN regions [23]. PCR products were amplified using the same TCR Va or TCR Vb family sense primers and TCR Ca or Cb anti-sense primers (not FAM-labeled) under PCR conditions similar to those used for the first amplification. The PCR products were purified by gel electrophoresis and inserted into a pGEM-T vector (Promega Co., Madison, USA), and the vector was then sequenced to obtain the nucleotide sequences of the CDR3, avoiding ambiguity in cases where a single peak reflects two or more different NDN regions of identical length. Nucleotide sequences of the amplified products were determined using an ABI 377 DNA sequencer (Applied Biosystems, Foster City, CA).
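The radiographic grading rules above are essentially a small decision procedure; the sketch below shows one hypothetical way of encoding them. Taking the worst (maximum) grade across the three features is an assumption made for illustration, since the paper grades lesions as a whole.

```python
# Hypothetical encoding of the CXR severity criteria quoted above.
# Grading by the worst single feature is an illustrative assumption.
def grade_feature(value: float, low: float, high: float) -> int:
    """Return 1 (mild), 2 (moderate) or 3 (severe) for one feature."""
    if value <= low:
        return 1
    if value <= high:
        return 2
    return 3

def cxr_severity(lung_affected_pct: float, effusion_pct: float,
                 cavitation_cm: float) -> str:
    grades = [
        grade_feature(lung_affected_pct, 25, 50),  # % of lung affected
        grade_feature(effusion_pct, 25, 50),       # % effusion
        grade_feature(cavitation_cm, 0, 2),        # any cavitation is at least moderate
    ]
    return {1: "mild", 2: "moderate", 3: "severe"}[max(grades)]

print(cxr_severity(20, 10, 0))    # -> mild
print(cxr_severity(40, 30, 1.5))  # -> moderate
print(cxr_severity(60, 55, 3.0))  # -> severe
```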
Statistical Analyses The paired t test was used to compare the means of the CDR3 spectratype complexity scores obtained from patients and healthy controls as well as CD4 + and CD8 + T cell subsets. One-way analysis of variance (One-way ANOVA) was used to test for relative complexity score differences among the 3 patient groups. K independent samples test was used to assess differences between patient age, gender, BCG vaccine history, tuberculin skin test results, previous TB or TB contact history, smoking status, and type of TB. Correlations between relative complexity scores and disease severity, as well as between relative scores and other Demographic and Clinical Details Demographic and clinical characteristics, as well as CDR3 spectratype complexity scores data for the TB patients are shown in Table 1. T Cell Clonality in TB Patients We compared the CDR3 spectratypes of the 86 TB patients before treatment to the CDR3 spectraypes of the 36 age-matched healthy (uninfected) controls. Results demonstrated that most healthy controls displayed a normally diversified TCR repertoire based on the CDR3 spectratypes of their TCR Va and Vb gene sequences which showed a Gaussian distribution involving about 8 peaks (Fig. 1). The 9 elderly, age-matched healthy controls (age.60) presented with 4-6 oligoclonal subfamilies and 1-3 monoclonal subfamilies. In contrast, only rarely did TB patients present with a normal spectratype pattern, that is, the vast majority of TB patients presented with a restricted TCR repertoire by most of TCR Va and Vb gene families' CDR3 spectratype showing fewer than 8 peaks or even just a single peak (Figure 1) Representative spectratype profiles of patients are shown in Figure 2 and Figure 3. Sequence Analysis of the TCR a and b Chain CDR3 Regions PCR products from the TCR Va and Vb gene families showing CDR3 spectratype with single peaks following GeneScan analysis were selected for sequencing. This analysis identified a high frequency use of J gene segments Ja34, Jb2-1, and of N nucleotide insertions in both aand b-chains in these TB patients, that is, a highly conserved GGGGNKLI, GGGNEQYF, APDTGSGAF amino acid motif in the CDR3 of TCR a chains and GGGNKLI, TNKLI, SADKLI motif in the b chains. We also examined the MHC class I and class II haplotypes of the TB patients with conserved CDR3 motifs. This analysis did not identify a correlation between HLA haplotypes and conserved CDR3 motif usage ( Table 2 and Table 3). TB Patient Spectratype Complexity Scores The results of 86 TB patients obtained from the examination before chemotherapy was included for comparison of the TCR repertoire diversity between TB patients and age-matched healthy controls. CDR3 scoring system was used to obtain a uniform standard to quantify TCR repertoire diversity. The CDR3 spectratype complexity score for each patient was lower than that of the corresponding age-matched healthy controls. .51 for Vb, P = 0.000) in TB patients were significantly lower than those of age-matched healthy controls. The complexity scores of CD4 + T cells were significantly higher than those of CD8 + T cells both in healthy controls and in TB patients (P = 0.000). Relationship between the TCR Repertoire Diversity and Clinical Severity It was previously reported that older, healthy individuals had a restricted T cell repertoire [24,25]. To control for the influence of age, each patient was matched with a healthy group consisting of 3 healthy donors age-matched within 3 years. 
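A minimal sketch of the matching and normalisation just described — picking 3 healthy donors within ±3 years of each patient and expressing the patient's complexity score relative to their mean — is given below; the donor pool and scores are invented purely for illustration.

```python
# Hypothetical illustration of age-matching and the relative complexity score.
import numpy as np

def pick_matched_donors(patient_age: int, donor_ages: list, k: int = 3,
                        window: int = 3) -> list:
    """Indices of k healthy donors whose age is within +/- `window` years."""
    eligible = [i for i, a in enumerate(donor_ages) if abs(a - patient_age) <= window]
    if len(eligible) < k:
        raise ValueError("not enough age-matched donors in the pool")
    # take the k donors closest in age
    return sorted(eligible, key=lambda i: abs(donor_ages[i] - patient_age))[:k]

def relative_complexity(patient_score: float, donor_scores: list) -> float:
    """Patient CDR3 complexity score divided by the mean of the matched donors."""
    return patient_score / float(np.mean(donor_scores))

donor_ages = [25, 31, 38, 44, 45, 47, 52, 60, 63]
donor_scores = [7.9, 8.1, 7.6, 7.8, 8.0, 7.7, 7.4, 6.9, 6.5]

idx = pick_matched_donors(patient_age=45, donor_ages=donor_ages)
matched = [donor_scores[i] for i in idx]
print(relative_complexity(patient_score=5.2, donor_scores=matched))  # < 1 => restricted repertoire
```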
The ratio of the scores of the patient to that of the mean scores of the age-matched healthy group was used as a parameter to compare the TCR repertoire diversity among patient groups with different disease severities. Patients in the mild group were found to have the highest relative complexity scores for both CD4 + T cell subsets. In contrast, the severe group had the lowest relative scores for both CD4 + and CD8 + T cells ( Table 1). The relative complexity scores (TCR repertoire diversity) showed a significant negative correlation with disease severity, and have a certain degree of correlation with disease types. Importantly, comparisons between complexity scores for patients in the 3 groups revealed that other clinical parameters, including gender, BCG vaccine history, tuberculin skin test results, previous TB or TB contact history and smoking status could not account for the observed differences in TCR repertoire diversity (Table 4). Discussion T lymphocytes have been shown to play critical roles in mediating anti-TB immune responses. Specifically, protective immunity against Mtb involves both CD4 + and CD8 + T cells. A majority of circulating mature T cells recognizes TB antigens via a/b TCRs. The a/b TCR repertoire diversity can be affected by CD4 + and CD8 + T cell clonal expansion following a TB infection that affects a patient's immune response to this pathogen and in turn, disease severity. We evaluated the diversity of CD4 + and CD8 + T lymphocytes in TB patients in an attempt to discover the relationship between T cell repertoire diversity and disease presentation. We demonstrated that CD4 + and CD8 + T lymphocyte expansion followed Mtb infection and was accompanied by the preferential expression of certain TCR Va and Vb gene families, supporting the previous work that demonstrated clonal expansion of Vb 2 + CD8 + effector T cells in the peripheral blood of pediatric tuberculosis patients [20] and selection expansion of HLA-DR17, DQ2-restricted Va 2.3 + CD4 + T cells following stimulation with live Mtb or soluble Mtb extracts [26]. Although the Va and Vb family expansion profiles differed between patients in our study, particular types of Va and Vb TCR gene families showed preferred usage more frequently than other gene families in TB patients, suggesting a common antigen recognize specificity in the TB patients. To better understand the share nature of the T cells in disease presentation associated with Mtb infections, we sequenced the CDR3 of the clonally expanded T cells and found a highly conserved GGGGNKLI, GGGNEQYF, APDTGSGAF amino acid motif in the CDR3 of TCR a chains and GGGNKLI, TNKLI, SADKLI motif in the b chains as well as an increased frequency of Ja34 and Jb2s7 in the proximal region of the TCR CDR3 which may be linked with certain common Mtb antigen identify attributes and will facilitate us to development of allogeneic adoptive T cell immunotherapy. The variability of the expanded V families between individuals stimulation by Mtb 16-kDa antigen was previously described in the context of HLA polymorphisms [27]. However, no evidence demonstrated that similarly expanded Va or Vb family clones, or the conserved CDR3 sequence, occurred in patients with similar HLA backgrounds and cross-reactive TCR responses had been reported presented by different MHC class II molecules [28]. To determine whether the conserved CDR3 sequence was associated with a common HLA phenotype, we determined the HLA types of patients with conserved CDR3 sequences. 
The results indicated that conserved CDR3 sequences were independent of the HLA phenotype, suggesting that common TCR structural features exist in TB-associated T cells even in individuals with different HLA phenotypes. Although TCR repertoire drift has been described in peripheral blood samples collected from a small number of TB patients, previous studies ignored the influence of disease presentation on TCR repertoire diversity. Because TCR repertoire diversity is shaped by many factors, the extent of the likely contribution of TCRs specific to non-TB antigens is not known. It is therefore critical to reduce the impact of other factors on TCR repertoire diversity as far as possible. We selected TB patients with no history of chronic lung disease or other systemic diseases and without other bacterial or viral (including HIV) infections to decrease this contribution, and tested patients prior to chemotherapy, because chemotherapy can substantially change the TCR repertoire. Mtb infection with a high bacterial load may change the T cell immune response by i) inducing anergy of skin DTH and T cell proliferation, ii) shifting the immune response from Th1 to Th2, and consequently, iii) enhancing B cell differentiation and antibody responses, which may be associated with a more restricted TCR repertoire. Theoretically, patients with severe disease carry a higher Mtb burden than patients with mild or moderate TB [29]. Consistent with this, the results described in this study demonstrated that TCR repertoire diversity correlated negatively with disease severity; that is, patients with mild disease had the highest relative complexity while severely infected patients had the lowest. TCR repertoire diversity also showed a certain degree of correlation with disease type, possibly because certain TB types are themselves correlated with disease severity. For example, tuberculous pleurisy usually represents a mild type of TB that generally resolves without treatment, whereas hematogenous disseminated pulmonary tuberculosis is a severe type in which patients are acutely ill with high fever and at risk of death [30]. Due to limitations in collecting blood samples from more patients at different stages of disease, our analysis was restricted to pre-treatment patients. A study including a cohort with different disease stages would allow for confirmation of the data presented in this report. However, it is important to note that this study provides the first analysis describing the correlation between disease severity and TCR repertoire diversity in vivo, thereby providing useful information that allows for a better understanding of anti-Mtb immune responses. In addition, the public characteristics of T cells among TB patients may be associated with certain common (conserved) TB antigens. Future analysis of the antigen/epitope specificity of TCRs containing conserved CDR3 motifs will be the next step of our work.
Gallic acid protects the liver against NAFLD induced by dust exposure and high-fat diet through inhibiting oxidative stress and repressing the inflammatory signaling pathways NF-kβ/TNF-α/IL-6 in Wistar rats Objective: The burden of diseases and death related to environmental pollution is becoming a major public health challenge. This study was designed to evaluate the deleterious effects of a combination of dust exposure and high-fat diet on liver function. Gallic acid as a potent antioxidant was used to prevent/alleviate non-alcoholic fatty liver disease (NAFLD) in rats exposed to dust and HFD. Materials and Methods: 24 rats were randomly divided into 3 experimental groups: HFD+Clean air, HFD+N/S+Dust and HFD+gallic acid+Dust. Animals were exposed to CA/ dust for six weeks on alternate days. At the end of the experiments, rats were anesthetized and samples were taken to perform molecular, biomedical, and histopathological evaluations. Results: Dust exposure induced NAFLD features in rats under HFD. Dust exposure and HFD disrupted liver enzymes and lipid profile. Dust exposure and HFD increased liver MDA level, mRNA expression of NF-Kβ, TNF-α, IL-6, Nrf2, HO1 and miRs122, and 34a. Dust+HFD also decreased liver total antioxidant capacity level. Pretreatment with GA improved almost studied variables in the HFD+GA+Dust group. Conclusion: The present study showed that HFD given for 6 weeks and dust exposure induced NAFLD in Wistar rats through inducing oxidative stress. Oxidative stress through activating the inflammatory pathways caused NAFLD features. GA pretreatment by inhibiting oxidative stress, effectively protected liver functions against HFD+Dust induced inflammation. Introduction The burden of diseases and death related to environmental pollution is becoming a major public health challenge, especially in developing countries. Gaseous pollutants (NOx, ozone, sulfur dioxide and carbon monoxide), persistent organic pollutants (dioxins, pesticides and furans), heavy metals (mercury, silver, lead, nickel, vanadium, manganese chromium and cadmium), and particulate matters (PMs) constitute air pollution compounds (Kampa and Castanas, 2008). Particulate matter (PM) is a generic term used for various kinds of air pollutant, with different sizes and compositions, that are produced by natural and anthropogenic phonemes (Poschl, 2005). The size of the particle matters according to different categories, was defined as: Ultrafine, Fine and Coarse particles(aerodynamic diameter smaller than 0.1 µm, equal to 1 µm and larger than 1 µm, respectively) (Kampa and Castanas, 2008). The existing literatures on air pollution have mostly focused on particulate matters and gaseous pollutant (SO 2 , ozone and CO) and less, on the effect of dust on health status (Al-Taiar and Thalib, 2014). The natural phenomenon, dust storm, transfers soil particles to another place, sometimes miles away from the point of dust storms origin (Al-Taiar and Thalib, 2014). Dust storm may carry fine and coarse particle matter fractions (Zauli Sajani et al., 2011) bio-particulates and microorganisms, pollen, and related protein and lipid components (Gonzalez-Martin et al., 2014). In spite of high frequency of dust storms in the Middle East, few studies have focused on its public health effects. A previous study showed that particle matter exposure via induction of oxidative stress, induced liver inflammation that have critical role in liver pathogenesis (Zheng et al., 2015). 
Mice exposed to PM2.5 demonstrated increased mRNA expressions of inflammatory mediators such as TNFα and IL-6 and showed Nonalcoholic fatty liver diseases (Tan et al., 2009). Non-alcoholic fatty liver disease is a spectrum of liver diseases ranging from simple non-alcoholic fatty liver (NAFL), to non-alcoholic steatohepatitis (NASH), and finally, liver irreversible cirrhosis (Brunt, 2001). A previous study showed that rats with sub-chronic exposure to PM2.5, exhibited liver histopathological changes and elevated serum aspartate aminotransferase and alanine aminotransferase (Li et al., 2018). Exposure to ultrafine PMs has been indicated to increase hepatic levels of malondialdehyde which shows systemic oxidative stress, and enhance the gene expression of antioxidants related to Nrf2 (Araujo et al., 2008). Nrf2 as a transcription factor regulates antioxidant response and promotes cellular pathways protecting against oxidative stress. Nrf2 plays vital role in almost all organs such as the liver, brain, and lung. A study demonstrated that exposure to air pollutants activated Nrf2 pathway (Jang et al., 2008). MicroRNAs (miRNAs) have important roles in physiological processes such as cell growth, differentiation and development (Bernardi et al., 2013). miR-122 shows liver health status and recognizes specific miRs in the liver (Siaj et al., 2012). MiR-34a reflects liver damage, and a direct correlation between the serum level of miR-34 and liver injury has been proven (McDaniel et al., 2014). Gallic acid (GA) as a phenolic compound is abundant in vegetables, tea, berries and wine. GA possesses different beneficial activities, including anti-inflammatory (Hsiang et al., 2013), antiobesity (Jang et al., 2008) and hepato protective effects (Chao et al., 2014). Thus, this study was designed to evaluate the deleterious effects of a combination of dust exposure and high fat diet, on liver function. Gallic acid as a potent antioxidant was also used to prevent/alleviate NAFLD in rats exposed to dust and high-fat diet. Chemicals Gallic acid was purchased from sigma Aldrich Co. (Germany). Kits for measuring malondialdehyde (MDA) and total antioxidant capacity (TAC) were purchased from ZellBio Co. (Germany). Kits for determination of ALT, AST, ALP, TG, cholesterol and HDL were purchased from Pars Azmun Company (Iran). Animals grouping Twenty-four adult male Wistar rats (200-250 g) were purchased from the animal house of AJUMS. Animals were housed in the standard cage for one week before initiation of the experiment. The animals were kept in the animal house of AJUMS, Ahvaz, Iran, under a dark-light cycle of 12 hr and a temperature of 22±2 o C. Rats had free access to standard rat chow diet and tap water. Rats were divided randomly into 3 groups (n=8 in each): HFD+CA (rats under high fat diet, exposed to clean air), HFD+N/S+Dust (animals under HFD received normal saline as vehicle just before dust exposure), and HFD+GA+Dust (rats under HFD, received oral gallic acid at 100 mg/kg, just before dust exposure). All rats were exposed to clean air or dust for 6 weeks (three days a week on alternate days). The following materials were added to standard rats'diet to make HFD including: cholesterol (0.4%), and beef tallow (30%) and supplemented with 30% fructose (Savari et al.). Animals in none of the experimental groups had free access to food and water during dust exposure period (Niwa et al., 2008). 
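The random allocation of the 24 rats into three groups of eight can be sketched as follows; the seed and animal identifiers are arbitrary and purely illustrative, not the authors' procedure.

```python
# Illustrative random allocation of 24 rats into 3 groups of 8 (not the authors' code).
import numpy as np

rng = np.random.default_rng(seed=42)          # arbitrary seed for reproducibility
rat_ids = [f"rat_{i:02d}" for i in range(1, 25)]
groups = ["HFD+CA", "HFD+N/S+Dust", "HFD+GA+Dust"]

shuffled = rng.permutation(rat_ids)
allocation = {g: sorted(shuffled[i * 8:(i + 1) * 8].tolist())
              for i, g in enumerate(groups)}
for g, members in allocation.items():
    print(g, members)
```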
All protocols and experiments were approved by Experimental Animals Ethics Committee of AJUMS (IR.AJUMS.ABHC.REC.1397.060). Dust sampling, and area of study Dust was collected in autumn (2018) from Ahvaz, the capital city of southwest Province of Iran, Khuzestan. This city is located at 31° 20 N, 48° 40 E geographically and is18 m above sea level (Heidari-Farsani et al., 2013). To collect dust, we placed large dishes on the Golestan medical students'dormitory roof for 3 months; then, the settled dust was collected and used in this study. Dust exposure To create a dusty environment, dust exposure chamber was designed ( Figure 1). Whole body exposure was performed 5 hr per day, 3 days per week (for 6 weeks) on alternate days, for a total of 90 hours. The concentration of PM10 during the study was 500-2000 µg/m 3 . On the 43rd day of the experiments and prior to sacrificing, animals were fasted overnight. All rats were anesthetized by ketamine and xylazine (80 and 6 mg/kg, i.p, respectively). After inducing deep anesthesia, abdominal cavity was opened. Blood collection was performed, and liver sections from left lateral lobe were rapidly removed, frozen in liquid nitrogen and then, stored at -80°C (Sellmann et al., 2015). Another liver tissue sample from left lateral lobe was fixed in formalin solution (10%). Histopathological analysis of the liver Formaldehyde fixed liver tissue, were embedded in paraffin, and sectioned (5 µm) by using a microtome. The sections were stained by hematoxylin and eosin. Histopathological analysis was done blindly under a light microscope. Liver histology scoring The following grading method was used to determine the histopathological changes of the liver. Fatty change was graded according to the percentage of hepatocytes containing macro-vesicular fat (grade 1: 0-25%; grade 2: 26-50%; grade 3: 51-75%; and grade 4, 76-100%) (Ip et al., 2003). The degree of inflammation and accumulation of RBCs is expressed as the mean of 10 different fields within each slide that had been classified on a scale of 0-3 (0: normal; 1: mild; 2: moderate; and 3: severe) (Bruck et al., 2003). Assessment of the activity of antioxidants and level of lipid peroxidation The frozen liver tissue was homogenized in1ml phosphate buffered saline (pH 7.4) and then centrifuged (15000 rpm for 15 min). The TAC and MDA levels in homogenates of liver tissue, were measured using specific kits according to the manufacturer's instructions. Measurement of the expression of miRs and mRNAs Total RNA (miRNAs and mRNAs) were extracted from the frozen serum samples using miRNeasy/Plasma kit and RNeasy plus mini kit, respectively (Qiagen, GmbH, Germany). After determination of the purity and concentration of the extracted RNA, cDNA was synthesized (Qiagen, GmbH, Germany). To quantify the expression levels of studied microRNAs (122 and 34a) and mRNAs (NFκB, TNF-α, IL-6, Nrf2 and HO1), semi-quantitative realtime PCR (qRT-PCR) was performed. Table 1 lists the sequence of forward and reverse primers used for amplifying of each gene. No-template negative control (H 2 O) was routinely run in every PCR. The levels of miRNAs and mRNAs expression were respectively normalized against RNU6 (as an internal control) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and the fold change was calculated using the 2 -ΔΔCt formula. Statistical analysis Results are expressed as means±standard error of means (SEM). 
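The fold-change calculation with the 2^-ΔΔCt formula mentioned above can be made concrete with a short sketch; the Ct values below are invented, and miR-122 is normalised against RNU6 as described (mRNAs would be normalised against GAPDH instead).

```python
# Illustrative 2^-ddCt fold-change calculation (Ct values are made up).
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression of a target vs. a reference gene, sample vs. control."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalise to RNU6 / GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# miR-122 in a dust-exposed sample vs. a clean-air control, normalised to RNU6:
fc = fold_change(ct_target_sample=24.1, ct_ref_sample=18.0,
                 ct_target_control=26.3, ct_ref_control=18.1)
print(f"fold change = {fc:.2f}")   # > 1 means higher expression than the control
```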
One-way analysis of variance (ANOVA) with LSD post hoc tests was used for identification of significant differences among the studied groups (IBM SPSS statistics 16). Kruskal-Wallis test was used to analyze histopathologic scoring. NFκβ: nuclear factor κβ, IL-6: interleukin6, TNFα: tumor necrosis factor α, Nrf2: nuclear factor erythroid-related factor 2, HO1: heme oxygenase-1, and GAPDH: glyceraldehyde-3phosphate dehydrogenase. Analysis of the given dust Analyzing the composition of the given dust in term of the heavy metals content showed the existence of 25 heavy metals: Ag, Al, As, B, Ba, Be, Cd, Co, Cr, Cu, Hg, Li, Mn, Mo, Ni, Pb, Sb, Se, Si, Sn, Sr, Ti, V and Zn. The concentrations of Aluminum, Manganese and Zinc in the given dust were the higher compared to blank sample (Figure 2). Rat whole body exposure to dust accompanied by HFD, induced nonalcoholic fatty liver disease phenotype To elucidate in vivo effects of whole body exposure to dust, male Wistar rats under HFD, were exposed to dust or clean air (CA) after 42 days. We tried to create a situation similar to the real life by this exposure. During the exposure period, the mean concentration of PM10 and PM2.5 in the exposure chamber was respectively 1388 and 416.4 µg/m 3 and it was 50 µg/m 3 for CA group. Changes in macroscopic appearance and microscopic architecture of the liver following exposure to dust As shown in Figure 3, accumulation of RBC and fatty deposit was seen in HFD+N/S+Dust rats' liver ( Figure 3b). No sign of fatty deposit was seen in HFD+GA+Dust group, but mild inflammation and accumulation of RBCs were seen in this group. H&E staining revealed that inflammation, accumulation of RBCs and fatty deposit in HFD+N/S+Dust group, were significantly higher than HFD+CA group. GA pretreatment also significantly reduced these levels (Table 2). The liver in HFD+N/S+ Dust exposed rats showed inflammation (I), accumulation of RBCs (A) and fatty deposit (F); and c: Animals in HFD+gallic acid + Dust group showed mild inflammation and blood cell accumulation but no signs of fatty deposit. Table 2. NAFLD grading and staging in Wistar rats exposed to clean or dusty air. Gallic acid increased total antioxidant capacity (TAC) while decreased hepatic malondealdehyde (MDA) level in dustexposed rats under high-fat diet As indicated in Figure 4, dust exposure significantly decreased hepatic TAC level in the HFD+N/S+Dust rats compared to the HFD+CA group while increased hepatic MDA level (p˂0.05 in both cases). Pretreatment with GA at 100 mg/kg reverted TAC and MDA levels to those of the HFD+CA group. Figure 4. Gallic acid decreased hepatic malondialdehyde (MDA) level while increased hepatic level of total antioxidant capacity (TAC) in rats exposed to dust under high-fat diet (HFD). Data are expressed as the means±SEM.*p˂0.05 compared to the HFD+CA group and #p<0.05 compared to the HFD+N/S+Dust group. Gallic acid pretreatment decreased serum levels of miR-122 and miR-34 following dust exposure in rats under high-fat diet As illustrated in Figures 5 (a and b), serum levels of miR-34a and miR-122 significantly increased following dust exposure in rats under high-fat diet (p˂0.05 in both cases). GA pretreatment (100 mg/kg) just before dust exposure, reverted the level of mir-34a and miR-122 to that of the HFD+CA group. Figure 5. Effect of dust exposure and GA pretreatment on serum levels of (a) miR-122 and (b) miR-34a in rats under high-fat diet (HFD). The data are presented as the mean±SEM. 
*p<05 compared to the HFD+CA group; and #p<0.05 compared to the HFD+N/S+Dust group. A B Gallic acid improved lipid profile in dust-exposed rats under high-fat diet Exposure to dust for 6 weeks, significantly increased serum levels of TG and cholesterol in the HFD+N/S+Dust rats compared with the HFD+CA group (p˂0.05 in both cases) (Figure 6a-b). There was no significant difference in serum LDL and HDL level between the HFD+CA and HFD+Dust groups. Pretreatment with GA at 100 mg/kg for 6 weeks just before dust exposure, significantly decreased serum level of TG, cholesterol and LDL, while increased serum HDL level (p˂0.05 in all cases). Gallic acid improved liver enzyme disturbances in dust-exposed rats under high-fat diet As demonstrated in Figure 7, there was no significant difference in ALT serum level between CA-and Dust-exposed rats under HFD. Serum levels of AST and ALP increased significantly in the HFD+N/S+Dust group compared with the HFD+CA rats (p˂0.05 in both cases) (Figure 7a-c). GA pretreatment decreased serum levels of AST, ALT and ALP (p˂0.05 in all cases). Effect of dust exposure and gallic acid pretreatment on mRNA expression of NF-κβ, pro-inflammatory cytokines, Nrf2 and HO1 As shown in Figure 8(a-c), exposure to dust increased the mRNA expression levels of NF-κβ, IL-6 and TNF-α in the HFD+N/S+Dust compared to the HFD+CA group (p˂0.05 in both cases). GA pretreatment before dust exposure significantly prevented the increment of these levels (p˂0.05 in both cases). Nrf2 and HO1 mRNA expression in the HFD+N/S+Dust and HFD+GA+Dust rats increased significantly (p˂0.05 in both cases). There was no significant difference between the HFD+N/S+Dust and HFD+GA+Dust rats in Nrf2 and HO1mRNA expression. The origin of the collected dust One of several dust events that happened in autumn 2018 is demonstrated based on Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model, and MODIS data in Figure 9. The highest PM10 level of this event was725 µg/m 3 at 05:00 AM on October 26, 2018 though there were other dust storms in the same period that reduced visibility in Ahvaz with more PM10 severity. PM10 concentration was greater than 2000 µg/m 3 . It appears that Saudi Arabia was the main source of the dust storm. Discussion Dust composition is dependent on itsorigin. There is a direct correlation between dust composition(s) and its health side effects. Prior studies have shown dust components (PMs or heavy metals) when evaluated individually could endanger health status (Araujo, 2010;Cave et al., 2010;Hyder et al., 2013;VoPham, 2019). A human study has shown a direct correlation between serum levels of heavy metals and development of NAFLD features (VoPham, 2019). Our results showed that heavy metals concentration in the given dust washigher than normal levels. Higher concentrations of heavy metals in the given dust compared to the sample blank could serve as an important reason for functional and morphological changes observed inliver tissuse of the HFD+N/S+Dust rats. The results of the present study showed that whole body dust exposure accompanied by HFD, induced NAFLD features in rats. Our microscopic evaluations demonstrated that dust exposure accompanied by HFD, led to inflammation, accumulation of RBCs and fatty deposit in rats'liver compared to the HFD+CA group while gallic acid pretreatment significantly decreased inflammation, and accumulation of RBCs. However, in the liver of the HFD+GA+Dust rats, no signs of fatty deposit were seen. 
A previous study has shown protective effects of GA against HFD-induced hepatic steatosis (Chao et al., 2014). Generally, these findings showed HFD+Dust (alsoshown by the current results) and its components (as revealed by previous reports), caused liver damage, and GA pretreatment prevented emergence of NAFLD features. A previous study suggested that ulter fine particle (UFP) exposure triggers injury via oxidative stress induction (Brown et al., 2004). An earlier study has shown thatheavy metals are cable of oxidative stress induction (Ayres et al., 2008). Dust analysis showed that the amount of heavy metals was dramatically high. Thus, these findings showed that oxidative stress induction occurred due to high concentrations of heavy metals along with other components present in the given dust. However, increased levels of MDA related to dust exposure, could represent oxidative stress situation. A prior study has also reported that UFP exposure via systemic oxidative stress induction, increased MDA level (Delfino et al., 2011). In agreement with such data, the present study showed that liver MDA level increased in the HFD+N/S+Dust group compared to the HFD+CA rats. These findings showed that dust exposure accompanied by HFD, induced liver oxidative stress, increased lipid peroxidation, damaged hepatocytes, increased cell membranes permeability that ledto an increment in serum levels of ALT, AST, ALP and miR-122. This study showed that liver MDA level in the HFD+GA+Dust group was significantly lower than that of the HFD+N/S+Dust rats. A previous report has indicated the ROS scavenging activity and antioxidantpromoting activity of GA in ischemia/reperfusion liver injuries (Bayramoglu et al., 2015). Therefore, it can be concludedthat GA is able to inhibit oxidative stress. There is a positive correlation between the serum levels of miR-34a and oxidative stress-induced injuries. Oxidative stress condition led to increased serum levels of miR-34a (Upton et al., 2012). The current study showed that serum levels of miR-34a and liver MDA level increased in the HFD+N/S+Dust rats. The present results also showed that GA pretreatment improved these levels. From these findings, we concluded that dust exposure accompanied by HFD, induced oxidative stress injury and GA pretreatment via reduction of oxidative stress-protected the liver. Liver structural damage and cellular leakage lead to the over release of liver amino transaminases and ALP from hepatocytes and elevated levels of these markers in serum (Jalili et al., 2015). The present results showed that dust exposure accompanied by HFD, increased serum levels of AST, ALT and ALP while GA pretreatment improved these levels. GA as an antioxidant inhibited lipid peroxidation, maintained cell membrane integrity and consequently, improved these levels. Our results showed that serum level of TG and cholesterol increased in the HFD+N/S+Dust, while serum HDL and LDL were not significantly different between the HFD+CA and HFD+N/S+Dust groups. GA pretreatment improved lipid profile in dust-exposed rats under HFD. PMs exposure increased oxidative stress which in turn, adversely affected lipoproteins such as HDL. HDL modulates the plasma level of cholesterol (Araujo et al., 2008). Ambient air pollution increased adipose inflammation and insulin resistance . Insulin resistance increased lipolysis which in turn, increased plasma TG level. 
Thus, our results, in agreement with a previous study, showed that air pollution disrupted the lipid profile (Wei et al., 2016), while GA pretreatment, by inhibiting oxidative stress and thereby lipid peroxidation, improved the lipid profile and prevented NAFLD development (Chao et al., 2014). This study showed that the expression of Nrf2 and HO1 increased in the HFD+N/S+Dust and HFD+GA+Dust groups compared to the HFD+CA group. A previous study has shown that airborne particulate matter increases Nrf2 and HO1 mRNA expression (Araujo et al., 2008). In agreement with a previous report, our study showed that air pollution and GA pretreatment increased mRNA expression of Nrf2 and HO1 (Yeh and Yen, 2006). Oxidative stress activates the NF-κβ pathway and increases expression of the pro-inflammatory factors TNF-α, IL-1, and IL-6 (Sivandzade et al., 2019). The present results showed that dust exposure accompanied by HFD increased the mRNA levels of NF-κβ, TNF-α, and IL-6. Similar findings were also reported following exposure to PMs (Vignal et al., 2017). Therefore, taking these results together, it is concluded that exposure to dust accompanied by HFD, as shown here, or to its components (PMs), as reported in previous studies, damaged the liver and induced hepatic inflammation by activating the inflammatory pathway. Our findings showed that dust exposure caused a significant decrease in liver TAC while GA pretreatment significantly increased TAC in rats under HFD. Air pollution components increased oxidative stress, disturbed the oxidant-antioxidant balance, and allowed oxidant mediators to overwhelm antioxidant defenses. This situation makes organs susceptible to oxidative stress damage. A previous study has shown that air pollution exposure reduced both serum and organ antioxidant levels (Liu and Meng, 2005). Another study also showed that GA improved antioxidant capacity and exhibited a protective effect against liver oxidative stress (Ahmadvand et al., 2017). Our findings were consistent with prior studies showing that dust exposure decreased antioxidant potency while GA pretreatment increased TAC and improved antioxidant capacity. In conclusion, this study showed that an HFD given for six weeks combined with dust exposure induced NAFLD in Wistar rats by inducing oxidative stress. Oxidative stress, by activating the inflammatory pathways, caused NAFLD features. Gallic acid pretreatment, by inhibiting oxidative stress, effectively protected liver function against HFD+Dust-induced inflammation.
Impact of daptomycin resistance on Staphylococcus aureus virulence Daptomycin resistance (DAPR) in Staphylococcus aureus is associated with mutations in genes that are also implicated in staphylococcal pathogenesis. Using a laboratory-derived series of DAP exposed strains, we showed a relationship between increasing DAP MIC and reduced virulence in a Galleria mellonella infection model. Point mutations in walK and rpoC led to cumulative reductions in virulence and simultaneous increases in DAP MIC. A point mutation to mprF did not impact on S.aureus virulence; however deletion of mprF led to virulence attenuation and hyper-susceptibility to DAP. To validate our findings in G. mellonella, we confirmed the attenuated virulence of select isolates from the laboratory-derived series using a murine septicaemia model. As a corollary, we showed significant virulence reductions for clinically-derived DAPR isolates compared to their isogenic, DAP-susceptible progenitors (DAPS). Intriguingly, each clinical DAPR isolate was persistent in vivo. Taken together, it appears the genetic correlates underlying daptomycin resistance in S. aureus also alter pathogenicity. Introduction Daptomycin (DAP) is a cyclic lipopeptide antibiotic that is increasingly being used to treat Staphylococcus aureus infections. 1 Unfortunately, therapeutic failures, albeit relatively uncommon, have been reported. [1][2][3] Thus far, the mechanisms underlying daptomycin resistance (DAP R ) in S. aureus have focused on point mutations in genes involved in phospholipid biosynthesis, particularly mprF, 4 which codes for lysyl-phosphatidylglycerol (L-PG) synthetase, cls2, which codes for cardiolipin synthase and pgsA, which codes for CDP-diacylglycerol-glycerol-3-phosphate-3phosphatidyltransferase. 5 It is hypothesized that these mutations lead to changes in phospholipid membrane composition, which may affect membrane charge causing electro-repulsion of calcium-complexed daptomycin or may directly affect daptomycin binding. 5 Interestingly, daptomycin, when complexed with calcium, appears to act similar to cationic antimicrobial peptides, and therefore genetic mutations associated with daptomycin resistance have been shown to simultaneously confer resistance to host innate immune responses. 4 Other genes associated with reduced susceptibility to daptomycin may also affect S. aureushost interactions, particularly the sensor-histidine kinase, WalK (previously YycG), which has an integral role in cell wall homeostasis but also regulates a number of genes important for virulence. 6,7 The focus of this study was to assess for the first time the pathogenic consequences of DAP R in S. aureus. Methods Bacterial strains used in this study are shown in Table 1. For the purpose of this study, daptomycin non-susceptible isolates (defined by an MIC of DAP >1 mg/L) are referred to as DAP R . The laboratory-derived, DAP-exposed series was obtained from Friedman et al., and included a parent strain (CB1118) and 4 mutants isolated after serial in vitro DAP exposure over 20 days. 8 These strains were previously genome sequenced and their cumulative mutations are shown in Table 1. 8 An mprF deletion mutant (CB1118DmprF) was also included. 9 In addition, 3 clinical S. aureus pairs were assessed, which included a daptomycinsusceptible (DAP S ) parent strain with its corresponding DAP R daughter strain that developed after DAP therapy and clinical failure. 5 Growth kinetic experiments were performed as described previously. 
10 We used Galleria mellonella as a substitute in vivo host to assess staphylococcal virulence as described previously. 7,10,11 This model was chosen because G. mellonella not only have phagocytic cells in their hemolymph but also rely on antimicrobial peptides for their immune defense, hence their utility for the study of DAPR. Briefly, bacteria were injected into the hemocoel of each caterpillar (n = 16/strain, 1 × 10⁵ to 1 × 10⁶ CFU/larva) using a 10 µl Hamilton syringe. 10 For the mammalian experiments, 10-15 C57BL/6 or CD-1 mice per strain were injected intraperitoneally with 2-4 × 10⁷ CFU of bacteria mixed with 6% porcine gastric mucin (Sigma-Aldrich) in 500 µl, and were monitored for 7 days. 12 To assess bacterial persistence, kidneys were harvested from surviving DAPR-infected animals 7 days post-infection, and bacterial counts were performed. Survival curves were plotted using the Kaplan-Meier method, with differences calculated using log-rank tests (GraphPad Prism v 6.0). Experiments were approved by the Institutional Animal Care and Use Committees at Cubist Pharmaceuticals, Inc. and Monash University prior to initiation of studies. Results and discussion To determine the impact of DAPR on staphylococcal virulence, we first infected G. mellonella with a genetically characterized in vitro-derived series of S. aureus strains that had incremental increases in DAP MIC and a cumulative number of mutations (Table 1). 8 Importantly, these strains grew similarly in vitro except for CB1618-d20, which was impaired for growth (Supplementary Fig. 1). As shown in Figure 1A, there was a significant trend of decreasing virulence as the DAP MIC increased (P < 0.05, chi-square test for trend). However, when individual strains in the series were compared, significant reductions in virulence occurred on only 2 occasions. The first was due to a mutation (R263C) in the sensor histidine kinase known as walK (previously yycG). Working together with its cognate response regulator (WalR), this 2-component regulatory system is indispensable for cell wall homeostasis in S. aureus but has also been reported to regulate virulence. 6 This isolate (CB1618-d9) had an increase in DAP MIC from 2 mg/L to 4 mg/L and was attenuated in killing G. mellonella (P < 0.01) compared to its progenitor (CB1618-d6) (Fig. 1A). The second occurred with the final isolate of the series (CB1618-d20), which acquired a mutation in rpoC, leading to a substantial rise in DAP MIC to 16 mg/L and marked virulence attenuation (Fig. 1A). DAPR in S. aureus has most commonly been associated with 'gain of function' point mutations in mprF. 13 These lead to more L-PG (which is cationic) being produced and translocated to the outer leaflet of the membrane, causing a reduction in the net-negative membrane charge. As a consequence, these mutations simultaneously lead to resistance to daptomycin and to cationic antimicrobial peptides. 14 It is not surprising, then, that we observed no decrease in virulence for the first isolate of the laboratory-exposed series harbouring an mprF T345A mutation (CB1618-d6) (Fig. 1A). Conversely, we would expect that deletion of mprF would create a strain not only hypersusceptible to daptomycin but also less virulent, as it would be hypersusceptible to cationic antimicrobial peptides. To test this hypothesis, we assessed an mprF deletion mutant (CB1118ΔmprF) of the same wild-type strain.
9 Not only was CB1118ΔmprF hypersusceptible to daptomycin (MIC 0.125 mg/L, Table 1), it was also significantly less virulent compared to its parent strain (P < 0.001, Fig. 1B). Taken together, these data further highlight MprF as an attractive drug target, as inhibition of the protein may render S. aureus simultaneously susceptible to antimicrobials and to the host innate immune system. 15 To validate our findings from G. mellonella and to assess the impact of DAPR on mammalian disease, we assessed the virulence of each attenuated mutant strain from the laboratory-derived series using an established murine septicaemia model. 12 Consistent with what was observed in G. mellonella, CB1618-d9 (DAP MIC = 4 mg/L) and CB1618-d20 (DAP MIC = 16 mg/L) were significantly attenuated for virulence when compared to their susceptible parent strain (CB1118) (P < 0.001 for each) (Fig. 1C). We were intrigued by the survival of the DAPR-infected mice and wanted to assess whether the bacteria were being cleared or were persisting within the host. To test this, we assessed the bacterial burden in the kidneys of mice infected with each DAPR strain at 7 days post-infection. Despite the mice surviving, the kidneys of mice infected with CB1618-d9 showed high bacterial burdens (average of 2.5 × 10⁷ CFU/g of tissue), indicating that this strain was persistent in vivo (Fig. 1D). In contrast, no bacteria were recovered from mice infected with CB1618-d20, suggesting that mutations in this strain were associated with in vitro and in vivo fitness costs (Fig. 1D and Supp. Fig. 1A). As a corollary to what was observed for the laboratory-derived series, we next infected mice with 3 clinical S. aureus pairs consisting of a DAPS parent strain with its corresponding DAPR daughter strain that arose after DAP therapy and clinical failure. Each of the pairs was associated with severe infections, including osteomyelitis, septic arthritis and endocarditis, and was confirmed to be isogenic by whole genome sequencing as described previously (Table 1). 5 As shown in Figure 1E, each of the clinical DAPR strains was significantly attenuated for virulence when compared to its DAPS progenitor strain (P < 0.001). Figure 1 (legend): For clarity, CB1618-d6, CB1618-d9, CB1618-d13 and CB1618-d20 are represented by d6, d9, d13 and d20, respectively. Virulence attenuation was observed for CB1618-d9 when compared to CB1618-d6 (P < 0.01) and for CB1618-d20 when compared with CB1618-d13 (P < 0.01). No significant virulence attenuation was observed for CB1618-d6 (P = 0.44) or CB1618-d13 (P = 0.70) when compared to their respective progenitor strains. (B) An mprF deletion strain (CB1118ΔmprF) produced significantly less killing of G. mellonella when compared to its progenitor (P < 0.001, n = 16 for each strain). (C) CB1618-d9 and CB1618-d20 were attenuated for virulence in a murine septicaemia model (P < 0.001, n = 10 for each strain). (D) CB1618-d9 was capable of in vivo persistence, as determined by bacterial densities in the kidneys of mice 7 days post-infection. In contrast, bacterial burden was not observed in the kidneys of mice infected with CB1618-d20. (E) Virulence of 3 DAP-exposed clinical pairs was assessed using a murine septicaemia model. The daptomycin-resistant (R) isolates were significantly attenuated for virulence compared to their susceptible progenitors (S) (P < 0.001, n = 15 for each strain) and (F) were persistent in the kidneys of infected mice out to 7 days post-infection.
Furthermore, as shown with the laboratory derived DAP R strain (CB1618-d9), despite surviving animals out to 7 days post-infection, high bacterial loads were recovered from the kidneys of mice infected with each DAP R strain (Fig. 1F). These data lend further support that clinical DAP R S. aureus strains have the capacity to persist in vivo. To our knowledge, this is the first study to describe an association between the development of DAP R and altered S. aureus pathogenicity. Subtle genetic changes associated with DAP R , such as a point mutation in WalK (R263C), can have significant impact on the ability of the bacteria to cause mammalian disease. Intriguingly, despite DAP R S. aureus strains causing less lethal disease, they have the capacity to persist in vivo, a finding consistent with that observed in humans. [16][17][18] Developing a deeper understanding of the genetic determinants impacting antibiotic resistance and virulence in S. aureus will provide critical insights into novel therapeutic strategies for this aggressive human pathogen. Conflicts of Interest A.Y.P has been to one advisory board meeting for Ortho-McNeil-Janssen and AstraZeneca, and has received a speaker's honorarium from AstraZeneca for one presentation. G.M.E. has served on Scientific Advisory Boards for Cubist, Bayer Schering, Johnson & Johnson Pharmaceutical Research and development, Novartis, Pfizer, Shionogi, Theravance; he has had research training support from Cubist, research contracts from Novexel, Pfizer and Theravance, and speaking honoraria from Novartis.
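The virulence comparisons in this study rest on Kaplan-Meier survival curves compared with log-rank tests (run in GraphPad Prism in the original work). As a rough, hedged illustration of the same kind of analysis in Python, the sketch below uses the lifelines package and entirely hypothetical 7-day survival times for animals challenged with a DAP-susceptible parent and its DAP-resistant daughter strain; none of the numbers are the study's data.

```python
# Hedged sketch: Kaplan-Meier curves and a log-rank test, analogous to the
# survival analysis described in the Methods (hypothetical data, lifelines).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical 7-day follow-up (days); event = 1 means the animal died.
data = pd.DataFrame({
    "group": ["DAPS"] * 10 + ["DAPR"] * 10,
    "time":  [2, 2, 3, 3, 3, 4, 4, 5, 7, 7,  5, 6, 7, 7, 7, 7, 7, 7, 7, 7],
    "event": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0,  1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
})

kmf = KaplanMeierFitter()
for name, grp in data.groupby("group"):
    kmf.fit(grp["time"], grp["event"], label=name)
    print(f"{name}: estimated 7-day survival = {kmf.predict(7):.2f}")

daps = data[data.group == "DAPS"]
dapr = data[data.group == "DAPR"]
result = logrank_test(daps["time"], dapr["time"], daps["event"], dapr["event"])
print("log-rank p-value:", round(result.p_value, 4))
```

With real data, one curve per strain would reproduce the pairwise comparisons summarised for Figure 1.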
2018-04-03T00:45:56.010Z
2015-02-17T00:00:00.000
{ "year": 2015, "sha1": "841194148e84cf28628e60a6e4639813c8536582", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21505594.2015.1011532?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "9a76618a0dbef887607cfa7ab0ed1a6ca1882129", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
12789131
pes2o/s2orc
v3-fos-license
Waiting time for radiation therapy after breast-conserving surgery in early breast cancer: a retrospective analysis of local relapse and distant metastases in 615 patients Background Postoperative radiotherapy after breast-conserving surgery (BCS) is the standard in the management of breast cancer. The optimal timing for starting postoperative radiation therapy has not yet been well defined. In this study, we aimed to evaluate whether the time interval between BCS and postoperative radiotherapy is related to the incidence of local and distant relapse in women with early node-negative breast cancer not receiving chemotherapy. Methods We retrospectively analyzed clinical data concerning 615 women treated from 1984 to 2010, divided into three groups according to the timing of radiotherapy: ≤60, 61–120, and >120 days. To assess imbalanced distribution of prognostic and treatment factors among the three groups, the χ² test or the Fisher exact test was performed. Local relapse-free survival, distant metastasis-free survival (DMFS), and disease-free survival (DFS) were estimated with the Kaplan–Meier method, and multivariate Cox regression was used to test for the independent effect of timing of RT after adjusting for known confounding factors. The median follow-up time was 65.8 months. Results Differences in distribution of age, type of hormone therapy, and year of diagnosis were statistically significant. At 15-year follow-up, we failed to detect a significant correlation between time interval and the risk of local relapse (p = 0.09) at both the univariate and the multivariate analysis. Univariate analyses of DMFS and DFS showed worse outcomes when radiotherapy was started early (p = 0.041 and p = 0.046), but this was not confirmed at the multivariate analysis (p = 0.406 and p = 0.102, respectively). Conclusions Our results show that no correlation exists between the timing of postoperative radiotherapy and the risk of local relapse or distant metastasis development in a particular subgroup of women with node-negative early breast cancer. Background Breast cancer (BC) is the leading cancer among women worldwide [1]. It represented 13.5 % of all cancers in Europe in 2012 [2] and was the most frequent cause of cancer death in women. Several studies have reported a decline in BC mortality thanks to early detection and progress in cancer treatment [3,4]. Radiotherapy (RT) after breast-conserving surgery (BCS) halves the incidence of local recurrence and reduces cancer-specific mortality by about one sixth. Evidence from randomized clinical trials [5][6][7][8] and meta-analyses [9,10] supports this benefit. The optimal timing for starting postoperative RT is not yet well defined. In principle, a delay between surgery and the start of RT could allow the growth of clonogenic cells in the tumor bed and the development of radioresistance [11]. Delays of >8-12 weeks seem to increase the risk of local relapse in observational studies, but results are conflicting. Moreover, no phase III studies about the optimal interval between surgery and radiotherapy are available. National Canadian clinical practice guidelines recommend that RT should be given <12 weeks after BCS to keep the incidence of local failure and disease-free survival (DFS) similar to those of mastectomy [12]. The National Cancer Intelligence Network suggests that "the time between surgery and the start of radiotherapy should be no more than 31 days" [13].
The Merseyside and Cheshire cancer network guidelines report that "radiotherapy should be started within 12 weeks of the date of surgery" [14]. The latest Italian guidelines [15] recommend starting RT earlier than 20 weeks after surgery if no systemic treatment is given, especially in women <40 years of age and/or with positive margins [16,17]. In this study, we retrospectively analyzed the long-term follow-up of 615 women treated with BCS and whole breast conventional radiation therapy (WBI) for early BC. The aim was to investigate the relationship between waiting time for postoperative RT and the development of local relapse and distant metastases. Methods We analyzed data concerning 615 patients with early BC who underwent WBI with conventional fractionation at our institution between December 1984 and December 2010. All patients had DCIS-T1-T2, N0, M0 BC, and underwent BCS (quadrantectomy ± sentinel lymph node biopsy and/or axillary dissection). After surgery, they all received WBI using an isocentric technique with two tangential fields, followed by a boost on the tumor bed in 89.6 % of cases. The mean dose was 50 Gy (range 40-60 Gy) for WBI, delivered in 2 Gy fractions five times a week. The dose was prescribed at the isocenter, on the basis of the ICRU 50 guidelines [18], and the CTV (clinical target volume) was set at a 95 % isodose level. The dose to the breast was administered with a ≥6 MV photon beam; the tumor bed boost was administrated by electrons or photon beam to a total dose of 10 Gy in the case of negative surgical margins (96.7 %) and to higher doses (14-20 Gy) in the case of close or positive margins (3.3 %). Surgical margins were considered free if ≥2 mm, close if <2 mm, and positive if disease persisted on the margin. Patients with close or positive margins refused reexcision or did not receive this recommendation by the surgeon. Patients without positivity of hormone receptors did not receive chemotherapy for age or comorbidities. The waiting list for BC patients was not formally conditioned by protocols or guidelines concerning risk factors. The time of delay was mostly conditioned by the delay in referring to the radiotherapy center and the overall waiting list for starting radiotherapy. After RT, patients were evaluated every 4 months for the first 2 years, then every 6 months until the 5th year and henceforth every year. One patient developed a cutaneous angiosarcoma at the level of the breast surgical scar, probably related to previous breast irradiation, 9 years after treatment. All patient subscribed a written consent to treatment. This analysis was approved by our institutional Ethics Committee. Statistical methods For the survival analysis, the BCS date, defined as the date of the last surgery on the breast, was used as the start of observation, and the date of the last medical follow-up visit was used as the end of the follow-up period. Timing of RT was calculated as the interval between BCS and the RT start date, defined as the date in which the first fraction of RT was administered. The χ2 test or the Fisher exact test, when appropriate, was used to calculate intergroup differences of some clinical categorical variables. Results are shown in Table 1. Survival statistics (Local relapse-free survival-LRFS, distant metastasis-free survival-DMFS, and disease-free survival-DFS) were estimated with the Kaplan-Meier method, and differences between groups were validated by the Log-rank test. 
The multivariate Cox regression was used to test for the independent effect of timing of RT after adjusting for known confounding factors. The results were presented as hazard ratios (HR) with corresponding 95 % confidence intervals. The group with the shortest time interval, ≤60 days, was the reference category. Statistical significance was set at p < 0.05. All analyses were performed using the Statistical Analysis System (SAS Institute, Cary, NC) software. Results The median time of follow-up, defined as the median time between BCS and last follow-up, was 65.8 months (range 4-179); only 7.5 % of patients had a follow-up <6 months. The median waiting time from surgery to the start of RT was 111 days (range 21-532 days). The mean patient age was 58 years (range 21-87). Differences in the distribution of age, type of hormone therapy (HT), and year of diagnosis among the three groups were statistically significant (p < 0.0001, p = 0.0006 and p < 0.0001, respectively) (Table 1). Local relapse-free survival Overall, we found 11 disease relapses in the treated breast (Fig. 1). When the significant patient variables (age, type of HT, and year of diagnosis) shown in Table 1 were considered in the analysis, the HRs for the second and third groups were 0.28 (95 % CI 0.05-1.58) and 0.58 (95 % CI 0.09-3.62), respectively, compared with the first group, as shown in Table 2. Distant metastasis-free survival At the univariate analysis, we found a statistically significant difference in the distribution of metastases among the three groups (p = 0.041) (Fig. 2). Nevertheless, when corrected for age, type of HT, and year of diagnosis, the HR of the second group was 0.32 (95 % CI 0.06-1.70); it was not possible to calculate the HR for the third group (>120 days) because no metastases occurred in this group (Table 2). The p value was 0.406. Disease-free survival At the univariate analysis, DFS also differed significantly among the three groups (p = 0.046) (Fig. 3). Multivariate analysis confirmed that timing of RT is not an independent prognostic factor (p = 0.102) (Table 2). The HRs for the second and third groups were 0.36 (95 % CI 0.10-1.24) and 0.18 (95 % CI 0.04-0.91), respectively, compared with the first group. An analysis aiming to find a correlation between local relapse or distant metastases and certain characteristics, such as patient age, status of surgical margins, tumor histology, grade, and stage, was conducted, but no statistically significant differences were found among these subgroups. Discussion The interval between BCS and postoperative RT in breast cancer treatment can vary considerably. Causes of this variation include patient compliance and socioeconomic status; the geographic distribution of radiotherapy centers; long waiting lists; characteristics of patients and their cancer (age, prognostic factors, presence of comorbidities, etc.) [19]; and surgical complications (slow wound healing, inflammation, or infections). Moreover, the waiting time for radiotherapy has increased dramatically during the last decades [20,21] due to the increased demand for radiation treatments. In theory, an excessive time to RT could be associated with an increased risk of local recurrence and with a subsequent increased risk of metastasis. Some radiobiological models [22] have suggested a small decrease in local control of 1-2 % per month of delay in treatment. However, clinical data are not always consistent with this theory. No phase III trials have been published, only several retrospective analyses: some showed a positive association, while others did not find a clear correlation.
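A hedged sketch of the kind of multivariate Cox model described in the Statistical methods (timing group as the exposure, adjusted for age, type of hormone therapy and year of diagnosis, with the shortest-interval group as reference) is given below using Python's lifelines package; the column names and the randomly generated records are illustrative assumptions, not the study data.

```python
# Hedged sketch: multivariate Cox proportional hazards model mirroring the
# adjustment described in the Statistical methods (lifelines, synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
group = rng.choice([0, 1, 2], size=n, p=[0.2, 0.6, 0.2])   # <=60, 61-120, >120 days
df = pd.DataFrame({
    "timing_61_120": (group == 1).astype(int),             # dummies vs <=60 days
    "timing_gt_120": (group == 2).astype(int),
    "age": rng.normal(58, 10, n).round(),
    "tamoxifen": rng.integers(0, 2, n),                    # stand-in for HT type
    "year_dx": rng.integers(1984, 2011, n),
    "time_months": rng.exponential(100, n).clip(1, 180),   # synthetic follow-up
    "event": rng.integers(0, 2, n),                        # relapse indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```

The exponentiated coefficients for the two timing dummies correspond to the hazard ratios (with 95 % confidence intervals) reported for the second and third groups relative to the reference group.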
One of the first studies on the relationship between waiting time for radiotherapy and clinical outcomes was conducted in 1985 at the Institut Gustave Roussy in France by Clarke et al [23]. They reviewed 436 patients with T1 and small T2 breast carcinoma, finding a significant increase in local relapse in the group that received radiotherapy more than 7 weeks after surgery. These results were not confirmed on multivariate analysis. One of the largest studies published reviewed data concerning 13,907 women ≥65 years with stage I-II BC diagnosed and treated between 1991 and 1999 and not receiving chemotherapy, extracted from the surveillance, epidemiology, and end results (SEER) registry [24]. The Authors concluded that "patients who do not receive RT until more than 12 weeks after BCS appear to have poorer survival. " A successive SEER database analysis conducted in 2010 [25] showed that intervals over 6 weeks were associated with an increased incidence of local recurrence. Many other retrospective studies found a significant correlation between a long RT delay and increased rate of local relapse, with or without impact on survival, in the same cohort of patients [16,[26][27][28]. Simultaneously, another group of retrospective analyses conducted on similar patients failed to find a significant variation of these endpoints [29][30][31][32][33][34]. Contrary to all this evidence, a recent retrospective analysis [35] on 1107 women with early-stage BC without lymph node metastases or adjuvant systemic therapies found a significantly decreased DMFS and disease-specific survival in the tertile that started radiotherapy early after BCS (<45 days) compared with the other two tertiles (45-56 and 57-112 days), without differences in local control. The Authors attributed these results to residual confounding factors (age and prognostic factors) that could lead the physician to start radiotherapy early in high-risk patients. Another probability is that starting radiotherapy too early could induce vascular damage, a delay in stromal cell growth, and an increase in tumor cell damage: these mechanisms could lead to a easier spread of metastatic cells, as suggested in a previous study on radiation induced metastatic spread and angiogenesis. Further information on the optimal timing of RT in early-stage cancer treated with sole Radiotherapy could be extracted from studies that also included patients with advanced-stage BC, through the analysis of subgroups. Even in these studies results are not unequivocal, ranging from the nonexistence of association [17,36,37] to a [38] or significant [39][40][41][42][43][44][45] correlation between RT delay and local recurrence rate. Some Authors have tried to summarize these conflicting results through meta-analysis and reviews, but final analysis remains conflicting. The critical review conducted by Hebert-Croteau published between 1985 and 2000 reported a nonsignificant association between time interval and the risk of local recurrence or death related to BC [46]. In contrast, a meta-analysis by Huang et al [47] published in 2003 and including 7401 BC patients showed a 1.62-fold increase in local recurrence rate when radiotherapy was administered >8 weeks after BCS. In another review, Ruo Redda et al. [48] suggested that, in the group of patients that do not need any treatments other than RT, the time interval should not exceed 8 weeks. A more recent systematic review conducted by Chen et al. 
[49] evaluated time interval to RT as a continuous variable, showing an increase of the RR of local recurrence of 1.11 every month after BCS. An association between waiting time and distant metastases or overall survival did not emerge. The most recent systematic review by Tsoutsou et al. [50] found that an interval of more than 8-12 weeks increased local recurrence rates when RT was administered as the sole adjuvant modality. In our study, we aimed to explore the correlation between the delay of postoperative RT and the development of local recurrence and distant metastases in women with node-negative T1-T2 BC treated with RT, with/without HT but without chemotherapy. We divided the 615 patients into three subgroups according to the timing of RT (T1: <60 days; T2: 60-120 days; T3: >120 days). The majority of patients were in the second group. Our experience failed to detect a significant correlation between BCS-to-RT time interval and the risk of local relapse in early-stage BC patients (p = 0.09). This lack of significance was confirmed at the multivariate analysis adjusted for age and type of HT, indicating that the failure to find a univariate relationship between timing of RT and BC local recurrence was not imputable to an uneven distribution of these two variables. On the other hand, we found an unexpected but significant inverse correlation at the univariate analysis between timing of RT and the risk of distant metastases development. In particular, at the Kaplan-Meier analysis, the DMFS was 91.3 % for the group that received RT prior to 60 days, 95.2 % for the group who received RT between 61 and 120 days after surgery, and 100 % for the group who received RT more than 120 days after BCS (p = 0.041). The outcome seemed to be worse in patients who started RT early (<60 days after surgery). However, this correlation was not confirmed at the multivariate analysis (HR T2: 0.32; 95 % CI 0.06-1.70, p = 0.406) when age of patients, type of HT, and year of diagnosis were considered. These results seem to be related to an unbalanced distribution of two main variables (age and year of diagnosis) among the three groups. In turn, the third confounding variable (type of HT) is often strictly dependent on the age of patients. Probably, these factors operated as real confounding factors when DMFS and overall DFS are considered. In particular, we observed a prevalence of young women, with a consequent younger median age, in the first group compared with the other two groups. It is possible that physicians involuntarily expedited RT for cases they perceived to be at higher risk of local or distant recurrence, such as young women. The correlation between age and rate of local [5,10,51] and distant [52] relapse, especially visceral metastases [53], has been demonstrated in several large studies and is often considered by physicians during the first evaluation of a breast cancer patient. Moreover, in the first group, we found a greater proportion of patients treated in the 1980s and early 90s compared with the other two groups, probably due to the lesser impact of waiting lists in past years. This finding could justify the worse outcome in terms of DMFS and DFS which emerged for the T1 group at the univariate analysis: when these BC patients were treated, RT technology was not advanced: this could lead to an under-dosage or a partial miss of the radiotherapy target. 
Moreover, in past decades, accurate guidelines about systemic treatment of BC were not available: nowadays, adjuvant chemotherapy is more often prescribed [54], so it is possible that patients treated in the first years of our analysis would have to be treated with more aggressive therapies because they presented high-risk features. Finally, we know that local and distant relapse usually occur after several years: patients in the T1 group, treated in the 80s and early 90s, are more likely to develop relapses because of their longer follow-up rather than because of a real correlation with timing of RT. All these confounding factors, considered in the multivariate analysis, could explain the lack of significant correlation between timing of RT and all the events considered. Evidently, our study has some limits intrinsic to its retrospective nature, such as the bias regarding patient selection and their unequal distribution among the three timing groups. Moreover, although we followed patients for a long time (until 15 years), the median follow-up is only about 6 years, because of the large time of accrual and the presence of patients who did not attended the followup program. Local recurrence in BC could develop also 10-15 years after BCS, so a longer follow-up is requested to obtain more realistic data. Conclusions Our results showed that there is no correlation between the BCS-to-RT interval time and the risk of local relapse or distant metastases in a particular subset of node-negative early-stage BC patients not receiving chemotherapy. However, our results are limited by the retrospective nature of the study, so they should be validated by randomized studies or well selected meta-analysis, with the aim of filling the gap in clinical evidence about the optimal time interval between BCS and PORT.
2017-08-01T08:00:01.221Z
2016-08-11T00:00:00.000
{ "year": 2016, "sha1": "2ed7cbae2eceb16512d6b2fad224974fda862c92", "oa_license": "CCBY", "oa_url": "https://eurjmedres.biomedcentral.com/track/pdf/10.1186/s40001-016-0226-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2ed7cbae2eceb16512d6b2fad224974fda862c92", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244463031
pes2o/s2orc
v3-fos-license
Swing-up of quantum emitter population using detuned pulses The controlled preparation of the excited state in a quantum emitter is a prerequisite for its usage as single-photon sources - a key building block for quantum technologies. In this paper we propose a coherent excitation scheme using off-resonant pulses. In the usual Rabi scheme, these pulses would not lead to a significant occupation. This is overcome by using a frequency modulated pulse to swing up the excited state population. The same effect can be obtained using two pulses with different strong detunings of the same sign. We theoretically analyze the applicability of the scheme to a semiconductor quantum dot. In this case the excitation is several meV below the band gap, i.e., far away from the detection frequency allowing for easy spectral filtering, and does not rely on any auxiliary particles such as phonons. Our scheme has the potential to lead to the generation of close-to-ideal photons. I. INTRODUCTION Deterministically preparing the excited state of a quantum emitter is a key to many applications in quantum information technology, since the subsequent decay of the excited state yields a single-photon [1][2][3]. Prominent examples for quantum emitters are semiconductor quantum dots [4][5][6][7][8][9][10], strain potentials and defects in monolayers of atomically thin semiconductors [11][12][13], defect centers in diamond [14][15][16][17][18] or in hexagonal boron nitride [19][20][21]. The deterministic preparation relies on the direct excitation of the quantum emitter excited state by an external laser pulse. Since the photon emission is controlled by the timing of the laser pulse, the source is called ondemand or deterministic. A common way to achieve a precise preparation is using resonant excitation yielding Rabi rotations [6,[22][23][24][25]. However, this scheme suffers from several drawbacks: Because the excitation and detection is at the same frequency, sophisticated filtering has to be applied to separate the desired photons. Additionally, this scheme is sensitive to variations in the pulse parameters. The latter can be overcome by using chirped pulses within the adiabatic rapid passage (ARP) scheme, though the necessity to apply filtering remains [8,26,27]. Another direct excitation scheme is the phonon-assisted preparation, where the laser is tuned above the exciton resonance and the exciton state is then occupied by phonon-induced relaxation [28][29][30][31][32]. This overcomes the problem of exciting and measuring at the same frequency, but the drawback of this scheme is that it relies on an incoherent relaxation path. Excitations with a strongly detuned laser pulse beyond the phonon spectral density will only lead to a vanishing transient occupation of the quantum emitter. In this article an alternative scheme is proposed that results in the Swing-UP of the quantum EmitteR population (SUPER). The SUPER scheme makes use of highly detuned laser pulses, which in a classical Rabi scheme would not lead to any significant excited state occupation at all. Our scheme relies on periodic changes of the Rabi frequency, leading to a swing-up behavior in the occupation dynamics. We explain the mechanism of the SUPER scheme using a frequency modulation (FM) of the exciting pulse. An implementation of the SUPER scheme with state-of-the-art lasers can be achieved using a two-color protocol. 
We show that this two-color protocol leads to a complete exciton occupation and near-unity single-photon purity, indistinguishability and photon output. This makes the SUPER scheme ideal for implementing a single-photon source. II. CONCEPT OF THE SUPER SCHEME To grasp the concept of the swing-up scheme, it is most instructive to look at a pulse which is modulated in a step-like manner. We consider a generic two-level system with the energy spacing ω_0. For constant driving with laser frequency ω_L, the two-level system performs Rabi oscillations with frequency Ω_R = √(Ω_0² + ∆²) and amplitude a = Ω_0²/(Ω_0² + ∆²), where Ω_0 is proportional to the laser amplitude and ∆ = ω_L − ω_0 is the spectral detuning. We note that only a resonant pulse, i.e., ∆ = 0, with pulse area α = (2n + 1)π, n ∈ N_0, results in complete inversion of the system. When the detuning is large, the amplitude of the Rabi oscillations becomes negligible. These oscillations can be displayed on the Bloch sphere, as shown in Fig. 1. Interestingly, if we combine these two off-resonant oscillations in a clever way, we can reach a full inversion of the quantum emitter. This can be seen in Fig. 1(b,c), which show the Bloch vector and the temporal dynamics of the SUPER scheme using a rectangular pulse. We achieve the population inversion by switching back and forth between the two detunings ∆_high and ∆_low. Each time the occupation of the excited state increases, we use the smaller detuning ∆_low (higher amplitude of the Rabi oscillation). On the other hand, if the occupation falls, we use the higher detuning ∆_high (lower amplitude of the Rabi oscillation). By this precise timing, each oscillation period in (d) gives rise to a small increase in the population of the excited state, and the occupation swings up to the excited state. The frequency of the modulation in the SUPER scheme typically lies close to the Rabi frequency Ω_R^(C) = √(Ω_0² + ∆_C²) induced by a constant pulse with the mean detuning ∆_C = (∆_high + ∆_low)/2. Looking more closely, we find that an occupation of unity can reliably be achieved by using a slightly longer time at ∆_low than at ∆_high, according to the Rabi frequencies corresponding to these detunings. The spectrum of the driving pulse contains several peaks, one of them close to the transition frequency ω_0. One could, therefore, expect that the driving is in resonance with the transition and that this is the reason for the effective occupation change. This is, however, not the case. Using Gaussian pulses for the excitation, we demonstrate below that SUPER works even when none of the spectral peaks of the driving pulse are in resonance with the two-level system transition. III. FM-SUPER SCHEME We now consider the excitation of a specific system, namely a quantum dot excited by an optical laser pulse. Semiconductor quantum dots have already been shown to perform well as deterministic single-photon sources [1,7,9,10,33-36], for which a high-fidelity preparation of the excited state is necessary. In such a system, the energy separation of ground and excited state is in the range of 1-2 eV and typical detunings of the laser are of the order of several meV. Therefore, the temporal dynamics takes place on a picosecond time scale. We note that for phonon-assisted schemes detunings of 1-4 meV are typical [28-30], which is already sufficient to perform filtering of the exciting laser for the detection process. In our scheme we consider highly detuned pulses with detunings larger than 5 meV. Note that smaller detunings might be possible using longer pulses.
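Before the text turns to Gaussian pulses, the step-like protocol of the preceding section can be illustrated numerically. The sketch below is our own minimal illustration, not the authors' code: it integrates the rotating-frame two-level Schrödinger equation at constant drive amplitude while switching the detuning according to the rule described above (smaller |∆| while the occupation rises, larger |∆| while it would fall). The coupling strength and the two detunings are arbitrary example values, and this greedy switching rule only approximates the precisely timed protocol, so the inversion need not be complete.

```python
# Hedged sketch (not the authors' code): two-level system driven at constant
# amplitude while the detuning toggles between two values of the same sign,
# mimicking the rectangular-pulse SUPER protocol described in the text.
import numpy as np

hbar = 0.6582                                # meV*ps
Omega0 = 5.0 / hbar                          # example coupling (rad/ps)
d_low, d_high = -8.0 / hbar, -14.0 / hbar    # example detunings (rad/ps)

def deriv(g, x, delta):
    # rotating-frame Schroedinger equation for the amplitudes c_g, c_x
    return -0.5j * Omega0 * x, 1j * delta * x - 0.5j * Omega0 * g

g, x = 1.0 + 0j, 0.0 + 0j                    # start in the ground state
dt, occupations = 0.0005, []
for step in range(int(25.0 / dt)):           # propagate for 25 ps
    # d|c_x|^2/dt = Omega0 * Im(conj(c_x)*c_g): use the weaker detuning while rising
    delta = d_low if (np.conj(x) * g).imag > 0 else d_high
    k1g, k1x = deriv(g, x, delta)
    k2g, k2x = deriv(g + 0.5 * dt * k1g, x + 0.5 * dt * k1x, delta)  # midpoint step
    g, x = g + dt * k2g, x + dt * k2x
    occupations.append(abs(x) ** 2)

print("maximum excited-state occupation reached:", round(max(occupations), 3))
```

With these example values the maximum occupation already ratchets well above the single-detuning Rabi amplitude Ω_0²/(Ω_0² + ∆²), which is the swing-up effect the section describes.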
In the optical regime, experimental realizations often rely on laser pulses with a Gaussian pulse shape. Therefore we consider pulses of the form where Ω 0 (t) is the pulse envelope with pulse area α = Ω 0 (t) dt and duration σ. The time-dependent detuning is connected to the phase of the laser via ∆(t) =φ(t)−ω 0 . Motivated by the periodic switching of the detuning leading to the swing-up effect for the rectangular pulse, we use FM assuming a smooth switching process resulting in the general form of a sinusoidal detuning The laser frequency is composed of a constant detuning ∆ C and a sinusoidal modulation with amplitude ∆ M and frequency ω M . Similar to the considerations using the rectangular pulse, we expect a good performance of the FM-SUPER scheme if the modulation frequency ω M is close to the Rabi frequency at the pulse maximum, i.e. ω M ≈ Ω 2 0 + ∆ 2 C . Figure 2(a,b) exemplarily shows the dynamics resulting from two different parameter sets. In both cases, we use a constant detuning of ∆ C = −6 meV and σ = 4 ps, with ∆ M = 2 meV, α = 6.2π, ω M = 6.08 meV for (a) and ∆ M = 1 meV, α = 30.3π, ω M = 8.32 meV for (b). In both cases the occupation is completely transferred to the excited states, while performing small oscillations. This demonstrates that the SUPER scheme also works for Gaussian pulses, even for high detunings where without modulation no occupation of the excited state would occur. Due to the frequency modulation, the spectrum of the pulse does not only contain the carrier frequency leading to the detuning ∆ C , but also additional side-bands at multiples of the modulation frequency ω M . The moderate pulse duration of σ = 4 ps leads to a small contribution to the spectral width by the pulse envelope, so the distinct peaks in the spectrum are relatively sharp. This is shown in the spectra in Fig. 2(c), corresponding to the cases shown in (a,b). The question now arises, which role these side-bands play in the SUPER scheme. Of most interest is the first side-band, which can be resonant to the ground-state exciton transition if |ω M | ≈ ∆ C . To get insight into the relevance of ω M , we analyze the final excited state occupation, i.e., the occupation after the pulse for different parameters. Accordingly, Fig. 2(d,e) shows the impact of ω M for different pulse areas α, for (d) ∆ M = 2 meV and (e) ∆ M = 1 meV. In both cases a stripe pattern sets in at ω M = 6 meV. We remind the reader that ∆ C = −6 meV was used, so this parameter set leads to a resonant side-band. This is mainly responsible for the final occupation in this regime. Indeed, approximating this first side-band amplitude by the Bessel function of the first kind leads to an effective pulse area of α eff = αJ 1 (∆ M /ω M ). This breaks the scheme for ω M ≈ 6 meV down to Rabi rotations using the effective pulse area of the side-band, also resulting in the stripe pattern in α. We note that the parameters used in Fig. 2(a) mark such a case [red dot in (d)]. Therefore, in Fig. 2(a) we also show a resonant excitation, which agrees with the mean of the oscillations. However, if we increase the pulse area and accordingly ω M > |∆ C |, the first side-band is no longer resonant to the transition frequency ω 0 . In these cases, the full occupation of the excited state can no longer be explained by resonant Rabi oscillations any more. Instead, we here truly make use of the SUPER mechanism. When moving to higher detunings, the stripes in Fig. 2(d,e) show a square-root-like behavior. 
The lowest stripe is similar to the behavior of the Rabi frequency as a function of pulse area, shown as a red line. Using ∆ M = 1 meV in (e), this first stripe shows a high final occupation, even for large ω M . One example using this truly off-resonant SUPER scheme is marked by the blue dot in (e). The modulation with ω M = 8.32 meV leads to a side-band located at 2.32 meV, which is well out of resonance. The time evolution for this case is shown in Fig. 2(b) and exhibits high-frequency high-amplitude oscillations in the occupation, ending up in the excited state. By using the SUPER scheme, we can achieve full occupation of the excited state by a highly detuned pulse, which has no spectral component resonant to the transition frequency. It should be noted that absence of the resonance with the bare transition frequency ω 0 does not exclude resonances with full non-linear dynamics of the system. Assuming the case of an optically excited quantum dot, the sinusoidal modulation of the laser is on the femtosecond time scale. To the best of our knowledge, such a fast modulation is not possible with state-of-theart laser technology. We propose a two-color solution in the following, which provides a practical route to exploit the SUPER scheme with standard laser technology. Nonetheless, the FM-SUPER scheme conveys convincingly that a periodic change of the laser frequency can result in a coherent preparation of the excited state and it would be interesting to see if this FM scheme can be realized in other two-level systems like superconducting circuits [37,38] . IV. TWO COLOR SUPER SCHEME The SUPER scheme relies on using Rabi oscillations with different frequencies and amplitudes. For constant frequency, an amplitude modulation can likewise result in a change of the Rabi frequency, see Eq. (1). Such an amplitude modulation can be achieved by the beats induced by two laser pulses. We will consider two Gaussian pulses of similar width but different detunings. Such pulses are straight-forward to realize in an experimental setting, when compared to frequency modulated pulses. The electric field consists of two pulses with constant energy to excite the system, where Ω j (j = 1, 2) are the real envelopes of two Gaussian-shaped pulses with (in general) different pulse area α j and duration σ j as well as temporal separation of τ . The frequencies of the lasers will again be expressed in terms of the detuning ∆ j = ω j − ω 0 for each laser pulse. Accordingly, we call this the two-color (2C)-SUPER scheme. Additionally, an arbitrary phase difference ϕ between the pulses is introduced. As shown in appendix B, the phase does not alter the final occupation, such that we set ϕ = 0 in the following. To achieve a high occupation of the excited state, the swing-up process has to be coordinated, which is again achieved by choosing the two detunings of the laser in such a way that the difference between the detunings is equal to the Rabi frequency associated with the first pulse. Therefore, if we set the detuning for one laser pulse to ∆ 1 and use the corresponding Rabi frequency Ω 1 at its maximum, the detuning of the other pulse calculates to From the equation we see that |∆ 2 | > 2|∆ 1 |, such that the second pulse is detuned even further from the transition. Note that we use ∆ 1 < 0 in order to obtain an excitation in the transparent region below the exciton line. An example of the occupation dynamics for the 2C-SUPER scheme is shown in Fig. 3(a). 
The two pulses, which are both energetically well below the band gap, lead to a complete transfer of the electronic system to the excited state. During the pulses, the occupation performs fast oscillations, similar to the case of FM-SUPER. The right inset shows the spectrum of the complete laser field, each pulse leads to a distinct peak centered around the corresponding frequency, i.e., there is no spectral overlap with the transition energy of the quantum emitter. Here, ∆ 1 = −8 meV and ∆ 2 = −19.1630 meV were used. In Fig. 3(b) the envelopes of the pulses are shown, having different pulse durations (σ 1 = 2.4 ps, σ 2 = 3.04 ps, corresponding in the spectral regime to a FWHM of 0.65 meV and 0.5 meV respectively) and pulse areas (α 1 = 22.65 π, α 2 = 19.29 π). Additionally both pulses are separated by τ = −0.73 ps < 0, which leads to the pulse we refer to as the second pulse arriving earlier in time than the first one, while both pulses still show a strong overlap. Next, we evaluate for which parameters a high fidelity inversion is possible. As such, we start from the parameters used in the previous figure and vary ∆ 2 and τ , which is shown in Fig. 4(a). The region where high final occupations are reached is symmetric regarding the sign of the pulse offset τ , such that it does not matter if the second pulse arrives earlier or later than the first pulse. For growing |τ | the required detuning decreases, i.e., the energy of the second pulse increases slightly. If the offset is too large, the pulses arrive separately from each other and no inversion is achieved for high |τ |. The pulses used so far had different pulse durations and areas. In Fig. 4(b) we analyze the final occupation for two pulses of identical shape, where we choose σ 2 = σ 1 and α 2 = α 1 . The result is a behavior similar to that in (a), with a few distinctions: For completely overlapping pulses (τ = 0) complete inversion can not be achieved, but the occupation only goes up to a maximum of about 90 %. Only for a finite pulse separation, e.g., for τ 2.5 ps, the 2C-SUPER scheme results in a full occupation of the excited state. For the variations of the detuning, we find that in both Fig. 4(a) and Fig. 4(b), the detuning ∆ 2 which leads to the best results corresponds to that given by Eq. (6), confirming that the concept of the SUPER scheme depends on the Rabi frequency. Additionally, in both panels (a) and (b) small maxima are visible for larger ∆ 2 . These stem from excitation and de-excitation processes similar to higher order Rabi rotations. However, here the scheme does not achieve full inversion. Next, we analyze the impact of the pulse shape on the 2C-SUPER scheme. As such, we vary the pulse areas and the pulse length systematically in Fig. 5. The figure shows the excited state occupation for the case that the pulse areas of both pulses are equal (α 1 = α 2 ), but have different length σ 2 = 1.5σ 1 and τ = 0. In (a) the detuning of the first pulse is set to −5 meV, in (b) −11 meV is used. In both cases the scheme yields high inversion over a broad parameter range and the structure of the behavior is qualitatively the same. It is important to note, that the changes in pulse area and pulse width lead to a change of the second detuning, according to Eq. (6). Looking at the pulse parameters in (b), where larger detunings are used, also larger pulse areas or shorter pulses are necessary for the scheme to work efficiently, which correspond to a larger pulse intensity. 
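The detuning condition referred to above as Eq. (6) states that the difference between the two detunings equals the peak Rabi frequency of the first pulse; for ∆_1 < 0 this can be written as ∆_2 = ∆_1 − √(Ω_1² + ∆_1²). The explicit form written here is our reconstruction from that statement, and the Gaussian convention Ω_1 = α_1/(√(2π) σ_1) for the peak Rabi frequency is an additional assumption; with both, the snippet below reproduces the ∆_2 ≈ −19.16 meV quoted for the ∆_1 = −8 meV example.

```python
# Hedged sketch: reconstructed 2C-SUPER detuning relation -- Delta_1 minus Delta_2
# equals the peak Rabi frequency of the first pulse.
import numpy as np

hbar = 0.6582                              # meV*ps
alpha1, sigma1 = 22.65 * np.pi, 2.4        # pulse area (rad), duration (ps)
delta1 = -8.0                              # meV

# assumed Gaussian convention: peak Rabi frequency = area / (sqrt(2*pi)*sigma)
Omega1 = hbar * alpha1 / (np.sqrt(2 * np.pi) * sigma1)     # meV
delta2 = delta1 - np.sqrt(Omega1**2 + delta1**2)           # meV

print(f"Omega1 = {Omega1:.2f} meV, Delta2 = {delta2:.2f} meV")
# -> Delta2 close to -19.16 meV, matching the example quoted in the text
```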
The behavior at very small detunings (∼ −1 meV) for otherwise the same excitation parameters is discussed in appendix C. This two-color approach to the SUPER scheme performs well on a broad range of parameters. In an experimental environment only two separate laser pulses are required, while there are many possible parameters for which the scheme works. To further illustrate this, we give a few examples of laser parameters which result in an excited state occupation of one in Tab. I. The parameters found are generally accessible with standard laser technology [39]. Likewise, two pulses could be generated by carving from a single femtosecond pulse [40]. Because the final occupation in the scheme does not depend on the phase difference between the pulses (cf. appendix B, Fig. 6), phase locking should not be required. We find that the required pulse areas used in the SU-PER scheme are rather high compared to resonant excitation, leading to high peak intensities of the laser pulses. While high laser intensities can damage the quantum dot, one has to keep in mind that we use pulses detuned below the transition energy. When choosing such a large negative detuning, we are in the transparent region of the material, which results in low absorption-induced heating of the sample. Note that this is different for a positive detuning, where higher excited states or states of the host material could be excited by these pulses. V. PERFORMANCE AS SINGLE-PHOTON SOURCE In terms of quantum technology, one of the most prominent applications of optical state control of twolevel systems is their usage as single-photon sources. In the case of quantum dots, single-photon emission has already been achieved in several experiments [1, 7, 9, 10, 33-36, 41, 42], but the search for the optimal source is ongoing [43] including the development of new excitation protocols like ARP [8,26,27] or phonon-assisted schemes [28][29][30][31][32] or symmetrically detuned excitation [40,44]. To compete with these preparation protocols in terms of photon sources, the SUPER scheme should at least theoretically have a similar performance regarding the photon quality. The quality of the generated photons can be estimated in terms of three quantities: the single-photon purity P, the indistinguishability I of two subsequently emitted photons, and the brightness of the source. Especially the brightness can have different definitions, amounting to the number of photons generated [45,46] or collected [7] per excitation. Here, to estimate the brightness, we calculate the photon output O as the probability of a photon being generated per excitation cycle. A definition of these quantities as well as methods to calculate them are given in appendix D. The source is a perfect single-photon emitter, if all three quantities are unity. We evaluate them for the 2C-SUPER scheme, which is more straightforward to implement experimentally, using the parameters from Fig. 3. For the calculations we are additionally assuming a radiative decay rate of 1 ns −1 for the quantum dot exciton. The results are displayed in Tab. II, where we find that all quantities are very close to unity for the 2C-SUPER scheme. They are compared with the corresponding characteristics of a resonant π-pulse excitation with otherwise the same pulse parameters. 
Using the 2C-SUPER scheme, we can reach a single-photon purity and indistinguishability of 99.85 % as well as a photon output of 99.64 %, such that our protocol can compete with the single-photon source operated in resonance fluorescence and compares favorably concerning P and I. The crucial difference is the off-resonant operation of the quantum dot in our protocol presented in this work, which allows for a spectral separation of pump and signal. Note that P and I actually do differ slightly (beyond the decimal places shown in Tab. II). The reason for their similarity lies in the fact that the single-photon purity is already close to unity. The state preparation and the photon properties are also subject to decoherence mechanisms for example via phonons, the hyperfine interaction or the interaction with nuclear spins. For an optically excited quantum dot, phonons have been shown to be a major source of decoherence, which already might hinder the state preparation schemes drastically [47][48][49]. Therefore, we briefly analyze, if the phonons hinder the preparation of the excited state in the quantum dot when employing the scheme presented here. For this, we use the standard pure dephasing-type Hamiltonian and apply a path-integral formalism to solve the corresponding equations of motions [50,51]. We take standard GaAs parameters [52] for a quantum dot with radius 4 nm at a temperature of 4 K. Choosing the 2C-SUPER scheme as in Fig. 3 and Tab. II, we find that phonons hardly influence the scheme. To be specific, the final occupation without phonons is 99.99 %, while the calculation with phonons yields an occupation of 97.31 % and thus the difference is negligibly small. Phonons might also lead to a degradation of the photon properties [46,53,54], which will be explored in future work. Nonetheless, the small effect of the phonons on the SUPER scheme for state preparation gives a good hint that our scheme can be used to generate high-quality photons. VI. CONCLUSION In this work we propose to use largely detuned pulses for the preparation of the excited state in a quantum emitter system using the SUPER scheme. The mechanism relies on the swing-up of the excited state occupation by combining Rabi oscillations with different frequencies, which alone would not lead to any significant occupation. The SUPER mechanism can be exploited either by using (i) a frequency modulation or (ii) a twocolor scheme. In both cases, the excitation takes place in the transparent region of the material yielding a very distinct separation of exciting pulse and detection. The 2C-SUPER scheme can be experimentally realized using state-of-theart laser pulses and results in the generation of highquality single photons. In the recent years, many different schemes for the state preparation of quantum emitters have been proposed and implemented that go beyond simple Rabi rotations [48] like ARP [8,26,27] or phonon-assisted schemes [28][29][30][31][32]. Only the phonon-assisted schemes are off-resonant, but they rely on an incoherent process, while our scheme uses a coherent excitation mechanism. Recently also two-color excitation schemes have been developed to excite the quantum dot [40,44], underlining the need to remove the filtering to obtain a high photon yield. However, in these schemes, the energies of the two laser pulses were symmetric to the transition frequency, hence, one pulse was close to the region of higher excited states. 
In contrast, our proposed scheme uses two pulses which are both strongly negatively detuned and hence in the transparent region. This makes the SUPER scheme highly attractive for applications in the field of quantum information technology. The system studied in this paper consists of a ground state |g , an excited state |x separated by an energy ω 0 and is driven by a time dependent term Ω(t) within the rotating wave approximation (RWA). The RWA is useful in this case, because the detunings considered are in the range of a few meV, while the energy difference of the two levels is in the eV range. For a quantum dot, the driving is given in the dipole moment approximation, where Ω is proportional to the product of the electric field E(t) and the dipole moment with Ω(t) = dE(t)/ . The Hamiltonian describing this system reads Due to the rotating wave approximation, the field is given with its complex time dependence as given in Eqs. (3) and (5). From this, equations of motion can be obtained using the von-Neumann equation. Introducing the occupation of the excited state f = |x x| and the coherence (or polarization) p = |g x| leads to the Bloch equations Standard numerical approaches for solving differential equations can then be used to obtain the temporal dynamics of the system. The use of a rotating reference frame is beneficial for the numerical integration, as it reduces the fast oscillations due to the electric field, so that a larger time-step can be chosen. For illustration purposes we make use of the Bloch vector representation. For this, we change into a reference frame rotating with the laser frequency, resulting in a driving term which is real. In this case the equation of motion can be formulated using a cross product where r is the Bloch vector, rotating around a rotation axis Ω. Ω 0 corresponds to the envelope of the field as given in Eq. (3) and ∆ is the detuning. Note that a clear picture using the Bloch vector is not possible for fields like 6. Comparison of the calculations for ϕ = 0 in Fig. 3 with the result for ϕ = π/2. The inset shows that the final occupation is constant for phases ϕ = 0, .., 2π. the one given in Eq. (5), as due to the two frequencies, no rotating frame exists that results in a purely real driving term. Still, also for the two-color scheme the use of a rotating frame is beneficial for numerical purposes. Appendix B: Phase dependence In the 2C-SUPER scheme we introduced a relative phase ϕ between the two pulses in Eq. (5). When varying the phase from ϕ = 0, ..., 2π, we find that the final occupation does not change as a function of ϕ. The results are given in Fig. 6 for the same pulse parameters as in Fig. 3. Exemplarily, we show the dynamics for ϕ = 0 and ϕ = π/2. Due to the phase, the oscillations are shifted slightly while the final occupation remains unchanged. This is true for all phases, the inset shows that the final excited state occupation for all possible relative phases is the same. Appendix C: Behavior at very small detunings The SUPER scheme works similarly well at smaller detunings (∼ −1 meV), however, at some point the spectrum of the pulse will overlap with the transition energy due to the spectral width of the pulses. This is shown in Fig. 7, which is similar to Fig. 5 but shows the results for smaller detunings of (a) ∆ 1 = −1 meV and (b) ∆ 1 = −0.5 meV. In both cases, for small σ 1 a transition from 2C-SUPER to resonant Rabi-like oscillations occurs. 
For long pulses, i.e., when the spectral separation is larger, the occupation is achieved via the SUPERscheme. Appendix D: Definition of photonic quantities Here, we define the three photonic quantities discussed in Sec. V. The quantities are extracted from the projection operatorp = |g x|, yielding the polarization with To obtain the correlation functions occurring in the following, a pulse train has to be modeled. The separation between the pulses in this train is denoted as T Pulse and chosen such that all quantities are relaxed before the next pulse hits the quantum dot. The brightness is estimated via the photon output O, which we define as the number of photons emitted per excitation cycle given as [32,45,46] O := γ where t 0 is the center time of the pulse and 0 ≤ O ≤ γT Pulse . We scale O such that 100 % corresponds to the ideal case of a delta-pulse excitation with pulse area π. Note that the photon output of the source is different to the number of photons collected per excitation, which might be significantly lower. The single-photon purity P is defined as The single-photon purity P is a measure for the singlephoton component of the photonic state [4-9, 41, 42]. P = 1 implies a perfect single-photon purity. It has no lower bound, −∞ < P ≤ 1, since p 1ph can be larger than one in the case of bunching behavior instead of antibunching. The indistinguishability I of two successively emitted photons is p † (t)p(t) p † (t + τ )p(t + τ ) The last term in G HOM (τ ) it is bounded by 0.5 ≤ I ≤ 1 [56]. All correlation functions are calculated within the density matrix formalism by solving the Liouville-von Neumann equation and applying the quantum regression theorem for the propagation in the delay time argument [57]. Since the dynamics is purely Markovian in the case studied here, the quantum regression theorem is exact.
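As an illustration of the numerical approach outlined in Appendix A, the following hedged sketch integrates the rotating-frame equations of motion for the FM-SUPER drive of Sec. III, using the second parameter set quoted there (∆_C = −6 meV, ∆_M = 1 meV, ω_M = 8.32 meV, α = 30.3π, σ = 4 ps). The Gaussian envelope normalisation and the phase of the sinusoidal modulation relative to the pulse centre are our assumptions, so the final occupation need not exactly match the value reported in the paper.

```python
# Hedged sketch: numerical integration of the rotating-frame two-level equations
# (cf. Appendix A) for a Gaussian FM-SUPER pulse with sinusoidally modulated detuning.
import numpy as np

hbar = 0.6582                                   # meV*ps
alpha, sigma = 30.3 * np.pi, 4.0                # pulse area (rad) and duration (ps)
d_C, d_M, w_M = -6.0 / hbar, 1.0 / hbar, 8.32 / hbar   # detuning parameters (rad/ps)

def envelope(t):                                # assumed Gaussian with integral alpha
    return alpha / (np.sqrt(2 * np.pi) * sigma) * np.exp(-t**2 / (2 * sigma**2))

def detuning(t):                                # sinusoidal modulation, phase assumed
    return d_C + d_M * np.sin(w_M * t)

def deriv(g, x, t):
    O = envelope(t)
    return -0.5j * O * x, 1j * detuning(t) * x - 0.5j * O * g

g, x = 1.0 + 0j, 0.0 + 0j
dt = 0.0005
for t in np.arange(-4 * sigma, 4 * sigma, dt):  # propagate across the pulse
    k1g, k1x = deriv(g, x, t)
    k2g, k2x = deriv(g + 0.5 * dt * k1g, x + 0.5 * dt * k1x, t + 0.5 * dt)
    g, x = g + dt * k2g, x + dt * k2x

print("final excited-state occupation:", round(abs(x) ** 2, 3))
```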
Flow Cytometric Methods for Indirect Analysis and Quantification of Gametogenesis in Chlamydomonas reinhardtii (Chlorophyceae)

Induction of sexual reproduction in the facultatively sexual Chlamydomonas reinhardtii is cued by depletion of nitrogen. We explore the capacity for indirect monitoring of population variation in the gametogenic process using flow cytometry. We describe a high-throughput method capable of identifying fluorescence, ploidy and scatter profiles that track vegetative cells entering and undergoing gametogenesis. We demonstrate for the first time that very early and late growth phases reduce the capacity to distinguish putative gametes from vegetative cells based on scatter and fluorescence profiles, and that early/mid-logarithmic cultures show the optimal distinction between vegetative cell and gamete scatter profiles. We argue that early/mid-logarithmic cultures are valuable in such high-throughput comparative approaches when investigating optimisation or quantification of gametogenesis based on scatter and fluorescence profiles. This approach provides new insights into the impact of culture conditions on gametogenesis, while documenting novel scatter and fluorescence profile shifts which typify the process. This method has potential applications in enabling quick, high-throughput monitoring, increasing efficiency in the quantification of gametogenesis, comparing the switch between vegetative and gametic states across treatments, and providing criteria for enrichment of gametic phenotypes in cell-sorting assays.

Introduction

Chlamydomonas reinhardtii is a unicellular, isogamous alga of the Chlorophyceae which displays facultative sexual reproduction: it is able to switch between asexual and sexual modes of reproduction. Chlamydomonas is a widely used model in areas such as biofuel production [1,2] and flagella biology [3,4], and as a model for the evolution and maintenance of sexual reproduction [5,6]. Chlamydomonas occurs naturally in soil and fresh water, and the facultative nature of its reproduction is understood to be an adaptation to harsh environments. Sager and Granick [7] initially demonstrated that gametogenesis in this species was cued by depletion of nitrogen in the environment, with high nitrogen availability inhibiting gametogenesis. Reproducing via mitotic asexual division in environments of nutrient abundance, cells possess one of two non-recombining mating-type regions, leading to the requirement for fusion of cells of opposite mating types in sexual reproduction upon a decrease in nitrogen levels [8]. During gametogenesis, Chlamydomonas undergoes a mitotic division, producing gametes capable of fusing to produce recombinant offspring, each displaying flagellar proteins termed agglutinins, which are integral to gamete fusion [8,9]. Once fusion of opposite mating types has occurred, resulting in the production of a diploid zygote, meiosis occurs, leading to the production of recombinant offspring [10-13]. Gametogenesis can broadly be divided into two processes: the conversion to pregametes [14] and the acquisition of mating competence [15-17]. A detailed picture of the transcriptional programs involved in gametogenesis, including the transcriptional changes associated with mating competence, is emerging [18-20].
Previous analyses have demonstrated variability of mating efficiency within lines and between clones [21]; however, the exact nature of this variation, and its relationship to gametogenesis and cell morphology, remains unclear. Therefore, high-throughput and multi-parameter methods are required for robust quantification of mating efficiency, efficient and repeatable production of gametes, competitive mating methods, screening for gametic and sexual mutants, and cell sorting to obtain clonal and axenic populations derived from gametic cells [22]. Chlamydomonas reinhardtii divides asexually every 4-8 hours under conditions of continuous light. In early/mid G1 phase, cells pass a size checkpoint controlling transition through the cell division cycle; a minimum size must be attained for asexual division, estimated at 2.2 times the post-mitotic cell size [23], which can be moderated by light regime [24]. Craigie and Cavalier-Smith [24] showed that in mitotic divisions daughter cell size is uniform and independent of parent cell volume, suggesting a minimum volume of daughter cells. By adopting alternating light cycles, cell cycles can be synchronised [25], enabling successive rounds of division based on cell size to occur only in the dark cycle. This capacity for synchronisation makes Chlamydomonas an attractive model for exploring individual differences in the capacity to undergo gametogenesis, traits which have previously been less amenable to analysis due to the low-resolution methods available. The capacity for a cell to undergo gametogenesis under synchrony (where cell division is limited to a dark phase) is understood to be determined both by a cell's capacity to undergo cell division after environmental nitrogen levels decrease [26] and by a cell's ability to detect the decreasing nitrogen levels in the environment; the latter has received relatively little investigation. It is expected that the phase of growth of a cell under study may affect the scatter and fluorescence changes detected under experimental conditions. From exponential to stationary stages, changes in cell morphology (such as decreases in cell size), physiology and gene expression are expected to occur. A previous study in E. coli identified 20 cell types representing different stages of differentiation and growth [27], while other studies have shown growth-phase-dependent enzymatic production [28]. Earlier investigations in Chlamydomonas show growth-phase-related changes in intracellular components such as carbohydrate:protein and lipid:protein ratios [29]. Growth-phase-dependent changes occur in physiology and behaviour, such as autophagy, which is upregulated in stationary phase and downregulated upon induction of log phase [30], and phototaxis, which is highest under conditions of exponential growth [31]. The deprivation of nutrients such as phosphate, sulphur, magnesium, CO2, potassium and the gametogenic cue nitrogen (which is expected to increase stress under later growth phases) has been shown to lead to changes including glycogen accumulation [32,33]. It is expected that under nutrient stress cells will prioritise the accumulation of storage products. As a result, it is important to investigate whether such processes make vegetative cells morphologically more similar to gametes, or whether such environmental changes can lead to cells undergoing gametogenesis (due to decreases in nitrogen levels).
While there has been a great deal of interest in the process of gametogenesis, it remains difficult to identify and therefore quantify gametes within a population. Given the environmental cue required for gametogenesis, we would expect population variation in the perception of the cue and initiation of gametogenesis, and populations may therefore be a mixture of vegetative cells, cells undergoing gametogenesis, and gametes. Attempts to create gamete mating-type-specific agglutinin antibodies specific enough to repeatedly identify and enrich for gametes have been difficult, with many of the antigens present in gametic agglutinins showing domain crossover with other cell wall proteins found in gametic and non-gametic cells [34], limiting the applicability of immuno-methods for enriching gamete cells. Previous approaches to studying gametogenesis have relied on microscopic analyses and mating efficiency tests to determine when mating-competent cells emerge within a population [35]. This approach is time-consuming and requires extrapolation from small samples to larger populations. We sought to explore whether scatter and fluorescence profiles of cells measured using flow cytometry show repeatable changes during the process of gametogenesis, and whether this may offer an alternative method for exploring gametogenesis, providing single-cell quantitative resolution of cellular properties over the process of gametogenesis with large sample sizes. Given that flow cytometry enables the addition of fluorescent stains and immuno-methods, this method could enhance our understanding of the process of gametogenesis. Secondly, by identifying parameters that distinguish individual vegetative and gamete cells in a population, it may be possible to quantify the relative number of cells of each type, and to explore the dynamics and genetics underlying the decision to become a gamete. Flow cytometry is a mechanised technique in which a hydrodynamically focused fluid stream of single cells is passed through a series of laser beams, collecting fluorescence and light-scattering properties as the cells scatter light while passing through the light stream [36]. Forward-scatter is a measure of light scatter detected by a photodiode in line with the excitation light, and often provides a relative indication of cell size [37]. Light which is scattered far from its original path is collected by a second detector situated orthogonally to the emitting beam [37]. Dichroic mirrors are used to split this light. Scattered excitation light collected through the orthogonally situated detectors is termed side-scatter and provides an indication of internal and external structure/complexity, in addition to the refractive index of the cell [37]. However, the precise relationship between scatter and cell characteristics may differ depending on the cell type. Optical filters are used to select specific ranges of scattered light for quantification, before the scattered light and emitted fluorescence are detected by the photomultiplier tubes [37]. Collection of such fluorescence intensities allows the detection of experimentally introduced dyes or labels, which are excited by a low-wavelength laser and which fluoresce at higher wavelengths [36,38,39]. However, these studies can be affected by naturally fluorescent pigments in the cells under analysis. This autofluorescence is often seen as an issue for flow cytometric analyses, with autofluorescent signals overlapping into fluorophore channels, which must be compensated for.
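The scatter and fluorescence measurements described above lend themselves to simple programmatic gating once events have been exported from the cytometer. The study itself used FlowJo for gating; the sketch below is only an illustrative analogue, and the column names, thresholds and the toy event table are hypothetical stand-ins for exported data rather than the gating strategy used here.

```python
import numpy as np
import pandas as pd

# Hypothetical event table; real data would be exported from FCS files
# (e.g., ~20,000 events per sample with FSC, SSC and FL1-FL4 intensities).
rng = np.random.default_rng(0)
events = pd.DataFrame({
    "FSC": rng.lognormal(mean=6.0, sigma=0.4, size=20000),
    "SSC": rng.lognormal(mean=5.5, sigma=0.5, size=20000),
    "FL1": rng.lognormal(mean=4.0, sigma=0.6, size=20000),
})

# Rectangular 'gamete-like' gate on low SSC and low FL1 (thresholds are
# arbitrary placeholders; in practice they would be drawn from gamete
# reference samples).
ssc_max, fl1_max = 250.0, 60.0
in_gate = (events["SSC"] < ssc_max) & (events["FL1"] < fl1_max)

print(f"events in gate: {in_gate.sum()} ({100 * in_gate.mean():.1f}%)")
print("median FSC/SSC/FL1 of gated events:")
print(events.loc[in_gate, ["FSC", "SSC", "FL1"]].median())
```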
Complex changes occur throughout the process of Chlamydomonas gametogenesis in terms of size, subcellular structure and emission spectra of auto fluorescent pigments. More widely in biology, different isotypes or specific combinations of pigments seen in different species, or even between cell types of the same species have been proposed as possible signatures with which we can identify specific cell populations [40], and the auto fluorescent properties of Chlamydomonas gametes indicate a candidate for such an approach. Due to naturally fluorescent compounds such as NADPH, chlorophyll a and b (which are excited by the 488 laser), flow cytometric monitoring of scattering properties and fluorescence has been shown to be able to distinguish algal species [39,41,42]. While endogenous compounds may limit the fluorescent dyes and labels compatible with algal studies [39], their auto fluorescent properties, rather than being a barrier to analyses, may in fact act as a substitute for externally applied fluorophores. Here we show how auto fluorescence in Chlamydomonas (which fluoresces highly in the red, far red and infra-red spectra [39] and lower in other channels), may allow monitoring of differentiation and the distinguishing of different differentiated cell types within the same population. We demonstrate the utility of flow cytometry for high throughput quantification of gametogenesis, and highlight the potential of this technique to allow for detailed approaches to studies of gametogenesis and the decision to become a gamete. We use these techniques to explore optimal conditions for studying gametogenesis. We show how flow cytometry relates to earlier findings applying mating capacity and manual mating counts methods to studies of gametogenesis in Chlamydomonas, and therefore whether flow cytometry can serve as a proxy for understanding efficiency and variation in gametogenesis. Finally we explore the relationship between gamete and vegetative characteristics across the growth curve of Chlamydomonas cultures. Strains and culture conditions Strain CC-125 of Chlamydomonas reinhardtii for use in gametogenesis time courses was obtained from the Chlamydomonas Resource Centre (University of Minnesota). Additional strains CC-1690, CC-1692, CC-124, CC-2935, CC-2936, CC-2931 and CC-2932 (for CEROS computer-assisted sperm analysis system (CASA and comparative analyses) were also obtained from the CRC. Cells were maintained for long term storage on Tris-Acetate-Phosphate (TAP) [43] agar plates supplemented with yeast extract. TAP media was inoculated with cells taken directly from agar plates. All liquid and agar cultures were maintained at 22±1°C [44] with a 12:12 light/dark diurnal photoregime under 'cool white' fluorescent bulbs [8]. Liquid cultures were agitated using an orbital shaker at 120rpm. All cell concentration measures were calculated using a Haemocytometer counting chamber using the fixative Lugol's solution as described in [8]. Vegetative and gametic comparisons For vegetative and gametic fluorescence comparisons, three biological replicates of line CC-125 were grown in pre-growth cultures for three days, re-diluted to 500,000 cells/ml and grown for a further three days to ensure similarity in phase of growth; in the early logarithmic phase. Line CC125 was chosen as it shows capacity to mate with CC-124 and is amenable to mating trials (unpublished data). 
All cell concentration measures were calculated using a haemocytometer counting chamber using the fixative Lugol's solution as described [8]. 500ul samples of 1) vegetative growths, 2) cells immediately after centrifugation (centrifuged at 3000g for 2 min and resuspended in double distilled water (ddH 2 0), and 3) cells 24 hours after resuspension in (ddH 2 0), were each fixed in 500ul 2% paraformaldehyde (PFA), to create 1% final concentration of fixative. A second experiment repeated these methods, with the adjustment that condition 2 used cells immediately after centrifugation and resuspended in TAP media. This analysis confirmed no significant effect of centrifugation on scatter or fluorescence profiles (S1 Fig). Thus we can assume that the cells remain intact after centrifugation and, that changes in scatter and fluorescence after 24hours of nitrogen removal can be attributed to gametogenesis. Gametogenesis time-courses In the gametogenesis time-course analyses, two biological replicates of line CC-125 were grown directly from plates for seven days for late logarithmic cultures (~4-5 x 10 6 cells /ml). This enabled subsequent testing of effect of gametogenesis by applying t tests using two independent cultures that each had~20,000 independent flow cytometric data points. The~20,000 data points were combined to give a mean value at the beginning and end of the time-course for each independent culture. Late logarithmic cultures were chosen for the detailed time course due to historical use of this phase in mating experiments of Chlamydomonas species [11,[45][46][47][48]. To investigate the effect on growth phase on the phenotype of gametes, two biological replicates were grown for three days for the early logarithmic cultures (~1 x 10 6 cells /ml). Haemocytometer counts were taken before experiments for vegetative growth phase verification, and cells were dilute to 500,000cells/ml to avoid high concentrations in flow analysis. To investigate the effect of phase of light on gametogenesis, at the initiation of the light cycle, cells from two late logarithmic replicates were aliquoted, centrifuged at 3000g for 2min and resuspended in ddH 2 0 which is nitrogen free. Full randomisation was employed. The same process was repeated mid-way through the 12 hour light cycle for all remaining samples to induce gametogenesis. For vegetative and gametic comparisons, 500ul samples of three time-points; vegetative, 0hr (immediately after transfer to ddH 2 0) and 24hr post ddH 2 0 transfer, were fixed in 500ul 2% PFA and maintained at 4°C. ddH 2 0 was used to eliminate any effect of trace nitrogen present in the trace elements solution and TRIS used to create low nitrogen media, which may affect the induction of gametogenesis. Samples were then collated back into the conditions and randomised. Samples were taken hourly over 24hours for the late logarithmic gametogenesis time course. Samples were taken every six hours for two replicates each of early and late logarithmic phase samples and two replicates of samples testing the effect of time in light cycle nitrogen removal was performed. Cultures were maintained under continuous illumination over the course of gametogenesis. 250ul Samples were taken and fixed in 250ul 2% PFA. These were stored at 4°C until analysis. Prior to all analyses samples were filtered using 40um filters. 
In all analyses, time 0hr samples were taken immediately after transfer to ddH 2 0 to limit scatter effects of media on flow cytometric assessment and the effect of centrifugation on forward-scatter values. To investigate the effect of growth phase on gametogenesis, three replicates samples of similar inoculum size from line CC-125 Chlamydomonas were inoculated into separate test tubes of 10ml TAP directly from a single 1.5% agar plate. Samples were then randomised. Each day for 8 days, 100ul samples were taken from each vegetative growth and fixed in Lugol's solution for quantification using a haemocytometer to determine vegetative phase of growth. 500ul Samples were also taken from each tube of live cells, centrifuged at 3000g for 1min and resuspended in 600ul ddH20. 100ul of this solution was fixed in Lugol's solution for quantification of post transfer concentration. The nitrogen free suspension was kept in continuous light for 24hours, after which 100ul was fixed in Lugol's solution for quantification of the gametic population. The remaining 400ul was fixed in 400ul of 2% PFA for flow cytometric analysis and stored at 4C. Concentrations of the vegetative, post nitrogen transfer and post gametic samples were measured using a haemocytometer. Samples were filtered through a 40um filter before flow cytometric analysis CASA Relative cell size analyses were obtained using three replicates of ten cell lines (CC-1690, CC-1692, CC-125, CC-124, CC-2935, CC-2936, CC-2931, CC-2932, CC-2343, CC-2342). Replicate populations were derived from the sample plate of the respective cell line, and grown in separate pre-growth 10ml tube environments for three days under described growth conditions. Cells were then resuspended at 250,000 cells/ml for a further three days growth. Relative cell size was quantified using the CEROS computer-assisted sperm analysis system (CASA) (v.10, Hamilton & Thorne Research) using parameters optimised for Chlamydomonas. Individual database text files with track details were generated for every cell analysed. At early logarithmic phase, two 250ul samples were taken from each population. One was measured immediately using CASA to indicate relative vegetative size and cell concentration. The second was centrifuged at 3000g for 2min and resuspended in ddH 2 0 for 24hours, when measurements were again taken to indicate relative gametic size and cell concentration using a haemocytometer. Flow cytometry All flow cytometric analyses were conducted on a FACS Calibur flow cytometer (BD Biosciences) with an air-cooled argon laser (488nm emission), a red emitting diode (635nm emission) and four filters and detectors. For all morphological and auto fluorescent analyses, 20,000 cells were sampled per condition. In the gametogenesis time-course samples, where sample concentration limited data collection, samples were only included if they exceeded 15,000 events in the 'Cell' population. For estimation of proportion of dead cells, live samples were filtered immediately before analysis, when 2ug/ml Propidium Iodide (PI) was added in dark for 10min before measurement and percentage dead cells was calculated (average = 2.45% dead cells) (S2 Fig). All flow cytometric samples were filtered through 40um filters before analysis. Bleach was run between each sample until contaminating cells were not detected. PI stock solutions were filtered through 0.22um filters to remove small particles and stored at -20°C before use. DNA ploidy analysis. 
Optimisation of the ploidy protocol was determined using midlogarithmic synchronised vegetative cells prior to, 1 hour after and 2 hours into the dark phase of the 12:12 light: dark cycle as under synchronisation, cells only divide in the dark phase (S3A Fig). Concentrations and combinations of stains, fixatives and permeabilisation agents were optimised. For ploidy determination during gametogenesis, two 30ml replicates were grown for three days to produce two early logarithmic cultures (1-2 x 10 6 cells /ml). This time frame was chosen subsequent to the hourly late logarithmic time course. It is unclear whether the overlap in gametic/vegetative morphologies in late logarithmic cultures is due to morphological changes associated with growth, or depletion of nutrients leading to premature gametogenic induction. To minimise this risk, early logarithmic cultures were used for DNA quantification. Samples were aliquoted and centrifuged at 3000g for 3min, media was replaced with ddH 2 0. Aliquots were then pooled back into the two samples. 100ul Samples were taken each hour for 25hours and fixed in 10ul Lugol's solution. Concentrations were measured using a haemocytometer. Every hour 1000μl of the gametogenesis replicates were centrifuged at 3000g for 3min and replaces with 70% ethyl alcohol and stored at 4°C. The day before flow cytometric analysis, cells were sedimented by centrifugation at 3000g for 5min, and supernatant replaced with 10ug/ml RNase in PBS and incubated at 37°C for 1hr. PI was added at a concentration of 15ug/ ml, and maintained in the dark overnight at 4°C. Samples were filtered using 40um filters directly before analysis. For DNA analyses, 10,000 PI positive data points were collected based on singlet discrimination using PI height vs area plots. Autofluorescence profiles in the FL3 (670LPnm) channel meant that PI emission was read in the FL2 (585/42nm) channel [39]. Peaks were manually gated, and the same gates were employed on all cells. Cell concentrations did not exceed 1000cells/sec. Statistical analysis Statistical analysis was performed using IBM SPSS Version 23 and R version 3.0.1. All flow cytometric data was tested using FlowJo X 10.0.7r2 (TreeStar, USA). Median Fluorescence and scatter trends were plotted using the ggplot2 function in R [49] with stat_smooth. Effects of nitrogen removal with respect to light phase were plotted using qplot and geom_smooth functions. Results and Discussion Chlamydomonas displays shifts in fluorescence and scatter properties over gametogenesis in late logarithmic populations consistent with previous investigations The time-course of gametogenesis display noticeable and reproducible changes in the scatter and fluorescence properties of the population (Fig 1), with changes occurring in specific subsets of the population over time (Fig 2). Chlamydomonas gametes are smaller than vegetative cells (S4 Fig) [9], and are therefore expected to create lower levels of scatter and to contain reduced relative and absolute levels of auto fluorescent compounds. Consistent with this, side-scatter and fluorescence intensities decreased to below initial values over the final hours of gametogenesis (Fig 1), suggesting that the cells at the end of the time course are smaller and less internally complex than the vegetative cells present at the beginning of the time-course. 
A paired samples t-test showed that the difference in the median values of all fluorescence and scatter profiles at the removal of nitrogen and again at the end of gametogenesis, was significant only for the channel FL1 (t 1 = 16.0 P = 0.040) and approached significance for side-scatter (t 1 = 9.9 P = 0.064) in this small late logarithmic dataset. (df = 1 as there were two independent cultures that each had~20,000 independent replicates, the~20,000 replicates were combined to give a mean value for the specific timepoints tested). Testing for an increase in scatter and fluorescence over the first five hours of the trial with a linear regression, showed that there was a significant linear increase in FL2/ 585 channel (yellow fluorescence) (F 1,9 = 6.577, P = 0.03) indicating that in this part of the spectrum the flow cytometer detected increases in cell complexity or other fluorescence profiles prior to cell division in this late logarithmic culture. There were no significant changes in other channels (all P values > 0.05). Logarithmically scaled side-scatter intensities ( Fig 2B) taken at hourly intervals show an overall decrease in side-scatter associated with gametogenesis, indicating a relative decrease in internal complexity, with a concentration of cells displaying high side-scatter, emerging 6 hours after nitrogen removal, and peaking around 9-10 hours. This is associated with the development at 10 hrs after nitrogen removal, of a concentration of low side-scatter cells, which increases as the high side-scatter population size decreases. This is demonstrated in both replicates and corroborates previous evidence in the literature based upon microscopic approaches in late logarithmic cultures, which showed that beyond 9 hours light after the induction of gametogenesis, genetic division occurs (doubling of genetic material before division) [35]. This would be expected to increase internal complexity (and therefore side-scatter), which in turn would decrease when the cells subsequently divide. Gametic offspring are described forming within the cell walls of adult cells around 12hrs, and by 15hours of nitrogen deprivation, are released from the parental cell wall into the media [35]. Thus the lowering of side-scatter after this time (Fig 1) is what we expect to observe. Daughter cells remain within the mother cell wall for hours post division, with the cell wall swelling, this can be determined by increases in cell volume [24]. This time frame matches our observations in the side-scatter channel (Figs 2 and 1B), with the loss of a high side-scatter population, and the drop in median scatter by 15 hour post-nitrogen removal. This may indicate that internal complexity, as measured by side-scatter, tracks cells as they increase in size prior to division. While FL1 (530/30) and FL2 (585/42) fluorescence channels represent unknown components of Chlamydomonas biology, emissions detected in FL3 and FL4 correspond to chlorophyll fluorescence, emitting at 650-750 nm [50]. Changes to the number of chloroplast nucleoids are expected to change long wavelength fluorescence intensities. Examination through the process of gametogenesis, shows no increase in high FL3 or FL4 subsets (supported in Fig 1E and 1F), suggesting there is no de novo chloroplast nucleoid production through gametogenesis, a hypothesis which has previously been supported [51,52]. The data corresponds to evidence that vegetative cells possess multiple chloroplasts while gametes possess a single chloroplast [52]. 
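The per-culture statistics reported above (paired comparisons of median intensities between nitrogen removal and the end of gametogenesis, and a linear regression over the first hours) can be reproduced with standard routines once each ~20,000-event sample has been reduced to its median. The numbers below are invented placeholders, not the study's measurements; the original analysis was carried out in SPSS and R.

```python
import numpy as np
from scipy import stats

# Median FL1 intensity per independent culture (hypothetical values):
# one value at nitrogen removal and one at the end of gametogenesis.
fl1_start = np.array([152.0, 148.0])   # two independent cultures
fl1_end = np.array([96.0, 101.0])

t_res = stats.ttest_rel(fl1_start, fl1_end)   # paired t-test, df = n - 1 = 1
print(f"paired t-test: t = {t_res.statistic:.2f}, p = {t_res.pvalue:.3f}")

# Linear trend in median FL2 over the first five hours after nitrogen removal.
hours = np.arange(0, 6)                                       # 0..5 h
fl2_median = np.array([80.0, 83.0, 87.0, 90.0, 95.0, 99.0])    # placeholders
lin = stats.linregress(hours, fl2_median)
print(f"regression: slope = {lin.slope:.2f} per hour, p = {lin.pvalue:.4f}")
```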
The fluorescence profiles collected by flow cytometry may therefore be able to track the production of gametes, with their single chloroplast, through autofluorescence intensities. Fluorescence and scatter properties may convey meaningful information regarding the process of gametogenesis, providing a high-throughput and less time-consuming method of assaying the process. However, this method is not able to detect the production of mating-capable gametes, only the morphology of gametic populations. In assays of mating capacity during gametogenesis, Kates and Jones [35] documented two peaks of mating-capable gametes, around 12 hours (~35%) and again between 21 and 24 hours (from ~0% to ~100%). This is expected to represent the production of gametes after division of larger cells which lose mating capacity as they prepare to divide again. These dynamics cannot be determined using the fluorescence or scatter profiles described here. However, due to the morphological changes that occur during gametogenesis, scatter and fluorescence changes associated with gametogenesis may allow indirect monitoring, or may complement other methods capable of estimating mating efficiency. Additionally, it is clear that while scatter and autofluorescence trends can be observed in late logarithmic cultures and used to track gametogenesis (Fig 1), overlap in the emission profiles of vegetative and gametic cell populations means that this method is less able to distinguish phenotypes for enriching gamete phenotypes in late logarithmic cultures (Fig 3). As such, we sought to explore the role that growth phase has on the ability of scatter and fluorescence analyses to distinguish vegetative and gametic cells. Existing literature has demonstrated the association between forward-scatter and cell size [53-55]. When we compare late logarithmic cultures to early logarithmic culture time courses (Fig 4), we see that the positive trend in forward-scatter of late logarithmic cultures is reversed in early logarithmic cultures, and that the difference in scatter properties between vegetative and gametic cells is increased in early logarithmic cultures. This suggests that the non-significant decrease seen in forward-scatter in the two late logarithmic time-courses (t1 = -4.647, P = 0.135) may be an effect of growth phase. This explains the increase in FSC (size) when gametes should be smaller than vegetative cells. Whether this is due to cellular changes, or to changes in the fragility of flagella, is unclear. In contrast, in the early logarithmic cultures there is a significant increase in forward-scatter over gametogenesis, as seen in a paired samples t-test between nitrogen removal and the end of the time-course (t1 = 25.444, P = 0.025). This suggests that forward-scatter can be eliminated as a criterion for monitoring gametogenesis in late logarithmic cultures, but that it may be of utility in early logarithmic cultures.

[Figure panels: (C) late logarithmic culture immediately after transfer to nitrogen-free media (vegetative cells); (D) late logarithmic culture 24 hrs after transfer to nitrogen-free media (gametic cells). Overlap with (A) is noticeable in forward-scatter (FSC) but side-scatter (SSC) differs between the cell types, creating separate regions; overlap with (C) is noticeable in both FSC and SSC, limiting the capacity to distinguish between the cell types.]
Effect of growth phase on distinguishing vegetative and gametic profiles Previous investigations suggest that the growth phase of cultures affect the number of daughter cells that can be created in multiple fission cell division, with cells dividing into 2 cells or no division at all at the end of the growth curve, compared to 2, 4, and 8 daughter cells in earlier stages of growth. Cell count data confirmed reproducible changes in the number of gamete cells produced per vegetative cell as growth progressed (Fig 4). A Generalized linear mixed model where the vegetative and gametic cell concentrations were the dependent variable fitted the effects of time (day of growth) using Poisson probability distribution and a log-link function. This model showed a significant effect of day (df = 7, P<0.001). Early and late in the growth curve, the ratio of vegetative to gametic cells is close to 1:1, however, in mid-growth it is between 4 and 8; as we would expect. Changes occurring over the growth phase are also reported to impact the ability of gametes to mate, with early growth phase cells not mating well [21,35] those at the stationary phase often failing to produce quantitative (100%) mating, and cells at the end of the linear growth phase able to mate with 100% efficiency [35]. Given that growth phase affects cell division capacity, and is associated with changing nutrient availability and waste accumulation, we sought to investigate whether phases in growth differ in regard to changes in fluorescence and scatter over gametogenesis and the ability to distinguish vegetative and gametic populations based on scatter and autofluorescence profiles (Fig 5). In late logarithmic growths, there are no fluorescence channels showing complete separation between vegetative and gametic cells (Fig 6). Side-scatter/forward-scatter plots shown in Fig 6 show the increased overlap between vegetative and gametic profiles in late logarithmic cultures compared to early logarithmic cultures. Early logarithmic growths however, show better distinction between vegetative and gametic cells based on side-scatter and other fluorescences (Fig 6). These changes are significant in many of the fluorescence channels. A paired samples t-test was used to test the difference in the median values of scatter and fluorescence profiles at the initiation and conclusion of gametogenesis in early logarithmic cultures. This analysis shows that five of the six factors showed significant changes over gametogenesis; FSC t 1 = 25.4 P = 0.025, SSC t 1 = 165.8 P = 0.004, FL1 t 1 = 13.0 P = 0.049, FL3 t 1 = 46.1 P = 0.014, FL4 t 1 = 39.5 P = 0.016) while the FL2 channel (t 1 = 2.1 P = 0.278) was not significant (again, df is 1 because there were two independent cultures that each had~20,000 independent replicates, the~20,000 replicates were combined to give a mean value at the beginning and end of the time-course for each independent culture). This suggests that further examination of the role that phase of growth can have on the morphological and fluorescent profiles in gametogenesis would be useful. Early/mid logarithmic cultures offer a better separation of vegetative and gametic cells based on fluorescence and scatter profiles, than late logarithmic cultures (Fig 6). However, if late logarithmic cultures must be used, earlier analyses show that FL1 gives the best shift in distribution over gametogenesis as the change is significant. 
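The count model described at the start of this section (a generalized linear mixed model with a Poisson distribution and log link for vegetative and gametic cell concentrations against day of growth) has a straightforward simplified analogue. The sketch below drops the random replicate term and uses invented counts, so it illustrates the model family rather than reproducing the study's SPSS/R analysis.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical gamete counts per fixed sampling volume for three replicates
# over eight days of growth (placeholders, not the study's data).
df = pd.DataFrame({
    "day":   [1, 2, 3, 4, 5, 6, 7, 8] * 3,
    "count": [12, 30, 75, 160, 240, 310, 330, 300,
              10, 28, 80, 150, 255, 300, 345, 310,
              14, 33, 70, 170, 235, 320, 325, 305],
})

# Poisson GLM with log link; the full analysis additionally nested a random
# replicate effect, which is omitted here for brevity.
model = smf.glm("count ~ C(day)", data=df, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```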
These data also suggest that the phase in growth can alter the trend seen in gametogenesis (See FSC and 585, Fig 6), and lower the distinction between the cell types in all channels based on morphological and fluorescence properties. Effect of growth phase on scatter properties of Chlamydomonas cells Above, we have described both the general changes in scatter and fluorescence comparing both gametic and vegetative cells, as well as the effect of growth phase (Figs 4 and 6). While gamete cells tend to show lower forward and side scatter intensities than vegetative cells (due to their smaller size and lower complexity), these trends may be affected by changes in scatter associated with the growth phase (Fig 4), i.e. the population density and history of the culture, accumulation of waste products as well as progressive utilisation of nutrients. To investigate the role of growth phase, we sampled vegetative and gametic Chlamydomonas cells across the growth curve (Fig 7). This replicated previous findings that late logarithmic cultures show increased overlap in scatter profiles of vegetative and gametic cells at later phases of growth, and show variation in the number of gametes produced (S5 Fig). Interestingly, we found that forward scatter values showed overlap in early growth cultures, which separated at early/mid logarithmic concentrations, and overlapped again towards higher concentrations. This showed that vegetative size increased to a consistent level from 4 days growth, whereas gametic cells showed lower sizes in early phases, and increased to a consistent size from day 5. In contrast, the side scatter profiles of both gametic and vegetative cells decrease over the growth phase. There is a point at which the side scatter of gamete cells decreases to a constant level in the mid and later phases of growth. We have previously seen that for every vegetative cell, the number of gametes produced changes depending on the phase of growth (Fig 5). This may explain the difference in scatter between vegetative and gametic cells, with cells at early and late phases of growth showing direct differentiation from vegetative to gametic cells, without division leading to larger gametes on average. These results also raise questions as to the reasons behind the overlap in scatter of the cell types at later phases of growth and question the utility of using late logarithmic cultures specifically for assessing the morphological differentiation of gametes (i.e. measuring the propensity for mating). The question remains, are these similarities between early and late logarithmic scatter profiles due to cell growth forces that lead to differentiation without division, or are late logarithmic cultures already differentiated into gametes due to depletion of nutrients associated with cell growth in confined media? DNA replication over gametogenesis Previous investigations into the process of cell division during the production of gametes, describes the morphological identification of DNA multiplication [35]. In order to create a sensitive quantification of this process in Chlamydomonas, we employed Propidium Iodide (PI) staining, since PI fluoresces when intercalated in DNA [56]. Microscopic analyses suggest that after 9 hours in light after the induction of gametogenesis, genetic division occurs (doubling of genetic material before division) [35]. 
In our flow cytometric time course we see diploid levels peaking at 9 hours, but the beginning of DNA synthesis starting as early as 4 hours after the removal of nitrogen, with rates of diploidy beginning to drop at 11-14 hrs (Fig 8). This suggests variation in the initiation of cell division between cells. Microscopic approaches [35] confirm that gametic offspring form within the adult cell wall around 12 hrs and, by 15 hours of nitrogen deprivation, have been released from the parental cell wall into the media. This corresponds to the end of the peak seen in side-scatter and ploidy fluorescence prior to 14-15 hours post nitrogen removal. We see a drop in diploidy around 15 hours, which may confirm loss from parental cell walls; however, the distribution of diploidy continues to fall after 15 hours, suggesting some cells may still be being released from parental cell walls, and that there is variation in this process.

The relationship between nitrogen removal and light phase on the process of gametogenesis

Mating reactions suggest that light phase impacts the process of gametogenesis [35], and so we measured the effect that nitrogen removal at the beginning or middle of the light phase had on the percentage of cells occupying the gametic side-scatter distribution in the time-course. We used a generalized linear mixed model to test the effect of the time in the light cycle at which nitrogen was removed on the proportion of cells in the 'gamete' side-scatter gate over the course of gametogenesis. We specified the treatments 'time into the light-cycle' (at which nitrogen was removed) and 'time since' (the removal of nitrogen) as fixed factors; replicate was a random factor nested within 'time into the light-cycle'. The whole model was significant (F10,10 = 4,721, P < 0.001); there was a significant time into the light-cycle by time since interaction (F4,10 = 865.9, P < 0.001), and so the main effects of time into the light-cycle (F1,10 = 7.39, P = 0.02) and time since nitrogen removal (F4,10 = 866.0, P < 0.001) are not interpreted further. The variance due to replicate(time into the light-cycle) was not significant (Z = 0.95, P = 0.341). Comparing the process of gametogenesis from the time nitrogen is removed (Fig 9A), cells induced in the middle of the light cycle show a faster production of gametes than those induced at the beginning of the light cycle. Whether this creates an earlier plateau in gamete production is not clear given the resolution of the time course. The altered slope of the distribution (Fig 9) is interpreted as the requirement to reach a certain size to produce gametes; as cells are smallest near the beginning of the light cycle, they require a period of growth before gametogenesis. By comparing percentage gamete formation to light phase (Fig 9B), gamete formation appears to begin at the same point in the light cycle (around 12-13 hours after light initiation), but cells induced in the middle of the light cycle form gametes at a higher rate.

Effective scatter and fluorescent properties for identifying gamete subpopulations

Evidence from three experiments (the late logarithmic time course, the early logarithmic time course and the early logarithmic replication experiment) confirms that the scatter and fluorescence channels differ in the extent to which vegetative and gametic cells overlap in their profiles.
In early logarithmic cultures there is also evidence of shifts in the distribution of fluorescence, with low levels of overlap especially in side-scatter and, to a lesser extent, FL1 (530/30) and FL2 (585/42) (Fig 6). This corresponds to the t-tests reported earlier, in which significant changes in early logarithmic cultures were found in all channels except FL2. Chi-squared tests on raw cell counts in uniform SSC and FL1 gates (based on gamete cell profiles), collating all three replicates in early logarithmic cultures, were performed to determine whether the difference between post-transfer and gametic cultures was significant in early logarithmic cultures. Changes in side-scatter (χ²₁ = 41295) and FL1 (χ²₁ = 42603.06) were highly significant (P << 0.05).

Conclusion

In summary, we report a novel and high-throughput method for indirectly monitoring population variation in the process of gametogenesis in Chlamydomonas reinhardtii, a method compatible with existing approaches to studying gametogenesis and the acquisition of mating competence. Flow cytometric time courses revealed significant changes in scatter and fluorescence profiles associated with gametogenesis, narrowing down scatter and fluorescence criteria for enriching gamete phenotypes (SSC/FL1 in early logarithmic cultures and FL1 in late logarithmic cultures). This high-throughput approach allowed investigation of the dynamics of gametogenesis, such as the impact of light phase, and therefore cell size and growth, on the rate at which cells displaying a gametic phenotype emerge. Changes documented throughout gametogenesis provide scatter and fluorescence profiles for future cell sorting or comparisons of gametogenesis, and display the utility of autofluorescence in tracking cell changes. Finally, we have demonstrated the complex relationship between growth phase (Fig 10), cell morphology and differentiation, raising questions about the validity of using late logarithmic cultures in assessments of cell specification. The overlap between vegetative and gametic distributions, in addition to the similarity between early logarithmic gametes and late logarithmic vegetative cells (Fig 4), raises further important questions. It is unclear whether the lowered median side-scatter of late logarithmic vegetative cells is due to cell competition, nutrient limitation, waste accumulation or other factors. For example, we can expect that as cells approach stationary phase, nutrients will decrease in abundance and waste products will accumulate. Given that nitrogen can be depleted in plate cultures over a few days [9], a similar process may occur in liquid cultures; therefore, the hypothesis that late logarithmic cultures may include a subset of already differentiated gametes requires investigation. In the related species Chlamydomonas eugametos, where gametogenesis is cued not by nitrogen but by another nutrient stressor, mating-capable gametes have been observed to develop later in the growth phase as nutrients become limiting [46]. If this is true for Chlamydomonas reinhardtii, mating efficiency tests using late logarithmic cultures (a common experimental strategy) would not give a repeatable measure of mating efficiency. Further complications of the process of gametogenesis in late logarithmic cultures come from the observations of [35], who noted that above 3 × 10^6 cells/ml, cells can lose synchronisation.
Non-synchronous cultures show a different distribution in the rate of gamete production, which might also contribute to the quantitative mating capacity seen in late logarithmic cultures [35]. This could be further investigated by testing the mating capacity of vegetative cultures at different phases of growth.
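The gate-count comparison reported in the results above (chi-squared tests on raw event counts inside fixed SSC and FL1 gates, pooled across replicates) reduces to a 2×2 contingency test. The counts below are hypothetical placeholders of roughly the right order of magnitude; the original analysis was run in SPSS/R.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: immediately post-transfer culture vs. 24 h gametic culture.
# Columns: events inside vs. outside the fixed 'gamete' SSC gate.
table = np.array([
    [ 8200, 51800],   # post-transfer: in gate, out of gate
    [41500, 18500],   # 24 h gametic:  in gate, out of gate
])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-squared = {chi2:.1f}, df = {dof}, p = {p:.3g}")
```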
k-shape poset and branching of k-Schur functions We give a combinatorial expansion of a Schubert homology class in the affine Grassmannian Gr_{SL_k} into Schubert homology classes in Gr_{SL_{k+1}}. This is achieved by studying the combinatorics of a new class of partitions called k-shapes, which interpolates between k-cores and k+1-cores. We define a symmetric function for each k-shape, and show that they expand positively in terms of dual k-Schur functions. We obtain an explicit combinatorial description of the expansion of an ungraded k-Schur function into k+1-Schur functions. As a corollary, we give a formula for the Schur expansion of an ungraded k-Schur function. 1. Introduction 1.1. k-Schur functions and branching coefficients. The theory of k-Schur functions arose from the study of Macdonald polynomials and has since been connected to quantum and affine Schubert calculus, K-theory, and representation theory. The origin of the k-Schur functions is related to Macdonald's positivity conjecture, which asserted that in the expansion the coefficients K λµ (q, t), called q, t-Kostka polynomials, belong to Z ≥0 [q, t]. Although the final piece in the proof of this conjecture was made by Haiman [4] using representation theoretic and geometric methods, the long study of this conjecture brought forth many further problems and theories. The study of the q, t-Kostka polynomials remains a matter of great interest. It was conjectured in [9] that by fixing an integer k > 0, any Macdonald polynomial indexed by λ ∈ B k (the set of partitions such that λ 1 ≤ k) could be decomposed as: for some symmetric functions s (k) λ [X; t] associated to sets of tableaux called atoms. Conjecturally equivalent characterizations of s (k) λ [X; t] were later given in [10,8] and the descriptions of [9,10,8] are now all generically called (graded) k-Schur functions. A basic property of the k-Schur functions is that and it thus follows that Eq. (2) significantly refines Macdonald's original conjecture since the expansion coefficient K λµ (q, t) reduces to K λµ (q, t) for large k. Furthermore, it was conjectured that the k-Schur functions satisfy a highly structured filtration, which is our primary focus here. To be precise: Conjecture 1. For k ′ > k and partitions µ ∈ B k and λ ∈ B k ′ , there are polynomialsb In particular, the Schur function expansion of a k-Schur function is obtained from (3) and (4) by letting k ′ → ∞. The remarkable property described in Conjecture 1 provides a step-by-step approach to understanding k-Schur functions since the polynomialsb (k→k ′ ) µλ (t) can be expressed positively in terms of the branching polynomialsb It has also come to light that ungraded k-Schur functions (the case when t = 1) are intimately tied to problems in combinatorics, geometry, and representation theory beyond the theory of Macdonald polynomials. Thus, understanding the branching coefficients,b (k) µλ :=b (k) µλ (1) gives a step-by-step approach to problems in areas such as affine Schubert calculus and K-theory (for example, see § §1.4). Our work here gives a combinatorial description for the branching coefficients, proving Conjecture 1 when t = 1. We use the ungraded k-Schur functions s (k) λ [X] defined in [12], which coincide with those defined in [8] terms of strong k-tableaux. Moreover, we conjecture a formula for the branching polynomials in general. The combinatorics behind these formulas involves a certain k-shape poset. 1.2. k-shape poset. 
A key development in our work is the introduction of a new family of partitions called k-shapes and a poset on these partitions (see §2 for full details and examples). Our formula for the branching coefficients is given in terms of path enumeration in the k-shape poset. For any partition λ identified by its Ferrers diagram, we define its k-boundary ∂λ to be the cells of λ with hook-length no greater than k. ∂λ is a skew shape, to which we associate compositions rs(λ) and cs(λ), where rs(λ) i (resp. cs(λ) i ) is the number of cells in the i-th row (resp. column) of ∂λ. A partition λ is said to be a k-shape if both rs(λ) and cs(λ) are partitions. The rank of k-shape λ is defined to be |∂λ|, the number of cells in its k-boundary. Π k denotes the set of all k-shapes. We introduce a poset structure on Π k where the partial order is generated by distinguished downward relations in the poset called moves (Definition 19). The set of k-shapes contains the set C k of all k-cores (partitions with no cells of hooklength k) and the set C k+1 of k + 1-cores. Moreover, the maximal elements of Π k are given by C k+1 and the minimal elements by C k . In Definition 36 we give a charge statistic on moves from which we obtain an equivalence relation on paths (sequences of moves) in Π k ; roughly speaking, two paths are equivalent if they are related by a sequence of charge-preserving diamonds (see Eqs. (42)-(44)). Charge is thus constant on equivalence classes of paths. For λ, µ ∈ Π k , P k (λ, µ) is the set of paths in Π k from λ to µ and P k (λ, µ) is the set of equivalence classes in P k (λ, µ). Our main result is that the branching coefficients enumerate these equivalence classes. To be precise, for λ ∈ C k+1 and µ ∈ C k , set b (k) so that Hereafter, we will label k-Schur functions by cores rather than k-bounded partitions using the bijection between C k+1 and B k given by the map rs. Theorem 2. For all λ ∈ C k+1 and µ ∈ C k , b (k) We conjecture that the charge statistic on paths gives the branching polynomials. Conjecture 3. For all λ ∈ C k+1 and µ ∈ C k , b (k) 1.3. k-shape functions. The proof of Theorem 2 relies on the introduction of a new family of symmetric functions indexed by k-shapes. These functions generalize the dual (affine/weak) k-Schur functions studied in [13,5,8]. For λ a k + 1-core, the dual k-Schur function is defined as the weight generating function Weak where WTab k λ is the set of k-tableaux of shape λ. Here we consider k-shape tableaux. These are defined similarly, but now we allow the shapes in (12) to be k-shapes and λ (i) /λ (i−1) are certain reverse-maximal strips (defined in §4). The weight is again defined by (13) and for each k-shape λ, we then define the cohomology k-shape function S (k) λ to be the weight generating function where Tab k λ denotes the set of reverse-maximal k-shape tableaux of shape λ. We show the k-shape functions are symmetric and that when λ is a k + 1-core, (see Proposition 73). We give a combinatorial expansion of any k-shape function in terms of dual (k − 1)-Schur functions. Theorem 4. For λ ∈ Π k , the cohomology k-shape function S (k) λ [X] is a symmetric function with the decomposition It is from this theorem that we deduce Theorem 2. Letting λ ∈ C k+1 and µ ∈ C k , we have b (k) µλ = Weak using (7), (11) for k − 1, (16), and Theorem 4. A (homology) k-shape function can also be defined for each k-shape µ by and its ungraded version is s (7), we have that for µ ∈ C k . The Pieri rule for ungraded homology k-shape functions is given by Theorem 5. 
For λ ∈ Π k and r ≤ k − 1, one has where the sum is over maximal strips ν/λ of rank r. Here we have introduced the cohomology k-shape functions as the generating function of tableaux that generalize k-tableaux (those defining the dual k-Schur functions). There is another family of "strong k-tableaux" whose generating functions are k-Schur functions [8]. The generalization of this family to give a direct characterization of homology k-shape functions remains an open problem (see § §1.5 for further details). Theorems 4 and 5 are proved using an explicit bijection (Theorem 75): such that wt(T ) = wt(U ). The bulk of this article is in establishing this bijection, which requires many intricate details. See § §1.8 for pointers to the highlights of our development. Using Peterson's characterization of the Schubert basis of H * (Gr SL k+1 ) and the definition of [12] for s (k) λ [X], it is shown in [6] that there is a Hopf algebra isomorphism mapping homology Schubert classes to k-Schur functions. Let i (k) : ΩSU k → ΩSU k+1 be the inclusion map and i Then i It is shown using geometric techniques that b (k) µλ ∈ Z ≥0 in [7]. This entire picture can be dualized. There is a Hopf algebra isomorphism [6] H * (Gr SL k+1 ) −→ Λ/I k mapping cohomology Schubert classes to dual k-Schur functions. Writing i (k) * : H * (ΩSU k+1 ) → H * (ΩSU k ) and π (k) : Λ/I k → Λ/I k−1 for the natural projection, we have the commutative diagram Using (24) and (26), one has The combinatorics of this article is set in the cohomological side of the picture. However, we also speculate that the k-shape functions s (k) λ [X] (λ ∈ Π k ) represent naturally-defined finite-dimensional subvarieties of Gr SL k+1 , interpolating between the Schubert varieties of Gr SL k+1 and (the image in Gr SL k+1 of) the Schubert varieties of Gr SL k . Definition (18) would then express the decomposition of this subvariety in terms of Schubert classes in H * (Gr SL k+1 ). 1.5. k-branching polynomials and strong k-tableaux. The results of this paper suggest an approach to proving Conjecture 3. Recall that the conjecture concerns the graded k-Schur functions s (k) λ [X; t], for which there are several conjecturally equivalent characterizations. Our approach lends itself to proving the conjecture for the description of k-Schur functions given in [8]; that is, as the weight generating function of strong k-tableaux: where STab k+1 λ is the set of strong (k + 1)-core tableaux of shape λ and spin(T ) is a statistic assigned to strong tableaux. Note, it was shown [8] that the s (k) λ used in this article equals the specialization of this function when t = 1. To achieve this, the notion of strong strip (defined on cores) needs to be generalized to certain intervals µ ⊂ λ of k-shapes λ, µ ∈ Π k . We should point out that the symmetry of the k-Schur functions defined by (28) is non-trivial. A forthcoming paper of Assaf and Billey [1] proves this result, as well as the positivity of b k→∞ µλ (t), using dual equivalence graphs. The bijection described above would also give a direct proof of the symmetry. 1.6. Tableaux atoms and bijection (20). The earliest characterization of k-Schur functions is the tableaux atom definition of [9]. The definition has the form where A Unfortunately, actually determining which tableaux are in an atom A (k) µ is an extremely intricate process. Nonetheless, the construction of our bijection (20) was guided by the tableaux atoms and has led us to yet another conjecturally equivalent characterization for the k-Schur functions. 
In particular, iterating the bijection from a tableau T of weight µ ⊢ n, we get: Namely, this provides a bijection between T and (T (k) , [p n−1 ], . . . , [p k ]). We then say that T (k) is the k-tableau associated to T and conjecture that Conjecture 6. Let ρ be the unique element of C k+1 such that rs(ρ) = µ, and let T (k) µ be the unique k-tableau of weight µ and shape ρ (see [11]). Then is the k-tableau associated to T . Support for this conjecture is given in [14] where it is shown that the bijection between T and (T (k) , [p n−1 ], . . . , [p k ]) is compatible with charge. In particular, it is shown that one can define a charge on k-tableaux satisfying the relation charge(T ) = charge(T (k) ) + charge([p n−1 ]) + · · · + charge([p k ]) . (34) 1.7. Connection with representation theory. In his thesis, L.-C. Chen [3] defined a family of graded S n -modules associated to skew shapes whose row shape and column shape are partitions. Applying the Frobenius map (Schur-Weyl duality) to the characters of these modules, one obtains symmetric functions. Chen has a remarkable conjecture on their Schur expansions, formulated in terms of katabolizable tableaux. We expect that if the skew shape is the k-boundary of a k-shape λ then the resulting symmetric function is the homology k-shape function s λ [X; t] defined in (18). In [3], an important conjectural connection is also made between the above S n -modules and certain virtual GL n -modules supported in nilpotent conjugacy classes, via taking the zero weight space. Using a subquotient of the extended affine Hecke algebra, J. Blasiak [2] constructed a noncommutative analogue of the Garsia-Procesi modules R λ , whose Frobenius image is the modified Hall-Littlewood symmetric function. In this setup there is an analogue of katabolizable tableaux and conjectured analogues of homology k-shape functions and the atoms of [9] and [3]. 1.8. Outline. In §2 we define basic objects of interest here such as k-shapes, moves and the k-shape poset, and give some of their elementary properties. In §3 we introduce an equivalence relation on paths in the k-shape poset called diamond equivalence and show that it is generated by a smaller set of equivalences called elementary equivalences. In §4 we introduce covers and strips for k-shapes, and prove that there is a unique path in the k-shape poset allowing the extraction of a maximal strip from a given strip (Proposition 91). In §4 we also state the main result (bijection (20)) of this article (Theorem 75) and show how it leads to Theorem 4 and Theorem 5. Elementary properties of the functions S The remaining sections, which contain the bulk of the technical details in this article, are concerned with the proof of bijection (20) by iteration of the pushout. This bijection sends compatible initial pairs (certain pairs (S, m) consisting of a strip S and a move m, both of which start from a common k-shape) to compatible final pairs (certain pairs (S ′ , m ′ ) consisting of a strip S ′ and a move m ′ , both of which end at a common k-shape). The basic properties of the pushout are established in §5 and §6. The most technical parts of this article ( §7 and §8) are devoted to the interaction between pushouts and equivalences in the k-shape poset. The basic statement can be summarized as: pushouts send equivalent paths to equivalent paths. In §9- §14 we develop, in a brief form, the pullback, which is inverse to the pushout ( §15). 
For those interested in getting a quick hold on the pushout algorithm on which bijection (20) relies, we suggest reading the beginning of §2, §3, §4 and §7 to get the basic definitions and ideas, along with § §4.9 and § §7.1 that describe canonical processes to obtain a maximal strip and to perform the pushout respectively. 0652641, DMS-0652648, DMS-0652668, and DMS-0901111. T.L. was supported by a Sloan Fellowship. L.L was supported by FONDECYT (Chile) grant #1090016 and by CONICYT (Chile) grant ACT56 Lattices and Symmetry. The k-shape poset For a fixed positive integer k, the object central to our study is a family of "kshape" partitions that contains both k and k+1-cores. The formula for k-branching coefficients counts paths in a poset on the k-shapes. As with Young order, we will define the order relation in terms of adding boxes to a given vertex λ, but now the added boxes must form a sequence of "strings". Here we introduce k-shapes, strings, and moves -the ingredients for our poset. The elements of Z 2 >0 are called cells. The row and column indices of a cell b = (i, j) are denoted row(b) = i and col(b) = j. We use the French/transpose-Cartesian depiction of Z 2 >0 : row indices increase from bottom to top. The transpose involution on Z 2 >0 defined by (i, j) → (j, i) induces an involution on Y denoted λ → λ t . The diagonal index of b = (i, j) is given by d(b) = j − i and we then define the distance between cells x and y to be |d(x) − d(y)|. The arm (resp. leg) of b = (i, j) ∈ λ is defined by a λ (b) = λ i − j (resp. l λ (b) = λ t j − i) is the number of cells in the diagram of λ in the row of b to its right (resp. in the column of b and above it). The hook length of b = (i, j) ∈ λ is defined by Let D = µ/λ be a skew shape, the difference of Ferrers diagrams of partitions µ ⊃ λ. Although such a set of cells may be realized by different pairs of partitions, unless specifically stated otherwise, we shall use the notation µ/λ with the fixed pair λ ⊂ µ in mind. D is referred to as λ-addable and µ-removable. A horizontal (resp. vertical ) strip is a skew shape that contains at most one cell in each column (resp. row). A λ-addable cell (corner) is a skew shape µ/λ consisting of a single cell. Define top c (D) and bot c (D) to be the top and bottom cells in column c of D and let right r (D) and left r (D) be the rightmost and leftmost cells in row r of D. Let c + (resp. c − ) denote the column right-adjacent (resp. left-adjacent) to column c. Similar notation is used for rows. 2.2. k-shapes. The k-interior of a partition λ is the subpartition of cells with hook length exceeding k: The k-boundary of λ is the skew shape of cells with hook bounded by k: We define the k-row shape rs k (λ) ∈ Z ∞ ≥0 (resp. k-column shape cs k (λ) ∈ Z ∞ ≥0 ) of λ to be the sequence giving the numbers of cells in the rows (resp. columns) of ∂ k (λ). Remark 9. The transpose map is an involution on Π k N . The set of k-shapes includes both the k-cores and k + 1-cores. Remark 11. A k-shape λ is uniquely determined by its row shape rs(λ) and column shape cs(λ). Remark 13. Suppose for some c, p ≥ 1 and µ ∈ Π, the cells top j (∂µ) for c ≤ j < c + p, all lie in the same row. As cs(µ) and Int(µ) are partitions, it follows that the cells bot j (∂µ) lie in the same row (say the r-th) for c ≤ j < c + p. Since rs(µ) is a partition, one may deduce that µ r−1 ≥ µ r + p. In particular, there is a µ-addable corner in the row of bot c (∂µ) for all columns c. Strings. 
Given the k-shape vertices, the primary notion to define our order is a string of cells lying at a diagonal distance k or k + 1 from one another. To be precise, let b and b ′ be contiguous cells when Remark 14. Since λ-addable cells occur on consecutive diagonals, a λ-addable corner x is contiguous with at most one λ-addable corner above (resp. below) it. Definition 15. A string of length ℓ is a skew shape µ/λ which consists of cells {a 1 , . . . , a ℓ }, where a i+1 is below a i and they are contiguous for each 1 ≤ i < ℓ. Note that all cells in a string s = µ/λ are λ-addable and µ-removable. We thus refer to λ-addable or µ-removable strings. Any string s = µ/λ can be categorized into one of four types depending on the elements of ∂λ \ ∂µ, as described by the following property. Definition 17. A string s = µ/λ is defined to be one of four types, cover-type, row-type, column-type, or cocover-type when ∂λ \ ∂µ equals the first, second, third, or fourth set, respectively, given in (36). It is helpful to depict a string s = µ/λ by its diagram, defined by the following data: cells of s are represented by the symbol •, cells of ∂λ \ ∂µ a represented by •, and cells of ∂µ ∩ ∂λ in the same row (resp. column) as some • or • are collectively depicted by a horizontal (resp. vertical) line segment. The four possible string diagrams are shown in Figure 1. Given a string s = µ/λ = {a 1 , . . . , a ℓ }, of particular importance are the columns and rows in its diagram that contain only a • or only •. To precisely specify such rows and columns, we need some notation. For a skew shape D = µ/λ, define ∆ rs (D) = rs(µ) − rs(λ) ∈ Z ∞ . The positively (resp. negatively) modified rows of D are those corresponding to positive (resp. negative) entries in ∆ rs (D). Similar definitions apply for columns. It is clear from the Figure 1 diagrams that a given string has at most one positively or negatively modified row and column. Such rows and columns are earmarked as follows, given they exist: • c s,u is the unique column negatively modified by s. Equivalently, c s,u = col(left row(a1) (∂λ)) if and only if the leftmost column in the diagram of s has a • • r s,d is the unique row negatively modified by s. Equivalently, r s,d = row(bot col(a ℓ ) ) iff the lowest row in the diagram of s has a • • r s,u is the unique row positively modified by s. Equivalently, r s,u = row(a 1 ) iff the topmost row in the diagram of s has no • • c s,d is the unique column positively modified by s. Equivalently, c s,d = col(a ℓ ) if the rightmost column in the diagram of s has no •. Note that c s,u < col(a 1 ) and r s,d < row(a ℓ ) when defined. Remark 18. For a λ-addable string s, we have the following vector equalities in the free Z-module Z ∞ = i∈Z>0 Ze i with standard basis {e i | i ∈ Z >0 }: ∆ rs (s) = e rs,u − e r s,d where by convention e i = 0 if the subscript i is not defined (e.g. c s,u is not defined when left row(a1) (∂λ) ∈ ∂λ \ ∂µ). 2.4. Moves. Our poset will be defined by taking a k-shape µ to be larger than λ ∈ Π when the skew diagram µ/λ is a particular succession of strings (called a move). To this end, define two strings to be translates when they are translates of each other in Z 2 by a fixed vector, and their corresponding modified rows and columns agree in size. Equivalently, their diagrams have the property that •'s and •'s appear in the same relative positions with respect to each other and the lengths of each corresponding horizontal and vertical segment are the same. 
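Before developing moves further, the basic objects of § §2.2 and 2.3 are easy to experiment with on small examples. The following is a minimal illustrative sketch (not taken from this paper; the function names are ad hoc) of hook lengths, the k-boundary, the k-row and k-column shapes, a k-shape test, a k-core test, and the greedy extension of an addable corner to a maximal string below it. It assumes the characterization that λ is a k-shape precisely when rs k (λ) and cs k (λ) are both partitions, and that contiguous cells lie at diagonal distance k or k + 1, as in the opening sentence of § §2.3.

    # Partitions are weakly decreasing lists; cells are 1-indexed (row, column),
    # with row indices increasing from bottom to top as in the text.

    def conjugate(lam):
        """Transpose of a partition."""
        width = lam[0] if lam else 0
        return [sum(1 for part in lam if part >= j) for j in range(1, width + 1)]

    def hook(lam, i, j):
        """Hook length of the cell in row i, column j: arm + leg + 1."""
        return (lam[i - 1] - j) + (conjugate(lam)[j - 1] - i) + 1

    def k_boundary(lam, k):
        """Cells of lam whose hook length is at most k."""
        return [(i, j) for i in range(1, len(lam) + 1)
                for j in range(1, lam[i - 1] + 1) if hook(lam, i, j) <= k]

    def k_row_shape(lam, k):
        """Number of k-boundary cells in each row of lam."""
        cells = k_boundary(lam, k)
        return [sum(1 for (i, _) in cells if i == r) for r in range(1, len(lam) + 1)]

    def k_column_shape(lam, k):
        """Number of k-boundary cells in each column of lam."""
        return k_row_shape(conjugate(lam), k)

    def is_partition(seq):
        return all(a >= b for a, b in zip(seq, seq[1:]))

    def is_k_shape(lam, k):
        """Assumed characterization: the k-row shape and k-column shape are partitions."""
        return is_partition(k_row_shape(lam, k)) and is_partition(k_column_shape(lam, k))

    def is_k_core(lam, k):
        """No cell has hook length exactly k (cf. Proposition 28)."""
        return all(hook(lam, i, j) != k
                   for i in range(1, len(lam) + 1) for j in range(1, lam[i - 1] + 1))

    def addable_corners(lam):
        """Cells that can be added to lam so that it stays a partition."""
        padded = list(lam) + [0]
        return [(i, padded[i - 1] + 1) for i in range(1, len(padded) + 1)
                if padded[i - 1] < (padded[i - 2] if i >= 2 else float("inf"))]

    def maximal_string_below(lam, k, top):
        """Greedily extend the addable corner `top` downward through contiguous
        addable corners, i.e. addable corners in lower rows at diagonal distance
        k or k + 1 (by Remark 14 at most one candidate exists at each step)."""
        diag = lambda cell: cell[1] - cell[0]
        corners, string = addable_corners(lam), [top]
        while True:
            below = [c for c in corners
                     if c[0] < string[-1][0] and diag(c) - diag(string[-1]) in (k, k + 1)]
            if not below:
                return string
            string.append(below[0])

    # (3, 1, 1) is a 2-shape whose 2-boundary has 4 cells, hence a vertex of the
    # poset of Example 29 (see also Example 37).
    print(is_k_shape([3, 1, 1], 2), len(k_boundary([3, 1, 1], 2)))    # True 4
    print(maximal_string_below([3, 1, 1], 2, (4, 1)))                 # [(4, 1), (2, 2), (1, 4)]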
We will also refer to cells a j and b j as translates when strings s 1 = {a 1 , . . . , a ℓ } and s 2 = {b 1 , . . . , b ℓ } are translates. Definition 19. A row move m of rank r and length ℓ is a chain of partitions λ = λ 0 ⊂ λ 1 ⊂ · · · ⊂ λ r = µ that meets the following conditions: (1) λ ∈ Π (2) s i = λ i /λ i−1 is a row-type string consisting of ℓ cells for all 1 ≤ i ≤ r (3) the strings s i are translates of each other (4) the top cells of s 1 , . . . , s r occur in consecutive columns from left to right (5) µ ∈ Π. We say that m is a row move from λ to µ and write µ = m * λ or m = µ/λ. A column move is the transpose analogue of a row move. A move is a row move or column move. Example 20. For k = 5, a row move of length 1 and rank 3 with strings s 1 = {A}, s 2 = {B}, and s 3 = {C} is pictured below. The lower case letters are the cells that are removed from the k-boundary when the corresponding strings are added. For k = 3, a row move of length 2 and rank 2 with strings s 1 = {A 1 , A 2 } and s 2 = {B 1 , B 2 } is: Note that a row move from λ to µ merits its name because ∂µ can be viewed as a right-shift of some rows of ∂λ. In particular, |∂µ| = |∂λ|. Property 21. If a row move negatively (resp. positively) modifies a column then it negatively (resp. positively) modifies all columns of the same size to the right (resp. left). Proof. All of the columns positively (resp. negatively) modified by a row move, are consecutive and have the same size, by Definition 19 (3), (4). The result follows from Definition 19 (5). A move m is said to be degenerate if c + sr ,u = c s1,d . Note that a degenerate move can be of any rank but always has length 1. The first move in Example 20 is degenerate. Proof. The precise column and row modification of a string is pinpointed in Remark 18 and immediately implies the claim by definition of k-shape. Remark 23. Consider a k-shape λ and a string s 1 = λ 1 /λ. If there is a row move from λ starting as λ ⊂ λ 1 , then Conditions (3),(4) and (5) of Definition 19 determine s 2 , . . . , s r (and thus the move). Note that Property 21 and Property 22 ensures a unique r since cs(λ) cs r ,u > cs(λ) c + sr ,u implies that an extra row type string would not be a translate of s 1 . Lemma 24. Suppose s and t are strings in a move m and the cells x ∈ s and y ∈ t are translates of each other. Then |d(x) − d(y)| < k − 1. Proof. Let m = s 1 ∪ · · · ∪ s r be a row move from λ to µ and let s j = {a j 1 , . . . , a j n } for j = 1, . . . , r. It suffices to prove the case where x = a 1 1 and y = a r 1 are the topmost cells of s 1 and s r , respectively. First suppose that d(a r 1 ) − d(a 1 1 ) ≥ k. Then c sr,u ≥ col(a 1 1 ) since a 1 1 , a r 1 ∈ µ, and further, c sr ,u < col(a r 1 ). Thus c sr ,u = col(a j 1 ) for some j < r since a 1 1 , . . . , a r 1 occur in adjacent columns by Definition 19 of row move. Moreover, m is a row move implies that s j and s r are translates and therefore col(a j 1 ) = c sr ,u of ∂(λ ∪ s 1 ∪ · · · ∪ s j ) has the same length as col(a r 1 ) in ∂(λ∪s 1 ∪· · ·∪s r = µ). However, column c sr ,u is negatively modified by s r implying the contradiction µ ∈ Π. In the case that d(a 1 r ) − d(a 1 1 ) = k − 1, the top cell in column c sr ,u of ∂(λ ∪ s 1 ∪ · · · ∪ s r−1 ) is left-adjacent to a 1 1 . However, this column is negatively modified by s r implying that in ∂µ, it is shorter than col(a 1 1 ). Again, the assumption that µ ∈ Π is contradicted. Corollary 25. The rank of a move is at most k − 1. Property 26. 
(1) If m is a row move where µ = m * λ, then µ/λ is a horizontal strip (2) If M is a column move where µ = M * λ, then µ/λ is a vertical strip (3) Any cell common to a row and a column move from the same shape λ, is a λ-addable corner. Proof. Consider a row move m from λ to µ with strings s 1 , s 2 , . . . , s r and let s 1 = {a 1 , a 2 , . . . , a ℓ }. Suppose that µ/λ is not a horizontal strip. Since the strings are translates and their topmost cells occur in consecutive columns by the definition of move, a violation of the horizontal strip condition must occur where a 2 lies below the top cell b 1 of string s i , for some i > 1. Therefore, |d(a 1 ) − d(b 1 )| ∈ {k − 1, k} since the definition of string implies |d(a 1 )−d(a 2 )| ∈ {k, k+1}. However, Lemma 24 is contradicted implying µ/λ is a horizontal strip. By the transpose argument, we also have that a column move is a vertical strip. (1) and (2) imply (3). Proposition 27. Let m be a row or column move from λ to µ. Then the decomposition of m = µ/λ into strings (according to Definition 19) is unique. Proof. Given row move m from λ to µ, Remark 23 implies it suffices to show that the λ-addable string s 1 is uniquely determined. By (37) and Definition 19 (3), for any m = µ/λ = {s 1 , . . . , s r }, Since c sr ,u < col(a 1 ) ≤ col(a ℓ ) = c s1,d there is no cancellation in this formula, so the rank of m can be read from the number of consecutive +1's in ∆ cs (µ/λ) (and is independent of s 1 , . . . , s r ). The length of m is then simply |µ/λ|/r. Since the leftmost cell of the horizontal strip m must be the top cell of the first string s of m and the length of s is determined, by Remark 14 it follows that the λ-addable string s 1 is determined. 2.5. Poset structure on k-shapes. We endow the set Π N of k-shapes of fixed size N , with the structure of a directed acyclic graph with an edge from λ to µ if there is a move from λ to µ. Since a row (resp. column) move from λ to µ satisfies rs(λ) = rs(µ) and cs(λ) cs(µ) (resp. cs(λ) = cs(µ) and rs(λ) rs(µ)), this directed graph induces a poset structure on Π N which is a subposet of the Cartesian square of the dominance order on partitions of size N . Proposition 28. An element of the k-shape poset is maximal (resp. minimal) if and only if it is a (k + 1)-core (resp. k-core). Proof. Since a k-core has no hook sizes of size k, it also has no row-type or columntype strings addable. Thus k-cores are minimal elements of the k-shape poset. Now suppose λ is a minimal element of the k-shape poset, and suppose λ has a hook of size k. Let us take the rightmost such cell of ∂λ, say b. Then there is a λ-addable corner a 1 at the end of the row of b. Let s = {a 1 , a 2 , . . . , a ℓ } be the longest row-type string with top cell a 1 (see Lemma 30). The proof that the (k + 1)-cores are exactly the maximal elements is similar. Example 29. The graph Π 2 4 is pictured below. Only the cells of the k-boundaries are shown. Row moves are indicated by r and column moves by c. 2.6. String and move miscellany. Here we highlight a number of lemmata about strings that will be needed later. In the special case that µ or λ is a k-shape, the string s = µ/λ obeys a number of explicit properties. (1) If s negatively modifies a row, then it can be extended below to a λ-addable string that does not have negatively modified rows. (2) If s negatively modifies a column, then it can be extended above to a λaddable string that does not have negatively modified columns. Proof. Let s negatively modify a row. 
By Remark 13, there is a λ-addable cell x in the row of b = bot col(a ℓ ) (∂λ) and we have h λ (b) = k. Therefore d(x) − d(a ℓ ) = k + 1 and s ∪ {x} is a λ-addable string that extends s below. The required string exists by induction. Part (2) is similar. Lemma 33. Let λ ∈ Π. Consider a λ-addable corner b and some x ∈ λ in a lower row than row(b) that is right-adjacent to a cell in λ. is in the row of x and thus Remark 13 implies that row(x) has λ-addable corner (namely x). If |d(b) − d(x)| = k, then either bot col(b) (∂λ) is in the row of x (and as before, x is λ-addable) or bot col(b) (∂λ) is in the row below x. If x is not λ-addable then the latter case holds and the cell immediately below x is λ-addable. Proof. Consider the case that m is a row move (the column case follows by transposition). By definition of move, the strings of m have diagrams which are translates of each other. Since Property 26 implies the strings never lie on top of each other, if cell a is the translate of cell b then cs(λ) col(a) = cs(λ) col(b) and cs(µ) col(a) = cs(µ) col(b) . Since strings in a row move never change the row reading we have by translation of diagrams that rs(λ) col(a) = rs(λ) col(b) and rs(µ) col(a) = rs(µ) col(b) . Let b be a cell in a skew shape D. Lemma 35. Let λ ∈ Π, m a row move from λ, and s a string of m. Then Ind m (b) is constant for b ∈ s. In particular, if some cell of m is λ-addable, then so is every cell in its string. Proof. The first assertion follows by induction on Ind m (b) using Definition 19(3). The second holds by Proposition 27. 3. Equivalence of paths in the k-shape poset 3.1. Diamond equivalences. Definition 36. Given a move m, the charge of m, written charge(m), is 0 if m is a row move and rℓ if m is a column move of length ℓ and rank r. Notice that in the column case, rℓ is simply the number of cells in the move m when viewed as a skew shape. The charge of a path (m 1 , . . . , m n ) in Π N is charge(m 1 ) + · · · + charge(m n ), the sum of the charges of the moves that constitute the path. The commutation is equivalent to the equalityM ∪ m =m ∪ M where a move is regarded as a set of cells. Observe that the charge is by definition constant on equivalence classes of paths. Example 37. Continuing Example 29, the two paths in Π 2 4 from λ = (3, 1, 1) to ν = (4, 3, 2, 1) have charge 2 and 3 respectively, and so are not equivalent. Thus by Theorem 2 one has b (k) µλ = 2, and according to Conjecture 3, we have b (k) The two paths in Π 3 5 from λ = (3, 2, 1) to ν = (4, 2, 1, 1) are diamond equivalent, both having charge 1. Thus by Theorem 2 one has b We will describe in more detail in this section when two moves m and M can obey a diamond equivalence. We will also see that the relation ≡ is generated by special diamond equivalences called elementary equivalences (see Proposition 55). Elementary equivalences. We require a few more notions to define elementary equivalence. Let m and M be moves from λ ∈ Π. We say that m and M intersect if they are non-disjoint as sets of cells. Similarly, we say that two strings s and t intersect if they have cells in common. We say that the pair (m, M ) is reasonable if for every string s and t of m and M respectively that intersect, we either have s ⊆ t or t ⊆ s. Let s and t be intersecting strings. Either the highest (resp. lowest) cell of s ∪ t is in s \ t, or in s ∩ t, or in t \ s; in these cases we say that s continues above (resp. below) t, or s and t are matched above (resp. below), or t continues above (resp. below) s. 
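As a brief aside on Definition 36 and Example 37: the charge of a path is mechanical to compute from the list of its moves. A minimal sketch (the tuple encoding of a move is ours, not the paper's notation):

    def charge_of_move(kind, rank, length):
        """Definition 36: a row move has charge 0; a column move of rank r and
        length l has charge r*l, the number of cells it adds."""
        return 0 if kind == "row" else rank * length

    def charge_of_path(moves):
        """The charge of a path is the sum of the charges of its moves."""
        return sum(charge_of_move(*move) for move in moves)

    # Hypothetical path: a column move of rank 1 and length 2 followed by a row
    # move of rank 3 and length 1; only the column move contributes.
    print(charge_of_path([("column", 1, 2), ("row", 3, 1)]))   # 2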
We say that m continues above (resp. below) M , or m and M are matched above (resp. below), or M continues above (resp. below) m, if the corresponding relation holds for all pairs of strings s in m and t in M such that s ∩ t = ∅. We say that the disjoint strings s and t are contiguous if s ∪ t is a string. We say that the moves m and M are not contiguous if no string of m is contiguous to a string of M . For the sake of clarity, the overall picture is presented first, the proofs being relegated to Subsections 3.8, 3.9 and 3.10. The following lemma asserts that any pair of intersecting strings s ⊂ m and t ⊂ M are in the same relative position. Lemma 38. Let m and M be intersecting λ-addable moves for λ ∈ Π. Then m continues above M (resp. m and M are matched above, resp. M continues above m) if and only if there exist strings s ⊂ m and t ⊂ M such that s continues above t (resp. s and t are matched above, resp. t continues above s). A similar statement holds with the word "above" replaced by the word "below". Notation 39. For two sets of cells X and Y , let → X (Y ) (resp. ↑ X (Y )) denote the result of shifting to the right (resp. up), each row (resp. column) of Y by the number of cells of X in that row (resp. column). Define ← X (Y ) and ↓ X (Y ) analogously. Mixed elementary equivalence. Definition 40. A mixed elementary equivalence is a relation of the form (42) arising from a row move m and column move M from some λ ∈ Π, which has one of the following forms: Suppose (m, M ) is interfering and the top cell of m is above the top cell of M . A lower (resp. upper) perfection of the pair (m, M ) is a k-shape of the form λ∪m∪M ∪m per (resp. λ∪m∪M ∪M per ) where m per (resp. M per ) is a (λ∪m∪M )addable skew shape such that m ∪ m per (resp. M ∪ M per ) is a row move from M * λ (resp. m * λ) of rank r (resp. r ′ ) and length ℓ + ℓ ′ and M ∪ m per (resp. m ∪ M per ) is a row move from m * λ (resp. M * λ) of rank r + r ′ and length ℓ ′ (resp. ℓ). We say that (m, M ) is lower-perfectible (resp. upper perfectible) if it admits a lower (resp. upper) perfection. By Lemma 47, the lower (resp. upper) perfection is unique if it exists. (1) Suppose a lower perfection ρ exists. Then it is unique: m per is such that m ∪ m per is the unique move from λ ∪ M obtained by extending each of the strings of m below by ℓ ′ cells, and also M ∪ m per is the unique move from λ ∪ m obtained by adding r more translates to the right of the strings of M . Lemma 53. Any diamond equivalenceM m ≡mM in which m is a row move and M a column move from some λ ∈ Π, is a mixed elementary equivalence. Lemma 54. Let m and M be row (resp. column) moves such thatM m ≡mM is a diamond equivalence. Then the relationM m ≡mM can be generated by row (resp. column) elementary equivalences. We have immediately: Proposition 55. The equivalence relations generated respectively by diamond equivalences and by elementary equivalences are identical. Proof. Lemma 53 and Lemma 54 imply that diamond equivalences are generated by elementary equivalences. Since elementary equivalences are diamond equivalences by Propositions 43 and 51, the proposition follows. Proof. Suppose there is some string s = {a 1 , a 2 , . . .} ⊂ m where a i ∈ t and a j ∈t for distinct column-type strings t,t ∈ M . Let i and j be such that j − i is minimum where i < j. We first show that a i is not the bottom cell of t. If this were the case, the distance between the bottom cell of t and the bottom cell oft would be larger than k − 1, which would contradict Lemma 24. 
Therefore a i is not the lowest cell of t. The cells a i and a j are λ-addable by Property 26(3) and so is s by Lemma 35. Thus by Remark 14, the cell of t contiguous to and below a i is a i+1 . If a i+1 = a j we have a contradiction to the choice of i and j. If a i+1 = a j we have the contradiction that t andt intersect. Taking transposes, every string of M meets at most one string of m. Proof. Let x ∈ s ∩ t. By Property 26 and Lemma 35 all the cells of s and t are λ-addable. Suppose both s and t contain cells below (resp. above) x. Since cells in strings satisfy a contiguity property, there are λ-addable cells z ∈ s and z ′ ∈ t such that z and z ′ are contiguous with and below (resp. above) x. By Remark 14, z = z ′ . The Lemma follows. Call a row-type (resp. column-type) string of m (resp. M ) primary if it consists of λ-addable corners. Write Prim(m) for the set of primary strings of m; the dependence on λ is suppressed. The strings of m (resp. M ) are totally ordered, and this induces an order on the primary strings. For s ∈ Prim(m) with s = max(Prim(m)) (resp. s = min(Prim(m))) we write succ(s) (resp. pred(s)) for the cover (resp. cocover) of s in Prim(m). Remark 59. By Lemma 58, if s is a string in m and t a string in M such that s ∩ t = ∅ then s ∈ Prim(m) and t ∈ Prim(M ). Lemma 60. Suppose s is a string in m and t a string in M such that s ∩ t = ∅. Proof. We prove (1) as (2) is the transpose analogue. Suppose s continues below t and s = min(Prim(m)). Let b be the bottom cell in t; it is also the bottom cell of s ∩ t. By hypothesis the string s has a λ-addable cell b ′ ∈ t, contiguous to and below b. M shortens the row of b ′ since (row(b ′ ), col(b)) is removed by M and b ′ is not added by M by Property 57. Let c and c ′ be the translates in pred(s) of the cells b and b ′ in s. Note that row(b ′ ) < row(c ′ ) since b ′ and c ′ are λ-addable and c ′ ∈ pred(s). Furthermore, by Corollary 34, rs(λ) row(c ′ ) = rs(λ) row(b ′ ) . Now, from a previous comment rs(M * λ) row(b ′ ) = rs(λ) row(b ′ ) − 1. In order for M * λ to belong to Π, M must also remove the cell (row(c ′ ), col(c)) without adding the cell c ′ . Therefore there is a string t ′ > t such that pred(s) ∩ t ′ = ∅ and such that pred(s) continues below t ′ . Suppose s continues above t and s = max(Prim(m)). Let b be the highest cell in t, b ′ the cell of s below and contiguous with b. Let c and c ′ be the translates in succ(s) of b and b ′ in s. One may show that M adds a cell to the row of b and removes none. M must do the same to the row of c since M * λ ∈ Π. The rest of the argument is similar to the previous case. Lemma 61. Lemma 38 holds for a row move and a column move. Proof. Suppose s and s ′ are strings of m and t and t ′ are strings of M such that Since the distance between c and b is more than k − 1, the distance between t and t ′ is also more than k − 1. But this violates Lemma 24. (1) The disjointness of m and M implies the commutation of (43), and (44) holds trivially in this case. So it suffices to show that m is a row move from M * λ; showing that M is a column move from m * λ is similar. Since M ∩ m = ∅, s 1 is a (M * λ)-addable string. The diagram of the string s 1 remains the same in passing from λ to M * λ; the only place it could change is in the row of a 1 and the column of a ℓ , and this could only occur if a 1 or a ℓ were contiguous with a cell of M , which is false by assumption. So s 1 is a row-type (M * λ)-addable string. The argument for the other strings of m is similar. 
Since M is a column move from λ, cs(λ) = cs(M * λ). But then Property 22 implies that (M * λ) ∪ m ∈ Π. This proves that m is a row move from M * λ as required. (2) We prove case (a) as (b) is similar. By definition ofM ,M contains the same number of strings as M and the strings ofM are of the same length as those of M . Thus (44) is satisfied. By Lemma 60, all primary strings of m meet M . In particular, since the first string s 1 of m is always primary, it meets M . The string s 1 meets a single strinĝ s in M and s 1 ∩ŝ =ŝ = {a p , a p+1 , . . . , a n } for some 1 < p ≤ n < ℓ by Lemma 58 since s 1 continues above and belowŝ. We now show that t =→ M (s 1 ) is a (M * λ)-addable string. By Property 26 M is a vertical strip and where a † denotes the cell right-adjacent to the cell a. Since a p is the top cell of the column-type λ-addable stringŝ and there is a λaddable corner a p−1 contiguous to and above a p , by Definition 17 Thus the cells of t satisfy the contiguity conditions for a string. Let µ = M * λ. For i < p and i > n, a i is µ-addable since it is λ-addable and a i ∈ M by Property 57. Let c be the column of a p−1 . Since a p is the top cell of a string in M , there is no cell removed in the row of a p when going from λ to µ, and thus bot c (∂µ) still lies in the row of a p . By Remark 13 there is a µ-addable corner in the row of a p and it corresponds to a † p . Now observe that if a † p+1 is not a µ-addable corner, then there is a µ-addable corner e below it by Lemma 33 (since a † p+1 is a distance k or k + 1 from the µaddable corner a † p ). Since M is a column move, there is a µ-removable string with cells c p and c p+1 in the columns of a p and a p+1 which is a translate of stringŝ (the two strings may coincide). The distance between a † p and e is thus larger (by exactly one unit) than the distance between c p and c p+1 . Furthermore, a † p and e lie in columns immediately to the right of those of c p and c p+1 respectively. We then have the contradiction that the removable string containing c p and c p+1 and the addable string containing a † p and e violate Lemma 32. Therefore a † p+1 is a µaddable corner and repeating the previous argument again and again we get that a † i is µ-addable for any p < i ≤ n. Therefore t is a µ-addable string. It is of row-type since the top and bottom of its string diagram are unaffected by adding M to λ and coincide with the top and bottom of the diagram of the λ-addable row-type string s. Suppose there are q strings of m in the rows of s 1 , and let t j =→ M (s j ) for 1 ≤ j ≤ q. It follows from the results of Subsection 3.8 and the translation property of strings in row moves, that t j is a translate of t 1 : the top and bottom of t j agree with those of s j , and → M right-shifts the p-th through n-th cells in s j to obtain t j , which are the same positions within the string s 1 that are right-shifted to obtain t 1 . In particular t j is a row-type string. We claim that t j is (M ∪ t 1 ∪ · · · ∪ t j−1 ) * λ-addable for 1 ≤ j ≤ q. It holds for j = 1. For the general case, since m is a horizontal strip, we have that λ row(ap−1)−1 − λ row(ap−1) ≥ q. In µ we still have µ row(ap−1)−1 − µ row(ap−1) ≥ q since there is no cell of M in row(a p ). By Lemma 31 applied to the string t we have that The same approach shows that t ′ i =→ M (s i ) is a µ-addable string for any other primary string s i of m, and that the strings lying in the rows of s i can be rightshifted as prescribed. 
Moreover, arguing as in the proof of Lemma 61, one may show thatŝ i = s i ∩ t i consists of the p-th through n-th cells of s i (using the same p and n as for s 1 ). It follows that all the stringss i are translates of each other. m * M * λ is a k-shape, because M * λ is, and because the condition in Property 22 is unchanged in passing from the move λ → m * λ, to the move M * λ →m * M * λ. Thereforem is a row move from M * λ with first stringss 1 , . . . ,s r . We must show thatM is a column move from m * λ. It is a vertical strip, being the difference of partitions m * λ and λ ∪ M ∪m, and having at most one cell per row by definition. It was shown previously that for any string s ′ of M that meets m, s ′ is contained in the string of m that it meets. Note that since strings in a move are translates of each other, we have that if the primary string t = {a 1 , . . . , a ℓ } of m is such that there are n cells of m in the row of a 1 , then there are also n cells of m in the row of a i for all i. It follows that under → m , every string in M is translated directly to the right by some number of cells (possibly zero). ThereforeM is the disjoint union of strings that are translates of each other and which start in consecutive rows. SinceM is an m * λ-addable vertical strip we deduce that it is a column move from m * λ. 3.9. Proving properties of row equivalence. We state the analogues of results in Subsection 3.8 for intersections of row moves m and M from λ ∈ Π. Property 62. Every string of m meets at most one string of M . (1) The leftmost cell of m ∩ M is contained in either s 1 or t 1 . (2) The rightmost cell of m ∩ M is contained in either s p or t q . Lemma 65. Lemma 38 holds for m and M both row moves. (1) Suppose that m continues above M but the two are matched below. Then p ≤ q and s i contains t i and continues above it for 1 ≤ i ≤ p. (2) Suppose that m continues below M but the two are matched above. Then p ≤ q and s p−i contains t q−i and continues below it for 0 ≤ i < p. Proof. We prove (1) as (2) is similar. The hypotheses imply that for some i and j we have c si,d = c tj ,d . It follows from Property 21 that c s1,d = c t1,d , that is, s 1 and t 1 intersect. Applying Property 21 to the upper part of M we conclude that p ≤ q. We have that s i meets t i for 1 ≤ i ≤ p since it is true for i = 1 and strings in a move are translates. Since m continues above M and they are matched below, s i contains t i and continues above it. Proof of Lemma 47. We prove (1) as (2) is similar. Let m per give rise to the lower perfection λ∪m∪M ∪m per ∈ Π. Since m∪m per is a row move from M * λ of rank r, it follows that m per , viewed as λ∪m∪M -addable, must negatively modify (by −1) precisely the columns c s1,u through c sr ,u . So M ∪ m per is a row move from m * λ which negatively modifies the r + r ′ consecutive columns c t1,u , . . . , c t r ′ ,u , c s1,d , . . . , c sr ,d . Therefore m per is specified by adjoining to λ ∪ m ∪ M , translates of t 1 in the r columns just after c t r ′ ,u . The other claims are clear. Proof of Proposition 51. Cases (1) and (4) are similar to Cases (1) and (2) of mixed equivalence. Case (5) is trivial. Case (2) holds by definition. So consider Case (3). We suppose that m and M are row moves on λ that are matched below, as the "matched above" case is similar. If m and M are also matched above then it follows that m = M : intersecting strings must coincide, and Property 21 implies that the two moves must modify the same columns. 
So we may assume that m continues above M . Using the notation of Lemma 66, we see thatM decomposes into strings t p+1 , . . . , t q . These strings neither Since m contains the cell a = (r, λ r + 1),m must contain the cell (r ′ , λ r + 1) in order to remove (r ′ , c). But then M contains the cell a, contradicting the disjointness of m and M . Now consider the case where m and M intersect but are not reasonable. Suppose there is a string s of m that meets a string t of M , with s continuing below t but not above it. By Property 56, we know that t finishes above s. Let t = {a 1 , . . . , a ℓ } and s be such that s ∩ t = {a i , . . . , a ℓ }, and let b be the cell of s contiguous to and below a ℓ (it exists by our hypotheses). Since M is a column move, ∆ rs (M ) has a −1 in row(b). Thus ∆ rs (M ) must also have a −1 in row(b). This implies that there is a string t ′ = {a ′ 1 , . . . , a ′ ℓ } ofM (recall that M andM have the same length) such that ∆ rs (t ′ ) has a −1 in row(b). By definition of column moves, and since ∆ rs (M ) = ∆ rs (M ) (which implies that M andM have the same rank), we have that the upper cells of t and t ′ must coincide. That is, Note that since m is a horizontal strip andM is a vertical strip, the cells outside m * λ catty-corner to {a i , . . . , a ℓ } are not inM m * λ. Now, the distance between a i−1 and a i is k + 1 (a i is the top cell of a row move). Thus from the previous comment and . But then we have the contradiction that a ′ ℓ cannot negatively modify row(b) since in this row there is no cell of ∂(m * λ) weakly to the left of col(a ℓ ). The case where there is a string s of m that meets a string t of M , with s continuing above t but not below it is similar. Proof of Lemma 54. All cases that could produce a diamond equivalence where m and M do not intersect are covered by Definition 48. In case (1) there are no strings that could be added at the same time to m and M to produce movesm = m and M = M . In case (2), unicity is guaranteed by Lemma 47. Suppose we have a diamond equivalenceM m ≡mM , where m and M are such as in case (3), and suppose that and m and M are matched below with m continuing above. As mentioned in the proof of Proposition 51,m decomposes into strings u i = s i \ t i for 1 ≤ i ≤ p andM decomposes into strings t p+1 , . . . , t q . We now show that if q > p thenm =m andM =M . It is obvious thatm ⊆m andM ⊆M . There are two possible options: eitherM has more strings thanM or its strings are extensions of those ofM . Sincem \m =M \M , in the first option the extra strings must extend the u i 's below, and in the second option the extension must form strings to the right of those ofm. The former is impossible since the distance between the bottom cell of any u i and the top cell of any of the new strings added is more than k + 1. The latter case is impossible since no new strings can be added to the right ofm to form a move by Property 21. Thus the only option is p = q. In this caseM = ∅,m = m \ M and we have: The only other cases that could produce a diamond equivalence which are not covered by Definition 48 are those where m and M are not reasonable, that is, there are strings s and t of m and M respectively such that s ∩ t = ∅, t s and s t. Suppose that t continues below s. We show that if there are strings t i , . . . , t i+j of M that do not intersect strings of m then there is no possible diamond equivalencē M m =mM . The strings t i , . . . 
, t i+j need to be to the right of those that meet strings of m by Property 21 applied to the positively modified columns of m. For the diamond equivalence to hold, we need a M per that extends the strings t i , . . . , t i+j above and that add extra strings to the right of m \ M . But this is impossible by Property 21. In a similar way, if there are strings of m that do not intersect strings of M then there is no possible diamond equivalence. Therefore, we are left with the case where the strings s 1 , . . . , s p of m and t 1 , . . . , t p of M each intersect with one another. In this case we necessarily havem = m \ M andM = M \ m. But thenmM =M m = M is also a move and we have the situation. In this case both triangles correspond to Case (3) of Definition 48 and thus this diamond equivalence is also generated by elementary ones. Strips and tableaux for k-shapes In this section we introduce a notion of (horizontal) strip and tableau for kshapes. 4.1. Strips for cores. We recall from [11,8] the notion of weak strip and weak tableau for cores. LetS k+1 and S k+1 be the affine and finite symmetric groups and letS 0 k+1 denote the set of minimal length coset representatives forS k+1 /S k+1 . C k+1 has a poset structure given by the left weak Bruhat order transported across the bijectionS 0 k+1 → C k+1 . Explicitly, µ covers λ in C k+1 if µ/λ is a nonempty maximal λ-addable string. Such a string is always of cover-type and consists of all λ-addable cells whose diagonal indices have a fixed residue (say i) mod k + 1, and corresponds to a length-increasing left multiplication by the simple reflection s i ∈S k+1 . A weak strip in C k+1 is an interval in the left weak order whose corresponding skew shape is a horizontal strip; its rank is the height of this interval, which coincides with the number of distinct residues mod k + 1 of the diagonal indices of the cells of the corresponding skew shape. Strips for k-shapes. Definition 67. A strip of rank r is a horizontal strip µ/λ of k-shapes such that rs(µ)/rs(λ) is a horizontal strip and cs(µ)/cs(λ) is a vertical strip, both of size r. A cover is a strip of rank 1. By the assumption that rs(µ)/rs(λ) is a horizontal strip, distinct modified rows of µ/λ do not have the same length (in either rs(λ) or rs(µ)). The modified columns however form groups which have the same length in both cs(µ) and cs(λ), where by definition two modified columns c, c ′ are in the same group if and only if cs(λ) c = cs(λ) c ′ . Proposition 68. A strip S = µ/λ has rank at most k. Remark 69. Although strips of rank k exist, in the remainder of the article we shall only admit strips of rank strictly smaller than k. For the purposes of this paper, this restriction is not so important: in Theorem 4, mod the ideal I k−1 , monomials with a multiple of x k i are killed, and therefore we choose to leave such tableaux out of the generating function by definition. Remark 76 will further elaborate on the effects of allowing strips of rank k in our construction. The notion of a strip generalizes that of weak strips for k-cores and k + 1-cores. Proof. It was established in [11] that if µ, λ ∈ C k+1 , rs(λ)/rs(µ) is a horizontal strip and cs(λ)/cs(µ) is a vertical strip, then µ/λ is a horizontal strip (Proposition 54 of [11]) and the cells in µ/λ correspond to one letter in a k-tableau (Theorem 71 of [11]). It was further established in Lemma 9.1 of [8] that k-tableaux and weak tableaux (sequences of weak strips in C k+1 ) are identical. Therefore λ/µ is a weak strip in C k+1 . 
The same argument works for µ, λ ∈ C k . Maximal strips and tableaux. Definition 71. Let λ ∈ Π be fixed. Let Strip λ ⊂ Π be the induced subgraph of ν ∈ Π such that ν/λ is a strip. Moves (paths) in Strip λ are called λ-augmentation moves (paths). By abuse of language, if m is a move (path) from µ to ν in Strip λ we shall say that m is a λ-augmentation move (path) from the strip µ/λ to the strip ν/λ. An augmentation of a strip S = µ/λ is a strip reachable from S via a λ-augmentation path. Diagrammatically, an augmentation move is such that the following diagram where S andS are strips and ∅ denotes the empty move. These definitions depend on a fixed λ ∈ Π, which shall usually be suppressed in the notation. Later we shall consider augmentations of a given strip S, meaning λ-augmentations where S = µ/λ. Clearly augmentation paths pass through strips of a constant rank. Definition 72. Let µ ∈ Π be fixed. Let Strip µ ⊂ Π be the induced subgraph of ρ ∈ Π such that µ/ρ is a strip. A strip S = µ/ρ is reverse-maximal if ρ is minimal in the graph Strip µ (see Definition 168 for more details). It is maximal (resp. reverse-maximal ) if its strips are. The tableau has weight wt(T ) = (a 1 , a 2 , . . . , a N ) where a i is the rank of the strip λ (i) /λ (i−1) (which we require to be strictly smaller than k by Remark 69). Let where Tab k µ/λ (resp. Tab k µ/λ ) denotes the set of maximal (resp. reverse-maximal) tableaux of shape µ/λ for λ, µ ∈ Π k . For k-cores (resp. k + 1-cores), the maximal (resp. reverse-maximal) tableau generating functions reduce to dual k − 1 (resp. k) Schur functions. The following result is a consequence of Propositions 106 and 171. In particular, for λ ∈ Π k , Tab k λ is empty unless λ ∈ C k and in that case Tab (2) For any µ ∈ C k+1 and λ ∈ Π k such that µ/λ is a reverse-maximal strip, λ ∈ C k+1 . In particular, for λ ∈ C k+1 and for every weight β = (β 1 , β 2 , . . . ) with β i ≤ k − 1 for all i, the set of reverse maximal tableaux of shape λ and weight β is equal to the set of weak k-tableaux of shape λ and weight β and thus S (k) Theorem 4 is established as follows. The map (S, [p]) → (T, [q]) is called the pushout and the inverse bijection (T, [q]) → (S, [p]) is called the pullback, in reminiscence of homological diagrams, as the following diagram "commutes" for some λ, ρ: Since tableaux are sequences of strips, we can immediately reduce the pushout bijection to the case that S and T are both single strips. One might try to straightforwardly reduce to the case that paths p and q are single moves m and m ′ . This does not work: not all pairs (S, m) admit a pushout. Those that do will be called compatible. The bijection (51) is defined by combining certain moves (called augmentation moves) with pushouts of compatible pairs. The proof of Theorem 75 will be completed in §15. Proof of Theorem 4. Let ν be the empty k-shape. Then the only possibility for λ is the empty k-shape, S runs over Tab k µ , and by Proposition 73, ρ runs over C k and T over WTab k ρ . By Theorem 75, we thus have a bijection between Tab k µ and ρ∈C k WTab k ρ × P k (µ, ρ) . Theorem 4 follows since it is known that each is a symmetric function [12]. Proof of Theorem 5. The k-Schur functions satisfy (essentially by definition) the Pieri rule [12] where the sum is over weak strips ρ/µ of k + 1-cores of rank r. Thus for a fixed λ ∈ Π k , In the third equality we used Proposition 73, and in the fourth equality we used Theorem 75. Remark 76. Suppose that strips of rank k are allowed. 
The results of this paper hold with a few minor changes. 1 For instance, Theorem 75 and Theorem 5 are still valid (with the case r = k being allowed in Theorem 5). However, as the rest of the remark should make clear, the extension of Theorem 4 is somewhat more subtle. When ν = ∅ (and thus also µ = ∅), the bijection on which Theorem 75 relies, associates to a reverse-maximal tableaux S a pair (T, [q]), where T is a maximal tableau of a given shape ρ. If strips of rank k are allowed then Proposition 73 is not valid anymore, as adding a strip of rank k on a k-core does not produce a k-core. Therefore, if the weight of S has entries of size k, then the pushout of S does not produce a weak tableau T and Theorem 4 ceases to be valid. In the following, we will extend Theorem 4 to the case when strips of rank k are allowed. The fact that Weak (k−1) ρ [X] is a symmetric function is not sufficient anymore to is a symmetric function. By Theorem 4, the sum of the terms that do not involve any power is a symmetric function in that case (see the proof of Proposition 79). Now if µ is a k-shape such that rs(µ) 1 = k, then by Lemma 78, µ has a unique reverse maximal strip of rank k. In this manner, it is not too difficult to see that the sum of the terms in S and is thus a symmetric function by induction. This proves that is also a symmetric function if strips of rank k are allowed. Finally, to complete the extension of Theorem 4, let π (k) be the projection onto Λ/I k−1 . Then for µ ∈ Π k , the cohomology k-shape function S (k) µ [X] has the 1 However, the concept of lower augmentable corner which will be introduced in § §4.6 needs to be slightly modified: we define an augmentable corner b of a strip S = µ/λ as usual, except we disallow the case that b lies in a row of S that already contains k cells. The result holds trivially from Theorem 4 since the projection will kill every x wt(T ) such that T has a strip of rank k. Since λ is obtained from ρ by a sequence of moves, it only remains to show that |P k (λ r , λ)| = 1. That is, that there exists a unique equivalence class of paths in the k-shape poset from λ r to λ. Or equivalently, that there exists a unique equivalence class of paths in the k-shape poset from λ to λ r . The proof is analogous to the proof that given µ/λ a strip, there exists a unique equivalence class of paths in Strip µ to the reverse-maximal strip µ/ν (see Proposition 169). Let µ be a k-shape. The surface strip µ/λ of µ is the horizontal strip consisting of the topmost cell of each column of µ. Lemma 78. The surface strip of µ is the unique reverse maximal strip of µ with rank rs(µ) 1 . Proof. It follows from the definitions and the fact that µ is a k-shape that the skew shape Int(µ)/Int(λ) is the surface strip of Int(µ). Thus rs(λ) is obtained from rs(µ) by removing the first row, and cs(λ) is obtained from cs(µ) by reducing the last rs(µ) 1 columns each by 1. In particular, µ/λ is a strip. It is clear that the surface strip S = µ/λ is reverse maximal. Proof. Let T ∈ Tab k λ , and suppose that wt(T ) = µ. Since T = ∅ = λ (0) ⊂ λ (1) ⊂ · · · ⊂ λ (N ) = λ is a sequence of strips, we have in particular that rs(λ (i) )/rs(λ (i−1) ) is a horizontal µ i -strip for all i. This gives immediately that µ ν (think of the triangular expansion of the homogeneous symmetric functions into Schur functions). Now, the unique T ∈ Tab k λ such that wt(T ) = ν is obtained by recursively taking the surface strips of λ. Proposition 80. 
Let λ ∈ Π k , and let ω : Λ → Λ be the homomorphism that sends the r th complete symmetric function to the r th elementary symmetric function. Then Proof. For the proof of (56) we proceed by induction. The result holds for k large since in that case s For the proof of (57), we have from Theorem 4 that The duality (11) between k-Schur functions and dual k-Schur functions implies that given that ω(s [X] (see [13]) and that ω is an isometry. The result then follows from (59) since, as we saw earlier, |P k (µ ′ , ρ ′ )| = |P k (µ, ρ)|. 4.5. Basics on strips. The remainder of this section deals with the properties of strips and augmentation moves. Sections 5 and 6 study pushouts involving row and column moves respectively. The next results help in checking whether something is a strip. Property 81. Let S = µ/λ be a horizontal strip of k-shapes and c a column which contains a cell of S. Then cs(µ) c ≥ cs(λ) c . Proof. Let b ∈ ∂λ be in column c and b ′ be the cell just . This implies b ′ ∈ ∂µ and the result follows since there is a cell of S in column c. Proof. The lemma follows from Property 81 and the fact that µ/λ is a horizontal strip. Then there is a cover-type λ-addable string s such that c s,d = c, λ ∪ s ∈ Π k , and µ/(λ ∪ s) is a strip. Proof. Let b be the unique cell in column c of µ/λ. The hypotheses imply that b is λ-addable. Let s be the maximal λ-addable string such that s ⊂ µ and s ends with b. Say the top cell y of s is in row i. Let x = (i, j) = left i (∂λ). Let b ′ be the λ-addable cell in column j, if it exists. The string s is of row-type or cover-type by the hypotheses. Suppose s is of row-type. Then h λ (x) = k. Then b ′ ∪ s is a λ-addable string. Since µ/λ is a strip we have cs(µ) j ≥ cs(λ) j . But x = bot j (∂λ) ∈ ∂µ. Hence b ′ ∈ µ, contradicting the maximality of s. Therefore s is of cover-type. Since µ/λ is a strip, rs(µ) i ≥ rs(λ) i . Suppose rs(µ) i = rs(λ) i , and let µ/λ have ℓ cells in row i. By supposition, there are also ℓ cells of ∂λ \ ∂µ in row i. Since cs(λ) ⊆ cs(µ) and µ/λ is a horizontal strip, there must then be cells of µ/λ in columns j, . . . , j + ℓ − 1 that are contiguous to the ℓ cells of µ/λ in row i. In particular, the λ-addable corner b ′ is contiguous to y. Again b ′ ∪ s is a λ-addable string, contradicting the maximality of s. Proof. Follows by induction from Lemma 83. Corollary 85. Let S = µ/λ be a strip. If a column contains a cell in ∂λ \ ∂µ then it also contains a cell of S. Proof. Each such cell is a removed cell for one of the cover-type strings that constitute S. Lemma 86. Suppose S = µ/λ is a strip, and let s = {a 1 , . . . , a ℓ } ⊆ S be a λaddable string (with a 1 the topmost). For each i ∈ [1, ℓ] let r i (resp. c i ) be the row (resp. column) of a i . Then (1) If i < j then there are at least as many cells of S in row r j than there are in row r i . (2) If i > j then there are at least as many µ-addable cells in column c j than there are in column c i . Proof. For (1), let r i and r i+1 violate the first assertion, and let b be the the rightmost cell of S in row r i . It is then easy to see that in µ we have the contradiction that the column of b is larger than the column of the first cell of S in row r i . For (2), it is enough to consider the case c i and c i+1 . Since a i and a i+1 are contiguous, left(∂λ) ri+1 lies in column c i . Suppose that column c i+1 has p ≥ 1 µ-addable cells, and let b = bot(∂µ) c − i lie in row R. 
Then row R is at least p rows above row r i+1 since otherwise rs(µ) R would be larger than rs(λ) ri+1 contradicting the fact that rs(µ)/rs(λ) is a horizontal strip. Since cs(µ) c − i ≥ cs(µ) ci , column c i needs to have at least p µ-addable cells. 4.6. Augmentation of strips. We first observe the following: Remark 87. The negatively modified columns (resp. rows) of an augmentation move of the strip S are positively modified columns (resp. rows) of S. Property 88. All augmentation column moves of a strip S = µ/λ have rank 1. Proof. If it were not the case, the modified rows of m (which all have the same length by definition) would violate the condition that rs(m * µ)/rs(λ) is a horizontal strip. Let S = µ/λ be a strip. A µ-addable cell a is called (1) a lower augmentable corner of S if adding a to µ removes a cell from ∂µ in a modified column c of S in the same row as a. We say that a is associated to c (or r, respectively). We call a modified column c of S leading if the cell c ∩ S (the cell of S in column c) is leftmost in its row in S. Lemma 89. Let S = µ/λ be a strip. Then any augmentation move m contains an augmentable corner of S. Proof. If the strip S admits an augmentation row (resp. column) move m then the top left (resp. bottom right) cell of m is a lower (resp. upper) augmentable corner of S. Definition 90. A completion row move is one in which all strings start in the same row. It is maximal if the first string cannot be extended below. A quasi-completion column move is a column augmentation move from a strip S that contains no lower augmentable corner. A completion column move is a quasi-completion move from a strip S that contains no upper augmentable corner below its unique (by Property 88) string 2 . A completion column move or a quasi-completion column move is maximal if its string cannot be extended above. A completion move is a completion row/column move. The definition of completion move is transpose-asymmetric since strips are. Our main result for augmentations of strips is the following. Its proof occupies the remainder of the section. (2) There is one equivalence class of paths in Strip λ from S to S ′ . (3) The unique equivalence class of paths in Strip λ from S to S ′ has a representative consisting entirely of maximal completion moves. Let m = s 1 ∪ s 2 ∪ · · · ∪ s r be an augmentation row move from S. Then c si,u is a modified column of S for each i ∈ [1, r]. Since m is a move and m * µ ∈ Π, the columns {c si,u | i ∈ [1, r]} are part of a group of modified columns of S and must be the rightmost r columns in this group, by Property 21. Lemma 92. Let s be a λ-addable row-type (resp. column-type) string that cannot be extended below (resp. above). Then cs(λ) c s,d < cs(λ) c − s,d (resp. rs(λ) rs,u < rs(λ) r − s,u ). Proof. Let s = {a 1 , a 2 , . . . , a ℓ } be a row-type string and suppose cs(λ) c = cs(λ) c − where c = c s,d . We have λ c − > λ c since a ℓ is λ-addable, and thus the cell immediately to the left of b = bot c (∂λ) is not in ∂λ. This implies that h λ (b) ≥ k − 1 so that h λ∪s (b) = k given that s is a row-type string. By Remark 13, there is a λ-addable corner at the end of the row of b that is contiguous with a ℓ , so that s can be extended below, a contradiction. The column-type case is similar. Proof. By Proposition 27, if n exists, it is determined by t 1 . We show that there are strings t 2 , t 3 , . . . , t r that can be added to λ ∪ t 1 . Let s i = {a Lemma 94. 
Suppose a is a lower augmentable corner in row R of the strip S = µ/λ, associated to the column c. (2) c is a leading column. (3) Suppose that c ′ > c is a leading column such that cs(µ) c ′ = cs(µ) c . Then there is a lower augmentable corner a ′ which is associated to c ′ . (4) Let r be the number of cells of S in the row of top c (µ). Then λ R − ≥ λ R + r. Proof. (1) and (2) are straightforward using the fact that c is a modified column of S. For (3), let b = bot c ′ (∂µ). Then h µ (b) ≥ h µ (bot c (∂µ)) = k by (1). Thus the addable corner at the end of the row of b (assured to exist by Remark 13) must be lower augmentable. (4) is implied by Remark 13. Lemma 95. Suppose m is a non-maximal completion row move from a strip S = µ/λ. Let t 1 = s 1 ∪ {a ℓ+1 , a ℓ+2 , . . . , a ℓ+ℓ ′ } be the maximal row-type string which extends the first string s 1 = {a 1 , . . . , a ℓ } of m below. Then the completion row move n from µ of Lemma 93 is a maximal completion row move from S. Proof. We use the notation of Lemma 93. We first show that n * S is a horizontal strip. Since a ℓ+1 = a ℓ+j . Since bot c (∂µ) lies in row R and since there are no cells of S in column c by supposition, we have that left R−1 (∂λ) is strictly to the right of column c by Corollary 85. Therefore, there are at least r extra cells of ∂µ in row R to the left of left R−1 (∂λ). This implies that λ R−1 −µ R ≥ r since rs(µ)/rs(λ) is a horizontal strip. This proves that {a (i) ℓ+j+1 | i ∈ [1, r]} also do not lie on on any cell of S and we get by induction that S is a horizontal strip. Since n is a row move, rs(n * µ)/rs(λ) = rs(µ)/rs(λ) is a horizontal strip. Finally, since n removes cells in the same columns of ∂µ as m does and n * S is a horizontal strip, cs(n * µ)/cs(λ) is a vertical strip. Hence n * S is a strip in Π. Lemma 96. Let λ ∈ C k and S = µ/λ a strip with no lower augmentable corners. Suppose a is a µ-addable corner such that adding a to the shape µ removes a cell from ∂µ in a modified row r of S. Then a is an upper augmentable corner. Proof. We must show that a does not lie on top of any cell in S. Suppose otherwise. Let b = bot col(a) (∂µ). We have that col(a) is not a modified column of S, for otherwise row(b) has a lower augmentable corner for S, a contradiction. Let b ′ be the cell immediately below b. Since col(a) is not a modified column but it contains a cell in S, we must have b ′ ∈ ∂λ − ∂µ. Furthermore, Property 82 Example 97. The k-core condition in Lemma 96 is necessary. For k = 4 consider Lemma 98. Let a be a lower augmentable corner of a strip S = µ/λ associated to column c. Let S contain r cells in the row containing the cell c ∩ S. Suppose that a is chosen rightmost amongst augmentable corners associated to columns of the same size in ∂µ. Let t 1 = {a = a 1 , a 2 , . . . , a ℓ ′ } be the maximal row type string which extends a below. Then there is a maximal completion row move n from S which has rank r and initial string t 1 . Proof. We apply the construction in Lemma 95 with m an empty move. m is not maximal since there is a lower augmentable corner a in some row R, which can be extended to a row-type string by Lemma 30. The move m has rank r since r cells can be added to row R of λ by Lemma 94(4). The choice of a guarantees that the negatively modified columns of n have the same size and that the monotonicity of column sizes is preserved. The argument in Lemma 95 completes the proof. Case (2). 
Since there is a cell of S in the column of b, any column of λ to the right of b is shorter than the column of c in µ. Given that h µ (b) = k − 1 and rs(µ) R = rs(λ) R − , we have the contradiction that h λ (c ′ ) ≤ 1 + h µ (b) = k. Case (3). By hypothesis h µ (b) = k. If there is a cell of S in col(b) then col(b) is a modified column of S and we have the contradiction that there is a lower augmentable corner of S in row R (λ R − > µ R by hypothesis). Otherwise we get the contradiction that there is an upper augmentable corner of S associated to row R in col(b). Lemma 102. Let S = µ/λ be a strip without lower augmentable corners and let a be an upper augmentable corner of S. Let s = {a 1 , . . . , a ℓ = a} be the maximal extension of a above, subject to the condition that the a i do not lie on top of cells of S. Then m = s is a quasi-completion column move from S. Proof. Let row(a 1 ) = R, b = (R, c) = left R (∂µ) and d = left R − (∂λ). Suppose first that s is the maximal extension of a without being constrained by not lying on top of S. By Lemmata 30 and 92, s is a column type string and m * µ is a k-shape. It suffices to show that rs(m * µ)/rs(λ) is a horizontal strip. Since S is a strip, rs(λ) R − ≤ rs(µ) R − = rs(m * µ) R − , so it remains to show that rs(m * µ) R ≤ rs(λ) R − . The only way this would fail is if rs(µ) R = rs(λ) R − . Since s is maximal, we have h b (µ) < k − 1. Thus from Lemma 101(1), we have that b and d lie in the same column. Since rs(µ) R = rs(λ) R − this gives the contradiction that a 1 lies over a cell of S. Now suppose that s is blocked from extending further by the constraint of not lying on top of S. Consider first the case that h µ (b) = k − 1, and observe that, as in the previous case, if m * µ fails to be a k-shape or rs(m * µ)/rs(λ) fails to be a horizontal strip, then rs(µ) R = rs(λ) R − (S is a strip and thus rs(µ) R− ≥ rs(λ) R− ≥ rs(µ) R ). Lemma 101(2) then implies that b and d lie in the same column and the result follows from the argument given in the previous case. Finally, consider the case that h µ (b) = k. Column c is not a modified column of S since a 1 cannot be a lower augmentable corner. Thus the cell b ′ below b is in ∂λ and so is the cell below a 1 . This gives the contradiction h b ′ (λ) > h b (µ) = k. 4.7. Maximal strips for cores. Recall that a strip S is maximal if it does not admit any augmentation move. Proposition 103. A strip is maximal if and only if it has no augmentable corners. Proof. By Lemma 89, if the strip S admits an augmentation move then S has an augmentable corner. Conversely, if S has an augmentable corner, then S admits a maximal completion move by Lemmata 98 and 102. Lemma 104. Let S = µ/λ be a maximal strip and let c, c ′ be two modified columns such that cs(λ) c = cs(λ) c ′ . Then the cells S ∩ c and S ∩ c ′ are on the same row. Proof. Suppose otherwise. We may assume that c ′ = c + 1. Let b ′ = bot c ′ (∂λ) and b be the cell just below bot c (∂λ). Then Since c ′ is a modified column, b ′ ∈ ∂µ, that is, h µ (b ′ ) = k. But then there must be a lower augmentable corner for S at the end of the row of b ′ , contradicting Proposition 103. Proof. It suffices to check h µ (x) for cells x in the modified row or column, such that h µ (x) = h λ (x) + 1. For the modified row r, let b = left r (∂λ). Then h λ (b) < k − 1, for otherwise S is not maximal. All cells to the left of b have h λ > k. Similar reasoning applies to the modified column. Proof. 
By Proposition 105 it suffices to show that S can be expressed as a sequence of maximal covers. Construct a sequence of covers for S using Lemma 83. By Proposition 103, S has no augmentable corner. We claim that this implies that the successive covers constructed have no augmentable corners which would then imply their maximality. Note that a modified row or column of one of these covers is immediately also one of S. Let C = ν/κ be such a cover. For lower augmentable corners, this is clear since such augmentable corners are augmentable corners of S. For an upper augmentable corner a / ∈ S of C, we apply Lemma 96 which implies that a is an upper augmentable corner of S. Suppose r > r ′ . We have rs(µ) r ′ ≥ rs(µ) r and cs(µ) col(x) = cs(µ) c − = cs(µ) c + 1, so that h µ (y) ≥ h µ (x) − 1 = k − 1. Therefore h µ (y) = k − 1. By Remark 13 there is a µ-addable cell in row r, which is below and contiguous with the cell of m in column c. This contradicts the maximality of m. Therefore r = r ′ and y = (r, c). Let ν = µ ∪ M . We have h ν (y) = k − 1. Since the negatively modified columns of M and the positively modified columns of m have their lowest k-bounded cell in the same row and rs(ν) r − ≥ rs(ν) r , we deduce that ν r − − ν r ≥ rank(m) + rank(M ). Using this and the maximality of M , by Lemma 93 we may deduce that viewing m as ν-addable, each of its strings can be maximally extended below to contain a cell in each of the rows of M by Lemma 31. Call the added cells m per . It is straightforward to verify the remaining assertions. It thus suffices to show that any augmentation path (m 1 , m 2 , . . . , m x ) ending at a maximal strip, is equivalent to one which begins with a maximal completion move. If m 1 is a non-maximal completion row move, then Lemma 95 implies that m 1 ⊂ m where m is a maximal completion row move with the same lower augmentable corner. But then (m\m 1 )(m 1 ) ≡ m is a row equivalence and using (61) we deduce that (m 1 , m 2 , . . . , m x ) is equivalent to a path beginning with m. A similar argument works for the column case. We may thus assume that m 1 is a non-completion augmentation row or column move. In the case of the non-completion augmentation row move, the argument is completed by Lemma 99. In the case of the non-completion augmentation column move, S either contains some lower augmentable corners or some upper augmentable corners above the one associated to m 1 . In the former case, let M be the maximal completion row move associated to a lower augmentable corner a of S such as described in Lemma 98. By Lemma 107, the argument is completed in that case. In the latter case, let M be the maximal completion column move associated to the highest upper augmentable corner a of S such as described in Lemma 102. The lemma then follows from Lemma 109. proc MaximizeStrip(µ, λ): local ρ := µ, q := (µ) while True: if the strip ρ/λ has a lower augmentable corner: let x be the rightmost one let s be the maximal ρ-addable string extending x below ρ := ρ ∪ s append ρ to q continue if the strip ρ/λ has an upper augmentable corner: let x be the rightmost one let s be the maximal ρ-addable string extending x above, subject to not having a cell atop ρ/λ ρ := ρ ∪ s append ρ to q continue break return q The path q is initialized to be the path of length zero starting and ending at µ and the current strip ρ/λ is initialized to be µ/λ. Whenever the current strip ρ/λ has a lower augmentable corner, the algorithm appends a completion row move to q by Lemma 98 and applies the move to ρ. 
Whenever the current strip ρ/λ has no lower augmentable corner but an upper augmentable one, the algorithm appends a completion column move m to q by Lemma 102 and applies the move to ρ. When ρ/λ has no augmentable corners, by Proposition 103 the algorithm terminates with ρ/λ a maximal strip and returns the current path q. Pushout of strips and row moves Let (S, m) be an initial pair where S = µ/λ is a strip and m = ν/λ is a nonempty row move. We say that (S, m) is compatible if it is reasonable, not contiguous, and is either (1) non-interfering, or (2) is interfering but is also pushout-perfectible; these notions are defined below. For compatible pairs (S, m) we define an output k-shape η ∈ Π (see Subsections 5.4 and 5.5 for cases (1) and (2) respectively). This given, we define the pushout push(S, m) = (S,m) (62) which produces a final pair (S,m) whereS = η/ν is a strip andm = η/µ is a move (possibly empty). This is depicted by the following diagram. Proof. Suppose otherwise. Let c be the leftmost modified column of S = µ/λ that is negatively modified by m. We have that c is also the leftmost negatively modified column of m since otherwise cs(µ) would not be a partition. By the previous comment, b = bot c (∂λ) is leftmost in its row in ∂λ and h λ (b) = k. But looking at S we see that h λ (b) < k, a contradiction. 5.1. Reasonableness. We say that the pair (S, m) is reasonable if for every string s of m, either s ∩ S = ∅ or s ⊂ S. In other words, every string of m which intersects S must be contained in S. Suppose the string s of m satisfies s ⊂ S. We say that S matches s below if c s,d is a modified column of S and otherwise say that S continues below s. Lemma 112. Suppose (S, m) is reasonable where m = s 1 ∪ s 2 ∪ · · · ∪ s r . If S matches s i below then S matches s j below for each j ≤ i. If S continues below s i then S continues below s j for each s j on the same rows as s i satisfying j ≤ i. Proof. The first assertion follows directly from the assumption that cs(µ) is a partition. In the case of the second assertion, we have that bot c (∂λ) belongs to the same row for every column c corresponding to such s j 's. Given that column c is not a modified column of S there is a cell of ∂λ \ ∂µ in column c. Given that µ/λ needs to be a skew diagram, the assertion follows. Proof. Let s = {a 1 , a 2 , . . . , a ℓ } be a string of m and suppose a i ∈ S. As in Corollary 84, we choose the unique decomposition of S into cover-type strings such that the bottom cell of t j is the j-th modified column of S for all j (going from left to right) and t j is taken to be maximal given t 1 , . . . , t j−1 . Suppose a i is in the string t of S. It suffices to show that (1) a i−1 ∈ t if i > 1 and (2) a i+1 ∈ t if i < ℓ. We prove (2) as (1) is similar. The proof proceeds by induction on the indent Ind m (s) of s in m. Suppose first that Ind m (s) = 0, that is, s is λ-addable. We have a i+1 ∈ S, for otherwise it would be a lower augmentable corner of S which would contradict the maximality of S by Proposition 103. By the choice of the decomposition of S, a i+1 and a i are both in t. Now suppose the Lemma holds for all strings s ′ of m with Ind m (s ′ ) < d. Let s ′ = {b 1 , b 2 , . . . , b ℓ } be the string of m preceding s. Since d > 0, b j is just left of a j for all j. Since a i ∈ S it follows that b i ∈ S. By induction the cover-type string t ′ of S containing b i contains s ′ . So col(b i ) = col(a i ) − is not a modified column of S. This implies that col(a i ) is also not a modified column of S. 
Due to the decomposition of S into covers, this means that t has a cell below a i , that is, a i+1 ∈ t. 5.2. Contiguity. Suppose (S, m) is reasonable where S = µ/λ and m is a move from λ to ν. We say that (S, m) is contiguous if there is a cell b ∈ ∂µ ∩ ∂ν which is not present in ∂(µ ∪ ν); b is called a disappearing cell. Since S is λ-addable, x * ∈ S. But by reasonableness of (S, m), since y ∈ S we get the contradiction that x * ∈ S. Therefore column c contains a cell of m (namely, x). We have n S > n m and h µ (b) = k, and x has no cell of m contiguous to and below it. Item (1) follows. If r is not a modified row of S then S removes the n S cells just left of b and S contains n S cells just left of x. Since m is λ-addable m also contains the n S cells just left of x. But then m doesn't modify some of these columns (since n S > n m ) while it modifies column c, contradicting Property 21. This proves (2). It follows that x ∈ m is an upper augmentable corner for S, proving (3). By Proposition 115, we have the following corollary. Proof. Let m + = t 1 ∪ t 2 ∪ · · · ∪ t ρ where each t i is either a string in m ′ \ S, or a string in m ′ ∩ S shifted upwards. We assume that the t i are ordered from left to right, as is the convention for row moves. It is clear that the t i are weak translates of each other in the correct columns. In order to prove the lemma, we will show that they are successive row type addable strings that are translates of the strings of m. We proceed by induction on i. First suppose that t i ⊂ m ′ was not bumped up. By Lemma 114, c s,u contains no cells of S. By non-contiguity and Corollary 85, column c s,d is identical in λ and µ, and also in ν and µ ∪ ν. Thus t i is a row type string of µ ∪ t 1 ∪ · · · ∪ t i−1 equal to s i . Now suppose that t i was bumped up from s i ∈ m ′ ∩S. By Lemma 112, it suffices to check the case that s i is λ-addable. First we show that t i is addable. This is clear if s i is not equal to s 1 , for s i−1 is higher than s i . Lemma 122 deals with the case s i = s 1 . The diagram of t i is a translate of that of s i by Lemma 114 and the assumption that S continues below s i , which ensures that the their modified columns agree in size. Lemma 122. Suppose S continues below the first string s 1 = {a 1 , a 2 , . . . , a ℓ } of m. For each i ∈ [1, ℓ] let c i be the column containing a i . Then there is an addable corner of µ in column c i . Proof. Consider the case i = ℓ and set c = c ℓ . We prove the equivalent statement that column c − either intersects S or satisfies (λ) c − ≥ (λ) c + 2. Since m is a move, we have cs(λ) c − > cs(λ) c . Assume there is no cell of S in column c − . Then Corollary 85 and "continuing below" imply that the bottom of c − in ∂λ starts higher than that of c. This implies (λ) c − ≥ (λ) c + 2. The general case then follows from Lemma 86 since s 1 is λ-addable and there is a µ-addable corner in column c = c ℓ . 5.4. Row-type pushout: non-interfering case. Let (S, m) be reasonable, noncontiguous, and non-interfering. Then by definition we declare (S, m) to be compatible, set η = (m + ) * µ, let (S,m) be as in (63), and define the pushout of (S, m) by (62). By Lemma 121 and Proposition 123,m is a (possibly empty) row move andS is a strip. Proof. It is immediate that η/ν is a horizontal strip. We have rs(η)/rs(ν) = rs(µ)/rs(λ), which is a horizontal strip by assumption. Also Observe that m\m ′ corresponds to the strings s of m such that s and S are matched below. 
Therefore cs(η) − cs(ν) is a 0-1 vector since the positively modified columns of m \ m ′ cancel out with some modified columns of S, and the negatively modified columns of m \ m ′ do not coincide with modified columns of S by Property 111. This means that S has a lower augmentable corner in row r, contradicting Proposition 103 and maximality. Therefore b and b ′ are in row r, and this row corresponds to the row of the top cell of the last string of m. Now suppose that c + is also a modified column of S with cs(λ) c = cs(λ) c + , and letb = bot c + (∂λ). By the same argument we get that b andb lie in the same row. Continuing in this way, we get that all modified columns d of S such that cs(λ) d = cs(λ) c occupy the same rows. If there are ℓ of them and ℓ ′ cells of m in row r, we have established that λ r − − λ r ≥ ℓ + ℓ ′ . Therefore ρ = m + * µ is such that ρ r − − ρ r ≥ ℓ since exactly ℓ ′ cells of m + ∪ S lie in that row by hypothesis (otherwise column c would not be a modified column of S). By Lemma 93, any row R below row r that contains a cell of the last string of m is also such that λ R − − λ R ≥ ℓ + ℓ ′ . Furthermore, for any such row R we also have ρ R − − ρ R ≥ ℓ since again exactly ℓ ′ cells of m + ∪ S lie in that row by hypothesis (otherwise there would be an upper augmentable corner associated to a given row R, contradicting maximality). We have established that ℓ cells can be added to the right of every cell of the last string of m in ρ, and from our proof, these cells do not lie above cells of S. Let m comp be the union of those cells. Defining η = ρ ∪ m comp , it is clear that η/ν is a horizontal strip. We have rs(η) = rs(µ) so rs(η)/rs(ν) is a horizontal strip. Finally, one checks that cs(η) is a partition and cs(η)/cs(ν) a horizontal strip in the same manner as in Proposition 123. 5.6. Alternative description of pushouts (row moves). Suppose m = s 1 ∪ s 2 ∪ · · · is a row move such that ∆ cs (s 1 ) affects columns c and c + d. If α is not a partition, we suppose that α i + 1 = α i+1 = α i+2 = · · · = α i+a > α i+a+1 . Then the perfection of α with respect to m is the vector Here e j denotes the unit vector with a 1 in the j-th position and 0's elsewhere. Let (S = µ/λ, m = ν/λ) be any initial pair where m = s 1 ∪ · · · ∪ s r . Let m ′ be the collection of cells obtained from m by removing s i whenever the positively modified column of s i is a modified column of S. It is easy to see that m ′ is of the form s j ∪ s j+1 ∪ · · · ∪ s r . The expected column shape ecs(S, m) of (S, m) is defined to be ecs(S, m) = per m (cs(λ) + ∆ cs (S) + ∆ cs (m ′ )). Proof. It is easy to see that η/µ decomposes into row type strings as m ′′ ∪ m comp where cs(m ′′ * µ) = cs(µ) + ∆ cs (m ′ ). Since m ′′ modifies the same columns as m ′ , and the two have the same diagrams we conclude that each string of m ′′ is either a string in m ′ or a string in m ′ shifted up one cell. But m ′′ is a collection of strings on µ, so the strings in m ′ must be reasonable with respect to S. We now claim that m comp ∩ m = ∅. Suppose otherwise. Let a be the rightmost cell in the intersection m comp ∩ m, lying in a string s ∈ (m \ m ′ ) and a string t ∈ m comp . If a is not the rightmost cell in s we let b be the cell immediately right of a in s. Now s and t have the same diagram so we deduce that the cell b ′ after a in t is either equal to b or immediately to the left of b. In either case, this contradicts the assumption that a is rightmost. Thus a is in the positively modified column c of s. 
But by the original assumptions c is also a modified column of S. This contradicts the fact that m comp ∩ S = ∅ and we conclude m comp ∩ m = ∅. Now we apply (3) to see that all strings m \ m ′ must have already been contained in µthus (S, m) is reasonable. Suppose (S, m) is contiguous. By Lemma 117, this means there is a disappearing cell b, and b is in a column c which does not contain cells of S but does contain cells of m (it is in fact a positively modified column of m). By reasonableness, the column c thus contains cells of m ′ and in particular is not in a modified column of m comp . Thus cs(λ) c = ecs(S, m) c − 1. However, the disappearance implies cs(η) c = cs(λ) c , a contradiction. To show that η/ν is a horizontal strip, we only need to show that no cell of m comp lies above a cell of S. Suppose x is the leftmost cell in m comp that lies above a cell of S, and let r be the row of x. Let s ∈ m comp be the string that contains x and let c be the column of the cell removed in row r when adding s. Since rs r (µ) ≤ rs r − (λ) we have that c is weakly to the right of left r − (∂λ) and thus the cell in row r − and column c belongs to ∂λ \ ∂µ. Hence, by Corollary 85, there is a cell of S in column c. First assume that x is not the highest cell in s, and let y be above x in s. Then y is in column c and we either have the contradiction that y lies above a cell of S or that m comp ∩ S = ∅. Now assume that x is the highest cell in its string. This time we have the contradiction that c is not a modified column of S. Since rs(η) = rs(µ) and rs(ν) = rs(λ) we have that rs(η)/rs(ν) is a horizontal strip. Finally, by supposition the negatively modified columns of m comp are positively modified columns of S and the lowest cell of each string of m comp modifies positively its column. Since η/ν is a horizontal strip, we have that cs(η)/cs(ν) is a vertical strip. Proof. In the proof of Proposition 126 it was shown that no cell of m comp can lie above a cell of S. Therefore the cells of m comp lie above cells of λ. Suppose m comp and m share columns. Then they intersect, and must do so in m \ m ′ ⊂ S, a contradiction. Pushout of strips and column moves In this section we consider initial pairs (S = µ/λ, m = ν/λ) consisting of a strip and a column move. We define (S, m) to be compatible if it is reasonable, non-contiguous, normal, and either (1) it is non-interfering or (2) it is interfering but is pushout-perfectible; these notions are defined below. As for row moves, in each of the above cases we specify an output k-shape η ∈ Π and define the pushout of (S, m) and the final pair (S,m) as in (62) (63). We omit proofs which are essentially the same in the row and column cases. 6.1. Reasonableness. We say that the pair (S, m) is reasonable if for every string s ⊂ m, either s ∩ S = ∅, or s ⊂ S. If s ⊂ m is contained inside S, we say that S matches s above if r s,u is a modified row of S. Otherwise we say that S continues above s. Lemma 128. Let (S, m) be any initial pair. If a modified row of S contains a cell of m, then that row intersects the initial string of m. If a modified row of S is a negatively modified row of m, then S intersects the initial string s ⊂ m. In particular, if (S, m) is reasonable, only the initial string s ⊂ m can be matched above. Proof. Follows immediately from the definition of column moves and the fact that rs(µ)/rs(λ) is a horizontal strip. Proof. The pair (S, m) is reasonable by Proposition 131. Suppose S continues above s = {a 1 , a 2 , . . . 
, a ℓ }, where the cells are indexed by decreasing diagonal index. By Lemma 128, normality cannot be violated if s is not the initial string of m, so we suppose s is the initial string of m. Since S does not match s above by definition, the row r ℓ containing a ℓ is not a modified row of S. The claim is thus trivial if ℓ = 1 so we assume ℓ > 1. Suppose r ℓ contains p ≥ 1 cells of S, implying that the p leftmost cells of r ℓ are moved when going from ∂λ to ∂µ (and none of the columns of these p cells are modified columns of S). It follows from Property 82 and rs(λ) r ℓ < rs(λ) r − ℓ that λ r − ℓ ≥ λ r ℓ + p + 1. In particular, there is an addable corner b * on row r ℓ of µ. It is easy to see that the row r ℓ−1 containing a ℓ−1 contains at least p cells of S, with equality if and only if r ℓ−1 is not a modified row of S. If r ℓ−1 is a modified row of S, then the addable corner b * will be an upper augmentable corner for S, contradicting maximality and Proposition 103. So r ℓ−1 is not a modified row of S. Lemma 133. Suppose (S, m) is normal and let s ⊂ m be any string such that S continues above s. Then S contains the same number of cells in each row r containing a cell of s, and also the same number of cells in the negatively modified row of s. Furthermore, if s is the initial string of m, then each such row r has a µ-addable corner. Proof. The first statement follows easily from the definition of normality. The last statement is proven as in Proposition 132. 6.3. Contiguity. Suppose (S, m) is reasonable. We say that (S, m) is contiguous if there is a cell b ∈ ∂µ ∩ ∂ν which is not present in ∂(µ ∪ ν). Call such a b a disappearing cell. Proof. Suppose row r contains p ≥ 1 cells of S. Then m must contain cells in column c. Let b ′ be the cell below bot c (∂ν), R = row(b ′ ), and h = cs(ν) c . We have the sequence of inequalities: which contradicts the fact that rs(µ)/rs(λ) is a horizontal strip. Thus row r contains no cells of S and contains exactly one cell x ∈ m. Also column c exactly one cell a ∈ S and no cells of m. Suppose r is not a positively modified row of m. Then m contains a cell a * in the column of the cell b * immediately left of b, and we have h λ (b * ) = k. But a * is in the same row as a, so a * ∈ S as well. But by reasonableness, x ∈ S, a contradiction. This proves (1). Suppose c is not a modified column of S. Then there is a cell x ′ ∈ S contiguous to and below a ∈ S. Considering hook lengths we conclude that x ′ is just below x, S removes the cell b ′ just below b, h λ (b ′ ) = k, and λ r = λ r − . But since m is λ-addable it follows that x ′ ∈ m. But then row r − must be positively modified by m, contradicting h λ (b ′ ) = k. This proves (2) and that x is λ-addable. For (3), the cell x is a lower augmentable corner for S. If m ′ = ∅ we say that (S, m) is non-interfering if rs(λ) + ∆ rs (S) + ∆ rs (m ′ ) is a partition and interfering otherwise. If m ′ = ∅ we say that (S, m) is noninterfering if rs(µ)/rs(ν) is a horizontal strip and interfering otherwise (observe that rs(λ) + ∆ rs (S) + ∆(m ′ ) = rs(λ) + ∆ rs (S) = rs(µ) is always a partition in that case). The latter case is referred to as special interference. Proof. The proof is similar to Lemma 121, except that we now use Lemmata 133 and 137. Lemma 137. Suppose S continues above the first string s 1 = {a 1 , a 2 , . . . , a ℓ } of m. For each i ∈ [1, ℓ] let r i be the row containing a i . Then there is an addable corner of µ in row r i . Moreover, the addable corner of µ in row r i does not lie above a cell of S. 
Proof. Consider the case i = ℓ and set r = r ℓ . Since S does not match s 1 above by definition, the row r ℓ containing a ℓ is not a modified row of S by normality. Suppose r ℓ contains p ≥ 1 cells of S, implying that the p leftmost cells of r ℓ are moved when going from ∂λ to ∂µ (and none of the columns of these p cells are modified columns of S). It follows from Property 82 and rs(λ) r ℓ < rs(λ) r − ℓ that λ r − ℓ ≥ λ r ℓ + p + 1. In particular, there is an addable corner in row r ℓ of µ and it does not lie above a cell of S. Since s 1 = {a 1 , a 2 , . . . , a ℓ } is λ-addable, Lemma 31 ensures that λ r − i ≥ λ ri +p+1 for all i. There are exactly p cells of S in row r i for all i by Lemma 133. So again there is an addable corner in row r i of µ and it does not lie above a cell of S. 6.5. Column-type pushout: non-interfering case. Suppose (S, m) is normal, non-contiguous, and non-interfering. In this case, by definition (S, m) is declared to be compatible where we set η = (m + ) * µ and define (S,m) by (63) and the pushout of (S, m) by (62).m is a (possibly empty) column move andS is a strip by Lemma 136 and Proposition 138. Proof. That η/ν is a horizontal strip is not difficult (Lemma 137 ensures that the cells of m + do not lie above cells of S). We also have cs(η)/cs(ν) = cs(µ)/cs(λ). Suppose m ′ is not empty and that rs(η) = rs(λ)+∆ rs (S)+∆ rs (m ′ ) is a partition. We must prove that rs(ν) r − ≥ rs(η) r ≥ rs(ν) r for each row r. Recall that modified rows of m ′ are modified rows of m and that the only string that may possibly be in m \ m ′ is the initial one. Therefore the second inequality follows from the fact that if m \ m ′ is not empty then the positively modified row of the initial string of m is a modified row of S. To prove the first inequality, observe that Suppose that m = m ′ . Then the first inequality can only fail if r − is the uppermost negatively modified row of m or if r is the positively modified row of the initial string of m. Suppose that m is non-degenerate. Let r − be the uppermost negatively modified row of m. By normality we have rs(λ) r − = rs(µ) r − . Therefore if the first inequality fails, we have rs r − (µ) = rs r − (λ) = rs r (µ) which is a contradiction since η would not then be a k-shape (r − is a negatively modified row of m ′ that is not a modified row of S by normality). Let r be the positively modified row of the initial string of m. By normality we have rs(λ) r = rs(µ) r . Therefore if the first inequality fails, we have rs(λ) r − = rs(µ) r = rs(λ) r which is a contradiction since λ would not then be a k-shape (r − is a negatively modified row of m). Suppose that m is degenerate, and let r − be the uppermost negatively modified row of m (and thus r is the positively modified row of the initial string of m). By normality we have rs(λ) r − = rs(µ) r − and rs(λ) r = rs(µ) r . Therefore if the first inequality fails, we have rs(λ) r − ≤ rs(µ) r + 1 = rs(λ) r + 1 which is a contradiction since λ would not then be a k-shape (rs(λ) r − − rs(λ) r ≥ 2 since m is degenerate). Finally, suppose that m and m ′ are distinct. The only case to consider that was not considered in the case m = m ′ is when r − is the negatively modified row of the first string of m. By hypothesis m ′ is not empty and so r is also a negatively modified row of m ′ . The first inequality then follows immediately. 6.6. Column-type pushout: interfering case. Suppose (S, m) is normal, noncontiguous, and interfering. 
We say that (S, m) is pushout-perfectible if there is a set of cells m comp outside (m + ) * µ so that if then η/ν is a strip and η/µ is a column move from µ whose strings have the same diagram as those of m. Since rs(η)/rs(ν) is a horizontal strip, m comp can only be a single column-type string and will thus be unique if it exists. In the case that (S, m) is pushout-perfectible, by definition we declare (S, m) to be compatible where η is specified by (69) and define (S,m) by (63) and the pushout of (S, m) by (62). By definition,m is a column move andS is a strip. Example 139. This is an example of special interference for k = 3. In µ = S * λ and ν = m * λ the new cells added to λ are shaded. In the lower right k-shape the cells of m comp are shaded. Lemma 140. Let (S, m) be such that m ′ is empty. Then there is (special) interference iff rs(µ) r − = rs(µ) r , where r − is the negatively modified row of m (m is necessarily of rank 1). Proposition 141. Suppose (S, m) is interfering, S is maximal, and m is a column move. Then (S, m) is pushout-perfectible (and hence compatible). Moreover m comp consists of a single string lying in the same columns as the last string of m. Proof. By Proposition 132 and Corollary 135, (S, m) is normal and not contiguous, so that it makes sense to refer to interference. Suppose m + is non-empty so that rs(λ) + ∆ rs (S) + ∆ rs (m ′ ) is not a partition. Let r − be the negatively modified row of the final string s of m. We may assume that r − is not a modified row of S, for otherwise s would have to be initial, and Lemma 130 would imply that s ⊆ S and thus that s is matched above, implying that (S, m) does not interfere. Also, r must be a modified row of S for interference to occur. Note that since rs(λ) + ∆ rs (S) + ∆ rs (m ′ ) is not a partition we have rs(µ) r − = rs(µ) r and thus rs(λ) r − = rs(µ) r (r − is not a modified row of S). We claim that η has an addable corner directly above the first cell a of s. Since h λ (left r − (∂λ)) = k by the definition of a move, we have from rs(λ) r − = rs(µ) r that h = h µ (left r (∂µ)) is k − 1 or k. In either case (using Lemma 101(3) when h = k) we see that left r − (∂λ) lies in the same column as left r (∂µ). We then have immediately that there is an addable corner directly above the first cell a of s. Since s is λaddable, we obtain from Lemma 86 that there is a µ-addable corner above every cell of m. The rest of the proof that m + ∪ m comp is a column move is analogous to the proof of Lemma 102. Suppose m + is empty and rs(η)/rs(ν) = rs(µ)/rs(ν) is not a horizontal strip. Recall that only the initial string of m can disappear and thus m is of rank 1. From Lemma 140, the negatively modified row of m is in a row r − such that row r is a modified row of S with rs(λ) r − = rs(µ) r − = rs(µ) r (recall that rs(λ) r − = rs(µ) r − by normality). Again η has an addable corner directly above the first cell a of s by Lemma 101 (3). The rest of the proof that m comp is a column move is as in the non-empty case. Since the cells of m comp lie above cells of m we have that η/ν is a horizontal strip. Obviously cs(η)/cs(ν) = cs(µ)/cs(λ) is a vertical strip. We thus only have to prove that rs(η)/rs(ν) is a horizontal strip. If m ′ is non-empty there is interference only if rs(λ) + ∆ rs (S) + ∆ rs (m ′ ) is not a partition. Following the proof of Proposition 138 we have that rs(ν) i ≥ rs(η) i+1 for all i except possibly when i = R is the highest positively modified row of m. 
In that case rs(η)/rs(ν) is a horizontal strip since the positively modified row R + of m comp lies in the row above row R and rs(µ)/rs(λ) is a horizontal strip by definition (that is, given λ R ≥ µ R + , η R + = µ R + + 1 and ν R = λ R + 1, we have ν R ≥ η R + ). If m ′ is empty, then by Lemma 140 there is interference iff rs(µ) i = rs(µ) i+1 , where i = r − is the negatively modified row of m = s. In that case, given η r = µ r − 1 and ν r − = λ r − − 1, the fact that λ r − ≥ µ r guarantees that ν r − ≥ η r . So we only have to check what happens at the positively modified row of m comp . The result follows just as in the m ′ = ∅ case. Proof. Let a be a cell of s that lies above a cell of S. From the definition of s = m comp , there is a cell of the final string t of m in the row below that of a. Hence, since S is a horizontal strip, a also lies above a cell of t. Finally, since s and t are translates the lemma follows. Proof. By the definition of interference, S modifies the row r above the highest negatively modified row of m. From the hypotheses, h µ (r, col(a)) = k, so that the cell a is contiguous and above a cell of S in row r. By Lemma 142 a does not lie above a cell of S and is therefore an upper augmentable corner of S. 6.7. Alternative description of pushouts (column moves). Let (S, m) = (µ/λ, ν/λ) be any initial pair where m = s 1 ∪ · · · ∪ s r is a column move. Let m ′ be the collection of cells obtained from m by removing s i whenever the positively modified row of s i is a modified row of S. It is easy to see that m ′ is of the form s 1 ∪ s 2 ∪ · · · ∪ s r or s 2 ∪ · · · ∪ s r . Suppose that ∆(s 1 ) affects rows c and c + d. If α is not a partition, we suppose that α i + 1 = α i+1 > α i+2 . We say that there is interference if α is not a partition or if m ′ is empty and α i = α i+1 , where i = c is the negatively modified row of s 1 . Then the perfection of α with respect to (S, m) is the vector The expected row shape ers(S, m) of (S, m) is defined to be ers(S, m) = per S,m (rs(λ) + ∆ rs (S) + ∆ rs (m ′ )). (2) η/µ is either empty or a column move whose strings are translates of those of m. Proof. It is easy to see that η/µ decomposes into column type strings as m ′′ ∪m comp where cs(m ′′ * µ) = cs(µ)+∆ cs (m ′ ). The proof of reasonableness and non-contiguity of (S, m) is similar to the proofs in Proposition 126 with Lemma 117 replaced by Lemma 134. To prove normality, suppose the first string s of m is continued above by S. Then s ∈ m ′ and since m comp cannot affect the rows affected by the strings of m ′ , there must exist a string of η/µ that is the rightward shift of s. This implies that S contains the same number of boxes in each row r containing a box of s and also in the negatively modified row of s. Therefore, none of the rows containing a box of s is a modified row of S (given that the uppermost row containing a box of s is by hypothesis not a modified row of S), and also the negatively modified row of s is not a modified row of S. If (S, m) is non-interfering then η/ν is a strip by Proposition 138. If (S, m) is interfering then m comp is a single string t. Suppose a cell x of t lies above a cell y of S. Since S is a horizontal strip y is λ-addable, and thus by hypothesis y is also a cell of m (t is a translate of the strings of m and it starts one row above the final string of m). Therefore η/ν is a horizontal strip. Obviously cs(η)/cs(ν) = cs(µ)/cs(λ) is a vertical strip so it only remains to show that rs(η)/rs(ν) is a horizontal strip. 
This is done as in the proof of Proposition 141. Pushout sequences Consider an initial pair (S, p) consisting of a strip µ/λ for λ, µ ∈ Π and a path p from λ to ν ∈ Π. A pushout sequence from (S, p) is a sequence of augmentation moves and pushouts which produces a final pair (S, q) consisting of a maximal strip S = η/ν and a path q from µ to η for some η ∈ Π: where λ 0 = λ, S = S 0 , the top row of (71) consists of the path p (possibly with empty moves interspersed), each S i is a strip withS = S L maximal, the n i are (possibly empty) moves, the bottom row of (71) is the path q, and for each 1 ≤ i ≤ L, the diagram defines an augmentation move if m i is empty, or the pushout of a compatible pair if m i is not empty. The main technical work in this paper is to establish the following existence and uniqueness properties of pushout sequences. Proposition 145. Each initial pair (S, p) admits a canonical pushout sequence, which repeatedly maximizes the current strip and pushes out the resulting maximal strip with the next move, and ends with maximization. We prove Proposition 145 in Subsection 7.1 by giving an algorithm which computes the canonical pushout sequence. Proposition 146. Pushout sequences take equivalent paths to equivalent paths. That is, if (S, p) and (S, p ′ ) are initial pairs with p ≡ p ′ and there are pushout sequences from (S, p) and (S, p ′ ) that produce the final pairs (S, q) and (S ′ , q ′ ) respectively, thenS =S ′ and q ≡ q ′ . It follows that pushout sequences define a map (S, [p]) → (S, [q]) where (S, p) is an initial pair and (S, q) is a final pair withS maximal, fitting the diagram (70). The special case p ′ = p of Proposition 146 is proved in Subsection 7.2. The general case is proved in Section 8. 7.1. Canonical pushout sequence. The following algorithm PushoutSequence produces a canonical pushout sequence from (S = µ/λ, p). It suffices to produce the path q, as the output stripS is defined by the last elements of p and q. We may assume that p = (λ = λ 0 , λ 1 , . . . , λ L ) has no empty moves and m i is the move from λ i−1 to λ i . Let PushoutCompatiblePair(ρ, λ i−1 , λ i ) compute the following pushout and return η as specified in Subsections 5.4 and 5.5 if m i is a row move and 6.5 and 6.6 if m i is a column move. proc PushoutSequence(µ, λ, p): local q := (µ), q ′ ρ := µ for i from 1 to length(p): extend q by q ′ ρ := last(q ′ ) ρ := PushoutCompatiblePair(ρ, λ i−1 , λ i ) append ρ to q q ′ := MaximizeStrip(ρ, λ) extend q by q ′ return q This procedure builds up a path q, implemented as a list of shapes. The variable q is initialized to be the list with a single item µ. For each move m i in p, the current strip is maximized. By Propositions 125 and 141, the resulting initial pair is compatible and hence its pushout with the current move is well-defined. The output strip (given by the last shapes in q and p respectively) is maximal due to the last invocation of MaximizeStrip. The "extension" step takes the path q, given as a list of k-shapes, and extends it by the path q ′ . Note that the last element of q equals the first element of q ′ . 7.2. Pushout sequences from (S, p) are equivalent. In this subsection we prove the following result, which is the p = p ′ case of Proposition 146. Proposition 147. Let S = µ/λ be a strip and p a path in Π from λ to ν. Then any two pushout sequences from (S, p) produce the same strip and equivalent paths. 
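For concreteness, the canonical pushout sequence of Subsection 7.1, which the proof below uses as its reference point, can be sketched in Python. This is only a non-authoritative sketch under our own encoding choices: MaximizeStrip and PushoutCompatiblePair are passed in as black boxes, lam_path lists the shapes λ_0, λ_1, . . . , λ_L of the path p (so lam_path[0] is the base λ of the initial strip µ/λ and there are no empty moves), and the function name and arguments are ours.

def pushout_sequence(mu, lam_path, maximize_strip, pushout_compatible_pair):
    # Sketch of the canonical pushout sequence of Proposition 145.
    # maximize_strip(rho, base) returns the list of shapes visited by
    # MaximizeStrip applied to the strip rho/base, starting at rho;
    # pushout_compatible_pair(rho, prev, cur) returns the output shape eta
    # of the pushout of the maximal strip rho/prev with the move prev -> cur.
    q = [mu]
    rho = mu
    for i in range(1, len(lam_path)):
        q_prime = maximize_strip(rho, lam_path[i - 1])   # maximize the current strip
        q.extend(q_prime[1:])       # last element of q equals first of q_prime
        rho = q_prime[-1]
        rho = pushout_compatible_pair(rho, lam_path[i - 1], lam_path[i])
        q.append(rho)               # record the pushed-out shape
    q_prime = maximize_strip(rho, lam_path[-1])          # final maximization
    q.extend(q_prime[1:])           # so the output strip is maximal
    return q

The only bookkeeping the sketch makes explicit is the point already noted above: the last shape of q coincides with the first shape returned by each maximization, so the duplicate is dropped when extending.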
We shall reduce the proof of Proposition 147 to that of Proposition 148 and then use the rest of the subsection to prove the latter. Consider the setup of Proposition 147. By induction on the number of moves in p we may assume that p = m is a single move. We may assume that one of the pushout sequences to be compared, is the canonical one, which first passes from the strip S to its maximization S max by the augmentation path r, then does the pushout push(S max , m) = (S ′′ ,m), and finally maximizes the resulting strip via the augmentation pathr, resulting in the maximal stripS and the path q =rmr. Consider any other pushout sequence from (S, m), which produces (S ′ , q ′ ), say. Suppose the first operation in this pushout sequence is an augmentation move m ′ . The move m ′ is the first in the output path q ′ ; let the pathq be the rest of q ′ . Let t be any augmentation path from m ′ ∪ S to its maximization. By Proposition 91, this maximization is equal to S max and tm ′ ≡ r. We have which holds by induction sinceq andrmt are equivalent, being produced from the same pair (m ′ ∪ S, m) by pushout sequences, with m ′ ∪ S closer to maximal than S. We may therefore assume that the first operation in the pushout sequence producing (S ′ , q ′ ), is a pushout, and in particular that (S, m) is compatible. Let push(S, m) = (S ′ , M ). Writing q ′ =qM ,q is an augmentation path that maximizes S ′ and producesS ′ . We may also assume that S is not already maximal, for otherwise there is only one way to begin the pushout sequence from (S, m). Then r is nonempty; let its first move be x and r ′ the remainder of r. Since x is a move in the canonical maximization of S, it is a maximal completion move that augments S. We apply Proposition 148, using the label S ′ ∪x for the front right upward arrow. Let y be an augmentation path that maximizes the strip S ′ ∪x. We have If M is empty then it is easy to check that (S ∪ x, m) is compatible (with pushout (S,M ), say) such that eitherM is empty orM is a row move from x * µ andx :=M ∪ x is a row move from µ withM extending the strings of x above. Either way we obtain an elementary equivalencexM ≡M x. II) m is a row move and x is a maximal completion column move. Let M and x be intersecting moves. We show that x continues above and below M , so that M and x satisfy an elementary equivalence. Since a row and a column move cannot be matched above and cannot be matched below, it suffices to show that M does not continue above x and does not continue below x. Suppose that M continues above x. Let b be the highest cell of M ∩ x and s the string in M containing b. Let a be the cell above and contiguous to b in s. By the maximality of x, a has to lie above a cell of S. So the string s of M was pushed above during the pushout. Hence the cell below b is also in S. This contradicts the fact that x is a completion move. Suppose that M continues below Suppose M and x do not intersect and are contiguous. In this case M is above x. Let b be the highest cell of x and a the cell of M contiguous with b. By maximality of x, a has to lie above a cell a ′ of S. But since this implies that the string s of M that contains a was pushed above during the pushout, we have that the column of a ′ is not a modified column of S. Therefore there needs to be a cell of S below b. But this is a contradiction to the fact that x is a completion move. Suppose M is empty. In this case one may deduce that (S ∪ x, m) is compatible withM empty. Then x and M satisfy a trivial equivalence. 
III) m is a column move and x is a maximal completion row move. Let M and x be intersecting. By maximality of x, x continues below M . Suppose that M continues above x. Let b be the highest cell of x. It is a lower augmentable corner of S associated to a modified column c of S. Since M continues above, there is a cell a of M in column c that lies above a cell of S. Suppose a belongs to m + . By Lemma 137, a does not belong to the first string of m + and so there is a cell of m + in the row below that of a. Given that S is a horizontal strip, a lies above a cell of m ∩ S and so does b by reasonableness and translation of strings in a move. But then we have the contradiction that b lies above a cell of S. Therefore a ∈ m comp . By Lemma 142 the cells of m comp are in the same column as the final string of m and thus we get again the contradiction that b lies above a cell of m ∩ S. Therefore if M and x intersect x continues above and below M . Suppose M and x do not intersect and M and x are contiguous. M cannot be below x due to the maximality of x. If M is above x then a contradiction is reached as in the previous paragraph. Suppose M is empty. Then m is a single string that is matched above by S and (S, m) is non-interfering. Since x is a completion row move for S, it follows that m is matched above by S ∪ x and (S ∪ x, m) is non-interfering. Therefore (S ∪ x, m) is compatible withM empty. Hence M and x satisfy an elementary equivalence. IV) m is a column move and x a maximal completion column move. Let M and x be intersecting. Suppose M continues above x. Let b be the highest cell of x and let a be the cell of M contiguous to it from above. By maximality of x, cell a lies above a cell of S. Therefore by Lemma 137 a belongs to m comp and we have as before the contradiction that x lies above a cell of m. Suppose M continues below x. Let s be the string of M that meets x. The bottommost cell a of x is an upper augmentable corner of S associated to row R, say. The cell a is also in s and since M continues below x, s has a cell b contiguous to and below a. Since row R has a µ-addable cell contiguous to a, we deduce that it is b. The cell b cannot belong to m since m is a vertical strip and there are cells of S to the left of b since R is a modified row of S. Neither can b belong to m + since in this case R could not be a modified row of S by normality. So a and b belong to m comp . Since m comp does not lie above the last string of m (otherwise there would be a cell of S below a), there is an upper augmentable corner of S below a by Lemma 143. This contradicts the assumption that x is a maximal completion column move. Now suppose that x and M interfere. As in (I), Lemma 150 covers the case where M is above x. And if M is below x, the transpose of the argument given in (I), shows that (M, x) satisfies the transpose analogue of an upper-perfectible pair of interfering row moves. Suppose M is empty. Then m consists of a single string that is matched above by S and (S, m) is non-interfering. Since m is also contained in S ∪ x we see that (S ∪ x, m) is normal. Suppose m is matched above by S ∪ x. The case that special interference occurs for (S ∪ x, m), is handled by Lemma 152 below; in particular M and x satisfy an elementary equivalence. Otherwise (S ∪ x, m) is non-interfering and therefore compatible. ThenM is empty, which leads to a trivial elementary equivalence for M and x. Otherwise m is continued above by S ∪ x. 
Then the negatively modified row of x is the positively modified row of m (say the r-th) and ∆ rs (S) r = 1. In this case one may deduce the noninterference of (S ∪ x, m) from that of (S, m). Therefore (S ∪ x, m) is compatible. We haveM = m + (where m + is defined for the pair (S ∪ x, m)) which is continued above by x to be a column move x ∪M from µ. Hence M and x satisfy an elementary equivalence. Lemma 152. Suppose m = s is a column move such that push(S, m) = (S \ m, ∅), and suppose that x is a maximal completion column move from µ such that (S∪x, m) is in the special interference case. Then (S ∪ x, m) is pushout-perfectible, with m comp such that push(S ∪ x, m) = ((S ∪ x ∪ m comp ) \ m, m comp ). Moreover m comp corresponds to m shifted up one cell and m comp extends x above to a column move from µ. Proof. Let η = x * µ. By assumption m ⊂ S, the single string of m matches S ∪ x above, and rs(η)/rs(ν) is not a horizontal strip where ν = m * λ. In this case, we must have rs(η) R + = rs(µ) R , with R the negatively modified row of m and R + the positively modified row of x. Let b be the leftmost cell in ∂η in row R + , and let c be the leftmost cell in ∂λ in row R. We claim that b and c lie in the same column. Since the hook-length of c in λ is k by definition of moves, we have from rs(µ) R = rs(η) R + that the hook-length of b in η is k − 1 or k. In the case that it is equal to k − 1 we easily see that b and c lie in the same column. In the other case, the claim follows from Lemma 101 (3). The rest of the proof is then exactly as in the proof of Proposition 141. Proof of Proposition 148. The existence of an equivalenceM x =xM is guaranteed by Lemma 151. In some cases (when M = ∅ or when (x, M ) interferes) there may be more than one choice for such an equivalence. In such cases, the proof of Lemma 151 provides a particularM . In the other cases M and x define a unique elementary equivalence which uniquely specifiesM . It suffices to show that (S ∪x, m) is compatible and that push(S ∪x, m) = (S,M ) for some stripS and withM specified as above. It will then follow thatx augments S ′ to giveS. Let κ and η be defined by S ∪ x = κ/λ and η =M * κ. We will use the criteria of Propositions 126 and Proposition 144. It is clear that (when it is nonempty)M has the same diagram as M which has the same diagram as m. It is also clear from the commutativity of the top face that ν ⊂ η. It remains to check Condition (1) II) m is a row move and x is a column move. Notice that ∆ cs (S) = ∆ cs (S ∪ x) and the strips S and S ∪ x modify the same columns. Therefore ecs(S ∪ x, m) = ecs(S, m) = cs(η) and the result follows immediately. III) m is a column move and x is a row move. This is similar to case II). IV) m and x are column moves. This case is basically the same as case I), except that special care needs to be taken when there is special interference. Suppose there is special interference in (S ∪ x, m) but none in (S, m). In this case, if m ⊂ S, All the other cases are as in case I). Pushouts of equivalent paths are equivalent The goal of this section is to prove Proposition 146. By Proposition 147 it suffices to show that there exist pushout sequences starting from (S, p) and (S, p ′ ) respectively, such that the resulting final pairs (S, q) and (S ′ , q ′ ), satisfyS =S ′ and q ≡ q ′ . By Proposition 145 we may assume that both pushout sequences start by maximizing the strip S. We may therefore assume that S is already maximal. 
Since equivalences in the k-shape poset are generated by elementary equivalences, we may assume by induction on the length of the paths, that p =ñm ≡mn = p ′ is an elementary equivalence starting at λ. To summarize, it suffices to show that given the elementary equivalenceñm ≡ mn starting at λ and a λ-addable maximal strip S, there exist pushout sequences from (S,ñm) and (S,mn), producing (S,Ñ M ) and (S ′ ,M N ) respectively, such thatS =S ′ andÑ M ≡M N . Since S is maximal, (S, m) and (S, n) are compatible. Let push(S, m) = (S m , M ) and push(S, n) = (S n , N ). This furnishes the three faces touching the vertex λ in the cube pictured in (76), and all vertices except ω. Lemma 153. Suppose (m, n) is an interfering pair of row (resp. column) moves from λ which is (either lower-or upper-) perfectible by adding the set of cells mn per . Suppose S = µ/λ is a maximal strip. Then for each string s ∈ mn per , we have s ∩ S = ∅ or s ⊂ S. Proof. Let s be a string of mn per and let x, y ∈ s be contiguous cells with x ∈ S while y ∈ S. Suppose x is above y. We may assume that y is µ-addable, for otherwise we may shift left or down to another string of mn per . Then the column of x is a modified column of S and so y is a lower augmentable corner of S, a contradiction. Suppose x is below y. Then the row of x is a modified row of S and so y is an upper augmentable corner of S, again a contradiction. Lemma 154. Suppose M and N interfere. Then so do m and n. Proof. We first suppose that M and N are row moves and we assume without loss of generality that M is above N . Let c N be the rightmost negatively modified column of N , so that c + N is the leftmost positively modified column of M . Since M and N interfere we have cs(µ) cN = cs(µ) c + N + 1. Now let c n be the rightmost negatively modified column of n, and let c m be the leftmost positively modified column of M . Since strings of M and N are translates of those of m and n we have cs(µ) cN = cs(λ) cn and cs(µ) c + N = cs(λ) cm . But then cs(λ) cn = cs(λ) cm + 1 so m and n interfere. When M and N are column moves, the proof is similar. Otherwise we may assume that M and N intersect. By the definition of row equivalence we may assume that there exist strings s and t of M and N respectively such that s continues above t while t continues below s. Suppose the string s (resp. t) belongs to m comp (resp. n comp ) and the string t (resp. s) comes from a string of n (resp. of m) that was pushed above. Then we obtain the contradiction that S m (resp. S n ) is not a horizontal strip. Suppose the string s (resp. t) belongs to m comp (resp. n comp ) and the string t (resp. s) is a string of n (resp. m). By Proposition 125 m comp (resp. n comp ) lies on the rows of the last string of m (resp. n), yielding the contradiction that m and n already intersected and did not satisfy an elementary equivalence. Suppose the string s (resp. t) belongs to m (resp. n) and the string t (resp. s) was pushed above. This is a contradiction since m, n, and S are all λ-addable. In all other cases, one deduces the contradiction that m and n meet but do not satisfy an elementary equivalence. II) m is a row move and n is a column move. It suffices to check that M and N are reasonable and non-contiguous. Suppose M and N are not reasonable. Let s and t be strings of M and N respectively that are not reasonable. Suppose the strings s and t belong to m comp and n comp respectively. Let a ∈ s∩t. By Proposition 141, a ∈ s = n comp lies atop a cell b of n. In particular b ∈ λ. 
Since a ∈ M and M is a λ ∪ S-addable horizontal strip, b ∈ λ ∪ S, that is, b ∈ S. Then we have the contradiction that either S m is not a horizontal strip or m and n already intersected and did not satisfy an elementary equivalence. Suppose the strings s and t come from strings of m and n that have been pushed up and right respectively. Then we have the contradiction that either S n is not a horizontal strip or m and n already intersected and did not satisfy an elementary equivalence. Suppose the string s belongs to m comp and the string t is a string of n. By Proposition 125 m comp lies on the rows of the last string of m. This leads to the contradiction that m and n already intersected and did not satisfy an elementary equivalence. All the other cases can easily be ruled out. Now suppose M and N are contiguous. Suppose M is above N . Let x and y be cells of M and N respectively that are contiguous. Suppose y ∈ n comp . By Proposition 141 it follows that the cell y − just below y is in n and row(y − ) is a modified row of n. This implies that the cell below x does not belong to λ and thus needs to belong to S. Therefore x cannot be part of m comp since otherwise S m would not be a horizontal strip by Lemma 127. So the string that contains x was pushed up during the pushout. But then we have the contradiction that m and n are contiguous. Suppose that y belongs to a string of n that was pushed right during the pushout. In this case the row of y is not a modified row of S and we have that the cell x − immediately to the left of x is also in S, but x − / ∈ m for otherwise m and n would already be contiguous. By Proposition 125 it follows that x ∈ m comp . Since x − ∈ S it follows that x was pushed up during push(S, m). c = col(x) is not a modified column of S and we get that the cell below y is also in S, which yields the contradiction cs(λ) c − = cs(λ) c . Finally, suppose that y belongs to n. Since m and n are not contiguous x does not belong to m. If x ∈ m comp then by Proposition 125 there is a cell z in its row that belongs to m. But then a hook-length analysis shows that the column of z cannot be a modified column of m, a contradiction. So x belongs to a string of M that was pushed up during the pushout. Hence the column of x is not a modified column of S and there is a cell of S ∩ n below y that is contiguous with the cell below x, contradicting the assumption that m and n are not contiguous. The case in which M is below N , is similar. III) m and n are column moves. The proof is similar to that of I) (using the fact that the perfection of a column move lies on the same columns as its final string). 8.2. Commuting cube (non-degenerate case). Suppose that m, n,m,ñ, M and N are non-empty. Then the following cube commutes so that the two horizontal faces are elementary equivalences and the four vertical faces are pushouts. The three faces touching λ are assumed to be given. By Lemma 155, the top face defines an elementary equivalence. Since M and N are non-empty ω is determined uniquely. We will use Proposition 126 (or Proposition 144) to show that there exists aS such that push(S m ,ñ) = (S,Ñ ) and push(S n ,m) = (S,M ). It is obvious that conditions (2) holds since by definition of pushouts and equivalences, M and N are moves whose strings have the same diagrams respectively as m and n and thusM andÑ are moves whose strings have the same diagram respectively asm andñ (M and N interfere in this case if and only if m and n interfere). 
We will now see that condition (3) also holds, that is that θ ⊂ ω. Suppose m and n are row moves. Strings ofm that are strings of m are obviously contained in ω. Suppose s + is a string ofm that corresponds to a string s of m that has been pushed up (let's say it intersected with string t of n) . Then s ⊂ t ∈ n and we either have t ⊂ S or t ∩ S = ∅. In the former case, s + ∈ M and thus s + ⊂ ω. In the latter case, s ∈ M ∩ N with t ∈ N and thus s + ∈M , which gives s + ⊂ ω. Finally, suppose that s is a string in the perfection mn per of m and n. By Lemma 153, we either have s ⊂ S or s ∩ S = ∅. In the former case, obviously s ⊂ ω. In the latter case, since M and N are not empty by hypothesis, we have that (M, N ) interferes. In every case s will belong to M N per and will thus belong to ω. If m and n are column moves, or if m is a row move and n is a column move, then condition (3) is shown in a similar way. To check that the vertical faces are pushouts, it remains to verify condition (1) of Proposition 126 (or Proposition 144). We will use the fact (see the proof of Lemma 155) that if (m, n) is interfering and lower (resp. upper) perfectible and (M, N ) is interfering, then (M, N ) is lower (resp. upper) perfectible. I) m and n are row moves. Let m comp and n comp be the sets of cells added in the pushout perfections of (S, m) and (S, n) respectively; they are empty in the noninterfering case. Also let mn per denote the set of cells defining the lower or upper perfection of (m, n), if it exists. We will repeatedly use the fact (Proposition 125) that m comp (resp. n comp ) all lie on the same row as the last string of m (resp. n). Main claim: ecs(S m ,ñ) = cs(ω) = ecs(S n ,m) To prove (78), it suffices to make a calculation with modified columns. We shall be dividing our study into four cases according to the type of row equivalence: m and n do not interact, m and n are matched below, m and n are matched above, and m and n interfere. 8.2.1. m and n do not interact. By Lemma 154, M and N do not interfere. Furthermore, since m comp (resp. n comp ) all lie on the same row as the last string of m (resp. n), we have that m comp does not interact with n, n comp does not interact with m, and m comp does not interact with n comp . In particular, M and N do not interact. It is thus not difficult to see that we obtain ecs(S m ,ñ) = cs(ω) = ecs(S n ,m) = cs(λ) + ∆ cs (S) + ∆ cs (m ′ ) + ∆ cs (n ′ ) where m ′ and n ′ originate respectively from (S, m) and (S, n). 8.2.2. m and n are matched below, with m continuing above n. By Lemma 66, n has rank greater than m. By maximality of S, we see that m comp and n comp cannot intersect. Thus, by Lemma 66 applied to M and N , positively modified columns of m comp are positively modified columns of n. We conclude that there are three interesting types of modified columns of S (the other types interact in a manner that was covered in 8.2.1): (a) those which are positively modified columns of both m and n, (b) those which are immediately to the right of negatively modified columns of m and cause interference, and (c) those which are immediately to the right of negatively modified columns of n and cause interference. Each such column will affect the vectors in (78) at three different indices: (+) a positively modified column of m ∪ m comp or n ∪ n comp , (−m) a negatively modified column of m ∪ m comp , or (−n) a negatively modified column of n ∪ n comp . 
For each of the cases (a), (b) and (c), we draw a cube whose edges give the entries (−m), (−n), (+) associated to the corresponding move or strip in the cube (77). The commutation of the three cubes implies that (78) is satisfied. We will write1 to denote a negatively modified column. II) m is a row move and n is a column move. We need to show that ecs(S, m) = cs(ω) = ecs(S n ,m) and ers(S, n) = rs(ω) = ers(S m ,ñ). We will prove the first equality, and the second one will follow from the same principles. We need to show that per m (cs(λ) + ∆ cs (S) + ∆ cs (m ′ )) = perm(cs(κ) + ∆ cs (S n ) + ∆ cs (m ′ )) . III) m and n are column moves. The proof is basically the same as when m and n are row moves. One obtains the commuting cube (79) except when (m, n) interferes and the perfection mn per is made of strings that are translates of those of m, in which case one obtains the commuting cube (80). We consider the case that m and n are row moves as the column move case is similar. The exceptionsl situation can occur in two ways: either n is above m and the lower perfection exists, or m is above n and the upper perfection exists. Suppose first that n is above m. Let mn per =n I ∪n F wheren I are the strings of mn per whose positively modified columns (resp. rows) are modified columns (resp. rows) of S. Suppose n = n I ∪ n F where n I are the strings of n whose prolongation in mn per is given byn I . Suppose (S, n) interferes with pushout perfection given by n comp (if there is no interference then the situation is simpler). Then (S n ,m) also interferes; letn comp be the cells which define the pushout perfection. Finally, (S \ m,ñ) also interferes with pushout perfection given by n comp ∪n comp . Then N = n + I ∪ n + F ∪ n comp ,Ñ = n + F ∪n + F ∪ n comp ∪n comp ,M =n + F ∪n comp , and x = n + I are such thatM N ≡ xÑ is an elementary equivalence. The last vertical face is then such that x is an augmentation move. Note thatn I may be empty in which case x is empty and we have an ordinary cube but withM = ∅. Suppose m is above n. Let n = n I ∪ n F where n I consists of the strings of n whose positively modified columns are positively modified columns of S. Let mn per =n I ∪n F wheren I consists of the strings of mn per that extend the strings of n I above. Then N = n + F ∪ n comp where n comp is defined as in push(S, n) and N =n + F ∪ n + F ∪ m + ∪ n comp since S \ m andñ interfere and its completion is m + ∪ n comp . WithM =n F ∪ m + and x = ∅ we are in the situation of the commuting cube (80). 8.4. Commuting cube (degenerate case m = ∅). Suppose that m = ∅ and that mn =ñ∅ is an elementary equivalence. This situation can be seen as a degenerate where vertical faces are either pushouts or augmentations moves, and whereM N ≡ xÑ is an elementary equivalence. The case where x is non-empty will occur when n = n I ∪ n F andm =n I ∪n F is such that the positively modified columns (resp. rows) ofn I are positively modified columns (resp. rows) of S. Then N = n + I ∪ n + F ,M =n + F ,Ñ = n + F ∪n + F and x = n + I are such thatM N ≡ xÑ is an elementary equivalence. The last vertical face is then such that x is an augmentation move. 8.5. Commuting cube (degenerate casem = ∅). This case is similar to the m = ∅ case. Another way to see this case is to consider that if we haveñm ≡ ∅n then we can use ∅n ≡ n∅ (which leads trivially to a commuting cube) to fall back on the already treated n = ∅ case (which is equal to m = ∅ by symmetry). 
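The verifications in the preceding sections repeatedly reduce to three elementary tests, applied to rs and cs: whether an integer vector is a partition, whether a skew shape is a horizontal strip, and whether it is a vertical strip. The following is a minimal Python sketch of these tests, assuming the hook-length and k-boundary conventions fixed earlier in the paper; all function names and the (row, column) cell encoding are ours.

def conjugate(lam):
    # Conjugate partition of lam, given as a weakly decreasing tuple.
    return tuple(sum(1 for part in lam if part > j) for j in range(lam[0])) if lam else ()

def hook(lam, i, j):
    # Hook length of the cell in row i, column j (0-indexed) of lam.
    return (lam[i] - j) + (conjugate(lam)[j] - i) - 1

def k_boundary(lam, k):
    # Cells of lam of hook length at most k (the k-boundary, written here as the boundary of lam).
    return {(i, j) for i in range(len(lam)) for j in range(lam[i]) if hook(lam, i, j) <= k}

def rs(lam, k):
    # Row shape of the k-boundary: number of boundary cells in each row.
    bd = k_boundary(lam, k)
    return tuple(sum(1 for (i, j) in bd if i == r) for r in range(len(lam)))

def cs(lam, k):
    # Column shape of the k-boundary: number of boundary cells in each column.
    bd = k_boundary(lam, k)
    return tuple(sum(1 for (i, j) in bd if j == c) for c in range(lam[0] if lam else 0))

def is_partition(v):
    # True if v is weakly decreasing with nonnegative entries.
    return all(a >= b for a, b in zip(v, v[1:])) and all(a >= 0 for a in v)

def is_horizontal_strip(mu, lam):
    # True if mu/lam is a horizontal strip: lam_i <= mu_i and mu_{i+1} <= lam_i.
    n = max(len(mu), len(lam))
    mu = list(mu) + [0] * (n - len(mu))
    lam = list(lam) + [0] * (n - len(lam))
    return all(lam[i] <= mu[i] for i in range(n)) and all(mu[i + 1] <= lam[i] for i in range(n - 1))

def is_vertical_strip(mu, lam):
    # True if mu/lam is a vertical strip, i.e. the conjugates form a horizontal strip.
    return is_horizontal_strip(conjugate(tuple(mu)), conjugate(tuple(lam)))

def is_k_shape(lam, k):
    # True if both rs(lam) and cs(lam) are partitions.
    return is_partition(rs(lam, k)) and is_partition(cs(lam, k))

These are exactly the tests that enter the definition of strips (µ/λ, rs(µ)/rs(λ) and cs(µ)/cs(λ)) and the interference criteria used above.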
Pullbacks Given a strip S = µ/λ and a class of paths [p] in the k-shape poset from λ to ν, the pushout algorithm gives rise to a maximal stripS = η/ν and a unique class of paths [q] in the k-shape poset from µ to some η: Our goal is to show that this process is invertible when the strip S is reverse maximal. That is, given the maximal stripS = η/ν and the class of paths [q] in the k-shape poset from µ to η, we will describe a pullback algorithm that gives back the maximal strip S = µ/λ and the class of paths [p] in the k-shape poset from λ to ν. To indicate that we are in the pullback situation, the direction of the arrows will be reversed λ ν µ η The situation in the reverse case is quite similar to the situation we have encountered so far (which we will refer to as the forward case). We will establish a dictionary that allows to translate between the forward and reverse cases. Then only the main results will be stated. Equivalences in the reverse case If m = µ/λ is a move from λ then we say that m is a move to µ. We write m#µ = λ. We will use the same notation for the string decomposition m = s 1 ∪ · · · ∪ s ℓ of the forward case also in the reverse situation. That is, string s 1 is the leftmost and string s ℓ is the rightmost. The following dictionary translates between the forward and reverse situations: λ ←→ µ move m from λ ←→ move m to µ µ = m * λ ←→ λ = m#µ leftmost (rightmost) string of m ←→ rightmost (leftmost) string of m continues below (resp. above) ←→ continues below (resp. above) column to the right (resp. left) ←→ column to the left (resp. right) row above (resp. below) ←→ row below (resp. above) shifting to the right (resp. up) ←→ shifting to the left (resp. down) Notation 156. For two sets of cells X and Y , let ← X (Y ) (resp. ↓ X (Y )) denote the result of shifting to the left (resp. down), each row (resp. column) of Y by the number of cells of X in that row (resp. column). 10.1. Reverse mixed elementary equivalence. Letm andM be respectively a row move and a column move to γ. The contiguity of two moves is defined as in the forward case (that is, whether two disjoint strings can be joined to form one string). Definition 157. A reverse mixed elementary equivalence is a relation of the form (42) satisfying (43) arising from a row movem and column moveM to some γ ∈ Π k , which has one of the following forms: 10.2. Reverse row elementary equivalence. Letm andM be row moves to γ. We say that (m,M ) is interfering ifm andM do not intersect and γ \ (m ∪M ) is not a k-shape (or to be more precise cs(γ \ (m ∪M )) is not a partition). Let m = s 1 ∪ · · · ∪ s r andM = s ′ 1 ∪ · · · ∪ s ′ r ′ interfere. We immediately have Lemma 160. Suppose (m,M ) is interfering and the top cell ofm is above the top cell ofM . Then (1) c s ′ 1 ,u = c + sr ,d . In particular,m andM are non-degenerate. (2) Every cell of m is above every cell of M . Supposem is a move of rank r and length ℓ andM is a move of rank r ′ and length ℓ ′ , both to γ. Suppose also that (m,M ) is interfering and that the top cell ofm is above the top cell ofM . A lower perfection (resp. upper perfection is a k-shape of the form γ \ (m ∪M ∪ m per ) (resp. γ \ (m ∪M ∪ M per )) where m per (resp. M per ) is a (γ \ (m ∪M ))-removable skew shape such thatm ∪ m per (resp. M ∪ M per ) is a row move toM #λ (resp.m#λ) of rank r (resp. r ′ ) and length ℓ + ℓ ′ andM ∪ m per (resp.m ∪ M per ) is a row move tom#γ (resp.M #γ) of rank r + r ′ and length ℓ ′ (resp. ℓ). If (m,M ) is interfering then it is lower (resp. 
upper ) perfectible if it admits a lower (resp. upper) perfection. Definition 162. A reverse row elementary equivalence is a relation of the form (42) satisfying (43) arising from two row movesm andM to some k-shape γ, which has one of the following forms: (1)m andM do not intersect andm andM do not interfere. Then m =m and M =M . In case (2), (4) and (5) Definition 166. Let µ ∈ Π be fixed. Let Strip µ ⊂ Π be the induced subgraph of ν ∈ Π such that µ/ν is a strip inside µ. Ifm is a move such that λ =m#ν in Strip µ we shall say thatm is a reverse µ-augmentation move from the strip µ/ν to the strip µ/λ. A reverse augmentation of a stripS = µ/λ is a strip reachable fromS via a reverse µ-augmentation path. A stripS = µ/λ is reverse maximal if it admits no reverse µ-augmentation move. Diagrammatically, a reverse augmentation move is such that the following diagram commutes for strips S andS inside µ. These definitions depend on a fixed µ ∈ Π, which shall usually be suppressed in the notation. Later we shall consider reverse augmentations of a given stripS, meaning reverse µ-augmentations whereS = µ/λ. Proposition 167. All reverse augmentation column moves of a stripS = µ/λ have rank 1. LetS = µ/λ be a strip and a be a removable corner of λ. We will call a a (1) lower reverse augmentable corner ofS if removing a from λ adds a box to ∂λ in a modified column c ofS. (2) upper reverse augmentable corner ofS if a does not lie below a box inS and removing a from λ adds a box to ∂λ in a modified row r ofS. A λ-removable string s of row-type (resp. column-type) can be reverse extended below (resp. above) if there is a λ-removable corner contiguous and below (resp. above) the lowest (resp. highest) cell of s. Definition 168. A reverse completion row move is one in which all strings start in the same row. It is maximal if the first string cannot be reverse extended below. A reverse quasi-completion column move is a reverse column augmentation move from a stripS that contains no lower reverse augmentable corner. A reverse completion column move is a reverse quasi-completion move from a stripS that contains no upper reverse augmentable corner below its unique (by Proposition 167) string. A reverse completion column move or a reverse quasi-completion column move is maximal if its string cannot be reverse extended above. A reverse completion move is a reverse completion row/column move. (2) There is one equivalence class of paths in Strip µ fromS toS ′ . (3) The unique equivalence class of paths in Strip µ fromS toS ′ has a representative consisting entirely of maximal reverse completion moves. We say that (S,m) is non-interfering if cs(η) − ∆ cs (S) − ∆ cs (m ′ ) is a partition and is interfering otherwise. Proposition 174. Let (S,m) be a reasonable, non-contiguous and non-interfering final pair. Then µ/λ is a strip. 12.5. Row-type pullback: interfering case. Assume that (S,m) is reasonable, non-contiguous, and interfering withm a row move. Say that (S,m) is pullbackperfectible if there is a set of cellsm comp inside (m − )#ν so that if λ = ((m − )#ν) \ m comp then µ/λ is a strip and ν/λ is a row move to ν with the same initial string asm − . Proposition 175. Suppose (S,m) is a reasonable, non-contiguous, interfering final pair such thatm is a row move andS is a reverse maximal strip. Then (S,m) is pullback-perfectible. Furthermore the strings ofm comp lie on exactly the same rows as the initial string ofm. Corollary 176. Suppose (S,m) is a final pair such thatS is a reverse maximal strip andm is a row move. 
Then (S,m) is compatible. 12.6. Column-type pullback: normality. Suppose that (S,m) is a reasonable final pair withm a column move. If s ⊂m is contained insideS, we say thatS matches s above if r s,u is a modified row ofS. Otherwise we say thatS continues above s. Let s ⊂m be the final string of the movem. We say that (S,m) is normal if it is reasonable, and, in the case that s is continued above, (a) none of the modified rows ofS contains boxes of s and (b) the negatively modified row of s is not a modified row of S. Proposition 177. LetS be a reverse maximal strip andm a column move. Then (S,m) is normal and non-contiguous. 12.8. Column-type pullback: non-interfering case. Assume that (S,m) is normal, non-contiguous and non-interfering withm a column move. In this case we declare (S,m) to be compatible.m − is a move to ν and we define λ =m − #ν. The pullback is defined by (84). 12.9. Column-type pullback: interfering case. Assume that (S,m) is normal, non-contiguous and interfering withm a column move. Say that (S,m) is pullbackperfectible if there is a set of cellsm comp inside (m − )#ν so that if λ = ((m − )#ν) \ m comp then µ/λ is a strip and ν/λ is a row move to ν with the same initial string asm − . In the case that (S,m) is pullback-perfectible, we declare that (S,m) is compatible and use the above λ to define the pullback via (84). Proposition 179. Suppose (S,m) is reasonable, normal, non-contiguous and interfering withm a column move andS a reverse maximal strip. Then (S,m) is pullback-perfectible. Furthermorem comp consists of a single string that lies on the same columns as the initial string ofm. Corollary 180. Suppose (S,m) is a final pair withS a reverse maximal strip and m a column move. Then (S,m) is compatible. Pullbacks sequences are all equivalent Given a stripS = η/ν and a path q in the k-shape poset from µ to η, one can do a sequence of pullbacks and reverse augmentations to obtain a reverse maximal strip S = µ/λ and a path p in the k-shape poset from λ to ν: Such a process, which we will call a pullback sequence, can always be done since we have seen that a reverse maximal strip is compatible with any move. As in the forward case, it does not matter which pullout sequence is used since they give rise to equivalent paths (and therefore to a unique reverse maximal strip S). Proposition 181. LetS = η/ν be strip and q a path in the k-shape poset from µ to η, and suppose that a given pullback sequence gives rise to a reverse maximal strip S = µ/λ and a path p in the k-shape poset from λ to ν. Then any other given pullback sequence gives rise to the reverse maximal strip S = µ/λ and a pathp equivalent to p. 14. Pullbacks of equivalent paths are equivalent The next proposition tells us that the pullbacks of equivalent paths produce equivalent paths. Proposition 182. LetS = η/ν be a strip and and let q and q ′ be equivalent paths in the k-shape poset from µ to η. If the pullback sequence associated toS and q gives rise to a reverse maximal strip S = µ/λ and a path p in the k-shape poset from λ to ν, then the pullback sequence associated toS and q ′ gives rise to the same reverse maximal strip S = µ/λ and a path p ′ equivalent to p. Propositions 182 and Proposition 181 provide an algorithm, which we will call the pullback algorithm, that, given a stripS = η/ν and a class of paths [q] in the k-shape poset from µ to η, gives rise to a reverse maximal strip S = µ/λ and a unique class of paths [p] in the k-shape poset from λ to ν: Proof. 
The non-empty cases follow from the alternative descriptions of pushouts and its analogue for pullbacks via expected row and column shape. The empty cases are immediate. We now prove Theorem 75. As already mentioned after the statement of Theorem 75, it suffices to prove the case where S and T are single strips. That is, we need to show that given a reverse maximal strip S = µ/λ and a class of paths [p] from λ to ν, the pushout algorithm gives rise to a maximal stripS = η/ν and the class of paths [q] from µ to a η: if and only if given the maximal stripS = η/ν and the class of paths [q] from µ to η, the pullback algorithm gives rise to the reverse maximal strip S = µ/λ and the class of paths [p] from λ to ν: Suppose we are given a reverse maximal strip S = µ/λ and a class of paths [p], and suppose that the pushout algorithm leads to the maximal stripS = η/ν and the class of paths [q]. As we have seen, this implies that any pushout sequence leads to the maximal stripS = η/ν and the class of paths [q]. By Proposition 183, every pushout sequence can be reverted to give a pullback sequence from the maximal stripS = η/ν and the class of paths [q]. This ensures that there is at least one pullback sequence from the maximal stripS = η/ν and the class of paths [q] that leads to the reverse maximal strip S = µ/λ and the class of paths [p]. As we have seen, this implies that the pullback algorithm always leads to the reverse maximal strip S = µ/λ and the class of paths [p]. Therefore the pullback of a pushout gives back the initial pair. We can prove that the pushout of a pullback gives back the final pair in a similar way. Note that for the bijection to work, we need S to be reverse maximal andS to be maximal. This is because the pushout algorithm produces a maximal strip while the pullback algorithm yields a reverse maximal strip.
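To make the shift operators of Notation 156 (used throughout the pullback discussion above) concrete, here is a minimal sketch, not part of the paper, that models a set of cells as (row, column) integer pairs. The coordinate orientation (rows increasing upward, columns increasing to the right) and the representation of cells as pairs are assumptions made purely for illustration.

```python
from collections import Counter

def shift_left(X, Y):
    """Return <-_X(Y): shift each row of Y to the left by the number of cells of X in that row."""
    per_row = Counter(r for (r, _) in X)          # how many cells of X lie in each row
    return {(r, c - per_row[r]) for (r, c) in Y}  # move every cell of Y left by that amount

def shift_down(X, Y):
    """Return down_X(Y): shift each column of Y down by the number of cells of X in that column."""
    per_col = Counter(c for (_, c) in X)          # how many cells of X lie in each column
    return {(r - per_col[c], c) for (r, c) in Y}  # move every cell of Y down by that amount

# Tiny usage example: X occupies two cells of row 0, so row 0 of Y is shifted left by 2.
X = {(0, 3), (0, 4), (1, 2)}
Y = {(0, 5), (1, 5), (2, 5)}
print(shift_left(X, Y))   # {(0, 3), (1, 4), (2, 5)}
print(shift_down(X, Y))   # no cells of X lie in column 5, so Y is unchanged
```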
The MOSDEF-LRIS Survey: The Connection Between Massive Stars and Ionized Gas in Individual Galaxies at $z\sim2$

We present constraints on the massive star and ionized gas properties for a sample of 62 star-forming galaxies at $z\sim2.3$. Using BPASS stellar population models, we fit the rest-UV spectra of galaxies in our sample to estimate age and stellar metallicity which, in turn, determine the ionizing spectrum. In addition to the median properties of well-defined subsets of our sample, we derive the ages and stellar metallicities for 30 high-SNR individual galaxies -- the largest sample of individual galaxies at high redshift with such measurements. Most galaxies in this high-SNR subsample have stellar metallicities of $0.001<Z_*<0.004$. We then use Cloudy+BPASS photoionization models to match observed rest-optical line ratios and infer nebular properties. Our high-SNR subsample is characterized by a median ionization parameter and oxygen abundance, respectively, of $\log(U)_{\textrm{med}}=-2.98\pm0.25$ and $12+\log(\textrm{O/H})_{\textrm{med}}=8.48\pm0.11$. Accordingly, we find that all galaxies in our sample show evidence for $\alpha$-enhancement. In addition, based on inferred $\log(U)$ and $12+\log(\textrm{O/H})$ values, we find that the local relationship between ionization parameter and metallicity applies at $z\sim2$. Finally, we find that the high-redshift galaxies most offset from the local excitation sequence in the BPT diagram are the most $\alpha$-enhanced. This trend suggests that $\alpha$-enhancement resulting in a harder ionizing spectrum at fixed oxygen abundance is a significant driver of the high-redshift galaxy offset on the BPT diagram relative to local systems. The ubiquity of $\alpha$-enhancement among $z\sim2.3$ star-forming galaxies indicates important differences between high-redshift and local galaxies that must be accounted for in order to derive physical properties at high redshift.

INTRODUCTION

Studies of large numbers of high-redshift galaxies in the rest-optical have provided a wealth of information about the physical conditions of their interstellar medium (ISM). In the local universe, measurements of optical emission lines reveal that star-forming galaxies follow a tight sequence of simultaneously increasing [NII]λ6584/Hα and decreasing [OIII]λ5007/Hβ (e.g., Veilleux & Osterbrock 1987; Kauffmann et al. 2003). The shape and location of the sequence in this "BPT" diagram (Baldwin et al. 1981) reflects the physical conditions of ionized gas in the ISM of galaxies (e.g., chemical abundances, density, ionization mechanism, etc.). Analogous observations of galaxies at high redshift expose a similar sequence, but offset toward higher [OIII]λ5007/Hβ and [NII]λ6584/Hα on the BPT diagram relative to local systems.

Figure 1. Left: [NII]λ6584/Hα BPT diagram. The grey histogram shows the location of SDSS galaxies (grey; Abazajian et al. 2009). Blue and red points indicate galaxies included, respectively, in the high and low composite spectra described in Topping et al. (2019). For reference, the 'maximum starburst' model of Kewley et al. (2001) (dotted curve) and the star-formation/AGN boundary from Kauffmann et al. (2003) (solid curve) are plotted. Right: SFR calculated from the dust-corrected Balmer lines vs. M* for all objects with LRIS redshifts at 2.0 ≤ z ≤ 2.7. As in the left panel, galaxies comprising either the high or low stack are colored blue and red respectively. The dashed line shows the SFR-M* relation of z ∼ 2.3 MOSDEF galaxies (Sanders et al.
2018). Many properties of galaxies at high redshift may be responsible for this observed difference on the BPT diagram, including higher ionization parameters at fixed nebular metallicity, harder ionizing spectra at fixed nebular metallicity, higher densities, variations in gas-phase abundance patterns, and enhanced contributions from AGNs and shocks at high redshift (see e.g., Kewley et al. 2013, for a review). Initial results from the MOSFIRE Deep Evolution Field (MOSDEF; Kriek et al. 2015) survey, suggested that the observed offset is primarily caused by an enhanced N/O ratio at fixed oxygen abundance, in addition to higher physical densities in high-redshift galaxies compared to local systems (Masters et al. 2014;Shapley et al. 2015;Sanders et al. 2016b). Results from the Keck Baryonic Structure Survey (KBSS; Steidel et al. 2014) instead suggested that the offset is driven primarily by a harder intrinsic ionizing spectrum at fixed oxygen abundance (Steidel et al. 2016;Strom et al. 2017). Updated results from the MOSDEF survey support the explanation of a harder ionizing spectrum at fixed oxygen abundance (Sanders et al. 2020;Shapley et al. 2019;Sanders et al. 2019, Runco et al. in prep.). Furthermore, a harder stellar ionizing spectrum at fixed oxygen abundance (i.e., nebular metallicity, which traces α/H) arises naturally in the case of α enhancement of the most massive stars, which produce the bulk of the ionizing radiation in star-forming galaxies. For a given oxygen abundance, such α-enhanced stars will have lower Fe/H than their local counterparts with a solar abundance pattern. A lower Fe/H corresponds to a harder ionizing spectrum. The α-enhancement described above is expected in high-redshift galaxies due to their young ages and rising star-formation histories (e.g., based on SED-fit ages in previous papers, as well as preference for rising SFHs in high-redshift galaxies Papovich et al. 2011;Reddy et al. 2012) resulting in enrichment primarily from Type II supernovae (Steidel et al. 2016;Matthee & Schaye 2018). The rest-optical emission lines observed in high-redshift star-forming galaxies are strongly affected by the intrinsic ionizing spectrum primarily produced by the most massive stars. Several properties of the massive stars modulate the production of ionizing photons, including stellar metallicity, IMF, and stellar binarity (Topping & Shull 2015;Steidel et al. 2016;Stanway et al. 2016;Stanway & Eldridge 2018). In addition to controlling the ionizing spectrum, the formation of massive stars is regulated by gas accretion onto galaxies, and in turn regulates the resulting chemical enrichment of the ISM through the deposition of metals by supernova explosions and stellar winds, and the ejection of metals through star-formation-driven galaxy-scale outflows. Strom et al. (2018) investigated the relationship between properties of the ionized ISM and factors contributing to the excitation state (including stellar metallicity) for a sample of z ∼ 2 star-forming galaxies, finding that typically, z ∼ 2 galaxies have lower stellar metallicity compared to their nebular metallicity. This investigation relied purely on strong rest-optical emission lines. However, the emission-line spectrum depends on a host of physical properties in addition to the ionizing spectral shape and is thus a highly indirect tracer of the intrinsic ionizing spectrum. 
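For readers who want to reproduce the demarcation curves referenced above (the 'maximum starburst' line of Kewley et al. 2001 and the star-formation/AGN boundary of Kauffmann et al. 2003 shown in Figure 1), a minimal sketch using the commonly quoted forms of those curves is given below. The coefficients are the standard published values as usually cited, and should be checked against the original papers; the example fluxes are placeholders.

```python
import numpy as np

def kewley01_max_starburst(log_n2ha):
    """Commonly quoted Kewley et al. (2001) 'maximum starburst' line; valid for log_n2ha < 0.47."""
    return 0.61 / (log_n2ha - 0.47) + 1.19

def kauffmann03_boundary(log_n2ha):
    """Commonly quoted Kauffmann et al. (2003) star-formation/AGN boundary; valid for log_n2ha < 0.05."""
    return 0.61 / (log_n2ha - 0.05) + 1.3

# Classify a galaxy from its measured line ratios (the ratios here are placeholder values).
log_n2ha = np.log10(0.15)   # log([NII]6584 / Halpha)
log_o3hb = np.log10(2.5)    # log([OIII]5007 / Hbeta)

above_kauffmann = log_o3hb > kauffmann03_boundary(log_n2ha)
above_kewley = log_o3hb > kewley01_max_starburst(log_n2ha)
print(above_kauffmann, above_kewley)
```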
In particular, breaking the degeneracy between the ionization parameter and the ionizing spectral shape is challenging when only the strongest rest-optical emission-line ratios are available. Sanders et al. (2020) used a small sample of star-forming galaxies at z ∼ 1.5−3.5 with direct-method nebular metallicities to connect factors affecting the ionizing spectrum with properties of the ISM. The tight constraints on stellar metallicity facilitate breaking the degeneracy between ionization parameter and hardness of the ionizing spectrum. Another avenue for breaking this degeneracy is directly constraining the ionizing spectrum from massive stars using rest-UV spectra of these high-redshift galaxies. While directly observing the ionizing spectrum within high-redshift galaxies is extremely challenging, information about the massive star population can be determined based on the non-ionizing rest-UV stellar spectrum that is far easier to observe. Specifically, features including the Civλλ1548, 1550 and Heiiλ1640 stellar wind lines, and a multitude of stellar photospheric features are sensitive to the properties of massive stars (Leitherer et al. 2001;Crowther et al. 2006;Rix et al. 2004). For example, Halliday et al. (2008) utilized Feiii absorption lines to measure a sub-solar stellar metallicity for a composite spectrum of z ∼ 2 galaxies. Sommariva et al. (2012) tested the ability of additional photospheric absorption line indicators to accurately determine the stellar metallicity of the massive star population of high-redshift galaxies. Recently, instead of using integrated regions of the rest-UV spectrum, Cullen et al. (2019) instead fit stellar population models to the full rest-UV spectrum of multiple composite spectra to investigate galaxy properties across 2.5 < z < 5.0 and a stellar mass range of 8.5 < log(M * /M ) < 10.2. Crucially, recent studies have concurrently studied the production of the ionizing spectrum with the rest-optical properties of high-redshift galaxies by utilizing combined rest-UV and rest-optical spectroscopy (Steidel et al. 2016;Chisholm et al. 2019;Topping et al. 2019). Steidel et al. (2016) constructed a composite spectrum of 30 z ∼ 2.4 starforming galaxies from KBSS, and found that their rest-UV properties could only be reproduced by stellar population models that include binary stars, have a low stellar metallicity (Z * /Z ∼ 0.1), and a different, higher, nebular metallicity (Z neb /Z ∼ 0.5). By analyzing a single composite rest-UV spectrum, Steidel et al. (2016) only probed average properties of their high-redshift galaxy sample. With single rest-UV and rest-optical composite spectra it is not possible to probe the rest-UV spectral properties as a function of the location in the BPT diagram. In contrast, Chisholm et al. (2019) fit linear combinations of stellar population models to 19 individual galaxy rest-UV spectra at z ∼ 2, and determined light-weighted properties. Using these individual spectra, Chisholm et al. (2019) found using mixed age populations that galaxies in their sample had comparable stellar and nebular metallicities. In Topping et al. (2019) we compared the properties of two composite spectra one of which included galaxies lying along the local sequence and the other including galaxies offset towards high [NII]λ6584/Hα and [OIII]λ5007/Hβ. 
This analysis indicated that galaxies offset from the local sequence had younger ages and lower stellar metallicities on average, resulting in a harder ionizing spectrum in addition to a higher ionization parameter and a higher α-enhancement, all of which contributed to the difference in BPT diagram location. Intriguingly, we found that even high-redshift galaxies coincident with local star-forming sequence on the BPT diagram were α-enhanced (O/Fe ∼ 3 O/Fe ) relative to their local counterparts , suggesting that coincidence in the BPT diagram does not necessarily imply similar ISM conditions. In this paper, we improve the methods presented in Topping et al. (2019) by expanding the stellar population synthesis models to consider more complex star formation histories (SFHs), and including additional rest-optical emission lines to the photoionization modelling. Furthermore, we test the capability of the models to be fit to individual galaxy spectra that have lower SNR than composite spectra, and analyze a sample of ∼ 30 individual galaxies with combined rest-UV and rest-optical spectra. The organization of this paper is as follows: Section 2 describes our observations, data reduction, and methods. Section 3 presents the results of our analysis, and Section 4 provides a summary and discussion of our key results. Throughout this paper we assume a cosmology with Ω m = 0.3, Ω Λ = 0.7, H 0 = 70km s −1 Mpc −1 , and adopt solar abundances from Asplund et al. (2009, i.e., Z = 0.014, 12 + log(O/H) = 8.69). Rest-Optical Spectra and the MOSDEF survey Our analysis utilizes rest-optical spectroscopy of galaxies from the MOSDEF survey (Kriek et al. 2015 Rest-UV Spectra and the MOSDEF-LRIS sample A full description of the rest-UV data collection and reduction procedures will be described in a future work, but we provide a brief description here. We selected a subset of Best-fit stellar metallicity and age for the high (blue) and low (red) for five different star formation histories including a continuous SFH, and four realizations of the delayed-τ model, each depicted by a different shape. In all cases, galaxies in the high stack have younger ages and lower stellar metallicities compared to the low stack. The best-fit age and stellar metallicity increases with increasing τ when considering models with a 'delayed-τ' SFH. MOSDEF galaxies for rest-UV spectroscopic followup using the Low-Resolution Imaging Spectrograph (LRIS; Oke et al. 1995). Target priorities were determined using the following prescription. Highest priority was given to objects from the MOSDEF survey that had detections in all four BPT emission lines (Hβ, [OIII], Hα, [NII]λ6584) with ≥ 3σ. Then, objects were added to the sample with detections in Hβ, [OIII]λ5007, and Hα with ≥ 3σ, and an upper limit in [NII]λ6584. With decreasing priority, the remaining targets were selected by having a spectroscopic redshift measurement from the MOSDEF survey, objects from the MOS-DEF survey without a successfully measured redshift, and objects not targeted as part of the MOSDEF survey, but that were part of the 3D-HST survey catalog (Momcheva et al. 2016) with properties within the MOSDEF survey photometric redshift and apparent magnitude ranges. In total, these targets comprise a sample of 260 galaxies. Of those targets with spectroscopic redshifts from the MOSDEF survey, 32, 162, and 20 were in the redshift ranges 1.40 ≤ z ≤ 1.90, 1.90 ≤ z ≤ 2.65, and 2.95 ≤ z ≤ 3.80 respectively. 
The remaining galaxies, with either a spectroscopic redshift not from MOSDEF, or a photometric redshift, made up 9, 31, and 6 galaxies in the redshift intervals 1.40 ≤ z ≤ 1.90, 1.90 ≤ z ≤ 2.65, and 2.95 ≤ z ≤ 3.80 respectively. For this analysis, we excluded the few objects identified as AGN based on their X-ray and rest-frame near-IR properties. Rest-UV spectra were obtained using Keck/LRIS during ten nights across five observing runs between January 2017 and June 2018. Our target sample totals 260 distinct galaxies on 9 multi-object slit masks with 1 . 2 slits in the COSMOS, AEGIS, GOODS-S, and GOODS-N fields. In order to obtain continuous wavelength coverage from the at-mospheric cut-off at 3100Å up to a median red wavelength limit of ∼7650Å, we observed all slit masks using the d500 dichroic, the 400 lines mm −1 grism blazed at 3400Å on the blue side, and the 600 lines mm −1 grating blazed at 5000Å on the red side. This setup yielded a resolution of R ∼ 800 on the blue side, and a resolution of R ∼ 1300 on the red side. The exposure times ranged from 6-11 hours on different masks, with a median exposure time of ∼ 7.5 hours for the full sample. The data were collected with seeing ranging from 0 . 6 to 1 . 2 with typical values of 0 . 8. We reduced the red-and blue-side data from LRIS using custom iraf, idl, and python scripts. First, we fit polynomials to the edges of each 2D slitlet, and transformed it to be rectilinear. The subsequent steps required slightly different treatment for the red and blue spectral images. We flat fielded each image using twilight sky flats for the blue side, and dome flats for the red side images. Then we cut out each slitlet for each object in every flat-fielded exposure. Following this step, the blue-side slitlets were cleaned of cosmic rays, and background subtracted. These images were registered and median combined to create a stacked twodimensional spectrum. In order to prevent over-estimation of the background due to the target, we measured the trace of each object in the stacked two-dimensional spectrum, and masked it out for a second-pass background subtraction . As the red-side images were more significantly affected by cosmic rays, we reduced the red side using slightly different methods. For the red-side slitlets, we constructed a stacked two-dimensional spectrum by first registering and median combining the images using minmax rejection to remove cosmic rays. We used this stacked image to measure the object traces in each slitlet. We then recomputed the background subtraction in the individual images with the object traces masked out, as the stacked image is too contaminated by sky lines to achieve a good background subtraction. After the second pass background subtraction, the individual red-side slitlets were combined to create the final stacked image. We extracted the 1D spectrum of each object from the red and blue side stacked slitlets. We calculated the wavelength solution by fitting a 4th-order polynomial to the red and blue arc lamp spectra, which resulted in residuals of ∼ 0.035Å and ∼ 0.3Å respectively. We repeated this step on a set of frames that had not had sky lines removed. Using the resulting sky spectra, we measured the centroid of several sky lines and shifted the wavelength solution zeropoint until the sky lines aligned with their known wavelengths. This shift typically had a magnitude of ∼ 4Å throughout the sample. 
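As a rough illustration of the wavelength-calibration step described above (a 4th-order polynomial fit to arc-lamp line positions, followed by a zeropoint shift derived from sky-line centroids), here is a minimal numpy sketch. The arc-line pixel positions, wavelengths, and sky-line values are made-up placeholders, not measurements from the survey.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical arc-lamp measurements: pixel centroids and known laboratory wavelengths (Angstroms).
arc_pixels = np.array([200.0, 600.0, 1000.0, 1400.0, 1800.0, 2200.0])
arc_waves = np.array([3390.2, 3971.0, 4553.5, 5137.9, 5724.0, 6312.1])

# Fit a 4th-order polynomial wavelength solution, lambda(pixel).
wavelength_solution = Polynomial.fit(arc_pixels, arc_waves, deg=4)

# Zeropoint correction: compare measured sky-line centroids with their known wavelengths
# and shift the solution by the median offset.
sky_measured_pix = np.array([1700.3, 2192.0])
sky_known_waves = np.array([5577.3, 6300.3])
offset = np.median(sky_known_waves - wavelength_solution(sky_measured_pix))

def calibrated_wavelength(pixels):
    """Wavelength solution with the sky-line zeropoint shift applied."""
    return wavelength_solution(pixels) + offset
```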
We applied an initial flux calibration based on spectrophotometric standard star observations obtained during each observing run. We checked the flux calibration by comparing spectrophotometric measurements calculated from our objects to measurements in the 3D-HST photometric catalog, and applied a multiplicative factor to correct our calibration. Following the final flux calibration, we ensured that continuum levels on either side of the dichroic at 5000Å were consistent. While the full sample described above consists of 260 galaxies across three distinct redshift intervals (1.40 ≤ z ≤ 1.90, 1.90 ≤ z ≤ 2.65, and 2.95 ≤ z ≤ 3.80), we focus on a subset of this sample composed of galaxies in the central redshift window that have detections in four primary Figure 3. Fractional difference between the best-fit model spectra from models using a delayed-τ SFH compared to the model spectrum using a constant SFH. Best-fit models fit to the high and low stacks are displayed in the top and bottom panels respectively. The regions masked out using 'mask1' from Steidel et al. (2016) defined to include contamination from non-stellar sources is shown in grey. On average, the models assuming a delayed-τ SFH differ from those with a constant SFH at the few percent level. BPT lines (Hβ, [OIII]λ5007, Hα, [NII]λ6584) at ≥ 3σ from the MOSDEF survey. This 'LRIS-BPT' sample comprises 62 galaxies, each of which has a systemic redshift measured from rest-optical emission lines. Figure 1 shows the galaxy properties of objects in the LRIS-BPT sample. Topping et al. (2019) constructed composite spectra from two subsets of the LRIS-BPT sample, each subset being defined based on their location on the BPT diagram (Figure 1, left). These two subsets, referred to as the high and low samples, comprise galaxies that are offset from the local BPT sequence, and those that lie along the sequence respectively. Figure 1 (right) shows the SFR measured from dust-corrected Balmer lines and stellar mass for the galaxies in our sample. The SFR and stellar mass of this sample are characterized by a median and inner 68 th percentile log(SFR/(M / yr)) = 1.53 ± 0.44 and log(M * /M ) = 10.02 ± 0.52 respectively. These median values are consistent with those of the sample of galaxies comprising the central redshift range (1.90 ≤ z ≤ 2.65) of the full MOSDEF survey, which are characterized by median and inner 68 th percentile of log( SFR/(M /yr)) = 1.36 ± 0.50 and log( M * /M ) = 9.93 ± 0.60. These comparisons of SFR and stellar mass suggest that our LRIS-BPT sample is an unbiased subset of the full z ∼ 2 MOSDEF sample. Stellar Population Synthesis and Photoionization Models For this analysis, we used the version 2.2.1 Binary Population and Spectral Synthesis (BPASS) stellar population models to interpret our observed rest-UV galaxy spectra (Eldridge et al. 2017;Stanway & Eldridge 2018). Notably, these stellar population models incorporate the effects of stellar rotation, quasi-homogeneous evolution, stellar winds, and binary stars. These effects can have a substantial effect on the spectrum of a model stellar population, and in particular, the EUV spectrum produced by massive, shortlived stars. The BPASS models are evaluated with multiple Initial Mass Functions (IMFs), including the Chabrier (2003) IMF, and IMFs with high-mass (M ≥ 0.5M ) slopes of α = −2.00, −2.35, and −2.70. In addition, the models using each IMF were formulated with a high-mass cutoff of 100M and 300M . 
For this analysis, we only considered models that assume a Chabrier (2003) IMF, and a high-mass cutoff of 100M . Finally, the BPASS models have been computed with ages between log(Age/yr) = 6.0 − 11.0 in increments of 0.1 dex, and stellar metallicities of Z * = 10 −5 − 0.04. While we considered all available stellar metallicities in our analysis, we restricted the ages to log(Age/yr) = 7.0 − 9.6. At ages younger than log(Age/yr) = 7.0 we would be probing timescales shorter than the dynamical timescale of the galaxies, and therefore could not accurately attribute physical properties to the entire galaxy simultaneously. Additionally, the ionizing spectrum of stellar populations with constant star formation reaches an equilibrium at an age ∼ 10 7 yr. Finally, at the lowest redshift galaxy in our sample, the age of the universe was ∼ 4Gyr, so including older stellar populations is not necessary. We constructed stellar population models that assume different star-formation histories (SFH) by combining the BPASS models, which describe a coeval stellar population, log(Age/yr) 7. 0, 7.3, 7.4, 7.5, 7.7, 8.0, 8.5, 9.0, 9.6 Z * 10 −5 , 10 −4 , 0.0001, 0.001, 0.002, 0.003, 0.004, 0.006, 0.008, 0.01, 0.014, 0.02, 0.03, 0.04 log(Z neb /Z ) -1.3, -1.0, -0.8, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2 log(U) -3.6, -3.4, -3.2, -3.0, -2.8, -2.6, -2.4, -2.2, -2.0, -1.8, -1.6, -1.4 Table 1. Summary of model grid parameters. The age and stellar metallicity values correspond to BPASS models we fit to our observed spectra. For each combination of age and stellar metallicity, we computed a set of photoionzation models with the listed nebular metallicity and ionization parameter values. On this scale log(Z neb /Z ) = 0 corresponds to 12 + log(O/H) = 8.69 Figure 4. The best-fit stellar metallicity (left) and stellar age (right) as a function of spectra SNR created by artificially adding an increasing amount of noise to one of our composite spectra. The best-fit stellar metallicity remains consistent to the high-SNR value in the range SNR per spectral resolution element ≥ 5.6. The best-fit age remains consistent at all SNR values, however the 1σ uncertainties increase to the size of the parameter space (7.0 ≤ log(Age/yr) ≤ 9.6) at low SNR. using: where t max is the age of the population, ∆t 0 is the time between the age of the first model (i.e., 10 6 yr) and the onset of star formation, Ψ t 0 is the SFR at the time of the first model, f (λ) t 0 is the luminosity density of the model spectrum per unit stellar mass at the time of the first model, Ψ t i is the star-formation rate of the population at time t i , f (λ) t max −t i is the luminosity density of the model spectrum per unit stellar mass with age t max − t i (i.e., the model that began t i years prior to the final age, t max ), and (t i − t i−1 ) is the time between subsequent model spectra. For the case of a constant SFH, all of the SFR weightings, Ψ t i , are set to unity. In addition to a constant SFH, we considered several models with a 'delayed-τ' SFH of the form SFR ∝ t × e −t/τ , with log(τ/yr) = 7, 8,9,10. With this set of models we covered three schematically different SFHs. These SFHs allowed us to explore models when star formation is rising (t < τ), falling(t > τ), and peaked(t ∼ τ). We processed the stellar population model spectra using the photoionization code Cloudy (Ferland et al. 2017). Using this code, we input an ionizing spectrum from BPASS and, given a set of ISM properties, calculated expected emis-sion line fluxes. 
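The composite-spectrum construction described above, in which coeval BPASS spectra are summed with weights given by the SFR at each formation time (Ψ set to unity for a constant SFH, or Ψ(t) ∝ t e^{-t/τ} for the delayed-τ case), can be sketched as follows. This is a minimal illustration assuming the age grid and model spectra are supplied as plain arrays; it is not the actual code used for the analysis.

```python
import numpy as np

def delayed_tau_sfr(t, tau):
    """Delayed-tau star-formation history: SFR proportional to t * exp(-t / tau)."""
    return t * np.exp(-t / tau)

def composite_spectrum(ages, model_spectra, t_max, tau=None):
    """
    Combine coeval model spectra into a composite for a population of age t_max.

    ages          : ascending 1D array of coeval-population ages (yr)
    model_spectra : 2D array; model_spectra[i] is the luminosity density per unit mass at ages[i]
    tau           : None for a constant SFH, otherwise the delayed-tau timescale (yr)
    """
    ages = np.asarray(ages, dtype=float)
    keep = ages <= t_max
    ages, spectra = ages[keep], np.asarray(model_spectra)[keep]
    t_form = t_max - ages                                   # formation time of each coeval burst
    # Width of the age bin represented by each coeval model (simple midpoint differencing).
    edges = np.concatenate([[0.0], 0.5 * (ages[1:] + ages[:-1]), [t_max]])
    dt = np.diff(edges)
    weights = np.ones_like(ages) if tau is None else delayed_tau_sfr(t_form, tau)
    return np.sum((weights * dt)[:, None] * spectra, axis=0)
```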
We compared the simulated line fluxes to the observed rest-optical emission lines of galaxies in our sample to infer properties of the ISM. We assumed a fixed electron density of n e = 250 cm −3 , which is representative of the galaxies in our sample (Sanders et al. 2016a). In addition, while we vary the nebular oxygen abundance, we assume solar abundance ratios for most elements. However, For each BPASS model, we ran a grid of Cloudy models for a range of nebular metallicity (−1.6 ≤ log(Z neb /Z ) ≤ 0.3) and ionization parameter (−3.6 ≤ log(U) ≤ −1.4). We have made several updates to the parameters of the model grids described in Topping et al. (2019), in order to more finely sample the parameter space in regions of interest, including amending the abundances of all elements to the values reported by Asplund et al. (2009). Table 1 summarizes the parameters, and lists each value for which we compute a model. An additional component of the photoionization Figure 5. Best-fit stellar parameters as a function of SNR computed by adding varying amounts of noise to an array of BPASS model spectra for which the log(Age/yr) and Z * are known. For all panels the color corresponds to the model input stellar metallicity, and the symbol depicts the input stellar age. Top Right: Best-fit stellar metallicities as a function of SNR computed for four different input values (0.001, 0.002, 0.004, 0.008) each of which has been computed at three different ages. At high SNR/resolution element all of the models asymptote to their input values, however at SNR/resolution element ≤ 5.6 the best-fit values are biased high. Bottom Right: Same as top right but for the fractional difference between the best-fit Z * and the input Z * . At the lowest SNR, the best-fit values can be biased high by ∼ 50% − 150%. Top Left: Best-fit stellar ages as a function of SNR for three different input ages at a range of stellar metallicities. At the lowest SNR, the uncertainties expand to fill the parameter search range, and the best-fit values are biased toward log(Age/yr) ∼ 8.5. Bottom Right: Fractional difference between the best-fit age and the input age for each of our models. The best-fit results begin to significantly diverge from their input values at < SNR per spectral resolution element ∼ 5.6, indicated by the vertical dashed line. models is the nebular continuum. The nebular continuum contributes a relatively small amount of flux to the UV spectrum, compared to the stellar component (See Steidel et al. (2016), Figure 3). We explicitly compute the nebular continuum for BPASS models listed in Table 1, however, it changes smoothly with age, so we interpolate the nebular contribution for the remaining BPASS models. Composite Spectra and Fitting To compute a composite spectrum, we first interpolated each of the individual galaxy spectra onto a common wavelength grid. We chose the sampling of this common wavelength grid to be 0.8Å, which corresponds to the rest-frame sampling of our spectra at the median redshift of our sample. Then, at each wavelength, we median combined all spectra that had coverage at that wavelength. We defined the error spectrum as the standard deviation of all contributing spectra at each wavelength. Our fitting analysis utilized continuum normalized spectra for comparison with the models. In the process of continuum fitting, we first extracted regions of the spectra that are not contaminated by absorption lines, in the win-dows defined by Rix et al. (2004). 
We then fit a cubic spline to the median flux values within each window to define the continuum level. Because of this approach, we did not need to consider effects that smoothly affect the continuum (e.g., reddening by dust). To fit the BPASS stellar population synthesis models to our individual galaxy and composite spectra, we masked out regions of the observed spectra that include components not present in the models (e.g., strong interstellar absorption lines). Then, we continuum normalized both the observed and BPASS model spectra. We then interpolated the BPASS models onto the wavelength grid of the galaxy spectra. We did not smooth either the models or the observed spectra as their resolutions were comparable with values of ∼ 1Å in the rest-frame. Following this step, we calculated the χ 2 statistic for each BPASS model in the grid, and determined which age and stellar metallicity produced the minimum χ 2 value. We determined the uncertainties in these parameters by perturbing the observed spectrum and calculating which age and stellar metallicity best-fit the observed spectrum. In the case of an individual galaxy spectrum, this perturbation simply constitutes adding in noise to each wavelength Figure 6. Best-fit age and stellar metallicity for all galaxies in the LRIS-BPT sample. For completeness, galaxies with SNR per spectral resolution element ≤ 5.6 are displayed as faint grey symbols. The sample comprises galaxies with ages in the range 7.0 ≤ log(Age/yr) ≤ 9.6, with the majority of galaxies having stellar metallicities of 0.0005 ≤ Z * ≤ 0.004. element pulled from a normal distribution with a standard deviation defined by the magnitude of the error spectrum at that wavelength element. For a simulated composite spectrum, we selected a new sample of galaxies from the initial composite spectrum sample with replacement. Then, each galaxy spectrum was perturbed using the method described above before being combined. After repeating this process 1000 times, we defined the best-fit value and upper and lower 1σ uncertainties as the median, 16 th and 84 th percentile of the distribution. Testing models with additional SFHs We expanded the model grid used in Topping et al. (2019), which included only stellar population models assuming a constant SFH. We repeated fitting the model grids to the high and low stacks using our updated models that assume different SFHs. For each SFH, we found that the results are consistent with those of Topping et al. (2019). In particular, we fit models that assume a 'delayed-τ' SFH, with log(τ/year) = 7, 8, 9, and 10. Figure 2 shows the best-fit age and stellar metallicity of the high and low stacks for each model grid. For each SFH, we find that the high stack has lower stellar metallicity, and a younger stellar age compared to the low stack. While this trend between the properties of the high and low stacks persists for each SFH we considered, the exact values of the stellar metallicity and age differ between the assumed models. In particular, the delayed-τ model with log(τ/yr) = 7 has the youngest best-fit age and stellar metallicity, and both the age and stellar metallicity increase when assuming an increasing τ. For each of the assumed SFHs, we recovered the same qualitative trend found in Topping et al. (2019), according to which the high stack had a younger age and lower stel-lar metallicity relative to the low stack. 
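The fitting procedure described above — continuum-normalize, mask regions contaminated by non-stellar features, minimize χ² over the BPASS age/metallicity grid, then repeat the fit on perturbed spectra to estimate uncertainties — can be summarized with a short sketch. The array names and grid layout are illustrative assumptions rather than the survey's actual data structures.

```python
import numpy as np

def best_fit_model(obs_flux, obs_err, mask, model_grid):
    """
    obs_flux, obs_err : continuum-normalized spectrum and its error, on the model wavelength grid
    mask              : boolean array, True where the spectrum is free of non-stellar features
    model_grid        : dict mapping (log_age, Z_star) -> continuum-normalized model spectrum
    Returns the (log_age, Z_star) pair that minimizes chi^2 over the unmasked pixels.
    """
    chi2 = {params: np.sum(((obs_flux[mask] - model[mask]) / obs_err[mask]) ** 2)
            for params, model in model_grid.items()}
    return min(chi2, key=chi2.get)

def fit_with_uncertainties(obs_flux, obs_err, mask, model_grid, n_trials=1000, seed=0):
    """Monte Carlo uncertainties: perturb each pixel by its error and refit, as described in the text."""
    rng = np.random.default_rng(seed)
    draws = [best_fit_model(obs_flux + rng.normal(0.0, obs_err), obs_err, mask, model_grid)
             for _ in range(n_trials)]
    ages = np.array([a for a, _ in draws])
    zs = np.array([z for _, z in draws])
    return np.percentile(ages, [16, 50, 84]), np.percentile(zs, [16, 50, 84])
```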
This result assumes that both the high and low stacks are both described by a similar SFH, which is a reasonable assumption based on the SFHs determined for galaxies in these two stacks estimated from SED fitting. We also investigated if the stellar population models yielded any constraint on the form of the SFH for each stack. We tested this question by measuring the minimum χ 2 value for the best-fit model of each SFH. For the high and low stacks, none of the SFHs were preferred over the others, suggesting that a given UV stellar spectrum is not unique to a particular SFH. A KS-test revealed that the high and low stacks have τ distributions with a 45% probability of being drawn from the same parent distribution. Figure 3 compares the best-fit spectrum of a constant SFH model to models with a delayed-τ SFH. The best-fit models for the high stack are nearly identical, and at some wavelengths are different at the few percent level. The low stack models vary more, in particular for the log(τ/yr) = 7 model, which has some signatures of a young population not seen in the other best-fit models. In particular, this model has slightly enhanced Civλλ1548, 1550 and Heiiλ1640 emission compared to the other models. However, the majority of the models are in agreement, with differences of only up to ∼ 10% in a few wavelength elements. These tests have shown that we cannot determine which SFH best characterizes the rest-UV spectra. However, using models that assume different SFHs does not qualitatively affect our results. The low SNR boundary to avoid biased results While the composite spectra provide useful constraints due to their high SNR, the SNR of individual galaxy spectra can be much lower. For example, the high and low stacks have SNR/pixel ∼ 18, while the individual spectra have a median SNR/pixel ∼ 4.5. These values correspond to SNR per spectral resolution element ∼ 25.5 and SNR per spectral resolution element ∼ 6.3 for the composite and median individual spectra respectively. We tested how the SNR of a spectrum affects the best-fit stellar properties by manually introducing noise to one of our composite spectra, refitting the models, and checking if biases arise as the spectrum drops in quality. Figure 4 displays how the best-fit stellar metallicity and age change as a function of the amount of added noise. For this composite, the best-fit stellar metallicity retains an unbiased estimate of the value obtained in the high-SNR limit down to a SNR per spectral resolution element ∼ 5.6 (SNR/pixel∼ 4). Below this value, the measurement of stellar metallicity is biased high. However, even above this limit, the stellar metallicity uncertainty increases with decreasing SNR. The bestfit age remains consistent throughout the range of SNR per spectral resolution element, yet as the SNR decreases below SNR per spectral resolution element ∼ 5.6, the uncertainty grows to ≥ 2 dex, leaving the age unconstrained. This test showcases how biases in the best-fit stellar parameters may occur in lower SNR spectra. In order to further quantify this effect, we repeated the process used on the composite spectrum, except on BPASS models for which the 'true' parameters are known. We added noise selected from a normal distribution to the BPASS models at each wavelength element. We repeated this for a combination of ages and stellar metallicities to determine if these biases exist throughout the range of parameter space our SFHs span. 
Figure 5 shows how the best-fit age and stellar metallicity changes as noise is introduced into the model spectra. At all stellar metallicities, a low SNR/resolution element introduces a positive bias in stellar metallicity leading to an overestimate of Z * relative to the true value. At the lowest SNR, this bias can be up to ∼ 150%. The best-fit stellar age also changes at low SNR/resolution element, however in contrast to the bias of the stellar metallicity, the trend of the bias depends on the 'true' age. At low SNR/resolution element, models with an old input age are biased younger, and models with a young age are biased toward older values. While these biases exist at low SNR/resolution element for both age and stellar metallicity, the input values of Z * and age can be accurately determined for spectra with SNR/resolution element ≥∼ 5.6. Attempting to model the spectra with lower SNR leads to high uncertainties and systematic biases. RESULTS Based on our tests on the BPASS models, we can achieve accurate age and stellar metallicity measurements for individual spectra with SNR per spectral resolution element ≥ 5.6. Figure 6 displays these values for all galaxies in the LRIS-BPT sample, highlighting those with sufficiently high SNR to yield unbiased Z * and age. The stellar metallicity for the individual galaxies ranges between 0.001 ≤ Z * ≤ 0.006, consistent with the best-fit metallicities found for our composite spectra. Importantly, based on the stellar metallicity and age we are able to constrain the shape of the ionizing spectrum for each of these individual galaxies. However, for some galaxy spectra, the stellar metallicity probability distribution uniformly spanned the parameter space, and we could not assign a stellar metallicity in such cases. Ionized Gas Properties The ionizing spectrum emitted from the stellar population drives the production of the emergent rest-optical emission line ratios. Therefore, constraining the ionizing spectrum within galaxies is crucial in order to use photoionization models to extract physical properties from the observed nebular emission lines. In particular, using Cloudy, we set the ionizing spectrum based on the appropriate best-fit BPASS stellar population model, and vary the nebular metallicity and ionization parameter, recording the emergent line fluxes. We then compare the resulting catalog of nebular emission line flux ratios to those observed from an individual or composite spectrum. The inferred nebular metallicity and ionization parameter is set by which model best reproduces the observed emission lines defined by the minimum χ 2 calculated for all observed line ratios simultaneously. To understand the uncertainty in these quantities, we perturb the observed emission line flux ratios by their corresponding uncertainties, and recompute the best-fit nebular parameters. Topping et al. (2019) used an approach that compared the locations of the models and observed galaxies on the [NII] and Hβ are farther apart in wavelength compared to the other line ratios, we apply a dust correction to both lines. We investigate the effect that including these additional observables has on the inferred nebular parameters. Figure 7 shows an example of how the constraint on nebular metallicity and ionization parameter changes for different sets of nebular emission lines when our method is applied to the low stack composite spectrum. 
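A schematic version of the nebular fitting step just described — matching observed rest-optical line ratios against a photoionization grid in (Z_neb, log U) computed for the best-fit stellar model, with the minimum χ² evaluated over all ratios simultaneously — might look like the following; the ratio names and grid layout are assumptions for illustration. Uncertainties would then follow by perturbing the observed ratios within their errors and refitting, as stated in the text.

```python
def fit_nebular(obs_ratios, obs_errors, grid):
    """
    obs_ratios : observed (log) line ratios, e.g. {'O3Hb': 0.45, 'N2Ha': -0.85}
    obs_errors : matching dict of 1-sigma uncertainties
    grid       : dict mapping (logZneb, logU) -> dict of model (log) line ratios
    Returns the (logZneb, logU) grid point minimizing chi^2 over all ratios simultaneously.
    """
    def chi2(model):
        return sum(((obs_ratios[k] - model[k]) / obs_errors[k]) ** 2 for k in obs_ratios)
    return min(grid, key=lambda params: chi2(grid[params]))
```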
In this example, the inferred nebular properties are consistent when considering different sets of lines, however the ionization parameter and nebular metallicity are better constrained when additional emission lines are included. One assumption made in the photoionization modelling is the form of the N/O vs. O/H relation. The median nitrogen abundance of HII regions in the local universe has been measured to vary by ∼ 0.5 dex for 8.2 ≤ 12 + log(O/H) ≤ 8.6, with scatter of ∼ 0.2 dex at fixed O/H (Pilyugin et al. 2012). This assumption strongly affects the output [NII]λ6584/Hα flux in our photoionization models. These Nitrogen abundance variations can result in a disparity of the [NII]λ6584/Hα ratio ∼ 0.5 dex, resulting in a biased inference of Z neb and log(U). Figure 8 shows the effect of removing [NII]λ6584/Hα from our fitting procedure, eliminating the uncertainty surrounding the N/O relation. Without [NII]λ6584/Hα, the inferred ionization parameters are well matched to those inferred when using [NII]λ6584/Hα, with nearly all galaxies falling on the one-to-one relation. In addition, there is a slight difference at low O/H when [NII]λ6584/Hα is not considered, however this difference is small compared to the uncertainties. Based on this result, we conclude that the inclusion of [NII]λ6584/Hα in our analysis does not significantly bias our inferred nebular parameters. Figure 9 shows log(U) against 12 + log(O/H) for individual galaxies in the LRIS-BPT sample. A majority of the galaxies in our high-SNR sample fall within the region populated by local HII regions shown by the white Figure 9. Inferred log(U) vs. 12 + log(O/H) for each galaxy in the LRIS-BPT sample. Galaxies with low SNR/resolution element (≤ 5.6) are displayed as the faint points. The white region depicts the location of local HII regions in the parameter space as defined by Pérez-Montero (2014). Left: log(U) and 12 + log(O/H) inferred using all strong rest-optical emission line ratios. The majority of galaxies in our sample lie within the region populated by local HII regions. Right: log(U) and 12 + log(O/H) inferred when [NII]λ6584/Hα is excluded from our analysis. Figure 10. Nebular metallicity inferred from photoionization modelling plotted against stellar metallicity measured for each galaxy in our sample. Lines of constant α-enhancement (i.e., O/Fe) are displayed as solid, dashed, and dotted lines, respectively, for O/Fe , 2×O/Fe , and 5×O/Fe . All galaxies in our sample show evidence for α-enhancement. Additionally, some galaxies are in the regime above 5×O/Fe , which has been suggested as the theoretical limit based on supernova yield models. The galaxies for which the stellar metallicity could not be determined are displayed as lower limits. area in Figure 9 (Pérez-Montero 2014), with median values of 12 + log(O/H) = 8.48 ± 0.11 and log(U) = −2.98 ± 0.25. Sanders et al. (2020) found a median log(U) = −2.30 ± 0.06 for z ∼ 2 galaxies at 12 + log(O/H) ∼ 8.0, also consistent with the local U-O/H relation. The measurements of log(U) and 12 + log(O/H) for the galaxies in this high-SNR sample in combination with measurements of lower metallicity galaxies from Sanders et al. (2020) suggest that there may be a trend of decreasing ionization parameter with increasing nebular metallicity consistent with the trend seen in Pérez-Montero (2014). The offset of this sample from the z ∼ 0 excitation sequence in the BPT diagram is thus not driven by higher ionization parameters at fixed O/H (Cullen et al. 2016;Kashino et al. 
2017;Bian et al. 2020). Furthermore, this result remains largely the same when considering log(U) and 12+log(O/H) inferred without [NII]λ6584/Hα. However, the different methods used to measure the oxygen abundances between our high-SNR sample, and those from Pérez-Montero (2014), which used the 'direct' method may introduce systematics. In particular, Esteban et al. (2014) demonstrated that metallicities measured using the direct method are ∼ 0.24 lower than those which use nebular recombination lines on average. However, metallicities measured from recombination lines are in better agreement with those inferred from photoionization modelling. In addition the ionized gas properties presented by Pérez-Montero (2014) were inferred using softer ionizing spectra compared to those used in this paper. The result is a potential overestimate of the ionization parameter measured for the local systems compared to galaxies in our high-SNR sample. While addressing this potential systematic could move our measurements away from the Pérez-Montero (2014) relation, the difference in methods used to infer oxygen abundances would act in such a way to remedy the disparity. Therefore, despite these systematics, we conclude that there is no strong offset in the U vs. O/H relation of galaxies in our sample compared to the local relation Combined Stellar and Nebular Properties To connect the stellar and nebular properties of the individual galaxies in our high-SNR sample, we combine the nebular O/H abundance inferred from photoionization modelling with the Fe/H measured from the BPASS model fitting to look at the α-enhancement of individual galaxies. Figure 10 compares the nebular metallicity and the stellar metallicity for each galaxy in our sample. Noticeably, all of the galaxies in our sample show evidence for α-enhancement, falling significantly above the O/Fe line (solid black line). These values range from ∼ 1.75O/Fe to ≥ 5O/Fe . Investigating a small sample of z ∼ 2 − 3.5 galaxies with direct oxygen abundance measurements, Sanders et al. (2020) found similar results, where most of the galaxies analyzed show evidence for α-enhancement, some having values > 5 × O/Fe . A number of objects in our sample fall above the expected theoretical limit from pure Type II SNe of ∼ 5 × O/Fe , assuming a typical IMF Kobayashi et al. 2006). However, this limit depends on the details of the stellar population and expected Type II SNe yields. For example, the theoretical O/Fe limit increases when calculated assuming a top-heavy IMF. Therefore, different assumptions of the IMF or supernova yields could remove the tension between the theoretical limit and some of our observed galaxies. Finally, the tension between our observed O/Fe and the theoretical limit may be relieved due to uncertainties in the stellar modelling of low-metallicity massive stars. If, in fact, the low-metallicity stars produce harder ionizing radiation than what is included in the BPASS models, our galaxies would be best described by stellar metallicities that are higher than what we have found. A higher best-fit stellar metallicity could bring the α-enhancement into better agreement with the theoretical limits. We check how the α-enhancement changes as a function of distance away from the local BPT sequence for our sample of high-redshift galaxies. We calculate the distance away from the local sequence by first finding the line that is perpendicular to the local sequence from Kewley et al. 
We check how the α-enhancement changes as a function of distance away from the local BPT sequence for our sample of high-redshift galaxies. We calculate the distance away from the local sequence by first finding the line that is perpendicular to the local sequence from Kewley et al. (2013) and that intersects the [NII]/Hα and [OIII]/Hβ value of our data point. The distance, D, is then measured along this perpendicular, where N2 = log([NII]λ6584/Hα) and O3 = log([OIII]λ5007/Hβ). We calculated uncertainties by perturbing each rest-UV spectrum and fitting the grid of BPASS stellar population models to measure the stellar metallicity. The best-fit stellar population parameters define an ionizing spectrum which we then use as an input to Cloudy, from which, in turn, we infer the nebular oxygen abundance. We define the 1σ uncertainty as the 16th and 84th percentiles of the distribution of α-enhancements resulting from repeating this process 1000 times.

Figure 11. α-enhancement calculated as a function of distance from the z = 0 BPT star-forming sequence. The galaxies that show the highest values of O/Fe are those that are the most offset from the local BPT sequence. Measurements of the high and low stacks from Topping et al. (2019) are displayed as the blue and red squares, respectively. The KBSS-LM1 composite from Steidel et al. (2016) is shown as the green triangle.

Figure 11 displays how the α-enhancement depends on our measured distance from the local BPT sequence. There appears to be a weak positive correlation between the distance away from the local BPT sequence and α-enhancement. While there is a large amount of scatter in this relation, we do see that the galaxies with the highest O/Fe typically lie farther from the local sequence. We also compare the measurements of individual galaxies in our high-SNR sample with the high and low composite spectra from Topping et al. (2019). These two composite spectra follow the trend of higher O/Fe at a greater BPT distance, and both composite spectra are consistent with the results of our high-SNR sample. While the data are not constraining enough to measure a functional form of a relation between α-enhancement and BPT distance, they are consistent with the result that the offset of high-redshift galaxies relative to local galaxies on the BPT sequence is significantly driven by a harder ionizing spectrum due to α-enhancement.
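The distance measurement and its Monte Carlo uncertainty can be sketched as follows. The parametrization assumed for the Kewley et al. (2013) z ≈ 0 sequence, the placeholder line ratios, and the direct perturbation of N2 and O3 (rather than re-fitting perturbed spectra and re-running Cloudy, as done in our analysis) are simplifications for illustration; the 16th/84th percentile summary matches the definition used here.

import numpy as np

def local_sequence_o3(n2):
    # Assumed form of the z ~ 0 star-forming sequence, log([OIII]/Hb) vs
    # N2 = log([NII]/Ha), in the style of Kewley et al. (2013).
    return 0.61 / (n2 - 0.08) + 1.10

def bpt_distance(n2, o3, n2_grid=np.linspace(-2.0, 0.0, 2001)):
    # Minimum (perpendicular) separation of the point (N2, O3) from the sequence.
    return np.hypot(n2_grid - n2, local_sequence_o3(n2_grid) - o3).min()

rng = np.random.default_rng(1)
n2_obs, o3_obs, n2_err, o3_err = -0.9, 0.55, 0.08, 0.05   # placeholder measurements
draws = [bpt_distance(rng.normal(n2_obs, n2_err), rng.normal(o3_obs, o3_err))
         for _ in range(1000)]
lo, med, hi = np.percentile(draws, [16, 50, 84])
print(f"D = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")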
DISCUSSION

Since the first evidence that suggested that high-redshift galaxies are offset on the BPT diagram, several hypotheses have been proposed to explain the underlying cause. Among the proposed sources for this offset between local and high-redshift galaxies are harder ionizing spectra at fixed nebular metallicity, higher electron densities, contributions from AGNs and shocks at high redshift, and variations in gas-phase abundance patterns. Recently, two prevailing theories suggest that the offset is primarily driven by higher ionization parameters at fixed gas-phase metallicity (Kewley et al. 2015; Kashino et al. 2017; Cullen et al. 2018; Bian et al. 2018), or that high-redshift galaxies exhibit a harder intrinsic ionizing spectrum at fixed nebular metallicity driven by α-enhancement at high redshift (Steidel et al. 2016; Sanders et al. 2020). To answer this question of the origin of the BPT offset, Sanders et al. (2020) used the 'direct' method to estimate oxygen abundances for a sample of 18 high-redshift galaxies at low nebular metallicities. These authors then used photoionization models very similar to our own to fit for log(U) and Z_* after fixing Z_neb to match the direct metallicity, finding that the average of the sample lies along the log(U) vs. 12 + log(O/H) relation of local HII regions (Pérez-Montero 2014). This result suggests that the high ionization parameter measured in their sample is due to their low nebular metallicity, and that their sample has an ionization parameter consistent with local HII regions that share the same O/H. Furthermore, Shapley et al. (2019) demonstrated that high-redshift galaxies are also offset toward higher [SII]λλ6717, 6731/Hα and [OIII]λ5007/Hβ when using the appropriate comparison to local galaxies with low contribution from diffuse ionized gas (DIG) within the ISM. Using photoionization models from Sanders et al. (2016a), Shapley et al. (2019) concluded that the offsets on both the [NII] and [SII] BPT diagrams are best explained by a harder ionizing spectrum at fixed nebular metallicity. Finally, the results described in this paper suggest that z ∼ 2 galaxies do not have an elevated ionization parameter compared to local HII regions that share the same 12 + log(O/H). Our analysis illustrates the importance of an independently constrained ionizing spectrum when trying to extract ionization parameter information from strong rest-optical line ratios. Without such a constraint, the degeneracy between ionization parameter and the intrinsic ionizing spectrum can bias inferences of the ionization parameter. It is important to note that the method used to infer oxygen abundances of our sample is different from the method of our local HII region comparison sample (Pérez-Montero 2014), which could introduce systematics in the comparison. The offset between these two methods results in ∼ 0.24 dex lower oxygen abundance when using the direct method relative to the nebular recombination lines (see, e.g., Sanders et al. (2020) for a more detailed discussion). However, the modelling from Pérez-Montero (2014) assumed a softer ionizing spectrum at fixed oxygen abundance relative to the BPASS models used for our analysis. Using a softer ionizing spectrum results in an ionization parameter that is systematically higher for a fixed set of nebular emission line ratios (Sanders et al. 2016a). Therefore, correcting for these biases affecting the inferred ionization parameter and oxygen abundance would shift our inferred values along the U vs. O/H relation described by Pérez-Montero (2014), and would not lead to a significant difference between the nebular parameters of local HII regions and the z ∼ 2 galaxies in our sample. Based on our stellar and nebular results, we find that the offset on the BPT diagram is primarily due to a harder ionizing spectrum resulting from super-solar O/Fe values relative to local galaxies.

SUMMARY & CONCLUSIONS

We used the combination of rest-UV and rest-optical spectra for a sample of 62 galaxies to investigate the physical conditions within galaxies at z ∼ 2.3. We expanded upon the results of Topping et al. (2019), in which we constructed composite spectra based on location in the [NII]λ6584 BPT diagram, and found that galaxies offset from the local sequence typically had younger ages, lower stellar metallicities, higher ionization parameters, and were more α-enhanced. We expanded the fitting analysis to include additional SFHs and rest-optical emission line fluxes. In addition, we quantitatively determined the rest-UV continuum SNR limit above which we can constrain the stellar metallicity and age of individual galaxies in an unbiased manner. We summarize our main results and conclusions below.

(i) We constructed additional BPASS stellar population models for a variety of SFHs.
We repeated the fitting analysis of the two stacked spectra defined by Topping et al. (2019) and found that for each SFH, the stack composed of galaxies offset from the local sequence on the BPT diagram had a younger age and lower stellar metallicity. Additionally, when fitting across all SFHs for a single stack, we do not find any preference for one SFH over another. Therefore, we cannot determine which SFH best characterizes the rest-UV spectra.

(ii) We tested the SNR down to which individual galaxy spectra are suitable to be fit using this type of analysis. Based on the test of perturbing model spectra with known stellar parameters with increasing amounts of noise, we find that the best-fit stellar metallicity is biased high when the spectrum reaches a SNR/resolution element < 5.6, and the best-fit age is biased toward the middle of the grid (log(Age/yr) ∼ 8.5), with an uncertainty that fills the parameter space. The best-fit age and stellar metallicity remain consistent with the true value for spectra with SNR per spectral resolution element > 5.6. Therefore, rest-UV spectral fitting should not be attempted on spectra for which the SNR/resolution element is less than 5.6 in order to avoid biased results.

(iii) Based on the SNR requirements described above, we found that 30 galaxies in our sample satisfied the criteria to be fit on an individual basis. We find that galaxies in this high-SNR sample have a wide range of ages spanning log(Age/yr) ∼ 7.0 − 9.6, and that most galaxies have stellar metallicities in the range ∼ 0.001 < Z_* < 0.004.

(iv) We examined how different rest-optical emission lines affect the inferred ionization parameter and nebular metallicity. Previously, we inferred nebular parameters by comparing observed [NII]λ6584/Hα and [OIII]λ5007/Hβ with a suite of photoionization models. In this analysis, we tested how adding [SII]λλ6717, 6731/Hα and [OII]λ3727/Hβ to the fitting procedure affects the resultant parameters. In general, adding the additional lines yields results with smaller uncertainties. Additionally, because one assumption we made is the form of the N/O vs. O/H relation as an input to our models, we tested fitting the rest-optical lines that are not affected by this assumption, namely [OIII]λ5007/Hβ, [OII]λ3727/Hβ, and [SII]λλ6717, 6731/Hα. We find that when [NII]λ6584/Hα is excluded from fitting, the nebular metallicity and ionization parameter remain consistent with the values inferred when [NII]λ6584/Hα is included.

(v) With the constrained ionizing spectrum for each individual galaxy, we used photoionization models to infer ionization parameters and nebular metallicities for each galaxy in our sample, and find that the inferred ionization parameters (median log(U) = −2.98 ± 0.25) are consistent with those measured in local HII regions that share the same oxygen abundance (median 12 + log(O/H) = 8.48 ± 0.11). This result suggests that the offset of high-redshift galaxies on the BPT diagram relative to local galaxies is not due to elevated ionization parameters at fixed O/H.

(vi) Combining the best-fit stellar metallicities from fitting BPASS model spectra with the nebular metallicities inferred from photoionization modelling, we find that all of our individual galaxies are α-enhanced compared to local galaxies. Furthermore, we find that high-redshift galaxies that show the highest levels of α-enhancement are typically more offset from the local BPT star-forming sequence.
The O/Fe values span the range from ∼ 1.75×O/Fe⊙ to above the theoretical limit (∼ 5×O/Fe⊙; Nomoto et al. 2006). This limit could be affected by details of the IMF, including the high-mass slope and upper mass cutoff. Furthermore, the stellar metallicities may change as stellar modelling of the most metal-poor massive stars is better understood. In particular, if the lowest Z_* stars actually produce harder ionizing spectra compared to current models, we would infer a higher stellar metallicity for our rest-UV spectra. In combination with (v), this result suggests that the offset of high-redshift galaxies on the BPT diagram relative to the local sequence is likely due to a harder ionizing spectrum at fixed nebular oxygen abundance resulting from elevated O/Fe in high-redshift galaxies.

A combined understanding of the intrinsic ionizing spectrum that excites HII regions and the physical properties of the ISM is a crucial step toward a complete model of high-redshift galaxy evolution. In order to fully understand high-redshift galaxies, we must explore not only how their properties differ from those of local galaxies, but also how the population of high-redshift galaxies varies within itself. Ultimately, detailed modelling of large numbers of individual galaxies will be required to expand our understanding of galaxies beyond the level of a sample average, but our work here begins to illustrate the range and interplay of nebular and stellar properties observed during the epoch of peak star formation in the universe. Constraining the ionizing spectrum is crucial in order to use photoionization models to infer nebular properties, as without this constraint solutions are highly degenerate. Rest-UV spectroscopy is the ideal tool to gain insight into the massive star population, and therefore the ionizing spectrum, for individual galaxies at high redshift. This type of analysis is key in order to compare the internal properties of high-redshift galaxies to those of local HII regions and galaxies.
2020-08-07T01:00:25.414Z
2020-08-05T00:00:00.000
{ "year": 2020, "sha1": "7519867405d8dcb81c83a4f8ef696e5169c1ce8f", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/499/2/1652/33890906/staa2941.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "7519867405d8dcb81c83a4f8ef696e5169c1ce8f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248219563
pes2o/s2orc
v3-fos-license
Resolving subcellular pH with a quantitative fluorescent lifetime biosensor Changes in sub-cellular pH play a key role in metabolism, membrane transport, and triggering cargo release from therapeutic delivery systems. Most methods to measure pH rely on intensity changes of pH sensitive fluorophores, however, these measurements are hampered by high uncertainty in the inferred pH and the need for multiple fluorophores. To address this, here we combine pH dependant fluorescent lifetime imaging microscopy (pHLIM) with deep learning to accurately quantify sub-cellular pH in individual vesicles. We engineer the pH sensitive protein mApple to localise in the cytosol, endosomes, and lysosomes, and demonstrate that pHLIM can rapidly detect pH changes induced by drugs such as bafilomycin A1 and chloroquine. We also demonstrate that polyethylenimine (a common transfection reagent) does not exhibit a proton sponge effect and had no measurable impact on the pH of endocytic vesicles. pHLIM is a simple and quantitative method that will help to understand drug action and disease progression. The regulation of pH and establishing pH gradients across cell membranes plays a crucial role in many cell functions, including membrane transport, energy production and degradation pathways 1 . Compartmentalisation of the eukaryotic cell into different organelles, each with specific chemical and pH environments, means the pH within a cell can range from <pH 5 in lysosomes 2 to~pH 8 in the mitochondrial matrix 3 . Intracellular pH can be a marker of overall cell health, with apoptosis 4 and some disease states 5,6 displaying altered or dysregulated pH. Of particular interest is the pH gradient established in the endo/lysosomal pathway. The acidic environment within the lysosomes is necessary for the proper function of numerous enzymes essential for key physiological actions such as degradation of macromolecules and pathogens 7 . Genetic disorders that prevent lysosomal acidification can significantly affect homeostasis by causing a build-up of material destined for degradation 8 . This natural pH gradient is also exploited for drug delivery to enhance delivery of drug payloads into cells, either by triggering release of the drug from a carrier or disrupting the membranes in a pH-dependent manner to allow delivery into the cytosol [9][10][11][12][13] . Therefore, accurately measuring intracellular pH is vital to improve understanding of diseases while also helping in the development of potential treatments. Discerning changes in the pH of sub-cellular compartments in response to different stimuli, or due to dysregulation in the disease state, requires the use of quantitative tools to measure intracellular pH. However, current methods to measure intracellular pH have several limitations. Numerous synthetic fluorophores have been engineered to change their fluorescence intensity in response to pH [14][15][16] . A substantial limitation of small molecule pH sensors is controlling where they localise inside the cell. Typically, these dyes are taken up into endosomal/lysosomal vesicles, but a number of these dyes are released from the endosomal compartment when they change their protonation state 17 . Therefore, it is challenging to ensure the pH that is measured comes from a specific organelle. To overcome this, genetically encodable pH-sensitive proteins can be used as an alternative. 
These protein-based pH sensors [18][19][20][21][22] can be fused to proteins that natively localise to specific organelles within the cell, ensuring the pH measurement comes from the desired location. They also have the benefit of exhibiting lower toxicity than their small molecule counterparts 17 . Most pH biosensors (protein or synthetic) measure a change in fluorescence intensity based on a change in the protonation state of the fluorophore. A limitation with all intensity-based pH sensors is decoupling the pH measurement from the concentration of the sensor. To distinguish a high concentration of sensor with a low fluorescent signal from a low concentration of sensor with a high fluorescent signal, the pH-dependent signal is typically referenced to a second pH-insensitive fluorophore. However, the necessity for the second fluorophore complicates the pH measurement through a combination of increased complexity of the sensor, FRET interactions between the fluorophores, and spectral overlap in the emission channels. Furthermore, an inherent limitation with intensity-based measurements is the drop in signal-to-noise ratio when the intensity of the pH-responsive fluorophore decreases. The absolute error in each fluorescence measurement remains constant across the physiological pH range, but if the fluorescence intensity decreases at lower pH, the relative error increases substantially as the intensity approaches zero. This results in a high degree of uncertainty in the subsequent pH measurement. In addition to this, these sensors rely on interpolation from a sigmoidal curve, which has greater error in the exponential and asymptotic regions than linear regression. Both of these factors limit the useful range of the sensors to a narrow pH band, which is typically smaller than relevant physiological pH values. Many of these challenges can be overcome by using fluorescent lifetime to infer pH 20,23,24 , as it is independent of fluorophore concentration and can quantify pH without the need for a second reference fluorophore. However, pH measurements using fluorescence lifetime have previously lacked the spatial resolution necessary to ascertain the pH of individual subcellular compartments with high accuracy. To overcome the limitations with existing intracellular pH measurements, here we present a pH-dependent fluorescence lifetime imaging microscopy (pHLIM) approach to quantitatively determine intracellular pH (Fig. 1a, b). pHLIM uses the fluorescent protein mApple, which we have observed has a pH-dependent fluorescent lifetime that is linear across the physiologically relevant pH range. By expressing mApple as a fusion protein and developing an automated deep learning analysis tool, we can accurately determine the pH of different sub-cellular compartments in live cells in real time.

Fig. 1 | mApple is a genetically encodable biosensor that can quantitatively determine subcellular pH using fast FLIM. a Schematic of cellular membrane and endocytic vesicles with mApple fused to different transmembrane proteins to achieve targeted cellular localisation. pH gradient indicates increasing acidification as endosomes mature. Not to scale. b Overview of FLIM technique involving confocal microscopy of mApple expressing cells and subsequent analysis indicating the altered mApple fluorescence lifetime in different subcellular pH environments. c mApple fluorescence emission intensity over the calibration range with the least squares fit (solid line) and 95% prediction band (dotted lines). The points shown are the mean value from each of three independent experiments (n = 3).
d Uncertainty of interpolated pH as a function of actual pH for intensity or lifetime (G value) measurements, as determined by interpolation of (c), (h). e Calibration of recombinant mApple mean weighted fluorescent lifetime from pH 4.6-7.4 (n = 3). f pH dependence of recombinant mApple fluorescent lifetime visualised on a phasor plot; colour is indicative of the frequency of photons (red = high, blue = low), n = 3. g Equivalent phasor plot (f) with a 'phasor mask' applied, which creates a pseudo colour scale that can be applied to fast FLIM confocal images (n = 3). h Extracted mean weighted G values from the phasor calibration, with a linear trendline (solid line) and 95% prediction band (dotted lines). The points shown are the mean value from each of three independent experiments (n = 3).

Results

Photophysical properties of mApple

mApple is an engineered protein that has been evolved from dsRed, a fluorescent protein originally isolated from a coral anemone (Discosoma sp.) 25,26 . It is part of a family of pH-sensitive fluorophores that includes pHuji 19 , which was specifically evolved from mApple for use as an intensity-based pH biosensor. To measure the photophysical properties of mApple, we expressed and purified recombinant mApple from E. coli (Fig. S1). The pH dependence of mApple fluorescence was measured from pH 7.4-4.6 and we observed a 90% drop in the emission intensity across this range (Fig. 1c). The correlation between fluorescence intensity and pH showed a similar trend to other pH-sensitive proteins 18,19 . The principal limitation with using fluorescence intensity to determine pH (aside from the need for a reference fluorophore) is the increased uncertainty in the fluorescence measurement as the signal decreases. This is highlighted by plotting the uncertainty of the interpolated pH vs buffer pH (Fig. 1d), which shows uncertainty in the interpolated pH increased from ~0.2 at pH 7 to >1 at pH 4.6. This high degree of uncertainty in the intensity measurement limits the useful range of intensity-based biosensors. To investigate the pH-responsive fluorescence lifetime of mApple, we used fast FLIM to calculate the fluorescent lifetime of mApple at each pH (Fig. 1e). We determined the mApple lifetime to be 2.2 ns at pH 7.4, which is shorter than the 2.9 ns lifetime previously reported for mApple 27 . When the pH was decreased to 4.6 the lifetime decreased significantly to 1.3 ns. The change in lifetime followed a linear trend through the physiologically relevant pH range, with a lifetime change of ~0.34 ns per pH unit. This linear trend is in contrast to other lifetime-sensitive fluorescent proteins (E2GFP 23 and ECFP 24 ), which exhibit sigmoidal behaviour over the physiological pH range. Non-linear analysis complicates data fitting and lowers certainty in pH measurement at high and low pH. Although directly using the lifetime can be useful, detailed modelling of fluorescence lifetime can be complex as it requires fitting of several exponentials. To simplify our approach, we opted for the fit-free phasor analysis approach developed by Jameson, Gratton and Hall 28 , which obviated the need for complex curve fitting. Using this method, lifetime data is displayed as a graphical representation on a phasor plot to indicate different species visually, by determining the sine (S) and cosine (G) Fourier transformations of normalised emission decays on a per-pixel basis.
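As a concrete illustration of the phasor transform just described, the sketch below computes (G, S) coordinates for a single decay histogram at the first harmonic of an assumed 80 MHz repetition rate; the FALCON software performs an equivalent per-pixel calculation, and the decay used here is a synthetic single exponential rather than measured data.

import numpy as np

def phasor_coordinates(decay, dt, f_rep=80e6, harmonic=1):
    # Phasor (G, S) of a TCSPC decay histogram: cosine and sine Fourier
    # components of the normalised decay evaluated at the laser frequency.
    t = (np.arange(decay.size) + 0.5) * dt
    omega = 2 * np.pi * f_rep * harmonic
    total = decay.sum()
    g = np.sum(decay * np.cos(omega * t)) / total
    s = np.sum(decay * np.sin(omega * t)) / total
    return g, s

# Synthetic single-exponential decay (2.2 ns) sampled over one 12.5 ns period.
dt = 12.5e-9 / 256
t = (np.arange(256) + 0.5) * dt
decay = np.exp(-t / 2.2e-9)
g, s = phasor_coordinates(decay, dt)
print(f"G = {g:.3f}, S = {s:.3f}")   # ideal single exponential: G = 1/(1+(w*tau)^2)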
Simplistically, each pixel in the image is correlated to a cartesian coordinate on the phasor plot, permitting the inference of pH without multi-exponential fitting. An overlay of phasor plots obtained for mApple indicated a narrow distribution at each of the tested pH values (Fig. 1f) and served as a calibration of the pH colour scale (Fig. 1g) used for image analysis later, herein denoted as a phasor mask. Analysis of the average G value within the pH range also displayed a linear trend (Fig. 1h), enabling simple and accurate determination of pH, with each pixel estimated to within 0.1 pH unit (Fig. 1d) using this G value calibration. The linear dependence of mApple lifetime (G value) across the physiological pH range is in contrast to other commonly employed fluorescent proteins. muGFP 29 shows a similar drop in fluorescence intensity from pH 7.4 to 4.6 (~90%), however, the G value remains consistent across this pH range (Fig. S2a). When calibrating fluorescence intensity with pH, the signal-to-noise ratio decreases as the pH drops, leading to a significant increase in the uncertainty of the intensity measurement (Fig S3a). pHlourin 18 , a fluorescent protein developed specifically as a pH sensor exhibits a drop in intensity from pH 7.6 to 6, but is not sensitive to pH below this range. Similar to muGFP, the uncertainty in the intensity measurement increases greatly as the pH decreases (Fig S3c). The G value of pHlourin shows some dependence on pH between 7.6 to 6 (decreasing from G = 0.55 at pH 7.6 to G = 0.45 at pH 6), however, the magnitude of the lifetime change and the range over which the change occurs is less than observed for mApple (Fig. S2b). The fluorescence intensity of mCherry 30 is largely insensitive to pH (with a 20% drop in intensity below pH 5.5) and has no change in the G value across the physiological range (Fig. S2c). Each of these points highlight the advantages of using mApple as a pH biosensor. This lifetime-based pH biosensor is a substantial improvement compared to intensity measurements of mApple, especially at pH below 6.5. Importantly, fluorescence lifetime is independent of protein concentration (Fig. S4a), which means fluorescence lifetime can be used as a single measurement without the need for a reference fluorophore to determine pH. Fluorescence lifetime is also independent of the ionic strength (Fig. S4b), which means variations of salt concentrations in different cellular organelles will not influence the pH measurement. Furthermore, the pH-induced lifetime change is reversible (Fig. S5), enabling dynamic changes in pH to be measured. Together, this demonstrates mApple is an excellent biosensor to detect changes in pH within a physiologically relevant pH range. mApple as an organelle-specific pH sensor We next moved to express mApple in mammalian cells to allow pH measurements in different sub-cellular compartments (Fig. 1a, b). When mApple is expressed in NIH-3T3 cells without fusing it to another protein, it is distributed throughout the cytosol and nucleus (Fig. 2a, Figs. S6, S7). To localise mApple to specific cellular compartments, it was fused to proteins that natively traffic to the compartment of interest. In this study, we chose to examine two fusion proteins that traffic differently within the endo/lysosomal pathway, transferrin receptor 1 (TfR) and transmembrane protein 106b (TMEM106b). 
TfR is a rapidly internalised surface receptor important for iron uptake, which has a high abundance within the early endosomal trafficking pathway and is constantly recycled back to the cell surface 31 . Fusing mApple to the C-terminus of TfR enables the pH of the cell surface and endosomal pathway to be measured. In contrast to TfR, TMEM106b is a late-stage endo/lysosomal protein with unknown function. The C-terminus has been demonstrated to reside in the luminal domain of vesicle membranes and represents a method to assess the pH of late endosomes and lysosomes 32,33 . mApple fusions of both TfR (TfR-mApple) and TMEM106b (TMEM106b-mApple) were expressed in NIH-3T3 cells (Fig. 2b, c) and the localisation was confirmed by coexpression with mEmerald fused Rab5a (early endosome) or LAMP1 (lysosome) (Figs. S8, S9). Analysis of the confocal fluorescent images revealed the majority of TfR-mApple resides in vesicles inside the cell, whilst a proportion can be observed on the surface of the cell. Colocalisation of TfR-mApple was observed with Rab5a and LAMP1, indicating its presence throughout the endosomal trafficking pathway (Fig. S8a, b). The high prevalence of TfR-mApple in endosomal vesicles was expected due to the rapid turnover of TfR on the plasma membrane. TMEM106b-mApple was not observed on the plasma membrane and there was minimal colocalisation with Rab5a (Fig. S8c). However, substantial colocalisation was observed with LAMP1 ( Fig. S8d), confirming its presence in lysosomes and indicating that TMEM106b-mApple is a good marker for the later stages of the endo/lysosomal pathway. After confirming their localisation, we next analysed the fluorescent lifetime and phasor plot of each fusion protein to provide a broad pH overview. Applying a pH calibrated phasor mask to the confocal images ( Fig. 2a-c) aids the visualisation of intracellular pH ( Fig. 2d-f), while the phasor plot ( Fig. 2g-i) indicates the distribution of measured pH values. The cytosolically expressed mApple formed a tight population on the phasor plot (Fig. 2g), and correspondingly showed a homogenous red colour throughout the cell in the phasor mask image (Fig. 2d). A mean G value of 0.39 was obtained from the phasor plot for this image, which corresponds to a cytosolic pH of 7.4, based on the G value calibration (Fig. 1g). The phasor overlayed image and phasor plot of TfR-mApple ( Fig. 2e, h) showed a markedly different pattern to cytosolic mApple. From the phasor plot, the mean G value of TfR-mApple was 0.52, which corresponds to a pH of 6.1, and the modal G value was 0.45 which corresponded to a pH of 6.8. Unlike cytosolic mApple where there was a tight distribution of pH, the TfR-mApple image exhibited a broad pH range, with the majority (majority is calculated as the 0.125 and 0.875 quantiles (middle 75%) of the weighted mean G value which is then converted to pH using the calibration from Fig. 1h.) of pH values falling between 5.2 and 7.0. The broad pH range was anticipated due to the presence of TfR-mApple on the cell surface and in endo/lysosomal vesicles. These different populations of TfR-mApple (high pH surface and early endosome, and low pH late endosome/lysosome) can be easily distinguished by the lifetime measurements (Fig. S10). TMEM106b-mApple also displayed punctate fluorescence (Fig. 2c, f) consistent with endo/lysosomal localisation and accordingly the phasor plot was skewed to higher G values (lower pH- Fig. 2i) compared to both cytosolic and TfR localised mApple. 
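A minimal sketch of turning exported G-value and photon-count channels into a phasor-mask-style pH image is shown below. The input arrays are synthetic, and the linear G-to-pH conversion uses an approximate slope read off the values quoted in the text (G ≈ 0.39 at pH 7.4, G ≈ 0.52 at pH 6.1); in practice the fitted calibration from Fig. 1h should be substituted.

import numpy as np
import matplotlib.pyplot as plt

def g_to_ph(g, slope=-10.0, g_ref=0.39, ph_ref=7.4):
    # Linear conversion of phasor G values to pH (illustrative anchor values).
    return ph_ref + slope * (g - g_ref)

rng = np.random.default_rng(2)
photons = rng.poisson(30, size=(128, 128)).astype(float)   # stand-in intensity channel
g_image = rng.uniform(0.39, 0.65, size=(128, 128))         # stand-in G-value channel

ph_map = g_to_ph(g_image)
ph_map[photons < 15] = np.nan      # apply the 15 photons/pixel intensity threshold

plt.imshow(ph_map, cmap="turbo", vmin=4.6, vmax=7.4)
plt.colorbar(label="pH")
plt.show()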
TMEM106b-mApple also displayed low levels of cytosolic fluorescence, likely due to over-expression of the protein. This signal was of substantially lower intensity than the signal from the endosomes/lysosomes, and if a low intensity (15 photons/pixel) threshold is applied to the images, the cytosolic signal can be removed. The mean G value for the TMEM106b-mApple image was 0.57, which corresponds to a pH of 5.6, while the modal G value was 0.62, corresponding to a pH of 5.1. Both mean and modal pH were lower than for TfR-mApple, again indicating that TMEM106b-mApple predominantly resides later in the endo/lysosomal trafficking pathway. TMEM106b-mApple exhibited a similarly broad pH range, with the majority between the pH of 4.8 and 6.8.

Quantification of sub-cellular pH

Although analysing the phasor plot of several whole cells can be useful to give a qualitative picture of pH diversity in the cell, the wide distribution of pH measurements observed for TMEM106b and TfR-mApple highlights the heterogeneous nature of pH environments within the cell. FLIM provides spatial resolution down to the limits of confocal microscopy (~250 nm), which enables analysis of individual endosomes. We have demonstrated the high spatial resolution of the data with the pH of individual endo/lysosomes clearly visible (Fig. 2e, f). These images also show that the wide pH distribution observed stems from individual endosomes with different pH, rather than uncertainty in the pH measurement (Fig. S11). Although it is possible to manually mask and measure the pH of individual endo/lysosomes, this type of image analysis is by its nature subjective and is time-consuming due to the large number of vesicles detected per cell (>50). To address this, we have trained an established convolutional neural network (StarDist) 34 to identify vesicles and developed an algorithm to determine the pH of each detected endosome. This algorithm calculates the endosomal pH by interrogating each pixel in the detected endosome and calculating the intensity-weighted mean G value (Eq. (1)):

Mean weighted G value (vesicle) = Σ(Pixel photon count × Pixel G value) / Σ(Photon counts in vesicle)    (1)

The G value is converted to pH using the calibration curve in Fig. 1h. Using this workflow, we analysed the pH of >3700 individual vesicles from >30 TfR-mApple cells and >6000 individual vesicles from >150 TMEM-mApple cells (Fig. 3, Fig. S12). The histograms show the pH of individual vesicles and highlight the variation in pH between individual vesicles in the cell. The ability to observe the distribution of pH within the cell is a key advantage of the pHLIM technique. These results show a similar trend to that observed in the bulk analysis above, but enable pH quantitation of individual vesicles within the cell. In NIH-3T3 cells, TfR-mApple endosomes exhibited a higher mean pH (5.9) than the TMEM106b-mApple endosomes (5.0) (Fig. 3e, f). Similar results were observed in HEK293 cells (Figs. S13, S14). The TfR-mApple mean pH was significantly reduced in this analysis compared to manual analysis of the whole image, as the algorithm limits the contribution of the surface signal, which otherwise skews the mean TfR-mApple pH higher. The analysis of individual endosomes shows a narrower distribution of pH for TMEM106b-mApple vesicles (majority between 4.8 and 5.3) compared to the TfR-mApple endosomes (majority between 5.1 and 6.8). Similar results were observed when the analysis was expanded to further images (Fig. S12).
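Equation (1) amounts to an intensity-weighted mean of the G values inside each segmented object, followed by the linear conversion to pH. The sketch below assumes a label image from the segmentation step, uses illustrative calibration anchors rather than the fitted values from Fig. 1h, and is written in Python although the published scripts are implemented in FIJI/ImageJ.

import numpy as np

def vesicle_ph(labels, photons, g_values, slope=-10.0, g_ref=0.39, ph_ref=7.4):
    # Intensity-weighted mean G per labelled vesicle (Eq. (1)) converted to pH.
    # labels: integer label image (0 = background); photons: photon counts;
    # g_values: per-pixel phasor G channel.
    results = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue
        m = labels == lab
        g_mean = np.sum(photons[m] * g_values[m]) / np.sum(photons[m])
        results[int(lab)] = ph_ref + slope * (g_mean - g_ref)
    return results

# Minimal example with two synthetic vesicles.
labels = np.zeros((8, 8), int)
labels[1:3, 1:3] = 1
labels[5:7, 5:7] = 2
photons = np.full((8, 8), 20.0)
g = np.full((8, 8), 0.39)
g[5:7, 5:7] = 0.60
print(vesicle_ph(labels, photons, g))   # vesicle 1 ~ pH 7.4, vesicle 2 ~ pH 5.3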
We also investigated the intra-vesicular pH variability of the detected compartments (Fig. 2b, c), which indicated that the average standard deviation in pH within each vesicle for TfR-mApple was 0.22, compared to 0.19 for TMEM-mApple (Fig. S11). Temporal resolution of pH The dynamic nature of endosomal trafficking means that pH can change rapidly within the cell. To demonstrate the temporal resolution of this method, we acquired images at a rate of 0.7 frame/s to track dynamic changes in endosomal pH (Supplementary movie 1). To demonstrate the mApple pHLIM sensor can be used to probe changes in pH induced by drugs, we next moved to assess the effect of adding the V-ATPase H + pump inhibitor bafilomycin A1 35 (BafA1). We first confirmed that the presence of BafA1 did not influence the lifetime of mApple (Fig. S15). Incubating TMEM106b-mApple expressing cells with 100 nM BafA1 for 1 h resulted in a clear increase in pH, both visually by colour differences (Fig. 4a, b) and when the pH distribution of vesicles was plotted (Fig. 4c). BafA1 caused a significant (p < 0.05) increase in the total mean vesicle pH (Fig. 4d, Supplementary movie 2). The average pH of TMEM106b-mApple endosomes increased from pH 5.3 (G = 0.60) to pH 5.7 (G = 0.55) after 15 min, then to pH 6.1 (G = 0.52) after 30 min and pH 6.5 (G = 0.48) after 60 min. The addition of chloroquine also resulted in a similar increase in endosomal pH (Figs. S16, S17). This demonstrates the utility of the sensor to measure dynamic changes in pH over time within individual vesicles. Applying pHLIM to probe the buffering capacity of PEI Finally, we used the mApple pHLIM sensor to probe the proposed proton sponge hypothesis, which has been suggested to help rupture endo/lysosomes and deliver therapeutic cargo to the cytosol 36 . We followed the uptake of Cy5 labelled polyethylenimine (PEI) over 6 h into TMEM106b-mApple transduced NIH-3T3 cells. Colocalisation of PEI with mApple was observed after 30 min (Figs. S18, S19). Over the same period of time, we investigated the effects of unlabelled PEI upon lysosomal pH. Phasor overlayed images did not reveal a population of intracellular vesicles with elevated pH after treatment with PEI (Fig. 5a, b) which was also verified when our automated algorithm was used to analyse vesicle pH in comparison to untreated samples (Fig. 5c). Over the 6-h time course, the mean pH of detected vesicles ranged from pH 5.0-5.3 for both the untreated and PEI treated samples (Fig. 5d). These results suggest that there was not an observable increase in lysosomal pH after treatment with PEI over 6 h. To further investigate the potential buffering effect of PEI, we probed the pH of endosomal compartments containing PEI/DNA complexes. pDNA that encodes for EGFP was complexed with Cy5 labelled PEI. The polyplexes were incubated with TMEM106b-mApple transduced NIH-3T3 cells (2 µg/mL DNA concentration) for 4 or 6 h, and both incubation times resulted in strong GFP expression in~50% of cells after 24 h (Fig. S20). Using the StarDist algorithm, we identified all the mApple positive vesicles as well as Cy5 positive vesicles that contain the PEI/DNA polyplexes (Fig. S21). We then measured the pH of the double Cy5/mApple positive endosomes and compared them to the pH of the Cy5 negative, mApple positive endosomes (Fig. 4e). Confirming the result observed for PEI by itself, the pH of endosomes containing PEI/DNA polyplexes was not significantly different to the pH of PEI/DNA negative endosomes (both~pH 5.5- Fig. S22). 
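The comparison between polyplex-containing and polyplex-free endosomes reduces to a two-sample test on the per-vesicle pH lists extracted from the double-positive and single-positive masks. The values below are synthetic placeholders, and the choice of a Mann-Whitney U test is an assumption made for this sketch; the statistics actually applied are reported with the corresponding figures.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ph_pei_positive = rng.normal(5.5, 0.3, 300)    # placeholder per-vesicle pH values
ph_pei_negative = rng.normal(5.5, 0.3, 2200)

u, p = stats.mannwhitneyu(ph_pei_positive, ph_pei_negative)
print(f"PEI+ mean pH = {ph_pei_positive.mean():.2f}, "
      f"PEI- mean pH = {ph_pei_negative.mean():.2f}, p = {p:.2f}")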
We further investigated to see if there was a correlation between the amount of PEI in each endosome (measured from the Cy5 intensity) and the pH of the endosome. Increased sequestration of PEI in endosomes did not correlate with increased endosomal pH ( Fig. 5f and Fig. S23). Discussion Intensity-based pH measurements are hampered by the inherent drop in signal when the sensors are in their low-intensity state. Measuring the intensity of sensors when the signal has dropped to <10% of the original signal (and sometimes <1%) results in elevated levels of uncertainty in the pH measurement. It is important to note that the uncertainty in the inferred pH comes from both the intensity/lifetime measurement, and the pH calibration. This latter source of uncertainty is often ignored, but can significantly affect the accuracy of the measurement. When modelling the sigmoidal response of intensity to pH, the exponential and asymptotic regions of the curve have substantially higher uncertainty than the linear region. This is exemplified by the interpolation of pH from the intensity of muGFP, pHluorin and mApple (Fig S3). For each of these fluorophores, the uncertainty in pH is substantially higher than if the lifetime of mApple is used for pH calibration. This uncertainly can be >1 pH unit, which greatly limits the application of intensity measurements to determine physiologically relevant pH changes. Furthermore, the need for a reference fluorophore that is truly pH insensitive along with potential complications with FRET makes it difficult to ensure intensity-based measurements accurately reflect the true pH. mApple exhibits a large (1 ns), linear shift in its fluorescence lifetime in response to changes in environmental pH. The linear response of mApple fluorescent lifetime within the physiological pH range makes pH interpretation simpler and less prone to high levels of uncertainty compared to previously reported lifetime sensitive fluorophores 20,24 . Using the fluorescent lifetime decouples the pH measurement from the concentration of the sensor, eliminating the need for a reference fluorophore and associated ratiometric analysis. Despite mApple exhibiting reduced intensity at lower pH, the lifetime measurements have the same level of certainty across the physiological pH range, resulting in <0.1 pH uncertainty in the measurements on a per-pixel basis. The increased accuracy of the pH measurements using pHLIM means we can identify pH changes with a high degree of certainty as soon as subtle changes occur (as per treatment with BafA1- Fig. 4) and definitively demonstrate when no changes occur (as per treatment with PEI- Fig. 5). When expressed as a fusion protein, mApple was able to determine the pH of different subcellular compartments. Unmodified mApple distributed throughout the cytosol and as expected measured a pH of 7.4. Fusing mApple to TfR enabled visualisation of high pH (~pH 7.4) on the cell surface, as well as a diverse range of lower pH (5.2-7.0) inside the cell, consistent with localisation in the endosomal and lysosomal pathway. In comparison, TMEM106b-mApple showed no surface signal and localised in low pH vesicles (e.g. lysosomes). Colocalisation analysis with Rab5a and LAMP1 confirmed these observations, with TfR-mApple colocalising with both Rab5a and LAMP1, but TMEM106b-mApple only localising with LAMP1. To enable quantitative analysis of sub-cellular pH, we implemented a StarDist 34 deep learning algorithm to automatically detect intracellular vesicles. 
StarDist was trained to identify endosomal compartments using 8 images containing a total of 1086 individually segmented endosomes. This enabled the detection of the majority of intracellular vesicles, whilst minimising the occurrence of false positives (Fig. S24). Implementation of this algorithm permitted the pH analysis of several thousand individual endo/lysosomes across multiple images, which would otherwise be time consuming with manual analysis. This is a key advance over previous pH studies which lack the specific compartment labelling and spatial resolution to identify individual sub-cellular compartments. The automated analysis of each vesicle enables both quantitative determination of the pH, as well as determining the range and diversity of pH within the cell. In addition, it permits the dynamic measurement of labelled vesicles over several hours. The ability to quantify sub-cellular pH is important for a range of applications. For example, the acidifying endo/lysosomal pathway poses as a significant opportunity to enhance the specificity of drug delivery 37 . Through employing pH-responsive systems such as nanomaterials 38 and linkers (e.g. acetal 11,13 ), the acidifying environment can facilitate cargo delivery to desired pH compartments in the trafficking pathway. Therefore, visualising the pH of subcellular locations where these materials natively traffic is of keen interest to optimise the design of these systems. pHLIM enabled us to make dynamic quantitative pH measurements and assess the effect of two different reagents (bafilomycin A1 and PEI) on endo/lysosomal pH. Bafilomycin A1 is a V-ATPase inhibitor that prevents endo/lysosomes from maintaining low pH and is widely used to study acidification of these vesicles 20,35 . We observed a rapid onset of BafA1's neutralising effects with the endo/lysosomes of treated cells becoming significantly (p < 0.05) higher pH than untreated cells after 15 min. The neutralising effects continued over the 1-h incubation, however, began to plateau after 50 minutes at a mean pH of 6.5 compared to untreated control at pH 5.3. BafA1 increases endosomal pH by inhibiting the V-ATPase H + pump 35 , which prevents acidification of vesicles. It has also been shown that BafA1 inhibits the SERCA Ca 2+ pump, which disrupts autophagosome/lysosomal fusion independently of its effect on lysosomal pH 39,40 . However, it is possible the disruption of lysosomal fusion could play a role in increasing the pH of endosomes. To investigate this further we analysed the distribution of vesicle pH and the number of vesicles detected throughout the BafA1 treatment. If V-ATPase H + pump inhibition is the primary mechanism for increasing the endosomal pH, we would expect to see a steady increase in the pH of all the endocytic vesicles. However, if disruption of autophagosome/lysosomal fusion prevents acidification of the vesicles, we would expect to observe two vesicle populations; the initial population of vesicles with lower pH; and a new population of vesicles with a higher pH that are unable to fuse to the autophagosome/lysosomal compartments. Conventional analysis of the average pH inside the cell would not be able to distinguish between these two mechanisms, as in both cases the overall pH of the cell would increase. However, by analysing the individual vesicles we observed a single pH distribution with increasing (and narrower) pH (Fig. 4c), with a similar number of vesicles present throughout the experiment (Fig. S25). 
This shows that the primary mechanism of BafA1-induced lysosomal neutralisation is inhibition of V-ATPase H + pumps. It should be noted that this result does not contradict the findings that BafA1 can also inhibit autophagosome/lysosomal fusion. However, the lack of a second population of vesicles with a higher pH and the consistent number of vesicles suggests that over the 60-min time course, inhibition of autophagosomal/lysosomal fusion is not the driving force behind neutralisation of TMEM106b + vesicles. This analysis highlights the usefulness of pHLIM in making dynamic intracellular measurements of all detected vesicles within the field of view. We next investigated the purported buffering effects of PEI upon vesicular pH. The delivery of biological therapeutics to their site of action in the cytosol is a significant challenge, as most biologics which are endocytosed into these endo/lysosomal compartments are degraded 41 . Delivery to the cytosol (also referred to as endosomal escape 42 ) is very inefficient, with <2% of internalised material being trafficked to the cytosol 43,44 . To overcome this, some pH-responsive materials can be engineered to induce endosomal escape in the endo/ lysosomal pathway 45 . However, the mechanisms by which these materials escape the endosome is not clear and is hotly contested. One proposed mechanism is the proton sponge effect 36 , where polymers can buffer the acidification of endosomal compartments, which in turn leads to an increase in osmotic pressure as counter ions are pumped into the endosomes to balance the overall charge. It is proposed the osmotic pressure reaches a point which ruptures the endosomal compartment, delivering the contents of the endosome to the cytosol. There is mounting evidence to suggest there are significant limitations with this hypothesis, including demonstrating that polymers with high buffering capacity do not have increased endosomal escape 46 , and ratiometric pH studies have failed to observe buffering of the pH 47 . However, these ratiometric methods have been hampered by high levels of uncertainty in the inferred pH (as demonstrated in Fig. S3) and have typically relied on treating the cells with synthetic pH sensors that do not necessarily localise to the same cellular compartments as the polymers. A number of reports also lack evidence to show colocalisation of PEI with the specific subcellular compartments that are being measured. Here, we have demonstrated that Cy5 labelled PEI strongly colocalises with TMEM106b-mApple within 60 min. Despite this strong colocalisation, we did not observe any buffering from the PEI over 6 h. This was in stark comparison to BafA1 where elevated pH effects were observed as early as 15 min after treatment. By using Cy5 labelled PEI/ DNA complexes, we were also able to measure the pH of individual PEI/ DNA positive endosomes and compare the pH to endosomes in the same cell without PEI/DNA (Fig. 5e). There was no difference in the mean pH or the pH distribution in TMEM106b-mApple vesicles with or without PEI. Furthermore, in addition to measuring the pH of each individual endosome, we measured the fluorescence intensity of Cy5 in each endosome to determine the relative amount of PEI. We would anticipate that if PEI exerts a buffering effect in endocytic vesicles, vesicles with a greater amount of PEI would have a higher pH. By plotting the amount of PEI vs pH for >2500 individual endosomes from 3 independent replicates (Fig. 
5f) we have shown that increased sequestration of PEI in endosomes does not correspond to a higher endosomal pH. Our results here show that (a) the average pH of vesicles does not change with PEI treatment, (b) there is no population of vesicles with higher pH and the distribution of pH is similar regardless of if the vesicle contains PEI or not, (c) there is no correlation between the amount of PEI in the vesicle and the pH. All combined, this strongly suggests that the proton sponge effect is not the predominant mechanism by which cytosolic delivery is induced by PEI. We have demonstrated that FLIM measurements of mApple, combined with automated analysis of individual endosomes enables quantitative and accurate measurement of intracellular pH across the physiologically relevant pH range. This technique has a number of advantages over existing methods. (1) Simplicity: FLIM only requires a single measurement, rather than needing ratiometric measurements of two fluorophores. (2) Accuracy: our pHLIM measurements are accurate to <0.1 pH unit, compared to >0.5 for intensity-based measurements. (3) Responsive range: mApple exhibits a linear lifetime response across the tested physiological pH range. (4) Sub-cellular quantification: the application of StarDist enables the distribution of pH within the cell to be determined. (5) Endosome composition: we can identify which endosomes contain material (such as PEI) and correlate the pH to the amount of material in the endosome. Furthermore, because mApple is a genetically encodable sensor, we were able to express it in various intracellular locations such as the cytosol, cell surface, endosomes or lysosomes, which permitted local pH measurements at each of these locations. We were able to interrogate pH changes in response to treatment with bafilomycin A1 and PEI. Although substantial changes in lysosomal pH were observed with BafA1, changes in lysosomal pH were not observed over 6 h despite substantial colocalisation of PEI with these compartments. These results highlight the power of coupling a genetically encodable pH sensor with an automated detection and analysis workflow to make robust intracellular pH measurements. The simple and quantitative pHLIM technique outlined here has the potential to improve our understanding of drug action in addition to disease progression and will also be a valuable tool to help design the next generation of controlled drug release systems. Buffers and materials All chemicals and materials were purchased from Sigma Aldrich except where specified. Cell culture materials were purchased from Thermo-Fisher Scientific. DNA cloning reagents including restriction enzymes, DNA polymerases and NEBuilder HiFi DNA assembly master mix were obtained from New England Biolabs. Buffers for assessing fluorescence lifetime were composed of either 0.01 M PBS (pH 6.5-7.4) or a 0.01 M citrate buffer (pH 4.6-6.0). Plasmid construction All plasmids were constructed using NEBuilder HiFi DNA assembly master mix with PCR products, vector restriction digests or DNA oligonucleotides with compatible overhangs. All synthetic oligonucleotides were obtained from Integrated DNA Technologies (IDT). Cloning was performed in TOP10 chemically competent Escherichia coli (E. coli) (ThermoFisher Scientific). mApple and TfR DNA were obtained from mApple-Lysosomes-20 (RRID:Addgene_54921) and mCherry-TFR-20 (RRID:Addgene_55144), respectively, which were a gift from Michael Davidson. 
The sequence for transmembrane protein 106b (TMEM106b) was obtained from the Gene database of the National Center for Biotechnology Information 48 (Gene ID: 54664) and ordered as a plasmid from Twist Bioscience, inserted into pTwist Lenti SFFV Puro WPRE. TMEM106b and TfR DNA were amplified for insertion as an N-terminal fusion to mApple and subcloned into the third-generation lentiviral plasmid pCDH-EF1-IRES-Puro (System Biosciences) which was digested with EcoRI and NotI (TMEM106b) or NheI and NotI (TfR) restriction enzymes. mApple was also inserted into pCDH-EF1-IRES-Puro alone by amplifying the mApple DNA from mApple-Lysosomes-20. These plasmids are available from Addgene (RRID:Addgene 179383, 179384 and 179385). The plasmid encoding CMV-EGFP was generated by digesting sfGFP-TFR-20 (RRID: Addgene_56488, a gift from Michael Davidson) with NheI and AgeI restriction enzymes before blunting with T4 DNA Polymerase, and blunt end ligation with T4 DNA ligase. For expression in E. coli, mApple DNA was inserted into pET His6 TEV LIC cloning vector (1B), a gift from Scott Gradia (RRI-D:Addgene_29653). mEmerald-Rab5a and mEmerald-Lysosomes-20 were both gifts from Michael Davidson (RRID:Addgene_54243 and RRID:Addgene_54149, respectively). Protein expression and purification pET-His6-mApple was recombinantly expressed and purified using a previously reported method 44 by transformation into the E. coli strain B-95.ΔA 49 . Briefly, transformed bacteria were directly inoculated into a 2 L plastic baffled flask (Thomson Instrument Company) containing 200 mL optimised growth medium with 15 g/L tryptone, 30 g/L yeast extract, 8 mL/L glycerol (Promega), 10 g/L NaCl and shaken at 200 RPM overnight at 37°C. High-density cultures were then reduced to room temperature and induced with 0.4 mM IPTG (Roche) for 6 h. Bacteria were harvested by centrifugation at 4000 g. The bacterial pellet was resuspended in a high salt buffer (1 M NaCl, 50 mM Imidazole, 50 mM monosodium phosphate, adjusted to pH 8.0) supplemented with complete EDTA-free protease inhibitors, 2 mM MgCl 2 and benzonase. Resuspended bacteria were lysed by homogenisation with an EmulsiFlex-C3 (Avestin) before centrifugation at 12,000 × g for 1 h and clarified through a 0.45-µm syringe filter to remove cellular debris. Protein was purified by immobilised metal affinity chromatography (IMAC) using Protino Ni-NTA agarose (Machery-Nagel). Captured protein was washed copiously with high salt buffer and a low salt buffer (100 mM NaCl, 50 mM Imidazole, 50 mM monosodium phosphate, adjusted to pH 8.0) before elution (300 mM NaCl, 450 mM Imidazole, 50 mM monosodium phosphate, adjusted to pH 8.0). Eluted mApple was concentrated and buffer exchanged into pH 7.4 PBS using 10 kDa molecular weight cut-off Amicon centrifugal filters (Merck). Protein concentration was determined by A568 with e = 82,000 M −1 cm −1 27 . Labelling of PEI PEI (Mn~1200 g mol −1 , product #482595) at 1 mg mL −1 in MilliQ water was incubated with 5 molar equivalents of Cyanine5 succinimidyl Ester (Lumiprobe) in a total volume of 35 µL for 2 h at room temperature. Removal of excess dye was achieved by 0.5 mL Zeba spin desalting columns (7 kDa molecular weight cut-off) which were equilibrated with PBS, according to manufacturer's instructions. Lentivirus production and transduction HEK293-FT cells were seeded one day prior at 400,000 cells/well in 6-well culture plates. 
The following day, lipofectamine 3000 (ThermoFisher Scientific) was used to transfect the cells with transfer plasmid and third-generation lentiviral vectors to generate lentivirus. 48 h post transfection, HEK293-FT culture medium was clarified with a 0.45 µm syringe filter before being applied to NIH-3T3 cells, seeded one day prior in a 12-well culture plate at 50,000 cells/well. NIH-3T3 cells were grown to ~80% confluency then selected with 2 µg mL −1 puromycin for positive incorporation of the transfer gene.

Transient expression of endosomal stage markers

NIH-3T3 cells expressing either TfR or TMEM106b fused mApple were seeded at 2500 cells/well in a black 96-well clear bottom plate. The following day, lipofectamine 3000 was used to transfect Rab5a-mEmerald or LAMP1-mEmerald plasmids. In both constructs the mEmerald resides on the cytosolic side of the endosomal membrane. Cells were imaged live 48 h later using a LEICA SP8X FALCON confocal system with a HC PL APO 86× 1.2NA water immersion objective. Excitation for mEmerald and mApple was from a SuperContinuum WLL at 488 nm and 561 nm, respectively. Emission was collected on SMD HyD detectors at 500-560 nm for mEmerald and 580-695 nm for mApple with a pixel size of 133 nm.

Fast fluorescence lifetime microscopy

Traditional time-correlated single-photon counting (TCSPC) is intrinsically slow, requiring long integration times. Here we used a Leica SP8 FALCON (FAst Lifetime CONtrast) microscope to acquire the FLIM data. The FALCON system uses pattern recognition analysis of digitised signal from the spectral single-photon counting detectors, and transforms this signal into photon arrival times. This approach allows for significantly higher photon flux, resulting in shorter integration times for each image 50 . mApple was excited at 561 nm with a repetition rate of 80 MHz and emission was detected from 571 to 660 nm. 8-16 lines were accumulated per capture to increase photon counts, with a pixel size of 133 nm.

mApple calibration

For the pH calibration, 5 µL of 75 µM mApple protein was combined with 120 µL of relevant pH buffer in a black 96-well clear bottom plate. For ionic strength calibration, 2x PBS (300 mM ionic strength) was used to create solutions of relevant ionic strengths by dilution with MilliQ water before combining mApple with these buffers in the ratio outlined above. The plate was then imaged on a LEICA SP8X FALCON confocal system as mentioned above. The system was pre-warmed to 37°C in the focal plane just above the surface of the plate. Fast fluorescence lifetimes (Fig. 1c) were calculated by applying a bin of 8 to the captured image then processing according to Eq. (2). In the case of the G value calibration (Fig. 1g), a bin of 8 was applied to the captured image and the phasor G coordinates were averaged according to Eq. (3):

Mean weighted G value (image) = Σ(Pixel photon count × Pixel G value) / Σ(Photon counts in image)    (3)

The G value calibration was fit with linear regression and 95% prediction bands were plotted in Prism. For the intensity calibration (Fig. 1f), intensity values from the acquired images were normalised to the maximum value at pH 7.4, then the data were fitted with a four-parameter logistic sigmoidal fit in Prism. 95% prediction bands were plotted using Prism. Uncertainty in the pH measurement for the intensity and G value calibrations was determined from the 95% asymmetric confidence interval.
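For readers reproducing the calibration outside Prism, the sketch below fits the linear G-vs-pH relation and computes a 95% prediction band, then inverts the fit to read pH from a measured G value. The calibration points are placeholders consistent with the trend reported in the text, not the measured data of Fig. 1h.

import numpy as np
from scipy import stats

# Placeholder calibration points (buffer pH vs mean weighted G value).
ph = np.array([4.6, 5.0, 5.5, 6.0, 6.5, 7.0, 7.4])
g = np.array([0.67, 0.63, 0.58, 0.53, 0.48, 0.43, 0.39])

res = stats.linregress(ph, g)
n = ph.size
resid = g - (res.intercept + res.slope * ph)
s_err = np.sqrt(np.sum(resid ** 2) / (n - 2))
t_crit = stats.t.ppf(0.975, n - 2)

def prediction_halfwidth(x):
    # Half-width of the 95% prediction interval for a new G measurement at pH x.
    return t_crit * s_err * np.sqrt(1 + 1 / n + (x - ph.mean()) ** 2
                                    / np.sum((ph - ph.mean()) ** 2))

x = 6.2
print(f"G({x}) = {res.intercept + res.slope * x:.3f} +/- {prediction_halfwidth(x):.3f}")
print(f"pH at G = 0.55: {(0.55 - res.intercept) / res.slope:.2f}")   # inverted calibration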
Live cell imaging
NIH-3T3 cells expressing the relevant fluorescent proteins were seeded one day prior in cell culture medium at 10,000 cells/well in black 96-well clear-bottom plates. Prior to imaging, the cell medium was replaced with pre-warmed (37°C) imaging medium (FluoroBrite, 10% FBS) and incubated for at least 10 min before being inserted into the pre-warmed (37°C, 5% CO2) microscope. If BafA1 or PEI were to be added, they were diluted to a final concentration of 100 nM or 80 µg/mL, respectively, in imaging medium before replacing the original cell culture medium after the plate was inserted into the microscope. The same regions were imaged at the relevant timepoints. In the case of Cy5-PEI treatment, cells were treated with Cy5-PEI at 80 µg/mL for the indicated time and washed stringently with imaging medium before imaging. It was not possible to image the same cells over the time course with Cy5-PEI treatment due to the high signal of Cy5-PEI in the surrounding medium. In the case of Cy5-PEI pDNA polyplex treatment, polyplexes were assembled by combining Cy5-labelled PEI (as above) and transfection-grade PEI (PEI MAX linear, Polysciences, MW 40,000) at a weight ratio of 1:5, respectively, then adding EGFP-encoding pDNA at a weight ratio of 1:40 (pDNA:PEI) in FluoroBrite supplemented with 10% FBS to a final pDNA concentration of 2 µg/mL. This polyplex solution was incubated with NIH-3T3 cells transduced with TMEM-mApple for 4 or 6 h, after which the solution was removed and the cells were washed three times with FluoroBrite. Cells were imaged after washing, then returned to the incubator for EGFP transfection assessment the following day.

Transfection analysis of PEI/pDNA polyplexes
Twenty-four hours after the initial addition of PEI/pDNA polyplexes above, cells were detached from the imaging plate using TrypLE and the EGFP fluorescence was quantified by flow cytometry using a Stratedigm S1000EON with a 488 nm laser. Fluorescence emission was collected from 500 to 540 nm for ~10,000 events per sample. FCS3.0 files were exported using CellCapTure Analysis Software (Stratedigm, California, USA) and analysed using FlowJo (version 10, Becton, Dickinson and Company; 2021).

Training of StarDist
The StarDist algorithm was trained using the ZeroCostDL4Mic 51 Google Colab notebook. The images for training were initially segmented using the pre-trained "versatile (fluorescent nuclei)" model with normalised images, percentile low = 0.5, percentile high = 99.8, probability threshold = 0.05 and overlap threshold = 0. This resulted in images with a large number of false positives, but with nearly all the endosomes identified. These images were then individually inspected and all the false-positive ROIs were deleted. Eight 512 × 512 images with >250 endosomes identified per image were uploaded into the 2D StarDist ZeroCostDL4Mic Google Colab notebook and training was performed using the default settings.

Automated analysis of images
Images were exported by applying a preview filter with a value of 1000 and a phasor threshold of 5. Exported images were analysed using custom FIJI (ImageJ) scripts. Briefly, to identify vesicles, the analyse_FLIM_Images_with_Stardist.ijm script employs a custom-trained StarDist model (outlined above), which automatically detects endosome-like objects in the mApple intensity channel. The algorithm was run with the following settings: normalize input = true, percentile low = 25, percentile high = 99.8, probability threshold = 0.5, overlap threshold = 0.
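The published segmentation is run through ZeroCostDL4Mic and FIJI with a custom-trained model; purely as an orientation aid, the sketch below shows roughly equivalent detection settings using the StarDist Python API and its pre-trained "versatile (fluorescent nuclei)" model (the initial-annotation settings quoted above). The input file name is a placeholder.

```python
# Rough Python equivalent of the StarDist detection settings described above
# (pre-trained "2D_versatile_fluo" model, percentile normalisation 0.5-99.8,
# probability threshold 0.05, overlap/NMS threshold 0). The published analysis
# used a custom-trained model run from FIJI; "mapple_intensity.tif" is a
# placeholder file name.
from csbdeep.utils import normalize
from stardist.models import StarDist2D
from tifffile import imread

img = imread("mapple_intensity.tif")           # mApple intensity channel
img_norm = normalize(img, 0.5, 99.8)           # percentile normalisation

model = StarDist2D.from_pretrained("2D_versatile_fluo")
labels, details = model.predict_instances(img_norm, prob_thresh=0.05, nms_thresh=0.0)

print(f"{labels.max()} candidate endosome-like objects detected")
```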
The intensity channel of each image had an intensity threshold of 15 photons/pixel applied and was then segmented into endosomes by the algorithm. The segmentation mask was then applied to the corresponding G value channel with an upper area cut-off of 2 µm². Intensity and G value data within each detected endosome were then used to determine the average weighted G value of the vesicle according to Eq. (1). The mean weighted G value was used as it weights the mean towards G values with higher photon representation in the vesicle. Note that intensity values correspond directly to the number of detected photons. Mean weighted G values were then converted to a mean weighted pH as per the linear trend observed in Fig. 1g. A custom R script was used to analyse and plot these data. To identify vesicles that were double positive for both mApple and PEI (Cy5), the Create_Double_Positive_mask.ijm script was used. Briefly, this script uses the same StarDist model outlined above to separately identify vesicles that are positive for mApple or PEI in their respective channels. These masks are then eroded by one pixel to limit the detection of vesicles that are close to each other but not completely coincident. A mask for vesicles that have signal from both the mApple and PEI channels (using the original non-eroded mask for mApple) is then created, as well as a mask of vesicles that contain only mApple. The same script was also used to calculate the coincidence of mApple with Rab5a and LAMP1. The percentage coincidence for each image was calculated by dividing the number of double-positive vesicles detected by the total number of mApple-positive vesicles detected. To determine the pH of vesicles that contain PEI versus those that do not, the masks generated by the Create_Double_Positive_mask.ijm script were used in conjunction with Analyse_FLIM_Images_with_precalculated_masks.ijm. This script works the same way as the analyse_FLIM_Images_with_Stardist.ijm script, except that it uses a predetermined mask instead of StarDist to identify the vesicles. To correlate the intensity of PEI in each vesicle with the pH of the vesicle, the Quantify_pH_mask&intensity.ijm script was used. This script takes the G value pH image, the PEI intensity image and the double-positive mask (generated from Create_Double_Positive_mask.ijm) and calculates the pH and total PEI (Cy5) intensity based on the supplied double-positive mask.

Statistics and reproducibility
No data were excluded from the analyses. The number of biological replicates and the statistical tests used to determine significance are outlined in the figure captions. No statistical method was used to predetermine sample size.

Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability
All data including microscopy images generated in this study are provided in the Source data file and are also available on figshare (https://doi.org/10.6084/m9.figshare.20454867.v1). Source data are provided with this paper.

Code availability
Custom ImageJ scripts used in this study are available with this manuscript as Supplementary Software files, and are also available on figshare (https://doi.org/10.6084/m9.figshare.20454867.v1).
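The per-vesicle quantification above (photon-weighted mean G per segmented object, conversion to pH with the linear calibration, and the double-positive coincidence percentage) can be sketched as follows. The calibration slope and intercept are placeholders, since the text refers to the fit in Fig. 1g without giving its coefficients, and the label image is assumed to come from the StarDist step.

```python
# Sketch of the per-vesicle analysis described above. Assumptions: `labels` is
# a StarDist label image, `photon_counts` and `g_values` are matching 2D
# arrays, and the G-to-pH calibration slope/intercept are placeholders (the
# coefficients of the Fig. 1g fit are not given in the text).
import numpy as np

def vesicle_ph(labels, photon_counts, g_values, slope, intercept,
               min_photons=15, max_area_px=None):
    """Return {vesicle_id: pH} from photon-weighted mean G per vesicle (Eq. 1)."""
    counts = np.where(photon_counts >= min_photons, photon_counts, 0).astype(float)
    results = {}
    for vid in np.unique(labels):
        if vid == 0:
            continue  # background
        mask = labels == vid
        if max_area_px is not None and mask.sum() > max_area_px:
            continue  # upper area cut-off (2 um^2 in the original analysis)
        total = counts[mask].sum()
        if total == 0:
            continue
        g_mean = (counts[mask] * g_values[mask]).sum() / total
        results[int(vid)] = slope * g_mean + intercept  # linear G -> pH conversion
    return results

def coincidence_percent(n_double_positive, n_mapple_positive):
    """Double-positive vesicles as a percentage of all mApple-positive vesicles."""
    return 100.0 * n_double_positive / n_mapple_positive if n_mapple_positive else 0.0
```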
Antimicrobial Peptides: Insights into Membrane Permeabilization, Lipopolysaccharide Fragmentation and Application in Plant Disease Control The recent increase in multidrug resistance against bacterial infections has become a major concern to human health and global food security. Synthetic antimicrobial peptides (AMPs) have recently received substantial attention as potential alternatives to conventional antibiotics because of their potent broad-spectrum antimicrobial activity. These peptides have also been implicated in plant disease control for replacing conventional treatment methods that are polluting and hazardous to the environment and to human health. Here, we report de novo design and antimicrobial studies of VG16, a 16-residue active fragment of Dengue virus fusion peptide. Our results reveal that VG16KRKP, a non-toxic and non-hemolytic analogue of VG16, shows significant antimicrobial activity against Gram-negative E. coli and plant pathogens X. oryzae and X. campestris, as well as against human fungal pathogens C. albicans and C. grubii. VG16KRKP is also capable of inhibiting bacterial disease progression in plants. The solution-NMR structure of VG16KRKP in lipopolysaccharide features a folded conformation with a centrally located turn-type structure stabilized by aromatic-aromatic packing interactions with extended N- and C-termini. The de novo design of VG16KRKP provides valuable insights into the development of more potent antibacterial and antiendotoxic peptides for the treatment of human and plant infections. a high degree of variability in their sequence, mass, charge and three-dimensional structure 8 . They constitute a vast group of molecules that are widely distributed throughout nature 9 . A variety of organisms, ranging from invertebrates to plants, animals and humans, produce AMPs to protect themselves against infection, and share common elements in their defense mechanisms against pathogens 6 . In fact, AMPs are less susceptible to fall prey to bacterial resistance than traditional antibiotics 10 . A majority of these AMPs are cationic and selectively bind to the negatively charged lipids of bacterial membrane, mainly through an electrostatic interaction, and have the ability to follow an amphipathic arrangement, with a segregation of the charged face from a hydrophobic face that permits its entry into the hydrophobic microbial membrane, leading to membrane disruption and cell death [11][12][13] . In case of Gram-negative bacteria, AMPs have to encounter lipopolysaccharide (LPS), a major component present in leaflet of the outer membrane, in order to gain access into the plasma membrane [14][15][16] . LPS acts as an efficient barrier against entry of antibiotics or antimicrobial proteins or peptides rendering them inactive; the observed resistance in Gram-negative bacteria may therefore be attributed fairly to LPS, although other modes of AMP resistance do exist 6 . A number of recent studies have demonstrated that bacterial resistance to cationic AMPs might occur through a variety of mechanisms, including chemical modification of membrane lipids, repulsion via modification of negative charges in their membrane, sequestration, proteolytic destruction, export through efflux pumps, uptake and destruction via transporters, and release of glycosaminoglycans (GAGs), polysaccharides and other polyanionic scavenging species [17][18][19][20] . 
A major concern to global food security involves the significant worldwide loss in crops caused by plant pathogens such as bacteria, viruses, fungi and other microbial organisms; such losses account for more than 10% of the overall loss in global food production 21 . Due to their genetic variability and ability to mutate, plant pathogens continuously invade plants and compromise their tendency for growth and reproduction. Prevention and control of bacterial and fungal diseases in plants is largely based on copper compounds and other synthetic chemicals, which are considered to be environmental pollutants and may be toxic or even carcinogenic 22 . Consequently, the development of non-toxic and non-polluting treatments to control bacterial and fungal diseases in plants has been the focus of extensive research in agriculture. In this regard, non-cytotoxic membrane-associated peptides with LPS-binding affinities have attracted considerable attention as promising antibiotics for agricultural applications and plant disease control. In this study, we have investigated the antimicrobial properties of VG16, a 16 residues conserved fusion peptide chiefly responsible for host endosomal membrane fusion with viral envelope and subsequent progression of infection ( Fig. 1A-C) 23 . The structural and functional characterization of the interaction of VG16 with different model membranes, such as zwitterionic dodecylphosphocholine (DPC) detergent micelles, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC)/1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl glycerol (POPG) lipid vesicles and anionic sodium-dodecyl-sulfate (SDS) detergent micelles, have shown that VG16 forms a loop-like structure in both neutral DPC/POPC and anionic POPG membranes 23 . A close inspection of the three-dimensional structure determined by NMR spectroscopy reveals that the structure is stabilized by a hydrophobic triad formed by Trp101, Leu107 and Phe108 of VG16 (Fig. 1B) 23 . This hydrophobic packing interaction is very crucial for membrane fusion. For instance, replacement of Trp101 with Ala eliminated the hydrophobic triad formation and completely abolished membrane fusion 23 . Since anionic membrane mimetic models, such as SDS micelles and POPG vesicles, are bacterial membrane mimetic models, the loop-like structure motivated us to utilize VG16 peptide as a building block for the de novo design of antimicrobial peptides against Gram-negative bacteria. In this study we show that VG16KRKP, a 16 residues analogue of VG16, exhibits a ~10-fold reduction in the MIC values against a range of Gram-negative bacteria (Fig. 1D). We also report live-cell NMR study of this peptide and attempt to provide a correlation between the three-dimensional solution structure of designed AMPs in lipopolysaccharide (LPS) (mimics the outer-membrane of Gram-negative bacteria) and its direct application to treat pathogenic bacterial infection in rice and cabbage, caused by Xanthomonas oryzae and Xanthomonas campestris, respectively. Our findings indicate that the designed peptide is capable of resisting disease progression in plants. Results and Discussion Evaluation of antimicrobial activities of rationally designed AMPs. Several crystal structures of LPS-binding receptors, co-crystallized with LPS, have shown that several positively charged amino acid residues are required to stabilize the complex structure through the formation of plausible salt bridges and/or hydrogen bonds between LPS phosphate groups and protein basic residues 24 . 
Therefore, a high positive charge may also be vital for overcoming the anionic LPS barrier. In fact, the structured LPS-binding motif of YW12, a potent AMP designed on the basis of the structure of a β-barrel outer membrane protein of E. coli (FhuA) co-crystallized with LPS, comprises a centrally located stretch of four consecutive Lys and Arg residues 14,25 . This stretch shows multiple hydrogen bonds and salt bridge interactions with the biphosphate groups of lipid A in LPS 25,26 . However, inspection of the amino acid sequence of VG16 revealed a paucity of positively charged residues (Fig. 1C), which are responsible for the electrostatic interaction between the peptide and anionic LPS that enables cell-mediated uptake of the AMPs into the hydrophobic interior. Thus, we hypothesized that inserting a cationic "KRK" stretch into the VG16 peptide would improve its potency against Gram-negative bacteria (Fig. 1C). To this end, we designed VG16KRKP, where Arg and Lys residues were introduced in the extended loop region observed in the NMR structure of VG16 23 . Moreover, Pro10 was also inserted in the central region to bring hydrophobic and aromatic residues, such as Leu11 and Phe12, close to Trp5 (Fig. 1C). Interestingly, VG16KRKP is capable of neutralizing LPS by around 50% at a concentration of 12 μM (Fig. 1D). VG16 alone, without the KRKP residues, showed neither any bactericidal effect nor antifungal activity against the strains tested up to a concentration of 100 μM (Fig. 1D). Regarding bacterial selectivity, VG16KRKP showed an MIC value of 8 μM for E. coli but no activity against P. aeruginosa, indicating that the peptide is highly selective, even though both are Gram-negative bacteria. This may be attributed in part to the alginate capsule present outside the bacterial membrane in the case of P. aeruginosa, which is known to inhibit the entry of antimicrobial agents, rendering them inactive 27,28 . Nonetheless, further studies are needed to investigate the presence of other potential modes of action, if any, of the designed peptide. VG16KRKP was active against the plant pathogens X. campestris and X. oryzae, with comparable MIC values (Fig. 1D). It also inhibited the growth of B. subtilis with an MIC value of 50 μM. Moreover, VG16KRKP also showed strong antifungal activity against Candida albicans and Cryptococcus grubii, with MIC values of 2 and 5 μM, respectively (Fig. 1D). In all cases, VG16 and VG16A are inactive, suggesting the importance of the presence of positive charges in the amino acid sequence. Studies on the effects of net positive charge, hydrophobicity and amphipathicity on the activity of AMPs have shown that increasing the positively charged residues and hydrophobicity up to a certain extent, while maintaining amphipathicity, leads to an increase in observed antimicrobial activity and bacterial cell selectivity 29,30 . In light of these results, our further studies focused exclusively on the VG16KRKP peptide.

Live-cell NMR spectroscopy provides information on the disruption of bacterial membrane leading to cell lysis. Interaction of the designed VG16KRKP peptide with E. coli (DH5α) cells was investigated at different peptide concentrations as well as at different peptide-to-cell ratios using solution NMR spectroscopy.
Under all employed experimental conditions, the cells started to die rather immediately after peptide addition, as evidenced from the appearance of new peaks corresponding to the metabolites released from the cells lysis ( Fig. 2A). In particular, for the untreated cells, after overnight incubation, the number of vital cells was comparable with those at t 0 , while for those treated with the peptide, typically a reduction of 1 to 2 orders of magnitude in the number of colony forming units (CFU) was observed (data not shown). These data represent a further demonstration of antibacterial activity of the peptide. One-dimensional 1 H NMR spectra reveal dramatic broadening as well as reduction of NMR signal intensities of VG16KRKP even in the presence of different number of cells. It is worth mentioning that the concentration of the peptide was kept unchanged while the number of cells was decreased by a factor of 2, 3 or 4, depending on the dilution factor (Fig. 2B). After several hours of co-incubation, the peptide resonance intensities considerably increased, while the line shape returned to a stage comparable to those of the peptide alone, as a consequence of significant cell death and subsequent peptide dissociation ( Fig. 2A). The interaction could also be deduced from the dramatic changes in the indole (Nε H) ring protons of Trp5 (resonating at ~10 ppm) (Fig. 2C), aromatic resonances (Fig. 2D), along with methyl and other aliphatic protons (Fig. 2E) of VG16KRKP. Furthermore, scanning electron microscopy (SEM) was performed to determine the rate of killing of the bacteria by VG16KRKP. Bacterial suspension of the two Gram-negative bacteria E. coli and X. oryzae, containing 10 6 cells, were incubated with VG16KRKP for different time intervals and analyzed by SEM in order to understand the nature and extent of cell lysis ( Supplementary Fig. S1). The concentration of VG16KRKP used was close to MIC against both the Gram-negative bacteria (Fig. 1D). Interestingly, shrinkage in the bacterial wall and cell lysis, leading to leakage of intracellular material, was evident from SEM images as early as 5 min post cell incubation with the peptide (Supplementary Fig. S1). After 45 min of incubation, no clear shape for cells was observed (Fig. 2F,G), indicating that the peptide is very active and efficient against both the Gram-negative bacteria used here. VG16KRKP binds LPS, which in turn mediates its disaggregation. As mentioned earlier, AMPs should first interact with LPS before gaining access into the cell for its lysis. The intrinsic fluorescence of the Trp residue present in the peptides was used to determine the binding parameters. Addition of small aliquots of LPS into the sample containing VG16/VG16A did not show significant blue shift (~3 nm) of Trp fluorescence (Fig. 3A). In contrast, ~11 nm of blue shift was observed in the emission maxima of VG16KRKP upon successive addition of LPS (Fig. 3A). The noticeable blue shift of the emission wavelength is a strong evidence of the insertion of the Trp residue of VG16KRKP into the LPS hydrophobic environment. Additionally, downward trends of the ITC profiles were observed for the binding interaction of either VG16A or VG16KRKP with LPS, suggesting an exothermic or enthalpy-driven process where electrostatic/ionic interaction plays a vital role. Figure 3B and Supplementary Fig. S2 summarize the thermodynamic parameters of peptide binding to LPS. 
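The ITC titrations above yield equilibrium binding parameters. Purely as an illustration of what a fitted dissociation constant implies, the sketch below evaluates a simple 1:1 binding isotherm, fraction bound = [LPS] / (KD + [LPS]); the 1:1 model and the KD value used here are assumptions for illustration only, since peptide binding to micellar LPS need not follow simple stoichiometry.

```python
# Illustrative only: fraction of peptide bound versus LPS concentration under
# an assumed simple 1:1 binding model, f = [LPS] / (Kd + [LPS]). The Kd below
# is a placeholder, not a value asserted by this text.
def fraction_bound(lps_uM: float, kd_uM: float) -> float:
    return lps_uM / (kd_uM + lps_uM)

for lps in (1, 5, 10, 50, 100):  # micromolar
    print(f"[LPS] = {lps:>3} uM -> fraction bound = {fraction_bound(lps, kd_uM=10.0):.2f}")
```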
The interaction of VG16KRKP with LPS has been estimated to have dissociation constant (K D ) of 9.5 μ M, one order of magnitude lower than that for VG16A ( Supplementary Fig. S2). Taken together, these results suggest that the lack of positive charges in VG16/VG16A impedes their efficient binding to the LPS micelle. To further explore the bacterial entry process through LPS layer, a combination of spectroscopic and microscopic methods was utilized. Transmission electron microscopy (TEM) images of LPS obtained in the absence and in the presence of VG16KRKP are shown in Fig. 3C,D, respectively. LPS in aqueous solution shows a ribbon-like assembly with variable width, thickness and few hundred μ m length (Fig. 3C). This result indicates the formation of large inhomogeneous aggregation of LPS. A similar observation has been reported earlier in two independent studies 31,32 . In contrast, TEM images confirmed the disaggregation of ribbon-like assembly of LPS to small thread-like structures with filamentous forms in the context of VG16KRKP treatment for 3 hours (Fig. 3D). In addition, small dense spherical particles of LPS molecules in the presence of VG16KRKP were also observed from the TEM image (Fig. 3D). Similar morphological changes of LPS in the presence of the KYE28 peptide (derived from human heparin cofactor II) have been recently observed 33 . Shai and co-workers have also reported EM images of LPS upon treatment with a series of 12 amino-acid peptides and their fatty acid conjugated analogues to study disaggregation 29 . Similar conclusions can also be drawn from dynamic light scattering (DLS) experiments. The hydrodynamic diameter (~1000 nm) and high polydispersity of LPS in aqueous solution show twoand seven-fold decrease upon incubation with VG16 and VG16KRKP, respectively (Supplementary Fig. S3). This result also supports that VG16KRKP has a stronger effect on disaggregation of LPS micelle. Studies of LPS disaggregation using light scattering studies demonstrating a reduction in polydispersity and diameter of LPS micelles upon treatment with AMP have been previously reported 34 . In order to gain more insights into the mechanism of disruption of LPS aggregation at atomic-resolution, 31 P NMR experiments of LPS alone as well as in the presence of different concentrations of VG16KRKP were carried out using MnCl 2 , a paramagnetic quencher, as a dopant. The paramagnetic ion Mn 2+ quenches 31 P NMR peaks of LPS phosphate head groups in its vicinity. In the absence of VG16KRKP, a negligible quenching of the phosphate head group signal was observed for the sample containing 10 mM MnCl 2 and 0.5 mM LPS (Fig. 3E). The heterogeneous aggregation of LPS makes Mn 2+ ions inaccessible to the phosphate groups of LPS. Addition of VG16KRKP to the sample containing LPS at a molar ratio of 1:1 showed a negligible effect on 31 P peaks of LPS phosphate groups, confirming that LPS remains intact as a heterogeneous aggregate (Fig. 3E). However, upon subsequent addition of up to 3 mM VG16KRKP (LPS:VG16KRKP = 1:6), a drastic quenching of the intensity of 31 P peaks of LPS phosphate head groups was detected. This result points towards the fragmentation or disruption of LPS aggregation by formation of a small lipid vesicle, which tumbles sufficiently fast on the NMR time scale (Fig. 3E), suggesting that the peptide follows detergent-like mechanism to fragment LPS aggregates. 
Collectively, the results from 31 P NMR experiments on LPS in the presence of the MnCl 2 quencher support the hypothesis of a two-step mechanism of membrane fragmentation demonstrated for AMP or amyloid beta peptide 35,36 . Structural insights in the absence and presence of LPS by NMR spectroscopy. One-dimensional 1 H NMR spectrum was monitored to understand the binding of peptides to LPS. Addition of small but increasing concentrations of LPS caused visible concentration-dependent broadening (without inducing any chemical shift change) for most of the proton resonances of VG16A as well as those of VG16KRKP ( Supplementary Fig. S4), implying a fast chemical exchange between free and bound forms of the peptide in the NMR time scale, which is an ideal situation to determine the bound conformation of the peptide in the presence of LPS by transferred NOESY (trNOESY) 37,38 . It is worth mentioning that LPS aggregates into a large molecular weight micelle/bilayer at 14 μ g/mL concentration 39 . The trNOESY spectra of VG16 ( Supplementary Fig. S5A, left panel) and VG16A ( Supplementary Fig. S5A, right panel) showed very few cross peaks characterized by intra-residual as well as sequential NOE contacts between the backbone and side-chain proton resonances (Supplementary Fig. S5). It is interesting to note that 43.8% of the residues of VG16/VG16A are Gly and hence, due to its flexibility, VG16/VG16A are highly dynamic in aqueous solution as well as in LPS (Supplementary Fig. S5D). On the other hand, the trNOESY spectra of VG16KRKP at a LPS:peptide molar ratio of 1:50 yielded a large number of NOE cross peaks, thus signifying the development of a well-folded conformation (Fig. 4A,B). Analysis of the spectra revealed the presence of strong sequential α N (i, i + 1) and HN/HN NOEs for most of the residues along with few long range (i to ≥ i + 5) NOEs. A closer look at the NOE distribution showed that residues Val1, Ala2, Trp5, Cys9, Pro10, Leu11 and Phe12 were characterized by a higher number of NOE contacts in the presence of LPS (Fig. 4B and Supplementary Fig. S6). All long-range NOE contacts are summarized in Table S1. The most important long-range NOE contacts were observed between the ring protons of Trp5 and the aliphatic side-chain (β , γ and δ ) protons of Leu11. NOE contacts were additionally observed between the residues Trp5 and Phe12 (Fig. 4B). Surprisingly, the indole (Nε H) ring protons of Trp5 did not show any NOE contact with other peptide residues. The Cys9-Pro10 bond of VG16KRKP in LPS adopts trans conformation due to the presence of Cys9Cα H/Pro10Cδ Hs NOEs. Additionally, several long range NOEs such as Phe11Cδ Hs/Trp5Cβ Hs, Trp5H6/Leu11Cα H, Cys9Cα H/Lys6, Lys14Cα H/ Leu11 and Pro10Cγ Hs/Trp5H6 are also observed ( Fig. 4B and Table S1). Notably, the α N (i, i + 1) NOEs such as Trp5/Lys6 and Arg7/Lys8 are broad in nature (Fig. 4A), indicating the dynamic properties of "KRK" segment of VG16KRKP. Strikingly, the C-terminal residues Gly13-Lys14-Gly15-Gly16 of VG16KRKP did not show any NOEs in the context of LPS, indicating that this region still remains highly flexible. Three-dimensional structure of VG16KRKP in LPS. Twenty ensemble structures of VG16KRKP associated to LPS was determined using NOE based distance constraints (Fig. 4C,D and Table 1) and verified using PROCHECK NMR 40 . The LPS-bound backbone ensemble structure of VG16KRKP was rigid whereas the side chains of the positively charged residues remain highly dynamic. 
The positively charged ammonium (H 3 N + -) group of Lys residues and guanidinium groups of Arg residues of VG16KRKP maintain a distance of ~11-14 Å (Fig. 4E), comparable to that obtained between the two phosphate head groups of the lipid A moiety of LPS 41 . The structure of LPS-bound VG16KRKP is amphipathic, with the positively charged residues (Arg3, Lys6, Arg7 and Lys8) oriented in one specific direction, thus forming a charged surface region (Fig. 4D,E). Conversely, the hydrophobic residues Trp5, Leu11 and Phe12 from the central region of the peptide sequence pack together forming a hydrophobic triad, and stabilize a loop-type structure (Fig. 4D,E). This hydrophobic cluster is further intensified by the presence of Val1 and Ala2, which are packed towards Trp5, and by Pro10 (Fig. 4E). Due to the lack of NOEs, the C-terminus, Gly13-Lys14-Gly15-Gly16, is extended (Fig. 4E). Interestingly, this structure bears a close resemblance to the LPS-bound structure of the synthetic peptide YI12, a modified and more potent form of YW12. This peptide and the fusion domain of the influenza virus haemagglutinin protein in DPC micelles show i to i + 5 aromatic packing interactions between Phe and Trp residues (Fig. 4F) 42 ; they resemble the i to i + 7 aromatic packing interaction between Trp5 and Phe12 observed herein. The position of Trp residue of VG16KRKP in the hydrophobic core of LPS bilayer was measured using fluorescence quenching experiments in the presence of two spin-labeled fatty acids, 5-DSA (5-doxyl stearic acid) and 16-DSA (16-doxyl stearic acid). It was found that the Trp residue of VG16KRKP was around 6.8 Å from the center of the LPS bilayer (Fig. 4E), suggesting that the Trp residue as well as the associated hydrophobic hub are inserted into the hydrophobic core of LPS bilayer, most likely interacting with the acyl chains of LPS. VG16KRKP is non-toxic and non-hemolytic. To evaluate VG16KRKP as a therapeutic agent, we performed hemolytic assay on human blood samples and cytotoxicity assay on HT1080 cell line. The in vitro hemolytic assay on human blood measures the hemoglobin release in the plasma as a consequence of RBC lysis mediated by the agent being tested. Interestingly, VG16KRKP showed almost no hemolysis of RBC up to a concentration of 250 μ M, ~30 times higher than its MIC value (Fig. 5A), whereas 2% Triton X, used as a control, did 100% of hemolysis. Furthermore, VG16KRKP did not show any significant (less than 5%) toxicity on HT1080 cell line up to a final concentration of 50 μ M VG16KRKP, i.e., ~6 times higher than the MIC value (Fig. 5B). The 0.5% Triton X 100 was used as a control for the toxicity assay and it showed only 20% cell viability after treatment with Triton X 100. These results collectively indicate that VG16KRKP is a non-hemolytic and non-toxic peptide. VG16KRKP-treated Xanthomonas shows impaired infectivity to plant. Our data showed significant antimicrobial activity against two devastating plant pathogens, namely Xanthomonas oryzae and Xanthomonas campestris (Fig. 1D), isolated from the fields of Kalyani, West Bengal, India. To depict the efficiency of the peptide in inhibiting leaf blight disease development in vivo, the in vitro mixtures used for the antimicrobial assays were also used to inoculate rice plants. X. oryzae alone and the bacteria pretreated with 500 μ M VG16KRKP were used for inoculation. 
Leaf curling was observed in 86% of infected plants, 5 days post infection, and also to a greater extent when compared to that observed in only peptide treated plants (28% had any disease-like symptom) (Fig. 6). At 10 to 12 days post infection, lesion formation was also more pronounced in infected plants compared to peptide treated plants. In control plants, no leaf curling or lesion formation was observed (Fig. 6A). Upon observation of uprooted plants, the wet weight of infected plants (n = 14 in each set) was found to be 38% lesser compared to control plants (Fig. 6B,i). The wet weight of treated plants was however only 9% lesser than the control plants (Fig. 6A,B). The number of healthy leaves was 63% and 22% lower for bacteria-infected plants and peptide-treated plants, respectively, in comparison to control plants (Fig. 6B,ii). Bacteria-infected plants root length and shoot height were slightly affected, showing a reduction of 11% and 17%, respectively. In contrast, peptide-treated plants show negligible effect with a reduction of 1% and 4%, respectively (Fig. 6A,B,iii,iv). These results indicate that peptides are capable of weakening the pathogen, thus an inhibition of disease progression has occurred. To further quantify the pathogen in rice plants upon treatment, equal amounts of surface sterilized leaf tissues from mock infected, Xanthomonas-infected and peptide-treated Xanthomonas infected plants were crushed and plated on suitable media and X. oryzae growth was compared. No colonies were observed in control sets even when clarified tissue extract was spread. Samples from plants infected with peptide-treated bacteria had a ~10 fold reduction in the number of colony forming units (CFU) compared to infected samples. Similar data was also obtained when 10-fold diluted samples were used (Fig. 6C). These data indicated that VG16KRKP-treated bacteria were unable to sustain their growth in planta, thus the peptide could effectively prevent bacterial disease development in a crop plant. We also extended our study with same peptide on X. campestris, a causative agent of black rot infection in cabbage (Brassica oleracea) and observed that VG16KRKP-treated plants show almost similar symptoms as control plants (see Supplementary Fig. S7). Taken together, VG16KRKP is able to control the bacteria infected plant disease for two different plant systems. Meanwhile, it is important to mention that degradation of the peptide on the plant surface, caused by proteases and phenolic compounds, cannot be completely ruled out 43 . Nevertheless, growth promoting bacteria are generally found to be associated with the rhizosphere and therefore may be protected from exposure to peptide 44,45 . Recently, the three-dimensional structure of a plant defensin antimicrobial peptide has been determined 46 . Most of the plant defensins are cysteine-rich peptides and are responsible for innate immunity and metal tolerance, such as zinc, in plants 47,48 . Unlike AMPs, the defensins are larger in size, and their positive charge along with their hydrophobic residues are scattered at the surface of the molecule, hence they do not adopt amphipathic structure 49 . In most of the plant defensins, Pro residue plays an important role as well, where the prolines mostly prefer trans conformation than cis conformation 46 . In general, proline rich AMPs, isolated from insects, have also been investigated extensively due to their variety of modes of action to destabilize the membrane 50,51 . 
Additionally, Pro-rich AMPs cross the blood brain barrier (BBB) easily, and hence can be used as a potential novel carrier. It is to be noted that the VG16KRKP peptide contains Cys as well as Pro residues, which means it satisfies the plant defensin activity. Moreover, the Pro residue of VG16KRKP prefers trans conformation in LPS. We, therefore, believe that our small designed AMP may be an alternative solution to plant defensins for killing bacteria. Conclusion Unlike current methods for agricultural pathogen management that include applications of hazardous chemicals, AMPs are of paramount interest for application in agriculture. They are biodegradable and generally do not induce bacterial resistance 52,53 . In fact, small peptides that form a major part of defense mechanisms of a variety of organisms have been widely used for the development of genetically 54,55 . Therefore, application of AMPs towards the protection of crops can help in controlling plant pathogens and in turn improve agriculture by reducing environmental hazards. We have provided a comprehensive study on a de novo designed peptide, both from structural and functional aspects, including its application for treating plant disease, thus enabling a correlation between the two aspects. Observing the potency of this peptide against Gram-negative plant pathogens such as X. campestris and X. oryzae, we have studied disease progression in rice and cabbage through external application of peptide-treated pathogens. Peptide treatment to the bacteria was effective in inhibiting diseases to a significant extent. This observation is of considerable importance, as it demonstrates the potential use of the peptide in controlling agriculturally important pathogens. Further studies will be useful to modify the peptide for increasing its potency against pathogens, and to study its stability and economical feasibility in usage as a foliar spray for inhibiting plant diseases. Our present results also encourage the development of disease resistant transgenic plants. In this context, studies involving the preparation of nanoparticles attached to AMPs for external applications to the plant and the development of transgenic plants including the overexpressed designed peptide are in progress. Methods Plant Materials and Culture. Grains of rice (Oryza sativa) variety IR64 were surface sterilized using 5% HgCl 2 , washed thoroughly with distilled water to remove all traces of HgCl 2, and planted in soil rite. The plants were grown in normal light at 30 °C. Cabbage (Brassica oleracea) seeds variety VC612 and Golden Acre obtained from Sutton Seeds, India, were soaked overnight in water, planted in Soil rite (Keltech, India) and grown in a growth chamber at 10000 lux, 25 °C, 85% relative humidity and a photoperiod of 16 hours. Scanning Electron Microscopy (SEM). E. coli and X. oryzae were cultured in luria broth (LB) and PS broth (Peptone 1%, sucrose 1%), respectively to mid-log phase and harvested by centrifugation at 8000 rpm for 10 min. Cell pellets were washed twice with 10 mM PBS and resuspended to an OD 600 of 0.01. The cell suspensions were incubated with peptide at a concentration corresponding to its MIC for different time periods starting from 5 min to 1 hour at 37 °C. After incubation, 10 μ L of bacterial suspension was spotted on a slide and fixed with 10 μ L of 4% (v/v) glutaraldehyde in PBS at 4 °C overnight. 
Thereafter, the slides were washed twice with PBS and dehydrated by treatment with a graded ethanol series (30%, 50%, 70%, 90%, and 100%), for 15 min each. The samples were air dried, followed by gold coating, and observed under a SEM.

456 t1 increments (with 112 scans and 16 dummy scans per increment) and 2 K t2 data points were recorded in each experiment with a 1 s recycle delay. TOCSY and NOESY experiments were performed with States-TPPI 56 and quadrature detection in the t1 dimension; WATERGATE was used for water suppression 57 . All 2D spectra were processed with a squared sine bell apodization and zero filling to 4 K (t2) × 1 K (t1) data matrices. All experiments were recorded using 4,4-dimethyl-4-silapentane-5-sulfonate sodium salt (DSS) as an internal standard (0.0 ppm). A series of 1D proton-decoupled 31P NMR spectra of 0.5 mM LPS alone and upon titration with increasing concentrations of peptides, with and without 5 mM MnCl2 as a paramagnetic quencher, were recorded at 298 K with 3,072 scans (measurement time ~90 minutes/experiment). Both LPS and peptides were dissolved in water supplemented with 10% D2O at pH 4.5.

Live-cell NMR Experiments. Live-cell NMR experiments were performed on a Bruker Avance III 600 MHz NMR spectrometer equipped with a 5 mm QCI cryoprobe. Each sample (total volume 550 μL) was transferred into a 5 mm NMR tube and maintained at a temperature of 310 K during experiments. The final peptide concentration was in the range of 0.5-1.5 mM, with a total cell number varying from 3 × 10 8 to 2 × 10 9 . Sample preparation and other NMR experimental parameters are given in detail in the Supplementary Information.

Calculation of NMR Derived Structures. For the NMR-derived peptide structure calculations, the volume integrals of NOE cross-peaks were qualitatively differentiated into strong, medium and weak, depending on their intensities in the NOESY spectra in the presence of LPS. This information was further converted to inter-proton upper-bound distances of 3.0, 4.0 and 5.0 Å for strong, medium and weak, respectively, while the lower-bound distance was fixed at 2.0 Å. The backbone dihedral angle (phi) of the peptide was kept flexible (−30° to −120°) for all non-glycine residues to limit the conformational space. The CYANA program v2.1 was used for all structure calculations 58 , with iterative refinement of the structure based on distance violations (hydrogen bonding was excluded as a constraint for structure calculation). The stereochemistry of the NMR-derived ensemble structures was checked using PROCHECK 40 .

Determination of Pathogen Density in Plant Tissue. Leaf tissue (20 mg) was cut with individual sterile scissors from control, infected and treated sets of rice plants, surface sterilized by dipping in 5% HgCl2 for 5 to 7 minutes and washed well in autoclaved double-distilled water to remove all traces of HgCl2. Sterile conditions were maintained during the rest of the procedure. Tissues were crushed in 1 mL of 10 mM phosphate buffer, pH 7.4, using a separate mortar and pestle for each set. An aliquot of the suspension was diluted tenfold (up to 10 −2 dilution). Undiluted (50 μL) and diluted tissue suspensions were spread on LB agar plates supplemented with 50 μg/mL rifampicin in triplicate and incubated at 28 °C for 2-3 days until bacterial growth appeared. The CFU were counted in each case and compared. The experiment was repeated thrice.

Statistics.
All biological experiments were repeated at least three times and three biological replicates were used wherever applicable. Results are expressed as mean ± standard error. Student's t-test was also performed to determine the statistical significance of the difference between two populations of plants and p value ≤ 0.05 was considered significant.
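A minimal sketch of the statistical treatment described here (mean ± standard error per group, Student's t-test at p ≤ 0.05) is given below using SciPy. The numbers are placeholder measurements, not data from the study, and an independent two-sample test is assumed since the exact variant is not specified.

```python
# Sketch of the group comparison described above: mean +/- standard error and
# a two-sample Student's t-test at the p <= 0.05 threshold. Placeholder data;
# an independent-samples test with equal variances is assumed.
import numpy as np
from scipy import stats

control = np.array([12.1, 11.8, 12.5])   # e.g. wet weight (g), mock-infected plants
infected = np.array([7.4, 7.9, 7.1])     # e.g. wet weight (g), infected plants

def mean_se(x):
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

for name, grp in (("control", control), ("infected", infected)):
    m, se = mean_se(grp)
    print(f"{name}: {m:.2f} +/- {se:.2f}")

t_stat, p_val = stats.ttest_ind(control, infected)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, significant = {p_val <= 0.05}")
```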
Antibiogram of Bacteria Isolated from Tympanotonus fuscatus Var. Radula (Prosobranchia: Potamididae) Sold in Markets in Nasarawa State, Nigeria

The aim of this study was to determine the antibiogram of bacterial isolates from Tympanotonus fuscatus var. radula sold in markets in Nasarawa State, Nigeria. Samples of Tympanotonus fuscatus var. radula (periwinkles) were bought from soup ingredient sellers at different sale locations in Keffi, Masaka and Orange markets and were analyzed using standard bacteriological methods. The bacterial isolates were identified using morphological, cultural and biochemical techniques. The total bacterial count varied from 1.18–3.20 x 10 8 CFU/g for the raw samples, while the total bacterial count for the boiled samples varied from 0–1.57 x 10 8 CFU/g. Periwinkle samples with shells from Masaka market had the highest bacterial load, with a mean total bacterial count of 2.94 x 10 8 CFU/g and a mean total coliform count of 2.80 x 10 6 CFU/g. Raw periwinkle samples with shells had a higher bacterial load than samples without shells. There was also a drastic reduction in the bacterial load in the periwinkle samples after boiling under laboratory conditions. Bacillus spp. and Staphylococcus aureus were the Gram-positive bacteria isolated; Enterobacter spp., Escherichia coli, Salmonella spp., Pseudomonas spp., Serratia spp. and Proteus spp. were the Gram-negative bacteria isolated. The most frequently occurring isolates were Bacillus spp., 8 (32%), and Escherichia coli, 6 (24%); the least frequently occurring were Proteus spp., Staphylococcus aureus and Salmonella spp., 1 (4%) each. Antibiotic susceptibility testing showed that all the Gram-negative organisms exhibited sensitivity to ciprofloxacin: Escherichia coli (32 mm), Enterobacter spp. (41.5 mm), Proteus spp. (40.0 mm), Salmonella spp. (37.0 mm), Serratia spp. (26.0 mm), Pseudomonas spp. (23.0 mm). All the Gram-negative organisms showed marked resistance to vancomycin: Escherichia coli (12.0 mm), Enterobacter spp. (10.0 mm), Proteus spp. (11.0 mm), Salmonella spp. (5.0 mm), Serratia spp. (10.0 mm) and Pseudomonas spp. (4.5 mm).

INTRODUCTION
Globally, foodborne diseases and infections have become a growing health challenge. Each year, as many as 600 million people, or almost 1 in 10 people in the world, fall ill after consuming contaminated foods, and 420,000 people die, including 125,000 children under the age of 5 years [1]. Foodborne diarrheal diseases kill 1.9 million children globally every year [2]. In the developing world, foodborne infection leads to the death of many children, and the resulting diarrheal disease can have long-term effects on children's growth as well as their physical and cognitive development [3]. In the industrialized world, foodborne infections cause considerable illness, heavily affecting local health care systems [4]. In many parts of the world, about 10-19% of foodborne illness involved shellfish as a vehicle, and between 1993 and 1997, 6.8% of foodborne illnesses involved consumption of fish and shellfish [5]. Some of these foodborne infections are resistant to known antibiotics, culminating in high morbidity and mortality and thereby aggravating escalating healthcare costs worldwide. Despite the availability of newer antibiotics, emerging antimicrobial resistance has become an increasing problem in many pathogens throughout the world [6].
The past two decades have witnessed a tremendous increase in the emergence and spread of multidrug-resistant bacteria and increasing resistance to newer compounds, such as fluoroquinolones and some cephalosporins [7]. Surveys of the microbiological quality of shellfish show that they harbor pathogenic organisms [8,9,10]. This is because the water bodies from which the shellfish are harvested are heavily polluted. [16] reported that ground periwinkle shell is used as a powder for pimples, as fertilizer and as a calcium source for animal feeds. The shells also compete favorably in the construction, cosmetics and ornamental industries [17]. Tympanotonus fuscatus var. radula is a relatively cheap source of high-quality animal protein and minerals. The aim of this work is therefore to determine the antibiogram of bacteria isolated from Tympanotonus fuscatus var. radula sold in markets in Nasarawa State, Nigeria.

The samples were divided into four groups.

Group 1: Periwinkle Samples with Shells: At the laboratory, the periwinkle samples with shells were extensively scrubbed, washed and rinsed using normal saline solution to remove dirt, debris and surface contaminants [19]. The pointed ends were cut off using a sterile knife [20]. All aseptic techniques were carried out under a Purifier Biosafety Cabinet (Model Delta series, LABCONCO, USA).

Group 2: Periwinkle Samples without Shells: The periwinkle samples without shells were extensively scrubbed, washed and rinsed using normal saline solution to remove dirt, debris and surface contaminants [19]. The pointed ends were cut off using a sterile knife [20]. The fleshy part was extracted aseptically using a specially fabricated sterile needle. All aseptic techniques were carried out under a Purifier Biosafety Cabinet (Model Delta series, LABCONCO, USA).

Bacteriological Analyses
Bacteriological analyses were carried out in triplicate on 50 g each of periwinkles from Groups 1, 2, 3 and 4. They were homogenized with 450 ml of 0.1% sterile peptone water (CONDA, Spain) using a sterile blender/grinder.

Purification and Preservation of Bacterial Isolates
Bacterial isolates were aseptically picked with a sterile wire loop based on their morphological appearance and were sub-cultured onto freshly prepared nutrient agar plates to obtain pure cultures. They were incubated for 24 hours at 37°C, after which the pure cultures were stored in McCartney bottles in a laboratory refrigerator at 4°C [10].

Antimicrobial Susceptibility Testing of the Bacterial Isolates
The antimicrobial susceptibility testing was carried out as described by the Clinical and Laboratory Standards Institute [24]. Pure colonies of the bacterial isolates were inoculated into 5 ml of sterile 0.85% (w/v) NaCl (normal saline) and the turbidity was adjusted to that of a 1.5 McFarland standard. The McFarland standard was prepared by adding 0.5 ml of 1.172% (w/v) BaCl2·2H2O into 99.5 ml of 1% (w/v) H2SO4. A sterile swab stick was soaked in the standardized bacterial suspension and streaked on Mueller Hinton agar (Titan Biotech., India) plates, and the antibiotic discs were placed aseptically at the center of the plates and allowed to stand for one hour for prediffusion. The plates were incubated at 37°C for 24 hours.
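The plate counts behind the CFU/g values reported in this study follow the usual dilution-plating arithmetic. The sketch below illustrates it; the plated volume and the example colony count are assumptions, and only the 50 g in 450 ml (a 10^-1 step) homogenisation follows the text.

```python
# Hedged sketch of dilution-plating arithmetic: CFU/g = colonies /
# (total dilution x volume plated). The plated volume (0.1 mL) and the example
# colony count are assumptions; the 50 g in 450 mL homogenate (a 10^-1 step)
# is from the text.
def cfu_per_gram(colonies: int, total_dilution: float, plated_volume_ml: float) -> float:
    """CFU per gram of sample from a single countable plate."""
    return colonies / (total_dilution * plated_volume_ml)

# Example: 118 colonies on a plate spread with 0.1 mL of the 10^-5 dilution
# (which already includes the 10^-1 homogenisation step).
print(f"{cfu_per_gram(118, total_dilution=1e-5, plated_volume_ml=0.1):.2e} CFU/g")
```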
The antibiotics discs used were amoxicillin/clavulanic acid (30 µg), gentamicin (10 µg), erythromycin (10 µg), chloramphenicol (30 µg), trimethoprim/sufamethoxazole (cotrimoxazol) (30 µg), tetracycline (25 µg), ciprofloxacin (10 µg), vancomycin (10 µg), ampicillin (10 µg) and streptomycin (30 µg). After the incubation period, the diameter of zone of inhibition (clearance) was measured using a millimeter rule from the center of the disc to the edge of the circumference of the clearance zone and recorded to the nearest millimeter. The result was interpreted in accordance with the susceptibility breakpoint as described by Clinical and Laboratory Standard Institute [24]. Statistical Analysis Data were presented as means standard deviation of triplicate determinations. All statistical analyses were carried out using SPSS for windows version 21.0 statistical package (SPSS Incorporated. USA). One way analysis of variance was done to determine significant difference as P< 0.05. cfu/g, total Salmonella/Shigella (TSS) varied from 1.00 -1.85 x 10 6 CFU/g and total faecal coliform (TFC) varied from 1.10 x 2.30 x 10 6 CFU/g in the raw periwinkles. Raw periwinkles with shells from Masaka market had the highest bacterial load with a TBC of 2.94 x 10 8 cfu/g and a TCC of 2.80 x 10 6 CFU/g. Raw periwinkle samples without shells from Keffi market had the least bacterial load with a TBC of 1.20 x 10 8 CFU/g and TCC of 1.20 x 10 6 CFU/g. Table 2 depicts the mean bacterial load of boiled Tympanotonus fuscatus var. radula sold in markets in Nasarawa State. The TBC varied from 0 -1.57 x 10 8 CFU/g, TCC varied from 0 -1.56 x 10 6 CFU/g. Boiled periwinkle with shells from Masaka market had the highest TBC of 1.56 x 10 8 CFU/g and a TCC of 1.70 x 10 6 CFU/g. The raw periwinkles with shells had a higher bioload due to the spiral shaped nature of the shells which makes it easy for the bacteria to harbor the periwinkles. The high bioload recorded in the periwinkle samples could be attributed to the fact that water bodies from which Tymapanotonus fuscatus var. radula are harvested are contaminated and since the periwinkles are filter feeders there is a tendency that they will accumulate high levels of pathogens as a result of cross contamination. Ekanem and Adegoke [25] stated that the level of pollution of the cultivation waters determines the level of contamination of shellfish. The presence of enteric organisms in the presence study is an indication of pollution of their underlying waters with untreated faecal waste and sewage. This result is in consonance with previously reported works [9,26]. After boiling under laboratory condition of 100°C for 5 minutes, the bacterial load in the shelled samples reduced drastically. On boiling the bacteria in the periwinkles without shells were significantly lower than those in the boiled periwinkles with shells (P<0.05). This result is in consonance with the work of Omenwa et al. [20]. The bacterial load in the periwinkle samples exceeded the acceptable limit as suggested by the International Commission on Microbiological Specifications for food [22] and the US Food and Drug Administration [27] that a maximum microbial count of not greater than 1 x 10 5 CFU/g and coliform levels not more than 1 x 10 2 CFU/g for shellfish. RESULTS AND DISCUSSION The bacteria isolated from the periwinkle includes: Escherichia coli, Staphylococcus aureus, Pseudomonas species, Proteus species, Enterobacter species, Serratia species, Salmonella species and Bacillus species. 
They are all significant to human health. Enteric organisms such as Enterobacter specie caused septicemia and neonatal meningitis. Staphylococcus aureus is a major cause of cerebrospinal fluid shunts in children. The presence of Salmonella spp. in the periwinkle samples is significant as this organism is one of the most important foodborne pathogen and is usually an indicator of sewage contamination and is found to be associated with a number of nonhuman hosts such as reptiles [28]. Salmonella survives and persist in the aquatic environment. It has been detected in periwinkles from different creeks [9]. The presence of E. coli in the samples is an indication of secondary contamination. As E. coli are known to be associated with gastrointestinal tracts of warm blooded animals and are known to be present in the environment as natural flora. This secondary contamination may be as a result of sewage contamination of the harvesting areas. E. coli causes infantile diarrhea and newborn meningitis, pneumonia and kidney infections [29]. Pseudomonas specie commonly thrives in burns, wounds and some blood infections [30]. They are likely to have been introduced into the environment through swimmers and infected individuals who use the original habitats of these periwinkles for recreation. Therefore, pseudomonas may have occurred due to bathing of the locals with open wounds or other infections. Bacillus cereus causes a toxin mediated disease rather than infection [31]. Table 3 depicts the morphological, cultural and biochemical characteristics of bacterial isolates from Tympanotonus fuscatus var radula. Table 4 depicts the frequency of occurrence of bacterial isolates. The most frequently occurring bacteria were Bacillus species, 8(32%); Pseudomonas species, 4(16%); Escherichia coli, 6(24%). The least occurring isolates were Proteus species, 1(4%), Staphylococcus aureus, 1(4%) and Salmonella species, 1(4%). Table 5 depicts the antibiotic susceptibility pattern of gram negative bacterial isolates from Tympanotonus fuscatus var. radula sold in markets in Nasarawa State. The result shows that all the Gram-negative organisms were susceptible to ciprofloxacin and amoxicillin/clavulanic acid; however, they displayed a 100% resistance to vancomycin. The high performance of these antibiotics can also be due to their molecular sizes a factor which enhances their solubility in diluents thus promoting their penetration power through cell wall into the cytoplasm of the target microorganism as elucidated by Lin et al. [32]. Table 6 depicts the antibiotic susceptibility pattern of gram positive bacterial isolates from Tympanotonus fuscatus var. radula sold in markets in Nasarawa State. The gram positive organisms Bacillus species and Staphylococcus species were both susceptible to ciprofloxacin, gentamicin and amoxicillin/clavulanic acid. The susceptibility pattern observed for the isolates in this study are comparable to those reported by Urassa et al. [33], Isibor and Ekundayo [34], Makut et al. [35] and Ishaleku et al. [36]. CONCLUSION Tympanotonus fuscatus var radula periwinkles sold in markets in Nasarawa State are a good source of high quality animal protein, consumption should therefore be encouraged. Nonetheless, the presence of antibiotic resistant pathogens in the periwinkles is an indication that not cooking periwinkles properly could result in a health risk which could culminate in chemotherapeutic failure of commonly used antibiotics. 
RECOMMENDATIONS Based on the findings of this study, government should sponsor public enlightenment programmes on the inherent dangers of consuming raw or improperly cooked periwinkles, with or without shells. The public should be made to understand how past outbreaks of foodborne diseases occurred. Moreover, proper storage and handling procedures should be emphasized, as most pathogenic organisms are transmitted by hand. Emphasis must also be laid on adequate sanitary measures, good personal and environmental hygiene practices in the markets, and health education. Indiscriminate use of antibiotics should likewise be discouraged.
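As a purely illustrative companion to the Statistical Analysis subsection above, the following minimal Python sketch shows how triplicate plate counts could be summarized as means ± standard deviation and compared by one-way analysis of variance at P < 0.05. The market labels and CFU/g values are hypothetical placeholders, not data from this study, and SciPy's f_oneway is used here in place of SPSS.

```python
# Illustrative sketch only: mean +/- SD of triplicate counts and a one-way ANOVA.
# All CFU/g values and market labels below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

counts = {
    "Market A": [2.9e8, 3.0e8, 2.9e8],   # hypothetical triplicate TBC values (CFU/g)
    "Market B": [1.2e8, 1.3e8, 1.1e8],
    "Market C": [2.0e8, 2.1e8, 1.9e8],
}

for market, values in counts.items():
    mean = np.mean(values)
    sd = np.std(values, ddof=1)          # sample standard deviation
    print(f"{market}: {mean:.2e} +/- {sd:.2e} CFU/g")

# One-way analysis of variance across markets
f_stat, p_value = stats.f_oneway(*counts.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}",
      "(significant)" if p_value < 0.05 else "(not significant)")
```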
2020-03-12T10:36:53.223Z
2020-03-05T00:00:00.000
{ "year": 2020, "sha1": "c8ec9c6cfb239005d124744f00e78b1315e79a76", "oa_license": null, "oa_url": "https://www.journalsajrm.com/index.php/SAJRM/article/download/30135/56549", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9276a1cbba6d2d18aa2facebf65e3e99ea3a4f07", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
251474125
pes2o/s2orc
v3-fos-license
Virological characteristics of the SARS-CoV-2 Omicron BA.2.75 variant The SARS-CoV-2 Omicron BA.2.75 variant emerged in May 2022. BA.2.75 is a BA.2 descendant but is phylogenetically distinct from BA.5, the currently predominant BA.2 descendant. Here, we show that BA.2.75 has a greater effective reproduction number and different immunogenicity profile than BA.5. We determined the sensitivity of BA.2.75 to vaccinee and convalescent sera as well as a panel of clinically available antiviral drugs and antibodies. Antiviral drugs largely retained potency, but antibody sensitivity varied depending on several key BA.2.75-specific substitutions. The BA.2.75 spike exhibited a profoundly higher affinity for its human receptor, ACE2. Additionally, the fusogenicity, growth efficiency in human alveolar epithelial cells, and intrinsic pathogenicity in hamsters of BA.2.75 were greater than those of BA.2. Our multilevel investigations suggest that BA.2.75 acquired virological properties independent of BA.5, and the potential risk of BA.2.75 to global health is greater than that of BA.5. INTRODUCTION Newly emerging SARS-CoV-2 variants need to be carefully and rapidly assessed for a potential increase in their growth efficiency in the human population (i.e., relative effective reproduc-tion number [R e ]), their evasion from antiviral immunity, and their pathogenicity. Resistance to antiviral humoral immunity can be mainly determined by substitutions in the spike (S) protein. Meng et al., 2022;Planas et al., 2021;Van-Blargan et al., 2022), BA.2 (Bruel et al., 2022;Yamasoba et al., 2022b), and BA.5 Khan et al., 2022;Wang et al., 2022;Qu et al., 2022;Hachmann et al., 2022;Tuekprakhon et al., 2022;Arora et al., 2022;Lyke et al., 2022;Gruell et al., 2022; exhibit profound resistance to neutralizing antibodies induced by vaccination, natural SARS-CoV-2 infection, and therapeutic monoclonal antibodies. In particular, newly spreading SARS-CoV-2 variants tend to be resistant to the humoral immunity induced by infection with a prior variant; for instance, BA.2 is resistant to BA.1 breakthrough infection sera (Qu et al., 2022;Tuekprakhon et al., 2022;Yamasoba et al., 2022a), and BA.5 is resistant to BA.2 breakthrough infection sera Hachmann et al., 2022;. Therefore, acquiring immune resistance to previously dominant variants is a key factor in outcompeting previous variants, thereby obtaining relatively increased R e compared with the previously dominant variant. In addition to the evasion of humoral immunity induced by vaccination and infection, substitutions in the S protein can affect sensitivity to therapeutic monoclonal antibodies; for instance, BA.5 exhibits higher resistance to certain therapeutic antibodies than BA.2 Wang et al., 2022;. Furthermore, viral pathogenicity is closely associated with the phenotype of the viral S protein. In particular, we have proposed that the fusogenicity of the viral S protein in in vitro cell cultures is associated with viral pathogenicity in vivo Yamasoba et al., 2022a;Saito et al., 2022). As mentioned above, major SARS-CoV-2 phenotypes can be defined by the function of the viral S protein. The SARS-CoV-2 S protein has two major domains, the receptor-binding domain (RBD) and the N-terminal domain (NTD) (Mittal et al., 2022;Harvey et al., 2021). 
The RBD is crucial for the binding to the human angiotensin-converting enzyme 2 (ACE2) receptor for cell attachment and entry; therefore, this domain has been considered a major target for neutralizing antibodies to block viral infection (Harvey et al., 2021;Jackson et al., 2022;Barnes et al., 2020). The NTD can also be recognized by antibodies, and some antibodies targeting the NTD potentially neutralize viral infection (Lok, 2021;Voss et al., 2021;Cerutti et al., 2021;Suryadevara et al., 2021;McCallum et al., 2021;Liu et al., 2020;Chi et al., 2020), despite our limited understanding of its virological function. The Omicron BA.2.75 variant, a new BA.2 subvariant, was first detected in India in May 2022(WHO, 2022. Because an early preliminary investigation suggested the potential increase in the relative R e value of BA.2.75 compared with that of BA.5 and the original BA.2 (GitHub, 2022), BA.2.75 has been flagged as the most concerning variant that can potentially outcompete BA.5 and become the next predominant variant in the future. On July 19, 2022, the WHO classified this variant as a variant of concern lineage under monitoring (WHO, 2022). Compared with the BA.2 S, BA.5 S has four mutations Yamasoba et al., 2022a), whereas BA.2.75 S has nine mutations. These findings suggest that the virological phenotype of BA.2.75 is critically different from that of previous BA.2 subvariants. Here, we elucidate the features of a newly emerging SARS-CoV-2 Omicron BA.2.75 subvariant. Epidemics of BA.2.75 in India As of the beginning of August 2022, the Omicron BA.5 variant is predominant worldwide. However, a novel BA.2 subvariant, BA.2.75, emerged and rapidly spread in India since May 2022. Although BA.2.75 and BA.5 belong to the BA.2 subvariant clade, BA.2.75 is phylogenetically distinct from the BA.5 clade (Figure 1A). Compared with BA.2, BA.2.75 harbors 14-amino-acid substitutions, including nine substitutions in the S protein ( Figures 1B and S1A). In India, BA.5 and BA.2.75 spread in different regions: BA.5 spread in the southern area, including the Tamil Nadu and Telangana states, whereas BA.2.75 spread to the other areas, including the Himachal Pradesh, Odisha, Haryana, Rajasthan, and Maharashtra states ( Figures 1C and 1D). To compare the relative R e between BA.5 and BA.2.75 in India upon adjusting the regional differences, we constructed a Bayesian hierarchical model that can estimate both state-spe-cific R e values and the value averaged in India (Figures 1E and S1B; Table S1). The R e value of BA.5 is 1.19-fold higher than that of BA.2 on average in India ( Figure 1E). This value is comparable with the relative R e value of BA.5 in South Africa (1.21) estimated in our recent study . Of note, the R e value of BA.2.75 was 1.34-fold higher than that of BA.2, and the R e value of BA.2.75 was 1.13-fold higher than that of BA.5 ( Figures 1E and S1C). Furthermore, in the Indian states analyzed, the R e value of BA.2.75 was greater than that of BA.5 (Figures S1B and S1C). Together, our data suggest that BA.2.75 has the potential to spread more rapidly than BA.5 and will be predominant in some regions including India in the near future. Article Human sera were collected from vaccinated and infected individuals (Table S2). The 2-dose vaccine sera were ineffective against all Omicron subvariants tested, including BA.2.75 (Figure 2A). 
Although BA.5 was significantly more resistant to 3-dose vaccine sera than BA.2, which is consistent with previous studies Hachmann et al., 2022;, the sensitivity of BA.2.75 to these sera was comparable with that of BA.2 ( Figures 2B and 2C). However, BA.5 (2.1-fold) and BA.2.75 (1.7-fold) were significantly more resistant to 4-dose vaccine sera than BA.2 ( Figure 2D). To identify the substitution(s) responsible for the higher resistance of BA.2.75 to 4-dose vaccine sera than BA.2, we prepared BA.2 S-based derivatives with the BA.2.75 substitutions. As shown in Figure 2D, four substitutions in the NTD, K147E, W152R, F157L, and I210V, and a substitution in the RBD, N460K, are responsible for the resistance to 4-dose vaccine sera. On the other hand, R493Q increased the sensitivity to 4-dose vaccine sera (Figure 2D). Since R493Q is a reversion substitution (R493 in BA.2 but Q493 in B.1.1, BA.5, and BA.2.75; Figure S1A), these results suggest that this substitution recovered the epitope recognized by vaccine-induced humoral immunity. We then assessed the sensitivity of BA.2.75 to the convalescent sera from individuals who were infected with BA.1 or BA.2 after 2-dose or 3-dose vaccination (i.e., breakthrough infection). Similar to the previous reports including ours Hachmann et al., 2022;, BA.5 exhibited significant resistance to breakthrough infection sera compared with BA.2. In contrast, the sensitivity of BA.2.75 to these sera was comparable with that of BA.2 ( Figures 2E and 2F), suggesting that BA.2.75 is not resistant to the humoral immunity induced by infection with prior Omicron subvariants, including BA.1 and BA.2. In the case of BA.5 breakthrough infection sera, both BA.2 (1.2-fold) and BA.2.75 (1.7-fold) were significantly more resistant than BA.5 ( Figure 2G). The neutralization assay to screen the substitution(s) responsible for the higher resistance of BA.2.75 to breakthrough BA.5 infection sera showed that two substitutions in the NTD, K147E and W152R, and a substitution in the RBD, G446S, were responsible for the resistance to breakthrough BA.5 infection sera ( Figure 2G). Because BA.2.75 exhibited different sensitivities to five monoclonal antibodies, bebtelovimab, cilgavimab, regdanvimab, sotrovimab, and tixgevimab, from BA.2 (Table 1), we assessed the BA.2.75-specific substitution(s) that determine the sensitivity/resistance of BA.2.75. As shown in Figure S2B and Table S3, the resistance of BA.2.75 to bebtelovimab and cilgavimab is determined by the G446S substitution, which is present in the epitope of these two antibodies (Westendorf et al., 2022;Dong et al., 2021). The increased sensitivity of BA.2.75 to sotrovimab is attributed to the D339H substitution, whereas those to regdanvimab and tixagevimab are attributed to the R493Q reversion substitution ( Figure S2B; Table S3). These observations are consistent with the fact that D339H and R493Q are present in the known epitopes of these antibodies (Dong et al., 2021;Pinto et al., 2020;Kim et al., 2021). To evaluate the sensitivity of BA.2.75 to three antiviral drugs, remdesivir, EIDD-1931, and nirmatrelvir, we used clinical isolates of BA.2.75, B.1.1 , BA.2 , and BA.5 . These viruses were inoculated into human airway organoids (AOs), a physiologically relevant model , and treated with three antiviral drugs. As shown in Figure S3 and Figure 3A, the Assays for each serum sample were performed in triplicate to determine the 50% neutralization titer (NT 50 ). 
Each dot represents one NT 50 value, and the geometric mean and 95% CI are shown. Statistically significant differences were determined by two-sided Wilcoxon signed-rank tests. The p values versus BA.2 (B-F, H, and I), BA.5 (G and J), or BA.2.75 (K) are indicated in the panels. The horizontal dashed line indicates the detection limit (120-fold). For the BA.2 derivatives (D, G, I, and J), statistically significant differences versus BA.2 (p < 0.05) are indicated with asterisks. Red and blue asterisks indicate decreased and increased NT 50 s, respectively. Information on the vaccinated/convalescent donors is summarized in Table S2. See also Table S2. OPEN ACCESS Article pseudovirus infectivity of BA.2.75 was significantly (12.5-fold) higher than that of BA.2. To assess the association of TMPRSS2 usage with the increased pseudovirus infectivity of BA.2.75, we used both HEK293-ACE2/TMPRSS2 cells and HEK293-ACE2 cells, on which endogenous surface TMPRSS2 is undetectable , as target cells. As shown in Figure S4A, the infectivity of BA.2.75 pseudovirus was not increased by TMPRSS2 expression ( Figure S4A), suggesting that TMPRSS2 is not associated with an increase in pseudovirus infectivity of BA.2.75. To determine the substitutions that are responsible for the increased pseudovirus infectivity of BA.2.75, we used a series of BA.2 derivatives that bear the BA.2.75-specific substitutions. Three substitutions in the NTD, K147E, F157L, and I210V, and two substitutions in the RBD, N460K and R493Q, significantly increased infectivity (Figure 3A). Notably, the N460K substitution increased infectivity by 44-fold ( Figure 3A). However, a substitution in the NTD, W152R, significantly (8.9-fold) decreased infectivity ( Figure 3A). The BA.2 derivative bearing the three substitutions in the NTD in close proximity to each other, K147E, W152R, and F157L, exhibited comparable infectivity with BA.2 ( Figure 3A). We next measured the ACE2 binding affinity of the RBDs of BA.2.75 as well as those of BA.2 derivatives bearing D339H, G446S, N460K, and R493Q substitutions by an enhanced yeast surface display system (Zahradník et al., 2021a). Intriguingly, the BA.2.75 RBD showed a strong tight binding with an affinity of 146 ± 6 pM ( Figure 3B). Consistent with the results of the pseu-dovirus assay ( Figure 3A), the BA.2 N460K substitution exhibited a significantly increased binding affinity compared with BA.2 ( Figure 3B). To assess the impact of N460K in BA.2.75 S, we prepared the BA.2.75 RBD derivative bearing the K460N reversion substitution. As shown in Figure 3B, the BA.2.75 K460N substitution exhibited a significantly reduced binding affinity compared with BA.2.75. These observations suggest that N460K is a critical substitution in the BA.2.75 S to increase viral infectivity by enhancing binding affinity to ACE2. To reveal the structural effect of the N460K substitution, we performed cryoelectron microscopy (cryo-EM) analysis of the BA.2.75 S ectodomain and the complex of the BA.2.75 S ectodomain and human ACE2, which determined cryo-EM maps with C1 symmetry at resolutions of 2.86 Å (closed 1) and 3.48 Å , respectively ( Figure 3C; Table S4). The ectodomain of BA.2.75 S showed three different conformational states (mol ratio, 4.5:2.5:3.0), with two closed states with all RBDs ''down,'' but these maps were reconstructed with slightly different RBD conformations, and an open state with one RBD ''up'' and two RBD ''down'' (Figures S4B and S4C; Table S4). 
There are fouramino-acid substitutions in the RBD of BA.2.75 S compared with BA.2. G446S and R493Q are located on the receptor-binding motif (RBM), whereas N460K and D339H are located distal to the RBM ( Figure 3D, left). N460K forms an intramolecular salt bridge with D420, suggesting that N460K may contribute to RBD folding/flexibility ( Figure 3D, upper right). However, at a low resolution, the electron density map of the BA.2.75 S and ACE2 complex showed the same binding mode as that of ACE2 bound to the RBDs of other SARS-CoV-2 variants (Figures 3C right, S4B, and S4C; Table S4). Therefore, this binding mode suggested that N460K is in a position close to the N-linked glycan on the N90 residue of ACE2 ( Figure 3E). Interestingly, a previous study showed that the glycan linked to the N90 residue of ACE2 exhibits an inhibitory effect on binding to the S proteins of previous SARS-CoV-2 variants . To test the possible association of the N-linked glycan on the N90 residue of ACE2 with the increased binding affinity of BA.2.75 S to ACE2, we performed an additional round of binding experiments using the human ACE2 receptor bearing the N90Q substitution, which ablates the N-linked glycosylation. Consistent with the previous study , the ACE2 N90Q substitution increased the binding affinity of BA.2 S RBD ( Figure 3F). On the other hand, the binding affinity of BA.2.75 RBD was significantly reduced by the N90Q substitution of ACE2 ( Figure 3F). Altogether, our findings suggest the increased binding affinity of BA.2.75 S is partly attributed to the interaction of BA.2.75 S K460 residue to the N-linked glycan on the N90 residue of ACE2. In addition to N460K, the structure of the BA.2.75 S ectodomain showed that D339H forms an intramolecular interaction with F371 ( Figure 3D, bottom right). This observation suggests that D339H possibly contributes to the improved folding/flexibility of RBD. In particular, D339H requires two nucleotide changes in the codon to occur. Such changes are still relatively rare in the evolution of SARS-CoV-2, reinforcing the importance and corresponding fitness advantage. To analyze the potential impact of this substitution, we additionally prepared the BA.2.75 RBD derivative bearing H339D and measured its affinity. As shown in Figure 3B, the K D value of this derivative was Figure S2. significantly (3-fold) higher than that of the parental BA.2.75, suggesting that D339H appears to indirectly affect the affinity for ACE2. To further reveal the virological property of BA.2.75 S, we performed a cell-based fusion assay Yamasoba et al., 2022a;Saito et al., 2022;Motozono et al., 2021) using Calu-3 cells as target cells. Flow cytometry analysis showed that the surface expression level of BA.2.75 was comparable with that of BA.2 ( Figure 3G). Consistent with our recent study , the fusogenicity of BA.5 was significantly higher than that of BA.2, and notably, BA.2.75 S was also significantly more fusogenic than BA.2 S ( Figure 3H). Moreover, a coculture experiment using HEK293-ACE2/TMPRSS2 cells as the target cells Yamasoba et al., 2022a; showed that the S proteins of BA.5 and BA.2.75 showed significantly increased fusogenicity compared with that of BA.2 S ( Figure S4D). Altogether, these results suggest that BA.2.75 S exhibits higher binding affinity to human ACE2 and higher fusogenicity. To evaluate the effect of BA.2.75 on the airway epithelial and endothelial barriers, we used airway-on-a-chips system (Figure S4E;Hashimoto et al., 2022). 
By measuring the amount of virus that invades from the top channel ( Figure 4G) to the bottom channel ( Figure 4H), we could evaluate the ability of viruses to disrupt the airway epithelial and endothelial barriers. Notably, the amount of virus that invaded the blood vessel channel of BA.2.75-, BA.5-, and B.1.1-infected airway-on-chips was significantly higher than that of BA.2-infected airway-on-chips (Fig-ure 4I). These results suggest that BA.2.75 exhibits more severe airway epithelial and endothelial barrier disruption than BA.2. To further address the fusogenic capacity of BA.2.75, we performed a plaque assay using VeroE6/TMPRSS2 cells. Consistent with our previous studies with a Delta isolate Virological characteristics of BA.2.75 in vivo As we proposed in our prior studies Yamasoba et al., 2022a;Saito et al., 2022), the fusogenicity of the S proteins of SARS-CoV-2 variants is closely associated with the intrinsic pathogenicity in an experimental hamster model. Given that BA.2.75 is more fusogenic than BA.2 in the in vitro cell culture systems (Figures 3 and 4), it is hypothesized that BA.2.75 is intrinsically more pathogenic than BA.2. To address this possibility, we intranasally inoculated a BA.2.75 isolate into hamsters. As controls, we also used clinical isolates of Delta, BA.2, and BA.5. Although we followed our established experimental protocol Yamasoba et al., 2022a;Saito et al., 2022), the viral titers of clinical isolates of Omicron subvariants were relatively low. Therefore, we set out to conduct animal experiments in this study with relatively lower inocula (1,000 50% tissue culture infectious dose [TCID 50 ] [ Figure 5A, top] or 5,000 TCID 50 [Figure 5A,bottom] per hamster) than our previous studies (10,000 TCID 50 per hamster) Yamasoba et al., 2022a;Saito et al., 2022). Nevertheless, consistent with our previous study , Delta infection resulted in weight loss in the infection group with a higher inoculum ( Figure 5A, bottom). At both challenge doses, the body weights of the BA.5-and BA.2.75-infected hamsters were significantly lower than those of the BA.2-infected hamsters ( Figure 5A). We then analyzed the pulmonary function of infected hamsters as reflected by three parameters, enhanced pause (Penh), the ratio of time to peak expiratory flow relative to the total expiratory time (Rpef), and breath per minute (BPM), which are surrogate markers for bronchoconstriction or airway obstruction. Representative inhibition curves are shown in Figure S3. Article Subcutaneous oxygen saturation (SpO 2 ) was routinely measured in the group with the lower inoculum ( Figure 5A, top To address the viral spread in infected hamsters, we routinely measured the viral RNA load in the oral swab. Although the viral RNA loads of the hamsters infected with Delta, BA.2, and BA.5 were comparable, the viral load in the swabs of the BA.2.75-infected hamsters was relatively highly maintained by 7 d.p.i. and was significantly higher than that of the BA.2-infected hamsters ( Figure 5B). To address the possibility that BA.2.75 more efficiently spread in the respiratory tissues, we collected the lungs of infected hamsters at 2 and 5 d.p.i., and the collected tissues were separated into the hilum and periphery regions. Although the viral RNA loads in both the hilum and periphery of the four infection groups were comparable at 2 d.p.i. 
(Figure 5C, top), those of the hamsters infected with Delta, BA.5, and To further address virus spread in the respiratory tissues, we performed immunohistochemical (IHC) analysis targeting viral nucleocapsid (N) protein. Similar to our previous studies Yamasoba et al., 2022a;, epithelial cells in the upper tracheae of infected hamsters were sporadically positive for viral N protein at 2 d.p.i., but there were no significant differences among the four viruses, including BA.2.75 ( Figure S5A). In the alveolar space around the bronchi/ bronchioles at 2 d.p.i., N-positive cells were detected in Deltainfected hamsters. However, the N proteins strongly remained in the lobar bronchi in BA.5-and BA.2.75-infected hamsters ( Figures 5D, top and S5B Figure 5E, bottom). These data suggest that BA.2 infects a smaller portion of the bronchial/bronchiolar epithelium because it is less efficiently transmitted to neighboring epithelial cells. In contrast, BA.5 and BA.2.75 infections seemed to persist in the bronchial/bronchiolar epithelium, and in particular, BA.2.75 invaded the alveolar space more efficiently than BA.5 at the early stage of infection. Altogether, the IHC data suggest that among Omicron subvariants, BA.2.75 more efficiently spread into the alveolar space than BA.2 and BA.5, with persistent infection in the bronchi/bronchioles. Pathogenicity of BA.2.75 To investigate the intrinsic pathogenicity of BA.2.75, we analyzed the formalin-fixed right lungs of infected hamsters at 2 and 5 d.p.i. by carefully identifying the four lobules and main bronchus and lobar bronchi sectioning each lobe along with the bronchial branches. Histopathological scoring was performed according to the criteria described in our previous studies Yamasoba et al., 2022a;Saito et al., 2022). Consistent with our previous studies Saito et al., 2022), all five parameters as well as the total score of the Delta-infected hamsters were significantly higher than those of the BA.2-infected hamsters ( Figures 5F and 5G). When we compared the histopathological scores of Omicron subvariants, the scores indicating hemorrhage or congestion and total histology scores of BA.5 and BA.2.75 were significantly greater than those of BA.2 ( Figures 5F and 5G). Similar to our recent studies Tamura et al., 2022), BA.5 is intrinsically more pathogenic than BA.2, and notably, our results suggest that BA.2.75 exhibits more significant inflammation than BA.2. For determination of the area of pneumonia, the inflammatory area was termed the area of type II pneumocytes and was morphometrically analyzed ( Figure S5D). As summarized in Figure 5H, at 5 d.p.i., the percentages of the area of type II pneumocytes of Delta, BA.5, and BA.2.75 were significantly higher than that of BA.2. Altogether, these findings suggest that BA.2.75 infection intrinsically induces greater inflammation and exhibits higher pathogenicity than BA.2. DISCUSSION Here, we characterized the properties of the Omicron BA.2.75 variant, such as the growth rate in the human population, resistance to antiviral humoral immunity and antiviral drugs, functions of the S protein in vitro, and intrinsic pathogenicity. In terms of the emergence geography and phylogeny, BA.5 and BA.2.75 emerged independently. Nevertheless, the results of the cellbased fusion assay, airway-on-a-chip assay, and plaque assay suggested that both BA.5 and BA.2.75 acquired higher hamsters per infection group at a lower inoculum (1,000 TCID 50 /hamster) were euthanized at 2 and 5 d.p.i. 
and used for virological and pathological analysis (C-G). (A) Body weight, Penh, Rpef, BPM, and SpO 2 values of infected hamsters (n = 6 each). The results at a low inoculum (1,000 TCID 50 /hamster) and a high inoculum (5,000 TCID 50 /hamster) are shown in the top and bottom panels, respectively. (B) Viral RNA loads in the oral swab (n = 6 each). OPEN ACCESS Article fusogenicity after divergence from BA.2. Our data, including a recent study , suggest that the critical substitution responsible for the higher fusogenicity of the BA.5 and BA.2.75 S proteins are different: the L452R substitution for BA.5 S and the D339H/N460K substitutions for BA.2.75 S. In our previous studies focusing on Delta , Omicron BA.1 , BA.2 , and BA.5 , we proposed a close association between S-mediated fusogenicity in vitro and pathogenicity in a hamster model. Consistent with our hypothesis, we demonstrated that compared with BA.2, BA.2.75 exhibits higher fusogenicity in vitro and efficient viral spread in the lungs of infected hamsters, which leads to enhanced inflammation in the lung and higher pathogenicity in vivo. However, viral load is not necessarily associated with pathogenicity. Moreover, in vitro experiments using a variety of cell culture systems showed that BA.2.75 replicates more efficiently than BA.2 in alveolar epithelial cells but not in airway epithelial cells. Altogether, our results suggest that BA.2.75 exhibits higher fusogenicity and pathogenicity via evolution of its S protein independent of BA.5. Using hamster sera, we demonstrated that the immunogenicity of BA.5 and BA.2.75 is different from each other, whereas BA.2.75 and BA.5 are the descendants of BA.2. However, the antiviral effect of BA.5 breakthrough infection sera is comparable between BA.2 and BA.2.75. In this regard, a recent study showed that BA.2.75 exhibits a pronounced resistance to BA.5 breakthrough infection sera compared with BA.2 . This discrepancy may be explained by the type of vaccine used: a cohort in the present study was vaccinated with mRNA vaccines (BNT162b2 or mRNA-1273), whereas the cohort in the previous study was vaccinated with an inactivated vaccine (CoronaVac). In fact, another study on a cohort of mRNA-1273 vaccination showed results consistent with ours (Shen et al., 2022). These observations suggest that the basal immunity induced by vaccination is different by the type of vaccine used, and thereby, the immunity induced by breakthrough BA.5 infection is different. We showed that G446S in BA.2.75 S was closely associated with resistance to various antisera: 4-dose vaccinated sera, breakthrough BA.5 infection sera, and BA.2-and BA.5-infected hamster sera. Additionally, G446S was responsible for the increased resistance to therapeutic antibodies, such as bebtelovimab and cilgavimab. These results suggest that G446S is critical to evade diverse antibodies. Importantly, our receptorbinding assay showed that G446S significantly decreased the binding affinity of BA.2 RBD to human ACE2, and this effect was compensated by the other substitutions in the BA.2.75 S RBD, particularly by N460K. Altogether, these results suggest that G446S was acquired to evade antiviral immunity, and the other substitutions in the RBD, such as N460K, were acquired likely to compensate for the ACE2 binding affinity reduced by G446S. 
This is reminiscent of our recent study focusing on Omicron BA.5 : in the case of the BA.5 S, F486V resulted in immune evasion and decreased binding affinity to ACE2, whereas L452R compensated the decreased ACE2 binding affinity. Although the mechanisms of action are different between BA.5 and BA.2.75, the acquisition of two types of substitutions-one leads to immune evasion, which tends to decrease ACE2 affinity, and the other leads to increased ACE2 affinity for compensation-might be a common strategy of SARS-CoV-2 evolution. Our investigation using viral genome surveillance data reported from India suggested that BA.2.75 has the potential to outcompete BA.2 as well as BA.5, the most predominant variant in the world as of August 2022. Following the worldwide spread of BA.5, it is probable that the number of individuals infected with BA.5 will increase. Additionally, we showed that the intrinsic pathogenicity of BA.2.75 in hamsters is comparable with that of BA.5 and higher than that of BA.2. Since a recent study showed that the hospitalization risk of BA.5 was significantly higher than that of BA.2 in the once-boosted vaccinated population (Kislaya et al., 2022), it is not unreasonable to infer that the intrinsic pathogenicity in infected hamsters reflects the severity and outcome in infected humans to a meaningful extent. In summary, our multisystem investigations revealed that the growth rate in the human population, fusogenicity, and intrinsic pathogenicity of BA. ACKNOWLEDGMENTS We would like to thank all members belonging to G2P-Japan Consortium. We thank National Institute for Infectious Diseases, Japan, for providing clinical isolates of BA.2.75 and BA.2 and the technical assistance from The Research Support Center, Research Center for Human Disease Modeling, and Kyushu University Graduate School of Medical Sciences. We appreciate all data contributors, i.e., the authors and their originating laboratories responsible for obtaining the specimens, and their submitting laboratories for generating the genetic sequence and metadata and sharing via the GISAID Initiative, on which this research is based. The super-computing resource was provided by Human Genome Center at The University of Tokyo. This study was RESOURCE AVAILABILITY Lead contact Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Kei Sato (keisato@g.ecc.u-tokyo.ac.jp). Materials availability All unique reagents generated in this study are listed in the key resources table and available from the lead contact with a completed Materials Transfer Agreement. Data and code availability All databases/datasets used in this study are available from the GISAID database (https://www.gisaid.org) and GenBank database (https://www.gisaid.org; EPI_SET ID: EPI_SET_220804hy). The atomic coordinates and cryo-EM maps for the structures of the BA.2.75 S protein closed state 1 (PDB code: 8GS6, EMDB code: 34221), closed state 2 (EMDB code: 34222), 1-up state (EMDB code: 34223) and in complex with hACE2 (EMDB code: 34224) have been deposited in the Protein Data Bank (www.rcsb.org) and Electron Microscopy Data Bank (www.ebi.ac.uk/ emdb/). The computational codes used in the present study and the GISAID supplemental table for EPI_SET ID: EPI_SET_220804hy are available in the GitHub repository (https://github.com/TheSatoLab/Omicron_BA.2.75). Viral genome sequencing data for working viral stocks are available in the Sequence Read Archive (accession ID: PRJDB14324). 
Additional Supplemental Items are available from Mendeley Data at https://data.mendeley.com/datasets/nrs6t42wbs [https://doi. org/10.17632/nrs6t42wbs.1]. Any additional information required to reanalyze the data reported in this work paper is available from the lead contact upon request. Ethics statement All experiments with hamsters were performed in accordance with the Science Council of Japan's Guidelines for the Proper Conduct of Animal Experiments. The protocols were approved by the Institutional Animal Care and Use Committee of National University Corporation Hokkaido University (approval ID: 20-0123 and 20-0060). All protocols involving specimens from human subjects recruited Human serum collection Vaccine sera of fifteen individuals who had the BNT162b2 vaccine (Pfizer/BioNTech) (average age: 38 years, range: 24-48 years; 53% male) (Figures 2A-2D) were obtained at one month after the second dose, one month after the third dose, and four months after the third dose. Vaccine sera of fifteen individuals who had the BNT162b2 vaccine (Pfizer/BioNTech) for the first, second, and third doses of vaccination and mRNA-1273 (Moderna) for the fourth dose of vaccination (average age: 42 years, range: 32-56 years; 33% male) ( Figure 2D) were obtained at one month after the fourth dose. The details of the vaccine sera are summarized in Table S2. Convalescent sera were collected from fully vaccinated individuals who had been infected with BA.1 (16 2-dose vaccinated; 10-27 days after testing; average age: 48 years, range: 20-76 years, 44% male) ( Figure 2E), fully vaccinated individuals who had been infected with BA.2 (9 2-dose vaccinated and 5 3-dose vaccinated; 11-61 days after testing. n=14 in total; average age: 47 years, range: 24-84 years, 64% male) ( Figure 2F), and fully vaccinated individuals who had been infected with BA.5 (2 2-dose vaccinated, 17 3-dose vaccinated and 1 4-dose vaccinated; 10-23 days after testing. n=20 in total; average age: 51 years, range: 25-73 years, 45% male) ( Figure 2G). The SARS-CoV-2 variants were identified as previously described Yamasoba et al., 2022a). Sera were inactivated at 56 C for 30 minutes and stored at -80 C until use. The details of the convalescent sera are summarized in Table S2. Phylogenetic analyses For construction of an ML tree of Omicron lineages (BA.1-BA.5) sampled from South Africa and BA.2.75 (shown in Figure 1A), the genome sequence data of SARS-CoV-2 and its metadata were downloaded from the GISAID database (https://www.gisaid.org/) (Khare et al., 2021) on July 23, 2022. We excluded the data of viral strains with the following features from the analysis: i) a lack of collection date information; ii) sampling from animals other than humans, iii) >2% undetermined nucleotide characters, or iv) sampling by quarantine. From each viral lineage, 30 sequences were randomly sampled and used for tree construction, in addition to an outgroup sequence, EPI_ISL_466615, representing the oldest isolate of B.1.1 obtained in the UK. The viral genome sequences were mapped to the reference sequence of Wuhan-Hu-1 (GenBank accession number: NC_045512.2) using Minimap2 v2.17 (Li, 2018) and subsequently converted to a multiple sequence alignment according to the GISAID phylogenetic analysis pipeline (https:// github.com/roblanf/sarscov2phylo). The alignment sites corresponding to the 1-265 and 29674-29903 positions in the reference genome were masked (i.e., converted to NNN). 
Alignment sites at which >50% of sequences contained a gap or undetermined/ambiguous nucleotide were trimmed using trimAl v1.2 (Capella-Gutiérrez et al., 2009). Phylogenetic tree construction was performed via a three-step protocol: i) the first tree was constructed; ii) tips with longer external branches (Z score > 4) were removed from the dataset; and iii) the final tree was constructed. Tree reconstruction was performed by RAxML v8.2.12 (Stamatakis, 2014) under the GTRCAT substitution model. The node support value was calculated by 100 bootstrap analyses. Modeling the epidemic dynamics of SARS-CoV-2 lineages To quantify the spread rate of each SARS-CoV-2 lineage in the human population in India, we estimated the relative R e of each viral lineage according to the epidemic dynamics, calculated on the basis of viral genomic surveillance data. The data were downloaded from the GISAID database (https://www.gisaid.org/) on August 1, 2022. We excluded the data of viral strains with the following features from the analysis: i) a lack of collection date information; ii) sampling in animals other than humans; or iii) sampling by quarantine. We analyzed the datasets of the ten states of India where ≥20 sequences of either BA.2.75 or BA.5 were reported (i.e., Himachal Pradesh, Odisha, Haryana, Rajasthan, Maharashtra, Gujarat, West Bengal, Delhi, Tamil Nadu, and Telangana). BA.5 sublineages are summarized as "BA.5", and BA.2 sublineages with ≤400 sequences are summarized as "other BA.2". Subsequently, the dynamics of the top seven predominant lineages in India from April 24, 2022, to August 1, 2022, were analyzed. The number of viral sequences of each viral lineage collected each day in each state was counted, and the count matrix was constructed as an input for the statistical model below. We constructed a Bayesian hierarchical model to represent relative lineage growth dynamics with multinomial logistic regression as described in our previous study. In brief, we incorporated a hierarchical structure into the slope parameter over time, which enabled us to estimate the global average relative R e of each viral lineage in India as well as the average value for each state. Arrays in the model index over one or more indices: L = 7 viral lineages l; S = 10 states s; and T = 100 days t. The model is structured as follows: the explanatory variable was time, t, and the outcome variable was y_lst, which represented the count of viral lineage l in state s at time t. The slope parameter of lineage l in state s, β_ls, was generated from a Student's t distribution with hyperparameters of the mean, β_l, and the standard deviation, σ_l. As the distribution generating β_ls, we used a Student's t distribution with six degrees of freedom instead of a normal distribution to reduce the effects of outlier values of β_ls. In the model, the linear estimator μ_·st, consisting of the intercept α_·s and the slope β_·s, was converted to the simplex θ_·st, which represented the probability of occurrence of each viral lineage at time t in state s, based on the softmax link function defined as softmax(x) = exp(x) / Σ_i exp(x_i). y_lst is generated from θ_·st and the total count of all lineages at time t in state s according to a multinomial distribution. The relative R e of each viral lineage in each state (r_ls) was calculated from the slope parameter β_ls. Airway organoids An airway organoid (AO) model was generated according to our previous report.
Briefly, normal human bronchial epithelial cells (NHBEs, Cat# CC-2540, Lonza) were used to generate AOs. NHBEs were suspended in 10 mg/ml cold Matrigel growth factor reduced basement membrane matrix (Corning, Cat# 354230). Fifty microliters of cell suspension were solidified on prewarmed cell culture-treated multiple dishes (24-well plates; Thermo Fisher Scientific, Cat# 142475) at 37 C for 10 min, and then, 500 ml of expansion medium was added to each well. AOs were cultured with AO expansion medium for 10 days. For maturation of the AOs, expanded AOs were cultured with AO differentiation medium for 5 days. In experiments evaluating the antiviral drugs (see ''Antiviral drug assay using SARS-CoV-2 clinical isolates and AOs'' section below), AOs were dissociated into single cells and then were seeded into a 96-well plate. For verification of the sequences of SARS-CoV-2 working viruses, viral RNA was extracted from the working viruses using a QIAamp viral RNA mini kit (Qiagen, Cat# 52906) and viral genome sequences were analyzed as described above (see "Viral genome sequencing" section). Information on the unexpected substitutions detected is summarized in Table S6, and the raw data are deposited in the GitHub repository (https://github.com/TheSatoLab/Omicron_BA.2.75). Cytotoxicity assay The cytotoxicity of remdesivir, EIDD-1931 or nirmatrelvir ( Figure S3B) was assessed as previously described . Briefly, one day before the assay, AOs (10,000 cells) were dissociated and then seeded into a 96-well plate. The cells were cultured with the serially diluted antiviral drugs for 24 hours. The Cell Counting Kit-8 (Dojindo, Cat# CK04-11) solution (10 ml) was added to each well, and the cells were incubated at 37 C for 90 minutes. Absorbance was measured at 450 nm using the a Multiskan FC (Thermo Fisher Scientific). The assay of each compound was performed in quadruplicate, and the 50% cytotoxic concentration (CC 50 ) was calculated using Prism 9 software v9.1.1 (GraphPad Software). Pseudovirus infection Pseudovirus infection ( Figure 3A) was performed as previously described Motozono et al., 2021;Ferreira et al., 2021;Uriu et al., , 2022. Briefly, the amount of pseudoviruses prepared was quantified by the HiBiT assay using a Nano Glo HiBiT lytic detection system (Promega, Cat# N3040) as previously described (Ozono et al., , 2020. For measurement of pseudovirus infectivity, the same amount of pseudoviruses (normalized to the HiBiT value, which indicates the amount of HIV-1 p24 antigen) was inoculated into HOS-ACE2/TMPRSS2 cells, HEK293-ACE2 cells or HEK293-ACE2/ TMPRSS2 cells and viral infectivity was measured as described above (see ''Neutralization assay'' section). For analysis of the effect of TMPRSS2 on pseudovirus infectivity ( Figure S4A), the fold change of the values of HEK293-ACE2/TMPRSS2 to HEK293-ACE2 was calculated. QUANTIFICATION AND STATISTICAL ANALYSIS Statistical significance was tested using a two-sided Mann-Whitney U test, a two-sided Student's t test, a two-sided Welch's t test, or a two-sided paired t-test unless otherwise noted. The tests above were performed using Prism 9 software v9.1.1 (GraphPad Software). In the time-course experiments ( Figures 3H, 4A-4H, 5A, 5B, and 5G), a multiple regression analysis including experimental conditions (i.e., the types of infected viruses) as explanatory variables and timepoints as qualitative control variables was performed to evaluate the difference between experimental conditions thorough all timepoints. 
The initial time point was removed from the analysis. The P value was calculated by a two-sided Wald test. Subsequently, familywise error rates (FWERs) were calculated by the Holm method. These analyses were performed in R v4.1.2 (https://www.r-project.org/). In Figures 5D, 5F, and S5, photographs shown are the representative areas of at least two independent experiments by using four hamsters at each timepoint. In Figure S4D, photographs shown are the representatives of >20 fields of view taken for each sample.
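To make the lineage-dynamics modeling described in the methods above more concrete, the sketch below fits a simplified, non-hierarchical multinomial logistic model to a simulated count matrix and converts the fitted slopes into relative R e values. This is not the authors' Bayesian hierarchical implementation: the toy counts, the maximum-likelihood fit over a single "state", and the conversion of slopes to relative R e via a fixed mean generation time (2.1 days) are all assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code): per-lineage growth slopes from a
# multinomial logistic model, converted to relative Re under assumed settings.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
L, T = 3, 60                              # lineages x days (toy dimensions)
true_a = np.array([2.0, 0.5, 0.0])        # intercepts (arbitrary)
true_b = np.array([0.00, 0.07, 0.10])     # slopes per day (arbitrary)
days = np.arange(T)

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

theta = softmax(true_a + np.outer(days, true_b))                 # T x L daily frequencies
counts = np.vstack([rng.multinomial(100, p) for p in theta])     # simulated surveillance counts

def neg_log_lik(params):
    a, b = params[:L], params[L:]
    log_theta = np.log(softmax(a + np.outer(days, b)))
    return -(counts * log_theta).sum()

fit = minimize(neg_log_lik, x0=np.zeros(2 * L), method="L-BFGS-B")
b_hat = fit.x[L:]
b_rel = b_hat - b_hat[0]                  # slopes relative to the baseline lineage

# Hypothetical conversion: relative Re = exp(slope * mean generation time).
# The generation-time value (2.1 days) is an assumption for illustration only.
gen_time = 2.1
rel_Re = np.exp(b_rel * gen_time)
print("estimated relative Re versus lineage 0:", np.round(rel_Re, 3))
```

In the paper's actual model, the slope parameters are additionally given a hierarchical Student's t prior across Indian states, which this sketch omits for brevity.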
2022-10-19T13:12:37.961Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "db402e9d5f79c52b6f1d8232ecac92cd22f6f7f3", "oa_license": "CCBY", "oa_url": "http://www.cell.com/article/S1931312822005169/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "35e102eb5d2cc8c507f34d7585ea8227140d5da6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
26937034
pes2o/s2orc
v3-fos-license
A note on permanence of nonautonomous cooperative scalar population models with delays For a large family of nonautonomous scalar-delayed differential equations used in population dynamics, some criteria for permanence are given, as well as explicit upper and lower bounds for the asymptotic behavior of solutions. The method described here is based on comparative results with auxiliary monotone systems. In particular, it applies to a nonautonomous scalar model proposed as an alternative to the usual delayed logistic equation. In a previous paper [2], the same authors considered a simpler nonautonomous model where the delay functions τ k (t) satisfy all the conditions above, r(t) is continuous and satisfies r(t) ≥ r 0 , t ≥ 0, for some constant r 0 > 0, and α k , β are positive constants, 1 ≤ k ≤ m. Model (1.1) is a generalization of the DDE (1.3), obtained by considering a more general form of nonautonomous coefficients. The scalar DDE (1.3) has a positive equilibrium K * = 1 β m k=1 α k , which was proven in [2] to be a global attractor of all its positive solutions without any further restriction. In general, (1.1) does not have a positive equilibrium, so criteria for either extinction -when zero is a global attractor -or persistence or permanence play a crucial role. Here, we set some standard notations. For (1.1) and for the DDEs hereafter, C := C([−τ, 0]; R) (τ > 0) with the usual sup norm ϕ ∞ = sup θ∈[−τ,0] |ϕ(θ)| will be taken as the phase space. For an abstract DDE in C,ẋ If the solutions of initial value problems are unique, x(t; t 0 , ϕ) designates the solution oḟ Even if it is not stated, we shall always assume that f is smooth enough so that initial value problems associated with (1.4) have unique solutions, with continuous dependence on data. This is the case if f (t, ϕ) is uniformly Lipschitz continuous on the variable ϕ ∈ C on each compact subset of Ω. For C + := {ϕ ∈ C : ϕ(θ) ≥ 0 for −τ ≤ θ ≤ 0}, initial conditions (1.2) are written in the simpler form x 0 = ϕ with ϕ ∈ int (C + ). Cf. e.g. [5], for the concept of permanence given below, as well as for other standard definitions. A nice criterion for the permanence of (1.1) was established in [3], assuming only that the functions α k (t), β(t) are uniformly bounded from above and from below by positive constants. (1.6) The proof of this result in [3] is broken into several steps, and takes little advantage of the criterion established previously by the authors in [2]. Here, we present an alternative proof based on the fact that equations (1.1) and (1.3) satisfy the quasimonotone condition. In fact, we shall show later (cf. Theorem 3.2) that we need not assume that m k=1 τ k (t) > 0 for all t ≥ 0, and that initial conditions may be taken in the larger set C 0 := {ϕ ∈ C + : ϕ(0) > 0}. We recall that a scalar DDE (1.4) satisfies the quasimonotone condition (on the cone C + ) if for any t ≥ t 0 and ϕ, ψ ∈ C + with ϕ ≤ ψ and ϕ(0) = ψ(0), then f (t, ϕ) ≤ f (t, ψ) (cf. [7], p. 78). Under this condition, the semiflow is monotone. If d ϕ f (t, ϕ) exists, is continuous on [t 0 , ∞) × C + , and d ϕ f (t, ϕ)ψ ≥ 0 for ϕ, ψ ∈ C + and ψ(0) = 0, then (1.4) is cooperative; cooperative equations satisfy the quasimonotone condition. Here, we abuse the terminology, and refer to equations satisfying the quasimonotone condition as cooperative. The above method used to prove Theorem 1.1 motivated us to extend the same arguments to other scalar DDEs from population dynamics. 
The idea is to consider a broad class of cooperative differential equations with (possibly time-varying) delays and nonautonomous coefficients, and use the theory of monotone dynamical systems to obtain its permanence by comparison with two auxiliary differential equations with constant coefficients, for which a globally attractive positive equilibrium exists. As a particularly important example, we have in mind to apply this approach to establish the permanence of the following scalar population model: We emphasize that the study of the permanence of (1.8) (with the additional constraints ) was proposed in [3] as a topic for further research. Obviously, Theorem 1.1 applies only to the very concrete model (1.1), and therefore cannot be invoked to deal with (1.8). For N (t) = x(t), eq. (1.8) with m = 1 reads as which is (after a scaling) the nonautonomous version of where γ, µ, κ, τ > 0. Eq. (1.9) was derived by Arino et al. [1] as an alternative formulation for the classical delayed logistic equation, also known as Wright's equation, given byṄ where r denotes the intrinsic growth rate, K is the carrying capacity, and τ the maturation delay. The coefficients in this logistic equation are related to the ones in (1.9) by r = γ − µ (for γ, µ the birth and mortality rates, respectively) and K = (γ − µ)/κ. In Section 3, we shall study a class of scalar DDEs which includes (1.8) as a particular case. As another illustration of our technique, the permanence of a nonautonomous Nicholson's blowflies equation will also be studied. Auxiliary results on stability of equilibria In this section, we address the global attractivity of nonnegative equilibria for a family of nonautonomous cooperative scalar DDEs. We start with an auxiliary lemma from [4]. Consider now the family of scalar-delayed population models given bẏ are continuous with R(0, . . . , 0) = D(0) = 0. Moreover, let R, D be smooth enough in order to ensure uniqueness of solutions. The condition R(0, . . . , 0) = 0 is not essential for our analysis, but it corresponds to the general framework in population dynamics models, since zero should be a steady state. We note that the particular caseẋ(t) = R(x(t − τ )) − D(x(t)), t ≥ 0, was studied by Arino et al [1]. , and observe that f satisfies Smith's quasimonotone condition. As before, due to biological reasons we are only interested in positive solutions. Rather than initial conditions in int(C + ), we shall consider x 0 = ϕ, with ϕ in the larger set of admissible initial conditions It is clear that for t ≥ 0 and ϕ ∈ C + with ϕ(0) = 0, then f (t, ϕ) ≥ 0. This implies that solutions of (2.1) with initial conditions x 0 ∈ C + are nonnegative [7]. For x(t) = x(t; ϕ), ϕ ∈ C 0 , x(t) satisfies the ordinary differential inequalityẋ(t) ≥ −ρ(t)D(x(t)), thus conditions D(0) = 0 and x(0) = ϕ(0) > 0 yield x(t) > 0 whenever it is defined. In what follows, concepts as permanence and global asymptotic stability always refer to the solutions with initial conditions x 0 = ϕ ∈ C 0 . Next result is a generalization of Theorem 3.3 in [1], and its proof can be found in the Appendix. For related results, see [4] and Chapter 4 of Kuang's monograph [5]. The same proof works with minimal changes for equations with distributed delays, rather than discrete delays, as stated below. Then, there are at most two nonnegative equilibria. If zero is the unique equilibrium, then it is GAS; if there is a positive equilibrium x * , then x * is GAS (in the set of all solutions with initial conditions x 0 = ϕ ∈ C 0 ). 
A similar version of this corollary for equations with distributed delays could also be stated. Main results Based on Theorem 2.1, we now extend the arguments used in our proof of Theorem 1.1 to a larger class of nonautonomous cooperative models. where m ∈ N, R(t, y), D(t, x), τ k (t) are continuous with 0 ≤ τ k (t) ≤ τ , for t, x ∈ R + , y ∈ R m + , 1 ≤ k ≤ m, and assume that: (H3) there exist K l , K u > 0 such that Then (3.1) is permanent (in C 0 ); to be more precise, all positive solutions of (3.1) satisfy Proof. As for the first part of the proof of Theorem 1.1, we compare the solutions x(t) = x(t; ϕ) (ϕ ∈ C 0 ) of equation (3.1) with the solutions u(t) = u(t; ϕ) and v(t) = v(t; ϕ) of the cooperative equationṡ respectively. By Theorem 5.1.1 in [7] we deduce that v(t) ≤ x(t) ≤ u(t) for t ≥ 0, whereas Theorem 2.1 applied to these equations implies that v(t) → K l , u(t) → K u as t → ∞. This implies (3.2). The same technique and Corollary 2.1 lead to the corollary below. Corollary 3.1. Let m ∈ N, r k (t, y), d(t, x), τ k (t) be continuous with 0 ≤ τ k (t) ≤ τ , for t, x ≥ 0, 1 ≤ k ≤ m, and assume that: (i) there are (locally Lipschitz) continuous functions r l k , r u k : Then, the equatioṅ We finally study the permanence of (1.8), with less constraints than the ones proposed in [3]. 7) For the particular case of (3.3) with β k ≡ 0, 1 ≤ k ≤ m, i.e., where α k , µ, κ and τ k are as above, the lower bound in (3.5) can be taken as (3.9) Proof. In what follows, we set 0 ≤ τ k (t) ≤ τ for t ≥ 0, k = 1, . . . , m, and use the notations for f replaced by α k , β k , µ or κ. By assumption, the functions α k , κ are bounded and bounded away from zero, and β k , µ are positive and bounded. Note that the cases µ = 0 or β k = 0, for all or some k's, 3) satisfies the quasimonotone condition. Next, denote respectively. To prove the uniform upper bound in (3.5)-(3.6), we reason along the lines of the proof of Theorem 1.1, so some details are omitted. Take a sequence (t n ) with t n → ∞,ẋ(t n ) → 0 and x(t n ) → x. For any ε > 0 small and n large, we derivė Taking limits n → ∞, ε → 0 + , this inequality yields For the lower bound m 0 in (3.7), we note that K l is the unique x > 0 such that r l (x) − d u (x) = 0. Since r l (0) − d u (0) = c 0 and |(r l − d u ) ′ (x)| ≤ m k=1 α k β k + κ = c 1 , by the Mean Value theorem we get K l ≥ c 0 /c 1 . In the case of (3.8), we now take a sequence (s n ) with s n → ∞,ẋ(s n ) → 0 and x(s n ) → x. For any ε > 0 small and n large, we derivė Taking limits n → ∞, ε → 0 + , this leads to x ≥ m 0 for m 0 as in (3.9). Clearly, the method presented in this note applies to other scalar nonautonomous delayed population models, as illustrated in the next example. Consider the scalar DDĖ where β k , τ k , d are continuous, bounded and nonnegative on [0, ∞). Eq. (3.10) is a generalization of the well-known Nicholson's equationẋ(t) = −dx(t) + βx(t − τ )e −x(t−τ ) (d, β, τ > 0). The autonomous version of (3.10) reads asẋ With d > 0, β k ≥ 0 and β := m k=1 β k > 0, Liz et al. [6] proved that if 1 < β/d ≤ e 2 , then the positive equilibrium x * := log(β/d) of (3.11) is a global attractor of all positive solutions. Although (3.11) is not cooperative, if 1 < β/d ≤ e then all solutions satisfy lim t→∞ x(t) = x * ≤ 1, so (3.11) has a cooperative large-time behavior, since h(x) := xe −x is increasing on [0, 1]. Under some further constraints on the coefficients β k (t), d(t), we shall take advantage of this monotonicity to study the permanence of (3.10). 
Next, the better estimates in (3.13) are derived by using the technique in the proofs of Theorems 1.1 and 3.2. For any positive solution of (3.10), set x = lim sup t→∞ x(t), x = lim inf t→∞ x(t). Take a sequence t n → ∞ withẋ(t n ) → 0, x(t n ) → x. Fix ε > 0. For n large, This example also shows an obvious limitation of our method: since it relies on comparative results with cooperative equations, it cannot be invoked to deal with (3.10) in the case of the upper bound in (3.12) given by m k=1 β k < e 2 d. On the contrary, an advantage of the method is that it can be easily extended to deal with n-dimensional cooperative DDEs.
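As a numerical illustration of the permanence bounds discussed above for the Nicholson-type equation, the following sketch integrates the autonomous equation x'(t) = -d x(t) + beta x(t - tau) e^{-x(t - tau)} with a simple fixed-step Euler scheme and reports the long-run range of the solution. The parameter values, step size, and constant initial history are illustrative assumptions; with 1 < beta/d <= e the solution is expected to approach the positive equilibrium x* = log(beta/d).

```python
# Minimal numerical sketch (not from the paper): Euler integration of the autonomous
# Nicholson equation x'(t) = -d*x(t) + beta*x(t - tau)*exp(-x(t - tau)).
# Parameters, step size, and the constant positive initial history are assumptions.
import numpy as np

d, beta, tau = 1.0, 2.5, 1.0          # chosen so that 1 < beta/d <= e
h = 0.001                              # Euler step size
lag = int(round(tau / h))              # number of steps per delay interval
T_end = 60.0
n = int(T_end / h)

# Index i corresponds to time t = (i - lag)*h, so the first lag+1 entries
# hold the constant initial history on [-tau, 0].
x = np.empty(n + 1)
x[:lag + 1] = 0.3

for k in range(lag, n):
    x_delayed = x[k - lag]
    x[k + 1] = x[k] + h * (-d * x[k] + beta * x_delayed * np.exp(-x_delayed))

x_star = np.log(beta / d)              # positive equilibrium log(beta/d)
print(f"equilibrium log(beta/d) = {x_star:.4f}")
print(f"final value             = {x[-1]:.4f}")
print(f"long-run range          = [{x[n//2:].min():.4f}, {x[n//2:].max():.4f}]")
```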
2014-04-09T18:03:22.000Z
2014-04-09T00:00:00.000
{ "year": 2014, "sha1": "a9210ba3cb0ce04f5ab0dba5b2dec8d341ecf6de", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1404.2566", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a9210ba3cb0ce04f5ab0dba5b2dec8d341ecf6de", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
226240189
pes2o/s2orc
v3-fos-license
Surgical treatment of 125 cases with congenital diaphragmatic eventration in a single institution Background This study sought to investigate the clinical characteristics of congenital diaphragmatic eventration (CDE) and to compare the efficacy of thoracoscopy and traditional open surgery in infants with congenital diaphragmatic eventration. Methods We retrospectively analyzed the clinical data of 125 children with CDE (90 boys, 35 girls; median age 12.2 months, range 1 hour to 7 years; body weight 1.99-28.5 kg, median body weight 7.87 ± 4.40 kg) admitted to our hospital over the past 10 years, and statistically analyzed their clinical manifestations and surgical methods. Results 108 children in this group underwent surgery, of whom 67 underwent open surgery and 41 underwent thoracoscopic diaphragmatic plication. 107 patients recovered well postoperatively, while 1 patient died of respiratory distress after surgery. Over a follow-up of 1-9.5 years, the 107 surviving patients had significantly improved preoperative symptoms. During follow-up, the location of the diaphragm was normal and no paradoxical movement was observed. Eleven of the 17 children who did not undergo surgical treatment showed no decrease in diaphragm position after 1-6 years of follow-up. In the thoracoscopy group, the operation time, intraoperative blood loss, chest drainage time, postoperative mechanical ventilation time, postoperative hospital stay and postoperative CCU admission time were all better than those in the open group; the differences between the two groups were statistically significant (P < 0.05). Conclusions The clinical symptoms of congenital diaphragmatic eventration vary in severity, and patients with severe symptoms should be operated on. Both thoracoscopic diaphragmatic plication and traditional open surgery can effectively treat congenital diaphragmatic eventration, but compared with open surgery, thoracoscopic diaphragmatic plication has the advantages of shorter operation time, less trauma and faster recovery, so it should be the first choice for children with congenital diaphragmatic eventration. Background CDE is considered to result from a congenital anomaly during the formation of the pleuroperitoneal membrane, as in Bochdalek diaphragmatic hernia, but occurring at a later stage of embryonal growth [1]. CDE is a rare pathology that occurs in 0.02 to 0.07/1,000 births, affecting mostly males in 60 to 80% of cases. It accounts for 5%-7% of all diaphragm diseases [2]. Because infant ribs are horizontal and the intercostal muscles are weak, breathing movement depends mainly on abdominal breathing driven by the up-and-down movement of the diaphragm. Infants and children with CDE have an abnormally elevated diaphragm, which often leads to collapse of the affected alveoli or atelectasis, affecting lung ventilation and lung development. Therefore, infants and children with CDE often have symptoms such as dyspnea, repeated respiratory infections, low weight, and stunting. Severe cases may manifest as respiratory distress syndrome, seriously affecting the quality of life of children. Traditionally, diaphragmatic plication has been performed by thoracotomy or laparotomy, particularly in symptomatic smaller children [3]. However, advancements in endoscopic surgery have allowed diaphragmatic eventration to be treated quickly and safely. Here, we present our experience with different surgical procedures to treat 125 cases of CDE.
Material and Methods We retrospectively analyzed the clinical data of 125 children with congenital diaphragmatic eventration admitted to the Department of Cardiothoracic Surgery, Children's Hospital of Chongqing Medical University from January 2010 to January 2020. Inclusion criteria: children with CDE who had dyspnea, repeated respiratory tract infection or other symptoms, and in whom chest X-ray, CT or gastrointestinal radiography clearly demonstrated diaphragmatic eventration. Exclusion criteria: children with acquired diaphragmatic eventration associated with surgery. Open surgery group: thoracotomy for right diaphragmatic eventration and laparotomy for left diaphragmatic eventration. Through the thoracic or abdominal approach, we excised the weak portion of the diaphragm and closed it with interrupted non-absorbable sutures so that the cut diaphragm overlapped in a shingled shape, strengthening the weak area of the diaphragm. Thoracoscopic group: using the three-port method, a 5 mm thoracoscope was placed through a trocar at the lower edge of the scapular tip, and two working ports were made in the fourth intercostal space on either side of the trocar. Continuous barbed sutures placed from the outside to the inside were used to fold the diaphragm into a shingled shape to strengthen it. See Figure 1 for details. Statistical analyses All collected data were statistically analyzed using SPSS 22.0 software. Continuous variables were expressed as mean ± standard deviation, and categorical variables were expressed as proportions. Continuous variables were compared between the two groups with the independent-samples t-test, and count data were compared with Fisher's exact test. A p value of <0.05 was considered statistically significant. Results The study included 125 children diagnosed with CDE. There were 90 males (72%) and 35 females (28%), aged 1 h-7 years (median age: 12.2 months), with body weight of 1.99-28.5 kg (7.87 ± 4.40 kg); the eventration was on the right in 78 children (62.4%) and on the left in 47 children (37.6%), with no bilateral cases. There were 79 children with associated malformations in this group, mainly including 19 cases of congenital heart disease, 16 cases of congenital pulmonary dysplasia, 8 cases of pectus excavatum, 4 cases of hiatal hernia, and 3 cases of pectus carinatum. Clinical symptoms of CDE were reported for 108 of the 125 cases. The main symptoms of CDE in infants included cough and wheezing, dyspnea, recurrent respiratory tract infections, milk refusal and vomiting, and arrhythmia. Seventeen cases were asymptomatic or were discovered incidentally on routine physical examination. The clinical symptoms of CDE are shown in Table 1. All 125 cases had positive findings on chest X-ray; 39 cases (31.2%) were diagnosed with the addition of chest CT, 32 cases (25.6%) were diagnosed by combined chest X-ray and digestive tract radiography, and 99 cases were found to have an elevated diaphragmatic shadow. All cases were confirmed as CDE after surgery, and the position of the CDE is presented in Table 2. 41 cases underwent transthoracic diaphragm plication and 26 cases underwent transabdominal diaphragm plication. Among them, 9 cases had been diagnosed as CDE before operation; in these patients the stomach, duodenum, spleen and part of the liver were found to have herniated into the thoracic cavity during the operation, and diaphragmatic hernia was diagnosed after the operation. 
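As a rough illustration of the group comparison described in the Statistical analyses paragraph above, the sketch below runs an independent-samples t-test on a continuous variable and Fisher's exact test on a 2×2 count table, with SciPy standing in for SPSS 22.0. All numbers in it are made-up placeholders, not data from this study.

```python
# Minimal sketch of the two-group comparison described in the Methods.
# Operation times and event counts below are hypothetical, for illustration only.
import numpy as np
from scipy import stats

# Hypothetical operation times (minutes) for the two surgical groups.
open_group = np.array([95, 110, 102, 120, 98, 105, 115, 100])
thoracoscopy_group = np.array([70, 78, 82, 75, 80, 72, 77, 74])

# Continuous variable: mean +/- SD and independent-samples t-test.
for name, grp in [("open", open_group), ("thoracoscopy", thoracoscopy_group)]:
    print(f"{name}: {grp.mean():.1f} +/- {grp.std(ddof=1):.1f} min")
t_stat, p_val = stats.ttest_ind(open_group, thoracoscopy_group, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}  (significant if p < 0.05)")

# Count data: 2x2 table (events / non-events per group), Fisher's exact test.
table = [[3, 64],   # open group (hypothetical)
         [1, 40]]   # thoracoscopy group (hypothetical)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```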
We analyzed the data on the relevant surgical indicators of the two groups. In the thoracoscopy group, the operation time, intraoperative blood loss, chest drainage time, postoperative mechanical ventilation time, postoperative hospital stay and postoperative CCU admission time were better than those in the open group, and the difference between the two groups was statistically significant (p < 0.05). There was no statistically significant difference between the two groups in the descending distance of the diaphragm (P > 0.05). See Table 3 for details. Patients were followed up radiologically every year to document the position of the diaphragm, and any symptoms were also evaluated. In the open surgery group, 1 case died of respiratory distress after the operation. Almost all respiratory and digestive symptoms disappeared within 1 month after the operation, and none of the patients had any symptom 3 years after surgery. Over a follow-up of 1-9.5 years, 107 patients had significantly improved preoperative symptoms. Eleven of the 17 children who did not undergo surgical treatment showed no significant decrease in diaphragm position after 1-6 years of follow-up, and 6 patients were lost to follow-up. The comparison of chest radiographs before and after operation is shown in Figure 2. Discussion CDE is characterized by incomplete muscularization of the diaphragm. The resulting abnormally elevated diaphragm causes abnormal movement of the affected hemidiaphragm during respiration. It can occur locally or affect the entire diaphragm. In this study, there were 90 males (72%) and 35 females (28%), with 78 children (62.4%) affected on the right side and 47 children (37.6%) on the left side. We observed that the incidence was higher in male children, and that the incidence on the right side was higher than that on the left side. CDE can be associated with other developmental defects; associated comorbidities include congenital hypoplastic lung, congenital heart disease, pectus excavatum, cleft palate, hypospadias, cryptorchidism, and congenital torticollis [4]. 77 patients in this group had other associated malformations; congenital heart disease (19, 15.2%) and congenital hypoplastic lung (16, 12.8%) were the main relevant abnormalities in this study. From these facts it is difficult to determine whether CDE is accompanied by the other malformations or the other malformations are accompanied by this disease. The numerous accompanying malformations suggest that the cause of this teratology is difficult to explain by a single etiology, and may be similar to the causes of other congenital malformations. The main symptom of CDE arises from compression of the lower lobe of the lung by the elevated intra-abdominal organs. The compression can also shift the mediastinum toward the healthy side and correspondingly reduce lung function on the healthy side. In unilateral CDE, lung capacity and total lung capacity are reduced by 20%-30% [5]. Bilateral diaphragmatic eventration reduces lung function even more seriously, especially in the supine position [6]. The treatment principle of CDE is to restore the normal anatomical position and tension of the diaphragm; the method is to strengthen the weak diaphragm, and the goal is to maintain normal lung volume and normal lung ventilation. Whether asymptomatic patients need surgical correction has long been controversial. In this group, of the 17 children who did not undergo surgical treatment, 11 were followed for 1-6 years and showed no decrease in diaphragm position. 
Therefore, we believe that symptomatic children need timely surgical treatment. The study by Yazici M et al. likewise considered that symptomatic children usually require surgery [7]. We therefore believe that the indications for surgery are as follows: relative to the normal position, the diaphragm is displaced upwards by three intercostal spaces or more; the diaphragmatic eventration causes obvious compression of the lung on the affected side, with obvious shortness of breath, wheezing and other symptoms of respiratory distress; frequent lung infections, hypoxemia, and even abnormal (paradoxical) respiratory movement; or, during follow-up, the diaphragm continues to rise and the eventration is aggravated. The traditional treatment of CDE is diaphragmatic plication performed either by laparotomy or thoracotomy. However, with the development of minimally invasive techniques, thoracoscopy has gradually been applied in the treatment of CDE [8][9][10]. We believe that children with right diaphragmatic eventration and intrapulmonary malformation should be corrected through the thoracotomy approach as the first choice, because it is not affected by the intestinal canal, gives full exposure, is easy to operate, allows the phrenic nerve to be seen, and reduces postoperative intestinal paralysis. Laparotomy is suitable for children with left diaphragmatic eventration, for those in whom diaphragmatic eventration cannot be distinguished from diaphragmatic hernia, and when gastrointestinal malformation is suspected. Because the heart lies in the left chest, left thoracotomy carries a higher risk, whereas a subcostal incision is conducive to repair of a hernia and to discovery of possible intestinal malformations. Nevertheless, in the open group we used thoracotomy for 4 children with left-sided diaphragmatic eventration and achieved satisfactory clinical results. Therefore, we believe that the choice of approach is based mainly on the characteristics of the patient's diaphragmatic disease and on the surgeon's preference for and familiarity with the approach. The preoperative diagnosis of 9 children in this group was unclear, and diaphragmatic hernia and other gastrointestinal tract malformations were found during the operation, so the choice of preoperative approach is particularly important. We resected the weak portion of the diaphragm via the thoracic or abdominal route and sutured the diaphragm with interrupted non-absorbable sutures to imbricate the cut diaphragm and strengthen the weak area. The advantage of this technique is that it increases the tension of the diaphragm and distributes the tension evenly throughout the repaired area. With the development of minimally invasive techniques, thoracoscopy is increasingly used in the treatment of CDE. We compared the effect of open surgery and thoracoscopy in the treatment of CDE in children. The operation time, chest drainage time, postoperative mechanical ventilation time, postoperative hospital stay and postoperative CCU admission time in the thoracoscopy group were shorter than those in the open group, and the difference between the two groups was statistically significant (P < 0.05). We consider the possible reasons to be as follows: thoracoscopic surgery uses a three-port method, so it is less traumatic and less prone to bleeding, and the children recover faster after the operation; the thoracoscopic technique was well practiced, with good cooperation between operator and assistant; and we used barbed suture sewn continuously without knots, which greatly shortens the operation time and is clearly better than open surgery. 
In this group, for the 41 children without other thoracoabdominal malformations requiring correction, we used thoracoscopic diaphragm plication. Various techniques of diaphragmatic plication have been employed. All techniques aim to reduce the redundant diaphragmatic surface and lower the diaphragmatic dome. Various suturing methods have been used, including interrupted horizontal mattress sutures, multiple parallel U sutures, figure-of-eight sutures, continuous running sutures, and endostaplers. Both non-absorbable and absorbable sutures have been used. We used barbed suture to sew the diaphragm from the outside to the inside in a continuous imbricated fashion to strengthen the diaphragm. Combining the literature with our own experience, compared with ordinary absorbable suture, continuous suturing of the diaphragm with barbed suture has the following advantages: starting from the second stitch, the suture does not easily slip after being tightened; each stitch is tightened as it is placed and no knot is needed during the suturing process, which greatly shortens the operation time; the diaphragm is sutured continuously so that it stretches evenly from the center in all directions with a uniform tension distribution, which makes the movement of the diaphragm more coherent, avoids ischemia from over-tight suturing, and prevents recurrence from suture loosening; and the barbed suture gives a tight closure with less bleeding, requires no knots, is absorbable, and avoids knot reaction and residual suture material. There is a view that continuous suturing may compromise the security of the repair and that loosening of the knot may affect the plication of the entire diaphragm, but there is no evidence to support this view [11]. A. Parlak et al. adopted a double purse-string suture method to reinforce the diaphragm and achieved good clinical results [12]. The usual advantages of thoracoscopy, such as reduced postoperative pain, satisfactory appearance and rapid recovery, also apply to our operation, and thoracoscopic plication should therefore be the preferred treatment for CDE. Figure 1
2020-07-23T09:09:16.169Z
2020-07-16T00:00:00.000
{ "year": 2020, "sha1": "14f30b45d1b0986504810b8b752437e40bd71360", "oa_license": "CCBY", "oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/s12893-020-00928-z", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "b8f7fd417c749a2bd39380b415b5a5af2f2e0bf5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119406549
pes2o/s2orc
v3-fos-license
Common Origin of Dirac Neutrino Mass and Freeze-in Massive Particle Dark Matter Motivated by the fact that the origin of tiny Dirac neutrino masses via the standard model Higgs field and non-thermal dark matter populating the Universe via freeze-in mechanism require tiny dimensionless couplings of similar order of magnitudes $(\sim 10^{-12})$, we propose a framework that can dynamically generate such couplings in a unified manner. Adopting a flavour symmetric approach based on $A_4$ group, we construct a model where Dirac neutrino coupling to the standard model Higgs and dark matter coupling to its mother particle occur at dimension six level involving the same flavon fields, thereby generating the effective Yukawa coupling of same order of magnitudes. The mother particle for dark matter, a complex scalar singlet, gets thermally produced in the early Universe through Higgs portal couplings followed by its thermal freeze-out and then decay into the dark matter candidates giving rise to the freeze-in dark matter scenario. Some parts of the Higgs portal couplings of the mother particle can also be excluded by collider constraints on invisible decay rate of the standard model like Higgs boson. We show that the correct neutrino oscillation data can be successfully produced in the model which predicts normal hierarchical neutrino mass. The model also predicts the atmospheric angle to be in the lower octant if the Dirac CP phase lies close to the presently preferred maximal value. I. INTRODUCTION Although the non-zero neutrino mass and large leptonic mixing are well established facts by now [1], with the present status of different neutrino parameters being shown in global fit analysis [2,3] On the other hand, in cosmic frontier, we have significant amount of evidences [4][5][6][7][8] suggesting the presence of non-baryonic form of matter, or the so called Dark Matter (DM) in large amount in the present Universe. According to the latest cosmology experiment Planck [4], almost 26% of the present Universe's energy density is in the form of DM while only around 5% is the usual baryonic matter leading the rest of the energy budget to mysterious dark energy. Quantitatively, the DM abundance at present is quoted as 0.1172 ≤ Ω DM h 2 ≤ 0.1226 at 67% C.L. [4] where Ω DM = ρ DM /ρ cr is the DM density parameter with ρ cr = Since the standard model (SM) of particle physics fails to address the problem of neutrino mass and dark matter, several beyond standard model (BSM) proposals have been put forward in order to accommodate them. While seesaw mechanism [9][10][11][12] remains the most popular scenario for generating tiny neutrino masses, the weakly interacting massive particle (WIMP) paradigm has been the most widely studied dark matter scenario. In this framework, a dark matter candidate typically with electroweak scale mass and interaction rate similar to electroweak interactions can give rise to the correct dark matter relic abun-dance, a remarkable coincidence often referred to as the WIMP Miracle. Now, if such type of particles whose interactions are of the order of electroweak interactions really exist then we should expect their signatures in various DM direct detection experiments where the recoil energies of detector nuclei scattered by DM particles are being measured. However, after decades of running, direct detection experiments are yet to observe any DM-nucleon scattering [13][14][15]. 
The absence of dark matter signals from the direct detection experiments have progressively lowered the exclusion curve in its mass-cross section plane. Although such null results could indicate a very constrained region of WIMP parameter space, they have also motivated the particle physics community to look for beyond the thermal WIMP paradigm where the interaction scale of DM particle can be much lower than the scale of weak interaction i.e. DM may be more feebly interacting than the thermal WIMP paradigm. One of the viable alternatives of WIMP paradigm, which may be a possible reason of null results at various direct detection experiments, is to consider the non-thermal origin of DM [16]. In this scenario, the initial number density of DM in the early Universe is negligible and it is assumed that the interaction strength of DM with other particles in the thermal bath is so feeble that it never reaches thermal equilibrium at any epoch in the early Universe. In this set up, DM is mainly produced from the out of equilibrium decays of some heavy particles in the plasma. It can also be produced from the scatterings of bath particles, however if same couplings are involved in both decay as well as scattering processes then the former has the dominant contribution to DM relic density over the latter one [16][17][18]. The production mechanism for non-thermal DM is known as freeze-in and the candidates of non-thermal DM produced via freeze-in are often classified into a group called Freeze-in (Feebly interacting) massive particle (FIMP). For a recent review of this DM paradigm, please see [19]. Similarly, the popular seesaw models predict Majorana nature of neutrinos though the results from 0νββ experiments have so far been negative. Although such negative results do not necessarily prove that the light neutrinos are of Dirac nature, it is nevertheless suggestive enough to come up with scenarios predicting Dirac neutrinos with correct mass and mixing. This has led to several proposals that attempt to generate tiny neutrino masses in a variety of ways , some of which also accommodate the origin of WIMP type dark matter simultaneously. The present article is motivated by the coincidence that the origin of Dirac neutrino masses as well as FIMP dark matter typically require very small dimensionless couplings ∼ 10 −12 [16]. In the neutrino sector, such couplings can generate 0.1 eV Dirac neutrino mass through neutrino coupling to the standard model like Higgs. On the other hand, in the dark sector, such tiny couplings of the dark matter particle with the mother particle makes sure that it gets produced non-thermally through the freeze-in mechanism. There have been several attempts where the origin of such feeble interactions of DM with the visible sector is generated via higher dimensional effective operators [16,49,50]. Very recently, there has been attempt to realise such feeble interactions naturally at renormalisable level also [51]. The coincidence between such tiny FIMP couplings and Dirac neutrino Yukawas was also pointed out, mostly in supersymmetric contexts, by the authors of [16,[52][53][54][55][56]. Here, we consider an A 4 flavour symmetric model 1 where neutrino Dirac mass as well as FIMP coupling with its mother particle get generated through dimension six operators involving the same flavon fields. A global unbroken lepton number symmetry is assumed that forbids the Majorana mass terms of singlet fermions. 
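As a quick back-of-the-envelope check of the stated coincidence (not a calculation taken from the paper), the snippet below shows that a flavon-to-cutoff ratio of about 10⁻⁶, squared by the dimension-six operator, simultaneously gives an effective coupling of order 10⁻¹² and a sub-eV Dirac mass with the standard model Higgs vev. The particular values of v_S and Λ are assumptions chosen only for illustration.

```python
# Back-of-the-envelope check (illustrative numbers, not the paper's benchmark):
# a dimension-six operator suppressed by (<phi_S>/Lambda)^2 ~ 1e-12 gives both a
# sub-eV Dirac neutrino mass and a FIMP-sized coupling of the DM to its mother.
v_higgs = 174.0            # GeV, SM Higgs vev
v_S     = 1.0e6            # GeV, assumed flavon vev (illustration only)
Lam     = 1.0e12           # GeV, assumed cutoff scale (illustration only)
y       = 1.0              # O(1) dimensionless coupling

eff_yukawa = y * (v_S / Lam) ** 2          # effective coupling ~ v_S^2 / Lambda^2
m_nu_eV = eff_yukawa * v_higgs * 1.0e9     # Dirac mass m_nu ~ Y_eff * v, in eV

print(f"effective Yukawa  Y_eff ~ {eff_yukawa:.1e}")   # ~1e-12
print(f"Dirac neutrino mass     ~ {m_nu_eV:.2f} eV")   # ~0.17 eV, sub-eV as required
```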
We show that both freeze-in and freezeout formalisms are important in generating the dark matter relic in our scenario. The mother particle, which is long lived in this model and decays only to the dark matter at leading order, first freezes out and then decays into the dark matter particle. Therefore, the final abundance of dark matter particle depends upon the mother particle couplings to the standard model particles which can be probed at different ongoing experiments. Interestingly, we find that ongoing experiments like the large hadron collider (LHC) can probe some part of the parameter space which can give rise to sizeable invisible decay of SM like Higgs boson into the long lived mother particles. We also show that the correct neutrino oscillation data can be reproduced in some specific vacuum alignments of the flavon fields indicating the predictive nature of the model. The model also predicts normal hierarchical neutrino mass ordering and interesting correlations between neutrino parameters requiring the atmospheric mixing angle to be in the lower octant for maximal Dirac CP phase. The remaining part of this letter is organised as follows. In section II we discuss our A 4 flavour symmetric models of Dirac neutrino mass and FIMP dark matter and discuss the consequences for neutrino sector for some benchmark scenarios. In section III, we discuss the calculation related to relic abundance of dark matter and then finally conclude in section 1 Similar exercise can be carried out using other discrete groups like A 5 , S 4 , ∆(27) etc. However, here we adopt A 4 flavour symmetry as it is the smallest group having a three dimensional representation which in turn helps to realise neutrino mixing in an economical way. II. A 4 MODEL FOR DIRAC NEUTRINOS AND FIMP DARK MATTER We first consider a minimal model based on A 4 flavour symmetry that can give rise to tiny given in Table I. The construction here includes two A 4 triplet flavons, φ T and φ S , which where Λ is the cut-off scale of the theory. Here and subsequently all the y's stand for the respective coupling constants, unless otherwise mentioned. The leading contributions to the charged lepton mass viaLH i (where i are the RH charged leptons) are not allowed due to the specific A 4 symmetry. When the triplet flavon φ T is present in the model it leads to an A 4 invariant dimension five operator as given in equation (1) which subsequently generates the relevant masses after flavons and the SM Higgs field acquire non-zero vacuum expectation value (vev)'s. Using the A 4 product rules given in appendix A and taking generic triplet Here v denotes the vev of the SM Higgs doublet H and ω = e i2π/3 is the cube root of unity. This mass matrix can be diagonalised by using the magic matrix U ω , given by Now, as indicated earlier, the complete A 4 × Z 4 × Z 4 discrete symmetry plays an instrumental role in generating tiny Dirac neutrino mass and mixing at dimension six level. Any contribution to the neutrino mass (throughLHν R ) is forbidden up to dimension five level in the present set-up. Since charged lepton masses are generated at dimension five level, it naturally explains the observed hierarchy between charged and neutral lepton masses. Presence of the A 4 triplet flavon φ S generates the required dimension six operator for neutrino mass and mixing. 
The relevant Yukawa Lagrangian for neutrino sector is given by Here the subscripts S and A stands for symmetric and anti-symmetric parts of A 4 triplets products (see Appendix A for details) in the S diagonal basis adopted in the analysis and 1, 1 and 1 stand for three singlets of A 4 . For the most general vev alignment φ S = , the effective mass matrix for neutrinos can be written as where the diagonal elements are given by Now, the symmetric part originated from A 4 triplet products are given by As seen above, when neutrinos are Dirac fermions instead of Majorana, then there is an additional anti-symmetric contribution in the neutrino mass matrix which remains absent in the Majorana case due to symmetric property of the Majorana mass term. This additional contribution can in fact explain nonzero θ 13 in a more economical setup [44,45,57] compared to the one for Majorana neutrinos [58]. In the mass matrix given by equation (6) these antisymmetric contributions are given by The most general mass matrix for Dirac neutrinos given in equation (6) can be further simplified depending upon the specific and simpler vev alignments of the triplet flavon φ S . Here we briefly discuss a few such possible alignments analytically and then restrict ourselves to one such scenario for numerical analysis which can explain neutrino masses and mixing in a minimal way. Note that such vev alignments demand a complete analysis of the scalar sector of the model and can be obtained in principle, from the minimisation of the scalar potential [59][60][61][62][63]. For simplicity, when we consider the vev alignment of φ S to be φ S = (v S , v S , v S ) from equation (7)- (13), we obtain s 32 = s 31 = s 21 = 2y s vv 2 S /Λ 2 = s (say) and a 32 = a 31 = a 21 = 2y a vv 2 S /Λ 2 = a (say). Hence the neutrino mass matrix takes the form where For even more simplified 2 scenarios of vev alignments φ S = (v S , v S , 0) and φ S = (0, v S , v S ) the neutrino mass matrices are given by respectively, where the elements are defined as s 21 = 2y s vv 2 S /Λ 2 respectively. As evident from these two neutrino mass matrices given by equation (17), a Hermitian matrix (m ν m † ν ) obtained from these demands a rotation in the 12 and 23 planes respectively. This, however, is not sufficient to to explain observed neutrino mixing along with the contribution (U ω ) from the charged lepton sector given in equation (3). Now, a third possibility with vev alignment φ S = (v S , 0, v S ), yields a compatible neutrino mass matrix, given by, where s 31 = 2y s vv 2 S /Λ 2 = s (say), a 31 = 2y a vv 2 S /Λ 2 = a (say), Although parameters present here are in general complex, for the diagonal elements we consider them to be equal that is, x 11 = x 22 = x 33 = x and real without loss of any generality. Now, to diagonalise this mass matrix, let us first define a Hermitian matrix as Here the complex terms corresponding to the symmetric and anti-symmetric parts of A 4 products can be written as s = |s| iφs and a = |a| iφa . These complex phases essentially dictates the CP violation of the theory. Clearly, the structure of M given in equation (19) indicates rotation in the 13 plane through the relation U † 13 MU 13 = diag(m 2 1 , m 2 2 , m 2 3 ) is sufficient to diagonalise this matrix, where the U 13 is given by and the mass eigenvalues are found to be where A = |s| 2 + |a| 2 and B = (2|s||a| cos(φ s − φ a )) 2 + 4x 2 (|s| 2 cos 2 φ s + |a| 2 sin 2 φ a ). 
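A small numerical cross-check of the diagonalization just described can be set up as follows. The explicit 3×3 matrix (a common diagonal entry x and s ± a entries in the 1-3 block for the ⟨φ_S⟩ = (v_S, 0, v_S) alignment), the parameter values, and the identification U_PMNS ≃ U_ω† U_ν up to phase conventions are illustrative assumptions here, since the corresponding equations are not reproduced in this excerpt.

```python
# Numerical sketch of the diagonalization described above; only the procedure
# (diagonalize m m^dagger, combine with U_omega from the charged-lepton sector)
# follows the text.  The matrix form and parameter values are assumed.
import numpy as np

x = 0.01                                   # eV, common diagonal element (assumed)
s = 0.02 * np.exp(1j * 0.3)                # symmetric 1-3 entry (assumed)
a = 0.015 * np.exp(1j * 1.2)               # antisymmetric 1-3 entry (assumed)

m_nu = np.array([[x,     0.0, s - a],
                 [0.0,   x,   0.0  ],
                 [s + a, 0.0, x    ]], dtype=complex)

w = np.exp(2j * np.pi / 3)                 # cube root of unity
U_w = np.array([[1, 1, 1],
                [1, w, w**2],
                [1, w**2, w]]) / np.sqrt(3)   # "magic" matrix from charged leptons

# Hermitian combination; its eigenvectors give the neutrino-sector rotation.
M = m_nu @ m_nu.conj().T
m2, U_nu = np.linalg.eigh(M)               # eigenvalues in ascending order
print("mass eigenvalues (eV):", np.sqrt(m2))

# Lepton mixing matrix (up to unphysical phases): U_PMNS ~ U_omega^dagger U_nu.
U_pmns = U_w.conj().T @ U_nu
s13 = abs(U_pmns[0, 2])
s12 = abs(U_pmns[0, 1]) / np.sqrt(1 - s13**2)
s23 = abs(U_pmns[1, 2]) / np.sqrt(1 - s13**2)
print("sin^2(theta13, theta12, theta23) =", s13**2, s12**2, s23**2)
```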
One important inference of such ordering is that inverted hierarchy of neutrino mass is not feasible in this setup as ∆m 2 23 + ∆m 2 21 = −2(|s| 2 + |a| 2 ) < 0, implying m 3 > m 2 . Also, the two parameters θ and ψ appearing in U 13 can be expressed as tan 2θ = x(|a| sin φ a sin ψ − |s| cos φ s cos ψ) |s||a| cos(φ s − φ a ) and tan ψ = − |a| sin φ a |s| cos φ s , in terms of the parameters appearing in the mass matrix. Hence the final lepton mixing matrix is given by one can obtain correlations between neutrino mixing angles θ 13 , θ 12 , θ 23 , Dirac CP phase δ and parameters appearing in equation (25) very easily [44,45,[64][65][66][67]. Hence, from equations (24)(25)(26) it is evident that the mixing angles (θ 13 , θ 12 , θ 23 ) and Dirac CP phase (δ) involved in the lepton mixing matrix U PMNS are functions of x, |s|, |a|, φ s and φ a . Neutrino mass eigenvalues are also function of these parameters as obtained in equations (21)(22)(23). These parameters can be constrained using the current data on neutrino mixing angles and mass squared differences [2,3]. Here in our analysis we adopt the 3σ variation of neutrino oscillation data obtained from the global fit [2] to do so. In figure 1 we have plotted the allowed parameter x. Whereas in the right panel of figure 2 we present the correlation between Dirac CP phase δ and sin 2 θ 23 . Interestingly the model predicts δ in the range −π/2 δ −π/5 and π/5 δ π/2 whereas sin 2 θ 23 lies in the lower octant. Here it is worth mentioning that, the presently preferred value δ ∼ ±π/2 as indicated in global fit analysis [2], predicts the atmospheric mixing angle θ 23 to be in the lower octant within our framework, as seen from the right panel of figure 2. FIMP interactions: After studying the neutrino sector, we briefly comment upon the Yukawa Lagrangian involving the FIMP dark matter candidate ψ upto dimension six level. From the field content shown in Table I, it is obvious that a bare mass term for ψ is not allowed. However, we can generate its mass at dimension five level (same as that of charged leptons). The corresponding Yukawa Lagrangian is Once ζ acquires a non-zero vev, we can generate a mass M ψ = Y ψζ ζ 2 Λ . Another important Yukawa interaction of ψ is with the singlet flavon η that arises at dimension six level, given by It is interesting to note that the same flavon field φ S and the ratio φ S 2 Λ 2 generates the effective coupling of η − ψ − ψ as well as H − ν L − ν R as discussed earlier in equation (4). We will use these interactions while discussing the dark matter phenomenology in the next section. III. FREEZE-IN DARK MATTER In this section, we discuss the details of calculation related to the relic abundance of FIMP dark matter candidate ψ. As per requirement for such dark matter [16], the interactions of dark matter particle with the visible sector ones are so feeble that it never attains thermal equilibrium in the early Universe. In the simplest possible scenario of this type, the dark matter candidate has negligible initial thermal abundance and gets populated later due to the decay of a mother particle. Such non-thermal dark matter scenario which gets populated in the Universe through freeze-in (rather than freeze-out of WIMP type scenarios) should have typical coupling of the order 10 −12 with the decaying mother particle. Unless such decays of mother particles into dark matter are kinematically forbidden, the contributions of scattering to freeze-in of dark matter remains typically suppressed compared to the former. 
In our model, the fermion ψ naturally satisfies the criteria for being a FIMP dark matter candidate without requiring highly fine-tuned couplings mentioned above. This is due to the fact that this fermion is a gauge singlet and its leading order interaction to the mother particle η arises only at dimension six level. As discussed in the previous section, the effective Yukawa coupling for ηψψ interaction is dynamically generated by flavon vev's Y ∼ v 2 S Λ 2 . Now, the decay width of η into two dark matter particles (ψ) can be written as where Y is the effective Yukawa coupling, m η and m ψ are the masses of the mother particle and ψ respectively. From the transformation of the singlet scalar η under the symmetry group of the model, it is clear that it does not have any linear term in the scalar sector and hence does not have any other decay modes apart from the one into two dark matter particles. Since this decay is governed by a tiny effective Yukawa coupling, this makes the singlet scalar long lived. However, this singlet scalar can have sizeable quartic interactions with other scalars like the standard model Higgs doublet and hence can be thermally produced in the early Universe. Now, considering the mother particle η to be in thermal equilibrium in the early Universe which also decays into the dark matter particle ψ, we can write down the relevant Boltzmann equations for co-moving number densities of η, ψ as where x = M sc T , is a dimensionless variable while M sc is some arbitrary mass scale which we choose equal to the mass of η and M Pl is the Planck mass. Moreover, g s (x) is the number of effective degrees of freedom associated to the entropy density of the Universe and the quantity g (x) is defined as Here, g ρ (x) denotes the effective number of degrees of freedom related to the energy density of the Universe at x = M sc T . The first term on the right hand side of the Boltzmann equation (33) corresponds to the self annihilation of η into standard model particles and vice versa which play the role in its freeze-out. The second term on the right hand side of this equation corresponds to the dilution of η due to its decay into dark matter ψ. Let us denote the freezeout temperature of η as T F and its decay temperature as T D . If we assume that the mother particle freezes out first followed by its decay into dark matter particles, we can consider In such a case, we can first solve the Boltzmann equation for η considering only the self-annihilation part to calculate its freeze-out abundance. Then we solve the following two equations for temperature T < T F We stick to this simplified assumption T F > T D in this work and postpone a more general analysis without any assumption to an upcoming work. The assumption T F > T D allows us to solve the Boltzmann equation (36) for η first, calculate its freeze-out abundance and then solve the corresponding equations (37), (38) for η, ψ using the freeze-out abundance of η as initial condition 3 . In such a scenario, we can solve the Boltzmann equations (37) It can be clearly seen that while the freeze-out abundance of η drops due to its decay into ψ, the abundance of the latter grows. The value of η − H coupling is chosen to be λ Hη = 0.004 in order to generate the correct freeze-out abundance of η which can later give rise to the required dark matter abundance through its decay. 
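A minimal numerical sketch of the simplified sequential treatment described above (η freezes out first, then decays into pairs of ψ) might look as follows. The decay-width expression, the thermally averaged K₁/K₂ dilation factor, g*, the freeze-out yield of η, and all masses and couplings are assumptions made for illustration rather than the paper's benchmark values.

```python
# Sequential treatment sketch: eta freezes out with yield Y_eta(x_F), then
# decays into pairs of psi.  All inputs below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kve

M_pl   = 1.22e19          # GeV, Planck mass
g_star = 100.0            # relativistic degrees of freedom (assumed)
m_eta  = 500.0            # GeV, mother-particle mass (assumed)
m_psi  = 100.0            # GeV, dark-matter mass (assumed)
Y      = 1.0e-12          # effective eta-psi-psi Yukawa coupling
Y_eta_fo = 2.2e-12        # eta freeze-out yield from its Higgs-portal
                          # annihilations (assumed input from the first step)

# Scalar -> fermion-pair decay width (standard form, assumed for illustration):
Gamma = (Y**2 / (8.0 * np.pi)) * m_eta * (1.0 - 4.0 * m_psi**2 / m_eta**2) ** 1.5
H_m = 1.66 * np.sqrt(g_star) * m_eta**2 / M_pl   # Hubble rate at T = m_eta

def rhs(x, yvec):
    """x = m_eta / T; yvec = (Y_eta, Y_psi); <Gamma> = Gamma * K1(x)/K2(x)."""
    Y_e, Y_p = yvec
    dilution = kve(1, x) / kve(2, x)             # scaled Bessel ratio, stable at large x
    decay = (x / H_m) * Gamma * dilution * Y_e
    return [-decay, 2.0 * decay]                 # each eta decay yields two psi

sol = solve_ivp(rhs, [20.0, 2.0e6], [Y_eta_fo, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-20)
Y_psi_inf = sol.y[1, -1]
omega_h2 = 2.755e8 * m_psi * Y_psi_inf           # standard yield-to-relic conversion
print(f"final DM yield Y_psi = {Y_psi_inf:.3e},  Omega h^2 ~ {omega_h2:.3f}")  # ~0.12 here
```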
It can be seen that, once we fix the ψ and η masses, the final abundance of ψ does not depend upon the specific Yukawa coupling Y as η dominantly decays into ψ 3 Recently another scenario was proposed where the dark matter freezes out first with underproduced freeze-out abundance followed by the decay of a long lived particle into dark matter, filling the deficit [68]. The latest constraint on invisible Higgs decay from the ATLAS experiment at the LHC is We incorporate this in figure 4 and find that some part of parameter space in λ Hη -m η plane can be excluded for low dark matter masses m DM < 10 GeV by LHC constraints. Although we are considering a simplified case where the decay of mother particle occurs after the mother particle freezes out T F > T D , we note that this decay can not be delayed in such a way that m η ≥ 2m DM is satisfied. We have not incorporated the constraints on dark matter relic abundance in figure 5, as we still have freedom in choosing λ Hη that can decide the freeze-out abundance of η required for producing correct dark matter abundance through freeze-in. We leave a more general scan of such parameter space to an upcoming work. It should be noted that we did not consider the production of dark matter from the decay of the flavon ζ responsible for its mass, as shown in equation (27). Since we intended to explain FIMP coupling and Dirac neutrino mass through same dimension six couplings, we did not take this dimension five term into account. This can be justified if we consider the masses of such flavons to be larger than the reheat temperature of the Universe, so that any contribution to FIMP production from ζ decay is Boltzmann suppressed. For example, the authors of [71] considered such heavy mediators having mass greater than the reheat temperature, in a different dark matter scenario. We also note that there was no contribution to FIMP production through annihilations in our scenario through processes like SM, SM → ψψ with η being the mediator. This is justified due to the specific flavour transformations of η and the fact that η does not acquire any vev. IV. CONCLUSION We have proposed a scenario that can simultaneously explain the tiny Yukawa coupling required for Dirac neutrino masses from the standard model Higgs field and the coupling of non-thermal dark matter populating the Universe through freeze-in mechanism. The proposed scenario is based on dynamical origin of such tiny couplings from a flavour symmetric scenario based on discrete non-abelian group A 4 that allows such couplings at dimension six level only thereby explaining their smallness naturally. The A 4 flavour symmetry is augmented by additional discrete symmetries like Z N and a global lepton number symmetry to forbid the unwanted terms from the Lagrangian. The charged lepton and dark matter masses are generated at dimension five level while the sub-eV Dirac neutrino masses arise only at dimension six level. The correct leptonic mixing can be produced depending on the alignment of flavon vev's. One such alignment which we analyse numerically predicts a normal hierarchical pattern of light neutrino masses and interesting correlations between neutrino oscillation parameters. The atmospheric mixing angle is preferred to be in the lower octant for maximal Dirac CP phase in this scenario. 
In the dark matter sector, the effective coupling of non-thermal dark matter (ψ, a singlet fermion) with its mother particle (η, a singlet scalar) arises at dimension six level through the same flavons responsible for neutrino mass. The mother particle, though restricted to decay only to the dark matter particles at cosmological scales, can have sizeable interactions with the standard model sector through Higgs portal couplings. Adopting a simplified scenario where the mother particle freezes out first and then decays into the dark matter particles, we first calculate the freeze-out abundance of η and then calculate the dark matter abundance from η decay. Although such non-thermal or freeze-in massive particle dark matter remains difficult to be probed due to tiny couplings, its mother particle can be produced at ongoing experiments like the LHC. We in fact show that some part of mother particle's parameter space can be constrained from the LHC limits on invisible decay rate of the SM like Higgs boson, and hence can be probed in near future data. Since η is long lived, its decay into dark matter particles on cosmological scales can be constrained if we demand such a decay to occur before the BBN epoch. We find the lower bound on Yukawa coupling Y governing the decay of η into DM, and show it to be larger than around 10 −13 . We leave a more detailed analysis of such scenario without any assumption of η freeze-out preceding the freeze-in of ψ to an upcoming work. 1 a 1 a 2 + ωb 1 b 2 + ω 2 c 1 c 2 3 s (b 1 c 2 + c 1 b 2 , c 1 a 2 + a 1 c 2 , a 1 b 2 + b 1 a 2 ) In the T diagonal basis on the other hand, they can be written as
2018-05-28T18:05:25.000Z
2018-05-28T00:00:00.000
{ "year": 2018, "sha1": "30101b8171da1525c4757a6611c2980554ee8579", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.11115", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "30101b8171da1525c4757a6611c2980554ee8579", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
1613971
pes2o/s2orc
v3-fos-license
Framework for finite alternative theories to a quantum field theory Using the path-integral formalism, we generalize the 't Hooft-Veltman method of unitary regulators to put forward a framework for finite, alternative quantum theories to a given quantum field theory. Feynman-like rules of such a finite, alternative quantum theory lead to alternative, perturbative Green functions. Which are acceptably regularized perturbative expansions of the original Green functions, causal, and imply no unphysical free particles. To demonstrate that the proposed framework is feasible, we take the quantum field theory of a single, self-interacting real scalar field and show how we can alter, covariantly and locally, its free-field Lagrangian to obtain finite, alternative perturbative Green functions. Introduction In this paper, we will put forward a framework for constructing such alternative theories to a given quantum field theory (QFT) that provide solutions to the problem of ultraviolet divergencies without using formal, auxiliary parameters. For simplicity, we will consider in some detail solely the case of the QFT of a single, self-interacting real scalar field φ(x) with φ 4 interaction. In this case, the conventional, QFT, connected n-point Green functions scaled by a positive coefficient Z −n/2 , G δJ(x 1 ) · · · δJ(x n ) J=0 (1) with the generating functional where the action functional the QFT free-field Lagrangian the interaction Lagrangian m 2 , m 2 0 , λ, and λ 0 are positive coefficients; the external source J(x) is an arbitrary, real scalar field; we use the (−1, 1, 1, 1) metric. An expression for the QFT scattering matrix of the quantum system, characterized by the action functional I[φ, J], can be obtained in terms of G (n) Z by means of the Lehmann-Symanzik-Zimmermann (LSZ) reduction formula, see e.g. Sec. 10.3 in [1]. We can formally rewrite the generating functional (2) as Let φ c (x) be the Feynman-Stueckelberg solution to the Euler-Lagrange equations of the action I[φ, J] with V = 0. On changing the variable of functional integrals in (6) from φ to φ − φ c , (6) can formally be written as where N is a J-independent factor, irrelevant for the calculation of perturbative expansions of G (n) Z , and the Feynman propagator see e.g. Sec. 1.2 and App. B in [2]. By (8), (3), and (4), the propagator We can obtain the perturbative expansions of G (7), and then expanding in powers of the four coupling coefficients λ, Z 2 λ 0 −λ, Z −1, and Zm 2 0 −m 2 . These expansions lead to the conventional Feynman rules for this theory [1,2]. In general, integrals over independent loops diverge, and perturbative expansions of G (n) Z are not well defined. Regularizing such divergent integrals in perturbative expansions of G (n) Z , and then using a renormalization scheme and removing the cut-off and its auxiliary parameters, we can calculate the perturbative expansion of the QFT scattering matrix by the LSZ reduction formula. There is an infinite variety of acceptable regularizations , i.e. such that the above procedure yields the same, renormalization-scheme-dependent, QFT result. There is a widespread opinion that ultraviolet divergencies encountered in perturbative expansions of Green functions of the standard model are due to the inadequate description of ultra-high-energy phenomena by the present Lagrangian. Which suggests that we should modify this Lagrangian to get rid of ultraviolet divergences. However, as noted e.g. 
in [1], in a QFT each complete, momentum-space, spin 0 Feynman propagator has the Källén-Lehman spectral representation with a non-negative spectral function, i.e., it is equal to with some ρ(s) ≥ 0. That implies that in a QFT, also a complete spin 0 propagator cannot vanish faster than ∆ F (k) as |k 2 | → ∞. So it follows that in a QFT we can regularize none of the perturbative expansions of Green functions by modifying its Lagrangian. Some departure from the conventional framework of QFTs seems necessary to this end, but of course, it has to be compatible with the experimentally verified perturbative results of QFTs. Which presents a sixty-years-old theoretical problem (see e.g. Sec. 1.3 in [1], and [3]). We believe that there is a Lagrangian-based theory of fundamental interactions such that we can regard perturbative expansions of its Green functions as acceptably regularized perturbative expansions of the Green functions of the standard model. However, up to now each candidate for such a theory that provides a non-perturbative regularization abandons some of the conventional properties of models in contemporary physics. Though upon removal of a cut-off during renormalization, these properties are restored in perturbative results. E.g., every version of the Pauli-Villars method proposed so far introduces auxiliary, unphysical particles with negative metric or wrong statistics; lattice regularization is based on discrete space-time; and string theories need to introduce additional space-time dimensions to avoid anomalies. We intend to alleviate this conceptually displeasing situation where models have expected properties only when the cut-off is removed. To this end, we will generalize the 't Hooft-Veltman, Lagrangian-based version of the Pauli-Villars method (HV method), see Sec. 5 in [4], to a continuous infinity of auxiliary fields but without introducing additional, auxiliary free particles. We will argue on the analogy of the above path-integral formalism (1)- (9). The key will be the LSZ reduction formula as, in general, it does not imply a direct connection between the number of free fields and the number of free particles. For any given QFT, we will introduce a new, Lagrangian-based framework for constructing finite, alternative quantum theories to a QFT (FAQTs) such that: (i) The action functional, I A , would equal the QFT action functional were it not for its free-field Lagrangian. A can be regarded as such acceptably regularized perturbative expansions of the QFT Green functions that are invariant under the same symmetry transformations as the QFT scattering matrix. Hence, each FAQT provides at least as adequate a perturbative description as the QFT. In Sec. 2. we give a short description of the HV method for a single, self-interacting real scalar field in terms of the generating functional. In the HV method, 't Hooft and Veltman (i) introduced a countable set of auxiliary fields, (ii) altered only the QFT free-field Lagrangian, and (iii) introduced Feynman-like rules that equal the conventional Feynman rules but for an altered propagator, which we label as the HV propagator. We could use the HV method to construct FAQTs were it not for its auxiliary, unphysical free particles. In Sec. 3. we collate those properties of a HV propagator that are essential for an acceptable covariant regularization. We regard as an alternative to ∆ F each HV propagator that has these properties but no additional singularities which imply additional, free particles. In Sec. 4. 
we give an example of how one may generalize the HV method to a continuous infinity of auxiliary fields so that the corresponding HV propagator is an alternative to ∆ F . So we construct a Lorentz invariant FAQT to the QFT of a single, self-interacting real scalar field. Thereby we show that the generalized HV method actually provides a feasible framework for constructing FAQTs to a QFT. Method of unitary regulators 't Hooft and Veltman invented, in addition to dimensional regularization, also the method of unitary regulators (HV method), which is a Lagrangian-based Pauli-Villars method with a discrete spectrum of auxiliary masses (see Secs. 2 and 5-8 in [4]). It seems that the HV method has not been much noticed since we found no reference to it in Science Citation Index, though their report [4] itself has over 200 citations. Here we give an outline of a new, non-perturbative formulation of the HV method in terms of a generating functional in the case of a single, self-interacting real scalar field with the QFT action functional I[φ, J]. (a) The HV connected n-point Green functions. Using a countable set of auxiliary real scalar fields φ i (x), i = 1, 2, . . ., we define the HV action functional: where the HV free-field Lagrangian with M i and c i real coefficients such that and Λ is a positive cut-off parameter. We may regard I HV [φ i , J] as being obtained from I[φ, J] by first replacing the field φ with the sum φ HV of auxiliary fields φ i , and then modifying I by replacing L 0 with a Lorentzinvariant alteration L HV 0 . So in I HV all fields φ i (x) have the same external source J(x) contrary to QFT formalism. We define the HV connected n-point Green functions, G HV , through functional derivatives (1) of the generating functional And we define the HV scattering matrix, S HV , in terms of G (b) Perturbative expansions. We change the variables of functional integrals in (14) from φ i to φ i − φ ic , where φ ic (x) are solutions to the Euler-Lagrange equations of the action functional I HV with V = 0. We then formally obtain where N HV is a J-independent factor, and the HV propagator ∆ HV (x) is defined by the relation We choose such solutions φ ic (x) that ∆ HV (x) is the Fourier transform of where the regularizing factor By (17) Z , but with the Feynman propagator ∆ F (k) replaced with the HV propagator ∆ HV (k) whereas the vertices remain unchanged, by (7) and (15). 't Hooft and Veltman [4] obtained such regularized perturbative expansion of G HV are causal, free of ultraviolet divergencies, and the corresponding perturbative expansion of S HV is unitary to all orders in the coupling coefficients; but it involves unphysical particles [4]. In particular, the zero-order term in the perturbative expansion of G (2) HV equals the HV propagator ∆ HV (k). So the poles of ∆ HV (k) determine the properties of free particles of the HV scattering matrix S HV : (i) the position of the pole determines the mass of the particle, and (ii) when the residue is negative, the particle is considered unphysical (a ghost), since then transition probabilities can be negative in the tree order [4]. So, the HV scattering matrix S HV predicts free scalar particles with masses M i that are unphysical when c i < 0. By (13), the above regularization of G (n) Z by the HV method introduces additional free particles, at least one of which is unphysical. 
Alternatives to the QFT propagator We do not need the additional poles of the HV propagator ∆ HV (k) by themselves for an acceptable regularization of perturbative expansions of G (n) Z . As some of them correspond to unphysical free particles we would prefer a HV method with such a HV propagator that has properties sufficient for an acceptable regularization of perturbative expansions of G (n) Z but no additional singularities. We will refer to such a HV propagator as an alternative to the QFT spin 0 propagator ∆ F (k) and denote it by ∆ A (k) provided where: (i) the regularizing factor f A (z) does not depend on ǫ, and is an analytic function of complex variable z except somewhere along the segment is real on the positive real axis; so f * A (z) = f A (z * ). (iv) sup z (1 + |z| 3/2 )|f A (z)| < ∞. So all integrals over independent loops converge, and also the Wick rotation is possible. (v) The coefficients of f A (z) can be made to depend on some cut-off parameter Λ so that for any Λ ≥ Λ 0 > 0 the regularizing factor f A has properties (i)-(iv) and satisfies relations (19). By (i)-(v), we can regard ∆ F as a covariant, low-energy approximation to its alternative ∆ A ; the coefficients of ∆ A can be chosen to make this low-energy approximation as accurate as desired. Using Cauchy's integral formula we can conclude that each ∆ A (k) admits the Källén-Lehman representation (10) with the spectral function so ρ(s) is changing sign, in contrast with the spectral function in QFT. By (10) and (21), we may regard ∆ A as a sum of ∆ F and of a Pauli-Villars regulator with a continuous spectrum of auxiliary masses [5]. Following 't Hooft and Veltman [4], we can show that by replacing the QFT propagator ∆ F in the perturbative expansions of G (n) Z with its alternative ∆ A we obtain acceptably regularized, covariant, and causal expressions that equal the perturbative expansions of G (n) A . Which result, by means of the LSZ reduction formula, in a covariant and unitary perturbative scattering matrix S A that is without additional free particles but may involve additional, unphysical particles generated by the interaction [4]. However, as there are no additional particles at V = 0, there also are no additional particles in a vicinity of V = 0 if the zeroes of (G In the next section we will generalize the HV method to auxiliary fields having a continuous, four-vector index so that the sum in the definition (18) of the regularizing factor f HV (k 2 ) is replaced with an integral that has no poles as a function of k 2 . Thereby we will avoid the additional free particles implied through the LSZ reduction formula by the additional poles of ∆ HV (k). So we will be able to demonstrate the existence of such a local and invariant action functional that the corresponding (in the sense of (2)- (7)) propagator is an alternative to ∆ F . Each such action functional defines a FAQT, as specified in Sec. 1., to the QFT of a single, self-interacting real scalar field. An example One may wonder whether it is possible to generalize the HV method so that the HV propagator ∆ HV is an alternative to ∆ F , and thus construct a FAQT to the QFT of a single, self-interacting real scalar field with action functional (3). The following action functional (26) demonstrates that this is possible and so provides an example of such a FAQT as specified in Secs. 1. and 3.. 
We introduce infinitely many auxiliary real scalar fields with a continuous, real four-vector index p, Φ(x, p); their weighted sum and the FAQT free-field Lagrangian where q is a real coefficient; f (y) and t(y) are real functions of real y, yet to be specified, such that f 2 (y)/t(y) is finite at y = 0; and the integral d 4 p . . . = −i lim y→∞ iy −iy dp 0 A possible physical significance of such a FAQT free-field Lagrangian is considered in [6]. We construct the FAQT action functional I A by replacing L 0 with L A0 , and φ with φ A in the QFT action functional I; i.e., Note that we do not introduce an external source for each individual, auxiliary real scalar field Φ(x, p), but only for the combination φ A that appears in the interaction Lagrangian −V . The FAQT action functional I A is real, quadratic, local in spacetime, and invariant under the inhomogeneous Lorentz transformations And we define the FAQT scattering matrix, S A , in terms of G (n) A by means of the LSZ reduction formula. We change the variables of functional integrals in (28) from Φ(x, p) to Φ(x, p)−Φ c (x, p), where Φ c (x, p) is a solution to the Euler-Lagrange equations of the FAQT action I A with V = 0. So we can formally write the generating functional (28) as where N A is a J-independent factor, and the propagator ∆(x) is defined by the relation We can choose [7] such a solution Φ c (x, p) that the Fourier transform of ∆(x) is where I(z) is the analytic function of the complex variable z equal to We can choose such q, f (y), and t(y) that the propagator ∆(k) becomes an alternative to ∆ F ; and so the action functional I A defines such a FAQT as specified in Secs. 1. and 3.. To show by a simple example that this is feasible: where and and let f (y) = 0 for y > 4. In such a case, we can check by inspection that: with the regularization factor (ii) f A (z) is an analytic function of z except along the segment z ≤ z d , z d = −Λ 2 (1−3η) 2 < −m 2 , of the negative real axis for each η ∈ (1/3−0.047, 1/3); to infer this, we take into account that ℜ √ z ≥ 0 and that Λ/( v 2 j − m 2 + v j ) → ∞ as η ր 1/3 only if j = 1. Note that there are infinitely many functions f (y) and t(y) that result in the same alternative, (36)-(37), to the propagator ∆ F (k). On the analogy with alternatives to the spin 0 Feynman propagator, we have defined also alternatives to the spin 1 2 Feynman propagator and to the Feynman propagator in unitary gauge for massive spin 1 bosons, and provided an example of the corresponding FAQT free-field Lagrangians [8]: We used them to construct an example of a FAQT to QED with massive photons in the unitary gauge. Similarly we can demonstrate that there are FAQTs to the standard model. However, it is an open question what properties the free-field Lagrangian of a FAQT to the standard model must have to make this theory: (i) without additional particles created by the interaction; (ii) of interest for the study of non-perturbative phenomena, cf. [9], and (iii) a better way of extracting physical data from the interaction Lagrangian of the standard model. In this connection it is not of primary importance how convenient are the resulting, acceptably regularized Green functions for further calculations-the main concern of all regularizations so far, cf. [10]. 
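To make the requirements of Sec. 3 concrete, the following toy calculation checks properties (i)-(iv) for an assumed regulator f_A(z) = ((Λ²−m²)/(z+Λ²))². This is not the specific example constructed in this section, and the normalization f_A(−m²) = 1 at the physical pole is an assumption standing in for relations (19), which are not reproduced in this excerpt.

```python
# Toy illustration of the propagator-regulator properties of Sec. 3, with
# z = k^2 and the (-,+,+,+) metric, so the free pole sits at z = -m^2.
# The regulator below is an assumed toy choice, not the paper's eqs. (34)-(37).
import numpy as np

m, Lam = 1.0, 10.0                        # mass and cutoff (illustrative units)

def f_A(z):
    """Toy regularizing factor: 1 at z = -m^2, pole only at z = -Lambda^2 < -m^2,
    real for real z > -Lambda^2, falling off faster than |z|^(-3/2)."""
    return ((Lam**2 - m**2) / (z + Lam**2)) ** 2

def delta_F(k2, eps=1e-9):
    return 1.0 / (k2 + m**2 - 1j * eps)    # Feynman propagator

def delta_A(k2, eps=1e-9):
    return f_A(k2) * delta_F(k2, eps)      # regulated ("alternative") propagator

# Normalization at the physical pole, so the free-particle content is unchanged.
print("f_A(-m^2) =", f_A(-m**2))                           # -> 1.0

# Low-energy agreement: Delta_A ~ Delta_F for |k^2| << Lambda^2.
for k2 in (0.5, 5.0, 50.0, 5000.0):
    r = delta_A(k2) / delta_F(k2)
    print(f"k^2 = {k2:7.1f}:  Delta_A/Delta_F = {r.real:.4f}")

# Property (iv): (1 + |z|^{3/2}) |f_A(z)| stays bounded, so loop integrals converge.
z = np.logspace(0, 8, 9)                   # sample |z| along the positive real axis
print("max of (1+|z|^1.5)*|f_A| on the sample:", np.max((1 + z**1.5) * np.abs(f_A(z))))
```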
Some comparisons between the FAQT and QFT formalisms From the preceding examples we note the following differences and similarities between the FAQT and QFT formalisms: (a) Källén-Lehman representation (10) with the spectral function (21) of the FAQT propagator ∆ A defined by (31) is a consequence of assumptions about the FAQT action functional I A . These classical assumptions do not imply the QFT premises on which the Källén-Lehman representation (10) with a non-negative spectral function is based in QFTs, and do not contradict the positivity postulates of quantum mechanics, cf. Sec. 10.7 in [1]. (b) According to Buchholz and Haag [12] one can take as a postulate that each QFT "is completely described by a finite number of covariant fields (each having a finite number of components)". However, in constructing a FAQT we replace each QFT field with a field that has a continuous infinity of components-an essential difference characteristic also of string theories. (c) If we replace the FAQT free-field Lagrangian L A0 in the FAQT action functional I A with the QFT free-field Lagrangian L 0 with φ = φ A , the resulting action functional becomes equivalent to the QFT action functional I as far as the connected Green functions are concerned. So we may take that QFT of a single, self-interacting real scalar field and the FAQT based on I A actually differ only in their free-field Lagrangian densities. The interaction and source parts of the FAQT action functional I A are completely determined by the corresponding parts of the QFT action functional I. (d) Contrary to the QFT free-field Lagrangian, a FAQT free-field Lagrangian by itself does not specify the free particles of a FAQT; and the number of FAQT free particles is finite whereas the number of FAQT free fields is not. Summary In such a case, the Feynman-like rules for calculating the FAQT Green functions are similar to the ones of the QFT in question: only the QFT propagators are replaced with FAQT propagators, which are determined through the FAQT action functional without the interaction part. By providing an example in terms of functions defined in the eightdimensional space R 1,3 × R 1,3 , we have shown in Sec. 4. that the above framework, based on the path-integral formalism, is feasible.
2014-10-01T00:00:00.000Z
2001-10-01T00:00:00.000
{ "year": 2001, "sha1": "bcfaf32c5679513d11ab433fb5640b877d7d3a21", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bcfaf32c5679513d11ab433fb5640b877d7d3a21", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
219266434
pes2o/s2orc
v3-fos-license
Plastic modification of human cornea in vivo: applications to clinical refractive procedures Purpose: The purpose of this report is to quantify how pressure applied to the human cornea, either physiological or intentional, affects its curvature. In particular, how pneumatic procedures flatten the central cornea and keep it flat over time, thereby decreasing the patient’s myopia. Methods: A viscoelastic model is developed for plastic deformation which gives us the basic governing equations of the elastic and plastic strain of corneal stroma. The model is applied to data from corneas of six patients who underwent pneumatic keratology (NEumatica Keratologia) to reduce their myopia. Results: The model shows corneal dimensional stability for long periods of time after NEumatica Keratologia that decay with an exponential time constant. Separate equations are developed that relate corneal plastic strain to the pressure applied and its duration ε = σ0 t1/η1, to change in refraction ε = 2 × ΔRefr, to keratometry radius increase ε = ΔR/R, and to corneal thinning ε = sqr (Δh/h). The average values obtained for ε from the patients’ data are 3%, 3.2%, 3%, and 2.6%, respectively, all in remarkable agreement. The average refraction change is found to remain stable at ΔRefr = +1.67D ± 5.2%. Clinical data yield good agreement of theory and treatment results. Conclusions: The model proposed is a good description of NEumatica Keratologia outcomes. Practical applications include the long-term stable correction of myopia with refractive procedures. High myopia subjects can benefit from this procedure because NEumatica Keratologia corrects and protects the central cornea radius by stretching the peripheral cornea. Introduction The current myopia epidemic is a serious problem, which will only subside when its cause is met. In short, it is now apparent that while the initial, usually low myopia of individuals with natural accommodation is probably a matter of inherited properties, 1 progressive myopia is caused by the continuous feedback effect of negative lenses. 2 An immediate way to focus the myopic blurred retinal image is achieved with corrective lenses or various refractive procedures, laser-assisted in situ keratomileusis (LASIK) being the most widely used at present. 3,4 Many types of procedures involve changing the central radius of curvature of the cornea to bring to focus the image in an eye with refractive error, usually myopia. Most notable of these procedures include LASIK, orthokeratology (OK), and the pneumatic keratology [NEumatica Keratologia (NEK)] procedure discussed in this article. It is of prime importance to maintain dimensional stability of the cornea for long periods of time after these various types of procedures that change its anatomy. We show here that NEK produces myopia reduction stable over time. Pneumatic keratology is a procedure to flatten the cornea noninvasively. Medina 5 applied a vacuum force for 5 min to flatten the central part of the cornea by distending the stressed peripheral areas where a vacuum is applied. Unlike LASIK, NEK does not remove any tissue but relies on the mechanical properties of the cornea to deform it plastically. The NEK procedure stretches the cornea circumferentially, the same as radial keratotomy (RK) incisions extend the cornea perpendicular to the axis of the incisions. The NEK instrument is portable and its effect is permanent myopia correction. 
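As a minimal numerical sketch of the strain relations quoted in the abstract (ε = σ0·t1/η1, ε = ΔR/R, and ε in percent approximately equal to 2 × ΔRefr in diopters), the snippet below plugs in the average values reported there. The values of σ0 and η1 used here are placeholders only, since the fitted parameters appear later in the paper's Table 2.

# Illustrative check of the strain estimates quoted in the abstract.
# sigma0 and eta1 are placeholders, not the fitted values of Table 2.
R = 7.6          # mean pre-op corneal radius of curvature, mm
dR = 0.23        # mean radius increase 2 years post-op, mm
dRefr = 1.67     # mean reduction in myopic refraction, diopters

strain_from_keratometry = dR / R            # epsilon = dR/R          -> ~0.030 (3.0 %)
strain_from_refraction = 2 * dRefr / 100.0  # epsilon(%) = 2*dRefr(D) -> ~0.033 (3.3 %)

# Creep law epsilon = sigma0 * t1 / eta1 with placeholder parameters
sigma0 = 1.0e5   # applied circumferential stress, Pa (placeholder)
t1 = 300.0       # pulse duration, s (5 min)
eta1 = 1.0e9     # steady-state viscosity, Pa*s (placeholder)
strain_from_creep = sigma0 * t1 / eta1      # -> 0.03 with these placeholder values

print(strain_from_keratometry, strain_from_refraction, strain_from_creep)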
Because of its simplicity, portability, and low cost, NEK is superior for use in remote areas and in the world. Refractive surgery is dependent on the mechanical properties of the cornea. The collagen in the corneal stroma is a complex biomaterial that exhibits viscoelasticity, plasticity, and creep under stress, whether external or physiological. Any attempt to modify the cornea and predict its stability afterward requires knowledge of its mechanical properties. Classical texts by Fung 6 and Ferry 7 provide basic equations and examples of the viscoelastic biomaterials. Nash and colleagues 8 present viscoelastic data for human and rabbit cornea, showing creep strain rates of 0.6% per hour at slightly elevated temperature and stress level. Nash and colleagues 8 and Ku and Greene 9 present data of the stress of the cornea and the sclera and resulting creep strain-rate. Rand and colleagues 10 model the mechanical effects of RK surgery on the cornea using thinshell theory. Andreassen and colleagues 11 measure the high-stress Young's modulus for human cornea. Models of the elastic properties of the cornea have been proposed by Glass and colleagues. 12 McMonnies and Schief 13 show how midperipheral external forces can change the curvature of the central cornea. The purpose of this article is to integrate its reversible and irreversible mechanical properties into a cornea model. The model can be useful to predict the temporary and permanent corneal deformation after intentionally applied stress, such as from NEK and nonintentional stress, such as post-LASIK. The corneal collagen is a fibrous protein with an organization similar to polymer fibers. Model corneas have been fabricated using polymers. 14 After removal of stress to the cornea, the stretched and distorted collagen fibers between cross-link points act like springs that tend to cause the prior deformation to partially recover (regress) with time. Some recoverable (reversible) viscous deformation occurs in addition to the nonrecoverable (irreversible) portion as shown in Figure 1. This partial recovery is more complex than simple viscous behavior. The simplest rheological model of the cornea that combines elastic, creep, and transient viscoelastic recovery is proposed here. This minimum-complexity model of the cornea accounts for both its elastic and plastic properties. A similar model has been proposed to approximate the behavior of polymers. 15 Materials and methods Six human eyes were treated with the NEK device, which evacuated air in front of the cornea for 5 min thereby plastically deforming the cornea to a flatter radius. The vacuum stresses the region of the cornea in contact with the vacuum chamber with a transmural pressure of 775 mmHg. Central corneal radii R were recorded with a topographic keratometer (Zeiss 9000 Series) at three points in time (1 day, 2 months, and 2 years) after the procedure. Subjective and objective refraction and visual acuity were recorded from each patient before the procedure and 1 day, 2 months, and 2 years after the procedure. Corneal thickness was measured with a pachymeter accurate within <1%. Because the NEK device increases the volume of the anterior chamber during application, the intraocular pressure (IOP) did not increase intraoperative. Consistently, the patients, who had usable vision through the device window, did not experience vision blackout that could have been the result of high IOP. The window in the device was used to align the device with the pupil. 
The pre-op and post-op IOP was not significantly different and was within the normal range. Protocol details can be found in Medina. 5 This report includes data from an additional patient who underwent NEK after 2017. The room temperature during the procedure for all six patients was 25°C; refraction and corneal radii were obtained twice and their values averaged. The ages of the patients were 30, 31, 37, 42, 46, and 50 years. Corneal thickness was measured with a pachymeter in the center of the cornea and in the midperiphery, nasal, temporal, superior, and inferior quadrants; the four peripheral values were averaged and reported here as the peripheral thickness. For the purposes of this report, approval was waived inasmuch as it involves analysis of data collected in a previously approved and reported study. 5 The cornea was subjected to a pulse stress σ0. The expected viscoelastic response of the cornea subjected to a NEK stress pulse is shown in Figure 1. Like any viscoelastic material, it experienced a time-dependent increase in strain. This phenomenon is known as viscoelastic creep. At time t0, a circumferential ring of the cornea is loaded with a tangential constant stress σ(t0) = σ0 from a vacuum. The treated corneal ring responds to the stress with a circumferential strain ε(t) that increases as shown in Figure 1 until t1, when the vacuum is removed. The applied pulse had a duration (t1 - t0) of 5 min. After t1, the cornea slowly recovers some of its original shape until reaching a permanently deformed, larger ring with residual strain ε_plastic, the plastic strain that is reported here for our subjects. The elastic strain ε0 is reversed (recovered) immediately upon the removal of vacuum at t1, whereas the transient strain ε_viscoelastic is reversed slowly with time. The strain ε_plastic is permanent and represents the long-term effect of NEK. The strain of any portion of the cornea under a chamber of the NEK device has a circumferential and a polar component, as shown in Figure 2, when subjected to the transmural pressure. The circumferential strain εθ results in an accumulated strain for the whole circumference of approximately equal value when considering all chambers. We show here that this strain flattens the cornea. We will refer to εθ simply as ε, the strain of interest. Figure 3 shows the model for viscoelastic deformation of the cornea during and after NEK that can produce the observed response in Figure 1. After the vacuum, and therefore the stress, is removed at t1, the elastic strain ε0 in spring K1 is recovered immediately, whereas the transient strain ε_viscoelastic in the parallel combination (K2, η2) is recovered slowly with time. The steady-state strain ε_plastic in dashpot η1 is permanent and represents the long-term effect of NEK. We used the model in Figure 3 to analyze the data and obtain the resulting values for the elements of the model. Once the elements are known, we can predict how the cornea will respond to NEK and to other corneal procedures. Equations (1a) and (1b) are obtained in derivation I in the Supplemental Material. The corneal stability factor after treatment is quantified as the viscoelastic strain (equation (1b)). This strain stabilizes with time constant τ. The model does not include a friction element, for simplicity and because friction is negligible compared with the stress applied by NEK. A friction element prevents continuous strain of the cornea under low, physiological stress.
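The arrangement in Figure 3, a series spring K1 and dashpot η1 with a Kelvin-Voigt pair (K2, η2) between them, is the standard four-element (Burgers) model. Its textbook creep-and-recovery response to a constant stress pulse is sketched below; this is not a transcription of equations (1a) and (1b) from the Supplemental Material, and the parameter values are placeholders chosen only for illustration.

import numpy as np

# Textbook Burgers-model response to a constant stress pulse sigma0 applied
# from t = 0 to t = t1 (a sketch; the paper's own equations are in its supplement).
def burgers_strain(t, sigma0, K1, K2, eta1, eta2, t1):
    tau = eta2 / K2                                        # retardation time of the (K2, eta2) pair
    t = np.asarray(t, dtype=float)
    loading = (sigma0 / K1                                 # instantaneous elastic strain
               + sigma0 * t / eta1                         # steady creep (permanent, plastic part)
               + (sigma0 / K2) * (1 - np.exp(-t / tau)))   # delayed (recoverable) elastic strain
    recovery = (sigma0 * t1 / eta1                         # plastic strain frozen in at t1
                + (sigma0 / K2) * (1 - np.exp(-t1 / tau)) * np.exp(-(t - t1) / tau))
    return np.where(t <= t1, loading, recovery)

# Placeholder parameters (not the fitted values of Table 2), 5-minute pulse
t = np.linspace(0, 3600, 1000)                             # 1 hour, in seconds
eps = burgers_strain(t, sigma0=1.0e5, K1=5e6, K2=2e6, eta1=1.0e9, eta2=1.0e8, t1=300)
print(eps[-1])                                             # residual (plastic) strain long after the pulse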
Results

During and after the NEK procedure, the cornea was strained circumferentially along the vacuum chambers of the device, as predicted. Immediately after NEK the residual strain is about 2-10%, as estimated theoretically and measured experimentally (see derivation II in the Supplemental Material). The stressing force reduced the thickness of the cornea: it was reduced by an average of 7.1% in the center and 9.8% in the midperiphery 1 day after the procedure. Subsequent to NEK, the cornea remained dimensionally stable during the follow-up period of 2 years. The mean corneal radius for our subjects (N = 6) was <R> = 7.6 mm before the procedure and increased by <ΔR> = 0.23 mm as measured 2 years after the procedure. Table 1 displays the spherical refraction value for each patient rather than the spherical equivalent because the cylindrical component was small (1D or less) and remained essentially unchanged after the procedure. 5 The change in the spherical component therefore assesses the treatment efficacy. Only corrected acuity is displayed; uncorrected acuity was not measured. Because the patients' myopia ranged from about 3D to 20D, their uncorrected acuity was very low and it was not feasible to obtain a measure with a Snellen chart. Corrected distance visual acuity (CDVA) after the procedure was equal to or better than before it. CDVA, in decimal notation, remained constant post-op, or improved with respect to pre-op values, at CDVA = 0.98 ± 0.14 SEM (see Table 1). The average effect of the NEK procedure measured by clinical refraction was a reduction in myopic refraction of <ΔRefr> = 1.67D 1 day after the procedure, which regressed by less than 0.37D after 2 years, as shown in Figure 4. Table 2 presents data from the N = 6 human subjects and the values of σ0, ε_pl, η1, τ, K2, and η2 calculated from the average data using our model (Figures 5 and 6).

Discussion

To date, the NEK procedure can achieve myopia reversal and correction of 1-5 diopters in 5 min. The model indicates that additional correction will be proportional to the duration and magnitude of the applied pneumatic stress, as observed in rabbits. 5 We can confirm that the NEK procedure flattens the corneal curvature without increasing the anterior-posterior length of the eye (direction z in Figure 2) because the strain derived from refraction is about the same as that measured by keratometry and pachymetry. The strains calculated and measured by three different methods, as described in Table 2, are all in remarkable agreement. The strain values and constants calculated here must nevertheless be taken with caution because of the relatively high stress that NEK applies to the cornea. The cornea was subjected to a pressure greater than normal IOP by a factor of 50×. Some of the formulation we relied upon has never been applied to such high levels of plastic strain. As reported, the NEK plastic strain technique has achieved up to 5 diopters of myopia improvement per treatment, whereas LASIK is normally used for up to 10 diopters (LASIK, however, is not intended for repeated procedures). In addition, while remaining essentially stable for a period of 2 years following treatment, there is some observed regression after NEK. This result indicates long-term stability, showing that the treated region of the cornea does not continue stretching after NEK.
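As a rough, order-of-magnitude cross-check of the keratometry and refraction data above, converting the mean radius change to corneal power gives a dioptric change of the same order as the measured refraction change. The conventional keratometric index of 1.3375 is assumed here (the paper does not state which index its keratometer used), and vertex-distance effects are ignored.

# Rough consistency check (keratometric index 1.3375 is an assumption)
n_k = 1.3375
R_pre, dR = 7.60e-3, 0.23e-3          # metres
P_pre = (n_k - 1) / R_pre             # ~44.4 D
P_post = (n_k - 1) / (R_pre + dR)     # ~43.1 D
print(round(P_pre - P_post, 2))       # ~1.3 D of corneal flattening,
                                      # same order as the measured <dRefr> = 1.67 D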
Other procedures like RK and LASIK have resulted on occasion in a post-op progressive stretch, evidenced as development of hyperopia and ectasia, respectively. Ectasia is a serious LASIK complication with prevalence of <0.6%. 16 It is caused by creep of the cornea after it is thinned by LASIK. Early LASIK models and algorithms assumed a rigid structure for the cornea disregarding the biomechanical response of the cornea to the ablation. 17 They were, therefore, unable to predict ectasia. The stressing pressure that causes ectasia is the continuous post-op IOP. NEK does 18 The analysis of creep in this report during NEK is applicable to ectasia and could provide a better and individualized prediction of ectasia risk. Limitations Some potential limitations of NEK and this study should be mentioned. Each diopter of myopia improvement requires approximately 2% plastic strain of the circumferential cornea, as determined in derivation II. More work remains to be done to accurately find the safety limit and to achieve plastic strain within this limit. Results from other experiments confirm that increased temperature, duration, or pressure as well as repeat treatment can at least triple the corrective effect reported here. Although no fundus changes were observed after the NEK procedure, increased exposures to the pneumatic stress (in duration or pressure) for augmented response may entail additional safety hazards. The possibility of inducing retinal tears must be considered in those cases requiring large corrections, which intrinsically have a risk of retinal tears. No correlation was observed between age and treatment effectiveness, but the number of patients is small to arrive at firm conclusions. It was however clear that the refractive change depended on the degree of myopia, probably due to the combined effect of the reduced corneal thickness and longer size of the eye, characteristic of higher myopia. See equations (T1) and (T7). The highest correction that can be safely accomplished with NEK is not addressed in this study. Knowledge of basic corneal viscoelastic parameters may prove fundamentally important to evaluate long-term patient tolerance and the successful outcome of various refractive procedures. An important result of the NEK procedure is the determination of each patient's viscoelastic parameters, notably viscosity η 1 . That knowledge could also be used to plan refined repeat treatment with NEK. In general terms, the series elastic element K 1 corresponds to the collagen fiber elasticity, while the parallel elastic element K 2 corresponds to the proteoglycan matrix elasticity. The dashpots η 1 and η 2 quantify the relative slippage of the fibers with respect to each other and with respect to the matrix. Prior knowledge of these viscoelastic parameters will help determine the eligibility of the patient for this type of procedure, the recommended NEK pulse duration (t 1 -t 0 ), and the applied pressure amplitude ΔP required to achieve a given refractive improvement ΔRefr and the likelihood of dimensional stability into the future. Some clinical considerations are notable. The relatively new NEK procedure is compared with other more conventional types of corneal refractive surgery (e.g. LASIK). NEK is basically a noninvasive operation; its procedure requires a minimum of clinical personnel and can be administered by an ophthalmologist or optometrist with minimal training and using equipment no more complicated than a tonometer. 
The patients' response to the NEK procedure was uneventful. None of the six subjects as detailed here reported any complications or adverse effects from the 5-min NEK vacuum procedure. None reported any physical irritation, dry eye syndrome, excessive tear film, halos, starbursts, night vision glare and problems often associated with other types of refractive surgery. Visual examination of the corneal surface using a slit-lamp revealed only minor surface imperfections, which quickly return to normal over the course of a few days. Perhaps most importantly, none of the subjects had their visual acuity degraded; CDVA remained the same or improved with respect to pre-op values. Perhaps the most significant feature of the NEK procedure is that it is quite cost-effective compared with other types of refractive surgery (comparable in cost with Ortho-K) in terms of initial capital outlay for equipment, number of subsequent patient visits, and overall cost for each procedure. NEK may serve as a cost-effective, permanent, and portable method of myopia correction. The model we propose explains the plastic deformation of the cornea after stress and subsequent partial recovery, features that other models 19 cannot explain. The basic utility of the model is that it provides a structural framework for understanding and predicting the patient's corneal response to various applied loads. In particular, there can be large intersubject differences in their elastic and viscoplastic parameters. The model is a valuable tool to evaluate a patient's tolerance and response to various types of refractive surgery.
2020-05-28T09:09:29.731Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "dc601496520c4a0a41c6a2164af062a1754b9dc8", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2515841420913027", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8f2d4a220066a60761b261fbeeb85815dd5f9718", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
268271420
pes2o/s2orc
v3-fos-license
DETERMINANTS OF CUSTOMER-BASED BRAND EQUITY ON BRAND IMAGE: THE MODERATING ROLE OF TOURISM MANAGEMENT AND ADVERTISEMENT How to cite this paper: Latif, W. B INTRODUCTION Businesses strive to create enduring brands having favorable equity and image.Brand equity development actively involves brand image.How brand equity is developed, managed, and upheld is still unclear as of this point (Keller, 2003;Bishop, 2014;Park, 2009;Gordon, 2010).The service industry is one of the biggest and fastest-growing industries across the globe.In fact, the hotel industry is growing steadily as a substantial part of the tourism industry and a key component of the service industry (Gordon, 2010;Bishop, 2014). The offerings of hotel service companies in the tourism sector have generated intense competition in the market over the past two decades, starting in the 1990s and continuing into the 2000s.For instance, Americans choose hotels that offer value or benefit, convenience, usability, and healthier options.Doing business in the food and lodging service sector is now increasingly competitive than ever due to an increase in hotels, high client demand, and the economic downturn (Park, 2009). Key benefits of the rising competition for the consumer include more options, better value for their money, and improved service (Cai et al., 2015;Kandampully & Suhartanto, 2000).In contrast, hotels require reinventing, strengthening, or accentuating the image of their brands in today's competitive and dynamic market environment with a plethora of alternative brands to enable guests to distinguish the hotels from their rivals.The requirement for effective and efficient marketing tactics is therefore obvious given the brand's or company's increased competitiveness.According to this viewpoint, the hotel industry needs a clearly defined brand image in order to thrive and sustain in a rapidly changing global market (Chi, 2016). In Bangladesh, managers lack an understanding of customers' behaviour about evaluating hotel brands.Therefore, hotels are unable to provide the services the customers desire, and as a result, they struggle to occupy a strong position in the competitive market.In view of the above context, this study will have a significant contribution to building a strong brand image and to what extent tourism management and advertisement moderate the determinants of customer-based brand equity (CBBE) and brand image. Thus, the researchers identified the following research questions: RQ1: What are the drivers of CBBE that impact the brand image?RQ2: Do tourism management and advertisement have any moderating effect on the relationship between the determinants and brand image? Against this backdrop, this research aims to refine the determinants of CBBE for creating the brand image of hotels in the tourism sector. The theoretical framework is used to delineate the complex relationships between factors that contribute to CBBE -such as brand awareness, brand association, brand excellence, brand resonance, and corporate social responsibility (CSR) -and the mode of brand image formation in the unique context of the tourism sector of Bangladesh.Tourism management and advertisement appear as simultaneous moderating factors in this framework, and it is expected that they will have subtle effects on the nature and intensity of the correlations between the above determinants and brand image. 
The structure of this paper is as follows.Section 2 includes some relevant literature to find out previous research to identify the factors affecting CBBE to build brand image and presents the conceptual framework with pertinent hypotheses.Section 3 explains the methodology as to how the data were collected and analysed.Section 4 presents the research results, and Section 5 presents a discussion of the results.Finally, Section 6 presents the conclusions of the study and some recommendations for future research.Tourism management takes the initiative to attract tourists to tourist destinations by taking awareness programs and enhancing the brand image of tourist destinations.Therefore, for developing brand image, the role of brand awareness may differ based on tourism management.According to Tran et al. (2020), brand superiority plays an imperative role in customers' cognitive evaluations.In the same way for developing brand image, brand superiority may be different with different levels of tourism management.Hence, tourism management intrigues various attractive activities that appeal to a tourist's feelings and generate affection for tourism products, whereas affection influences the positive image in the customers' minds (Hakim et al., 2018).Long-term relationships with tourists may be affected by tourism management (Valeri & Baggio, 2021).Longterm relationships are the final goal of brand resonance that strongly influences brand image.Hence, brand resonance may have a different influence on the brand image with the variation of tourism management, which links biodiversity and ecosystem.At the same time, tourism management links with biodiversity and ecosystems and provides different socio-cultural advantages for society (Higgins-Desbiolles, 2018).Indeed, all these activities are a crucial part of CSR.So, for developing brand image, the role of CSR may vary with the variation of tourism management.Advertisement is an effective promotional tool that disseminates information about products.Advertisements are mainly designed to create a brand image and straightforwardly encourage customers to purchase the products of a particular brand (Shareef et al., 2019).Advertisement is expected to enhance the brand image by disseminating vital information regarding brand attributes and benefits in the hotel industry (Hao et al., 2020).Therefore, advertisements may greatly influence brand awareness to build brand image.Functional attributes of products are considered a brand association (Jin et al., 2019). 
LITERATURE REVIEW AND HYPOTHESES DEVELOPMENT On the contrary, brand association can vastly impact the brand image (Dirsehan & Kurtuluş, 2018).Using advertisements makes customers easily informed about a brand and its unique features (Bhakar et al., 2019).Hence, unique features of a particular product that create proper meaning and feelings in the customers' minds are considered brand superiority (Ali et al., 2018).On the other hand, brand superiority creates uniqueness and greatly influences the brand image (Sultan & Wong, 2019).So, the change in the degree of advertisement may affect brand superiority in establishing brand image.Consequently, customers' emotions, positive attitudes, and love for the products are considered brand affection (Harrigan et al., 2018).Positive brand affection means a higher degree of brand image (Wu & Chen, 2019).Through advertisement, marketers establish powerful connections with customers (Sanne & Wiese, 2018).Subsequently, powerful connections with customers make brand resonance (Pawar & Lavuri, 2018).Then again, brand resonance largely affects brand image (Wong, 2019).As an outcome, Brand resonance may have a different role in creating a brand image with the different extent of advertisement.At present, organisations communicate product information and let customers know about their actions for society, the environment, and humanity with advertisements (Lloyd-Smith & An, 2019).Organisations operate these activities to protect the environment, society and humanity (Epperson et al., 2018).In a true sense, these activities are integral parts of CSR (Verčič & Ćorić, 2018), and they help companies build positive insight into customers' minds (Cuesta-Valiño et al., 2019).For this reason, the role of CSR in the brand image may vary with the variation of advertisement. For this research, the following research model has been developed and proposed based on the comprehensive review of pertinent marketing and branding literature.H1: Brand image is positively influenced by brand awareness. For creating positive attitudes and perceptions towards a brand marketers use brand association.Sanny et al. (2020), elucidated that brand association is formed by benefits.These benefits build a position in a customer's mind (Tanu et al., 2018).Thus, other hypotheses we may form are: H2: Brand image is positively influenced by brand association. H3: Brand image is positively influenced by brand superiority. Hence, brand affection plays an imperative role in building a strong brand image.Brand affection as related to emotional attachment bears a strong influence on brand image.As such, we hypothesise: H4: Brand image is positively influenced by brand affection. Brand resonance helps customers create a good connection with a brand.As a result, brand resonance denotes to construction of brand image (Cheng et al., 2019).Active brand attachment standpoints for brand resonance for which brand image is increased by brand resonance (Keller, 2020).Thus, it is hypothesized that: H5: Brand image is positively affected by brand resonance. CSR can be explained by the possible contribution to society (for the sensible development of mankind) without much expense regarding its financial activities (Han et al., 2019).According to Lee (2014), through CSR a company and its brands create a strong position in the customers' minds.Indeed, brand image can be influenced by CSR (Salehzadeh et al., 2018).Jeon et al. 
(2020) signified that CSR has a positive impact on building brand image.Therefore, it is hypothesized that: H6: Brand image is positively influenced by corporate social responsibility. Thus, the role of tourism management is to decorate destination brands for tourists through awareness programs and create a positive image in the minds of tourists (Pierdicca et al., 2019).Definitely, brand awareness supports tourists to make decisions about a tourist destination.Therefore, for developing brand image, brand awareness may differ based on the tourism management.Tourism is a combination of destination, accommodation and transportation (Getz, 2000).On the other hand, these facilities are the foremost components of brand association.Tourism makes the relationship between resident tourists and non-resident tourists.Thus, it is hypothesized that: H7: The relationship between brand awareness and brand image is positively moderated by tourism management. H8: The relationship between brand association and brand image is positively moderated by tourism management. H9: The relationship between brand superiority and brand image is positively moderated by tourism management. H10: The relationship between brand affection and brand image is positively moderated by tourism management. H11: The relationship between brand resonance and brand image is positively moderated by tourism management. H12: The relationship between corporate social responsibility and brand image is positively moderated by tourism management. Advertisement not only provides customers with information but also pursues and reminds customers of purchasing products (Altberg et al., 2018).Advertisement helps customers build sound awareness that influences customers to develop a positive image inside their minds (Maria et al., 2019).At present, the internet is one of the significant and revolutionary techniques of advertisement (Anwar et al., 2019).Consumers may readily learn about a brand's distinctive qualities (Bhakar et al., 2019).Therefore, distinctive qualities of a certain product that instill the right meaning and emotions in the minds of consumers are regarded as brand superiority (Ali et al., 2018) and brand superiority generates distinctiveness that significantly affects brand image (Sultan & Wong, 2019).So, the roles of brand superiority for the brand image may vary as the level of advertisement changes. 
Thus, brand affection is defined as consumers' feelings, favorable opinions, and adoration for the items (Harrigan et al., 2018).A higher degree of brand image is correlated with positive brand affection (Wu & Chen, 2019).Marketers build strong relationships with consumers through advertising (Sanne & Wiese, 2018).Brand resonance is subsequently created by strong consumer interactions (Pawar & Lavuri, 2018).However, brand image is heavily influenced by brand resonance (Wong, 2019).Consequently, depending on the degree of advertising, brand resonance could play a varying role in developing a brand image.Currently, businesses use advertisements to inform consumers about their products as well as their social, environmental, and humanitarian initiatives (Lloyd-Smith & An, 2019).These initiatives are being carried out by organizations to safeguard society, the environment, and humankind (Epperson et al., 2018).These are in fact essential components of CSR (Verčič & Ćorić, 2018).CSR aids businesses in developing favorable perceptions in the eyes of their clientele (Cuesta-Valiño et al., 2019).Because of this, the significance of CSR in a brand's image might change depending on the level of advertisement.In this way, it is hypothesized that: H13: The relationship between brand awareness and brand image is positively moderated by advertising. H14: The relationship between brand association and brand image is positively moderated by advertising. H15: The relationship between brand superiority and brand image is positively moderated by advertising. H16: The relationship between brand affection and brand image is positively moderated by advertising. H17: The relationship between brand resonance and brand image is significantly moderated by advertising. H18: The relationship between CSR and brand image is significantly moderated by advertising. RESEARCH METHODOLOGY This research model supports the study to hypothesize the relationship between tourism management and advertising with six free factors (brand awareness, brand association, brand superiority, brand affection, brand resonance, CSR), and brand image.In the proposed research, the factors influencing brand image are considered as independent variables, while tourism management and advertisement are considered as moderating variables, and brand image is the dependent variable.Simultaneously, advertisement and tourism management are expected to influence moderating relationships among these six independent variables and brand image. For this research, necessary data were collected through structured questionnaires from the top hotels of Dhaka and Cox's Bazar (Bangladesh).The respondents were chosen from the customers of some of those hotels.A sample of 600 respondents was selected through a multi-stage sampling method.A total of 299 filled-in questionnaires came up with valid responses.Thus, the response rate of 49.83% is adequate for such a study.Partial least squares structural equation modeling (PLS-SEM) was used to analyse the data using intelligent software SmartPLS 2.0.Reliability and validity of the data were determined based on the measurement model, and hypothesis testing was carried out by obtaining results based on the structural model. 
Descriptive analysis Responses were collected using a five-point Likert scale, with one (1) representing "strongly disagree" and five (5) representing "strongly agree".Descriptive statistics are presented in Table 1 as follows.Table 1 shows that all the means are above and close to 4; that means respondents, in general, agree with the statements.On the other hand, standard deviations of data are close to the mean value.The standard deviation for each variable is smaller than one. PLS-SEM analysis results The PLS measurement model provides the values of the reliability test, validity test, path coefficient, and coefficient of determination in the analysis of PLS-SEM.In this model, variables are normally linked in a figure that illustrates the route of the relationship (path coefficient) between endogenous and exogenous variables.Table 2 shows three different types of values -path coefficient, item loadings, and coefficient of determination (R 2 ) -in the PLS measurement model.This study defines brand association, brand awareness, brand superiority, brand resonance, brand affection, and CSR as exogenous variables and brand image as endogenous variables.Table 2 below demonstrates the hypothesised model created using SmartPLS is 3.2.8.The Cronbach's alpha values and the composite reliability are used to conduct the reliability test.Furthermore, PLS-SEM analysis computes two types of validity.The reliability and validity testing criteria are detailed in the following sections. Reliability test The reliability and validity of several constructs in the measurement model (outer model) are first examined.Using Cronbach's alpha and composite reliability values, the reliability of constructs is tested.Cronbach's alpha values and the composite reliability values of all constructs should be higher than 0.70 to make them reliable (Hair et al., 2012;Bagozzi & Yi, 1988).As shown in Table 2, all Cronbach's alpha and composite reliability values exceed 0.70, indicating strong internal data consistency (Hair et al., 2012).As a result, the current research meets the reliability of all constructs. Predictive relevance (Q 2 ) Under predictive relevance (Q 2 ), PLS is used to evaluate the predictive validity of a large complex model generated blindfolded.Q 2 demonstrates how well the empirically acquired data can be recreated using the given model and the PLS parameters.Q 2 > 0 means the model contains predictive relevance, whereas Q 2 < 0 means a lack of predictive relevance.This study has achieved a Q 2 value of 0.405 for the brand image variable, which is greater than zero and implies the predictive relevance of the model.Hence, the Q 2 value of this study indicates that the model used in the investigation is adequate to explain the brand image of the hotel industry. 
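For readers who wish to reproduce the reliability screening outside SmartPLS, the snippet below shows how Cronbach's alpha and composite reliability are computed from item-level data. The respondent scores and outer loadings in the example are hypothetical, not the study's data.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) array of Likert scores for one construct
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    # loadings: standardized outer loadings of the construct's items
    loadings = np.asarray(loadings, dtype=float)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

# Hypothetical example: 299 respondents, one construct with 4 five-point items
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(299, 4))
print(cronbach_alpha(scores))                        # random data gives alpha near zero;
                                                     # a reliable construct needs > 0.70
print(composite_reliability([0.72, 0.78, 0.81, 0.75]))   # ~0.85, above the 0.70 threshold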
Coefficient of determination (R 2 ) The coefficient of determination (R 2 ) indicates the variation in the endogenous variable produced by exogenous variables.The present study has achieved an R 2 value of 0.745, which implies the influence of independent variables on the dependent variable by 74.50%.It is a generally accepted rule that R 2 values of 0.25, 0.50, and 0.75 for endogenous latent variables may be weak, moderate, and substantial (Hair et al., 2011).The current study has achieved an R 2 value of 0.745, which is close to a substantial effect.As a result, the six independent variables employed in this study, namely brand awareness, brand resonance, brand association, brand affection, brand superiority, and CSR, had a substantial impact on brand image. Multicollinearity In the study variance inflation factor (VIF) and tolerance values for exogenous variables were evaluated to detect multicollinearity.Typically, VIF values of 5 or less and tolerance levels of 0.20 or higher are required to avoid collinearity problems (Hair et al., 2011).In this study, the VIF values for all exogenous variables were below five and the tolerance values for all variables were above 0.20.This indicates that there are no multicollinearity problems in the data (Hair et al., 2013). Structural model The study considers path coefficients, p-values, t-statistics, and errors.A hypothesis is considered valid if it is significant at a 5% level of significance (t-value > 1.96 or p < 0.05) (Henseler & Fassott, 2010).The results of the structural model for hypotheses testing are shown in Table 4. The path coefficient is estimated to test the proposed hypotheses in the study.After a PLS model has been conducted, the path coefficients are estimated to indicate the hypothesized correlations connecting the latent constructs.To test the hypotheses, the researcher opted for the bootstrap approach to evaluate how important the hypothesised relationships are in the path model.Approximately 1000 resamples were used for bootstrapping.The number of bootstrap cases is similar to the original number of observations to determine standard errors and find t-statistics (Hair et al., 2013).Acceptance of the hypothesis must be significant at the 5% level (p < 0.05) level or the t-statistics must be higher than 1.96.Hypothesis H1: This hypothesis is substantially supported since the path coefficient value is 0.216, whereas the t-statistics is 2.646 (p < 0.01), implying a 1% significance level. Hypothesis H2: The path coefficient value is 0.107, which is significant at the 1% (t-statistics is 2.976; p < 0.01) level. Hypothesis H3: This study finds that brand image is positively influenced by brand superiority.For this variable, the path coefficient and the t-statistic are 0.126 and 3.207 (p < 0.01). Hypothesis H4: Table 4 shows that the path coefficient value of brand affection equals 0.004 while the t-statistics equals 0.116, which is not satisfactory (p > 0.05).Therefore, H4 is not confirmed. Hypothesis H5: This hypothesis is proved in this study since the path coefficient has achieved a positive value of 0.100 and the value of equivalent t-statistics is 2.086, which is acceptable at the 5% level. Hypothesis H6: This hypothesis is substantiated in this study since the path coefficient value equals 0.506, and such value is acceptable at a 1% level (t-statistics is 3.830; p < 0.01). 
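The collinearity screening described above can likewise be reproduced outside SmartPLS. The sketch below uses statsmodels with hypothetical composite scores for the six exogenous constructs rather than the study's data.

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Hypothetical composite scores for the six exogenous constructs (not study data)
rng = np.random.default_rng(1)
cols = ["awareness", "association", "superiority", "affection", "resonance", "csr"]
df = pd.DataFrame(rng.normal(size=(299, 6)), columns=cols)

X = add_constant(df)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=cols,
)
print(vif)        # VIF below 5 indicates no serious multicollinearity
print(1.0 / vif)  # tolerance; values above 0.20 are acceptable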
Moderation analysis The product indicator approach has been applied in this study to evaluate the moderating effect of tourism management as well as advertisement on the correlation between determinants and brand image.This approach involves developing product terms using the static independent variable and the static moderator variable indicators.In the structural model, these product terms play the roles of the indicators of the interaction term.If the interaction variable's path coefficient is significant in statistical terms (t-value > 1.96 or p < 0.05), the moderating effect is supported (Henseler & Fassott, 2010).The present study has two moderators, and therefore the moderating effect is calculated in two models, which have been discussed below. Moderating effect of tourism management First, the moderating effect of tourism management is tested based on the correlation between the six previously identified independent variables and brand image.Table 5 demonstrates the observations of the moderating role. Tourism management is tested for its moderating effect on the correlation between brand association, brand awareness, brand superiority, brand resonance, brand affection, CSR, and brand image.The observations of this moderating effect test are given in Table 5. Hypothesis H7: It is found that brand awareness is correlated with the brand image with the moderating role of tourism management.The interaction effect of brand awareness on tourism management is shown in the model to evaluate such a moderation effect.The value of the path coefficient of the interaction effect is 1.144 (Table 5), whereas the t-value is 0.751, which indicates it is not significant (p > 0.05).Thus, H7 is not supported. Table 5. effect test findings (Tourism management) Hypothesis H8: Tourism management is suggested to moderate the correlation between brand image and brand association.Table 5 shows that the path coefficient of the interaction effect of tourism management and brand association on brand image is -0.141, with a value of 0.559.As the path coefficient is negative and the value is not satisfactory (p > 0.05), tourism management does not have a moderating role in the correlation between brand image and brand association.Therefore, H8 is not confirmed either. Hypothesis H9: Table 5 demonstrates the path coefficient is 0.944 while the t-value found is 2.586; that is significant at the 1% level (p = 0.005).It indicates that tourism management moderates the correlation between brand image and brand superiority.Thus, H9 is vindicated. Hypothesis H10: Tourism management is suggested to moderate the correlation between brand affection and brand image.The interaction terms, such as brand affection and tourism management, are added to the model to evaluate this moderation effect.The path coefficient is 0.055, and the t-value is 0.068, insignificant (p > 0.05).Therefore, H10 is not confirmed. Hypothesis H11: The path coefficient of the interaction effect of brand resonance and tourism management on brand image is 1.591, and the t-value is 1.935.Such a value is momentous at the 5% level (p = 0.026).Therefore, H11 is confirmed. Hypothesis H12: The interaction term (CRS * Tourism management) is introduced to the model to evaluate this moderating effect.The interaction term "CRS * Tourism management" has obtained a path coefficient value of -0.455 and a t-value of 0.331, which negates the hypothesis (p > 0.05).Thus, H12 is not supported. 
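The product-indicator approach amounts to testing an interaction term built from the standardized predictor and moderator. A construct-level analogue is sketched below with hypothetical data; SmartPLS performs the equivalent test at the indicator level and uses bootstrapped standard errors.

import numpy as np
import statsmodels.api as sm

# Hypothetical construct scores (not the study's data)
rng = np.random.default_rng(2)
n = 299
superiority = rng.normal(size=n)                       # predictor
tourism_mgmt = rng.normal(size=n)                      # moderator
brand_image = (0.3 * superiority + 0.2 * tourism_mgmt
               + 0.25 * superiority * tourism_mgmt     # built-in moderation effect
               + rng.normal(scale=0.5, size=n))

# Standardize, build the product (interaction) term, fit the moderated regression
z = lambda x: (x - x.mean()) / x.std()
Xs, Ms = z(superiority), z(tourism_mgmt)
X = sm.add_constant(np.column_stack([Xs, Ms, Xs * Ms]))
fit = sm.OLS(z(brand_image), X).fit()
print(fit.params)     # last coefficient is the interaction (moderation) effect
print(fit.tvalues)    # |t| > 1.96 implies moderation supported at the 5% level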
Moderating role of advertisement The current study is also intended to examine the effect of moderating the role of an advertisement on the relationships between the six previously identified independent variables and brand image.The findings of this moderating effect are given in Table 6. Hypothesis H13: Table 6 demonstrates that 1.972 is the value of the path coefficient of the interaction effect of advertisement and brand awareness, whereas 2.653 is the t-value, and such value is momentous at the 1% level (p = 0.004).Thus, H13 is confirmed. Hypothesis H14: The path coefficient of this interaction (Brand association * Advertisement) effect is 0.643, and the t-value is 2.879; therefore, it is momentous at the 1% level (p = 0.002).Thus, H14 is confirmed.Hypothesis H15: The value of the path of the interaction effect of brand superiority and advertisement on brand image is negative (-0.342), and such value is not significant (t = 1.102; p = 0.135).Thus, H15 is not confirmed. Hypothesis H16: The value of the path coefficient of the interaction effect (Brand affection * Advertisement) is 0.600, and the t-value is 0.854; that means it is not significant (p > 0.05).Thus, H16 is not confirmed. Hypothesis H17: This study confirms that the path coefficient of the interaction effect (Brand resonance * Advertisement) on brand image is negative (-0.484), and the t-value is 0.582; that is, it is not important (p > 0.05).Therefore, H17 is not confirmed. Hypothesis H18: The path coefficient of the interaction effect of advertisement and CSR on brand image is 1.553, and the t-value is 2.217, which means the value is significant at the 5% level (p = 0.013).Therefore, H18 is confirmed. DISCUSSION From the findings, we can conclude brand awareness has a significant influence on brand image in the hotel industry.Our findings can also be evident from the previous studies (Frank, 2013;Chi, 2016).Besides, brand association is also closely related to brand image.In marketing and branding literature, the brand association includes a bundle of benefits that generate positive feelings in customers' minds about a brand.In the tourism sector, the brand association of a hotel comprises its external look, brand symbol (or logo), star rating, history, reputation, competitive price, country of origin, location, and user image (Severi & Ling, 2013).Our study also identifies that providing an appealing benefit to customers can create a strong brand image (Severi & Ling, 2013;Chi, 2016). In pursuit of the determinants of brand image for branded hotels, the present study explored brand superiority as an important factor.Therefore, marketing practitioners especially in the tourism sector may emphasise brand superiority for creating a brand image.However, brand affection is not a major predictor of the brand image of the hotel industry, especially for branded hotels although emotions play a significant role in customer reactions and are central to the consumer behavior literature (Ahuvia et al., 2014;Kumar et al., 2015).Such an observation is not consistent with previous observations in which researchers have studied the important correlation between brand attachment and brand image (Ahuvia et al., 2014;Kumar et al., 2015), whereas brand resonance can be a powerful element in creating a long-term relationship between a customer and brand, as seen in previous literature (Huang & Sarigöllü, 2014). 
In addition, the hospitality sector cannot ignore CSR activities because it can be one of the important tools among other determinants of brand image. Considering the direct effect, we can conclude that tourism management moderates the correlation between brand resonance and brand image in a positive and significant way.This means that with better tourism management an organization can afford may make a powerful correlation between brand resonance and brand image.If an organization's tourism management is better and if it can maintain customer relationships, it can help to develop a strong brand image.Tourism management requires improving the image of destination brands and inculcating a positive perception among tourists (Pike et al., 2010;Alamu, 2016).While tourism management moderates the correlation between brand superiority and brand resonance with brand image, no moderating effect of tourism management was found on brand association, brand awareness, brand affection, and CSR with brand image.Statistics show that brand image does not increase while tourism management interacts with brand association, brand awareness, brand affection and CSR.The respondents were aware of the attributes and benefits of the particular hotels where they used to stay; they responded positively regarding the brand attributes of those hotels.They might not have had the chance to visit all the tourist' spots; rather, they interacted with a minor portion of tourism products and responded accordingly.For this reason, the respondents might have different experiences of overall tourism management at specific hotels. Alternatively, the present study also used advertisement as a moderator on the relationships between brand association, brand awareness, brand superiority, brand resonance, brand affection, CSR, and brand image.The brand image enhancement in the hotel industry is connected to the actual customers' direct experience gathered from service performance and indirect experience gathered from advertisement.Consequently, brand awareness is stimulated by advertisement, which bestows both search and experiential brand image (Gehrels & de Looij, 2012).The advertising endeavours of specially branded hotels are also reflected by advertising on the internet due to the fact that the websites of the tour service providers communicate both tangible and intangible information about the hotel brands and their services (Henry, 2016).Customers receive information about hotel brands from both outward and inward searches (Wright et al., 2017) that work together to form customers' general brand choice and brand equity judgment (Kashkuli et al., 2014).This study has investigated whether advertisements can influence brand image by interacting with the determinants of brand image.The researchers concluded that advertisement positively and significantly influences the degree of relationship between brand awareness and brand image.Therefore, the influence of brand awareness in the creation of the brand image is stronger when organisations use advertising to promote their products.This fact is supported by Kashkuli et al. (2014), who suggested that advertisement factors do a great job of improving hotel brands.All these advertisement factors inform customers about hotel brands and create a positive image among the customers.Hence, the present study confirms that advertisement strengthens the relationship between brand awareness and brand image in the hospitality industry, especially for hotels. 
Again, the hypothesis testing reveals that the correlation between brand association and brand image is advertisement moderated by advertisement.The observations reveal that when an advertisement performs a moderating role, the influence of brand association in building brand image becomes much higher.These associations build a positive image among the audience (Kashkuli et al., 2014).Yameen (2013) stated that advertisement campaigns play a significant role in promoting hotel brands in the hotel industry.The observations of this study also provide strong support that advertisements interacting with the tangible and intangible benefits of the hotels might influence customers to form positive feelings regarding a particular hotel brand.Hence, advertisement is a significant moderator in reinforcing the correlation between brand association and brand image. The present study also reveals that advertisement has a significant moderating effect on the correlation between CSR and brand image.While companies are doing CSR practices, different types of advertising media are communicating the messages to the public.In this way, people grow a positive perception in their minds regarding a particular brand.Therefore, a good blending of CSR practices and communication media might play a crucial part in transmitting consistent and impressive messages to prospective customers, thus building a strong brand image.From the present study, it is obvious to state that with the interactive effect of CSR and advertisement a hotel's brand image can be accelerated (Park et al., 2008).From the customers' perspective, advertisement highly contributes to hotel brand equity, a composition of brand association, brand awareness, brand superiority, brand resonance, brand affection, and CSR in the hotel industry (Šerić et al., 2014). While the advertisement is found to moderate the relationships between brand awareness and brand image; brand association and brand image; and CSR and brand image, there is no moderating effect of advertisement in the relations between brand affection, brand resonance, brand superiority and brand image.This study hypothesised advertisement moderates the relationship between brand affection and brand image; nevertheless, statistical findings generated from the raw data did not support this hypothesis.Hence, the brand image does not increase significantly even though there is a combined effect of brand affection and advertisement.Similarly, the advertisement could not influence the relation between brand superiority and brand image, and the relation between brand resonance and brand image. CONCLUSION The current research makes methodological contributions in various ways.Firstly, it adds value to hospitality and marketing literature by developing a brand image influencer that includes factors such as brand awareness, brand association, brand superiority, brand affection, brand resonance, CSR, tourism management, and advertisement.This scale will be useful for evaluating and examining hotel brand image accurately and consistently, thereby facilitating researchers in the food and accommodation service industry to detect the influencing role and significance brand image can exert on brand equity.Additionally, this study validates the adapted measures of hotel brand image through empirical testing. 
Secondly, this research employs PLS-SEM to identify the factors affecting hotel brand image.The study calculates the composite reliability, Cronbach's alpha reliability, convergent validity, and discriminant validity of latent constructs, which were found to be above the suggested minimum thresholds.The AVE value for each latent construct was examined to assess convergent validity, while discriminant validity was determined by comparing the correlations among latent constructs with the square roots of AVE.Cross-loading matrix results supported the discriminant validity of the conceptual framework.Therefore, this study successfully applies a robust approach (PLS path modeling) in examining the psychometric properties of latent constructs and testing the predictive power of the tool. Finally, this study also contributes methodologically by developing and validating some items for advertisement, which future researchers can use.However, by gaining a deeper comprehension of the factors that influence brand image, managers can develop a brand management roadmap.Quality services are a critical aspect of hotel brand management, as they enable guests to find a better hotel brand among the alternatives.The present study evaluates brand attributes that relate to the company's tangible and intangible elements, which in turn assess consumers' socio-psychological needs.Consequently, the study's results will allow hotel companies to establish a robust brand image by meeting guests' socio-psychological priorities while staying at a hotel. The study's drawbacks include the use of a cross-sectional methodology, which may restrict the detection of causal inferences, and the inclusion of only guests from specific hotel brands in two cities.Both of these factors decrease the generalizability of the study findings.Future studies should use a longitudinal approach to examine causal linkages and a more varied sample of hotels from throughout the nation. The most significant assets for hotel businesses are the brand image and customers' perceptions of the brand.If the companies can build a powerful brand with a great image, they can enjoy numerous marketing and revenue advantages.An improved understanding of the determinants of the brand image will enable managers to develop specific guidelines for tactical brand management.Since the current study's findings are generated from the responses of actual customers of the branded hotels in Bangladesh, it can help this country's tourism industry develop its products to build a better brand image.For the development and management of the hotel industry, policymakers might also consider the findings of this study to develop this industry by upholding a strong brand image of this sector.In addition, this empirical study will contribute theoretically and practically to the existing marketing and branding literature. According to Cheung et al. 
(2019), brand awareness enhances brand familiarity and recall performance and contributes to brand equity. Therefore, strong brand attributes stimulate brand awareness, which in turn stimulates brand image (Matikiti-Manyevere et al., 2021). Marketers use brand association to create positive attitudes and perceptions toward a brand. Indeed, brand association encompasses promising, robust, and sole associations of a brand (Jin et al., 2019). Amron (2018) states that brand association helps build a strong brand image. Brand superiority measures customers' overall cognitive assessment of a particular brand. Cognitive evaluation is how customers differentiate the brands and instill a positive image in the customers' minds (Crolic et al., 2019). According to Kim et al. (2020), brand superiority created by differentiation and uniqueness is an indicator of brand image. According to Tanu et al. (2019), brand affection underlines the strong emotional attachment that stimulates customer reaction to a brand. Gordon (2010) stated that brand affection encompasses strong emotions influencing customer response. Brand resonance helps customers create a good connection with a brand. As a result, brand resonance contributes to the construct of brand image (Cheng et al., 2019). According to Lee (2014), CSR helps a company and its brands instill a strong perception in the hearts of customers. Jeon et al. (2020) indicated that CSR has a positive impact on building brand image.

Table 2. The AVE values are greater than 0.50 (Bagozzi & Yi, 1988), which denotes that the constructs' convergent validity supports the current research. The items' final standardized outer loadings are higher than 0.50. Loadings greater than 0.5 can be acceptable if additional factors are in the block for comparison; however, Bagozzi and Yi (1988) proposed that 0.60 should be the minimum item loading. All of the item loadings in this investigation had values of more than 0.60, indicating convergent validity at the indicator level, as shown in Table 2. As a result, AVE values of 0.50 and higher for all constructs and item loadings of 0.60 and above signify construct convergent validity.

Table 3. Correlations of constructs and discriminant validity assessment. For each construct, the square root of the AVE is greater than its correlations with the other constructs, confirming the discriminant validity (Fornell & Larcker, 1981) of the constructs in this study in yet another aspect.

Table 4. The structural estimates (Main effect)

Table 6. Findings of moderating effect test (Advertisement)
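The reliability and validity checks summarised above (AVE above 0.50, item loadings above 0.60, and the Fornell-Larcker comparison of the square root of AVE with inter-construct correlations) follow standard formulas. The Python sketch below illustrates that arithmetic; the construct names, loadings, and correlation values are hypothetical and are not the study's data.

```python
import numpy as np

# Hypothetical standardized outer loadings per construct (not the study's data).
loadings = {
    "brand_awareness":   np.array([0.72, 0.81, 0.68]),
    "brand_association": np.array([0.75, 0.79, 0.70, 0.66]),
}

# Hypothetical inter-construct correlation matrix, same ordering as `names`.
names = list(loadings)
corr = np.array([[1.00, 0.55],
                 [0.55, 1.00]])

def ave(lam):
    """Average variance extracted: mean of squared standardized loadings."""
    return float(np.mean(lam ** 2))

def composite_reliability(lam):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = lam.sum()
    return float(s ** 2 / (s ** 2 + np.sum(1.0 - lam ** 2)))

for name, lam in loadings.items():
    print(f"{name}: AVE={ave(lam):.3f}, CR={composite_reliability(lam):.3f}, "
          f"min loading={lam.min():.2f}")

# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed its
# correlations with every other construct.
sqrt_ave = np.array([np.sqrt(ave(loadings[n])) for n in names])
for i, n in enumerate(names):
    others = np.delete(corr[i], i)
    ok = sqrt_ave[i] > np.abs(others).max()
    print(f"{n}: sqrt(AVE)={sqrt_ave[i]:.3f}, "
          f"max |corr| with others={np.abs(others).max():.2f}, "
          f"discriminant validity={'supported' if ok else 'not supported'}")
```

Run against real loading and correlation estimates exported from a PLS-SEM tool, the same checks reproduce the thresholds discussed above (AVE ≥ 0.50, loadings ≥ 0.60, √AVE greater than inter-construct correlations).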
2024-03-08T16:02:06.527Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "b9fdbc89fdee908c71b1a8caee7544357b1a2533", "oa_license": "CCBY", "oa_url": "https://virtusinterpress.org/spip.php?action=telecharger&arg=13217&hash=53b00721fcebdb28358d0091c22b22c16de68e82", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "01f4bfbc3bf85d840557d4b2b9f7ad10f6378d86", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
57202753
pes2o/s2orc
v3-fos-license
A DC-DC Circuit Using Boost Converter for Low Voltage Energy Harvesting Application Corresponding Author: Nurul Arfah Che Mustapha Department of Electrical and Computer Engineering, International Islamic University Malaysia, Kuala Lumpur, Malaysia Email: nurularfah@yahoo.com Abstract: A DC-DC step-up voltage converter is designed to convert a very low voltage supply, 35 mV, such as from a thermal energy source like body heat. The converter can generate an output voltage of up to 210 mV, approximately six times its initial input voltage, at a frequency of 36 GHz. The effect of the switching transistors, inductor current, and rise and fall time is also highlighted. The circuit operates using a 2 μH inductor and a 0.01 fF load capacitor and is simulated using the PSpice simulation tool. This voltage converter is suitable for energy harvesting applications in implanted electronic devices.

Introduction The trend of portable handheld and remotely implanted devices for monitoring purposes is gradually on the rise. Cochlear and neural implants for patients affected by impairment-related diseases are expected to become relevant around the world in the immediate future. Such implants, sending electrical pulses deep into the brain to activate some of the pathways for motor control, will become a popular field. Other commonly used applications are already diverse, ranging from monitoring environmental and oceanic conditions to animal tracking and autonomous flying or moving objects. The energy requirement of such devices is very small, often needing electronic components capable of producing less than the equivalent of a 1 V battery source. In recent years, energy from non-conventional ambient sources has been harvested to complement or even replace conventional battery-powered methods. Such devices need a continuous power supply without having to replace the source. Heat from the sun or the human body, or heat generated by industrial engines, combined with vibrations from the surroundings, are becoming serious contenders for producing such battery-equivalent sources. The advantages include reducing maintenance cost and minimizing the chemical waste from batteries while scavenging energy that would otherwise go to waste, as discussed in (Anton and Sodano, 2007; Erturk and Inman, 2011; Wang and Yuan, 2008). The commonly used harvesting transducers for light, vibration or thermal energy produce very small voltage (mV) sources, not enough to meet even very small energy requirements. Often, additional circuitry is needed to step up the energy to supply a minimum of 1 V. Another associated concern with these sources is instability when it comes to sensitive electronic applications such as those on aircraft wings. The technique introduced by (Meninger et al., 2001) using ambient sources by energy scavenging rarely provides a solution to this problem. This paper reports on a proposed harvesting technique which converts very small input voltage energy, 50 mV or less, from a thermal source using the conventional boost converter to store energy into a capacitor, with suitable parameters for circuit operation. The results gathered consider the following parameters: the frequency of the vibrations, and the capacitor and inductor values of the booster used.
Further, such results show a higher level of stability in the output voltage and a potential application in the high-frequency (GHz) range when compared with what is reported in (Mustapha et al., 2013).

DC-DC Boost Converter Circuit Switching power boost converters always give an output voltage greater than the input voltage. A circuit configuration of the DC-DC boost circuit, consisting of an inductor, L, a diode, a load capacitor, C, and a switch circuit, S W , is shown in Fig. 1a. A configuration in which an NMOS transistor, M S , replaces the switch circuit and a diode-connected transistor, M D , replaces the diode is shown in Fig. 1b. The transistor switch is turned ON and OFF depending on the mode of operation at fs = 1/T. There are two modes of operation for the boost converter, as reported in (Ang and Oliva, 2005; Daycounter, 2004; Emadi et al., 2009), and the operation can be in either one of them at any given time: (i) a continuous mode and (ii) a discontinuous mode.

Continuous Mode: When Switch is ON state A continuous mode starts when transistor M S is turned ON and conducts a pulse. The diode transistor at this time is turned OFF and is reverse biased. Charge is stored as current is drawn into the inductor, L. The inductor current, I L , ramps up linearly during the time interval t ON . The relation between the ON state and the inductor current is given in Equation 1:

V i = L (I 2 - I 1 ) / t ON (1)

Discontinuous Mode: When Switch is OFF state While in a discontinuous mode, transistor M S is switched OFF. The inductor voltage reverses its polarity in order to maintain its current, since the current in the inductor cannot change instantaneously. The inductor voltage builds up and, when this energy is higher than the combined energy in the diode transistor and load capacitor, the inductor delivers its voltage to the load capacitor through the diode transistor. The voltage at the output capacitor is higher than the input voltage. During this time, t OFF , the inductor current falls linearly from I 2 to I 1 , and the state of the inductor current decides the operation mode (Emadi et al., 2009). Equation 2 shows the relation when the switch is OFF:

V o - V i = L (I 2 - I 1 ) / t OFF (2)

At every interval of a duty cycle, charge is built up at the load capacitor. For high efficiency, the diode should be an ultrafast recovery diode (Daycounter, 2004).

Results The charge built up at the output depends on the switching transistors and the rise and fall time of the switching pulse. In this study, several attempts were made for testing purposes to get the best parameter values, i.e., the inductor value, the load capacitance value and the working frequency of the design. Calculations were made before the Spice simulation, as shown in Table 1, and the results show that the output voltage increases when the operating frequency is increased. Figure 2 shows the switching transistor pulses at a rate of 0.03 ns, or about 33 GHz. Figure 3 shows the effect of changing the value of the inductor and the frequency rate. The input voltage and load capacitor are maintained the same throughout the simulation testing, using a 35 mV input. Substituting Equations 1 and 2 and simplifying, the average output voltage is related to the duty cycle, D:

V o = V i / (1 - D) (3)

Discussion As soon as the transistor switch is in the ON state and the first interval starts, the inductor current starts increasing linearly and there is no current across the diode. The inductor current then decreases continuously until the switch is turned ON again.
In the second interval, the switch is not turned ON before the current reaches zero, while the inductor current is still decreasing. The diode at this time does not allow current to flow in the opposite direction. This explains the sudden peak in the diode current, which remains zero elsewhere, as seen in Fig. 2. Using Equation 4, the different parameters listed in Table 1 and the frequency of operation were tested. The larger the value of the ripple current, or the peak current, the greater the inductor saturation and the stress on the transistor (Daycounter, 2004). Thus, it is better to choose a lower value of ripple current, which consequently gives a reasonable value of inductance, L. Increasing the inductance value will eventually decrease the output voltage. Another parameter that contributes to the level of the output voltage is the rise and fall time of the transistor switch, i.e., the ON and OFF pulses. A complete pulse period is equal to twice its pulse width, pw. By changing the rise and fall time of the pulse, the rate at which the output voltage builds up also changes. Simulation using Spice at constant inductance, capacitance, frequency and pulse width with different rise and fall times is shown in Fig. 4. A faster rise and fall time accumulates the output voltage faster. Hence, the minimum rise and fall time gives the highest output voltage.

Conclusion The circuit is shown to operate according to the specified frequency, the rise and fall time of the input, and the duty cycle of the switching. The faster the switching rate, the greater the output voltage that can be obtained. Choosing a lower ripple current is better for lowering the inductor's saturation and stress. With a proper design for the low-voltage requirement, the design parameters are L = 2 µH, C = 0.01 fF and f = 36 GHz, with an input of 35 mV and an output voltage of 210 mV.
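As a rough numerical check of the duty-cycle relation above, the sketch below evaluates the ideal, lossless continuous-conduction boost equations: V o = V i / (1 - D), together with the standard ripple relation ΔI L = V i · D / (L · fs), which follows from Equation 1 with t ON = D/fs and is assumed here as the form of the paper's Equation 4. The duty cycle is back-calculated so that the ideal output matches the reported 210 mV; diode drops, switching losses and the actual PSpice device models are ignored, so this is only an idealised sketch and not a reproduction of the simulated circuit.

```python
# Ideal continuous-conduction-mode boost converter relations (lossless sketch).
# The duty cycle is an assumed value, back-calculated from the reported
# input/output voltages; the paper's own results come from PSpice simulation.

V_IN = 35e-3           # input voltage, 35 mV
V_OUT_TARGET = 210e-3  # reported output voltage, 210 mV
L = 2e-6               # inductance, 2 uH
F_SW = 36e9            # switching frequency, 36 GHz

def boost_vout(v_in: float, duty: float) -> float:
    """Ideal boost output voltage: Vout = Vin / (1 - D)."""
    return v_in / (1.0 - duty)

def inductor_ripple(v_in: float, duty: float, l: float, f_sw: float) -> float:
    """Peak-to-peak inductor current ripple: dI = Vin * D / (L * f_sw)."""
    return v_in * duty / (l * f_sw)

# Duty cycle needed (ideally) to reach the target output voltage.
duty = 1.0 - V_IN / V_OUT_TARGET      # = 1 - 35/210, about 0.833

print(f"duty cycle D       : {duty:.3f}")
print(f"ideal Vout         : {boost_vout(V_IN, duty) * 1e3:.1f} mV")
print(f"inductor ripple dI : {inductor_ripple(V_IN, duty, L, F_SW) * 1e6:.3f} uA")
```

With these values the ideal relation returns 210 mV for a duty cycle of about 0.83, consistent with the roughly six-fold step-up reported in the paper; the predicted ripple current is well below a microampere because of the very high assumed switching frequency.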
2019-01-23T15:58:39.213Z
2015-06-09T00:00:00.000
{ "year": 2015, "sha1": "e66c6bef313b7dfef40c6670ca8afc2de0d7fda8", "oa_license": "CCBY", "oa_url": "http://thescipub.com/pdf/10.3844/ajassp.2015.272.275", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "19936a6fab8722983ce8a0cb8721197220ddd356", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
4467970
pes2o/s2orc
v3-fos-license
Highly Proliferative α-Cell-Related Islet Endocrine Cells in Human Pancreata. The proliferative response of non-β islet endocrine cells in response to type 1 diabetes (T1D) remains undefined. We quantified islet endocrine cell proliferation in a large collection of nondiabetic control and T1D human pancreata across a wide range of ages. Surprisingly, islet endocrine cells with abundant proliferation were present in many adolescent and young-adult T1D pancreata. But the proliferative islet endocrine cells were also present in similar abundance within control samples. We queried the proliferating islet cells with antisera against various islet hormones. Although pancreatic polypeptide, somatostatin, and ghrelin cells did not exhibit frequent proliferation, glucagon-expressing α-cells were highly proliferative in many adolescent and young-adult samples. Notably, α-cells only comprised a fraction (∼1/3) of the proliferative islet cells within those samples; most proliferative cells did not express islet hormones. The proliferative hormone-negative cells uniformly contained immunoreactivity for ARX (indicating α-cell fate) and cytoplasmic Sox9 (Sox9Cyt). These hormone-negative cells represented the majority of islet endocrine Ki67+ nuclei and were conserved from infancy through young adulthood. Our studies reveal a novel population of highly proliferative ARX+ Sox9Cyt hormone-negative cells and suggest the possibility of previously unrecognized islet development and/or lineage plasticity within adolescent and adult human pancreata.

Type 1 diabetes (T1D) is characterized by a considerable loss of b-cells and subsequent insulin deficiency (1)(2)(3)(4)(5)(6)(7). Although b-cells have been reported to persist in T1D pancreata for several years after diagnosis, we recently found that T1D pancreata do not exhibit evidence of increased b-cell proliferation or evidence of b-cell neogenesis or transdifferentiation (1). However, the impact of T1D on non-b-cells has not been studied.
Thus, the regenerative response of islet endocrine cells to T1D remains poorly understood. Lineage-tracing studies in mice suggest that a-cells may have unappreciated plasticity. a-Cells appear to transdifferentiate into b-cells in mice under some circumstances (2)(3)(4). These results imply that a-cells might be a potential source for b-cell neogenesis as a novel therapy for diabetes. Indeed, insulin-glucagon-coexpressing cells have been reported within pancreata of human patients with acute pancreatitis (5). However, potential compensatory responses from nonb-cell sources in human pancreata with long-standing T1D remain poorly understood, as only a few studies have been performed. Increased Ki67+ islet cells have been observed in both aand b-cells of pancreata from individuals with recent-onset T1D (6). Ki67+ ductal cells have also been described in transplanted pancreas of patients with T1D (7). Increased cell proliferation has also been reported in pancreatic duct glands of T1D pancreata (8). Taken together, these observations hint at a role for non-b-cell sources in T1D pathophysiology or compensation. Given the lack of consensus, we considered the possibility that other islet endocrine cells could participate or respond to autoimmunity with attempted regeneration. We surveyed human islet proliferation in nondiabetic control and T1D pancreata from the JDRF Network for Pancreatic Organ Donors with Diabetes (nPOD) collection, applying high-throughput imaging and analysis using techniques similar to those used in our previous study (1). We find that islet proliferation did not increase in response to T1D. But islet cell proliferation was sharply increased in many adolescent and young-adult pancreata of individuals with and without T1D. We identify a novel population of highly proliferative, a-related cells within many adolescent and young-adult pancreata. Human Pancreatic Samples Paraffin-embedded pancreas tissue sections were obtained from the JDRF nPOD after a waiver from our institutional review board. Pancreata were studied based on availability. Tissues were processed by nPOD by standardized operating procedures (http://www.jdrfnpod.org/for-investigators/ standard-operating-procedures/). Paraffin-embedded tissues were fixed in 10% neutral buffered formalin for 24 h and up to 40 h for pancreata with high fat content (1). Islet Morphometry Islet endocrine and a-cell morphometry were assessed with Volocity 6.1.1 (PerkinElmer) as previously described (9). Zeiss AxioImager M1 (Carl Zeiss Microscopy) with automated X-Y stage and Orca ER camera (Hamamatsu) acquired images of tens of thousands of individual nuclei/sample (Supplementary Table 4). Proliferation Analysis Ki67+ islet endocrine (synaptophysin), a-cell (glucagon), PP, SS, ghrelin, and cytoplasmic Sox9 (Sox9 Cyt ) proliferation were calculated as % total cells. A subset of high proliferators was quantified as % intra-islet Ki67+ cells for insulin and all other markers. High proliferation was defined as having an islet endocrine cell replication rate .0.71%, corresponding to z score of 0.5. TUNEL Apoptosis analysis was performed in a subset of available control and T1D samples as previously described (1). Total terminal deoxynucleotide TUNEL-positive islet endocrine cells were assessed in .85,000 islet cells/condition. Total TUNEL+ Sox9 Cyt cells were assessed in 993 islet cells/ condition. In every sample, TUNEL+ pancreatic ducts were imaged to ensure adequate TUNEL staining. 
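As a schematic illustration of the proliferation quantification described in the methods above — Ki67+ nuclei expressed as a percentage of total islet endocrine cells, with "high proliferation" defined by a z-score cutoff of 0.5 (corresponding to a replication rate above roughly 0.71%) — the Python sketch below shows the arithmetic. The donor names and counts are invented for illustration and are not the nPOD data.

```python
import statistics

# Hypothetical per-sample counts (not the nPOD data): Ki67+ islet endocrine
# nuclei and total islet endocrine nuclei counted per donor.
samples = {
    "donor_01": (120, 25_000),
    "donor_02": (15, 30_000),
    "donor_03": (310, 28_000),
    "donor_04": (40, 22_000),
}

# Proliferation expressed as percent of total islet endocrine cells.
rates = {k: 100.0 * ki67 / total for k, (ki67, total) in samples.items()}

mean = statistics.mean(rates.values())
sd = statistics.stdev(rates.values())
Z_CUTOFF = 0.5   # study definition: z score of 0.5 (~0.71% replication rate)

for donor, rate in rates.items():
    z = (rate - mean) / sd
    label = "high proliferator" if z > Z_CUTOFF else "typical"
    print(f"{donor}: Ki67+ = {rate:.2f}% (z = {z:+.2f}) -> {label}")
```

In the study itself the mean and standard deviation would be computed over the full donor cohort rather than the four invented samples shown here, but the classification logic is the same.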
Increased Islet Endocrine Cell Proliferation in Some Adolescents and Young Adults Irrespective of T1D Status We previously quantified b-cell proliferation in control and T1D nPOD pancreas organ donor samples and found no evidence for a b-cell regenerative response to T1D in any age-group (1). During our studies, we noted that some samples had islets with abundant Ki67+ cells, indicating islet proliferation ( Fig. 1A-C). Samples with highly proliferative islets represented only a portion of pancreatic donors, largely from adolescents and young adults. Proliferative islet cells were present in control and T1D samples; T1D did not influence the proliferative islet cell phenotype (Fig. 1D-F and Supplementary Table 5). Cell cycle entry within islets, as measured by the percentage of Ki67+ islet cells among total islet nuclei, was equivalent between control and T1D for age-matched cohorts of children and adolescents (0.63 vs. 0.81%) or adults (0.38 vs. 0.24%), with proliferation more variable in control subjects ( Fig. 1G-J and Supplementary Table 5). Most proliferative islet cells did not express insulin, suggesting they were not b-cells ( Fig. 1A and B). We quantified Ki67+ b-cells as a fraction of the total Ki67+ islet cells among the highly proliferative pancreatic samples. The vast majority of Ki67+ islet cells did not contain insulin; 89.8% of Ki67+ intra-islet cells of control subjects and 98.86% of Ki67+ intra-islet cells of T1D subjects did not express insulin ( Fig. 1K and Supplementary Table 6). Moreover, Ki67+ islet cells were not CD3+ or CD31+, thereby excluding lymphocytes or endothelial cells, respectively (Supplementary Fig. 1). We therefore considered the possibility of proliferative non-b islet endocrine cells within our samples. Increased Islet Endocrine Cell Proliferation in Pancreata From Some Adolescents and Young Adults Irrespective of T1D Status We imaged and quantified total islet endocrine cell proliferation using Ki67 and synaptophysin (marker of islet endocrine cells) in 59 control and 47 T1D pancreata to determine the identity of the proliferative islet cells and to test for a proliferative response to T1D. Similar to previous findings of b-cell proliferation (1,10,11), islet endocrine cell proliferation was greatest in pancreatic samples from infants and children without diabetes (up to 8.68%) and declined with age ( Fig Table 9). Islet endocrine cell proliferation was verified with other markers of cell Figure 1-High levels of intra-islet proliferation within some adolescents and young adults but not within b-cells. Islet images for control (A-C) and T1D (D-F) pancreata stained for DAPI (blue), insulin (Ins)/synaptophysin (Syn) (green), and Ki67/pHH3/PCNA (red). Scale bar: 100 mm. G and H: Quantification of intra-islet cell proliferation in control (G) and T1D (H) represented by age (years) demonstrates high intra-islet proliferation in both control and T1D islets. Data points represent the mean value for each of 33 control and 18 T1D pancreata. I and J: Ki67+ intra-islet cells (% total) in T1D were very similar to control child and adolescent pancreata (I) and young-and older-adult pancreata (J). Results expressed as mean 6 SEM for 11 control and 12 T1D children and adolescent subjects, and 17 control and 6 T1D young-and older-adult subjects. K: Ki67+ b-cells represent a small fraction of the total intra-islet Ki67+ cells in a subset of highly proliferating control (n 5 12) and T1D (n 5 6) pancreata, identified as described in RESEARCH DESIGN AND METHODS. 
yr, year; yrs, years. Scale bar: 100 mm. G and H: Quantification of islet endocrine cell proliferation in control (G) and T1D (H) represented by age (years). Data points represent the mean value for each of 59 control and 46 T1D pancreata. Ki67+ synaptophysin+ cells (% total) in control and T1D versus age (years) demonstrates that islet endocrine cell proliferation generally declines with age with notable proliferation in adolescents and young adults. I and J: Ki67+ synaptophysin+ (% total) in T1D was very similar to control child and adolescent pancreata (I) but reduced compared with control young-and older-adult pancreata (J). Results expressed as mean 6 SEM for 23 control and 23 T1D children and adolescent subjects and cycle entry. Abundant synaptophysin+ phospho-histone H3+ (pHH3) and PCNA+ nuclei were observed, confirming increased cell cycle entry ( Supplementary Fig. 2). To test for regional variation of the islet cell proliferative phenotype, we studied samples from the opposite region of the pancreas (e.g., head/body when tail had been previously measured) from four adolescent and young-adult case subjects and compared synaptophysin+ Ki67+. Reassuringly, there was minimal intrasample variation of islet cell proliferation ( Supplementary Fig. 3 and Supplementary Table 10). Thus, the proliferative islet endocrine cell population did not substantially vary within pancreatic region. Taken together, these studies indicate that proliferative islet endocrine cells are present within a significant population of pancreata from adolescent and young-adult donors. a-Cell Proliferation Declines With Age and Is Greatly Increased in Some Adolescent and Young-Adult Pancreata We considered the possibility that a-cells could represent the proliferating islet endocrine cells of pancreata from adolescents and young adults. We measured a-cell proliferation in control samples, quantifying glucagon+ Ki67+ nuclei ( Table 7). Infants exhibited very high (up to 8.93%) and older adults very little (0-0.42%) a-cell proliferation. (Fig. 3G and Supplementary Table 7). However, a-cell proliferation was quite elevated (up to 3.4% of a-cells were Ki67+) in samples from a fraction (;1 of 3) of adolescents and young adults. Thus, a-cell proliferation gradually decreased with age, consistent with previous findings in islets (above), a-cells (12), and b-cells (1,10,11). Notably, many pancreata from adolescents and young adults with increased islet endocrine cell proliferation also had abundant a-cell proliferation (Supplementary Table 8). We tested for a proliferative response of a-cells to T1D. Glucagon+ Ki67+ cells were present in some T1D samples of children, adolescents, and young adults ( Fig. 3D-F). Abundant a-cell proliferation was present in a fraction of pancreata from children, adolescents, and young adults ( Fig. 3H and Supplementary Table 8). Similar to findings in control subjects, a-cell proliferation declined as a function of age to very low levels in older T1D adults (0-0.07%). Notably, a-cell proliferation was decreased in T1D samples from children/adolescents and young/older adults ( Fig. 3I and J and Supplementary Tables 7 and 8). Similarly, Ki67+ a-cells per islet were decreased in T1D pancreata from children, adolescents, and young adults (Supplementary Table 11). Individuals with T1D with the most a-cell proliferation were mainly children and adolescents with short disease duration ( Fig. 3K and L and Supplementary Table 8). 
a-Cell proliferation was present in the same subset of control and T1D adolescent and young-adult samples that exhibited islet endocrine cell proliferation (Supplementary Tables 7 and 8). Given parallel trends of proliferation between islet endocrine cells and a-cells, we considered the possibility that a-cells could represent the majority of proliferative islet cells. However, glucagon+ Ki67+ cells only accounted for 24 and 30% of total intra-islet cell proliferation in the highly proliferative control and T1D samples, respectively ( Fig. 3M and Supplementary Table 12). Since b-cell proliferation comprised only 10.2 and 1.1% of total intra-islet cell proliferation in control and T1D pancreata (Fig. 1G and Supplementary Table 6), most islet endocrine proliferation could not be accounted for within aand b-cells. Thus, our studies revealed the presence of another population of highly proliferative islet endocrine cells within adolescents and young adults. PP, SS, and Ghrelin Cells Exhibit Minimal Proliferation In a further attempt to identify the highly proliferative islet endocrine cells, we separately quantified Ki67 staining with PP, SS, and ghrelin. However, there were very few proliferating PP, SS, and ghrelin cells within a random sampling of adolescent and young-adult pancreata (Fig. 4A-I and Supplementary Table 13). No difference was noted between control and T1D pancreata for any cell populations. A separate analysis quantified PP, SS, and ghrelin proliferation as a percentage of intra-islet proliferation (Fig. 4J-L) with use of the subset of donors with or without T1D identified with high levels of proliferation. PP, SS, and ghrelin cell proliferation accounted for very little of the total intra-islet proliferation within control and T1D pancreata (Fig. 4J-L and Supplementary Table 14). Thus, most of the islet cells entering the cell cycle within the proliferative samples did not contain any of the five known islet endocrine hormones, although some Ki67+ cells were glucagon+. Our studies therefore reveal the presence of highly proliferative islet endocrine cells that do not express any of the known islet hormones. No Association of Islet Endocrine Cell Proliferation With Pancreas Harvest Conditions Since increased islet endocrine cell proliferation was only present within a fraction of pancreata from adolescents and young adults in the control (9 of 34) and T1D (2 of 31) groups, we considered the possibility that islet cell cycle entry might be influenced by the postmortem state or conditions of organ harvest. We compared synaptophysin+ Ki67+ cell content of pancreatic samples versus donor clinical parameters. Reassuringly, there was no association between duration of intensive care unit (ICU) stay and islet cell proliferation in control or T1D subjects ( Supplementary Fig. 4A and B and Supplementary Table 15). Similarly, there was no association between a-cell proliferation and ICU stay ( Supplementary Fig. 4C and D and Supplementary Table 15). Concerns have been raised regarding the potential impact of warm or cold ischemia to reduce Ki67 immunoreactivity and thus confound measurement of human islet cell proliferation (13). Although pancreatic transit time varied dramatically, there was no correlation with islet cell proliferation or a-cell proliferation (Supplementary Fig. 4E-H and Supplementary Table 15). Thus, impaired sample Analysis of a-cells in control and T1D pancreata. 
Islet images for control (A-C) and T1D (D-F) stained for DAPI (blue), glucagon (Gcg) (green), and Ki67 (red). Scale bar: 100 mm. G and H: Quantification of a-cell proliferation in control (G) and T1D (H) represented by age (years). Data points represent the mean value for each of 59 control and 47 T1D pancreata. Ki67+ a-cells (% total) in control and T1D vs. age (years) demonstrates that a-cell proliferation declines with age with notable proliferation in adolescents and young adults. I and J: Ki67+ a-cells (% total) in T1D were reduced compared with in control child and adolescent pancreata (I) and young-and older-adult pancreata (J). Results expressed as mean 6 SEM for 23 control children and adolescent subjects and 24 children and adolescents with T1D and for 28 young-and older-adult control subjects and 23 young and older adults with T1D. *P , 0.05. K and L: a-Cell proliferation vs. T1D duration (years) demonstrates that most pancreata from T1D individuals exhibited low rates of a-cell proliferation. M: Ki67+ a-cells represent a fraction of the total intra-islet Ki67+ cells in a subset of highly proliferating control (n = 12) and T1D (n = 6) pancreata. mths, months; yr, year; yrs, years. quality did not seem to explain the variability of islet endocrine proliferation in our population. Sox9 Cyt in Proliferative Islet Cells We considered the possibility that highly proliferative islet endocrine cells could express markers of islet endocrine progenitors. Ngn3, a key islet endocrine cell progenitor, was not detected within either control or T1D pancreata (data not shown). We then tested for Sox9 expression, which influences development of pancreatic progenitor cells and adult exocrine cells, among other organs (14)(15)(16). Surprisingly, most Ki67 or PCNA proliferative cells showed Sox9 immunoreactivity in the cytoplasm (Sox9 Cyt )-in sharp contrast to pancreatic ducts (Fig. 5A and B and Supplementary Fig. 5A-D). To further test Sox9 Cyt immunoreactivity, we stained pancreata with phospho-specific antisera against serine 181 of Sox9 (p-Sox9). p-Sox9 detection was consistent with earlier Sox9 studies (17)(18)(19)(20)(21)(22)(23). p-Sox9 antisera yielded cytoplasmic immunoreactivity in scattered islet endocrine cells that were uniformly ARX+ ( Supplementary Fig. 6A-D). Many p-Sox9+ ARX+ cells also expressed glucagon in variable amounts. (Supplementary Fig. 6C and D). In contrast to the above Sox9 studies, p-Sox9+ antisera did not stain nuclear antigens in pancreatic acini and only weakly detected nuclei within duct structures (Supplementary Fig. 6E and F). Collectively, these studies with p-Sox9+ antisera support the possibility of Sox9 Cyt immunoreactivity within the highly proliferative ARX+ glucagon variable islet endocrine cells. Quantification revealed very high amounts of Sox9 Cyt Ki67+ cells (up to ;13%) in some adolescent control and T1D samples (Fig. 5C and Supplementary Table 16). Indeed, Sox9 Cyt immunoreactive cell proliferation accounted for ;72% of total intra-islet proliferation in the subset of control and T1D samples with high endocrine cell proliferation ( Fig. 5D and Supplementary Table 16). Thus, Sox9 Cyt Ki67+ cells represented the proliferative hormone negative intraislet cells, which were also detected by p-Sox9 antisera (Supplementary Fig. 6G and H). Notably, Sox9 Cyt Ki67+ cells were specific to islets and not primarily associated with pancreatic ducts or pancreatic parenchyma, as detected by ductal structures or cytokeratin-19 (Supplementary Fig. 
5E-G and Supplementary Table 17). Absence of Neuronal Markers in Sox9 Cyt Immunoreactive Cells Synaptophysin is a neuroendocrine marker and could also detect neurons in the pancreas or islet. Hence, we used additional neuronal markers to test whether Sox9 Cyt cells represented dedifferentiated neurons. Sox9 Cyt cells did not express b3-tubulin or NeuN, which detect peripheral neurons ( Supplementary Fig. 6). Thus, proliferative intra-islet cells did not express neuronal related markers. Absence of b-Cell-Specific Markers in Proliferative Islet Cells We used additional b-cell and endocrine differentiation markers to test whether proliferating intra-islet cells represented dedifferentiated b-cells. The Ki67+ islet cells did not express Pdx1 (Supplementary Fig. 8A), which defines mature b-cells and a few SS islet cells. Islet Sox9 Cyt cells also did not express the mature b-cell marker, Nkx6.1, which was always found within insulin+ cells and absent from glucagon+ cells (Supplementary Fig. 8B) (1). Similarly, Ki67+ cells did not express MafA, GAD65, PC1/3, or GLUT1 ( Supplementary Fig. 8C-F). These studies indicate that the proliferative intra-islet cells did not express b-cell related markers. islet endocrine Sox9 Cyt cells might be related to a-cells. We looked for a-cell progenitors after finding increased a-cell proliferation, given that proliferative a-cells did not represent the majority of proliferative islet cells. We stained pancreata with antisera against ARX, a marker of a-cell-related fate. Indeed, virtually all Sox9 Cyt cells were also ARX+ (control = 99.3% and T1D = 98.4%) (Fig. 6C and D and Supplementary Table 19). ARX+ Sox9 Cyt cells variably expressed glucagon: some ARX+ Sox9 Cyt cells contained glucagon, while others were glucagon negative ( Fig. 6C and D). Sox9 Cyt cells also variably expressed GLP1, and many GLP1+ cells coexpressed glucagon (Supplementary Fig. 9). Notably, many ARX+ glucagon negative cells were Ki67+ (Fig. 6E and F and Supplementary Table 20). These studies of ARX+ Sox9 Cyt Ki67+ glucagon cells reveal that many of the proliferative islet endocrine cells have an a-cellrelated phenotype. We tested for other islet endocrine markers of ARX+ Sox9 Cyt cells. We first looked for INSM1, a transcription factor present within insulinomas (24) and in murine a-, b-, SS, and PP cells (25). INSM1 expression was consistently observed in band a-cells of young-adult control and TID samples and also in ARX+ Sox9 Cyt cells ( Supplementary Fig. 10A-D). Similarly, Isl1 and -2 were readily detected in ARX+ Sox9 Cyt cells (Supplementary Fig. 10E). Additionally, Nkx2.2 marked both aand b-cells and was observed in highly proliferative Ki67+ insulin-negative islet cells (Supplementary Fig. 10F and G). Thus, ARX+ Sox9 Cyt Ki67+ cells seem to express many markers common to islet endocrine cells. To further characterize the endocrine properties of the proliferative cells, we tested for chromogranin A and other vesicular or exocytosis-related proteins. ARX+ Sox9 Cyt cells expressed chromagranin A, as well as exocytotic machinery Figure 6-Highly proliferative Sox9 Cyt cells are islet endocrine cells, expressing ARX and variable glucagon (Gcg). Control (A, C, and E) and T1D (B, D, and F) islets stained for endocrine, Sox9, Ki67, and a-cell markers. A and B: Islets stained for synaptophysin (Syn) (green) and Sox9 (red) show synaptophysin-Sox9 Cyt copositive cells. 
C and D: Islets stained for Sox9 (green), ARX (red), and glucagon (white) indicate some Sox9 Cyt -ARX-glucagon triple-positive cells (inset of C) and some Sox9 Cyt ARX+ cells that are clearly glucagon negative, as visible in the inset of D. E and F: Islets stained for glucagon (white), ARX (green), and Ki67 (red) indicate ARX-Ki67 copositive cells that do not express glucagon. Scale bar: 100 mm. yr, year; yrs, years. components synaptotagmin 1a and SNAP25 (Supplementary Fig. 11A-F). Interestingly, secretogranin III, which is typically contained within islet endocrine cells including aand b-cells (26,27), was absent from Ki67+ cells or some ARX+ cells ( Supplementary Fig. 11G and H). Thus, the ARX+ Sox9 Cyt islet endocrine cells exhibit many (but not all) of the common features of hormonesecreting a-cells. ARX+ Sox9 Cyt Islet Endocrine Cells Are Conserved in Infant and Child Pancreata To determine whether the proliferative islet endocrine cells are unique to adolescents and young adults, we studied additional pancreata from infant and young children control subjects. Sox9 Cyt synaptophysin+ cells were abundant in islets from infants and young children ( Fig. 7A and B). ARX+ Sox9 Cyt cells were readily detected in islets from infants and young children, with glucagon present in some but not all cells ( Fig. 7C and D). Similar to adolescents and young adults, most of the ARX+ Ki67+ cells in younger samples did not express glucagon ( Fig. 7E and F). ARX+ Sox9 Cyt cells consistently expressed INSM1 (Fig. 7G and H). These studies reveal that ARX+ Sox9 Cyt islet endocrine cells are developmentally conserved and readily detected within pancreata across a wide range of ages. Cell Death Within Proliferative Islet Cells We hypothesized that vast proliferation of ARX+ Sox9 Cyt islet endocrine cells might be counterbalanced by increased cell death. We quantified islet endocrine cell death via TUNEL assay. Some control and T1D adolescent and young-adult individuals exhibited intra-islet TUNEL nuclei (Supplementary Fig. 12A-C, F, and G and Supplementary Table 21). However, Sox9 Cyt cells were not routinely TUNEL+ ( Supplementary Fig. 12D and E and Supplementary Table 22). Thus, the ARX+ Sox9 Cyt islet endocrine cells seem to be a stable population that does not immediately undergo cell death. However, overall islet endocrine cell death seemed to be dramatically increased within some samples, especially those from adolescents and young adults. Increased cell death could limit the net expansion of islet endocrine cells. To directly test the impact of increased islet endocrine cell proliferation, we quantified islet endocrine cell mass using synaptophysin as a pan-endocrine islet marker. While islet area and mass were decreased in T1D samples compared with control subjects, islet endocrine area and mass did not correlate with age in either cohort ( Supplementary Fig. 13 and Supplementary Table 23). Thus, the proliferative islet endocrine cells do not appear to result in accumulation of islet endocrine mass. DISCUSSION We analyzed human islet cell proliferation and found high amounts of islet endocrine and a-cell proliferation in some control and T1D pancreata from adolescents and young adults. The proliferative cells appear to be related to a-cells and express ARX, Sox9 Cyt , and various markers of islet endocrine cells including INMS1, Isl1 and -2, and Nkx2.2 (Table 1). Sox9 Cyt cells comprised the majority of islet endocrine cell proliferation (up to ;72%). 
ARX+ Sox9 Cyt cells expressed markers of secretory vesicles and machinery, suggesting some capacity for hormone secretion. ARX+ Sox9 Cyt cells were abundant in nonproliferative individuals in a wide age range. Our studies reveal a novel population of highly proliferative a-related cells and suggest previously unappreciated replicative capacity and plasticity within islet populations of human pancreata in adolescents and young adults. Although b-cell proliferation decreases dramatically after infancy (1,10,11), we found that islet endocrine and a-cell proliferation was high in a subset of adolescents and young adults. Interestingly, a-cell proliferation was decreased in T1D compared with control subjects, suggesting that islet proliferation in adolescents and young adults may be impaired by diabetes. Furthermore, we observed an increase in hormone-negative a-related cells with substantial proliferation, suggesting that adolescent and young-adult pancreata may harbor cells with retained plasticity and replicative capacity. These a-related cells were relatively abundant and comprised the majority of islet Ki67+ nuclei within Table 1-Adolescent and young-adult pancreata contain a population of highly proliferative a-related cells that are mostly glucagon negative: a summary of markers found to be detected or absent in a-, highly proliferative a-related cell, b-, g-, d-, and «-cells a-Cell Highly proliferative a-related cell b-Cell g-Cell d-Cell e-Cell donors with highly proliferative islets. They did not exhibit mature b-cell markers: Pdx1, Nkx6.1, MafA, and GAD65. Notably, islet cell proliferation in adolescent T1D pancreata was equivalent to that in control nondiabetic pancreata. Although a regenerative response to T1D was not observed, few samples were obtained from very recent T1D onset. In contrast, Willcox et al. (6) demonstrated that individuals with ,18 months' disease duration may have compensatory aand b-cell proliferation. However, the high rates of islet and a-cell proliferation observed in some T1D adolescents do not appear to be attempted b-cell regeneration from T1D-associated b-cell deficiency. Rather, these highly proliferative cells seem to represent previously unrealized developmental plasticity within islet endocrine lineages, unrelated to T1D. While most islet proliferation occurs during the perinatal period (28), islet cell proliferation and apoptosis were comparable in our study, further suggesting adolescence and young adulthood as a unique period of high proliferation. Sox9 Cyt cells, which express many islet and a-cell markers, might be in a transition state. However, whether they are transitioning to or from a-cells remains unclear. Sox9 is an important transcription factor in developing pancreas for islet endocrine differentiation and development (29). In contrast to typical nuclear Sox9 localization (30), we find highly proliferative cells with Sox9 Cyt immunoreactivity. Interestingly, Sox9 activity has been described to change with its localization in the cytoplasm and nucleus within gonad, chondrocytes, breast, and gastric tissues (17)(18)(19)(20)(21)(22)(23). In cultured cells, nuclear translocation of Sox9 has been linked to b-catenin degradation and thus inhibition of Wnt signaling (22). In mature islets, future studies will determine whether Sox9 Cyt plays a role in cell proliferation or differentiation. Our study has notable strengths but also limitations inherent to autopsy specimens. 
We studied a large number of control and T1D samples across a wide age range and quantified vast numbers of cells within several different cell types. Thus, our findings of highly proliferative islet endocrine cells seem to represent a genuine phenomenon, and these Ki67+ cells can be readily detected. However, studies of autopsy tissues are complicated by potential confounding variables, including the cause and conditions of death of the patient, tissue transport conditions, and other preexisting medical conditions prior to the patient's demise. For example, hypoxemia or other conditions of ICU stay could alter pancreas physiology and possibly influence islet endocrine cell proliferation. Thus, it will be difficult to resolve the highly proliferative a-related cells in vivo given the challenges of pancreatic biopsy from T1D individuals, which include pancreatitis (31). Although samples can be obtained from patients who require pancreatic resection for cancer or tumors (32), such lesions typically occur in the middle-aged and elderly. Consequently, it might be difficult to obtain pancreatic samples from healthy adults. Therefore, nPOD autopsy specimens represent a unique window into human physiology, albeit with limitations. In conclusion, we find no compensatory islet proliferation in T1D. However, we find a highly proliferative population of a-related islet endocrine cells in adolescents and young adults. The presence of proliferative ARX+ Sox9 Cyt cells across a range of ages (infants to young adults) demonstrates developmental conservation of this a-related cell population and hints that a-cells may play an essential role in islet function and growth.
2018-04-03T06:13:54.232Z
2018-01-11T00:00:00.000
{ "year": 2018, "sha1": "45c8243e466f616b65de264e095f6c3faa5cbce8", "oa_license": null, "oa_url": "https://diabetes.diabetesjournals.org/content/diabetes/67/4/674.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "45c8243e466f616b65de264e095f6c3faa5cbce8", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
12245346
pes2o/s2orc
v3-fos-license
White fat, factitious hyperglycemia, and the role of FDG PET to enhance understanding of adipocyte metabolism The development of a hybrid PET/CT led to the recognition of the enhanced glycolysis in brown fat. We report a previously unrecognized mechanism for altered fluorodeoxyglucose (FDG) biodistribution with diffuse white adipose tissue uptake. This occurred during a restaging scan for cervical cancer following administration of insulin in the setting of measured hyperglycemia. The patient's blood sugar normalized, but she experienced symptoms and signs of hypoglycemia. A subsequent history indicated that the patient received intravenous high-dose vitamin C just prior to arrival. Ascorbic acid is a strong reducing agent and can cause erroneous false positive portable glucometer readings. Accordingly, it is likely the patient was euglycemic on arrival and was administered FDG during a period of insulin-induced hypoglycemia. Prominent diffuse white adipose tissue, gastric mucosal, myocardial, and very low hepatic and muscle activity were observed. The case provides insight into the metabolic changes that occur during hypoglycemia and the potential danger of relying on portable glucometer readings. We discuss the potential biological basis of this finding and provide recommendations on the avoidance of this complication. Background The development of hybrid positron emission tomography/computed tomography (PET/CT) devices led to the recognition of enhanced glycolysis in brown fat, typically in the neck and paravertebral regions of the thorax and upper abdomen, as a thermoregulatory response and under catecholamine stimulation [1][2][3]. Other atypical patterns of fat uptake in patients with lipodystrophy have been reported [2,3] We report a previously unrecognized mechanism for altered fluorodeoxyglucose (FDG) biodistribution into adipose tissues. Case presentation A 40-year-old woman presented for restaging with F-18 FDG PET/CT on the background of squamous cell carcinoma of the cervix and biopsy proven recurrence in a left supraclavicular node. Conventional imaging had demonstrated no further evidence of metastatic disease. She had previously received radical chemoradiotherapy for FIGO stage IIIb disease with para-aortic nodal involvement, without intervening therapy in the 6 months since completing the treatment. The patient was not diabetic and had a body mass index of 24, which is in the normal range for a female. She had no other past medical history, took no medications, and had fasted from the previous evening. On arrival, her blood glucose level (BGL) obtained with a portable glucometer (Abbott Diabetes Care Optium Xceed, Alameda, CA, USA) via a fingerprick capillary blood sample was 15 mmol/L, and she was administered 6 U of short-acting insulin intravenously as per local protocol. Thirty minutes later, her BGL had fallen to 6.9 mmol/L, and she described feeling unwell with anxiety, palpitations, and sweating. Her blood pressure, temperature, and oxygen saturation levels were normal, and she was observed. The BGL measurements plateaued at 6.0 mmol/L, and FDG was subsequently injected. PET/CT scanning was performed 60 min later. Given this unusual biodistribution, we questioned the patient further. She reported receiving an intravenous infusion of high-dose vitamin C (sodium ascorbate solution) from another health practitioner just prior to her arrival for the PET scan. 
Ascorbic acid is a strong reducing agent and interferes with laboratory tests involving oxidation and reduction reactions. Substantially reduced or elevated portable glucometer readings occur with ascorbic acid in a dose-dependent fashion, and it is one of the most common interfering substances that affect the accuracy of glucose meters [4,5]. A plasma venous sample was not available to confirm either plasma glucose or ascorbic acid as blood was obtained via fingerprick, and the error was not suspected prospectively. Nevertheless, the clinical symptoms of hypoglycemia and a history of intravenous ascorbic acid just prior to arrival at the PET scan provide sufficient evidence to indicate that the patient was injected with FDG during a period of iatrogenic hypoglycemia induced by administration of insulin in the setting of a falsely elevated BGL reading. Based on the unusual pattern of uptake, the study was repeated the following day in the absence of vitamin C. BGL was normal on arrival, and the FDG biodistribution was normal on the repeat study (see Figure 2; Additional file 1).

Figure 1. Biodistribution of FDG. Coronal PET, PET/CT fusion, and CT images demonstrating prominent subcutaneous white adipose tissue metabolic activity throughout the thorax, abdomen, and pelvis (arrows). More intense metabolic activity was observed in intra-abdominal fat (arrow). Intense gastric and myocardial uptake combined with low hepatic and negligible muscle uptake was observed.

Discussion This case highlights the potential limitations of standardized insulin protocols [6,7], especially when relying on point-of-care glucometers. Intravenous ascorbic acid may result in substantial error in glucometer readings [4,5]. This is of particular relevance for cancer imaging as some complementary care practitioners advocate the use of vitamin C in conjunction with chemotherapy or radiotherapy in a wide variety of malignancies [8]; moreover, many patients choose not to tell their doctors about concomitant use of alternative medicines [9]. Clinicians should also be aware of other potential causes of error resulting in factitious hyperglycemia, including substances containing maltose (e.g., intravenous human immunoglobulin) [10], icodextrin (e.g., peritoneal dialysis solution) [11] and galactose. Indeed, several deaths have been reported, and warnings have been issued by health regulatory agencies [12][13][14][15]. Accordingly, a patient should be questioned not only regarding the use of conventional chemotherapy and medications but also about alternative therapies. The case also illustrates a remarkable pattern of prominent white adipose tissue (WAT) glycolytic activity on FDG PET (Figure 1). We hypothesize that this was likely physiologic and in response to hypoglycemia induced by administration of insulin. This is different than the characteristic pattern of diffuse muscular uptake visualized in hyperinsulinemic patients, during hyperinsulinemic euglycemic clamping or following oral glucose loading for optimization of cardiac imaging [16], or in nonfasted or hyperglycemic patients undergoing oncologic imaging [6]. It is also different than the pattern of FDG uptake observed in brown adipose tissue in cervical, supraclavicular, paravertebral, mediastinal, and suprarenal regions [1][2][3]. During hypoglycemia, major changes in metabolism occur, including mobilization of liver glycogen and release of energy stored in WAT into circulation as nonesterified fatty acids.
Our findings demonstrate relative high-glucose uptake into adipocytes in response to hypoglycemia in a distribution consistent with WAT activation. The two main defenses to hypoglycemia are an increase in glucagon secretion and adrenaline secretion [17]. Glucagon, secreted by pancreatic α-cells, results in the stimulation of adenylate cyclase activity within adipocytes and plays the primary role in counter hormone regulation by promoting lipolysis in WAT [18][19][20]. Lipolysis also occurs via adrenaline released from the sympathetic nervous system terminals innervating WAT [21]. Catecholamines also activate brown fat with beta-blockers having been advocated as an intervention to reduce this confounding finding on FDG PET in predisposed individuals [22]. Other growth hormones such as FGF-21 may also play role. FGF-21 has been shown to stimulate glucose uptake in adipocytes and suppress hepatic glucose production [23]. Elevated FGF-21 levels have also been described in patients with HIV-associated lipodystrophy [24], where atypical FDG fat distribution is also described [25,26]. An additional hypothesis is that high-dose ascorbic acid administered in a short time period prior to the study is responsible, possibly in part, for the observed increased WAT glycolytic activity. Ascorbic acid dietary supplementation has been shown to reduce abdominal and subcutaneous fat depots in high-fat diet-induced adiposity animal models [27]. This study demonstrated upregulation of genes involved in cell proliferation and downregulation of genes participating in lipid metabolism and steroidogenesis in rats supplemented with ascorbic acid. At the sites of increased WAT metabolic activity, a higher Hounsfield unit (HU) on CT was observed in the same region of WAT compared to the study performed one day later. Although not visually discernable, HU averaged -71 within WAT compared to -86 on the second scan, with an identical HU in other tissues such as muscle. This phenomenon has been described in brown adipose tissue (BAT) but, to the best of our knowledge, has not been described before in WAT. A prior animal and patient study demonstrated that the total lipid content of BAT was substantially decreased when activated under cold conditions, with a corresponding increase in CT HUs [28]. In this case, a rapid consumption of stored lipid within WAT may also account for the change seen. Greater blood flow in activated fat may also contribute to the rise in HU [29]. Conclusion FDG PET/CT is a useful noninvasive imaging modality for visualizing metabolic changes within adipocytes. A greater understanding of the role of both brown and white adipocyte tissue as endocrine organs is of public health interest as they may be central to our improved understanding of obesity and diabetes mellitus [30]. The mechanisms of observed WAT glycolytic activity in this case study are proposed but uncertain. In particular, the role of exogenous insulin, physiologic hormonal responses to hypoglycemia or ascorbic acid in inducing WAT glycolytic activity is uncertain. Further controlled studies utilizing FDG and tracers that interrogate other metabolic and receptor pathways may enhance our understanding of endocrine pathophysiology. Consent Written informed consent was obtained from the patient for the publication of this manuscript and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. 
Additional material Additional file 1: Comparative rotating maximum intensity projection images demonstrating altered FDG biodistribution on the first study (right), with normalization on FDG biodistribution when repeated the following day in the absence of prior ascorbic acid or exogenous insulin (left).
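Because the case turns on a point-of-care glucose reading, it may help to note that the unit conversion and the protocol decision reduce to simple arithmetic. The sketch below converts between mmol/L and mg/dL (factor of roughly 18.02 for glucose) and flags a reading against a pre-FDG glucose cut-off; the 11.1 mmol/L (about 200 mg/dL) threshold used here is a commonly quoted example value and an assumption, not the protocol of the reporting centre.

```python
GLUCOSE_MGDL_PER_MMOLL = 18.02  # molar mass of glucose (~180.2 g/mol) / 10

def mmoll_to_mgdl(mmol_l: float) -> float:
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_l * GLUCOSE_MGDL_PER_MMOLL

def exceeds_fdg_threshold(bgl_mmol_l: float, threshold_mmol_l: float = 11.1) -> bool:
    """Flag a capillary reading against a pre-FDG glucose cut-off.
    The 11.1 mmol/L default is an example value, not the centre's protocol."""
    return bgl_mmol_l > threshold_mmol_l

# Readings reported in the case (mmol/L): arrival, 30 min post-insulin, plateau.
for reading in (15.0, 6.9, 6.0):
    print(f"{reading:5.1f} mmol/L = {mmoll_to_mgdl(reading):5.0f} mg/dL, "
          f"exceeds example threshold: {exceeds_fdg_threshold(reading)}")
```

Only the falsely elevated arrival value exceeds the example cut-off, which is consistent with the sequence of events described above: a euglycemic patient received insulin on the strength of a single interfered-with capillary reading.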
2016-05-04T20:20:58.661Z
2011-06-07T00:00:00.000
{ "year": 2011, "sha1": "5b4efd50bdf49a9514610767785005c2f581357e", "oa_license": "CCBY", "oa_url": "https://ejnmmires.springeropen.com/track/pdf/10.1186/2191-219X-1-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2d2720cc29f55ded6ff9a8338c245cb1915e1ad9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
224804625
pes2o/s2orc
v3-fos-license
Exceptional response to immunotherapy in association with radiotherapy in patient with breast metastasis from urothelial carcinoma: A case report Most common sites of metastasis of urothelial carcinoma (UC) are lungs, liver, lymph nodes and bone. Pembrolizumab, a humanized monoclonal antibody directed against programmed cell death protein-1 (PD-1), represents an effective second-line therapy for advanced UC. Radiotherapy has been shown to induce a mechanism of immunogenic cell death (ICD) resulting in immune memory and advantageous systemic effects. We present the first case described in the literature of breast metastasis (BM) from UC with an exceptional response to second-line therapy with pembrolizumab in association with radiotherapy, showing the efficacy of combining immunotherapy and radiotherapy even in patients with atypical metastatic sites.

Introduction UC is one of the most widespread cancers worldwide, with around 430,000 new diagnoses each year. Most patients have a superficial non-muscle-invasive disease, whereas about 30%-40% present at diagnosis with, or later develop, muscle-invasive and/or metastatic disease, and atypical sites of metastasis are rare. 1 Although platinum-based chemotherapy remains the standard first-line treatment, patients with metastatic UC rarely achieve a complete and durable response, with a 5-year overall survival (OS) rate of 15%, and after failure there were few data available for the choice of second-line treatment. 2 New knowledge about the mechanisms of UC progression and the involvement of the immune system led to the development of immune checkpoint inhibitors (ICIs) targeting either PD-1 or PD-L1, which have shown efficacy in different settings of UC. In the phase III trial KEYNOTE-045, pembrolizumab improved OS compared with the control arm in patients with metastatic UC who progressed during or after platinum-based chemotherapy, regardless of PD-L1 tumour expression status and with good tolerability. 3

Case presentation In this reported case, a 61-year-old woman presented to the emergency room due to new-onset macrohematuria and irritative voiding symptoms. An abdomen ultrasound (US) showed a vascularized mass (Ø 70 × 70 mm) in the basal right-side wall of the bladder and initial right hydronephrosis. The patient was hospitalized and received blood transfusions for anaemia caused by haematuria (Hb 4.8 gr/dl). Blood tests also reported an increase in basal creatinine level from 1.37 to 1.8 mg/dl. Abdomen and chest CT scan confirmed the presence of an intraluminal mass (Ø 70 × 60 mm) involving the right half of the bladder and showed a lymphadenopathy (Ø 31 × 26 mm) in the right iliac region. Other lymph nodes appeared bilaterally along the external iliac vessels, the largest on the left side (maximum short axis 13 mm). There was no evidence of metastatic disease at baseline CT scan and bone scintigraphy. Pathological investigation of specimens obtained by transurethral resection of bladder tumour (TURBT) showed a high-grade urothelial papillary carcinoma with invasion of the lamina propria (pT1 WHO 2016). The absence of detrusor muscle in the sample probably led to understaging of the disease. To obtain accurate staging and to stop the bleeding, a salvage radical cystectomy with an extended lymphadenectomy and a ureterocutaneostomy was performed. Post-surgical staging was pT3b N1 M0 (sec WHO 2016). During post-operative hospitalization, the patient found a right breast nodule through breast self-examination, confirmed by mammography and US (Ø 25 × 35 mm).
The histological and immunohistochemistry (estrogenic receptor negative, CK20 positive, p63 positive) examination of needle biopsy was diagnostic for a metastatic localization from UC (Fig. 1). The CT scan performed after surgery revealed the presence of a subcutaneous nodule on the left side of thorax wall (Ø 13 × 18 mm) and a lymphadenopathy (short axis <15 mm) near external left iliac artery. Given the evidence of metastatic disease, patient underwent a firstline chemotherapy regimen with carboplatin and gemcitabine. After 3 cycles radiological assessment showed a partial response of breast metastasis (reduced to Ø 32 × 28 mm) according to RECIST v 1.1 criteria. After 6 cycles of chemotherapy, imaging showed disease progression. Target breast lesion increased to Ø 35 × 35 mm and CT scan revealed numerical and dimensional increase of lymph nodes in paraaortic region. We performed a genomic analysis supported by FoundationO-ne®CDx on tissue samples obtained by breast biopsy, which identified alterations in genes BCL2L1, CDKN2A, MYC, RAF1, TP53 known to be cancer related. No actionable mutations were identified. Immunohistochemical test of PD-L1 was then performed with IHC 22C3 antibody (Dako North America) and evaluated with Combined Positive Score (CPS = percentage of PD-L1 expressing tumour and infiltrating immune cells relative to total number of tumour cells). 3 PD-L1 expression was 30% (Fig. 2). Considering disease progression and results of genetic and immunohistochemical analyses, patient underwent a second-line treatment with pembrolizumab at dose of 200 mg every three weeks. A concurrent stereotactic radiotherapy treatment (6X FFF photon beams with a total dose of 30 Gy in 3 fractions) was performed on breast and subcutaneous metastasis of chest wall. After stereotactic radiotherapy and 4 cycles of pembrolizumab, radiological assessment showed a complete response of subcutaneous metastasis of chest wall and a significant reduction of right breast nodule (lesion size reduced to Ø 15 × 11 mm). Abdomen lymph nodes were also reduced, with non-pathological residual dimensions (short axis <10 mm) (Fig. 3). Discussion Before immunotherapy, median OS for patients with advanced UC was 9 months and no established therapy was available after failure of platinum-based treatment. Development of ICIs has improved both OS and quality of life of patients in a second-line therapy setting, with a better safety profile compared to traditional chemotherapy. 3 In our patient pembrolizumab was well tolerated, reporting only low-grade hyperthyroidism (grade 2 sec. CTCAE v 5.0) treated and resolved with medical therapy. The biological rationale for therapeutic efficacy of pembrolizumab was suggested by molecular characterization of breast metastatic tissue, which resulted in PD-L1 CPS of 30%. Although in KEYNOTE-045 trial survival benefit was observed in the total population regardless of PD-L1 CPS expression, in subgroup analysis the benefit was better in patients with PD-L1 CPS of 10% or more. 3 In our clinical case, the exceptional response of metastasis treated with radiotherapy could be related to an ICD mechanism that supported the action of the immunotherapy. In fact, it is well known that radiotherapy has direct cytotoxic effects on tumour cells (targeted effects) primarily due to production of DNA double-strand breaks. 
More recent evidence has shown that radiotherapy induces phagocytosis of tumour cells by dendritic cells (DC), processing of tumour-derived antigens, and DC-associated cross-priming of CD8+ CTLs. 4 Therefore, ICD may involve the recruitment of the host's immune system, resulting in immune memory and advantageous systemic effects, such as the abscopal effect, which causes tumour regression in non-irradiated areas. 5 Conclusions This reported case of atypical BM of UC confirms that immunotherapy represents an excellent therapeutic choice for patients with metastatic UC that has progressed after platinum-based chemotherapy. The association of immunotherapy with radiotherapy could have a synergistic effect on the tumour microenvironment and could improve disease control.
2020-10-21T05:05:33.241Z
2020-10-09T00:00:00.000
{ "year": 2020, "sha1": "634539a00164f478a69c65cf74f1242ff91c139f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.eucr.2020.101444", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "634539a00164f478a69c65cf74f1242ff91c139f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119384463
pes2o/s2orc
v3-fos-license
ALMA and high redshift dusty starburst mergers By using a new numerical code for deriving the spectral energy distributions of galaxies, we have investigated the time evolution of morphological properties, the star formation rate, and the submillimeter flux at 850 $\mu$m in high-redshift ($z$) dusty starburst mergers with mass ratio ($m_{2}$) of two disks ranging from 0.1 (minor merger) to 1.0 (major one). We found that the maximum star-formation rate, the degree of dust extinction, and the 850 $\mu$m flux are larger for mergers with larger $m_{2}$. The 850 $\mu$m flux from mergers at 1.5 $\le z \le$ 3.0 in the observer frame is found to be a few mJy for major merger cases, and at most $\sim$ 100 $\mu$Jy for minor ones. This result suggests that only high-redshift major mergers are now detected by SCUBA with the current 850 $\mu m$ detection limit of a few mJy. These results imply that LMSA (ALMA) with the expected detection limit of the order of 10 $\mu$Jy at 850 $\mu m$ can be used to study high-redshift mergers with variously different $m_2$, and thus provide an important clue to the formation of galaxies in a high-redshift universe. Introduction One of fruitful and remarkable achievements of recent optical, infrared, and radio observations of distant galaxies is that interstellar dust has been found to affect the photometric and spectroscopic evolution of galaxies at high redshift (e.g., Meurer et al. 1997;Steidel et al. 1999). For example, deep surveys in the submillimeter regime with the Sub-millimeter Common-User Bolometer Array (SCUBA) (Holland et al. 1999) on the James Clerk Maxwell Telescope (Smail et al. 1997;Hughes et al. 1998; Barger et al. 1998;Smail et al. 1998;Lilly et al. 1999), in the Mid/Far-Infrared with the Infrared Space Observatory (ISO) (e.g., Flores et al. 1999), and in the radio with the Very Large Array (VLA) (e.g., Richards 1999) have revealed the nature of starburst galaxies whose star formation is heavily obscured by dust at intermediate and high redshifts. These observations have simultaneously demonstrated that optical data alone can provide a partial view of the galactic evolutionary history, which is possibly qualitatively incorrect. A major remaining challenge is thus to reveal how heavily distant starburst galaxies are obscured by dust and what physical process determines the degree of dust extinction in these galaxies. Galaxy merging has been observed to be closely associated not only with low-z dust starburst galaxies, such as ultra-luminous infrared galaxies (Sanders, Mirabel 1996), but also with intermediate/high-z ones, such as faint SCUBA sources (Smail et al. 1998). Accordingly, it is primarily important to investigate how intermediate/high-z galaxy merging can determine the nature of dusty starburst galaxies. Bekki, Shioya, and Tanaka (1999) first investigated both the morphological and photometric evolution of high-z dusty starburst major mergers, and found that major mergers can successfully reproduce the morphological and photometric properties of faint SCUBA sources observed by Smail et al. (1998). However, they only described the evolution of dusty starburst major mergers and did not consider at all how dusty gas is important for the evolution of minor mergers with the mass ratio of two merging disks less than 0.1 (m 2 < 0.1) and that of unequal-mass ones with m 2 ∼ 0.3. 
Minor merging is considered to be occurring more frequently than major merging, and is important for the growth of bulges (e.g., Mihos, Hernquist 1994), whereas an unequal-mass one has been demonstrated to be related to the formation of S0s (Bekki 1998). Thus, investigating different types of dusty mergers (with different m 2 ) not only leads to a better understanding of the nature of dusty starburst galaxies, but also provides a new clue to the origin of the Hubble sequence. This paper considers how the mass ratio, m 2 , controls the time evolution of the morphological properties, the degree of dust extinction, and the submillimeter flux at 850 µm in dusty starburst mergers. We furthermore demonstrate how the 850 µm flux in the observer-frame depends on m 2 if mergers are located at z = 1.5 and 3.0. Based on our present numerical results, we discuss whether the Large Millimeter and Submillimeter Array (LMSA) can detect reemission of dusty gas at 850 µ, and suggest that LMSA can particularly play a vital role in revealing morphological transformation processes of galaxies in a high-redshift universe. Model We here consider both the time evolution of the galactic morphology and that of the spectral energy distributions (SEDs), based on the results of numerical simulations that could follow both the dynamical and chemical evolution of galaxies. The numerical techniques for solving galactic chemodynamical and photometric evolution and the methods for deriving SEDs in numerical simulations of galaxy mergers with dusty starburst are given in Bekki and Shioya (2000). The advantages and disadvantages of the model in reproducing the observed SED of ULIRGs are summarized in detail by Bekki and Shioya (2000). For example, our model does not include physical processes related to dust production, grain destruction, and grain growth in the interstellar medium, and thus can not follow the time evolution of dusty starbursts from the optically thick dusty cocoon stage to the optical thin one in detail. Mazarellar et al. (1991) compared the far-IR properties of 187 Markarian galaxies with those of the IRAS Bright Galaxy Sample and extensively discussed the nature of these in terms of the grain-size distribution. Although our model can not include this important effect of the grain-size distribution (and dust compositions) on the photometric properties of dusty starburst galaxies, owing to the adopted simple model assumption in the present paper, we should investigate this in the future. We constructed models of galaxy mergers between two bulgeless gas-rich disks embedded in massive dark-matter halos by using the Fall-Efstathiou model (Fall and Efstathiou 1980). The initial mass-ratio of dark matter halo to disk was 4:1 for the two disks. The mass ratio of the smaller disk (referred to as the 'secondary') to the larger one (the 'primary') in a merger, which is represented by m 2 , was considered to be a critical determinant for the evolution of dusty starburst mergers in the present study. We mainly describe the results of the model with m 2 = 0.1 (minor merging), 0.3 (unequal-mass), 0.5, and 1.0 (major). The disk mass (M d ) was 6.0 × 10 10 M ⊙ for the primary in all merger models of the present study. The exponential disk scale length and the maximum rotational velocity for disks were 3.5(M d /6.0 × 10 10 M ⊙ ) 1/2 kpc and 220(M d /6.0 × 10 10 M ⊙ ) 1/4 km s −1 , respectively. 
These scaling relations were adopted so that both the Freeman law and the Tully-Fisher relation with a constant mass-to-light ratio could be satisfied for the structure and kinematics of the two disks. For example, parameter values for disk structure and kinematics for the model with m 2 = 0.3 were as follows. The size and mass of a disk are set to be 17.5 (9.6) kpc and 6.0 (1.8) × 10 10 M ⊙ , respectively, for the primary (the secondary). The scale length and the scale height of an initial exponential disk are 3.5 (1.9) kpc and 0.7 (0.38) kpc, respectively, for the primary (the secondary). The rotational curve of the primary (the secondary) becomes flat at 6.1 (3.4) kpc with a maximum velocity of 220 (163) km/s. The Toomre stability parameter (Binney, Tremaine 1987) for the initial disks was set to be 1.2. The collisional and dissipative nature of the disk interstellar gas was represented by discrete gas clouds (Schwarz 1981), and the initial gas mass fraction was set to be 0.5 (corresponding to a very gas-rich disk). The mass and the size of each of the clouds were 3.0 × 10 6 M ⊙ and 130 pc, respectively. Star formation in gas clouds during galaxy merging was modeled by converting gas particles into stellar ones according to the Schmidt law with an exponent of 2.0 (Schmidt 1959; Kennicutt 1989). Although the effects of supernova feedback on the dynamical evolution of the mergers are not included, these effects probably would not change the present numerical results significantly, mainly because the adopted mass of a merger progenitor disk was fairly large. Stellar components that were originally gaseous are referred to as new stars. Chemical enrichment through star formation during galaxy merging was assumed to proceed both locally and instantaneously in the present study. The fraction of gas returned to the interstellar medium by each stellar particle and the chemical yield were 0.3 and 0.02, respectively. The initial metallicity Z * for each stellar and gaseous particle at a given galactic radius R (kpc) from the center of a disk was given according to the observed relation Z * = 0.06 × 10^(−0.197×(R/3.5)) of typical late-type disk galaxies (e.g., Zaritsky et al. 1994). All of the simulations were carried out on the GRAPE board (Sugimoto et al. 1990) with a gravitational softening length of 0.53 kpc. For calculating the SED of a merger, we use the spectral library GISSEL96, which is the latest version of Bruzual and Charlot (1993). The SEDs of dusty mergers and the way to calculate the 850 µm flux from high-redshift mergers, both in the observer frame and in the rest frame, are described in detail by Bekki et al. (1999) and Bekki and Shioya (2000). In the following, the cosmological parameters H 0 and q 0 are assumed to be 50 km s −1 Mpc −1 and 0.5, respectively. The primary in the merger, on the other hand, leaves its disk morphology even after merging, though the primary suffers from dynamical heating due to the accretion of the secondary, and consequently forms a thick disk. Furthermore, mergers with larger m 2 become more similar to early-type galaxies (E/S0), principally because the disk destruction due to the tidal gravitational force and dynamical relaxation proceeds more drastically for mergers with larger m 2 . These results suggest that m 2 is an important factor for controlling the morphological properties of dusty starburst galaxies formed by merging. These results also imply that m 2 is one of the important factors for the origin of the Hubble sequence.
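The adopted disk scaling relations and the initial metallicity profile can be made concrete with a short sketch. The Python snippet below is our own illustration, not code from the paper; the base-10 exponential reading of the Zaritsky et al. relation is an assumption on our part, and the loop simply evaluates the secondary disk's parameters for the four mass ratios studied. For m 2 = 0.3 it reproduces the quoted scale length (≈1.9 kpc) and maximum rotation velocity (≈163 km/s).

```python
import numpy as np

# Minimal sketch (not from the paper): evaluate the adopted disk scaling
# relations and initial metallicity profile for a merger with mass ratio m2.
M_PRIMARY = 6.0e10          # disk mass of the primary in solar masses

def disk_parameters(m_disk):
    """Scale length (kpc) and maximum rotation velocity (km/s)
    following the M^(1/2) and M^(1/4) scalings quoted in the text."""
    ratio = m_disk / 6.0e10
    scale_length = 3.5 * ratio ** 0.5
    v_max = 220.0 * ratio ** 0.25
    return scale_length, v_max

def initial_metallicity(radius_kpc):
    """Initial stellar/gas metallicity at radius R (kpc), assuming the
    reconstructed relation Z = 0.06 * 10^(-0.197 * R / 3.5)."""
    return 0.06 * 10.0 ** (-0.197 * radius_kpc / 3.5)

for m2 in (0.1, 0.3, 0.5, 1.0):
    r_d, v = disk_parameters(m2 * M_PRIMARY)
    print(f"m2={m2}: secondary scale length = {r_d:.1f} kpc, "
          f"v_max = {v:.0f} km/s, Z(R=r_d) = {initial_metallicity(r_d):.4f}")
```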
For mergers with larger m 2 , the stronger tidal disturbance triggers efficient cloud-cloud collisions, and consequently induces a larger amount of gaseous dissipation. As a natural result, the dusty interstellar gas is more efficiently transferred to the central regions to form higher density gaseous regions surrounding the nuclear starburst populations in mergers with larger m 2 . This is the main reason for the above m 2 dependence of A V . Thirdly, the observed 850 µm flux greatly exceeds the current SCUBA detection limit of ∼ a few mJy for larger m 2 (= 0.5, 1.0). Considering the results shown in figure 1, it is strongly suggested that highredshift dusty early-type galaxies which are being formed by mergers with larger m 2 are preferentially detected by SCUBA. Furthermore, this result implies that dusty starburst triggered by minor merging, which is considered to be more frequently occurred than major merging with m 2 = 1.0 and an important determinant for the dynamical evolution of early-type disk galaxies (e.g., Mihos, Hernquist 1994), should not be studied by SCUBA, but by a future submillimeter array with a detection limit of 10 − 100 µJy. Discussion and Conclusion Considering the present numerical results, LMSA has the following three advantages in studying the origin and the nature of high redshift dusty starburst galaxies. Firstly, LMSA is expected to detect a submillimeter flux of 10 − 100 µJy at 850 µm and therefore can investigate not only high-redshift dusty starburst galaxies formed by major mergers, but also those by minor and unequal-mass ones. Since minor and unequal-mass mergers are suggested to be important for the formation of Sa and for that of 6 S0 (Mihos, Hernquist 1994;Bekki 1998), respectively, extensive studies of dusty galaxies with 850 µm flux of the order of 10 − 100 µJy by LMSA will provide a new clue to the origin of the Hubble sequence. Secondly, LMSA can investigate the nature of dusty starburst galaxies at variously different dynamical stages of merging. Bekki and Shioya (2000) have demonstrated that the 850 µm flux from dusty starburst populations depends strongly on the degree of the dynamical relaxation of merging galaxies. Accordingly, LMSA will find a physical relationship between the dynamical evolution and the photometric one at the submillimeter band in high-redshift dusty starburst galaxies, if future deep optical and near-infrared morphological, structural, and kinematical studies will have revealed the dynamical conditions of these galaxies. Thirdly, and most importantly, LMSA can provide valuable information concerning forming disk galaxies at high redshift. Recent numerical simulations based on the hierarchical assembly of cold darkmatter (CDM) halos have demonstrated that the star formation rate is estimated to be on the order of ∼ 10 M ⊙ yr −1 when successive minor merging of subgalactic clumps becomes important for the formation of galactic disks at z = 1 − 2 (e.g., Steinmetz, Müller 1995;Bekki, Chiba 2000). The present study has demonstrated that the 850 µm flux in minor mergers at the redshift of 1 − 3 is at most on the order of ∼ 100 µJy and, accordingly, high redshift forming disk galaxies can be investigated by LMSA rather than by SCUBA. We suggest that a detailed comparison between LMSA submillimeter studies and future optical and infrared spectrophotometric ones would reveal the degree of dust extinction (A V ) and the physical parameters for A V in forming disk galaxies. 
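To make the detection-limit argument concrete, the sketch below (ours, not from the paper) evaluates the luminosity distance for the adopted H 0 = 50 km s −1 Mpc −1 , q 0 = 0.5 cosmology using the standard Einstein-de Sitter relation, and compares representative observer-frame 850 µm fluxes quoted above (a few mJy for major mergers, ∼100 µJy for minor ones) against the SCUBA (2 mJy) and expected LMSA (40 µJy) limits. The specific flux values used are illustrative placeholders only.

```python
import math

H0 = 50.0        # km/s/Mpc, as adopted in the text
C = 2.998e5      # speed of light in km/s

def luminosity_distance_mpc(z, h0=H0):
    """Luminosity distance for the q0 = 0.5 (Einstein-de Sitter) cosmology
    adopted in the text: D_L = (2c/H0) * ((1+z) - sqrt(1+z))."""
    return (2.0 * C / h0) * ((1.0 + z) - math.sqrt(1.0 + z))

SCUBA_LIMIT_MJY = 2.0       # current SCUBA 850 um limit, mJy
LMSA_LIMIT_MJY = 0.04       # expected LMSA limit, 40 uJy

# Illustrative placeholder fluxes standing in for "a few mJy" and "~100 uJy".
cases = {"major merger (a few mJy)": 3.0, "minor merger (~100 uJy)": 0.1}

for z in (1.5, 3.0):
    print(f"z = {z}: D_L ~ {luminosity_distance_mpc(z) / 1e3:.1f} Gpc")
for name, flux_mjy in cases.items():
    print(f"{name}: SCUBA-detectable={flux_mjy >= SCUBA_LIMIT_MJY}, "
          f"LMSA-detectable={flux_mjy >= LMSA_LIMIT_MJY}")
```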
By using a new and original code for calculating the SEDs of dusty starburst galaxies, we have demonstrated that the mass ratio, m 2 , which controls the strength of tidal disturbance (or the degree of disk destruction) in galaxy merging, is an important determinant of the maximum star-formation rate and the degree of dust extinction (A V ). The derived dependence on m 2 does not depend very strongly on other parameters, such as the orbital configurations, internal structure, and gas mass fraction of galaxy mergers (Shioya, Bekki, in preparation). Based on these essentially new results, we emphasize the importance of LMSA in investigating and revealing the nature of high-redshift dusty starburst galaxies. We lastly expect that future observational studies of high-redshift galaxy mergers with different m 2 will provide new clues to the formation and evolution of galaxies. Y. S. thanks the Japan Society for the Promotion of Science (JSPS) Research Fellowships for Young Scientists.

Figure Captions

Fig. 1. Morphological evolution of the mergers, where T represents the time that has elapsed since the two disks began to merge for each of the four models. Note that the primary disk of the merger with larger m 2 is more greatly destroyed by the accretion of the secondary. The final morphology of each merger remnant in nearly virial equilibrium (∼ 1 Gyr after the maximum starburst) is basically similar to that at the epoch of maximum starburst. The final morphology also depends on m 2 , such that the merger remnant with larger m 2 is more similar to early-type galaxies (E/S0). We stress that the merger remnant with m 2 = 0.3 is more similar to S0s with no remarkable thin stellar disk, such as NGC 3245 and 4684, than to those with thin extended stellar disks, such as NGC 4111 and 4710 [see the Hubble Atlas of Galaxies of Sandage (1961)].

Fig. 2. Time evolution of the star-formation rate (upper one) and that of the degree of dust extinction (A V ) in units of mag (lower one). Note that both the maximum star-formation rate and the degree of dust extinction are larger for dusty starburst mergers with a larger m 2 . Also note that the maximum star-formation rate in the major merger with m 2 = 1.0 is ∼ 378 M ⊙ yr −1 , which corresponds roughly to (or is larger than) that required to explain the infrared luminosity of ultra-luminous infrared galaxies (Sanders, Mirabel 1996).

Fig. 3. The 850 µm flux of the mergers at z = 1.5 (solid one) and 3.0 (dotted one) in the observer-frame. The current detection limit in SCUBA (2 mJy) and the expected one in LMSA (40 µJy) are also given by the thick solid lines. Note that in comparison with SCUBA, LMSA can detect not only dust reemission from major mergers at
2019-04-14T01:32:06.513Z
2000-12-01T00:00:00.000
{ "year": 2000, "sha1": "2cb893a61eac0fcace4a724ccec790796cbdcbb7", "oa_license": null, "oa_url": "https://academic.oup.com/pasj/article-pdf/52/6/L57/5953013/pasj52-0L57.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "0bf3bbd4bee8113e62da650ad1a9258bc0f2a2b9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220506892
pes2o/s2orc
v3-fos-license
ADAD1 and ADAD2, testis-specific adenosine deaminase domain-containing proteins, are required for male fertility Adenosine-to-inosine RNA editing, a fundamental RNA modification, is regulated by adenosine deaminase (AD) domain containing proteins. Within the testis, RNA editing is catalyzed by ADARB1 and is regulated in a cell-type dependent manner. This study examined the role of two testis-specific AD domain proteins, ADAD1 and ADAD2, on testis RNA editing and male germ cell differentiation. ADAD1, previously shown to localize to round spermatids, and ADAD2 had distinct localization patterns with ADAD2 expressed predominantly in mid- to late-pachytene spermatocytes suggesting a role for both in meiotic and post-meiotic germ cell RNA editing. AD domain analysis showed the AD domain of both ADADs was likely catalytically inactive, similar to known negative regulators of RNA editing. To assess the impact of Adad mutation on male germ cell RNA editing, CRISPR-induced alleles of each were generated in mouse. Mutation of either Adad resulted in complete male sterility with Adad1 mutants displaying severe teratospermia and Adad2 mutant germ cells unable to progress beyond round spermatid. However, mutation of neither Adad1 nor Adad2 impacted RNA editing efficiency or site selection. Taken together, these results demonstrate ADAD1 and ADAD2 are essential regulators of male germ cell differentiation with molecular functions unrelated to A-to-I RNA editing. Special considerations regarding testicular RNA editing Multiple factors have been identified as regulators of RNA editing with the best studied being the AD-domain proteins themselves. This report and previous related work (19) shows that while germ cell RNA editing is catalyzed by a known RNA editing enzyme, other AD-domain containing proteins do not appear to regulate germ cell RNA editing. These findings suggest other mechanisms of control may be at play. While cis elements within RNA editing targets have dramatic impacts on RNA editing efficiency by giving rise to specific RNA secondary structures (28), it seems unlikely that cis elements within the germ cell transcriptome alone explain the relative lack of RNA editing. This is especially true given the complexity of the male germ cell transcriptome exceeds even that of the brain (20, 24), a site of extremely high RNA editing levels. However, RNA editing has been shown to be quite sensitive to a cell's complement of RNA binding proteins (RBPs) (29) and male germ cells express an incredibly wide range of RBPs (30), in part due to their tight reliance on post-transcriptional processes for normal differentiation (21). RBPs, which drive an extremely diverse range of biological process, act on the transcriptome and as such can change the availability, structure, localization, and stability of RNA editing targets in a celltype or tissue-dependent manner. Which, if any, germ cell-specific RBPs interact with the RNA editing pathway remains an open question, however efforts leveraging RNA-sequencing data from large-scale projects like ENCODE may provide valuable insight. In addition to individual RBPs, it has been demonstrated that entire RNA processing pathways intersect and impact RNA editing. In particular, splicing has been shown to have a robust influence on RNA editing for a large number of targets (29). 
The interaction of these two pathways is reasonable given RNA editing is known to occur co-transcriptionally (7) and is often the result of exonic and intronic cis elements interacting to form the double stranded RNA template used by ADAR enzymes (30). Consistent with this, a recent report demonstrated that global reduction of splicing leads to significant increases in editing efficiency (31) while mutation of specific alternative splicing factors altered editing of more specific targets, with decreased splicing generally being associated with increased editing. These findings are buoyed by the observation that a large number of alternative splicing proteins have been identified as RBP regulators of RNA editing (27). The interaction of splicing and RNA editing may be especially important in the context of the male germ cell where alternative splicing is particularly high (20), which mechanistically may serve to globally repress RNA editing. To facilitate the depth of alternative splicing observed, the testis expresses a wide range of splicing factors, some or many of which may influence total or site-specific RNA editing. As germ cells develop, they leverage different suites of alternative splicing programs (32) to facilitate cell-specific alternative splicing patterns across germ cell development. This cell-specific splicing regulation will require careful dissection to identify potential splicing regulators of germ cell RNA editing. Phenotypic differences between Adad1 tm1Reb relative to Adad1 em2 may be driven by multiple mechanisms While it is possible the difference in phenotype between Adad1 tm1Reb and Adad1 em2 may be due to differences between wildtype ADAD1 and the abnormal proteins detected in the Adad1 tm1Reb testes, it seems most likely the reduced phenotypic severity in Adad1 tm1Reb relative to Adad1 em2 is due to a reduction, but not total loss, of functional ADAD1 protein in Adad1 tm1Reb . Further, this outcome may be exacerbated by a number of other mechanisms. As with all CRISPR generated alleles, it is feasible carrier mutations in the background of the CRISPR allele are leading to a more severe phenotype. However, this is extremely unlikely as extensive analyses confirmed no carrier mutations at other probable CRISPR targets within the genome. A much more likely mechanisms is that the two alleles behave differently as a function of their slightly different genetic context. Although both mutant models were derived from the same strain, due to breeding the two alleles were analyzed on slightly different genetic backgrounds. This hypothesis is supported by previous observations (18) showing that when moved to a different genetic background, the Adad1 tm1Reb fertility phenotype changed dramatically. These suggest ADAD1 function may be reliant on genetic modifiers and open the possibility of defining ADAD1's molecular function via genome-scale modifier screens.
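The editing-efficiency comparisons referred to above are not described algorithmically in this excerpt; as a generic illustration of how per-site A-to-I editing efficiency is commonly quantified from RNA-seq read counts, the following sketch (our assumption, not the authors' pipeline) computes the edited-read fraction at hypothetical candidate sites.

```python
# Generic illustration (not the authors' pipeline): per-site A-to-I editing
# efficiency is commonly expressed as the fraction of reads carrying G at a
# genomically encoded A position, i.e. efficiency = G / (A + G).
from dataclasses import dataclass

@dataclass
class EditingSite:
    chrom: str
    pos: int
    a_reads: int   # reads supporting the unedited (A) base
    g_reads: int   # reads supporting the edited (I, read as G) base

    def efficiency(self) -> float:
        total = self.a_reads + self.g_reads
        return self.g_reads / total if total else float("nan")

# Hypothetical sites and counts, for illustration only.
sites = [
    EditingSite("chr8", 1_234_567, a_reads=40, g_reads=10),
    EditingSite("chr2", 7_654_321, a_reads=12, g_reads=36),
]
for s in sites:
    print(f"{s.chrom}:{s.pos} editing efficiency = {s.efficiency():.2f}")
```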
2020-07-14T14:25:45.739Z
2020-07-14T00:00:00.000
{ "year": 2020, "sha1": "51e6bdd9f776810cdf7db4916a1451d68d7e02c9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-67834-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dab319c4f9226c67bc1e9e4b0a5d662f800f4c32", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
114415000
pes2o/s2orc
v3-fos-license
Developing an Intelligent Model for the Construction a Hip Shape Recognition System Based on 3D Body Measurement The purpose of this paper was to develop an intelligent recognition system consisting of a feature reduction method combining cluster and correlation analyses, and a probabilistic neural network (PNN) classifier to identify different types of hip shape from 3D measure - ment for each person. Firstly 28 items reflecting lower body part information of 300 female university students aging from 20 to 24 years were selected. The feature reduction method was employed to extract typical indices. Secondly hip shapes were subdivided into five types by a K-means cluster and analysis of variance (ANOVA). Finally the PNN was then trained to serve as a classifier for identifying five different hip shape types. The average classifica - tion accuracy of the scheme proposed was 97.37%, and its effectiveness was successfully validated by comparing with the BP and Support Vector Machine (SVM) scheme. Thus an intelligent recognition system was developed to make hip shape type classification of high-precision and time saving. . However, the BP algorithm suffers from a slow convergence rate and often yields suboptimal solutions [17]. Zhang et al developed a SVM model to identify young women's body shapes taking 32 seconds for training 200 samples and 2 seconds for testing 50 samples, and the average accuracy achieved was 92.5% [18], but SVM can be abysmally slow in the test phase, although it has good generalisation performance [19]. Better than the above two methods, the PNN can provide a general solution to pattern classification problems by Bayesian classifiers. Its main advantages are the fast training process. An inherently parallel structure guaranteed to converge to an optimal classifier as the size of the representative training set increases; meanwhile, the training samples can be added or removed without extensive retraining [20]. It has been widely used to solve pattern recognition and classification problems in geotechnical engineering [21], in ocean engineering [22] and in medical diagnosis [23]. Jamal et al developed a hybrid intelligent model for constructing a size recommendation expert system based on data clustering and a PNN to enable the salesperson to help the consumer in choosing the right size, and the accuracy of the system proposed achieved 87.2% [24]. Hence in this paper, we aimed to develop an intelligent hip shape classification system with 3D anthropometric measurements integrating a statistical analysis Analysis of Measurements Method and Artificial Intelligence Method. Among these, the Visual Assessment Method and Statistical Analysis of Measurements Method are used for classification, but their shortcomings have been identified, causing the Artificial Intelligence Method to have advantages of application in classifying analysis. Among the deficiencies identified in the Visual Assessment Method is its objectivity, which is difficult to guarantee because this method depends solely on subjective evaluation [6,7]. Meanwhile the Statistical Analysis of Measurements Method has been found to have a lack of accuracy and be not flexible [8,9]. For classification analysis, Artificial Intelligence Methods such as the Artificial Neural Network (ANN), Support Vector Machine (SVM) and Probabilistic Neural Network (PNN) are widely applied due to their good accuracy result and the ability of analysing nonlinear problems which are difficult to solve by classical methods [10]. 
Recently intelligent systems have been widely utilised as classifiers for pattern recognition [11,12], especially neural networks that have already been treated as a powerful classifier to deal with pattern design [13,14] and body shape classification problems [15]. Zou et al established a model of an artificial neural network with a BP Network algorithm to identify the body type of young women aged 15 -35 in terms of the experience of fashion designers, and an overall identification accuracy of 82.6% was achieved n Introduction It is fundamental for producing high quality garments in the apparel industry to measure body shapes and clarify their statistical features. The development of a body shape classification scheme plays an important role in meeting the clothing fit. Body shape classifications were carried out with anthropometric data by the 3D body measurement technique [1], and considered in various categories from lateral [2] and frontal shapes of the body [3]. However, a limited amount of studies have been noted regarding the integration of body scanning for the use of a partial shape [4] for certain populations [5] in the apparel industry, especially for the position which affects clothing fit greatly, such as the hip part. Body shape is an important classification issue. There are several popular methods used for classifying body shape types, which can be categorized as follows: the Visual Assessment Method, Statistical method and PNN technique. Figure 1 gives an overview of the method proposed. To mitigate limitations of classification methods, the method proposed introduced feature parameter variables representing hip shape types. In addition, results of statistical analysis of 3D scan data of female university students were incorporated into the PNN technique to identify the hip shape types. n Methods Data acquisition 3D body scanning 300 female university students of ages ranging from 20 to 24 years were scanned with a [TC] 2 3D body scanner ([TC]² Corporate, USA).During the scanning, each individual wore short tight pants and a swimming cap, in addition to a bra-top. The mean value of three-times-repeated measurements from each individual was chosen in order to reduce the systematic measuring error. The head could not be scanned by the [TC] 2 3D body scanner, which is only suitable for objects of light colour, the subjects' hair being dark. Measurement of height (H) was performed directly on the subjects using a Martin ruler according to ISO7250, and the measurement of weight (Wei) was done using a weight scale. Gaussian distribution of height (H) and weight (Wei) are described in Figure 2 and Figure 3, respectively. Preprocessing To exclude abnormal data from the analysis, differences from the range of "the mean difference ± 3σ" (σ: SD of differences) were considered as abnormal and excluded from the analysis. As a result, 2.67% of the data were excluded, and the rest, scan data of 292 subjects, were used last. The abnormal difference values were generally a result of recording errors and 3D body scan measure-ment errors due to the poor quality of scan data. Measurements of items For the major lower part size, a total of 28 items were selected, as listed in Table 1 and Figure 4, respectively. Although the [TC] 2 3D body measurement scanner provides whole body size measurement for each subject, we decided to take lower part size measurement only as we were making a classification for the type of hip shape. 
20 items of body size measurement (marked 1), except the height and weight, were obtained by directly measuring the 3D models using the scanner, and the other size items (marked 2) were obtained by calculation. Hierarchical clustering of the size items Cluster analysis methods can classify sets of data samples into clusters according to similarity. For cluster analysis, the hierarchical cluster method has been successfully cited in the literature [25]. The hierarchical cluster method is used to build a hierarchy of clusters. According to the similarity between sets of data samples, it can decide which clusters should be combined. By repeating the combination process accumulatively, a hierarchical structure of the samples is determined, and then the data given can be divided into several groups on the basis of certain similarity levels by using such a structure. Hence, based on the items shown in Table 1, we adopted the hierarchical cluster method [26] with weighted average linkage and the squared Euclidean distance to cluster the database in order to obtain a better analysis result. Correlation analysis for typical indices To classify the distributional tendencies of the hip shapes, we first adopted a typical index algorithm based on correlation analysis, and then a k can be extracted as the typical index of a 1 , a 2 , ..., a n . Thus in the typical index the amount of original variables can be reduced to a few latent variables that still represent the main information of the original sets of data samples. Subdivision of young female hip shapes The K-means cluster is a method of vector quantisation, originally from signal processing, that is popular for cluster analysis in data mining. Given a set of observations x 1 , x 2 , ..., x n , where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k sets (k ≤ n), S = {S 1 , S 2 , ..., S k }, so as to minimise the within-cluster sum of squares (WCSS): WCSS = Σ i=1..k Σ x∈S i ||x − m i || 2 , where m i is the mean of points in S i . The K-means cluster directly decomposes the dataset into a set of disjoint clusters and attempts to determine the desired partitions that optimise a certain criterion function (similarity measure). It can produce the optimal result even for very large data sets with respect to a defined criterion, namely the input parameter of the cluster number and the defined similarity measure. It is suitable for very large data sets. Hence we adopted the k-means cluster to subdivide the hip shapes based on the typical indices. It was noted that the result of k-means clustering can be diverse for the same data set with different cluster numbers [27]; thus we chose the cluster number based on the analysis of variance (ANOVA) of each cluster. Probabilistic neural network classifier A probabilistic neural network (PNN) is a basic pattern classifier which is developed from the combination of the Bayes decision strategy and the Parzen non-parametric estimator of the probability density functions (PDF) of different classes [28]. Because of its easy implementation with a probabilistic output and its application to problems containing any number of classes, the PNN has been widely used to solve the problem of pattern recognition. A PNN can be realised as a network of four layers: an input layer, a pattern layer, a summation layer, and an output layer. However, using all the variables causes deficiency in classification. Therefore we did a variable reduction before classifying hip shapes.
For this reason, we applied correlation analysis to obtain the correlation coefficients between variables and introduced a typical index algorithm to extract the feature parameters using the following steps: 1. calculate the correlation coefficient matrix R in the case where there are N indices, namely a 1 , a 2 , ..., a n , in one group. The input layer consists of r input nodes, passing the inputs to the next layer. In Figure 5, j, k, l, m and n are the numbers of Type 1 (A), Type 2 (B), Type 3 (C), Type 4 (D) and Type 5 (E) training vectors, respectively. The pattern layer estimates the probability density function (PDF) of the input vector X by the Parzen window method. In this study, we chose the Gaussian function as the Parzen window for its uniqueness in representing data with a normal distribution. In this layer, the Euclidean distance between the input vector X and a training vector T is calculated as a description of the closeness between the input and training vectors; in addition, the Euclidean distance is used as the argument of the Gaussian function to compute the PDF. Therefore the neurons of the pattern layer, A ip , with the i th neuron belonging to the p th class, are represented as follows: A ip = exp[ −||X − T ip || 2 / (2σ 2 ) ], where T ip is the i th training vector belonging to the p th class, and σ is the smoothing parameter. In Figure 5, p is denoted by subscripts "A" for Type 1, "B" for Type 2, "C" for Type 3, "D" for Type 4 and "E" for Type 5. In the summation layer, the Bayesian likelihood ratio is calculated from the PDFs of the pattern layer, and then the input is classified based on the ratio. Thus the summation of A ip , A p , is made as follows: A p = (1/N) Σ i=1..N A ip , where N = j in Type 1, N = k in Type 2, N = l in Type 3, N = m in Type 4, and N = n in Type 5. In the output layer, the five values from the summation layer are compared to select the maximum one as the output. In Figure 5, output Y indicates the specific class the input vector belongs to based on the summation layer output. In this paper, the application of the PNN model was supported by software called MATLAB R2009a (MathWorks Inc., Massachusetts, USA). General procedure The general implementation procedure of modelling hip shape identification in this paper is summarised in a flowchart, as shown in Figure 6. The training patterns were based on 28 data items provided by the 3D body measurement scanner. First the characteristic parameters, typical indices with high correlation to classification, were chosen as the input variables. Then the smoothing parameter σ was selected through user-defined constants. In the succeeding steps, the classification was performed, with classes set or divided, samples allocated, the Euclidean distance between the training and test patterns computed, and the PDF calculated. Finally the output (class) was obtained through calculating the PDF. Training In this paper the input parameters were the six typical indices, including HH, HG, AbF_X-HB_X, AbB_X-HB_X, HW/HT and CrL. To give an equal weighting factor before applying the data to the network proposed, all of the input data were normalized to 0.1 - 0.9. The output (class) was the type of hip shape, namely one of the five types. Classification The PNN model was trained by randomly dividing the available set of data (292 samples) into a training set and a testing set. Three-fourths of the samples selected randomly from each class were used as training patterns, while the rest were used as testing patterns.
Thus there were a total of 218 training patterns and 74 testing patterns for classes, with the specification for selection of each class is listed in Table 2. Smoothing parameter (σ) In this paper we chose the value of smoothing parameter σ in a ranging from 0.1 to 2 and determined the optimal one that better evaluates the distribution of test results. Figure 7 shows the estimation result of the PNN for all test patterns using the above values of σ. It was obvious that the value of smoothing parameter σ had a great effect on the estimation error in the PNN, and that the estimation error had been reduced until σ = 1.3 (marked a); hence smoothing parameter σ = 1.3 provided the smallest estimation error. Thus smoothing parameter σ = 1.3 was finally adopted for the PNN in this paper. n Results and discussions Feature dimension reduction In this paper the hierarchical cluster method was applied to classify the twenty-eight items mentioned above into sev-eral bigger groups formed by merging items based on 3D body scanning data. Moreover the process result of hierarchical clustering was illustrated in a graphic called a dendrogram, as shown in Figure 8 (dotted line rectangles denote the groups divided on the basis of the clustering results), which allowed a good impression of the similarity between the items. In this dendrogram, the data points appear to cluster six groups, including a different number of items. Both the fourth and fifth clusters contain two items respectively, whereas the second cluster contains nine items. In contrast, the first, third and sixth cluster included six, four and five samples, respectively. Then we established the extract feature parameter through a typical index algorithm based on the correlation coefficient obtained by correlation analysis following the steps mentioned above, and calculated the values of 2 r in each variable listed in Figure 9. Thus the twenty-eight original variables were directly reduced to a few latent variables, six typical indices named HH, HG, AbF_X-HB_X, AbB_X-HB_X, HW/HT and CrL, that could still represent the main information of the original sets of data samples. Subdivision of hip shape types The hip shape database was firstly partitioned using a k-means cluster, with the cluster number ranging from 3 to 5 according to the six typical indices so as to obtain the optimal cluster number based on analysis of variance (ANOVA). ANOVA could measure the overall variances between the groups as well as the overall variances within the groups. By ANOVA, cluster number distribution and clustering results are shown in Table 3. It was apparent that based on the probability (P) value, the cluster numbers desired could be easily identified, and we concluded that 5 was the cluster number desired for the hip shape database. From Table 3 with five cluster numbers, it could also be realized that the Cluster Mean Square (CMS) between groups of any variable from six typical indices was so much larger than the Error Mean Square (EMS) within groups. From the P value, it could be seen that the P value of each variance was smaller than 0.05, which meant we rejected the null hypothesis and assumed that the probability that the observed group's means would have appeared by chance was less than 5%. Moreover it was noticed that the null hypothesis tested in ANOVA was equal to the group means. In addition, the result of ANOVA presented that the differences among five groups were great enough to identify each group successfully by the six variances. 
Thus the optimal number, which was five in this paper, was chosen as the cluster number desired. Moreover the cluster results of the database displayed in Table 4 were partitioned by direct implementation of k-means clustering. Table 4 summarised the final cluster centre and capacity of each type for the optimal cluster number based on the six typical indices. Comparison of the method proposed with other existing approaches The classification performance of the testing scheme proposed was validated by a total number of 74 testing patterns of five hip shape types, including 2 samples of Type 1 (A), 11 samples of Type 2 (B), 16 samples of Type 3 (C), 20 samples of Type 4 (D) and 25 samples of Type 5 (E), listed in Table 2. To evaluate the performance of the testing scheme, three common measures of sensitivity (SENS), specificity (SPEC), and accuracy (ACCU) [29] were used, defined as follows: SENS = TP/(TP + FN), SPEC = TN/(TN + FP), ACCU = (TP + TN)/(TP + TN + FP + FN), where TP was true positive, FN false negative, TN true negative, and FP false positive. This study presented an application of the PNN with typical indices for hip shape type classification (Table 5). Meanwhile, the sensitivity (SENS), specificity (SPEC), and accuracy (ACCU) of the three classifiers mentioned above for the testing patterns are summarised in Table 6. SENS represented the number of objects belonging to a class that were correctly classified in the correct class, and SPEC corresponded to objects not belonging to a certain class and subsequently classified as pertaining to another. According to Table 6, the average SENS, SPEC, and ACCU were improved from 84.95%, 95.82%, and 94.28% to 94.70%, 98.25%, and 97.37%, respectively, by applying the PNN scheme instead of the BP scheme. In addition, the overall performance of the PNN scheme was better than that of the BP scheme by more than 9.7% in SENS, 2.4% in SPEC, and 3% in ACCU, respectively. Therefore it was apparent that the PNN scheme showed the best performance for hip shape type classification. Implementation of the model for developing an intelligent classification system In this paper, the PNN model could be considered as a good shape recognition expert system. By using the model proposed and designing an interface for it, an intelligent decision support system was developed as a shape type recognition expert system that could be used by textile industries. The result is time saving and more convenient for users, who can select the appropriate model to understand their shape types without inputting the numerous items. The intelligent hip shape recognition system implemented is shown in Figure 11 (Figure 11. Intelligent recognition implemented system). Therefore if the user wants to use such an intelligent system for other persons with various body structures, he just needs to collect data containing the six typical indices of body measurements. After that he can use the data collected to form the hip shape types and implement the recognition system. The resulting hip shape recognition system that is constructed through the classification model proposed based on 3D body measurement could be helpful for a large number of persons with the same body measurement to understand their appropriate shape types. n Conclusions In this paper, an intelligent model for developing a hip shape recognition system based on 3D body measurement, combining cluster analysis and correlation analysis with a PNN classifier to identify the type, was proposed. A 28-dimensional feature vector reflecting lower body part information was selected.
In order to reduce the training time and improve classification accuracy of the classifier, the 28-dimensional features were reduced to six typical indices by cluster analysis and correlation analysis. After subdivision of hip shape types by a K-means cluster and analysis of variance, the reduced features were then treated as the inputs of the PNN classifier to discriminate five different types of hip shape. The average classification accuracy of the PNN scheme proposed was evaluated by comparing with the BP and SVM schemes. The scheme proposed achieved a 97.37% rate, which was very promising. Thus it could be considered as a successful recognition system. When facing large scale data from the 3D body scanner, it is necessary to do feature reduction, and then to develop an appropriate recognition model for identification of the body shape type, as the significant features can characterise a certain body type. If the method in this study is successfully carried out in practice, it will be expected that the system proposed may help industrial practitioners to gain insight into 3D body scanning data for mass customisation in order to satisfy customer requirements for garment fitness.
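Because the PNN is the core of the scheme, a compact sketch may help make the pattern, summation and output layers described above concrete. The snippet below is our own illustrative NumPy version, not the authors' MATLAB R2009a implementation; the six-feature vectors and labels are random placeholders standing in for the typical indices and the five hip shape types, and the σ sweep only mirrors the selection procedure described in the text (with real labelled test patterns one would keep the σ giving the lowest test error, σ = 1.3 in the paper).

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network in the spirit described above:
    a Gaussian Parzen-window pattern layer, a per-class summation layer,
    and an argmax output layer. Illustrative only."""

    def __init__(self, sigma=1.3):
        self.sigma = sigma

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.train_ = [X[y == c] for c in self.classes_]
        return self

    def _class_scores(self, x):
        scores = []
        for Xc in self.train_:
            d2 = np.sum((Xc - x) ** 2, axis=1)              # squared Euclidean distances
            scores.append(np.mean(np.exp(-d2 / (2 * self.sigma ** 2))))
        return np.array(scores)

    def predict(self, X):
        return np.array([self.classes_[np.argmax(self._class_scores(x))] for x in X])

# Toy usage with placeholder "typical index" vectors (6 features, scaled to 0.1-0.9
# as in the text); real data would come from the Table 1 measurements.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.1, 0.9, size=(218, 6))
y_train = rng.integers(1, 6, size=218)                      # five hip shape types
X_test = rng.uniform(0.1, 0.9, size=(74, 6))
y_test = rng.integers(1, 6, size=74)                        # placeholder labels

# Simple smoothing-parameter sweep mirroring the 0.1-2.0 search in the text;
# accuracies here are meaningless because the labels are random placeholders.
for sigma in (0.5, 1.0, 1.3, 2.0):
    model = PNN(sigma=sigma).fit(X_train, y_train)
    acc = np.mean(model.predict(X_test) == y_test)
    print(f"sigma={sigma}: placeholder accuracy={acc:.2f}")
```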
2019-04-15T13:09:38.167Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "6d0e885557a13202973abddefc434e97cb3db92d", "oa_license": null, "oa_url": "https://doi.org/10.5604/12303666.1215535", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "352a1f4fffe4eaa5dd754fe54b98ca987c8944a4", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Engineering" ] }
229088123
pes2o/s2orc
v3-fos-license
lncRNA-Associated ceRNA Networks in Spleen of Nocth1-correlated T-ALL Leukemia Mice Background Acute T-lymphocytic leukemia (T-ALL) is a highly aggressive malignant tumor in leukemia. Nocth1 is considered as an major oncogene in the development of T-ALL. Increasing evidences have revealed that the occurrence and progression of T-ALL referred to abnormal gene expression, pathway activation and the regulation between these genes. However, the potential lncRNA-associated competing endogenous RNA (ceRNA) network involved in spleen of Nocth1-correlated T-ALL leukemia mice remains unclear. Methods Overexpression of Notch intracellular domain (ICN) of Notch1 by retroviral infection was used to set up mouse T-ALL model. Deep RNA-sequencing analysis was performed the expression of lncRNAs and mRNA in spleen of T-ALL mice and C57BL/6 mice. Results The deep RNA-sequencing analysis shown that 1833 lncRNAs and 4626 mRNAs were deregulated according to the P-value (p<0.05) and fold change (>2-fold) in spleen of T-ALL leukemia mice compared with that of C57BL/6 mice. Gene Ontology(GO) and KEGG pathway analysis were performed to reveal the potential roles of differentiated expressed lncRNAs. Co-expression Network was performed to reveal the regulation relationship between the differentiated expressed lncRNAs and mRNAs. CeRNA prediction constructed the lncRNA-miRNA-mRNA model to find the core ceRNA based on regression model analysis and seed sequence matching methods. Conclusion This study provided a systematic overview of the altered lncRNAs and mRNA expression, pathway and ceRNA regulation network in the pathogenesis of Nocth1-correlated T-ALL. 3 ALL accounts for 20-25% of the total incidence of acute lymphoblastic leukemia in adults, for 10%-15% of childhood acute lymphoblastic leukemia [2]. The abnormal proliferation T cell can infiltration and damage to various of tissues and affect organ function. Despite the combined chemotherapy and allogeneic hematopoietic stem cell transplantation have applied to clinical treatment of T-ALL, its event-free survival rate is only 30%-50% [3]. Explore the mechanism and pathogenesis of T-ALL is of great significance in improving survival rate. The formation of T-ALL is a multi-step process which includes the activation of oncogenes and inactivation of tumor suppressor genes [4]. Notch1 is a subtype of Notch receptor in Notch signaling pathway which participated in multiple pathological and physiological processes, including cancer [5]. Approximately 55% of T-ALL patients are related with acquired functional mutations in Notch1 [6]. Block Notch signaling pathway causes cell cycle arrest and apoptosis in T-ALL cell lines [7]. Notch1 is a class I transmembrane protein which transduces information from extracellular signals into nucleus directly [8]. The Notch intracellular domain (ICN) of the Notch1 is its active component, which can activate expression of target genes in nucleus. Overexpressed of ICN1 by retroviral infection in hematopoietic progenitor cells or thymocytes promote T ALL tumorigenesis, which was made to set up mouse T-ALL model [9]. lncRNAs are a special class of non-coding RNAs (ncRNAs), participated with varies of physiological cellular processes as well as cancer pathological process [10]. Increasing evidences have revealed that lncRNAs play a critical role in many types of cancers, including hepatocellular carcinoma, renal cell carcinoma and leukemia [11]. 
Investigating lncRNAs specifcally transcriptional profiles in T-ALL is of great significance to understand the gobal altered expression of lncRNAs. lncRNAs have diverse mechanism to regulate gene expression. Some studies show that lncRNAs act as competing endogenous RNAs 4 (ceRNA) through a lncRNA-miRNA-mRNA model to regulate gene expression [12]. Based on the lncRNAs and mRNAs specifcally transcriptional profiles, construction the ceRNA network can enrich the raw data in studying the potential mechanism in the formation of T-ALL. In the present study, we performed the deep RNA-sequence to analyze expression profile in spleen of T-ALL leukemia mice with the aim of explore the lncRNAs catalogue. We showed the differentially expressed lncRNAs and mRNAs. Gene Ontology(GO) and KEGG pathway analysis were performed to reveal the potential roles of these mRNAs. Coexpression Network was performed to reveal the regulation relationship between the differentially expressed lncRNAs and mRNAs. Meanwhile, we performed ceRNA prediction to construct the lncRNA-miRNA-mRNA model. Thus, this study provided a systematic overview of the altered RNAs expression, pathway and ceRNA network in the pathogenesis of Nocth1-correlated T-ALL. Animals and tissues We used T-ALL leukemia mice overexpressing the Notch l intracellular domain (NICD) as the research model. T-ALL leukemia cells presented by the Institute of Hematology, Peking Union Medical College, Chinese Academy of Medical Sciences. We cotransfected with retroviral plasmid MSCV-ICN1-IRES-GFP (ICN1-GFP), the reverse transcription packaging protein CMV-VSVG and and Kat into 293T. C57BL/6 mice were divided into two groups randomly, T-ALL mice and C57BL/6 mice, and each group has three mice. T-ALL mice group were treated as follows, collected the viral supernatant to infect C57BL/6 mouse bone marrow cells (Lin-Scal + ), and then transplanted into C57BL/6 mice (10 6 per mouse)after semi-lethal dose irradiation through tail vein injection. C57BL/6 mice group 5 were transplanted culture medium in the same way. Mice were numbered, divided into three cages randomly and reared routinely. C57BL/6 mice, 6~8 weeks female, were purchased from Charles River Experimental Animal Technology Co., Ltd. (Beijing, China). 2 weeks after, the mice were sacrificed by cervical dislocation and removed the spleen by laparotomy (Fig.1A). Mice quarantined in a 12 h light and 12 h dark photoperiod pathogen free environment, received water and food in clean class animal room of Hebei Medical University. All animals were housed and cared in accordance with the Declaration of Helsinki and the guidelines and regulations of the Institutional Animal Care and Use Committee of the Second Hospital of Hebei Medical University. RNA-Seq RNeasy mini kit (Qiagen, Germany) was used to isolate the total RNA. TruSeq™ qRT-PCR Differently expressed lncRNAs were selected for validation by qRT-PCR. Total RNA was 6 reverse-transcribed using SureScript™ First-Strand cDNA Synthesis Kit (Genecopoeia, USA) according to the manufacturer's guide. qPCR reactions was performed using SYBR-Green (Invitrogen) according to the manufacturer's guide. BioRad iQ5 Real-Time thermocycler was used to perform qPCR reactions. The Cycling conditions was denaturing at 95°C, followed by 39 cycles of 95℃ (10 s) and 55℃ annealing (30 s). Specific primers of each lncRNA were listed in Table 2. 
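The qRT-PCR protocol above does not spell out how Ct values were converted into relative expression; as a hedged illustration, the sketch below applies the common 2^(-ΔΔCt) method, which is one standard way such validation data are summarised. The reference gene and Ct values are hypothetical placeholders, not values from the paper.

```python
# Hedged illustration (not from the paper): relative expression of a target
# lncRNA in a T-ALL spleen versus a control spleen via the 2^(-ddCt) method,
# using a hypothetical reference gene for normalisation.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalise to reference in sample
    d_ct_control = ct_target_control - ct_ref_control   # normalise to reference in control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example with made-up Ct values: a fold change below 1 indicates downregulation.
fold_change = relative_expression(ct_target_sample=27.4, ct_ref_sample=18.2,
                                  ct_target_control=24.9, ct_ref_control=18.0)
print(f"relative expression (T-ALL vs control) = {fold_change:.2f}")
```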
Statistical analysis Differential expression of lncRNAs and mRNAs between spleens of T-ALL mice and C57BL/6 mice was analyzed with the Bioconductor package limma (version 3.26.1) in R (version 3.2.2). The Spearman correlation test was used to analyze co-expression relationships between lncRNAs and mRNAs. CeRNA prediction was based on Pearson's correlation coefficients. qRT-PCR data are shown as means ± S.E.M. Differences between two groups were analyzed by Student's t test; P < 0.05 was considered significant. 1. Deep RNA sequencing of lncRNA and mRNA expression profiles in spleens of T-ALL mice. The mice were healthy before injection; two weeks later, deep RNA sequencing was performed on spleen tissue (Fig. 1D, E). mRNA GO and KEGG pathway analysis To clarify the biological processes, cellular components and molecular functions of the differentially expressed mRNAs, we performed GO term enrichment and KEGG pathway analysis. The most enriched GO terms for differentially expressed mRNAs were related to melanocyte differentiation, myosin complex and translation repressor activity for biological process (Fig. 2A), cellular component (Fig. 2B) and molecular function (Fig. 2C), respectively. KEGG pathway analysis showed that 303 pathways were significantly enriched among the transcripts (Fig. 2D). Acute myeloid leukemia, protein export and glycosaminoglycan biosynthesis - keratan sulfate were the three most significantly enriched pathways. Coding/non-coding co-expression analysis To predict the functions of lncRNAs, we constructed an lncRNA-mRNA co-expression network (Fig. 3). Construction of a ceRNA network lncRNAs can function as ceRNAs by competing with mRNAs for miRNA binding, and analysis of the ceRNA network helps to characterize the lncRNAs. As shown in Fig. 4, 71 differentially expressed lncRNAs and 123 differentially expressed mRNAs were selected, and 11 miRNAs were predicted to share binding sites with these lncRNAs and mRNAs. Validation of differentially expressed lncRNAs Nine differentially expressed lncRNAs from the ceRNA network prediction were selected to confirm the RNA sequencing results by qRT-PCR (Table 1). Ten pairs of mouse spleens, comprising both T-ALL leukemia mice and C57BL/6 mice, were used, and the downregulated lncRNAs (p < 0.05; FC > 5-fold) were examined. All of these lncRNAs were confirmed to be downregulated, consistent with the RNA sequencing results (P < 0.05). NONMMUT026003.2 showed the largest change among these lncRNAs. Discussion Notch1 is one of the major driver oncogenes in T-ALL. About 55% of T-ALL patients carry Notch1 mutations in the transmembrane region and the intracellular PEST domain, which result in abnormal activation of the Notch signaling pathway. Overexpression of ICN1 by retroviral infection of hematopoietic progenitor cells or thymocytes promotes T-ALL tumorigenesis and was used to set up the mouse T-ALL model. Over the past decades, the molecular mechanism of Notch1-correlated T-ALL has been extensively investigated; however, its precise pathogenesis is still unknown. In recent years, ncRNAs, including lncRNAs, have been linked to numerous biological regulatory functions. lncRNA NALT promotes cell proliferation in T-ALL by activating the Notch signaling pathway [13]. lncRNA-IUR acts as a tumor suppressor by suppressing the STAT5-CD71 pathway in Bcr-Abl-mediated tumorigenesis [14].
To further confirm the significant differences in the expression of lncRNAs and mRNAs, we removed the spleen from T-ALL leukemia mice and C57BL/6 mice and performed deep RNA sequence to study the different expression of lncRNAs and mRNAs. We found that lncRNAs' expression altered significantly in the spleen of Notch1-correlated T-ALL leukemia mice compared with that of C57BL/6 mice which was the first time to report the altered expression of lncRNAs. 1873 lncRNAs and 5626 mRNAs were differentially expressed in the spleen from T-ALL leukemia mice compared with that of C57BL/6 mice. 9 we performed GO terms enrichment to study the biological functions of differently expressed mRNAs. The most enriched GO terms for differentially expressed mRNAs were related to melanocyte differentiation, myosin complex and translation repressor activity in biological processes, cellular component or molecular function. To understand the function of differentially expressed mRNAs further, we performed KEGG pathway analysis and found that 303 pathways were significantly enriched among the altered transcripts. Acute myeloid leukemia, Protein export ad Glycosaminoglycan biosynthesis-keratan sulfate were the 3 significantly enriched pathways. Apoptosis, Notch signaling pathway and PI3K-Akt signaling pathway which was closely related with the pathology process of T-ALL was involved in the enriched pathways [15]. Furthermore, we constructed co-expression network to investigate the relation between lncRNAs and the coding genes. Competing endogenous RNAs (ceRNAs) was raised that ceRNA molecules could sponge miRNA through miRNA response elements (MREs) and regulate gene expression. Numbers of molecules could act as ceRNAs, including lncRNAs, circRNAs or pseudogene. Wang et al found that lncRNA CHRF regulates cardiac hypertrophy by targeting miR-489 [16]. circRNA MTO1 sponges miR-9 to suppress hepatocellular carcinoma progression [17]. Chan et al found that A FTH1 gene:pseudogene, can sponge miRNAs to regulates tumorigenesis in prostate cancer [18]. In this study, a lncRNA-associated ceRNA network analysis was perform and showed that 71 differentially expressed lncRNAs and 123 expressed mRNAs were selected, which predicted 11 miRNAs sharing binding sites with the differentially expressed lncRNAs and mRNAs. Nice differentially expressed lncRNAs from ceRNA network prediction were selected to confirm deep RNA sequence results by qRT-PCR. We selected ten pairs of mice spleen which contain both T-ALL leukemia mice or C57BL/6 mice to further validation. All of the nice downregulated lncRNAs expression were consistent with RNA sequence results. NONMMUT117521.1 was located in intron of ECE-1 which has endopeptidase activity and membrane-bound metalloprotease. Bao et al found that ECE-1 promoted Ischemia/Reperfusion-Induced Injury [19]. ENSMUST00000195494 was located in Pfkfb3 which participated in the glucose metabolism and promoted cell proliferation, apoptosis and autophage in many types of cancer [20]. NONMMUT008951.2 was located in Bcl11a which inhibited proliferation and promoted apoptosis in B lymphoma cell lines [21]. NONMMUT026003.2, in the ceRNA network of differentially expressed lncRNAs, was located Bcl6 which participated humorous of processes, including inflammatory response, growth and differentiation [22]. This study provides the first Notch1-correlated ceRNA network prediction of lncRNAs and mRNAs which needed some studies to explore the roles of these differentially expressed lncRNAs. 
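The ceRNA construction is described only at a high level (shared predicted miRNA binding sites plus expression correlation), so the snippet below is a rough Python sketch of that pairing logic under stated assumptions: the miRNA-target dictionaries and expression vectors are placeholders (the mRNA names merely reuse genes mentioned above), and the "at least one shared miRNA plus positive Pearson correlation" rule is an assumption about how the published network was filtered rather than the authors' exact pipeline.

```python
from itertools import product
from scipy.stats import pearsonr

# Placeholder predicted targets (seed-sequence matching would supply these)
mirna_targets_of_lncrna = {"lncA": {"miR-1", "miR-9"}, "lncB": {"miR-22"}}
mirna_targets_of_mrna   = {"Bcl6": {"miR-9"}, "Pfkfb3": {"miR-22", "miR-7"}}

# Placeholder expression vectors across samples (e.g. FPKM per spleen)
expr = {
    "lncA":   [5.1, 4.8, 0.9, 1.1, 1.0, 0.8],
    "lncB":   [2.2, 2.0, 6.5, 6.1, 5.9, 6.3],
    "Bcl6":   [4.9, 5.2, 1.0, 1.2, 0.9, 1.1],
    "Pfkfb3": [2.1, 2.3, 6.0, 6.4, 6.1, 5.8],
}

def cerna_pairs(r_min=0.8, p_max=0.05):
    """Keep lncRNA-mRNA pairs that share >=1 miRNA and are positively correlated."""
    pairs = []
    for lnc, m in product(mirna_targets_of_lncrna, mirna_targets_of_mrna):
        shared = mirna_targets_of_lncrna[lnc] & mirna_targets_of_mrna[m]
        if not shared:
            continue
        r, p = pearsonr(expr[lnc], expr[m])
        if r >= r_min and p <= p_max:
            pairs.append((lnc, sorted(shared), m, round(r, 3)))
    return pairs

for lnc, mirnas, mrna, r in cerna_pairs():
    print(f"{lnc} --({','.join(mirnas)})--> {mrna}  r={r}")
```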
Because of the limitations of the mouse model, additional validation in patient samples would be valuable. Figure legend: Validation of deep RNA sequencing results by qRT-PCR in spleens of T-ALL mice and C57BL/6 mice. Supplementary Files: additional data.xls; ARRIVE Guidelines Checklist.pdf
2020-05-28T09:14:02.986Z
2020-01-20T00:00:00.000
{ "year": 2020, "sha1": "d5e8eeec763f50faffa0f24e4ca7d6bcdbbe67f1", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-29268/v1.pdf?c=1631855716000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "2e68ef9daf59fe0286b400e8050b4d4c2655f224", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
241094249
pes2o/s2orc
v3-fos-license
Prevention and Detection of ARP Spoofing and Man-In-The-Middle Attacks using EDMAC-IP Algorithm : The great Man-in - the-Middle assault is centered around convincing two has that the other host is the machine in the center. In the event that the framework utilizes DNS to order the other host or address goals convention (ARP) ridiculing on the LAN, this might be accomplished with an area name parody. This paper targets presenting and delineating ARP ridiculing and its job in Man-in - the-Middle assaults. The expression "Man-in - the-Middle" is normal utilization—it doesn't imply that these assaults must be utilized by individuals. Maybe progressively sensible wording would be Teenager-in - the-Middle or Monkey-in - the-Middle. Progressively contact the assault can be identified using timing data much of the time. The most widely recognized kind of assaults happen because of reserve harming of Address Resolution Protocol (ARP), DNS satirizing, meeting commandeering, and SSL seizing. INTRODUCTION Man-in-the-center (MIM) assaults make it especially hard to keep information secure and private, since assaults can be introduced from remote PCs with bogus locations. A host or escape switch must send an ARP demand bundle by means of Address Resolution Protocol on the off chance that it needs to find the physical locations of another host's notable Media SSL/TLS Protocol: The SSL protocol is a transport layer security protocol that was developed and proposed by Netscape Communications in the 1990s. The SSL and TLS protocols are essentially the same. Part of the protocol is a handshake protocol that is responsible for (mutual) authentication and key establishment. Figure (1): TLS Protoco Access Control (MAC) on its system. Every parcel of ARP demands contains the MAC address of the sender and the IP locations of the source target. The solicitation bundle is transmitted over the system and this parcel ought to be overseen by all hosts on the system. It can change the transmitted information or include new information or even square the sniffed parties from getting to the information. This paper breaks down the implications of HTTPS associations with MITM by reenacting a genuine MITM assault on different HTTPS associations, for example, Gmail, Yahoo Mail and Bank account. It was discovered that HTTPS connections can be focused by utilizing the correct free apparatuses, and passwords can be sniffed and saw in plain content assault against HTTPS associations and it is relied upon to give completely make sure about LAN condition against MITM assault. The association of rest of the paper is as per the following: segment two depicts ideas, for example, SSL, MITM and ARP harming, segment three will portray the exploration targets, area four is the procedure, segment five will talk about the outcomes, segment six will exhibit the proposed strategy, segment seven is conversation lastly segment eight is the finishing up comments. HTTPS was first acquainted with be utilized as a made sure about correspondence channel, instead of typical HTTP convention which isn't secure. What's more, it gives a dependable correspondence over the Internet by utilizing encryption to secure the data to be seized by unapproved parties. Thus, a large portion of the web based business and e-banking locales maintain their business utilizing this convention. 
In any case, one significant downside found in HTTPS is that it can't adapt to unapproved access by programmers; the aggressor in MITM assault can give counterfeit testaments to the person in question and get the first one, as it will be appeared right now. This will prompt security issues when the secret data of clients is hacked, for example, passwords and record numbers. II. MAN IN THE MIDDLE ATTACK MITM ''a type of dynamic wiretapping assault in which the aggressor captures and specifically changes conveyed information so as to take on the appearance of at least one of the elements engaged with a correspondence affiliation'' . The fundamental MITM attributes are (i) that they speak to dynamic assaults,and (ii) that they focus on the relationship between the conveying substances (as opposed to the elements or the correspondence channels between them) There are numerous approaches to execute MITM assault, for example, Address Resolution Protocol (ARP) reserve harming and Domain Name System (DNS) caricaturing. In ARP poison the aggressor infuses all around structured location goals bundles onto the neighborhood arrange; when such ARP parcels arrive at an objective machine it act to change the condition of the ARP store on that framework • Collecting Information: most importantly we gather data about SSL, MITM and investigate conceivable open source hacking programming that can be utilized in our test. • Creation and Design: to structure the investigation condition we need three has, the person in question, the aggressor and the door. We decide to utilize virtual machine for the assailant on the unfortunate casualty machine which is running Windows XP. • Parameters needed: we will have three hosts and the hacking programming on the assailant machine. Subsequent to making the correct arrangements nature is prepared to execute the assault. • Implementation: the assliant can hurt the hosts inside a comparative LAN,and redirect the traffic to his own host.By then the aggressor needs to hold up until the disastrous setback will login to specific HTTPS account .By then passwords of the heartbreaking loss will be showed up in the assliant machine in plain substance. ARP Poisoning: Figure (2): MITM To manage the issue of ARP harming different techniques were proposed to either see or forestall ARP harming, at any rate each ha its own qualities and inadequacies. Trabelsi and Shuaib proposed a structure for seeing poisonous hosts that are performing traffic redirection assault in LAN coordinates in any case the limitation of this system is that it is dull and the affirmation will be after the trap had as of late occurred. In MITM the attacker may beginning at now appear at the delicate data before recognizing confirmation. Another strategy is to upset the snare before it happens yet by a long shot the greater part of these frameworks have its own impediments and until this time no method had the choice to give full LAN confirmation from different assaults. A portion of these structures were separated and our check later parts right now. III. METHODOLOGY: To accomplish the exploration targets the technique of this examination has the accompanying stages: and will be clarified in more detail in the coming areas. PROPOSED NEW ALGORITHM (EDMAC-IP): The exploratory outcomes show that aggressors can get the HTTPS passwords in a plain book and HTTPS connection isn't secure from MITM. 
Likewise, the results show that the critical weakness exploited by the attack is the ability to perform ARP poisoning and sniff the connection without the user's knowledge, supplying the client with a fake certificate while the genuine certificate is intercepted by the attacker. The issue, therefore, is not how strong the encryption algorithm protecting the password is; if we want to prevent MITM attacks we must actively prevent ARP poisoning. Earlier solutions for blocking or detecting ARP poisoning exist, but they have several shortcomings. The need to fully secure the LAN with a simple scheme motivated the MAC-IP check we are proposing. ARP's most notable deficiency is that it is a stateless protocol: it does not keep track of the requests it has sent and will accept replies without ever having sent a request. A further consequence of ARP being stateless is that a system's ARP table only keeps the result of the last ARP broadcast, so an attacker can keep spoofing an IP address that belongs to another host. The proposed EDMAC-IP method blocks such attacks by making every ARP cache update subject to our check. It works by establishing a fixed, verifiable relationship between any host's MAC address and its IP address. The EDMAC-IP algorithm nearly eliminates the man-in-the-middle attack. The earlier S-ARP algorithm also reduces the attack, but it prevents only about 75% of attacks, whereas our algorithm prevents about 92% of them. The check uses only MAC addresses and IP addresses. We implemented the EDMAC-IP algorithm as follows. IV. EDMAC-IP ALGORITHM: • First calculate the seed value by summing the sequential (serial-number) digits of the host's MAC address. • S denotes the hexadecimal digits of that serial portion. An example of how the gateway's IP is derived by running the algorithm is shown in the figure. The DHCP side assigns the IP for each host on the LAN based on its MAC address; this is the basic function of the DHCP (server) side of the algorithm. The next stage describes the role of each host involved in an HTTPS connection in blocking the attack: o XOR the seed with S2 (the second digit of the serial number); the result is the first part: 1st part = Seed XOR S2. o XOR the first part with S4 (the fourth digit of the serial number); the result is the second part: 2nd part = 1st part XOR S4. o XOR the second part with S6 (the sixth digit of the serial number); the result is the third part: 3rd part = 2nd part XOR S6. The third part is assigned as the last octet of the host's IP address. Example of the EDMAC-IP DHCP side. The Client side: the client performs a check whenever an ARP packet arrives, before updating the ARP cache. The following steps represent the EDMAC-IP client side: • XOR the last octet of the IP address with S6 (the sixth digit of the gateway's MAC serial number). • XOR the result with S4 (the fourth digit of the gateway's MAC serial number). • XOR the result with S2 (the second digit of the gateway's MAC serial number). The result is the seed. • Calculate the sum of the serial-number digits of the MAC address. • Check whether the two seed values match. Example of the EDMAC-IP client side (no poisoning attempt). The Server Side (gateway): the server side is responsible for handing out the IP addresses to the hosts as well as to the gateway.
By then the passage will have an occupation in checking the ARP packs it gets before stimulating the ARP table so when the aggressor need to hurt the entry's ARP table the segment will play out the EDMAC-IP customer figurings to check the match between the got IP and MAC addresses.The Double Check from the customer and the door server Right now relationship on both the customer and the passage will be secured with frustrating the trap and every one will play out the EDMAC-IP figuring, and if there is any connection between two has inside the LAN they will in like way check the ARP bundles and perform EDMAC-IP. This will incite a thoroughly secure LAN against ARP harming and will guarantee a guaranteed LAN against MITM and different kinds of ambushes that rely on ARP harming, for example, DoS. In the underneath figure shows the twofold check of IP right now. The Double Check from the customer and the passage server. VI. DISCUSSION From the results of the redirection clearly MITM is conceivable against HTTPS affiliations and the blended passwords can be appeared in plain substance. In the wake of separating the gaps in HTTPS which perceives the phony affirmations from the assailant we accept that the reaction for foil MITM is to forestall ARP harming. EDMAC-IP is proposed and is relied on to be unbelievable in accomplishing secure LAN by ruining ARP spare harming. In the wake of showing the way EDMAC-IP works we will presently discuss its features and characteristics and appear differently in relation to other such strategies. Despite the way that various methodology and procedures were proposed and executed, there are a couple of criteria that pick whether or not these frameworks can be viably executed in certified world or not. • The plan should not anticipate that changes should be made to each host on the framework (e.g., present remarkable programming on each host), as this grows the legitimate costs. • The usage of cryptographic systems should be restricted or evaded (in a perfect world), as it ruins ARP. • Prevention/blocking are jumped at the chance to area, as the last depends upon the administrator having the alternative to manage the alerts in a suitable and advantageous manner. • The plan should be comprehensively open and easy to realize. • Exorbitant equipment fundamentals ought to be obliged at any rate much as could be ordinary. • Solution ought to be in reverse faultless with ARP. • ARP demand/answers ought not be eased back down on an exceptionally essential level. • All sorts of ARP assaults ought to be blocked
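To make the XOR chain of Section IV easier to follow, here is a minimal Python sketch of the DHCP-side derivation and the client-side check as we read the description above. The paper leaves several details ambiguous, so this is an interpretation rather than the authors' implementation: it assumes the "serial number" is the last three octets (six hex digits) of the MAC address, that S2, S4 and S6 are its second, fourth and sixth hex digits, and that the seed is the sum of those six digits; the MAC and subnet used below are invented.

```python
def serial_digits(mac: str) -> list[int]:
    """Last six hex digits of the MAC (assumed 'serial number' portion)."""
    digits = mac.replace(":", "").lower()[-6:]
    return [int(d, 16) for d in digits]

def seed(mac: str) -> int:
    # Assumed: seed = sum of the serial-number hex digits
    return sum(serial_digits(mac))

def dhcp_last_octet(mac: str) -> int:
    """DHCP side: seed XOR S2 XOR S4 XOR S6 -> last octet of the assigned IP."""
    s = serial_digits(mac)
    s2, s4, s6 = s[1], s[3], s[5]
    part = seed(mac) ^ s2      # 1st part
    part ^= s4                 # 2nd part
    part ^= s6                 # 3rd part -> last octet
    return part & 0xFF

def client_check(ip: str, claimed_mac: str) -> bool:
    """Client side: undo the XOR chain and compare against the seed
    before accepting an ARP reply binding ip <-> claimed_mac."""
    s = serial_digits(claimed_mac)
    s2, s4, s6 = s[1], s[3], s[5]
    last_octet = int(ip.split(".")[-1])
    recovered = ((last_octet ^ s6) ^ s4) ^ s2
    return recovered == (seed(claimed_mac) & 0xFF)

if __name__ == "__main__":
    gw_mac = "00:1a:2b:3c:4d:5e"          # hypothetical gateway MAC
    gw_ip = f"192.168.1.{dhcp_last_octet(gw_mac)}"
    print(gw_ip, client_check(gw_ip, gw_mac))          # legitimate binding -> True
    print(client_check(gw_ip, "00:1a:2b:3c:4d:5f"))    # spoofed MAC -> False
```

Note that such a scheme binds only the last octet of the address, so on its own it narrows rather than eliminates spoofing; the double check performed on both the client and the gateway, as described above, is what the authors rely on for the reported prevention rate.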
2020-06-11T09:06:07.496Z
2020-05-30T00:00:00.000
{ "year": 2020, "sha1": "a5819b8e3a925c4dae201d9dd8fbb2ceb955d414", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijrte.f8211.059120", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8833bbac6b1c753f1e7c043da16d5c59986a7bc0", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
10401618
pes2o/s2orc
v3-fos-license
The role of thyroid hormone nuclear receptors in the heart: evidence from pharmacological approaches This review evaluates the hypothesis that the cardiac effects of amiodarone can be explained—at least partly—by the induction of a local ‘hypothyroid-like condition’ in the heart. Evidence supporting the hypothesis comprises the observation that amiodarone exerts an inhibitory effect on the binding of T3 to thyroid hormone receptors (TR) alpha-1 and beta-1 in vitro, and on the expression of particular T3-dependent genes in vivo. In the heart, amiodarone decreases heart rate and alpha myosin heavy chain expression (mediated via TR alpha-1), and increases sarcoplasmic reticulum calcium-activated ATPase and beta myosin heavy chain expression (mediated via TR beta-1). Recent data show a significant similarity in expression profiles of 8,435 genes in the heart of hypothyroid and amiodarone-treated animals, although similarities do not always exist in transcripts of ion channel genes. Induction of a hypothyroid cardiac phenotype by amiodarone may be advantageous by decreasing energy demands and increasing energy availability. Introduction The heart is an important target organ for thyroid hormone as evident from clinical practice [1]. In hyperthyroid patients there is an increased resting heart rate, increased left ventricular contractility, increased cardiac output and a decreased systemic vascular resistance, resulting in a lower diastolic and higher systolic blood pressure; serum cholesterol is decreased and patients are susceptible to cardiac arrhythmias, specifically to atrial fibrilation. In contrast, hypothyroid patients have a decreased heart rate, impaired cardiac contractility and diastolic function, decreased cardiac output and an increased systemic vascular resistance, resulting in a higher diastolic blood pressure; serum cholesterol is increased and patients are susceptible to accelerated atherosclerosis and coronary artery disease. The aberrant cardiovascular functions in hyperthyroid and hypothyroid patients are usually fully reversible upon restoration of the euthyroid state. Amiodarone, introduced originally as an anti-anginal agent but nowadays used as a very potent antiarrhytmic drug, lowers heart rate, lengthens the cardiac action potential (manifest as a longer QTc interval on the EKG), and depresses myocardial oxygen consumption [2]. In addition, amiodarone increases plasma cholesterol [3]. Similarities between the effects of amiodarone and of hypothyroidism are striking. It has therefore been hypothesized that the cardiac effects of amiodarone can be explained-at least partly-by the induction of a local 'hypothyroid-like' condition in the heart [3]. In this review we present evidence in favour of this hypothesis derived from experimental animal studies focussing on nuclear T3 receptors and thyroid hormone-dependent gene expression in the heart. isoforms of TRs: TRa1 and TRb1 (which both bind the ligand T3) and TRa2 (which does not bind T3 but is able to bind to thyroid response elements (TRE) and may exert a dominant negative effect on gene expression). The heart is a predominantly TRa1 organ, although TRb1 is also expressed albeit at a lower level. In rats made hypothyroid by adding 0.05% propylthiouracil to their drinking water for 2 weeks, we did not observe any changes in atrial or ventricular gene expression of the three TRs relative to controls, neither at the mRNA level nor at the protein level [4]. 
Other models, however, did show changes in the expression of particular TR isoforms in hypothyroid rats [5,6]. The discrepancy can be explained by the more severe hypothyroidism in these earlier studies by using propylthiouracil for a longer period of time (six instead of our 2 weeks) or combining propylthiouracil with a low iodine diet [5,6]. Because in our model cardiac TR levels are not affected by thyroid hormone deficiency, observed changes in the expression of T3-dependent genes in the hypothyroid heart are likely attributed to a low occupancy rate of TR with T3. There are a number of T3-responsive genes in the heart encoding for proteins involved in cardiac contractility. Examples are the sarcoplasmic reticulum calcium-activated ATPase (SERCA2a) which is responsible for the calcium reuptake during the diastole and is activated by T3 [7], and the two myosin heavy chains a and b (aMHC and bMHC, respectively) which are myofibrillar proteins that make up the thick filament of the cardiac myocyte contractile apparatus. In rodents, transcription of aMHC is activated by T3, whereas transcription of bMHC is repressed by T3 [8,9]. In our experimental model, hypothyroidism was associated with a downregulation of aMHC and an upregulation of bMHC (both at the mRNA and at the protein level) in atria and ventricles; SERCA2a was significantly downregulated in atria and ventricles [4]. The observed changes are in good agreement with previous reports in the literature [8][9][10]. Downregulation of aMHC (the fast myosin with higher ATPase activity) under simultaneous upregulation of bMHC (the slow myosin), together with the downregulation of SERCA2a explains to a certain extent the decreased cardiac contractility associated with hypothyroidism [1]. Modulation of aMHC transcription is linked to the TRa1 isoform, whereas transcription of bMHC and SERCA2a genes seems to be under control of TRb1 [11]. Amiodarone Amiodarone treatment (100 mg/kg/day orally for 2 weeks) influences cardiac TR mRNA expression [12]: TRa1 is decreased in the right atrium but increased in the left ventricular wall, TRa2 remains unchanged at these locations, and TRb1 is decreased both in the right atrium and the left ventricular wall. The overall downregulation of TR by amiodarone is similar to the reported downregulation of TRa1 and TRb1 in the post-infarcted rat heart, which shows a hypothyroid cardiac phenotype [13]. Amiodarone treatment also influenced thyroid hormonedependent gene expression in our experimental rat model at the mRNA level [12]: SERCA2a was reduced in the right atrium, aMHC was reduced both in the right atrium and left ventricular apex, whereas bMHC was increased in the right atrium, left ventricular wall and apex. The findings are in good agreement with the literature [14]. The data strongly suggests that amiodarone induces a hypothyroid-like phenotype with regard to T3-dependent gene expression in the heart. To learn about the mechanism by which amiodarone exerts these effects, we performed a number of in vitro and in vivo studies. First, in vitro experiments demonstrated that amiodarone via its main metabolite desethylamiodarone (DEA) acts as a competitive inhibitor of T3 binding to TRa1 (IC50 value 30 ± 3.9 lM) and as a noncompetitive inhibitor of T3 binding to TRb1 (IC50 value 71 ± 3.4 lM) [15,16]. Next to inhibition of T3 binding to TR, DEA may further affect T3-dependent gene expression by inhibition of co-activator binding to TR and inhibition of the TR binding to TRE [17,18]. 
Second, DEA concentrations in the rat heart after amiodarone treatment (100 mg/kg/day orally for 2 weeks) are in the micromolar range (14 lmol/kg) [19], close to the IC50 values of DEA for inhibiting T3 binding to TR in vitro. The T3 concentrations in rat heart are 4 times lower in amiodarone-treated animals than in control animals (1 vs 4 nmol/kg, respectively) [20]. The marked decrease in myocardial T3 concentration is related to the decrease of plasma T3 and the impaired entrance of plasma-derived T3 in hearts of amiodarone-treated animals [21]. The reduction of cardiac TR and T3 concentrations will result in all likelihood in a low occupancy of TR with T3, which favours the inhibitory effects of DEA [15,16]. The finding that amiodarone modulates the gene expression of both aMHC (mediated via TRa1) and bMHC and SERCA2a (mediated via TRb1) is in line with the inhibitory effect of DEA on the binding of T3 to both TRa1 and TRb1. Dronedarone Dronedarone is a newly developed antiarrhythmic drug, structurally related to amiodarone. It lacks, however, the iodine moiety of amiodarone, and thereby iodine-related toxicity. Dronedarone like amiodarone has antiadrenergic effects as well as blocking effects on many ion channels. Dronedarone possesses rate-control and rhythm-control properties, and seems to be safe and effective in preventing recurrence of atrial fibrillation [22]. We wondered if part of the pharmacological actions of dronedarone could also be attributed to induction of a local hypothyroid-like condition in the heart. In vitro experiments demonstrate that debutyldronedarone (the major metabolite of dronedarone) inhibits the binding of T3 to TRa1 (IC50 value 59 ± 4.1 lM) much more strongly than the binding of T3 to TRb1 (IC50 value 280 ± 29 lM) [23]. Inhibition of T3 binding to TRa1 by debutyldronedarone is competitive in nature. Treating rats with dronedarone (100 mg/kg/day orally for 2 weeks) decreases TRa1 mRNA in the right atrium, decreases TRb1 mRNA in right atrium, left ventricular wall and apex, whereas it does not affect TRa2 mRNA in the heart [12]. With regard to T3-dependent gene expression, dronedarone did not change aMHC, bMHC and SERCA2a expression in the heart [12]. Pantos et al. [24] also did not observe a change in bMHC or SERCA2a cardiac expression in rats treated with dronedarone (90 mg/kg/d orally for 2 weeks), but did find a significant decrease in aMHC and heart rate. These findings are most interesting: the presence of an effect of dronedarone on heart rate and aMHC (both TRa1 mediated) and the absence of an effect of dronedarone on bMHC and SERCA2a (both TRb1 mediated) reinforce the in vitro findings that dronedarone acts as a selective TRa1 antagonist. This has also been demonstrated in another rat study in which treatment with amiodarone reduced the expression of two TRb1-dependent genes (as evident from a lower LDL receptor protein concentration and a lower iodothyronine-5 0 -deiodinase-activity in liver), whereas treatment with dronedarone did not [23]. Whether dronedarone like amiodarone also induces a local hypothyroid-like condition in the heart is less clear. However, further biochemical and functional studies showed many similarities in hearts of hypothyroid and dronedarone-treated rats, leading these authors to conclude that dronedarone treatment results in cardioprotection by selectively mimicking hypothyroidism [24]. 
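Looking back at the concentration argument above (micromolar cardiac DEA versus its IC50 values, and nanomolar cardiac T3), a back-of-the-envelope competitive-binding calculation helps make the point about receptor occupancy. The Python sketch below is illustrative only: it treats the reported IC50 of DEA at TRalpha1 as a rough stand-in for Ki, assumes a low-nanomolar T3 dissociation constant (not measured in the studies cited here), and equates tissue contents per kg with molar concentrations.

```python
def occupancy(ligand_nM: float, kd_nM: float,
              inhibitor_uM: float = 0.0, ki_uM: float = float("inf")) -> float:
    """Fractional receptor occupancy by T3 with a competitive inhibitor:
    occ = [L] / ([L] + Kd * (1 + [I]/Ki))."""
    kd_app = kd_nM * (1.0 + (inhibitor_uM * 1000.0) / (ki_uM * 1000.0))
    return ligand_nM / (ligand_nM + kd_app)

KD_T3 = 0.2        # assumed T3-TRalpha1 Kd (nM); illustrative value only
DEA_HEART = 14.0   # myocardial DEA after amiodarone, ~14 umol/kg, taken as ~14 uM
KI_DEA = 30.0      # IC50 of DEA at TRalpha1 (~30 uM) used as a proxy for Ki

# Control heart: T3 ~4 nmol/kg, no DEA
print(f"control:    {occupancy(4.0, KD_T3):.2f}")
# Amiodarone-treated heart: T3 ~1 nmol/kg plus cardiac DEA
print(f"amiodarone: {occupancy(1.0, KD_T3, DEA_HEART, KI_DEA):.2f}")
```

Even with these rough numbers, the fourfold drop in cardiac T3 and the presence of DEA act in the same direction, lowering TR occupancy and favouring the inhibitory effects described above.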
Hypothyroid cardiac phenotype Amiodarone treatment, like hypothyroidism, lowers heart rate, lenghthens the QTc interval, and lowers aMHC gene expression in the heart; these effects are TRa1-mediated effects. Amiodarone treatment, like hypothyroidism, increases bMHC and decreases SERCA2a gene expression in the heart; both effects are TRb1-mediated effects. The data provides supportive evidence for the hypothesis that amiodarone induces a hypothyroid-like condition in the heart. Amiodarone apparently switches gene expression back into foetal programming of particular cardiac genes, which might have survival value for the organism. The hypothesis is further strengthened by recent data from a microarray analysis of 8,435 genes in the left ventricular myocardium of rats [25]. There was a very significant similarity in expression profiles between hypothyroid and amiodarone-treated rats (R = 0.63, P \ 0.00001); the correlation became even stronger when the top 100 up-regulated and 100 down-regulated genes in hypothyroidism were analyzed (R = 0.78, P \ 0.00001). As a final remark, however, it should be mentioned that not all pharmacological actions of amiodarone can be explained from the induction of a local hypothyroid-like condition in the heart. Evaluating the complete ion channel repertoire by real time PCR in hearts of mice treated with amiodarone, it became obvious that changes in transcript levels sometimes were similar to those seen in hypothyroid mice, but very frequently were completely different from the hypothyroid phenotype [26,27]. Nevertheless, downregulation of the effect of thyroid hormone in the heart results in what has been called ''cardiac metamorphosis'' [28], which by decreasing energy demands and increasing energy availability might be advantageous with potential therapeutic implications. Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
2017-08-02T22:36:40.633Z
2008-12-19T00:00:00.000
{ "year": 2008, "sha1": "5905dcfb45161514bb50d262a9b84f4980666473", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10741-008-9131-9.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "51ccb8ff26adf962dcdf58ff568b25c1a317bd49", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
247076843
pes2o/s2orc
v3-fos-license
Tolerability of COVID-19 mRNA vaccines in patients with postural tachycardia syndrome: a cross-sectional study Background: Postural tachycardia syndrome (POTS) is a form of autonomic dysregulation. There is increasing evidence that the etiology may be immune-mediated in a subgroup of patients. Patients with POTS often experience an exacerbation of their symptoms associated with (viral) infections and often fear the same symptom aggravation after vaccination. In this report we describe the tolerability of messenger ribonucleic acid (mRNA) vaccines against coronavirus disease 19 (COVID-19) and the consequences of a COVID-19 infection on POTS symptoms in our cohort of patients with neuropathic POTS. Methods: We conducted a standardized, checklist-based interview with 23 patients and recorded the acute side effects of mRNA vaccination, acute symptoms of COVID-19 infection as well as the effects of vaccination and COVID-19 infection on POTS symptoms. Results: Of all included patients, 20 patients received two mRNA vaccines without having had a previous COVID-19 infection, and five patients in total had suffered a COVID-19 infection. Of these, three had COVID-19 without and two after being vaccinated. No increased frequency of side effects after both doses of mRNA vaccines was observed. Six patients reported a mild and short-term aggravation of their POTS symptoms beyond the duration of acute vaccine side effects. All five patients who suffered a COVID-19 infection subsequently reported a pronounced and persistent exacerbation of POTS symptoms. Conclusions: Our observations suggest that mRNA vaccines are not associated with a higher frequency of acute side effects in patients with POTS. Symptom exacerbation as a consequence of mRNA vaccination seems to be less frequent and of shorter duration compared to patients who suffered a COVID-19 infection. Introduction Postural tachycardia syndrome (POTS) results from autonomic dysregulation.It is characterized in adults by a clinically symptomatic, sustained increase in heart rate of more than 30 beats per minute within 10 minutes of standing or head-up tilt testing, in the absence of orthostatic hypotension. 1,24][5][6] Many patients with POTS additionally report non-orthostatic symptoms of autonomic origin such as fatigue, gastrointestinal complaints, sleep disturbances, restless legs symptoms and exercise intolerance. 7,8The exact etiology of POTS is still unknown, although in recent years evidence has accumulated that in a subset of patients with POTS the pathogenesis of dysautonomia may be immune-mediated. 9,10The onset of POTS is frequently reported after an immunologic stressor, with a female predominance. 11,12Up to 50% of patients with POTS describe a viral infection as the trigger of their symptoms. 12,13Patients also often report that infections (especially viral illnesses) are triggers for a prolonged symptom exacerbation, even after the subsiding of the acute infection. 4,13,14Individuals with POTS are more likely to be affected by comorbid autoimmune diseases than the average population. 8,15,169][20] Additionally, antibodies against angiotensin II type 1 receptors and abnormal levels of inflammatory biomarkers were reported. 21,22Despite the presence of these antibodies, their role in the complex pathophysiology of autonomic dysfunction in POTS remains unknown. 15mmunomodulatory treatment with intravenous immunoglobulins has shown a positive effect on the symptoms of patients with POTS, further supporting an immune-mediated genesis. 
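For readers less familiar with the haemodynamic definition given at the start of this section, the following small Python sketch encodes only the adult heart-rate criterion (a sustained increase of more than 30 bpm within 10 minutes of standing or head-up tilt, in the absence of orthostatic hypotension). The orthostatic-hypotension thresholds (drop of at least 20 mmHg systolic or 10 mmHg diastolic) follow the usual consensus definition rather than anything stated in this paper, and the example readings are invented.

```python
def meets_pots_hr_criterion(supine: dict, standing: dict) -> bool:
    """supine/standing: dicts with 'hr' (bpm), 'sbp' and 'dbp' (mmHg),
    where 'standing' is measured within 10 minutes of standing or tilt."""
    hr_rise = standing["hr"] - supine["hr"]
    orthostatic_hypotension = (
        supine["sbp"] - standing["sbp"] >= 20
        or supine["dbp"] - standing["dbp"] >= 10
    )
    # Adult criterion: sustained rise > 30 bpm without orthostatic hypotension
    return hr_rise > 30 and not orthostatic_hypotension

# Invented example: symptomatic tachycardia on tilt testing
supine   = {"hr": 72, "sbp": 112, "dbp": 70}
standing = {"hr": 110, "sbp": 108, "dbp": 74}
print(meets_pots_hr_criterion(supine, standing))  # True
```

The full diagnosis additionally requires chronic, clinically relevant symptoms, as reflected in the diagnostic work-up of the cohort described below.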
235][26][27][28][29][30] Nearly all affected individuals were females without pre-existing conditions who developed symptoms of autonomic dysfunction several days or weeks after an acute COVID-19 infection and there was no association with initial COVID-19 severity. 24,31,32Vaccines based on messenger ribonucleic acid (mRNA) technology are being used to combat the COVID-19 pandemic.The mRNA provides the body with the genetic code of the virus, which is then translated in the host cells and as a consequence, spike proteins are built.These act as antigens and trigger an immune response, as a result of which neutralizing antibodies against SARS-CoV-2 are formed. 33There is one case report in which POTS was diagnosed in a previously healthy, 42-year-old male following the first dose of mRNA vaccination. 34 have observed that patients with POTS are hesitant towards vaccination in general and especially towards the new mRNA vaccines because they often fear aggravation of their symptoms.On the other side, it is reasonable to assume that a COVID-19 infection in patients with POTS may trigger a prolonged symptom amplification as it is commonly observed with infections. The aim of this study was to assess the tolerability and side effects of the two COVID-19 mRNA vaccines used in Switzerland (Spikevax ® , Moderna; BNT162b2 ® , Pfizer) in a cohort of patients with POTS, and to assess possible consequences of a COVID-19 infection on POTS symptoms. Patients We conducted a standardized checklist-based interview with all patients who had been diagnosed with neuropathic POTS and were followed in the Autonomic Unit of the Departments of Neurology and Neurosurgery, University Hospital Bern, Bern, Switzerland.All available patients were contacted by telephone and asked if they were interested in participating in the study after checking the eligibility criteria.If interested, they were sent the informed consent form.After receiving the signed consent form the interview took place.The structured interviews were performed by one of two authors (KJ or BR) either by telephone or during a routine consultation.Data was collected between November 2021 and January 2022.All contacted patients agreed to participate in the study and provided written informed consent for the collection and publication of their data.Potential bias was minimized by the standardization and structuring of the interview.The interviewer strictly followed the predetermined interview checklist (please see the extended data for the used interview checklist). 48The study was carried out in accordance with the Declaration of Helsinki.The diagnosis of POTS had been made according to medical history, physical and neurological examination, cardiovascular autonomic function testing, thermoregulatory sweat test and/or quantitative testing of sudomotor axon reflex, determination of autoantibodies against G-protein-coupled receptors, measurement of plasma norepinephrine levels and skin biopsy in selected patients. 1,35ligibility criteria Patients had to meet the following inclusion criteria: confirmed diagnosis of neuropathic POTS, aged between 18 and 60 years, received two COVID-19 mRNA vaccine doses ≥ 1 month prior to the interview, or recovered from COVID-19 infection ≥ 1 month prior to the interview. 
Interview checklists To evaluate the tolerability of COVID-19 mRNA vaccines, the following data were collected during the interviews: Date(s) of vaccination and type of vaccine (BNT162b2 ® , Pfizer BioNTech, New York, NY or Spikevax ® , Moderna, Cambridge, MA).The following, previously published side effects of vaccines [36][37][38] were assessed (for the first and second dose of the vaccine separately) in their presence (yes/no) and duration (days): fever, shivering, fatigue, headache, joint pain, muscle pain, nausea, emesis, diarrhea, and reaction at injection site (pain, swelling and cutaneous reaction). 49 patients who had suffered a COVID-19 infection, the following additional symptoms were queried: coughing, sore throat, rhinorrhea, breathlessness, loss of taste, loss of smell and chest pain.For each symptom, the presence (yes/no), severity (mild, moderate, severe) and duration (in days) were evaluated.Furthermore, the duration of the infection, need for hospitalization and incapacity for work were assessed. To assess possible exacerbation of POTS symptoms due to mRNA vaccination and COVID-19 infection, the presence (yes/no), severity (mild, moderate, severe; for COVID-19 infection only) and duration of symptom exacerbation (in days) for the following symptoms were evaluated: dizziness, nausea, weakness, palpitations, lightheadedness, tremulousness, blurred vision, concentration difficulties, memory difficulties, orthostatic leg and/or arm pain, gastrointestinal symptoms, sleep disturbances, restless legs syndrome and orthostatic headache.During the interview, symptom aggravation was assessed separately for the first and second dose of the vaccine, and COVID-19 infection, from the patients memory.Furthermore, adjustment of therapy and inability for work due to symptom exacerbation were assessed. Data analysis The data analysis was descriptive and performed using Statistical Package for the Social Sciences (SPSS Statistics) Version 25.0 (IBM Corp., Armonk, NY, USA).Data are reported either as frequencies, mean (range) or median (range).All interviews were fully completed, so there were no missing data. Patients A total of 23 patients, two men (8.7%) and 21 women (91.3 %) with diagnosed neuropathic POTS and a mean age of 26.65 (range 18-40) years, were included in this study and interviewed once.In total, 20 patients who had been vaccinated twice and had not previously suffered a COVID-19 infection were assessed for side effects of the vaccinations.Of the 23 patients included in this study five (21.7%) had suffered a COVID-19 infection; three before and two after two doses of mRNA vaccination. 48ute side effects of mRNA vaccination Frequencies of published acute side effects of mRNA vaccination reported by our POTS cohort are shown in Table 1.All included patients received the first dose between April and September 2021 and the second dose between May and October 2021.After the first dose, patients were unable to work for a mean of 0.35 (range 0-7) days and after the second dose for a mean of 1.05 (range 0-3) days.No allergic reactions were observed. Acute symptoms of COVID-19 infection Acute symptoms of COVID-19 infection are summarized in Table 2. Mean duration of infection was 16.4 (range 10 -27) days.None of the patients had to be hospitalized.Mean duration of incapacity for work was 18.8 (range 10 -28) days. 
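Since the analysis is purely descriptive, a short sketch of how the reported summary measures (frequencies, mean with range, median with range) can be derived from interview records may be useful; the record structure and values below are invented placeholders, not study data.

```python
from statistics import mean, median

# Invented interview records: one dict per patient, duration in days
records = [
    {"fatigue": True,  "fatigue_days": 2},
    {"fatigue": True,  "fatigue_days": 1},
    {"fatigue": False, "fatigue_days": 0},
    {"fatigue": True,  "fatigue_days": 4},
]

n = len(records)
with_symptom = [r for r in records if r["fatigue"]]
durations = [r["fatigue_days"] for r in with_symptom]

print(f"fatigue: {len(with_symptom)} ({100 * len(with_symptom) / n:.1f}%)")
print(f"mean duration {mean(durations):.1f} (range {min(durations)}-{max(durations)}) days")
print(f"median duration {median(durations)} (range {min(durations)}-{max(durations)}) days")
```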
Effect of mRNA vaccination on POTS symptoms The reported increase of POTS symptoms after mRNA vaccination is shown in Table 3. An increase of POTS symptoms was reported by three patients after the first and by five patients after the second vaccination. Mean duration of symptom increase was seven days (range 1-14). None of the patients needed an adjustment of their symptomatic therapy for POTS, and no incapacity for work was reported. Consequences of COVID-19 infection regarding POTS symptoms The effects of COVID-19 infection on POTS symptoms are shown in Table 4. In addition to the incapacity for work reported above, one patient (Patient 3) had to reduce her existing workload for two more months. Adjustment of symptomatic POTS treatment was necessary in all patients. Discussion The present study investigated the frequencies of known side effects of mRNA vaccination (Spikevax®, Moderna; BNT162b2®, Pfizer) in patients with POTS. In addition, possible effects on POTS symptoms were assessed and compared with the impact of a COVID-19 infection. Vaccine side effects were present in 20 (100%) patients for both vaccinations. The most frequently reported side effects of mRNA vaccines were pain at the injection site (70% after the first, 85% after the second dose), fatigue (50% after the first, 80% after the second), headache (30% after the first, 75% after the second), fever (20% after the first, 65% after the second) and shivering (15% after the first, 65% after the second). Side effects were generally reported more frequently after the second vaccination [38]. Only six patients reported mild worsening of their POTS symptoms after vaccination beyond the duration of the acute side effects, for a mean duration of seven days (maximum 14 days). The observed increase in symptoms occurred more frequently after the second vaccination. This is similar to findings of studies examining the effects of mRNA vaccination on disease activity in patients with autoimmune inflammatory rheumatic diseases, which showed no higher incidence of side effects compared with healthy subjects and no greater risk of disease flares [39,40]. Similarly, patients who suffered from post-COVID symptoms of dysautonomia did not report a worsening of symptoms after getting vaccinated [41]. In contrast, patients suffering a COVID-19 infection experienced a pronounced and prolonged aggravation of their POTS symptoms for several months. Due to the symptom increase, all patients needed an adjustment of their symptomatic POTS therapy and had prolonged incapacity for work. Interestingly, symptom exacerbation due to a COVID-19 infection was also observed in two previously vaccinated patients. However, both patients had been vaccinated more than six months prior to the infection, at a time when booster vaccinations were not yet available for this priority group in Switzerland. In these two patients, there was a tendency towards a milder and shorter exacerbation of POTS symptoms compared with non-vaccinated patients. Most patients with POTS experience a prolonged increase of their symptoms in the context of infections (especially of viral etiology) [4,13,14]. In general, hypovolemia, fever and bedrest can intensify POTS symptoms [26,42,43]. Furthermore, in patients with possible immune-mediated POTS, symptom aggravation is most likely due to a general immunological activation. Besides this, SARS-CoV-2 appears to affect the autonomic nervous system directly, which could be an additional factor for aggravation.
[26][27][28][29][30] Several hypotheses about possible pathomechanisms of POTS, or dysautonomia in general, after COVID-19 infection have been proposed: imbalance of the renin-angiotensin-aldosterone system [26,29], brainstem involvement [43,45,46], autoreactivity to antibodies against SARS-CoV-2 [25,43,47], dyshomeostasis of the immune response [42,44] and denervation of peripheral sympathetic nerve fibers [26,42,43]. This study has some limitations. Due to the small number of cases (especially of POTS patients with COVID-19 infection), generalizability cannot be fully derived. Furthermore, the retrospective collection of data by interview bears the risk of inaccurate symptom recollection and reporting by patients. Finally, effects of mRNA booster vaccinations and other types of vaccination were not recorded in this study. This is an important contribution in the wake of vaccination hesitancy among people with neurologic disease. The study provides early scientific evidence that POTS, a condition that may also occur after infections, is not associated with an increased risk of side effects of SARS-CoV-2 vaccination. Importantly, pronounced and prolonged symptom exacerbation was observed with COVID-19 in patients with POTS. This observation should serve as an additional line of argumentation for the safety and efficacy of SARS-CoV-2 vaccination and identifies patients with POTS as a vulnerable cohort for a detrimental outcome of COVID-19. Table 1. Acute side effects of messenger ribonucleic acid vaccination. Table 3. Effect of mRNA vaccination on POTS symptoms.
2022-02-24T16:07:21.127Z
2022-02-22T00:00:00.000
{ "year": 2022, "sha1": "84445cebb61e4010e171fc8e76877ec3ef72dd03", "oa_license": "CCBY", "oa_url": "https://f1000research.com/articles/11-215/v1/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16f6c238a5dc7e5db937a54f0de564fe968e1a66", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211685802
pes2o/s2orc
v3-fos-license
Embedding Gesture Prior to Joint Shape Optimization Based Real-Time 3D Hand Tracking In this paper, we present a novel approach for 3D hand tracking in real-time from a set of depth images. In each frame, our approach initializes hand pose with learning and then jointly optimizes the hand pose and shape. For pose initialization, we propose a gesture classification and root location network (GCRL), which can capture the meaningful topological structure of the hand to estimate the gesture and root location of the hand. With the per-frame initialization, our approach can rapidly recover from tracking failures. For optimization, unlike most existing methods that have been using a fixed-size hand model or manual calibration, we propose a hand gesture-guided optimization strategy to estimate pose and shape iteratively, which makes the tracking results more accuracy. Experiments on three challenging datasets show that our proposed approach achieves similar accuracy as state-of-the-art approaches, while runs on a low computational resource (without GPU). Moreover, most of these approaches rely on high computational resources or are unable to achieve real-time performance. Based on the above considerations, we propose a novel approach that embeds hand gesture prior to joint shape optimization to accomplish such a task. Compared to previous works, our proposed approach has several contributions. We propose a GCRL network for pose initialization, which achieves a robust and efficient result compare with directly regressing the key-points or per-pixel hand part classification. The results of the GCRL network are treated as an initial solution for iterative optimization and prevent tracking loss caused by self-occlusion or fast hand motion. Additionally, based on the GCRL network, we design a progressive method that constructs the initial solution from inferred gesture and previous hand shape. This strategy increases the stability of the tracking result. We propose a gesture-guided optimization to estimate the pose and shape of the input hand. During the model fitting, we only optimize the visible bone of hand(which depends on current hand gesture). This may reduce the computations of jointly tracking the pose and shape. In effect, we seamlessly combine the GCRL network and gesture-guided optimization to construct a pipeline(as shown in Figure.2). We conduct a set of experiments on three public datasets to evaluate our approach. The remainder of this paper is structured as follows: In section 2, we discuss the related works on hand tracking. In section 3, we describe the GCRL network, hand pose/shape model used in our work and real-time tracking approach. In section 4, we analyze the performance of the hand tracking approach and provide comparisons to the state-of-theart approaches. Finally, we conclude the paper with a brief discussion and a conclusion in section 5. Discriminative methods [48] mostly tend to learn a function from large quantity of training data, and directly map features to corresponding hand poses. Ge et al. [12] fed a set of hand parts rendered from different views into a 2D CNN for pose regression. Zhou et al. [50] embedded the joint forward kinematic process into a regression network to improve the accuracy. In [28], Oberweger et al. compared several different CNN architectures and chose the best one to estimate hand joint locations. These methods attempt to FIGURE 2. Overview of our proposed approach. 
Given a depth image, we first crop the hand region and feed it into our trained GCRL network; the GCRL network estimates the gesture and root location of the hand(global position of hand). Additionally, we extract the 3D point cloud and 2D silhouette from the original image. Then, combining the hand initialization, we perform gesture-guided pose/shape optimization that incorporates both data fitting and prior limits to ensure accurate and robust hand tracking. VOLUME 8, 2020 estimate the hand joint configuration directly and usually do not use temporal information and other priors. Generative methods synthesize observations that are compared to inputs. Then, through optimization, the hand pose that most accurately matches the observation is identified. Oikonomidis et al. [29] fit a hand (spheres/cylinders model) to the depth image with particle swarm optimization(PSO). Qian et al. [31] used an ICP-PSO method to find the possible pose that matches the observed hand point cloud. Sharp et al. [34] adopted a mesh model for tracking in a render-and-compute framework and achieved efficient performance. Melax et al. [24] used rigid body dynamics(RBD) to fit the input depth image and achieves robust results. Generative methods usually focus on more efficient methods for adaptive hand models, model representations, and optimization strategies. They are more computationally complex than discriminative approaches and prone to local minima due to the fast hand motion and complex articulate structure. These two classes of methods have had a slight overlap over the last years. Hybrid methods combine discriminative and generative methods to improve the robustness and preciseness of model fitting with per-frame pose reinitialization. Tompson et al. [43] first estimated the 2D joint's location with a heatmap and then refined the alignment with PSO-based model fitting. Another type of hybrid algorithm replaces the correspondences found by the iterative closest point(ICP) with the result of a per-pixel classifier(Random Forest(RF) or Convolutional Neural Network(CNN)). Sridhar et al. [35] optimized the hand pose using a detection guide strategy. They achieved correspondences through a per-pixel random forest classifier. These point-to-model correspondences can guide the fitting process. In a modern motion tracking system, model calibration plays a core role. Unlike face or body model, the large number self-occlusions of hand, the heavy noise of depth sensors, and globally unconstrained pose, and, the self-occlusion of fingers makes the hand shape calibration harder. Current work of hand calibration mainly includes offline and online calibration. Offline calibration. Taylor et al. [40] generated hand models from input depth frames while the user needs to rotate his fingers manually. Tan et al. [39] personalized a hand model from a set of depth measurements using a trained shape basis and achieved a robust result. Remelli et al. [32] calibrated a hand sphere-mesh in a low dimensional shape-space with a multi-stage optimization. Online calibration. de La Gorce et al. [7] tracked hand from a set of RGB images with a model with preset hand shape(in the first frame). Makris and Argyros [23] proposed a model-based approach to jointly solve the pose and shape estimation problem in a tracking system. They fit a set of frames with a fixed-size cylindrical model of a hand, it runs in 30 fps, but the render-and-compute framework limits future improvement. Tkach et al. 
[42] used an online optimization algorithm that jointly estimates pose and shape in each frame. Moreover, it is more robust and precise than other algorithms. It leverages the estimated shape parameter confidence and builds a tracking sphere model. However, their approach needs to run on high computing resources (GPUs). Compared to offline methods, online algorithms can offer immediate feedback to the user and have the potential to adapt dynamically to hands with different shapes. More importantly, they will improve the accuracy of hand tracking systems. In recent years, most works in this direction [12], [48], [50] have been based on end-to-end networks. However, existing deep-based approaches mainly have the following problems. Most of them require expensive computing resources (GPUs) to achieve real-time performance, which limits their scope of use. Existing networks improve the generalization to hands with different shapes only by normalizing the input data or adding training data. Moreover, in some scenarios of virtual/augmented reality, such as interacting with deformable objects, the application requires more details of the user's hand, such as the hand-to-object contact position, and estimating only the hand joint locations is not enough. III. OUR APPROACH A. OVERVIEW In this paper, we denote pose θ and shape β jointly as [θ, β]. The goal of our hand pose and shape tracking is to find a hand model with parameters [θ, β] that best matches the input depth image, and [θ, β] must be plausible and realistic. We propose a regularized hand gesture-guided optimization that carefully balances data fitting with suitable prior limits to solve the problem. Our data fitting term contains both 3D point-to-model and 2D silhouette constraints. The 3D point-to-model constraint ensures that each depth point is close to the tracked model; it pulls the hand model toward the depth points. The 2D silhouette constraint pushes the tracked model into the depth contour. Additionally, we adopt a set of hand priors to ensure that the recovered θ complies with the constraints of reality. Above all, we aim to solve the following optimization problem (Equation 1): minimize, over [θ, β], the sum of the data-fitting energies E_3D and E_2D and the prior energies (joint rotation, collision, and temporal limits). Recently, learning-based initialization has been widely used in hand tracking. These methods attempt to guess a coarse θ_init that is near the global minimum of θ. Using θ_init as the starting 'seed' avoids becoming trapped in multiple local minima and improves convergence speed. We adopt a similar strategy; our approach comprises per-frame initialization and model fitting. We first pre-process the input depth image (including hand region-of-interest (ROI) cropping, noise filtering, and 2D silhouette extraction). The hand ROI is then fed into a trained GCRL network to estimate the hand gesture and hand root location, thus providing an 'initialization'. Finally, starting from the initial pose, we optimize [θ_init, β] until it satisfies both the data-fitting and the prior limits. The following sections describe these components. We begin with our GCRL network and pose/shape model, and then describe the definition of our energy function and optimization strategy. B. INITIALIZATION Due to the highly non-convex objective function, existing approaches use reinitializers such as multi-layer random forests [20], convolutional neural networks [43], or fingertip detection [31] to search for a single good pose solution. 
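Before turning to the initializer itself, the per-frame pipeline outlined in the overview above (pre-processing, GCRL initialization when needed, then gesture-guided fitting) can be summarized with the minimal sketch below. Every function name, data layout, and the error threshold here are illustrative placeholders for the components described in the rest of this section, not the authors' implementation.

```python
import numpy as np

def track_frame(depth_image, prev_state, gcrl_infer, optimize, error_threshold=12.0):
    """One tracking step: pre-process, (re)initialize if needed, then fit pose and shape.

    prev_state: dict with 'theta', 'beta' and the fitting error of the previous frame.
    gcrl_infer: callable roi -> (gesture_pose, root_position); stands in for the GCRL net.
    optimize:   callable (theta_init, beta_init, points, silhouette) -> (theta, beta, error);
                stands in for the gesture-guided pose/shape optimization.
    """
    roi, points, silhouette = preprocess(depth_image)

    if prev_state is None or prev_state["error"] > error_threshold:
        # Per-frame reinitialization: the gesture gives a discrete pose vector,
        # the root heatmap gives the global position; the shape is carried over.
        gesture_pose, root = gcrl_infer(roi)
        theta_init = compose_pose(gesture_pose, root, points)
        beta_init = prev_state["beta"] if prev_state else default_shape()
    else:
        theta_init, beta_init = prev_state["theta"], prev_state["beta"]

    theta, beta, error = optimize(theta_init, beta_init, points, silhouette)
    return {"theta": theta, "beta": beta, "error": error}

def preprocess(depth_image):
    """Placeholder for ROI cropping, noise filtering and 2D silhouette extraction."""
    roi = depth_image
    points = np.argwhere(depth_image > 0).astype(float)
    silhouette = depth_image > 0
    return roi, points, silhouette

def compose_pose(gesture_pose, root, points):
    """Combine the discrete gesture pose with the estimated global root position."""
    return np.concatenate([root, gesture_pose])

def default_shape(num_bones=20):
    return np.ones(num_bones)
```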
In [34], Sharp et al. realized that it is hard to predict a single good pose solution. They designed a two-layer tree-based hand pose reinitializer to predict a distribution over poses. The first layer exclusively focuses on predicting the global hand rotation. The second layer is trained to infer finger rotations and other elements. In [40], Taylor et al. used a retrieval forest to search for the four postures closest to the real value, and this strategy achieved robust results. Inspired by this approach, we believe that the result of the reinitializer should remain stable over a sequence. Furthermore, a hand gesture can be treated as a label that represents one of a set of discrete predetermined posture vectors [8], [19]. Based on the above considerations, we split the pose reinitialization/estimation problem into two parts: gesture/pose classification and root position location. The proposed GCRL network solves these two sub-tasks: gesture classification and root position location. The root location stream regresses a per-pixel likelihood heatmap for the hand root, so we can effectively estimate the global position of the hand. It has an encoder and a corresponding decoder, followed by a pixel-wise likelihood heatmap layer, as shown in Figure 4. First, we use three convolution (CONV) layers (each followed by a max-pooling layer) to down-sample the input, and four convolution layers to capture the low-level image features. Then we perform two unpooling operations between convolutions to up-sample the given depth features to a heatmap of the hand root. The unpooling process balances computational time and accuracy. The per-pixel likelihood heatmap is supervised against the ground truth, where R*_c and R_c denote the ground-truth and estimated heatmaps for the hand root, respectively. The gesture classification stream identifies the category of the gesture. In our experiment, we classify the gesture by employing intermediate features of the hand. The classification branch has five additional convolutions and two fully connected layers, and outputs the probabilities of 17 classes. As shown in Figure 3, each gesture corresponds to a hand pose vector θ_gesture; from θ_gesture and the global pose (the orientation is the direction of the point cloud, and the position is estimated from the root location sub-network), we can restore the current hand pose θ_initial. The architecture of the GCRL network is shown in Figure 4. We train the GCRL network by minimizing the total training loss, which combines the gesture classification loss and the root-location heatmap loss. The GCRL network provides an initial θ_init for the subsequent optimization phase. In addition, it is used to guide the optimization: to be specific, we optimize only the visible bones (which usually means a high confidence in β) based on the predicted gesture. This reduces the number of optimized parameters. There are two common types of initialization: the first directly estimates joint locations and uses inverse kinematics (IK) to calculate the rotation angle of each joint, and the second performs per-pixel hand part segmentation and then guides the model fitting with the semantic information. The former may not generalize well to different hand shapes; the latter is slow and violates the inherent topological structure. When tracking is lost, our GCRL network first infers the hand gesture; then, we construct a new initial seed from the hand shape of the last tracked frame and the inferred initial posture. This strategy makes our GCRL network robust to various hand shapes and costs little inference time. 
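To make the two-stream supervision concrete, the following is a minimal sketch of a GCRL-style training loss. The paper does not give the loss weights or layer sizes, so this sketch assumes an unweighted sum of a cross-entropy gesture loss and a mean-squared-error heatmap loss, with placeholder layer widths; the use of PyTorch is also purely illustrative (the authors used Caffe).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCRLSketch(nn.Module):
    """Illustrative two-stream network: gesture classification + root heatmap."""
    def __init__(self, num_gestures=17):
        super().__init__()
        # Shared encoder: three conv + max-pool stages for down-sampling.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Decoder with two up-sampling steps producing the root-likelihood heatmap.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 1, 3, padding=1),
        )
        # Classification branch on intermediate features (sizes are placeholders).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_gestures),
        )

    def forward(self, depth_roi):
        feat = self.encoder(depth_roi)
        return self.classifier(feat), self.decoder(feat)

def gcrl_loss(logits, heatmap, gesture_gt, heatmap_gt):
    """Assumed total loss: gesture cross-entropy + per-pixel squared error on the root heatmap."""
    return F.cross_entropy(logits, gesture_gt) + F.mse_loss(heatmap, heatmap_gt)

# Example usage with dummy tensors (96x96 depth ROI, batch of 4).
if __name__ == "__main__":
    net = GCRLSketch()
    roi = torch.randn(4, 1, 96, 96)
    logits, heatmap = net(roi)
    loss = gcrl_loss(logits, heatmap, torch.randint(0, 17, (4,)),
                     torch.rand(4, 1, heatmap.shape[-2], heatmap.shape[-1]))
    loss.backward()
```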
C. POSE AND SHAPE MODEL We use a 3D mesh with triangles M and vertices N to represent the human hand. It parameterizes both pose θ and shape β to deform an N-vertex triangular mesh with a hierarchical skeleton J, as illustrated in Figure 5. To be able to articulate the complex hand motion, we use a standard kinematic skeleton to denote a hierarchy of joints and the transformations between them. We parameterize the hand pose as θ ∈ R^26 (six parameters for global translation and orientation, four for each finger), and the hand shape is encoded via scalar length parameters β ∈ R^20. Given a vector [θ, β] consisting of pose θ and shape β, the surface of the mesh model (including the vertices and corresponding normals) can be computed by a standard computer graphics technique called linear blend skinning (LBS). We refer the reader to [18] for details. Figure 6 shows several tracking templates used in recent model-based real-time hand tracking methods (images courtesy of [24], [31], [32], [35], [38]). The models in [31] and [35] cannot represent the details of the hand. The model in [24] has some problems representing the thumb. [32] used a sphere-mesh model to efficiently estimate the hand pose; however, its number of parameters is too large. To trade off accuracy and computational time, we use the LBS model in our experiments. D. ENERGY FUNCTION The objective energy function plays an important role in modern hand tracking systems, and a key choice is the set of energy terms. In this section, we describe each term in our optimization. We first introduce the 2D/3D data-fitting terms and the computation of correspondences between the input depth and the model. Then, we discuss the prior terms and their benefits in terms of tracking accuracy and robustness. The human hand consists of a set of links (bones) connected by joints. Joints are of two types, rotational or translational; a rotational joint is described by a rotation axis and a rotation angle, and a translational joint is parameterized by a direction vector and the length of the link (bone). In this paper, we denote the set of rotation angles (hand pose) as [θ_1, θ_2, . . . , θ_n], and the lengths of the links (hand shape) as [β_1, β_2, . . . , β_m]. To compute Equation 1, we first introduce the skeleton Jacobian, first proposed in [3]. The skeleton Jacobian J_skel(t) is a [3 × n] matrix, where n is the number of pose parameters θ; it captures which DOFs in the kinematic chain affect each 3D point t. As shown in Figure 7, t_i is a depth point, and s_i is the corresponding point on the surface of the hand mesh. We compute J_skel(t)_{i,j} by manual differentiation. For the j-th joint, let θ_j be its angle of rotation, p_j its position, w_j the LBS weight of p_j, and v_j the vector pointing along its current axis of rotation (see [38]). The corresponding entry ∂s_i/∂θ_j in the skeleton Jacobian matrix for joint j affecting the i-th surface point s_i is ∂s_i/∂θ_j = w_j (v_j × (s_i − p_j)). If the i-th surface point is not affected by the j-th joint, then ∂s_i/∂θ_j = 0. To jointly optimize the hand shape β, we propose a bone Jacobian matrix. The i-th column of J_bone(t) contains the linearization of the i-th bone about t. Each entry is J_bone(t)_{i,j} = ∂s_i/∂β_j = w_j v_j (5). 
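As a concrete illustration of these two Jacobians, the sketch below assembles the rows of J_skel and J_bone for a single surface point. The cross-product form of the rotational entries and the per-joint weights w_j follow the definitions above, but the data layout and helper names are our own, not the paper's.

```python
import numpy as np

def skeleton_jacobian_row(s_i, joints, affects):
    """Rows d s_i / d theta_j for one surface point s_i (3 x n).

    joints:  list of dicts with 'p' (3,) joint position, 'v' (3,) rotation axis,
             'w' scalar LBS weight -- an illustrative structure only.
    affects: boolean mask, True where joint j lies on the kinematic chain of s_i.
    """
    J = np.zeros((3, len(joints)))
    for j, joint in enumerate(joints):
        if affects[j]:
            # Rotational entry: w_j * (v_j x (s_i - p_j)); zero otherwise.
            J[:, j] = joint["w"] * np.cross(joint["v"], s_i - joint["p"])
    return J

def bone_jacobian_row(joints, affects):
    """Rows d s_i / d beta_j: the point moves along the bone direction, scaled by w_j."""
    J = np.zeros((3, len(joints)))
    for j, joint in enumerate(joints):
        if affects[j]:
            J[:, j] = joint["w"] * joint["v"]
    return J

# Toy example: one surface point influenced by two of three joints.
if __name__ == "__main__":
    s = np.array([0.1, 0.2, 0.4])
    joints = [
        {"p": np.zeros(3),               "v": np.array([0.0, 0.0, 1.0]), "w": 0.7},
        {"p": np.array([0.0, 0.1, 0.3]), "v": np.array([1.0, 0.0, 0.0]), "w": 0.3},
        {"p": np.array([0.2, 0.0, 0.0]), "v": np.array([0.0, 1.0, 0.0]), "w": 0.0},
    ]
    affects = np.array([True, True, False])
    print(skeleton_jacobian_row(s, joints, affects))
    print(bone_jacobian_row(joints, affects))
```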
1) DATA ENERGIES a: 3D POINTS ALIGNMENT Our E_3D term computes the corresponding point y ∈ H(θ, β) on the surface model for each sensor point t. We only compute correspondences between depth points and model points on the front-facing part of H(θ, β), which is different from traditional ICP. In our experiments, for computational efficiency, we set the view ray of the camera model to n = [0, 0, 1]^T. We linearize the point cloud alignment energy (pose and shape fitting) as a point-to-plane term of the form E_3D = Σ_t ‖ n^T (J_skel(y) δθ + J_bone(y) δβ) + d ‖², where ‖·‖ denotes the L2 norm, n is the surface normal at y, and d is the distance between y and t. b: 2D SILHOUETTE ALIGNMENT The human hand is highly articulated, and fast motion may cause self-occlusions during tracking. The 3D alignment energy alone does not constrain the occluded parts. Based on this consideration, the term E_2D aligns the 2D silhouette of our hand model (we project the front-facing points as the 2D silhouette) with the 2D silhouette of the input hand region. The E_2D energy term penalizes, for each rendered silhouette point p with 3D location x, the 2D offset between the projection of x and the closest sensor silhouette location q, measured along the 2D normal n at q. Here, we compute the 2D correspondences with a 2D distance transform, and we refer the reader to Appendix B in [38] for more detail on J_persp(y). 2) PRIOR ENERGIES Considering only the data fitting easily leads to unrealistic hand poses. In reality, the motion of the hand is more constrained. We regularize our optimization with finger collision, joint rotation, and temporal priors to ensure that the result is plausible. Each of these terms plays an important role in the stability of our objective function. a: JOINT ROTATION LIMIT To discourage incorrectly tracked postures, we adopt the joint rotation constraint and encode this prior as the energy term E_limit = Σ_i [ ω_1 (θ_i − θ_i^min)² + ω_2 (θ_i − θ_i^max)² ], where θ^max is the vector of maximal joint angles and θ^min the vector of minimal ones. ω_1 is set to one if θ_i < θ_i^min and to zero otherwise; ω_2 is equal to one if θ_i > θ_i^max and to zero otherwise. For θ^min and θ^max, we use the values experimentally determined by [5]. Figure 8 shows an example of the joint rotation constraint. b: COLLISION LIMIT To prevent our model from taking on anatomically incorrect results, e.g., collisions between fingers and palm, we approximate the fingers and palm of our hand with a set of S spheres. Using spheres instead of triangles may reduce the computations of collision detection. Figure 9 shows a sample of the collision limit. The linearized collision energy pushes apart every pair of colliding spheres, where x_i and x_j are the end-points of the shortest segment between the collision spheres S_i and S_j, n_i is the surface normal at x_i, and d is the distance between x_i and x_j. χ(i, j) is an indicator function that evaluates to one if the spheres S_i and S_j are colliding, and to zero otherwise. c: TEMPORAL LIMIT We use the temporal limit provided by [38] to smooth the tracking result between the current and previous frames. This term ensures that the pose of the current frame stays near the previous pose. We encode the temporal prior as E_temporal = Σ_{k_i ∈ K} ‖ k_i − k_i^pre ‖², where k_i^pre is the position of joint i from the previously optimized frame and K is the set of current joint locations. E. OPTIMIZATION We treat the optimization of the energy over [θ, β] as a nonlinear least squares problem and solve it with the Levenberg-Marquardt approach. We use a Taylor expansion to iteratively approximate the energy terms in Equation 1 and solve the resulting linear system to obtain the updates δθ and δβ at each iteration. To speed up convergence, we propose a gesture-guided optimization strategy. We first perform pose estimation to align the hand to the points (pose fitting stage). Then, we optimize the hand shape alone (shape fitting stage). Finally, we jointly optimize the shape and pose of the hand (full fitting stage). During shape fitting, we only need to optimize the β associated with visible joints. For example, a finger that is extended and visible means that the corresponding β can be optimized with high confidence. Figure 10 shows our multi-stage optimization. FIGURE 10. We first initialize the pose with the result provided by the GCRL network, then optimize for a coarse pose (pose fitting stage); after pose fitting, we optimize the hand shape, and finally perform a refinement in the full-dimensional space (both pose and shape). The red areas represent the model in front of the depth data, the blue areas the model behind the depth data, and the white areas the model near the depth data (less than 5 mm). 
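To make the staged, gesture-guided fitting concrete, here is a minimal sketch of one damped Gauss-Newton (Levenberg-Marquardt-style) update in which a boolean mask selects which components of [θ, β] are optimized in the current stage. The residual/Jacobian assembly is abstracted away, and the function names, damping value, and iteration counts are illustrative rather than the paper's.

```python
import numpy as np

def masked_lm_step(residuals, jacobian, params, mask, damping=1e-3):
    """One damped Gauss-Newton update restricted to the masked parameters.

    residuals: (m,) stacked residual vector of all energy terms.
    jacobian:  (m, p) stacked Jacobian w.r.t. the full parameter vector [theta, beta].
    mask:      (p,) boolean; True for parameters updated in this stage
               (e.g. pose only, shape of visible bones only, or everything).
    """
    J = jacobian[:, mask]
    JTJ = J.T @ J + damping * np.eye(J.shape[1])   # LM damping on the normal equations
    delta = np.linalg.solve(JTJ, -J.T @ residuals)
    updated = params.copy()
    updated[mask] += delta
    return updated

def fit_frame(params, stages, evaluate, iters_per_stage=(20, 5, 5)):
    """Pose fitting, shape fitting, then full fitting, as in the multi-stage strategy.

    stages:   list of boolean masks (pose-only, visible-shape-only, full).
    evaluate: callable params -> (residuals, jacobian); stands in for the
              E_3D / E_2D / prior terms described above.
    """
    for mask, iters in zip(stages, iters_per_stage):
        for _ in range(iters):
            r, J_full = evaluate(params)
            params = masked_lm_step(r, J_full, params, mask)
    return params

# Toy usage: 26 pose + 20 shape parameters with a random linear "energy".
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(120, 46)), rng.normal(size=120)
    evaluate = lambda p: (A @ p - b, A)
    pose_mask = np.r_[np.ones(26, bool), np.zeros(20, bool)]
    shape_mask = np.r_[np.zeros(26, bool), np.ones(20, bool)]  # all bones treated as visible here
    full_mask = np.ones(46, bool)
    p = fit_frame(np.zeros(46), [pose_mask, shape_mask, full_mask], evaluate)
    print(np.linalg.norm(A @ p - b))
```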
IV. EXPERIMENTS A. PRELIMINARIES We extract the ROI of the single hand using a method similar to that presented in [27]. We also employ data augmentation to increase the generalization of the network. Specifically, we randomly rotate the image around the z axis within a range of −15° to 15° and apply a scaling displacement in [0.8, 1.2]. We train and evaluate our networks on a PC with an Intel Core i7 6700K, 32 GB of RAM, and an Nvidia 1080-Ti GPU. Network models are implemented with the Caffe framework [17]. When training the networks, we use the Adam optimizer with a learning rate of 0.005, a batch size of 32, and a weight decay of 0.0005. At runtime, pre-processing takes 2 ms, the GCRL network runs in 5 ms per frame, and the optimization stage costs 30 ms (20 iterations for pose fitting, 5 iterations for shape fitting, and 5 iterations for joint optimization). During each iteration, a pose update costs nearly 1.0 ms and a shape update costs 0.8 ms. This translates to 30-35 frames per second (the initialization network is only run when the tracking error in the previous frame is large). On MSRA-2014, following the most commonly used metric in the literature, we choose the average Euclidean distance between the estimated 3D joint locations and the ground truth to compare our approach with other state-of-the-art approaches [29], [31]. We choose MSRA-2015 to evaluate the performance of our gesture classifier and root location network. For the classification task, we use the mean classification accuracy to evaluate our gesture classifier. For the root location task, we evaluate the hand location performance using the 3D distance error between the estimated root and the ground truth. We also use NYU [43] to compare our approach with several state-of-the-art methods. We utilize two metrics in this work to evaluate the performance. One is the mean Euclidean distance error for each joint across all the test frames. The second is the worst-case accuracy, which represents the fraction of test frames in which all estimated joint Euclidean errors are below an error tolerance. 
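A minimal sketch of these two evaluation metrics (per-joint mean error and worst-case accuracy), computed from arrays of predicted and ground-truth joint positions, is shown below; the array shapes and function names are illustrative.

```python
import numpy as np

def joint_errors(pred, gt):
    """Euclidean error per frame and joint. pred, gt: (frames, joints, 3) in mm."""
    return np.linalg.norm(pred - gt, axis=-1)

def mean_joint_error(pred, gt):
    """Mean Euclidean distance error over all frames and joints."""
    return joint_errors(pred, gt).mean()

def worst_case_accuracy(pred, gt, thresholds_mm):
    """Fraction of frames whose *maximum* joint error is below each threshold."""
    worst = joint_errors(pred, gt).max(axis=1)  # (frames,)
    return [(worst < t).mean() for t in thresholds_mm]

# Toy usage with random 21-joint predictions on 100 frames.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gt = rng.normal(size=(100, 21, 3)) * 50.0
    pred = gt + rng.normal(scale=5.0, size=gt.shape)
    print("mean error (mm):", mean_joint_error(pred, gt))
    print("worst-case accuracy:", worst_case_accuracy(pred, gt, [10, 20, 30, 40]))
```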
C. COMPARISON TO STATE-OF-THE-ART 1) RESULTS ON MSRA-2014 MSRA-2014 consists of six subjects; each subject has a different hand shape and is annotated with the 3D positions of 21 joints. We compare our approach to two state-of-the-art model-based approaches, Forth [29] and Qian [31], using the results published in [31]. Forth [29] uses PSO to fit a fixed-size hand model to the points. Qian [31] proposed a hand tracking system using ICP-PSO-based model fitting. For a fair comparison, we use the version of our method without the GCRL network as the baseline. Additionally, we evaluate a version with a fixed-size hand model (baseline w/o shape) to validate the effectiveness of our joint optimization. As Figure 11 illustrates, our approach outperforms Forth [29], which uses a fixed-size hand model. We also improve by 1 mm over our baseline without shape fitting. The results show that our joint optimization strategy can improve the accuracy of hand tracking. The mean error distance over all joints of our strategy is 8.6 mm, which is 0.5 mm smaller than the result of [31] and 9.3 mm smaller than the result of [29]. Some qualitative results of our approach on the MSRA-2014 dataset are illustrated in Figure 12. We see that our approach still obtains reasonable results for complex hand poses. In summary, we can draw the following conclusions: (i) local gradient descent is more precise and faster than global search; (ii) jointly optimizing pose and shape improves the robustness and accuracy of tracking. Moreover, we list the runtimes of several model-based approaches (results taken from the papers) in Table 1. Compared with previous approaches such as [29] and [31], our approach improves both speed (see Table 1) and accuracy (see Figure 11) while being able to recover the hand shape. We consume fewer computing resources (CPU only) than [38] and [42]. Compared with [10] and [38], our joint tracking strategy performs the shape estimation online. Baselines: we create several baselines to validate the effectiveness of our network. 1) RF-C: classify the input depth using a Random Forest; this baseline estimates the gesture of the input. 2) RF-R: directly regress the global hand root position using a random forest. 3) CNN-R: directly regress the global hand root position using a network similar to the baseline in [28]. For RF-C and RF-R, we choose pixel-difference features, and the maximum depth of the trees is set to 20. We compare the accuracy of our gesture classifier with these baselines on MSRA-2015. For all approaches, we use subjects 1-8 for training and the 9th subject for testing. For the gesture classification task, our classifier achieves a mean accuracy of 93.8% on the 17 different gestures, whereas RF-C only achieves 85.2%. For the root location task, our root-location sub-network has much higher accuracy than RF-R and CNN-R. As shown in Table 2, the mean error distance for the root position of our approach is 8.5 mm, which is 2.3 mm smaller than the result of RF-R and 0.9 mm smaller than that of CNN-R. The performance gain is even more obvious here, showing that our network can capture the more complex structure of the hand. We further denote the version that removes the GCRL network but still jointly optimizes pose and shape as w/o GCRL, and the version that uses the GCRL network but optimizes only the hand pose with a fixed-size hand mesh as w/o shape. Figure 13 shows the mean error results of our approach compared with these approaches. The results show that our approach is slightly superior to [12], [27], [28], [50], and is comparable to [6], [44]. Our approach achieves a mean joint error of 11.87 mm, which is approximately 4.1 mm smaller than [27], 5.1 mm smaller than [50], and nearly 9 mm smaller than [28]. The accuracy of our approach is similar to [6] and [44], while both of them rely heavily on a GPU to achieve real-time performance. Compared with the w/o GCRL baseline (mean error 15.37 mm), the results show the effectiveness of our GCRL network. In some cases, self-occlusion and noise cause the w/o GCRL version (mean error 13.69 mm) to lose tracking. The comparison with the w/o shape baseline also indicates the success of our joint optimization strategy. V. DISCUSSION AND CONCLUSION The current implementation of our approach works well for the majority of poses, but reconstruction is hard when the hand is seriously occluded. 
In addition, we find that viewpoint variations of the camera can seriously influence the tracking result. When the hand is in the 'fist' state, although our GCRL network provides an initialization, if the palm is not facing the camera, the occluded part of the hand will be lost. Additionally, at such camera viewing angles, although the GCRL network places a corresponding initial pose for the estimated hand gesture at the predicted hand center, severe edge noise and heavy self-occlusion often cause matching failure. In principle, after using the GCRL network and jointly optimizing the shape and pose of the hand, our approach fails badly only in extreme views and under severe self-occlusion. Another contribution of this paper is the demonstration that both the initialization network and the joint optimization strategy not only contribute to the state-of-the-art accuracy shown above but also allow us to run this approach on a low-computation device. Three variables determine the computational cost of the fitting procedure: (i) the number of data points used in the data term; (ii) the number of iterations we perform for each starting point; and (iii) the initial result provided by the GCRL network. In recent years, some works [14], [25], [41] have focused on tracking two interacting hands. They all adopted the strategy of 'left/right-hand segmentation + pose/shape optimization', which makes the problem feasible. End-to-end networks have not yet emerged in this setting due to the lack of a large quantity of effective training data. Therefore, model-based optimization methods will continue to play an important role in tracking two interacting hands in the future. In this paper, we propose a novel approach for hand tracking that consists of deep-learning-based pose initialization and gesture-guided pose/shape optimization. The GCRL network captures a meaningful hand structure to estimate the gesture and hand root location, thus providing a robust initial pose. Starting from the estimated pose, we jointly estimate pose and shape. By integrating the deep-based initialization and selectively optimizing the shape parameters, our approach achieves faster convergence and increased robustness. Extensive experiments on three datasets demonstrate the effectiveness of our proposed approach.
2020-02-20T09:05:49.094Z
2020-02-17T00:00:00.000
{ "year": 2020, "sha1": "914eca00121d738fca098cdcfafc6a7f66975384", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09000925.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "31ee328326de979838f2a5f396d0fed7312f3dbd", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119105052
pes2o/s2orc
v3-fos-license
Estimation of the Allergenic Potential of Urban Trees and Urban Parks: Towards the Healthy Design of Urban Green Spaces of the Future The impact of allergens emitted by urban green spaces on health is one of the main disservices of ecosystems. The objective of this work is to establish the potential allergenic value of some tree species in urban environments, so that the allergenicity of green spaces can be estimated through application of the Index of Urban Green Zones Allergenicity (IUGZA). Multiple types of green spaces in Mediterranean cities were selected for the estimation of IUGZ. The results show that some of the ornamental species native to the Mediterranean are among the main causative agents of allergy in the population; in particular, Oleaceae, Cupressaceae, Fagaceae, and Platanus hispanica. Variables of the strongest impact on IUGZA were the bioclimatic characteristics of the territory and design aspects, such as the density of trees and the number of species. We concluded that the methodology to assess the allergenicity associated with urban trees and urban areas presented in this work opens new perspectives in the design and planning of urban green spaces, pointing out the need to consider the potential allergenicity of a species when selecting plant material to be used in cities. Only then can urban green areas be inclusive spaces, in terms of public health. Introduction Urban green spaces (UGS) are of strategic importance to the quality of life of urban dwellers [1,2]. They provide a series of ecosystem services (ES) that have direct and indirect effects on public health [3]. The direct effects include all those processes that mitigate environmental degradation: air purification [4-6], carbon sequestration [7], climate regulation [8], and water regulation and purification [9,10]; and those that directly prevent diseases-improved psychological well-being [11] and reduced stress [12]. The indirect effects are related to the possibility offered by UGS to carry out physical activities and sports [13], i.e., reduced obesity and cardiovascular symptoms [13,14], leisure and recreation [15,16], socialization, and contact with nature [17], which result in a feeling of well-being that contributes to improved health and quality of life [18]. When establishing the net balance of benefits that UGS provide, we must, however, consider the negative factors emanating from natural functions of ecosystems or their anthropogenic manipulation [19]. These negative effects, defined by some authors as ecosystems disservices (ED) [20,21], generate important environmental and socioeconomic costs, and sometimes have a great impact on health [22]. The adverse reactions caused by the emission of allergenic pollen during flowering of the plant species that form the UGS are one of the main EDs, with a high effect on citizens' welfare [23]. Approximately 30% of the world population is affected by reactions caused by allergenic pollen in the atmosphere [24]. In Europe, more than 150 million citizens suffer from chronic allergic diseases, with an estimated cost between 55-151 billion euros/year [25] to the National Health Services. This issue constitutes one of the main public health burdens today, and is expected to increase exponentially in the coming years as a result of the effects of climate change and growing urbanization, industrialization, and pollution [26]. 
In the urban context, UGS have been identified as the main source of allergen emission [27] and are among the factors triggering allergy symptoms in city dwellers [28,29]. Climatic, ecological and geographical characteristics play an important role in the allergens prevalent in each territory [30], by affecting the zonal distribution of vegetation [31], both in natural and urban environments. This has allowed establishing of the geographic range of distribution of the major allergens: some of them are of world-wide distribution, such as grasses, as they are favored for their great adaptability and participation in many varieties of lawns and numerous urban green elements [32]. Closer relationships between allergens and climatic conditions occur when the emitting species are those best represented in the considered bioclimate, such as olive tree (Olea europaea) in the Mediterranean region [33], Cryptomeria japonica in Japan [34], birch (Betula spp.) in the northern Europe [35], maples (Acer spp.) and willows (Salix spp.) in Canada [36], or several species of Myrtaceae in Australia and New Zealand [37]. A major priority is thus to provide citizens and administrators with tools and mechanisms that allow for the adoption of preventive measures and reduction of the negative impacts that the presence of allergenic pollen may have on the sensitive population. The sampling of atmospheric allergens and the subsequent dissemination of information constitute one of the main strategies to raise awareness [38]. Several European cities have an aerobiological sampling unit (https://www.zaumonline.de/pollen/pollen-monitoring-map-of-the-world.html), but for the most part, the levels recorded are too general, over a wide area of coverage [39,40], and do not represent situations closest to the population (i.e., breathable air at human height). There are also attempts being made to assess the risk that the presence of allergen-emitting plants may have on the population, and to identify and categorize the allergenic level of different plant species [41][42][43]. Some studies have reviewed the causes of the increasing allergenicity of UGS, and pointed out the low biodiversity of species used, the introduction of exotic species, the discrimination of female specimens or the cross-reactions that are established between phylogenetically related species as main causes [44]. However, if there is a cause that should be highlighted, it is the non-consideration of the criterion of allergenicity when selecting urban plant material. This lack of planning in the design of green areas results in severe health risks at certain periods of the year. This work aims to establish the potential allergenic value of some of the most common tree species in urban environments, so that the allergenic risk generated in the UGS can be easily estimated. Different typology of green spaces, encompassing a climate gradient, was also considered in order to analyze the design and infrastructure characteristics and parameters that may impact the quality of UGS in terms of health benefits. The results will prevent allergy sufferers from situations that may pose a risk to their health. Selection of Urban Parks and Inventory of Vegetation A selection of urban parks was carried out, across a range of characteristics, in terms of typology, size, design and style (both historical and modern) [45,46], in 23 cities from six Mediterranean countries: France, Italy, Morocco, Portugal, Spain, and Slovenia. 
In order to analyze the existing diversity within the same country, several cities in Italy, Portugal, and Spain were considered. The type of park and its location within the city may be related to the type and frequency of activities that are carried out in it [13]. Therefore, the location and general characteristics of these urban parks are detailed in Figure 1 and Table S1. An inventory of the existing species was carried out, either from records of the municipal Park and Gardens Services, as provided by local institutions, or by direct in situ field surveys and identification by staff collaborators. Reference literature and monographic floras were used to determine the species [47][48][49][50]. The vegetation inventory included taxonomic determination at the species level (cultivar or variety if possible) and the reproductive character, distinguishing hermaphrodite, monoecious, and dioecious species. In the latter case, the number of specimens of each sex was recorded, given the allergenic implications when only male specimens are planted. The deciduous or evergreen character was also considered, because it may affect the dispersion of pollen grains. In addition, the geographic origin of the tree species was established, due to the interest this information has when determining the causative agents of major allergies. The hardiness zone category, which provides information about the ability of different species to withstand the minimum temperatures of the zone, was also determined [51]. The presence of cover grass was finally taken into account, as this information affects the final value of the allergenic risk. Allergenic Risk Assessment In order to estimate the potential allergenic risk of urban green areas, the Index of Urban Green Zone Allergenicity (I UGZA ) proposed by [52] was applied; it is related to the time of flowering and varies between 0 and 1. The index combines a series of biological and biometric parameters of the different tree species: for each of the k species present in a park, its VPA is weighted by the allergen-emitting canopy volume derived from S i and H i , and the weighted contributions are summed and referred to the total park surface, where: VPA = Value of Potential Allergenicity of each species; S T = surface of the urban park; k = number of species in the park; S i = area occupied by each species in the park; H i = maximum height that the tree can reach at maturity [53]. VPA results from the combination of three parameters intrinsic to the species: pollination strategy (ps), duration of the pollination period (dpp), and intrinsic allergenic capacity of the pollen grains (ap) [52,53]. To collect the information regarding the parameters of each species, we consulted numerous documentary sources from different disciplines such as taxonomy, botany, forestry, and allergology. With this information, a database of parameters for the calculation of the I UGZA was created (SafeCreative code 1803156149680, IPR-684). The VPA of each species is then combined with the average allometric parameters, i.e., the average volume of tree canopy emitting allergens that each tree species will have upon reaching reproductive maturity. This volume of allergen emission is calculated from S i and H i . By extending the result to S T , it is possible to know the contribution of each species to the total allergenicity of the UGS. It is also possible to identify the species that make a greater contribution in terms of surface area or abundance of existing trees. Finally, for each park, the percentage of grass-covered area was calculated, since this value may have an impact on the final allergenicity value. 
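As an illustration of how such a park-level index can be computed from an inventory, the sketch below aggregates VPA_i · S_i · H_i over species and normalizes by the park surface and by assumed maximum VPA and height values so that the result falls in [0, 1]. This normalization is an assumption made for the example only (the exact published formula of [52] should be used in practice), and the record fields and sample values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SpeciesRecord:
    name: str
    vpa: float       # Value of Potential Allergenicity, assumed scaled to [0, 1]
    area_m2: float   # S_i: area occupied by the species in the park
    height_m: float  # H_i: maximum height the tree can reach at maturity

def allergenicity_index(species, park_area_m2, vpa_max=1.0, height_max_m=30.0):
    """Illustrative park-level allergenicity index in [0, 1].

    Sums VPA_i * S_i * H_i over the k species and divides by the park surface and
    by assumed maximum VPA and height; these constants are placeholders, not the
    published I_UGZA definition.
    """
    total = sum(sp.vpa * sp.area_m2 * sp.height_m for sp in species)
    return min(1.0, total / (vpa_max * height_max_m * park_area_m2))

def main_contributors(species, park_area_m2, top_n=3):
    """Rank species by their individual contribution to the park index."""
    contrib = {sp.name: sp.vpa * sp.area_m2 * sp.height_m / park_area_m2 for sp in species}
    return sorted(contrib.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    # Toy inventory: the VPA values below are invented, not the published ones.
    park = [
        SpeciesRecord("Platanus x hispanica", 0.8, 4000, 25),
        SpeciesRecord("Cupressus sempervirens", 0.9, 1500, 20),
        SpeciesRecord("Citrus aurantium", 0.3, 500, 6),
    ]
    print(allergenicity_index(park, park_area_m2=60000))
    print(main_contributors(park, park_area_m2=60000))
```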
The analysis of the different situations of allergenicity that can be generated in a park defined a value of I UGZA of 0.3 as a threshold to establish the risk that the presence of allergenic species in these parks can represent for allergic people [54]. This threshold of 0.3 can be reached when there are monospecific formations of allergenic species in a park, when there are planted species of moderate allergenicity among which cross-reactions can be established, or when the percentage of allergenic species in the park is higher than that of the non-allergenic species. Based on this value, the parks were classified as low (<0.2), moderate (0.2-0.3), or high allergenicity (>0.3). Data Analysis In order to identify the factors with the strongest impact on the I UGZA of the different parks, a set of environmental variables was analysed. For bioclimatic variables, we used air temperature and rainfall as they are biologically meaningful [55]. The values (average 1970-2000) were retrieved from Worldclim.org for city centres, being coded as follows: annual mean temperature (BIO1), mean diurnal range (BIO2), isothermality (BIO3), temperature seasonality (BIO4), maximum temperature of the warmest month (BIO5), minimum temperature of the coldest month (BIO6), annual temperature range (BIO7), mean temperature of the wettest quarter (BIO8), mean temperature of the driest quarter (BIO9), mean temperature of the warmest quarter (BIO10), mean temperature of the coldest quarter (BIO11), annual precipitation (BIO12), precipitation of the wettest month (BIO13), precipitation of the driest month (BIO14), precipitation seasonality (BIO15), precipitation of the wettest quarter (BIO16), precipitation of the driest quarter (BIO17), precipitation of the warmest quarter (BIO18), and precipitation of the coldest quarter (BIO19) (Figure 1). 
Figure 1. Map and climatic characteristics of the cities participating in this study. The left axis in the climatic graphs represents monthly air temperature in °C, the right axis represents monthly precipitation in mm, and the horizontal axis represents the time in months. General characteristics of the parks (coordinates, surface area, turf area, number of trees, number of species and density of trees) are listed in SI Table S1. Tree density and the Shannon diversity index [54] were also calculated for each park. Firstly, an exploratory analysis was performed in order to understand which variables significantly contribute to the I UGZA index. To do so, individual variables were tested with Spearman correlations and the most significant results were then submitted to a generalized linear model (GLM) with identity link (and normal distribution). All possible combinations of those significant variables were tested. Looking for the best possible model and following the parsimony principle, the model with the highest explained variance and with a significant contribution of all predictors was selected for further interpretation. All statistical analyses were implemented in Statistica TM software (Tibco Software Inc., Palo Alto, CA, USA). Finally, the main taxa contributing to high I UGZA values were identified. 
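A compact sketch of this screening workflow (Shannon diversity per park, Spearman pre-selection of predictors, then an identity-link Gaussian GLM) is given below. The analysis in the paper was run in Statistica, so the pandas/SciPy/statsmodels code, the column names, and the synthetic data here are purely illustrative.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

def shannon_index(tree_counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species abundances."""
    p = np.asarray(tree_counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def screen_predictors(df, response="IUGZA", alpha=0.05):
    """Keep predictors whose Spearman correlation with the response is significant."""
    keep = []
    for col in df.columns.drop(response):
        rho, pval = spearmanr(df[col], df[response])
        if pval < alpha:
            keep.append(col)
    return keep

def fit_gaussian_glm(df, predictors, response="IUGZA"):
    """Gaussian GLM with identity link (equivalent to OLS) on the selected predictors."""
    X = sm.add_constant(df[predictors])
    return sm.GLM(df[response], X, family=sm.families.Gaussian()).fit()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    parks = pd.DataFrame({
        "tree_density": rng.uniform(20, 800, 34),
        "species_richness": rng.integers(10, 160, 34),
        "bio1_mean_temp": rng.uniform(10, 19, 34),
    })
    parks["IUGZA"] = (0.001 * parks["tree_density"]
                      + rng.normal(scale=0.05, size=34)).clip(0, 1)
    selected = screen_predictors(parks)
    if selected:
        print(fit_gaussian_glm(parks, selected).summary())
    print("example Shannon index:", shannon_index([120, 60, 30, 5]))
```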
Results The surface covered by grass ranged from 0% in the most arid site (Almeria) to more than 95% in some of the 12 parks in Rome. In absolute terms, the most extensive grass surfaces were found in Talenti Park (110,000 m²), Parque da Paz, Almada (273,772 m² between irrigated and non-irrigated meadows), and El Retiro Park (more than 300,000 m²). The total number of taxa was 355 in terms of species, sub-species, and varieties, from 83 botanical families, for a total of more than 110,000 trees (Table 1). The largest number of species was registered in the Jardin des Plantes, with 160 species. The Jardim Guerra Junqueiro and El Retiro Park also exceeded 100 species (Table S1). In relation to sexual attributes, 62.2% of the species had both sexes on the same individual (hermaphrodite, monoecious), while 67 species, i.e., 18.8%, had separated sexes on different individuals (dioecious) (Table 2). Regarding the strategy of pollination, 46.7% of the species were insect-pollinated. By contrast, 42.3% of the plants used the wind as the vector of pollination. The deciduous attribute was present in 65.0% of the species, while 33.1% were evergreen. In relation to the origin of the species, 18.3% of them were native to North America, 18.0% were of Chinese origin, and 17.0% were native to Europe, of which 5.4% were of Mediterranean origin (Figure 2). As for the hardiness zone categories (Figure 3), 60.9% were able to tolerate minimum temperatures ranging from −28.9 °C to −12.2 °C (categories 5, 6 and 7), 28.8% were included in the range of categories 8 to 11 (from −12.2 °C to +4.4 °C), while a group of 43 plants (10.3%) were able to tolerate minimum temperatures below −35 °C (categories 2 to 4). 
Density of trees was one of the parameters with the highest positive correlation with the value of IUGZA (r = 0.70; p < 0.01) ( Table 5), and accounted for most of the variance as a predictor in the tested GL model (adjR 2 = 076). Regarding the variables that have the greatest impact on the value of the I UGZA , certain structural characteristics of the parks such as the number of trees and of species, the Shannon's index and the number of trees/ha −1 and also precipitation of May and July, precipitation of the warmest quarter (BIO 18), annual mean temperatures (BIO 1), and mean temperature of the driest quarter (BIO 9) significantly affected I UGZA (Table 4). Density of trees was one of the parameters with the highest positive correlation with the value of I UGZA (r = 0.70; p < 0.01) ( Table 5), and accounted for most of the variance as a predictor in the tested GL model (adjR 2 = 076). Discussion In this work, 34 parks located in 23 Mediterranean cities were considered, so that the spaces in which Mediterranean citizens perform outdoor activities were well represented [45,46]. The study showed important information that can affect the allergenic impact of Mediterranean UGS on citizens' health. First, the type, size and location within the city of the park helped to predict I UGZA and thus can be used for programming the frequency and duration of the visits. We considered some small parks that are usually located in city centers and historical districts, densely built districts, or in the vicinity of administrative or monumental buildings. In these small parks, there is a large presence of citizens who perform daily activities: sport routines, socialization between similar age groups, pet walking or relaxation [55]; thus, during the period of flowering, the contact and interactions with the allergens and other atmospheric particulate matter are frequent [56]. This is the case of parks in the city of Rome or El Retiro in Madrid, in which the maximum number of local visitors and tourists at the beginning of spring coincides with the period of flowering of Platanus and the species of Oleaceae, Fagaceae and Pinaceae [57,58]; people should, thus, be warned so they could take precautionary measures. The inventory of vegetation revealed the extraordinary rich and varied native flora of this climatic region [59], which also allows for the growth of other taxa from other geographical origins and phytoclima [60,61]. A good index of the diversity of the parks was indicated by their high number of taxa (355) with a total of more than 110,000 trees. This figure contrasts with those obtained for other areas of Europe, since a study on the diversity and distribution of trees in 10 major Nordic cities showed a markedly lower diversity (133 different tree species for the city with the highest diversity), with a total number of trees exceeding 190,000, including street and park trees [62]. The largest number of species was registered in the Jardin des Plantes (Nantes), given its arboretum and botanical garden character, although other parks also exceeded 100 species, making these urban green zones true biodiversity hotspots in the urban environment [61,63]. This great diversity was also reflected in aspects such as sexual attributes and pollination strategies. Regarding the latter, most of the entomophilous species had bees as a main pollinator agent [64]. 
This confirms the important role of urban parks in the provision of ES, not only because of the diversity of bee species that participate in pollination, but because this regulating service is essential to maintaining ecosystem processes [65]. By contrast, the anemophilous strategy, in which the wind is the driver of pollination, is the cause of one of the main disservices associated with urban vegetation [22]. This process of anemophilia is even more intense when considering the coniferous species, as all of them are primary anemophilic, and the deciduous anemophilic angiosperms [66], which developed this strategy in a later evolutionary phase and adjusted the functional process from anthesis to the moment immediately before the new leaves unfold, so that the emission of pollen is made from the anthers without any obstacle. The 20 most frequent species included species native to the European continent, some of Mediterranean origin, which led us to consider them as major allergens in the region. This group included some of the main causative agents of pollen allergy in the Mediterranean area, such as Cupressus [67], and Platanus x hispanica [68], both with a very high VPA. Other species characteristic of the Mediterranean, such as Pinus halepensis, Pinus pinea, and Quercus ilex, were also included in the list, although with a lower degree of allergenicity, all of them largely distributed in some cities such as Rome [69]. These results are in line with those obtained in a previous study on the allergenicity of the ornamental flora carried out in two Mediterranean cities [70], but contrast sharply with those from other European regions, where there was a clear prevalence of species of the genera Tilia, Acer, Aesculus, and Fraxinus in Central and Continental Europe [71], and Betula, Sorbus, Carpinus, and Fagus in northern Europe [62,72]. Plane tree (Platanus x hispanica) is one of the most notable species due to its extensive presence in European cities and the rest of the world [71]. In our study, plane tree was recorded in 95% of the inventories, with an unequal presence, and therefore an unequal contribution to the final value of I UGZA . Thus, the overabundance of individuals in some of the Spanish parks (more than 400 in Huesca and Pamplona, and almost 1000 in Madrid) had a very high contribution to the value of I UGZA . Although this pollen type has low dispersion capacity, estimated to be just 400 m from the source [73], the tree's deciduous character favors the dispersion before the new leaves begin to develop [74]. By contrast, this tree species has a short flowering period, which limits the time of emission to just a few weeks [68]. Cupressus sempervirens (Tuscany cypress) is another frequent species in Mediterranean urban parks. Its very high value of allergenic potential can be applicable to the rest of the species of the family Cupressaceae [67]. All of them share reproductive attributes such as anemophilic character, high pollen production [75], extensive flowering period [76], and very high allergenic pollen grains, thus being one of the allergenic-type typical of the Mediterranean region [77]. Where their presence is abundant, authorities should warn the population during their pollination period, which usually occurs during the winter months. Several species of Oleaceae family must be pointed out. The different species of Fraxinus may have different reproductive attributes. Two of the most frequent species, F. excelsior and F. 
angustifolia, are dioecious and wind-pollinated, so that a greater presence of male individuals can influence not only VPA but also the amount of pollen emitted [78]. In contrast, Privet (Ligustrum sp.) is an insect-pollinated Oleaceae species, but little amounts of pollen emitted may be sufficient to cause allergy reactions if a person stays in its vicinity [79]. Finally, it is necessary to stress that the presence of Olea europea in urban parks is increasingly frequent. In our study, several olive trees grew in parks in Italy, Spain and Portugal, some with centenary specimens. In addition to its pollen grains being the first cause of pollen allergy in the Mediterranean region [80], we must consider the cross-reactions that can be established between the different species of the family due to the presence of shared allergens [81]. Another family with Mediterranean species is Pinaceae. Although the allergenicity of its pollen grains is low [82], it is pertinent to recall other sensitivity reactions that can be generated by the presence of caterpillars [83]. Orange trees (Citrus aurantium) deserve particular attention, as they emit sufficient pollen levels to generate a symptomatic response in the population, due to their relatively high frequency in parks and streets of Mediterranean cities [68]. Populus alba is one of the few species of Populus of European origin. Given the existence of exclusively male-sex clones, allergenicity is linked to a greater or lesser presence of male individuals [84]. This list also includes some species that tolerate minimum temperatures below −20 • C (hardiness zones 4a-5b). Acer negundo is the only wind-pollinated species of the genus, which thus increases its allergenicity to high [85]. Taxus baccata has a phylogenetic link with the Cupressaceae family, with which it shares allergens and allergenicity [86]. Linden (Tilia spp.) is widely used in walk-alignments, so that its moderate allergenicity can be increased when forming dense groups [87]. A relevant aspect of this work was to establish the allergenic risk assessment of 34 parks and the factors that pose the greatest hazards. A value of I UGZA index of 0.3 had been established in a previous work as the threshold above which the presence of allergenic plants is high enough to cause discomfort and symptoms to an allergic population [53]. In our study, 10 parks exceeded this threshold, and two parks (Parco di Arlecchino in Mantua and Bosco dei Cento Passi in Milan), registered the maximum value of I UGZA . Reviewing the characteristics of these last parks, most of them showed a density of trees higher than 150 trees/ha, with peaks of 562.6 trees/ha and 771 trees/ha. This relationship between I UGZA and density of trees was already evident in a previous work [53], but in this work, it has been reinforced by the characteristics of the main species of these parks: Carpinus betulus and Quercus robur. The allometric parameters of both species in terms of the area they occupy in relation to the surface of the park (S i /S T ratio) and crown height generate a volume of tree canopy emitting allergens that, together with the high density of existing trees, take the value of I UGZA to its maximum. The species richness and the Shannon Index were also correlated with I UGZA . Surprisingly, the correlations were positive. 
This relationship is clear when there is variety and equity among the species in the parks [54], since this implies a balanced mix of species with different flowering and allergenicity attributes. However, an imbalance in this ratio could take the index to its maximum (Bosco dei Cento Passi: I UGZA : 1, species richness: 15, Shannon's Index: 3.16) or to a minimum, as in the case of the Parco Centrale del Lago in Rome, which, with its 935 trees of 50 different species (Shannon's Index: 3.3), registers one of the lowest I UGZA values, 0.07. The latitudinal gradient can also be considered an important parameter, since it determines the prevailing environmental variables in each zone. In our study, we analysed parks on both shores of the Mediterranean, from 33° N in Casablanca to 47° N in Nantes, resulting in a large span of local climatic conditions [88]. The Spearman correlation revealed that temperature and precipitation significantly affected the I UGZA index. Both parameters are closely related to the flowering of plants [89] and to pollen emissions [90], but here the sign of the correlations was opposite, i.e., negative. In the Mediterranean area, the occurrence of rainfall during the warmest period (BIO18), and even during the flowering period (BIO15), prevents water stress and favors a more intense flowering [91]. As for mean temperatures, their correlation is negative. The inclusion of the annual mean temperature (BIO1) in the model as one of the most significant variables suggests that the value of the I UGZA is higher in green spaces of colder cities. This aspect was reflected in the floristic composition: parks located in the northern regions of Portugal, Spain, France, and Slovenia had, among the species contributing most to the index, some taxa of temperate climates such as Carpinus betulus, Corylus spp., Taxus baccata, Fagus sylvatica, Pterocarya sp., and deciduous Quercus. All species of these genera are included in hardiness zone categories 5a to 6b, suggesting that they can tolerate minimum temperatures below −20 °C. In addition, we postulate that a forest-type landscape design, with a high density of trees per hectare and a greater presence of monospecific groupings, increases the magnitude of pollen emission and contributed to the remarkable increase in the I UGZA index. In the parks of Rome, Casablanca, and the south of Spain, species of Oleaceae, Cupressaceae, Pinus, and evergreen Quercus were abundant; these are more tolerant of thermophilic conditions and aridity and are included in hardiness zone categories 8a to 10b. This distinctive floristic composition affected I UGZA , since it is also true that, in general, the highest values of species richness and the Shannon Index were recorded in the parks of the most temperate zones. Another element with an impact on the final value of I UGZA is the total area covered by grasses. The majority of the lawns covering the parks can be considered conventional, that is, requiring periodic management, including frequent cuts and regular irrigation [92]. There is evidence of allergic symptomatology in lawn cutters [93]. Drought-tolerant, summer-growing, and highly allergenic species, such as Bermuda grass (Cynodon dactylon) and kikuyu grass (Pennisetum clandestinum), were often abundant [94,95]. 
The more and more frequent presence in Mediterranean parks of natural grasslands or grass of low hydric requirements [96] is favoring in turn the installation in green areas of grasses of great colonizing and allergenic capacity [97]. In the parks of this study, the contribution to the I UGZA index value of the turf covered area was particularly significant in some parks in Rome, Stibbbert park in Florence, Portuguese Parque da Paz and El Retiro Park in Madrid. As a final remark, we would like to emphasize that the results of this study should not be interpreted from a negative perspective, but as information to take into account when making the net balance of ES provided by the elements of urban forest [98]. Many of the cities that participated in this study have an important historical and cultural past, which has survived to this day in the form of historic parks and gardens [99]; while other cities have experienced important urban transformations, in which the greening process has changed the urban landscape to a great extent [100]. Whatever the case, all cities must face the challenge of climate change, mitigate its impact, and reinforce its resilience. In this context, green infrastructure in general, and urban parks in particular will play a fundamental role as providers of ES, and benefits and well-being in the population [2,4]. The growth expectations of diseases related to environmental degradation, including respiratory diseases [26], highlight the need to implement plans aimed at reducing the risk of allergenicity and improving public health. The information resulting from this study opens a frontier of knowledge to the fact that green spaces are inclusive spaces in term of health without limitations in the presence of specific qualities provided to urban built environments. Conclusions This work presented a methodology to assess the allergenicity associated with urban trees and urban areas of different cities in the Mediterranean region, although the high number of species analyzed allows its application to other bioclimatic regions. The results indicated that the species that present a series of attributes, such as monoecia or dioecia, wind-pollination, deciduous and extensive periods of flowering, are the ones with the highest values of allergenic potency (VPA). These characteristics are presented by some of the most frequent species in Mediterranean urban environments such as Acer negundo, Fraxinus excelsior, F. angustifolia, Populus alba or Platanus hispanica. High allergenic value are presented by some of the species that are best adapted to Mediterranean climate conditions, such as different species of the Cupressaceae family (with Cupressus genus as the most prominent), Fagaceae family (sclerophyllous species of Quercus), and Olea europaea, that should be considered as ornamental species in addition to agronomic, due to its extensive presence in cities. Once the species-specific VPA was calculated, it was possible to apply the Urban Green Zones Allergenicity Index (I UGZA ), which estimates the overall allergenic risk that these species represent in that the area where they grow. The 34 parks considered in this study are a good example of the different typologies that exist in Mediterranean cities-from urban forests to pocket parks and small plazas. The characteristics and factors that most affect the final allergenicity value were analyzed, highlighting the density of existing trees, the species richness and the Shannon index as the most significant factors. 
Environmental variables are also revealed as important parameters, since they affect the floristic composition of the parks, and this in turn affects the value of I UGZA . This information highlights the need to consider the allergenicity criterion as a parameter when managing, designing and planning current and future green areas, since only then can urban green areas be healthy spaces, inclusive for all population. Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/16/8/1357/s1, Table S1: Name and type of park, locality, surface areas, number of species and trees, tree density, Shannon´s index and main contributors to I UGZA of the green spaces included in this study. Author Contributions: The project outline was drafted and coordinated by P.C. Preparation of the manuscript, tables and figures and statistical analysis were performed by P.C., F.G., and P.P. All Authors P.C., F.G., P.P., M.C.P.,
Alcohol and risk of admission to hospital for unintentional cutting or piercing injuries at home: a population-based case-crossover study Background Cutting and piercing injuries are among the leading causes of unintentional injury morbidity in developed countries. In New Zealand, cutting and piercing are second only to falls as the most frequent cause of unintentional home injuries resulting in admissions to hospital among people aged 20 to 64 years. Alcohol intake is known to be associated with many other types of injury. We used a case-crossover study to investigate the role of acute alcohol use (i.e., drinking during the previous 6 h) in unintentional cutting or piercing injuries at home. Methods A population-based case-crossover study was conducted. We identified all people aged 20 to 64 years, resident in one of three regions of the country (Greater Auckland, Waikato and Otago), who were admitted to public hospital within 48 h of an unintentional non-occupational cutting or piercing injury sustained at home (theirs or another's) from August 2008 to December 2009. The main exposure of interest was use of alcohol in the 6-hour period before the injury occurred and the corresponding time intervals 24 h before, and 1 week before, the injury. Other information was collected on known and potential confounders. Information was obtained during face-to-face interviews with cases, and through review of their medical charts. Results Of the 356 participants, 71% were male, and a third sustained injuries from contact with glass. After adjustment for other paired exposures, the odds ratio for injury after consuming 1 to 3 standard drinks of alcohol during the 6-hour period before the injury (compared to the day before), compared to none, was 1.77 (95% confidence interval 0.84 to 3.74), and for four or more drinks was 8.68 (95% confidence interval 3.11 to 24.3). Smokers had higher alcohol-related risks than non-smokers. Conclusions Alcohol consumption increases the odds of unintentional cutting or piercing injury occurring at home and this risk increases with higher levels of drinking. Background Cutting and piercing injuries are among the leading causes of unintentional injury morbidity in developed countries [1,2]. In New Zealand, cutting and piercing are second only to falls as the most frequent cause of unintentional injuries among people aged 20 to 64 years that result in admission to hospital [3]. Almost 30% of cutting and piercing injuries in this age group that result in admission to hospital in New Zealand, occur at home [4]. As many as 20% of young and middle-aged adults admitted to hospital as a result of a cutting or piercing injuries may have consumed alcohol in the 6 h preceding injury [5]. Based on its association with many other types of injury including cutting and piercing injuries, alcohol intake could be an important target for intervention [6][7][8]. Injuries are recognised as the leading cause of alcohol attributable deaths globally (37.8% in 2004) [9]. While the harmful influence of alcohol is likely to be mediated in part through predictable cognitive and psychomotor effects-such as reaction time, cognitive processing, coordination and vigilance [10], the contribution of alcohol use to cutting and piercing injury is unclear. The case-crossover research design has been shown to be well suited to study the influence of transient exposures that occur intermittently [11]. 
Previous case-crossover studies for injuries have investigated the influence of alcohol [12][13][14][15], cannabis use [16], anger [17], cell phone use [18], sleepiness [19,20], and other transient risk factors [21][22][23]. This method has also helped identify causes of work-place hand injuries (some of which were due to cutting and piercing) [24][25][26][27], but we found no similar studies in the home setting. We used a case-crossover study to investigate the role of acute alcohol use (drinking during the previous 6 h) in the occurrence of unintentional cutting or piercing injuries at home. We restricted ourselves to home injuries because this study was funded as part of a program studying home injuries and because exposures were more likely to be similar than those occurring outside the home.

Methods

We identified all people aged 20 to 64 years, resident in one of three regions of the country (Greater Auckland, Waikato and Otago), who were admitted to public hospital within 48 h of an unintentional non-occupational cutting or piercing injury sustained at home (theirs or another's) from August 2008 to December 2009. In New Zealand, about 97% of acute injury admissions are to public hospitals [28]. Admission registers of the five recruiting hospitals (North Shore, Auckland City, Middlemore, Waikato, and Dunedin) were reviewed by study nurses to identify potential cases meeting the inclusion criteria. Patients who provided written informed consent were interviewed face-to-face by research nurses using a structured questionnaire, which took about 30 min to complete. Interviews were conducted as soon after the injury as practically possible. The main alcohol-related exposure of interest was acute alcohol intake (converted to standard 12 g alcohol units) in the 6 h immediately prior to injury. As this was a case-crossover study, we obtained the same information on alcohol intake in two control periods at the same time of day for the 6 h reference time period: the previous day and the same day the previous week. Other information was collected on known and potential confounders including: sociodemographic characteristics, medical conditions, smoking status, prescription drug use, acute marijuana and other illicit drug use (within 3 h of injury), acute sleep deprivation (less than 6 h of sleep during the previous 24 h), usual marijuana and other illicit drug use (at least weekly), and usual alcohol use indicative of hazardous or harmful drinking using the standardised Alcohol Use Disorders Identification Test (AUDIT) scale [29]. The AUDIT score is categorised into four risk levels: low risk (score 0-7), hazardous drinking (score 8-15), severe hazardous drinking (score 16-19), and probable dependence (score ≥ 20). For potential confounders that vary with time (acute recreational drug use and sleep deprivation), the same information was collected for the two control periods. The power associated with the expected sample size (n = 313; 38 discordant individuals) was estimated using information from a case-crossover study which investigated the role of alcohol consumption in injury in the United States [13]. In that study, 12% of participants had discordant exposure in case and control exposure periods when "none" versus "any" alcohol use was compared [13]. From this information, and assuming a relative risk (RR) of 3, the sample size was expected to achieve 80% power at the 0.05 level of significance [30].
The effect of alcohol was examined using the case-crossover, matched-pair-interval approach [11]. We categorised alcohol consumption in two different ways. First, any drinking was compared to no drinking during the case and control periods. In addition, to investigate dose-response effects, no alcohol intake was contrasted with 1 to 3 drinks, or 4 or more. Effects of binary exposure variables were analysed using discordant-pairs ratios, comparing each control period separately against the event period. We also used conditional logistic regression modelling to estimate exposure effects taking account of both control periods in a single model. The model also allowed adjustment for confounding variables. Marijuana use and sleep deprivation were considered to be key confounding variables. If information on alcohol, marijuana or sleep deprivation was missing from one exposure period, the information for this time period was omitted from the analysis. Other potential confounders were included in models if their inclusion resulted in an incremental change in the effect of alcohol of 10% or more [31]. Analyses explored whether smoking, socio-economic status, and other factors modified the effect of acute alcohol use by including interaction terms and subgroup analyses. All analyses were conducted using the R-project statistical software with the 'survival' package, using the 'clogit' procedure for conditional logistic regression [31]. The study was approved by the national ethics committee (MEC/08/13/EXP), and the relevant institutional and Māori research boards of the five recruiting hospitals. The study was carried out in compliance with the Helsinki Declaration (http://www.wma.net/e/policy/b3.htm).

Results

In total, 456 cases met the inclusion criteria, of whom 10 had insufficient English language skills to complete the interview (2%) and a further 90 (20%) were excluded because they were missed at presentation (n = 20) or declined to participate (n = 70). Non-respondents had a similar gender distribution to respondents, but were more likely to identify as Māori (33.0% cf. 17.4%) or Pacific ethnicity (26.0% cf. 10.7%), and were more likely to be younger (20-39 years: 64.5% cf. 47.8%). The median age of the remaining 356 participants was 40 years (interquartile range 28 to 51). Contact with glass (31%, n = 111) and powered tools or machinery (29%, n = 103) accounted for the most injuries (Table 1). Blood alcohol concentration testing was performed in only 18 cases (5.1%); 16 of these were positive. The majority (68%, n = 242) of participants reported a long-term pattern of alcohol use consistent with a low risk of hazardous or dependent drinking (AUDIT score < 8), few (4%, n = 14) reported regular use indicative of alcohol dependency (AUDIT score ≥ 20), and a further 5% (n = 16) of individuals refused to respond to these questions (Table 1). Overall, of the 1068 exposure periods of interest (alcohol, marijuana and sleep deprivation), 89 (8.3%) were dropped in multivariate analyses due to missing information in one of the three paired exposures (alcohol use, sleep deprivation or marijuana use). For the acute alcohol use data, missing information due to refusal to respond or difficulty with recall in the injury or control periods resulted in complete paired entries for 345/356 participants for the 'day before', and 311/356 participants for 'a week ago' (Table 2).
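As a minimal illustration of the discordant-pairs estimate described above, the snippet below computes a matched-pair odds ratio and a normal-approximation confidence interval; the counts are hypothetical and do not correspond to the study data.

```python
import math

def discordant_pairs_or(exposed_case_only, exposed_control_only, z=1.96):
    """Matched-pair (McNemar-type) odds ratio for a 1:1 case-crossover contrast.

    exposed_case_only    : pairs exposed in the hazard (case) period only
    exposed_control_only : pairs exposed in the control period only
    Concordant pairs carry no information and are ignored.
    """
    or_hat = exposed_case_only / exposed_control_only
    se_log = math.sqrt(1 / exposed_case_only + 1 / exposed_control_only)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, (lo, hi)

# Hypothetical counts: 30 participants drank only before the injury,
# 12 drank only in the matched control period the day before.
or_hat, ci = discordant_pairs_or(30, 12)
print(f"OR = {or_hat:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```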
Although many individuals could not recall the duration of the sleep they had in the 24 h before the injury (n = 84, 23.6%), fewer instances of missing information were recorded for this exposure in the 'day-before' control period (n = 48, 13.5%), compared to the 'week-before' control period (n = 84, 23.6%). The majority of subjects reported no alcohol use in the 6 h before injury, or in the corresponding period either the day before (257/345) or the week before (253/320) the injury. After adjustment for other paired exposures, alcohol consumption in the 6 h period before injury was positively associated with cutting and piercing injury, when both 'day before' and 'week before' control periods compared 'any drinking' to 'no drinking', with a threefold elevated odds of injury in the former group (Table 3). A dose-response effect was evident, when the association between the intake of 1 to 3 drinks and 4 or more drinks with cutting and piercing injuries were contrasted. The adjusted odds ratios (OR) for 4 or more drinks was more than 2.5 times that for 1 to 3 drinks (OR 8.68; 95% CI: 3.11, 24.3 vs. OR 1.77; 95% CI: 0.84, 3.74). We evaluated whether the association between cutting or piercing injury and acute alcohol use across all control periods was modified by the participants' smoking No interactions were observed between acute alcohol use and age, gender, education, fatigue, usual pattern of alcohol use, or recreational drug use and the outcome risk of cutting or piercing injury, suggesting that all drinkers were at higher risk of cutting and piercing injuries after consuming alcohol, not just those with a high risk of hazardous or dependent drinking. Discussion These findings indicate that acute alcohol use (within 6 h of injury) is associated with hospital treatment for unintentional cutting or piercing injuries at home, among young and middle-aged adults. There is evidence of a dose-response relationship with the adjusted odds ratio for 4 or more drinks being considerably higher than that for 1 to 3 drinks, relative to no drinks (8.68 compared with 1.77, respectively). Smoking status modified the effect of alcohol on injury, so that the excess odds of alcohol exposure were much higher among smokers than non-smokers. The strengths of this population-based study include the relatively high response rate of around 80%. The study base-Greater Auckland, Waikato and Otago regions-covers more than 60% of the total New Zealand population aged 20 to 64 years, and includes both rural and urban environments. The findings, however, need to be considered in light of several limitations. Although the study was designed to be populationbased, the higher proportion of individuals of Māori and Pacific ethnicity among non-respondents has introduced a degree of selection bias. The study relied on self-reported data for capturing acute exposures and lifestyle factors and blood alcohol concentration (BAC) was only measured in 5.1% of cases. The accuracy of the information, provided by participants, limits the credibility of our reported effect measures. Actual intake may be underestimated, as has been found with other self report measures, due to reluctance to admit to consumption, or simply poor recall particularly for the period 1 week before the injury occurred [32,33]. Increased missing information for the week before (over the day before) provides evidence for this effect ( Table 2). 
In subjects with high levels of alcohol use (either before the event or who reported habitual high levels of intake), reported number of units of alcohol consumed during specific periods is unlikely to be accurate [34]. Furthermore, as suggested by the wide confidence intervals around estimates for some of the effect modification analyses (e.g. AUDIT category ≥ 20, and smokers), the study was too small to allow precise subgroup analyses. The prevalence of hazardous drinking as measured by the AUDIT score (≥ 8) was 32.0%, higher than the 21% of New Zealand adults who identified as having a hazardous drinking pattern in the most recent New Zealand Health Survey (2006/07) [35]. Our findings were, however, similar to the proportion of 25 to 59 year olds who had a moderate to severe injury as a result of an unintentional fall at home (24.5%) [36]. Twenty-nine percent of our participants identified as 'current smokers'. This proportion is higher than New Zealand national estimates which indicate that 22% of adults (15 to 64 years) are current smokers [37], but it is lower than a US study of moderate to severely injured adult (18 to 65 years) trauma patients (Injury Severity Score > 20), who were admitted to hospital of whom 47.7% were current smokers [38]. Given the study entry criteria, it is not possible to determine how generalisable the findings are to cutting and piercing injuries that are fatal, do not result in hospitalisation, or occur in settings outside the home, such as workplace or recreational environments. In case-crossover studies, it is important to select control periods that are sufficiently distant in time from the case period to limit the correlation between the two periods [39]. Our selection of 24 h before and 1 week before are consistent with other case-crossover studies which investigate the role of acute alcohol on injury risk [13,40]. The 'hangover' or 'residual alcohol' effect in which fatigue may play a role, has been identified as a potential risk factor in previous injury studies [41][42][43][44][45]. The selection of the first control period (the same 6 h in the 24 h prior to the injury occurring) may have limited our ability to assess this phenomenon. However, the point estimates and confidence intervals for acute alcohol use and the odds of injury are concordant between the two control periods, which suggest that a 'hangover effect' is unlikely to bias our results. 'Hangover' effects generally start once BAC is close to zero [43][44][45] and this is less likely to influence results given we used a 6 h induction period. Mis-reporting of alcohol use is another potential threat to the validity of this study [46]. If participants had improved memory of alcohol intake immediately before the cutting or piercing injury, compared to their control periods, then the effect of acute alcohol consumption on injury risk may have been inflated. A study investigating the causes of Meniere's disease, explored this phenomenon by repeated questioning of cases during attacks and in different control periods [46]. The authors concluded that outcome-dependent misclassification, was not a major threat to validity. No tendency to overestimate exposure close in time to attacks of the disease occurred, despite strong beliefs among patients of the likely causes of their acute symptoms. In addition, we did not ask if the person was at home using cutting tools in the control period which has been noted to be a potential bias in studies of motor vehicle injuries [47]. 
People who have higher socioeconomic status generally experience better health than those who are socially disadvantaged [48,49]. As well as being considered as potential confounders or effect modifiers in the relationship between alcohol and risk of injury, they are important parameters independently linked to injuries and, as such, need to be incorporated into injury prevention strategies and policy targeting reduction in home injuries. This study was a case-crossover study which is designed to examine transient risk factors and the strength of this design is that participants are their own controls and so cases are self-matched on socioeconomic factors. However, as a result of the study design, we cannot examine the influence of time-invariant exposures such as social status. In a related case-control study [5] we were able to explore these factors in the subset of our cases that had a landline and we found that the proportion of cases with no individual level socioeconomic deprivation characteristics (55.9%) was similar to that estimated for New Zealand adults (50.7%) [50]. The proportion of cases identifying as Māori or Pacific ethnicity (18.0% and 12.0% respectively) were higher than the expected proportions (9.7% and 9.0% [51]. Our findings contribute to the limited body of published evidence for risk factors associated with cutting or piercing injuries. The findings are consistent with previous research which has examined the association between acute alcohol use and unintentional injury [13,52,53]. A meta-analysis of acute alcohol use and different classes of injury reported a per-drink (10 g pure alcohol) pooled-effect estimate for unintentional injuries (other than falls or motor vehicle accidents) of OR 1.32 (95% CI: 1.27, 1.36) [52]. This effect measure is similar to our adjusted odds ratio of consuming 1 to 3 units compared to none of 1.77 (95% CI: 0.84, 3.74). Our study found that the effect of alcohol on injury was stronger among smokers compared to non-smokers. The interaction of alcohol and smoking on a number of outcomes including fire and traffic injury has been the subject of a systematic review by Taylor et al. [54]. The authors concluded that this interaction may increase risk for traffic and fire injury, but suggested future research is required to confirm the relationship. Tobacco use, has previously been linked to some injuries [55], and impaired impulse control has been observed in smokers [56], and alcohol consumers [10]. Other explanations offered to explain the increased risk of injury among tobacco smokers include: direct toxicity from nicotine or carbon monoxide; distraction associated with lighting or disposing of cigarettes; or associated medical conditions; such as cardiovascular disease, cataracts or cancer which may impair performance of tasks; during which, injuries may occur [57]. Further analytical studies are required to confirm if the interaction between smoking and alcohol and injury risk exists. Conclusion Cutting and piercing injuries are a leading cause of home injuries among young and working aged-adults. As is the case with injuries resulting from motor-vehicle crashes and falls [6,7,12,13,36,52,58], acute-alcohol intake contributes to unintentional cutting or piercing injuries among young and middle-aged adults. Our analysis suggests that it may treble the odds. In people who smoke tobacco such an effect is increased, a finding that is worthy of further exploration. 
The study findings add to the impetus to enact policies that dissuade problem drinking and limit access to alcohol, which would reduce the risk of injuries both at home and on the highway. List of abbreviations: AUDIT: Alcohol Use Disorders Identification Test; BAC: blood alcohol concentration; CI: confidence interval; OR: odds ratio; RR: relative risk.
Utilization of Solar Power by Rural Households in Ikole Local Government Area of Ekiti State, Nigeria - The study investigated solar power utilization amongst rural households in Ikole Local Government Area, Ekiti State. Specifically, it investigated the level of awareness; evaluated perception towards solar power, and identified the constraints to solar power utilization. A two-stage sampling procedure was used. The first stage involved a random selection of six rural communities. Twenty rural household heads were selected from the rural communities making one hundred and twenty respondents in the second stage. Awareness of usage of solar power to charge phones (90.8 %) and for household for lightening (90 %) ranked 1st and 2nd while the utilization of solar power for charging of phones (mean = 4.81), lightening of the house (mean = 4.58), lightening of the shop/office facilities (mean = 3.91), ranked 1st, 2nd and 3rd respectively. The respondents had a positive perception towards solar power utilization in terms of its reliability (x = 4.76), saving fuel costs (x = 4.76) and no payment for electricity bills (x=4.72) among others. ‘Requires sunny weather to work best’ (mean= 4.70) and ‘It is not reliable in raining season’ (mean = 4.29) were the strongest constraints to its utilization. A significant relationship existed between perception about solar power and utilization of the same at p ≤ 0.05. Solar power is moderately utilized among rural dwellers to enhance their living standard. It is recommended that Government, NGOs, and other stakeholders should provide solar powered lightening infrastructures for public utility and also make it inexpensive for to low-income rural dwellers through subsidized, instalment payments. INTRODUCTION ncreasing urban and rural population has created a demand that is beyond supply for power and as a result a good number of suburban and rural areas are not even connected to the national grid.The use and utilization of solar power could be a panacea to ameliorating this deficit.(Mohammed et al., 2013).Shaaban and Petinrin (2014), also affirmed that the country is suffering from high shortage of power and that solar power can be of importance in solving this problem and in improving the lives of residents of rural and suburban areas.Baurzhan and Jenkins (2016); and Monyei et al. (2017), reported that out of about 1.2 billion people living without access to stable electricity, 50 per cent of them reside in Sub-saharan Africa.Nigeria, the giant of Africa has about 100 million citizens living without access to stable source of power (Akinyele et al., 2017;Yakubu & Ifeanyi-Nwaoha, 2017).The provision of essential services and infrastructure in the urban and suburban areas for human needs like water supply, health facilities, cooking, communication requires energy supply.The rural population especially require energy for improvement of rural life and for food security through irrigation, agroprocessing, fertilization, irrigation and land preparation.(FAO, 2016).Fundamental to the development and growth of a nation's economy is its power supply.Therefore, energy supply is of paramount importance in sustaining the wheel of technological development of a nation Ayodele and Ogunjuyigbe (2015); Rafindad (2016), and Akuru et al. 
(2017) established that a consistent power supply is an essential requirement for economic development, creation of job, poverty reduction, industrialization, manufacturing, commerce, infrastructural development and security.Anumaka (2012) stated that power is playing a great role in the world of our time as its developmental growth depends largely on abundant energy. *Corresponding Author The utilization of solar energy has become an increasing phenomenon in Nigeria over the years.This becomes essential and unavoidable because of the inconsistent and epileptic nature of electric power supply from other nonrenewable sources.Aside from the age long use of solar energy for sun drying of agricultural produce, solar power has found other important uses over the years in many parts of the nation.Solar power has to do with the translation of the energy of the sun into electricity using photovoltaics cell (PVC) directly or by the use of concentrated solar power, or a fusion or merger of the two.Photovoltaic cells through photovoltaic effect are capable of translating light into electric current.The Photovoltaic system is a technology that is acknowledged by the Food and Agriculture Organisation, FAO, as one that is meeting the needs of the world at household levels and is also making a great impact in generating income and enhancing agricultural productivity (FAO, 2016).Some of the places where solar PV have been used include households, agricultural productivity, off-farm productive uses (rural and cottage industry, commercial services and small business development), social and community services, and other productive activities, I namely: billboards/advertising etc. Solar energy is known as one of the most useful choices among the renewable energy sources.It is pollution free, abounding, and free (Fang and Song, 2018).Consequent upon this, this study examined the awareness of solar power among rural households; evaluated the rural household perception towards solar power; assessed the level of utilization of solar power and identified the constraints to solar power utilization among rural households.The hypothesis of this study is as follows: Ho1: There is no significant relationship between the perception of rural household heads towards solar power and its utilization. METHODOLOGY 2.1 STUDY AREA The study was carried out in Ikole Local Government area (LGA) in Ekiti State, Nigeria.It has 16 communities and is the third largest of the sixteen LGAs in the state.coordinates are 7°40'N5°15'E.All rural households in the LGA are the population of the study. SAMPLE SELECTION The first of a two-stage sampling procedure involved a random selection of six rural communities from the LGA based on their level of rurality.Twenty rural households from the rural communities were selected in the second stage, making one hundred and twenty respondents.Data collection was done through structured interview schedule and questionnaire.The dependent variable of the study was utilization of solar power.The respondents were given a list of statements of usage and were asked to indicate their level of utilization appropriately as; highly utilized 5, moderately utilized 4, utilized 3, partially utilized, 2 and not utilized 1 point.The total score was categorized into three: high, low and medium and 3.0 was the decision mean. 
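A minimal sketch of the scoring procedure just described is given below: item means on the five-point scale are compared against the 3.0 decision mean. The responses shown are hypothetical and only illustrate the calculation.

```python
import statistics

# Hypothetical 5-point responses (5 = highly utilized ... 1 = not utilized)
# for two of the utilization items in the questionnaire.
responses = {
    "Charging of phones":       [5, 5, 4, 5, 4, 5, 5, 4],
    "Refrigeration of produce": [2, 3, 1, 2, 3, 2, 1, 2],
}

DECISION_MEAN = 3.0
for item, scores in responses.items():
    mean = statistics.mean(scores)
    verdict = "utilized" if mean >= DECISION_MEAN else "not utilized"
    print(f"{item}: mean = {mean:.2f} -> {verdict}")
```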
THE LEVEL OF AWARENESS OF THE USE OF SOLAR POWER IN RURAL HOUSEHOLDS Table 1 shows all the respondents in the study area were aware that solar power can be used to charge phones and can be used for lightening of the house while 90.8 and 90 per cent of the respondents were aware that solar power can be used to power electrical appliances and for lightening of shop/office facility respectively ranking 3 rd and 4 th .Also, street lightening (90%), lightening of pen (79.2.0%), pumping of borehole (77.5%), refrigeration of farm produce (62.5%)ranking 5 th , 6 th , and 7 th respectively.Klepacka et al, (2018) and Nwalule and Mzuza ( 2022) reported a high level of awareness of solar power among rural households.Source: Field survey, 2022. LEVEL OF UTILIZATION OF SOLAR POWER Data in Table 2 revealed the utilization of solar power by rural household as follows: for charging of phones (mean = 4.81), refrigerating vaccines in rural health care systems (mean = 4.70), lightening of the house (mean = 4.58), lightening of the shop/office facilities (mean = 3.91), ranking 1 st , 2 nd 3 rd and 4 th respectively.The result showed that solar power is moderately utilized in the study area.This is in agreement with Oyedepo (2014) that solar power makes business premises more identified at night and reduces crime rate in Nigerian rural areas and with the studies of Haque et al. (2013) who reported that solar power is a good substitute for kerosene in Bangladesh and that of Nwalule and Mzuza (2022) also reported that 52 percent of the respondents in their study area in Malawi used solar power for lighting.Source: Field survey, 2022 PERCEPTION TOWARDS SOLAR POWER Table 3 shows the favourable perception of the respondents towards solar power utilization in terms of its reliability (x=4.8),saving fuel costs (x=4.7) and paying of electricity bills, (x=4.4)not producing unpleasant noise, (x=4.7), and its eco-friendly, (x=4.6).Zarma et al. (2017) reported that solar power is gaining increasing relevance in terms of its utilization in Nigeria.This is undoubtedly because power deprived rural dwellers have positive perception towards solar power utilization.Also, Klepacka et al. (2018) Source: Field survey, 2022 r=0.83, r 2 =0.68, ** significant at 0.01% level, * significant at 0.0 5% level TEST OF HYPOTHESIS The result in Table 5 showed that there is a significant relationship between respondents' perception of solar power and their utilization of the same at 0.01% and 0.05% respectively.Therefore, the hypothesis is rejected This may not be unconnected with the positive perception and high level of utilization of solar power in the study area as supported by the findings of Zarma et al. (2017). CONCLUSION AND RECOMMENDATION Results of this study shows that rural households are aware of and have favourable perception to solar power but could not utilise it well enough to enhance their standard of living as the cost of installation is a major constraint.It is recommended that NGOs, and other stakeholders should provide solar powered lightening infrastructures for public utility and make solar power available and affordable to rural dwellers through subsidized, instalment payments.Enlightened community stakeholders, suppliers and beneficiaries should conduct trainings and re-trainings programmes to enhance awareness and skill acquisition on solar power. Fig. 1 : Fig. 1: Map of Ekiti State showing Ikole Local Government Area Table 1 . Distribution of respondents according to awareness of the uses of solar power. Table 2 . 
Distribution of respondents according to level of utilization of Solar power Table 3 . Distribution of rural household heads based on their perception towards solar power reported that rural households in Poland have positive attitude towards solar power because it is clean, accessible and cost saving.They have relatively unfavourable perception towards solar power in terms of Not effective in rainy season (x=3.9),No energy production at night(x=3.6)and cost of installation © 2023 The Author(s).Published by Faculty of Engineering, Federal University Oye-Ekiti.This is an open access article under the CC BY NC license.(https://creativecommons.org/licenses/by-nc/4.0/)http://doi.org/10.46792/fuoyejet.v8i4.1114http://journal.engineering.fuoye.edu.ng/418 Table 4 . Distributions of respondents according to the constraints to the utilization of Solar power Table 5 . Regression analysis of relationship between perception towards solar power and utilization
Benchmarks for physics-informed data-driven hyperelasticity Data-driven methods have changed the way we understand and model materials. However, while providing unmatched flexibility, these methods have limitations such as reduced capacity to extrapolate, overfitting, and violation of physics constraints. Recent developments have led to modeling frameworks that automatically satisfy these requirements. Here we review, extend, and compare three promising data-driven methods: Constitutive Artificial Neural Networks (CANN), Input Convex Neural Networks (ICNN), and Neural Ordinary Differential Equations (NODE). Our formulation expands the strain energy potentials in terms of sums of convex non-decreasing functions of invariants and linear combinations of these. The expansion of the energy is shared across all three methods and guarantees the automatic satisfaction of objectivity and polyconvexity, essential within the context of hyperelasticity. To benchmark the methods, we train them against rubber and skin stress-strain data. All three approaches capture the data almost perfectly, without overfitting, and have some capacity to extrapolate. Interestingly, the methods find different energy functions even though the prediction on the stress data is nearly identical. The most notable differences are observed in the second derivatives, which could impact performance of numerical solvers. On the rich set of data used in these benchmarks, the models show the anticipated trade-off between number of parameters and accuracy. Overall, CANN, ICNN and NODE retain the flexibility and accuracy of other data-driven methods without compromising on the physics. These methods are thus ideal options to model arbitrary hyperelastic material behavior. Introduction The frontier of biomedical engineering applications such as personalized surgery requires accurate mathematical models of material-specific behavior [1]. Similarly, human-engineered systems based on soft materials also necessitate predictive simulations with high precision [2]. The materials for these applications are extremely nonlinear and undergo large deformations, e.g. rubber and skin. Yet, despite decades of effort developing constitutive equations for these materials, there still isn't a definitive model for them due to inherent limitations of expert-constructed models [3]. Traditional material modeling restricts the prediction of tutive models that satisfy objectivity and polyconvexity a priori. Yet, there is a gap in our understanding of how these different methods perform on benchmark datasets, and a general need to benchmark machine learning methods in computational mechanics [26,27]. Rubber modeling was the center of attention for large deformation hyperelasticity in the past century, with tens of constitutive models proposed [28]. Recently, advances in soft robotics has renewed the interest in developing improved high-fidelity simulations of soft robots made of rubbers and other elastomers. For example, applications that aim at produce complex motion such as tentacle grippers, walking soft robots, and rehabilitation soft exoskeletons [29], all require precise modeling of the material response. Soft tissues made of collagen have remarkable mechanical properties. They show exponentiallike stress-strain response and anisotropy. These nonlinearities allow tissues like skin to protect us against environmental harm while allowing interaction and movement [30]. 
The development of constitutive models for soft tissues, and skin in particular, dates to the seminal work by Lanir and Fung [31,32], and has resulted in a long list of strain energy functions proposed over the past five decades [3]. New models are being proposed even today [33,34]. Despite the rich literature on skin and soft tissue modeling, the complexity of the material response has prevented the emergence of a categorically superior constitutive model.

The manuscript is organized as follows. In the Methods section we first review the basic equations that describe the mechanical behavior of hyperelastic materials, with emphasis on strain energy function expansions that satisfy objectivity and polyconvexity requirements. Then, we show how CANN, ICNN and NODE architectures can be used to create material models within the considered families of elastic potentials. After training the three methods on the rubber and skin datasets, the Results section explores in detail the ability of the models to interpolate and extrapolate, their robustness with respect to model initialization, the regularity of the second derivatives of the energy, and the trade-off between number of parameters and model accuracy. We finally discuss the results in the context of other data-driven efforts for computational mechanics.

Polyconvex strain energy density functions

Consider a motion φ; its gradient F = ∇φ contains all the local information about the deformation. Within the framework of hyperelasticity, the strain energy function Ψ(F) fully defines the material response. Polyconvexity implies that the energy Ψ(F) can be expressed as a convex function of the extended set of arguments (F, cof F, det F). Intuitively, this extended domain covers different modes of deformation: F measures changes in length, cof F changes in area, and J = det F changes in volume. It is difficult to work directly with the deformation gradient and its cofactors as inputs to the strain energy. Instead, the right Cauchy-Green deformation tensor C = Fᵀ F is used because it does not contain information about superimposed rigid body rotations. Furthermore, objectivity is enforced by working with the invariants

I_1 = tr C,  I_2 = (1/2)[(tr C)^2 − tr(C^2)],  I_3 = det C = J^2,  I_4a = a_0 · C a_0,  I_4s = s_0 · C s_0.   (1)

[Figure 1, panel (a): Constitutive Artificial Neural Networks]

The last two invariants in eq. (1) are only relevant for transversely anisotropic materials and depend on the deformation of two material unit vectors a_0, s_0. For soft tissues, the vectors a_0, s_0 represent collagen fiber bundle orientations. Furthermore, for nearly incompressible materials such as rubbers and skin, the split between volumetric and isochoric parts is often used. The isochoric part of the deformation is F̄ = J^(−1/3) F, with the corresponding deformation tensor C̄ = F̄ᵀ F̄. The isochoric invariants follow as

Ī_1 = J^(−2/3) I_1,  Ī_2 = J^(−4/3) I_2,  Ī_4a = J^(−2/3) I_4a,  Ī_4s = J^(−2/3) I_4s.   (2)

Based on the split between the isochoric and volumetric parts of the deformation, the energy can be additively decomposed into

Ψ = Ψ_iso(Ī_1, Ī_2, Ī_4a, Ī_4s) + Ψ_vol(J).   (3)

For polyconvexity to be satisfied in this additive split, one requirement is convexity of Ψ_vol together with the growth conditions Ψ_vol → ∞ as J → 0 or J → ∞. In the case of fully incompressible materials, the volumetric part of the strain energy is replaced by p(J − 1), where p is a Lagrange multiplier field that enforces J = 1. In simple loading cases such as uniaxial or biaxial deformation, p can be directly determined from the boundary conditions. In addition, for incompressible behavior the isochoric part of the energy becomes a function of the original invariants defined in eq. (1).
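A short numerical sketch of the kinematic quantities in eqs. (1)-(2) is given below; the function and variable names are illustrative and do not come from any of the three modeling frameworks.

```python
import numpy as np

def invariants(F, a0=None, s0=None):
    """Invariants of C = F^T F and their isochoric counterparts (eqs. 1-2)."""
    J = np.linalg.det(F)
    C = F.T @ F
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
    inv = {"J": J, "I1": I1, "I2": I2,
           "I1_bar": J ** (-2 / 3) * I1, "I2_bar": J ** (-4 / 3) * I2}
    if a0 is not None:
        inv["I4a"] = a0 @ C @ a0   # squared stretch along fiber family a0
    if s0 is not None:
        inv["I4s"] = s0 @ C @ s0   # squared stretch along fiber family s0
    return inv

# Incompressible uniaxial stretch along x: F = diag(l, 1/sqrt(l), 1/sqrt(l))
lam = 1.3
F = np.diag([lam, lam ** -0.5, lam ** -0.5])
print(invariants(F, a0=np.array([1.0, 0, 0]), s0=np.array([0, 1.0, 0])))
```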
To ensure polyconvexity of Ψ_iso, recall that this condition requires the energy to be expressible as a convex function of (F, cof F, det F). The invariant I_1 is convex in F, while I_2 is convex in cof F. The anisotropic invariants I_4a, I_4s are also convex in F. Moreover, the isochoric split preserves polyconvexity of Ī_1, Ī_4a, Ī_4s, and a simple scaling with a power of J is enough to maintain polyconvexity of Ī_2. Thus, a sufficiently large family of polyconvex functions has the form

Ψ = Ψ_1(Ī_1) + Ψ_2(Ī_2) + Ψ_4a(Ī_4a) + Ψ_4s(Ī_4s) + Ψ_vol(J),   (4)

with each of the Ψ_i a convex, non-decreasing function of its argument, while, as mentioned previously, Ψ_vol has to be convex and grow to infinity appropriately with changes in J. Again, for incompressibility, the last term in (4) is replaced with the Lagrange multiplier constraint, and the Ψ_i terms can be considered as functions of the invariants in eq. (1).

Stress predictions from a strain energy potential

Given a strain energy function, the second Piola-Kirchhoff stress follows from the standard Coleman-Noll procedure [6],

S = 2 ∂Ψ/∂C.   (5)

Other stress tensors can be easily obtained with push-forward operations, for instance the nominal or first Piola-Kirchhoff stress P = FS, or the Cauchy stress σ = J^(−1) F S Fᵀ, which appear in the strong form of linear momentum balance in the reference or deformed configurations, respectively. Since the energy is expressed in terms of the invariants, computing the stress requires the standard derivatives

∂I_1/∂C = 1,  ∂I_2/∂C = I_1 1 − C,  ∂I_3/∂C = I_3 C^(−1),  ∂I_4a/∂C = a_0 ⊗ a_0,  ∂I_4s/∂C = s_0 ⊗ s_0,   (6)

where 1 denotes the second-order identity. The same derivatives as in eq. (6) apply to the derivatives of the isochoric invariants with respect to C̄. However, when using the split into volumetric and isochoric parts, we additionally need the projection

∂C̄/∂C = J^(−2/3) [ I − (1/3) C ⊗ C^(−1) ].   (7)

The tensor I in eq. (7) denotes the fourth-order identity. The volumetric contribution involves

∂J/∂C = (J/2) C^(−1).   (8)

Bringing it all together, the second Piola-Kirchhoff stress takes the form

S = 2 Σ_i (∂Ψ_iso/∂Ī_i) (∂Ī_i/∂C) + J (∂Ψ_vol/∂J) C^(−1).   (9)

It is possible to extend eq. (4) to capture an even wider class of materials. Convex linear combinations of the invariants maintain polyconvexity with respect to F. Therefore, in addition to the invariants in eq. (1) or their isochoric counterparts in eq. (2), we can consider the mixed invariants

I_ij = α I_i + (1 − α) I_j,  with 0 ≤ α ≤ 1,   (10)

and the corresponding isochoric versions Ī_ij. The family of strain energies considering these mixed terms has the following structure

Ψ = Σ_i Ψ_i(Ī_i) + Σ_ij Ψ_ij(Ī_ij) + Ψ_vol(J).   (11)

The expression for S in this more general case is analogous to eq. (9), but with additional terms to account for the Ψ_ij contributions.

Uniaxial, pure shear, and biaxial loading

For the specific case of isotropic uniaxial deformation of a perfectly incompressible material, the deformation depends on the single stretch λ, and the nominal stress in the direction of the applied stretch is

P = 2 (λ − λ^(−2)) (∂Ψ/∂I_1 + λ^(−1) ∂Ψ/∂I_2).   (12)

For pure shear deformation of a wide but thin specimen, the nominal stress in the direction of the applied stretch is

P = 2 (λ − λ^(−3)) (∂Ψ/∂I_1 + ∂Ψ/∂I_2).   (13)

The third loading case of interest for thin incompressible isotropic materials is equibiaxial tension defined by the stretch λ. For this loading, the nominal stress in the two principal directions of applied stretch is the same and equal to

P = 2 (λ − λ^(−5)) (∂Ψ/∂I_1 + λ^2 ∂Ψ/∂I_2).   (14)

Lastly, we consider an incompressible transversely anisotropic material under arbitrary biaxial loading specified by the two stretches λ_x, λ_y. Without loss of generality we set a_0 = [1, 0, 0], s_0 = [0, 1, 0]. The in-plane nominal stresses are

P_x = 2 λ_x [∂Ψ/∂I_1 + (λ_y^2 + λ_z^2) ∂Ψ/∂I_2 + ∂Ψ/∂I_4a] − p/λ_x,
P_y = 2 λ_y [∂Ψ/∂I_1 + (λ_x^2 + λ_z^2) ∂Ψ/∂I_2 + ∂Ψ/∂I_4s] − p/λ_y,   (15)

with the pressure Lagrange multiplier solved from the plane stress condition,

p = 2 λ_z^2 [∂Ψ/∂I_1 + (λ_x^2 + λ_y^2) ∂Ψ/∂I_2],   (16)

and the normal stretch obtained from the incompressibility constraint,

λ_z = 1/(λ_x λ_y).   (17)

CANN models

To construct convex non-decreasing functions to represent the energy in eq. (11), one way is to borrow from the architecture of feed-forward neural networks, but using only convex non-decreasing activation functions on a polynomial expansion. The method is illustrated in Fig. 1a.
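The incompressible, isotropic loading relations in eqs. (12)-(14) can be made concrete with a short numerical sketch; the constant energy derivatives used here correspond to a hypothetical Mooney-Rivlin-type material and are for illustration only.

```python
# Nominal stress P(lambda) for an incompressible, isotropic hyperelastic solid,
# written in terms of the energy derivatives dPsi/dI1 and dPsi/dI2.
def uniaxial(lam, dPsi_dI1, dPsi_dI2):
    return 2.0 * (lam - lam ** -2) * (dPsi_dI1 + dPsi_dI2 / lam)

def pure_shear(lam, dPsi_dI1, dPsi_dI2):
    return 2.0 * (lam - lam ** -3) * (dPsi_dI1 + dPsi_dI2)

def equibiaxial(lam, dPsi_dI1, dPsi_dI2):
    return 2.0 * (lam - lam ** -5) * (dPsi_dI1 + lam ** 2 * dPsi_dI2)

# Hypothetical Mooney-Rivlin-type constants (MPa): Psi = c1 (I1 - 3) + c2 (I2 - 3)
c1, c2 = 0.2, 0.05
for lam in (1.5, 2.0, 3.0):
    print(lam, uniaxial(lam, c1, c2), pure_shear(lam, c1, c2), equibiaxial(lam, c1, c2))
```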
Starting from F, the invariants eq.(1) are computed in a pre-processing step. For ease of implementation and to improve the optimization step during model training, consider the normalized invariantsˆ= where 1 = 2 = 3, 4 = 4 = 1, and is a normalizing constant such that the range ofˆis approximately [0, 3]. Note that in the case of full incompressibility as assumed from now on, the normalized invariants strictly satisfyˆ≥ 0. For compressible or nearly incompressible materials, simply replace with the isochoric counterpart¯in eq. (18). For the mixed invariants, the normalized version iŝ which also satisfiesˆ≥ 0 as along as For the general case including anisotropy, the strain energy can be summarized as where ( ) = is a basic polynomial expansion with ∈ {1, 2, 3} in our implementation, (•) is the activation function choice in our case either identity 1 ( ) = or exponential 2 ( ) = exp( ) − 1. The notation is the same for the mixed invariants. The weights , , , , , , , are the trainable parameters of the model and need to be non-negative to maintain the convex non-decreasing output. For the rubber examples below, we only use the two main invariantsˆ1,ˆ2. For the skin example we have two ansatz. The simpler model includes contributions fromˆ1,ˆ2,ˆ4 4 . The second option for the skin examples takes inputsˆ1,ˆ2,ˆ1 2 ,ˆ1 4 ,ˆ1 4 ,ˆ4 4 . Our choice of polynomials and activation functions guarantee the interpolation of convex non-decreasing functions of the inputsˆ,ˆin the domainˆ,ˆ≥ 0 provided that only nonnegative weights are used which is easy to enforce. The non-negative condition on the domain, ,ˆ≥ 0, is trivially satisfied for incompressible materials, and satisfied for compressible or nearly incompressible materials if the isochoric invariants are used in eq. (18). Thus, CANNs a priori result in polyconvex strain energy functions. ICNN models This algorithm also relies on building convex functions of the normalized invariants and linear combinations of them. Let be the input to the first layer, and Z −1 the output of layer − 1. Then, for layer the output is parameterized by the weights W , , W , and biases b . For the first layer we have while for the last layer This architecture retains convexity because softplus 2 (•) is a convex non-decreasing function evaluated on linear combinations of the original input and the intermediate layer outputs using non-negative weights (enforced with the exp(•) function). Therefore, ICNNs can be used to create convex non-decreasing functions of the same normalized invariantsˆand normalized mixed invariantsˆdefined in eqs. (18), (19) for the CANN models. The general expansion is Similarly to the CANNs, for the rubber examples using ICNNs we only consider two functions 1 (ˆ1), 2 (ˆ2). For the anisotropic examples we have two models. The simpler one uses three functions 1 (ˆ1), 2 (ˆ2), 4 4 (ˆ4 4 ). The second anisotropic model also includes the mixed terms 1 4 (ˆ1 4 ), 1 4 (ˆ1 4 ), 1 2 (ˆ1 2 ). NODE models In contrast to the two previous methods, NODEs avoid interpolation of the energy and interpolate the derivative functions directly. In the end, the derivatives with respect to the invariants are the ones that enter the definition of the stress, see eq. (9). Consider the normalized invariantˆ, the NODE is a feed-forward neural network with weights W and biases b that define the function (•) of the ODE where is a pseudo-time auxiliary variable. The output of interest is the solution of the ODE at a fixed pseudo-time. 
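As a sketch of how the CANN and ICNN constructions described above yield convex, non-decreasing functions of a normalized invariant, the snippet below implements one CANN-style term (powers of the input passed through identity and exponential activations with non-negative weights) and a tiny ICNN-style network (softplus layers whose weights are kept non-negative through the exponential). The weight values and network sizes are made up for illustration and do not correspond to the trained models reported here.

```python
import numpy as np

softplus = lambda t: np.log1p(np.exp(t))

def cann_term(x, w):
    """One CANN-style term built from powers of a normalized invariant x >= 0.

    Each power x**p is passed through the identity and through exp(.) - 1, and
    the results are combined with non-negative weights, so the term is a
    convex, non-decreasing function of x that vanishes at x = 0.
    """
    out = 0.0
    for k, p in enumerate((1, 2, 3)):
        out += w[2 * k] * x ** p + w[2 * k + 1] * (np.exp(x ** p) - 1.0)
    return out

def icnn_scalar(x, params):
    """Tiny ICNN: softplus layers with non-negative weights (via exp)."""
    W1, b1, W2, W2x, b2, w3 = params
    z = softplus(np.exp(W1) * x + b1)                     # first hidden layer
    z = softplus(np.exp(W2) @ z + np.exp(W2x) * x + b2)   # pass-through of x keeps convexity
    return float(np.exp(w3) @ z)                          # non-negative output weights

rng = np.random.default_rng(0)
w_cann = np.abs(rng.normal(size=6))                       # non-negative CANN weights
params = (rng.normal(size=4), rng.normal(size=4),
          rng.normal(size=(4, 4)), rng.normal(size=4),
          rng.normal(size=4), rng.normal(size=4))
for x in (0.0, 0.5, 1.0, 2.0):                            # outputs grow with x
    print(x, cann_term(x, w_cann), icnn_scalar(x, params))
```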
In this implementation we choose = 1, Note that the output is directly the derivative of the strain energy. The key observation is that trajectories of ODEs do not intersect, thus for two initial conditions ( ) (0) ≥ ( ) (0), the ensuing trajectories continue to satisfy ( ) ( ) ≥ ( ) ( ). This implieŝ The monotonicity of the output eq. (27) is equivalent to convexity of the underlying . For the mixed invariants, the NODE defines the derivativê for an ODE analogous to eq. (25). Therefore, when using NODE models we do not recover an analytical expression for NODE . Nevertheless, the energy can be integrated if needed along a given trajectory overˆ,ˆ. Even though convexity of NODE with respect to the invariants is ensured by eq. (27), to construct convex non-decreasing functions the additional restriction of zero biases b = 0 is applied. With this last correction, the energy NODE is automatically polyconvex. Benchmark datasets and test cases We consider two datasets in this study, a classicall rubber dataset including uniaxial tension (UT), pure shear (PS), and equibiaxial tension (ET) nominal stress-stretch data from [8]. The other dataset is from porcine skin and consists of three biaxial tests: strip biaxial in the direction (SX), i.e. = is applied and the orthogonal direction is kept at = 1, strip biaxial in direction (SY), and equibiaxial tension (EB). Data from the skin data comes from [7], and is also nominal stress-stretch data. Performance on rubber dataset The rubber dataset contains three mechanical tests as described in the Methods Section. To test the ability of the data-driven methods to extrapolate we trained first against one of the three tests and compared against the other two. Results are depicted in the first three columns of Fig. 2. Not surprisingly, all three methods perfectly capture the loading curve on which they are trained on (Fig 2a,f,k). However, the methods have difficulty extrapolating. Depending on which test was used for training, the performance on the validation data varies. When trained on uniaxial data, predictions on the other two tests are inaccurate, with stiffer predictions in all cases compared to the data (Fig 2e,i). The ICNN trained on pure-shear data is still able to capture the response in biaxial and uniaxial loading (Fig 2c,g). In contrast, the CANN model trained on PS data can predict UT and ET data up to an intermediate stretch after which the prediction exponential increases and diverges from the data. The NODE trained on PS data performs well on the UT dataset but not on the ET dataset. Equibiaxial training appears to be the best for extrapolating for all three methods. The prediction for ET data matches closely the experiments, see Fig.2f, and the predictions for uniaxial and pure shear qualitatively match the observed response albeit with some error (Fig.2b,j). To verify that the methods are indeed able to capture the entire response of the material, the last column of Fig. 2 shows predictions when CANN, NODE, ICNN models are trained on all data at once. All three methods flawlessly interpolate the entire dataset (Fig.2d,h,l). Results in Fig. 2 are representative, yet, they correspond to single fit from the CANN, NODE and ICNN models. To show the robust performance of the data-driven methods, we repeat the training 50 times and compute 2 values for each trained model. The 2 values are shown in Fig. 4 in a layout analogous to the representative training Fig. 2. The 2 values confirm the previous observations from Fig. 2. 
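Returning to the NODE construction described at the start of this section, the sketch below integrates a scalar neural ODE with non-negative weights and zero bias by forward Euler. Because trajectories of the ODE cannot cross, the output at pseudo-time one is a non-negative, monotone function of the input invariant, which is the property used above to guarantee convexity of the energy; the two-weight network is a toy stand-in for the feed-forward networks used in practice.

```python
import numpy as np

softplus = lambda t: np.log1p(np.exp(t))

def node_derivative(i_hat, w1, w2, steps=100):
    """dPsi/dI from a scalar NODE: integrate dz/dt = f(z) from z(0) = i_hat.

    f(z) = w2 * softplus(w1 * z) with w1, w2 >= 0 and zero bias, so f >= 0 and
    f is non-decreasing; trajectories never cross, hence z(1) is non-negative
    and monotone in the initial condition i_hat.
    """
    z = float(i_hat)
    dt = 1.0 / steps
    for _ in range(steps):          # forward Euler over pseudo-time [0, 1]
        z += dt * w2 * softplus(w1 * z)
    return z

w1, w2 = 0.8, 0.5                    # hypothetical non-negative weights
for i_hat in (0.0, 0.5, 1.0, 2.0):   # normalized invariant values
    print(i_hat, node_derivative(i_hat, w1, w2))
```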
For uniaxial training, R² values on the UT data are always approximately one (Fig. 4a), but there is little predictive performance on the other two tests (Fig. 4e,i). For PS training we confirm that NODE and ICNN are able to capture the PS response (Fig. 4k) and the UT response (Fig. 4c), but not the ET data (Fig. 4g). CANN models can capture the pure shear response just as well (Fig. 4k), but are unable to extrapolate to the other two loading cases (Fig. 4c,g). With 50 instances of model fitting, we can confidently state that equibiaxial tests are indeed the ones that allow the three machine learning models to better extrapolate to other loading cases. R² values in Fig. 4b,f,j are always greater than 0.656, with narrow standard deviations. Fig. 4d,h,l also confirms that when trained on all data at once, CANN, ICNN and NODE have no trouble fitting the data, achieving R² values of 0.971, 0.997, and 0.997 on average for each of the methods, respectively.
Performance on skin dataset
The anisotropy of skin leads to a poorer capacity of the three algorithms for extrapolation. Trained with either strip biaxial in x (SX), strip biaxial in y (SY), or equibiaxial tension (EB), the three methods can capture the response they are trained on but are unable to extrapolate, as illustrated in the first three columns of Fig. 3. To capture the transversely anisotropic response of skin, the number of parameters and the flexibility of the functional space available to the three data-driven methods enable them to produce a complex response, but at the same time this leads to unconstrained and poor predictions outside of the training region. For instance, trained on SX data, predictions under SX loading are remarkably accurate (Fig. 3a), but CANN models tend to predict stiffer responses in EB and SY loading (Fig. 3e,i); NODE predicts a stiffer response in SY loading (Fig. 3i) but an accurate response in EB loading (Fig. 3e), and ICNN performs well in EB loading (Fig. 3e) but predicts a soft response compared to the data in SY loading (Fig. 3i). To verify whether the models are able to capture the entire dataset we trained the CANN, NODE, and ICNN models with all the data simultaneously and show the fits in Fig. 3d,h,l. All three methods can capture the response when trained on all data; however, the fits are not perfect compared to the individual test fitting in Fig. 3a,f,k. The poorer performance in the simultaneous fitting is consistent between all three methods and suggests that the data themselves might be inconsistent with the assumption of hyperelasticity, that there are experimental errors, or that the functional space available to the data-driven models needs to be even richer. A more quantitative analysis of the performance is reported in Fig. 5, which shows R² values computed after 10 instances of model training with different, random initialization. Just as observed in the representative fits of Fig. 3, the R² scores on the loading used for training are near one (Fig. 5a,f,k), but they are low or even near zero for the validation cases (Fig. 5b,c,e,g,i,j). Surprisingly, there is still some information from the equibiaxial test (Fig. 5f) that is useful for extrapolation to the strip biaxial loading cases (Fig. 5b,j). Training on the strip biaxial tests, SY carries no information for the SX or EB data (Fig. 5c,g), while SX training does lead to some R² > 0 for EB (Fig. 5e) but not for the SY data (Fig. 5i). The methods are able to consistently interpolate the entire data from all three tests regardless of random initialization (Fig. 5d,h,l).
Figure 4: Performance of CANN, NODE and ICNN models against the skin mechanics dataset. Trained on strip (SX) data, the three models were compared to SX (a), equibiaxial (EB) (e) and strip (SY) data (i). Trained on EB data, comparison to SX, EB, SY data is shown in (b,f,j). Trained on SY data, comparison to SX, EB, SY is shown in (c,g,k). Trained on all data simultaneously, the three methods can capture the SX (d), EB (h) and SY response (l).
Regularity of second derivatives
Thus far we have focused on the performance of the data-driven models in capturing stress-stretch data, which directly relates to strain energy derivatives. However, using these highly nonlinear models in large-scale physics solvers, either implicit dynamics or equilibrium, requires computation of second derivatives of the energy. Therefore, even though second derivatives are not related to any data, we are interested in the regularity of the second derivatives for the CANN, ICNN and NODE models. For the rubber benchmark the models are based on the interpolation of two functions; Fig. 6 shows the corresponding second derivatives in the same layout as Fig. 2 and Fig. 4. It is surprising that even though all three methods capture the stress data quite well, they differ substantially in terms of their second derivatives. This reflects that there are many strain energy functions Ψ(I1, I2) that are polyconvex and that can capture the stress-stretch data under uniaxial, pure shear, and equibiaxial loading. The CANN, ICNN and NODE are suited to capture different functions within the large space of functions available to each method. The consistent trend in Fig. 6 is that the CANN models often lead to exponential second derivatives because one of the two key activation functions is the exponential. In contrast, the NODE model is the one with the smallest second derivatives in all cases. For all three methods, the second derivatives are smooth functions. For the skin benchmark, there are more functions being interpolated by the three data-driven frameworks. As a result, Fig. 7 shows the second derivatives ∂²Ψ/∂I1², ∂²Ψ/∂I2², ∂²Ψ/∂I4², and the corresponding derivatives with respect to the remaining anisotropic invariant. The NODE second derivatives are again the smallest of the three methods. The second derivatives might increase over some initial range of deformation but tend to smaller values toward the end of the testing ranges (Fig. 7i-l). The CANN (Fig. 7a-d) and ICNN (Fig. 7e-h) methods have increasing second derivatives over the range of the invariants. Also similar to the rubber benchmark, here we see that even though all three methods perform similarly on the stress-stretch predictions (see Fig. 3), they do so by interpolating different functions of the invariants.
Model efficiency
A key question and common criticism of data-driven models, particularly neural network-based models, is that increasing the number of trainable parameters logically allows the methods to capture the limited data better and better, but at the risk of over-fitting. The polyconvexity constraint, enforced exactly for the CANN, ICNN and NODE models, prevents nonphysical extrapolation, much like expert models. On the other hand, expert models and some non-parametric data-driven methods [35] are generally very efficient and capture the data reasonably well with very few parameters. We test how efficiently the data-driven models can interpolate the data, i.e., we ask how the error decreases as a function of the number of trainable parameters. Fig. 8 shows the efficiency plots for the rubber benchmark. The structure of the CANN model is between that of a neural network and an expert model. As a result, there is a single point for the CANN model in each of the plots in Fig. 8.
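The efficiency plots above vary the number of trainable parameters of the ICNN and NODE networks. As a rough sketch (the layer sizes below are assumed for illustration, not taken from the paper), the parameter count of a small fully connected network can be tabulated as follows.

```python
# Parameter bookkeeping for a fully connected network with bias terms.
def n_params(layer_sizes, use_bias=True):
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += n_in * n_out + (n_out if use_bias else 0)
    return total

# e.g. one derivative function approximated by a small scalar-to-scalar MLP;
# widening the hidden layers sweeps out points on an efficiency curve.
for width in (2, 4, 8, 16):
    print(width, n_params([1, width, width, 1]))
```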
For the ICNN and NODE models, the error decays with an increasing number of parameters, as expected. When there are 52 trainable parameters, the NODE and ICNN show similar performance in all the training cases. However, the drop in the error is more pronounced for the ICNN compared to the NODE framework. This suggests that the NODE model can capture the data well even with very few parameters. The efficiency trends are not preserved for the anisotropic skin data, as shown in Fig. 9. In this case, in order to explore the effect of the number of parameters on the accuracy of the methods, we follow two strategies: reducing the ansatz by interpolating only the functions in (4), or using the full expansion (11). For the CANN model, which has a fixed number of parameters when considering either (4) or (11), we observe that the full ansatz has lower errors than the reduced one for all training cases. The flexibility of the framework increases by including the mixed terms, which helps with capturing the data better. This is consistent with the development of mixed-invariant terms in popular closed-form constitutive equations such as the Gasser-Ogden-Holzapfel model [36]. The ICNN and NODE also show decreasing errors when going from the reduced model (4) to the model including mixed invariants (11). The improvement is much more pronounced for the NODE model compared to the ICNN one. In contrast to the rubber dataset, for skin, increasing the number of parameters of the neural networks used in the NODE models leads to a large decrease in error. The ICNN model error decreases only slightly with an increasing number of parameters. At the upper end of the range considered, i.e. approaching 200 parameters, both ICNN and NODE perform similarly. The most efficient of the methods for the skin data is the CANN, which achieves the lowest errors with the lowest number of parameters.
Discussion
This manuscript analyzes the performance of three data-driven methods for isotropic and anisotropic hyperelastic materials that automatically satisfy objectivity and polyconvexity of the strain energy. Traditional closed-form models rely on selecting a few functional terms to capture the response of a material in a parsimonious way with few parameters to fit. Closed-form expressions are an elegant solution but also have the major downside of sacrificing accuracy. Data-driven methods have the flexibility to perfectly interpolate data. However, a barrier to their adoption is the difficulty of guaranteeing the basic physics constraints that are front and center in the design of expert models [15,37]. Objectivity and polyconvexity are the key requirements to represent realistic materials. The CANN, ICNN, and NODE models studied here are constructed in such a way that they a priori satisfy these essential physics constraints. Therefore, these three methods have the potential to revolutionize modeling and simulation of highly nonlinear materials. We show that the three methods can interpolate rubber and skin benchmark datasets for isotropic and anisotropic hyperelasticity. They capture the data almost perfectly and have some capacity to extrapolate when trained on part of the data. The second derivatives are smooth, which is needed for equilibrium and implicit dynamic solvers. The models show the expected trade-off between accuracy and number of parameters. Overall, any of the three modeling frameworks is suitable for fully data-driven material modeling.
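Because implicit and equilibrium solvers need well-behaved second derivatives, a simple way to inspect them for any fitted energy is a central finite-difference check on a grid of invariants. The sketch below uses a placeholder Mooney-Rivlin-like energy purely as a stand-in, not one of the fitted models from this study.

```python
# Finite-difference check of second derivatives of a strain energy Psi(I1, I2).
import numpy as np

def psi(I1, I2, c1=0.2, c2=0.05):
    # placeholder polyconvex energy; swap in any fitted model here
    return c1 * (I1 - 3.0) + c2 * (I2 - 3.0) ** 2

def d2_dI1(I1, I2, h=1e-4):
    return (psi(I1 + h, I2) - 2.0 * psi(I1, I2) + psi(I1 - h, I2)) / h**2

def d2_dI2(I1, I2, h=1e-4):
    return (psi(I1, I2 + h) - 2.0 * psi(I1, I2) + psi(I1, I2 - h)) / h**2

for I1 in np.linspace(3.0, 12.0, 50):
    assert d2_dI1(I1, 3.5) >= -1e-6 and d2_dI2(I1, 3.5) >= -1e-6
print("second derivatives are non-negative on the sampled range")
```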
The methods we analyze here stand in contrast to other recent developments in data-driven computational mechanics. The most obvious way of leveraging machine learning tools is to directly interpolate strain-stress data. There are methods along those lines developed in recent years [38]. One limitation of these approaches is the inability to extrapolate. Another problem of dealing with stress data is that objectivity and polyconvexity are not satisfied a priori. Data-driven models that capture the strain energy function are more similar to expert models [18]. CANN, ICNN and NODE fall into this category. For the data-driven models that interpolate the strain energy, one option to impose polyconvexity is through the loss function. These methods have had some success but have to carefully balance imposing the constraint against achieving higher accuracy [7,39]. Another approach is to select the best model out of a wide library of available models [40]. CANN, ICNN and NODE models have been developed recently to automatically satisfy polyconvexity, which is a sufficient condition for the solution of boundary value problems in hyperelasticity. The original versions of these methods were introduced in recent publications [10,41,8]. However, benchmarking of the original formulations is challenging because different expansions of the energy were used in each case. In this work we have re-formulated the methods such that the same invariants and energy terms are used consistently across the three models. With this implementation we show that, because the constraints are embedded in the methods, there is no trade-off between model accuracy and enforcing the physics. The three methods can achieve accurate representations of the data, show positive second derivatives of the energy with respect to the invariants, and perform robustly even with random initialization. The ideas behind each method are different, and this translates into slight differences in their performance. CANN leverages the structure of feed-forward neural networks but uses a fixed number of available terms. Fitting a CANN involves finding the weights of the different terms. As a result, CANNs produce parsimonious models but are inherently limited by the number of functional terms. Despite the fixed structure, CANNs perform well on the benchmark datasets of this study. ICNNs rely on building convex functions by using nested linear combinations of convex non-decreasing functions in every layer of an otherwise conventional feed-forward neural network structure. NODEs deal directly with the energy derivative functions and leverage the monotonicity of ODE solutions to obtain monotonic derivative functions (which implies convex energy functions). Because ICNNs and NODEs have an inner structure that resembles standard neural networks, they have more freedom to adjust the number of parameters by changing the number of layers or the number of neurons within a layer. The efficiency plots reflect the trade-off between accuracy and number of parameters. As the number of parameters increases, the difference between NODE and ICNN models vanishes. Thus, all three methods can accurately and efficiently capture the data. The other notable difference between the methods is in the prediction of second derivatives of the strain energy. CANN and ICNN models tend to predict increasing second derivative functions. The NODE, in contrast with the other two methods, tends to predict smaller or vanishing second derivatives towards the end of the training region.
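As a hedged illustration of the ICNN construction described above (convex, non-decreasing activations with non-negative weights on the hidden path), the following toy network is convex in its input by construction; the layer sizes, weights and skip connection are assumptions for the sketch, not the architecture used in this study.

```python
# Minimal sketch of the ICNN idea: the scalar output is convex in the input.
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # numerically stable softplus: convex and non-decreasing
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

class TinyICNN:
    def __init__(self, n_in=2, width=8):
        self.W0 = rng.normal(size=(width, n_in))        # input weights: any sign allowed
        self.b0 = rng.normal(size=width)
        self.Wz = np.abs(rng.normal(size=(1, width)))   # must be >= 0 to preserve convexity
        self.Wx = rng.normal(size=(1, n_in))            # affine skip connection from the input

    def __call__(self, x):
        z = softplus(self.W0 @ x + self.b0)             # convex in x
        out = self.Wz @ softplus(z) + self.Wx @ x       # non-negative combination stays convex
        return float(out[0])

icnn = TinyICNN()
a, b, lam = np.array([4.0, 3.5]), np.array([6.0, 5.0]), 0.3
# Convexity check: f(lam*a + (1-lam)*b) <= lam*f(a) + (1-lam)*f(b)
assert icnn(lam * a + (1 - lam) * b) <= lam * icnn(a) + (1 - lam) * icnn(b) + 1e-12
```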
This difference likely stems from the fact that ODEs have fixed points. In other words, the derivative predictions converge to a single value, consequently producing vanishing second derivatives. In all cases the second derivatives are smooth functions, which is ideal for equilibrium and implicit dynamic solvers [9]. This is in contrast with other data-driven methods that require additional regularization of the derivatives [42]. Another strategy to work with the derivatives of the energy but avoid solving an ODE would be to explore integrable neural networks [43]. The methods we benchmark here have been designed to capture hyperelasticity. Even within the context of hyperelastic materials, the expansion of the energy can be done in different ways, potentially giving access to even wider classes of behavior [44]. More complex material response beyond hyperelasticity can also benefit from the flexibility of data-driven methods. There is still a gap in the development of physics-informed machine learning methods for dissipative materials such as plasticity and viscoelasticity [45]. There have been data-driven methods in this direction, but without a complete set of built-in physics constraints [46,47]. Therefore, this is a central area for future work that can leverage the three existing frameworks reviewed and refined here. A second extension that is needed is modeling uncertainty in the material response. This is particularly relevant to biological tissue [1]. Neural network-based frameworks can perfectly capture the response of a single material and can easily be retrained with new data, but a fully Bayesian approach would allow a deeper understanding of population distributions. For example, it would allow us to model how skin properties change with age, sex, or ethnicity. A Bayesian framework would also allow monitoring of epistemic uncertainty to guide data collection and produce trustworthy simulations. The third item we want to highlight is the extension to multi-modality data. The three methods we explore are still based on stress-stretch data. In contrast, some expert models are built around the idea of microstructure modeling, multiscale simulations, or micromechanics arguments [36]. These ideas have started to permeate into data-driven modeling [48,49]. Alternatively, inferring material behavior from full-field displacements and global force data without relying on stress-stretch pairs has also gained attention recently [50,51]. Physics-informed machine learning methods that can build on the CANN, ICNN or NODE frameworks but also leverage images of the tissue microstructure or information about material composition are a natural next step.
Conclusions
We present three fully data-driven and physics-constrained methods for nonlinear material modeling: CANNs, ICNNs, and NODEs. The methods capture hyperelastic material response perfectly on benchmark datasets of rubber and skin under three different loading cases. Evaluating their capacity to extrapolate, their efficiency, and the regularity of their second derivatives, we conclude that even though the methods have different features, they all show comparably low errors that decay with parameter count and model complexity, have smooth second derivatives, and have some capacity to extrapolate. In summary, these methods hold the key to high-fidelity modeling of arbitrary material behavior without the need to select closed-form expressions.
Code and data are available with this submission. We are confident that these resources complement our detailed analysis and will support the ongoing development and refinement of data-driven computational mechanics.
2023-01-26T06:41:38.577Z
2023-01-20T00:00:00.000
{ "year": 2023, "sha1": "e36c11f647810c23de6ad79bf72e36dba9ffba87", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e36c11f647810c23de6ad79bf72e36dba9ffba87", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119204502
pes2o/s2orc
v3-fos-license
Alignment of the photoelectron spectroscopy beamline at NSRL
The photoelectron spectroscopy beamline at the National Synchrotron Radiation Laboratory (NSRL) is equipped with a spherical grating monochromator with an included angle of 174 deg. Three gratings with line densities of 200, 700 and 1200 lines/mm are used to cover the energy region from 60 eV to 1000 eV. After several years of operation, the spectral resolution and flux throughput had deteriorated, so realignment was necessary to improve the performance. First, the wavelength scanning mechanism, the positions of the optical components and the exit slit guide direction were aligned according to the design values. Second, the gratings were checked by Atomic Force Microscopy (AFM). Then the gas absorption spectrum was measured to optimize the focusing condition of the monochromator. The spectral resolving power was recovered to the designed value of 1000@244 eV. The flux at the end station for the 200 lines/mm grating is about 10^10 photons/sec/200 mA, which is in accordance with the design. The photon flux for the 700 lines/mm grating is about 5×10^8 photons/sec/200 mA, which is lower than expected. This poor flux throughput may be caused by carbon contamination on the optical components. The 1200 lines/mm grating has a roughness much higher than expected, so its diffraction efficiency is too low to detect any signal. A new grating will be ordered. After the alignment, the beamline shows significant performance improvements in both resolving power and flux throughput for the 200 and 700 lines/mm gratings and is available to users.
Introduction
The photoelectron spectroscopy beamline is designed for research on the electronic states of solid surfaces using photoemission spectroscopy [1]. A typical spherical grating monochromator (SGM) with an included angle of 174° is used in this beamline [2-5]. Three gratings, G1, G2 and G3, with line densities of 200, 700 and 1200 lines/mm are used to cover the whole energy region from 60 eV to 1000 eV. The designed resolving power of this beamline is 1000@244 eV, and the designed flux throughput at the end station is 5×10^9 photons/sec/200 mA. After several years of operation, the spectral resolution and flux throughput deteriorated. Therefore, realignment is needed to improve the performance of the beamline. This paper describes the previous status and performance of the beamline, the realignment procedures, and the measured results [6,7].
Realignment
After several years of operation, the optical component positions in the beamline have changed from their original values, and the surfaces of these optical components have been contaminated with carbon. Both effects degrade the beamline flux and spectral resolution. All the critical optical components need to be aligned to recover the performance.
Critical component position check
All the critical positions of components in the spherical grating monochromator (including the entrance and exit slits, the gratings and the exit slit linear guide) were checked with a theodolite and an automatic level. It was found that the entrance slit, the exit slit and its linear guide had errors relative to the grating rulings and the grating center. If the slit opening is not parallel to the grating ruling direction, the resolution will be affected; this error was about 1 mrad for the entrance and exit slits. If the exit slit linear guide does not pass through the grating center, the included angle changes as the slit is scanned along it. This causes wavelength calibration errors, focusing errors, and beam movement at the sample. All errors found were adjusted to the design values.
Resolution of the wavelength scanning mechanism
The resolution of the wavelength scanning mechanism has to be better than 0.5 arcsec to meet the energy resolving power of 1000@244 eV. The length of the grating sine bar is 500 mm, so the resolution of the linear stage is calculated to be better than 0.001 mm [8]. However, the resolution of the linear stage previously used was only 0.003 mm, so a new linear stage with a resolution of 0.0002 mm was installed to replace the old one.
Roughness of gratings
The roughness of the optical components influences the flux throughput of the beamline, so the roughness of the gratings was tested by AFM. The roughness of G3 is about 5 nm and the groove shape is damaged, so the diffraction efficiency of G3 is low. A new grating with a line density of 1200 lines/mm will be ordered. After the initial alignment, the beamline was baked to ultra-high vacuum. It was then ready for testing the spectral resolution and calibrating the wavelength of the monochromator with the gas ionization chamber installed at the beamline just before the experimental station [9,10].
Resolving power
In order to estimate the spectral resolution of the monochromator and to calibrate the wavelengths, the photon absorption spectra for excitation of inner-shell electrons into unoccupied states in argon, krypton and nitrogen gases were measured with a gas ionization chamber [8,9]. Both the entrance and exit slit widths were set to 50 µm. First, the spectral resolution of the 700 lines/mm grating was tested. Fig. 3(a) shows the photoionization spectrum of argon. The gas pressure in the ionization chamber is 7.7 Pa. The FWHM at 244.39 eV is 0.16 eV. By taking a natural linewidth of Γ = 0.114 eV, the resolving power E/ΔE at 244.39 eV is 2200. Fig. 3(b) shows the photoionization spectrum of nitrogen. The gas pressure is 3.8 Pa. The FWHM at 406 eV is about 0.4 eV. Assuming a natural linewidth of Γ = 0.132 eV, the resolving power at 406 eV is 1000. The FWHM at 122 eV is about 0.17 eV, and the resolving power for this spectral line is about 1000. Fig. 4(b) shows the photoionization spectrum of krypton with the 200 lines/mm grating. The gas pressure is 5 Pa. The FWHM at 91 eV is 0.115 eV. Assuming a natural linewidth of Γ = 0.084 eV, the resolving power at 91.2 eV is about 1200.
Photon Flux
After the measurement of the resolution, the ionization chamber was removed permanently and the sample position was connected to the beamline. The photon energies were calibrated with the gas photoionization spectra, and then the throughput of the beamline was examined with a photodiode. During the tests, the exit slit was moved along the linear stage to ensure focusing of the spectrum. With slit widths of 50 µm and the ring current normalized to 200 mA, the photon flux of the 200 lines/mm grating is found to be better than 10^10 photons/sec/200 mA, which meets the original design. However, the photon flux for the 700 lines/mm grating is only 5×10^8 photons/sec/200 mA and shows an obvious reduction around 280 eV (the carbon K edge). This observation indicates carbon contamination in the beamline. Fig. 5 shows the results.
Au 4f photoelectron spectrum
After all the alignment and wavelength calibration, the Au 4f photoelectron spectrum was measured again to demonstrate the improvement of the beamline. Fig. 6 shows the Au 4f spectrum, measured with 200 eV photons using the 700 lines/mm grating, with both entrance and exit slit widths set to 100 µm.
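The two kinds of numbers quoted above can be checked with simple arithmetic. The sketch below assumes that the measured FWHM is the quadrature sum of the monochromator broadening and the natural linewidth, which is only an approximation for mixed lineshapes; it is not part of the beamline control software.

```python
import math

# 1) Linear-stage resolution needed for 0.5 arcsec on a 500 mm sine bar
arcsec = math.radians(0.5 / 3600.0)
print("stage resolution ~ %.4f mm" % (500.0 * arcsec))      # ~0.0012 mm, i.e. ~0.001 mm

# 2) Resolving power from a measured FWHM and a natural linewidth
def resolving_power(E, fwhm_measured, gamma_natural):
    dE_mono = math.sqrt(fwhm_measured**2 - gamma_natural**2)
    return E / dE_mono

print(round(resolving_power(244.39, 0.16, 0.114)))   # ~2180, consistent with the quoted ~2200
print(round(resolving_power(406.0, 0.40, 0.132)))    # ~1080, same order as the quoted ~1000
```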
The total widths of the peaks are 0.57 eV and 0.56 eV for 4f5/2 and 4f7/2, respectively. The energy difference between these two peaks is 3.68 eV. Assuming a natural linewidth of Γ = 0.54 eV, the resolving power E/ΔE is about 1300. This energy resolution is good enough for this beamline. Compared to the spectrum in Fig. 2, the resolution shows a significant improvement.
Conclusion
After realignment, the energy resolving power reaches 1000, which meets the initial design target. The flux of the 200 lines/mm grating is 10^10 photons/s/200 mA, which is better than the design value. However, the flux of the 700 lines/mm grating is 5×10^8 photons/s/200 mA and shows an obvious reduction around the carbon K edge, which is caused by carbon contamination. The Au 4f photoelectron spectrum after alignment confirms the improvement in performance.
2019-04-13T05:09:15.024Z
2013-03-04T00:00:00.000
{ "year": 2013, "sha1": "28ef827f8438efd79bbeed37d6bda33bdaa85005", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1303.0643", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "28ef827f8438efd79bbeed37d6bda33bdaa85005", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
198932380
pes2o/s2orc
v3-fos-license
Clinical impact of variability on CT radiomics and suggestions for suitable feature selection: a focus on lung cancer
Background Radiomics suffers from poor feature reproducibility. We studied the variability of radiomics features and the relationship of radiomics features with tumor size and shape to determine guidelines for optimal radiomics studies. Methods We dealt with 260 lung nodules (180 for training, 80 for testing) limited to 2 cm or less. We quantified how voxel geometry (isotropic/anisotropic) and the number of histogram bins, factors commonly adjusted in multi-center studies, affect reproducibility. First, features showing high reproducibility between the original and isotropic transformed voxel settings were identified. Second, features showing high reproducibility in various binning settings were identified. Two hundred fifty-two features were computed, and features with a high intraclass correlation coefficient were selected. Features that explained nodule status (benign/malignant) were retained using the least absolute shrinkage and selection operator. Common features among different settings were identified, and the final features showing high reproducibility correlated with nodule status were identified. The identified features were used in a random forest classifier to validate the effectiveness of the features. The properties of the uncalculated features were inspected to suggest a tentative guideline for radiomics studies. Results Nine features showing high reproducibility for both the original and isotropic voxel settings were selected and used to classify nodule status (AUC 0.659–0.697). Five features showing high reproducibility among different binning settings were selected and used in classification (AUC 0.729–0.748). Some texture features are likely to be successfully computed only if a nodule is larger than 1000 mm3. Conclusions Features showing high reproducibility among different settings correlated with nodule status were identified.
Electronic supplementary material The online version of this article (10.1186/s40644-019-0239-z) contains supplementary material, which is available to authorized users.
Radiomics has shortcomings. One major shortcoming is the low reproducibility of radiomics features, which makes it difficult to compare and interpret radiomics studies. Typically, features are defined mathematically using factors affected by imaging parameters such as voxel resolution and reconstruction methods [13,14]. Studies have proposed standardized image settings to improve feature stability [1]. However, such standardization approaches are not always feasible for multi-center retrospective studies that might involve heterogeneous image settings. This study focused on voxel geometry (i.e., isotropic vs. anisotropic) and the number of histogram bins among the many factors affecting feature stability. A given region of interest (ROI) is made of many voxels, and voxel geometry affects features. Many features depend on the histogram of intensity from the ROI, and thus how histograms are binned affects features [15]. There are many categories of radiomics features, such as histogram-based features and texture-based features. The features may be unstable depending on the factors described above. Furthermore, some features might fail to be computed; for example, a very small nodule cannot be used to compute texture features. Inspecting the physical properties of failed computations might lead to valuable insights into performing radiomics studies.
Here, we aimed to find features showing high reproducibility with respect to voxel geometry and the number of bins for lung nodules smaller than 2 cm, tested on two different cohorts (n1 = 180 and n2 = 80) imaged by lung CT. Smaller nodules were chosen because larger nodules are likely to have less variability [16]. As a secondary aim, we tried to provide guidelines for computing features by inspecting the physical properties of failed radiomics computations.
Patients Institutional review board (IRB) approvals from Samsung Medical Center (SMC) and Sungkyunkwan University were obtained for this retrospective study with waivers of informed consent. Two independent cohorts were employed. For the training cohort (local data), we used 180 CT images (benign: 51 and malignant: 129) from 114 patients. The nodules were less than 2 cm. Some patients (n = 66) had nodules defined at two time points and others had nodules defined at a single time point. All the malignant nodules in the training cohort were confirmed histologically as adenocarcinoma. The benign nodules were not confirmed invasively. Using CT imaging observations, we classified solid lesions as benign if they showed no change for 2 years or more; for sub-solid nodules, the interval was 3 years or more. For the test cohort (public data), 80 CT images from the lung nodule analysis (LUNA) database (benign: 30 and malignant: 50) were randomly chosen [17,18]. The training cohort was used to identify reproducible features, and the testing cohort was used to see whether the findings generalize to independent data.
CT imaging CT images of the training set were obtained with the following parameters: detector collimation was 1.25 or 0.625 mm, the tube peak potential energies ranged from 80 to 140 kVp, tube current ranged from 150 to 200 mA, and the reconstruction interval ranged from 1 to 2.5 mm. All CT images were displayed at standard mediastinal (window width, 400 HU; window level, 20 HU) and lung (window width, 1500 HU; window level, − 700 HU) window settings. In-plane resolution varied from 0.49 to 0.88 mm with a mean and standard deviation (SD) of 0.7 and 0.07 mm, respectively. The mean slice thickness of the images was 2.33 mm (range: 1-5 mm) and the SD was 0.98 mm. CT images of the test set were obtained from various institutions. Full details of the imaging parameters are available [18]. The tube peak potential energies ranged from 120 kV to 140 kV, tube current ranged from 40 to 627 mA, the mean effective tube current was 222.1 mAs, and the reconstruction interval ranged from 0.45 to 5.0 mm. In-plane resolution varied from 0.49 to 0.9 mm with a mean and SD of 0.66 and 0.08 mm, respectively. The mean slice thickness was 1.86 mm (range: 0.625-2.5 mm) and the SD was 0.52 mm. All CT images of both cohorts were reconstructed using the standard algorithm.
Nodule segmentation and pre-processing On axial CT images, nodules were segmented by a single expert using in-house semi-automated software [19]. Target regions were defined as nodules less than 2 cm. For the first experiment, features computed using the default voxel and isotropic voxel settings were compared. The default setting refers to native voxels (which can be non-square) and the isotropic voxel setting refers to resampling the imaging data into square voxels. Such a resampled square voxel setting is necessary for the following reasons. Different voxel sizes must be compared in multi-center studies, a process that usually involves reformatting imaging data into a larger voxel setting.
It is undesirable to up-sample large voxels to small voxels because the process potentially involves interpolation with bias. It is preferable to down-sample small voxels to large voxels, so that simple averaging occurs during the process. Radiomics studies evaluate texture features that require directional voxel neighborhood information. Square voxel settings are ideal because in-plane and out-of-plane directions have the same spatial sampling. The imaging data were resampled to 2x2x2 mm3 isotropic voxel settings using the ANTs software [20]. We were comparing data obtained from different settings, and it was safe to resample to a coarser resolution for a fair comparison. The training cohort had an average slice thickness of 2.33 mm, while the test cohort had an average slice thickness of 1.86 mm. Thus, we chose 2 mm as the slice thickness and made the voxel geometry isotropic to compute texture features in a standard manner.
Experiment 1 (original vs. isotropic voxels) A total of 252 features were considered for each voxel setting using a combination of open source code (i.e., PyRadiomics) and in-house code implemented in MATLAB (MathWorks, Inc.) [21]. Some of the features could not be computed, and we only analyzed 128 of the 252 features. Further details regarding the computation failures are given in later sections. The features were divided into four categories. Histogram-based features were calculated from five types of ROIs: the whole ROI (number of features = 19), positive voxels of the whole ROI (n = 14), the outer 1/3 of the whole ROI volume (outer ROI, n = 9), the inner 2/3 of the whole ROI volume (inner ROI, n = 9), and the difference between the outer and inner ROI (ROI delta, n = 9) [22,23]. A given ROI was partitioned into inner and outer ROIs purely based on volume using binary morphological operations. A total of ten 3D shape features were calculated, and some shape features (n = 3) were computed from 2D data obtained from the slice where the nodule was the largest. Shape features related to the nodule margin were computed using the sigmoid function (n = 6) [24]. The sigmoid function was used to fit the density change along a sampling line drawn orthogonal to the nodule surface. Each sampling line going through one voxel on the tumor surface has a certain length (3, 5, and 7 mm in this work) inside and outside the nodule. The fractal dimension was calculated as a fractal-based feature using the box-counting method, and fractal signature dissimilarity (FSD) was calculated using the blanket method [25,26]. The lacunarity was also calculated to assess the texture or distribution of gaps. Texture features were calculated using the gray-level co-occurrence matrix (GLCM), intensity size zone matrix (ISZM), and neighborhood gray tone difference matrix (NGTDM) with the 3D ROI [27-29]. Two types of 3D GLCM features were computed: GLCM of the whole ROI and GLCM using the sub-sampled ROI. Each type was applied to four ROI types: whole, inner, outer, and delta ROIs. Intensities were binned with 256 bins. A total of 44 GLCM features were eventually obtained. Two ISZM features were computed. A 32 × 256 matrix was constructed in which the first dimension is the binned intensity and the second dimension is the size. The ISZM features quantify how many sub-regions there are and how often certain sub-regions occur within the ROI. NGTDM-based features (n = 5) quantify the difference between a gray value and the average gray value of its neighbors.
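For illustration, an isotropic resampling step equivalent to the one described above can be sketched with scipy; the study itself used ANTs, and the array and voxel spacings below are placeholders.

```python
# Generic isotropic resampling of a CT sub-volume to 2 x 2 x 2 mm voxels.
import numpy as np
from scipy.ndimage import zoom

volume = np.random.rand(64, 64, 30)        # placeholder CT sub-volume
spacing = np.array([0.7, 0.7, 2.5])        # native (possibly anisotropic) voxel size in mm
target = np.array([2.0, 2.0, 2.0])         # isotropic target voxel size in mm

factors = spacing / target                 # per-axis resampling factors
iso = zoom(volume, zoom=factors, order=1)  # linear interpolation
print(volume.shape, "->", iso.shape)
```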
Filter-based features (n = 9) were considered. The 3D Laplacian of Gaussian (LoG) filter was adopted [30]. Sigma values of the LoG filter were computed with σ = 0.5-3.5 in 0.5 voxel increments. Computed features were normalized to the z-score. Full details of all features are given in Additional file 1. Features with high reproducibility were identified as those with an intra-class correlation coefficient (ICC) over 0.7 between the two voxel settings (original vs. isotropic) using SPSS (IBM Corp.) [31]. The least absolute shrinkage and selection operator (LASSO) was used to select features that explain nodule status (i.e., malignant vs. benign) for each voxel setting [32,33]. The features common to both settings were retained. Thus, features that were both reproducible and correlated with nodule status were identified. The effectiveness of the identified features was further assessed by using them to classify malignant and benign nodules in both the training and testing sets. The overall design of experiment 1 is shown in Fig. 1.
Experiment 2 (default bin setting vs. changed bin setting) Many radiomics features are computed from 1D or 2D histograms. In our study, histogram-, GLCM-, and ISZM-based features depend on histograms. The histograms depend on the number of bins adopted. The default number of bins was compared with other numbers of bins. There were 4096 bins as the default setting for histogram-based features, accounting for the CT intensity range [31]. The default bins were 256 for GLCM and 32 for ISZM. For histogram-based features, the default bin setting (4096 bins) was compared with settings using 256, 512, 1024, and 2048 bins. For GLCM-based features, the default bin setting (256 bins) was compared with settings using 32, 64, and 128 bins. For ISZM-based features, the default bin setting (32 bins) was compared with settings using 16 and 64 bins. The histogram-, GLCM-, and ISZM-based features were computed as described for the first experiment. The ICC between features from different bin settings (default vs. changed bin settings) was calculated to identify features showing high reproducibility. Features with ICC values higher than 0.7 were retained [31]. The LASSO was then applied to select features that can explain nodule status (i.e., malignant vs. benign) for each binning setting. Common features from the compared settings were retained and used for classification of nodule status. The overall design of experiment 2 is shown in Fig. 2.
Inspection of failed computations for features Some features failed to be computed in the extraction step. The following features were excluded because of high error rates: histogram-based features (positive pixel, inner ROI, outer ROI, and delta ROI features), GLCM features (inner ROI, outer ROI, and delta ROI), sub-sampled GLCM features, and NGTDM features. These features could not be computed because the nodules in this study were too small. The physical properties of failed computation cases (error group) and successful computation cases (non-error group) were compared for the two feature categories using one-tailed t-tests. Since all cases had histogram- and shape-based features available, those features were used to compare the two groups. In addition, the histogram/shape-based features are easily interpretable, which makes them good features for comparing the two groups. A total of 26 features (19 histogram-based features and 7 shape-based features) were compared between the two groups.
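As a small example of why the bin settings compared in experiment 2 matter, histogram entropy computed from the same intensities changes with the number of bins; the intensities below are synthetic placeholders, not study data.

```python
# Histogram entropy as a function of the number of bins.
import numpy as np

def hist_entropy(intensities, n_bins):
    counts, _ = np.histogram(intensities, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
roi_hu = rng.normal(-300, 150, size=5000)     # placeholder HU values inside an ROI
for n_bins in (32, 256, 1024, 4096):
    print(n_bins, round(hist_entropy(roi_hu, n_bins), 3))
```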
Statistical analysis The features identified from the two experiments were used as inputs for a random forest (RF) classifier to distinguish between malignant and benign nodules [34]. The RF classifier used 200 decision trees. The classifier was trained using the data of the training set, and it was then applied to the test set. The area under the curve (AUC), sensitivity, specificity, and accuracy of the receiver operating characteristic (ROC) curve were measured. All statistical analysis procedures were performed using MATLAB.
Experiment 1 (original vs. isotropic) From the training data, features computed using the default voxel and isotropic voxel settings were compared. Thirty-eight features (ICC > 0.7) were selected from the 252 features. Of these, 23 features (13 for the original voxel and 10 for the isotropic voxel settings) that can explain nodule status (malignant/benign) were retained using LASSO. Nine features were common between the two voxel settings: maximum, minimum (histogram-based), maximum 3D diameter, spherical disproportion (shape-based), cluster tendency, dissimilarity, entropy (GLCM), skewness_1 (LoG filter-based), and lacunarity (fractal-based) (Fig. 3). We quantified how each identified radiomics feature contributed to explaining the nodule status and the relative importance of the features using a permutation of out-of-bag (OOB) observations within the RF classifier framework. These additional results are given in Additional file 1. The corresponding results for experiment 2 are shown in Fig. 4. Table 3 reports the features showing high reproducibility from the two experiments and their possible interpretations. As in experiment 1, the results for the contribution of the radiomics features are given in Additional file 1.
Fig. 1 Overall design for Experiment 1. a Feature extraction and the 1st selection step. In the 1st selection step, we selected features with ICC ≥ 0.7. b In the 2nd selection, we applied LASSO to select features that can explain nodule status. c The features were used to train a RF classifier to classify nodule status. It was later tested in a test cohort.
Suggested guidelines from inspecting failed computation cases The properties of cases with failed NGTDM computation were further examined using histogram- and shape-based features. One notable difference was in the skewness among the histogram-based features. The skewness of the error group (mean 0.24) was larger than that of the non-error group (mean − 0.67). This indicates that the non-error group tends to have higher mean intensities. The volume of the non-error group (mean 1228.89 mm3) was larger than that of the error group (mean 470.30 mm3). The 95% confidence interval (CI) of the volume feature for the non-error group is 1045.5 mm3 to 1412.28 mm3. The CIs for the various features that differed between the error and non-error groups are reported in Table 4. Figure 5 shows various features compared between the error and non-error groups. We recommend that nodules be larger than a certain size (≥ 1000 mm3) and that the intensity values be brighter than the average intensity of the nodule for successful computation of NGTDM features. The properties of cases with failed sub-sampled GLCM computation were also examined. The volume-related features (volume, surface area, and maximum 3D diameter) of the non-error group were larger than those of the error group. However, compactness, sphericity, and spherical disproportion values, which are independent of size, did not differ between the two groups.
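A minimal sketch of the classification step described under Statistical analysis is shown below; placeholder arrays stand in for the selected reproducible features, while the 200 trees and the AUC evaluated on an independent test cohort mirror the described setup (the study itself used MATLAB).

```python
# Random forest classification of nodule status with AUC evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(180, 9)), rng.integers(0, 2, 180)  # placeholders
X_test, y_test = rng.normal(size=(80, 9)), rng.integers(0, 2, 80)      # placeholders

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print("test AUC:", round(auc, 3))
```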
CIs were applied to calculate the range of features to set recommended criteria for which sub-sampled GLCM features can be computed. According to the calculated values, sub-sampled GLCM features can be calculated when the volume is 1100 mm3 or more, the maximum 3D diameter is 19 mm or more, and the surface area is 870 mm2 or more. The comparison plots between groups and the confidence interval values are shown in Fig. 6 and Table 5, respectively.
Discussion Our goal was not to find features that lead to a good classification of nodule status but to find reproducible features between different settings (voxel geometry and binning settings). We observed that the classification performance using the reproducible features stayed similar, which could be indirect evidence of the reproducibility of the identified features. We identified nine features showing high reproducibility that correlate with nodule status regardless of voxel geometry settings (isotropic vs. anisotropic). We also identified five features showing high reproducibility correlated with nodule status regardless of binning settings. There are 35 papers related to reproducibility of radiomics between 2010 and 2017 according to a review article [35]. Existing studies on average used 62 samples in the training cohort, while ours used 114 samples in the training cohort, which would lead to better statistical robustness. Many studies lacked independent test cohorts, while we validated the reproducible features in an independent test cohort [36,37]. The existing studies reported divergent sets of reproducible features. This is rather expected because the training cohorts varied significantly among studies. Our training cohort included only small (< 2 cm) nodules. The randomly chosen test cohort from the LUNA database was confirmed to be small. The maximum 3D diameter of the test cohort was on average 2.1 cm, while that of the training cohort was 1.6 cm. There is a scarcity of studies dealing with reproducibility in lung radiomics, especially for small nodules. Our study tried to fill that gap in research. There are limited CT imaging studies focusing on small lung nodules. One radiomics study reported 84% accuracy in distinguishing between benign and malignant cases in small nodules [38]. Another radiomics study reported an AUC of 0.80 using a RF classifier [39]. The first two studies considered different sets of radiomics features, including Laws and margin sharpness features, and thus the features identified from them could not be compared directly with the identified features of our study. Mehta et al. used the volume of the nodules to distinguish between benign and malignant nodules and reported a similar AUC compared to ours [40]. All these studies lacked validation using independent cohorts, and thus the performance values could be inflated. In addition, our study did not try to find radiomics features that led to good classification performance but sought reproducible features between different settings (voxel geometry and binning settings). Thus, our study could have lower classification performance and lead to a different set of radiomics features compared to existing studies on small lung nodules.
Fig. 5 Various features compared between the error and non-error groups related to computation of NGTDM features. Blue plots show the differences in shape-based features, and green plots show the differences in histogram-based features.
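The confidence-interval thresholds quoted above can be reproduced in form (not in value; the volumes below are placeholders, not the study's measurements) with a standard 95% CI of the mean.

```python
# 95% confidence interval of the mean volume for a group of nodules.
import numpy as np
from scipy import stats

volumes = np.array([900.0, 1100.0, 1300.0, 1500.0, 1250.0, 1400.0])  # placeholder mm^3
mean, sem = volumes.mean(), stats.sem(volumes)
low, high = stats.t.interval(0.95, df=len(volumes) - 1, loc=mean, scale=sem)
print("95%% CI of mean volume: %.1f - %.1f mm^3" % (low, high))
```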
We identified nine features showing high reproducibility that correlate with nodule status regardless of voxel geometry settings (isotropic vs. anisotropic): maximum, minimum (histogram-based), maximum 3D diameter, spherical disproportion (shape-based), cluster tendency, dissimilarity, entropy (GLCM), skewness_1 (LoG filter-based), and lacunarity (fractal-based). Most (n = 26) of the histogram- and shape-based features had ICC over 0.7, and the selected features were those related to nodule status. Existing studies also identified maximum, minimum (histogram-based), maximum 3D diameter, and spherical disproportion (shape-based) as important features related to nodule status. GLCM features involve directional assessment of neighborhood voxels, which differs largely among voxel geometry settings. In the isotropic setting, directions have 45-degree increments, while in the anisotropic setting, directions have different increments. Only a few GLCM features were reproducible (ICC over 0.7), and the identified reproducible features correlated with nodule status. This is one novel finding of our study. Features of the LoG category operate on many scales denoted by sigma. Some features of the LoG category were reproducible, and those with small sigma were suitable for small nodules and could be selected (e.g., skewness σ = 1). Fractal features quantify shape in a multi-scale fashion and thus can be insensitive to voxel geometry settings. We identified five features showing high reproducibility correlated with nodule status regardless of binning settings: maximum, minimum, entropy (histogram-based), difference entropy, and homogeneity (GLCM) features. All histogram-based features had ICC over 0.7, and the selected features were those related to nodule status. In addition to the first experiment, entropy was identified, which is frequently found in other radiomics studies related to nodule status. GLCM features varied significantly depending on bin settings, and only 2, 3, and 7 features had ICC over 0.7 when 32, 64, and 128 bins were used, respectively, compared to the default 256-bin setting. Among these features, difference entropy and homogeneity were related to nodule status. These two features quantify texture from the entire GLCM, not just parts of it; thus, they are reproducible with respect to bin settings. ISZM features were reproducible but did not reflect nodule status. One possibility is that only small nodules (≤ 2 cm) were considered, limiting the size variability portion of the ISZM. The properties of failed NGTDM/sub-sampled GLCM computation cases were examined using histogram- and shape-based features. We found that nodules need to be larger than a certain size (e.g., over 1000 mm3 for NGTDM features). The texture features require voxel neighborhood structure, and thus the ROI needs to be larger than the threshold. This could be a practical lower limit on nodule size for lung radiomics. Our results were computed from image acquisition settings of varying resolution (in-plane resolution between 0.48 mm and 0.9 mm and out-of-plane resolution from 0.6 mm to 10 mm), and the lower limit could be lower in an imaging acquisition setting with smaller voxels. Radiomics in lung cancer is different from that in other oncology fields. Lung cancer resides in an environment rich with air, while other cancers primarily consist of soft tissue and reside in the interstitium [6]. Consequently, tumor progression in lung cancer is multi-factorial.
In addition to the usual volume reduction, tumor progression is associated with density change from ground-glass opacity (GGO) to solid components [3,41,42]. Thus, radiomics in the lung should jointly consider the tumor core and surrounding air components along with textural changes in density to properly model lung cancers. Reproducibility studies in lung radiomics are largely lacking, and our study provides suggestions for future lung radiomics studies. Our study has limitations. We did not fully test the reproducibility of all 252 features. Our study focused on small nodules, which led to uncalculated features in some categories. This was further explored by comparing properties of the error and non-error groups. Still, future studies need to explore the reproducibility of radiomics features for larger nodules. Our results were derived from two datasets, and further validations are necessary using data from different image acquisition settings. The features we identified showed high reproducibility (via ICC) while reflecting nodule status (via LASSO). If a future radiomics study requires another clinical variable (e.g., therapy response), the researchers should replace the clinical variable used in the LASSO step as appropriate. Lung nodules are imaged using other modalities such as MRI and PET in addition to CT, and the reproducibility of radiomics features in these modalities is an important future research topic.
Conclusion We identified nine features showing high reproducibility with respect to voxel geometry and five features showing high reproducibility with respect to the number of bins for lung nodules smaller than 2 cm, tested on two different cohorts. We also provided guidelines for computing features by inspecting the physical properties of failed radiomics computations. The features we identified are low-dimensional (< 10) and can be easily computed as a quick pre-screening tool to determine whether a full radiomics study is worthwhile.
2019-07-27T07:31:37.262Z
2019-07-26T00:00:00.000
{ "year": 2019, "sha1": "46a7ee31753e0a3a6e19466dd0b9d2124782f799", "oa_license": "CCBY", "oa_url": "https://cancerimagingjournal.biomedcentral.com/track/pdf/10.1186/s40644-019-0239-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46a7ee31753e0a3a6e19466dd0b9d2124782f799", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
86657254
pes2o/s2orc
v3-fos-license
Neurotoxic Activity of the HIV-1 Envelope Glycoprotein: Activation of Protein Kinase C in Rat Astrocytes
Abstract: The envelope glycoprotein (gp120) of the human immunodeficiency virus type one (HIV-1) has adverse effects on glial cells and neurons. This study reports on the direct effect of recombinant gp120 (r-gp120) produced from different expression systems on protein kinase C, as a measure of relative neurotoxicity. Brain cells were grown in vitro from explants of the cerebral cortex of newborn rats, and recombinant gp120 preparations expressed in mammalian cell/vaccinia virus and insect cell/baculovirus systems were applied to astrocyte-enriched cultures. The gp120 preparations activated protein kinase C (PKC) to similar levels in these cells. Mutant recombinant gp120 lacking the amino-terminal 29 amino acids produced from the mammalian and insect cells also activated PKC to similar levels as did the full-length protein. The recombinant proteins specifically activated PKC β and ζ, suggesting that they are able to induce both Ca2+-dependent and Ca2+-independent isoforms of this enzyme. Alteration of PKC activity in astrocytes by gp120 indicates its ability to modulate gene expression, which is associated with the neurotoxicity of this protein.
Cells of the monocyte-macrophage lineage, including microglia, are productively infected by HIV-1 in the brain, although the presence of the virus in other brain cells has also been reported [5,6]. Replication of HIV-1 in the brain and the effects of its glycoprotein (gp) 120 are associated with the brain lesions of ADC. Expression of HIV-1 gp120 in the brain of transgenic mice was reported to have caused lesions that are similar to those in HIV-1 infected patients, including reactive astrocytosis [7]. Neuronal derangements during HIV-1 infection are therefore attributable, in part, to the direct effect of gp120 on astrocytes leading to disturbance of the functions of these cells, including homeostatic regulation of the neuronal microenvironment [8]. The objective of this study was to determine the biological activity of full-length HIV-1 gp120 and its mutant with an N-terminal deletion, with respect to the induction of signal transduction in astrocytes. Full-length HIV-1 gp120 expressed by the vaccinia/COS-7 cell and insect cell/baculovirus systems and the deletion mutants of gp120 (lacking the NH2-terminal 29 amino acids) expressed in COS-7 cells and the insect cell/baculovirus system were applied to rat astrocytes, and induction of signal transduction was determined. Both full-length and mutant preparations of HIV-1 gp120 induced PKC activity in the treated cells, suggesting that the NH2-terminal 29 amino acids are not essential for the induction of signal transduction.
Materials and Methods
The following reagents were purchased from the companies specified: rabbit polyclonal antibodies to PKC isozymes from Life Technologies, Gaithersburg, MD; rabbit serum, fluorescein isothiocyanate (FITC)-conjugated goat anti-rabbit immunoglobulin G (IgG), rabbit anti-glial fibrillary acidic protein (GFAP) and phorbol-12 myristate-13-acetate (PMA) from Sigma Chemical Co, St Louis, MO; PKC enzyme assay kit from Amersham-Pharmacia Biotech, Evanson, IL; full-length baculovirus-produced recombinant gp120 from ICN, Costa Mesa, CA; Bac-to-Bac baculovirus expression kit from Invitrogen-Life Technologies, Carlsbad, CA; and COS-7 and baby hamster kidney (BHK) cell lines from the American Type Culture Collection (ATCC), Rockville, MD.
The vPE8 recombinant vaccinia virus [9] and soluble CD4 were obtained through the AIDS Research and Reference Program, Division of AIDS, NIAID, NIH.
Expression of full-length recombinant gp120
vPE8 (r-gp120-expressing vaccinia virus) was propagated in BHK cells. Culture supernatant was cleared of debris. The virus pellet collected by centrifugation at 100,000 g for 1 h at 4 °C was suspended in NET buffer (50 mM NaCl, 5 mM EDTA, 10 mM Tris hydrochloride, pH 7.4) and stored in aliquots in liquid nitrogen. The virus was titrated in BHK cells and used to infect COS-7 cells at a multiplicity of infection (MOI) of 0.1. The cells were grown in Dulbecco's modified Eagle's medium (DMEM) with 2% glutamine, 10% fetal bovine serum, 100 U penicillin, and 100 µg streptomycin at 37 °C in a humidified atmosphere of 5% CO2. Culture supernatants were collected 4 days post-infection and concentrated for purification of the expressed r-gp120 as explained below.
Expression of truncated gp120 in COS-7 cells
The gp120 in pCAS-ENV (Biogen, Cambridge, MA), which lacks the first 29 NH2-terminal amino acids [10], was subcloned into the pcDNA6/HisA vector (Invitrogen) using the manufacturer's protocol supplied with the kit. Ten µg of the plasmid was introduced into the cells with lipofectin (Invitrogen-Life Technologies), using the protocol of the manufacturer. The cells were seeded in 100 cm3 plates and maintained at 37 °C in a humidified incubator with a 5% CO2 atmosphere. Recombinant gp120 was purified from concentrated culture supernatant and cell lysate using the affinity nickel column kit (Novagen, Madison, WI).
Expression of truncated gp120 in the insect cell/baculovirus system
Gp120 in pCAS-ENV was amplified by the polymerase chain reaction with 18 nucleotides at both ends of the gene as primers, and blunt ligated to pCNTR in the General Contractor kit (5 Prime, 3 Prime, Boulder, CO). The insert was subcloned in the multiple cloning site of the baculovirus shuttle vector, pFASTBAC (bacmid). The protocol in the BAC-TO-BAC baculovirus expression kit (Invitrogen) was used for the protein expression and purification.
Determination of CD4 binding and immunoreactivity of expressed r-gp120
For the determination of expression of r-gp120 by the virus-infected or DNA-transfected cells, the cells were grown in media with or without [3H]methionine. The culture supernatant was cleared of debris and virus particles by centrifugation at 100,000 g. Agarose beads (Sigma) were linked to soluble CD4 according to the manufacturer's instructions and used to precipitate r-gp120 from solution. The suspension of the agarose bead-CD4 complex was incubated with the culture medium overnight on a rotating platform at 4 °C and then washed 3 times in PBS. The bound protein was eluted in 2x sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) loading buffer and analyzed in 10% PAGE [11]. For immune precipitation, aliquots of the cleared supernatant were incubated with polyclonal anti-gp120 at 4 °C for 6 h. The mixture was then incubated with a 50% suspension of Protein A agarose beads overnight at 4 °C on a rotating platform. The beads were precipitated by centrifugation at 6,000 g, washed three times with phosphate buffered saline (PBS) pH 7.4, and boiled in 2x SDS-PAGE loading buffer for analysis in 10% PAGE.
Purification of vPE8/COS-7-expressed recombinant gp120
The supernatant of the cell culture from vaccinia-infected COS-7 cells was concentrated by passing it through a filter with a molecular weight cut-off of 30 kDa (Millipore Corporation, Bedford, MA). The glycoproteins in the concentrate were adsorbed onto a column of agarose-lentil lectin (Sigma) and eluted with 10% methyl mannoside. The peak elution reacting with sheep anti-gp120 polyclonal antibody was purified by ion exchange chromatography on DEAE 52 cellulose (Amersham Pharmacia Biotech), eluting with a sodium chloride gradient from 0 to 0.8 M in 20 mM Tris-HCl buffer. The purified peak fraction was dialyzed against 20 mM Tris-HCl, pH 8.2, and concentrated using a filter with a molecular weight cut-off of 30 kDa. Protein concentration was determined by the Bradford method [12]. Immunoreactivity of the purified protein was analyzed by Western blot assay.
Rat astrocyte culture
Isolation and culture of rat astrocytes were as described previously [13]. One-day-old rats were sacrificed by decapitation. The meninges were removed from the cerebral cortex and the brain cells were dissociated with collagenase [14]. Cells were filtered through nylon mesh (120 µm) and cultured in glutamine-free Dulbecco's modified Eagle's medium with Ham's F12 (DMEM/F12). The DMEM with pyruvate contained 4.5 mg of glucose/ml and 15 mM HEPES buffer. Culture medium contained penicillin 100 U/ml, streptomycin 100 µg/ml and 10% heat-inactivated fetal bovine serum. Following a viability test, the cell suspension was adjusted to 6 x 10^5 cells per ml in culture medium, and 5 ml of this cell suspension were placed in 75 cm3 tissue culture flasks and incubated at 37 °C in a humidified atmosphere of 5% CO2. On day 3, the medium was changed, and after another 3 days of culture, fresh medium was added to the attached cells. The flask was tightly capped, wrapped over with parafilm, fastened to a bench orbital shaker, and shaken at 250 rpm for 2 h. The supernatant was discarded and replaced with fresh medium. The shaking procedure was repeated daily for another 3 days. The purity of the adherent cells was determined by staining for GFAP immunohistochemically. Only preparations that were at least 98% pure astrocytes were used for experiments.
Assay of PKC activity in fixed cells
Cultured astrocytes were treated with 0.5 µg/ml of r-gp120 or 1 µM of PMA for 30 min. For estimation of the PKC activity induced, cultured cells were fixed in 0.25% paraformaldehyde for 1 h at 4 °C. After washing in phosphate-buffered saline (PBS), the cells were permeabilized with 0.02% Tween 20 at 37 °C for 10 min. They were then washed three times in cold PBS and incubated with rabbit polyclonal antibody to PKCβ or PKCζ for 30 min. After washing, the cells were incubated with goat anti-rabbit IgG-FITC conjugate. Control cells were treated only with normal rabbit serum and goat anti-rabbit IgG-FITC. Cells were scanned for fluorescence in a cell sorter (Coulter, Miami, FL). The area of peak fluorescence was measured for each test. The value obtained in the test with r-gp120 was expressed as a percentage of the peak fluorescence induced in the experiment with PMA.
Results
The immunoreactivity and CD4 binding of each expressed protein were first ascertained. To determine immunoreactivity, lectin-bound protein, which had been enriched by ion exchange chromatography, was analyzed by SDS-PAGE and Western blotting, and then reacted with anti-gp120 polyclonal antibody. A distinct protein of 120 kDa was obtained for the full-length r-gp120 expressed in COS-7 cells. This was similar in size to the insect cell/baculovirus-expressed protein (Fig. 1). The mutant proteins were approximately 98 kDa (data not shown). To determine the biological property of the recombinant protein with respect to CD4 binding, the expression was done in [3H]methionine-labeled cells. Proteins precipitated from the culture medium by CD4 were analyzed by SDS-PAGE. Both the full-length r-gp120 expressed by mammalian cells and insect cells (data not shown) and the mutant proteins (Fig. 2) bound to CD4. The full-length r-gp120 preparations produced from the two different sources were of similar molecular weights. Similarly, the mutant r-gp120 preparations from the different expression systems had similar molecular weights, as indicated in Figs. 1 and 2. Addition of HIV-1 r-gp120 protein to purified astrocytes at concentrations from 0.1 µg/ml to 1.0 µg/ml did not induce cell death. This was confirmed by trypan-blue staining of the cells.

Induction of PKC activity
To determine induction of signal transduction in cells stimulated with PMA and r-gp120, PKC was estimated in a mixture of the cytosolic and particulate fractions of the cell homogenate. PKC activity was significantly (p < 0.05) higher in cells stimulated by PMA and r-gp120 than in control cells (Fig. 3). At the concentrations of the inducers used, the stimulation of PKC activity by r-gp120 (0.5 µg/ml) was significantly (p < 0.05) higher than that by PMA (1 µM).

In order to determine if certain PKC isozymes are preferentially stimulated, the level of activity of the PKCβ and PKCζ isozymes was estimated in whole cells by flow cytometry, following application of the recombinant proteins. The level of PKC activity induced by r-gp120 was expressed as a percentage of the value obtained when the activation was by PMA. Both the full-length r-gp120 expressed by the insect cell/baculovirus system (Bac-r-gp120) and that expressed by the vaccinia/mammalian cell system (vPE8-r-gp120) were able to induce PKC activity in the rat astrocytes (Fig. 4). Both recombinant mutant r-gp120 proteins, expressed by mammalian cells (COS7-r-gp120DL) and by the insect cell/baculovirus system (Bac-r-gp120DL), showed equivalent ability to induce PKCβ and PKCζ activities (Fig. 5).
The levels of PKCβ and PKCζ activities induced by full-length gp120 were not significantly higher (p > 0.05) than those induced by mutant r-gp120. In all tests with r-gp120 or PMA, the induced activity of PKCβ was significantly (p < 0.05) lower than that of PKCζ.

Discussion
Full-length r-gp120 produced in mammalian and insect cells showed similar molecular weights. Also, the two preparations of mutant r-gp120 protein with the deleted NH2-terminal 29 amino acids had similar molecular weights. This suggests that the level of glycosylation of this protein may not be widely different when produced from the two systems.

PKC is primarily activated by diacylglycerol (DAG), which, together with inositol 1,4,5-trisphosphate (IP3), is a product of the hydrolysis of membrane inositol phospholipids following the interaction of a cell surface receptor with a ligand (15). PMA was used as a control ligand in the present study because, like DAG, it utilizes PKC to mediate signal transduction. The present results suggest that gp120 is one of the ligands that can alter PKC activity in astrocytes, thus setting up a cascade of events leading to neuronal damage during HIV-1 infection.

The mutant protein induced PKC in astrocytes to the same level as did the full-length protein. This indicates that the deletion did not affect the portion of the protein that enables the induction of signal transduction in astrocytes. This opens a question as to the functional role of the NH2-terminal 29 amino acids of gp120.

PKC actually consists of a number of different molecules (16). These include the conventional PKC isozymes α, βI, βII, and γ, which require calcium, phospholipid, and diacylglycerol for their activation. The other molecules include the novel PKC isozymes δ, ε, and η, which are calcium-independent, and the atypical PKCs, ζ and λ. These isoforms differ in tissue distribution, substrate specificity, and kinetics of activation and inactivation. It has been demonstrated earlier that gp120 does activate both the calcium-dependent and calcium-independent isoforms in the monocytic cell line U937 (17). Only two of the isozymes, PKCβ and PKCζ, were tested in the present study, primarily to determine if the recombinant gp120 and its mutant have similar effects on isoforms of PKC. The results indicate that in astrocytes, both the calcium-dependent and calcium-independent PKC isozymes are inducible by HIV-1 gp120 and the particular mutant under consideration.
When PKC is activated, it phosphorylates certain cellular proteins, resulting in cellular activation and gene expression. Induction of PKC activity is an important biochemical step during cellular activation by HIV-1. Several reports indicate that HIV-1 and its gp120 induce signal transduction in target monocytic cells. These include observations of elevated Ca2+ and IP3 in lymphocytes and increased levels of Ca2+ and IP3 in HIV-1-infected H9 cells (18,19). It is expected that this activation is associated with the induction of gene expression, including cytokine production in those cells.

Induction of PKC by gp120 in astrocytes therefore is an indication of the ability of this viral protein to activate certain genes in these cells, including GFAP (24) and, most likely, cytokines. If inflammatory cytokines are produced by astrocytes activated by HIV-1 gp120 in vivo, they can be very toxic to neurons and thus contribute to the pathogenesis of the brain lesions in AIDS.

In conclusion, a 29-amino-acid deletion in the NH2-terminal region of HIV-1 gp120 did not affect certain functional properties of the protein, including binding to CD4 and induction of signal transduction in astrocytes. Recombinant gp120 preparations produced from the mammalian and insect cell systems did not differ in their ability to induce signal transduction. Work is in progress to characterize the genes activated in astrocytes following induction of signal transduction by the HIV-1 gp120.

Figure 1. Immunoblot analysis of r-gp120 produced by vPE8-infected COS-7 cells and the insect cell/baculovirus system. Virus-free culture supernatant of vPE8-infected COS-7 cells was run through a lentil lectin-conjugated agarose bead column. Bound glycoprotein was eluted with 10% mannoside in PBS/Tris buffer, purified by DEAE 52 ion exchange chromatography, dialyzed, and concentrated. Five µg of the concentrate was resolved by SDS-PAGE and analyzed by Western blotting, using anti-gp120 polyclonal antibody. Lane 1: preparation from control cell supernatant; lane 2: preparation from vPE8-infected COS-7 cells; lane 3: purified insect cell/baculovirus system-expressed r-gp120.

Figure 2. CD4 precipitation of [3H]methionine-labeled mutant r-gp120 with the deleted NH2-terminal 29 amino acids. Cell culture supernatant was cleared of debris by ultracentrifugation, concentrated, and incubated with agarose beads conjugated to soluble CD4 overnight. After PBS washing, the beads were boiled in 2× SDS-PAGE loading buffer, and the supernatant analyzed in 10% PAGE, followed by autoradiography of the dried gel. Lane 1: control untransfected COS-7 cells; Lane 2: mutant r-gp120 expressed in COS-7 cells; Lane 3: insect cells not infected with recombinant baculovirus; Lane 4: mutant r-gp120 expressed in the insect cell/baculovirus system.

Figure 3. Effect of stimulation of astrocytes with gp120 and PMA on PKC enzymes. PMA or baculovirus system-expressed r-gp120, with or without polyclonal antibody to gp120, was applied to astrocytes in culture. PKC was assayed in the supernatant of the cell homogenate, which had been centrifuged at 10,000 g. Induction of PKC by PMA was significantly less than by r-gp120 (p < 0.05). Sheep anti-gp120 (1:20) effectively blocked the activity of r-gp120.

Figure 4. Effect of stimulation of astrocytes with recombinant gp120 on PKCβ and PKCζ. PMA or full-length recombinant gp120 produced from vaccinia-infected COS-7 cells (vPE8-COS7-r-gp120) or by the insect cell/baculovirus system (Bac-r-gp120) was applied to rat astrocytes. PKC isozyme activity is presented as a percentage of the activity induced by PMA. Each bar represents a mean of five assays. Both protein preparations stimulated PKC to a similar level. The PKCζ response was significantly (P < 0.05) higher than that of PKCβ.

Figure 5. Effect of stimulation of astrocytes with recombinant mutant gp120 on PKC. PMA or recombinant mutant gp120 produced from COS-7 cells (COS7-r-gp120DL) or by the insect cell/baculovirus system (Bac-r-gp120DL) was applied to rat astrocytes. PKC isozyme activity is presented as a percentage of the activity induced by PMA. Each bar represents a mean of five assays. Both protein preparations stimulated PKC to a similar level. The PKCζ response was significantly (P < 0.05) higher than that of PKCβ.
2014-10-01T00:00:00.000Z
2002-11-30T00:00:00.000
{ "year": 2002, "sha1": "e6637215fe0e896f211e032e66a0dc66e8333794", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/3/11/1105/pdf?version=1403128972", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "e6637215fe0e896f211e032e66a0dc66e8333794", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
267971710
pes2o/s2orc
v3-fos-license
Gut-Modulating Agents and Amyotrophic Lateral Sclerosis: Current Evidence and Future Perspectives Amyotrophic Lateral Sclerosis (ALS) is a highly fatal neurodegenerative disorder characterized by the progressive wasting and paralysis of voluntary muscle. Despite extensive research, the etiology of ALS remains elusive, and effective treatment options are limited. However, recent evidence implicates gut dysbiosis and gut–brain axis (GBA) dysfunction in ALS pathogenesis. Alterations to the composition and diversity of microbial communities within the gut flora have been consistently observed in ALS patients. These changes are often correlated with disease progression and patient outcome, suggesting that GBA modulation may have therapeutic potential. Indeed, targeting the gut microbiota has been shown to be neuroprotective in several animal models, alleviating motor symptoms and mitigating disease progression. However, the translation of these findings to human patients is challenging due to the complexity of ALS pathology and the varying diversity of gut microbiota. This review comprehensively summarizes the current literature on ALS-related gut dysbiosis, focusing on the implications of GBA dysfunction. It delineates three main mechanisms by which dysbiosis contributes to ALS pathology: compromised intestinal barrier integrity, metabolic dysfunction, and immune dysregulation. It also examines preclinical evidence on the therapeutic potential of gut-microbiota-modulating agents (categorized as prebiotics, probiotics, and postbiotics) in ALS. Introduction Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease, is a rare but highly fatal neurodegenerative disorder characterized by the progressive loss and degeneration of motor neurons in the brain and spinal cord.Its average age of onset is currently estimated at 61.8 years, with death typically occurring 3-5 years following diagnosis [1,2].Although the incidence of ALS is around 2.0 cases per 100,000 persons per year, its global prevalence is closer to 5.4 cases per 100,000 persons, reflecting a low survival rate [2].The clinical manifestations of ALS are heterogeneous and can be initially nonspecific.Patients afflicted with the disease often experience debilitating muscle weakness, stiffness, or paralysis, which begin in the extremities and then spread to involve most parts of the body.Despite extensive research efforts, the etiology of ALS remains largely unknown, and thus effective treatment options that can delay disease progression are limited.However, a growing body of evidence has recently implicated enteric microbiota and the gut-brain axis (GBA), an intricate two-way communication system between the gastrointestinal (GI) tract and central nervous system (CNS), in the pathogenesis of several neurodegenerative diseases, including ALS [3]. 
Factors contributing to overall gut health, such as intestinal inflammation and barrier permeability, can have profound effects on brain function and well-being [3,4].Symptoms of GI upset or dysfunction (e.g., pain, dysphagia, reflux, and constipation) have also been noted in ALS, further suggesting a link between gut and brain pathology [5][6][7].Moreover, imbalances or alterations in the composition of the gut microbiome, commonly referred to as gut dysbiosis, have been observed in human patients as well as in experimental animal models of ALS [8,9].Although several advancements in our understanding of the gut microbiome and its relation to neurodegenerative disorders have been made, the field is still in its early stages.Further explorations into the complex role of the GBA and its contributions to brain development in both health and disease can provide new insights into the mechanisms underlying ALS pathogenesis.In this review, we comprehensively summarize and present the current literature on gut dysbiosis in ALS, with a specific focus on the involvement of GBA pathology.Moreover, we aim to explore the potential of direct gut modulation as a therapeutic measure in the management of this devastating motor disease. Components of the Gut-Brain Axis Bidirectional communication between the gut and brain is primarily mediated by the nervous, immune, and endocrine pathways [10,11].Nervous pathways include the enteric nervous system (ENS), autonomic nervous system (ANS), and vagus nerve.Visceral afferents normally transmit sensory information from the gut to the CNS, which, in turn, modulates GI function by efferent signals.Intestinal motility, absorption, secretion, and blood flow are all regulated by the ANS [11].Interestingly, the ANS shares several mediators and receptors with the gut's immune system and, thus, can also play a role in regulating local inflammatory processes [12].Signaling in the immune pathways is primarily mediated by enterocytes, resident immune cells, and the gut microbiota.Enterocytes, particularly a specialized subset called tuft cells, and gut-associated immune cells, such as macrophages, neutrophils, and dendritic cells, all express toll-like receptors (TLRs) that can recognize molecular patterns on the surfaces of invading pathogens and initiate an innate immune response [11].Upon activation, these cells release inflammatory cytokines and chemokines that not only aid in further immune cell recruitment but also trigger signaling cascades that can communicate with the CNS.Cytokines may also act locally on vagal afferents [10] and subsequently affect signaling within the GBA.Endocrine pathways mainly involve hormones and the hypothalamic-pituitary-adrenal (HPA) axis.However, neuroactive microbial products, such as short-chain fatty acids (SCFAs) and secondary bile acids , can also propagate signals to the CNS either directly, by accessing the body's systemic circulation, or indirectly, by interacting with enteroendocrine, enterochromaffin, and immune cells of the gut [10].While this section offers a simplified overview of these pathways, it is important to recognize how different components of the GBA are related and work in concert to maintain gut homeostasis.Furthermore, it becomes evident that disruption in any component of the GBA can not only impact the gut but may also have downstream effects on the CNS. 
Evidence of Gut Dysbiosis in Human Amyotrophic Lateral Sclerosis
The human gut is home to a highly complex yet balanced network of commensal microorganisms spanning all three domains of life: bacteria, archaea, and eukarya (including fungi, yeasts, and protozoa) [13]. Bacteriophages and eukaryotic viruses are also integral components of the gut microbiome [14]. Although the exact composition and diversity of microbial communities throughout the GI tract vary considerably, the gut microbiome is primarily dominated by bacterial populations. Specifically, among the eight identified bacterial phyla in the human gut, Firmicutes and Bacteroidetes constitute the majority and represent over 90% of all intestinal microbiota [13,15]. The remaining 10% is typically composed of Actinobacteria and smaller proportions of Proteobacteria, Verrucomicrobia, or Cyanobacteria [13,15,16]. The division of the intestinal flora at this taxonomic level is similar, if not uniform, across most healthy humans. However, each individual possesses a unique microbiome made up of different species and strains at varying densities, likely owing to a number of genetic factors and host-microbe interactions [17,18]. Age, diet, lifestyle, environment, and disease may also account for individualized differences, since they can shape and alter the gut's microbial composition [15]. Furthermore, studies have shown that dietary changes or acute lifestyle modifications can quickly and reproducibly alter the gut microbiota [19][20][21]. Over the past decade, the relationship between gut health and neurological disorders has become a subject of research interest. Preclinical evidence implicates gut dysbiosis and subsequent GI dysfunction in the pathogenesis of ALS [22][23][24][25]. In parallel, human studies consistently report that the abundance and diversity of the intestinal flora in ALS patients are significantly altered when compared to healthy or other neurodegenerative controls.

Alterations to the gut flora are typically examined via high-throughput sequencing techniques such as 16S rRNA sequencing and shotgun metagenomics. Bioinformatic analyses may also be used to assess the taxonomic composition or diversity of detected microbial communities in various samples, as well as across different patient groups. To date, the largest human gut profiling study was conducted by Guo et al.
[26] and included a total of 185 participants.The fecal microbiome of ALS patients and unrelated healthy controls were longitudinally compared at three different time points.Two bacterial phyla, Firmicutes and Cyanobacteria, were significantly different in relative abundance between patients and controls at the first collection point (baseline) [26].Adjustment for confounding factors (sex, age, and body mass index) further highlighted alterations to the abundance of six specific genera: Bacteroides, Parasutterella, and Lactococcus were all significantly enriched in ALS samples, while Faecalibacterium and Bifidobacterium were markedly reduced compared to controls.The presence of a distinct gut microbiome in ALS patients was further validated by significant differences in the beta-diversity between the two study groups.Interestingly, only the abundance of Firmicutes significantly varied at the second time point [26], suggesting that the gut microbiome may continue to undergo changes over the course of the disease.In tandem, the relative abundance of Bacteroidetes and Firmicutes was significantly different at the first and second time points, respectively, in ALS patients with bulbar versus limb onset.Separate cohorts, albeit with some overlap in study participants, were also used to assess if any associations between dysbiosis and plasma metabolites were present [26,27].Indeed, several microbes, such as Akkermansia muciniphila and members of the Lachnospiraceae family were correlated with alterations in plasma lipid-related metabolites [26].Mendelian randomization analysis described acylcarnitine, bile acid, and fatty acid metabolism as potentially causal to ALS, and several acylcarnitines were further negatively correlated with scores on the revised ALS functional rating scale (ALSFRS-R).Taken all together, the microbiome-metabolome interface offers a promising framework for understanding the disease mechanisms underlying ALS and exploring new therapeutic interventions. Other human microbiome studies also provide evidence of dysbiosis in ALS patients.For instance, Fang et al. [28] demonstrated several significant alterations to the gut microbiome at different taxonomic levels.Bacteroidetes (phylum), Bacteroidia (class), Bacteroidales (order), and Dorea (genus) were all enriched in ALS samples compared to healthy controls, whereas Firmicutes (phylum), Clostridia (class), Lachnospiraceae (family), Oscillibacter (genus), and Anaerostipes (genus) were notably decreased.Di Gioia et al. [29] reported that Escherichia coli, Clostridiales Family XI (family), Gastranaerophilalaes (family), and Cyanobacteria (phylum) were all significantly elevated in ALS, while Clostridiaceae 1 (family) was lower in patients than controls.While the total bacterial count did not differ between the two study groups, ALS stool samples showed lower DNA concentrations compared to controls, possibly due to significantly decreased amounts of yeast [29].Patients with reduced yeast counts were significantly correlated with lower ALSFRS-R scores and forced vital capacity percentage (FVC%).Thus, shifts in various microbial populations, not only bacteria, may impact disease progression or clinical manifestation.Consistent with these findings, Zhai et al. 
[30] showed ALS stool samples had a significant increase in the relative abundance of Euryarchaeota (phylum), Methanobacteria (class), and Methanobrevibacter (genus) while Faecalibacterium (genus) and Bacteroides (genus) were reduced.ALS patients were also noted to have less rich and even microbial communities compared to healthy controls [30], which could impact metabolic function.Indeed, spectrophotometry showed that elevated fecal levels of SCFAs, nitrogen-containing compounds (NO 2 -N/NO 3 -N), and γ-aminobutyric acid were observed in ALS samples compared to controls.While not statistically significant, alterations in fecal metabolites may be a sign of underlying ALS-related GI dysfunction [30]. Given that caregivers who live or closely interact with ALS patients may share environmental exposures, and changes to the gut microbiome by extension, several studies use healthy family members and spouses as controls.Niccolai et al. [31], for example, compared the stool samples of ALS patients with those collected from cohabiting controls and found they were distinct.Patient samples showed a significant enhancement in the relative abundance of Senegalimassilia (genus), while Subdoligranulum (genus) and several members of Lachnospiraceae (family) were instead only elevated in family members.Among others, Adlercreutzia, Lachnospiraceae_FCS020_group, and Romboutsia were further correlated with disease progression and survival in the ALS group [31].Patients with a slow progression rate, in particular, were associated with a higher abundance of Streptococcaceae (family) but a significant reduction in fecal α-diversity [31].In contrast, Ngo et al. [32] reported no significant differences were found in the fecal microbiota of ALS subjects compared to the control group, which included healthy spouses, friends, and family members.Moreover, no associations between the gut microbiome and disease severity or site of onset were found.Anthropometric, metabolic, and clinical features of the disease were all described as being independent of the gut's microbial composition [32].Consistent with the previous study, however, the authors observed that an increase in the α-diversity of fecal microbiomes was associated with accelerated disease progression and a greater risk of early death [32].This finding suggests that the contribution of gut dysbiosis to ALS pathogenesis may extend beyond changes to microbial composition to involve alterations in community diversity as well. 
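Community "richness" and "evenness" of the kind compared above are commonly summarized by alpha-diversity indices such as the Shannon index, H' = -Σ p_i ln p_i. The Python sketch below is a generic illustration of that calculation on made-up counts; it is not the diversity pipeline used in the cited studies.

```python
# Shannon alpha-diversity (H' = -sum(p_i * ln p_i)) for a hypothetical community.
import math

def shannon_index(counts):
    total = sum(counts)
    proportions = (c / total for c in counts if c > 0)
    return -sum(p * math.log(p) for p in proportions)

even_community   = [100, 100, 100, 100]   # rich and even -> higher H'
uneven_community = [370, 20, 7, 3]        # same richness, low evenness -> lower H'
print(round(shannon_index(even_community), 2), round(shannon_index(uneven_community), 2))
```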
While most human microbiome studies present evidence of gut dysbiosis in ALS patients, some conflicting findings have been reported.Inconsistencies with regards to differences in the alpha/beta-diversity or Firmicutes/Bacteroidetes (F/B) ratio are amongst the most common.The F/B ratio is widely accepted as a sign of intestinal homeostasis and overall gut health, and its alteration has been previously described in inflammatory bowel diseases [33,34] as well as metabolic disorders [35][36][37].However, the precise implications or effects of F/B imbalance on disease progression and patient outcome in ALS remain unknown due to conflicting results amongst current profiling studies.Some studies have reported that ALS patients exhibit significant shifts in the gut microbiome in favor of either Firmicutes [30,32,38] or Bacteroides [28,31,39,40], while others failed to detect any difference in the F/B ratio between ALS patients and controls [41].These inconsistencies likely arise due to several reasons: heterogeneity in the subtypes, segment onset, and disease stage of ALS can all lead to varying differences in the gut microbial composition among study participants, while patient demographics, limited sample sizes, and methodological differences may also influence results.Addressing these limitations in future studies is pivotal to understanding the complex role of gut dysbiosis in ALS pathogenesis. All currently published oral and fecal microbiome studies performed on human ALS patients are summarized in Table 1.It is important to note that evidence of dysbiosis and microbial translocation has also been found in other primary samples and/or tissues [42][43][44].For example, in a recently published study by Liu et al. [42], nasal swabs collected from 66 ALS patients and 40 healthy caregivers differed significantly in microbial composition and diversity.At the phylum level, Bacteroidetes and Firmicutes were substantially increased in healthy nasal communities, while Actinobacteria dominated the nasal microbiota of ALS patients.Specific genera like Gaiella, Sphingomonas, Lachnospiraceae, and Klebsiella were all found to be significant predictors of ALS [42].Faecalibacterium and Alistipes were also notably enriched in ALS patients and positively correlated with ALSFRS-R scores [42].Further profiling and correlation analyses linked several changes to the nasal microbiota with ALS-related immune dysregulation and metabolic dysfunction.However, it is still unknown whether these associations are causal and if they significantly contribute to disease development.Another study by Ellis et al. [43] sought to characterize the microbial profile and total DNA content in the peripheral blood of ALS patients.Compared to healthy controls and multiple sclerosis (MS) patients, samples drawn from ALS subjects exhibited a significant reduction in Pseudomonas, Acidovorax, and Acinetobacter levels, with complete depletions of Funneliformis and Cloacibacterium [43].Hydrurus, a freshwater alga belonging to the phylum Ochrophyta, was unexpectedly enriched in all ALS samples but only a quarter of the control and MS sample pools [43].Moreover, principal component analysis revealed the combination of β-proteobacteria, γ-proteobacteria, and Ochrophyta could effectively sort and distinguish ALS patients from other sample populations. 
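Because the Firmicutes/Bacteroidetes (F/B) ratio discussed above is derived from phylum-level relative abundances, a minimal illustration of how both quantities are computed from a count table may be helpful. The Python sketch below uses made-up counts for a single sample and does not reflect data from any of the cited studies.

```python
# Illustrative only: hypothetical phylum-level 16S counts for one sample.
counts = {
    "Firmicutes": 52000,
    "Bacteroidetes": 39000,
    "Actinobacteria": 5500,
    "Proteobacteria": 2500,
    "Verrucomicrobia": 700,
    "Cyanobacteria": 300,
}

total = sum(counts.values())
relative_abundance = {phylum: n / total for phylum, n in counts.items()}

for phylum, frac in sorted(relative_abundance.items(), key=lambda kv: -kv[1]):
    print(f"{phylum:16s} {100 * frac:5.1f}%")

# Firmicutes/Bacteroidetes (F/B) ratio, the balance metric discussed above.
fb_ratio = relative_abundance["Firmicutes"] / relative_abundance["Bacteroidetes"]
print(f"F/B ratio = {fb_ratio:.2f}")
```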
Gut Dysbiosis Contributes to Pathology in Amyotrophic Lateral Sclerosis
Changes to the gut microbiota can influence distant organs by direct (e.g., translocation) and indirect (e.g., immune dysregulation) means. While the precise mechanisms are still under investigation, several connections between gut dysbiosis and ALS pathology have been identified (see Figure 1). It is important to highlight that further research is needed to definitively determine whether these findings are causal.
Dysbiosis and Intestinal Barrier Integrity
The intestinal mucosa and its components serve as the gut's frontline defense system against invasion by harmful pathogens and toxins. Healthy microbiomes are essential to the function and integrity of this barrier [49]. A common way by which commensal microorganisms fortify the gut's lining mucosa is by direct upregulation of intercellular junctions. Alvarez et al. [50] demonstrate that several commensal strains of Escherichia coli can promote the translation and redistribution of tight junction proteins, such as zonula occludens (ZO)-1 and claudin-2. Similarly, Karczewski et al. [51] show that Lactobacillus plantarum can upregulate the expression of scaffold and transmembrane proteins in healthy subjects, even reversing their chemically induced dislocation in an in vitro model of human gut epithelium.

Alterations in the microbial composition of the gut can disrupt intercellular junctions and, by extension, increase mucosal permeability (a phenomenon commonly referred to as "leaky gut"). A recent study by Wu et al. [52] shows that gut dysbiosis was associated with compromised barrier integrity in a transgenic mouse model of ALS. Compared to wild-type mice, the relative abundances of Fermicus, Escherichia coli, and Butyrivibrio fibrisolvens (a butyrate-producing bacterium) were all markedly lower in transgenic mice. These shifts occurred before ALS symptom onset (at 2 months of age) and were associated with a significant reduction in the colonic expression of tight (ZO-1) and adherens (E-cadherin) junction proteins [52]. Moreover, levels of interleukin (IL)-17, a proinflammatory mediator of the host defense against extracellular pathogens [53], were significantly enhanced in both the intestine and blood of transgenic mice but not wild-type mice, indicating that intestinal permeability was affected. Indeed, a permeability assay revealed a two-fold increase in the serum intensity of fluorescein isothiocyanate (FITC)-dextran in ALS transgenic mice compared to wild-type, confirming that barrier integrity was altered by dysbiosis. While studies on ALS patients are lacking, evidence of compromised barrier integrity can be drawn from reports of microbial translocation and circulating inflammatory marker levels. For example, Zhang et al. [54] demonstrate that plasma levels of bacterial lipopolysaccharide (LPS), a pro-inflammatory glycolipid on the surface of most gram-negative bacteria, were significantly elevated in patients with sporadic ALS compared to healthy controls. In addition, LPS levels were negatively correlated with patient ALSFRS-R scores, suggesting that barrier integrity could influence disease progression or severity. Indeed, chronic elevation of LPS can trigger low-grade systemic inflammation, as evidenced by widespread monocyte activation [54], and subsequently impact vagal afferent signaling in the GBA [55]. Moreover, endotoxemia secondary to a leaky gut can potentially propagate to the CNS across damaged blood-brain barriers [56] and further aggravate neuroinflammation by promoting microglial overactivation or the production of reactive oxygen species [57]. Consistent with these findings, the restoration of intestinal barrier integrity by reversing dysbiosis not only suppressed local immune receptor signaling but also alleviated neuroinflammation [58]. Another study by Kim et al.
[38] showed that lipopolysaccharidebinding protein (LBP), a surrogate marker of microbial translocation [59], was significantly elevated in the plasma of ALS patients and similarly correlated with symptom severity.Moreover, an increased abundance of microbial species in the blood of these patients was seen by 16S rDNA quantification, highlighting the link between dysbiosis and increased gut permeability [38]. Dysbiosis and Metabolic Dysfunction Motor neurons are remarkably vulnerable to systemic and cellular disturbances in energy homeostasis.Impaired mitochondrial function, oxidative stress, and altered glucose metabolism have therefore all been implicated in the pathogenesis of ALS [60].Given that the gut microbiota significantly regulates nutrient availability and bioenergetics [61], recent evidence suggests that dysbiosis may drive metabolic dysfunction in ALS.Sagi et al. [62], for example, report that, in mice lacking the antioxidant enzyme superoxide dismutase 1 (SOD1), a well-established animal model of ALS, changes to the gut microbiota and F/B ratio were associated with significant metabolic dysfunction.Increased oxidative stress caused by SOD1 deficiency not only suppressed hepatic gluconeogenesis but also promoted lipid accumulation (causing fatty liver in young 15-week-old mice).Moreover, redox imbalance was associated with the increased nitrosylation and subsequent inactivation of glyceraldehyde-3-phosphate dehydrogenase (GAPDH), a crucial enzyme in glycolysis [62].If not compensated, chronic shifts in carbohydrate metabolism can have detrimental impacts on energy homeostasis and disease progression in ALS [63].Another study by Blacher et al. [22] demonstrates that, in both human ALS patients and SOD1 transgenic mice with glycine substituted to alanine at position 93 (G93A), significant reductions in the relative abundance of Akkermansia muciniphila were associated with lower levels of nicotinamide.Nicotinamide is the precursor to coenzymes that are crucial in cellular signaling, energy metabolism, and redox homeostasis [64], and its depletion was associated with aggravated motor dysfunction in mice and lower ALSFRS scores in patients [22].These results were further confirmed in G93A mice when systemic supplementation with nicotinamide significantly ameliorated motor symptoms and prolonged survival, likely due to the restoration of mitochondrial and antioxidant functions (as revealed by a gene-ontology enrichment analysis).Another relevant metabolite is butyrate, a natural byproduct of dietary fiber fermentation in the colon.In addition to being the gut's primary source of energy, butyrate has several immunomodulatory and metabolic functions throughout the body [65][66][67].Moreover, butyrate may exert neuroprotective effects, as its supplementation in a motor neuron-like cell model of ALS significantly restored mitochondrial respiratory capacity and biogenesis [68].Many of the previously discussed human [28,31,47] and animal [52,69] profiling studies report that the levels of butyrate-producing bacteria are significantly decreased in ALS.Hertzberg et al. [46] also show that ALS patients typically lack enzymes needed for butyrate metabolism, even without deficiency in these microbes. 
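Associations of the kind noted above (for example, a metabolite or taxon level against ALSFRS-R scores) are usually tested with rank correlations. The following Python/SciPy sketch runs such a test on entirely fabricated paired values; it does not reproduce any analysis from the cited studies, and the variable names are ours.

```python
# Hypothetical example: rank correlation between a plasma metabolite and ALSFRS-R scores.
from scipy.stats import spearmanr

plasma_nicotinamide = [12.1, 15.3, 19.8, 14.2, 11.0, 18.5, 13.4, 21.0]  # made-up levels
alsfrs_r_scores     = [28,   33,   41,   36,   25,   39,   30,   44]    # made-up scores

rho, p_value = spearmanr(plasma_nicotinamide, alsfrs_r_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```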
Dysbiosis and Immune Dysregulation Dysregulation of both central and peripheral immune systems has been previously described in ALS [70].Persistent inflammation in the brains and spinal cords of ALS patients is typically associated with an increase in the number of reactive microglia and astrocytes.While initially neuroprotective [71], chronic glial cell activation exacerbates inflammation by promoting inflammasome formation and the production of several proinflammatory cytokines, such as IL-1β and IL-18 [70,72].Impaired autophagy and the loss of metabolic support normally provided by astrocytes can also contribute to neuronal cell injury and degeneration in ALS [72].Although many of the previously discussed profiling studies link alterations in the gut microbiota to neuroinflammation in ALS, the precise mechanisms are not fully understood.Some studies suggest immune dysregulation following dysbiosis is an indirect consequence of endotoxemia and an altered SCFA metabolism.Among its many anti-inflammatory activities, butyrate can inhibit histone deacetylase and subsequently shift microglial polarization towards an anti-inflammatory and more neuroprotective phenotype [73,74].It can also suppress IL-17 to maintain the balance between regulatory T cells (Treg) and T helper 17 cells (Th17) for systemic immune response control [75].In ALS patients, the levels of butyrate-producing bacteria are reported to be significantly lower compared to healthy controls [47].These alterations not only impact SCFA production but can also exacerbate local gut inflammation and trigger a systemic or neuroinflammatory response.Indeed, Niccolai et al. [31] reported that inflammatory biomarkers such as macrophage inflammatory protein-1 alpha (MIP-1α), monocyte chemoattractant protein-1 (MCP-1), IL-1α, IL-6, IL-18, and IL-27 were significantly higher in ALS stool samples.Moreover, elevated levels of circulating inflammatory cytokines, such as IL-17 and IL-23, have been described in the serum and cerebrospinal fluid of ALS patients [76], indicating possible Treg/Th17 imbalance.Other mechanisms by which dysbiosis may contribute to immune dysregulation in ALS include disabled autophagy and increased immune cell infiltration [77], although further investigation is warranted. Gut-Microbiota-Modulating Agents in Amyotrophic Lateral Sclerosis Given that GBA dysfunction and dysbiosis play a significant role in the pathogenesis of ALS, interventions and treatment strategies that target the gut microbiota may be of therapeutic benefit.Gut-modulating agents are broadly categorized into four main types: prebiotics, probiotics, postbiotics, and synbiotics.The use of these compounds in neurodegenerative disorders, such as Alzheimer's disease (AD) and Parkinson's disease (PD), have previously proven effective in reversing dysbiosis and alleviating disease symptoms [78][79][80][81].The following sections summarize all currently available preclinical evidence on the use of gut-microbiota-modulating agents in ALS. Prebiotics Prebiotics include a wide range of dietary fibers and compounds that can effectively stimulate the growth or activity of beneficial gut microbes [82,83].Although initially indigestible, prebiotics undergo fermentation in the colon to produce metabolites (e.g., SCFAs) that help regulate systemic metabolism and promote overall gut health [84].In a recent study by Zhang et al. 
[85], oral administration of the prebiotic galacto-oligosaccharides (GOS) not only increased the relative abundance of Lactobacillus but also attenuated neuroinflammation and cognitive impairment in transgenic AD mice. The use of fructooligosaccharides (FOS), alone or in combination with GOS, similarly promoted Bifidobacterium growth and alleviated AD pathology. The effects of these prebiotics were partly attributed to the downregulation of signaling pathways shared between the colons and cortices of mice [85], suggesting that GBA modulation could significantly influence CNS pathology. Preclinical evidence on the use of prebiotics in ALS remains preliminary and inconclusive. Song et al. [86], for example, demonstrated that treatment of SOD1-G93A transgenic mice with GOS-rich yogurt can significantly alter ALS progression and prolong animal lifespan. GOS administration not only rescued mitochondrial activity in skeletal muscles but also attenuated their denervation and atrophy. Moreover, the observed reduction in motor neuron degeneration was attributed to a marked suppression of neuroinflammation following treatment [86]. Compared to mouse groups fed with normal saline or milk, yogurt consumption was associated with significantly reduced microglial and astrocyte activation, as well as lower levels of inflammatory and apoptosis-related factors. Another study by Yip et al. [87] showed that the use of eicosapentaenoic acid (EPA), an omega-3 polyunsaturated fatty acid, did not offer any therapeutic benefit in ALS. Although EPA treatment significantly reversed microglial cell activation similar to GOS, it shortened the lifespan of mice and failed to alter motor neuron loss [87]. Furthermore, an increase in neurotoxic byproducts, such as microglial 4-hydroxy-2-hexenal, was found in the spinal cords of mice treated with EPA. Further research is needed to conclusively determine whether prebiotic compounds are safe and effective.

Probiotics
Probiotics comprise a number of live microorganisms, often beneficial gut bacteria and yeast, that play a crucial role in maintaining the body's microbial balance and promoting well-being [88]. Many of the currently well-recognized probiotics belong to the Lactobacillus, Bifidobacterium, Saccharomyces, Enterococcus, and Streptococcus genera [88]. Several studies have explored the potential of specific strains in treating neurodegenerative disorders. For example, Zhu et al. [89] showed that the administration of Bifidobacterium breve significantly attenuated neuroinflammation, amyloid deposition, and cognitive impairment in APP/PS1 transgenic mice. These changes were associated with increased regulation of gut microbiota composition and improvement in intestinal barrier function [89], highlighting the therapeutic potential of GBA modulation. Another study similarly reported that Bifidobacterium breve supplementation in amyloid beta precursor protein (APP) knock-in mice increased the bioavailability of anti-oxidative metabolites, which improved cognitive function [90]. Huang et al. [91] demonstrated that the use of Lactobacillus plantarum can delay AD progression by regulating gliosis and tau hyperphosphorylation following a reduction in propionic acid levels. Yang et al.
[92] show treatment with ProBiotic-4, a combination of Bifidobacterium lactis, Lactobacillus casei, Bifidobacterium bifidum, and Lactobacillus acidophilus, improved cognitive function and memory deficits in aged SAMP8 mice.Moreover, the combination significantly attenuated age-related disruption of the blood-brain barrier, and it also reduced cerebral neuronal and synaptic injuries [92].Sancandi et al. [93] reported that Symprove™, a commercially available probiotic suspension comprised of another four bacterial strains, restored gut integrity, improved SCFA production, and prevented striatal neuroinflammation in an early stage PD rat model. Aligned with the aforementioned findings, probiotic interventions in ALS have been shown to be neuroprotective.A recent study by Labarre et al. [94] investigated the effects of sixteen different probiotic formulations on Caenorhabditis elegans strains that were genetically modified to express two human ALS-associated proteins: fused in sarcoma (FUS) and TAR DNA-binding protein 43 (TDP-43).Although most combinations had little to no effect, treatment with the probiotic Lacticaseibacillus rhamnosus HA-114 alone was effective in delaying neurodegeneration and preventing paralysis [94].These effects were observed in FUS and TDP-43 mutant worms but not wild-type strains, suggesting that the benefits of HA-114 were more specifically tied to ALS pathology.Indeed, further investigations revealed that impaired β-oxidation and altered energy homeostasis, common features of ALS-related metabolic dysfunction, exacerbated motor neuron degeneration unless treated with HA-114 [94].The neuroprotective mechanisms of Lacticaseibacillus rhamnosus were primarily attributed to its unique fatty acid content, which helped restore energy metabolism independent of a functional carnitine shuttle.Blacher et al. [22] similarly reported that restoration of mitochondrial function following administration of Akkermansia muciniphila was central to alleviating motor symptoms in SOD1-G93A mice.Although further investigation is warranted, these findings suggest that targeting dysbiosis to address neuroinflammation and metabolic imbalances in ALS could be a viable therapeutic approach. Postbiotics Postbiotics are non-viable, biologically active components or metabolic byproducts of the gut microbiota [84].Common examples of postbiotics are functional proteins, extracellular polysaccharides, bacterial lysates, and fermentation byproducts (e.g., SCFAs).In ALS, Zhang et al. [69] have shown that the natural bacterial product, butyrate, was neuroprotective when given at a 2% concentration in filtered drinking water to SOD1-G93A transgenic mice.Treatment with postbiotics not only restored intestinal microbial homeostasis and gut barrier integrity, but also delayed ALS progression and prolonged the lifespan of mice.Moreover, abnormal Paneth cell accumulation and SOD1 mutant protein aggregation were significantly lowered in the intestines of mice receiving butyrate compared to the control groups [69].These findings suggest that the therapeutic potential of postbiotics particularly lies in their ability to address gut-related abnormalities and inflammation.Indeed, Ogbu et al. 
[95] demonstrated that 2% sodium butyrate administration was associated with changes in microbial carbohydrate and amino acid metabolism. Moreover, butyrate treatment significantly reduced microglia in the spinal cords of SOD1-G93A mice and was associated with lower circulating levels of proinflammatory IL-7 and LPS [95]. Ryu et al. [96] further attributed the neuroprotective effects of sodium phenylbutyrate to the regulation of several anti-apoptotic genes. In addition to inhibiting histone deacetylase, phenylbutyrate administration significantly upregulated nuclear factor-kappaB (NF-κB) and B-cell lymphoma 2 (Bcl-2) expression, blocking caspase activation and subsequent motor neuron death. While treatment with phenylbutyrate alone significantly delayed disease progression, another study by Signore et al. [97] reported that its combination with riluzole, a widely recognized medication for the treatment of ALS patients, was most effective in prolonging survival in G93A transgenic ALS mice. The combination also rescued body weight loss and grip strength [97], features of the disease that are often overlooked in animal studies. While promising, larger-scale studies are required to determine the long-term efficacy of combined therapies and whether these synergistic effects translate to human patients. All of the aforementioned findings on the application of gut-modulating agents in ALS have been summarized in Table 2.

[Table 2 fragment - mechanism: inhibited histone deacetylase (shifting microglia toward an anti-inflammatory, neuroprotective phenotype) and upregulated expression of anti-apoptotic genes; outcome: promoted motor neuron survival and delayed disease progression in ALS mice.]
Note: To the best of our knowledge, no preclinical studies have explored the use of synbiotics in ALS. SAMP8: senescence-accelerated mouse prone 8; SOD1: superoxide dismutase 1; G93A: glycine 93 to alanine mutation.
Future Directions
Taken all together, recent evidence suggests a role for gastrointestinal dysfunction and gut dysbiosis in the pathogenesis of neurodegenerative disorders. Human microbiome studies consistently report that ALS patients exhibit distinct changes to their gut microbial composition and diversity. Drastic shifts in the microbial profile not only exacerbate local intestinal inflammation but can also promote chronic neuroinflammation, a hallmark of ALS pathology. Moreover, compromised gut barrier integrity, metabolic dysfunction, and immune dysregulation following dysbiosis have been linked to GBA dysfunction in ALS. Given these connections, the therapeutic potential of gut-modulating agents (prebiotics, probiotics, and postbiotics) has become a focus of recent research efforts. These interventions showed promising gut modulatory effects and neuroprotective properties in animal models of ALS. Depending on when they were administered, several agents were also able to rescue motor symptoms, or even mitigate disease progression. Although many parameters of gut health were assessed (e.g., fecal metabolites, inflammatory markers, and intestinal permeability), a major limitation of the preclinical studies summarized in this review is that the composition of the gut microbiota following treatment was not always reported. It is, therefore, difficult to determine whether the effects of gut-modulating agents are secondary to the restoration of a healthy microbial balance without further research. Future studies should also continue exploring ALS-related gut dysbiosis to establish which mechanisms are potentially causal to disease pathology. To limit the possibility of any confounding factors independently affecting both ALS and the gut microbiome, rigorous study designs are necessary. Lastly, translating these findings to human patients remains a challenge. Preclinical studies often use transgenic mice with SOD1 mutations, and while instrumental in our understanding of ALS pathology, these models do not fully represent the ALS patient population. It is important to investigate whether neuroprotection following gut modulation is reproducible in a variety of genetic and sporadic ALS models. Well-designed, large-scale randomized clinical trials are needed to better determine the efficacy and safety of these agents in ALS patients. Future studies should also aim to explore how individual variations in the human gut microbiome impact disease progression and treatment outcomes. Understanding the influence of genetic, dietary, and environmental factors could pave the way for more personalized treatment strategies.
Figure 1. Gut Dysbiosis and Pathology in Amyotrophic Lateral Sclerosis. Alterations to the gut microbiota can contribute to ALS pathology via three pivotal mechanisms: compromised gut barrier integrity, metabolic dysfunction, and immune dysregulation. Reduced expression of tight junctions along the intestinal epithelium allows for microbial invasion and subsequent inflammation. These microbes may also translocate into the blood, triggering a systemic immune response (e.g., endotoxemia, proinflammatory cytokine production, and peripheral monocyte activation). If prolonged, systemic inflammation can damage the blood-brain barrier (BBB) and result in the overactivation of microglia and astrocytes, further aggravating neuroinflammation. The loss of immunomodulatory and neuroprotective metabolites such as butyrate, a short-chain fatty acid (SCFA), promotes motor neuron degeneration by increased oxidative stress and mitochondrial dysfunction. The interplay between gut and brain health highlights the therapeutic potential of gut-brain axis (GBA) modulation in ALS. Restoring a healthy microbial balance may not only alleviate patient symptoms by modulating these mechanisms but also prolong survival by mitigating disease progression. The figure was created using Biorender.com.

Author Contributions: Conceptualization, A.N.E.; writing-original draft preparation, A.N.E., M.A. and R.N.E.; writing-review and editing, A.N.E. and R.N.E.; visualization, A.N.E.; supervision, A.Y. and K.A. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding.

Table 1. Key Microbial Alterations in Human Patients with ALS.

Table 1.
Cont.Only oral and fecal microbiome studies of human patients were considered and summarized in this table.Upward arrows (↑) indicate an increase in the abundance of listed microbes while downward arrows (↓) indicate reduced levels.Unless mentioned otherwise, all listed microbial changes in ALS patients compared to controls are statistically significant (p < 0.05).ALSFRS-R: ALS functional rating scale revised; BAD: brachial amyotrophic diplegia; bALS: bulbar onset ALS; sALS: spinal onset ALS; DGGE: denaturing gradient gel electrophoresis; F/B: Firmicutes/Bacteroidetes ratio; IL: interleukin; LBP: lipopolysaccharide-binding protein; MCP-1: monocyte chemoattractant protein-1; NAM: nicotinamide; PCR: polymerase chain reaction; qPCR: quantitative PCR; qRT-PCR: quantitative real-time reverse-transcription PCR; VEGF-A: vascular endothelial growth factor A, O2PLS-DA: two-way orthogonal partial least square with discriminant analysis; WGCNA: weighted gene co-expression network analysis. Table 2 . Preclinical Studies on Gut-Modulating Agent Use in Animal Models of ALS.
2024-02-27T17:14:12.396Z
2024-02-21T00:00:00.000
{ "year": 2024, "sha1": "871a48bb4fc8bbbb4bdd5cb5c18cc93e7353f5a1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "29e3973fb9881e9d1a6cb0687359c9dd3581a516", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
260091647
pes2o/s2orc
v3-fos-license
MUSE-ALMA Haloes IX: Morphologies and Stellar Properties of Gas-rich Galaxies

Understanding how galaxies interact with the circumgalactic medium (CGM) requires determining how galaxies' morphological and stellar properties correlate with their CGM properties. We report an analysis of 66 well-imaged galaxies detected in HST and VLT MUSE observations and determined to be within $\pm$500 km s$^{-1}$ of the redshifts of strong intervening quasar absorbers at $0.2 \lesssim z \lesssim 1.4$ with H I column densities $N_{\rm H I}$ $>$ $10^{18}$ $\rm cm^{-2}$. We present the geometrical properties (S\'ersic indices, effective radii, axis ratios, and position angles) of these galaxies determined using GALFIT. Using these properties along with star formation rates (SFRs, estimated using the H$\alpha$ or [O II] luminosity) and stellar masses ($M_{*}$ estimated from spectral energy distribution fits), we examine correlations among various stellar and CGM properties. Our main findings are as follows: (1) SFR correlates well with $M_{*}$, and most absorption-selected galaxies are consistent with the star formation main sequence (SFMS) of the global population. (2) More massive absorber counterparts are more centrally concentrated and are larger in size. (3) Galaxy sizes and normalized impact parameters correlate negatively with $N_{\rm H I}$, consistent with higher $N_{\rm H I}$ absorption arising in smaller galaxies, and closer to galaxy centers. (4) Absorption and emission metallicities correlate with $M_{*}$ and sSFR, implying metal-poor absorbers arise in galaxies with low past star formation and faster current gas consumption rates. (5) SFR surface densities of absorption-selected galaxies are higher than predicted by the Kennicutt-Schmidt relation for local galaxies, suggesting a higher star formation efficiency in the absorption-selected galaxies.

★ E-mail: karkia@email.sc.edu

INTRODUCTION
The circumgalactic medium (CGM) has become increasingly recognized as an important component of the baryonic Universe. It serves as a transition region between the galaxy disk and the intergalactic medium (IGM) (Tumlinson et al. 2017). Metal-poor IGM gas is believed to flow into the galaxy, passing through the CGM. This gas is converted into stars and progressively enriched chemically. The outflows driven by the supernovae [or active galactic nuclei (AGN)] transfer the enriched gas back into the IGM, also passing through the CGM. This cosmic baryon cycle regulates star formation in the galaxy (Péroux & Howk 2020). Given the central role of the CGM in this cycle, it is expected to play a major role in the evolution of the galaxy. Understanding how the CGM interacts with galaxies requires analyzing how the stellar properties of the galaxies are dictated by and, in return, influence the CGM properties. The stellar properties of the galaxies are described by measurements of various properties that depend directly on their stellar populations - for example, photometric magnitudes, colors, star formation rates (SFRs), and stellar masses. The morphologies of the galaxies are closely coupled to these properties. They can be expressed in terms of various quantitative measures of the surface brightness distribution such as the effective radius ($R_{\rm e}$), axis ratio (b/a), and Sérsic index ($n$).
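For readers unfamiliar with the Sérsic parameterization used above, the index $n$ and effective radius describe the surface-brightness profile $I(r) = I_{\rm e}\,\exp\{-b_n[(r/R_{\rm e})^{1/n} - 1]\}$. The Python sketch below evaluates this profile using the common approximation $b_n \approx 2n - 1/3$; it is a generic illustration with arbitrary parameter values, not the GALFIT modelling performed in this work.

```python
# Generic Sersic surface-brightness profile (illustrative; not the paper's GALFIT fit).
import numpy as np

def sersic_profile(r, I_e, R_e, n):
    """I(r) = I_e * exp(-b_n * ((r / R_e)**(1/n) - 1)).

    Uses the common approximation b_n ~= 2n - 1/3 (adequate for n >~ 0.5).
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((np.asarray(r) / R_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 5)                         # radii in kpc (hypothetical)
print(sersic_profile(r, I_e=1.0, R_e=3.0, n=1.0))     # n = 1: exponential disk
print(sersic_profile(r, I_e=1.0, R_e=3.0, n=4.0))     # n = 4: de Vaucouleurs-like spheroid
```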
Broadly speaking, galaxies fall into two main types in the color-magnitude diagrams -the "blue cloud" consisting of the latetype, more actively star-forming galaxies, and the "red sequence" of early-type, more passive galaxies that have their star formation quenched (e.g. Kauffmann et al. 2003;Buta 2011). The "size" of a galaxy detected in optical light depends on its stellar mass and the rate of star formation, which are governed by the dark matter halo and the galaxy's formation history. Massive galaxies are larger and have higher SFRs. (Mowla et al. 2019). The slope of the size versus stellar mass relation is shallower for late-type galaxies than for early-type galaxies (e.g. Shen et al. 2003;van der Wel et al. 2014). This difference may be due to dry minor mergers, which can lead to a size growth caused by adding an outer envelope without adding as much mass. Galaxies that undergo repeated dry minor mergers tend to have a larger size (e.g. Hilz et al. 2013;Carollo et al. 2013). The star formation caused by gas-rich mergers can also lead to the formation of larger disks. The different evolutionary paths taken by early-type and late-type galaxies thus result in different size-mass relationships. The direct observation of emissions from the gas inflows and outflows passing through the CGM is difficult due to the very low gas density. Absorption spectroscopy of background sources such as quasars or gamma-ray bursts (GRBs) provides an alternative and powerful technique to study these gas flows. Damped Ly (DLA) and sub-Damped Ly (sub-DLA) systems provide a huge reservoir of neutral hydrogen required for star formation (e.g. Péroux et al. 2003;Wolfe et al. 2005;Prochaska & Wolfe 2009;Noterdaeme et al. 2012;Kulkarni et al. 2022). DLAs ( HI ≥ 2 × 10 20 cm −2 ) and sub-DLAs (10 19 ≤ HI < 2 × 10 20 cm −2 ) permit measurements of a variety of metal ions, and are therefore among the best-known tracers of element abundances in distant galaxies (e.g. Kulkarni et al. 2005;Rafelski et al. 2012;Som et al. 2015;Fumagalli et al. 2016). Although the absorption technique provides an effective tool to probe gas along the sight line to the background object, it cannot provide information about the galaxy in which the absorption arises (e.g. Bergeron 1986;Bergeron & Boissé 1991). Also, detecting galaxies associated with absorbers in the vicinity of the quasar using imaging and spectroscopy was not always effective in past studies, since the galaxies selected for spectroscopic study were sometimes found to be offset in redshift from the absorber. The technique of integral field spectroscopy (IFS) provides an efficient method for detecting galaxies associated with the absorbers, and thus connecting stellar properties with gas properties (Péroux et al. 2019(Péroux et al. , 2022. Several surveys (e.g., MusE GAs FLOw and Wind (MEGAFLOW; Schroetter et al. 2016), MUSE Ultra Deep Field (MUDF; Fossati et al. 2019) and MUSE Analysis of Gas around Galaxies (MAGG; Dutta et al. 2020;Lofthouse et al. 2020Lofthouse et al. , 2023) have used the power of IFS to search for sources traced by Mg and H absorption and studied the gas properties of CGM. While the Bimodal Absorption System Imaging Campaign (BASIC survey; Berg et al. 2022), has studied H selected partial Lyman limit systems (pLLSs) and Lyman limit systems (LLSs) to search for absorber-associated galaxies, the Cosmic Ultraviolet Baryon Survey (CUBS; Chen et al. 2020) has studied the galactic environments of Lyman limit systems (LLSs) at abs < 1 using IFS observations. 
Another survey, MUSE Quasar-field Blind Emitters Survey (MUSEQuBES; Muzahid et al. 2020) searched for Ly emitters (LAEs) at the redshift of the absorbers using guaranteed time observations with the Multi-Unit Spectroscopic Explorer (MUSE; Bacon et al. 2010) on the Very Large Telescope (VLT). We have surveyed a large number of absorption-selected galaxies with VLT MUSE and the Atacama Large Millimetre/submillimetre Array (ALMA; Wootten & Thompson 2009) as part of our MUSE-ALMA Haloes Survey, and have recently imaged these galaxies with the Hubble Space Telescope (HST). The HST images provide highresolution broadband continuum imaging of the galaxies, while the MUSE data provide IFS of the galaxies, thus providing information about the spatial distribution of gas kinematics, SFRs and emissionline metallicity. The kinematics measured from the IFS also allows estimates of the dynamical masses of the galaxies (e.g., Péroux et al. 2012;Bouché et al. 2013). Thus, combining HST images and MUSE IFS provides a powerful approach to studying the morphologies and stellar content of absorption-selected galaxies and relating them to the CGM gas properties. Together, the stellar and gas properties can be used to put improved constraints on the evolution of galaxies and their CGM. The MUSE-ALMA Haloes (MAH) survey targets the fields of 32 H rich absorbers at redshift 0.2 ≤ z abs ≤ 1.4 detected in sight lines to 19 quasars. These 32 absorbers were detected in HST FOS, COS, or STIS UV spectra. Most of our quasars also have optical highresolution spectra from VLT/UVES, X-Shooter, or Keck/HIRES with spectral resolutions ranging from 4000-18 000 (X-shooter), 45 000-48 000 (HIRES) and up to 80 000 (UVES). The H column densities determined from these UV spectra (with resolution R = 20 000−30 000) are found to be HI > 10 18 cm −2 . Information about quasar spectra and an overview of the survey are provided in Péroux et al. (2022). In this paper, we focus on 66 galaxies observed in the HST images that lie within a radial velocity range of ± 500 km s −1 of the redshifts of 25 H rich absorbers (the remaining 7 absorbers having no associated galaxies within ± 500 km s −1 ). The paper is organized as follows: section 2 presents the sample selection and observations. Section 3 details the results derived from the observation. Section 4 summarizes our findings. We adopt the following cosmology parameters: H o = 70 km s −1 Mpc −1 , Ω = 0.3 and Ω Λ = 0.7 throughout the paper. HST IMAGES AND OBSERVATIONS Our primary HST imaging dataset comes from the broad-band imaging observations performed in GO Program ID 15939 (PI: Péroux) with the Wide Field Camera-3 (WFC3). These data were complemented by archival Wide Field and Planetary Camera-2 (WFPC2) or WFC3 images obtained in programs 5098, 5143, 5351, 6557, 7329, 7451, 9173, and 14594 (PIs Burbidge, Macchetto, Bergeron, Steidel, Malkan, Smette, Bechtold, Bielby, respectively). The observations consisted of multiple dithered exposures in a variety of filters. Further details about these observations and the observation strategy used for our own program (PID 15939) can be found in Péroux et al. (2022). The data were reduced using the 3 or 2 pipelines. For each filter, the sky-subtracted, aligned images from the individual exposures were median-combined to produce the final images. Figure 2 show examples of final full-frame images and zoomed-in sections near the quasar. 
The quasar point spread function (PSF) has been subtracted in the zoomed-in frame to search for galaxies at small angular separations from the quasar. The PSF in each filter and instrument for each given quasar field was constructed using observations of all remaining quasar fields from our sample in the same band and instrument. For each such image used in making the PSF, we masked the objects other than the central quasar and performed sky subtraction. All such processed images were aligned spatially and coadded after the flux levels in the outer wings of the PSF were scaled to match each other. The resulting PSF was then subtracted from the quasar field of interest after matching the flux levels of the two images. The PSF-subtracted images thus produced were used to search for galaxies near the quasars. This approach is similar to that used in previous works (e.g. Kulkarni et al. 2000, 2001; Chun et al. 2010; Straka et al. 2011; Augustin et al. 2018).

Figure 1. Median stacked image of Q0152+0023 in the F814W UVIS filter band. The image is created by combining all the individual exposure images, which helps to remove cosmic-ray effects and bad pixels and improves the signal-to-noise ratio. This enables us to study the morphological properties of the galaxy populations robustly. The solid red circles mark the positions of the associated galaxies in the field of view, while a dotted black circle denotes the QSO's position. The astropy photutils package was used to detect the associated galaxies in the given quasar field. The associated galaxies are located within ±500 km s^-1 of the absorber's redshift; their redshifts come from the MUSE data. The scale corresponds to 40 kpc at z_abs = 0.4818. The object identification numbers (IDs) of these galaxies come from the MUSE-ALMA Haloes master table listed in Péroux et al. (2022).

Object detections and astrometric and photometric measurements were performed using the astropy photutils package. The data reduction, PSF construction, PSF subtraction, and photometric measurements are further detailed in Péroux et al. (2022). Using the MUSE observations along with the HST imaging, a total of 3658 sources were detected in all of our fields. The MUSE Line Emission Tracker (MUSELET) tool of the MPDAF package was used to detect sources with emission lines, and an R-based source-finding package was used to identify continuum sources in the MUSE fields. A final master table was produced by cross-matching the sources detected in the HST images with the MUSE results. Spectroscopic redshifts were determined for 703 objects out of the 3658 sources detected in all fields from the VLT/MUSE spectra. The remaining objects do not have spectroscopic redshifts, because they were either detected outside of the MUSE field of view or too faint for a redshift estimate. We refer the reader to Péroux et al. (2022) for a detailed methodology regarding the redshift measurements and for the master table of all targets. Table 1 provides a summary of the HST imaging observations for the 18 quasar fields. The high spatial resolution of the broad-band HST images allowed us to detect several sources nearby or far away from the quasar sightlines, which were undetected in the MUSE cubes. The HST images also enabled us to detect and resolve several sources near the quasars whose redshifts are not well known.
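The PSF construction and subtraction described earlier in this section can be summarised in a short numerical sketch. The snippet below is only an illustrative simplification (the function names and the single multiplicative scaling of the PSF wings are our own assumptions, not the survey pipeline, which is documented in Péroux et al. 2022).

```python
# Illustrative sketch of the PSF stacking / subtraction described above
# (simplified; the actual reduction is described in Péroux et al. 2022).
import numpy as np

def build_psf(quasar_cutouts, masks):
    """Median-stack sky-subtracted quasar cutouts, masking other objects.

    quasar_cutouts : list of 2-D arrays centred on the quasar
    masks          : list of boolean arrays (True = pixel belongs to another object)
    """
    stack = []
    for img, mask in zip(quasar_cutouts, masks):
        clean = img.astype(float).copy()
        clean[mask] = np.nan                      # mask neighbouring objects
        clean -= np.nanmedian(clean)              # crude sky subtraction
        clean /= np.nansum(np.abs(clean))         # normalise the flux level
        stack.append(clean)
    return np.nanmedian(stack, axis=0)            # median combine

def subtract_psf(field_cutout, psf, wing_region):
    """Scale the PSF to the quasar in `field_cutout` and subtract it.

    wing_region : boolean array selecting the outer PSF wings used for scaling.
    """
    scale = np.nansum(field_cutout[wing_region]) / np.nansum(psf[wing_region])
    return field_cutout - scale * psf
```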
While the MUSE data allowed us to study the emission properties of the galaxies, geometrical properties and structural features such as tidal tails are difficult to discern in MUSE images due to insufficient spatial resolution. The high-resolution HST images therefore also allowed us to study the morphologies of the galaxies and their structural details. The MUSE data show 79 associated galaxies within ±500 km s^-1 of the absorber redshifts. Out of the total of 32 absorbers, 19 (59%) have two or more associated galaxies, 7 (22%) have one associated galaxy and the remaining 6 (19%) have no associated galaxy detected within ±500 km s^-1 of the absorption redshift. A detailed explanation of how these associated galaxies were selected is provided in Weng et al. (2023). Out of these 79 galaxies, 9 were detected in emission only in the MUSE fields (but not in the HST images), one was detected at the edge of the HST image, and three were detected in the archival HST images but were too faint for reliable morphological measurements. Therefore, we analyzed the remaining 66 associated galaxies in the 18 HST fields.

Morphological properties of associated galaxies

The GALAPAGOS software (Häußler et al. 2011) was used to study the morphologies of the 66 galaxies associated with the absorbers in our MUSE-ALMA Haloes sample. GALAPAGOS uses SExtractor (Bertin & Arnouts 1996) for object detection and GALFIT (Peng et al. 2002) for two-dimensional image decomposition, and can be run in batch mode to analyze multiple objects. This analysis was performed on the reddest images obtained for each quasar field, because these images in the red or near-infrared filters (WFPC2 F702W, F814W, WFC3 F105W, WFC3 F140W, or NICMOS F160W) provide far more sensitive detection thresholds and enable better deblending of extended sources compared to the bluer filters. The redder bands also better sample the cooler and older stars, which in turn more accurately follow the gravitational potential. Sérsic profiles were fitted to each of the sample galaxies. In each case, a cutout region was created with the galaxy centered in the image. Depending on the size of the galaxy, the cutout region ranged from 3" × 3" to 7" × 7" in size, and included a sufficient number of pixels at the sky level surrounding the source. Most of the cutout images show only the galaxy without the presence of nearby sources. In cases where nearby sources were present, they were masked using the SExtractor segmentation map. For each object, the best-fitting values of the morphological parameters were determined so as to minimize the residual between the data and the fitted Sérsic model. In cases where there were significant residuals or the parameter values returned by GALFIT had large errors, the morphological parameters were determined by adding more Sérsic components and running GALFIT iteratively until the parameters converged. As an example, Figure 3 shows the outcomes of the Sérsic profile fitting for the five associated galaxies detected in the UVIS/F814W image of the quasar field presented in Figure 1. Each of these five galaxies was well fitted using a single Sérsic component. Table 2 lists the morphological parameters (the effective radius R_e, Sérsic index n, axis ratio b/a, and position angle PA of the major axis, in degrees east of north) along with the impact parameter, k-corrected absolute magnitude, absolute effective surface brightness and reduced chi-squared values determined from our analysis of the HST images for each of the galaxies.
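As an illustration of the kind of single-component Sérsic fit performed here, the sketch below uses astropy's Sersic2D model on a background-subtracted cutout. It is only a schematic stand-in for the actual GALFIT-based modelling (initial guesses, masking, and PSF convolution are omitted), and the function and variable names are our own.

```python
# Schematic single-component Sersic fit with astropy (the analysis above used
# GALFIT; this only illustrates the kind of parameters being recovered).
import numpy as np
from astropy.modeling import models, fitting

def fit_sersic(cutout, x0, y0):
    """Fit a single Sersic profile to a background-subtracted galaxy cutout."""
    ny, nx = cutout.shape
    y, x = np.mgrid[:ny, :nx]
    init = models.Sersic2D(amplitude=cutout.max(), r_eff=5.0, n=1.0,
                           x_0=x0, y_0=y0, ellip=0.3, theta=0.0)
    fitter = fitting.LevMarLSQFitter()
    best = fitter(init, x, y, cutout, maxiter=500)
    # Returns: effective radius (pixels), Sersic index, axis ratio b/a,
    # and theta (radians from the x-axis; conversion to PA east of north not shown).
    return best.r_eff.value, best.n.value, 1.0 - best.ellip.value, best.theta.value
```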
The galaxy redshifts are based on the emission-line measurements from our MUSE observations, as described in Péroux et al. (2022). The absolute magnitude for each galaxy was determined using the apparent magnitude and the luminosity distance for the assumed cosmological parameters, and was k-corrected. Since our sample consists of galaxies at different redshifts observed in various bandpasses, applying a k-correction is essential to make a meaningful comparison among them (Blanton & Roweis 2007). The IDL-based kcorrect software (Blanton & Roweis 2007) was used to calculate the k-corrections to the absolute magnitudes.

Table 2. Results of our GALFIT morphological modeling, impact parameters (b), and surface brightness of the gas-rich galaxies. The listed parameters are the quasar name, object IDs from Péroux et al. (2022), impact parameters of the quasar sightlines from the galaxy centers, k-corrected absolute magnitude in the UVIS F814W filter, Sérsic index, effective radius, axis ratio, position angle (PA) of the major axis, absolute surface brightness averaged within the effective radii of the galaxies, and goodness of fit. A complete table of associated galaxies and their properties is available as machine-readable online material.

Figure 4 shows the distributions of the morphological properties provided by GALFIT and of the redshifts of the associated galaxies. For our sample galaxies, the Sérsic index ranges from 0.33 to 2.27 with a median value of 0.86 ± 0.07. This suggests that most of the sample galaxies are disks or dwarf spheroidals; such disk galaxies exhibit an exponential light profile (Kelvin et al. 2012). The effective radii of the sample galaxies range from 0.68 kpc to 7.55 kpc with a median value of 2.85 ± 0.15 kpc, and their absolute magnitudes (in the F814W filter) range from -15.80 to -23.73 with a median value of -20.81 ± 0.07. The redshifts of the galaxies range from z = 0.19 to z = 1.15, with a median value of z = 0.56.

Figure 4. Distributions of the morphological properties, the redshifts, and the k-corrected absolute magnitude (lower-right) in the UVIS/F814W filter of the 66 associated galaxies. In each panel, the vertical dashed black line represents the median value. We used the IDL-based kcorrect software to calculate the k-corrections to the absolute magnitudes.

Stellar Properties of associated galaxies

The SFRs of the associated galaxies were measured using the Hα emission line for the sample galaxies with z ≤ 0.4, and the [O II] emission lines for the remaining galaxies (where the Hα emission line falls outside the MUSE wavelength coverage). Dust-corrected SFRs were calculated for 13 galaxies at redshift z < 0.4 with detections of Hα and Hβ, using the measured Hα and Hβ emission-line fluxes. For 15 galaxies, only a 3-σ upper limit could be placed on the SFR. A complete description of the SFR estimates and of the dust correction to the SFRs is provided by Weng et al. (2023). The stellar masses (M*), estimated using the HST broad-band magnitudes and spectral energy distribution (SED) fits performed with the photometric redshift code LE PHARE (Arnouts et al. 1999; Ilbert et al. 2006), were found to span a wide range, 7.8 < log(M*/M⊙) < 12.4. Further detail about the stellar mass determination is provided in Augustin et al. (in prep). The SFR surface density and the average stellar mass density were computed by averaging the SFR and the stellar mass within the effective radius. The absolute effective surface brightness, <μ_eff> (mag arcsec^-2), averaged within the effective radius was calculated for our galaxies from the k-corrected absolute magnitude and the effective radius using the following expression (Graham & Driver 2005):

<μ_eff> = M + 5 log10(R_e) + 38.57    (3)

Table 3 lists the derived stellar properties of the galaxy populations.

Table 3. Stellar properties of the gas-rich galaxies and other galaxies from the literature. The listed parameters are the quasar name, galaxy ID, galaxy redshift, specific star formation rate, SFR surface density, stellar mass surface density, and sSFR surface density. The entries marked "-999" correspond to measurements that are unavailable. A complete table of associated galaxies and their properties is available as machine-readable online material.

Figure 5. The absolute surface brightness averaged within the effective radius of the galaxies associated with H I absorbers plotted against the absolute magnitude in the UVIS/F814W band. Sample and literature galaxies are divided into two bins in terms of H I column density using the median value of the H I column density (N_HI,med = 6.03 × 10^19 cm^-2). Magenta and yellow symbols denote galaxies with N_HI < N_HI,med and N_HI ≥ N_HI,med, respectively. Diamonds denote galaxies nearest to the quasar sight lines (in impact parameter) from our HST measurements, while open circles denote the most massive galaxies in each quasar field at the redshift of the absorber. The literature values are taken from Fynbo et al. (2011), Augustin et al. (2018), and Rhodin et al. (2021). For the literature galaxies, we calculated the surface brightness within the effective radius using the photometric magnitudes listed in the literature papers. The stacked histograms show the distribution of the galaxies along both axes.

Table 4. Absorption properties of the gas-rich galaxies and other absorbers from the literature. Listed are the name of the quasar field, absorber redshift, impact parameter of the nearest galaxy, normalized impact parameter (b/R_e), H I column density, rest equivalent widths of Mg II 2796 and Fe II 2600, and the absorption metallicities. The absorption metallicities listed here are the observed values (generally based on Zn) in the quasar sightlines. The last column lists the references for the impact parameter, absorber rest-frame equivalent widths, absorption metallicity, and the H I column density. See the text for more details. The entries marked "-999" correspond to measurements that are unavailable. A complete table of absorption properties is available as machine-readable online material.

Absorption Properties of associated galaxies

The MUSE-ALMA Haloes survey analyzed the absorption properties of 32 H I-rich absorbers with N_HI > 10^18 cm^-2 using 19 quasar fields. Using the HST images of the 18 quasar fields, we detected 66 galaxies within ±500 km s^-1 of the absorber redshift for 25 absorbers. Table 4 lists the impact parameters of the quasar sightlines from the galaxy centers and the absorption properties along these sightlines for the sample galaxies and the galaxies from the literature. The impact parameters of these 66 associated galaxies are measured from the astrometry of the HST images. The measurements of the H I column density, the absorption metallicities, and the equivalent widths of the Fe II 2600 and Mg II 2796 absorption lines are taken from various references in the literature. The absorption metallicities listed for the DLAs and sub-DLAs in the sample (available for 4 out of the 25 absorbers) are based on the Zn abundance, without corrections for ionization and (in most cases) dust depletion. The effect of dust depletion is expected to be modest, since Zn is a volatile element that is far less depleted in the interstellar medium (ISM) of the Milky Way compared to refractory elements such as Fe (e.g. Jenkins 2009; Vladilo et al. 2011).
Indeed, Zn is often used as the metallicity indicator in DLAs/sub-DLAs for these reasons. Ionization corrections are known to be small for DLAs (Muzahid et al. 2016; Péroux & Howk 2020). For sub-DLAs (which tend to be more ionized than DLAs), the ionization corrections to the Zn abundance are found to be small, of order 0.2 dex (e.g., Meiring et al. 2009). We therefore select only the absorbers with Zn abundance measurements for comparisons of the absorption metallicities with the stellar properties of the galaxies in the following sections. Finally, we note that emission-line metallicities are also available for our galaxies, and were measured using a strong-line calibration from Curti et al. (2017). A complete description of the galaxy metallicity estimates is provided in Weng et al. (2023).

DISCUSSION

Using the high-resolution HST images and the integral field spectroscopy provided by MUSE, we have studied and analyzed the morphological and stellar properties of the 66 associated galaxies. Together, the morphological fits and the SED-based stellar properties provide ample information about the structural and stellar content of these gas-rich galaxies. The following sections discuss the scientific results, compare the associated galaxies with the general galaxy population, and explore the connection between their stellar and absorption properties.

Literature Sample

To examine how the properties of the galaxies in our sample compare to other absorption-selected galaxies, we use a comparison sample compiled from the literature. Specifically, this literature sample consists of 61 galaxies detected at the redshifts of known gas-rich absorbers in IFS surveys of the CGM (see references listed in Table 4). These literature galaxies range in redshift from 0.10 to 3.15, and in impact parameter from ∼3 kpc to 88 kpc. The H I column densities of some of the literature galaxies (Zabl et al. 2019, denoted by grey stars in the figures) are estimated from the Mg II 2796 equivalent widths using the relation from Ménard & Chelouche (2009). Three absorbers from this literature sample have multiple galaxies at the absorber redshifts (Augustin et al. 2018). In these cases, if combined measurements of stellar properties were available, we adopted those values and treated the multiple sources as a single source. The stellar masses for the galaxies in the literature sample were derived from SED fitting in most cases, and otherwise from the tight correlation between M* and a dynamical estimator, i.e. a function of galaxy velocity dispersion and rotational velocity (Schroetter et al. 2019).

Correlations between stellar and absorption properties

To assess the correlations between the various properties of the galaxies associated with the absorbers, we use the Spearman rank-order correlation method to evaluate the correlation coefficient (r_s) and the probability (p) that the observed value of r_s could occur purely by chance. In cases where there is a mixture of detections and limits (i.e., in the presence of censored data), we use the survival-analysis method to calculate the Spearman correlation coefficient (as implemented in a task within the Image Reduction and Analysis Facility; IRAF, Tody 1986). The survival-analysis method uses the Kaplan-Meier estimate of the survival curve to assign ranks to the observations that include censored points. Censored points are assigned half (for upper limits) or twice (for lower limits) the rank that they would have had were they uncensored.
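For the uncensored case this test is a one-liner with scipy; the snippet below is a rough sketch that also mimics the half/twice rank rule described above for limits. It is only an illustration of the idea (the actual analysis used the IRAF survival-analysis implementation), and the function name is our own.

```python
# Rough sketch of the Spearman test described above. For detections only this is
# simply scipy's spearmanr; the censored branch naively applies the half/twice
# rank rule quoted in the text and is NOT the ASURV/IRAF survival-analysis code.
import numpy as np
from scipy import stats

def spearman_with_limits(x, y, y_censor=None):
    """y_censor: 0 = detection, -1 = upper limit, +1 = lower limit (or None)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if y_censor is None:
        return stats.spearmanr(x, y)
    ranks = stats.rankdata(y)
    cens = np.asarray(y_censor)
    ranks = np.where(cens == -1, 0.5 * ranks, ranks)   # upper limits -> half rank
    ranks = np.where(cens == +1, 2.0 * ranks, ranks)   # lower limits -> double rank
    return stats.spearmanr(x, ranks)

# Hypothetical example: log SFR vs log M* with one upper limit on the SFR
rho, p = spearman_with_limits([8.9, 9.5, 10.1, 10.8],
                              [-0.6, 0.1, 0.4, 0.9],
                              y_censor=[0, -1, 0, 0])
print(rho, p)
```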
Given that the literature sample is based on observations obtained with different selection methods, it is useful to ask how much the correlations are affected by the inclusion or exclusion of the literature sample. With this in mind, we computed the r_s and p values between the various properties both for our own sample and for a larger sample that includes the literature galaxies. Table 5 lists the results of these correlation calculations. In the following subsections, we discuss some of the key implications of our analysis to address two major questions: How do absorber-selected galaxies compare to the general galaxy population? How do stellar and absorption properties relate?

Do gas-rich galaxies differ from the general population?

We now compare the absorber-associated galaxies from our sample and the literature with the properties of the overall galaxy population. While making these comparisons, we examine whether there is a difference between galaxies associated with high and low H I column densities, and also between small and large impact parameters. The galaxies with the lowest impact parameters may be thought of as the host galaxies of the absorbers (e.g. Schroetter et al. 2016; Weng et al. 2023). In most cases (18 out of 24), the galaxies with the lowest impact parameters are also the most massive galaxies. However, we find no substantial difference between the trends for the lowest and intermediate column density bins. The trend appears to be flatter for the lowest-luminosity galaxies, especially for galaxies associated with the highest H I column density absorbers. This finding is consistent with past suggestions that the highest H I column density absorbers may be associated with dwarf galaxies. We also analyzed a plot similar to Figure 5 by splitting our sample galaxies into two redshift bins using the median redshift of the sample galaxies. No significant differences are observed between the redshift and N_HI sub-samples.

Figure 6 shows a plot of the SFR vs. stellar mass, revealing a strong correlation with r_s = 0.41 and p = 1.24 × 10^-4. Also shown for comparison is the SFR-M* relation, i.e. the star formation main sequence (SFMS), for galaxies at z = 0.56, the median redshift of our full sample of absorber-associated galaxies (based on the SFMS from Boogaard et al. 2018). Since the SFMS evolves with redshift and our full sample covers a wide redshift range, we show the 1-σ deviations from the SFMS relations at z = 0.10 and z = 3.15 (the minimum and maximum redshifts of our full sample).

Figure 6. Upper panel: SFR plotted against stellar mass. The literature values include those from Rhodin et al. (2021). All other symbols are as in Figure 5. The solid black line shows the star formation main sequence from Boogaard et al. (2018) at the median redshift of the sample galaxies. The blue and red dashed lines show the 1-σ deviations from the SFR-M* relations at z_min and z_max of the full sample of absorber-associated galaxies (including galaxies from both our MAH sample and the literature). Lower panel: the difference between the observed SFR and the SFR expected from the SFMS of Boogaard et al. (2018) at z_med at the observed stellar mass, plotted against the stellar mass. The blue and red dashed lines show the difference relative to the SFMS at z_med for the lower and upper 1-σ SFMS at z_min and z_max.

Dependence of SFR on stellar mass

Most galaxies associated with strong intervening quasar absorbers appear to be consistent with the SFMS within the uncertainties. We note, however, that a small fraction of high-mass galaxies lie below the SFMS. The lower panel of Figure 6 shows the deviation from the SFMS vs. M* (a schematic version of this offset calculation is sketched below).
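As a sketch of how that offset is computed, the snippet below evaluates an SFMS of the generic form log SFR = a (log M* − log M0) + b + c log[(1+z)/(1+z0)] and subtracts it from the observed SFR. All numerical coefficients are placeholders chosen only for illustration; they are not the published Boogaard et al. (2018) values.

```python
# Schematic SFMS-offset calculation (cf. the lower panel of Figure 6).
# The coefficients below are PLACEHOLDERS for illustration only, not the
# published Boogaard et al. (2018) parameters.
import numpy as np

def log_sfr_sfms(log_mstar, z, a=0.8, b=0.0, c=2.0, log_m0=8.5, z0=0.55):
    """Placeholder star formation main sequence: log SFR as a function of M* and z."""
    return a * (log_mstar - log_m0) + b + c * np.log10((1.0 + z) / (1.0 + z0))

def sfms_offset(log_sfr_obs, log_mstar, z):
    """Offset of the observed SFR from the (placeholder) SFMS, in dex."""
    return log_sfr_obs - log_sfr_sfms(log_mstar, z)

# Hypothetical galaxy: log M* = 9.7, log SFR = 0.3, at the median redshift z = 0.56
print(sfms_offset(0.3, 9.7, 0.56))
```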
The deviation seems to be highest for galaxies with the highest stellar mass. Similar conclusions were also reached by Kulkarni et al. (2022). We note, however, that dust corrections for SFRs were not possible for most of these galaxies (12 out of the 14 galaxies with * > 10 10 M showing > 2 deviation from the SFMS), and the true SFRs for these galaxies could be higher. Figure 7 shows plots of the stellar mass vs. the effective radius and Sérsic index. Also shown for comparison is the mass-size scaling relationship for galaxies at 0.50 < < 0.75 adopted from Ichikawa et al. (e.g. 2012). * appears to be strongly correlated with the effective radius ( = 0.49, = 9.77 × 10 −7 ). A similar result was found in previous studies (e.g. Ichikawa et al. 2012). A positive correlation is observed between the Sérsic index and stellar mass with = 0.43 and = 2.51 × 10 −4 . The more massive galaxies tend to be more centrally concentrated and, therefore, earlier-type. Similar results were also observed in previous studies (e.g. van der Wel et al. 2014;Mowla et al. 2019;Lima-Dias et al. 2021). However, we caution that our sample is relatively small, and the correlation between * and is sensitive to the presence of a few galaxies with the largest masses. It seems clear that galaxies with Sérsic index below 1.2 are primarily below 10 10 M in stellar mass, while those with higher Sérsic indices span the full mass range, consistent with observations of a wide mass range among local early-type galaxies (e.g., dwarf spheroidal and giant elliptical galaxies) Dependence of Stellar mass on Sérsic index and size To summarize, we have compared the various morphological and stellar properties of our galaxies (which were selected for strong H absorption) with the properties of the global galaxy population. We find that the absorption-selected galaxies exhibit similar properties as shown by the general population. How do stellar and absorption properties relate? We now examine the stellar properties of the galaxies within the velocity range of ± 500 km s −1 of the absorption redshift with the . The effective radii plotted against the impact parameter color-coded by column density of neutral hydrogen of the galaxies sample. The diamond symbols are the sample galaxies that lie closest to the quasar sightline, while open circles stars denote the most massive galaxies of each quasar at the redshift of the absorber. The stars are the literature sample galaxies. The red and blue dashed lines correspond to b = R vir , and b = R vir /2, respectively, taking the approximate relation between the effective radius and the virial radius. From the plot, we see most of the sample galaxies are found at or within the R vir and more than half of the sample galaxies are found at or within the half of the R vir . It is also interesting to see that all the galaxies that are closest to the quasar's sightlines are within the virial radius. Most of the gas-rich galaxies are located in the region near the galactic center and fewer gas-rich absorbers are found to be tracing CGM at a larger distance. Table 5. Results of correlation tests between various properties of our sample galaxies and literature galaxies. Column 1 lists the parameter pairs for which the correlation is computed. Columns 2, 3, and 4 list the number of paired parameters, the Spearman rank order correlation coefficient ( ), and the probability (p) that the observed value of could arise by chance for both sample and literature galaxies. 
Columns 5, 6, and 7 list similar values for our sample galaxies only. . The rest-frame equivalent width of Mg II 2796 plotted against the stellar mass. All symbols are as in Figure 5. The stellar mass of the galaxies is anti-correlated with the column density of H gas (Augustin et al. 2018) while the HI positively correlates with the equivalent width of the metal lines (Rao et al. 2006). The massive galaxies will have lower metal line strengths, while the less massive galaxies are found to possess strong metal line strengths. absorption properties. We only include the absorbers with reliable metallicities for comparisons of absorption metallicities and stellar properties of the galaxies. To perform these comparisons, we selected galaxies with the smallest impact parameters from the quasar sightlines and the most massive galaxies in each quasar field detected within ± 500 km s −1 of the absorption redshift. We note that it is not possible to consider all the galaxies in such cases or even use the average values of the stellar properties of all the galaxies since doing so would either underestimate or overestimate the true values, given that the impact parameters of the individual galaxies are different. Figure 8 shows the impact parameter versus the effective radius for our galaxies and other absorption-selected galaxies from the literature. For reference, the red and blue dashed lines correspond to b = R vir , and b = R vir /2, respectively, assuming the approximate relation between the effective radius and the Virial radius (Kravtsov 2013). Most of the sample galaxies (∼85 %) lie below b = R vir , and more than half of the sample galaxies (∼59 %) lie below b = R vir /2. All of the galaxies at the smallest impact parameters (i.e., the most probable host galaxies) are located below b = R vir while almost all massive galaxies (∼96 %) are present below the red dash line, as shown in the Figure 8. Given the approximate relation between the effective radius and the Virial radius (Kravtsov 2013), b/R e ∼70 corresponds to b ∼ R vir . It is thus interesting to note that most of the galaxies have impact parameters below R vir and that the galaxies with impact parameters larger than ∼ 0.3 R Vir are almost all below the sub-DLA limit in H column density (Noterdaeme et al. 2014). While most DLAs and sub-DLAs are associated with galaxies at impact parameters less than 0.2 R vir , a small fraction (∼26 %) of sample galaxies have impact parameters in the range of 0.5 R vir to R vir . All of the galaxies from the literature lie below b = R vir /2. This suggests that while these gas-rich absorbers usually trace regions close Figure 10. Absorption and emission metallicities plotted against stellar properties of gas-rich galaxies. Left panel: Absorption metallicities plotted against the stellar mass, star formation rate, and specific star formation rate (from top to bottom). All symbols are as in Figure 5. The absorption metallicities are based on Zn for most cases, while for one absorber, we adopt dust-free absorption metallicity. Right Panel: Emission metallicities plotted against the stellar mass, star formation rate, and specific star formation rate (from top to bottom). The solid and dashed black lines show, respectively, the mass-metallicity relation for 0.5 < < 1.0 from Ly et al. (2016) to galaxy centers, they occasionally trace the CGM at large distances extending out to the Virial radius. 
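Returning briefly to Figure 8: the correspondence quoted above, b/R_e ≈ 70 corresponding to b ≈ R_vir (via the approximate Kravtsov 2013 relation between effective and virial radii), can be turned into a one-line helper for classifying sightlines. The factor of 70 is only the approximate conversion used in the text, and the function below is our own illustration.

```python
# Illustrative helper based on the approximate correspondence quoted above,
# b/R_e ~ 70  <=>  b ~ R_vir (via the Kravtsov 2013 size-virial radius relation).
def impact_parameter_in_virial_units(b_kpc, r_eff_kpc, re_to_rvir=70.0):
    """Return b/R_vir, assuming R_vir ~ re_to_rvir * R_e (approximate)."""
    return b_kpc / (re_to_rvir * r_eff_kpc)

# Example: a hypothetical galaxy with R_e = 2.85 kpc (the sample median) at b = 100 kpc
ratio = impact_parameter_in_virial_units(100.0, 2.85)
print(ratio, ratio <= 0.5)   # ~0.50 -> the sightline lies near b = R_vir/2
```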
Dependence of Mg II 2796 rest-frame equivalent width on stellar mass Studies of the interdependence of the Mg II 2796 absorption strength and the galaxy's stellar properties have reached different conclusions. Bordoloi et al. (2011) reported that the Mg II equivalent widths [estimated from low-resolution spectra of zCOSMOS galaxies, obtained with the Visible Multi-Object Spectrograph (VIMOS) on the Very Large Telescope (VLT), by fitting a single Gaussian profile across the Mg II 2796 and 2803 lines] increase with stellar mass for blue galaxies at lower impact parameter (b < 50 kpc). However, such dependence is absent in the red galaxies and even in blue galaxies with larger impact parameters (b > 65 kpc). Nielsen et al. (2013) suggested that more massive galaxies have larger Mg II equivalent width in the MAG CAT sample, based on a positive correlation between the K-band luminosity (assumed to be a proxy of * ) and the Mg II equivalent width. For impact parameters smaller than 50 kpc, Lan et al. (2014) showed that the Mg II 2796 equivalent width shows a positive correlation with M* for star-forming galaxies but not for passive galaxies. Taking the larger dispersion values in median Mg II 2796 equivalent widths, Rubin et al. (2018) found that the median Mg II equivalent widths increase with the stellar mass for blue galaxies at a transverse distance 30 kpc < R ⊥ < 50 kpc, however the median Mg II 2796 equivalent widths values for red galaxies are found to be smaller compared to those for high-mass blue galaxies. Including both detections and upper limits in Mg II 2796 equivalent widths measurements, Dutta et al. (2020) reported a weak positive correlation between Mg II 2796 equivalent widths and M* in the MAGG sample. For our MAH sample and literature galaxies, we find a weak negative correlation ( = -0.27, = 4.86 × 10 −2 ) between the Mg II 2796 equivalent widths and stellar mass. Part of the reason for the difference between this trend and the weak or positive relations found for Mg II absorbers seems to be the difference in the selection techniques. Our sample absorbers are selected by high HI , and therefore have higher Mg II 2796 equivalent widths. For example, ∼ 70% of the absorbers in our sample have Mg II 2796 rest equivalent widths > 0.5 Å, while only < 15% of the absorbers in the sample of Dutta et al. (2021) fall in this category. Moreover, ∼ 60% of the galaxies in our sample are at impact parameters less than 100 kpc, while only < 10% of the galaxies in the sample of Dutta et al. (2021) fall in this category. We also note that our * values have substantial uncertainties, making a robust detection of the trend difficult. Figure 9 shows plots of Mg II 2796 absorption strengths versus the stellar mass. The H column density is known to be positively correlated with the Mg II and Fe II absorption strengths (e.g. Rao et al. 2006). Furthermore, the H column density is negatively correlated with the stellar mass and with metallicity, suggesting that the lower HI absorbers are associated with more massive galaxies that have had high past star formation and gas consumption Augustin et al. 2018). The negative correlation seen in the Figure 9 is thus expected. Dependence of metallicity on stellar properties The left panels of Figure 10 show plots of absorption metallicities vs. stellar properties (M * , SFR and sSFR) of the galaxies. For most cases, we use absorption metallicities based on Zn, which is less depleted on dust grains. 
For one absorber, we use the dust-free absorption metallicity inferred from multiple elements using the method of Jenkins (2009) based on depletion trends observed in the local interstellar medium of the Milky Way. The right panels of Figure 10 Figure 10 are the mass-metallicity relation (MZR) for star-forming galaxies at 0.5 < < 1.0 and the 1-uncertainties in this relation (Ly et al. 2016). The bottom sub-panel in the upper right panel shows the difference in the observed emission metallicities and the expected emission metallicities based on the MZR from Ly et al. (2016) at the observed stellar mass. It is clear that both the absorption-based metallicity (away from the galaxy center, at the impact parameter of the corresponding quasar sight line) and the emission-based metallicity (typically at the galaxy center) are positively correlated with the stellar mass, and are generally consistent with the MZR for star-forming galaxies. The middle panels of Figure 10 show plots of the absorption and emission metallicity vs. the SFR. No correlation is observed between metallicity (both absorption and emission metallicity) and SFR. The bottom panels of Figure 10 show plots of the absorption and emission metallicity vs. the specific star formation rate (sSFR). Both absorption and emission metallicity are negatively correlated with the sSFR. This suggests that the low-metallicity absorbers are associated with galaxies that are forming stars (and consuming gas) more vigorously. However, they are less enriched due to low past star formation activity. While the plot suggests that the negative correlation is mostly driven by the presence of literature galaxies, it is essential to expand the sample to verify this correlation. Figure 11 shows plots of the metallicities of the absorption-selected galaxies vs. the impact parameter and the normalized impact parameter for galaxies from our sample and the literature. To place these in context, we use a model for the metallicity-size relation for DLAs suggested by Fynbo et al. (2008). In this simple model, galaxies are assigned a size, metallicity, and metallicity gradient based on their luminosities. The metallicity distributions of the quasar DLAs and GRB DLAs are predicted using the luminosity function of UVselected galaxies (Lyman Break Galaxies), a metallicity vs. luminosity relation, and the radial distribution of H gas along with luminosity. Using such a model prediction, Krogager et al. (2012) generated 4000 simulated data points to compare to the measured data points. The light blue region under the solid blue line in the right panel of Figure 11 shows these 4000 simulated data points. For further details, see (e.g. Krogager et al. 2012). We extended the region to impact parameters of 100 kpc to include our sample galaxies. To do so, we extrapolated the metallicity-impact parameter relationship predicted by those simulation points. The light blue region under the dashed blue line in the Figure 11 shows the extrapolated region. Both panels of Figure 11 suggest that the metallicity is higher for galaxies sampled at higher impact parameters. This result is in agreement with previous studies (e.g. Krogager et al. 2012Krogager et al. , 2017, and would be consistent with the model expectation (Fynbo et al. 2008). This suggests that the metal-poor systems are found in smaller haloes and, therefore, detectable only in galaxies at lower impact parameters (Krogager et al. 2012). 
Figure 12 shows the impact parameter normalized by the effective radius and the effective radius plotted versus the H column density. A negative correlation ( = -0.56 and p = 2.00 × 10 −4 ) is observed between the normalized impact parameter and the H column density. The effective radius of the absorbing galaxies also shows a strong negative correlation ( = -0.52 and p = 6.00 × 10 −4 ) with H column density. This shows that galaxies associated with higher H absorbers are smaller in size, and therefore likely to show strong absorption at small impact parameters. Indeed, all galaxies in the current sample associated with DLAs have effective radii smaller than 3 kpc, while all galaxies with R e >3 kpc are associated with lower H column density absorbers. This finding is consistent with past suggestions that DLAs are associated with dwarf galaxies (e.g. York et al. 1986;Kulkarni et al. 2010). . Absorption metallicities plotted versus the impact parameter (left) and the normalized impact parameter (right). All symbols are as in Figure 5. The light blue region below the solid blue line in the left panel figure denotes the simulated distribution of impact parameters plotted against the absorption metallicities for using the model described in Fynbo et al. (2008) for the DLA galaxies at = 3 and is taken from Krogager et al. (2012). Extrapolating the model's prediction, the region is further extended up to the impact parameters of 100 kpc to include the sample galaxies detected at the larger impact parameters and is denoted by a light blue region below the dashed blue line. The trends suggest that metal-poor strong H absorbers are found in smaller haloes and, therefore, at smaller impact parameters (Krogager et al. 2012). The plot infers that the H column density is higher for the galaxies found at low-impact parameters to the quasar's sightline. At the same time, there is low H column density in the regions located at larger distances. Right Panel: Plot of R e against the log of HI . The gas-rich galaxies are smaller in size than the gas-poor galaxies. All symbols are as in Figure 5. Figure 13 shows plots of the surface densities of the SFR and sSFR averaged within the effective radius versus the column density of H . Relation between H column density and surface density of star formation In the upper two panels, the galaxies are binned into two groups by stellar mass (above and below the median stellar mass log M* med = 9.67). In the lower two panels, the galaxies are binned into two groups by redshift ( < 1 and ≥ 1). For comparison with local galaxies, we also show the relationship observed between the surface densities of SFR and H (Kennicutt 1998) for low-redshift spirals (what we refer to as the "atomic Kennicutt-Schmidt (K-S) relation"). This relation is based on the best fit (log Σ SFR = 1.02 log Σ HI -2.89) between the tabulated values of Σ SFR and Σ HI listed in Kennicutt (1998). We use this relation (which translates to log Σ SFR = 1.02 log HI -23.36) rather than the standard KS relation between Σ SFR and Σ gas since the latter also includes molecular gas, and molecular gas measurements are not available for most of the absorber-selected Figure 13. The surface densities of SFR (left panels) and sSFR (right panels) measured within the effective radius plotted versus the H column density. The gas-rich absorbers tend to have higher surface densities of SFR and sSFR. In the top panels, the galaxies are subdivided into two bins using the median value of the stellar mass. 
Galaxies with log M* < log M* med are colored with blue squares and the galaxies with log M* ≥ log M* med are represented with green squares. The bottom panels show similar plots where the galaxies are subdivided into two bins in terms of galaxies' redshift. Blue-colored galaxies have z gal < 1 while green-colored galaxies have z gal ≥ 1. Diamonds denote galaxies nearest to the quasar sight lines from our HST measurements, while open circles stars denote the most massive galaxies of each quasar at the redshift of the absorber. One of the sample galaxies Ma et al. (2018) and Rhodin et al. (2021). In the left panel, the solid black line is the Kennicutt-Schmidt relationship for nearby galaxies taken from Kennicutt (1998), while in both panels, the solid red line is the median value of the surface densities obtained from the TNG100 simulation. The solid red vertical lines denote 1-uncertainties in the median value. The cyan dots are the simulated galaxies at = 0.5. No correlation is observed between SFR surface density and H column density but a positive correlation exists between sSFR surface density and H column density. Absorption-selected galaxies have a higher star formation efficiency than predicted by the K-S law for local galaxies. See the text for more details. galaxies plotted in Figure 13 (besides the few objects observed so far in CO emission with ALMA). The upper right panel of Figure 13 shows that Σ sSFR is positively correlated with the H column density. The Spearman rank order correlation test gives = 0.42 and p = 8.30 × 10 −3 between HI and Σ sSFR (although we note that this relation is not significant if only our sample galaxies are considered.) The correlation between Σ SFR and HI (shown in the upper left panel of Figure 13) is not as significant even after including the literature galaxies, with = 0.29 and p = 6.65 × 10 −2 . This finding seems to be at odds with the previous suggestions based on simulations (e.g. Nagamine et al. 2004) that DLAs follow the KS law. Indeed, most of the galaxies associated with the absorbers from our sample and the literature lie substantially above the "atomic K-S relation". If taken at face value, this suggests that the absorptionselected galaxies (which have med ∼ 0.8) have more efficient star formation compared to the nearby galaxies. However, this apparent discrepancy may also be caused in part by the fact that the local "atomic K-S relation" is based on observations of 21-cm emission (which is usually far less sensitive to low H column densities than the Ly absorption line technique used for the quasar absorber sample). We also note that the galaxies with stellar mass greater than the median mass are located farther away from the "atomic K-S" relation in the upper left panel of Figure 13, as compared to lower-mass galaxies. This suggests that the more-massive absorption-selected galaxies have more efficient star formation, allowing them to reach comparable SFRs in lower H column density regions. The high values of Σ SFR compared to the K-S relation for the galaxies in our sample and the literature seen in Figure 13 may appear surprising, given the findings of earlier studies based on indirect estimates of SFR surface density that the star formation efficiency in DLAs at ≥ 1 is 1-3% of that predicted by the K-S relation (e.g. Wolfe & Chen 2006;Rafelski et al. 2011;Rafelski et al. 2016). The latter measurements are also shown in the left panels of Figure 13 for comparison. A substantial fraction of our galaxies have 18 < log HI < 20. 
But even focusing on just the absorbers with log HI ≥ 20.3 (i.e. the DLAs) in our sample, the star formation efficiency of the absorber-associated galaxies (with a median redshift of ∼ 1.01) still appears to be comparable to or (for some galaxies) substantially higher than predicted by the KS relation. The large difference between our values of Σ SFR and those in the past DLA studies may result from the fact that, unlike our the study, the past studies were based not on direct SFR measurements for galaxies associated with DLAs, but on measurements in the outskirts of isolated star-forming galaxies and on the assumption that the latter are related to DLAs. It is also noteworthy in this context that the surface brightness of the galaxies associated with DLAs detected in our sample and those from the literature are in fact several magnitudes brighter than the upper limit of 29 mag arcsec −2 estimated in prior studies of star formation efficiency in DLAs at ∼ 3 (Wolfe & Chen 2006;Rafelski et al. 2011). We note, however, that we cannot make a precise statement about the agreement between those DLAs and the simulation data due to the limited number of simulated galaxies with HI > 10 21 cm −2 . The ability of IFS studies to reveal star-forming galaxies associated with DLAs (and lower H column density absorbers) demonstrated by our MUSE-ALMA-Halos sample and other IFS studies in the literature indeed mark a huge improvement in detecting star formation in galaxies associated with gas-rich quasar absorbers compared to past searches. We also note, in passing, that the previous studies of the SFR surface density in DLAs mentioned above (e.g., Nagamine et al. 2004;Rafelski et al. 2016) adopted the original KS relation for Σ SFR vs Σ gas instead of the "atomic" version of this relation Σ SFR vs Σ HI . To examine whether the star formation efficiency of absorberselected gas-rich galaxies may have increased dramatically at < 1, we compare the Σ SFR vs. log HI trends for galaxies with < 1 and galaxies at ≥ 1 in the lower panels of Figure 13. The lower redshift galaxies appear to have comparable SFR surface densities for lower H column densities. This suggests a higher star formation efficiency of absorber-selected galaxies at < 1 compared to those at ≥ 1. Also shown for comparison in Figure 13 are simulated galaxies at = 0.5, based on an analysis of the data obtained from the Illustris TNG simulations (Nelson et al. 2018;Pillepich et al. 2018;Springel et al. 2018;Naiman et al. 2018;Marinacci et al. 2018). These state-of-the-art cosmological simulations incorporate essential physical processes pertinent to the formation and evolution of galaxies, such as gravity, hydrodynamics, gas cooling, star formation, stellar feedback, black hole feedback, and more. In this study, we employ the highest-resolution version of the TNG100 simulation, executed within a ∼ 100 Mpc box. Haloes are initially identified by employing the Friends of Friends (FOF) algorithm (Davis et al. 1985). Galaxies, regarded as substructures, are recognized as gravitationally-bound assemblies of particles within these FOF haloes performing the SUB-FIND algorithm (Springel et al. 2001). Each FOF halo encompasses a central galaxy, generally the most massive galaxy within that halo, and all additional galaxies within the halo are labeled as satellites. Properties of galaxies, including stellar masses and star formation rates (SFRs), are taken from the principal TNG galaxy catalogs. 
These are measured within the stellar half-mass radius by utilizing the particle data of the simulation. Furthermore, the neutral hydrogen (H ) content of TNG galaxies has been extracted from catalogs provided by Diemer et al. (2018), based on the analytic model of Sternberg et al. (2014), where they use an optimized post-processing framework for estimating the abundance of atomic and molecular hydrogen. This method uses the surface density of neutral hydrogen and the ultraviolet (UV) flux within the Lyman-Werner band, with all computations being performed through face-on projections within a two-dimensional model. The UV radiation emitted from young stars is modeled by assuming a constant escape fraction and optically thin propagation across the galaxy. The simulation demonstrates a relatively satisfactory agreement with the measurements of the SFR surface density based on emission-line observations for galaxies from both our sample and the literature shown in Figure 13 (and, like these data, also lie substantially above the upper limits for the DLAs from past studies). The agreement between the simulated and observed data appears to be better for the sSFR surface density than for the SFR surface density, as seen in the right panels of Figure 13. This difference may result from the differences in the stellar mass distributions for the simulated and observed galaxies. The stellar mass distribution of the observed galaxies (from our sample and the literature) peaks around log * = 9.67 and shows fewer low- * galaxies compared to the distribution for the simulated galaxies. The higher * galaxies have lower Σ sSFR (as expected from the negative correlation between sSFR and * , see Table 5), giving better consistency between the Σ sSFR vs. * trends for the simulated and observed galaxies, compared to the median Σ SFR vs. * trends. To summarize, we find interdependence between the stellar properties and the absorption properties. In particular, the H column density and the absorption metallicity show correlations with M*, ssfr, Σ sSFR , but not with the SFR and Σ SFR . SUMMARY AND CONCLUSIONS We have analyzed the morphological and stellar properties of 66 galaxies detected within ± 500 km s −1 of the redshifts of strong intervening quasar absorbers at 0.2 1.4 with HI > 10 18 cm −2 (that also have MUSE and/or ALMA data). The structural parameters of these absorption-selected galaxies were determined using . The galaxies were found to have Sérsic indices ranging from 0.3 to 2.3 and effective radii ranging from 0.7 to 7.6 kpc. The k-corrected absolute magnitudes of these galaxies range from -15.8 to -23.7 mag. Our main findings are as follows: (i) The absolute (rest-frame) surface brightness shows a strong positive correlation with the galaxy luminosity. The trend appears flatter at lower luminosities for those galaxies that have high H column densities. This suggests dwarf galaxies are associated with high H column densities. (ii) The star formation rate correlates well with the stellar mass. Most galaxies associated with intervening quasar absorbers are consistent with the star formation main sequence. (iii) Larger galaxies are found to be more massive compared to smaller galaxies. Furthermore, massive galaxies are more centrally concentrated, as observed for nearby galaxies. Overall the absorptionselected galaxies follow similar trends as those shown by the general galaxy population. (iv) For most (∼85%) of the galaxies in our sample, the impact parameters are smaller than the virial radius. 
Most of the sight lines with high HI in our sample probe the CGM of the associated galaxies at impact parameters less than half the virial radius, and only a small fraction have impact parameters larger than the virial radius. (v) The rest-frame equivalent widths of Mg II 2796 show a negative correlation with stellar mass. Such trend suggests that lower HI absorbers are associated with more massive galaxies that have undergone high past star formation and gas consumption activity. (vi) The absorption metallicity and emission metallicity show a positive correlation with the stellar mass for many of the absorptionselected galaxies and are consistent with the mass-metallicity relation for star-forming galaxies. While the metallicity shows no correlation with SFR, the metallicity is negatively correlated with the specific SFR, suggesting that the low-metallicity absorbers are associated with galaxies with vigorous current star formation but low past star formation activity. (vii) Metallicity appears to be positively correlated with the impact parameter and normalized impact parameter. This suggests that metal-poor galaxies are found in smaller haloes and are, therefore, detectable in galaxies at smaller impact parameters. (viii) The H column density is negatively correlated with the normalized impact parameter and the effective radius of the galaxies. This shows that galaxies associated with higher HI absorbers are smaller in size and, therefore, likely to show strong absorption at small impact parameters. (ix) The sSFR surface density is positively correlated with the H column density, but no correlation is seen between SFR surface density and H column density. Furthermore, the Σ SFR for the absorber-associated galaxies is substantially higher than predicted from the atomic K-S relation for nearby galaxies, suggesting higher star formation efficiency in the absorber-selected galaxies. The SFR surface density is also substantially higher than the upper limits on Σ SFR for DLAs estimated in past studies. Moreover, the star formation efficiency for absorber-associated galaxies at < 1 appears to be higher than for those at ≥ 1. The overall conclusions from our results are: the stellar and morphological properties of absorption-selected galaxies are consistent with the star formation main sequence of galaxies and show the mass-metallicity relation. Furthermore, the higher H column density absorbers are associated with smaller galaxies in smaller halos that have generally not experienced much star formation in the past and have thus remained metal-poor. However, these galaxies associated with the higher HI absorbers have more active current star formation and exhibit higher surface densities of star formation and gas consumption. Our study also reveals that a substantial fraction of gas-rich quasar absorbers arises in groups of galaxies. Our study of structural and stellar properties of 66 associated galaxies associated with gas-rich quasar absorbers has thus allowed us to search for correlations between a variety of morphological, stellar, and gas properties. However, our sample is still relatively small. Increasing the number of absorption-selected galaxies with measurements of the various properties with future MUSE, ALMA, and HST observations is essential to more robustly establish the trends suggested by our study and to fully interpret their implications for the evolution of galaxies and their CGM.
2023-07-24T04:01:53.629Z
2023-07-21T00:00:00.000
{ "year": 2023, "sha1": "b56fc37dacabe7c84473c314f17ef07cb0becac8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b56fc37dacabe7c84473c314f17ef07cb0becac8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
16304659
pes2o/s2orc
v3-fos-license
On the triple nature of the X-ray source 4U2129+47 (= V1727 Cyg) Context. In quiescence, the proposed optical counterpart to the neutron star low mass X-ray binary 4U 2129+47 (V1727 Cyg) shows a spectrum consistent with a late F-type subgiant and no radial velocity variations on the 5.24 hour binary period. This could imply that V1727 Cyg is a chance line of sight interloper. Radial velocity measurements, however, showed evidence for a longer term ~40 km/s shift, which suggested that 4U 2129+47 could be a hierarchical triple system, with the F-type star in a wide orbit about the inner low mass X-ray binary. Aims. In order to confirm the long-term radial velocity shift reported in Garcia et al. (1989) and its amplitude, we obtained spectroscopic observations of V1727 Cyg during 1996 and 1998 with the William Herschel Telescope using the ISIS spectrograph. Methods. We determined radial velocities from the ISIS spectra by means of the cross-correlation technique with a template spectrum. Results. The resulting radial velocities show variations with a maximum amplitude of ~40 km/s, confirming previous results and supporting the F-type star as being the third body in a hierarchical triple system. The odds that this star could be an interloper are ∼3 × 10 −6 . Introduction 4U 2129+47 was discovered as an active, but weak X-ray source in the fourth UHURU survey (Forman et al. 1978), and its optical counterpart was identified as V1727 Cyg (Thorstensen et al. 1979). V1727 Cyg was found to be of ∼17th magnitude, and exhibited a large amplitude (∆B ∼1.5) photometric periodicity of 5.24 hours. This was taken to be indicative of the orbital period, with the photometric modulation ascribed to X-ray heating of the companion (Thorstensen et al. 1979). Analysis of the X-ray light curve led to the development of a model where the system is viewed edge on and surrounded by an accretion disc corona (White & Holt 1982). Attempts to measure the mass of the compact object produced confusing results, with radial velocity studies implying a compact object mass of 0.6 ± 0.2 M ⊙ (Horne et al. 1986). This led to the conclusion that the binary system was a cataclysmic variable. However, the discovery of an X-ray burst confirmed the identity as a neutron star X-ray binary (Garcia & Grindlay 1987). Assuming a neutron star primary and a 5.24 hour period, the empirical relationship described by Robinson (1976) between the companion mass and the binary period indicates a mass of 0.59 ≤ M(M ⊙ ) ≤ 0.80 and a radius of 0.59 ≤ R(R ⊙ ) ≤ 0.68 (Thorstensen et al. 1979). These parameters suggest a K - M V spectral type for the companion star. X-ray observations in 1983 failed to detect 4U 2129+47, and contemporaneous optical observations showed that V1727 Cyg had dimmed to ∼18.5th mag (Pietsch et al. 1986). This period of quiescence was a good opportunity for detecting the companion star with a view to constraining the orbital parameters of the system. The first such study was conducted by Garcia et al. (1989). Spectra were taken during four observing runs between June 1987 and October 1988 in order to obtain more accurate measurements of the neutron star mass without the strong influence of accretion disc emission. The results were surprising: no variations on a 5.24 hour period were detected. From the upper limits on the orbital radial velocity variations, the neutron star's mass appeared to be < 0.1 M ⊙ (Garcia et al. 
1989), which is obviously inconsistent with a neutron star primary. Moreover, the June 1987 run provided a mean velocity significantly higher (by ∼ 40 km s −1 ) than observed in the other nights. Another surprise was the identification of V1727 Cyg as an F7-IV star - a stable 5.24 hour orbit about a neutron star is smaller than the radius of an F7-IV star (by a factor of ∼ 1.5). These results have led to the suggestion that 4U 2129+47 is a hierarchical triple system. The observed F-type star is postulated to be in a wide orbit about the centre of mass of the close pair, while the inner companion is perhaps a K-type dwarf (Garcia et al. 1989). The F7 star dominates the optical spectrum in the current quiescent state, and so radial velocity shifts on a 5.24 hour period are not observed. Garcia et al. (1989) suggested that a 30 day outer period could account for the on-off cycles seen in 4U 2129+47. An outer period on this order would drive a periodic eccentricity in the inner binary on a 45 year timescale, consistent with the on and off states that have been observed since the 1930s. The radial velocity data previously obtained and newly reported herein are consistent with such a period. Chevalier et al. (1989) have pointed out that even without the observed radial velocity shift, the lack of large radial velocity variations would lead to the postulation of a third stellar component. An obvious alternative to the triple hypothesis is a chance line of sight alignment. Ground based studies show that the on and off state optical counterparts are coincident to 0.26 ′′ , indicating that the likelihood of a chance alignment is 10 −3 (Thorstensen et al. 1988). HST imaging shows that there is no nearby companion of comparable magnitude within 0.04 ′′ (Deutsch et al. 1996). Scaling this radius to the ground based results further reduces the chance alignment probability to 2×10 −5 . In this paper, we present spectra of V1727 Cyg taken in order to analyse its systemic radial velocity over a time interval similar to the long (∼ 30 day) period predicted for the late F-type star if this is the outer component of a triple system (Garcia et al. 1989). Previous radial velocity data were acquired during short observing runs (over at most 2 nights), separated by periods of many months. In order to test the proposed ∼ 30 day period it was necessary to conduct a study over a time interval of ∼ 1 month, with data points separated by about a week. Observations Optical spectra were acquired in service mode during 1996 and 1998 using the 4.2m William Herschel Telescope on La Palma equipped with the dual-beam ISIS spectrograph. V1727 Cyg was observed with both arms of the instrument. 
The 1996 data set was acquired on August 5/12/19/25 UT. The observations were made using the R1200R grating and TEK2 CCD in the red arm and the R600B and TEK1 CCD in the blue arm, giving a dispersion of 0.40 Å pixel −1 and 0.78 Å pixel −1 , respectively. The slit width used ranged from 1".0 to 1".3. From measurements of the arc-lamp and night sky emission lines, we find that this yielded a spectral resolution from 1.0 to 1.2 Å (red arm) and 1.4 to 2.0 Å (blue arm). The red and blue data cover the wavelength range λλ6370 − 6760 and λλ4340 − 4910. On each night, two to three 30 min spectra of V1727 Cyg were taken, along with spectra of a template star, either BD+47 4219 or HD 222368 (both F7-IV stars). HD 222368 is a GCRV star with a velocity of 5.0 ± 0.9 km s −1 . In total, 11 spectra of V1727 Cyg were obtained. Calibration frames were also acquired, in particular arc lamp spectra. However, the night of August 4 had no flat fields taken. The data have been included in the analysis as the signal to noise was high enough to mask the fixed pattern noise: performing a cross correlation between an August 5 object spectrum and an identically extracted strip from a flat field produced no peak, indicating that the fixed pattern noise is small. The night of August 12 (with 2 target spectra) was discarded, as the arc lamp spectra were not rigorously taken close in time to the target spectrum, leading to an uncertain wavelength calibration. The 1998 data set consists of 6 nights of data: June 19/27, July 3/17/22 and August 2 UT. The observations were taken with the R1200R grating and TEK2 CCD on the red arm and the R1200B grating and EEV10 CCD on the blue arm of the spectrograph. The slit width used ranged from 1".0 to 1".2. In the red arm, this provided a spectral resolution from 0.8 to 1.5 Å. The blue arm data were of poor quality and were not used in the analysis. The red data covered the wavelength interval λλ6310−6710. During each night two to three spectra of V1727 Cyg with integration times ranging from 15 to 30 min were taken, along with all associated calibration frames. Spectra of BD+47 4219 were also acquired each night (except on July 3). A total of 15 spectra of V1727 Cyg were taken. The two spectra acquired on June 27 were discarded due to their low quality. Furthermore, the night of July 16 was unsuitable for use due to instrumental problems. Both data sets were reduced with standard IRAF routines and the spectra were extracted with the IRAF KPNOSLIT package. Analysis Velocities were computed using the Fourier cross-correlation technique developed by Tonry & Davis (1979) and implemented in the IRAF task FXCOR. A sum of three HD 222368 spectra (taken on 1996 August 18 when conditions were very good) was used as a template spectrum. Fig. 1 shows this composite spectrum from the red arm along with a sum of four V1727 Cyg spectra. Prior to the cross-correlation, the target and template spectra were re-sampled into a common logarithmic wavelength scale and normalized by dividing with the result of fitting a low-order spline to the continuum. Correlation was performed without Hα in the red, to ensure that any possible residual emission from the accretion disc did not interfere with the measured velocity of V1727 Cyg. In the blue, Hβ was included, as (besides the G-band) the spectra lack well defined features in the available region, and without Hβ correlation fits were poor. Fig. 2 and Table 1 show the radial velocity of V1727 Cyg from the 1996 and 1998 data sets. The radial velocities shown in Fig. 
2 are weighted averages of radial velocities obtained from the red and (when available) blue spectra. All velocities are heliocentric. Table 1 also shows the average radial velocities as derived using only the blue or red spectra. As in Garcia et al. (1989) the errors on the individual velocities produced by FXCOR have been computed as σ = C/(1 + r), where C is a constant determined from the observed scatter in the velocity of comparison stars, and r is the correlation coefficient (see Tonry and Davis 1979). Gaussian statistics were used throughout when calculating mean values. Checking the wavelength of night sky emission lines served as a zero point calibration for the data: these lines are seen only at wavelengths longer than ∼ 5000 Å, so this method could not be used for the blue frames. The average night sky line velocity is approximately -0.4 km s −1 , which is consistent with zero given the errors. The wavelength of each night sky line was determined from the centroid of a Gaussian function fitted to each line. Both the larger error bar for the V1727 Cyg radial velocity and the lack of sky line measurements on 1998 July 3 are due to the fact that the spectra were obtained during dawn. As noted in section 2, no radial velocity templates were observed during this night. Discussion Our data do not show the expected short term, ∼ 300 km s −1 semi-amplitude radial velocity variations indicative of a neutron star primary. This is consistent with all previous studies of V1727 Cyg in quiescence (see e.g. Garcia et al. 1989; Garcia 1989; Chevalier et al. 1989; Cowley & Schmidtke 1990). Furthermore, the longer term velocity variations found by Garcia et al. (1989) are confirmed in this new dataset. The 40 km s −1 amplitude of the variation is similar to that found earlier, and the variation is detected at the 7-10 σ level. If we accept that the late F-type subgiant is indeed orbiting an unseen companion (or companions), we can ask: what are the odds that it could still be a line of sight interloper, rather than physically associated with 4U 2129+47? The fraction of multiple systems in the Galaxy is an active topic of study, with estimates ranging from 10% to 80% (Lada 2006). It is now clear that the fraction depends upon mass, with more massive stars more likely to be members of multiple systems. Also, the majority of the systems are widely detached and show only small (few km s −1 ) radial velocity variations. We used the study of Duquennoy & Mayor (1991) to estimate the fraction of F7 stars that would show velocity variations of at least 40 km s −1 (corresponding to a radial velocity semi-amplitude K of 20 km s −1 ). This study measured orbital velocity variations of a sample of F7 to G8 stars in order to determine orbital periods and multiplicity frequency. The mean period in the sample was log(P) = 4.8 with a roughly Gaussian dispersion of 2.3 in log(P), with P in days. From Table 2 of Duquennoy & Mayor (1991) we estimate that K = 20 km s −1 corresponds approximately to P = 100 days. The fraction of systems with periods less than 100 days, and therefore likely to show velocity variation of 40 km s −1 or more, is approximately 1 in 7. Therefore the odds that V1727 Cyg could be an interloper are reduced from 2 × 10 −5 to approximately 3 × 10 −6 . Recently, Bozzo et al. 
(2007) have presented time analysis of two XMM-Newton observations of 4U 2129+47 in quiescence obtained ∼ 22 days apart, finding evidence for a delay of ∼ 200 s for the mid-eclipse times measured from the X-ray observations. This delay can be explained as due to the orbital motion of the compact 4U 2129+47 binary around the center of mass of a triple system. In light of these new observations, coupled with the radial velocity measurements presented in this paper, we conclude that the triple hypothesis best explains all features of 4U 2129+47, and that the late F-type subgiant is the outer component of the triple system. Conclusions The X-ray source 4U 2129+47 has been a candidate hierarchical triple for some time. The optical counterpart, V1727 Cyg, has features inconsistent with those expected for a close binary companion, and previously displayed radial velocity variation over several weeks. The new spectroscopic measurements presented here confirm this long term shift of ∼ 40 km s −1 , which strongly suggests that 4U 2129+47 is indeed a triple. However, the outer period is yet to be observed directly. Doing so would allow for more sophisticated modeling of the system. It may also help understand the evolutionary history of 4U 2129+47, which as of now is somewhat elusive. Fig. 1. Summed spectra of V1727 Cyg compared to that of the template spectrum used, HD 222368. Fig. 2. New radial velocity data for V1727 Cyg (bottom) from 1996 and 1998 (see Table 1). Also plotted are sky line velocities as a zero point check (top plot). These velocities were calculated from the difference between the measured and published (Osterbrock et al. 1996) wavelengths of sky emission lines. Comparison star velocities are shown in the middle plot.
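The velocity measurements above rest on Fourier cross-correlation of spectra resampled onto a common logarithmic wavelength grid, where a constant bin shift corresponds to a constant velocity shift. The sketch below is a minimal NumPy illustration of that principle, not a reproduction of the IRAF FXCOR task: the spectra are placeholder arrays, the continuum normalization is deliberately crude, and the FXCOR error model σ = C/(1 + r) is only noted in a comment.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rv_from_cross_correlation(wave_obj, flux_obj, wave_tmpl, flux_tmpl, n_bins=4096):
    """Estimate a velocity shift by cross-correlating an object spectrum against a
    template on a common logarithmic wavelength grid (a lag of k bins maps to
    v ~ c * k * d ln(lambda))."""
    # Common log-wavelength grid over the overlap of the two spectra
    lo = max(wave_obj.min(), wave_tmpl.min())
    hi = min(wave_obj.max(), wave_tmpl.max())
    loglam = np.linspace(np.log(lo), np.log(hi), n_bins)
    dloglam = loglam[1] - loglam[0]

    # Resample onto the grid and remove the continuum (a simple mean division
    # stands in for the low-order spline fit used in the paper)
    f_obj = np.interp(np.exp(loglam), wave_obj, flux_obj)
    f_tmpl = np.interp(np.exp(loglam), wave_tmpl, flux_tmpl)
    f_obj = f_obj / f_obj.mean() - 1.0
    f_tmpl = f_tmpl / f_tmpl.mean() - 1.0

    # Cross-correlate and take the lag of the highest peak
    ccf = np.correlate(f_obj, f_tmpl, mode="full")
    lags = np.arange(-(n_bins - 1), n_bins)
    k_peak = lags[np.argmax(ccf)]

    # FXCOR additionally refines the peak and assigns sigma = C / (1 + r),
    # with r the correlation coefficient and C calibrated on comparison stars
    return C_KMS * k_peak * dloglam
```

In practice the correlation peak would be refined by fitting its core, and the sign convention of the lag should be checked against a spectrum with a known shift before interpreting the result.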
2014-10-01T00:00:00.000Z
2008-04-08T00:00:00.000
{ "year": 2008, "sha1": "07dd4343a0e41cd80f0d11c0aaed4602e1b6dc77", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2008/27/aa09516-08.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "07dd4343a0e41cd80f0d11c0aaed4602e1b6dc77", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
3598626
pes2o/s2orc
v3-fos-license
The addition of vildagliptin to metformin prevents the elevation of interleukin 1ß in patients with type 2 diabetes and coronary artery disease: a prospective, randomized, open-label study Background Patients with type 2 diabetes present with an accelerated atherosclerotic process. Animal evidence indicates that dipeptidyl peptidase-4 inhibitors (gliptins) have anti-inflammatory and anti-atherosclerotic effects, yet clinical data are scarcely available. Design and methods A prospective, randomized, open-label study was performed in 60 patients with coronary artery disease (CAD) and type 2 diabetes, who participated in a cardiac rehabilitation program. After a washout period of 3 weeks, patients were randomized in a 2:1 ratio to receive combined vildagliptin/metformin therapy (intervention group: n = 40) vs. metformin alone (control group: n = 20) for a total of 12 weeks. Blinded assessment of interleukin-1ß (IL-1ß, the primary endpoint), hemoglobin A1c (HbA1c), and high sensitivity C reactive protein (hsCRP), were performed at baseline and after 12 weeks. Results Mean age of study patients was 67 ± 9 years, 75% were males, and baseline HbA1c and inflammatory markers levels were similar between the two groups. At 12 weeks of follow up, levels of IL-1ß, hsCRP, and HbA1c were significantly lower in the intervention group as compared with the control group. There was a continuous elevation of IL-1ß among the control group, which was not observed in the intervention group (49 vs. 4%, respectively; p < 0.001). The hsCRP was lowered by 60% in the vildagliptin/metformin group vs. 23% in the metformin group (p < 0.01). Moreover, a significant relative reduction of the HbA1c was seen in the intervention group (7% reduction, p < 0.03). Conclusion The addition of vildagliptin to metformin treatment in patients with type 2 diabetes and CAD led to a significant suppression of the IL-1ß elevation during follow up. A significant relative reduction of hsCRP and HbA1c in the intervention group was also observed. Trial registration NCT01604213 Electronic supplementary material The online version of this article (doi:10.1186/s12933-017-0551-5) contains supplementary material, which is available to authorized users. Background Diabetes mellitus (DM) is one of the leading causes of death in the USA and Europe [1,2] Patients with ischemic heart disease (IHD) and diabetes are at a particularly high risk for the recurrence of cardiovascular events. Cardiovascular disease (CVD) risk is two-to four-times greater in individuals with DM as compared to individuals without DM [2][3][4]. It is well known that DM induces complex vascular changes, promoting accelerated atherosclerosis and hypercoagulability, as can be assessed indirectly by a number of markers. Principal perturbations include endothelial dysfunction, increased inflammatory plaque infiltration, adhesion molecule over-expression and adverse effects of circulating fatty acids and advanced glycosylation end products [5,6]. Consequently, diabetes is recognized as an independent risk factor for premature atherosclerosis, and for recurrent cardiovascular events in this population [7]. Much evidence supports a pivotal role for inflammation in all phases of atherosclerosis, from the initiation of the fatty streak to the culmination in acute coronary syndromes [8]. Inflammation also is involved in many of the metabolic abnormalities associated with diabetes, the most important of them being insulin resistance [3,9]. 
IL-1 is the "apical" pro-inflammatory mediator in both acute and chronic inflammation [10]. It plays a major role in the activation of innate immunity [11], induces the synthesis and expression of multiple secondary inflammatory mediators including IL-6, IL-18 and IL-33 [12,13], and is strongly associated with the development of atherosclerosis and impairment of cardiac function in diabetic patients [14]. High sensitivity C reactive protein, a well-known marker of inflammation, is produced by hepatocytes under regulatory control from circulating cytokines, in particular IL-1 and IL-6 [18]. Animal studies involving gliptins have suggested numerous beneficial anti-atherosclerotic effects, well beyond their primary role in lowering blood glucose [19,20]. In addition, anti-remodeling effects have also been proposed [21], although this feature has not been established in a clinical setting. Concomitant treatment with a gliptins and metformin may offer an attractive glycemic reduction modality with a synergistic mechanism of action while exerting additional vascular protective benefits. The effect of gliptins on the above mentioned parameters has not been studied in humans. Several pleiotropic beneficial effects of metformin beyond its glucose-lowering effect have been described previously [22,23]. This compound improves the angiogenic functions of endothelial progenitor cells via various signaling pathways [24][25][26][27] and presents clear anti-inflammatory effects [27,28], even irrespective of diabetes status [22,29]. Accordingly, in the present study we designed a prospective randomized clinical trial in order to assess possible incremental anti-inflammatory and athero-thrombotic protective effects of combined vildagliptin-metformin therapy vs. metformin alone in a clinical setting. Specifically, we focused on the effects of DDPi therapy on IL-1ß due to its important role as a pro-inflammatory signaling cytokine, a key factor in the pathogenesis and progression of atherosclerosis [30][31][32][33][34]. Study design and patients This was a 12-week, single-center, prospectively randomized, non-blinded, controlled study to provide evidence on the effects of vildagliptin on key biomarkers of athero-thrombosis and inflammation in a population of diabetic patients with coronary artery disease who undergo cardiac rehabilitation. Participants eligible for this trial included males and non-child-bearing-potential females over the age of 21 who have (a) documented coronary artery disease >30 days; and (b) evidence of suboptimal type II diabetes control on the basis of Hemoglobin A1c (HbA1c) ≥6.5%, despite the use of oral anti-diabetic mono-therapy. Standard of care secondary prevention for coronary artery disease background therapy included, but was not limited to, lipid lowering, anti-hypertensive, ß blockers, and antiplatelet therapy, as appropriate and in accordance to current guidelines. Patients were excluded if they had significant renal impairment (creatinine ≥1.4 mg\dL in female or ≥1.5 mg\dL in male patients), planned coronary intervention or planned surgical intervention (percutaneous coronary intervention or coronary artery bypass grafting), recent (<30 days) acute coronary syndrome (ACS), history of lactic acidosis, type I diabetes, current HbA1c >7.5%, or any significant hepatic, renal or cardiovascular medical conditions [35]. 
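As a compact restatement of the eligibility window above, the quantitative thresholds can be encoded as a simple check. This is an illustration only: the argument names are invented for this sketch, and the exclusions that rely on clinical judgment (significant hepatic, renal or cardiovascular conditions) are not modeled.

```python
def meets_eligibility(age_years, hba1c_percent, creatinine_mg_dl, sex,
                      documented_cad_over_30_days, days_since_acs,
                      planned_revascularization, type_1_diabetes,
                      history_of_lactic_acidosis):
    """Encodes the stated inclusion window (age > 21, documented CAD > 30 days,
    HbA1c 6.5-7.5% on oral mono-therapy) and the quantitative exclusions."""
    included = (
        age_years > 21
        and documented_cad_over_30_days
        and 6.5 <= hba1c_percent <= 7.5
    )
    renal_ok = creatinine_mg_dl < (1.4 if sex == "F" else 1.5)
    excluded = (
        not renal_ok
        or planned_revascularization
        or (days_since_acs is not None and days_since_acs < 30)
        or type_1_diabetes
        or history_of_lactic_acidosis
    )
    return included and not excluded
```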
We prospectively enrolled 60 patients who met the study's inclusion and exclusion criteria, and randomized them in a 2:1 ratio to either vildagliptin-metformin therapy (n = 40) or metformin therapy (n = 20). Study design and flow are presented in Fig. 1. The study consisted of a 1-2 weeks screening period, followed by a 2-week wash-out period based on the current medication regimen. Eligible patients (HbA1c ≥6.5 and ≤7.5%) who received current anti-diabetic mono-therapy (not including metformin or a gliptin), initially received substituted anti-diabetic treatment with metformin. Pre-specified substitution of oral anti-diabetic mono-therapy was permitted if clinically reasonable and safe. A washout period of 2 weeks took place prior to randomization. During this period, treatment with open label metformin was carried out with blood glucose monitored regularly. Initial dose was 850 mg once daily, with a dose increase to a maximum of 850 mg TID with a target of fasting glucose ≤130 mg/dL. For patients who were eligible for the study who received current treatment with metformin mono-therapy, a dose increase was also allowed to a maximum of 850 mg TID aiming for a target of fasting glucose ≤130 mg/dL. Eligible patients (HbA1c ≥6.5 and <9%) not on current anti-diabetic therapy initially received open-label metformin mono-therapy for a period of 2 weeks prior to randomization. During this period, treatment with metformin was carried out with blood glucose monitored regularly. Initial dose was 850 mg once daily, with a dose increase to a maximum of 850 mg TID to a target of fasting glucose <130 mg/dL. Study assessments and endpoints The primary endpoint was change in the inflammation marker IL-1ß from baseline to week 12 or the final visit. Secondary efficacy assessments included weight reduction as well as changes in HbA1c, hsCRP, IL-1 alpha, IL-6, IL-10, tumor necrosis factor alpha (TNF-alpha), monocyte chemoattractant protein-1 (MCP-1)monocyte subsets by FACS, and matrix metallo-proteinase 9 (MMP-9). Safety assessments included recording and monitoring of treatment-emergent adverse events; biochemistry and hematology laboratory test results; and vital signs. Follow-up visits All patients were invited for monthly follow-up visits with study coordinators. During these visits we monitored both clinical and adverse events, verified medication compliance and evaluated any hypoglycemic events. Changes in weight and in drug regimen were recorded. At the completion of the 3-month treatment, blood was drawn for laboratory testing as done at baseline. Blood samples did not contain identifying information and all tests were performed in a blinded fashion. Statistical methods and power calculation The proposed sample size was calculated to demonstrate a significant improvement in the intervention group compared to the control group with at least 90% power and a two-sided 5% type 1 error. Variables are expressed as mean ± standard deviation (SD) or median and inter quartile range (IQR). Categorical data are summarized as numbers and percentages. The demographic, clinical characteristics and laboratory values of patients at baseline according to the two pre-specified groups were compared with the use of the independent t test for normally distributed continuous variables, or nonparametric tests for covariates violating the normality assumption, and the Chi square test was used for comparison of categorical variables and for percentage changes. 
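The between-group comparisons described above can be illustrated with a short sketch. This is not the authors' analysis code; the data frame layout and column names are assumptions, and it only shows the generic pattern of a percent-change calculation followed by an independent t-test (with a nonparametric alternative) and a chi-square test.

```python
import pandas as pd
from scipy import stats

def percent_change(baseline, week12):
    """Percent change from baseline to week 12 for a biomarker."""
    return 100.0 * (week12 - baseline) / baseline

# df is assumed to hold one row per patient with hypothetical columns:
#   group (vildagliptin_metformin / metformin), il1b_base, il1b_w12, prior_mi (0/1)
def compare_groups(df):
    df = df.assign(il1b_pct=percent_change(df["il1b_base"], df["il1b_w12"]))
    a = df.loc[df["group"] == "vildagliptin_metformin", "il1b_pct"]
    b = df.loc[df["group"] == "metformin", "il1b_pct"]

    # Independent t-test for (approximately) normal continuous variables;
    # Mann-Whitney U as the nonparametric alternative if normality is violated
    t_stat, p_t = stats.ttest_ind(a, b)
    u_stat, p_u = stats.mannwhitneyu(a, b)

    # Chi-square test for a categorical covariate, e.g. prior MI by treatment group
    table = pd.crosstab(df["group"], df["prior_mi"])
    chi2, p_chi, _, _ = stats.chi2_contingency(table)

    return {"t_test": (t_stat, p_t), "mann_whitney": (u_stat, p_u), "chi_square": (chi2, p_chi)}
```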
In order to account for the possible effect of baseline parameters such as LDL levels and additional characteristics, we divided the continuous variable "percent change in IL-1ß" into three equal percentiles on scanned cases with roughly the same number of observations in each group. The lowest tertile represented the smallest increase. We used binary logistic regression modeling to assess the independent effect of vildagliptin (vs. metformin only) on the likelihood of IL-1ß changes beyond the lowest tertile (changes greater than recorded in the lowest tertile). The following covariates were introduced along with the vildagliptin vs. metformin only group: age, gender, serum creatinine, hypertension, heart failure, previous MI or past cerebrovascular accident. Statistical significance was accepted for a two-sided p < 0.05. The statistical analysis was performed with IBM SPSS version 20.0 (Chicago, IL, USA) and SAS version 9.2 (SAS institute Inc.). Results The disposition of patients from screening to study endpoint is depicted in Fig. 1. Of the 60 patients that were included in the study, 40 were randomized to the intervention metformin-vildagliptin group and 20 to the control metformin group. The percentage of randomized patients who discontinued the study was overall low yet somewhat higher in the vildagliptin group (5 vs. 9%; p = 0.13; respectively), mainly due to loss to follow-up (Fig. 1). The demographic and baseline characteristics of the randomized patients were generally similar between the treatment groups ( Table 1). The mean age was 67 ± 9 years, 75% male, and 61% had previous myocardial infarction. The only statistically significant difference between the two groups was that patients in the vildagliptin-metformin group had lower triglycerides levels compared to the metformin group (124 ± 41 vs. 176 ± 95; p < 0.001; respectively). Approximately three quarters of the patients in both groups were already treated with metformin prior to enrollment in the trial, with a mean dose of 1250 mg/day. The remaining 24% patients were started on metformin treatment for 3 weeks wash out period. All patients received 25 or 50 mg/daily dose of vildagliptin added to their regimen (based on their HbA1c). Figure 2a shows the distribution of the basal IL-1ß levels. There were no statistically significant differences in the basal values (mean of 35 pg/mL in the vildagliptin-metformin group vs. mean of 37 pg/mL in the metformin only group; p value = 0.58). Following 12 weeks of treatment, the levels IL-1ß were significantly greater in the metformin group than the combined group (44 vs. 34 pg/mL; p-value < 0.01, respectively; Fig. 2a). Efficacy and safety Primary end point: IL-1ß Additionally, Fig. 2b shows the percent change in IL-1ß levels following the three months of treatment. During the 12 weeks of follow up, an increase of 49% was observed in the metformin only group compared to 4% change in the vildagliptin/metformin group (p < 0.001). Consistently, multivariate binary logistic regression showed that vildagliptin treatment was independently associated with a 79% (p = 0.01) lower likelihood of an increase above the lower tertile of percent change in IL-1ß as compared to metformin-only therapy [OR 0.21 (95% CI 0.04-0.92);p = 0.01]. Secondary end points A significant lowering of hsCRP levels was seen among the vildagliptin-metformin group. The hsCRP was lowered by 60% after the initiation of vildagliptin, as compared to only 23% lowering in the metformin group; p > 0.01 for the comparison (Fig. 
3). It is to mention, that three patients (two in the vildagliptin-metformin group and one in the metformin group were excluded, secondary to extreme high levels (more than 40 mg/dL, or more than 250% fold change) either at baseline or at follow up, indicating another concomitant disease such as infection, malignancy, etc. The addition of vildagliptin resulted in a significant absolute reduction of HbA1c by 0.37% (7% percent change from baseline), compared with a smaller non-significant absolute reduction of 0.28% (2% percent change) in the metformin only group (Fig. 4). Furthermore, a trend for lower results was also seen after the addition of vildagliptin in the other markers. Please see Additional file 1: Table S1, Additional file 2: Figure S1, Additional file 3: Figure S2. Compliance and safety The overall safety and tolerability of the addition of vildagliptin was very good, as no incidence of drug related adverse events were reported in both treatment groups, and no discontinuations were reported in both groups, except a 5-day discontinuation in the control intervention arm (case of gastroenteritis). Additionally, 1 patient had an episode of atrial fibrillation and 2 other patients visited the emergency department for atypical chest pain, and were discharged home after an acute coronary syndrome was ruled out. A total of 4 patients did not complete the study (1 in the control group and 3 in the intervention arm), 1 opted not to participate prior to randomization, 1 left the country, and the other 2 withdrew due to non-medical reasons after 1 month of treatment. Discussion Our main finding in this study was that vildagliptin 50 mg bid added as an OAD to metformin 850-2550 mg prevented the elevation of IL-1ß during the study period, whereas a 49% elevation was observed in the metforminonly group. Patients with diabetes have a continuous increase in IL-1ß since high concentrations of glucose stimulate IL-1ß production from the pancreatic ß cell itself, implicating a role for IL-1ß in type 2 diabetes. Moreover, the high levels of free fatty acids act together with glucose to stimulate IL-1ß production [36,37]. In a randomized, placebo-controlled study of anakinra (IL-1 inhibitor), gene expression for IL-1ß was >100-fold higher in ß cells from patients with type 2 diabetes than from patients without. Subsequently, it was shown that patients who responded to Anakinra used 66% less insulin to obtain the same glycemic control. This observation suggests the functional restoration and partial regeneration of ß cells and the pivotal role of IL-1 [38]. Furthermore, interleukin-1 receptor antagonists were found to improve endothelial dysfunction in diabetic rats [39]. Additionally, diabetics have elevated levels of oxidized LDL (ox-LDL) in their macrophages, which further promotes the secretion of IL-1ß thus contributing to the positive feedback and new synthesis of IL-1ß. In several previous studies, DPP-4 inhibitors were found to repress this elevation; however, these were only animal studies [32][33][34] and to our knowledge we are the first to demonstrate these findings in stable diabetics patients with CAD. Interleukin-1ß induces the synthesis and expression of numerous secondary inflammatory mediators and also induces its own production and processing, representing a key step in the pathogenesis of many auto-inflammatory diseases and the sustained increase of IL-1ß [10,32]. 
Therefore, patients with both diabetes and atherosclerosis are especially prone to high levels and persistent increase in IL-1ß [15]. In our study, the addition of vildagliptin appears to inhibit this increase in IL-1ß and led to stabilization of the IL-1ß levels through the follow up period. As already mentioned, there is a link between diabetes and atherogenesis which may be related to the high circulating levels of ox-LDL and AGEs, both of which induce endothelial dysfunction and thus inflammation [40]. Manica-Cattani et al. [17] have shown that macrophages treated with ox-LDL generate a number of cytokines, including IL-1ß. Liu et al. [16] have also shown that ox-LDL induces IL-1ß secretion promoting foam cell formation leading to atherosclerosis. It is well-established that inflammation plays a major role in atherogenesis from a number of perspectives [41]. Inflammatory cytokines, including IL-1ß, induce a number of alterations in key steps leading to vascular injury, such as endothelial dysfunction, thrombosis and apoptosis. This inflammatory response is related mainly to the activation of the immune system, both the innate and the acquired, via these inflammatory cytokines [42]. Interleukin-1β (IL-1β), the main active form of the IL-1, is a prototypic multifunctional cytokine which plays a significant role in promoting inflammation, with [43]. Recent studies in the field have shown that DPP-4 inhibitors block the catabolism of GLP-1R agonists, leading to toll-like receptor 4 (TLR4) activation and decreasing PKC activity [15,34,44]. Although the mechanism is still not known, DPP-4 inhibitors repress the expression of TLR4 and lead to decreased activation of PKC, leading to the suppression of the overproduction of IL-1ß found in ox-LDL treated human macrophages [15,16]. Our study supports the observations of Yao-Dai et al. [15] showing a repression of IL-1ß via GLP-1 receptor inactivation in macrophages exposed to DPP-4 inhibitors. However, it should be noted that these prior studies were performed in vitro, with direct injection of DPP-4 into the macrophages. To the best of our knowledge, this is the first clinical study to demonstrate these findings in patients with therapeutic doses of the DPP-4 inhibitor vildagliptin. Moreover, the patients in our study were well treated with statins, ß-blockers, metformin, and were actively undergoing cardiac rehabilitation and nutritional consultations. Despite all these protective effects, the addition of vildagliptin significantly reduced the expression of IL-1ß in these patients making our results even more robust. Among all other inflammatory markers (except hsCRP), there was an increase from baseline levels through the follow up period. The use of DPP4 was associated with a trend for better suppression of these elevations, despite almost normal levels at baseline. We believe the effect of vildagliptin did not reach significance because of the cohort size and relatively short follow-up period. Although it failed to achieve significance, the change in hsCRP levels among the vildagliptin group was at least double as much as the change in the metformin-only group (50% decrease vs. 20% decrease; p = 0.13; respectively). We believe there is a strong correlation, but we failed to reach significance because of the above mentioned limitations of the study. 
Moreover, our patients are well treated stable patients, who participate regularly in our cardiac rehabilitation institute, with monitored follow up, and frequent physician, dietitian and physiologist interactions. Furthermore, they received potent statins and additional secondary prevention measures according to the latest national guidelines. Conclusion Compared to metformin only, the addition of vildagliptin led to a significant suppression of the IL-1ß elevation in patients with established CAD receiving an optimal secondary prevention regimen. A significant relative reduction of hsCRP and HbA1c in the intervention group also was observed. As IL-1ß is a key regulator in the inflammation and atherosclerotic process, this effect could be possibly associated with improved clinical outcomes. Larger studies with longer follow-up periods are necessary to further explore these findings. Authors' contributions RK and IG designed the study and developed the methodology. AY and RK collected the data, interpreted the patient data, performed the laboratory collecting, analyzed the data and wrote the manuscript. AT and IG, analyzed and interpreted the patient data, revised the manuscript, and contributed to the writing. DE, RG and EZF helped with the collecting of the data. JL, NS helped with the laboratory analysis. All authors read and approved the final manuscript. 1 The Leviev Heart Center, Sheba Medical Center, Tel Hashomer, Sheba Road 2, 52620 Ramat Gan, Israel. 2 Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel. 3 Cardiovascular Diabetology Research Foundation, Holon, Israel. 4 Heart Research Follow-up Program, University of Rochester, Rochester, NY, USA.
2018-04-03T05:45:01.095Z
2017-05-22T00:00:00.000
{ "year": 2017, "sha1": "19cf077e0004c400195c10206147a842d7aabaff", "oa_license": "CCBY", "oa_url": "https://cardiab.biomedcentral.com/track/pdf/10.1186/s12933-017-0551-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "19cf077e0004c400195c10206147a842d7aabaff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
89839749
pes2o/s2orc
v3-fos-license
Physiology response of fourth generation saline resistant soybean (Glycine max (L.) Merrill) with application of several types of antioxidants Antioxidant applications are expected to reduce the adverse effects of soil salinity. This research was conducted in a plastic house, in the Plant Tissue Laboratory, Faculty of Agriculture, and the Plant Physiology Laboratory, Faculty of Mathematics and Natural Sciences, Universitas Sumatera Utara, Medan, as well as at the Research Centers and Industry Standardization, Medan, from July to December 2016. The objective of the research was to determine the effect of various antioxidant treatments at different concentrations (control; ascorbic acid 250, 500 and 750 ppm; salicylic acid 250, 500 and 750 ppm; α-tocopherol 250, 500 and 750 ppm) on the physiology of fourth-generation soybean under saline conditions (electrical conductivity 5-6 dS/m). The results of this research showed that antioxidant type and concentration did not significantly affect the physiology of fourth-generation soybean. Descriptively, the highest average superoxide dismutase and peroxide dismutase activities were observed with ascorbic acid 250 ppm. The highest average ascorbate peroxidase activity was observed with α-tocopherol 750 ppm. The highest average carotenoid content was observed with ascorbic acid 500 ppm. The highest average chlorophyll content was observed with α-tocopherol 250 ppm. The highest average K/Na ratio was observed with salicylic acid 250 ppm. Introduction Soybean is a leguminous food crop used as a source of energy and protein [1]. Soybean production can be increased through intensification and extensification. Intensification involves planting high-yielding varieties and applying an optimal package of technology, whereas extensification involves opening new agricultural areas [2]. Saline soil is one of the largely untapped land resources for cultivation, because ion toxicity and increased osmotic pressure at the roots disrupt plant growth [3]. One of the causes of plant damage under stress conditions is oxidative stress caused by the accumulation of reactive oxygen species (ROS) such as oxygen (O 2 ), hydrogen peroxide (H 2 O 2 ), superoxide (O 2 ¯) and hydroxyl radicals (OH¯); inhibition of photosynthesis through stomatal closure causes oxidative damage to the photosynthetic apparatus [4]. Some of the antioxidants that can be applied to plants are ascorbic acid, salicylic acid and α-tocopherol. Application of ascorbic acid at 500 ppm can help the morphology and physiology of soybean so that it grows and produces better in the soil [5]. Salicylic acid applied to soybean in saline soils can increase chlorophyll and superoxide dismutase (SOD) [6]. The application of α-tocopherol may increase the activity of wheat leaf enzymes in saline soils [7]. The objective of the research was to determine the effect of various antioxidant treatments at different concentrations on the physiology of fourth-generation soybean (Glycine max (L.) Merrill) under saline conditions (EC 5-6 dS/m). Materials and methods The field experiment was conducted from July to December 2016 in a plastic house at the Faculty of Agriculture, Universitas Sumatera Utara. The experiment was arranged in a non-factorial randomized block design with ten treatments (control; ascorbic acid 250, 500 and 750 ppm; salicylic acid 250, 500 and 750 ppm; α-tocopherol 250, 500 and 750 ppm) with four replications for each treatment. 
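For the non-factorial randomized block design just described (ten antioxidant treatments in four replications), the F-test reported below in the methods can be set up as in the following sketch. The tidy data layout and column names are hypothetical, and the orthogonal contrasts that follow the F-test would be specified on top of the same fitted model and are omitted here.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to have one row per experimental unit with hypothetical columns:
#   sod        -> measured response (e.g. superoxide dismutase activity)
#   treatment  -> one of the ten antioxidant treatments (control, AA250, ..., TOC750)
#   block      -> replication/block number (1..4)
def rcbd_anova(df: pd.DataFrame) -> pd.DataFrame:
    """F-test for a non-factorial randomized complete block design:
    response modeled as a treatment effect plus a block effect."""
    model = smf.ols("sod ~ C(treatment) + C(block)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)  # F statistics and p-values, judged at the 5% level
```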
The research started from land preparation, planting, application of antioxidant, maintenance, fertilizing and physiology parameters analysis. The antioxidants were applied at 2 weeks after planting (stadium V 1 ) to pod filling period (stadium R 3 ), with intervals of 1 week. To determine the volume of spraying, every week spray volume calibration was done on plant leaves by using hand sprayer. The application time was at 07.00 AM. The antioxidants solvent was dissolved by stirrer with aquades according to predetermined concentrations. The observation on SOD, POD, APX, chlorophyll content, carotenoid content and ratio of K/Na were measured when the plants at stadium V 4 . The physiology parameters analysis include SOD, peroxide dismutase (POD), ascorbate peroxide (APX) and carotenoid content was held in Plant Tissue Laboratory Faculty of Agriculture, Universitas Sumatera Utara. Chlorophyll content observation was held in Plant Physiology Laboratory Faculty of Mathematic and Natural Science, Universitas Sumatera Utara. Ratio of K/Na observation was held in Research Centers and Industry Standardization, Medan. The data were analyzed statistically using F-test and then following by orthogonal contrast test at 5 % level. Results and Discuss Data presented in Table 1 shows that application of several types of antioxidants affected not significantly increased SOD, POD and APX compared with untreated plants. tocopherol for membrane protection; as well as electron donors for APX activities [8]. Ascorbic acid has many benefits to plant physiology in protein synthesis, as an antioxidant, enzyme cofactor and as a signal cell modulator in a variety of important physiological processes, including cell wall biosynthesis, secondary metabolites and phytohormone, stress physiology, photoprotection, cell division and growth [9]. Antioxidant treatment had no significant effect on POD enzyme activity. The application of salicylic acid in P. sativum in saline soil did not have a significant effect on POD enzyme activity. This is caused by the hydrogen peroxide (H 2 O 2 ) produced by plants not only dispart by peroxidase enzyme but also catalase enzyme (CAT) and APX [10]. In APX observation, α-tocopherol 750 ppm can increase the activity of APX compared with other antioxidants. Decrease in APX activity occurs on the antioxidant salicylic acid 750 ppm. The decrease in APX activity is relate to the abundance of H 2 O 2 accumulation as substrate enzyme APX, so the plant is unable to reduce the negative effects caused by H 2 O 2 . The APX enzyme uses ascorbic acid as electron receptors to reduce H 2 O 2 into water and monodehydroascorbate (MDHA) [11]. On observation of chlorophyll content, the application of α-tocopherol 250 ppm was more effective in increasing total chlorophyll compared with α-tocopherol 750 ppm. Decrease in chlorophyll in the saline soil occurs due to disruption of chlorophyll formation in chloroplasts. Growth of plants in environments with high salt levels causes plants to hyperosmotic effects. As a result, interference with membrane function, metabolic toxicity, disturbance of the photosynthesis process, may even lead to crop death [12]. Salinity in soybeans can cause a decrease in green color on the leaves. α-tocopherol in resolve the harmful effects of salt stress relate with the stability and protection of photosynthetic pigments from oxidative damage. α-tocopherol 250 ppm is more effective in increasing the amount of chlorophyll in soybeans. 
Increased chlorophyll indicates more active photosynthetic activity and will increase crop production. Salinity through increased osmotic pressure causes reduction of water uptake and metabolic disorders and physiological processes is under its influence. This causes decreased production by decreasing the amount of leaf chlorophyll [13]. In the observation of carotenoids, ascorbic acid 500 ppm can increase the physiology of soybean to salinity with high carotenoid content. The effect of ascorbic acid on reducing the harmful effects of salt stress is thought to result from activating several enzyme reactions in the metabolic process. Ascorbic acid can trigger the action of enzymes in the metabolism process of plants which will further increase the rate of photosynthesis where if the process of photosynthesis increases then the resulting assimilate is sufficient to be distributed to the seeds so the quality of the beans produced also increases [11]. In observation of ratio of K/Na, salicylic acid 250 ppm can increase ratio of K/Na. Whereas, low ratio of K/Na indicates that antioxidant treatment has not been effective in improving soy physiology in saline soils. The physiology of soybean in saline soils can be seen from decreasing salt levels through the soil's ability to exchange Na ions to K. High salt levels in the soil affect plant growth through four mechanisms i.e. (1) elevated levels of salt cause osmotic stress, (2) inhibition in K + absorption which is the main nutrient of the plant, (3) Na + ions at elevated levels are toxic to cytosolic enzymes and (4) high salt concentration stimulates oxidative stress and cell death [14]. The soil's ability to convert K ions to Na will decrease the Na content to the plants. Physiological responses of soy genotype to salinity stress can be (1) prevention of ion transfer from roots to other parts of the plant, (2) not accumulate many salts in leaves and stems, and (3) have better osmotic adjustability in plant cells. The physiology of soybeans on salinity is also related to the regulation of stable moisture content in the canopy and the accumulation of soluble saccharides, dissolved proteins, amino acids, and K + and Ca + ions for osmotic adjustment [15]. Conclusions The results of this research showed that the treatment of antioxidant type and concentration affected not significantly to physiology of fourth generation soybean. The best ascorbic acid concentration was obtained at 250 ppm to improve soybean physiology in saline soil.
2019-04-02T13:13:34.706Z
2018-02-01T00:00:00.000
{ "year": 2018, "sha1": "931a6bfb1051b2152d5650539e90f92764866ce9", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/122/1/012068/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a2aaaa6e330d3d25205a87cda71daf7898fca1a5", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics", "Biology" ] }
259755085
pes2o/s2orc
v3-fos-license
The Influence of Water in the Vapor-Assisted Conversion Synthesis of UiO-67 MOF Thin Films Water is known to play an important role in the crystallization and stability of Zr-based metal-organic frameworks (MOFs). This work investigates its effect on the vapor-assisted conversion (VAC) synthesis of UiO-67 MOF thin films on Au-coated Si substrates. We demonstrate the equilibration processes taking place during the VAC procedure, confirming the gradual equilibration of all solutions upon heating. The presence of water affects the vapor phase composition but does not significantly impact the acetic acid equilibration rate. However, the preparation of UiO-67 thin films by VAC is highly sensitive to the water content in the reaction. Some water is required for the formation of the zirconium clusters, but excessive water in the reaction vial yields poorly crystalline materials. Atmospheric water that is taken up by the vapor source can be sufficient to reduce crystallinity dramatically. This complication can be partially overcome by increasing the amount of acetic acid in the vapor source. Introduction Metal-organic frameworks (MOFs) have established themselves as candidate materials in a number of fields ranging from bio- and biomedical applications, [1] capture-, sieving- and storage materials, [2] catalysis, [3] and electronics [4] to sensor technology. [5] MOFs consist of repeating inorganic nodes connected by organic linkers resulting in a material offering a well-defined structure and chemical composition, high crystallinity and porosity, and a large internal surface area. The possibility to grow MOFs as thin films makes them particularly interesting for applications that require functionalized surfaces, such as coatings, membranes, optics, sensing, or electronics. [6] Incorporating catalysts in a MOF thin film grown on a conductive surface can result in highly efficient electrode materials. [7] For such systems to live up to their potential, control over film thickness and crystal orientation is highly relevant. [8,6c,9] In 2018, Virmani et al. [10] reported an alternative method that was coined vapor-assisted conversion (VAC), and that gave access to thin films of certain MOFs that are notoriously more difficult to obtain by the traditional methods. For example, VAC allowed on-surface growth of highly oriented crystalline UiO-type (UiO = Universitetet i Oslo) thin films of controlled thickness. An overview of the VAC setup and the involved processes is shown in Figure 1. Initially, a precursor solution containing the linker, the metal source, and a modulator in N,N-dimethylformamide (DMF) is placed on a substrate that is elevated from the bottom of a sealed glass container. A vapor source that contains a solution of modulator in DMF is added to the bottom of the glass vessel. Upon heating of the vessel, the precursor droplet is exposed to vapor from the partially evaporated vapor source, leading to the conversion of the precursors into a crystalline MOF thin film. 
[10] In the original paper, it was found that the composition of the vapor source and the resulting diffusion of acetic acid from vapor source to precursor droplet was crucial for successful MOF synthesis. Additionally, it was demonstrated that the variation of the modulator and precursor concentration can influence crystallite orientation or size. A link between crystallization rate, crystallinity, and orientation of the resulting film was suspected, with homogeneous nucleation in solution and heterogeneous nucleation at the surface being coexisting processes. [10] While the above-mentioned parameters were investigated very carefully, the importance of water was not addressed in the original paper. This is crucial, as shown in a couple of recent contributions that investigate the role of water in the crystallization kinetics of UiO-type MOFs. [11] The question of water stability of UiO-67 MOFs had been a matter of discussion as well, with no easy agreement within the field. [12] Water is required for the formation of the Zr 6 O 4 (OH) 4 cluster from the salt precursor, [11a] but according to Firth et al. [13] it can also lead to defects and different phases in the resulting MOF. In addition, we suspected that a better understanding of the role of water may be central to overcoming problems with irreproducibility that we encountered in some of our own syntheses (see Supporting Information for details). Results and Discussion The VAC system is more complex than traditional solvothermal MOF syntheses, with the three-component system consisting of DMF, acetic acid (AcOH) and water, all present in different ratios in the vapor source, the precursor droplet, and the vapor phase. The two liquid phases are in contact through the vapor and will slowly equilibrate over the course of the experiment. This means that in the precursor droplet, where the MOF is grown, the concentrations of all components change over the course of the synthesis. Unless stated otherwise, all VAC experiments described herein were conducted according to the following general VAC protocol, in analogy to the original report. [10] The precursor solution contained metal precursor and linker in 2.2 mM concentrations, dissolved in a mixture of AcOH and DMF (v/v 1:40). The vapor source contained a higher AcOH concentration (v/v AcOH:DMF 1:5.25). The volume of the precursor droplet was 60 μL, while that of the vapor source was 1 mL. The vessel was heated at 100 °C for three hours, after which the crystallinity of formed films was evaluated by powder X-ray diffraction (PXRD). Further characterization of selected films was performed by scanning electron microscopy and X-ray photoelectron spectroscopy, revealing a uniform film of approximately 100 nm thickness (Supporting Information Figure S1 and S2). To demonstrate the diffusion of AcOH to the precursor droplet, the amount of AcOH in the precursor droplet before and after VAC synthesis was compared by 1 H NMR spectroscopy (Figure 2, see Supporting Information for details on the NMR sampling protocol). 
The 1 H NMR spectrum in Figure 2 a) shows the composition of the precursor droplet as dropcasted, consisting of DMF with small amounts of AcOH (v/v AcOH:DMF 1:40).The water peak seen in the spectrum comes from contaminant water and is shifted downfield in the presence of AcOH.After performing the VAC synthesis, which involves heating the sealed reaction vial in the presence of the vapor source with a higher AcOH concentration (v/v AcOH:DMF 1:5.25), the droplet composition was measured again (Figure 2b).The 1 H NMR spectrum shows an increase of AcOH in the precursor droplet after the reaction.PXRD measurements of the resulting material confirmed that a crystalline MOF film was formed (Figure S3).In Figure 2c), the same experiment was conducted but in the absence of AcOH in the vapor source.The 1 H NMR spectrum of the precursor droplet after heating shows the disappearance of the AcOH peak.PXRD measurements revealed that the resulting material was poorly crystalline (Figure S3).This is in agreement with previous data showing that the reaction is sensitive to AcOH concentration. [10]The results show how equilibration via vapor dictates the AcOH concentration in the precursor droplet and underline the importance of acetic acid as a modulator. Besides the different AcOH content, the vapor source might also contain a different amount of water than the precursor droplet.Both substances will equilibrate at their own rate, while the relative amount of each substance present in the solutions can influence both rates, leading to a complex three-compound vapor mixture.The vapor composition is dominated by the vapor source due to the much higher volume.If water is present in the vapor source we expect the vapor composition to shift. How the presence of water in the vapor source influences the composition of the gas phase during a VAC synthesis was assessed by quantitative variable temperature 13 C NMR spectroscopy at 90 °C (see Supporting Information for full details).A vapor source was prepared according to the general VAC protocol, water was added (v/v 0, 0.01, and 0.1 of H 2 O in AcOH: DMF 1:5.25), and the resulting solutions were transferred to J. Young NMR tubes. 13C NMR spectra were collected at 25 °C, at 90 °C after equilibration for one hour, and again at 25 °C.The methyl signals of DMF and AcOH were integrated and the ratio of AcOH to DMF in the liquid phase was calculated (Figure 3). Figure 3 shows the AcOH to DMF ratio in the different vapor sources before and after heating.As expected, the ratios are the same irrespective of the water content, and remain the same after heating, confirming that the system is closed.When heating the system to 90 °C, a decrease of the AcOH to DMF ratio in the liquid phase was observed for all experiments.The decreased AcOH to DMF ratio upon heating is consistent with the lower boiling point of AcOH, which will be preferentially vaporized over DMF.Addition of increasing amounts of water to the vapor source results in a smaller decrease of the AcOH to DMF ratio, which is indirect evidence that less AcOH evaporates from the vapor source when water is present.This would lead to an overall lower amount of AcOH in the vapor of the reaction vial.If this water induced reduction of AcOH in the vapor phase leads to a lack of AcOH in the precursor droplet during synthesis, the crystallinity of the resulting film could be negatively affected. 
To gain more insight into how the addition of water to the vapor source influences the equilibration rate of AcOH in the precursor droplet, the 1H NMR sampling protocol from Figure 2 was repeated with one dry (v/v AcOH:DMF 1:5.25) and one wet vapor source (v/v H2O:AcOH:DMF 1:1:5.25). Samples of the precursor droplet were taken at different times over the course of the experiments to monitor the transport of AcOH from the vapor source to the precursor droplet. Figure 4 shows that for both experiments, the amount of AcOH increased as the heating continued, demonstrating again the equilibration of the AcOH concentration in the vapor source and precursor droplet. Remarkably, the presence of water in the vapor source caused no appreciable differences in the [AcOH]:[DMF] ratios as a function of time. These findings suggest that water does not significantly disrupt AcOH equilibration in the reaction vial.

To further investigate the influence of water, we decided to look at the influence of the water content on MOF synthesis, taking into consideration the possibility of differences resulting from the spatial distribution of water, as it can be present either in the vapor source or the precursor solution at the start of the VAC experiment. To assay the spatial role of water, samples were prepared inside a drybox, following the general VAC protocol, with varying amounts of water in the precursor solution (v/v 0, 0.1, and 1 of H2O in AcOH:DMF 1:40) and vapor source (v/v 0, 0.01, 0.1, and 0.2 of H2O in AcOH:DMF 1:5.25). Crystallinity of the resulting materials was determined by PXRD (Figure 5).

In the absence of water in both vapor source and precursor droplet, no MOF was formed (Figure 5, top/left). This is not surprising, as the Zr6O4(OH)4 cluster that constitutes the secondary building unit (SBU) of UiO-type MOFs requires water for successful assembly. Thus, addition of water to the precursor solution (v/v H2O:AcOH 1:1; Figure 5, top/right) resulted in successful cluster assembly and the formation of crystalline thin films. Particularly noteworthy is the experiment with no water in the precursor droplet (Figure 5, second row/left), which still results in good MOF crystallinity, suggesting that water can also be supplied to the precursor droplet indirectly by including it initially in the vapor source (v/v H2O:AcOH 0.01:1). This also supports the transport of water from the vapor source to the precursor droplet during the VAC procedure. However, if the water concentration in the vapor source exceeds a certain threshold (Figure 5, rows 3 and 4), the crystallinity of the UiO-67 thin film is impaired. This trend was highly reproducible.

When calculating the total percentage of water in the reaction vial (Figure 5, red), the data suggest that the presence of up to 0.3% of total water content in the reaction vial enables the formation of a crystalline MOF thin film. Poorly to noncrystalline materials are obtained above 1.5% of water in the vial. The initially considered importance of the spatial distribution of water seems to result from the different volumes of precursor and vapor solution used in the VAC procedure. The much lower volume of precursor solution used in VAC makes the synthesis less susceptible to its water content, as opposed to the water content of the vapor source, which has a stronger influence on the system's total water content.
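For readers who want to reproduce the water bookkeeping behind Figure 5, a minimal sketch is given below. It assumes that a value such as "0.1 of H2O in AcOH:DMF 1:5.25" denotes volume parts water:AcOH:DMF = 0.1:1:5.25 and that mixing volumes are additive; both are our reading of the reported v/v ratios rather than statements taken from the original text.

```python
# Minimal sketch of the total-water bookkeeping: combine the 60 uL precursor
# droplet and the 1 mL vapor source into one overall v/v water percentage.

def water_fraction(parts_water, parts_acoh, parts_dmf):
    """v/v water fraction of a solution specified as volume parts."""
    return parts_water / (parts_water + parts_acoh + parts_dmf)

def total_water_percent(f_droplet, f_vapor, v_droplet_ul=60.0, v_vapor_ul=1000.0):
    """Overall v/v water percentage in the sealed reaction vial."""
    water_ul = f_droplet * v_droplet_ul + f_vapor * v_vapor_ul
    return 100.0 * water_ul / (v_droplet_ul + v_vapor_ul)

# Dry precursor droplet (AcOH:DMF 1:40), 0.01 parts water in the vapor source:
f_drop = water_fraction(0.0, 1.0, 40.0)
f_vap = water_fraction(0.01, 1.0, 5.25)
print(round(total_water_percent(f_drop, f_vap), 2))  # ~0.15 % total water
```

With these assumptions, the 0.1-parts-water vapor source lands at roughly 1.5% total water, consistent with the threshold above which crystallinity was lost.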
Having established a water content window for successful MOF film growth under controlled conditions, we set out to illustrate the implications of this finding in a real-world context. The question investigated was whether moisture brought into the experiment by atmospheric humidity can lead to poorly crystalline material. Two VAC experiments were set up using the same precursor and vapor solutions prepared inside a drybox. One reaction was conducted inside the drybox, while the other was set up under atmospheric conditions (19 °C, 94% relative humidity). PXRD measurements (Figure S4) revealed that the samples synthesized inside the drybox were crystalline, whereas the samples prepared outside the drybox were significantly less crystalline. These results confirm the possibility of a negative impact of atmospheric water on UiO-67 film growth using VAC. This would also explain the reproducibility issues encountered when attempting to grow highly crystalline UiO-67 thin films.

In order to make the VAC procedure synthetically more robust in the presence of atmospheric moisture, it was investigated whether an increased AcOH concentration in the vapor source could have a beneficial impact (Figure 6). Crystallinity was initially suppressed through the addition of water (v/v 0.1 of H2O in AcOH:DMF 1:5.25) to the vapor source. While keeping the amount of water constant, a series of experiments was conducted containing increasing amounts of AcOH in the vapor source.

The resulting PXRD patterns show a resurgence of the crystallinity when doubling the amount of AcOH in the vapor source. In some experiments, a new peak appears at 6.6°, aligning well with the <200> reflection of a simulated UiO-67 PXRD pattern, [14] suggesting a loss of preferred orientation. The experiments point to the possibility of alleviating the negative effect of atmospheric humidity by increasing the AcOH concentration, at the cost of preferred orientation. In addition, decreasing the volume of the vapor source was found to have a similarly positive effect (Figure S5), increasing the resilience of the system towards detrimental water content. Such measures are important in situations where the exclusion of atmospheric humidity is experimentally challenging.

The results discussed in this paper suggest that water is responsible for the loss of crystallinity when UiO-67 thin films are grown by VAC without careful control of the water content. While some water is required for the formation of the Zr6O4(OH)4 cluster, [11a] our experiments show that film formation is impaired when the water content rises above 1.5% of the total solvent volume. This could suggest an inhibition of the crystallization process or the formation of highly defective and poorly crystalline MOF films. According to Firth et al., [13] water can be used to create defects in UiO-type MOFs, or even lead to the formation of a different phase. Burtch et al. [15] explain that MOF defectivity correlates with water adsorption, meaning that a more defective MOF structure will have more water bound within its pores. Another paper demonstrates that water molecules remaining in the pores during and after the drying process will lead to pore collapse through capillary forces.
[16,12a] This paints a picture where a defect-free MOF is quite stable when exposed to moisture, whereas a defective MOF will be exponentially more susceptible to degradation by water. Once formed, the UiO-67 films show decent hydrolytic stability (Figure S6). UiO-67 thin films remain intact for several weeks when stored under dry DMF. When evaporated and dried, the crystallinity declines within days, especially if the film is not kept under argon (Figure S7).

Conclusions

To conclude, VAC seems to be a powerful and reliable method for the growth of highly oriented, crystalline thin films of UiO-type MOFs. As water is known to play an important role in the crystallization and stability of Zr MOFs, we decided to investigate its effect on the VAC synthesis of UiO-67 MOFs. Firstly, the equilibration processes taking place during the VAC procedure were demonstrated, confirming that the precursor droplet gradually equilibrates with the vapor source upon heating. Water affects the vapor phase composition but does not seem to significantly impact the AcOH equilibration rate. However, we found that the preparation of UiO-67 thin films by VAC is highly sensitive to the water content in the reaction. Some water is necessary for the formation of the zirconium clusters, but excessive water in the reaction vial results in very poorly crystalline materials. Experiments show that atmospheric water taken up by the vapor source can be sufficient to strongly reduce crystallinity. Increasing the amount of AcOH in the vapor source can reverse the negative effect of water but might lead to a loss of preferred orientation. Once highly crystalline films are formed, they are much less susceptible to water. We suggest that defect-free and properly activated UiO-67 is quite stable when exposed to moisture, whereas a defective material will be more susceptible to degradation by water. These findings should help other researchers when troubleshooting the synthesis of UiO-type MOFs using VAC or similar methods. The results reported herein will enable and accelerate future research based on UiO-type MOFs.

General VAC protocol

Solutions of ZrOCl2·xH2O (14.2 mg) in DMF (10 mL) and biphenyl-4,4'-dicarboxylic acid (10.7 mg, 0.044 mmol) in DMF (10 mL) were prepared and sonicated for 10 min. For the precursor solution, 1 mL of each solution was combined, and acetic acid (49 μL) was added to the mixture. The vapor source was prepared from acetic acid and DMF (v/v 1:5.25). Four glass Raschig rings were placed in a vial. Vapor source (1 mL) was added to the bottom of the vial, a substrate slide was placed on the Raschig rings, and precursor solution (60 μL) was pipetted onto the substrate. The reaction vial was sealed and placed in a block heater (100 °C). After 3 h the vial was removed and left to cool down before carefully rinsing the substrate with DMF and storing it in DMF.

1H NMR sampling protocol

NMR solvent was prepared by adding toluene to chloroform-D as an internal standard. The general VAC protocol was used to set up the reactions, but the composition of the precursor solution and vapor source were varied. After removing the vials from the block heater, 2.5 μL aliquots were taken from the remaining droplet on the gold surface and mixed into 0.6 mL of the previously prepared NMR solvent.
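As a rough cross-check of the quantities in the general VAC protocol above, the short script below recomputes the nominal precursor concentrations. The molar masses, and in particular the assumption that the zirconium salt is the octahydrate (ZrOCl2·8H2O), are ours and are not specified in the protocol.

```python
# Sanity check of the precursor concentrations in the general VAC protocol.
# Molar masses assumed here (not stated in the protocol):
MW_ZROCL2_8H2O = 322.25   # g/mol, assuming the octahydrate
MW_BPDC = 242.23          # g/mol, biphenyl-4,4'-dicarboxylic acid

def molarity_mM(mass_mg, mw_g_mol, volume_ml):
    """Concentration in mmol/L for a mass dissolved in a given volume."""
    return (mass_mg / mw_g_mol) / (volume_ml / 1000.0)

c_zr_stock = molarity_mM(14.2, MW_ZROCL2_8H2O, 10.0)   # ~4.4 mM
c_bpdc_stock = molarity_mM(10.7, MW_BPDC, 10.0)        # ~4.4 mM

# Mixing 1 mL of each stock plus 49 uL AcOH roughly halves both concentrations:
dilution = 1.0 / (1.0 + 1.0 + 0.049)
print(round(c_zr_stock * dilution, 2), round(c_bpdc_stock * dilution, 2))
# -> about 2.15 mM each, in line with the ~2.2 mM stated in the protocol
```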
Figure 2. 1H NMR spectra showing precursor droplet composition before and after heating in the absence and presence of AcOH in the vapor source: a) precursor before heating (AcOH:DMF 1:40), b) after heating with AcOH in the vapor (AcOH:DMF 1:5.25), and c) precursor after heating without AcOH in the vapor (DMF only).

Figure 3. Measurement of the AcOH to DMF ratio in the vapor source by quantitative 13C NMR spectroscopy at 25 °C, after equilibration at 90 °C, and after cooling back to 25 °C. Vapor sources with different water content were compared (v/v 0, 0.01 and 0.1 of H2O in AcOH:DMF 1:5.25).

Figure 4. Ratio of AcOH to DMF in the precursor droplet after heating, with water (v/v H2O:AcOH:DMF 1:1:5.25) and without water (no H2O in AcOH:DMF 1:5.25) in the vapor source, measured by 1H NMR spectroscopy.

Figure 5. PXRD patterns of Au@Si substrates after attempting to grow a UiO-67 thin film using VAC with different ratios of water in the vapor source (v/v 0, 0.01, 0.1 and 0.2 of H2O in AcOH:DMF 1:5.25) and the precursor solution (v/v 0, 0.1 and 1 of H2O in AcOH:DMF 1:40). The observed peak corresponds to the UiO-67 <111> reflection. The final percentage of water (v/v) inside the reaction vial is given in red.

Figure 6. PXRD patterns of Au@Si substrates after attempting to grow a UiO-67 thin film using VAC. Experiments were conducted containing a constant amount of water and increasing concentrations of AcOH in the vapor source (v/v 0.1 of H2O in AcOH:DMF 1-2:5.25). The precursor solution remained unchanged (AcOH:DMF 1:40); additionally, a positive control with dry vapor (no H2O in AcOH:DMF 1:5.25) was performed.
Auditory sensation with affective agnosia: A prevalence of alexithymia among tinnitus patients

Objectives: The aim of the present study was to determine the prevalence and association of alexithymia, depression, and anxiety in patients affected by tinnitus.

Methods: The study was conducted among patients referred for audiometric evaluation for tinnitus. They were further evaluated with the Hospital Anxiety and Depression Scale, the Tinnitus Handicap Inventory, and the Toronto Alexithymia Scale. Prevalence was analyzed, the sample was categorized into high and low tinnitus handicap subgroups, and mean scores of alexithymia, anxiety, and depression were compared.

Results: A total of 70 patients (55.7% male and 44.3% female) with a mean age of 33.17 ± 12.24 years were finally analyzed. The most common tinnitus severity grade was severe (34.3%), followed by moderate (20%), catastrophic (18.6%), mild (17.1%), and slight (10%). The prevalence of alexithymia, anxiety, and depression among patients with tinnitus was found to be 65.7%, 37.1%, and 20%, respectively. The high tinnitus handicap group showed higher scores for total alexithymia, anxiety, and depression, as well as higher scores for difficulty describing and identifying emotions, but there was no difference for the subscale of externally oriented thinking.

Conclusions: The study found a prevalence of alexithymia, anxiety, and depression of 65.7%, 37.1%, and 20%, respectively, among patients with tinnitus, and problems with describing and identifying emotions are associated with higher tinnitus handicap.

INTRODUCTION

Tinnitus is a symptom of hearing sound sensation without external auditory stimulation, of various etiology. [1] Tinnitus may be associated with annoyance, concentration difficulties, distress, sleep problems, psychological disorders, anxiety, depression, and suicidal ideations. [2] Tinnitus is a common problem. [3] In a systematic review of all papers published over 35 years, McCormack et al. found an overall prevalence of 5.1%-42.7%. [4] Another systematic review reported 4.7%-46% in the general pediatric population and 23.5%-62.2% among children with hearing loss. [5] Gender-wise, men are more affected than women, [4] and menstrual cycle irregularity may also be a related factor of tinnitus in women. [6] Many associated factors, such as an abnormal tympanic membrane, have also been described.

Many psychological problems are associated with tinnitus, such as anxiety, depression, stress, and sleep problems. [8,9] The relationship between psychological problems and tinnitus is reciprocal, as studies reported that psychological distress is related to tinnitus, [10] whereas 48%-78% of patients with chronic tinnitus developed major depression. [11] Yet another study found a lifetime prevalence of major depression in 62% and current depression in 48% of their sample. [12] A study on a large sample of 51,574 persons from the general population found that persons with tinnitus scored significantly higher on anxiety and depression and lower on self-esteem and well-being than people without tinnitus. [13]

Alexithymia is a trait that comprises impairments in the perception of bodily states, their cognitive representation, and verbal communication; very recently, it has been conceptualized as "affective agnosia." [14] The concept of alexithymia has attained great relevance in psychological constructs such as emotional regulation and associated disorders. [15]
Numerous studies have shown that alexithymia is associated with a variety of medical and psychiatric disorders, including physical disorders such as hypertension, [16] substance use disorders, [17] eating disorders, [18] somatic symptoms and somatization disorders, [19] functional gastrointestinal disorders, [20] and depression. [21] Studies have also found a strong correlation between alexithymia and somatization in depressed patients. [22] The present study aimed to determine the prevalence of alexithymia, depression, and anxiety among patients affected by tinnitus.

METHODS

The study was conducted at the department of ear, nose, and throat (ENT) of a medical college and hospital. The study was approved by the institutional review board. The sample consisted of patients who visited the hospital for tinnitus and were referred for audiometric evaluation to the audiologist. Inclusion criteria were all patients with either unilateral or bilateral tinnitus, aged 18 years or above, consenting to participate in the study. Exclusion criteria were a condition too incapacitating to allow participation due to poor medical status, presence of vertigo, and gross language or communication barriers. Information on patient demographics, history of alcohol or substance use, past history, and family history was obtained from interviews with patients and accompanying persons. A detailed physical, ENT, and neurological examination was done to exclude any comorbid general medical condition. All detailed evaluation and data collection were done by a team of an audiologist and a clinical psychologist.

Sociodemographic data sheet

The sociodemographic data sheet included age, marital status, religion, community, education, and occupation. Clinical variables recorded were alcohol and drug use, history of epilepsy, and past history of medical or psychiatric illness.

The Hospital Anxiety and Depression Scale

This is a well-validated scale [23] to assess anxiety and depression among hospital-based patients. It consists of 14 questions, 7 scoring anxiety and 7 scoring depression. Patients were asked to read each question and place a tick against the reply that came closest to how they had been feeling that day. Each answer was scored 0, 1, 2, or 3. The possible range of scores on each subscale was, therefore, 0-21, with higher scores indicating greater levels of anxiety or depression. A score of 0-7 is considered normal, scores of 8-10 are borderline abnormal, and scores of 11-21 are abnormal. The sensitivity and specificity of both the Hospital Anxiety and Depression Scale (HADS) A and D subscales are approximately 0.80; the mean Cronbach's alpha for HADS-A is 0.83, and for HADS-D it is 0.82. [24]

The Tinnitus Handicap Inventory

The Tinnitus Handicap Inventory (THI) is a widely used self-administered test to determine the degree of distress suffered by the tinnitus patient. [25] It consists of 25 questions divided into 3 subgroups: functional (11 items), emotional (9 items), and catastrophic (5 items). It has high internal consistency and reliability, with a Cronbach's alpha coefficient of 0.88 and a high intraclass correlation coefficient (0.78-0.90). [26]

The Toronto Alexithymia Scale

The Toronto Alexithymia Scale (TAS) is one of the most commonly used measures of alexithymia, with good internal consistency (Cronbach's alpha = 0.81) and test-retest reliability (0.77, P < 0.01). This self-report scale consists of 20 items which are rated on a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). The total alexithymia score is the sum of responses to all 20 items; a score of 61 or greater suggests alexithymia, scores of 52-60 suggest possible alexithymia, and a score of 51 or less suggests non-alexithymia. The scale has three subscales: the first, consisting of five items (2, 4, 7, 12, and 17), measures difficulty describing feelings or emotions; the second, consisting of seven items (1, 3, 6, 9, 11, 13, and 14), measures difficulty identifying feelings or emotions; and the third, consisting of eight items (5, 8, 10, 15, 16, 18, 19, and 20), measures externally oriented (EO) thinking. [27]
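To make the TAS-20 scoring just described concrete, a minimal sketch is given below. It uses only the item groupings and cutoffs quoted in this section; reverse-keyed items are not handled, as they are not specified here, so the sketch should not be taken as a complete implementation of the published scale.

```python
# Minimal TAS-20 scoring sketch based on the cutoffs and item groupings above.
DESCRIBING = [2, 4, 7, 12, 17]               # difficulty describing feelings
IDENTIFYING = [1, 3, 6, 9, 11, 13, 14]       # difficulty identifying feelings
EXTERNAL = [5, 8, 10, 15, 16, 18, 19, 20]    # externally oriented thinking

def score_tas20(responses):
    """responses: dict mapping item number (1-20) to a Likert rating of 1-5."""
    total = sum(responses[i] for i in range(1, 21))
    if total >= 61:
        category = "alexithymia"
    elif total >= 52:
        category = "possible alexithymia"
    else:
        category = "non-alexithymia"
    subscales = {
        "describing": sum(responses[i] for i in DESCRIBING),
        "identifying": sum(responses[i] for i in IDENTIFYING),
        "external": sum(responses[i] for i in EXTERNAL),
    }
    return total, category, subscales

# Example: a uniform rating of 3 on every item gives a total of 60,
# which falls in the "possible alexithymia" range.
print(score_tas20({i: 3 for i in range(1, 21)}))
```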
Statistical analysis

The collected data of all patients were statistically analyzed using the Statistical Package for the Social Sciences (SPSS, Inc., Chicago, Illinois, USA) version 16.0. Data analysis included means and standard deviations of continuous variables for the total sample. Descriptive statistics included frequency and percentage of categorical variables. The Mann-Whitney U-test was used to determine whether differences in distribution existed between two groups of the sample. Statistically significant levels are reported for P ≤ 0.05; highly significant levels are P < 0.001.

Characteristics of the study sample

A total of 70 patients (55.7% male and 44.3% female) were included in the study. Table 1 summarizes the sample characteristics. The mean age of the group was 33.17 ± 12.24 years, and the mean years of education were 9.72 ± 5.61 years [Table 1]. Most of the sample was married (64.3%) and 35.7% were single. Hindu religion dominated the sample with 85.7%, followed by 12.9% Muslims, and only one subject was Christian. 48.6% were unemployed, 12.9% were in service, and 38.6% were self-employed.

Tinnitus severity and prevalence

The most common tinnitus severity grade as measured by the THI was severe (34.3%), followed by moderate (20%), catastrophic (18.6%), mild (17.1%), and slight (10%). The prevalence of alexithymia, anxiety, and depression among patients with tinnitus was 65.7%, 37.1%, and 20%, respectively [Table 2]. The distribution of the different THI severity grades and the presence of alexithymia as per the TAS-20 is shown in Table 3.

Comparison of alexithymia across high and low distress groups as measured by the Tinnitus Handicap Inventory

The mean score of the THI was 53.92 ± 2.46; hence, we categorized the patients based on the mean THI score of the sample: one group with a THI score of 54 and above and another group with a THI score below 54. The high THI scoring group consisted of 43 patients, and the low THI scoring group of 27 patients. Mean TAS total and subscale scores and HADS scores were compared between the groups using the Mann-Whitney U-test and are tabulated in Table 4. There was a significant difference (P = 0.000) between these two groups for alexithymia, i.e., the total TAS score. Among the subscales of the TAS, a significant difference was found for identifying (P = 0.000) and describing (P = 0.001) emotions, whereas there was no difference between these groups in the domain of EO thinking. Similarly, we found significantly higher depression and anxiety among the high THI scoring group as measured by the HADS [Table 4].
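The group comparison reported in Table 4 can be illustrated with a short sketch using SciPy's Mann-Whitney U-test; the score arrays below are placeholders, not the study data.

```python
# Illustrative comparison of TAS-20 totals between high-THI (>= 54) and
# low-THI (< 54) groups with the Mann-Whitney U-test.
from scipy.stats import mannwhitneyu

high_thi_tas = [68, 72, 61, 75, 66]   # placeholder TAS-20 totals, high-THI group
low_thi_tas = [48, 55, 42, 51, 47]    # placeholder TAS-20 totals, low-THI group

stat, p_value = mannwhitneyu(high_thi_tas, low_thi_tas, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")  # p <= 0.05 is treated as significant
```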
DISCUSSION

Our study reveals significantly high rates of alexithymia, anxiety, and depression in patients with tinnitus. Furthermore, as hypothesized, we found that the TAS-20 subscales assessing difficulty identifying and describing feelings are more closely associated with tinnitus than the EO thinking subscale. In accordance with our study, an earlier study addressed the association of tinnitus, depression, and alexithymia among elderly people and found a clear association between depression and subjectively annoying tinnitus. [28] However, another part of their findings contradicts ours, as they did not find a correlation of alexithymia with the severity of tinnitus, whereas we found significantly higher depression, anxiety, and alexithymia among high THI scoring patients as compared to low THI scoring patients. This difference may be attributed to the gross difference in age between the two study populations: the average age in our study was 33.17 years, whereas the earlier study included participants aged 70-85 years.

Although our sample was purposive, it indicates that men presented with tinnitus more frequently than women (55.7% males and 44.3% females), in agreement with the 63.17% males and 36.83% females reported by Salviati et al. [29] Our sample consisted of 55.7% male patients, which may partially contribute to the high prevalence of alexithymia, as male gender is known to be associated with alexithymia. [30] We found a 65.7% point prevalence of alexithymia among patients suffering from tinnitus; this is much higher than the 17% usually reported for the general population. [30] Yet another study, which examined the prevalence of alexithymia in patients with psychogenic nonepileptic seizures and epileptic seizures, reported 36.9% and 28.6%, respectively. [31] These findings are in concordance with most previous studies, which conclude that alexithymia seems to be a common feature of neurological disease. Tinnitus, too, is an illness overlapping neurological, otological, and psychological problems. However, a review [32] found most evidence available for patients with traumatic brain injury, stroke, and epilepsy.

We also found a 20% prevalence of depression in our sample. This high point prevalence of depression likely contributes to the high observed prevalence of alexithymia. Earlier studies and meta-analyses suggest a strong association between alexithymia and depression, [9,33] and depression is a confounding factor in studies of alexithymia. This comorbidity may be the reason for the very high prevalence of alexithymia found in our study. We also found a 37.1% prevalence of anxiety in our study; this is the third dimension alongside alexithymia and depression. The results also show that tinnitus is highly associated with alexithymia, anxiety, and depression; this conforms to earlier studies that demonstrated a higher prevalence of psychological problems such as depression, anxiety, somatization, and obsession. [28] These findings may have important implications for understanding and promoting general psychological health among patients with tinnitus.

Limitations of this study include the lack of a control group, the small sample size, and the cross-sectional design; future studies may be planned to overcome these. The present findings are based on questionnaire data, but future studies may employ structured psychiatric interviews adopting diagnostic criteria.
CONCLUSIONS

The prevalence of alexithymia, anxiety, and depression among patients with tinnitus was found to be 65.7%, 37.1%, and 20%, respectively. The high tinnitus handicap group showed significantly higher scores for total alexithymia, anxiety, and depression compared to the low tinnitus handicap group. The high tinnitus handicap group also showed significantly higher scores for difficulty describing and identifying emotions, but there was no difference for the subscale of EO thinking.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Osteopontin is a prognostic circulating biomarker in patients with neuroendocrine neoplasms

Purpose: Osteopontin (OPN), also called secreted phosphoprotein 1 (SPP1), is a matricellular glycoprotein whose expression is elevated in various types of cancer and which has been shown to be involved in tumorigenesis and metastasis in many malignancies. Its role in neuroendocrine neoplasms (NEN) remains to be established. The aim of the study was to analyze plasma concentrations of OPN in patients with NEN and to explore its diagnostic and prognostic value as a clinical biomarker.

Methods: OPN plasma concentrations were measured in a total of 38 patients with histologically proven NEN at three different time points during the course of disease and therapy (at the start of the study and after 3 and 12 months, respectively) as well as in healthy controls. Clinical and imaging data as well as concentrations of Chromogranin A (CgA) and Neuron-Specific Enolase (NSE) were assessed.

Results: OPN levels were significantly higher in patients with NEN compared to healthy controls. High-grade tumors (grade 3) showed the highest OPN levels. OPN levels differed neither between male and female patients nor between different primary tumor sites. OPN correlated significantly with corresponding NSE levels, while there was no correlation with Chromogranin A. High OPN levels above a cutoff value of 200 ng/ml at initial analysis predicted a worsened prognosis with significantly shorter progression-free survival of patients with NEN, which also held true within the subgroup of well-differentiated G1/G2 tumors.

Conclusion: Our data indicate that high baseline OPN levels in patients with NEN are predictive of an adverse outcome with shorter progression-free survival, even within the group of well-differentiated G1/G2 tumors. Therefore, OPN may be used as a surrogate prognostic biomarker in patients with NEN.

Introduction

Neuroendocrine neoplasms (NENs) are a group of rare, very heterogeneous cancers (Kunz 2015). Nevertheless, the incidence and prevalence of NENs have been rising globally: it is estimated that every year over 12,000 people in the United States are diagnosed with NET (Dasari et al. 2017; Oberg et al. 2013). The prognosis of NENs is largely dependent on the histopathological assessment, which is based on the World Health Organization (WHO) classification of 2019 (Rindi et al. 2022). Disease stage and tumor grading are important factors of the classification and largely correlate with the prognosis (Baur et al. 2016). Whereas well-differentiated NEN have a mostly favorable prognosis, poorly differentiated NEN (neuroendocrine carcinoma, NEC) are highly proliferative (Ki67 index > 20%) with a median overall survival of 11-17 months (Rinke and Gress 2017; Rindi et al. 2022). When the disease is diagnosed at an early stage, surgical resection can be performed, leading to significantly improved overall survival. Due to their often-indolent growth, however, NENs are frequently diagnosed at a late stage, when metastases have occurred (Modlin et al. 2008). The tumor marker Chromogranin A (CgA) is commonly used in clinical practice for monitoring patients with NEN (Lindholm and Oberg 2011). Serum concentrations of Neuron-Specific Enolase (NSE) can also be elevated in up to 45% of patients with NEN and seem to correlate with a worsened prognosis, even in cases with normal levels of CgA (Appetecchia et al. 2018; Kulke et al. 2019). CgA is a glycoprotein which is expressed in large core vesicles of neuroendocrine cells (Borges et al. 2010).
Increased levels of CgA have been associated with neuroendocrine neoplasms (NENs) from many different sites, including the gastroenteropancreatic tract (Diez et al. 2013) and the bronchopulmonary system (Caplin et al. 2015; Pericleous et al. 2018), as well as with pheochromocytomas, paragangliomas, medullary thyroid carcinoma, tumors arising as part of multiple endocrine neoplasia type 1 (MEN-1) or Von Hippel-Lindau (VHL) syndrome (Cives and Strosberg 2018; Bottoni et al. 2015), and others. Currently, even after curative surgery, surveillance is recommended at certain intervals using imaging and CgA measurement (Knigge et al. 2017). While changes in CgA levels in individual patients can be useful as a surrogate for tumor progression, the levels do not reflect the aggressiveness of the tumor and, furthermore, the most aggressive G3 tumors often express less CgA compared to well-differentiated tumors (Campana et al. 2007; Marotta et al. 2012). NSE, an enzyme which is specific for neurons and neuroendocrine cells (Isgro et al. 2015), has emerged as another biomarker that is frequently increased in high-grade G3 tumors (van Adrichem et al. 2016), although its clinical value is under debate due to limited sensitivity (Pavel et al. 2020; Modlin et al. 2008; Rindi et al. 2022).

Osteopontin (OPN) is a non-collagenous bone matrix protein produced by osteocytes, osteoblasts and hematopoietic cells (Wang and Denhardt 2008; Kita et al. 2006). Apart from promoting physiological responses, e.g. regulation of bone mineralization, promotion of cell adhesion and migration, as well as recruitment of macrophages, the importance of OPN in cancer progression is becoming increasingly acknowledged (Zhao et al. 2018; Hao et al. 2017). The role of OPN in malignancies has been demonstrated in several different cancer types, including breast, prostate and colorectal cancer, melanoma, osteosarcoma and glioblastoma (Zhao et al. 2018; Amilca-Seba et al. 2021). To date, no study has assessed the implications of OPN in NENs. Therefore, we sought to evaluate the role of OPN in this cancer entity and to evaluate its utility as a prognostic biomarker.

Patient recruitment and study cohort

From December 2013 to October 2016, 38 patients diagnosed with neuroendocrine neoplasms were enrolled in our study at our tertiary center for Neuroendocrine Tumors at the Charité University Hospital. Prior to sample collection, patients' informed written consent was obtained. The local ethics committee approved the study (EA EA1/229/17). From December 2013 to March 2018, blood samples from those thirty-eight patients were collected at three time points during the course of disease and therapy (i.e., at the beginning of the study / beginning of therapy, after three months, and after 12 months, respectively). The levels of circulating OPN were evaluated as a potential biomarker for NETs. In all patients, prior histopathological analyses of tumor tissue obtained by tumor resection or biopsy proved the presence of a NEN. Tumor grading was performed in accordance with the WHO guidelines. We also collected blood samples from ten healthy controls with no history of malignancies, which served as control samples.

Sample processing and measurement of OPN levels

After collection of the patients' and healthy donors' blood samples, they were subjected to a centrifugation step for 10 min at 3000 g, and plasma aliquots of 1 mL were frozen at −80 °C until further analyses. In total, we analyzed plasma from 114 patient samples and 10 healthy donors.
Plasma levels of OPN were measured using an enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions (No. 27158, Immuno-Biological Laboratories (IBL) International GmbH, Flughafenstrasse 52a, 22335 Hamburg, Germany). This kit uses two types of highly specific antibodies, with tetramethylbenzidine (TMB) as the coloring agent (chromogen). The epitopes of the antibodies used are as follows: (1) coating antibody: anti-human OPN (O-17) rabbit IgG, affinity purified; (2) labeled antibody: anti-human OPN (10A16) mouse IgG MoAb Fab'-HRP. The measurement range is 5 to 320 ng/mL and the sensitivity of the reaction is 3.33 ng/mL. All samples were measured in duplicates. In parallel, blood samples for analyses of CgA and NSE levels were measured as a part of the standard workflow for NEN patients at Labor Berlin, the central laboratory of Charité University Hospital Berlin, Germany. The CgA and NSE analyses were each performed with the same method at all time points, and the normal upper limits remained the same (CgA < 102 ng/ml; NSE < 16.3 µg/L).

Statistical analyses

All assays were performed in replicates. Results are displayed as violin plots or box plots. Parametric data were compared using Student's t-test; nonparametric data were compared using the Mann-Whitney U-test or the Kruskal-Wallis test for multiple group comparisons. A p-value of < 0.05 was considered statistically significant. Statistical analyses were performed using GraphPad Prism software.

Patient characteristics

In the current study, plasma samples from 38 patients (22 female and 16 male) were collected at three different time points during the course of the disease for analysis of OPN levels. The standard biomarker for NENs, CgA, as well as NSE, was analyzed in parallel. Further, blood from ten healthy donors served as controls for the analysis of OPN levels. Within the control group, 5 participants were female and 5 were male, and the median age was 40 years (29-58 years). At the time point of inclusion into the study, the median age of the patients was 63 years (28-85). In the majority of cases, the primary tumor was localized in the ileum (n = 20) or the pancreas (n = 13). Further primary tumor locations included the kidney (n = 1), mammary gland (n = 1), lung (n = 1), and carcinoma of unknown primary (n = 2). The median Ki-67 index was 8% (range 1-40%). The majority of patients had a G2 tumor (58%) and a non-functional tumor (68%). In most cases, the tumor had metastasized to lymph nodes and the liver at the time of inclusion into our analyses.

Osteopontin (OPN) levels are elevated in patients with neuroendocrine malignancies

As mentioned above, several studies have already shown the relevance of OPN as a circulating biomarker in different cancer types. To evaluate whether OPN has diagnostic and prognostic significance in patients with neuroendocrine malignancies, we compared plasma levels from patients with healthy controls. We observed that OPN levels were significantly higher in patients with NEN (n = 38, range 32.3-687 ng/ml) compared to healthy controls (n = 10, range 69.8-225.4 ng/ml) (Fig. 1a). There was no difference in OPN levels between male and female patients (Fig. 1b).

OPN levels are significantly higher in patients with high-grade NEN

In order to further analyze the relevance of OPN in neuroendocrine neoplasms, we compared circulating OPN levels in patients with G3 NENs (Ki67 > 20%) with those of patients diagnosed with well-differentiated G1 and G2 NEN.
We found that OPN levels in patients suffering from G3 NEN (range 655-688 ng/ml; median 670.5) were significantly higher compared to low- and intermediate-grade tumors (grades 1 and 2, range 32.3-368, median 190.7). This was consistent throughout the three different time points of the course of disease and therapy (Fig. 2).

OPN plasma concentrations show no correlation to primary tumor location

Next, we sought to analyze whether OPN plasma concentrations showed changes in patients with different primary tumor localizations. We analyzed three groups: patients with a pancreatic primary, patients with an ileal primary (the most prevalent entities in our cohort), and patients with other localizations. Our analysis revealed that OPN levels were not associated with the localization of the primary tumor, which was consistent at three different time points during the course of disease and therapy (Fig. 3).

OPN levels do not predict therapeutic response

When looking at the longitudinal measurements of OPN levels in individual patients, there was no significant correlation between OPN and treatment response. In most cases, OPN levels remained stable or showed a slight decline throughout the course of the observation time (Fig. 4).

OPN plasma concentrations in correlation to corresponding Chromogranin A and NSE levels

We next tested whether the circulating OPN levels correlated with the current standard biomarker for NEN, Chromogranin A (CgA), or with NSE (Fig. 5). While there was no association between OPN and CgA, we found a significant correlation between the levels of OPN and NSE (p < 0.0003; r = 0.5668).

OPN plasma concentrations in correlation to corresponding Ki-67 levels

Subsequently, we wanted to find out whether circulating OPN levels correlated with Ki-67 (Fig. 6). We found a significant correlation between the levels of OPN and Ki-67 (p < 0.0001; r = 0.6769).

Fig. 6 Correlation of OPN plasma levels (ng/ml) with Ki-67 levels (%). Circulating OPN levels were compared to corresponding Ki-67 values at the start of surveillance.

Association of OPN levels with progression-free survival

Finally, we asked whether circulating plasma OPN levels could indicate enhanced or worsened progression-free survival. Using a cutoff value of 200 ng/ml OPN, we compared the progression-free survival (PFS) in patients who had values below (n = 18) versus above (n = 20) the cutoff. Using Kaplan-Meier analysis, we found that osteopontin indicates a significantly worsened prognosis with shorter PFS when using a cutoff value of 200 ng/ml (Fig. 7). The median PFS was 41 months in the group of patients with OPN levels below versus 19 months in individuals with OPN levels above the cutoff value (hazard ratio 0.38; p < 0.03). Since we had only a limited number of patients with G3 tumors, we asked whether high OPN also correlates with PFS within the well-differentiated G1/G2 cases. Indeed, we found that OPN still indicates a significantly worsened prognosis with shorter PFS when using a cutoff value of 200 ng/ml (Fig. 8). The median PFS was 38.5 months in the group of patients with OPN levels below versus 19 months in individuals with OPN levels above the cutoff value (hazard ratio 0.39; p < 0.02). Thus, OPN can be used as an additional parameter to stratify the risk of progression also in patients with well-differentiated G1/G2 tumors.
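For illustration, the two key analyses above (correlating baseline OPN with a marker such as Ki-67, and the Kaplan-Meier comparison of PFS after dichotomizing at 200 ng/ml) can be sketched as follows. The arrays are placeholders, the choice of Spearman correlation is our assumption (the manuscript reports r values without naming the method), and the lifelines package stands in for the GraphPad Prism analysis actually used.

```python
# Illustrative workflow: correlation of baseline OPN with Ki-67 and a
# log-rank/Kaplan-Meier comparison of PFS for OPN above vs below 200 ng/ml.
import numpy as np
from scipy.stats import spearmanr
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

opn = np.array([120.0, 310.0, 85.0, 560.0, 240.0])    # ng/ml at baseline
ki67 = np.array([2.0, 15.0, 1.0, 30.0, 8.0])          # %
pfs_months = np.array([44.0, 18.0, 50.0, 9.0, 23.0])  # progression-free survival
progressed = np.array([0, 1, 0, 1, 1])                # 1 = progression observed

rho, p_corr = spearmanr(opn, ki67)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")

high = opn >= 200.0
kmf_high, kmf_low = KaplanMeierFitter(), KaplanMeierFitter()
kmf_high.fit(pfs_months[high], event_observed=progressed[high], label="OPN >= 200 ng/ml")
kmf_low.fit(pfs_months[~high], event_observed=progressed[~high], label="OPN < 200 ng/ml")
print(kmf_high.median_survival_time_, kmf_low.median_survival_time_)

result = logrank_test(pfs_months[high], pfs_months[~high],
                      event_observed_A=progressed[high],
                      event_observed_B=progressed[~high])
print(f"log-rank p = {result.p_value:.3f}")
```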
Discussion

In the current study, we evaluated OPN as a biomarker for neuroendocrine malignancies and demonstrate that a plasma OPN level above 200 ng/ml is associated with a significantly shorter PFS in patients with NEN. High OPN levels obtained at the initial analysis indicate a more aggressive tumor biology and predict an adverse disease outcome. OPN is a protein that is mainly synthesized by osteocytes, osteoblasts and hematopoietic cells (Zhao et al. 2018). It has been linked to inflammatory processes, and it has previously been shown that high OPN levels can estimate the severity of disease and risk of mortality in critically ill patients (Roderburg et al. 2015). The relevance of OPN as a marker of cancer aggressiveness has been reported in several malignancies, including breast, colorectal, pancreatic, lung, bladder, oral, and head and neck cancer, and several other cancer types (Weber et al. 2010; Wisniewski et al. 2019; Petrik et al. 2006; Loosen et al. 2019).

Neuroendocrine malignancies are a group of rare, very heterogeneous tumors (Detjen et al. 2021). Whereas patients with well-differentiated tumors (G1-G3) show a rather good prognosis, survival in patients with less differentiated tumors (NEC) is poor in most cases (Milione et al. 2017). Currently, Chromogranin A is established as a circulating biomarker for neuroendocrine neoplasms. While Chromogranin A is widely used to monitor the course of disease and response to therapy in individual patients, the absolute levels do not correlate with the prognosis. Indeed, more aggressive tumors can lose CgA production (Kidd et al. 2016). Therefore, new predictive biomarkers are urgently needed for neuroendocrine malignancies. To address this issue, we analyzed levels of OPN in plasma from patients with neuroendocrine neoplasms at three different time points during the course of disease and therapy. We were able to demonstrate that OPN levels are significantly higher in patients with NEN as compared to healthy controls. Further, OPN levels in patients with high-grade neuroendocrine carcinomas (grade 3) were significantly higher as compared to low- and intermediate-grade tumors (grades 1 and 2). There was no difference in OPN levels between male and female patients, and we found no correlation with primary tumor localization. Importantly, while the levels correlated with the prognosis, they did not show a correlation with the tumor burden or response to therapy. Thus, OPN levels appear to reflect the tumor biology in terms of the aggressiveness of the tumor cells, irrespective of the initial tumor size.

When correlating OPN levels in plasma obtained at the first consultation with corresponding CgA and NSE levels, we observed a significant correlation with NSE values, while no correlation with CgA could be found. This verifies existing data concerning the relevance of elevated NSE levels in reflecting a worsened disease course (van Adrichem et al. 2016). CgA is highly expressed in well-differentiated NEN, and therefore the concentration does not necessarily reflect the aggressiveness of the tumor. Moreover, CgA levels can be affected by confounders such as renal insufficiency, atrophic gastritis, and therapy with proton pump inhibitors (Mettler et al. 2022). We therefore propose that OPN provides additional valuable information about the tumor biology of NEN, with aggressive tumors correlating with higher OPN levels. We also sought to find out whether OPN levels correlated with Ki-67 values, and we found a significant correlation.
This may be a key advantage in cases where tumor tissue for assessment of the Ki-67 index is not readily available. In the current study, an important aspect is the demonstrated prognostic value of OPN for progression-free survival when using a cutoff value of 200 ng/ml. Similar results were found in a meta-analysis examining the relevance of OPN for the prediction of overall survival in gastric cancer: in cases with high expression levels of OPN, there was a correlation with factors that mirror more aggressive and advanced disease (i.e., TNM stage, lymph node and distant metastases) (Gu et al. 2016). Our data are also in line with another previous study showing that OPN levels are significantly elevated in patients with metastasized colorectal cancer when compared to healthy controls (Loosen et al. 2018). Further, high pre- and postoperative plasma levels of OPN reveal a worse prognosis following tumor resection (Loosen et al. 2018). Together, these data suggest that OPN may be used as a prognostic pan-cancer marker, potentially also in rare tumor entities such as NEN.

In addition to its role as a biomarker, a mechanistic role of OPN during tumor progression has been suggested. Interestingly, in a study by Ishigamori et al. examining OPN-knockout mice with APC deficiency, tumor development was shown to be significantly suppressed, whereas in solely APC-deficient mice the expression of OPN was upregulated in colon cancers (Ishigamori et al. 2017). Similarly, ablation of OPN in mice infected with H. pylori led to a significant decrease in the development of gastric cancer compared to wild-type mice (Lee et al. 2015). Further, in another study analyzing the incidence of chemically induced hepatocellular cancer, OPN deficiency led to a significant reduction, which seems to be caused by suppression of EGFR-mediated anti-apoptotic signaling (Lee et al. 2016). Together, these data indicate that OPN affects cellular proliferation and survival. Our data reveal that NEN with high proliferative activity show particularly high OPN levels. It will be interesting to investigate whether OPN signaling promotes NEN cell proliferation and thus contributes to the high proliferative activity, or whether highly proliferative tumors promote immunological responses that are linked to high OPN values.

In routine clinical care, the diagnosis and evaluation of cancer largely depend on clinical and histological criteria. Nevertheless, it is important to keep in mind that blood biomarkers are easily obtainable and may provide additional information. Our data suggest that OPN may serve as a surrogate biomarker for a tissue biopsy if sufficient material cannot be collected, for example, when tissue is not easily obtainable due to a high risk of adverse events. Further, we postulate that elevated OPN levels at the time of diagnosis of neuroendocrine neoplasms can support the decision for a more intensive treatment regimen than would be necessary for patients with low OPN levels. A limitation of our study is the rather small patient cohort. Thus, in a next step, the relevance of OPN should be tested in a larger study group to validate our results and to evaluate whether the combined analysis of OPN with currently used markers can identify high-risk patients. This may influence diagnostic and therapeutic workflows and impact the outcome of patients with NEN.
Conclusion

Our data demonstrate for the first time that circulating OPN may be considered a prognostic biomarker in patients with neuroendocrine malignancies, in order to identify patients with potentially shorter progression-free survival. This biomarker is easily obtainable non-invasively at any time point and may help in guiding treatment decisions in the future. However, further investigations including larger cohorts of NET and NEC patients are necessary in order to fully understand the pathophysiological role of OPN in this cancer type before implementation into clinical algorithms can be considered.
Local Treatment of Burns with Cell-Based Therapies Tested in Clinical Studies

Effective wound management is an important determinant of the survival and prognosis of patients with severe burns. Thus, novel techniques for timely and full closure of full-thickness burn wounds are urgently needed. The purpose of this review is to present the current state of knowledge on the local treatment of burn wounds (distinguishing radiation injury from other types of burns) with the application of cellular therapies conducted in clinical studies. The PubMed search engine and ClinicalTrials.gov were used to analyze the available data. The analysis covered 49 articles, assessing the use of keratinocytes (30), keratinocytes and fibroblasts (6), fibroblasts (2), bone marrow-derived cells (8), and adipose tissue cells (3). Studies on the cell-based products that are commercially available (Epicel®, Keraheal™, ReCell®, JACE, Biobrane®) were also included, with the majority of reports found on autologous and allogeneic keratinocytes. Promising data demonstrate the effectiveness of various cell-based therapies; however, there are still scientific and technical issues that need to be solved before cell therapies become standard of care. Further evidence is required to demonstrate the clinical efficacy and safety of cell-based therapies in burns. In particular, comparative studies with long-term follow-up are critical.

Background

Burns represent a substantial public health problem worldwide. According to the World Health Organization, there are approximately 265,000 deaths each year due to fires, electric burns, and chemical substances [1]. Furthermore, over 96% of fatal fire-related burns occur in low- and middle-income countries [1]. Thermal burns are the most common type of burn injuries, making up about 86% of the burned patients requiring burn center admission [2]. The burn depth is proportional to the temperature of the causative agent and the length of contact time [3]. Effective wound management is a major challenge and an important determinant of survival and prognosis in patients with severe burns. Burn wounds are characterized by loss of the progenitor cell populations necessary for epidermis and dermis regeneration [4]. Severe burns would therefore benefit from cell therapy by enhancing wound healing, replacement, and regeneration of damaged skin. A significant challenge is incorporating skin appendages and reducing fibrosis and inflammation [5]. Furthermore, destruction of the epidermis increases the likelihood of bacterial infection [6]. We decided to separate cell therapy application in radiation burns from other types of burns in our review (Figure 1), as a radiation injury, although often referred to as a burn, differs from other types of burn injury.

Surgical debridement followed by split-thickness skin grafting (STSG) is a standard therapy for burns. One of the biggest problems with severe burn patients is the limitation of available donor skin sites for surgical treatment, especially in cases involving over 50% of the total body surface area (TBSA). Thus, the technique of meshed STSG allows expanding skin grafts to a much larger size [13][14][15]. However, when the expansion of meshed grafts exceeds 1:6, there is a lower re-epithelialization rate and a significant decrease in survival rate [13]. Moreover, STSG increases the wound's total surface, leading to higher water and electrolyte loss from the patient's body [16]. Another issue is donor site hypertrophic scarring and contracture, especially in children due to their physical growth [17].
Re-epithelialization is crucial in burn wound treatment, and there are many different methods of delivering skin cells to the wound bed [13,18]. Many efforts have been made towards autologous and allogeneic cell-based therapies and skin substitutes, both as monotherapy and as a part of combined treatment. This review aims to provide an overview of the up-to-date local treatment of burns with cellular therapies, based on data published from 1983 up to 2020. We discuss the results of clinical studies only. Practical options for future therapeutic applications of cell therapies for burn treatment and ongoing challenges associated with burn injuries are finally considered. In our previous review concerning the use of cell-based therapies in non-healing wounds, we presented the characteristics of all cell types; therefore, we do not repeat this information in this review [19].

Experimental Section

Study Selection

Using the PubMed search engine and ClinicalTrials.gov, we analyzed the available data concerning the use of human keratinocytes, fibroblasts, bone marrow cells, and adipose tissue cells, as well as cell-based products available on the market such as Epicel®, Keraheal™, ReCell®, and JACE®. We excluded all preclinical (animal) studies. A systematic literature search was conducted from 1983 to 2020 using the terms "burns" OR "radiation burns" combined with "keratinocytes" OR "fibroblasts" OR "mesenchymal stem cells" OR "adipose tissue". Only articles published in peer-reviewed scientific journals were included in the analysis. The analysis covered 49 English-language articles, assessing the use of keratinocytes (30), keratinocytes and fibroblasts (6), fibroblasts (2), bone marrow-derived cells (8), and adipose tissue cells (3). The patients' characteristics include the degree of burn, age, and the length of follow-up.

Autologous Keratinocytes

Autologous keratinocytes may be administered as cultured or non-cultured cell suspensions delivered with a spray device, as a single-cell suspension, or in the form of cultured epithelial sheets [15]. A cultured epithelial autograft (CEA) is prepared as a sheet (25 or 50 cm2) consisting of isolated and cultured keratinocytes fixed on petrolatum gauze [15,20]. Approximately 2-3 weeks are required to prepare a confluent sheet [21]. CEA is useful for extensive skin burns when available healthy skin is insufficient, but its high cost and the lack of a dermal substrate limit its applicability. Another disadvantage of this method is the long culturing period, which extends the time between biopsy and grafting. Moreover, cell culturing has other difficulties, such as lack of adherence and wound contracture [22,23]. One of the pioneers in this field, Gallico et al., demonstrated that cultured autologous epithelium could be used to generate permanent epidermis on half or more of the TBSA [24]. The use of CEA in the treatment of deep second- and third-degree burn wounds was described by Teepe et al. The authors observed that wounds excised at an early stage showed a significantly better graft take than non-excised chronic wounds that were grafted at a later stage. The regenerated skin was smooth and pliable. Moreover, scars showed less hypertrophic formation in comparison with meshed grafts. The authors showed an inverse correlation between graft take and the patient's age [25].
This correlation was not confirmed by the multicenter study of Odessey et al., which demonstrated that patient age, burn size, and extent of full-thickness injury did not significantly affect the graft take. The average final take was around 60%, and 22% of patients achieved a final take of ≥90% [26]. Another advantage of CEA transplantation in burn patients is reduced mortality. In a study by Munster et al., there was a decline in mortality from 48% to 14% [27]. Increased collagen deposition, decreased stromal cellularity, and a significant effect on connective tissue phenotype and dermal neogenesis after CEA transplantation were observed by Compton et al. in a group of pediatric patients [28]. CEA may also be a part of a combined burn wound treatment. A 15-year retrospective study by Auxenfans et al. revealed that it allows rapid healing of STSG donor sites and deep second-degree burns, due to the decreased wound surface and stimulated healing of the remaining wound [29]. Chrapusta et al. conducted a study in children and found significantly shorter healing times when STSG and cultured autologous keratinocytes were applied in one stage [30]. In 2015, Matsumura et al. published studies on CEA treatment of patients with severe burn wounds; CEA, whose manufacturing period was between 22 and 30 days, contributed to wound closure and patients' survival [31]. In a retrospective study by Wood et al., patients were treated with CEA in the form of a sheet, a cell suspension (CellSpray), or both. Cell suspension could be administered after an average of 10.6 days, compared with 25 days for cultured sheet grafts, owing to the shorter culture time needed to reach the pre-confluent stage [32]. Kym et al. observed no clinical differences between the sheet and spray type of CEA; both resulted in significantly higher patient survival than the non-CEA group [33]. Another retrospective study was published by Cirodde et al.; a favorable outcome was most often associated with young age and a small number of infectious complications [34]. Chua et al. published a 12-year retrospective review of patients who received CEA. The authors compared the outcomes after STSG and micrografting, both followed by CEA application, and observed that a significantly lower amount of skin allografts was needed in the micrografting group [35]. CEA, Cuono's method, and CEA combined with STSG were compared in a study by Lo et al. Cuono's method is a two-stage procedure: a cadaver skin allograft is grafted onto the wound and, after 2-3 weeks, the cadaver epidermis is removed and CEA is applied. Sites treated with either Cuono's method or CEA with initial take rates <60% did not heal. Moreover, the highest take rate was achieved when CEA was combined with STSG [36]. The most extensive study on CEA to date was performed by Hickerson et al. The results of this analysis were compared to the outcomes of patients with comparable burns reported in the National Burn Repository. This study's main conclusion was that when CEA was used with STSG, the survival rate increased [37]. Clinical applications of autologous keratinocytes in burns are summarized in Table 1.

Products Based on Autologous Keratinocytes

• Epicel®

Epicel® is a wound dressing composed of sheets of the patient's autologous, proliferative keratinocytes. The FDA-approved indication comprises use in adult and pediatric patients with deep dermal or full-thickness burns involving a total body surface area greater than or equal to 30%.
It may be used in conjunction with split-thickness autografts, or alone in patients for whom split-thickness autografts may not be an option due to the severity and extent of their burns [38]. This CEA serves as a successful permanent burn coverage in severely traumatized patients. According to the FDA, Epicel ® ranges from 2 to 8 cell layers thick and measures approximately 50 cm 2 [38]. Age is one of the factors determining the CEA's take, as reported by Carsin et al. in a five-year, single-center study. Those younger than 15 years old presented the highest initial and final Epicel ® CEA take (82.28% and 85.27%, respectively). More extensive burns tended to occur in the younger population, which contributed to these results. Surprisingly, no correlation between the take and burn wound size was observed. The authors stated that Epicel ® CEA appears to have a high beneficial value in managing burns covering over > 60% of TBSA [39]. • Keraheal™ Keratinocyte spray suspension is the next method for delivering epidermal cells to the wound bed. Keraheal™ is one of the products available on the market (Biosolution Co. Ltd, Seoul, Korea). Unlike the conventional sheet type, it contains mainly non-differentiated pre-confluent cells. According to the manufacturer, this spray-type autologous keratinocyte therapy is indicated for deep second-degree burns covering more than 30% of TBSA and in third-degree burns-more than 10% of TBSA [40]. Keraheal™ improved scars quality in severely burned patients and was effective in saving lives [12]. According to Yim et al., Keraheal™ requires a lower number of cells to culture and shorter culturing time in comparison to the sheet type. In studies of Keraheal™ combined with meshed grafts, the CEA's take rate after four weeks was 100% in Yim's and 68% in Lee's study [13,41]. • ReCell ® ReCell ® is another product applied via spray (Clinical Cell Culture, Cambridge, UK). This system uses rapid, autologous cell harvesting, processing, and delivery technology. A small sample of the patient s skin is obtained to isolate keratinocytes, fibroblasts, and melanocytes that are sprayed over the burn wound by a special nozzle [42]. FDA-approved indication for the treatment of acute thermal burn wounds in patients 18 years of age and older. An appropriately licensed healthcare professional uses the RECELL ® Device at the patient's point-of-care to prepare autologous Regenerative Epidermal Suspension (RES™) for direct application to acute partial-thickness thermal burn wounds or application in combination with meshed autografting for acute full-thickness thermal burn wounds [43]. A randomized trial comparing results obtained with the ReCell ® autologous cell harvesting (ACH) system and the classic skin grafting for epidermal replacement in the deep partialthickness burns was performed by Gravante et al. Their study revealed that skin grafting was faster than ReCell ® , but ReCell ® biopsy areas and post-operative pain were smaller than in traditional grafting [44]. Not only adults received ReCell ® , Wood et al. performed a randomized controlled pilot study on pediatric patients with partial-thickness scald injury. They tested if the addition of ReCell ® to the Biobrane ® synthetic wound dressing gave better results and compared them to the standard treatment-skin grafting [45]. According to the manufacturer, Biobrane ® is composed of a silicone membrane bonded to a nylon mesh. 
Peptides from porcine dermal collagen have been connected to the nylon membrane form a flexible and conformable composite dressing. Biobrane remained attached to superficial partial-thickness burn wounds, donor sites, and excised burn wounds with or without meshed autografts [46]. In the Wood et al. study, by day 21 after burn, 100% of patients receiving Biobrane ® and ReCell ® healed, 97.7% receiving Biobrane ® , and 90.1% in the standard treatment group. According to the authors, the best outcomes can be obtained when debridement, followed by Biobrane ® with or without ReCell ® , is performed within four days after-burn. It leads to decreased healing time and requires fewer dressing changes. Moreover, it is less painful [45]. In another study by Sood et al., the effects of ReCell ® treatment were compared to the effects after a meshed STSG. Eight patients had 100% take with both treatments, and two patients had significant non-take and graft loss. Patients benefited from the ReCell ® therapy having a decreased donor site size and comparable outcomes with meshed STSG treatment [47]. • JACE ® JACE ® is a Green-type CEA, an epidermal cell sheet supplied in cultured autologous epidermis produced from keratinocytes for treatment of severe, extensive burns. It allows obtaining cells from a small area of the patient's tissue. These sheets are grafted onto the wound surface with preserved dermis for the closure of the wound via engraftment and epithelialization [48]. JACE ® is indicated for patients with a deep-dermal or full-thickness burn wound when sufficient donor sites for autologous skin grafts are not available, and the burn area is 30% or more of the TBSA. After skin grafting, a cultured epidermal cell sheet is applied onto the reconstructed dermis [49]. The results from a 6-year multicenter study of JACE ® were published by Matsumura et al. The authors demonstrated a 66% take rate at four weeks after grafting and found that the JACE ® application contributed to patient survival up to seven weeks after burn [50]. Similar results were obtained by Hayashi et al., who used a combination of JACE ® CEA and STSG or meshed split-thickness dermis graft, de-epithelialized STSG. A meshed dermis graft required more healing time than STSG, but it enabled covering a burn wound by collecting tissue from only a small donor site. The skin graft taken at four post-operative weeks was between 85 and 95% [48]. Clinical applications of products based on autologous keratinocytes in burns are summarized in Table 1. Autologous Engineered Skin (Keratinocytes and Fibroblasts) Hansbrough et al. developed procedures for establishing confluent, stratified layers of cultured, autologous keratinocytes on a modified collagen-glycosaminoglycan membrane containing autologous fibroblasts. These grafts were transferred onto the areas of fullthickness burn wounds. It took up to 9 days to form the basement membrane. According to the authors, this technique offers a significant advance in extensively burned patients' care and can also provide skin for reconstructive surgeries [51]. Boyce et al. investigated cultured skin substitutes (CSS) consisting of autologous cultured keratinocytes and fibroblasts. The cells were attached to collagen-based sponges prepared from STSG. When CSS is used, donor skin can be spared, and the mesh ratio for autografts needed for coverage of the remaining, not-covered burn could be reduced to 1:2 or less. Reduced mesh ratio autografts guaranteed faster healing and reduced scarring [52]. 
TESE is a tissue-engineered skin equivalent developed by Takami et al. It comprises autologous cultured keratinocytes, fibroblasts, and a decellularized allogeneic dermis and requires three weeks of processing. After this time, it was transplanted to the third-degree burn wounds. The authors observed 96% graft survival. TESE's histological characteristics are similar to those of normal human STSGs. Therefore, it can be used for permanent repair of full-thickness skin defects [53]. Clinical applications of autologous keratinocytes and fibroblasts in burns are summarized in Table 1. Table 1. Clinical applications of autologous keratinocytes, products based on autologous keratinocytes, and autologous engineered skin in burns (NA-not available, CEA-cultured epithelial autograft, AT-active treatment, CT-control group, TBSA-total body surface area, CEA/A-cadaver allograft followed by placement of CEA onto an allodermis base, STSG-split-thickness skin graft, TESE-tissue-engineered skin equivalent, CSS-cultured skin substitutes). Allogeneic Keratinocytes Hefton et al. observed that burn wounds grafted with cultured allogeneic epidermal cells healed within three days and remained healthy for the nine months of observation. Based on these findings, the authors stated that allografts might serve as alternative biological dressings, or grafts, for deep second-degree burn wounds. They accelerate healing and reduce the need for STSG [54]. The same conclusion was drawn by Madden et al., where cultured allogeneic epidermal cells gave similar results to the autografts in second-degree burns and were not successful in third-degree ones [55]. Faster epithelialization of the wounds was also confirmed by Rivas-Torres et al. Healing time was reduced by 37.8% when treated with cultured epidermal allografts. Moreover, the authors observed that allografted sites were less erythematous than skin grafts, which served as a treatment control [56]. Similar results were obtained after transplantation of frozen cultured human allogeneic epidermal sheets in deep and superficial partial-thickness burns. Alvarez-Diaz et al. observed that deep partial-thickness burns treated with cultured allogeneic epidermal sheets healed in an average of 5.6 days versus 12.2 days in the control group [57]. Cryopreserved cultured epidermal allografts in pediatric patients were studied by Yanaga et al. Early wound closure and prevention of hypertrophic scar formation were observed, along with a decrease in graft cell viability. The sex-determining region Y (SRY) gene could be detected only for 2-4 weeks after cell transplantation [17]. This proves that allograft take is not permanent, and allogeneic cells are replaced by autologous keratinocytes. Haslik et al. analyzed the long-term results of dermal hand burns covered with cryopreserved allogeneic keratinocyte sheet grafts. The authors compared these keratinocytes to autologous STSG and observed no statistically significant differences. The use of allogeneic keratinocytes for coverage is appropriate to preserve skin grafts for full-thickness areas. Because of high costs and qualified staff requirements, in the presence of sufficient donor sites, the use of skin grafts should be the first choice of treatment for hand burns [58]. On the other hand, a 15-year retrospective study by Auxenfans et al. revealed that cultured allogeneic keratinocytes (CAlloK) facilitate healing of STSG donor sites as well as deep second-degree burns.
CAlloK secrete growth factors and cytokines stimulating the proliferation of host keratinocytes in both acute and chronic wounds. Due to the storage options and availability, they can serve as temporary coverage. Transplantation of CAlloK in deep dermal burns when there is a lack of donor sites may replace the use of STSG [14]. One such product, KeraHeal-Allo™, was investigated in phase 1 and 2 clinical trials. It is a thermosensitive hydrogel-type allogeneic keratinocyte therapy (Biosolutions Co., Ltd) intended to promote the reepithelialization of deep second-degree burns. No significant adverse reactions have been observed yet [59]. Clinical applications of allogeneic keratinocytes in burns are summarized in Table 2. Allogeneic Keratinocytes and Fibroblasts Apligraf ® is a living, bi-layered cell-based product approved by the U.S. Food and Drug Administration (FDA) to heal diabetic foot ulcers and venous leg ulcers [60]. The lower dermal layer combines bovine type I collagen and human fibroblasts, which produce additional matrix proteins. The upper epidermal layer is formed by proliferating human keratinocytes. Apligraf ® does not contain melanocytes, Langerhans cells, macrophages, lymphocytes, or other structures such as blood vessels, hair follicles, or sweat glands [61]. Waymack et al. placed Apligraf ® over a meshed autologous STSG over excised burn wounds. There was no difference in autograft take in the presence or absence of Apligraf ® . On the other hand, they demonstrated the cosmetic and functional advantages of Apligraf ® when applied over meshed autograft [62]. Hu et al. evaluated the persistence of Apligraf ® by DNA detection. After four weeks, it could be found only in a minority of patients [63]. Another product consisting of allogeneic keratinocytes and fibroblasts is OrCel™. It is composed of a porous collagen sponge containing co-cultured allogeneic donor epidermal keratinocytes and dermal fibroblasts from human neonatal foreskin tissue [64]. Still et al. compared Biobrane-L ® dressing with OrCel™ in facilitating wound closure in burn patients. The authors demonstrated that wound healing is faster after OrCel™ treatment. They explained that this effect is due to the combination of the collagen sponge and the cytokines and growth factors produced by the proliferating keratinocytes and fibroblasts. OrCel™ sites also exhibited reduced scarring [65]. Clinical applications of allogeneic keratinocytes and fibroblasts in burns are summarized in Table 2. Table 2. Clinical use of allogeneic keratinocytes and fibroblasts in burns (NA-not available, AT-active treatment, CT-control group, DDB-deep partial-thickness burn wounds, STSG-split-thickness skin graft, CDS-cultured dermal substitute). Allogeneic Fibroblasts Cultured dermal substitute (CDS) is composed of fibroblasts seeded on a porous matrix of hyaluronic acid and collagen. Kashiwa et al. evaluated CDS as a biological dressing for highly extended mesh auto-skin grafting. When applied onto the 6-fold extended auto-skin graft, it produces growth factors and extracellular matrix components, promoting tissue granulation and epithelialization of the skin [66]. Moravvej et al. cultured allogeneic fibroblasts on a combination of silicone, glycosaminoglycan, and autologous mesh grafts. The authors observed that allogeneic fibroblasts grafted on meshed STSG might be useful for the treatment of third-degree burn wounds. Furthermore, this approach requires less autologous skin, which is a valuable advantage in extensive burns.
Healing time and scar formation were reduced compared with conventional therapy, but after one year there were no differences between the two groups [67]. Clinical applications of allogeneic fibroblasts in burns are summarized in Table 2. Mesenchymal Stem Cells Rasulov et al. applied allogeneic fibroblast-like bone marrow mesenchymal stem cells (BMSCs) onto deep thermal burn surfaces. A high tempo of wound regeneration in the presence of active neoangiogenesis was observed [68]. Moreover, the regeneration of sweat glands after deep burns is a significant clinical problem. Sheng et al. used BMSCs to acquire the phenotype of sweat gland cells in vitro. Twelve months after successful cell transplantation, the recovery of perspiration function in all the BMSC-transplanted areas was observed. The authors emphasized that the success of the sweat gland regeneration depends not only on BMSCs but also on the surgical technique. The cells need to be covered with a decellularized allogeneic dermal matrix with laser-punched holes, granulated autologous skin grafting, and allogeneic skin [69]. In another study, autologous transplantation of BMSCs in association with STSG was shown to prevent skin graft contraction when the cells were also injected into the STSG sites [70]. Mansilla et al. studied cadaveric BMSCs in a patient suffering from an extensive skin burn. After two courses of cell transplantation, epithelialization remained too slow, so autologous meshed skin was grafted. The skin healed completely without retractions. Limited hair regrowth was observed in the areas of BMSC transplantation [71]. A prospective comparative study to evaluate the regenerative potential of BMSCs and umbilical cord blood-derived mesenchymal stem cells (UC-MSCs) versus conventional early excision and grafting was performed by Abo-Elkheir et al. The authors observed a significant improvement in healing with both BMSC and UC-MSC application, with no significant difference between treatments [72]. Clinical applications of mesenchymal stem cells in burns are summarized in Table 3. Table 3. Clinical applications of mesenchymal stem cells in burns (CMSCs-cadaveric bone marrow mesenchymal stem cells, NA-not available, AMSG-autologous meshed skin grafting, FMSC-fibroblast-like bone marrow mesenchymal stem cells, SG-skin grafts, BMSCs-bone marrow mesenchymal stem cells, UC-MSCs-umbilical cord blood-derived mesenchymal stem cells, TBSA-total body surface area). Treatment of Radiation Burns with Cell Therapies Successful soft tissue reconstruction and absence of necrotic tissue, with no recurrence of the lesion at 8-month follow-up, was reported by Bey et al. Both surgical procedures and BMSC therapy were performed on a 32-year-old man with a severe radiation burn of the skin and underlying tissues in three approaches. Standard thermal burn treatment included a dermal substitute graft, but no improvement was observed. The second approach was based on muscle flap surgery and three local MSC administrations. Lack of complete healing led to the third approach: two local BMSC administrations. It resulted in a stable reconstruction of the soft tissue and complete pain relief. After BMSC administration, a decrease in blood C-reactive protein (CRP) levels was noted, leading to the conclusion that BMSCs have an anti-inflammatory effect and accelerate the healing process [73]. Portas et al. confirmed these results. They reported the use of human cadaveric BMSCs in a patient with a radiation-induced skin lesion.
The patient received three BMSC administrations, and after each application, the CRP level decreased significantly [74]. The combination of physical therapy, surgery, and local administrations of autologous BMSCs was presented in a case report by Lataillade et al. After BMSC transplantation, almost complete healing was achieved within a month [75]. Attempts have also been made to use adipose-derived stem cells (ADSCs) in burn wound treatment. Akita et al. treated patients with chronic radiation injuries with ADSCs and basic fibroblast growth factor (bFGF) sprayed over the radiation burn. The artificial dermis served as a scaffold. One part of the ADSCs was injected into the wound bed and margins; the other was placed over the artificial dermis. After bFGF application on the debrided tissue, increased angiogenesis and wound healing were observed. Fully regenerated tissue was seen during the 1.5-year follow-up, proving that ADSCs can be used to successfully treat irradiated wounds [76]. Moreover, beneficial effects have been reported from the transplant of lipoaspirates containing adipose-derived stem cells into wounds caused by radiotherapy [77]. Recently, a new technique has been developed: wound treatment with the cryopreserved placental membrane (vLPM). vLPM contains an extracellular matrix, growth factors, endogenous neonatal MSCs, fibroblasts, and epithelial cells of the native tissue. Regulski et al. published a case report of a 73-year-old patient with radiation necrosis. Since the wound did not qualify for surgical closure, 12 courses of vLPM were applied. Complete closure of the wound was observed at day 98 [12]. Clinical applications of cell-based therapies in radiation burns are summarized in Table 4. Table 4. Clinical applications of cell-based therapies in radiation burns (BMSCs-bone marrow mesenchymal stem cells; BMNCs-bone marrow mononuclear cells; ADSCs-adipose-derived stem cells; rh-bFGF-human recombinant fibroblast growth factor; vLPM-lyopreserved placental membrane containing viable cells). Summary and Perspectives While there have been a number of reports published on cell therapy for wound healing, clinically available therapies are still limited. In the last decade, several literature reviews have discussed the implications of cell transplantation in burn treatment [11,15,[78][79][80][81][82][83]. The objective of our study was to update these findings. The gathered data on cell-based therapy applications in burns confirm encouraging results and alternatives to standard care. In general, scientific evidence suggests that all presented bioengineered skin substitutes are safe. However, each of the presented strategies has its limitations and disadvantages. Moreover, caveats inherent in the clinical evidence covered in this report include differences in techniques measuring wound healing time and closure, small study groups, and a lack of information on how the recipient's general health affects cell transplant acceptance. It is also difficult to draw firm conclusions because burns vary significantly in depth, size, and causative factor. According to a multicenter experience with the treatment of partial and full skin thickness burns, the cultured autologous and allogeneic epidermis can be frozen and remains viable if stored in a skin bank [18]. There is no doubt that, in comparison to STSG, the culturing, storage, and use of keratinocytes (the most studied cells) are associated with higher costs and institutional demands.
The beneficial effect of their use has been demonstrated in several publications on burns in children, facial burns, donor sites, and in combination with mesh autologous skin grafts [84]. Unfortunately, a perfect technique does not exist. Despite the significant advantage of immunological safety, there are challenges with CEAs due to the long-term culture, insufficient donor site for extensive burn, and possible infections, as well as the high cost of this technique [4]. The use of isolated keratinocytes may represent an alternative therapy to CEA. The culturing time is much shorter, and the transplanted cells are not differentiated so that they may proliferate in the recipient after transplantation [15]. Therefore, due to the storage options and availability, they can serve as temporary coverage. Allogeneic keratinocytes are free of Langerhans cells and leukocytes-cells expressing major histocompatibility complex (MHC) class II, which results in a low probability of transplant rejection. Allogeneic keratinocytes do not remain in the transplant permanently and are replaced with the recipient cells [85]. Moreover, the area of skin serving as a source of keratinocytes isolation is crucial due to the number of epidermal stem cells and their proliferation potential [86]. As reported by Yanaga et al., cryopreserved cultured epidermal allografts have several advantages such as availability, the possibility of repeated treatments, enhanced wound closure, and does not require a donor's presence [17]. In the case of severe, extensive burns, allogeneic fibroblasts grafted on meshed STSG should be considered [67]. It might be a useful method for third-degree burn wounds treatment. Furthermore, it requires less autologous skin, which is a valuable advantage. Alternatively, allogeneic fibroblast-like bone marrow mesenchymal stem cells (BMSCs) should be applied. The high tempo of deep thermal burn wound regeneration in the presence of active neoangiogenesis was observed [68]. On the other hand, it should be kept in mind that in developing countries, synthetic skin substitutes are either not available or are very expensive. Although there is currently no significant scientific evidence suggesting that fat transplantation in acute burn wounds facilitates wound healing and improves subsequent scars, this therapeutic approach should be mentioned as a further perspective. It was confirmed that in burns, fat helps modify scar tissue by increasing vascularity and new collagen formation and deposition [87]. Several studies reported improvements in skin texture, thickness, color, and patient satisfaction [88]. Moreover, autologous stem cells from the adipose tissue of surgically debrided burned skin seem to be a promising idea. Rodney K. Chan et al. isolated these cells and proved that they could differentiate into epithelial, dermal, and hypodermal layers [89]. According to the authors, these results indicate that stem cells isolated from debrided skin can be used as a single autologous cell source to develop a vascularized skin construct without culture or addition of exogenous growth factors. This technique may provide an alternative approach for cutaneous coverage after extensive burn injuries. A very recent case study demonstrated that the adipose-derived stromal cells seeded onto a collagen-based matrix, Integra ® DRT exhibit valuable properties that may improve post-excision wound healing and facilitate skin regeneration without scars [90]. 
Unfortunately, it is difficult to assess this method's effectiveness and establish a consensus due to the small number of studies. Moreover, there is still a lack of randomized controlled trials supporting the efficacy of fat and adipose stem cell transplantation in burns. Additionally, randomized controlled trials on MSCs isolated from bone marrow and adipose tissue are necessary to determine if these therapies are effective. However, such studies are unlikely to be carried out on patients with extensive, deep burns because these burns are rare and usually involve complex clinical decisions using different therapies that may vary between patients. Therefore, randomized trials of patients with smaller burns are recommended, as these burns occur more frequently, and the collection of data from small study groups may be more manageable. In addition, these studies should be performed with the longest possible observation time to assess the long-term safety and efficacy of cell-based therapies. Furthermore, other approaches not discussed here seem to have significant therapeutic potential, including epidermal stem cells from hair follicles, embryonic cells, or induced pluripotent stem cells; however, future clinical trials will determine their effectiveness. Conclusions Although much progress has been made to demonstrate the effectiveness of cell therapies in burns, there are still scientific and technical challenges that need to be solved to introduce cell therapies into standard clinical practice. Further evidence, including clinical trials, as well as studies assessing cell graft take and survival, is needed to demonstrate the clinical efficacy and safety of cell therapies in burns. Furthermore, the cost-effectiveness balance for cell therapy products is also a challenge. Finally, there is a high demand to determine the fate of transplanted cells and the number and type of cells required to obtain the best clinical outcome, as well as the most effective delivery system. We hope that our review will create a basis for further clinical studies and experimental research.
Atomistic Simulation Informs Interface Engineering of Nanoscale LiCoO2 Lithium-ion batteries continue to be a critical part of the search for enhanced energy storage solutions. Understanding the stability of interfaces (surfaces and grain boundaries) is one of the most crucial aspects of cathode design to improve the capacity and cyclability of batteries. Interfacial engineering through chemical modification offers the opportunity to create metastable states in the cathodes to inhibit common degradation mechanisms. Here, we demonstrate how atomistic simulations can effectively evaluate dopant interfacial segregation trends and be an effective predictive tool for cathode design despite the intrinsic approximations. We computationally studied two surfaces, {001} and {104}, and grain boundaries, Σ3 and Σ5, of LiCoO2 to investigate the segregation potential and stabilization effect of dopants. Isovalent and aliovalent dopants (Mg2+, Ca2+, Sr2+, Sc3+, Y3+, Gd3+, La3+, Al3+, Ti4+, Sn4+, Zr4+, V5+) were studied by replacing the Co3+ sites in all four of the constructed interfaces. The segregation energies of the dopants increased with the ionic radius of the dopant. They exhibited a linear dependence on the ionic size for divalent, trivalent, and quadrivalent dopants for surfaces and grain boundaries. The magnitude of the segregation potential also depended on the surface chemistry and grain boundary structure, showing higher segregation energies for the Σ5 grain boundary compared with the lower energy Σ3 boundary and higher for the {104} surface compared to the {001}. Lanthanum-doped nanoparticles were synthesized and imaged with scanning transmission electron microscopy-electron energy loss spectroscopy (STEM-EELS) to validate the computational results, revealing the predicted lanthanum enrichment at grain boundaries and both the {001} and the {104} surfaces. ■ INTRODUCTION Lithium-ion batteries continue to be an integral part of the rechargeable battery industry and the search for sustainable energy storage. Although lithium-ion technologies have been widely utilized over the past few decades, energy content and charging rates are still insufficient to meet automotive energy demands. 1 Nanomaterials offer potential improvements to enhanced battery operation kinetics through the increased surface area, shortening of diffusion path lengths, and increased rates of lithium intercalation. 2 However, the main degradation mechanisms, transition metal dissolution, reactivity to the electrolyte, and intergranular cracking, are exacerbated at the nanoscale, leading to catastrophic decreases in capacity after a few cycles. 3 Many of the problems in nanoscale cathodes directly result from their thermodynamic instabilities. A significant fraction of atoms are located at interfacial regions in nanomaterials, bringing intrinsic excess energies to the system. 4,5 A potential method for stabilizing surfaces and grain boundaries is the compositional design to provoke dopant segregation, also known as interfacial excess. Following derivations from the Gibbs adsorption isotherm, 6 interfacial excesses of solid solutes can reduce stress energies and increase the overall stability of nanomaterials. 7 Nakajima et al. recently explored scandium doping of LiMn 2 O 4 nanoparticles and directly measured the doping effects on surface and grain boundary energies. 8 The data showed decreasing interfacial energies with the scandium doping and preferential scandium segregation to the grain boundaries. 
The results align with other studies using this 'interfacial engineering' to stabilize catalytic supports and other nanostructured oxides. 9,10 In parallel, Wang et al. showed that dopant segregation enhances cathodes' cyclability through suppressed intragranular cracking and increased mechanical strength. 11 Although the authors did not discuss interfacial energies, interfacial segregation always has a cause−effect relationship with the local energies. The work exploits the relationship between interfacial mechanical strength and thermodynamics, as recently reported. 12,13 It is important to note that interfacial excess differs from coating technologies. 14 The first is a spontaneous phenomenon driven by thermodynamics that does not require additional processing steps and does not constitute a separate phase. There is still an overall lack of thermodynamic data on dopant segregation correlations with interfacial energies in relevant technological systems, such as lithium-ion structures, to enable effective design for performance. 15−17 In this work, we used atomistic simulations to study relevant interfaces in nanoscale LiCoO 2 (LCO) to investigate the segregation potentials of dopants to surfaces and grain boundaries. The goal is to inform experiments regarding dopant selection criteria for interfacial energy design. Two representative surfaces, {001} and {104}, and two low-index grain boundaries, Σ3 and Σ5, were constructed using atomistic models and energetically minimized. Different dopants substituted individual cobalt sites in the structure to map the simulation cell energy at different dopant positions. Divalent, trivalent, and tetravalent dopants with different ionic radii were introduced into the systems to explore the physical−chemical impacts on the relative segregation energy. Overall, dopants showed higher segregation energy at {104} surfaces than at {001}, and higher segregation energies for Σ5 as compared to Σ3. Moreover, the segregation energies increased with the atomic radius. Informed by the simulation results, LCO nanoparticles were synthesized and doped with the element with the highest segregation energy, lanthanum. The results suggest simulations can satisfactorily predict segregation in cathode materials despite the assumptions made, but more quantitative segregation experiments are needed to establish more reliable models for engineering applications. ■ METHODS Atomic Simulations. The atomistic simulations were performed within the LAMMPS framework, 18 and all simulations were conducted with three-dimensional periodic boundary conditions in all directions. We applied standard Coulomb−Buckingham potentials to model the two-body atomic interactions. 19 The Buckingham potential models the energy for the short-range interactions between particles. The additional Coulombic potential term models the electrostatic potential energy of the long-range interaction between ionic charges summed using Ewald's method. 20 The cutoff distance for all two-body interactions in the simulations was 8.0 Å, and the Buckingham potential parameters for all species considered are shown in Table 1. We note that while there are other potentials for the Li− Co−O system, including some that describe charge transfer, 21,22 this parameter set is the only parameterization we found for which LCO was stable and that had transferable parameter sets consistent with the same O 2− −O 2− interaction for the dopant species. 
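To make the form of the interatomic model concrete, the sketch below evaluates the Buckingham plus point-charge Coulomb energy for a single ion pair. The parameter values are illustrative placeholders, not the fitted entries of Table 1, and the bare 1/r term stands in for the Ewald-summed electrostatics used in the actual LAMMPS runs (pair_style buck/coul/long with a kspace solver).

import math

# Illustrative Buckingham parameters (A in eV, rho in Angstrom, C in eV*Angstrom^6).
# Placeholder values only; they are NOT the fitted parameters of Table 1.
BUCKINGHAM = {
    ("La3+", "O2-"): (1500.0, 0.35, 0.0),
    ("O2-", "O2-"): (9500.0, 0.22, 30.0),
}

KE = 14.399645  # Coulomb constant e^2/(4*pi*eps0) in eV*Angstrom


def pair_energy(sp_i, sp_j, q_i, q_j, r, cutoff=8.0):
    """Two-body energy (eV) of one ion pair at separation r (Angstrom).

    The short-range Buckingham term is truncated at the 8.0 Angstrom cutoff
    used in this work; the bare point-charge Coulomb term shown here is, in
    the real simulations, replaced by an Ewald sum over periodic images.
    """
    key = tuple(sorted((sp_i, sp_j)))
    if r <= cutoff and key in BUCKINGHAM:
        A, rho, C = BUCKINGHAM[key]
        short_range = A * math.exp(-r / rho) - C / r ** 6
    else:
        short_range = 0.0
    return short_range + KE * q_i * q_j / r


# Example: a hypothetical La3+ - O2- pair at 2.5 Angstrom separation.
print(pair_energy("La3+", "O2-", +3.0, -2.0, 2.5))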
The layered O3 trigonal LiCoO 2 (space group R-3m) unit cell was obtained from The Materials Project (ID: mp-22526). 32 Two low-index surfaces and grain boundaries were constructed to study the segregation profile of ten different dopants. The two design constraints used for building the interfaces were (a) maintaining the stoichiometry of the structure by not deleting or adding any atoms and (b) modifying polar surfaces to remove any surface dipoles. One polar surface, {001}, and one nonpolar surface, {104}, were studied due to their stability, prevalence in the LCO structure, and expected low surface energies. 33 For the polar {001} surface, several terminations could be considered based on the cleavage plane chosen. According to Hu et al., the cobalt layer termination is an unstable configuration that causes a mix of trivalent and tetravalent cobalt ions on the surface layer, leading to numerous surface configurations of cobalt ions with different oxidation states. 34 The two possible oxygen terminations also have low stability and require a strongly reducing environment to stabilize the surface oxygen. Due to the instability of the cobalt and oxygen terminations, the lithium termination is the preferred orientation for the {001} surface. 35 One crucial consideration of the slab geometry for Tasker Type III surfaces, such as the {001} surface studied here, is to prevent surface dipole moments that cause the surface energy to diverge. 36 The surface dipole is counteracted by moving half of a monolayer of lithium from the top surface to the bottom surface; the resulting surface is illustrated in Figure 1a. As described by Kramer and Ceder, 35 that structure has an equal charge of +1/2 at both surface layers and a net charge of −1 in the bulk. This leads to a global charge balance of the stoichiometric slab while ensuring Co remains in the trivalent oxidation state. It also ensures that the two resulting surfaces have a very similar, if not identical, atomic structure. The vacancy configuration of the surface was modeled after the work of Ceder and Van der Ven by moving every other lithium row to the opposite surface of the structure. 37 This configuration of the surface lithium atoms is the lowest surface energy arrangement that Ceder and Van der Ven constructed (a schematic of this half-monolayer relocation is sketched in the code example below). The designed slab had dimensions of 1.7 × 1. Two low-index grain boundaries were also studied to understand dopant segregation profiles and interface stabilization at grain boundaries. An atomic model of a Σ3 grain boundary of LCO was constructed using GB-code 38 and VESTA, 39 with a common rotation axis between the two grains (Figure 1c). We considered the conventional cell of LCO first to construct the Σ3 GB using GB-code without specifying the chemical identity of the atoms. Next, we used VESTA to assign the chemical identities. That boundary represents the simplest and lowest energy GB structure in most materials and has dimensions of 0.8 × 1.0 × 10.3 nm 3 . The Σ5 grain boundary, representing a higher energy interface but still structurally simple, was designed using the Aimsgb Python framework for building periodic grain boundaries. 40 The tilt boundary was constructed with a common rotation axis between the two grains along the {001} plane and by orienting the grain boundary plane along the {120} plane. An additional interfacial distance of 1.0 Å was added between the two grains to prevent overlapping atoms and allow the minimizations to converge. The structure dimensions were 0.8 × 8.8 × 1.4 nm 3 , with an xy skew of 3.2 nm, as shown in Figure 1d.
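To illustrate the dipole-removal step for the polar {001} slab described above, the sketch below relocates every other lithium row from the top termination to the bottom of the slab. It operates on a plain array of Cartesian coordinates; the helper criteria (layer tolerance, row grouping, default shift) are assumptions that would have to be matched to the actual slab geometry and to the lowest-energy vacancy arrangement discussed in the text.

import numpy as np


def depolarize_001_slab(symbols, positions, row_axis=1, surf_axis=2,
                        layer_tol=0.5, drop=None):
    """Move alternate Li rows from the top {001} termination to the bottom.

    symbols   : list of element strings, e.g. ["Li", "Co", "O", ...]
    positions : (N, 3) array of Cartesian coordinates in Angstrom
    row_axis  : in-plane axis along which the Li rows are indexed
    surf_axis : axis normal to the {001} surface
    layer_tol : thickness (Angstrom) used to select the outermost Li layer
    drop      : translation applied to the moved atoms; defaults to minus the
                slab thickness (top-layer atoms end up below the bottom layer)
    Returns a modified copy of the positions.
    """
    pos = np.array(positions, dtype=float)
    z = pos[:, surf_axis]
    is_li = np.array([s == "Li" for s in symbols])

    # Li atoms belonging to the top-most lithium layer.
    top_z = z[is_li].max()
    in_top_layer = is_li & (z > top_z - layer_tol)
    idx = np.where(in_top_layer)[0]

    # Group the top-layer Li atoms into rows and keep every other row.
    row_coord = np.round(pos[idx, row_axis], 2)
    rows_to_move = set(np.unique(row_coord)[::2])
    moved = idx[np.array([r in rows_to_move for r in row_coord])]

    if drop is None:
        drop = -(z.max() - z.min())
    pos[moved, surf_axis] += drop
    return pos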
All four designed structures were energetically minimized by anisotropically relaxing the atoms and simulation cells before any dopant replacements. For the two grain boundaries, the grains were translated in both directions parallel to the grain boundary in 0.1 Å increments and energetically minimized at each position. The γ surface mapping provides an energy landscape of the grain boundary with respect to the relative translation of the grains. The lowest energy structure was used for the dopant studies. The dopants selected for this study covered a range of ionic radii and oxidation states: four isovalent dopants were chosen (Sc 3+ , Y 3+ , Gd 3+ , La 3+ ), as well as six aliovalent dopants consisting of three divalent dopants (Mg 2+ , Ca 2+ , Sr 2+ ) and three tetravalent dopants (Ti 4+ , Sn 4+ , Zr 4+ ). The segregation profiles of these dopants were studied by replacing one Co 3+ atom with a dopant and allowing the structure to relax through energy minimization while holding the simulation cell dimensions constant. The process was repeated, one by one, for each Co 3+ in the structure, and the system's energy was computed for each dopant position. The difference between the energy of a dopant in the bulk and the energy of the dopant at a surface or a grain boundary was used to calculate the segregation energy (E seg ). The surface energy or grain boundary energy (γ) of the undoped interfaces was calculated from the energy difference between a slab with two interfaces (surfaces/grain boundaries), E int , and a bulk slab geometry with the same number of atoms, E bulk . This is shown in eq 1, 41 where 2A accounts for the interfacial area of the two surfaces/grain boundaries: γ = (E int − E bulk )/(2A) (1). A schematic implementation of the site-by-site substitution scan and of eq 1 is sketched in the code example after the experimental details below. Experimental Section. Doped and undoped nanoparticles of LCO were synthesized by adapting protocols developed by Okubo et al. 3 The coprecipitation method was performed by dissolving 20 mmol of Co(NO 3 ) 2 ·6H 2 O into 100 mL of deionized (DI) water and preparing 100 mL of a 5 M NaOH solution. For the doped nanoparticles, the amount of cobalt nitrate was reduced and replaced with 1 or 2 mol % of the dopant in the nitrate form. The nitrate solution was slowly added to the basic NaOH solution to precipitate the Co(OH) 2 nanoparticles and then diluted into 1,800 mL of DI water. The diluted suspension was oxidized by bubbling air through the stirred suspension for 48 h to yield the CoOOH nanoparticles. The CoOOH nanoparticles were centrifuged and washed with DI water 5 times and dried at 80°C overnight. The precipitates were ground in a mortar and pestle, and 500 mg was stirred into a 133 mL aqueous solution containing 1 M LiOH. The suspension was added to a 200 mL stainless steel autoclave with a PTFE liner and placed in a furnace. The furnace was heated to 180°C at 0.5°C/min and held for 12 h; then the autoclave was cooled at 1°C/min to 100°C and removed to cool at room temperature. The LCO precipitate was washed and centrifuged in water 4 times and dried at 80°C overnight. X-ray diffraction patterns were obtained with a Bruker AXS D8 Advance powder diffractometer (Cu Kα radiation, λ = 1.5406 Å) at 40 kV and 40 mA. Jade MDI software was used to confirm crystallographic phases and lattice constants. Crystallite sizes were calculated from whole-profile fitting using the Scherrer equation. 42 Raman spectra were collected on a Renishaw confocal Raman microscope with a 785 nm laser at 50% intensity and 30 s measurement time.
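As referenced above, the site-by-site substitution scan and the eq 1 bookkeeping reduce to a short loop once a minimizer is available. In this sketch, minimize_energy and the copy/replace helpers are hypothetical stand-ins for driving the fixed-cell LAMMPS minimizations (for example through the lammps Python interface or by writing data files and parsing log output); they are not an existing API.

def segregation_scan(structure, cobalt_sites, dopant, minimize_energy):
    """Substitute one Co3+ site at a time and record the minimized cell energy.

    structure       : object describing the relaxed interface slab (hypothetical)
    cobalt_sites    : indices of the Co3+ sites to be tested
    dopant          : symbol of the substituting ion, e.g. "La"
    minimize_energy : callable(structure) -> total energy in eV
    Returns {site_index: minimized_energy}.
    """
    energies = {}
    for site in cobalt_sites:
        doped = structure.copy()          # hypothetical deep-copy helper
        doped.replace(site, dopant)       # hypothetical substitution helper
        energies[site] = minimize_energy(doped)
    return energies


def segregation_energy(energies, bulk_sites, interface_sites):
    """E_seg = E(dopant in bulk) - E(dopant at interface); positive values favor segregation."""
    e_bulk = min(energies[s] for s in bulk_sites)
    e_interface = min(energies[s] for s in interface_sites)
    return e_bulk - e_interface


def interfacial_energy(e_slab_with_interfaces, e_bulk_slab, area_A2):
    """Eq 1: gamma = (E_int - E_bulk) / (2A), converted from eV/Angstrom^2 to J/m^2.

    area_A2 is the cross section of ONE interface in Angstrom^2; the factor of
    2 accounts for the two equivalent interfaces in the periodic cell.
    """
    EV_PER_A2_TO_J_PER_M2 = 16.0218
    return (e_slab_with_interfaces - e_bulk_slab) / (2.0 * area_A2) * EV_PER_A2_TO_J_PER_M2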
Scanning transmission electron microscopy (STEM) coupled with electron energy loss spectroscopy (EELS) revealed the morphology of the nanoparticles and mapped dopant distribution. JEOL Grand ARM 300CF equipped with Gatan GIF Quantum with K2-summit was used for the study, operating at 300 keV. ■ RESULTS: ATOMISTIC SIMULATIONS The first studies focused on the segregation potential of isovalent and aliovalent dopants on the minimized LCO surface structures. Figure 2 shows an example segregation profile acquired for La 3+ at the nonpolar {104} surface. The plot shows the minimized energy of the system versus the dopant position in the crystal structure. Each cobalt atom was substituted by La 3+ one at a time, and the structural energy was minimized to evaluate the most favorable replacement site. The presented graph had surfaces on both sides of the cell, at +24 and −24 Å, with the positions near 0 Å representing the crystal bulk. In these calculations, the dopant minimizes the system energy further when placed near the surfaces. The energy difference between the state with the dopant replaced in the bulk value and the surface substituted dopants gives the segregation energy for the individual atom, which is 5.7 eV for La 3+ at the {104} surface. These simulations also explain the energetic trends and associated structural arrangements at and near the surface regions. For example, as seen in Figure 2, La 3+ ions located at the surface and in the second atomic layer from the surface both protrude outward toward the surface. The behavior shifts the dopant from the cobalt site and can displace other ions around it. The bulk energy values are nearly achieved when La 3+ is at the third atomic layer from the surface, and the dopant remains close to the initial cobalt position. The relative asymmetry in the plot between surfaces, particularly for the 2nd and 3rd internal atomic layers, refers to local energy minima associated with the large ionic radius of La 3+ . Small shifts in the La 3+ positions may impact the stability of neighboring sites and, therefore, the system's overall energy. However, the primary conclusions regarding the most stable sites and the segregation energy are similar for both surfaces. Figure 3 shows an example of the energy profile when doping LCO with La 3+ in the presence of Σ3 grain boundaries. This profile illustrates two grain boundaries, with one in the middle of the structure at 0 Å and another located on the edges of the cell created as a consequence of the periodic boundary conditions. Similar to the surface case, La 3+ promotes lower energy to the system when segregated to the grain boundary regions. This case results in a spontaneous segregation energy of 3.3 eV, which is slightly lower than the {104} surface and highlights that the dopants may have different affinities for different interfaces based on the thermodynamic stability and coordination of the atoms at the given interface. The calculations also provide insights into the favorable dopant positions. For the Σ3 grain boundary, the system shows the lowest energy when the atoms sit exactly at the interface. However, if substituted in the second atomic layer from the interface, the dopant causes an increase in the energy, suggesting that this substitution is less likely to occur. Since the unfavorable energy is mirrored on both sides of the grain boundary, the phenomenon creates an energetic trap that should limit the dopant mobility across grain boundaries. 
The pattern was observed for all tested dopants, but the magnitude of the second layer energy deviation depended on their ionic radius. In general, dopants with larger ionic radii, such as lanthanum, presented higher segregation energies (∼3.3 eV) and higher energy aberration in the second layer (∼0.4 eV), while smaller dopants, such as scandium, showed lower segregation energies (∼2.0 eV) and lower energy aberrations (∼0.3 eV). Figure 4 shows the compiled results of the segregation energy plotted against the ionic radius of the isovalent dopants for the two surfaces and two grain boundaries. The segregation energy increases with the ionic size of the dopant, with a clear, albeit different, linear trend for each of the interfaces. The ability of an interface to accommodate a foreign ion is related to its intrinsic thermodynamic stability. According to density functional theory (DFT) studies by Kramer and Ceder, the {001} surface is one of the most stable surface planes in the LCO structure, with a theoretical surface energy of 1.00 J/m 2 for the termination with a one-half monolayer of lithium at the surface. 35 Their study also points out that the {104} surface is one of the most stable nonpolar surfaces because it has minimal coordination loss compared to other nonpolar surfaces. However, it does have a slightly higher surface energy of 1.05 J/m 2 compared to the polar {001} surface. This difference could be the cause of the stronger thermodynamic driving force for segregation to the {104} surface. This driving force leads to higher segregation energies to {104} surfaces, a consequently more significant reduction in the surface energy, and an overall more thermodynamically favorable accommodation. In the present study, the calculated surface energies from eq 1 were 2.21 J/m 2 for {001} and 1.75 J/m 2 for {104} surfaces. Despite the numerical differences when compared to Kramer and Ceder's report 35 and other first-principles DFT studies, 43 we also found the surface energies to be close in relative values, with the {001} surface having higher energy. The fact that DFT yields lower energies indicates a limitation of the potentials used in the present work. However, this was the only set of potentials that both predicted a stable LCO surface structure and had interactions for the numerous dopant species considered in this study. The relative consistency with recent results, the self-consistency, and the experimental confirmations presented later in this work indicate that although the absolute values may be off, the predicted basic physical trends concerning segregation are reliable. The two studied grain boundaries, Σ3 and Σ5, also presented energetic differences affecting the segregation trends. At the Σ3, the atoms are more coordinated, and the structure shares more atoms at the coincidence site lattice. Equation 1 enabled the estimation of the difference in grain boundary energy between the two structures using the bulk energy of a slab structure with no interfaces and the same number of atoms. From this calculation, the Σ3 boundary showed an excess energy of 0.59 J/m 2 , while the energy for the Σ5 boundary was 3.63 J/m 2 . The Σ5 boundary shows significantly higher energy than the Σ3 and supports the inference that the Σ5 is more disordered and atomically less coordinated. High energies are consistent with the covalent nature of LiCoO 2 . The directional characteristic of covalent bonds increases energies due to the significant bond angle distortions.
The higher energy leads to stronger segregation potentials, as dopants can alleviate the local stresses by increasing the coordination. In addition to the isovalent doping, several aliovalent ions (Mg 2+ , Ca 2+ , Sr 2+ , Ti 4+ , Sn 4+ , Zr 4+ ) were tested to study the impact of dopant oxidation state on the segregation behavior. Figure 5a shows the segregation potential of all 10 dopants as a function of the ionic radius for the Σ3 and Σ5 grain boundaries, while Figure 5b shows the segregation potentials for the studied surfaces. A few unique cases from the simulations with aliovalent dopants arose during the dopant replacements, and those are discussed briefly in the Supplemental Information (Figure S1). The linear trend of increasing segregation energy with ionic radius remained consistent for all oxidation states of the dopants. However, the linear dependence is different for each oxidation state and interface, providing interesting insights for dopant selection. One observation is that the segregation energy increases as the oxidation state of the dopant increases. For example, dopants of similar ionic radius but different charge states, e.g., Mg 2+ and Zr 4+ , had segregation energies scaling with the charges, i.e., 0.7 and 2.0 eV, respectively, for the Σ3 boundary. Consistently, the Σ5 boundary again had higher segregation energies than the Σ3 boundary due to the higher structural disorder, but with similar trends with respect to the oxidation state of the dopants. Interestingly, the results show that all dopants, regardless of size and charge, had favorable segregation energy. The doped surface structures shown in Figure 5b exhibit similar linear trends to the grain boundaries. This implies all could potentially be used to control interfacial energies, but some had a more pronounced impact. It is tempting to select the dopants with the highest computed segregation energies, La 3+ or Sr 2+ , to attempt an interfacial engineering protocol, as those would present the highest thermodynamic driving force. However, one should keep in mind that the presented atomistic simulations do not consider the possibility of nucleation of a second phase. As discussed in more detail by Castro, 44 a saturation of interfacial sites by a dopant can eventually lead to the formation of second phases. The formation of a precipitate is typically undesirable as it compromises electrochemical properties. This was recently observed in La 3+ -doped MgAl 2 O 4 , in which a lanthanum-rich precipitate formed after saturation of the interfacial sites. 45 While such extreme segregation energies may thus not be beneficial, very low segregation energies, as found for Ti 4+ , whose ionic radius is much closer to that of Co 3+ , may not provide a high enough segregation potential at dilute concentrations and will offer very little stability enhancement at the interfaces. Additionally, Al 3+ and V 5+ dopants were studied due to their small ionic size and ability to enhance some aspects of battery stability (Table S1 and Figure S2). 46,47 The aluminum dopant shows low segregation energies for both the {104} surface and the Σ3 boundary, which follows what has been observed in the literature. 46 The aluminum dopant has neither a charge mismatch nor significant elastic strain to drive it to the interface and therefore remains a bulk dopant. The vanadium not only remains a bulk dopant for the {104} surface but also appears to be thermodynamically unstable at the surface.
This could be due to the limitation of a different O 2− −O 2− potential parameter or a strong repulsion at the surface from the higher oxidation state ion. There is a small segregation energy of 0.86 eV for the Σ3 boundary that shows the grain boundaries' ability to accommodate the excess charge from the vanadium ion. There are mixed results on the ability of vanadium doping to improve electrochemical performance; however, the impact of vanadium as an interfacial dopant in nanoscale materials could be vastly different from that of bulk doping of cathodes. 48 The segregation energies also show that ions of a similar ionic size to cobalt can still segregate due to the higher oxidation state of the dopant, but the driving force may be small depending on the oxidation state. ■ RESULTS: EXPERIMENTS To confirm the segregation predictions, we selected La 3+ as a dopant at a concentration low enough not to saturate the available interfacial areas, assuming the limit as a monolayer coverage (below 2 mol %). Nanoparticles of lanthanum-doped and undoped LCO were synthesized through a hydrothermal synthesis method. The X-ray diffraction (XRD) patterns of the undoped and doped LCO are shown in Figure 6 and show no evidence of secondary phase formation due to the dopant. Traces of the Co 3 O 4 secondary phase are present in all three samples, but compared to the intensity of the LCO peaks, the amount of the second phase is estimated to be below 1 wt % by Rietveld refinement. 49 Raman spectra of the doped and undoped LCO in Figure 7 also support the absence of secondary phases caused by excess dopant segregation. The spectra confirm the presence of LCO with the characteristic peaks around 485 and 495 cm −1 . 50 The Co 3 O 4 secondary phase peaks were also confirmed in both samples. 51 The Raman measurements corroborate the XRD results and show none of the expected lanthanum secondary phases (La 2 O 3 and LaCoO 3 ) forming from excess dopant segregation. 52,53 Table 2 shows the calculated lattice parameters from a whole pattern fitting. The synthesized LCO can crystallize into either a layered or spinel-type structure with similar XRD patterns. 54 Gummow and Thackeray showed that a c/a parameter of ∼5.0 indicates a layered-type structure, and values closer to 4.9 indicate a spinel-type form. The doped and undoped c/a parameters are close to 5.0 and show that the doped system maintains the layered structure. The lanthanum doping had a minimal effect on the parameter a and caused a slight decrease in the parameter c. In truth, dopants forming a solid solution would be expected to expand the lattice. 55 That would be particularly expected in this case since La 3+ has a significantly larger ionic radius than Co 3+ . Therefore, the lack of structural expansion is already indirect evidence of segregation. The lattice shrinkage could be attributed to the stress induced by the segregated dopants or the observed reduction in crystallite size, as seen in Table 2. The interfacial energy reduction caused by segregation lowers the coarsening driving force independent of the growth mechanisms, leading to smaller crystallite and particle sizes at a given annealing temperature. 56,57 The results are consistent with the BET surface area shown in Table 2, indicating higher surface areas for the doped samples due to surface stabilization. The XRD patterns also show that La 3+ doping changes the relative intensities of certain planes in the LCO structure. In undoped LCO, the ratio of the {104} peak to the {003} is 0.90, but for the doped samples, it is above 1.05.
The observation is consistent with the work from Okubo et al., where they show the {003} peak intensity decreases as particle size decreases due to the nanoplatelet morphology of the particles. 3 Figure 8 shows the STEM and EELS images of the 2% Ladoped LCO nanoparticles after calcination at 600°C. Figure 8a−c confirms the nanoscale dimension and shows the expected nanoplatelet morphology with varying thicknesses of 10−20 nm. Figure 8a indicates that particles are partially connected, with a grain boundary indicated by arrows. Figure 8b shows the EELS composed color mapping demonstrating a concentrated green color around the edges of the particles, depicting the La 3+ enrichment at both the surfaces and the grain boundaries. The center of the particles had a more purple hue because of the higher fraction of cobalt (blue) and oxygen (red). Figure 8b still shows lanthanum atoms in the center of the nanoparticles. However, most of the nanoparticles in the image are lying flat and showing the {001} surface on the top and bottom of the particle. 43 The platelike morphology makes it challenging to determine if the lanthanum is at the {001} plane or remains in the bulk structure since electrons are transmitting through the sample. Figure 8c shows a particle oriented perpendicularly, allowing visualization axially along with the a parameter to identify the fringes of the c-spacing consistently with LCO layered structure. While the top surface is attributed to the {001} plane, the edges of the particles can be assigned to {104} and {012} surfaces. 43 Figure 8c also shows evidence of lanthanum enrichment along the {001} surface plane indicated by the phase contrast between cobalt and lanthanum atoms. The segregation of La 3+ to {001} is confirmed in the color mapping in Figure 8d. Figure 9a shows the box scan measurement of the {001} surface from Figure 8d and displays the highest peak intensity of lanthanum at 8 nm near the surface of the particle. At the same distance, the cobalt and oxygen-normalized intensity dips near the surface, which confirms the lanthanum enrichment near the {001} surface. Note that because the particles overlap (see the box in Figure 8d), the scan shows positive signals for O, Co, and La on either side of the peak position despite the fact the measurement is looking at a surface. Atoms that are from background particles are marked by hollow symbols in the box scan plots to allow better visualization. Figure 9b shows the box scan results from the {104} surface shown in Figure 8d. This scan also shows an enrichment of La near the surface and confirms the thermodynamic driving force directing La atoms to all interfaces in LCO. It appears the lanthanum has such a strong segregation potential that there is no preferential doping of specific interfaces, and it distributes across all surfaces and grain boundaries shown here. This conclusion matches the atomistic calculations of lanthanum segregation that revealed lanthanum had one of the highest segregation energies compared to the dopants studied in all four of the constructed interfaces. Noteworthy, in both segregation profiles, one observes that oxygen dips when La peaks at the interfacial regions. That suggests that La does not simply replace Co, as assumed in our atomistic calculations but that more complex reactions might be occurring. However, the observed experimental segregations confirm the trends regarding the relative segregation potential of different dopants are reasonable despite this approximation. 
■ DISCUSSION The atomistic simulations enabled screening over many dopants that could potentially segregate to surfaces and grain boundaries of LCO. The motivation was to find dopants that would potentially lower excess energies in the system, enabling greater thermodynamic stability in nanocrystalline cathodes. Out of the proposed dopants, La 3+ had one of the highest segregation energies and therefore was selected for the experimental studies. The synthesis and characterization demonstrated La 3+ ions segregated to surfaces and grain boundaries as predicted by the simulations. The results are very encouraging since a simulation-informed design of experiments provides a methodology for relatively quickly and inexpensively streamlining experimental investigations. The method helps overcome the existing challenges in obtaining experimental thermodynamic data on interfacial energies and segregation enthalpies in oxides and could open new avenues in this area. Although segregation is not a new concept in cathode doping, the connection between ion segregation and interface thermodynamic stability makes this work very relevant to the development of stable nano- and micromaterials for lithium-ion battery technologies, which can extend the battery's lifetime. 58 Additionally, to improve cyclability, the computational model helps determine the tendencies of segregation for different dopant chemistries to specific interfaces for the design of purposefully anisotropic particles. In LCO, the {001} surface is not an active surface for lithium diffusion, and the {104} surface is one of the most active surfaces since lithium ions prefer to move along layers and not across cobalt layers. 59 This model can therefore guide the design of particles with morphologies that maximize the fraction of {104} surfaces, which in turn will enhance lithium diffusion and battery performance. Additionally, it is reported that lithium diffusion along grain boundaries can play a critical role in the electrochemical performance of cathodes. 60,61 The stabilization of grain boundary networks can be critical for preventing failure mechanisms like intergranular cracking or coarsening and morphological changes during electrochemical cycling. 1,62−64 In theory, future models could be developed to design ionically and electronically conductive grain boundaries for fast lithium and electron transport. This type of energetic and morphological engineering is only possible because of the segregation behavior of dopants in nanoscale materials, and more thermodynamic understanding is necessary. In truth, there were a number of assumptions and limitations in the atomistic simulations that enabled the extensive search through ten different dopants with varying ionic size and charge across the four structures considered. One of the most relevant approximations in the interatomic potentials was fixing the cobalt oxidation state to the trivalent state. It is well known that cobalt can assume several oxidation states in LCO, especially during lithium cycling. Hence, some changes to segregation energy values may occur if the cobalt were allowed to change oxidation state near an interface or in the presence of aliovalent dopants. However, despite multiple attempts, the study could not find interatomic potential parameters for a charge-transfer model that could accommodate the wide range of studied dopants. A charge-transfer potential that could accommodate a subset of dopants and delithiated structures could provide insight into segregation behavior as cathodes are cycled within the battery.
Related to this point, particularly when aliovalent dopants are considered, other chargecompensating reactions might also occur to stabilize the incorporation of those dopants. Indeed, past work has shown such effects at grain boundaries. 65 However, we expect that our results are still useful for identifying dopants with higher tendencies to segregate to surfaces and interfaces. Finally, only low surface energy surfaces and low-index grain boundaries were evaluated for this work. It would be valuable for future work to construct higher energy interfaces and more surfaces to look for other trends in more complex structures and as a predictive tool for morphology evolution. ■ CONCLUSIONS Atomistic simulations were used to construct four interfaces, two low energy surfaces, and two low-index grain boundaries and study the dopant segregation behavior. By inserting dopants into the bulk of the structure and at the interface, the segregation energies of ten dopants with different ionic radii and charges were calculated. The results demonstrated the linear dependence of segregation energy on the ionic radius of the dopant, where dopants with larger ionic radius had higher segregation energies. Additionally, dopants with a higher oxidation state exhibited higher segregation energy than other dopants of the same ionic size but lower oxidation state. For example, Zr 4+ and Mg 2+ have a similar ionic radius, but Zr 4+ had larger segregation energies for all four interfaces studied. The magnitude of the segregation energy was highly dependent upon the specific surface and grain boundary structure. This behavior shows that the thermodynamic driving forces of each dopant depend not only on the chemical nature of the dopant but also on the detailed interfacial atomic environment. The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.chemmater.2c01246. Simulation special cases revealing some segregation anomalies and how they were dealt with; segregation profiles to {100} surfaces and Σ5 grain boundary; results for Al 3+ and V 5+ segregation to Σ3 and Σ5 grain boundaries, and to {104} surfaces (PDF)
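The linear trend summarized in the conclusions (larger dopant radius, larger segregation energy, with an extra offset for higher oxidation states) is the kind of relation that can be extracted with an ordinary least-squares fit per charge state. The sketch below shows only the bookkeeping; the radii and energies in it are invented placeholders, not the values computed in this work.

```python
import numpy as np

# Invented placeholder data: Shannon ionic radius (Angstrom) vs. segregation
# energy (eV) for a set of isovalent dopants at one interface.  The real values
# come from the atomistic calculations summarized in the text.
radius = np.array([0.605, 0.72, 0.86, 0.90, 1.03, 1.16])
e_seg  = np.array([0.35, 0.62, 1.05, 1.18, 1.60, 2.05])

slope, intercept = np.polyfit(radius, e_seg, 1)
predicted = slope * radius + intercept
r_squared = 1.0 - np.sum((e_seg - predicted) ** 2) / np.sum((e_seg - e_seg.mean()) ** 2)

print(f"E_seg ~ {slope:.2f} eV/Angstrom * r + {intercept:.2f} eV   (R^2 = {r_squared:.3f})")
# A separate fit per oxidation state would expose the additional offset that
# higher-valence dopants show at the same ionic radius.
```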
2022-08-21T15:15:16.987Z
2022-08-19T00:00:00.000
{ "year": 2022, "sha1": "da45a4fb54c775d62c7ca1d86092afd525c4707b", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "a0d7109835f0b5219411dc715f43de53c97800ec", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
13948966
pes2o/s2orc
v3-fos-license
Stellar Pulsations excited by a scattered mass We compute the energy spectra of the gravitational signals emitted when a mass m is scattered by the gravitational field of a star of mass M>>m. We show that, unlike black holes in similar processes, the quasi-normal modes of the star are excited, and that the amount of energy emitted in these modes depends on how close the exciting mass can get to the star. I. INTRODUCTION In this paper we study the gravitational signals emitted when a mass m 0 is scattered by the gravitational field of a star of mass M. Under the assumption that m 0 is much smaller than M, this study can be done in the framework of a first order perturbation theory, by assuming that the perturbations of the gravitational field and of the fluid composing the star are excited by the stress-energy tensor of a point-like scattered mass (a particle). Since 1971, when M.Davis, R.Ruffini, W.H.Press and R.H.Price [1] computed the energy and the waveform emitted when a particle is radially captured by a Schwarzschild black hole, the perturbations of rotating and nonrotating black holes excited by infalling, scattered, or orbiting masses, have been extensively investigated. No comparable attention has been dedicated to the study of stellar perturbations excited by small masses, although this problem is certainly interesting. Indeed, we know that neutron stars exist, and observations allow to infer the location and the mass of many of them, as well as their rotation periods. In addition, the worldwide experimental effort spent to detect gravitational waves, will hopefully be crowned by success in a not too far future. Therefore, it is time to gain new theoretical insight on the characteristics of the gravitational signals emitted by neutron stars. Until now, the application of the theory of stellar perturbations has mainly regarded the determination of the frequencies of the quasi-normal modes, at which stars are expected to pulsate and emit gravitational waves. However, whether these modes can be excited, and how much energy is emitted at the corresponding frequencies, can be understood only by considering realistic astrophysical situations. The theory of stellar perturbations, suitably matched with the theory of black hole perturbations, allows to explore this new field. In our investigation we shall consider a polytropic star with mass and radius typical of neutron stars, and integrate the equations describing the scattering of a small mass. In the interior of the star, we shall use the equations for the axial and polar perturbations derived by S.Chandrasekhar & V.Ferrari in 1990 in ref. [2], (to be referred to hereafter as Paper I), that hold also in a more general gauge appropriate to describe non-axisymmetric perturbations. Outside the star, one may continue the solution by integrating the Regge-Wheeler and the Zerilli equations [3], [4], with a source term given by the stress energy tensor of the scattered mass moving along a geodesic of the unperturbed spacetime. However, it is known that the source term of these equations diverges when the mass reaches the periastron, and therefore we switch to a different formalism, and use the generalized non-homogeneous Regge-Wheeler equation, which was introduced by T.Nakamura and M.Sasaki [5] to overcome this problem. We shall integrate this equation by adopting the procedure discussed in ref. [6] (to be referred to hereafter as Paper II), and further developed in this paper. 
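The equilibrium model referred to above, a polytropic star with mass and radius typical of neutron stars, is obtained by integrating the relativistic equations of hydrostatic equilibrium out to the radius where the pressure vanishes, and then fixing the metric function ν through the Schwarzschild matching at the surface. A minimal sketch in the standard TOV form is given below; it uses geometric units with explicit 4π factors (the paper's convention G_μν = 2T_μν absorbs those numerical factors) and illustrative polytropic constants, not the model star actually used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative polytropic EOS p = K * eps**Gamma in geometric units (G = c = 1).
K, Gamma = 100.0, 2.0
eps_c = 1.0e-3                       # central energy density (placeholder)

def eos_eps(p):
    return (p / K) ** (1.0 / Gamma)

def tov_rhs(r, y):
    m, p = y
    eps = eos_eps(max(p, 0.0))
    dm = 4.0 * np.pi * r**2 * eps
    dp = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return [dm, dp]

def surface(r, y):                   # stop when the pressure has dropped to ~zero
    return y[1] - 1e-12
surface.terminal = True

p_c = K * eps_c**Gamma
sol = solve_ivp(tov_rhs, [1e-6, 100.0], [0.0, p_c],
                events=surface, max_step=0.01, rtol=1e-8)

R, M = sol.t[-1], sol.y[0, -1]
print(f"R = {R:.2f}, M = {M:.3f}, compactness M/R = {M/R:.3f}")
# nu(r) then follows from d(nu)/dr = -(dp/dr)/(eps + p), with the integration
# constant fixed by e^{2 nu(R)} = 1 - 2M/R, as in Eq. (2.4) below.
```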
We shall show how the problem of matching the interior and exterior solution can be solved, and find the energy spectra of the emitted radiation for different values of the orbital parameters of the scattered mass. The plan of the paper is the following. In section II we shall briefly review the equations to be integrated in the interior of the star; in section III the equations governing the exterior perturbations will be shown; the procedure to find the complete solution and the matching conditions will be discussed in section IV; in section V we will describe a model of polytropic star for which we shall compute the characteristic oscillation frequencies; in section VI we will show the energy spectra emitted when a small mass is scattered by such star. II. THE EQUATIONS DESCRIBING THE PERTURBED SPACETIME INSIDE THE STAR. In order to describe the perturbations induced by a mass scattered by a spherical, nonrotating star, we choose a gauge appropriate to describe non-axisymmetric perturbations and Y ℓm (ϑ, ϕ) are the scalar spherical harmonics. The perturbed part of the metric (2.1) has been Fourier-expanded, and the usual decomposition in tensor spherical harmonics has been performed (cfr. [3,4]). The functions N ℓm (r, ω), L ℓm (r, ω), V ℓm (r, ω), T ℓm (r, ω) , are the radial part of the polar, (even) metric components, whereas h 0 ℓm (r, ω) and h 1 ℓm (r, ω) are the axial (odd) part. It should be mentioned that in the axisymmetric case (m = 0) the previous gauge reduces to that used in Paper I for the polar perturbations, and to the Regge-Wheeler gauge [3] for the axial ones. The unperturbed metric functions ν(r) and µ 2 (r) have to be determined by numerically integrating the Einstein equations coupled to the equations of hydrostatic equilibrium, for an assigned equation of state. Under the assumption that the star is composed by a perfect fluid with energy-momentum tensor given by T αβ = (p + ǫ)u α u β − pg αβ , where p and ǫ are, respectively, the pressure and the energy density, the relevant equations are where m(r) = r 0 εr 2 dr is the mass contained within a sphere of radius r. The solution for the function ν(r) requires the determination of the constant ν 0 , which can be found by imposing that at the boundary of the star the metric reduces to the Schwarzschild metric (e 2ν ) r=R = (e −2µ 2 ) r=R = 1 − 2M/R, (2.4) where M = m(R) is the total mass. It is easy to check that, since Y ℓm (ϑ, ϕ) = (−1) m Y * ℓ−m (ϑ, ϕ), the perturbed metric functions satisfy the following property where F ℓm (r, ω) is any of the functions N ℓm (r, ω), L ℓm (r, ω), V ℓm (r, ω), T ℓm (r, ω) , and h 0 ℓm (r, ω), h 1 ℓm (r, ω) . By writing explicitely the perturbed Einstein's equations coupled to the hydrodynamical equations in the interior of the star, it is possible to verify that the resulting separated equations coincide with those given in Paper I, both for the polar and for the axial perturbations. As a consequence, the decoupling of the gravitational perturbations from the perturbations of the thermodynamical variables that was performed in Paper I for the polar equations, is possible also in the present case of non-axisymmetric perturbations. Therefore, the equations to integrate inside the star, to find the values of the polar and axial functions at the boundary, are the following. If we consider stars with a barotropic equation of state, the polar equations are (cfr. eqs. 
(72)-(75) of Paper I) X ,r,r + 2 r + ν ,r − µ 2,r X ,r + n r 2 e 2µ 2 (N + L) + ω 2 e 2(µ 2 −ν) X = 0, (2.6) (r 2 G), r = nν ,r (N − L) + n r (e 2µ 2 − 1)(N + L) + r(ν ,r − µ 2,r )X ,r + ω 2 e 2(µ 2 −ν) rX , where Q = ǫ ,r /p ,r , the function V has been replaced by X = nV , with n = (ℓ−1)(ℓ+2)/2, and the function G takes the place of T the Schwarzschild perturbations, and can be reduced to the Zerilli equation [4] d 2 Z pol dr 2 * + ω 2 − 2(r − 2M) r 4 (nr + 3M) 2 [n 2 (n + 1)r 3 + 3Mn 2 r 2 + 9M 2 nr + 9M 3 ] Z pol = 0. (2.8) The value of the Zerilli function at the surface, Z pol (ω, R), can be found in terms of the solution of the eqs. (2.6) as follows (cfr. eq. (93) Paper I) and similarly for its first derivative. For simplicity, in eqs. (2.6-2.9) we have omitted the harmonic indices (ℓm). The equations for the axial perturbations can be reduced to a single wave equation (cfr. eqs. (148)and (149), Paper I) where r * = r 0 e −ν+µ 2 dr, and the function Z ax ℓm is related to the axial metric components by Outside the star eq. (2.10) reduces to the Regge-Wheeler equation. III. THE EQUATIONS DESCRIBING THE PERTURBED SPACETIME OUTSIDE THE STAR. The metric exterior to a nonrotating star reduces to the Schwarzschild metric, and therefore the perturbed spacetime is described by the perturbations of a Schwarzschild black hole. When the source exciting the perturbations is a scattered mass, the source term of After separating the variables, the radial BPT equation can be written as where ∆ = r 2 − 2Mr, the BPT function is related to δΨ 4 by and −2 S ℓm (ϑ, ϕ), in terms of which the separation is accomplished, is the spin-weighted spherical harmonic The source T ℓm (ω, r) is given by where and the factorsT ℓmω ab (r) are the radial parts of Newman-Penrose components of the stressenergy tensor (see Paper II, app. A.2). In the present paper we prefer to use another equation, related to the BPT equation, which was derived by Nakamura and Sasaki [5]. They showed that, given a solution of the inhomogeneous BPT equation , there exists a function Z N S ℓm (ω, r) related to Ψ BP T ℓm (ω, r) by the Chandrasekhar transformation [9] which satisfies the generalized inhomogeneous Regge-Wheeler equation The source term is related to the source of the BPT equation by We shall now write the source term of the Nakamura-Sasaki equation explicitely. If we use, as in paper I, the convenction G µν = 2T µν , the stress-energy tensor of a point-like mass m 0 , moving along a geodesic r(τ ) of the unperturbed spacetime, is where τ i (r) are the solution of r(τ ) =r, the solution can be divided in two branches, corresponding to the incoming and outgoing part of the trajectory. Consequently, the stressenergy tensor can be written as where t i (r), Ω i (r) are the time and angular position of the particle on the i−eth branch of the trajectory. In terms of the tensor (3.11), the source of the BPT equation can be written as the prime indicates differentiation with respect to r, and −0 S * ℓm Ω(r) , −1 S * ℓm Ω(r) are the complex conjugate of the spin-weighted sperical harmonics of weight 0 and 1, and σ = sign dr dτ . We shall assume that a particle m 0 starts its journey at radial infinity with energy E and angular momentum L z , and we shall put ϕ(r t ) = τ (r t ) = 0, at the turning point r = r t . We choose E and L z , such that the particle is scattered by the star, and follows a geodesic on the plane ϑ = π 2 , described by the equations The source term of the Nakamura-Sasaki equation can be derived from eqs. 
(3.9),(3.12) and (3.13), as in [15]. The result is with j=1,2,3, (3.16) The constants K 0 , ... are In order to integrate the Nakamura-Sasaki equation (3.8) outside the star, we need to know the value of Z N S ℓm and its derivative at the surface r = R. The Nakamura-Sasaki function is related to the solution of the Regge-Wheeler equation for the axial perturbations, Z ax , and to that of the Zerilli equation for the polar ones, Z pol , by the following relations [6] where Z 1 ℓm (ω, r) = − n (n + 1) (4.3) It should be stressed that both Z 1 ℓm (ω, r) and Z 2 ℓm (ω, r) satisfy the Regge-Wheeler equation. Thus, Z N S ℓm (ω, R) and its derivative can be found in terms of Z pol (ω, R) and Z ax (ω, R) and their first derivatives. These values can be found by numerically integrating the equations for the polar and axial perturbations in the interior of the star (cfr. eqs. 2.9 and 2.10), by imposing the boundary conditions that, for each assigned value of the frequency, the solution is regular near the origin, and that the perturbation of the thermodynamical variables vanish at the boundary, so that the perturbed spacetime reduces to vacuum (cfr. Paper I, sec. 6,7 and 11). However, these conditions do not define the amplitude of the functions Z pol (ω, R) and Z ax (ω, R), which depends on the exciting source. If we namē Z pol ℓm (ω, R) andZ ax ℓm (ω, R) the values found by integrating the interior equations with an arbitrary amplitude at the centre of the star, the 'true' values of Z pol (ω, R) and Z ax (ω, R) and χ 1 ℓm (ω) and χ 2 ℓm (ω) have to be found. Consequently, the functions Z 1 ℓm (ω, R) and Z 2 ℓm (ω, R) defined in eqs. (4.2) suffer the same ambiguity χ 1 ℓm (ω) and χ 2 ℓm (ω) determine how much of the polar and of the axial modes are excited by the particular source we are considering. Eq. (3.8) has to be solved by imposing that, at the surface, the solution of the perturbed equations in the interior matches continuously with the exterior solution, which, in addition, has to behave as a pure outgoing wave at radial infinity, i.e. where the prime indicates differentiation with respect to r * , and L RW is the Regge-Wheeler operator The solution of eqs. and the functionsZ 1 ℓ (ω, r * ) andZ 2 ℓ (ω, r * ) that, as Y ℓ (ω, r * ), satisfy a homogeneous Regge-Wheeler equation, with boundary condition at the surface of the star given by eqs. (4.6) and (4.7), respectively. It should be noted that, since the Regge-Wheeler operator does not depend on m , the functions Y ℓ (ω, r * ),Z 1 ℓ (ω, r * ) andZ 2 ℓ (ω, r * ) will be independent of m as well. We then construct the solution of the Nakamura-Sasaki equation as follows where W 1 ℓ (ω) and W 2 ℓ (ω) are the wronskians and α ℓm (ω) are constants. It should be mentioned that, since the source S N S ℓm (ω, r * ) given in eqs. (3.14)-(3.17) diverges as γ −1 at the turning point, the actual integration of eq. (4.11) can be performed by switching to the proper time τ (r). It is easy to check that the solution (4.11) satisfies the pure outgoing wave condition at infinity, and that the matching condition at the boundary are fulfilled provided the constants χ 1 ℓm (ω) and χ 2 ℓm (ω) are defined as The solution at radial infinity, to which we are primarily interested, therefore is The constants α ℓm (ω) will be determined by the following arguments. Due to the properties of the spherical harmonics by an inspection of the behaviour of the source S N S ℓm , it is easy to verify that the Nakamura-Sasaki function satisfies the following identity r) . 
(4.17) and consequently In addition, by explicitely evaluating δΨ 4 (t, r, ϑ, ϕ) for the perturbed metric (2.1), and by using eq. (3.6), we find From the property of the metric perturbations given in eq. (2.5), and since D * Eqs. ( 4.18) and (4.20) are compatible only if which imply that From eqs. (4.5) we see that if (ℓ + m) is even, in order Z 2 ℓm (ω, R) to be zero χ 2 ℓm (ω) must vanish, which means that α ℓm (ω) = 1 (see eq. 4.14). Similarly, if (ℓ + m) is odd χ 1 ℓm (ω) must vanish, i.e. α ℓm (ω) = 0. Thus, the complete solution of the Nakamura-Sasaki equation at radial infinity is (see eq. 4.15) where the source explicitely is and when (ℓ+m) is even Y * ℓm,ϑ | ϑ=π/2 = 0, the source term is zero, and the axial perturbations are not excited. It should be mentioned that the procedure used in this paper to find the constants α ℓm (ω) exploits the symmetry properties of the source of the Nakamura-Sasaki equation, and is much simpler than that described in Paper II. , [10] . The damping time associated to a mode is related to the curvature of the parabola that fits the curve near a minimum: smoother minima correspond to shorter damping times. As in newtonian theory, the classification of the polar modes is based on the behaviour of the perturbed fluid according to the restoring force that is prevailing [11]: the g-modes, or gravity modes, when the force is due to the eulerian change in the density, the p-modes, when it is due to a change in pressure, and the f-mode, that is the generalization of the only possible mode of oscillation of an incompressible homogeneous sphere [12]. An inspection of the behaviour of the thermodynamical variables in correspondence of the minima of the resonance curve shown in Fig. 1, allows to identify the corresponding modes. The resonance curves for ℓ = 3 and ℓ = 4 are plotted in Fig. 2. A resonance curve can be computed also for the axial perturbations by solving eq. (2.10). For the model of star we consider, it exhibits a monotonically decreasing behaviour, showing that no slowly damped axial modes exist in this case. The algorithm based on the resonance curve allows to find only the slowly damped modes (ω i << ω 0 .). To find the highly-damped modes (the w-modes [13]) other methods have to be used. In table 1, we tabulate the values of the complex frequency of the first few polar w-modes computed in ref. [13] for the same model of star considered in the present paper. We do not go into more details about these modes because, as we shall see, they are not excited in scattering processes. VI. NUMERICAL RESULTS As discussed in section IV, we findZ pol ℓ (ω, R) andZ ax ℓ (ω, R) and their first derivatives perturbations are assumed to be excited by a massive particle m 0 , which is scattered by the gravitational field of the star; we set the angular momentum and the energy respectively to L z = 5, and E = 1.007. For these values, the turning point is located at r t /M = 9.2 (case a). We then repeat the calculations allowing the particle to get closer to the star: we choose E = 1.097, so that r t /M = 5.0 (case b). For a plane wave (see for example [14] pg. 522 eq. 
(488)) and since the (0r)-component of the stress-energy pseudotensor can be written as it follows that the energy emitted in gravitational waves per unit solid angle and unit frequency is By integrating over the solid angle, and by using the relation existing between δΨ 4 and the Nakamura-Sasaki function and discussed in section III, it can be shown that the energy emitted per unit frequency can be expressed as a function of the amplitude of Z N S ℓm at infinity, i.e. 5) and A N S ℓm (ω) is determined by numerical integration. We find that, as in the case of the scattering of masses by a black hole, the energy spectrum of the emitted gravitational radiation is contributed mainly by the ℓ = m component. In Fig. 3 we plot the ℓ = m = 2 energy spectrum emitted in case a, when the turning point is r t = 9.2 M. It is interesting to compare this spectrum with that obtained when a black hole scatters a small mass (see [15] for an extensive review). In that case the energy is emitted essentially by the scattered mass, and most of it is radiated when the mass transits through the turning point. Indeed, the energy spectrum is peaked at a frequency which is related to the angular velocity of the mass at the turning point Thus, the black hole quasi-normal modes are not excited in these processes. In our case a, and for ℓ = 2, the frequency corresponding to the angular velocity of the mass at the periastron is ω rt M = 0.092 and the spectrum shown in Fig. 3 exhibits a sort of peak close to that frequency, showing that part of the energy is still emitted by the mass as a synchrotron radiation. However the very distinctive feature of that spectrum is a very sharp peak which occurs at the frequency of the fundamental f-mode (cfr. Fig. 1). If we allow the mass to get closer to the star, as shown in Fig. 4 for case b when the turning point is r t = 5, the amplitude of the f-mode peak increases by more than a factor 10 2 , and new peaks appear, corresponding to the excitation of the p-modes. (In this case ω rt M = 0.221). Thus, we can conclude that, unlike the case of black holes, the polar quasi-normal modes of the star can be excited in scattering processes, to an extent that depends on how close the mass can get to the star. Conversely, from table 1 and Fig. 3 and 4, we see that the w-modes are not significantly excited in these processes. It should be mentioned that, although we plot the energy spectra for ωM ≤ 1, we have extended our numerical integration at higher frequencies finding that the signal is negligible compared with that shown in the figures and no further modes appear. In Fig. 5 we plot the energy spectrum for ℓ = 2 and different values of −ℓ ≤ m ≤ ℓ. We see that when ℓ + m is odd, no modes are excited, because the spectrum is contributed only by the axial perturbations which are not resonant for this model of star. In Fig. 6 we compare the energy spectrum emitted for ℓ = 2, 3 and 4. The excitation of the quasi-normal modes is exhibited by all multipoles. VII. CONCLUDING REMARKS In 1983 Lindblom and Detweiler investigated the relation existing between the frequencies of oscillation of stars and the equation of state (EOS) prevailing in the interior, showing that quasi-normal modes carry information on the internal structure of a star [16]. 
They computed the frequency of the fundamental mode of oscillation (the f-mode), for several EOS's, and more recently this study has been extended to include a few modern EOS's suggested for neutron stars, for which also the frequencies of the first p-mode and of the polar and axial w-modes have been computed. Besides, it has been shown that the knowledge of these frequencies allows to infer empirical relations between the mode frequency and the macroscopic parameters of the star: the mass and the radius [17], [18]. Thus, there is no doubt that the observation of the quasi-normal modes would provide relevant information on both the structure of neutron stars and on the nature of the nuclear interactions at supernuclear densities. The real problem is to ascertain whether these modes can be excited in realistic astro-physical situations, and what is the amount of gravitational energy that is emitted at the corresponding frequencies. The work presented in this paper is a first step in this direction. The energy spectra plotted in Fig. 3-6 clearly show that the quasi-normal modes of stars can be excited in scattering processes. The waveforms of the corresponding signals are essentially exponentially damped, pure sinusoids, at the frequency of the f-mode of the considered star ( ν ∼ 3 kHz, τ ∼ 0.07 s), and with an amplitude which scales as m −2 0 , and depends on how close the scattered mass gets to the star. The p-modes can be excited if the mass gets sufficiently close (Fig. 4), but the f-mode contribution is likely to be the dominant one. Thus, the signal emitted by a star excited by a scattered mass is a pure note, emitted at the frequency, and with the damping time, of the fundamental f-mode. The spectra we compute do not show the excitation of the w-modes. However, they may be excited in other processess, like for example the capture of masses by stars. We plan to extend the present work, to compute the energy and the waveforms emitted by masses orbiting around compact stars.
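The orbital numbers quoted in section VI can be checked directly from the geodesic constants: the turning point is the outermost root of E² = (1 − 2M/r)(1 + L_z²/r²), and the comparison frequency is m times the coordinate angular velocity dφ/dt = (L_z/r_t²)(1 − 2M/r_t)/E evaluated there. A small sketch, assuming geometric units with M = 1 and E, L_z per unit particle mass, reproduces r_t/M ≈ 9.2 and ω_rt M ≈ 0.092 for case a, and r_t/M ≈ 5.0 and ω_rt M ≈ 0.221 for case b.

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0                              # star mass in geometric units

def periastron_and_frequency(E, Lz, m_azimuthal=2):
    """Outermost turning point of a scattering geodesic and m * dphi/dt there."""
    f = lambda r: E**2 - (1.0 - 2.0 * M / r) * (1.0 + (Lz / r) ** 2)
    radii = np.linspace(100.0, 3.0, 4000)          # scan inward from large radius
    idx = np.argmax(f(radii) < 0.0)                # first radius where (dr/dtau)^2 < 0
    r_t = brentq(f, radii[idx], radii[idx - 1])    # bracketed root = turning point
    dphi_dt = (Lz / r_t**2) * (1.0 - 2.0 * M / r_t) / E   # (dphi/dtau)/(dt/dtau)
    return r_t, m_azimuthal * dphi_dt

for label, E, Lz in [("case a", 1.007, 5.0), ("case b", 1.097, 5.0)]:
    r_t, w = periastron_and_frequency(E, Lz)
    print(f"{label}: r_t = {r_t:.2f} M, omega_rt * M = {w:.3f}")
# -> case a: r_t ~ 9.2 M, omega ~ 0.092 ; case b: r_t ~ 5.0 M, omega ~ 0.221
```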
2014-10-01T00:00:00.000Z
1999-01-22T00:00:00.000
{ "year": 1999, "sha1": "4750cdfc0dc828c4b3cf9259bc7ba9497ed83a2f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/gr-qc/9901060", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "37f56830744672c95e73e0c3d5751d2444a6a607", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
62825364
pes2o/s2orc
v3-fos-license
Quark jet versus gluon jet: deep neural networks with high-level features Jet identification is one of the fields in high energy physics that deep learning has begun to make some impact. More often than not, convolutional neural networks are used to classify jet images with the benefit that essentially no physics input is required. Inspired by a recent paper by Datta and Larkoski, we study the separation of quark/gluon-initiated jets based on fully-connected neural networks (FNNs), where expert-designed physical variables are taken as input. FNNs are applied in two ways: trained separately on various narrow jet transverse momentum $p_{TJ}$ bins; trained on a wide region of $p_{TJ} \in [200,~1000]$ GeV. We find their performance are almost the same, and the larger $p_{TJ}$ the better. Comparing with results from deep convolutional neural networks, our results are comparable for low $p_{TJ}$, and even slightly better than those for high $p_{TJ}$. We also test the performance of FNNs with full set or different subsets of jet observables as input features. The FNN with one subset, consisting of fourteen observables, shows nearly no degradation of performance. This indicates that these fourteen expert-designed observables may have captured most of the information useful for the separation of quark/gluon jets. I. INTRODUCTION At the Large Hadron Collider (LHC), hadronic decay final states of bosons such as W/Z or supersymmetric squarks are dominated by the light-quark-initiated jets while the corresponding standard model (SM) background often consists of gluon-initiated jets. Therefore, discrimination between quark jets and gluon jets is crucial in such searches. The quark-initiated jets and gluon-initiated jets were known to have different qualitative features for long time since the early measurements at PETRA and LEP colliders. For instance, radiation from color octet gluon will result in wider jet width in gluon jet comparing with the quark jet. However, the qualitative feature to discriminate quark from gluon was never as robust as the well-known b-jet tagging until a practical quark/gluon jet tagging via charged particle multiplicity and jet width proposed in [1,2] was finally employed by the ATLAS collaboration. This initiative effort was later joined by other quark/gluon tagging observable proposals such as the N-subjettiness observables [3][4][5] and energy correlation functions [6,7], etc. Detailed discussions can be found in recent reviews on jet substructure [8,9]. For jet classification, the energy deposited in the calorimeter can be viewed as a grayscale image. Deep convolutional neural networks (DCNNs), which have shown to be very powerful in computer vision, were then applied to classify jet images [25]. But the information of charged particle multiplicity, which is one of the best variables for quark/gluon tagging, is not included in grayscale jet images. As an attempt, pixel-level charged particle counts, transverse momentum p T of charged particles and p T of neutral particles are introduced in [26] as three "colors" of jet images. It is encouraging to observe that this DCNN outperforms traditional state-of-the-art methods in separating quark/gluon jets. This is very impressive, especially considering that almost all of the expert-designed jet variables are not used in this procedure. But it is a pity that it is hard for DCNNs to tell us what physics they learn. Unlike daily life pictures, a jet image is often sparse, i.e. only few pixels are activated. 
Therefore it is also arguable whether DCNN is an optimal choice for jet classification [20]. In [29], a different method is developed: the physics-motivated N-subjettiness observables are taken as input to study the jet substructure using fully connected deep neural networks. The logic behind this method is that N-subjettiness observables can completely span the phase space of a jet substructure which contains all kinematic information in a jet. As a concrete example, [29] demonstrates that jet mass plus just eight N-subjettiness observables are enough for the separation of boosted Z jets from QCD jets. As noticed in [29], the choice of observable basis is not unique, one may use energy correlation functions as well. It was argued in [7] that a new series of energy correlation functions U (β) i are powerful variables for the discrimination of light quark jets and gluon jets. In this paper, inspired by [29], we study the quark/gluon tagging using fully connected deep neural networks. To capture as much as physical information, more jet substructure observables besides the jet mass and N-subjettiness are considered. Deep neural networks are very effective to analyze multidimensional problems, thus we implement this technique to classify signature/background. In the next section, we discuss the jet observables used as the input of the deep neural networks and the event generation procedure. We discuss the architecture and the results of the fully connected deep neural network in the third and fourth sections. The final section is devoted to a summary. II. OBSERVABLES AND EVENT GENERATIONS In the first part of this section, all observables as the input of neural networks are enumerated, which elaborate both entirety properties and substructures of a jet. To generate a dataset for training and testing neural networks, we follow standard event generation procedures: generate all parton level events with MadGraph [46], and then do the showering with PYTHIA8.2 [47], the jets are clustered with FastJet [48] and observables are extracted using FastJet contrib as well as our private codes. Data generation processes are described detailedly in the second part of this section. A. Observables as inputs of neural networks The objects for quark/gluon discrimination considered in this work are the following: • Particle multiplicity and charged particle multiplicity in a jet • Jet mass m J , transverse momentum of the jet p T J and their ratio m J /p T J • Generalized angularities of two parameters, which were proposed in [49] and further discussed in [9] For proton-proton collision, using anti-k t algorithm [50] with E-scheme recombination [51], one has where z i is the momentum fraction of particle i, and R in is the rapidity/azimuth angle from a chosen axis 1 of particle i, and R 0 is the radius of the jet under consideration. In particular, five values of (κ, β), the same with [9,49], are set as benchmarks, (2) (2, 0) is known as p D T [6,52]; (3) (1, 0.5) is denoted as "LHA" (Les Houches Angularity) [9]; (4) width from (1, 1) is related to broadening or girth [53][54][55]; (5) mass from (1, 2) is related to thrust [56]. • N-subjettiness observables [4,5] measure the radiation about N axes in a jet with a definition τ (β) As pointed out in [29], a basis from N-subjettiness objects can be constructed to span the phase space of appropriately identifying M particles. This basis can then be taken as the 1 We use the axis directly from E-recombination in this paper. 
An alternative is with winner-take-all recombination scheme [60,61]. input of deep neural networks to discriminate boosted hadronic Z decays from light parton intiated jets. The N-subjettiness observables chosen in this work are the following which are identical to the ones used in [29,30] for spanning the 6-body phase space in a jet. Two ratios τ are also input in our work. So called "OnePass WTA KT Axes" in FastJet package is chosen for above N-subjettiness objects. • Generalized energy correlation functions [6,7] can identify N-prong jet substructure without finding subjets first as done for N-subjettiness. In this work, we employ C N (for quark/gluon discrimination, N = 1 ) [6] and U i [7]. Before moving on, let's briefly review these variables. where r N is much like the N-subjettiness τ N while C (β) N is similar to the N-subjettiness ratios τ N,N −1 are good to probe N-prong substructures in a jet. In particular, they are good measures for higher-order radiations from leading-order. Observable U i is proposed in [7] as For proton proton collision, z i 's share the same definition as (2) and θ ij denoting the opening angles between particle i and j in a jet. In this paper, we produce C B. Database generation In this subsection, event generation from simulations is described. We generate pure quark and gained from these 24 sub-windows formed the broadest kinematics region for our following study. III. ARCHITECTURE OF FULLY CONNECTED NEURAL NETWORKS We have described in the above section how to generate simulation data containing 36 expertdesigned jet observables. In Fig. 1, the distributions of charged particle multiplicity, N-subjettiness and energy correlation function U The model contains more than 460000 unknown weights and biases as parameters and it is important to prevent its overfitting. For this purpose, the dropout regularization and validationbased early stop are adopted. The dropout ratio is taken to be 0.1 for all six hidden layers, and the training would be stopped if its performance on the validation data does not improve anymore for 20 epochs. The neural network model is implemented with Tensorflow [57] and scikit-learn packages [58]. Running on a NVidia GTX 1080 GPU, it takes just several minutes to train the model with the input events in one given p T J region. IV. RESULTS Here, light quark jets are signals as they are presumably associated with the new physics, while the gluon jets are backgrounds. Therefore, the performance of neural networks can be measured by gluon rejection efficiency (1 − g ) as a function of quark acceptance efficiency ( q ), known as . receiver operating characteristic (ROC) curve which is widely used in the field of machine learning. The area under the ROC curve (AUC) is also a useful quantity to measure the performance of models. Moreover, in collider physics, it may be better to plot the significance improvement char-acteristic (SIC) curve q / √ g which is directly related to the statistical significance of the signal and background separation. In Fig. 2 Table I the gluon jet efficiency at 50% quark jet acceptance. It turns out that our results from fully connected neural networks are basically as good as those of DCNN with color for p T J around 200 GeV. For larger p T J , for example around 1000 GeV, our results are even slightly better than those from DCNN with color. This indicates that almost all information of quark/gluon jets has been included in jet observables used in this paper. 
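The architecture just described (36 input features, six fully connected hidden layers with dropout 0.1 on each, and validation-based early stopping with a patience of 20 epochs) is straightforward to express in Keras; with 300 nodes per hidden layer, as stated in the summary, the parameter count comes out just above 460 000, consistent with the number quoted above. The sketch below is only illustrative: the activation function, optimizer and batch size are not specified in the text and are assumptions here.

```python
import tensorflow as tf

n_features = 36                      # expert-designed jet observables as inputs

inputs = tf.keras.Input(shape=(n_features,))
x = inputs
for _ in range(6):                   # six hidden layers, 300 nodes each
    x = tf.keras.layers.Dense(300, activation="relu")(x)   # activation is an assumption
    x = tf.keras.layers.Dropout(0.1)(x)                    # dropout ratio 0.1 per hidden layer
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # quark (1) vs gluon (0)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()                      # ~463k trainable parameters, matching ">460000" above

# Validation-based early stop: halt if the validation AUC stops improving for 20 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_auc", mode="max",
                                              patience=20, restore_best_weights=True)
# history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
#                     epochs=500, batch_size=1024, callbacks=[early_stop])
```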
To get an idea of which subset of jet observables are important for the quark/gluon tagging, as an attempt, we also train neural networks by using different subsets of jet observables as follows: • Case A. Input features include jet mass m J , fourteen N-subjettiness observables τ 1 . This basis is essentially the same as the one proposed in [29], which is powerful for discrimination of boosted Z jets from QCD background. The results are shown in Fig. 2 as magenta dotted curves. One can directly compare these results with the red solid lines (with all of the jet observables). The difference appears very small using ROC measures, while the difference becomes more and more obvious in SIC curves with the jet p T J increasing. • Case C. Fourteen jet observables are considered, which includes particle multiplicity, charged particle multiplicity, LHA λ 3 ). The results are shown in Fig. 2 as blue dashed curves. Interestingly, these results are basically as good as those using all jet observables. This implies that these fourteen observables may have captured most physical information which is useful for discriminating quark/gluon jets. Notice that the jet images may look different for various transverse momenta, one has to train as many DCNNs as the number of transverse momentum bins. For example, three DCNNs are trained in [26] to classify quark/gluon jets with p T J ∈ [200, 220] GeV, [500, 550] GeV or [1000, 1100] GeV. Even so, there is no guarantee that jets with p T J falling outside the considered bins, say for example p T J ∼ 700 GeV, could be classified efficiently using any of the three well-trained DCNNs. Certainly, this is not a very efficient way. Up to now, we have strictly followed the way of DCNN such that different fully connected To compare our results from FNNs in a quantitative way with those from DCNNs with color [26], the gluon jet efficiencies at 50% quark jet acceptance are shown in Table I. For p T J around 200 GeV, the performance of FNN is comparable to that of DCNN with color. As p T J increases, the performance of FNN becomes even slightly better than that of DCNN with color. V. SUMMARY Deep learning approaches have developed many applications in high energy physics, among of which is jet identification, such as the separation of quark-initiated jets from gluon-initiated jets. A natural way is to take the energy deposited in the calorimeter as a jet image. As a powerful tool, deep convolutional neural networks can then be used to classify jet images. Motivated by [29], in this paper, we take 36 expert-designed jet observables as input features to discriminate quark/gluon jets using fully connected deep neural networks. One advantage of this method is that, the architecture of fully connected neural networks is much simpler than that of convolutional neural networks, and the former is also less GPU time-consuming. Since jet images do not lose any information, the approach of convolutional neural networks may be more powerful in jet classification if (nearly) all of the useful information could be extracted from the data. However, only few pixels are activated in a jet image, it remains an open question whether convolutional neural networks are the most efficient method in this situation. We use a neural network with six hidden layers where 300 nodes are set in each hidden layer. Many of these 36 jet observables should be complementary for the quark/gluon tagging, but some of them may be redundant. 
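The comparisons in Table I and the SIC curves reduce to the same ROC scan over the classifier output: the false-positive rate is the gluon efficiency, the true-positive rate is the quark efficiency, the SIC is their ratio ε_q/√ε_g, and the Table I numbers are the gluon efficiency read off at ε_q = 0.5. A short scikit-learn sketch with random stand-in scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Stand-in scores: label 1 = quark jet (signal), 0 = gluon jet (background).
y_true = np.concatenate([np.ones(5000), np.zeros(5000)])
y_score = np.concatenate([rng.normal(0.6, 0.2, 5000), rng.normal(0.4, 0.2, 5000)])

eps_g, eps_q, _ = roc_curve(y_true, y_score)   # fpr = gluon efficiency, tpr = quark efficiency
auc = roc_auc_score(y_true, y_score)

sic = np.divide(eps_q, np.sqrt(eps_g), out=np.zeros_like(eps_q), where=eps_g > 0)
eps_g_at_half = np.interp(0.5, eps_q, eps_g)   # gluon efficiency at 50% quark acceptance

print(f"AUC = {auc:.3f}")
print(f"gluon efficiency at eps_q = 0.5: {eps_g_at_half:.3f}")
print(f"max significance improvement (SIC): {sic.max():.2f}")
```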
As an attempt, we test the performance of neural networks by choosing different subsets of the jet observables as input features. It is interesting to find that the neural network using only fourteen observables has as good performance as the neural network using all of thirty-six jet observables. These fourteen observables include particle multiplicity, charged particle multiplicity, LHA λ 1 , τ 1 , τ (0.5) 2 , τ 2 , τ 2 , τ (0.5) 3 , τ 3 , τ 3 ). Since quark/gluon jet images may have different characteristics with various jet transverse momentum, it is customary to train different convolutional neural networks for different transverse momentum bins. But p T J is just one of the input features of the fully connected neural networks (FNNs), it should be possible to train a single FNN for jet tagging with very different transverse momentum. Therefore we train such a FNN using 1.3 million data with the jet transverse momentum in the range of [200, 1000] GeV. Again, the performance of such a FNN is almost the same as those of FNNs trained separately for each transverse momentum bins. In [30], a new method was proposed to construct novel jet observables with the help of fully connected neural networks. Such novel observables may have better discrimination power than widely-used observables. The novel observables may also deepen our understanding of the jet identification. It should be interesting to see in the future whether novel observables could also be constructed in the case of quark/gluon tagging.
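Several of the observables in this fourteen-variable subset, the LHA, width and p_T^D in particular, are instances of the generalized angularities of section II, which under the standard two-parameter definition of ref. [49] read λ_β^κ = Σ_i z_i^κ (ΔR_i/R_0)^β, with z_i the p_T fraction and ΔR_i the rapidity-azimuth distance of constituent i from the jet axis. A sketch of how these inputs would be computed per jet is given below; the constituent arrays and jet radius are placeholders for whatever the FastJet clustering returns.

```python
import numpy as np

def generalized_angularity(pt, dR, kappa, beta, R0=0.4):
    """lambda_beta^kappa = sum_i z_i^kappa * (dR_i / R0)^beta, with z_i = pt_i / sum(pt).
    (kappa, beta) = (0,0) counts constituents, (2,0) gives pT^D, (1,0.5) the LHA,
    (1,1) the width, and (1,2) the mass-like angularity."""
    z = pt / pt.sum()
    return np.sum(z**kappa * (dR / R0)**beta)

# Placeholder constituents (pT in GeV, Delta R from the jet axis), R0 = 0.4 assumed.
pt = np.array([120.0, 45.0, 22.0, 9.0, 4.0, 1.5])
dR = np.array([0.02, 0.05, 0.11, 0.18, 0.25, 0.33])

benchmarks = {"multiplicity": (0, 0), "pTD": (2, 0), "LHA": (1, 0.5),
              "width": (1, 1), "mass": (1, 2)}
features = {name: generalized_angularity(pt, dR, k, b) for name, (k, b) in benchmarks.items()}
print(features)
```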
2017-12-11T03:05:27.000Z
2017-12-10T00:00:00.000
{ "year": 2017, "sha1": "5d7399233b9997c37840275a4882cd48622e025b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5d7399233b9997c37840275a4882cd48622e025b", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
15328419
pes2o/s2orc
v3-fos-license
Efficient gradient calibration based on diffusion MRI Purpose To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. Methods The gradient scaling in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high resolution anatomical MRI of a second structured phantom. Results The errors in apparent diffusion coefficients along orthogonal axes ranged from −9.2% ± 0.4% to + 8.8% ± 0.7% before calibration and −0.5% ± 0.4% to + 0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from −5.5% to + 4.5% precalibration and were likewise reduced to −0.97% to + 0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Conclusion Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be setup with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170–179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. INTRODUCTION Accurate gradient calibration is a prerequisite for quantitative measurements in MRI and spectroscopy in general. Miscalibrated gradients can, for instance, lead to systematic over-or underestimation in geometric measure-ments. Calibration is usually performed by vendors at installation and during routine servicing, based on anatomical scans of a phantom of known dimensions. Phantoms with more complex geometries, typically grid structures over an extended field of view (FOV), have also been proposed for gradient calibration, with the added benefit of addressing gradient nonlinearities (1)(2)(3). The accuracy of these approaches is governed by the spatial resolution of the scan, and the geometric accuracy of the phantom dimensions. Diffusion MRI is particularly sensitive to poorly calibrated gradients as the measured apparent diffusion coefficient (ADC) depends on the square of the gradient amplitude. A typical error of 62% in gradient strength, for example, would lead to a 64% error in the measured ADC (4). Differential errors between the x-, y-, and z-gradients would additionally lead to errors in fractional anisotropy (FA) and eigenvector estimates, as they depend on the sample diffusion orientation with respect to the diffusionweighting direction. The corollary is that the measured diffusion can provide a sensitive means for gradient calibration. Fluid-filled phantoms are well suited for this purpose, and corrections can be determined on the basis that diffusion in such phantoms is isotropic and Gaussian. Examples include the use of phantoms filled with water (5,6), polyvinylpyrolidone (7), ethylene glycol (8), n-undecane (9), and dodecane (10). Use of such phantoms benefits from ease of preparation and access to reliable source materials. 
However, as diffusion is dependent on temperature, these methods require either accurate control or monitoring of temperature, such as with an ice-water phantom (11), temperature measurement before and/or after scanning (6,9), periodic temperature sampling with MR spectroscopy (8), and real-time temperature monitoring with a thermistor (12). A criticism of water-based phantoms is their relative low viscosity renders them susceptible to vibration, convection and flow (9,13), and their high diffusivity limits the use of higher b-values. In contrast, more viscous media such as cyclooctane and ethylene glycol have been shown to exhibit monoexponential behavior up to a b-value of 10,000 and 12,000 s/mm 2 , respectively (8,12). Cyclooctane has the added benefit of having single proton resonance, thereby avoiding chemical shift artifacts and signal cancellation from J-coupling. Depending on the calibration method employed, different means of assessing gradient calibration performance have been proposed. Studies that derive image deformation maps from high-resolution distortion-free reference x-ray CT data typically measure improvement in the conformance between the MRI and CT images after calibration (1,3). In studies of diffusion-based calibration methods, improvements were shown in the reduced directional bias in gradient strengths reflected in lower FA (5,9,10) and more robust fiber tracking (6,10). The aim of this study was to propose a simple and efficient method for calibrating the gradient scaling in x, y, and z, based on a phantom with well-characterized diffusivity and constructed from readily available materials. The same principle can be extended to correct for gradient nonlinearity in a model-free manner. The improvements in gradient calibration will lead to improved accuracy and precision in quantitative MRI. Such improvements were demonstrated with diffusion MRI in the same phantom, and independently validated with high-resolution anatomical MRI in a second custom-built grid phantom. Phantom Design A diffusion calibration phantom was constructed by filling a 20-mm outer diameter glass tube with 99% cyclooctane (Sigma-Aldrich, St. Louis, Missouri, USA) while avoiding air bubbles. The tube was sealed with a polyphenylsulphide plug and two Viton O-rings (Fig. 1a). A thermistor embedded in epoxy resin, used for monitoring of temperature in small animals, was connected to a Harvard Apparatus homeostatic temperature control unit (Harvard Apparatus, Kent, United Kingdom). The thermistor was calibrated and secured to the surface of the tube, and the temperature was recorded at 1 Hz on a Powerlab/30 using Chart v5.0 (AD Instruments, Bella Vista, New South Wales, Australia). A second grid phantom for geometric validation comprised two orthogonal 2-mm-thick slotted plates of Tecapet (Ensinger, Nufringen, Germany) that fitted in a cylindrical housing made from polyvinylchloride (Trovex Diamond, Hertfordshire, UK) (Fig. 1b). A grid pattern of 1-mm-diameter holes was drilled at nominally 5-mm intervals (Fig. 1c,d) and the phantom was filled with 2.0 mM aqueous gadolinium solution (Prohance; Bracco Diagnostics Inc, Cranbury, New Jersey, USA). Diffusion MRI Acquisition The calibration scans were performed using a 9.4 T preclinical scanner and a shielded gradient system (Agilent Technologies, Santa Clara, California, USA). The inner diameter of the gradient set was 60 mm, and the maximum gradient strength was 1 T/m with a rise time of 130 ms. 
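For the pulsed-gradient spin-echo diffusion weighting used in the calibration scans, the nominal b-value from the diffusion lobes alone follows the Stejskal-Tanner expression b = (γGδ)²(Δ − δ/3); the quoted b-values additionally fold in imaging-gradient and cross-term contributions, which this sketch ignores. With the timings listed in the next paragraph (δ = 2.5 ms, Δ = 15 ms), even b = 2500 s/mm² sits comfortably within the 1 T/m limit of the gradient set just described.

```python
import numpy as np

GAMMA_1H = 2.675e8                    # proton gyromagnetic ratio, rad s^-1 T^-1

def stejskal_tanner_b(G, delta, Delta):
    """Nominal b-value (s/m^2) for rectangular diffusion lobes of amplitude G (T/m),
    duration delta (s) and separation Delta (s)."""
    return (GAMMA_1H * G * delta) ** 2 * (Delta - delta / 3.0)

def gradient_for_b(b, delta, Delta):
    """Diffusion gradient amplitude (T/m) needed to reach a target b-value (s/m^2)."""
    return np.sqrt(b / (GAMMA_1H**2 * delta**2 * (Delta - delta / 3.0)))

delta, Delta = 2.5e-3, 15e-3          # timings used in the calibration scans
for b_target in [100, 400, 900, 1600, 2500]:           # s/mm^2
    G = gradient_for_b(b_target * 1e6, delta, Delta)   # convert to s/m^2
    print(f"b = {b_target:4d} s/mm^2  ->  G ~ {G*1e3:6.1f} mT/m")
# The largest requirement stays well below the 1 T/m maximum of the gradient set.
```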
A quadrature-driven transmit/receive birdcage radiofrequency coil of 20 mm inner diameter and coil sensitivity of 25 mm in z was used (Rapid Biomedical, Rimpar, Germany). Prior to the study, a rough calibration was performed. To simulate a poorly calibrated system, the gradient scaling factors in x and z were first offset by −5% and +5% from the precalibrated values. Data were acquired with diffusion-weighting in x, y, and z separately using two-dimensional (2D) spin echo (SE) echo planar imaging with pulsed gradient SE diffusion-weighting (14). The sequencing parameters were as follows: repetition time (TR)/echo time (TE) = 2000/20 ms; echo train length = 16; resolution = 375 × 375 μm; FOV = 24 × 24 mm; slice thickness = 1 mm; number of signal averages (NSA) = 4; number of slices = 11; δ = 2.5 ms; Δ = 15 ms; b = [100, 400, 900, 1600, 2500] s/mm²; and acquisition time = 5 min, 36 s. Forward and reverse readout polarity data were acquired to correct for errors between odd and even lines of k-space (15). To minimize the b-value contribution from imaging gradients, refocusing crushers were omitted. The b-values specified included contributions from imaging and cross-terms, and the diffusion gradient strength was adjusted accordingly (16). (Figure caption: The dotted square identifies the region in which the measured and reference centroid maps were overlaid as in Figure 3. Reference distances dr(x), dr(y), and dr(z) are indicated and reported in Table 1.) Diffusion tensor imaging (DTI) was performed with a 2D SE echo planar imaging sequence, and FA was measured in five central axial slices (17). Scan parameters were similar to the calibration scan with the following differences: number of non-diffusion-weighted scans = 3, number of diffusion directions = 21 (18), b-value = 2000 s/mm². Data with forward and reverse diffusion gradient polarity were acquired to correct for the effect of cross-terms (19). Correction factors were calculated and were used to adjust the gradient scaling. The calibration and DTI scans were repeated in the same experiment postcalibration. Correction for Gradient Linear Scaling The measured ADCs, Dm(i), were first calculated by performing a linear fit of the ln signal intensity in the calibration data along the individual DW directions, i, and taking the mean over a central 7 × 7 × 7 voxel region. This signal behavior is described by Equation [1], ln S = ln S0 − b·Dm(i), where S is the measured signal intensity at the applied b-value, b, and S0 is the non-diffusion-weighted signal intensity. The raw temperature data were smoothed with a sliding window method (mean temperature within a 60-s interval) to reduce noise. Temperature readings, T(a,b,i), corresponding to each average, b-value, and DW direction were obtained. Matching reference ADCs, Dr(a,b,i), were calculated at T(a,b,i) by fitting a second-order polynomial to a range of reference diffusivity data (13) and averaged across NSA and b-values to obtain Dr(i). Correction factors, a(x), a(y), and a(z), were calculated using Equation [2], a(i) = √(Dr(i)/Dm(i)), derived following (6); corrected gradient scaling factors, W′(x), W′(y), and W′(z), were calculated with Equation [3] and applied, where W(i) is the uncorrected gradient scaling factor: W′(i) = a(i) × W(i). [3] To assess the effect of NSA and b-values sampled for the gradient calibration, correction factors were also calculated after subsampling Dm(i) and Dr(i) by number of averages (1, 2, 3, and 4) and number of b-values used (2, 3, 4, and 5).
The b-value combinations used were [100, 2500], [100, 900, 2500], [100, 400, 900, 2500], and [100, 400, 900, 1600, 2500] s/mm 2 respectively. The effective scan times ranged from 48 s to 5 min, 36 s. The effect of cross-terms was removed from the DTI data by taking the geometric mean of data acquired with opposing diffusion gradient polarities (19), and the results were fit with a single tensor and linear least squares. The tensors were diagonalized and the FA (20) was calculated in each voxel based on Equation [4], where k 1 , k 2 , and k 3 are the eigenvalues of the tensor: Correction for Gradient Nonlinearities To correct for gradient nonlinearities, D m (x,y,z,i) were first calculated by performing a linear fit of the ln signal intensity in the SE DW data along the individual DW directions, i, on a voxel-wise basis. a correction maps were generated using Equation [2]. Instead of adjusting the gradient scaling, deformations were calculated from the isocenter in 2D according to Equation [5] and applied to the image data based on the correction maps. The integral of the signal intensity per unit voxel area was preserved postcorrection for gradient nonlinearity according to Equation [6]. The corrected image data were then resampled to the coordinate space of the original data and were reprocessed to obtain ADC and a correction maps. Here, p is a vector of coordinates reflecting the corrected voxel positions in the readout (r) and phase encoding (p) directions; m and n are voxels measured from the isocenter outward along r and p and range from 1 to 288 and 1 to 96, respectively, or half the matrix size along the respective directions; a represents the correction factors calculated in r and p directions at the corresponding voxels; dr and dp are nominal voxel dimensions in r and p directions; and S and S* are vectors of signal intensity in the uncorrected and corrected image data. For illustration purposes, the corrections for gradient nonlinearities were first applied in the phase encoding direction, and subsequently in the readout direction. To correct for the effects of gradient nonlinearities on the measured ADCs, the b-matrices were recalculated on a voxel-wise basis after dividing the magnitude of the gradient waveforms (including diffusion and imaging gradients) by a from the unwarped correction maps. The b-matrices were calculated numerically and included contributions from the diffusion and imaging gradients and cross-terms (21). The measured ADCs, D m (x,y,z,i), were calculated after correction for gradient nonlinearities as before. Validation Prior to assembly, reference geometric data were obtained by scanning the two plates of the grid phantom with an Epson Perfection V370 desktop flatbed optical scanner (Epson, Nagano, Japan) at 5.3 Â 5.3 mm resolution. The phantom was then scanned in a transmit/receive birdcage coil of 42 mm inner diameter and coil sensitivity of 55 mm in z (Rapid Biomedical, Rimpar, Germany) with threedimensional (3D) SE MRI before and after gradient calibration to obtain geometric measurements. The scanning parameters were as follows: TR/TE ¼ 80/9.2 ms; resolution ¼ 100 Â 100 Â 100 mm; FOV ¼ 65.0 Â 38.4 Â 38.4 mm; and acquisition time ¼ 3 h, 17 min. Central sagittal and coronal planes in the MRI data corresponding to the two plates were selected and interpolated to 10 Â 10 mm resolution by zero-filling in k-space. The holes forming the grid were automatically segmented, and their centroids were detected in both MRI and optical scan data. 
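The per-direction processing above comes down to three small steps: a straight-line fit of ln S against b (Equation [1]), a square-root correction factor, since the measured ADC scales with the square of the gradient amplitude, and the standard FA of ref. (20) from the tensor eigenvalues (Equation [4]). The sketch below shows the bookkeeping with synthetic numbers; the square-root form used for Equation [2] is inferred from the quoted correction-factor range (0.959 to 1.050) rather than copied from the original, so treat it as an assumption.

```python
import numpy as np

b_values = np.array([100, 400, 900, 1600, 2500], dtype=float)    # s/mm^2

def fit_adc(signals, b):
    """Mono-exponential ADC from a linear fit of ln(S) against b (Eq. [1])."""
    slope, _ = np.polyfit(b, np.log(signals), 1)
    return -slope                                                 # mm^2/s

def correction_factor(d_reference, d_measured):
    """Assumed form of Eq. [2]: a(i) = sqrt(D_r(i) / D_m(i)); the corrected
    scaling is then W'(i) = a(i) * W(i) as in Eq. [3]."""
    return np.sqrt(d_reference / d_measured)

def fractional_anisotropy(eigenvalues):
    """Standard FA (Eq. [4], ref. 20) from the three tensor eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / np.linalg.norm(lam)

# Synthetic example: a mis-scaled x gradient reads the ADC ~9% low.
D_ref = 7.5e-4                                   # placeholder cyclooctane-like ADC, mm^2/s
signals = 1000.0 * np.exp(-b_values * D_ref * 0.908)

D_meas = fit_adc(signals, b_values)
alpha_x = correction_factor(D_ref, D_meas)
print(f"measured ADC = {D_meas:.3e} mm^2/s, a(x) = {alpha_x:.3f}")   # a(x) ~ 1.049

# Unequal per-axis scalings show up as artefactual anisotropy in an isotropic fluid.
print(f"FA (isotropic): {fractional_anisotropy([7.5e-4, 7.5e-4, 7.5e-4]):.3f}")
print(f"FA (biased):    {fractional_anisotropy([6.8e-4, 7.4e-4, 8.2e-4]):.3f}")
```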
Distances along x, y, and z between the two holes adjacent to the center-most hole in the MRI data, dm(x), dm(y), and dm(z), were measured and compared with the reference data from the high-resolution optical scan, dr(x), dr(y), and dr(z) (Fig. 1c,d). In addition, the correction for gradient nonlinearity determined from the 2D SE data was applied to the matching cropped sagittal and coronal views in the grid phantom data. For clarity, the corrections for gradient nonlinearities were again first applied in the phase encoding direction, and subsequently in the readout direction. All data analysis was performed in MATLAB 2013a (MathWorks, Natick, Massachusetts, USA).

Correction for Gradient Linear Scaling

The measured ADC, as determined from the gradient of the ln signal versus b-value curve, shows marked differences between DW directions prior to calibration; these differences were corrected following calibration (Fig. 2). The errors in precalibration D_m(x,y,z) were −9.2% ± 0.4%, −1.1% ± 0.5%, and +8.8% ± 0.7% with respect to the reference values. After correction for linear scaling, the errors in D_m(x,y,z) were −0.5% ± 0.4%, +0.8% ± 0.3%, and −0.1% ± 0.7% with respect to the reference values (R² > 0.999 for all fits). The percentage fitting errors between the measured signal and the fitted ADC remained <0.53% across all b-values. FA was elevated prior to calibration, as seen in a central axial slice of the calibration phantom; this elevation in FA was reduced following calibration (Fig. 3). Figure 3 also shows the centroids of the holes identified from the anatomical MRI overlaid on those identified with the reference optical scan in a central 24 × 24 mm region. Prior to calibration, it can be seen that the MRI data are compressed in x and stretched in z relative to the optical scan data. These scalings are the result of reduced and elevated gradient scaling in x and z, respectively, as reflected in the low D_m(x) and high D_m(z) with respect to the calibrated diffusion data. The gradient scaling errors resulted in an artifactually larger FOV in x and smaller FOV in z; consequently, there was an apparent compression in x and stretching in z. The correspondence of MRI and optical scan data is significantly improved in both plates of the grid phantom postcalibration. The diffusion measurements, correction factors, and geometric validation are summarized in Table 1. The mean temperature was 20.6 ± 0.2 °C and 21.3 ± 0.04 °C in the precalibration and postcalibration scans, respectively. In addition to the improved estimation of D_m(i) with respect to D_r(i) postcalibration, we observed that the FA decreased from 0.14 ± 0.03 to 0.03 ± 0.01 following calibration, better representing the isotropic fluid. The range of calculated correction factors was reduced from 0.959–1.050 before calibration to 0.996–1.003 after calibration. Similarly, the differences in geometric measurements dm(i) with respect to the optical scan reference data dr(i) ranged from −5.5% to +4.5% before calibration and from −0.97% to +0.23% after calibration. The correction factors, a(i), were originally calculated based on data acquired with NSA = 4 and 5 b-values. Figure 4 illustrates the effect of subsampling the NSA and the number of b-values used to calculate a(i) to potentially reduce data acquisition requirements.
The normalized results show that calculating a(i) based on NSA = 2 and 2 b-values yielded differences of <0.1% from the nominal a(i), and calculating a(i) based on NSA = 1 and 2 b-values yielded differences of <0.3%. These data could be acquired in 80 s and 48 s, respectively, whereas the NSA = 4 and 5 b-values data required 5 min, 36 s.

Correction for Gradient Nonlinearities

The effects of the correction for gradient nonlinearity on the ADC and the correction factor a are shown (Fig. 5a–f). While a is relatively uniform near the isocenter, it rapidly increases toward the ends of the phantom along z. This increase is accompanied by a concomitant narrowing of the appearance of the tube diameter. The geometric accuracy of the image data improves with stepwise corrections in the phase encoding and readout directions, as reflected in the more cylindrical appearance of the phantom. Regions of higher a corresponded to regions with lower ADC, and vice versa. The results from the data acquired in the sagittal and coronal views were similar; for brevity, only data acquired in the coronal view are presented. As the gradient strength decreases further away from the magnet isocenter, so does the effective b-value. This is described in a plot of the nominal and effective b-value profiles along z (Fig. 5h). Recalculation of the ADC based on the effective b-values shows the effect of gradient nonlinearity correction across a profile in z (Fig. 5i) and in a coronal 2D image (Fig. 5g). Overlaying the centroids of the holes in the 2D SE MRI data with those of the reference optical scan data shows much better correspondence after correction for gradient nonlinearity (Fig. 6). Nine centroids were identified in each of the two plates, and the errors in their physical coordinates in x, y, and z with respect to the reference optical scan data are presented in Table 2. At point 1, the furthest identified point from the magnet isocenter, the absolute errors in the x, y, and z coordinates were 35%, 31%, and 8.8% in the uncorrected data and 4.9%, 1.5%, and 1.6% in the corrected data. At point 2, these values were 21%, 18%, and 5.5% and 1.7%, 1.5%, and 0.5%, respectively. Supporting research data are available upon request.

DISCUSSION

The common approach of calibrating gradients based on anatomical MRI of a phantom with known dimensions assumes gradient linearity over the length scale of the phantom. Although this is a reasonable assumption for a small phantom, the accuracy of the calibration diminishes with phantom size for a given imaging resolution. With a larger phantom that extends beyond the linear region of the gradients, the measured geometry of the phantom, which depends on the gradient profile across the entire phantom, will be overestimated near isocenter and underestimated away from isocenter. This is reflected in the vendor-adjusted gradient scaling values, which we found to differ by −0.6%, −1.5%, and +5.8% in x, y, and z relative to the corrected values. Cyclooctane possesses a number of properties that make it suitable for gradient calibration, including isotropic Gaussian diffusion, relatively low diffusivity and high viscosity, and a single proton resonance. These properties were reflected in the highly reproducible diffusion MRI measurements and the excellent fit of the multiple b-value data. The cyclooctane phantom does not rely on long-term geometric stability and is simple to build in comparison with geometric phantoms, which typically need tight tolerances requiring specialized 3D printing or fabrication.
Errors in diffusion measurements postcalibration were reduced, deviating by <1% with respect to the reference values. Validation with DTI showed that the FA approached zero, as would be expected in an isotropic fluid. Correction factors generated postcalibration were within 0.4% of identity, underscoring the reproducibility of the method. Independent validation using a grid phantom demonstrated that percentage errors in diffusion measurements were roughly double the errors in geometric measurements, supporting the use of diffusion as a sensitive method for gradient calibration. The errors in postcalibration geometric measurements were also <1% with respect to the high-resolution reference values. The slotted plate design of the grid phantom meant that reference geometric data could be obtained with an optical flatbed scanner, at a resolution similar to that of a more expensive micro-CT scanner. Care was taken to position the grid phantom so that the two plates were aligned with the scanner x- and y-axes. In this study, the grid phantom itself served as additional validation and was not a requirement for the calibration procedure. The proposed calibration method is also efficient in terms of scan time. Rather than acquiring new calibration data for each specific diffusion-weighted sequence (6,22), the method presented is non-sequence-specific and improves the accuracy of both diffusion and geometric measurements.

[FIG. 5 caption: (a–c) Maps of the correction factor a in the diffusion calibration phantom, with diffusion along the readout direction (z) and in coronal view, (a) without correction for gradient nonlinearity, (b) with correction in the phase encoding direction (x) only, and (c) with correction in the phase encoding and readout directions. (d–f) Corresponding apparent diffusion coefficient (ADC) maps (mm²/s). The cylindrical geometry of the tube is recovered postcorrection. The estimated a increases and the ADC decreases toward either end of the coil, where the gradient profile becomes increasingly nonlinear. (g) The corrected ADC map (mm²/s) is more homogeneous after correction. (h) Nominal (dotted) and corrected (solid) b-values across a profile in z, as indicated by the dashed line in panel f. Nominal b = 100, 400, 900, 1600, and 2500 s/mm² are shown in blue, green, red, cyan, and magenta, respectively. (i) Nominal (dotted) and corrected (solid) ADC across the same profile along z. The nominal ADC was subsampled by a factor of 10 for display purposes, to distinguish it from the corrected data.]

In our experience, we found that the gradient scaling was a dominant source of error in DTI. Provided that cross-terms and imaging gradients are accounted for (16,19), separate calibration of diffusion gradient strengths across individual diffusion-weighting directions is unnecessary. We further showed that the correction factors could be calculated in the x, y, and z directions in 17 min using NSA = 4 and 5 b-values for fitting, or in 4 min with NSA = 2 and 2 b-values, with a difference in a of <0.1%. Central to the calibration are accurate monitoring of temperature and reliance on high-quality reference diffusivity data. Temperature monitoring systems such as the one we have adapted are readily available and are used routinely for physiological monitoring. This system provided real-time temperature monitoring accurate to ±0.1 °C. In practice, a 60-s sliding window was used to reduce noise in the temperature readings.
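The temperature handling described here is straightforward to prototype. The sketch below shows one plausible implementation of the 60-s sliding-window smoothing and of evaluating a reference diffusivity D_r(T) from a second-order polynomial; the tabulated reference values shown are placeholders and do not reproduce the cyclooctane data of reference 13.

    import numpy as np

    def smooth_temperature(times_s, temps_c, window_s=60.0):
        # Mean temperature within a sliding 60-s interval, to reduce sensor noise.
        times_s, temps_c = np.asarray(times_s, float), np.asarray(temps_c, float)
        return np.array([temps_c[np.abs(times_s - t) <= window_s / 2].mean() for t in times_s])

    # Second-order polynomial fit of reference diffusivity versus temperature.
    # ref_T / ref_D are illustrative stand-ins for published cyclooctane values.
    ref_T = np.array([15.0, 18.0, 21.0, 24.0, 27.0])            # deg C
    ref_D = np.array([4.9e-4, 5.3e-4, 5.7e-4, 6.2e-4, 6.7e-4])  # mm^2/s, placeholders
    poly = np.polyfit(ref_T, ref_D, 2)

    def D_r_at(temp_c):
        # Reference ADC matched to the smoothed sample temperature of each acquisition.
        return np.polyval(poly, temp_c)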
The 2% higher reference diffusivity, D_r(i), postcalibration reflects an increase in mean sample temperature of 0.7 °C from the precalibration to the postcalibration scans, primarily due to heating of the sample during the intervening precalibration DTI scan with high b-values. However, the calibration scans themselves led to negligible sample heating. Given the continuous temperature measurements, the appropriate D_r(i) can be calculated for each acquisition based on the sample temperature at any given time. Nevertheless, when averaging data from multiple repetitions and b-values, accuracy is improved when temperature fluctuations are minimized. It is thus recommended that all calibration scans be performed successively, without interruption by other scans, particularly scans liable to cause sample heating such as those with short TR, multiple refocusing pulses, or strong diffusion gradients. Additional dummy scans may help bring the sample to thermal equilibrium, although this was not found to be a requirement. There are several published reports on the diffusivity of cyclooctane (13,23–25). As far as the authors are aware, only the work of Tofts et al. (13) reports on the diffusivity over a range of temperatures relevant to the present study.

[FIG. 6 caption: The correspondence of the centroids of the holes calculated from the 2D SE MRI data (green) and the reference optical scan data (black) is improved in both plates after correction for gradient nonlinearity. Errors in the physical coordinates of centroids 1–9 (in italics) are summarized in Table 2.]

[Table 2 title: Errors in the Physical Coordinates of Centroids 1–9, as Identified in Figure 6, Between the MRI Measurements dm(x), dm(y), and dm(z) and the Reference Optical Scan Measurements dr(x), dr(y), and dr(z).]

In addition to errors in gradient scaling, gradient linearity decreases with distance from the isocenter, giving rise to perturbations in the uniformity of k-space sampling and apparent FOV, and consequently to image distortions. Current methods for correcting gradient nonlinearities include deformation mapping based on high-resolution geometric information (1–3), and modeling of the gradient field with spherical harmonics (26,27), truncated linear distributions (28), and exponentials of power series (25). The use of deformation mapping approaches requires the manufacture of, typically, 3D grid phantoms to high tolerances. Geometric gold standard data are also needed and have typically been acquired from separate CT scanning or from specifications at the time of manufacture. The former requires access to a CT scanner, whereas the latter assumes accurate phantom manufacture and perfect geometric stability of the phantom over time. Where required, identification of landmarks for registration (e.g., at grid intersections) may limit the accuracy of corrections to the resolution of the MRI data. Alternatively, modeling approaches have been used in conjunction with either simpler diffusion phantoms or with a priori knowledge of the gradient field. These circumvent the cost of building more complex phantoms but make assumptions about the gradient fields. These assumptions may impose constraints on the situations where such corrections are applicable (e.g., where the data are of sufficiently high resolution, and over limited FOVs). Here, we extended the calculation of the correction factors across the sagittal and coronal planes of the diffusion phantom, enabling calculation of continuous deformations for unwarping the image distortions in x, y, and z.
The same deformations improved geometric accuracy in both the diffusion phantom and the grid phantom. Whereas correction for gradient nonlinearity was demonstrated in two orthogonal planes, extending the correction to 3D is straightforward. This would require expansion of Equations [5] and [6] into the slice-select direction, and 3D data acquisition at the expense of imaging time. Another consideration is that the relatively small diameter of the diffusion phantom used here limits the region of support in x and y for calculating correction maps. Ideally, a larger phantom occupying the maximum desired FOV for imaging would be used. We observed that percentage errors in the coordinates of the centroids of the holes in the grid phantom were reduced by up to 20-fold in regions furthest from the magnet isocenter. Residual errors in these regions may be further minimized with the use of a radiofrequency coil with a greater extent of sensitivity along the z-axis. Critically, the proposed method enables 3D model-free prospective correction of gradient scaling and retrospective correction of the effects of gradient nonlinearity, without the need for phantoms with high geometric tolerances or access to a separate micro-CT scanner. The correction is not limited to specific data types and applies over an FOV matching the phantom size. Whereas image distortions were largely removed following correction for gradient nonlinearity, the ADC remained lower at the ends of the FOV in z. We demonstrated that the accuracy of the ADC measurements could be improved significantly by fitting the diffusion data based on recalculated b-matrices on a voxel-wise basis after accounting for the corrected gradient strengths. However, such correction of the b-values does not change the fact that the data in regions of poor gradient linearity will not have been acquired with the same nominal b-values, and this may present problems, particularly in the quantification of non-Gaussian diffusion, where more precisely defined b-values are required. The 2D correction data here were acquired in a single slice, with five b-values, three DW directions, and 100 × 100 μm in-plane resolution, requiring 4 h of scan time. The longer scan time is largely attributed to the use of an SE sequence for high geometric fidelity. The scan time can be readily reduced to under 10 min by acquiring two b-values and lowering the resolution to 500 × 500 μm in-plane, taking advantage of the smoothly varying gradient fields, and interpolating the data in postprocessing. In this study, we investigated a range of b-values up to 2500 s/mm², as appropriate for models such as DTI and diffusion kurtosis imaging. Based on our previous work in calibrating diffusion spectrum imaging, where we showed that the signal decay of cyclooctane was monoexponential up to 10,000 s/mm² (22), the proposed calibration is expected to be valid across this wider range of b-values as well. One consideration is that cyclooctane is flammable. However, the volume of diffusate in the calibration phantom is less than 30 mL. This volume can be reduced further if only correction for gradient scaling is required, as only a few central voxels free of partial volume contamination are needed for fitting. For larger FOV coverage, a sturdier wall construction would be required to securely contain the larger volume of cyclooctane. A limitation of the study is that the linearity of the gradient amplifier response was not investigated explicitly.
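As a rough aside illustrating the voxel-wise b-value correction mentioned above, the following sketch refits the ADC of a single voxel using effective b-values scaled by the local correction factor. It is a deliberate scalar simplification of the full numerical b-matrix recalculation (which also accounts for imaging gradients and cross-terms), and all numbers are placeholders.

    import numpy as np

    def corrected_adc(signals, b_nominal, a_local):
        # The local gradient amplitude is the nominal one divided by the correction
        # factor a, so to first order the effective b-value is b_nominal / a**2.
        b_eff = np.asarray(b_nominal, float) / a_local ** 2
        slope, _ = np.polyfit(b_eff, np.log(np.asarray(signals, float)), 1)
        return -slope

    # Illustrative voxel far from the isocenter, where a = 1.10 (placeholder value)
    b_nominal = [100, 400, 900, 1600, 2500]        # s/mm^2
    signals = [990.0, 905.0, 780.0, 640.0, 480.0]
    print(corrected_adc(signals, b_nominal, a_local=1.10))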
However, the excellent fit of the multiple b-value data and the positive results of the geometric validation, where the imaging and diffusion gradient strengths ranged from 3.56 to 628.1 mT/m, suggest that the amplifier linearity was good within this range of gradient strengths. A second limitation is that only one radiofrequency coil was used for calibration. Because different radiofrequency coils may give rise to different eddy current behavior, it could be beneficial to use pulse sequences such as the twice-refocused SE (29) that minimize eddy currents. In the present study, eddy currents were mitigated by using volume transmit coils with shields built from overlapping slits (30), and were not found to be a major issue. While the proposed gradient calibration was demonstrated on a preclinical scanner at 9.4 T, the method is equally applicable for calibrating gradient systems on clinical scanners at lower field strengths. The key differences of a clinical system compared with the preclinical system employed in this study are the lower gradient strengths, larger FOV, and potential eddy current effects. These require longer diffusion times, more robust phantom construction, and hardware or pulse sequences that minimize eddy currents, respectively, but are otherwise not an impediment to the implementation of the method. Because anatomical and diffusion MRI are ubiquitous in the clinic, improving the accuracy of such measurements over an extended FOV could find widespread clinical application.
2018-04-03T05:06:42.473Z
2016-01-08T00:00:00.000
{ "year": 2016, "sha1": "9f53acb8653335de75d145e1a76e4588375d8852", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mrm.26105", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9f53acb8653335de75d145e1a76e4588375d8852", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
119461315
pes2o/s2orc
v3-fos-license
The Link between Magnetic-field Orientations and Star Formation Rates Understanding star formation rates (SFR) is a central goal of modern star-formation models, which mainly involve gravity, turbulence and, in some cases, magnetic fields (B-fields). However, a connection between B-fields and SFR has never been observed. Here, a comparison between the surveys of SFR and a study of cloud-field alignment - which revealed a bimodal (parallel or perpendicular) alignment - shows consistently lower SFR per solar mass for clouds almost perpendicular to the B-fields. This is evidence of B-fields being a primary regulator of SFR. The perpendicular alignment possesses a significantly higher magnetic flux than the parallel alignment and thus a stronger support of the gas against self-gravity. This results in overall lower masses of the fragmented components, which are in agreement with the lower SFR. It is not difficult to imagine that the SFR of a molecular cloud should be related to the mass of gas it contains, due to self-gravity. On the whole, this trend has indeed been observed 3,4 . On the other hand, star formation efficiencies of molecular clouds are usually just a few percent 6 while cloud ages are at least comparable to their free-fall time (~Myr) 7 . This requires other forces to slow down gravitational contraction 8 . In addition, clouds with a similar mass and age can have different SFR; for example, it is well known that the Ophiuchus cloud has a significantly higher SFR than its neighbour, the Pipe Nebula 3,4 . In Figure 1, we can see a significant difference between the two dark clouds: Ophiuchus is accompanied by the colourful nebulae, which is a signature of active stellar feedback, while Pipe looks very quiescent in comparison. No one knows the reason for these differences in SFR. Another good example is the pair of Rosette and G216-2.5 molecular clouds 9 , where turbulence had been suspected as the cause of their very different SFRs. But later their turbulent velocity spectra were found almost identical 9 . Recently, a good agreement has been discovered between the empirical column density threshold for cloud contraction and the magnetic critical column density of the Galactic field (~10 µG) 5,10 . For densities lower than this threshold, the field strength is independent of densities 11 ; i.e., gas must accumulate along field lines. Also, a bimodal cloud-field alignment, which is another signature of field-regulated (sub-Alfvenic turbulence) cloud formation, was recently observed in the Gould Belt 5 , where, interestingly, Pipe and Ophiuchus ( Figure 1) are aligned differently from their local fields. More intriguingly, it is the one aligned with the B-field that holds the higher SFR. Together with another observed piece of evidence that Galactic B-field direction anchors deeply into cloud cores 12 and thus plays a role in cloud fragmentation 13 , we were prompted to survey whether the cloud-field alignment has a connection with SFR. Luckily, both the cloud-field alignment 5 and SFR 3,4 of the Gould Belt clouds have been very well studied (Table 1). Both Heiderman et al. 3 and Lada et al. 4 assume that SFR is directly related to the population of young stellar objects (YSOs) in a cloud. The YSOs in the Gould Belt have similar ages of 2 ± 1 Myr (millions of years) and the median for the initial stellar mass is around 0.5 M (solar mass), so the SFR of each cloud can be estimated by the number of embedded YSOs multiplied by 0.5 (M ) and divided by 2 (Myr). 
As mentioned at the outset, we are looking for factors in SFR other than mass, so here we study SFR per unit mass (SFR/mass). Heiderman et al. estimate cloud mass from regions above A_V = 2 mag, and Lada et al., besides a threshold similar to that of Heiderman et al., also used A_V = 7 mag. Note that while the Herschel space telescope revealed ubiquitous sub-cloud filamentary density structures 14 , where most of the protostars are forming, YSOs, on the other hand, do not correlate with these filaments in position 15,16 (see Supplementary Figure 1 for example). So the resolution of the SFR 3,4 , and thus of our study of their correlation with field orientations, remains at cloud scales. Dust grains in molecular clouds cause the 'extinction' of background starlight (see Figure 1 for examples), which is widely used to study density structures and B-fields at cloud scales. A cloud orientation can be specified by the direction in which the autocorrelation length of the extinction map reaches the maximum 5 . The residual background starlight after extinction is polarized along the B-fields. Li et al. 5 defined a cloud B-field direction by the Stokes mean of all the polarization detections in the ~10-parsec region surrounding the cloud. Based on Monte Carlo simulations and Bayesian analysis 5 , they concluded that 95% of the 3-D cloud-field angles must be within 20° of either 0° or 90°. Figure 2 correlates the SFR/mass with the cloud-field angles and shows that large-angle clouds have consistently lower SFR/mass, no matter which A_V threshold is used. Here we perform hypothesis tests to see how significant the trend is. Due to the small sample size, a non-parametric permutation test is more appropriate than a parametric test. Let μ_L and μ_S be the population mean SFR/mass of the large-angle group, L, and the small-angle group, S, respectively. To test whether μ_L is significantly smaller than μ_S, we adopt the null hypothesis H_0: L and S stem from the same population distribution, i.e., μ_L = μ_S, and the alternative hypothesis H_1: μ_L < μ_S. As the sample mean is an unbiased estimator of the population mean, we use T_obs = <L> − <S> as the test statistic for the observed samples, where <...> denotes the sample mean. The logic of the permutation test goes as follows. Under the null hypothesis, Z = (L, S), the combination of L and S, should still be an independent and identically distributed sample from the same population distribution. Any regrouping of Z into two groups, Z_1 and Z_2, of the same sizes as L and S will also produce two independent samples from the same distribution. The test statistics, T = <Z_1> − <Z_2>, from such regroupings should have a mean equal to 0. By enumerating all possible regroupings of Z while maintaining the same sample sizes, we will obtain C(m+n, m) ("m+n choose m") values of T, where m and n are the sample sizes of L and S. The percentage of the T's that are lower than or equal to T_obs is an estimate of the p-value. H_0 should be rejected if the p-value < 0.05. The data from Lada et al. have m = n = 4, and the observed T_obs = −3.66% Myr⁻¹ (Table 1) is lower than the T from any other regrouping. This is not a surprise given that L is composed of the four lowest SFR/mass values in Z. So the p-value is 1/C(8,4) = 1/70 ≈ 0.014. The Heiderman et al. data come with estimates of the uncertainties (Table 1), and we take the uncertainty into consideration based on a permutation test on bootstrap estimates of the test statistics.
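The exact permutation test described above is short enough to write out. The sketch below (Python; the SFR/mass values themselves would come from Table 1 and are not reproduced here) enumerates all C(m+n, m) regroupings and reports the fraction of test statistics at or below the observed one.

    import numpy as np
    from itertools import combinations

    def permutation_p_value(L, S):
        # One-sided exact permutation test of H1: mean(L) < mean(S).
        L, S = np.asarray(L, float), np.asarray(S, float)
        Z = np.concatenate([L, S])
        m = len(L)
        T_obs = L.mean() - S.mean()
        T_all = []
        for pick in combinations(range(len(Z)), m):   # all C(m+n, m) regroupings
            mask = np.zeros(len(Z), dtype=bool)
            mask[list(pick)] = True
            T_all.append(Z[mask].mean() - Z[~mask].mean())
        return float(np.mean(np.asarray(T_all) <= T_obs))

    # With m = n = 4 and L holding the four lowest values, this returns 1/70 ~ 0.014.

The bootstrap variant used for the Heiderman et al. uncertainties, described next, simply repeats this calculation on values drawn from the per-cloud log-normal distributions.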
For the given mean and standard deviation of the SFR/mass observed from a cloud, we treat them as the mean and standard deviation of a log-normal distribution of SFR/mass. For a grouping of Z, to get a bootstrap estimate of the test statistic, one value for each cloud is randomly sampled from the corresponding log-normal distribution, and from these sampled values, a test statistic, T i , can be derived. After 1000 rounds of the above random sampling, 1000 T i 's are obtained, and their mean is used as the estimate of the test statistic of that particular grouping. The same process is applied to all the 11 C 5 groupings, and the test statistic derived from the grouping of L and S gives the estimate of T obs . Accordingly, the p-value is 0.0043. Besides the permutation tests, Spearman rank correlation tests are also performed and summarized in Methods, which show that the negative correlation between SFR and cloud-field angles ( Figure 2) is also significant (p-value < 0.006). Also in Methods is the analysis of Planck polarimetry data as a double-check of our discovery. Three reasons motivate Li et al. 5 to use optical data. First, both observations 12,13 and simulations 17,18 have shown that field directions from lowdensity cloud vicinities, where optical data tends to trace, are closely aligned with fields inside the clouds down to cloud cores. Second, for cloud vicinities, Planck data suffers more background contamination, because the distances of background stars can be selected for optical polarimetry analysis but Planck is sensitive to the entire lines of sight. Third, cloud B-fields have a higher chance of being affected by the embedded stellar feedback than the fields in the vicinities, so the latter should better preserve mean field orientations prior to star formation; a good illustration is Perseus, as discussed in Methods. However, given the first point, stellar feedback should not completely change field morphologies, and we should still see some correlation between Planck data and SFR. Indeed, hypothesis tests (see Methods) suggest that Planck and optical data correlate with SFR similarly, which, given the similar trends seen in Figure 2, should not be difficult to conceive. While the Planck team 30 show that the alignment between column density contours and B-fields tends to move away from parallelism as column density increases, we stress that it should not be interpreted as B-fields tending to be perpendicular to high-density structures (A V > 2 mag), as we see comparable populations for large and small cloud-field angles ( Figure 2; reference 5). Supplementary Figure 5 is dedicated to bridging the analyses from reference 5 and 30. So SFR/mass is observed to be significantly lower for large-angle clouds. Though the fact that cloud fields are ordered has only been discovered very recently 10,12,13 , the way in which stars can form under this condition has been considered for more than half a century: the mass-tomagnetic-flux ratio, M/Φ, has to be above a critical value for gas to collapse against magnetic pressure 19,20 . Here we try to understand Figure 2 along the same line of thought. First, the M/Φ of an elongated cloud depends on cloud-field orientation. To visualize this, consider a cylindrical cloud with length l > width d (Figure 3). If the long axis is aligned parallel or perpendicular to B-fields, the magnetic flux is, respectively, Bπd 2 /4 and Bld ( Figure 3). The ratio between the two is in the order of d/l < 1, i.e. 
when aligned perpendicular to B-fields, a cloud as a whole experiences a higher magnetic flux and thus more support from B-field against self-gravity. In other words, the orientation parallel to the field possesses a higher "magnetic criticality" (the ratio of M/Φ to the critical value), which agrees with the overall higher SFR/mass. We can further test the idea as follows. For the M/Φ of a subregion, consider a uniform linear density, λ, along the cloud long axis. For the parallel alignment, M/Φ of a subregion is proportional to the scale, s, along the axis: 4λs/Bπd 2 ≃ λs/Bd 2 = (s/d)λ/Bd. On the other hand, for the perpendicular case, M/Φ = λ/Bd is independent of s. Only the M/Φ of parallel alignment can exceed λ/Bd and grow a massive fragment that is impossible in the perpendicular case. Kaufmann et al. 21 studied the mass-size relation of the fragmentation in clouds Taurus, Perseus, Pipe, and Ophiuchus, which can serve as a test of the argument above. Their results are summarised in Figure 3 (right panel): the upper limit of the fragmented mass is indeed systematically higher for the parallel case. For subregions with M/Φ below the critical value, fragmentation can still be achieved by either reducing the flux through ambipolar diffusion or increasing the mass through gas accumulation along the fields 19,20 . Whether the former presents in molecular clouds is still under intense debate (e.g. reference 22-24; also, field diffusion due to other mechanism -magnetic reconnection -is proposed recently 25 ). In the latter scenario, a smaller cloud-field angle is easier for both gravity and turbulence to accumulate gas along the field to enhance λ (and thus the subregion M/Φ). Of course, in reality, clouds are not uniform cylinders and our model is still far from making a quantitative prediction, but the assumption helps us appreciate the fact that B-fields perpendicular to a cloud can more effectively hinder massive fragmentation and thus SFR. Magnetic fields are generally treated as isotropic pressure, like gas pressure, in existing SFR models 1,2 . However, the anisotropy of magnetic pressure is why the cloud-field alignment matters in the discussion above. As the discovery of ordered cloud B-fields has just started to constrain theories and numerical simulations of star formation (e.g. reference 10,17,18,26,27), being able to explain the bimodal cloud-field alignment 5 and SFR (Figure 2) should also be included among the criteria of a successful cloud/star-formation theory. Supplementary Figures 2), to define the regions for which we calculate cloud mean fields. We stay away from low-density regions because they are more vulnerable to background contamination (see discussion in main text). Also, these thresholds of intermediate density are quite close to the ones used to define cloud masses in the SFR study 3 (Supplementary Figures 1). The 14 regions for mean field calculation are shown in Supplementary Figures 1-3 and the results are listed in Supplementary Table 1. In a similar way to reference 5, the mean field direction can be estimated by utilizing the Stokes mean of all the polarized detections within a selected region. Comparison between Planck and optical polarimetry data In reference 5, for each cloud, optical polarimetry data are collected from a region > 10 pc in scale in order to get enough detections for the mean field direction. If two clouds are too close, e.g. Musca/Chamaeleon and Orion A/B, reference 5 cannot resolve them (see footnote "d" and "g" of Table 1). 
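For reference, the Stokes mean used for the mean field direction can be sketched as below (Python; equal-weight averaging is assumed here, matching the description of reference 5 given in the Perseus discussion that follows, and the input angles are purely illustrative). Position angles are axial quantities with a 180° degeneracy, so the averaging is carried out on 2θ.

    import numpy as np

    def stokes_mean_angle(pa_deg):
        # Equal-weight Stokes mean of polarization position angles (axial, 180-deg degenerate).
        theta = np.radians(np.asarray(pa_deg, float))
        q = np.mean(np.cos(2.0 * theta))
        u = np.mean(np.sin(2.0 * theta))
        return (0.5 * np.degrees(np.arctan2(u, q))) % 180.0

    # Angles straddling the 0/180 wrap average to ~0 deg rather than ~90 deg:
    print(stokes_mean_angle([178.0, 2.0, 176.0, 4.0]))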
There is no such limitation given to the 5-arcmin resolution of Planck, so there are two more clouds in Supplementary Table 1. The most prominent difference between the two data sets occurs with Perseus cloud: it switches from a small-angle cloud to a large-angle cloud ( Figure 2). Goodman et al. 31 found that the B-field direction inferred from optical polarization in the Perseus region is bimodal (Supplementary Figure 4), with less polarized vectors (P < 1.2%) aligned along the cloud's long axis and stronger polarized vectors (P > 1.2%) lying roughly perpendicular to the long axis. Reference 5 took an equal-weight mean of all the vectors, so the group with more vectors, which is the one aligned with the cloud, wins. Planck data, on the other hand, favours the regions with higher polarized flux along a line of sight, which can be more comparable to the stronger polarized optical vectors. Since the polarized ratio may increase with the distances of stars 32 , Goodman et al. 31 proposed that two clouds at different distances possessing differing field orientations could be superimposed along the line of sight. The background of Supplementary Figure 4 is IRAS-derived dust column density 33 , which traces primarily warm dust surrounding young stars and H II regions. Since the polarization efficiency of dust associated with cold molecular gas can be lower than the efficiency of dust embedded in warm atomic gas 34 , Bally et al. 33 proposed that the stronger polarization in Perseus is predominantly associated with the ionization front of the H II region G159.6-18.5, which is well traced by the IRAS-derived dust map, and the less polarized vectors mostly trace magnetic fields associated with the Perseus cloud itself. It is also common to see that thermal dust emission polarimetry data traces B-fields associated with ionization fronts 13,32 . Further away from the H II region, we can see in Supplementary Figure 4 that the fields from NGC 1333 4A/B traced by sub-mm polarimetry are also parallel with the long axis of Perseus. In any case, the situation of Perseus is unique, so the permutation tests (see main text) are repeated using Planck data either with or without Perseus. The results are as follows: Therefore, the bimodal SFR is still significant with Planck data. Spearman rank correlation test While reference 5 has concluded a 3-D bimodal cloud-field alignment, and we adopted this point of view to show that SFR/mass can also be divided into two groups, it is difficult to distinguish "two clusters" from "one decreasing trend" for the plots in Figure 2. The mass-to-flux model we proposed to explain Figure 2 is consistent with both possibilities. So here we study how significant the decreasing trend is. We use Spearman rank correlation (SRC) test, which is based on the Pearson Correlation Coefficient between the ranks of two marginal observations. Since the SRC test is based purely on ranks instead of the original quantities, it is insensitive to outliers. For our cases, SRC(rank SFR , rank cf ) are between the ranks of SFR, rank SFR , and the ranks of cloud-field angles, rank cf ; they are as follows: SRC ranges within ± 1; the more positive/negative a SRC is, the more positively/negatively correlated are the ranks. SRC ≈ 0 means no correlation. Therefore, a negative correlation between SFR and cloud-field angles, i.e. a decreasing trend in Figure 2, is clearly based on the observed SRCs (SRC obs ) in the above table. 
In a similar way to the permutation tests for the bimodality, we perform a permutation test for the decreasing trend. For the SFR from Lada et al. 4 , we randomly permute rank_SFR to generate rank_SFR_rand_i, for i = 1, 2, ..., k, where k is the total number of permutations used for p-value estimation. We adopt the null hypothesis H_0: SRC(rank_SFR, rank_cf) = 0, and the alternative hypothesis H_1: SRC(rank_SFR, rank_cf) < 0. The p-value is thus estimated by the relative frequency of obtaining SRC(rank_SFR_rand_i, rank_cf) < SRC_obs. The p-values in the table above are estimated with k = 1000. For the SFR and their uncertainties from Heiderman et al. 3 , again, we treat them as the means and standard deviations of log-normal distributions. To obtain a bootstrap sample, we sample one value for each cloud from its corresponding log-normal distribution. In a similar way to the Lada et al. data, a p-value can be estimated by permutation for this particular bootstrap sample. By repeating this sampling N times, we obtained N resampling p-values (p_1, p_2, ..., p_N), which resulted from N independent and identically distributed samples. Let S_obs = −2 Σ_{t=1}^{N} ln(p_t). Under H_0, S_obs is known to follow a chi-square distribution with 2N degrees of freedom; if the alternative hypothesis holds, the p_t's will be small and S will be large. Thus, the final p-value is estimated by the right-tail probability P(χ²_{2N} > S_obs). The decreasing trends are highly significant based on the p-values reported in the above table, where N = 1000.

Data availability

The observational data that support the plots within this paper and other findings of this study can be found in references 3–5 and 30. The Planck 353 GHz data can be downloaded from http://pla.esac.esa.int/pla/#maps.

Table 1. Cloud long axes and B-field directions adopted from reference 5 (increasing counterclockwise from Galactic north). The first 6 clouds have directional differences larger than 70 degrees, while the other 6 have differences of less than 30 degrees. The errors in field direction are defined by the interquartile ranges of the polarization detections. The long-axis direction errors are less than 15 degrees 5 . b. SFR and cloud mass adopted from reference 4, for which errors are not available. c. SFR and cloud mass adopted from reference 3; errors are propagated from those of the SFR and cloud mass. The SFR/mass from reference 3 is systematically lower than that from reference 4, since the latter used a larger column density threshold to estimate the cloud masses.

Figure 1. The Pipe Nebula, overlapped with the red lines 28 , tends to be perpendicular to the B-fields.
On the other hand, the Ophiuchus cloud, covered by the yellow lines 29 , is largely aligned with the fields. They are not special cases in the Gould Belt, where most molecular clouds are elongated and tend to be aligned either parallel or perpendicular to the local B-fields 5 .

Figure 2 caption (fragment): (Table 1). For each survey, the SFR/mass is normalized to the mean. The cloud long axes and B-field directions based on optical data in the upper plot are adopted from Li et al. 5 (Table 1). In the lower plot, we replace the B-field directions from Li et al. 5 with Planck 353 GHz polarization data 30 (Methods; Supplementary Table 1). The Perseus cloud, shown as hollow symbols, has significantly different mean field directions based on optical and Planck data (see the main text and Methods for discussion). The "alignment parameters" used in reference 30 are shown in Supplementary Figure 5 for the clouds noted as aligned and as perpendicular, to illustrate the connection between the two analyses.

Figure 3. Left: A cross-section perpendicular to the B-field is highlighted for the cylindrical cloud when it is parallel (red) or perpendicular (green) to the local field. The area of each cross-section is shown below the cloud. Right: The distribution of the mass-size relation, m(r), of the fragmentation from four Gould Belt clouds 21 . Comparable to the left panel, the areas within the red dashed line or red shaded region are from small-angle clouds, based on the optical polarimetry data. The large-angle clouds, within the green dashed line or green shaded region, have an overall lower fragmented mass. The dotted dark lines are for reference and follow either m(r) ∝ r² or m(r) ∝ r for an isothermal equilibrium sphere at 10 K.
2019-04-13T18:29:56.754Z
2017-06-19T00:00:00.000
{ "year": 2017, "sha1": "1743dcd943d7fefae83c0460a5e3b25c917a302b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1706.08452", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8710e30c8065c841826d73a095c4de73e5178816", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
273021828
pes2o/s2orc
v3-fos-license
Syringomyelia in three small breed dogs secondary to Chiari-like malformation: clinical and diagnostic findings Three small breed dogs were referred for the evaluation of neurologic deficits. Upon physical and neurologic examination, all dogs displayed hyperesthesia, pain, and neck stiffness. Magnetic resonance imaging was performed on the brain and spinal cord, and all three dogs presented Chiari-like malformations and syringomyelia. These dogs were treated with prednisolone and furosemide, and showed rapid improvement of clinical signs. Chiari malformations and syringomyelia were not improved because of congenital disorders. This case report demonstrates the clinical and diagnostic features of Chiari-like malformations and syringomyelia in three small breed dogs. Syringomyelia (SM) of unknown etiology is a condition in which fluid containing cavities develop within the spinal cord parenchyma [9].Although the cause of SM is unknown, the condition may result from venous obstruction or distension, or may be due to mechanical disruption or shearing of spinal cord tissue planes [9].Cervical pain is a predominant clinical sign of this disease, which is reported in approximately 80% of affected humans and 35% of affected dogs [7,10], although there is some controversy as to how this disease results in pain.In addition to pain, dogs with SM often scratch at one area of the shoulder, ear, neck or sternum, and may have other neurological deficits such as cervical scoliosis, thoracic limb weakness, and pelvic limb ataxia [6]. Medical treatment can help, but typically does not resolve the clinical signs.Non-steroidal anti-inflammatory drugs, corticosteroids, gabapentin, and oral opioids can be used for the treatment of SM [10].The most common procedure performed is foramen magnum decompression, where the hypoplastic occipital bone and sometimes the cranial dorsal laminae of the atlas are removed (with or without a durotomy) to decompress the foramen magnum [1,3]. This case report demonstrates SM/Chiari-like malformation (CM) in three small breed dogs primarily showing hyperesthesia, pain, and neck stiffness based on clinical and diagnostic findings. Case No. 1: A 6-year-old, spayed female Poodle dog weighing 4.1 kg was presented with neck stiffness, hyperesthesia and hind limb ataxia.Uncoordinated gait of hind limb was acutely presented 3 days ago and had maintained steadily until the admission day.No abnormalities on the complete blood count (CBC) and serum biochemical profile were detected.Neurological examination revealed decreased postural reactions in both hind limbs, though cranial nerves and spinal reflexes were normal.Based on the examination, myelopathy or cerebellar diseases were suspected.Magnetic resonance imaging (MRI) scan of the brain and spinal cord was obtained using a 0.2 Tesla (E-Scan; Esaote, Italy) in transverse, sagittal, and dorsal, T1-and T2-weighted images.Caudal cerebellar herniation through the foramen magnum and syrinx in the spinal cord between second and fourth cervical vertebrae were noted (Fig. 1). Based on these findings, the dog was diagnosed with Chiari-like malformations and syringomyelia (CM/SM).The dog was treated with furosemide (Lasix, 2 mg/kg, PO, BID; Handok Pham, Korea) and prednisolone (Prednisolone, 1 mg/kg, PO, BID; Korea Pharma, Korea).The ataxia mildly improved on the third day and completely disappeared by the second week.Medication had been tapered off over 2 months.There was no relapse for 6 months until last follow-ups. Case No. 
2: A 3-year-old, intact male Maltese dog weighing 4.16 kg was presented with the neck pain, stiffness, and weakness in the hindlimbs for 7 days.The owners also reported that the dog had a mild tendency to scratch at its mid-cervical area and was becoming more sensitive.Physical and neurologic examinations revealed cervical pain (including cervical stiffness), shivering, hyperesthesia, bilateral patellar luxation, and tachypnea.CBC, serum biochemistry, and radiography were normal.Brain and spinal MRI scans was performed with 0.2 Tesla unit (E-scan; Esaote, Italy).T1-and T2-weighted images and gadolinium enhanced T1-weighted images were obtained.On T2-weighted images, a hyperintense lesion was found on the pons area and the syrinx formation was more obvious (Fig. 2).Based on the hyperintensity in the pons, an inflammatory status such as granulomatous meningoencephalitis (GME) was also suspected in this case.CM/SM was observed in the midsagittal MRI (Fig. 2).The dog was treated with furosemide (2 mg/kg, PO, BID), prednisolone (1 mg/kg, PO, BID) for 1 month.Then, prednisolone was continued to taper down for another 4 weeks.After 5 days of treatment, the clinical signs of the dog improved to normal condition.Since then, no side effects or relapses have occurred in over 12 months.We kept 1 mg/kg of prednisolone for 1 month and it continued to taper down for another 4 weeks. Case No. 3: A 2-year-old, spayed female Yorkshire terrier dog weighing 2.08 kg was presented due to right-sided hemiparalysis with no urination and defecation and cervical pain for 2 days.Upon neurological examinations, right-sided hemiparalysis with episcleral engorgement and delayed pupillary light reflex were observed.Moreover, the dog indicated pain over the cervical area during palpation.Serum biochemistry revealed mildly elevated creatinine kinase (265 U/L; reference range, 10 to 199 U/L).According to the owner's report, the dog was becoming more sensitive around her right cervical area over the past month.In addition, the dog developed hyperesthesia and right-sided limb weakness during that one month.Based on the initial examination, this dog was suspected of having a intracranial disorder.MRI scans of the brain and spinal cord was performed with the same equipment as in cases 1 and 2. Significantly asymmetrical enlargement of lateral ventricles was observed (Figs.3A and B) and CM/SM was evident on both T1-and T2-weighted images.On the midsagittal MRI of cervical spinal cord, long syrinx formation was evident (Fig. 3C).Serial transverse T1-and T2-weighted images of the spinal cord also showed asymmetrically dilated central canals tilting to the right side (Fig. 3D).CSF evaluation revealed mild neutrophilic pleocytosis and slightly increased protein level (47 mg/dL; reference range: < 25 to 35 mg/dL).The dog was treated with the same treatment protocol as case No. 2. The symptoms nearly disappeared by the 7 th day of treatment and this dog had a very good response to the treatment.There were no recurring symptoms 10 months after discontinuation of therapy.The three dogs in this case study were diagnosed with CM/SM.Cervical pain, hyperesthesia, and neck stiffness were the only clinical signs common to all three dogs in this case report.According to the medical history and physical examinations of this case group, it was suspected that the skin over one side of the head, neck, shoulder or sternum might be overly sensitive to touch and the dogs frequently scratch at that area often without making skin contact. 
Neuropathic pain can be defined as clinical state of pain accompanied by tissue injury of somatosensory processing in the peripheral or central nervous system, which includes spontaneous pain, paresthesia, dysthesia, allodynia, or hyperpathia [8].It is hypothesized that the pain-associated behavioral changes of dogs affected by SM are due to neuropathic pain, probably because of injured neural processing in the damaged dorsal horn [7].The dorsal horn has a key role in the perception of sensory information and transmission to the brain, and sometimes the neural connections and communications through the dorsal horn can be reorganized, resulting in persistent pain states [11]. In this case group, the presence or absence of the signs of probable SM associated pain was recorded.On the midsagittal T1-and T2-weighted MRIs of the three dogs, syrinxes were observed along the cervical spinal cord.Especially, the transverse MR images through the syrinxes explained the right sided asymmetrical region around the dorsal horn in case No. 3. It was thought that injury to the right dorsal horn might cause the right-sided hemiparalysis in case No. 3 due to recent studies [10].The hyperintense lesion in the pons of case No. 2 may indicate an inflammatory process like GME, which would be an alternative cause of neck pain.As a possible treatment option, surgical correction is recommended for CM/SM to correct the underlying anatomical or functional abnormality.However, even after an apparently successful procedure resulting in the collapse of the syrinx, the patient may still experience significant pain, especially if the spinal cord dorsal horn was compromised [4,5].In dogs, surgery appears less successful than in humans because, although there may be a clinical improvement, SM is generally persistent [2,7].Until a reliable surgical option is defined, pharmaceutical treatment of the clinical signs is likely to be the mainstay of veterinary therapy.Although the three dogs in this case study have had no relapse of the clinical signs after discontinuation of the therapy, long-term monitoring and life-long medical therapy are required because SM is a chronic and intractable condition. This case report demonstrates that CM/SM is clearly related to neck pain/stiffness and hyperesthesia.Better understanding of the pain symptoms of CM/SM might lead to the possibility of more effective medications and resolutions with dogs suffering from pain of unknown etiology in the veterinary clinics. Fig. 2 . Fig. 2. MRI images of case No. 2. On midsagittal MRIs (A and B), CM (arrow heads) with syrinx formation (arrows), indicating syrinomyelia (SM) is more evident on the T2-weighted image (B).The hyperintense lesion in the pons is also observed on the T2-weighted image (B).Serial transverse MRIs (C and D) reveal the dilation of the central canals (arrows).The dilated central canal is clearer on the T2-weighted image with hyperintensity (D). Fig. 1 . Fig. 1.MRI features of the dog in case No. 1. Chiari-like malformation (CM; arrow heads) and syrinx in the spinal cord between second and fourth cervical vertebrae (arrows) were noted on the midsagittal T1-(A) and T2-(B) weighted images.Transverse T1-(C) T2-(D) weighted images at the level of the third cervical vertebrae revealed syrinx with an enlarged central canal (arrows). Fig. 3 . Fig. 3. MRI features of case No. 3. 
Marked asymmetrical dilation of the lateral ventricle is confirmed on T1- (A) and T2- (B) weighted transverse MRIs. CM/SM (arrow) is evident on the midsagittal MRI of the cervical spinal cord (C). Transverse T1-weighted image of the spinal cord also demonstrates an asymmetrically dilated central canal (arrow) tilting to the right side (D).
2014-10-01T00:00:00.000Z
2009-11-26T00:00:00.000
{ "year": 2009, "sha1": "2b2d45b2fa5232d1cbf3709f2c099a155048d4ce", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4142/jvs.2009.10.4.365", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2b2d45b2fa5232d1cbf3709f2c099a155048d4ce", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
224965230
pes2o/s2orc
v3-fos-license
Simplified Seismic Vulnerability Assessment Methods: A Comparative Analysis with Reference to Regional School Building Stock in Italy. The paper compares several simplified methods proposed in the literature for assessing the seismic vulnerability of existing buildings. Type and number of input and output data, limitations of use for different structural typologies, and complexity of use are examined for each methodology to identify the most suitable for assessing the vulnerability of a given class of buildings, based on the available data, the computational effort, and the type of vulnerability judgment. The selected methods were applied to a sample of school buildings located in the province of Naples (Italy). Data were available through a digital platform and were used to verify the possibility of providing reliable large-scale vulnerability judgments based on a reduced set of information, without carrying out additional surveys. The most simplified methods were applied to a sample of about a thousand buildings, while more detailed methods, needing more information, were applied to a smaller sample. The comparison between the results obtained from different methods allows highlighting the advantages and weaknesses of each, so as to identify the convenience of their use according to the specific available information and the objectives of the analysis, and, finally, to evaluate which is more or less safe.

Introduction

The Italian structural heritage is mostly made up of non-recent buildings. In addition to a large number of monumental buildings, having an intrinsic cultural and artistic value, a high percentage of buildings and infrastructures have, indeed, exceeded the ordinary limits of design lifetime. Therefore, it is important, especially after the recent seismic events that occurred in Italy [1–4], to carry out a seismic risk analysis on a 'large', i.e., territorial, scale. As known, the seismic vulnerability, i.e., the probability that a building will suffer damage following an earthquake of a given intensity, is one of the risk components for constructions. The other two components are the probability of occurrence, in a given site, of earthquakes of a certain intensity (hazard) and the value of the expected losses in case of a certain level of damage (exposure). Based on the previous definition, the assessment of the seismic vulnerability of a building can be specialized in several ways, mostly depending on the objectives of the analysis itself. In this perspective, the 'scale' (single building, aggregates, urban scale, and territorial scale) which the analysis refers to is one of the aspects that play a fundamental role in choosing the most appropriate way of estimating the vulnerability. The assessment of 'large-scale' vulnerability is very useful for planning interventions aimed at increasing the seismic safety with a priority criterion and for spreading a preventive, rather than protective, approach to the evaluation of the seismic risk. To this aim, simplified methods for assessing the seismic vulnerability were considered: the most simplified ones were applicable to the whole sample (about 1000 buildings), while the more detailed methods could be applicable to a reduced number of buildings (only 14 masonry buildings). The fundamental assumption of this study is that the available data about the examined school buildings and usable in the analyses come from a 'closed' database, which is part of a digital national platform aimed to draw up a catalogue of the school buildings and where very diversified information (functional, logistic, structural, etc.)
are collected. Thus, the structural data necessary for applying vulnerability methods were not obtained by on-site surveys directly carried out by the authors or by technicians. Such a condition is in the frame of the 'large scale' vulnerability assessment philosophy, since preliminary analyses can often be carried out only based on the few data made available by administrative offices, not coming from technical analyses or inquiries. This circumstance means that several methods available in the literature are not applicable to the examined database due to the lack of detailed information. The idea of the paper is, indeed, to verify the possibility of providing reliable 'large scale' vulnerability judgments based on a minimum set of information, already available on local or national platforms without the necessity of carrying out additional surveys, i.e., 'at zero cost' in terms of time and resources, and, if so, to try to extend the experience implemented on the case study of the Naples province to other realities. According to such an approach, the choice of suitable methods for assessing the seismic vulnerability on the large scale should be consistent with the available knowledge level of the buildings and, thus, not all the methods are reliable (i.e., the mechanical ones based on push-over analysis, even if simplified). The comparison between the results obtained from the application of the different methods allows one to highlight advantages and weaknesses of each method, to identify the convenience in the use of them according to the specific available information and the objectives of the analysis, and finally to evaluate which method is more or less safe in the frame of a 'large scale' assessment of the seismic vulnerability. Simplified Vulnerability Assessment Methods for Masonry and RC Building Structures As described in detail in Section 1, several methods are available in the literature for the assessment of seismic vulnerability. In this paper, the attention will be focused on the methods suitable for the available database of buildings that will be introduced in the following. Tables 1 and 2 list all the methods analyzed and the input data, respectively, for the following two macrocategories: -Group A, which includes methods that require few input parameters and provide a qualitative output in terms of the final vulnerability judgment; -Group B, which includes methods that require a relevant number of input parameters and generally provide a quantitative safety assessment, based on the definition of a safety index. Some of these methods also include a vulnerability judgment associated with the numerical safety index. Each method was identified by a label that will be used for brevity in the following sections. For the methods of Group B, which allow one to calculate a simplified shear capacity or a vulnerability index, Table 3 also summarizes the input parameters necessary for calculating the resistant (capacity) and the requested (demand) shear. Figure 1 shows a direct comparison among the nine methods selected for this study. In particular, the number of input parameters required by the different methods was compared. It should be noted that the number of parameters counted for the procedures of Group B also includes the detailed parameters for the seismic shear resistance and demand of masonry buildings (Table 3).
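Purely as an illustration of this screening logic, the sketch below filters candidate methods by the data actually available for a building; the per-method input lists are drastically simplified placeholders and do not reproduce the full requirements of Tables 1-3.

```python
# Illustrative screening of candidate methods by available data.
# The required-input sets are simplified placeholders, not the full
# requirements reported in Tables 1-3 of the paper.
REQUIRED_INPUTS = {
    "SAVE":          {"vertical structure"},
    "S&C":           {"vertical structure", "construction period", "horizontal structure"},
    "Grant":         {"construction period", "site hazard"},
    "DM 58/17":      {"masonry typology"},
    "GNDT":          {"vertical structure", "geometry", "current condition"},
    "Azizi":         {"masonry typology", "geometry", "current condition", "site hazard"},
    "RE.SIS.TO":     {"wall areas", "masonry strength", "storey weights"},
    "DPCM 9/2/2011": {"wall areas", "masonry strength", "storey weights", "spectrum"},
    "L&R":           {"wall areas", "masonry strength", "geometry"},
}

def applicable_methods(available: set[str]) -> list[str]:
    """Return the methods whose (simplified) input requirements are covered."""
    return [name for name, needed in REQUIRED_INPUTS.items() if needed <= available]

if __name__ == "__main__":
    # A building known only through the digital platform: typology, age, hazard.
    print(applicable_methods({"vertical structure", "construction period",
                              "horizontal structure", "masonry typology", "site hazard"}))
```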
The vulnerability classification based on a qualitative description, found in some of the analyzed methods belonging to Group A, refers to the one provided by the European Macroseismic Scale (EMS-98) [22] and is synthetically reported in Figure 2. The EMS scale represents a first attempt to define vulnerability classes. Six vulnerability classes were, indeed, defined according to a qualitative description, and five damage grades and a certain intensity range are associated with each class. The first three classes (A, B, and C) represent the strength of a typical adobe house, brick building, and reinforced concrete (RC) structure. Classes D and E were characterized by an approximately linear decreasing vulnerability as a result of an improved level of earthquake-resistant design (ERD), and also take into account well-built timber, reinforced or confined masonry, and steel structures. Class F represents the vulnerability of a structure designed according to a high level of the earthquake-resistant philosophy adopted in design rules. The vulnerability scheme reported in Figure 2 was based on the major European building types. The input data requested for defining the six vulnerability classes provided by the EMS scale have not been listed in Table 1, since such a classification has been recalled in the DM 58/17 method, as will be explained in the next section. It is worth noting that, regarding the methods proposed by the New Zealand Society of Earthquake Engineering (2016), the simplified approach (ISA) is not applicable, as it requires the filling of spreadsheets for the single building with information obtainable by means of an accurate survey of the building, which was not carried out in this study. Consequently, the second level approach, needing even more detailed data, is also not applicable, since too many assumptions would have to be used, negatively affecting the result provided by the methodology. Conversely, the empirical approach (LM1) of RiskUE could be applied to the database, but it would lead to a final output (damage levels and fragility curves) that is not easily comparable with the ones obtained with the other methods taken into account in this work (vulnerability linguistic judgments, vulnerability indexes, or capacity/demand ratios). Its application to the database, either extended or reduced, was, thus, discarded. Finally, the mechanical methods provided by FEMA 2003 and Risk UE, both based on the construction of capacity curves by means of significant parameters provided for building classes, require a level of computational effort to compare capacity and demand for each building that is not in agreement with the knowledge level available for the selected database. SAVE Method The SAVE method [18] is an improvement of the typological classification provided by EMS-98. Such a method was, indeed, calibrated on the damage observed in the buildings after eight relevant earthquakes that occurred in Italy from 1980 (Irpinia) to 2002 (San Giuliano di Puglia). Except for the Irpinia earthquake, the considered events are characterized by a macroseismic intensity, evaluated according to the MCS scale, variable from V to VIII. Since all the examined buildings were built before 2003, when the first innovative seismic code was introduced in Italy, the method was calibrated on buildings not designed according to the 'modern' rules reported in more recent national and international codes. The method provides the estimation of an overall vulnerability of the building, which is indicated by an average synthetic damage index (synthetic parameter of damage, SPD) and depends only on the typology of the vertical resistant elements. Four vulnerability classes are defined, with a decreasing level of vulnerability from class A v to class D v ; in particular: • B v = stones, regular masonry, hollow bricks, mixed (reinforced concrete and masonry on different floors); • C v = solid bricks, mixed (reinforced concrete and masonry, both on all floors); • D v = RC or steel frames.
These vulnerability classes can be correlated to the EMS classes. Each vulnerability class corresponds to an average value of the SPD index that was calculated for each class of vertical typology for macroseismic intensities variable in the range V-VIII, since this was the range where more numerous and suitable empirical data (damages observed in past earthquakes) were available, as previously discussed. Thus, the following average values of the SPD index for the four macroseismic intensities were provided: 2.3 for A v , 2.1 for B v , 1.7 for C v , and 1.5 for D v . Successively, depending on a further 11 parameters affecting the seismic behavior of the building, the value of SPD and, thus, the vulnerability class can be modified with an increase or a reduction of the initial value. For whatever building typology, the additional parameters are: vertical structures, horizontal structures, number of floors, regularity in plan and/or in elevation, position of the building in case of aggregates, and building age. For masonry buildings, the following further parameters are considered: roof typology (inclined or plan), roof lightness, the presence of isolated columns, and the presence of ties. For RC buildings, the regularity of infill walls is also taken into account. For 'mixed' buildings, the type of mixed structure and the roof typology are considered as additional parameters. Some of these parameters are dependent on others (horizontal structure, roof lightness, number of floors, and building age), while others are independent. Even if the values for each parameter were calculated independently, the analysis of the database evidenced that some of them are dependent on each other. For example, 25% of buildings of class "A v " have both a "deformable floor" and were built "before 1919". This means that the two features are not independent and the use of coefficients related to both parameters in the scoring of vulnerability would lead to an overestimation of the vulnerability. Thus, the weight of the parameters was reduced proportionally to the statistical dependence between them. The SPD index can be corrected by means of non-correlation coefficients that combine the influence of the independent and dependent parameters. The four 'corrected' vulnerability classes are, thus, characterized by corresponding ranges of the SPD index. S&C Method Sandoli and Calderoni [9] assign a vulnerability class to existing buildings based on structural typology, building age, and typology of horizontal structures (floors). Despite the few qualitative parameters necessary to assign the class, a range of peak ground acceleration (PGA) corresponding to the building collapse, i.e., a capacity in terms of PGA (referred to as PGA c ), is provided too. For masonry buildings, the following three classes and the corresponding values of PGA c are provided: • Class A: high vulnerability class, with PGA c ≤ 0.05 g; • Class B: medium vulnerability class, with 0.05 g < PGA c ≤ 0.10 g; • Class C: low vulnerability class, with 0.10 g < PGA c ≤ 0.15 g. Class A is divided into two further subclasses: • MUR 1-'old buildings' made of masonry walls, with vaulted floors or wooden/iron plan floors without effective connections to the walls and without diffuse systems of ties; • MUR 5-'modern buildings not realized according to the codes' made of masonry walls, with plan floors without effective connections to the walls and without curbs.
Class B is divided into further two subclasses: • MUR 2-'old buildings with interventions', made of masonry walls, vaulted floors, or wooden/iron plan floors without effective connections to walls, but with diffuse systems of ties at all levels; • MUR 4-'semi-modern buildings' made of masonry walls, with plan floors connected to the walls through reinforced concrete curbs, but without diffuse systems of chains/ties. Finally, class C refers to: • MUR 3-'modern buildings' made of masonry walls, with plan floors connected with reinforced concrete curbs to the walls at each level. For RC buildings, two ranges of maximum PGA capacity and four vulnerability classes are provided, mainly based on the building age and on the technical code evolution: • 0.10 g < PGA c ≤ 0.15 g: CA1-buildings realized before 1939: structures designed for only gravity loads, the presence of frames in only one direction with deep beams, and heavy infill walls; CA2-buildings realized between 1939 and 1970: structures designed for only gravity loads, the presence of frames in only one direction with deep beams in perimeter frames, deep and flat beams in internal frames, and heavy or light infill walls; CA3-buildings realized between 1970 and enactment of seismic codes: structures designed for only gravity loads and built with higher quality and certified materials, the presence of frames in only one direction with deep or flat beams, and mainly light infill walls. • 0.15 g < PGA c ≤ 0.35 g: CA4-buildings realized after enactment of seismic codes: structures designed according to seismic codes, the presence of frames in both directions with deep or flat beams, and mainly light infill walls. Grant Method The method proposed by Grant et al. [15] is based on the assumption that the buildings were realized in agreement with the current codes at the age of construction. Thus, the capacity in terms of maximum PGA is equal to the demand requested by the code at the time of construction. This means that the seismic vulnerability can be simply measured as a function of the age of construction and of the hazard expected at the site. A PGA deficit is, thus, calculated as the ratio of the design value of PGA (PGA d ) expected in the site and evaluated according to current code, to the PGA representing the design seismic demand provided by the code in force at the time, t 0 , of construction, PGA d,t0 . DM 58/17 Method The guidelines DM 58/17 [17] for the classification of seismic risk have been introduced in Italy following the Centro-Italia 2016-2017 seismic events in order to provide operational tools aimed to spread a more organic preventive culture and identify a simplified method for assessing the seismic vulnerability of masonry buildings. The method can be used for a quick evaluation of the seismic vulnerability of masonry buildings and is based on the same typological classification provided by EMS-98 [22], which, as said above, identifies seven and six types of masonry and RC buildings (see Figure 2), respectively. Each type is associated to one of the six vulnerability classes variable from A to F according to the EMS-98. Such classes are indicated as V i in the Italian guidelines, where i = 1, 2, . . . , 6 corresponds to classes F, E, . . . , A, according to a decreasing vulnerability level. Furthermore, fluctuations are expected around the identified vulnerability class. 
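Stepping back to the Grant et al. approach summarized above, the following minimal sketch contrasts the current design PGA with the one prescribed by the code in force at the construction time; the function name and the numerical values are hypothetical.

```python
def grant_check(pga_d_now_g: float, pga_d_t0_g: float) -> dict:
    """Grant et al.-style screening: the design PGA at the construction time is
    taken as a proxy of the building capacity, so the capacity-to-demand ratio is
    PGA_d,t0 / PGA_d and the 'PGA deficit' is its inverse."""
    ratio = pga_d_t0_g / pga_d_now_g
    return {
        "capacity_to_demand": ratio,
        "pga_deficit": pga_d_now_g / pga_d_t0_g,
        "adequate": ratio >= 1.0,
    }

if __name__ == "__main__":
    # Hypothetical RC school designed in 1985 for 0.07 g, current demand 0.17 g.
    result = grant_check(pga_d_now_g=0.17, pga_d_t0_g=0.07)
    print(result)  # capacity-to-demand ~0.41 -> deficit ~2.4, not adequate
```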
As reported in Figure 2, the EMS-98 identifies, for each type and each vulnerability class, the most credible value (circle) and the dispersion around this value, expressed with the most probable values (solid lines) and the less probable or even exceptional ones (dashed lines). The Italian guidelines adopt the same approach for assessing any class variance, but provide a change of class only in the case of a vulnerability increase. Table 4 reports the parameters analyzed in order to assess the minimum vulnerability class. GNDT Method Forms to assess the seismic vulnerability of masonry buildings are provided by the Italian National Group for Earthquake Defense [11] and are aimed to give a vulnerability index that depends on several parameters (namely 11) affecting the overall seismic behavior of the examined building. A class from A to D, with increasing vulnerability, and a weight factor are attributed to each parameter. The class corresponds to a coefficient c vi variable from 0 to 45 (i.e., c vi is 0 for class A, 45 for class D, and has intermediate values for classes B and C depending on the significance of each parameter), and a weight factor k i variable from 0 to 1.5 is assigned, so that the vulnerability index I V is calculated as the sum, over the 11 parameters, of the products c vi ·k i . The index I V varies from 0 to 382.5, with the higher values corresponding to higher vulnerability. The vulnerability is, then, usually expressed as a percentage ratio, V, of the index to the maximum value, i.e., 382.5, and the corresponding vulnerability judgments are reported in Table 5. Table 5. Ranges of the vulnerability index provided by the GNDT method [11]. Parameter 1 is related to the global structural asset of the building and to the presence and effectiveness of connecting systems (ties, chains, and curbs) able to guarantee a 'box-like behavior'. Parameter 2 depends on the quality of the resistant system. Parameter 3 is related to the amount of the static equivalent horizontal seismic force that the building is able to sustain and depends on the coefficient C, defined as a function of a 0 , a parameter depending on the plan dimensions of the building, τ 0k , the characteristic value of the shear strength of masonry in the absence of normal stress, γ, the unit weight of masonry, n, the number of floors, and q, the total weight of the building (taking into account the weight of masonry walls and of floors). Parameter 4 depends on the building position and foundations. Parameter 5 considers the quality of floors in terms of the effectiveness of connections with vertical elements and the capacity to ensure a global behavior. Parameter 6 is related to the planimetric configuration of the building through the ratio between sides. Parameter 7 is related to regularity along the height. Parameter 8 is related to the spacing between vertical walls. Parameter 9 regards the roof typology. Parameter 10 is related to the presence of non-structural elements that can cause damage. Finally, parameter 11 evaluates the current global condition of the building. The weight factor k i assigned to the parameters is 0.25 for parameters 2, 8, and 10; 0.50 for 6; 0.75 for 4; 1.50 for 3; and 1.00 for all the others. In addition to the vulnerability judgment, the normalized vulnerability index, V, can also be used for calculating the collapse acceleration, y c (PGA c ), according to the correlation provided by Petrini and Zonno [29], with coefficients α c = 1.5371, β c = 0.000974, a third coefficient equal to 1.8087, and V s = −25. Azizi Method In Azizi-Bondarabadi et al.
[8], a method for assessing the seismic vulnerability of masonry structures, in particular schools, through the definition of a vulnerability index R is presented. The following three intervals of variation of the index are defined: • R ≤ 25 Low seismic vulnerability: it is not necessary to carry out further assessments or to retrofit the building; • 25 < R < 75 Moderate seismic vulnerability: the building needs to be assessed with a more refined method; • R ≥ 75 High seismic vulnerability: it is necessary to demolish and rebuild the structure. The index R is calculated through Equation (6) as a function of a, the expected ground acceleration at the site, i.e., PGA d , and of a set of parameters L i and k i . The values of the parameters in Equation (6) are directly given in Tables 6 and 7, while the parameter k 1 was evaluated as the sum, divided by 100, of the four parameters k 11 , k 12 , k 13 , and k 14 , whose values depend on the topics listed in Table 8. Table 6. Values of parameters L 1 , L 4 , L 5 , and k 2 [8]. Table 7. Values of parameters L 2 , L 3 , and k 3 [8]. Once the R index is known, specific correlations provide the values of the vulnerability index according to the different masonry typologies reported in Table 9. Table 9. Masonry typologies proposed by [8]. RE.SIS.TO Method The method RE.SIS.TO ® [5] is based on the assessment of the collapse acceleration of the building, PGA c , evaluated through a simplified evaluation of the shear strength at each floor of the building. The spectral acceleration S a,c , corresponding to the collapse of the building, is converted into a capacity in terms of PGA, i.e., PGA c , through the relationship proposed in [30], which involves α PM , a modal participation factor, equal to 1.0 for one-floor buildings and 0.8 for multi-story buildings; α AD , a spectral amplification factor equal to 2.50; α DT , a factor taking into account dissipative phenomena and equal to 0.8 if the contribution of the infill walls is neglected or to 1.0 if it is significant; and α DUC , the structure factor, which can be assumed equal to 2.0 for masonry buildings. The building capacity in terms of spectral acceleration, S a,c , is defined as the minimum ratio between the resistant shear, V r,i,rid , at the i-th floor and the corresponding acting shear, V s,i . The ratio of the resistant shear, V r,i , to the acting shear, V s,i , at each floor represents, indeed, the structural performance of each floor in terms of acceleration. For masonry buildings, the resistant shear at the i-th floor is calculated with the Turnšek and Cacovic [31] formulation, V r,i = A min,i ·τ 0 ·√(1 + σ 0,i /(1.5·τ 0 )), being A min,i the minimum area of resistant walls along the main directions of the building at the i-th floor, τ 0 the shear strength of masonry in the absence of normal stress, and σ 0,i the average normal stress at the i-th floor. The resistant shear is further reduced by a factor, C rid , aimed to take into account the current conditions of the building by means of the GNDT form [11]; C rid depends on V i , the value assigned to each parameter of the GNDT form [11] (without considering the third parameter, related to the conventional strength), on α, a coefficient estimated by the method calibration (here considered equal to 1), and on V pegg , the sum of the values of all the parameters V i evaluated in class D. Based on the values of the ratio PGA c /PGA d , being PGA d the design acceleration expected at the site, five classes of strength and vulnerability are defined (Table 10).
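As an illustration of the floor-by-floor check underlying RE.SIS.TO, the sketch below evaluates the storey resistant shear with the classical Turnšek–Čačovič expression and takes the minimum capacity-to-demand ratio over the storeys; the storey data are hypothetical, the reduction factor C rid is passed in as a given value, and the final conversion of S a,c into PGA c through the α coefficients of the method is not reproduced.

```python
import math

def turnsek_cacovic_shear(a_min_m2: float, tau_0_mpa: float, sigma_0_mpa: float) -> float:
    """Resistant shear of a storey (kN), classical Turnšek–Čačovič form:
    V_r = A_min * tau_0 * sqrt(1 + sigma_0 / (1.5 * tau_0))."""
    return a_min_m2 * tau_0_mpa * 1000.0 * math.sqrt(1.0 + sigma_0_mpa / (1.5 * tau_0_mpa))

def min_capacity_demand_ratio(storeys: list[dict], c_rid: float = 1.0) -> float:
    """Minimum ratio between the (reduced) resistant shear and the acting shear
    over all storeys; each storey dict carries hypothetical A_min, tau_0, sigma_0
    and the acting storey shear V_s (kN)."""
    ratios = []
    for s in storeys:
        v_r = turnsek_cacovic_shear(s["a_min_m2"], s["tau_0_mpa"], s["sigma_0_mpa"])
        ratios.append(c_rid * v_r / s["v_s_kN"])
    return min(ratios)

if __name__ == "__main__":
    # Two hypothetical storeys of a small tuff-masonry school building.
    storeys = [
        {"a_min_m2": 12.0, "tau_0_mpa": 0.030, "sigma_0_mpa": 0.20, "v_s_kN": 900.0},
        {"a_min_m2": 12.0, "tau_0_mpa": 0.030, "sigma_0_mpa": 0.10, "v_s_kN": 550.0},
    ]
    print(f"min V_r,rid / V_s = {min_capacity_demand_ratio(storeys, c_rid=0.9):.2f}")
```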
DPCM 9/2/2011 Method The safety verification is expressed through the ratio between a SLV and a g,SLV , being a g,SLV the expected acceleration on rigid soil at the life safety limit state and a SLV the collapse acceleration of the building, i.e., the previously defined PGA c , evaluated with reference to the lower strength direction and obtained from S e,SLV = q·F SLV /(e*·M) (Equation (14)) through the shape of the elastic response spectrum, where S e,SLV is the value of the elastic spectrum at the life safety limit state, q is the structure factor, M is the total seismic mass of the building, e* is the ratio of participating mass in the considered failure mode, T 1 = C 1 ·H 3/4 is the main vibration period of the structure, with C 1 = 0.05 for masonry buildings and H the building height, T B , T C , and T D are the characteristic periods of the response spectrum, S = S S ·S T is the factor taking into account the subsoil typology and the topographic conditions (EC8 [32]), and F 0 is the maximum value of the amplification factor. The shear strength of the building, F SLV , is calculated as the minimum value between those related to the two main directions and for each floor of the building. As an example, for the i-th floor along the X direction, the shear strength is F SLV,xi = µ xi ·ξ xi ·ζ xi ·A xi ·τ di /β xi , where τ di is the design shear strength of the masonry of the i-th floor, defined according to the Turnšek and Cacovic [31] formulation as τ di = τ 0d ·√(1 + σ 0,i /(1.5·τ 0d )), being τ 0d the design shear strength of masonry in the absence of normal stresses and σ 0,i the average normal stress at the i-th floor due to the dead and variable loads under the seismic load combination. Moreover, A xi is the area of the resistant walls along the X direction of the building at the i-th floor, ζ xi is a coefficient related to the strength of masonry spandrels (equal to 1.0 in the case of strong spandrels or to lower values, minimum 0.8, for weak spandrels), β xi depends on the plan regularity of the i-th floor and varies from 1.00 to 1.25 (for safety, 1.25 can be assumed), ξ xi is a coefficient equal to 1.0 for shear or 0.8 for flexural failure of the masonry walls, and µ xi is a coefficient related to the strength and stiffness homogeneity of the masonry piers at the i-th floor and can be assumed to be equal to 0.8 for safety. L&R Method In Lourenço and Roque [13], three simplified indices are proposed for assessing the vulnerability of masonry churches based on simple geometrical parameters and mechanical properties of masonry. In this paper, only the base-shear ratio, γ 3 , given by the ratio of the shear strength of the structure, V rd,i , to the total base shear under seismic loads, V Sd , is considered; its expression involves A wi , the in-plan area of the earthquake-resistant walls in the i-th direction, A w , the total in-plan area of the earthquake-resistant walls, ϕ and f vk0 , the friction angle and the cohesion of the masonry, respectively, γ, the unit weight of masonry, H, the total height of the building, and β, an equivalent seismic static coefficient (V Sd = F E = β·G, with G = A w ·γ·H), which has been assumed with reference to the expected design acceleration on rigid soil, i.e., β = PGA d /g. Clearly, according to this simplified approach, the safety condition for the examined buildings corresponds to γ 3 ≥ 1. Comparative Evaluation of the Simplified Methods through Case Study Applications: A School Building Stock in the Campania Region The above methods have been applied to a sample of school buildings located in the province of Naples (Italy). The buildings of the stock differ mainly in structural typology, geometry, and age of construction.
For some of them, a lot of information is available, while for others the starting data are limited. Depending on the information necessary for the application of each method, the reference sample is customized according to the level of knowledge available for each building of the whole set. The comparison among the results obtained from the application of the different methods allows one to highlight advantages and weaknesses of each method, to identify the convenience in the use of them according to the specific available information and the objectives of the analysis, and, finally, to evaluate which method is more or less safe. The Building Dataset The sample used as a case study refers to the school buildings of the Province of Naples. A number of 1185 cases were firstly individuated as those for which at least basic data were available (i.e., type of vertical and horizontal structural system, construction period). Most of the schools (64%) were RC structures, 20% were masonry buildings, while the remaining ones were made of steel or other materials (Figure 3). Most of the schools with a mixed structure (76 out of 108) were mixed concrete-masonry buildings. Focusing attention on the above cited three main categories (RC, masonry, and mixed RC-masonry), the sample reduced to 1067 units. With reference to the stock of 1067 buildings, Figure 4 reports the type and occurrence of the horizontal structural systems. Most of the buildings, i.e., 90%, regardless of the type of vertical structure, had horizontal structures made of RC floor. Floors made of steel elements and bricks were the second most frequent typology, mainly in the masonry buildings (23%), while few cases of vaulted ceilings were present, i.e., for masonry buildings only. Information about the number of stories was available for 97% of the database (i.e., for 1028 out of the 1067 buildings). Figure 6 shows that most of buildings (45%) had two stories, followed by 26% of 1-story buildings, and by 20% of 3-story buildings. Only 6% of buildings had 4 stories. The percentages were similar also when calculated referring to the single building typologies (masonry, RC, and mixed RC-masonry).
Finally, most of buildings (96%) of the dataset made of 1067 cases were located in a seismic zone belonging to the 2nd category since they were in the same district of Naples. The remaining 4% belonged to the 3rd category (classification of seismic zones according to OPCM 3274, [33]). Historical Seismicity of the Area In Italy the classification of the seismic hazard, which is fundamental for the seismic design of buildings, has changed profoundly over the decades. This obviously plays a significant role for seismic risk of existing constructions. With reference to the case study, it is worth remembering that the first seismic classification in Italy was introduced in 1927, when a royal decree (R.D. 431/1927 [34]) defined generically two seismic categories. Afterwards, a royal decree in 1935 (R.D. 640/1935 [35]) associated values of expected ground acceleration to those categories: 0.10 g for category I and 0.05 g for category II. In 1975, a more refined seismic hazard map was introduced (D.M. 40/1975 [36]) and extended the hazard to a wider area of the Italian territory than in the past. In 1984, a ministerial decree (D.M. 19.6.1984 [37]) introduced the differentiation of the seismic protection level through a seismic protection coefficient depending on building categories (equal to 1.0 for ordinary structures, 1.4 for strategic constructions, and intermediate values in other cases) in order to define the seismic forces. The seismic classification of the national territory was further updated, according to the knowledge and experience of the seismic events recorded during the time. Later, in 2003, a seismic zonation based on the probabilistic values of the expected ground acceleration was introduced (OPCM 3274/2003 [33]). For the first time, the whole national territory was classified as seismic and was divided into four categories. Each category is identified by the value of peak ground acceleration on stiff soil (rock) with an exceeding probability of 10% in 50 years and a corresponding return period of 475 years (i.e., −50 yrs/(ln(1-10%))).
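For reference, the 475-year figure quoted above follows from the usual Poisson-type relation between return period, exposure time, and exceedance probability:

```latex
T_R \;=\; -\,\frac{t}{\ln\left(1 - P\right)} \;=\; -\,\frac{50~\text{years}}{\ln\left(1 - 0.10\right)} \;\approx\; 475~\text{years}
```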
For each of the four zones, the seismic design of new buildings was imposed with different levels of severity, with the exception of zone 4, for which the regions were empowered to adopt specific obligation. The new Italian code for buildings in 2008 (NTC 2008 [38]) and its next update (NTC 2018 [39]) definitively abolished the so-called 'seismic zonation'. The new code, indeed, introduced a point-by-point mapping of the seismic hazard in terms of peak ground acceleration, defined according to the geographical coordinates of whatever site in Italy and for several return periods. In Figure 7, the evolution of the seismic hazard map in Italy from 1935 to 2008 was reported with reference to the above conventional return period of 475 years. The current seismic hazard map for the Campania region reported values of PGA-with 475 years of return period-between 0.07 (southern coast) and 0.26 g (near the Apennines, in the province of Benevento). The schools that belonged to the reference database for this research are located in the province of Naples (therefore medium seismicity), with a PGA value on rock between 0.09 and 0.19 g. It is worth noting that for many areas seismicity was significantly re-evaluated over the time, because the earthquakes occurred in the last decades on the national territory. This means that many buildings designed and built in the past now may have strength incompatible with the current seismicity of the site and a significantly lower safety in comparison with recent constructions. Application of Methods of Group A Finally, for the application of the methods belonging to Group A, 1010 buildings (of the 1067 of the whole sample) were considered, since for these buildings all the information necessary for applying the methods were available. The sample was, thus, composed as follows: 707 RC buildings, 228 masonry buildings, and 75 mixed RC-masonry buildings. It is worth noting that the data about the buildings were extracted from a regional digital platform and did not come from direct in situ surveys of the authors. Such a platform includes several information (administrative, functional, structural, plant engineering, and logistic) that the owners should insert in different forms, but not all the required fields are fully filled. Moreover, some data, including structural details, are not mandatory. This leads some heterogeneity about the information contents of the buildings present in the database. Since the aim of the paper was to investigate the possibility of providing a 'large scale' seismic vulnerability judgment for a selected group of buildings using the only data extracted from the available database, the authors did not carry out any survey or request of additional information, in order to have no supplementary cost in term of time and resource. Clearly, such a choice led to applying only some of the methods existing in the literature. The design seismic action PGAd assumed as a reference corresponds for all methods to a return period of 475 years, usually adopted to check the life safety (LS) limit state for existing buildings according to Eurocode 8 (EC8 1998 [32]) and corresponding to a probability of exceedance of 10% in 50 years. Since no specific information are available about the subsoil typology and the stratigraphic conditions of each site, average values for the coefficients related to these parameters were assumed for all buildings, i.e., Ss = 1. 
20 for subsoil type B and S t = 1.00 for topography typology T1 (NTC 2018 [39]; EC8 1998 [32]). Results of the SAVE Method As explained in Section 2.1.1, the SAVE method [18] assigns an index, SPD, and a vulnerability class from A (high vulnerability) to D (low vulnerability) based on the information about the typology of vertical structures. In Table 11, the indexes and the classes provided by the SAVE method were calculated for 935 buildings (the 75 buildings with 'mixed' typology were, indeed, not considered by the method): 228 masonry buildings and 707 RC buildings distinguishing the masonry buildings in regular (226) and irregular (2). It is worth noting that all the RC buildings fell in class D v , all the masonry buildings fell in class B v , with exception of the 2 irregular masonry buildings that fell in class A v . For masonry buildings, the corrected vulnerability index was calculated considering four independent parameters: the presence of isolated columns, position of the building (isolated or aggregated), the presence of ties, and regularity in plan or along the height. This assumption leads to having 16 combinations of parameters able to modify the original values of SPD. In Table 11, since specific information were not available for the whole dataset, the worse (case 1) and the best (case 2) modified values of SPD within the 16 combinations are listed. The worse combination, leading to an increase of the index, was obtained by considering the presence of isolated pillars, aggregated position, absence of ties, and irregularity in plan or along the height.
Due to these assumptions, all masonry buildings fell in class A, with an increase of the SPD of about 20%, which determined an increase of the vulnerability class only for the regular masonry buildings. Conversely, under the best combination of parameters, the index reduced by about 18%, leading to all masonry buildings being in class B and, thus, to an improvement of the vulnerability only for the irregular buildings. The analyses also evidenced that the position of the building was the most influencing parameter in terms of SPD variation. For RC buildings, the corrected vulnerability index was calculated considering three independent parameters: the position of the building in the aggregate (isolated, internal, and corner), regularity in plan or along the height, and regularity of infill walls. Thus, seven combinations of parameters were considered and in Table 11 the values of SPD and the classes corresponding to the worse and best combinations are listed. The analyses evidenced that the most significant parameter was the regularity of infill walls. Under the best combination (case 2), the index reduced by 18% and all buildings remained in class D, while, under the worse one (case 1), the index increased by 18% and all the buildings fell in class B. S&C Method The method proposed by Sandoli and Calderoni [9] was applied to the 707 RC and the 228 masonry structures, i.e., 935 buildings in total, thus excluding the buildings with 'mixed' structure. The method provides ranges for the maximum PGA sustainable by the buildings, i.e., the values of PGA c , as a function of the building typology. Figure 8 reports the occurrence of the masonry and RC buildings within the PGA c classes defined by the S&C method. In Figure 9, for each building category, the average value of the PGA range, PGA c , was divided by the design PGA, i.e., PGA d , at the LS limit state. For masonry buildings, Figure 9a shows that the capacity was lower than the demand (i.e., PGA c /PGA d < 1) in most cases (225, i.e., 98.6%). In particular, 62% of buildings had values of PGA c /PGA d < 0.45 and, thus, were characterized by a high seismic vulnerability, while 27% of buildings had PGA c /PGA d in the range 0.60-1.00, i.e., were characterized by medium vulnerability. For RC buildings, Figure 9b shows that 40% of buildings had PGA c /PGA d < 1, mostly within the range 0.6-0.8. Note that in Figure 9b, buildings of class CA-4, which corresponded to buildings realized after some seismic codes became mandatory (i.e., 1984 in the Campania region), were not reported, since the range of PGA c was too wide (0.15 g < PGA c < 0.35 g) and the use of the average value of the range was not reliable.
Thus, for RC buildings of class CA-4, the values of PGA d provided by the current code have been compared with the lower and upper limits of the PGA c range proposed by the S&C method, resulting in the following judgment: among the 374 RC buildings of class CA-4, only 22 could be defined as surely 'safe', while in most cases it resulted in PGA c,MIN < PGA d < PGA c,MAX and, thus, more detailed analyses should be carried out. Grant Method The method proposed by Grant et al. [15] was applied to 122 RC buildings only, which were those realized after 1984. In fact, for buildings realized before, lacking a mandatory seismic code, the definition of "PGA deficit" did not make sense. Figure 10 shows the distribution of the ratio PGA d,t0 /PGA d and highlighted that 50% of cases were in the range 0.15-0.30, 30% in the range 0.30-0.45, and only 15% had PGA d,t0 /PGA d > 1. Note that, according to such a method, the design PGA at the construction time, PGA d,t0 , was assumed 'equivalent' to the PGA capacity of the building. These results further highlight how the seismic hazard in Campania had changed over time since 1984 (see Section 3.1). DM 58/17 Method The simplified method provided in the annex of the DM 58/17 [17] is suitable for masonry buildings only. For this reason, it was applied on the subset of 228 masonry buildings out of the overall building stock. According to the information available about the constructive typology, i.e., the type of masonry, a vulnerability class was assigned (see Figure 2) and the distribution of the examined buildings in the six classes was plotted in Figure 11. Most of the buildings (75%) were assigned to class V 4 , since they had RC floors. The buildings made of regular bricks and different types of slab (25%) were assigned to class V 5 . The two buildings made of irregular masonry were assigned one to class V 5 and the other to class V 6 , due to the different constructive techniques.
Tables 12 and 13 report the comparison of the results provided by the simplified methods of Group A for masonry and RC buildings, respectively. It is worth noting that the SAVE, S&C, and DM58/17 methods provide the definition of vulnerability classes for buildings based on the structural typology and derived from the classification proposed by the European macroseismic scale (EMS98 [22]). For the SAVE and S&C methods, the vulnerability class starts from 'A', meaning the highest vulnerability, and decreases towards 'D'. DM58/17 suggests six vulnerability classes from 1 (lowest vulnerability) to 6 (highest vulnerability). In addition, the S&C method gives three ranges of PGA capacity that can be associated with a high, medium, and low vulnerability. Note that for the SAVE method the uncorrected vulnerability judgments were reported, i.e., the vulnerability classes were based only on the vertical structure typologies. According to the SAVE method, almost all the masonry buildings (226 out of 228, Table 12) fell in the 'high-medium' vulnerability class (i.e., B v , which corresponds to class B EMS /C EMS of EMS98), while, according to DM58/17, 24% fell in the 'high-medium' class and 75% in the 'medium' classes (i.e., classes V 5 and V 4 , which correspond to classes B EMS and C EMS of EMS98, respectively). The vulnerability judgments provided by these two methods were, thus, comparable. For the S&C method, the masonry buildings were almost equally distributed in the three classes, thus evidencing a more distributed vulnerability judgment within the classes in comparison with the other two methods. For all the RC buildings (Table 13), the SAVE method provides a 'low' vulnerability class, while the S&C method predicted a 'high' vulnerability class for 44% of the buildings of the sample and a 'low' vulnerability class for the remaining 56%. Again, the S&C method provides a more distributed vulnerability judgment within the proposed classes. It is worth noting that the DM58/17 did not provide a simplified method for RC buildings. Finally, Tables 12 and 13 also show that, according to the analyzed methods of Group A, the masonry buildings were characterized by more severe vulnerability judgments in comparison with the RC buildings. This result is expected due to the intrinsic vulnerability of masonry buildings with reference to horizontal actions when designed for gravity loads only. Application of the Methods of Groups A and B to a Reduced Sample of Buildings Methods of Group B were applied to 14 masonry buildings, since only for these buildings all the geometrical and mechanical data necessary for applying the methods were available. The methods of Group A were also applied to the same dataset of buildings.
For all methods, the design seismic action, PGA d , was again expressed by the design PGA for the return period of 475 years and subsoil type B. In particular, for the 14 examined buildings, the PGA d varied in the range 0.18-0.21 g (i.e., corresponding to 0.15-0.18 g on rock). Based on the information gathered from the forms available in the digital platform, it was evidenced that all the examined masonry buildings were made of tuff stones with regular texture and that the floors were mainly made of RC elements and light bricks, while in a few cases they were made of steel profiles and light bricks. The height of the buildings was variable from 4.0 to 9.0 m; in particular, nine buildings had 1 or 2 floors, three buildings had 3 floors, and two buildings had 4 floors. The wall geometry was gathered from the digital platform too; in particular, the areas of the resistant walls in the two directions were obtained from the technical drawings of each floor available for these buildings. About the construction period, seven buildings were realized before 1960, four buildings between 1960 and 1975, and three buildings were built before 1945. Within the categories suggested by the annex document (Annex n. 7, 2019 [40]) to the Italian building code (NTC 2018), the typology 'regular tuff masonry' was assigned to all 14 buildings. Physical and mechanical parameters of masonry were, thus, defined according to the ranges of values provided for existing masonry buildings in (Annex n. 7, 2019 [40]) and assuming a basic level of knowledge, i.e., LC1, for structural geometry, details, and materials. Such an assumption leads to assessing the design values of the mechanical properties as equal to the lowest values of the provided ranges divided by the so-called "knowledge factor", FC, which is 1.35 for the level of knowledge LC1. For 'regular tuff masonry', the unit weight was assumed to be γ = 16 kN/m 3 , while the lower bounds for the average shear strength without normal stresses were τ 0 = 0.04 MPa and f v0 = 0.10 MPa. Note that τ 0 is the shear strength for tensile failure in the masonry and was used in the Turnšek and Cacovic [31] formulation, while f v0 refers to a sliding failure along the joints and was used in the Mohr-Coulomb criterion. When the Mohr-Coulomb criterion was used, i.e., only in the L&R method, for the friction angle of masonry, lacking specific information, the common value tan ϕ = 0.4 was assumed. For the floors, a unit weight of 4 kN/m 2 was assumed in all cases. Such a value, together with the unit weight of masonry, was used for evaluating the average normal stress acting in the masonry walls at each floor. Note that the Azizi, RE.SIS.TO, and GNDT methods of Group B allow one to express both a quantitative and a qualitative, i.e., linguistic, judgment (from 'low' to 'high') about the level of vulnerability of the buildings. Thus, in Section 3.4.1, the quantitative judgments in terms of capacity-to-demand ratio provided by the RE.SIS.TO, GNDT, DPCM 9/2/2011, and L&R methods are presented and compared with each other. Successively, in Section 3.4.2, the linguistic vulnerability judgments given by the Azizi, RE.SIS.TO, and GNDT methods are compared with each other and with those obtained using the methods of Group A (S&C, SAVE, and DM58/17). Simplified Vulnerability Assessment via Quantitative Safety Factors Methods of Group B give a quantitative estimation of the seismic vulnerability, based in most cases on a simplified assessment of the seismic capacity-to-demand ratio (expressed in terms of ground acceleration or base shear).
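To make the material assumptions above concrete, the following sketch derives the LC1 design strengths and an average normal stress in the walls; the storey geometry used is purely hypothetical and only illustrates the bookkeeping, not the actual buildings of the dataset.

```python
# Minimal sketch of the LC1 material assumptions described above.
# Geometry values are hypothetical; only the bookkeeping is illustrated.

FC = 1.35            # knowledge factor for knowledge level LC1
TAU_0 = 0.04         # MPa, lower-bound shear strength (no normal stress), regular tuff masonry
F_V0 = 0.10          # MPa, lower-bound cohesion (sliding along the joints)
GAMMA_MASONRY = 16.0 # kN/m^3, unit weight of masonry
FLOOR_LOAD = 4.0     # kN/m^2, assumed unit weight of floors

def design_strengths(fc: float = FC) -> tuple[float, float]:
    """Design strengths = lower-bound values divided by the knowledge factor."""
    return TAU_0 / fc, F_V0 / fc

def avg_normal_stress(wall_area_m2: float, storey_height_m: float,
                      floor_area_m2: float, n_storeys_above: int) -> float:
    """Average vertical stress (MPa) in the walls of a storey, from the weight
    of the masonry walls and of the floors above (hypothetical geometry)."""
    walls_kN = GAMMA_MASONRY * wall_area_m2 * storey_height_m * n_storeys_above
    floors_kN = FLOOR_LOAD * floor_area_m2 * n_storeys_above
    return (walls_kN + floors_kN) / wall_area_m2 / 1000.0  # kN/m^2 -> MPa

if __name__ == "__main__":
    tau_0d, f_v0d = design_strengths()
    sigma_0 = avg_normal_stress(wall_area_m2=18.0, storey_height_m=3.5,
                                floor_area_m2=250.0, n_storeys_above=2)
    print(f"tau_0d = {tau_0d:.4f} MPa, f_v0d = {f_v0d:.4f} MPa, sigma_0 = {sigma_0:.3f} MPa")
```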
In Table 14, the values of such a "safety factor" are listed for the 14 selected masonry buildings and in Figure 12 the comparisons among the examined methods are graphically shown. Note that, in the assessment of the capacity-to-demand ratios, for the DPCM method both shear and flexural failure modes were considered, since more detailed information about the buildings was not available, and the lowest safety factor was finally considered (it turned out to be the one associated with flexural failure in all the cases). For the L&R method, the lowest value between those related to the x and y directions is reported for each building. In Table 14, the 'envelope' column summarizes the most severe results obtained by applying the quantitative methods of Group B, corresponding to the lowest seismic capacity-to-demand ratio. Table 14 and Figure 12 show that the GNDT method was the least safe approach, since for all examined buildings the safety factor was largely > 1. This is probably due to an overestimation of the PGA capacity that depends both on the correlation provided by Petrini and Zonno [29] and on the judgments assigned in the survey form, which were strongly affected by the subjectivity of the compiler or by a lack of information. Conversely, the DPCM method was the safest one, since it provided safety factors lower than 1 for more than 85% of the buildings, with 43% of buildings (7) having safety factors lower than 0.5. The RE.SIS.TO method provided values of the safety factors similar to those provided by the DPCM method, with the exception of buildings 3, 8, and 10, characterized by quite different predictions. The index γ 3 given by the L&R method seemed to be the most balanced one, since no building had a factor γ 3 lower than 0.5, but it is worth noting that for buildings 2, 4, 7, and 11 the index was significantly higher than the values provided by the DPCM and RE.SIS.TO methods. Finally, the values of the capacity-to-demand ratio given by the S&C method of Group A are listed in Table 14 too; they were always lower than 1 and in most cases lower than the values provided by the DPCM method, thus resulting in the safest approach. However, as previously underlined, it is worth noting that the S&C method gave a very approximate prediction of PGA c . Note that the only parameter that varied within the 14 buildings and influenced the judgment was the construction age, and thus it makes sense that the method was very safe. Simplified Vulnerability Assessment via Linguistic Judgments For the selected 14 masonry buildings, the vulnerability was investigated by means of some qualitative methods of Group A too. In Table 15, the qualitative judgments given by the S&C, DM 58/17, and SAVE methods are listed together with the 'envelope' column corresponding to the safest results within these methods. It can be noted that the SAVE method was the safest one, since it provided a 'high' vulnerability for all 14 buildings.
Conversely, smaller differences could be observed between the S&C and DM 58/17 methods, even if the latter seemed to be safer (in most cases, indeed, the judgments were comparable or slightly more severe). In Table 15, the qualitative results provided by some methods of Group B are listed too. In particular, for the Azizi and the GNDT methods, both the numerical results, i.e., the indexes R and Iv respectively, and the corresponding qualitative judgments are reported in order to be compared with each other. Note that the qualitative judgments given by the RE.SIS.TO method are related to the values of the safety index already listed in Table 14, on the basis of the correspondence proposed in Table 10 [5]. Additionally, the qualitative judgments of the DPCM and L&R methods listed in Table 15 were obtained by converting the quantitative results, already listed in Table 14, in accordance with Table 10. It could be observed that the results of these methods, compared with the linguistic outcomes of the other methods of Group B, were consistent on average, although those from the L&R method estimated a lower vulnerability. Additionally, for the methods of Group B, the 'envelope' column summarizes the most severe results. Among these methods, there was substantial agreement between the results obtained by RE.SIS.TO and GNDT. On the contrary, the Azizi method led to unconvincing results, because all 14 buildings of the set were assigned to a 'low' vulnerability class. It is worth noting that these conclusions cannot be generalized, because they depend on the characteristics of the sample examined, but they represent a solid basis for further study in this direction. Comparing the qualitative judgments of the two groups of methods, as expected, the more simplified methods of Group A led to more conservative results than those provided by the methods of Group B, which, although reported in Table 15 as linguistic judgments, were based on a larger amount of structural information. Finally, it is interesting to highlight that the results obtained with the expeditious method of DM 58/17 were consistent with the 'envelope' related to the most sophisticated methods of Group B. This suggests that the DM 58/17 method, and thus the EMS scale, could be used, at least for masonry structures, whenever the available data are very limited. Figure 13 graphically summarizes the data reported in Table 15, in order to make the comparison among the examined methods easier, also distinguishing those of Group A, basically safer, from those of Group B, more 'realistic' because they are based on more information. The graph of Figure 13 is useful to show that, for the same building, the vulnerability judgment provided by the methods of Group B could be better or worse than that assessed by means of the methods of Group A, but in most cases it was less safe. For the 14 examined masonry buildings, based on the results of Group B (excluding the Azizi method), about 50% of the structures had, indeed, a 'low' or 'medium-low' vulnerability, while, based on the results of Group A (excluding the S&C method), all the buildings had a 'medium' or 'medium-high' vulnerability. Moreover, Figure 13 also shows that two methods of Group A (DM 58/17 and SAVE) gave similar vulnerability judgments for the 14 buildings, while most methods of Group B (with the exception of the Azizi method) provided differentiated judgments for the 14 buildings.
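As a minimal illustration of the two 'envelope' operations used in Tables 14 and 15 (taking the most severe quantitative result across the methods of Group B, and converting a capacity-to-demand ratio into a linguistic class), the following sketch can be considered; the safety factors and, above all, the class thresholds are hypothetical, since the actual correspondence is the one defined in Table 10, which is not reproduced here.

```python
# Illustrative sketch of the 'envelope' operations in Tables 14 and 15:
# (i) for each building, the most severe (lowest) capacity-to-demand ratio
#     across the quantitative methods of Group B, and
# (ii) conversion of a ratio into a linguistic vulnerability class.
# All numerical values and class thresholds below are hypothetical.

safety_factors = {                      # building id -> {method: capacity/demand}
    1: {"RE.SIS.TO": 0.55, "GNDT": 1.60, "DPCM": 0.42, "L&R": 0.70},
    2: {"RE.SIS.TO": 0.90, "GNDT": 2.10, "DPCM": 0.65, "L&R": 1.20},
}

def to_class(ratio):
    """Map a capacity-to-demand ratio to a linguistic judgment (assumed thresholds)."""
    if ratio < 0.5:
        return "high"
    if ratio < 0.8:
        return "medium-high"
    if ratio < 1.0:
        return "medium-low"
    return "low"

for building, methods in safety_factors.items():
    envelope = min(methods.values())            # most severe quantitative result
    print(building, f"envelope = {envelope:.2f}", "->", to_class(envelope), "vulnerability")
```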
These results evidence that the methods of Group B allow one to highlight the singularity of each building, since more detailed information (geometry and material properties) is required for their application, while the more general methods provide judgments for 'classes of buildings', where specific data concerning the single building are not taken into account.

Conclusions

The need of defining a reliable method for assessing the seismic vulnerability of existing buildings is crucial in order to prevent damage and subsequent economic loss, both in ordinary conditions, through the adoption of an adequate maintenance program, and in extraordinary conditions, such as the emergency phases following an earthquake. Since an important aim is the estimation of the consequences of a seismic event on a territory, several methodologies are available in the literature aimed at providing an assessment of the seismic vulnerability on a 'large scale' by means of simplified analyses. This paper was firstly focused on the analysis of some simplified methods proposed in the literature for assessing the 'large scale' seismic vulnerability, with a particular focus on those most suitable for the typical Italian building heritage. The selected methods differ in type and number of input data, type of output, limitations of use for different structural typologies, complexity of use, computational effort, and type of final judgment. The main idea of the paper was to verify the possibility of providing reliable large scale vulnerability judgments based on a minimum set of information, already available in the local or national platform and without the necessity of carrying out additional surveys, i.e., 'at zero cost' in terms of time and resources.
This means that the preliminary large scale vulnerability assessment could be carried out only on the basis of a few data made available by administrative offices and not coming from additional technical surveys. According to such an approach, the choice of a suitable method for assessing the seismic vulnerability on a large scale should also be consistent with the available level of knowledge of the buildings and, thus, not all the methods are reliable or applicable in the case of a lack of the necessary information. The methods selected according to the above criteria were then applied to a sample of school buildings located in the province of Naples (Italy), characterized by an almost uniform seismic hazard. The buildings differed mainly in structural typology, geometry, and age of construction. The structural data necessary for applying the vulnerability methods were not obtained by onsite surveys directly carried out by the authors, but came from a regional digital platform. Depending on the information necessary for the application of each method, the reference sample of buildings was customized according to the level of knowledge available. The methods were divided into two macrocategories: Group A with the most simplified methods, which require a reduced number of parameters, and Group B with more detailed methods, which need more data. Methods of Group A provide a vulnerability judgment related to the assignment of a 'vulnerability class', while methods of Group B generally provide a quantitative evaluation based on safety indexes. Some methods of Group B associate a vulnerability judgment to the safety index too. It is worth noting that most methods of Group B are applicable to masonry structures only. The comparison between the results obtained from the application of the different methods allows one to highlight the following items:

(1) Full database (935 buildings, application of the methods of Group A):
- For masonry buildings (228), the SAVE and DM58/17 methods gave comparable vulnerability classes (high-medium), with the results of SAVE being slightly more conservative. The S&C approach judged the buildings as almost equally distributed in three classes (low, medium, and high vulnerability);
- For RC buildings (707), unlike masonry buildings, the SAVE method turned out to be unsafe, since it provided a low vulnerability class for all the buildings of the set. This result was not credible given the heterogeneity of the construction age of the different buildings in the set, which included both old buildings and recently designed buildings. Again, as for masonry structures, the S&C method provided vulnerability judgments more distributed within the classes. The Grant method was much safer than the S&C one, but it was applicable to a very small subset of buildings, i.e., not to those realized before 1984 in the Campania Region (for which the parameter PGA_c could not be defined);
- As expected, due to their intrinsic vulnerability to seismic actions, the masonry buildings designed for gravity loads only had more severe vulnerability judgments (high or medium-high) in comparison with the RC buildings.

(2) Reduced database (14 masonry buildings, application of the methods of Groups A and B)

(a) Methods of Group A
- The vulnerability judgments provided by the SAVE method were the safest ones, since it provided a 'high' vulnerability for all 14 buildings. This result was consistent with that previously found by analyzing the larger set of 228 masonry buildings.
- Diversified judgments for the 14 buildings were provided by the S&C and DM 58/17 methods, with the latter being safer.

(b) Methods of Group B
- The GNDT method was the least safe approach, since for all examined buildings the safety factor was well above 1, probably because of an overestimation of the PGA capacity due to uncertainties in the assessment of some parameters of the GNDT form;
- The DPCM and RE.SIS.TO methods led to similar results, in a safe direction, since the safety factors were less than 1 in most cases;
- The safety factors given by the L&R method fell between the two extreme results above;
- With the exception of the Azizi method, only about 50% of the examined buildings had a 'low' or a 'medium-low' vulnerability, confirming the intrinsic vulnerability to seismic actions of masonry buildings designed for gravity loads only, which was also evidenced by the application of the methods of Group A to the whole database.

(c) Comparison between the methods of Groups A and B
- As expected, the more simplified methods of Group A led to safer results than those provided by the methods of Group B;
- The expeditious method DM 58/17, based on the EMS classification, was consistent with the 'envelope' related to the most sophisticated methods of Group B. This makes the DM 58/17 method reliable, at least for masonry structures, whenever the available data are limited;
- For the same building, the vulnerability judgment provided by the methods of Group B could be better or worse than that assessed by means of the methods of Group A, but in most cases it was less safe. Moreover, the methods of Group B allowed one to highlight the singularity of each building, with diversified judgments within the examined set, since more detailed information (geometry and material properties) is required for their application, while the methods of Group A provided more uniform judgments, suitable for 'classes of buildings', where specific data concerning the single building are not taken into account.

Given the results above, for RC buildings, the authors suggest the application of the S&C method from Group A for assessing the large-scale seismic vulnerability when an accurate level of knowledge is not available for the buildings in the set. Analogously, for masonry structures, when few data are available, a rational choice could be that of applying the DM 58/17 method from Group A. For masonry buildings, when it is possible to reach a better level of knowledge of the structures and, therefore, to apply one of the methods of Group B, the RE.SIS.TO approach (with more diversified judgments within the set of 14 buildings) or the DPCM 9/2/11 approach (more homogeneous judgments, on average safer than the L&R method) is preferable. The two methods require very different input parameters, in number and quality, and, thus, the designer can move towards one or the other depending on the data that can be collected and used for the specific method. In particular, if the number of available parameters is small, it is advisable to adopt the DPCM 9/2/11 method, which is in agreement with the judgments provided by the DM 58/17 method of Group A. Based on these results, it can be concluded that the most significant parameters for reliably assessing the seismic vulnerability on a large scale are: the type of vertical and horizontal structures, the number of stories, and the age of construction.
For masonry buildings, the following data also have to be considered in order to apply the more detailed methods: the type of masonry, the shear strength of masonry, the total weight of the building, and the normal stresses acting in the masonry walls (the latter two parameters require the knowledge of the unit weight of masonry and of the area of the resistant walls). In conclusion, the studies presented in this paper highlighted that the identification of reliable simplified methodologies for evaluating the seismic vulnerability is not simple, since the results can differ according to the considered approach. The choice of the most suitable approach depends on the data available for the set of buildings under examination and has to take into account that the data might not always be the same for all the buildings. Future developments of this research will be focused on: (a) the application of the investigated methodologies to wider databases, in order to confirm the indications provided in this paper, and (b) performing refined non-linear structural analyses, or the simplified procedures suggested by mechanical approaches, to calculate the 'probable' seismic vulnerability for a limited number of buildings in the database, after more detailed information about them has been collected.

Conflicts of Interest: The authors declare no conflict of interest.
Integrated biological and chemical characterisation of a pair of leonardesque canal lock gates

The Museo Nazionale della Scienza e della Tecnologia "Leonardo da Vinci" in Milan is exhibiting two pairs of canal lock gates, used to control the water flow in the Milan canal system, whose design appears in Leonardo's Codex Atlanticus. The wood present in the gates has been deeply characterised by means of a multidisciplinary investigation involving i) DNA barcoding of wood fragments; ii) microbial community characterisation; and iii) chemical analyses. DNA barcoding revealed that two fragments of the gates belonged to wood species widely used in the Middle Ages: Fagus sylvatica and Picea abies. The chemical characterisations were based on the use of an ionic liquid as dissolving medium in order to analyse the entire cell wall material by means of Gel Permeation Chromatography (GPC) and 2D-NMR-HSQC techniques. This multidisciplinary analytical approach was able to highlight the complex nature of the degradation that occurred during the gate operation (XVI-XVIII centuries): an intricate interplay between microbial populations (i.e. Shewanella), inorganic factors (i.e. iron from nails), physical factors and the lignocellulosic material.

Introduction

Recently, a diagnostic campaign funded by "Regione Lombardia" was carried out with the goal of obtaining useful information on the conservation conditions and historical data of two canal lock gates [1]. Those two pairs of lock gates, whose design appears in the Leonardo da Vinci Codex Atlanticus, were used to control the water flow in the Milan canal system during the XVI-XVIII centuries and were removed in the XX century from the San Marco and Cassina di Pomm locks (Milan). They are nowadays property of the Milan city museums and conserved in the Museo Nazionale della Scienza e della Tecnologia "Leonardo da Vinci". Together with the radiographic analyses and radiocarbon dating, a preliminary chemical characterisation of the wood was performed in order to assess conservation strategies for museum exhibition [1]. The gates were in fact partially waterlogged and subjected to wet/dry cycles producing physical and chemical modifications of the wood structure. It is well known that waterlogged woods are artefacts that represent a conservation challenge still far from being solved, due to the complex degradation processes involved. The sampling positions are shown in Fig 1, where the white numbers label the sampling points [1]. Depending on the amount of wood, the samples were submitted to the different characterisations described in Table 1.

Method

Barcoding. Samples 2 and 7 were prepared for barcoding analysis and subsequent botanical identification. After removing the external parts of the samples with a sterile scalpel (to avoid fungal contamination), some sample slices were washed with distilled water and kept in a sterilized glass bottle with distilled water at 4°C for about 72 h; the water was changed every 12 h. DNA extraction was carried out combining the CTAB and DNeasy Plant Mini Kit (Qiagen, Germany) methods [24]. To 0.50 mg of frozen (in liquid nitrogen) and ground plant tissue material, 900 μl of CTAB extraction buffer (preheated at 65°C), 50 μl of 2-mercaptoethanol and 20 μl of RNase A (Qiagen, Germany) were added. Samples were homogenized for 3 min, then incubated for 20 min at 65°C with gentle shaking and finally kept for 8 min at room temperature. After the addition of one volume of chloroform, the samples were spun at 9,000 rpm for 10 min (a step repeated until the supernatant became clear).
The aqueous phase of the samples (upper part) was then transferred into a new sterile tube (2 ml) and one volume of sterile water was added. The pH was adjusted to 7.0 with 20% HCl. Subsequently, the lysate filtration, the DNA elution and its suspension in 50 μl of AE buffer were performed by using the QIAshredder and DNeasy spin columns according to the manufacturer's instructions (Qiagen, Germany). The concentration of DNA was measured using a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, USA). Molecular characterisation was performed with 3 different DNA primer pairs (Pp) widely used in barcoding [25], selected in the rbcL region of the plastid DNA:
• PpA (rbcL1-rbcLR3A): rbcL1 forward (TTGGCAGCATTYCGAGTAACTCC) and rbcLR3A reverse (TTCGGTTTAATAGTACAGCCCAAT);
• PpB (rbcLF2-rbcLR3A): rbcLF2 forward (TGTTTACTTCCATTGTGGGTAATG) and rbcLR3A reverse (TTCGGTTTAATAGTACAGCCCAAT);
• PpC (rbcL1F-rbcL724R): rbcL1F forward (ATGTCACCACAAACAGAAAC) and rbcL724R reverse (TCGCATGTACCTGCAGTAGC).
Polymerase chain reaction (PCR) amplification was performed using PuReTaq Ready-To-Go PCR beads (Amersham Bioscience, Italy) in a 25 μL reaction according to the manufacturer's instructions. PCR cycles consisted of an initial denaturation step for 7 min at 94°C, 35 cycles of denaturation (45 s at 94°C), annealing (30 s at 48°C) and elongation (1 min at 72°C), and a final extension at 72°C for 10 min. PCR products were sequenced using an ABI 3730XL automated sequencer at BMR Genomics (Padua, Italy). Manual editing of the raw traces and subsequent alignment of the forward and reverse sequences allowed us to assign the edited sequences to species. In particular, the 3′ and 5′ terminals were clipped to generate consensus sequences for each taxon. In order to avoid the inclusion of inadvertently amplified nuclear pseudogenes of plastid origin (see, for example, De Mattia et al. [26]), the barcode sequences were checked following the guidelines of Buhay, 2009 [27]. The rbcL sequences were visualised and edited using the Sequencher 4.8 program, and a sequence similarity search for plant identification was carried out by querying the GenBank database (GenBank accession numbers: MT231324-MT231327), using the BLAST program (https://blast.ncbi.nlm.nih.gov/Blast.cgi) [28]. The sequences obtained were then deposited in the NCBI's data library (https://www.ncbi.nlm.nih.gov/; submission #2323851).

Microbial community characterisation. The microbial communities hosted by three wood samples (samples 11, 14 and 16) were characterised by high-throughput sequencing of the taxonomic markers 16S rRNA gene for bacteria and ITS1 for fungi. Total genomic DNA was extracted from approximately 150 mg of wood for each sample, using the FastDNA™ SPIN Kit for Soil (MP Biomedicals, Solon, OH, USA). Extraction was performed according to the manufacturer's instructions, except that the FastPrep® instrument was run for 45 s at a speed of 6.5, and the following centrifugation step was extended to 15 min. The V5-V6 hypervariable regions of the 16S rRNA gene were PCR-amplified using the 783F and 1046R primers [29,30], while the ITS1 region was amplified with the ITS1F and ITS2 primers [31]. At the 5' end of each primer, a 6-bp barcode was included to allow sample pooling and sequence sorting. All amplicons were sequenced by MiSeq Illumina (Illumina, Inc., San Diego, CA, USA) with a 2 × 250 bp paired-end protocol.
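Returning to the barcoding workflow, the species assignment step described above (querying GenBank with the edited rbcL consensus sequences through BLAST) can also be reproduced programmatically. The sketch below uses Biopython's qblast interface with a placeholder query sequence; the actual consensus sequences are those deposited under accessions MT231324-MT231327 and are not reproduced here.

```python
# Sketch of a BLAST-based identification of an rbcL barcode sequence against GenBank.
# Requires Biopython and network access; the query sequence below is a placeholder,
# not one of the consensus sequences obtained in this study.

from Bio.Blast import NCBIWWW, NCBIXML

query_rbcl = (
    "ATGTCACCACAAACAGAAACTAAAGCAAGTGTTGGATTCAAAGCTGGTGTTAAAGATTAC"
    "AAATTGACTTATTATACTCCTGACTATGAAACCAAAGATACTGATATCTTGGCAGCATTC"
)

# blastn against the NCBI nucleotide collection ('nt'), keeping the top 5 hits
result_handle = NCBIWWW.qblast("blastn", "nt", query_rbcl, hitlist_size=5)
record = NCBIXML.read(result_handle)

for alignment in record.alignments:
    best_hsp = alignment.hsps[0]
    identity = 100.0 * best_hsp.identities / best_hsp.align_length
    print(f"{alignment.title[:70]}  identity = {identity:.1f}%")
```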
For each sample, 3 × 75 μL PCR reactions were performed with Phusion® High-Fidelity DNA Polymerase (New England Biolabs, Ipswich, MA, USA), using 5X Phusion GC Buffer, MgCl2 at a final concentration of 2 mM, 200 μM of each dNTP, 0.5 μM of each primer, and 1.5 U of Phusion polymerase. The cycling conditions for the amplification of the 16S rRNA gene fragment were: initial denaturation at 98°C for 1 min; 28 cycles at 98°C for 7 s, 47°C for 20 s, and 72°C for 10 s, and a final extension at 72°C for 5 min. The cycling conditions for the amplification of the ITS1 region were: initial denaturation at 98°C for 1 min; 30 cycles at 98°C for 7 s, 50°C for 20 s, and 72°C for 10 s, and a final extension at 72°C for 5 min. However, no PCR products were obtained in the reactions using the ITS primers; therefore, further analyses were conducted on the bacterial communities only. The amplicons were purified with the Wizard® SV Gel and PCR Clean-up System (Promega Corporation, Madison, WI, USA) and the purified DNA was quantified using Qubit® (Life Technologies, Carlsbad, CA, USA). Further library preparation, with the addition of standard Nextera indexes (Illumina, Inc., San Diego, CA, USA), and sequencing were carried out at Parco Tecnologico Padano (Lodi, Italy). The reads from sequencing were demultiplexed according to the indices. The Uparse pipeline was used for the subsequent elaborations [32]. Forward and reverse reads were merged with perfect overlapping and quality filtered with default parameters. Suspected chimeras and singleton sequences (i.e. sequences appearing only once in the whole data set) were removed. OTUs were defined on the whole data set by clustering the sequences at 97% similarity and defining a representative sequence for each cluster. The abundance of each OTU was estimated by mapping the sequences of each sample against the representative sequence of each OTU at 97% similarity. Taxonomic classification of the OTU representative sequences was obtained with the RDP classifier. Sequences classified as chloroplasts were discarded.

FT-IR. The chemical composition of the archaeological wood powders (Table 1) was investigated by means of a Fourier Transform Infrared (FT-IR) spectrometer (Nicolet iS10, Thermo Scientific) equipped with an ATR sampling accessory with a diamond crystal (Smart iTR). For each spectrum, 64 scans were recorded with a spectral resolution of 4 cm-1.

Wood acetylation in ionic liquid. The acetylation reaction was performed in 1-allyl-3-methylimidazolium chloride ([amim]Cl, 950 mg) on the wood powders (70 mg, Table 1) with acetyl chloride, as reported by Salanti [23]. The procedure was slightly modified: at the end of the reaction, 200 μL of iodomethane were added and left to react for an extra 15 minutes in order to convert the carboxylic acids into methyl esters. The acetylated wood samples were solubilized in THF (1 mg mL-1) for GPC analysis and in d6-DMSO for NMR analyses (50 mg mL-1).

GPC analyses. Acetylated wood samples, after dissolution in THF (1 mg mL-1), were analysed by GPC using THF as eluent at a flow rate of 1 mL min-1. The analyses were performed on an HP1100 liquid chromatography system equipped with a UV-Vis detector set at 280 nm. The injection port was a Rheodyne® valve equipped with a 20 μL loop. The GPC column system was composed as follows (according to the solvent flow direction): Agilent PLgel 5 μm, 500 Å; Agilent PLgel 5 μm, 1000 Å; and Agilent PLgel 5 μm, 10000 Å. PL Polymer Standards of Polystyrene from Polymer Laboratories were used for calibration.
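As an aside on the GPC data treatment, the conversion of elution times into polystyrene-equivalent molecular weights relies on the calibration with the PL polystyrene standards mentioned above. A minimal sketch of such a conventional calibration is given below; the standard and sample values are invented for illustration and do not correspond to the measured chromatograms.

```python
# Minimal sketch of a conventional GPC calibration: a polynomial fit of log10(M)
# versus elution time for polystyrene standards, then conversion of a sample's
# peak elution time into a peak molecular weight (Mp). All values are invented.

import numpy as np

# (elution time in min, nominal molar mass in g/mol) for the polystyrene standards
standards = np.array([
    (14.2, 1_000_000),
    (15.8,   300_000),
    (17.5,    70_000),
    (19.3,    17_000),
    (21.0,     4_000),
    (22.6,     1_000),
])

t_std, M_std = standards[:, 0], standards[:, 1]
coeffs = np.polyfit(t_std, np.log10(M_std), deg=3)   # third-order calibration curve

def molar_mass(elution_time):
    """Molar mass (g/mol) corresponding to an elution time, from the calibration."""
    return 10 ** np.polyval(coeffs, elution_time)

t_peak_sample = 18.4                                  # min, hypothetical UV-trace peak
print(f"Mp = {molar_mass(t_peak_sample):,.0f} g/mol (polystyrene-equivalent)")
```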
The peak molecular weight (Mp) values reported are the average of three replicate analyses (Mp: ±100 g mol-1, P = 0.05, n = 3).

2D-HSQC-NMR analyses. Two-dimensional Heteronuclear Single Quantum Coherence (2D-HSQC) spectra were run in DMSO-d6 on the IL-acetylated wood samples. The inverse-detected 1H-13C correlation spectra were measured on a Bruker Avance 500 MHz spectrometer set at 308 K. The spectral width was set at 5 kHz in F2 and 25 kHz in F1. In total, 128 transients in 256 time increments were collected. The polarization transfer delay was set for an assumed coupling of 140 Hz, and a relaxation delay of 2 s was used. The spectra were processed using π/2 shifted squared sinebell functions in both dimensions before FT. The integrated procedures are schematised in Fig 2, along with the SEM, MWC and GPC-after-benzoylation analyses performed in [1].

Plant DNA barcoding

DNA extraction from the ancient wood fragments was successful for the two investigated samples (2 and 7), with good DNA quality but modest yield (i.e. 5-10 ng mL-1). Differences in amplification success, PCR product lengths and sequence quality were detected for the three considered rbcL loci (PpA, PpB and PpC). In particular: a) the PpA products ranged from 361 to 376 bp, for samples 2 and 7, respectively; b) the PpB products ranged from 128 to 136 bp, for samples 7 and 2, respectively; c) the PpC products failed to yield usable sequences. The BLAST results based on sequence matching, as well as the putative species identification, attributed sample 2 to Fagus sylvatica and sample 7 to Picea abies.

Microbial community

The bacterial communities of samples 11 and 14 were clearly dominated by Shewanella (70.0 and 77.8%, respectively, Fig 3). This genus was also present in sample 16, although at a much lower abundance (6.8%). Delftia and Halomonas were also more abundant in samples 11 and 14 than in sample 16. In fact, Delftia had a relative abundance of 4.6 and 4.9% in samples 11 and 14, respectively, but only of 0.3% in sample 16, while Halomonas had a relative abundance of 2.8, 5.1 and 1.1% in the three samples, respectively. In contrast, the bacterial community of sample 16 was more diverse and was not clearly dominated by any population. Here, the most abundant genera were Ralstonia, Hymenobacter, and Sphingomonas, with relative abundances of 17.4, 8.9 and 7.5%, respectively. However, several other genera were part of the bacterial community of this sample, as well as unclassified members of the class Gammaproteobacteria and of the family Flavobacteriaceae. By contrast, no amplicons were obtained in the PCR reactions using the ITS primers. This was possibly due to the very low amount of total DNA extracted from the wood samples, of which fungal DNA is only a fraction. Therefore, although the presence of fungal communities in the gate wood cannot be excluded, it was not possible to fully characterise them.

Chemical characterisation

As a preliminary characterisation, the wood powders were submitted to FT-IR analyses. The stacked FT-IR spectra of samples 1, 10, 13, 11, 6 and 12 are reported in Fig 4, while the assignments of the main lignocellulosic bands are described in Table 2. The spectra of samples 1, 10 and 13 were qualitatively similar, and it was possible to detect the principal bands related to lignin, hemicelluloses and cellulose [17,18].
The spectra of samples 6 and 12 were instead different, with the lignin bands at 1505 (b) and 1270 cm-1 (d) enhanced in relation to the reduction of the typical polysaccharide bands at 1375 (c), 1157 (f), 1030 (g) and 900 (h) cm-1. However, different lignocellulosic components have overlapping absorption bands: for example, the band (a) at 1730 cm-1, related to the unconjugated C=O in xylans, was present in samples 1, 10 and 13. This band was missing in sample 11 and then increased again in samples 6 and 12, probably due to the oxidation of lignin with the formation of carbonyl groups. In addition, the band (d) at 1240 cm-1 (syringyl ring and C-O stretch in lignin and xylan) also had a non-linear trend, decreasing from sample 1 to sample 11 and then increasing again in samples 6 and 12. In general, the FT-IR spectra highlighted a degradation typical of waterlogged woods, consisting in a relative enrichment in lignin due to the loss of polysaccharides [10,11]. We then adopted ionic liquids (ILs) as non-derivatizing solvents, allowing us to overcome the difficulty of dissolving wood in conventional molecular solvents [22]. Benzoylated (performed in [1]) and acetylated wood samples were analysed by GPC at 240 and 280 nm, respectively, in order to maximize their analytical response. The chromatograms obtained are reported in Fig 5. As previously reported [23], after the benzoylation reaction in IL, polysaccharides and lignin have a similar UV response (240 nm) due to the presence of the phenyl ester. Therefore, the chromatograms of the benzoylated samples report the molecular weight distribution of the whole cell wall components. On the other hand, the chromatograms of the acetylated samples (280 nm) account exclusively for the molecular weight distribution of those lignocellulosic fractions that naturally contain aromatic moieties (mainly LCCs and lignin). In this view, it is possible to observe a trend in degradation related to the decrease of the molecular weights from sample 1 to sample 12 (Fig 5, left panel). In particular, samples 1, 10 and 13 had a molecular weight distribution similar to that of sound hardwood (not reported) [12]. On the contrary, samples 11, 6 and 12 had a GPC profile shifted to lower molecular weights: this was probably due to cellulose hydrolysis. It is possible to observe a similar and general trend for the GPC profiles after acetylation (Fig 5, right panel). However, some differences could be highlighted: samples 1 and 10 had an acetylated GPC profile typical of undegraded woods, while the molecular weight distribution of sample 13 showed degradation of the LCC structure. Samples 6 and 12 were characterised (as for benzoylation) by a GPC profile shifted to low molecular weights. Sample 11 showed a behaviour not in line with the general trend. This trend, already reported for waterlogged woods, could be rationalised in different temporal phases: i) LCC degradation with partial loss of hemicellulose, ii) unshielded cellulose degradation with relative enrichment in lignin [9]. The high solubility achieved after wood acetylation enables the analysis of the derivatized wood by means of 2D NMR techniques after dissolution in DMSO-d6 [12]. The HSQC spectra of samples 1, 10, 11, 13, 6 and 12, along with the main chemical structures of lignin, hemicelluloses and cellulose, are reported in Fig 6.
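The spectra presented above are interpreted in the following sections mainly through a few simple marker ratios: the FT-IR intensity ratio I1505/I1030 (lignin aromatic band versus polysaccharide C-O band) and the HSQC integral ratios C1/OCH3 and X5/OCH3. A minimal sketch of how such ratios could be computed is given below; the synthetic spectrum and the integral values are placeholders, not measured data.

```python
# Sketch of the two kinds of degradation markers used for these samples:
# (i) the FT-IR intensity ratio I1505/I1030, and
# (ii) HSQC integral ratios C1/OCH3 and X5/OCH3 (polysaccharide signals
#      normalised to the lignin methoxyl signal).
# Spectrum and integral values below are invented placeholders.

import numpy as np

# --- FT-IR ratio ---------------------------------------------------------
wavenumbers = np.linspace(1800, 800, 2001)            # cm-1, synthetic axis
absorbance = (0.25 * np.exp(-((wavenumbers - 1505) / 12) ** 2)
              + 0.80 * np.exp(-((wavenumbers - 1030) / 25) ** 2))  # two synthetic bands

def band_intensity(target, window=4.0):
    """Maximum absorbance within +/- window cm-1 of the target wavenumber."""
    mask = np.abs(wavenumbers - target) <= window
    return absorbance[mask].max()

ratio_ir = band_intensity(1505) / band_intensity(1030)
print(f"I1505/I1030 = {ratio_ir:.2f}  (higher values suggest relative lignin enrichment)")

# --- HSQC integral ratios ------------------------------------------------
integrals = {"C1": 3.1, "X5": 0.6, "OCH3": 1.0}       # placeholder volume integrals
print(f"C1/OCH3 = {integrals['C1'] / integrals['OCH3']:.2f}, "
      f"X5/OCH3 = {integrals['X5'] / integrals['OCH3']:.2f}")
```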
Considering the results of the barcoding analyses, it is possible to hypothesize that all the samples have the same botanical origin, Fagus sylvatica: in fact, in all the samples the hardwood diagnostic peak of the syringyl unit was detected. As already observed by FT-IR and GPC, samples 1 and 10 were characterised by an HSQC spectrum typical of a well-preserved wood. From a chemical point of view, the main components were detected: cellulose (indicated as Cn for the anhydroglucopyranose units), hemicelluloses (indicated as X5, considered representative of glucuronoxylans) and lignin (indicated as A for the aliphatic region and S, S' and G for the aromatic region). The NMR data were also in agreement with FT-IR and GPC for samples 13, 6 and 12: those samples were characterised by a weak X5 signal (samples 13 and 12) or no detectable X5 signal (sample 6), and the cellulose Cn signals were in general less intense. The results indicated a trend of lignin enrichment via LCC degradation. Sample 11 was, on the contrary, characterised by the presence of the X5 signal and by the disappearance of the lignin signals A (aliphatic region).

Discussion

Remarkably, our results showed that samples 2 and 7 were attributed, by means of barcoding tools, to two species that are very common at the regional level, in the Lombardy pre-Alps and Alps, which can be supposed to be the origin areas of the woods: a) F. sylvatica (beech) ranges from about 500 to 1200 m a.s.l.; b) P. abies (Norway spruce) ranges from about 800 to 2200 m a.s.l. Since Roman times and through the Middle Ages, the woods of these two species have always been economically important, both as structural woods and for their use in domestic and industrial products [33][34][35]. These considerations were consistent with the radiocarbon dating of the lock, around the XVI-XVIII centuries, and with the SEM analyses, where the image of sample 2 highlighted the typical vessel structures of hardwood [1]. The wood of beech is heavy, hard, highly resistant to shock and suitable for steam bending [36]. Those characteristics explain why this wood species was probably selected to constitute the frame of the canal lock gates (sample 2, Fig 1). On the other hand, the wood of Norway spruce is moderately lightweight, strong, stiff, tough, and hard [37,38]. Since it can be easily processed and used, the spruce wood was probably selected to constitute the central plank and the shutter (sample 7, Fig 1): those parts could often have been subjected to maintenance during the operation of the canal lock gate. As already reported in [1], the MWC, SEM and GPC-after-benzoylation analyses of samples 1-17 indicated a good state of preservation of the wood composition and a deterioration typical of waterlogged woods (mainly on the external part), which depends on the continual filling and emptying of the lock. Radiographic analyses evidenced some metal inclusions, probably due to the presence of metal nails. In waterlogged conditions, the oxidation of the nails could have generated cations able to catalyse the wood degradation. Depending on the available amount of wood, samples 1, 10, 13, 11, 6 and 12 were submitted to different and deeper characterisations following the scheme reported in Fig 2 and Table 1.
The main results of the chemical characterisation are summarised in Fig 7, where the MWC (%) [1], the GPC data output (Mp in g/mol for the benzoylated and acetylated profiles), the integration ratios C1/OCH3 and X5/OCH3 from the 2D-HSQC spectra, the intensity ratio 1505/1030 cm-1 from the FT-IR spectra [1,9,17,18], and the SEM images [1] for the archaeological samples 1, 10, 13, 11, 6 and 12 are reported. The integration ratios C1/OCH3 and X5/OCH3 from the HSQC spectra were used to quantify the holocellulose/lignin and hemicellulose/lignin ratios, respectively. As far as the MWC values are concerned, we need to highlight that the samples were recovered from the surface of the gates, which were stored for a long time in dry conditions: they could have suffered collapses that inhibit a complete re-hydration, so the reported MWC values were used as indicative values. It is possible to observe that from sample 1 to sample 12 (excluding sample 11) there was a trend. Samples 1 and 10 were characterized by low MWC, high Mp (both benzoylated and acetylated), a high C1/OCH3 ratio, a low I1505/I1030 ratio and a high X5/OCH3 ratio. SEM images highlighted a thick cell wall. Those results are in agreement with a well-preserved wood chemical structure and composition. Porosity was in the range of sound woods, the cellulose molecular weights indicated absence of hydrolysis, the LCCs were intact and no enrichment in lignin was detected. On the contrary, samples 6 and 12 were characterized by MWC values indicating low-medium degradation, low Mp (both benzoylated and acetylated), a low C1/OCH3 ratio, a high I1505/I1030 ratio and a low X5/OCH3 ratio. SEM images highlighted a thin cell wall. Those results indicated wood degradation in terms of both chemical structure and lignocellulosic composition. Cellulose was hydrolysed and the small fragments leached out, and the LCCs were degraded with loss of hemicelluloses: the final material was a spongy lignocellulosic residue enriched in lignin. Sample 13 was characterized by output data in between those of samples 1 and 10 and those of samples 6 and 12. It was characterized by low MWC, a high Mp after benzoylation but a low Mp after acetylation, a high C1/OCH3 ratio, a low I1505/I1030 ratio and a low X5/OCH3 ratio. SEM images highlighted a thick cell wall. From a chemical point of view, sample 13 showed a well-preserved structure but an incipient degradation of the LCCs, which could be considered the first step of further modifications to cellulose [9]. This degradation could be caused by different physical and chemical factors: i) the conditions to which the artefact was subjected during operation, in particular the dry/wet cycles [1], and ii) the role of iron oxidation from the nails [1,39,40]. Through the microbial ecology analyses, the biological factors were also evaluated. No fungal populations were detected in the microbiological characterisation, and many of the bacterial genera found were ubiquitous. Sample 16 hosted a bacterial community very different from those of samples 14 and 11. In particular, the latter were characterised by a low biodiversity and a strong dominance by one bacterial genus only, Shewanella. It can be hypothesized that the wood surface of these samples offered harsher conditions to bacterial colonization, thus selecting for more specialised bacteria. In contrast, the wood of sample 16 did not seem to exert strong selective pressures on the bacterial populations.
In fact, many bacterial genera of this community are either ubiquitous in the environment, such as Ralstonia, Flavobacterium, Pedobacter, Acidovorax and Pseudomonas [41], or are commonly retrieved as associated with the aerial parts of many plant species, such as Hymenobacter, Sphingomonas, Methylobacterium and Burkholderia [42,43]. This peculiar bacterial community structure is in agreement with the particular chemical characterisation data, at least for sample 11. In fact, sample 11, unlike all the other samples, was characterized by an acetylated Mp higher than the benzoylated Mp, a high C1/OCH3 ratio, a medium-low I1505/I1030 ratio and a high X5/OCH3 ratio. Those data were not in line with the general trend observed for all the other samples, which were characterised by enrichment in lignin as a degradation marker. The chemical characterisation indicated a partial degradation of the polysaccharides (cellulose), but sample 11 seemed predominantly characterised by the loss of lignin. In fact, we also detected the disappearance of the lignin signals A of the aliphatic region, related to the β-O-4 inter-monomeric linkage. Members of the genus Shewanella, which was strongly dominant in the bacterial community hosted by sample 11, are facultative anaerobic bacteria widely distributed in marine and freshwater environments [44]. They have been widely described as electroactive microorganisms, being capable of extracellular electron transfer [45]. In particular, they can reduce Fe(III) by transferring electrons through soluble shuttles that are either secreted by the cell, such as flavins, riboflavins [46] and melanin [47], or found in the extracellular environment, such as humic acids [48]. The effectiveness of humic acids in this process has been attributed to their high content of polycondensed and conjugated aromatic moieties, which mediate Fe(III) reduction [49]. Analogously, it has been shown that lignin possesses redox activity and can be repeatedly switched between oxidized and reduced states [50]. It can therefore be hypothesized that Shewanella, in the presence of Fe(III), may exploit lignin as an electron shuttle, thus finding a suitable environment on the wood surface and gaining a selective advantage over other bacteria. Under the operating conditions, the repeated switching between oxidized and reduced states could lead to a dissimilatory lignin depletion [51]. In contrast, both Delftia and Halomonas have been described as potential lignin degraders. In fact, it has been proposed that some Delftia strains might be able to mineralize lignin-derived aromatic compounds [52], while Halomonas meridiana M11 was part of a lignocellulose-degrading consortium [53]. Moreover, the presence of Halomonas, a halophilic and halotolerant microorganism, suggests that the locks came into contact with saltwater in their past, at least occasionally. The hypothesis of lignin degradation in archaeological waterlogged wood by a particular bacterial genus such as Shewanella must be investigated more deeply.

Conclusion

In conclusion, the multidisciplinary analytical approach, based on barcoding of wood fragments, microbial community study and chemical analyses, was able to highlight the complex history of the canal lock gates (XVI-XVIII centuries). DNA barcoding, used as an innovative technique, permitted the identification of the botanical origin, revealing that two fragments of the gates belonged to wood species widely used in the Middle Ages: Fagus sylvatica and Picea abies.
The microbial ecology was also investigated, identifying the bacterial communities that colonized the gates during their use. These data were integrated with the chemical characterisation in order to obtain a complete picture of the history and state of the artefact: a typical waterlogged wood degradation was confirmed. However, for one sample an interesting different degradation pathway was hypothesized: an intricate interplay between a particular microbial community with a strong dominance by one bacterial genus only (Shewanella) and the presence of iron cations from the oxidation of the nails, which led to lignin depletion in the wood.
Collaborative Planning of Charging Station and Distribution Network Considering Electric Vehicle Mobile Energy Storage

A collaborative planning model for electric vehicle (EV) charging stations and distribution networks is proposed in this paper, based on the consideration of electric vehicle mobile energy storage. As a mobile charging load, EVs can interact with the power grid. Taking EVs into account in the planning, subsidies for EVs are used to shift the charging load to feeder network areas with a large margin, reducing the required transformer capacity and the overall planning cost. Finally, the article uses CPLEX to solve the optimization problem, and an 18-node distribution system is used for simulation verification.

Introduction

In recent years, in order to ease the pressure on the environment and energy, the EV industry has received widespread attention for its clean and efficient energy model [1]. The charging load of EVs is affected by factors such as battery characteristics, user behaviours, and roads; it is random, intermittent, and fluctuating in its time and space distribution. Large-scale EVs connected to the grid will affect the safe operation of the grid. At the same time, EVs have considerable energy storage value. Based on 2020's holdings of 5 million units and a simultaneity rate of 0.3, their charging and discharging power will reach 52.5 GW, equivalent to 2.3 times the total installed capacity of the Three Gorges. Therefore, in order to effectively exploit the energy storage characteristics of large-scale EVs, the coordinated planning of charging stations and distribution networks under EV mobile energy storage is of great research value. At present, much research has been done on the layout planning of EV charging stations [2][3][4]. Ref. [5] studied the coping strategies of the distribution network in consideration of EVs, and established an optimal planning model for EV battery swapping stations with economic optimality as the objective function and power balance as the constraint condition. Ref. [6] addressed the load uncertainty caused by the random charging and discharging of grid-connected EVs, with optimal economy and minimum network loss as the objective function, and established a new siting and sizing model for EV power sources. Ref. [7] built an objective function to minimize the overall commissioning cost, and used Voronoi diagrams for charging station location planning. Research on the joint planning of EVs and distribution networks, however, is still lacking. Ref. [8] used economic optimization as the objective function, and used an orderly optimization algorithm to solve the distribution network planning model. Ref. [9] built a substation location optimization model with the minimum load spacing as the goal and a grid optimization planning model with the minimum investment and operation cost as the goal. The above studies rarely consider the collaboration between EVs and distribution networks. Based on these considerations, this paper considers the collaborative planning of the EV charging station and the distribution network in the case of EV mobile energy storage, in order to shift the EV charging load from peak to valley.
Based on the power demand of each load point during the planning period and the update of the electric vehicles' mobile state, this paper establishes a charging station-distribution network collaborative planning model that takes EV mobile energy storage into consideration.

EV Mobile Energy Storage Model

EV mobile energy storage refers to the exchange of energy between the power grid and the on-board energy storage battery when an EV is connected to the power grid in a stationary state [10,11]. Due to the relatively small capacity of individual EV battery energy storage units, this paper examines the mobility characteristics of EV groups between parked and driving states from a macro perspective. A region is divided into different areas according to the nature of land use. EVs can be parked in an area or driven across areas according to the owner's needs. Exploiting the characteristics of EV mobile energy storage, on the one hand, when the SOC of an EV is high, it can release electricity at the peak of the regional power load and reduce the peak load of the grid; on the other hand, when the SOC of an EV is low and the local load is relatively high, the running state of the EV can be updated so that it transfers to another area and charges there. Regarding the mobile energy storage of EVs, this paper mainly reflects the mutual movement characteristics through the different types of building land. As shown in figure 1, in order to avoid excessive travel distances and to ensure that the next position is reachable within a unit time step, this article assumes that vehicle movement occurs only between adjacent areas. For example, when a vehicle is located in area 4 at a certain moment, if the electricity load in the area is low, the vehicle will be charged in area 4. If the vehicle needs to be charged urgently and area 4 is at the peak of its electricity load, then the vehicle will undergo a position state transition and move to area 1, area 2 or area 5 for charging. If the vehicle has a high SOC at this time and area 4 is at the peak of the power load, the owner can be encouraged, through a subsidy, to discharge to the grid. After the introduction of EV mobile energy storage, the load of the charging stations in each area can be expressed by equation (1), where Em(t) is the actual charging load of the m-th area at time t; E'm(t) is the EV charging demand of the m-th area at time t; Em^dis(t) is the EV discharge to the grid in the m-th area at time t; Em_n(t) is the amount of charging load transferred from the m-th area to its neighboring n-th area at time t; and N is the set of all areas neighboring the m-th area. The distribution network subsidizes the owners involved in the discharge of EVs to the grid and in the transfer of charging loads. The 24-hour subsidy cost can be expressed as in equation (2), where fc is the subsidy cost paid by the distribution network to the owners, and b1 and b2 are the unit subsidy prices.

Objective Function

This paper mainly analyzes the economics of the charging station-distribution network collaborative planning model that considers EV mobile energy storage. In addition, the planning process needs to consider meeting the electricity demand in the planning area.
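Since the bodies of equations (1) and (2) are not reproduced in the text above, the following sketch implements the area charging-load balance and the 24-hour subsidy cost as they can be read from the variable definitions; in particular, adding the load transferred into area m from its neighbours is an interpretation, not the printed formula, and all numerical profiles are invented.

```python
# Sketch of the area charging-load balance and daily subsidy cost described around
# equations (1) and (2). The printed formulas are not available in the text above,
# so the expressions below follow the variable definitions only; in particular,
# adding the load transferred *into* area m from its neighbours is an assumption.

def area_charging_load(E_demand, E_discharge, E_out, E_in):
    """Actual charging load of area m at time t.

    E_demand    : EV charging demand of area m at t
    E_discharge : energy discharged to the grid by EVs in area m at t
    E_out       : dict {n: energy transferred from area m to neighbouring area n}
    E_in        : dict {n: energy transferred from neighbouring area n to area m}
    """
    return E_demand - E_discharge - sum(E_out.values()) + sum(E_in.values())

def daily_subsidy_cost(discharge_per_hour, transfer_per_hour, b1, b2):
    """24-hour subsidy paid to owners for discharging (price b1) and load transfer (price b2)."""
    return sum(b1 * d + b2 * m for d, m in zip(discharge_per_hour, transfer_per_hour))

# Example with invented hourly profiles (24 values each) and unit subsidy prices
discharge = [0.0] * 18 + [40.0, 60.0, 50.0, 20.0, 0.0, 0.0]        # kWh per hour
transfer  = [0.0] * 18 + [30.0, 45.0, 35.0, 10.0, 0.0, 0.0]        # kWh per hour
print("f_c =", daily_subsidy_cost(discharge, transfer, b1=0.8, b2=0.5), "(currency units)")
print("E_m(t) =", area_charging_load(500.0, 60.0, {"n1": 45.0}, {"n2": 20.0}), "kWh")
```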
The objective function is to minimize the sum of the regional transfer subsidy cost for the EV charging load, the construction cost of the distribution network and charging stations, and the operation cost of the distribution network and charging stations, which can be expressed by equation (3), where f is the annual value of the total cost of the collaborative planning of the charging stations and the distribution network; f_js is the annual value of the construction cost; and f_op is the average annual operating cost.

Annual Value of the Construction Cost (f_js). The construction cost includes the cost of substation construction, line construction and charging station construction, specifically expressed by equation (4), where fsub is the construction cost of the substations, fline is the construction cost of the lines, and fEV is the construction cost of the charging stations; the factor converting the construction cost into an annual value can be calculated by equation (5).

(1) Construction cost of the substations (fsub). The substation construction cost is the sum of the substation construction costs at all nodes, as shown in equation (6), where I_node is the set of all nodes; sub,p is a decision variable, equal to 1 if a substation is constructed at node p and 0 otherwise; fsub,0 is the construction cost per unit capacity of a substation; and kp is the capacity of the substation constructed at the p-th node.

(2) Construction cost of the lines (fline). The construction cost of the lines is the sum of all line construction costs in the distribution network, as shown in equation (7), where line,ij is the decision variable of the line between nodes i and j, equal to 1 if a new feeder is built between the two nodes and 0 otherwise; fline,0 is the construction cost per unit length of a line; and sij is the length of the line between node i and node j.

(3) Construction cost of the charging stations (fEV). The construction cost of a charging station includes a variable cost and an inherent cost, as shown in equation (8): the former is proportional to the capacity of the charging station, while the latter represents the cost of the land occupied by the charging station. EV,q is the decision variable, equal to 1 if a charging station is constructed at node q and 0 otherwise; fEV,0 is the cost per unit capacity of the charging station; Ecap,q is the installed capacity of the charging station; and fEV,1 is the inherent cost of the charging station.

Average Annual Operating Cost (f_op). The average annual operating cost includes the annual cost of grid operation and maintenance, the annual operating cost of the charging stations, and the annual cost of the subsidies paid to EV users for mobile energy storage, expressed as the sum of these three terms, where fop_sub represents the annual cost of grid operation and maintenance, fop_EV represents the annual operating cost of the charging stations, and fop_c represents the annual cost of user subsidies.
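The annual-value factor of equation (5) is likewise not reproduced above; a common choice for converting a one-off investment into an equivalent annual cost is the standard capital recovery factor, and the sketch below assumes this form and applies it to the three construction cost terms of equation (4) with invented unit costs and sizes.

```python
# Sketch of annualising the construction costs of equation (4). The annual-value
# factor of equation (5) is not reproduced in the text, so the standard capital
# recovery factor kappa = r(1+r)^y / ((1+r)^y - 1) is assumed here; all unit costs,
# capacities and lengths are invented for illustration.

def capital_recovery_factor(r, years):
    """Convert a one-off investment into an equivalent uniform annual cost."""
    return r * (1 + r) ** years / ((1 + r) ** years - 1)

r, years = 0.08, 20
kappa = capital_recovery_factor(r, years)

f_sub  = 300.0 * 10.0          # substation: unit cost (k$/MVA) * installed capacity (MVA)
f_line = 120.0 * 8.5           # lines: unit cost (k$/km) * total new feeder length (km)
f_ev   = 80.0 * 6.0 + 200.0    # charging station: unit cost * capacity (MW) + land cost

f_js = kappa * (f_sub + f_line + f_ev)   # annual value of the total construction cost
print(f"kappa = {kappa:.4f}, annualised construction cost = {f_js:,.1f} k$/year")
```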
(1) Annual cost of grid operation and maintenance (fop_sub), where line,ij is the decision variable of the line between node i and node j, equal to 1 if an inherent feeder exists between the two nodes and 0 otherwise; line,ij is the annual operating cost of the line between node i and node j; and sub,p is the annual operating cost of the substation constructed at node p.

(2) Annual operating cost of the charging stations (fop_EV). The annual operating cost of a charging station is mainly the maintenance cost of the charging facilities, which is directly proportional to the load capacity of the charging station. Therefore, the annual operating cost of the charging station can be expressed as a function of the charging station load, where EV,q is the unit-load operating cost of the charging station and PEV,q is the magnitude of the charging station load at node q, which can be expressed by equation (12).

Restrictions

In the collaborative planning model for charging stations and distribution networks that considers EV mobile energy storage, the charging and discharging of electric vehicles and the safe operation of the power grid are subject to multiple constraints.

(1) Power balance constraints, equations (13) and (14), where Pi and Qi are the injected active power and injected reactive power at node i; Gij and Bij are the conductance and susceptance of the line between node i and node j; and θij is the phase angle difference between node i and node j.

(2) Node voltage constraints. For all nodes in the network, including load nodes and substation nodes, the voltage amplitude needs to be kept within the allowable range at each stage. In this section, the upper and lower limits of the voltage fluctuation at the load nodes are set to ±5% of the rated voltage, and the rated voltage of the substation nodes is UN.

(3) Logical constraints. Generally, a radial network is used, so the following constraints can be obtained: the generalized node connectivity; the number of distribution network lines should be equal to the number of nodes excluding the substations; and each load node is connected to the network by only one branch. The specific radiality constraints are shown in equations (16), where A is a sufficiently large real number and node,i is a 0-1 decision variable, equal to 1 if load node i is connected to the distribution network and 0 otherwise.

(4) Battery constraints. The battery constraints are mainly current constraints and capacity constraints. If the battery is charged/discharged with a large current in a short time, the losses increase and the service life is shortened [12]. At the same time, deep charging and discharging also exacerbate battery losses, so deep charge and deep discharge should be restricted during battery operation. These constraints require the charge/discharge current I(t) of the EV during period t not to exceed the maximum charge and discharge currents of the battery, Ibat,cha and Ibat,dis, and the SOC of the EV to remain between the lower and upper limits Sbat,min and Sbat,max, which are introduced to preserve battery performance; the upper limit is set to 0.95 and the lower limit to 0.2 [13].

Case Study

In this paper, an 18-node distribution network system with EV loads is used for simulation and verification. The voltage level of the distribution network is 10 kV. The initial grid structure is shown in figure 2. In the distribution network, nodes 17 and 18 are substation nodes, and the remaining nodes are load nodes.
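The power balance constraints of equations (13) and (14) are referenced above without their bodies; given the definitions of Gij, Bij and θij, they are presumably the standard AC power-flow injection equations, which the following sketch evaluates for an invented 3-node network (the admittance matrices and the operating point are illustrative only).

```python
# Sketch of the nodal power balance implied by constraints (13)-(14). The printed
# equations are not available above; from the definitions of Gij, Bij and the phase
# angle difference, the standard AC injection equations are assumed:
#   P_i = U_i * sum_j U_j (G_ij cos(theta_ij) + B_ij sin(theta_ij))
#   Q_i = U_i * sum_j U_j (G_ij sin(theta_ij) - B_ij cos(theta_ij))
# The 3-node admittance matrices and operating point below are invented.

import numpy as np

G = np.array([[ 5.0, -2.5, -2.5],
              [-2.5,  5.0, -2.5],
              [-2.5, -2.5,  5.0]])          # p.u. conductance matrix (illustrative)
B = np.array([[-15.0,  7.5,  7.5],
              [  7.5, -15.0,  7.5],
              [  7.5,  7.5, -15.0]])        # p.u. susceptance matrix (illustrative)
U = np.array([1.00, 0.98, 0.99])            # p.u. voltage magnitudes (within +/- 5%)
theta = np.array([0.0, -0.02, -0.035])      # rad, voltage angles

def injections(U, theta, G, B):
    """Active and reactive power injected at every node (p.u.)."""
    dth = theta[:, None] - theta[None, :]                     # theta_ij matrix
    P = U * np.sum(U[None, :] * (G * np.cos(dth) + B * np.sin(dth)), axis=1)
    Q = U * np.sum(U[None, :] * (G * np.sin(dth) - B * np.cos(dth)), axis=1)
    return P, Q

P, Q = injections(U, theta, G, B)
print("P =", np.round(P, 4), "Q =", np.round(Q, 4))
```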
Among the load nodes, nodes 7, 10, and 12 are candidate sites for charging stations. The solid lines indicate the inherent lines in the distribution network, and the dotted lines indicate feeders that can be added. The load demand of each node [14] is shown in table 1. The other model parameters [14,15] involved in this example are shown in table 2. The model established in this paper is a mixed-integer linear programming model and can be solved by commercial mathematical programming software, whose built-in algorithm is based on the branch-and-bound principle and can efficiently obtain the global optimal solution. This section uses the MATLAB programming environment and the YALMIP toolkit to define the decision variables and solve the model. In the resulting plans, figure 3 constructs charging stations at nodes 7 and 12, and figure 4 constructs a charging station at node 7. Table 3 shows the results of the collaborative planning of the charging station and the distribution network with and without EV mobile energy storage taken into consideration. (1) Owing to EV mobile energy storage, some EVs are subsidized and encouraged to move to other regions to charge, or to discharge to the grid, when the grid is at a load peak; the total installed capacity of charging stations decreases, and the number of charging stations is reduced from 2 to 1. The inherent costs of charging stations therefore decrease, and so do their construction costs. (2) The mobile energy storage of EVs reduces the total load of the power grid during peak periods, so fewer transformers need to be matched and the cost of substation construction is reduced. At the same time, the cost of grid operation and maintenance is also reduced, because it depends on the cost of substation and line construction. (3) Because some EVs are transferred to other areas for charging, the total charging load of the charging station decreases within one day, and the operating cost of the charging station is reduced. (4) Although EV users need to be subsidized for mobile energy storage, the added subsidies account for a small proportion of the overall cost of distribution network construction. When mobile energy storage is considered, the total cost of the collaborative planning of charging stations and the distribution grid is lower than the total cost without considering mobile energy storage. Conclusion This paper proposes a collaborative planning model for charging stations and the distribution network that considers EV mobile energy storage, and optimizes the model. Comparing the results with and without EV mobile energy storage, the conclusions of the study are as follows: through the electricity price subsidy, EV owners respond by adjusting the state of their EVs under mobile energy storage, so part of the charging load can be transferred to other areas and EVs can discharge to the grid during load peaks. The cost of mobile energy storage subsidies therefore increases, but the overall planning cost still falls.
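For reference, a toy siting problem in the same spirit as the planning model above: binary build/no-build variables, a linear cost objective and a simple coverage constraint, solved by branch and bound. PuLP is used here only as a stand-in for the MATLAB/YALMIP workflow described in the text, and every number is invented.

```python
import pulp

candidates = [7, 10, 12]                        # candidate charging-station nodes
build_cost = {7: 100.0, 10: 120.0, 12: 90.0}    # placeholder annualized costs
servable_load = {7: 60.0, 10: 45.0, 12: 50.0}   # placeholder station capacities
required_load = 100.0

prob = pulp.LpProblem("station_siting", pulp.LpMinimize)
build = {q: pulp.LpVariable(f"build_{q}", cat="Binary") for q in candidates}

prob += pulp.lpSum(build_cost[q] * build[q] for q in candidates)            # objective
prob += pulp.lpSum(servable_load[q] * build[q] for q in candidates) >= required_load

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[prob.status])
print({q: int(build[q].value()) for q in candidates})
```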
2020-09-03T09:04:44.401Z
2020-08-29T00:00:00.000
{ "year": 2020, "sha1": "256dfbdce42ffd3f3022f6d77d1d47ef1199c2d1", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/555/1/012005", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "57a58522a476a323a211e8e4a1d828e601b957f0", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
249002540
pes2o/s2orc
v3-fos-license
Characterization and impact of sunflower plastidial octanoyltransferases ( Helianthus annuus L. ) on oil composition Prosthetic lipoyl groups are essential for the metabolic activity of several multienzyme complexes in most or- ganisms. In plants, octanoyltransferase (LIP2) and lipoyl synthase (LIP1) enzymes in the mitochondria and plastids participate in the de novo synthesis of lipoic acid Introduction Plastids are the major site of fatty acid synthesis in plants (Ohlrogge et al., 1979), a process in which acetyl-CoA represents the building block of this biosynthetic process (Schwender et al., 2004;Alonso et al., 2007). Acetyl-CoA is produced through the activity of the plastidial pyruvate dehydrogenase complex (PDH), which catalyzes the oxidative decarboxylation of pyruvate to acetyl-CoA. The E2 subunit (E2-PDH) of this complex is a dihydrolipoamide dehydrogenase that must be lipoylated to function correctly. The lipoic acid (LA: thioctic acid or 6,8-dithiooctanoic acid) required for this activation is a modified form of octanoic acid that contains two thiol substituents at the Δ6 and Δ8 positions. Most studies on LA biosynthesis have focused on bacterial metabolism, where there is evidence of a two-step lipoylation pathway involving the activity of the octanoyltransferase (LIP2) and lipoyl synthase (LIP1) enzymes (Cronan, 2016). In the first step, LIP2 transfers the octanoyl moiety from the octanoyl-acyl carrier protein (octanoyl-ACP) to the ε-amino group of highly conserved Lys residues close to the N-terminus of E2-PDH, establishing an amide link (Reed and Hackert, 1966). Subsequently, LIP1 generates the lipoyl cofactor by inserting sulfur atoms at C6 and C8 of the octanoyl chain in a reaction dependent on S-adenosyl methionine (SAM: Zhao et al., 2003). Unlike most cofactors, LA must be covalently bound to its cognate enzymes. Octanoyl-ACP is an intermediate in the fatty acid synthesis catalyzed by the type II plastidial fatty acid synthase (FAS), which depends on the acetyl-CoA synthesized by the lipoylated PDH. Therefore, both pathways are interdependent since E2-PDH lipoylation is essential for de novo fatty acid biosynthesis and vice versa. Moreover, plastidial E2-PDH lipoylation is an essential process in plants, which are inviable without a functional form of that complex (Ewald et al., 2014b). In addition to PDH, LA is an essential cofactor for the correct activity of other enzyme complexes that are involved in carbon metabolism in most eukaryotic and prokaryotic organisms. These complexes include 2oxoglutarate dehydrogenase (OGDH), branched-chain 2-oxoacid dehydrogenase, acetoin dehydrogenase and the glycine cleavage system (known as glycine decarboxylase -GDC-in plants: Cronan, 2016). In Escherichia coli two pathways have been described that supply LA to these complexes: de novo LA biosynthesis achieved through lipoyl synthase and octanoyltransferase (called LIPA and LIPB, respectively: Cronan et al., 2016); and through the scavenging of free lipoate through lipoate protein ligase activity (LPLA: Zhao et al., 2003). Therefore, LIPB can be replaced by LPLA, whereas LIPA is essential for E. coli lipoylation (Zhao et al., 2003). In higher eukaryotes, the LA biosynthetic pathways are not yet well defined. In plants, OGDH and GDC are located in the mitochondria, whereas PDH can be found within both mitochondria and plastids. 
These two organelles possess specific machinery for LA biosynthesis and protein lipoylation (Wada et al., 1997), and therefore, plants have both plastidial and mitochondrial forms of LIP1 and LIP2. In plant mitochondria, a LIP1-LIP2 de novo lipoylation pathway has been described (Wada et al., 2001;Yasuno and Wada, 1998), and while LPLA activity may also be involved in this process, its physiological role is not fully understood (Kang et al., 2007;Ewald et al., 2014a). By contrast, there is no evidence LPLA activity exists in plastids, and E2-PDH is thought to be lipoylated through specific LIP1 and LIP2 isoforms. LIP1 was reported in plastids from Arabidopsis thaliana (Yasuno and Wada, 2002), tomato (Solanum lycopersicum: Araya-Flores et al., 2020) and sunflower (Helianthus annuus: Martins-Noguerol et al., 2020). In Arabidopsis, both plastidial LIP1p and LIP2p are essential genes and therefore the double mutant is embryo lethal (Ewald et al., 2014b). The overexpression of sunflower plastidial LIP1 in Arabidopsis seeds altered the content of certain glycerolipid species (Martins-Noguerol et al., 2020). However, the involvement of plastidial LIP2 activity in plant fatty acid and lipid synthesis has not yet been characterized. The mitochondrial system of LA synthesis was recently studied in Arabidopsis plants, and the overexpression of sunflower mitochondrial LIP1 and LIP2 in Arabidopsis altered the host plant lipid composition, pointing to an involvement of sulfur metabolism (Martins-Noguerol et al., 2021). Furthermore, it was previously shown that suppressing LIP2 activity alters certain fatty acid species and increases the total fatty acids in seeds, suggesting the participation of LIP2 in the de novo fatty acid synthesis of Arabidopsis seeds (Martins-Noguerol et al., 2019). This study is the last of a series studying the system of LA synthesis in sunflower. Here, two sunflower plastidial LIP2 genes were identified, cloned and sequenced (HaLIP2p1 and HaLIP2p2), and their expression in different plant tissues was studied. The phylogeny, structure, and catalytic mechanisms of both proteins was also analyzed in silico. Finally, after overexpression of HaLIP2p1 and HaLIP2p2 in E. coli and in Arabidopsis, the fatty acid and lipid content of both heterologous systems were analyzed, and the role of sunflower LIP2p in the metabolism of these molecules is discussed. Cloning of cDNAs encoding octanoyltransferases from H. annuus The Arabidopsis plastidial octanoyltransferase (AtLIP2p) protein sequence, encoded by the AT4G31050.1 gene and retrieved from public databases, was used to search for sunflower homologues in the sunflower genome database (Sunflower genome portal Heliagene -https:// www.heliagene.org/: Badouin et al., 2017). Two cDNA molecules were selected, HaLIP2p1 and HaLIP2p2, and specific PCR primers were designed to amplify both these genes, including their ATG and STOP codons (Table S1: primers were synthesized by Eurofins MWG Operon, Germany). The PCR fragments were purified and cloned into the pMBL-T Easy vector (Canvax, Spain), and the nucleotide sequences were confirmed by sequencing (Secugen, Spain). The confirmed sequences were deposited in GenBank under the accession numbers MT610110 (HaLIP2p1) and MT610111 (HaLIP2p2). Protein sequence analysis The protein sequences derived from the HaLIP2p1 and HaLIP2p2 genes were obtained using the BLASTp program (Camacho et al., 2009). 
These were aligned with homologous sequences and a phylogenetic analysis was performed using ClustalX v.2.0.10 software (Larkin et al., 2007) and MEGA6 software (Tamura et al., 2013). An in silico analysis of protein localization was carried out using the DeePLoc (Almagro Almagro Armenteros et al., 2017), TargetP V1.1 (Emanuelsson et al., 2007), iPSORT (Bannai et al., 2002) and Predotar (Small et al., 2004) applications. ClustalX v.2.0.10 and BioEdit (Hall, 1999) were used to study evolutionarily conserved residues through their alignment with homologous proteins from different phylogenetic groups: Arabidopsis thaliana, Ricinus communis, Oryza sativa and Amborella trichopoda. The location of theoretically critical residues and motifs involved in the catalytic activity of these novel proteins was deduced through their alignment with the crystal structure of octanoyltransferase from Mycobacterium tuberculosis (Protein Data Bank accession 1w66: Ma et al., 2006). Modeling the three-dimensional structure of HaLIP2p and molecular docking Homology modeling was performed using the Swiss Model server (Schwede et al., 2003; http://swissmodel.expasy.org/) and the M. tuberculosis LIPB X-ray structure as a template (MtLIPB: Ma et al., 2006). The UCSF Chimera program (Pettersen et al., 2004) was used to visualize the model. Molecular docking was performed using the SwissDock server (Grosdidier et al., 2011a and2011b) with octanoic acid as a substrate (ZINC01530416). The docking model was then visualized and analyzed with the UCSF Chimera program. Expression and purification of recombinant proteins in Escherichia coli The sequences corresponding to the mature HaLIP2p1 and HaLIP2p2 were cloned into the pQE-80L expression vector (Qiagen, Germany) using the BamHI/XbaI and BamHI/PstI, restriction sites, respectively, allowing (His) 6 -fusion proteins to be produced for purification. The specific primer pairs designed for this cloning were: HaLIP2p1-B-BamHI-F/HaLIP2p1-PstI-R for HaLIP2p1; and HaLIP2p2-B-BamHI-F/ HaLIP2p2-PstI-R for HaLIP2p2 (Table S1). The fidelity of the pQE-80L::HaLIP2p1 and pQE-80L::HaLIP2p2 constructs was confirmed by sequencing, and they were then expressed in E. coli XL1-Blue cells grown at 37 • C with shaking in LB media (1% Bacto Tryptone, 0.5% yeast extract, 1% NaCl [pH 7]), and containing 50 μg/mL ampicillin. Isopropyl b-D-1-thiogalactopyranoside (IPTG) was added to a final concentration of 0.5 mM to induce protein production when the cultures reached an OD 600nm value of 0.4. After 4 h growth, the cells were harvested by centrifugation (3000×g for 20 min), resuspended in Binding Buffer (20 mM sodium phosphate [pH 7.4], 500 mM NaCl, 20 mM imidazole) and disrupted with 15 cycles of sonication (70 • amplitude during 10 s pulses with 10 s intervals for cooling on ice). Soluble fractions for protein purification were obtained by centrifugation at 2000×g at 4 • C for 30 min and the recombinant proteins were purified using the HisSpinTrap Kit (GE Healthcare, UK), following the manufacturer's instructions. Protein-enriched eluates were monitored by SDS-PAGE and the (His) 6 -tagged recombinant proteins were then visualized in Western Blots (see Martins-Noguerol et al., 2020). Fatty acid analysis of transgenic Escherichia coli Cultures of the Escherichia coli XL1-Blue harboring pQE-80L::HaL-IP2p1, pQE-80L::HaLIP2p2 or an empty pQE-80L vector were grown and induced as described above. The cells were harvested by centrifugation (3000×g for 20 min) and washed twice with distilled water. 
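A minimal sketch of a pairwise identity calculation of the sort underlying the alignment work described above (the Results later quote roughly 68% identity between the two HaLIP2p proteins). The sequences here are short invented fragments, not the real proteins; replacing them with the translated GenBank entries would give a meaningful figure.

```python
from Bio.Align import PairwiseAligner, substitution_matrices

aligner = PairwiseAligner()
aligner.mode = "global"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score, aligner.extend_gap_score = -10.0, -0.5

seq1 = "MWTRALSPCGIEVPKKLVHHGY"    # placeholder standing in for HaLIP2p1
seq2 = "MWSRALTPCGIDVPKRLVHHGF"    # placeholder standing in for HaLIP2p2
aln = aligner.align(seq1, seq2)[0]

# Count identical positions over the aligned blocks.
identical = sum(
    a == b
    for (t0, t1), (q0, q1) in zip(*aln.aligned)
    for a, b in zip(seq1[t0:t1], seq2[q0:q1])
)
print(f"identity: {identical}/{max(len(seq1), len(seq2))} = "
      f"{identical / max(len(seq1), len(seq2)):.1%}")
```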
Total lipids from he cultures were methylated using a methylation mixture containing methanol/toluene/sulfuric acid (88:10:2, v/v/v) and heated for 1h at 80 • C (Garcés and Mancha, 1993). Heptadecanoic acid (17:0) was added as an internal standard. The total fatty acid methyl esters (FAMES) were extracted by adding 1 mL of heptane and the upper phase was then washed with 2 mL Na 2 SO 4 (6.7%) in a new tube. The resulting fraction was evaporated under nitrogen gas and the methyl esters obtained were resuspended in 200 μL heptane. Gas chromatography (GC) analysis were performed as described in Martins-Noguerol et al. (2021). Generation of transgenic Arabidopsis thaliana plants The mature sequence of the HaLIP2p1 gene was cloned into a pBIN19-35S binary vector that contains the 35S promoter from cauliflower mosaic virus (CaMV). The sequence flanked by the BamHI and PstI restriction sites was obtained in PCR reactions with specific primers (HaLIP2p1-BamHI-F/HaLIP2p1-PstI-R: Table S1) and then cloned into pBIN19-35S using the respective restriction sites. The CAMV-35S-F and pBIN19-R primers were used for PCR screening of the construct (Table S1), which was subsequently confirmed by sequencing. The construct was used to transform competent Agrobacterium tumefaciens GV3101 cells, which were used to generated transgenic Arabidopsis lines were by floral dipping (Sayanova et al., 2006). A. thaliana Columbia (Col-0) ecotype plants were grown in a growth chamber under a controlled environment (22 • C day/20 • C night, 60% humidity, 16 h photoperiod at 250 μmol m − 2 s − 1 ) and first generations seeds from the transformed plants were selected by germination in MS medium supplemented with 50 μg/mL kanamycin (Harrison et al., 2006). Gene insertion was confirmed by PCR using Arabidopsis gDNA, and the expression of HaLIP2p1 in transgenic plants was confirmed by RNA extraction, cDNA synthesis and PCR. Third generation seeds from confirmed transgenic A. thaliana plants were used for the lipid analysis. Fatty acid analysis of transgenic arabidopsis seeds Total lipids were extracted from three replicates of mature seeds (10 mg) from wild-type (WT) Col-0 and third-generation transgenic plants overexpressing HaLIP2p1. The fatty acid composition of the total lipid seed extract was determined as described previously (Martins-Noguerol et al., 2020). The total lipids were extracted using glass beads in a Precellys homogenizer (6000 rpm for 30 s, 3 cycles: Precellys 24, Ozyme), and then 1 mL hexane:isopropanol (2:1) was added. After solvent evaporation, the lipids were resuspended in chloroform:methanol (1:1) and methylated with 5 μL of tetramethylammonium hydroxide (TMAH) solution (Sigma-Aldrich). Finally, 50 μL of decane was used to stop the reaction and 20 μL of the upper phase was used to analyze the FAMEs by GC. Lipidomic analysis For lipidomic studies, lipids were extracted from ice-dried Arabidopsis seeds (20 mg) as described previously (Martins-Noguerol et al., 2019). Once the lipids were extracted, the solvent was evaporated in an atmosphere of nitrogen and the lipids were solubilized in 200 μL isopropanol. The samples were diluted four times and analyzed by ultra-high performance liquid chromatography coupled to quadrupole-time of flight (QToF) mass spectrometry (UHPLC-HRMS2), performed following the protocol described by Ulmer et al. (2017) with some modifications. 
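A short sketch of the internal-standard quantification implied by the GC analysis above: converting FAME peak areas to amounts via the 17:0 (heptadecanoic acid) spike. The peak areas, the spike amount and the equal-response-factor simplification are all illustrative, not the paper's data.

```python
peak_areas = {"16:0": 1.84e6, "18:1": 3.10e6, "18:2": 4.20e6,
              "18:3": 1.20e6, "17:0": 0.95e6}   # invented GC peak areas
internal_standard_ug = 20.0                     # micrograms of 17:0 added (placeholder)

ug_per_area = internal_standard_ug / peak_areas["17:0"]    # assumes equal response factors
amounts = {fa: area * ug_per_area for fa, area in peak_areas.items() if fa != "17:0"}
total = sum(amounts.values())

for fa, ug in sorted(amounts.items()):
    print(f"{fa}: {ug:6.1f} ug  ({100.0 * ug / total:4.1f} % of total)")
```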
LC was performed on a HPLC 1290 (Agilent Technologies) and the lipid species were separated on a C18 Hypersil Gold column (100 × 2.1 mm, 1.9 μm: Thermofisher), following the temperature and gradient solvent conditions described in Martins-Noguerol et al. (2019). LC-electrospray ionization (ESI)-HRMS2 analysis was performed by coupling the LC system to a hybrid QToF high definition mass spectrometer (Agilent 6538: Agilent Technologies) equipped with a dual ESI source. The parameters were controlled using MassHunter B.07 software and the chromatogram was built as described in Martins-Noguerol et al. (2020). The peaks were annotated using two different databases: lipid Match and lipid Blast (Kind et al., 2013). Statistical analysis Statistical analysis was carried out using IBM SPSS v. 24.0 software (IBM Corp., Armonk, N.Y., USA). The data were tested for normality (Kolmogorov-Smirnov test) and homogeneity of variance (Levene test), and a one-way analysis of variance (ANOVA) was performed determining the significant differences with a Tukey test. The data from the lipidomic analysis were analyzed using Metaboanalyst v4.0 (Chong et al., 2019) and performing a multivariate analysis. A principal component analysis (PCA) was performed to study the differences in the lipid profiles among the different genotypes. Finally, an agglomerative analysis was carried out to obtain hierarchical clusters, which were represented together with heatmaps in which the cells represent the concentration of each lipid species. Cloning and sequence analysis of two lipoyl octanoyltransferases from sunflower Two coding sequences were identified within the sunflower genome database (Heliagene, Badouin et al., 2017) based on their homology to the Arabidopsis LIP2p gene (At4g31050.1), located on chromosome 9 (HaLIP2p1; HanXRQChr09g0268371) and chromosome 5 (HaLIP2p2; HanXRQChr05g0141551). Using these sequences, PCR products of 874 bp (HaLIP2p1) and 873 bp (HaLIP2p2) were amplified from 25 DAA sunflower seed cDNA, and the protein sequences encoded by these cDNAs contained 283 (HaLIP2p1) and 290 (HaLIP2p2) residues. When these genes were launched into location predictors (DeePLoc: Almagro Almagro Armenteros et al., 2017) they were both clearly classified as plastidial proteins (Figs. S1 and S2). The plastid transit peptide was located in the N-terminus of the proteins, Arg52 and Arg46, representing the first residues in the mature HaLIP2p1 and HaLIP2p2 proteins, respectively (Fig. S4). The mature HaLIP2p1 had 231 aa, with a theoretical molecular weight of 25.99 kDa and a pI of 5.45. The mature HaLIP2p2 contained 247 aa, with a theoretical molecular weight of 27.84 and a pI of 5.82. In both the chloroplast signal peptides there is a predominance of Ser, Pro and Thr residues. A phylogenetic analysis was performed with the predicted amino acid sequences of both the novel sunflower proteins, HaLIP2p1 and HaLIP2p2, along with other known homologous plant proteins (Fig. 1). The dendrogram showed that both these proteins are close to those from Cynara cardunculus and Lactuca sativa, forming a group corresponding to Asteraceae family within the dicot subtree. In order to study the conserved and catalytic residues in the sequence of sunflower proteins, their deduced amino acid sequences were aligned with their homologues from different phylogenetic groups (Arabidopsis thaliana, Ricinus communis, Oryza sativa and Amborella trichopoda), highlighting the conserved residues and domains in the octanoyltransferases enzymes (Fig. S4). 
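The theoretical parameters quoted above for the mature proteins (molecular weight and pI) are the kind of values Biopython's ProtParam module computes. The sequence below is a short placeholder peptide, not the real mature HaLIP2p1; substituting the translated GenBank entry (MT610110) should give roughly 26 kDa and pI 5.45.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

mature_fragment = "RASSSPTTLKDAGVRCAYWHPHGTYEKLD"   # placeholder fragment only
analysis = ProteinAnalysis(mature_fragment)
print(f"MW = {analysis.molecular_weight() / 1000.0:.2f} kDa")
print(f"pI = {analysis.isoelectric_point():.2f}")
```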
Catalytic residues (indicated in the alignment: Fig. S4) were identified in accordance with the alignment of both sunflower protein sequences together with the crystalized Mycobacterium tuberculosis octanoyltransferase sequence (MtLIPB: Ma et al., 2006). Two invariant residues (Cys176 and Lys142 in M. tuberculosis) have been postulated to act as acid-base catalysts, with the Cys residue binding covalently to the substrate through a thioesther bond. These residues correspond to Cys229 and Lys195 in HaLIP2p1, and Cys223 and Lys189 in HaLIP2p2, and these Cys residues constitute part of a highly conserved PCG motif (Pro228-Cys-Gly230 in HaLIP2p1, and Pro222-Cys-Gly224 in HaL-IP2p2). Homologous conserved aromatic residues that interact with the substrate in MtLIPB (Tyr22, His49, His83 and Tyr91) were found in the sunflower proteins: Trp72, His99, His137 and Tyr145 in HaLIP2p1; Trp66 His93, His131 and Tyr139 in HaLIP2p2. Several conserved MtLIPB Gly residues (Gly77, Gly78, Gly147 and Gly158) that are involved in cavity formation for the substrate interaction were also detected in the HaLIP2p1 (Gly131, Gly132, Gly200 and Gly211) and HaLIP2p2 sequences (Gly125, Gly126, Gly194 and Gly205). Finally, MtLIPB has been said to possess a predominance of positively charged residues (Arg58, Arg79, Arg130 and Arg149) in the substrate access site, residues that are less well conserved in the sunflower given that several changes were found, although some of the positively charged residues were preserved: Ala108, Leu186 and Arg202 in HaLIP2p1; and Ala102, Leu180 and Arg196 in HaLIP2p2. Prediction and docking of the tertiary structure model In the absence of a crystal structures for plant octanoyltransferases, the MtLIPB X-ray structure was used as template to model the tertiary structures of HaLIP2p1 and HaLIP2p2. MtLIPB shared 38.16% sequence similarity with HaLIP2p1, and 35.75% with HaLIP2p2. Both sunflower proteins shared 67.70% of identity and as such, the model generated was quite similar for both sequences (Fig. S5). The overall predicted structure for both proteins consisted of a monomer whose secondary structure was formed by several β strands forming a core, and several α helices surrounding this core. In HaLIP2p1, 8 α helix and 9 β strands were found, as opposed to 9 α helix and 8 β strands in HaLIP2p2. The distribution of the secondary elements in the tertiary structure was very similar to that in the template, and a gap in the core was formed by two β-sheets where it is presumed that the substrate interacts. Several α-helices protrude around the β-strands. In HaLIP2p1, the large β sheet is forming by six β strands (β1-β2-β6-β7-β8-β9) and the small sheet is made up of three (β3-β4-β5). In HaLIP2p2, the large β-sheet also consisted of six β strands (β1-β2-β5-β6-β7-β8), although only two β strands were considered to form the minor β sheet (β3-β4). Despite these differences, the 3D view of molecular surface was essentially the same for both these proteins (Fig. S5). The HaLIP2p docking model was generated using octanoic acid as a substrate (Fig. S6). Octanoic acid was positioned in the gap formed by the two β sheets, which is visible in the molecular surface model (Fig. S6A). The carboxyl group of octanoic acid was close to the Cys residue (Figs. S6B-C) with which it presumably interacts. In our docking model the conserved Lys was also located close to the Cys in the structure core. 
Moreover, all the aforementioned residues that participated in the substrate interaction or gap formation were found to be close to the active site of the enzyme. HaLIP2p1 and HaLIP2p2 tissue distribution The expression of HaLIP2p1 and HaLIP2p2 was studied by RT-qPCR in developing seeds and vegetative tissues. Both genes were expressed in all the tissues analyzed, although less HaLIP2p2 accumulated than HaLIP2p1, the latter representing the predominant isoform in sunflower (Fig. 2). These transcripts were temporally regulated during embryo development, with the strongest expression of the HaLIP2p1 gene in developing seeds evident at 18 DAA. Moreover, the main expression of this gene in the vegetative tissues was detected in leaves. Fatty acid analysis of E. coli expressing the HaLIP1p1 and HaLIP1p2 enzymes Both sunflower octanoyltransferases were expressed in E. coli using the pQE-80L vector (pQE-80L::HaLIP2p1 and pQE-80L::HaLIP2p2) and both soluble proteins were purified from cell extracts (Fig. S3). The involvement of HaLIP1p1 and HaLIP1p2 in fatty acid biosynthesis was assessed by analyzing the fatty acids in the transgenic bacteria (Table 1). The overexpression of both sunflower octanoyltransferases altered the fatty acid profile and when HaLIP2p1 was overexpressed, there was a significant decrease of 16:1 Δ9 and 16:0, and an increase of 14:0 and 18:1 Δ11 fatty acid species relative to the control. The overexpression of HaLIP2p2 produced a significant decrease in the unsaturated 16:1 Δ9 and 18:1 Δ11 fatty acids. In terms of the unsaturated/saturated ratio, there were differences in the cultures overexpressing HaLIP2p1 in which that ratio decreased. Finally, a decrease in the total fatty acid content was observed in the cultures when HaLIP2p1 and HaLIP2p2 were overexpressed (Table 1). Fatty acid profile of transgenic Arabidopsis thaliana seeds The third generation of mature transgenic seeds from confirmed transgenic A. thaliana plants overexpressing HaLIP2p1 were harvested. The overexpression of HaLIP2p1 did not modify plant growth, displaying a similar phenotype to WT plants (Fig. S7). No significant changes were observed in the fatty acid profile of transgenic seeds relative to the control lines (Table 2). Lipidomics in transgenic Arabidopsis thaliana seeds A comparative lipidomics analysis between WT and transgenic HaLIP2p1 Arabidopsis seeds identified significant differences in the content of 46 of the 70 annotated lipids studied (p < 0.05). Subsequently, a PCA was carried out to determine the experimental variation, clustering different genotypes in score plots that provide an overview of the differences in seed oil composition among the WT and transgenic lines. The accumulated variance explained by principal component 1 (PC1) was 69.2%, while that explained by PC2 reached 26.2% (Fig. S8). Discussion Plastidial octanoyltransferase and lipoyl synthase are key enzymes involved in LA biosynthesis in this organelle. LA is necessary for the activity of the plastidial PDH complex, which catalyzes the oxidative decarboxylation of pyruvate to acetyl-CoA, the source of carbon for de novo fatty acid biosynthesis. Studies of metabolic flux have shown that carbon imported into plastids and used for fatty acid synthesis comes from carbohydrates degraded through the cytosolic glycolytic pathway. 
The glycolytic metabolites imported by plastids are typically glycose-6phosphate, phosphoenolpyruvate and pyruvate, in proportions that depend on the specific plant species (Schwender et al., 2003;Alonso et al., 2007). The degradation of these metabolites continues within the plastid, which has endogenous glycolytic capacity, and they are all transformed into pyruvate. Thus, the production of the acetyl-CoA required for fatty acid synthesis relies on the plastidial PDH complex, which should contain its LA co-factor, with both these events related and essential for the correct development of plant cells, and for the production of reserve TAGs. Therefore, better understanding the plastidial lipoylation pathway is critical to design approaches that modify the lipid composition of sunflower plants. In the present study, two sunflower seed octanoyltransferase genes, HaLIP2p1 and HaLIP2p2, were cloned and studied. These LIP2 encoded proteins shared 67.70% identity, with the main differences in the Nterminal transit signal peptide. Indeed, the presence of these N-terminal signal peptides suggests a high likelihood of a plastidial location. In both cases the N-terminal sequences were enriched in Pro, Ser and Thr amino acids, and these residues were described as typical amino acids in plastid signal peptides (Zhang and Glaser, 2002;von Heijne et al., 1989). Previous studies showed that sunflower plastidial enzymes also accumulate these phosphorylation motifs (e.g., HaFatA, HaFAD7, HaKASIII, HaHAD1-2 and HaLIP1s: Serrano-Vega et al., 2005;Venegas-Calerón et al., 2006;González-Mellado et al., 2010;González-Thuillier et al., 2016;Martins-Noguerol et al., 2020), pointing to similar processes of protein maturation for the products of all these genes. Phylogenetic analysis points to a duplication as the potential origin of these two sequences, a common origin that is presumable related to the hybrid origin of the Helianthus genus (Giordani et al., 2014). The localization of the two LIP2p sunflower genes on different chromosomes supports this hypothesis, as does the close proximity of the HaLIP2p1 and HaLIP2p2 proteins to other members of the Asteraceaea family, such as Cynara cardunculus and Lactuca sativa. The alignment of both LIP2p proteins with homologous sequences from other plant species identified highly conserved catalytic residues (Ma et al., 2006) and motifs, indicating the little variation among octanoyltransferases during evolution. In the novel sunflower HaLIP1p1 and HaLIP2p2, these two catalytic residues correspond to Cys229/Lys195 and Cys223/Lys189, respectively, the Cys residue forming part of the conserved PCG motif in both these sunflower proteins (Pro228-Cys229-Gly230 in HaLIP2p1; Pro222-Cys223-Gly224 in HaLIP2p2). Besides these previously described residues, other conserved amino acids involved in substrate interaction and accommodation were identified in the crystalized MtLIPB protein (Ma et al., 2006). Homologous amino acids to these key residues were found in both sunflower sequences, confirming the conservation of these critical residues during octanoyltransferase evolution. Octanoyltransferases act as Cys/Lys acyltransferases, whereby the Cys residue interacts with the carboxyl group of the octanoyl group from octanoyl-ACP and an acyl-thioester intermediate is formed (Ma et al., 2006). Thus, the octanoyl group is transferred to a lipoyl domain of apo-proteins (Zhao et al., 2005). 
Moreover, the involvement of the conserved Lys has been demonstrated in the catalytic activity of MtLIPB (Ma et al., 2006) and when no substrate is available, this Lys is close to the sulfhydryl group of the Cys residue, suggesting a hydrogen bond may form between this Lys-Cys. This could generate a nucleophile group for acyltransferase activity. Furthermore, this Lys has been suggested to participate in the activation of the octanoyl chain for thioester octanoyl-Cys formation (Ma et al., 2006). The docking model obtaining for HaLIP2p suggests it possesses a mechanism analogous to that described previously, with octanoic acid located in the gap, with the aliphatic chains inbound and the carboxyl group very close to the Cys with which it would interact through a thioester bond. Moreover, the Cys and Lys residues stay close together. This mimics the proposed model for MtLIPB, where Lys forms part of a β-strand (as also occurs in HaLIP2p1 and HaLIP2p2) close to the Cys. Furthermore, these observations were also described in E. coli LIPB, where the octanoyltranferase reaction was proposed to take place through the formation of an acyl-thioester intermediate between the octanoyl chain and the Cys from LIPB (Zhao et al., 2005). Thus, the structure and docking data from both sunflower plastidial LIP2 suggest an identical activity. Gene expression suggests that none of these enzymes are tissue specific. Two octanoyltransferases were identified in A. thaliana (AtLIP2p and AtLIP2p2: Ewald et al., 2014b), and although AtLIP2p1 is the most strongly expressed in leaves and roots, AtLIP2p2 dominates in siliques and flowers. Nevertheless, both were seen to be redundant and the expression of at least one of them was essential for plant development (Ewald et al., 2014b). In sunflower, the prevalence of HaLIP2p1 expression, in contrast with the relatively low levels of HaLIP2p2 (Fig. 2), suggests that HaLIP2p1 is the main octanoyltransferase involved in the attachment of octanoyl chain to plastidial E2-PDH. Moreover, when both enzymes were overexpressed in E. coli, different changes in fatty acid composition were detected. In the biosynthesis of Fig. 4. Representative differences in the TAG and glycerolipid species found in mature wild-type Arabidopsis seeds (WT, white columns) and mature transgenic Arabidopsis seeds that overexpress HaLIP2p1 (grey). The data are the averages ± SD of three different transgenic Arabidopsis lines in different experiments, where △▾ reflect significant differences at the 0.05 level. bacterial and plant plastid/chloroplast fatty acids, PDH catalyzes the production of acetyl-CoA, the source of carbon necessary for this anabolic process. In this pathway, β-ketoacyl-ACP synthase (FABB) uses octanoic acid to provide decanoic acid for further fatty acid synthesis. In this sense, competition for the octanoyl-ACP substrate could exist between LIPB and FABB, and in this context, an excess of octanoyltransferase activity (due to HaLIP2p1 and HaLIP2p2 overexpression) could drive the retention of more octanoate by this enzyme. This would lead to a depletion in octanoic acid for acyl elongation and therefore, a decrease in fatty acid biosynthesis. This would be coherent with the decrease in the absolute amounts of fatty acids that HaLIP2p genes produce in bacteria when overexpressed (Table 1). Moreover, it is important to consider that E. 
coli has other lipoylated proteins, such as 2-oxoglutarate dehydrogenase (2OGDH) and protein H of the glycine cleavage system (Vanden Vanden Boom et al., 1991;Wilson et al., 1993). Hence, the distinct impact on fatty acid profiles could be related to the putative specificity of sunflower LIP2 in bacteria. The HaLIP2p1 gene was selected instead of HaLIP2p2 to obtain overexpressing Arabidopsis plants because it is the most strongly expressed octanoyltransferase gene in vegetative tissues and developing sunflower seeds. Seed germination rates and plant development were not affected in Arabidopsis transgenic plants (Fig. S7), and nor was the seed fatty acid composition of the mutant line (Table 2). In Arabidopsis, plastidial PDH complex supplies acetyl-CoA for de novo fatty acids biosynthesis and β-ketoacyl-ACP synthase I (KAS I) is the responsible for fatty acid elongation from octanoic acid (C8:0) up to palmitic acid (C16:0: Wu and Xue, 2010). As we hypothesized for bacteria, a potential competition between KAS I and LIP2p proteins is present within plastids. HaLIP2p1 overexpression could produce an excess of octanoyltransferase activity, leading to a depletion of octanoyl-ACP in Arabidopsis plastids. Accordingly, a reduced availability of octanoyl-ACP for KAS I activity will lead to a decrease in fatty acid synthesis. Previously, Arabidopsis mutants with low KAS I and KAS III activities were seen to have reduced fatty acids synthesis (Wu and Xue, 2010;Takami et al., 2010). In order to clarify the role of LIP2p during fatty acid synthesis, a relatively complete lipidomics analysis in transgenic seeds was carried out. This study revealed different lipid profiles in both transgenic and WT seeds. The PCA analysis separated the data from both genotypes indicating that the overexpression of HaLIP2p1 had consequences on lipid biosynthesis. Furthermore, the heatmap identified glycerolipid species, mainly TAGs, with significant alterations in their content due to HaL-IP2p1 overexpression. Some of the TAG whose content changed coincided with the most abundant TAG species in Arabidopsis seeds (e.g. TAG 56:6 or TAG 56:5), yet as indicated previously, no differences in the total fatty acid composition was found in the seeds. Although, differences in the content of some lipid species were detected, these were not based on fatty acids changes but rather on their distribution into glycerolipid species. These findings are similar to those described previously (Martins-Noguerol et al., 2020), where the overexpression of plastidial lipoyl synthase forms from sunflower in Arabidopsis seeds produced changes in lipid species without altering the total fatty acid content. Thus, the rearrangement of fatty acids into lipid species in transgenic seeds here could be due to modifications in the acyl-CoA pool during seed development. These data suggest that HaLIP2p1 is able to alter lipid metabolism in Arabidopsis seeds, affecting the oil composition. However, these changes are unpredictable and this offers further evidence of the complexity underlying the regulation of lipid biosynthesis in plants. Conclusion We discovered two plastidial octanoyltransferases in H. annus, HaLIP2p1 and HaLIP2p2, being HaLIP2p1 the predominant form in the species. In silico models suggest their activity proceed through the formation of an acyl-thioester intermediate between the octanoyl chain and a conserved Cys residue from the enzyme, likewise in other species. 
HaLIP2p is able to alter lipid metabolism when it is heterologously expressed in bacteria and in Arabidopsis seeds. In the latter, overexpression of HaLIP2p produced changes in the content of glycerolipids, including several of the most abundant TAG species in Arabidopsis seeds, likely due to an excess of LIP2 activity and the resulting depletion of plastidial octanoyl-ACP. These findings further our understanding of LIP2 activity in the context of lipid biosynthesis.
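For reference, a minimal sketch of the PCA step used in the lipidomics comparison discussed above (variance explained by PC1 and PC2 for wild-type versus HaLIP2p1 seeds). The matrix below is random placeholder data standing in for the intensities of the 70 annotated lipid species; with the real table this mirrors the MetaboAnalyst workflow.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 70))     # 6 samples (3 WT, 3 transgenic) x 70 lipid species
X[3:, :10] += 2.0                # crude offset so the two groups separate

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("variance explained (%):", np.round(pca.explained_variance_ratio_ * 100, 1))
print("PC1 scores:", np.round(scores[:, 0], 2))
```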
2022-05-24T15:06:09.960Z
2022-05-22T00:00:00.000
{ "year": 2022, "sha1": "e95b334727c8a99d90099c3b6a8937d42504f36c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jplph.2022.153730", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "c55e34cb7c3db1eb72ea680ccbe025242f5e80a8", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
118880018
pes2o/s2orc
v3-fos-license
STM Spectroscopy of ultra-flat graphene on hexagonal boron nitride Graphene has demonstrated great promise for future electronics technology as well as fundamental physics applications because of its linear energy-momentum dispersion relations which cross at the Dirac point. However, accessing the physics of the low density region at the Dirac point has been difficult because of the presence of disorder which leaves the graphene with local microscopic electron and hole puddles, resulting in a finite density of carriers even at the charge neutrality point. Efforts have been made to reduce the disorder by suspending graphene, leading to fabrication challenges and delicate devices which make local spectroscopic measurements difficult. Recently, it has been shown that placing graphene on hexagonal boron nitride (hBN) yields improved device performance. In this letter, we use scanning tunneling microscopy to show that graphene conforms to hBN, as evidenced by the presence of Moire patterns in the topographic images. However, contrary to recent predictions, this conformation does not lead to a sizable band gap due to the misalignment of the lattices. Moreover, local spectroscopy measurements demonstrate that the electron-hole charge fluctuations are reduced by two orders of magnitude as compared to those on silicon oxide. This leads to charge fluctuations which are as small as in suspended graphene, opening up Dirac point physics to more diverse experiments than are possible on freestanding devices. Graphene was first isolated on silicon dioxide because of the ability to image monolayer regions using an optical microscope [11]. However, the electronic properties of SiO 2 are not ideal for graphene because of its high roughness and trapped charges in the oxide. These impurity-induced charge traps tend to cause the graphene to electronically break up into electron and hole doped regions at low charge density which both limit device performance and make the Dirac point physics inaccessible [3-5, 12, 13]. In order to create devices with less puddles, the substrate must be removed or changed. One possibility to get rid of substrate interactions is to suspend graphene [6,7], as shown by the drastic improvement in mobility which has enabled the observation of the fractional quantum Hall effect in suspended devices [14,15]. However, the freely suspended monolayers are very delicate, leading to fabrication difficulties as well as strain [16]. Because of these difficulties, there have been no STM spectroscopy measurements of the local electronic properties of suspended graphene devices. All of this points to the need for new substrates that offer mechanical support to the graphene without interfering with its electrical properties. Recently, such a candidate substrate has been found with the demonstration of high-quality graphene devices on hexagonal boron nitride (hBN) [8]. Hexagonal boron nitride has the same atomic structure as graphene, but with a 1.8% longer lattice constant [17], and shares many similar properties with graphene except that it is a wide-bandgap electric insulator [18]. The planar structure of hBN cleaves into an ultra-flat surface and the ionic bonding of hBN should leave it free of dangling bonds and charge traps at the surface resulting in less induced electron-hole puddles in graphene. Indeed, graphene on hBN devices exhibit the highest mobility reported on any substrate, as well as narrow Dirac peak resistance widths, indicating reduced disorder and charge inhomogeneity [8]. 
To study how the local electronic structure of graphene is affected by the hBN substrate, we prepare graphene on hBN devices for STM measurements. A schematic of the measurement set-up showing the graphene flake on hBN with gold electrodes for electrical contact is shown in Fig. 1a. A typical STM image of the monolayer graphene showing the surface corrugations due to the underlying hBN substrate is shown in Fig. 1b. This image can be compared with an STM image of monolayer graphene prepared in a similar manner but on SiO 2 as shown in Fig. 1c. It is clear from these two images that the surface corrugations are much larger for graphene on SiO 2 as compared to hBN. This is due to the graphene conforming to the substrate and the planar nature of hBN as compared to the amorphous SiO 2 . Figure 1d shows a histogram of the heights in the two images. In both cases, the heights are well described by gaussian distributions with standard deviations of 224.5 ± 0.9 pm for graphene on SiO 2 and 30.2 ± 0.2 pm for graphene on hBN. The values for graphene on SiO 2 are similar to previously reported values [19,20] while the distribution for graphene on hBN is similar to graphene on mica or HOPG [21]. Reducing the surface roughness is critical for graphene devices because local curvature can lead to electronic effects such as doping [22] and random effective magnetic fields [23]. As the height variation of the graphene on hBN is as flat as HOPG, it has reached its ultimate limit of flatness. Therefore, we conclude that the underlying hBN substrate is continuous and the graphene above it sits at two different angles. Atomic force microscopy images show that graphene on hBN tends to form flat regions separated by ridges and pyramids. As the two STM images were taken from different sides of one of these ridges, it is clear that the graphene can change orientation across these ridges. The scanning tunneling microscope is not only able to acquire images of atoms but can also map the local density of states. We have performed scanning tunneling spectroscopy of the graphene on hBN. Figure 3a shows a typical dI/dV spectroscopy curve which is This minimum occurs when the Fermi energy of the tip lines up with the Dirac point. We observe that the location of the minimum changes more quickly when the Dirac point is near zero tip voltage which is consistent with the linear band structure of graphene. There is also a dark ridge that occurs at decreasing tip voltage as the gate voltage is increased. This is due to the effect of the voltage on the tip acting as a local gate and changing the density of electrons in the graphene [25]. Our local spectroscopy measurements indicate that there is no band gap induced in graphene on hBN, not even locally. These results disagree with earlier theoretical calculations which predicted the opening of a band gap of order 50 meV when graphene is placed on hBN, because of the breaking of sublattice symmetry [9,10]. This discrepancy is explained by the 1.8% mismatch between the graphene and hBN lattices and the different orientations of the two lattices, which were both neglected in Refs. [9,10]. Taking them into account, one expects that, while one of the carbon atoms may sit over a boron (nitrogen) atom at one location, this alignment gets lost a few lattice constants away. In large enough systems, carbon atoms should therefore have the same probability to have a boron or a nitrogen atom as nearest neighbor in the hBN layer, regardless of their sublattice index. 
We numerically checked the validity of this hypothesis by calculating the interlayer hopping potential ∼ γ ⊥ exp[−|r − r ′ |/ξ] from a carbon atom at r in the graphene layer to a boron or nitrogen atom at r ′ in a rotated hBN layer. We restricted ourselves to nearest-and nextnearest-neighbor interlayer hopping and chose the parameters γ ⊥ = 0.39 eV and ξ = 0.032 nm to fit known values for these hoppings in graphene bilayers [26]. We found that the 1.8% lattice mismatch alone is sufficient to make the hopping strength from a carbon atom to a boron or a nitrogen atom independent of the graphene sublattice index for systems of a few hundred unit cells, going down to a few tens of unit cells when the lattices are misaligned by about one degree. We incorporated the Fourier transform of this hopping potential into the low-energy Hamiltonian for graphene on hBN to find the energy-momentum dispersion. The inter-layer coupling is nonzero only for k = 0 as well as for six additional vectors k associated with the Moiré pattern. Most importantly, we found that the coupling between the A and B atoms in the graphene lattice with the boron and nitrogen atoms in the hBN are almost identical. Thus sublattice symmetry is restored and a gapless Dirac spectrum is recovered, albeit at slightly shifted values of K. This is illustrated in Fig. 3d. While γ ⊥ depends in principle on whether hopping occurs between a carbon and a boron or nitrogen atom, we note that this does not break sublattice symmetry, and we checked that the spectrum remains gapless, even when this discrepancy is taken into account. More details of our numerical approach are given in the Supplementary Information. Since it is a two-dimensional material, the density of electrons is given by n = g s g v πk 2 /(2π) 2 where g s and g v are the spin and valley degeneracy which are both 2 for graphene. Therefore, the Dirac point should depend on gate voltage as E = v F παV g . The red curve is a fit to the data from which we can extract the Fermi velocity. We find that v F = 1.16 ± 0.01 × 10 6 m/s for the electrons and v F = 0.94 ± 0.02 × 10 6 m/s for the holes. Moreover, we observe an asymmetry between the Fermi velocity for electrons and holes of about 25% depending on the Moiré pattern observed. The shorter Moiré pattern has a higher Fermi velocity for holes while the longer one has a higher Fermi velocity for electrons. The origin of this asymmetry is unclear but may arise due to next-nearest neighbor coupling which are not taken into account in our model. One of the main advantages to using hBN as a substrate for graphene as compared to SiO 2 is the improvement in electronic properties of the graphene which is believed to be due to the lack of charge traps on the hBN surface. Figure 4a shows the topography of graphene on hBN over a range of 100 nm. Note that the height variation is less than 0.1 nm over the range of the image as compared to typical values of nearly 1 nm for graphene on SiO 2 . We have performed dI/dV measurements at 1 nm intervals over the entire area of Fig. 4a. For each of these dI/dV curves, we have found tip voltage of the minimum which corresponds to the Dirac point. The results are plotted in Fig. 4b. We have done a similar analysis for a 100 nm area of graphene on SiO 2 and the results are plotted in Fig. 4c. The red and blue regions correspond to electron and hole puddles respectively. It is clear from these two images that the variation in the energy of the Dirac point is much smaller on hBN. 
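A toy version of the sublattice-symmetry check described above (not the authors' code): sum the assumed exponential interlayer hopping, gamma*exp(-d/xi), from each graphene carbon to all boron and to all nitrogen sites of a rotated hBN lattice with a 1.8% longer lattice constant, and compare the averages for the two carbon sublattices. Only in-plane distances and a small patch are used, so the absolute numbers are not meaningful; the point is the near-equality of the couplings once the mismatch and rotation wash out the local registry.

```python
import numpy as np

a_g, a_bn = 0.246, 0.246 * 1.018      # graphene / hBN lattice constants (nm)
gamma, xi = 0.39, 0.032               # eV, nm (values quoted in the text)
phi = np.deg2rad(5.4)                 # illustrative relative rotation

def hex_lattice(a, n):
    """Return the two sublattices of a hexagonal lattice as (N, 2) coordinate arrays."""
    a1 = np.array([a, 0.0])
    a2 = np.array([a / 2.0, a * np.sqrt(3) / 2.0])
    cells = np.array([i * a1 + j * a2 for i in range(-n, n) for j in range(-n, n)])
    shift = np.array([a / 2.0, a / (2.0 * np.sqrt(3))])    # basis offset (bond vector)
    return cells, cells + shift

carbon_A, carbon_B = hex_lattice(a_g, 12)
boron, nitrogen = hex_lattice(a_bn, 12)
rot = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
boron, nitrogen = boron @ rot.T, nitrogen @ rot.T          # rotate the hBN layer

def mean_coupling(carbons, targets):
    d = np.linalg.norm(carbons[:, None, :] - targets[None, :, :], axis=-1)
    return float((gamma * np.exp(-d / xi)).sum(axis=1).mean())

for label, carbons in (("carbon A", carbon_A), ("carbon B", carbon_B)):
    print(label, "-> B:", round(mean_coupling(carbons, boron), 4),
          "  -> N:", round(mean_coupling(carbons, nitrogen), 4))
```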
The spatial extent of each puddle is also much smaller in the graphene on SiO 2 consistent with an increased density of impurities [13]. can be converted to charge fluctuations using n = E 2 d /π( v F ) 2 . We find that the charge fluctuations in graphene on hBN are σ n = 2.50 ± 0.13 × 10 9 cm −2 while they are more than 100 times larger for graphene on SiO 2 , σ n = 2.64 ± 0.07 × 10 11 cm −2 . Our measurements for the charge fluctuations on SiO 2 are consistent with previous single electron transistor [3] and STM [4,5] measurements which established the presence of electron and hole puddles in graphene on SiO 2 . Furthermore, our measurements for the charge fluctuations in graphene on hBN show a very similar value to values extracted from electrical transport measurements in suspended graphene samples [6] implying that using hBN as a substrate provides a similar benefit to suspending graphene without the associated fabrication challenges and limitations. We have demonstrated that graphene on hBN provides an extremely flat surface that has significantly reduced electron-hole puddles as compared to SiO 2 . By reducing the charge Moreover, hBN allows this low-density regime to be reached in a substrate-supported system which will allow atomic resolution local probes studies of the Dirac point physics. Methods Thin and flat few layer hBN flakes were prepared by mechanical exfoliation of hBN single crystals on SiO 2 /Si substrates. The hBN growth method has been previously described [27]. Exfoliated graphene flakes were then transferred to the hBN using Poly(methyl methacrylate) (PMMA) as a carrier [8]. Then Cr/Au electrodes were deposited using standard electron beam lithography. The lithography process leaves some PMMA resist on the surface of graphene which is cleaned by annealing in argon and hydrogen at 350 • C for 3 hours [28]. The device was then immediately transferred to the STM (Omicron low temperature STM operating at T = 4.5 K in ultrahigh vacuum (p ≤ 10 −11 mbar)). Electrochemically etched tungsten tips were used for imaging and spectroscopy. All of the tips used were first checked on an Au surface to ensure that their density of states was constant. The dI/dV spectroscopy was acquired by turning off the feedback loop and holding the tip a fixed distance above the surface. A small ac modulation of 5 mV at 563 Hz was applied to the tip voltage and the corresponding change in current was measured using lockin detection. We also measured dI/dV curves with 0.5 mV excitation and observed the same results. I. MOIRÉ PATTERNS A Moiré pattern occurs when the atoms in the graphene layer form a super-lattice structure with the atoms in the hBN layer. In this section, we derive the conditions which lead to a Moiré pattern and therefore predict the angles and lengths of the Moiré pattern. We consider two superimposed hexagonal lattices defined by Similarly, a position in the hBN lattice is represented by the integers (r, s). If the graphene and hBN lattice are AB stacked at a given position, they will be AB stacked again when na 1+ + ma 1− = ra 2+ + sa 2− . In terms of xy-coordinates this gives the conditions n − m = 2 a 2 a 1 (r cos(π/3 + φ) − s cos(π/3 − φ)) √ 3(n + m) = 2 a 2 a 1 (r sin(π/3 − φ) + s sin(π/3 + φ)) A Moiré pattern occurs when these equations can be satisfied for integer values of (n, m) and (r, s). For a given ratio of a 2 /a 1 these conditions will only hold for certain special angles. 
In terms of the values of (n, m), the length of the Moiré pattern can be written as L = a 1 √ n 2 + m 2 + nm and it is at an angle θ = tan −1 ( √ 3m 2n+m ) with respect to a 1+ . However, a near commensurate condition can always be found leading to a Moiré pattern over some finite length. We have numerically created lattices to reproduce the images in Figure 2. The results are shown in Fig. S1. The first set of images correspond to the data shown in Fig. 2a and 2b in the main text. It was created by using a hBN lattice with a 1.8% longer lattice constant and rotated by φ = −5.4 • counterclockwise with respect to the graphene lattice. From the FFT of the lattice, Fig. S1b, we see that it matches the experimental figure, Fig. 2b very well. To create the shorter Moiré pattern we must use a larger rotation angle. We find the best match occurs at φ = −10.9 • . In general, the Moiré pattern gets shorter as the angle φ increases. II. THEORY CALCULATIONS We restrict the interlayer hopping potential to nearest-neighbor and next-nearestneighbor hopping, and evaluate with (m ′ , n ′ ) labelling the nearest and next-nearest neighbor sites to (m, n). Note that if one is a boron atom, the other one is a nitrogen atom. In this way, four interlayer couplings between the two sublattices are defined. The parameters γ ⊥ = 0.39 eV and ξ = 0.032 nm are calibrated to fit the interlayer couplings in bilayer graphene [26]. While γ ⊥ should in principle depend on the sublattice index µ in the hBN layer, this has no influence on our main conclusion, that the graphene spectrum is effectively gapless due to the mismatch between lattices and their different relative orientations, because it does not break sublattice symmetry in the graphene layer. We consider system sizes up to 600×600 unit cells, which is more than sufficient to extract with all H's being 2 × 2 matrices, with f i (q) = 1 + 2 cos(q x a i /2) exp(−i √ 3q y a i /2) and parameters γ 0 = 3.16 eV, γ 1 = 2.79 eV, ǫ B = 3.34 eV and ǫ N = −1.4 eV [10]. The low-energy graphene dispersion is obtained via diagonalization of H. We find that a gap exists in the spectrum only whenṼ µA (k i ) − V µB (k i ) = 0 for at least one µ, thereby breaking sublattice symmetry in the graphene layer. At experimentally relevant rotation angles φ of a few degrees, we find that max[Ṽ µν (k i ) − V µ ′ ν ′ (k i )]/Ṽ µν (k i ) 10 −4 for 150 × 150 systems, decreasing with size to less than 10 −5 for 600 × 600 systems. Consequently, we get an upper bound of ∆ < 10 −6 eV for the excitation gap ∆ between the two graphene bands for the largest systems we investigated. The latter are still smaller than the experimentally observed domains so that such a small gap cannot be resolved. Large values for max[Ṽ µν (k i ) −Ṽ µ ′ ν ′ (k i )]/Ṽ µν (k i ) are obtained only for small systems or if the lattice mismatch is neglected. This explains the 50 meV gap reported in Refs. [9,10], which we qualitatively reproduced (see Fig. 3d). Even without relative rotation of the two
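A direct implementation of the Moiré length and angle expressions given above, L = a1*sqrt(n^2 + m^2 + n*m) and theta = atan(sqrt(3)*m / (2n + m)), using the standard graphene lattice constant a1 = 0.246 nm. The (n, m) pairs below are arbitrary examples, not values fitted to the measured patterns.

```python
import math

def moire_length_and_angle(n, m, a1=0.246):
    L = a1 * math.sqrt(n**2 + m**2 + n * m)
    theta = math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))
    return L, theta

for n, m in [(6, 1), (3, 1), (9, 2)]:
    L, theta = moire_length_and_angle(n, m)
    print(f"(n, m) = ({n}, {m}):  L = {L:.2f} nm,  theta = {theta:.1f} deg")
```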
2011-02-13T22:10:06.000Z
2011-02-13T00:00:00.000
{ "year": 2011, "sha1": "f160070cb5754327a83a8cee456c4681cb70ecad", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1102.2642", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f160070cb5754327a83a8cee456c4681cb70ecad", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
488775
pes2o/s2orc
v3-fos-license
Role of JunB in Erythroid Differentiation* The role of junB as a regulator of erythroid cell survival, proliferation, and differentiation was tested by controlled expression of JunB in the erythropoietin (EPO)-dependent erythroleukemia cell line HCD57. JunB induced erythroid differentiation as evidenced by increased expression of the erythroid-specific proteins β-globin, spectrin-α, and TER-119. Expression of JunB for at least 48 h was required for the differentiated phenotype to emerge. Differentiation was accompanied by a slower rate of proliferation and an increase in the expression of the cell cycle inhibitory protein p27. p27 protein expression increased due to reduced turnover without changes in transcription, indicating global changes in cell physiology following JunB induction. JunB expression was also studied in mouse and human primary erythroid cells. JunB expression increased immediately in both primary mouse cells and HCD57 cells treated with EPO and quickly returned to base-line levels, followed by a secondary rise in JunB in primary erythroid cells, but not in HCD57 cells, 36–48 h later. This result suggested that the initial EPO-dependent JunB induction was not sufficient to induce differentiation, but that the late EPO-independent JunB expression in primary erythroid cells was necessary for differentiation. This study suggests that JunB is an important regulator of erythroid differentiation. Erythropoietin (EPO) 1 is the primary hormone that regulates the production of erythroid cells in mammals. Conflicting theories propose that EPO may be instructive in directing uncommitted cells into erythroid maturation or alternatively that EPO is permissive only in allowing the maturation of cells that spontaneously commit to erythroid development. Whereas it is possible that EPO acts to promote the proliferation, survival, and differentiation of cells committed to erythroid development, some workers have hypothesized that EPO acts primarily as a survival factor for proerythroblasts by preventing the cells from undergoing programmed cell death or apoptosis (1). This view is supported by the development of relatively mature erythroid cells at the colony-forming unit-erythroid (cfu-E) stage of differentiation in fetal EPO Ϫ/Ϫ null or EPO receptor Ϫ/Ϫ null mice; however, proerythroblasts undergo apoptosis in fetal mice with deletion of either EPO or the EPO receptor (2). The results show that EPO is absolutely necessary only during a small window in this process of erythroid maturation: cells at the cfu-E to proerythroblast stage of differentiation. If EPO is absent or suboptimal, these cells undergo apoptosis. If EPO is sufficient, these cells survive to mature to erythroblasts. Erythroblasts will continue to terminally differentiate even if EPO is absent. In murine proerythroblasts, this differentiation is accompanied by an increase in the expression of the anti-apoptotic protein Bcl-x L (3). It is not yet clear if intracellular signals from the EPO receptor activated by EPO are limited to promoting survival or whether these EPO-dependent signals are necessary for continued proliferation and later differentiation characterized by induction of genes coding for erythrocyte membrane-specific protein and hemoglobin. Whereas the regulation of the expression of these erythroidspecific proteins by transcription factors such as GATA-1 and erythroid Kruppel-like factor has been described, regulators upstream of these events are not well characterized. 
A few in vitro EPO-induced erythroid differentiation models exist (4,5), and erythroleukemic cells respond to chemicals such as Me 2 SO and hexamethylenebisacetamide by undergoing partial erythroid differentiation (6,7). Expression of the EPO receptor in Ba/F3 cells results in EPO-dependent transcription of the ␤-globin gene (8). These observations suggest a possible requirement for EPO-dependent signals that modulate transcription of erythroid genes at some point in erythroid maturation. JunB is a member of the activator protein-1 (AP-1) family of transcription factors that binds to a specific DNA sequence, the 12-O-tetradecanoylphorbol-13-acetate-responsive element (TRE), to activate transcription of target genes. The AP-1 family consists of the Fos (c-Fos, Fra-1, Fra-2, and FosB) and Jun (c-Jun, JunB, and JunD) proteins, which are known as "early-response" proteins due to their up-regulation in response to extracellular stimuli affecting cell growth, differentiation, or survival. Inactivation of JunB in mice leads to embryonic lethality between embryonic days 6 and 8 due to vascular defects (9). JunB appears to function as a negative regulator of a number of cell systems, including proliferation in response to negative growth factors (10 -13). JunB has also been shown to negatively affect cell cycle progression (14,15) and cell survival (16). There is also evidence that JunB may play a central role in differentiation and growth arrest during hematopoiesis. In M1 mouse myeloid leukemic cells, induction of differentiation with either chemicals or cytokines is associated with an increase in junB expression (17)(18)(19). There is also some evidence that JunB may play a role in the differentiation of erythroid cells. JunB expression and DNA binding have been detected in Friend murine erythroleukemia-C cells induced to differentiate with Me 2 SO or hexamethylenebisacetamide, but not in control cells (6,7). Our laboratory uses the EPO-dependent murine erythroid cell line HCD57 as a model cell system for the study of erythroid proliferation, survival, and apoptosis. These cells depend on EPO for survival and proliferation, but do not differentiate in the presence of EPO. We have demonstrated that AP-1 DNA-binding activity induced in the growth factor withdrawal state is associated with an increase in junB message, JunB protein, and DNA-binding activity. Stable expression of a dominant-negative AP-1 mutant prevents apoptosis of HCD57 cells in the absence of EPO and prevents the down-regulation of the anti-apoptotic protein Bcl-x L (16). Therefore, in the absence of EPO, junB expression correlates with the induction of apoptosis. To further investigate the role of junB in erythropoiesis, we were interested in the effect JunB expression might have in the presence of EPO. In this study, we show that, in the presence of EPO, overexpression of junB induces differentiation of HCD57 cells. Furthermore, we found that JunB expression is associated with terminal differentiation of both mouse and human primary erythroid progenitors, thereby suggesting an additional role for JunB in the regulation of erythropoiesis. MATERIALS AND METHODS Cell Culture and Creation of the HCD57-JunB Cell Line-HCD57(K) cells (described previously (20)) stably expressing the pTET-OFF transactivator plasmid were a gift from Dr. 
Steve Brandt (Vanderbilt University) and were cultured in Iscove's modified Dulbecco's medium (Invitrogen) containing 30% fetal calf serum, 50 μg/ml Geneticin (Invitrogen), and 10 μg/ml gentamycin at 37°C in a 5% CO₂ environment and maintained in 2 units of EPO/ml of medium. To create the tetracycline (Tc)-inducible pTRE-JunB plasmid, the human junB gene was excised from the parental pGEM4-JunB cDNA on a HindIII/PstI fragment. The ends of the fragment were filled in with Klenow DNA polymerase. EcoRI/NotI adaptors (Invitrogen) were ligated onto the blunted fragment, and this fragment was ligated into the unique EcoRI site of the pTRE plasmid. The resulting pTRE-JunB plasmid was cotransfected with the pTK-Hyg plasmid (CLONTECH) at a 20:1 ratio into the HCD57(K) cells using DMRIE-C reagent (Invitrogen), and stable clones were selected by limiting dilution in the medium described above supplemented with 400 μg/ml hygromycin and 2 μg/ml tetracycline (to repress junB expression). Hygromycin-resistant cell lines were then cultured for 24 h in the absence of Tc and screened for JunB expression by Northern blot analysis using the junB cDNA as a probe. The clone used in these studies was the strongest expresser of junB and was designated HCD57-JunB. For each time point, 5 × 10⁶ HCD57-JunB cells were used. For induction of junB expression, the cells were washed three times with serum-free medium and incubated in Iscove's modified Dulbecco's medium containing 30% fetal calf serum, 50 μg/ml Geneticin, and 2 units/ml EPO, but in the absence of Tc, for the times indicated in the figure legends. For the electrophoretic mobility shift assay, HCD57-JunB cells were washed to remove Tc as indicated above and then incubated in medium either supplemented with 2 μg/ml Tc (to repress JunB expression) in concentrations of Tc indicated in the figure legends or without Tc to induce JunB expression. For the protein turnover studies, the HCD57-JunB cells were cultured for 48 h in the absence or presence of Tc, followed by treatment with 100 μM cycloheximide for the times indicated. To test the effect of length of JunB expression on differentiation, cells were washed to remove Tc from the medium, and then Tc was re-added to the medium at the times indicated. junB mRNA expression ceased within 4 h of Tc re-addition (data not shown). Nuclear extracts and total cell extracts were prepared as previously described (21), and 20 μg of protein were subjected to Western blot analysis with anti-JunB (N-17), anti-actin (I-19), anti-p16 (M-156), anti-spectrin-α (C-20), or anti-spectrin-β (C-18) antibody (Santa Cruz Biotechnology) or mouse monoclonal anti-p27 antibody (Transduction Laboratories). Western-blotted proteins were visualized using enhanced chemiluminescence (Amersham Biosciences, Inc.). The level of p27 protein expression during cycloheximide treatment was quantitated using a scanning densitometer and analyzed using ImageQuant software (Molecular Dynamics, Inc.). For the cell viability studies, cells were washed with medium as indicated above and cultured at 1 × 10⁵ cells/ml in the presence of EPO and in the absence or presence of 2 μg/ml Tc. Cell viability was determined by counting cells in a hemocytometer in the presence of 0.2% trypan blue. Isolation and Culture of Primary Erythroid Cells-Primary murine erythroid progenitors infected with the anemia-inducing strain of the Friend virus (FVA cells) were isolated as previously described (3), and 10⁶ FVA cells were cultured in EPO for 0, 2, 4, 8, 24, 36, and 48 h.
Primary human erythroid progenitors were derived by in vitro culture of CD34+ cells isolated from peripheral blood. Growth factor-mobilized peripheral blood collected from normal donors was purchased from AllCells, LLC (Berkeley, CA). CD34+ cells were isolated from growth factor-mobilized peripheral blood cells using the antibody-coated paramagnetic microbeads in the CliniMACS™ cell isolation device (Miltenyi Biotec, Inc., Auburn, CA). The isolated cells were >95% positive for CD34 as determined by flow cytometry and were cultured for 7-8 days to obtain highly purified erythroid progenitors that were at the cfu-E stage of differentiation. The cell culture medium contained 15% fetal calf serum, 15% human AB serum, Iscove's modified Dulbecco's medium, 100 units/ml penicillin, 100 μg/ml streptomycin, 10 ng/ml interleukin-3, 2 units/ml EPO, 50 ng/ml stem cell factor, and 50 ng/ml insulin-like growth factor-1. To facilitate terminal differentiation, stem cell factor was omitted from the culture medium beyond day 5 of culture. >90% of these cells were positive for CD71 and glycophorin A as determined by flow cytometry and produced erythroid colonies when plated in methylcellulose. Day 7, 9, and 14 cfu-E cells are at the basophilic, polychromatophilic, and orthochromatic stages of differentiation, respectively. Northern Blot Analysis-Following treatment of the cells, total RNA was isolated using the RNeasy miniprep kit (QIAGEN Inc.), and 10 μg of total RNA were subjected to Northern blot analysis as previously described (16). cDNA probes to junB or β-globin (kindly provided by Dr. Joyce Lloyd, Virginia Commonwealth University) were labeled with [α-32P]dCTP using the random priming method (Stratagene). Filters were hybridized to the probes as previously described (16). The blots were washed under high stringency conditions and visualized with autoradiography for 3 days at −80°C. Changes in mRNA expression were quantified using the Cyclone phosphoimager and Optiquant Image analysis software (Becton Dickinson). RNase Protection Analysis-For RNase protection of AP-1 and bcl family members, the murine AP-1 and APO-2 RNase protection templates, respectively (both from BD Pharmingen), were transcribed in vitro using the Riboquant in vitro transcription kit (BD Pharmingen). 5 μg of total RNA isolated from HCD57-JunB cells were subjected to RNase protection using the Riboquant RNase protection kit (BD Pharmingen) and resolved on a 5% polyacrylamide gel containing 7 M urea and 0.5× Tris borate/EDTA. AP-1 DNA Binding Studies and Supershift Assay-Electrophoretic mobility shift assays and supershift assays were conducted as previously described (16) using an electrophoretic mobility shift assay (Stratagene). 10 μg of nuclear extract were incubated with a [γ-32P]ATP-labeled double-stranded DNA fragment corresponding to the TRE (sense strand, 5′-CTAGTGATGAGTCAGCCGGATC-3′) at 4°C in the absence or presence of 10× unlabeled TRE and subjected to electrophoresis at 25 mA for 2.5 h at 4°C on 7% polyacrylamide gels. The gels were dried in vacuo for 1.5 h and exposed to x-ray film overnight at room temperature. For the supershift assay, 5 μg of nuclear extract were preincubated with 4 μg of anti-JunB antibody (Geneka Biotechnologies, Inc.) for 1 h at 4°C prior to incubation with the [γ-32P]ATP-labeled TRE for 15 min at 4°C and electrophoresis at 4°C as described above. Dried gels were exposed to x-ray film for 2-3 days at −80°C.
Flow Cytometry Analysis-Differentiation of HCD57-JunB cells was measured by detection of the mature erythroid-specific marker TER-119 using flow cytometry. 2 × 10⁵ HCD57-JunB cells cultured with junB induced or uninduced for 96 h were washed once with fluorescence-activated cell sorter buffer (1× phosphate-buffered saline, 2% fetal calf serum, and 0.1% sodium azide) and preincubated with 20 μg/ml anti-Fcγ receptor II blocking antibody 2.4G2 (BD Pharmingen) for 15 min at 4°C. The cells were then incubated with either phycoerythrin-labeled rat IgG (BD Pharmingen) or phycoerythrin-labeled rat anti-mouse TER-119 antibody (BD Pharmingen) at a final concentration of 2 μg/ml for 30 min at 4°C. The cells were washed twice with fluorescence-activated cell sorter buffer and then subjected to flow cytometry using the FACScan flow cytometer. DNA Fragmentation Studies-For DNA fragmentation studies, cells were cultured at 1 × 10⁵ cells/ml in the absence or presence of 1 unit/ml EPO or in the presence of EPO and in the absence or presence of Tc. 2 × 10⁶ cells were harvested at 24-h intervals, and genomic DNA was isolated from the cells using the Omniprep genomic DNA isolation kit (Genotech, Inc.). 10 μg of genomic DNA were resolved on a 2.25% agarose gel containing 1× Tris acetate/EDTA and 300 ng/ml ethidium bromide. DNA laddering indicative of apoptosis was visualized using ultraviolet light. In Vitro Kinase Assay-HCD57-JunB cells were cultured in the presence of 2 units/ml EPO and in the absence (junB expression-induced) or presence (junB expression-suppressed) of Tc for 48 h. 500 μg of cell extracts isolated from 5 × 10⁶ cells were subjected to immunoprecipitation as previously described (22) with anti-Cdk2 (M-2) or anti-cyclin E (M-20) antibody (Santa Cruz Biotechnology) and protein A-agarose (Transduction Laboratories). Anti-Cdk2 and anti-cyclin E antibody-immunoprecipitated proteins were concentrated in 25 μl of lysis buffer and subjected to an in vitro kinase assay as previously described (22) using 5 μg of histone H1 (Sigma) as a substrate. Following electrophoresis of the samples on a 12% SDS-polyacrylamide gel, the proteins were transferred to nitrocellulose and exposed to autoradiography for 18 h at −80°C with an intensifying screen to visualize phosphorylated histone H1. The blots were then probed with monoclonal anti-p27 antibody and finally with anti-Cdk2 or anti-cyclin E antibody to ensure equal loading of proteins. RESULTS To investigate the role of JunB in erythropoiesis, the junB gene was cloned downstream of a Tc-inducible promoter and stably transfected into HCD57(K) cells, which undergo massive apoptosis within 24 h of EPO withdrawal (20). We have designated the cell line used in this study HCD57-JunB. Removal of Tc from the cells resulted in rapid up-regulation of the junB message. junB expression was detected within 3 h of Tc removal and reached its maximum 12 h after removal of Tc (Fig. 1A). Overexposure of this Northern blot detected low endogenous junB expression in this cell line (data not shown) comparable to that seen in our previous studies (16). Western blot analysis revealed an increase in JunB protein that correlated with the increase in junB mRNA expression; we consistently observed an ~10-fold increase in JunB protein 24 h after Tc withdrawal (Fig. 1B, lane E).
An electrophoretic mobility shift assay of HCD57-JunB cells washed and then cultured in fresh medium containing Tc to prevent junB expression revealed a weak shift 1 h after the cells were recultured, and this shift rapidly decreased and was undetectable 24 h after the start of the experiment (Fig. 1C, lane 2). When Tc was removed to induce junB expression, however, a strong increase in AP-1 DNA-binding activity was detected 3 h after Tc removal, and this increase continued over the 24-h time course (Fig. 1C, lanes 9-11). Supershift analysis showed that JunB was present in all bands of the AP-1 complex as evidenced by the shifted bands (Fig. 1C, lane 16, dashed arrows) and the loss of the uppermost band seen in the absence of antibody (asterisk). Further supershift analysis revealed the presence of c-Jun, JunD, c-Fos, FosB, and the AP-1-related proteins activating transcription factor-1 and -2 in the AP-1 complex; other Fos and Jun proteins were not detected (data not shown). These results indicate that the HCD57-JunB cell line is tightly controlled and produces functional JunB protein when induced by the removal of Tc from the medium. When HCD57-JunB cells were harvested after culturing the cells for 96 h in EPO, but in the absence of Tc, we noticed that the pelleted cells had turned bright red (data not shown). This remarkable observation strongly suggested that the HCD57-JunB cells were synthesizing hemoglobin as a result of junB expression, indicating partial differentiation of these erythroleukemic cells. To determine what percentage of cells was taking on a more differentiated phenotype, the expression of the late erythroid-specific surface protein TER-119, a protein that associates with glycophorin A (23), was investigated. Flow cytometry revealed that the entire population of HCD57 cells expressing junB exhibited an increase in the surface expression of TER-119 (Fig. 2A, arrow). The differentiated phenotype of HCD57-JunB cells was further explored by assessing the expression of three markers of erythroid differentiation: β-globin, a key component of the hemoglobin molecule, and spectrin-α and spectrin-β, two proteins necessary for the assembly of the erythrocyte membrane. Northern blot analysis revealed that β-globin expression was strongly induced by 48 h following JunB induction, with expression still increasing at 96 h when the experiment was stopped (Fig. 2B). Varying the expression of JunB by varying the concentration of Tc in the culture medium revealed that the increased expression of β-globin correlated with the level of JunB expression (Fig. 2C). Western blot analysis revealed that spectrin-β expression was high in the absence of junB expression and unaltered by its induction (Fig. 2D, lower panel); by contrast, the expression of spectrin-α was induced by junB expression with the same kinetics as the emergence of β-globin expression (upper panel). Therefore, HCD57 leukemic cells express some erythroid-specific markers (spectrin-β), but express other differentiation markers only when junB is induced (spectrin-α, β-globin, and TER-119). junB, like all AP-1 genes, is thought of as an "immediate-early" gene, having its effect soon after its activation by growth factors or stress. We were therefore interested in whether the induction of differentiation was caused by immediate-early induction of junB expression or whether a longer period of junB expression was necessary for the differentiated phenotype.
When junB expression was induced for 7 h and then suppressed by re-addition of Tc to the medium, the HCD57-JunB cells did not differentiate, as shown by a lack of increase in β-globin expression (Fig. 2E). junB had to be expressed for at least 48 h for maximum β-globin expression to occur (Fig. 2D, lane E). Therefore, it appears that long-term junB expression is necessary for differentiation of HCD57 cells. The effect of long-term expression of junB on the erythroleukemic cells led us to investigate whether similar expression patterns might be seen in differentiating primary erythroid cells. junB expression was assessed in primary murine erythroid progenitors infected with the anemia-inducing strain of the Friend spleen focus-forming virus (FVA cells), which begin terminal differentiation within 48 h after treatment with EPO. Northern blot analysis showed that FVA cells induced to differentiate in the presence of EPO exhibited both an initial increase in mRNA expression of junB and a later increase 36-48 h after addition of EPO (Fig. 3A). By contrast, HCD57 cells deprived of EPO and then stimulated with EPO showed a similar initial increase in junB expression 1 h after EPO induction (Fig. 3A, lane I); this increased junB expression rapidly decreased and did not increase again. This experiment was repeated using RNase protection analysis, and the expression of junB and the housekeeping gene gapdh was quantified. Immediately following treatment with EPO, junB expression increased ~3-fold in both FVA and HCD57 cells. During terminal differentiation of FVA cells (24-48 h after EPO addition), junB expression increased ~5-fold relative to total RNA, whereas gapdh expression decreased to 30% of the starting levels. HCD57 erythroleukemic cells showed no such increase in junB expression during this time period. We then investigated junB expression in normal human colony-forming cells cultured in EPO. For this experiment, CD34+ early hematopoietic cells were cultured under conditions that promote the development of cfu-E cells. The cfu-E (day 7) cells were then further cultured to allow terminal differentiation to occur. junB expression was low during the early cfu-E stage of differentiation (the proerythroblast stage), but increased by day 9 (the polychromatophilic stage) and was still elevated in day 14 terminally differentiated cells (the orthochromatic stage) (Fig. 3B). Therefore, in both murine and human primary erythroid progenitors, elevated junB expression is observed during differentiation of the cells. Other hallmarks of differentiation include suppression of proliferation and potential changes in cell cycle regulatory proteins. We therefore investigated the expression of c-jun in the HCD57-JunB cells because we have previously shown a role for c-jun in the proliferation of HCD57 cells (16). RNase protection analysis of the expression of all jun and fos family members during the induction of junB revealed a transient increase in the expression of c-jun, junD, and junB 1 h after fresh medium was added, with no significant changes in expression thereafter and an increase in c-fos expression during the first 48 h after removal of Tc (Fig. 4). The stably transfected junB gene is the human junB gene, so only the native murine junB expression induced was observed in this experiment. No changes in other AP-1 family members (fra-1, fra-2, and fosB) were detected (data not shown).
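As a rough illustration of the quantification step described above, the following minimal Python sketch compares the junB fold change computed relative to total RNA (equal micrograms loaded per lane, as reported in the text) with the fold change one would obtain by normalizing to gapdh. The band intensities are hypothetical phosphorimager counts, invented only to mirror the reported trends (junB up ~5-fold during terminal differentiation while gapdh falls to ~30% of its starting level); they are not values taken from the paper.

```python
# Hypothetical phosphorimager counts per lane (equal total RNA loaded).
baseline = {"junB": 1000.0, "gapdh": 5000.0}   # 0 h after EPO addition
late     = {"junB": 5000.0, "gapdh": 1500.0}   # 36-48 h (terminal differentiation)

# Fold change relative to total RNA (the normalization used in the text).
fold_total_rna = late["junB"] / baseline["junB"]

# Fold change after normalizing junB to gapdh in the same lane.
fold_gapdh_norm = (late["junB"] / late["gapdh"]) / (baseline["junB"] / baseline["gapdh"])

print(f"junB fold change vs. total RNA: {fold_total_rna:.1f}")   # ~5.0
print(f"junB fold change vs. gapdh:     {fold_gapdh_norm:.1f}")  # ~16.7
```

Because gapdh itself declines during terminal erythroid differentiation, normalizing to gapdh would inflate the apparent induction, which is presumably why the authors report junB expression relative to total RNA rather than to the housekeeping gene.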
An examination of the growth properties of HCD57-JunB cells cultured in the absence and presence of the induction of junB expression revealed that these cells proliferated more slowly in response to EPO when junB was expressed (Fig. 5A). 48 h after junB expression was induced, proliferation decreased ~50% compared with cells cultured in the absence of JunB. This 50% decrease was maintained throughout the rest of the time course. Therefore, JunB expression does not simply result in the expression of markers of mature erythroid cells, but reflects other aspects of differentiation, including inhibition of proliferation of HCD57 cells. When we examined the cell cycle state of junB-expressing HCD57 cells using flow cytometry analysis of propidium iodide-stained cells, no cell cycle arrest was detected; however, a slight increase in the percentage of cells in G0/G1 led us to investigate potential changes in the expression of cell cycle regulators. Proteins were screened for changes in expression during the induction of JunB expression by Western blot analysis. No changes in expression were detected in the p16 protein (Fig. 5B) or in the p21, p57, and p36 proteins (data not shown). By contrast, an increase in p27 protein expression was detected 24 h after JunB expression was induced (Fig. 5B, upper panel). Therefore, one of the ways that junB may slow cell cycle progression is to increase the expression of p27, which is known to inhibit the activity of cyclin E and its associated cyclin-dependent kinases Cdk2 and Cdk4. An in vitro kinase assay using anti-Cdk2 and anti-cyclin E immunoprecipitates and histone H1 as a substrate revealed that JunB expression resulted in a decrease in Cdk2- and cyclin E-associated kinase activity (Fig. 5C, lanes B and H) and an increase in the association of p27 with these proteins (lanes D and J). These results suggest that one mechanism for the retardation of proliferation seen in junB-expressing cells is inhibition of Cdk2 kinase activity. p27 expression is regulated at both the transcriptional and post-transcriptional levels (24-27). The mechanism by which junB regulates p27 expression in HCD57-JunB cells was therefore explored. No changes in the mRNA expression of p27 upon junB induction were detected (data not shown). Potential post-transcriptional changes in p27 expression were then investigated by treatment of junB- and non-junB-expressing HCD57 cells with cycloheximide and measuring the rate of protein turnover. HCD57 cells expressing JunB for 48 h showed a much slower rate of p27 turnover compared with cells in which JunB was not expressed (Fig. 5D). The half-life of p27 in cells cultured without JunB expression was ~20 min, whereas the half-life of p27 in cells cultured in the presence of JunB expression was ~120 min. Therefore, it appears that one mechanism by which junB increases p27 protein levels is by protein stabilization. In the absence of EPO, the induction of apoptosis of erythroid cells is associated with an increase in junB expression and a decrease in the expression of the anti-apoptotic protein Bcl-xL (16). To investigate the apoptotic state of the junB-expressing HCD57 cells cultured in EPO, genomic DNA was isolated from cells cultured in the presence of EPO with JunB expression over a 96-h time period and examined by agarose gel electrophoresis. HCD57-JunB cells deprived of EPO for 24 h exhibited the characteristic DNA laddering, indicating that the cells underwent apoptosis rapidly in the absence of EPO (Fig. 6A, lane B).
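The half-life estimates above come from a cycloheximide chase followed by densitometry. As a minimal sketch of how such an estimate can be obtained, the Python example below fits a first-order decay to hypothetical densitometry values; the numbers are invented and chosen only so that the fit reproduces the reported ~20-min and ~120-min half-lives, and they should not be read as the paper's actual measurements.

```python
import math

# Hypothetical fraction of the 0-min p27 signal remaining after cycloheximide
# addition (illustrative values, not data from the paper).
times = [0, 10, 20, 40, 60]  # minutes after cycloheximide

p27_junB_off = [1.00, 0.71, 0.50, 0.25, 0.13]
p27_junB_on  = [1.00, 0.94, 0.89, 0.79, 0.71]

def half_life(times, fractions):
    """Least-squares fit of ln(fraction) = a - k*t, assuming first-order decay."""
    pairs = [(t, math.log(f)) for t, f in zip(times, fractions) if f > 0]
    n = len(pairs)
    sum_t = sum(t for t, _ in pairs)
    sum_y = sum(y for _, y in pairs)
    sum_tt = sum(t * t for t, _ in pairs)
    sum_ty = sum(t * y for t, y in pairs)
    k = -(n * sum_ty - sum_t * sum_y) / (n * sum_tt - sum_t ** 2)  # decay constant (1/min)
    return math.log(2) / k

print(f"p27 half-life, junB uninduced: {half_life(times, p27_junB_off):.0f} min")  # ~20 min
print(f"p27 half-life, junB induced:   {half_life(times, p27_junB_on):.0f} min")   # ~120 min
```

A roughly six-fold increase in half-life with no change in p27 mRNA is consistent with the interpretation in the text that JunB raises p27 levels by stabilizing the protein rather than by increasing its transcription.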
junB expression for 96 h induced only a small amount of DNA laddering (Fig. 6A, lane G). Additional flow cytometry experiments quantitated the increase as ~10% of the total DNA with sub-G0/G1 DNA (data not shown). Expression of the anti-apoptotic gene bcl-xL increased ~8-fold as junB was expressed (Fig. 6B, lane J, upper arrow); no changes in mRNA expression were observed in other bcl family members, including the pro-apoptotic gene bad (Fig. 6B, lower arrow). Taken together, these results indicate that, in the presence of EPO, JunB does not appear to directly induce apoptosis. DISCUSSION We have created a system in which junB expression can be tightly regulated in an erythroid cell line, allowing us to study the role of JunB in erythropoiesis. Culture of HCD57-JunB cells in the absence of Tc led to a rapid and sustained increase in AP-1 DNA-binding activity due to the presence of JunB in the AP-1 complex. The weak AP-1 DNA binding observed 3 h after medium with Tc was added to the HCD57-JunB cells (Fig. 1C, lane 3) may have been due to transient increases in c-jun, junD, and junB expression induced by the addition of fresh serum (Fig. 4). Expression of JunB induced differentiation of HCD57 cells as evidenced by increases in the expression of known important markers of erythroid differentiation, including β-globin and spectrin-α. Decreasing the expression of JunB resulted in a corresponding decrease in β-globin expression (Fig. 2C), thus correlating the expression of JunB with the degree of differentiation. Spectrin-β, another erythroid protein, was already present in the HCD57-JunB cells and did not increase upon JunB expression. Therefore, this erythroleukemic cell line has retained some ability to differentiate, but cannot synthesize hemoglobin or express spectrin-α, functions that are restored by the reintroduction of JunB expression. The HCD57-JunB cells did not enucleate (data not shown); therefore, JunB expression did not cause terminal differentiation of HCD57 cells, but restored part of the differentiation program to the erythroleukemic cells. A number of results imply a correlation between long-term junB expression and erythroid differentiation. First, both human and murine primary erythroid progenitors differentiating in the presence of EPO exhibited long-term junB expression during terminal differentiation. Second, the early, EPO-dependent junB expression detected in HCD57(K) erythroleukemic cells was not sufficient to induce differentiation. Third, expression of junB was required for at least 48 h for maximum β-globin expression to occur in the HCD57-JunB cells (Fig. 2E). Therefore, it was the reintroduction of prolonged JunB expression that partially restored the differentiation program in the HCD57 erythroleukemic cells. The biphasic nature of junB expression in the FVA cells suggests that there may be two modes of junB induction: an early, EPO-dependent induction and a later induction. This later induction may not be EPO-dependent because FVA cells no longer respond to EPO during the terminal stages of differentiation. Therefore, the long-term junB expression may be part of the differentiation program and not induced directly by EPO. It cannot be ruled out that JunB expression increased as a result of differentiation in FVA and cfu-E cells and did not cause differentiation in these cells. However, induction of differentiation by human junB did not cause an increase in endogenous murine junB expression in HCD57 cells (Fig.
4, lanes G and H), suggesting that an increase in junB expression is not a by-product of differentiation. Because the differentiation is incomplete, however, we cannot discount the possibility that junB expression may increase as a result of an additional differentiation mechanism. The fact that junB expression affects not only the expression of markers of erythroid differentiation, but also the proliferative state of the cells, is further evidence that JunB promotes differentiation on a global scale. The accumulation of p27 protein during erythroid cell differentiation is consistent with previous reports on the differentiation of primary FVA cells (28) and on an EPO-dependent cell line (4). The results presented here imply an increase in JunB expression upstream of the p27 protein accumulation due to a decreased rate of protein turnover. It is interesting that we also observed no increase in p16 expression given the recent observations that JunB can inhibit proliferation by increasing p16 mRNA expression in fibroblasts (14). This discrepancy may be explained by differences in the expression of junB binding partners between the cell lines or by cell type-specific regulation of p16 expression. An alternative explanation could be that, in erythroid cells, JunB appears to be part of a broader differentiation program and therefore may not directly affect the expression of cell cycle-related genes. A recent report showed that reintroduction of junB into junB−/− mice rescues the embryonic lethality phenotype of the knockout mice, but the adult mice develop a hyperproliferative disease due to loss of the junB transgene in myeloid cells (29). JunB was not lost in erythroid cells, so it is difficult to say what the effect loss of junB might have on adult erythropoiesis in these transgenic mice. JunB clearly has different effects on erythroid cells in the presence of EPO compared with its absence. HCD57(K) cells underwent apoptosis rapidly in response to EPO withdrawal (Fig. 6A). Overexpression of junB for 48 h prior to EPO withdrawal had a minimal effect on the number of apoptotic cells 6-12 h after EPO was withdrawn compared with cells that did not express junB (data not shown). Because junB expression is induced when EPO is withdrawn from these cells (16), it is probable that this normal amount of junB is sufficient to induce apoptosis and that additional induction of junB does not further contribute to cell death. By contrast, a low level of significant and reproducible apoptosis (≤10%) was observed when JunB was induced in the presence of EPO (Fig. 5A); this low level of apoptosis may occur as a result of cells failing to complete differentiation. Furthermore, bcl-xL expression decreased during the induction of apoptosis upon EPO withdrawal, yet bcl-xL mRNA levels increased when JunB was induced in the presence of EPO. One explanation for these differences could be the availability of binding partners for JunB in the absence and presence of EPO: in the absence of EPO, only FosB and JunD were available (Fig. 2B) (16), whereas c-Fos, c-Jun, and activating transcription factor-1 and -2 were also available for association with JunB in the presence of EPO. These observations raise the interesting possibility that junB may have a dual role in the regulation of erythroid cell maturation and survival. In the absence of EPO, JunB may be an inducer of apoptosis by inhibiting the expression of survival proteins such as Bcl-xL. In the presence of EPO, JunB may function as part of the differentiation program. Overexpression of junB may restore a loss of prolonged JunB expression necessary for the differentiation of HCD57 cells. Alternatively, prolonged junB expression may replace the loss of another transcription factor that complexes with AP-1 family members and is important in erythroid differentiation. Given that JunB may act as a potent inhibitor of c-Jun transactivation and transformation (30), JunB may promote differentiation by inhibiting the activity of c-Jun, which we have shown to be important in EPO-induced proliferation (16), or by antagonizing the activity of c-Jun at the promoter of cell cycle regulatory genes that have been shown to be activated by c-Jun (31-33). Conversely, JunB may cooperate with c-Jun and other AP-1 family members to activate the transcription of genes necessary for differentiation. The question remains whether EPO has a directive or permissive role in erythropoiesis. The facts that 1) HCD57 cells do not undergo apoptosis due to JunB expression but are protected from apoptosis by EPO and 2) the long-term increase in junB expression during terminal differentiation may be EPO-independent suggest that, rather than directing JunB to promote differentiation, EPO permits the cells to survive, thus allowing the differentiation program (including JunB expression) to promote differentiation. In summary, our data support a growing amount of evidence that junB is an important regulator of differentiation of hematopoietic cells. HCD57 cells are derived from a leukemic mouse; therefore, by their very nature, these cells do not differentiate. We have been able to partially differentiate these cells. This is a significant accomplishment and may give clues as to how these cells became leukemic and how the leukemic phenotype might be reversed.
2018-04-03T03:07:32.288Z
2002-02-15T00:00:00.000
{ "year": 2002, "sha1": "57f10fb8359a33787a477b36d81bece904da7b4f", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/277/7/4859.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "2274774e2f93c6dd6e492bf2bc3a8956fe40554d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
201042287
pes2o/s2orc
v3-fos-license
Stumped by rapid symptomatic prostatic regrowth: A case report on a STUMP tumour of the prostate resected with HoLEP Highlights • Stromal tumour of uncertain malignant potential (STUMP) of the prostate is a rare tumour arising from the prostate's specialized stroma. • The patient presented with LUTS, MRI showed prostatic growth, and biopsy showed no malignancy. • The symptoms were treated by TURP and 11 g of the prostate were removed. • The tumour recurred within less than a year to three times its original size. • For the first time in the literature, HoLEP was used to remove the origin of the tumour successfully. Introduction Stromal tumour of uncertain malignant potential (STUMP) of the prostate is an extremely rare tumour arising from the specialized stroma of the prostate that shows a wide spectrum of clinical presentations and behaviour, ranging from a focal benign lesion to a huge aggressive tumour [1]. This case shows an aggressive, rapidly recurring STUMP of the prostate growing in size from 40 ml to 131 ml in only one year despite an interim transurethral resection of the prostate (TURP) removing 11.85 g of the prostate. The work has been reported in line with the SCARE criteria [2]. Case presentation A 57-year-old man with no known medical illnesses presented with voiding lower urinary tract symptoms (LUTS) of hesitancy, poor flow, and incomplete bladder emptying lasting 1.5 months and had a benign-feeling prostate on digital rectal examination (DRE). He had previously had one episode of urinary retention and had an elevated prostate-specific antigen (PSA) of 14.5 ng/ml. He was started on alfuzosin 10 mg but with no significant improvement in his symptoms. A trans-abdominal ultrasound scan of the urinary tract showed a large prostate gland of 41 ml, with indentation of the bladder base. A multiparametric MRI scan displayed an enlarged prostate (41 ml) with central gland hypertrophy protruding into the bladder base; the capsule was intact, the seminal vesicles were normal, and the PI-RADS score was 2 {Figs. 1 and 2}. His repeat PSA was 25 ng/ml. Due to the high PSA density, a trans-perineal prostate biopsy was undertaken. This was performed from 6 sectors of the peripheral zone, and the 24 cores showed no evidence of malignancy. The patient underwent a transurethral resection of the prostate (TURP) for his obstructive symptoms in August 2016. Intraoperatively, an enlarged adenoma causing bladder outlet obstruction was seen on cystoscopy, and 11.85 g of prostate tissue was resected. The histopathology report confirmed the previous biopsy results of no malignancy but added that the prostate had adenomatous hyperplasia. Initially, the patient's symptoms improved, and his PSA fell from 25 ng/ml to 3 ng/ml. A year later, the patient presented with haematuria and recurrence of his voiding LUTS. Even though his PSA was still 3 ng/ml, his prostate was much more enlarged on ultrasound scan, with a size of 131 ml. An MP-MRI scan was repeated, which demonstrated a huge prostate (>100 ml) bulging into the bladder base and extending from the central gland, with a papillary middle lobe area measuring up to 6 × 5.4 × 3.9 cm intravesically; the capsule was intact and the seminal vesicles were atrophic, with the same PI-RADS score of 2 {Figs. 3 and 4}. The patient's case was discussed at a multi-disciplinary prostate meeting and he was booked for a holmium laser enucleation of the prostate (HoLEP) to relieve the bladder outflow obstruction and attempt to resect all of the recurrent adenoma.
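As a small illustrative calculation using the values reported in this case, the snippet below works out the PSA density that triggered the biopsy and the scale of the post-TURP regrowth. The 0.15 ng/ml per cc threshold quoted in the comment is a commonly cited convention for flagging a high PSA density, not a figure stated in this report, and the "12 months" used for the growth rate is an approximation of the interval described in the text.

```python
# Illustrative arithmetic from the reported values in this case.
psa_ng_per_ml = 25.0          # repeat PSA before biopsy
prostate_volume_ml = 41.0     # prostate volume at presentation (ultrasound/MRI)

psa_density = psa_ng_per_ml / prostate_volume_ml
print(f"PSA density: {psa_density:.2f} ng/ml per cc")   # ~0.61, well above the ~0.15 convention

# Regrowth after TURP: 41 ml -> 131 ml within roughly one year (assumed ~12 months).
volume_before, volume_after, months = 41.0, 131.0, 12.0
growth_factor = volume_after / volume_before             # ~3.2-fold
avg_growth_per_month = (volume_after - volume_before) / months
print(f"Volume increase: {growth_factor:.1f}-fold (~{avg_growth_per_month:.1f} ml/month on average)")
```

The roughly three-fold volume increase within a year, despite a falling PSA, is what makes this recurrence pattern unusual among reported STUMP cases.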
The endoscopic view showed a multilobed mass which stemmed from the verumontanum and extended to the bladder neck and inside the bladder. During the procedure, 64 g of tissue was removed, and the histopathology report stated that the specimen showed hypercellularity of the stroma, with some areas having a myxoid background and some areas showing a spindle cell morphology. There is, however, no atypia, no mitoses, and no necrosis. There is also marked epithelial hyperplasia, without atypia, which are the features of a stromal tumour of uncertain malignant potential (STUMP), with no evidence of any malignancy. The patient was followed up 6 and 11 months after the HoLEP; the patient was not complaining of any symptoms, the PSA levels were undetectable, and the MP-MRI showed no sign of recurrence or regrowth of the STUMP {Fig. 5}. Discussion STUMP is a rare prostate-specific stromal tumour. It was initially described in 1998 by Gaudin et al. and categorized on the basis of the histopathological features of the tumour [3]. The largest case series written on STUMP of the prostate was by Herawi and Epstein in 2006, which provided a clinicopathological follow-up for 50 cases of prostate-specific stromal tumours, of which 36 were STUMP. It showed that the mean age of patients with STUMP was 58 (range: 27-83) [4], which was similar to the age in this case: 57 years old. The clinical symptoms at presentation caused by STUMP vary between voiding/obstructive LUTS, haematuria, haematospermia, urinary retention, rectal dysfunction, and abnormal DRE [3][4][5], which relates to our case, where the patient presented with urinary obstructive symptoms of hesitancy, poor flow, incomplete bladder emptying, a benign-feeling enlarged prostate on DRE, and an episode of urinary retention. Regarding the PSA levels, in this case the patient presented with a high level of 14 ng/ml that kept rising until it reached 25 ng/ml. Other cases reported similarly high PSA levels [3,4,6], and in one case it reached 500 ng/ml [7]. On the other hand, some case reports showed normal PSA levels [8]. What makes this case unique is the unusually aggressive, rapid recurrence of the tumour after TURP, where it regrew from 41 ml to 131 ml in less than a year with a huge invasion of the bladder base as seen in {Figs. 3 and 4}, despite removing 11 g of the tumour. According to previously reported cases, STUMP shows an unpredictable clinical behaviour. Recurrence was reported in 46% of cases by Gaudin et al. [3] and in 50% of cases by G. Bostwick et al. in their case series. In the latter, only 5 cases out of 23 were reported to have fast recurrence in less than a year, but there was no measured size for this recurrence or imaging available [4]. In terms of management, this is the first time HoLEP was used to treat the tumour, in an attempt to enucleate the entire origin of the STUMP growth by clearing the whole transition zone, which showed a good response as no recurrence or regrowth was detected. Previously, standard TURP, radical and simple prostatectomy have been used to manage these tumours [1,3,4,9]. Conclusion STUMP of the prostate is an exceptionally rare tumour, the management of which is not clearly understood in terms of clinical presentation, response to treatment, and prognosis. It should be suspected in cases of rapid recurrence after bladder outflow surgery. Close follow-up should always be considered, and HoLEP may be used to remove the origin of the STUMP, while other radical treatment options including resection of the prostate may be needed.
Sources of funding None. Ethical approval No ethical approval is needed for this case report. Consent Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Author contribution TAREQ ALTELL: Data collection, Writing the paper. LORENZO MARCONI: Assistant surgeon in HoLEP operation. PAUL CATHCART: Consultant urological surgeon who did the TURP operation. BENJAMIN CHALLACOMBE: Consultant urological surgeon who did the HoLEP, he also edited the paper. Registration of research studies Not relevant for this study. Provenance and peer review Not commissioned, externally peer-reviewed. None. Declaration of Competing Interest None.
2019-08-18T13:04:42.709Z
2019-07-26T00:00:00.000
{ "year": 2019, "sha1": "f94812e70f609d92a18cd36e00c3f110de1859f6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ijscr.2019.07.058", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c5342cbcc9c69ce491abdf31f830f93d68812f40", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250415014
pes2o/s2orc
v3-fos-license
The role of project management in the success of green building projects: Egypt as a case study Sustainability and project management are two trends that have attracted global interest in the last decades due to their significant role in various fields of life. However, these two topics have rarely been addressed in one study or framework, and sustainability and environmental issues are not specifically or systematically considered in most major project management frameworks such as the Project Management Body of Knowledge (PMBOK), the Individual Competence Baseline (ICB), and ISO 21500:2012, among others. Furthermore, sustainability applications in the construction field under the term "green buildings" are facing various types of obstacles that hinder the adequate spread of this type of construction. Some of these obstacles have been addressed in recent studies with suggested solutions, but the role of project management in overcoming or even mitigating the risk of these obstacles was almost absent in most of these studies. Therefore, this paper attempts to identify the most important obstacles facing the application of sustainability in the construction field, taking the green construction situation in Egypt as a case study. In addition, this paper aims to investigate the role of project management in green building projects' success, through the application of project management best practices to overcome the main factors that obstruct the green building movement. The results showed that there is a lack of management methods that address sustainable construction projects. In addition, there is no clear methodology governing the green building management process. Also, the unspecified responsibilities among stakeholders in green building projects lead to difficulties in managing and implementing green buildings. However, some of the identified obstructions could be overcome by project management best practices and methods. its resources, and therefore exposes future life to danger. Recent events around the world, such as forest fires, floods, and torrential rains due to global warming, have encouraged global interest in sustainability as one of the means to address this phenomenon. The high cost of oil energy and its negative impact on the environment have also prompted the search for alternative sources of energy and the development of concepts that aim to reduce dependence on oil energy and rationalize the use of coal and gas for power generation. A few years ago, concepts such as "eco-friendly building" and "green architecture" emerged within the framework of sustainable development, which goes beyond the narrow economic outlook of rapid profit toward the aspiration to conserve natural resources and allow them to be exploited for longer periods to serve future generations. The main feature that distinguishes green buildings, or ecological buildings, from other buildings is that they do not disturb the ecological balance and they aim to produce structures that will benefit both nature and human beings [1]. Green building applications face challenges on more than one level. In this paper, green building applications at the project level were considered.
The lack of green buildings in developing countries and the gap between the percentage of the registered projects and the certified projects from green building rating systems in countries like Egypt indicates that there are some obstacles facing this kind of project in all project phases. Some previous studies addressed these obstacles, and some of them provided suggested solutions at governmental and professional levels. But a few of them attempt to find a practical solution from project management and the project manager's point of view. This paper aims to investigate the obstacles that face green building applications in developing countries due to the size of the challenges that face these projects there and takes Egypt as a case study. Furthermore, the study observed challenges facing project managers or green building administrators in this project through a questionnaire and online interviews with them. Finally, the study attempted to find solutions through project management best practices to overcome the main reasons that impede the green building movement in developing countries like Egypt. Sustainability and green architecture Sustainability and a better life for current and future generations have captured global attention in recent decades. In 1972, the term "sustainability" was developed for the first time at the world environmental conference in Stockholm with the Club of Rome through the discussions within the framework of "eco-development" [2]. In 1987, the World Commission on Environment and Development (WCED), published a report entitled "Our common future. In this report, "sustainable development" was defined as development that meets the needs of the present without compromising the needs of future generations [3]. A broader concept of SD is based on the integration of three dimensions: economic, environmental, and social [2], constituting the sustainability known as the Triple-Bottom Line. In 1997, John Elkington, in his book "Cannibals with Forks, " coined the term "triple bottom line" (3BL), which refers to economic prosperity, environmental quality, and social justice. Also, knowing the three pillars of sustainability, or triple P, as follows: • Profit-The first bottom line is the traditional measure of financial performance-How responsible has the company been in terms of assuring its competitive prosperity? • People-The second bottom line is the measure of a company's social account-How socially responsible has the organization been in terms of its impact on the quality of life of the individuals it affects? • Planet-The third bottom line is the measure of the company's environmental account-How environmentally responsible has it been in terms of its impact on natural ecosystems? [4]. Sustainable development insights have been applied in several fields in our life, but the application of SD in the construction field creates a type of building under the term "Green Building". According to the World Green Building Council, a "green' building is a building that, in its design, construction, or operation, reduces or eliminates negative impacts, and can create positive impacts, on our climate and natural environment. Green buildings preserve precious natural resources and improve our quality of life" [5]. The pervasion of the concept of sustainability and green architecture in the world has been accompanied by the so-called "Rating Systems" programs, which act as arbitrators on whether a building is a "green building" or not, and how green it is. 
Furthermore, there is an active role played by these programs in the marketing of the green architecture concept around the world by working on the spirit of competition in the design, construction, and operation of buildings. In addition, building owners sought to obtain certificates from these global evaluation programs to prove that their buildings are subject to the principles of green architecture and compete at the highest level in the evaluation. The most famous and widely used rating system is the American system (Leadership in Energy and Environmental Design (LEED)), which was introduced in 1998 by the US Green Building Council (USGBC). In addition (Building Research Establishment's Environmental Assessment Method (BREEAM)) system in the UK is the world's first green building assessment system. Project management With the increasing complexity of projects in general and construction projects in particular, the need for a holistic system to manage all the project's resources, stakeholders, documents, finance, requirements, and solve all issues that come up with the project's progress. According to the American Project Management Institute (PMI), project management is the application of knowledge, skills, tools, and techniques to project activities to meet the project requirements. Furthermore, project management enables organizations to execute projects effectively and efficiently through the appropriate application and integration of the project management processes identified for the project [6]. In the 1960s, Dr. Martin Barnes introduced the iron triangle (also called the triple constraint), which refers to the idea of being on time, within budget, and according to specifications. The triple constraints were the indicators of the project's success for decades until sustainable projects started to pop up, and other constraints have been raised as environmental and community dimensions [4]. The triple P took attention to a different project's success dimensions that were not realized before, along with the iron triangle, or triple constraint cost, time, and quality. Integration between sustainability and project management Project management and sustainability are two topics rarely integrated into one study or a framework, although project management could be a means of positive influence on the integration of sustainability dimensions into projects [7]. Recently, a few studies realized the role of project management in sustainability and green building's success; however, the existing studies are still insufficient [8]. In addition, most major project management frameworks, such as PMBok, ICB, ISO21500:2012, and Prince2, did not take sustainability and environmental issues into consideration [9]. Furthermore, it is noticed that most previous studies care about studying sustainable management and environmental management, but few of them address project management and its great role in sustainable and green architecture. According to Wu and Low [10], the credits related to project management in some of the rating systems (LEED2.2, Green Globes, BCA Green Mark 3.0), take around 20% of the credits in these rating systems. Furthermore, green buildings must be viewed as a comprehensive solution that integrates sustainable principles throughout the project life cycle, from project planning to design, construction, and operation, rather than simply as a collection of green materials, technologies, and other environmentally friendly innovations [10]. 
Green buildings are often developed according to rating system guidelines, which provide guidance on measurements and can provide recognition and verification of the level of compliance [11]. Rating systems are designed to evaluate the performance of an entire building or a specific section of a building across the planning, design, construction, and operation phases. This requires a specific management system to manage all procedures and processes of the rating system, the registration and documentation of credits, the interactions between the various stakeholders in the project, the responsibilities of everyone on the project team, and resource, cost, and time management. It is worth mentioning that the management systems of these projects must be adaptable to the project requirements and sustainability goals. Recently, there have been a few attempts to develop frameworks or methodologies for sustainable projects. However, until now, most of those attempts have not progressed beyond the study stage and have not been applied to green building projects in a significant way. For example, Marcelino-Sádaba et al. [8] developed a framework in their study published in 2015 to help project managers deal with sustainability projects based on four dimensions: products, processes, organizations, and managers [8]. Globally, there is the Projects Integrating Sustainable Methods (PRISM) methodology, which was introduced in 2013 by the international organization Green Project Management (GPM). PRISM is a structured methodology for sustainable "green" project management, which is based on a series of standards and incorporates their use with the standard ISO 21500:2012, "Guidance on Project Management" [12]. However, this methodology has not yet been applied in a significant number of projects, and the studies that address it are very rare. In addition, according to the questionnaire applied in this study, the methodology is not recognized or used at the local level. Another perspective, or level, that addresses the integration between sustainability and project management is the organizational level. Sustainable project management is an integral part of the sustainable management of organizations. Organizations interested in sustainable development determine clear sustainability goals and issue sustainability reports in which they define their vision and future plans towards sustainability. To achieve these goals, organizations define internal practices and projects, in the form of individual projects, programs, or portfolios, aimed at achieving the defined sustainability goals. Naturally, not all sustainable projects are implemented because of organizations' sustainability strategies; many sustainable projects are implemented for marketing considerations or to follow the new trend, especially in the construction field. However, projects that are delivered based on clear organizational goals and visions from a sustainability perspective are most likely to have a good chance of continuity and improvement. Although there are numerous studies on energy management and environmental conservation via ISO 50001 and ISO 14001, a holistic method for the management of sustainability in the context of an organization is still lacking [13]. Moreover, there is a gap between organizations' perception of the importance of sustainable management and its actual use in practice [14].
However, there have been some attempts recently in some studies to integrate the management methods with sustainable principles with the aim of introducing organizational sustainable management. Mustapha et al. [13] proposed the development of an integrated green management framework called the Sustainable Green Management System (SGMS). A systematic, integrated, and efficient approach for collecting, monitoring, analyzing, and managing information and resources. SGMS leads to sustainable organizations, saves resources, removes significant redundancies, promotes cleaner production, and enhances the profitability and efficiency of an organization [13]. Another important aspect in addressing the integration between project management and sustainability or green buildings is the contribution of the project managers to the success of sustainable projects. According to Hwang and Ng [15], many studies have been concerned with the efficiency of project managers to ensure the success of the project. A few of them have been concerned with the project managers' execution of green architecture projects and the challenges they face in such quality of projects. Therefore, in their study, they identified the most important challenges facing project managers in green architecture buildings. Among them, the long period required for planning and designing green buildings; the unavailability of subcontractors, professionals, green materials, and equipment; high-cost and risk; and the lack of experience and knowledge. Hwang and Ng [15] also identified critical knowledge areas and skills that are essential to respond to the challenges. The most important knowledge areas were schedule management and planning, stakeholder management, communication management, cost management, and human resources management. In addition, the most important skills that are required to mitigate the challenges were analytical, decision-making, team working, delegation, and problem-solving skills [15]. Also, Martens and Carvalho [14] pointed out that project managers can improve their results in projects when looking at four factors, which are sustainable innovation business model, stakeholders' management, economics, competitive advantage, environmental policies, and resource saving [14]. Methods This qualitative exploratory research aims to define the role of project management in the success of green building applications and how it helps in overcoming the obstacles facing these kinds of buildings. For this purpose, a systematic literature review was conducted for a better understanding of the green buildings' obstacles and challenges facing these buildings in developing countries like Egypt as a case study for some reasons as follows: -Egypt is one of the countries that suffers from a lack of energy sources, environmental pollution, the pervasion of some diseases due to this pollution, and economic problems. As the movement of sustainability and green building principles contribute significantly to solving these problems, it becomes necessary to study the reasons that prevent the pervasion of sustainability and green buildings in Egypt, find solutions, and overcome these obstacles. -Although the significant recognition of green building projects' importance in Egypt, a very limited number of certified green building projects have been observed, principally in the national rating system GPRS. 
-All previous studies addressing the green building project crisis in Egypt totally neglected the role of the most important factor in project management, which led to wondering how these projects are managed in Egypt and how the cases and numbers of green building projects could be improved by a successful project management system. Following the SLR, an online questionnaire and interviews with project managers and sustainability consultants were conducted to determine how green project buildings are managed in Egypt, a more specific ranking for the most affective challenges that obstruct green buildings in Egypt from the challenges identified previously in previous studies, and finally to determine how the green building situation in Egypt could be improved. The questionnaire consists of 18 questions with two types of questions, open questions, and multiple-choice questions aiming to benefit from the experience of project managers and to define obstacles they faced in managing green building projects in Egypt, the main aims need to be elicited from the questionnaire as follows: 1. The main project phases that the project managers participated in and their main role in the project. 2. Are the project managers following specific management methods/methodologies to address green buildings' unique natural and requirements? 3. Most management methodologies that have been followed in these projects and what are the most useful software programs have been used. 4. Who decides the management methods in the project and are the project stakeholders participating in choosing the way in which the project has been managed. 5. The main obstacles that project managers face when managing green buildings in Egypt. 6. The main factors which caused discrepancies between the estimated project cost/ time and the final project cost/time achieved. 7. The project managers' point of view on how management systems could be developed to be convenient for green building projects. The target group for the study is project managers and green building consultant who have worked in certified/registered green building projects in Egypt. The method which has been used to collect data is an online survey and personal interviews by using voluntary response sampling. The total number of responses are 10 responses varies among project managers and green building consultants. For more clarification, the research method was summarized in Fig. 1. Results and discussion Green architecture insights have appeared in Egyptian buildings since the early eras in building design considerations such as taking advantage of building location, designing buildings to overcome external environmental conditions without harming the environment, benefiting from daylight, optimizing resource use, and other environmental design concepts now adopted by green architecture. But with the passage of the ages and industrial progress, these concepts faded away and the natural solutions were replaced with artificial solutions in buildings, which led to environmental harm and natural resource exploitation. Green architecture as a term was introduced in Egypt in the 1990s at the first symposium of "Bioclimatic Architecture", which was held in 1996 [16]. After launching the LEED system (the most popular rating system in the world) in 1998, small steps have appeared toward this trend in this period until the first green building approval in 2010 under the LEED rating system. 
Following this, a few investors and developers in Egypt were interested in registering their buildings in the LEED program as a kind of marketability to keep up with the new trend. Green building project challenges in Egypt Sustainable construction projects known as green building projects face some obstacles and challenges with their implementation in reality. Particularly in developing countries due to certain factors that will be discussed later. In Egypt, as a case study, there are limited numbers of green buildings in the modern era which are certified by third-party or green rating systems, whether by LEED or the Green Pyramid Rating System (GPRS) the national green building rating system in Egypt). The number of buildings registered in LEED until 2021 reached 63, with only 22 certified [17]. As well as there is only one building that has gained the LEED platinum certification in Egypt. On the other hand, the application of the (GPRS) has been neglected at the level of the public and private sectors since its launch in 2011 by the Egyptian green building council. Unfortunately, there are only 5 buildings that were certified under this system [18]. Comparing the number of certified green buildings in Egypt with other countries in light of the rapid movement globally toward sustainability and green buildings, found that Egypt's movement toward green architecture is very slow and needs more encouragement from the government and construction developers, as well as more studies of the factors leading to such delays and exploring solutions to promote strongly the application of green architecture principles. During the last decade, local studies in Egypt focused on studying the application of green buildings, but few really addressed the main reasons for preventing the pervasion of green buildings in Egypt and the main problems that face these buildings. In this section, the reasons behind the green building crisis in Egypt will be discussed from the most important previous studies. Some studies focused on general reasons and determining the problems facing green building in Egypt are listed below: -The absence of governmental incentives toward green building. -High initial cost for the green building compared with the traditional type. -Lack of design team specialists who are aware of environmental control strategies and building simulation programs to choose the optimum choices for the building's environmental performance. -Unavailability of the required technology for some credits. -Lack of contractors' awareness. -Unavailability of recycling companies for construction materials. -Unavailability of data about the life cycle cost of the available materials. -Unavailability of low-emitting materials in the Egyptian market [19]. -Lack of a database related to green building materials [20]. -The unified Building Law No.119 that was released in 2008 and its executive appendix, which was released by the Ministerial decree No. 144 in 2009, were not formulated having green concepts as a governing parameter [21]. On the other hand, there are studies that point to some reasons behind the inapplicability of the Green Pyramid Rating System (GPRS) in the Egyptian environment. For instance, the lack of knowledge or awareness by architects towards certain elements, principles, or even criteria when it comes to the GPRS. Also, the failure to adapt to the local context to cultural issues, resources, priorities, practices, and economic challenges. 
According to Attia and Dabaieh [22], GPRS requires compliance with Egyptian and American codes at the same time, which has led to inconsistencies in some cases and requires a lot of effort. Furthermore, there are missing guidelines and documentation methods in some credits, for example, indoor air quality and material credits. In addition, GPRS ignores the local Egyptian built environment, for example, local building techniques, vernacular architecture, heat island effect, informal housing, natural ventilation and ceiling fans, solar water heating, Cairo air pollution, occupational behavior, health, Egyptian society, and economic aspects [22]. Furthermore, some studies highlighted the lack of a database related to (GPRS) certified materials that can be used as a benchmark for assessment and as a guide for the user. In spite of that, there are currently over 120 international green labeling programs for building materials worldwide [19]. As well as the lack of comprehensiveness in achieving the remains of social, cultural, and economic sustainability goals [23]. All the mentioned studies did not recognize role of project management, along with project manager competency, and how a poor management system could affect the successful implementation of green building projects anywhere, particularly with regard to overcoming the extracted obstacles, and helping in implementing successful green building projects as the previous studies emphasized as mentioned in the literature review. From the systematic literature review, the research reached an important hypothesis, which is that green building situation in Egypt could be improved and go faster in steady steps by developing and improving the project management methods used in implementing the green building projects. Therefore, to experiment research hypothesis, it is needed to know how green building projects are managed in Egypt and study the management methods used in these projects. The role of project management in green building project success This section of the study aims to investigate how green-building projects are managed in Egypt. Moreover, discover if the way of managing these buildings affects project success in achieving the sustainability goals and whether it is among the factors leading to the obstruction of the construction of green buildings in Egypt. Furthermore, the study investigated how to overcome the obstacles identified in the research problem by project management. Accordingly, an online questionnaire and interviews were conducted with Egyptian project managers and green building administrators (with experience of 3 to 20 years in green buildings) who worked in green buildings in Egypt, whether registered or certified buildings, under LEED or GPRS. The results came as follows: -There is confusion between the roles of project managers and green building consultants in most cases, while the responsibilities of each of them are also unspecified. -In some cases, the project manager is not involved in the green building certification process and all responsibilities related to the green building process, and rating system certification is the green building consultant's responsibility. -Involved personnel presented themselves as project manager and green building consultant at the same time, in spite of the fact that their responsibilities did not cover all aspects of project management. This means that there are some neglected management areas in the projects due to multiple responsibilities. 
-The project managers/green building consultant most involved in green building projects is at the construction stage, followed by the design development, then the schematic design and bid stage, then the conceptual design, and finally in the preconcept design stage as shown in Fig. 2. -The main role of the project manager in the green building project is sustainable management and then selection of rating system credits and verification of rating system achievement of prerequisites and credits. However, there are some management areas that do not get proper attention as other important issues such as time, quality, and risk management. Moreover, there is a neglected area such as stakeholder management, as shown in Fig. 3. -Seventy percent of the results showed that there is no specific management methodology to be followed in managing green building projects, and the most used management methodology is Agile due to its ability to control project output and then Waterfall, Prince 2, Critical Path, and PM Book framework as shown in Fig. 4. -The responsibility of choosing the methodologies and software programs used in green building projects falls on the project manager, then the green building consultant, and finally the project management office, as shown in Fig. 6. -The main obstacles that project managers face when managing green buildings in Egypt by the order are as follows and shown in Fig. 7: -The lack of awareness of contractors. -The absence of government incentives. -The lack of professional expertise. -The lack of recycling companies. -The lack of data on the lifecycle cost of available materials. -The lack of green resources and their data. -The lack of technology required for some credits. -57% of the results showed that there were no discrepancies between the estimated project cost and the final project cost achieved, while the other 43% who admitted that the discrepancy existed, 67% of them classified the discrepancy as minor, and 33% as intermediate, as shown in Fig. 8. The reasons behind this discrepancy are the unrealistic estimation, the change in material costs, and those green building requirements that were overlooked in the early design phase. -Seventy-eight percent of the results showed that there are discrepancies between the estimated project duration and the final project duration achieved in green building projects, and these discrepancies are estimated as 50% intermediate and 25% major and 25% minor as shown in Fig. 9. The reasons for this are that the process is not usually smooth, there are many project stops, the client changed the design, and the estimates are unrealistic. Conclusions In general, as mentioned in the literature review, there is a lack of project management methods/methodologies that address sustainable construction projects around the world. In addition, it is concluded from the study that there are many defects in the way that green building projects are managed in Egypt, which could be one more obstacle in addition to the set of obstacles extracted from the previous studies, which led to delays in the green building movement. As it is concluded from the questionnaire results, open questions, and the interview with the project managers as follows: -There is no clear methodology governing the green building management process in Egypt. All efforts in that field rely on the vision and experience of the project manager with the assistance of current general management methodologies such as Agile, PM BOOK, and Waterfall. 
As well as, these methodologies are not used efficiently to overcome the major obstacles and solve the problems that these projects are exposed to in Egypt. -The roles of project managers and green building consultants are unclear. Sometimes the project manager and the green consultant are the same person in charge of the managerial work as well as the technical consultant and certification process, which is a huge task, especially in large-scale projects. In other cases, the project manager is completely isolated from the sustainable management or the green certification process, which also leads to poor project bonding, and does not activate the principles of the integrative process. -Cases in which the project manager and the green building consultant are the same person, showed complete ignorance of some management knowledge areas like stakeholder management and weak risk management. As well as, the concept of a green project manager is missing, the person who has the project management knowledge, including management methodologies, methods, tools, and techniques, and has leadership skills to lead the entire project team and organize all project processes in an integrative manner holistically in the context of sustainability. -Late commissioning of a green consultant in the project or deciding to follow building green principles after the start-up design phase may result in repeat work, increase budget, schedule delays, and failure to obtain green building certification. -Stakeholder and risk management knowledge areas are the most neglected, although studies emphasize the importance of these areas in green building projects' success. Recommendations -Green architecture needs to develop more simply applied management methods, methodologies, tools, and techniques in order to overcome some of the obstacles facing green buildings around the world, especially in Egypt, to encourage the sustainability movement. -There should be a distinction between the roles and responsibilities of project managers and green building consultants. The main factor in the success of the project is that everyone knows their role in the project and what their duties are. -The green building consultant is the person who leads the building certification process and must have knowledge of the technical data involved in green building construction, be supportive of the team on technical matters, and coordinate all project disciplines. Meanwhile, the green project manager is the person who deals with the management aspect of the project in the context of sustainability. In addition, he must be familiar with the principles of sustainability and green buildings, the requirements of the rating system, and the process involved in the system of certified green buildings. -The participation of a green project manager in the project from the pre-design stage is mandatory for organizing all project operations, maintaining project sequence, putting the project on track, recording and solving problems, making decisions, and others. Any delay in involving the green project manager or even the green consultant from the pre-design stage of the project affects the success of the project. -It is very important to incorporate green building requirements into the design from a very early stage. This doubles the ease of fulfilling these requirements and increases cost-efficiency in addition to saving time. 
-Overcoming issues of lack of knowledge of the team, contractors, suppliers, and operators through scheduled training during the project life cycle. This training should be continuous, repetitive, and defined in pre-design in a separate plan developed by the project managers. -Documentation of green building projects is a very important issue, especially for the certification process, so the documentation plan should be defined at the initiation stage and developed throughout the project life cycle. -Stakeholder management is an effective management area that needs to be noticed and given more attention by project managers in the field of green building. -Most of the obstacles that contradict sustainable construction in Egypt could be overcome by project management. The most important obstacles identified in the literature review and ranked by the project manager in the questionnaire are as follows, with some suggestions from the project management point of view: Lack of awareness of contractors and professional expertise, which could be mitigated in the current projects and future projects by organizing scheduled training that is performed throughout the project life cycle. In addition, recording the lessons learned and sharing them within the organization and outside the organization, if possible, is an active action toward increasing awareness and professionalism in this field. Through stakeholder management, project managers could participate with the authorized government agencies in the early project discussions to be involved in the project and recognize the benefits that the project will introduce to the surrounding environment and the community, which could lead to an increase in the authorized agency interest and recognition toward sustainable construction and green buildings and could lead to increase government incentives for the project. National database systems for all available green building materials with lifecycle assessment data, recycling companies, sustainability, responsible manufacturers, and all required green building resources are needed to facilitate the green certification process and overcome the lack of information and verified green resources. The database systems should have frequent updates periodically to include all the new resources and companies. Project managers could avoid or mitigate the problem of the lack of professionals or technology required for some credits by involving professionals or specialized agencies from abroad in the project, which required efficient human resources management and strong communication plans to acquire and manage the project virtual team effectively. Finally, research on green building project management should be encouraged, especially at the local level, due to its important role in the success of the project, overcoming the obstacles that may face this type of construction, and the ability to organize the process and coordinate between several of its elements.
2022-07-11T15:02:07.929Z
2022-07-09T00:00:00.000
{ "year": 2022, "sha1": "0c0d33c60f337eae055ae1b4abd948eeb74d57dd", "oa_license": "CCBY", "oa_url": "https://jeas.springeropen.com/track/pdf/10.1186/s44147-022-00112-5", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0bebf1be8b71e19d2c446b4ecd3ac6253bd16e4c", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
256214204
pes2o/s2orc
v3-fos-license
The Role of Arginine-Vasopressin in Stroke and the Potential Use of Arginine-Vasopressin Type 1 Receptor Antagonists in Stroke Therapy: A Narrative Review Stroke is a life-threatening condition in which accurate diagnoses and timely treatment are critical for successful neurological recovery. The current acute treatment strategies, particularly non-invasive interventions, are limited, thus urging the need for novel therapeutical targets. Arginine vasopressin (AVP) receptor antagonists are emerging as potential targets to treat edema formation and subsequent elevation in intracranial pressure, both significant causes of mortality in acute stroke. Here, we summarize the current knowledge on the mechanisms leading to AVP hyperexcretion in acute stroke and the subsequent secondary neuropathological responses. Furthermore, we discuss the work supporting the predictive value of measuring copeptin, a surrogate marker of AVP in stroke patients, followed by a review of the experimental evidence suggesting AVP receptor antagonists in stroke therapy. As we highlight throughout the narrative, critical gaps in the literature exist and indicate the need for further research to understand better AVP mechanisms in stroke. Likewise, there are advantages and limitations in using copeptin as a prognostic tool, and the translation of findings from experimental animal models to clinical settings has its challenges. Still, monitoring AVP levels and using AVP receptor antagonists as an add-on therapeutic intervention are potential promises in clinical applications to alleviate stroke neurological consequences. Introduction Stroke results from an acute central nervous system injury caused by the disruption of cerebral blood flow or bleeding within or around the brain, with consequent neurological damage and loss of function. Ischemic stroke due to cerebral blood vessel occlusion comprises most of the cases (~62% of all new strokes in 2019), followed by intracerebral hemorrhage stroke (28%), and subarachnoid hemorrhage stroke (10%) [1]. Independent of the cause, in 2019 stroke remained the second-leading cause of death globally and the third-leading cause of disability and all-death mortality combined [1]. The extent of the neurological injury in response to stroke, particularly ischemic stroke as the most common type, depends on various factors, including the severity, duration, and area affected. The pathological mechanisms are complex and the time window to intervene at each stage is limited. With the decrease of cerebral blood flow, oxygen and glucose deprivation that follow change the ionic environment, thus promoting excitotoxicity and neuronal loss [2]. The resulting blood-brain barrier (BBB) disruption, increased levels of reactive species, and neuroinflammation all contribute to poor outcomes in ischemic stroke. In the post-stroke phase, a chronic neuroinflammatory process might occur at the infarct core and distal regions, promoting long-term neuronal tissue damage. A subsequent strokeinduced secondary neurodegeneration may occur in the distal regions. Neurodegeneration may lead to loss or impairment of motor function, autonomic and/or cognitive systems depending on the functional location of neuronal loss and the structure affected [3]. Despite scientific advances in understanding the mechanisms of stroke, etiology, and prognosis, effective treatments are still deficient. 
For instance, though mitigating post-stroke neuroinflammation is promising in improving stroke outcomes, multiple neuroprotectants have failed clinical efficacy to date [2]. Hence, a constant need to search for novel or complementary targets to decrease the burden of stroke. Likewise, long-term prognosiseffective biomarkers remain a cause for concern. In this narrative review, we present several pieces of evidence suggesting arginine vasopressin (AVP) neuropeptide receptors as a potential therapeutic target, and copeptin, a surrogate marker of AVP, as an effective biomarker for long-term prognosis and risk stratification in stroke. AVP is a neuropeptide critically involved in the maintenance of body homeostasis. The primary physiological mechanism controlling AVP release to the circulation is osmotic stimuli, followed by nonosmotic stimuli, including blood volume and pressure changes [4]. However, AVP secretion also occurs during stress-related stimuli associated with acute onset diseases, including stroke, in a hypersecretion manner [5,6]. As a neurotransmitter and neuromodulator within the central nervous system (CNS), AVP plays significant roles in maintaining physiological cerebral fluid balance, electrolyte homeostasis, and vascular resistance [5,[7][8][9][10]. All those functions are highly affected by a cerebrovascular accident, suggesting that dysregulated AVP release may play a significant role in stroke pathophysiology. Indeed, AVP hypersecretion is implicated in major stroke-related complications, including brain edema, vasoconstriction, oxidative stress, BBB disruption, and neuroinflammation [11][12][13][14]. The AVP precursor peptide (pre-proAVP) is synthesized mainly within neurosecretory neurons of the hypothalamus and subsequently transported along the axons to the neurohypophysis. Proteolytic cleavage during axonal transport forms the mature AVP alongside copeptin and dissociates their intracellular carrier protein neurophysin II in equimolar amounts, independent of the stimuli applied [15][16][17]. In contrast to AVP's short half-life and unstable nature, copeptin is stable in the serum and plasma of patients with elevated AVP, reliably reflecting AVP circulating concentrations [18]. Hence, copeptin has been used as a surrogate marker for AVP in multiple disease states, including myocardial infarction and stroke [18]. Recently, studies exploring the detrimental role of AVP in animal models of ischemic and hemorrhagic stroke have supported a positive correlation between copeptin level and stroke severity. Moreover, an association between increased levels of AVP in stroke patients and poor prognosis has been established [19][20][21][22][23][24][25]. Hence, from a clinical perspective, the importance of understanding further the AVP involvement in stroke-elicited damage progression as copeptin, as it is secreted in equimolar concentrations as AVP and has been considered a surrogate marker of AVP, is an effective biomarker in terms of risk stratification after ischemic stroke and in predicting vascular events after a transient ischemic attack (TIA) [21,26]. In this narrative review, we first briefly summarize the physiological role of AVP. We then discuss the underlying mechanisms contributing to AVP hypersecretion during the onset of the cerebrovascular accident and how this phenomenon causes detrimental effects on both systemic and local scales. Furthermore, recent data suggesting a promising role for copeptin use as a prognostic tool in stroke is introduced. 
Finally, we summarize current evidence for using selective and mixed AVP receptor antagonists in the acute phase of ischemic stroke and as an add-on therapy for mitigating post-stroke complications. The discussion is primarily focused on AVP in ischemic stroke, as most data has studied this type of stroke. Our goal is to provide the reader with a comprehensive, state-of-the-art narrative review highlighting AVP's critical role in stroke pathology and its potential for therapeutic targets. The Physiological Role of Arginine Vasopressin AVP is synthesized in magnocellular neurons located in the supraoptic and paraventricular nucleus of the hypothalamus (PVN). The AVP gene encodes the precursor protein pre-proAVP, which is proteolytically cleaved during axonal transport towards the pituitary gland to form AVP and copeptin in equal concentrations [15,16]. Physiological secretion of AVP to the circulation occurs primarily in response to osmotic stimuli, sensed mainly by osmoreceptors and through nonosmotic cues detected by baroreceptors and atrial stretch receptors [4]. Once released, circulating AVP acts as a hormone in widespread targets regulating many physiological functions to promote fluid homeostasis by controlling water balance and plasma ion concentration, and maintaining systemic blood pressure, by controlling the vascular tonus [4]. AVP acts primarily through AVP activation of G protein-coupled receptors, namely: V1a (V1aR) and V2 receptors (V2R), to promote its physiological functions [8]. AVP acting on V2R found in the distal tubules and collecting ducts of the kidney regulates proper fluid exchange and plasma osmolality within the renal system [36]. The AVP/V2R activation causes upregulation of aquaporin 2 (AQP2), resulting in ion gradient-driven water influx and subsequent urine concentration, plasma dilution, and plasma osmolality reduction [36]. Blood pressure regulation by AVP is complex and still not fully understood, given that AVP has interactions with other systems, such as the sympathetic and the reninangiotensin system, both critical regulators of blood pressure. Still, AVP predominantly regulates blood pressure through V1aR and V2R, located on endothelial and vascular smooth muscles. Their activation by AVP causes either vasoconstriction or vasodilation, depending on the vascular bed [37], and aids blood pressure maintenance in normotensive and hypertensive subjects [38,39]. Moreover, AVP vasoconstriction response depends upon arteriolar diameters. As such, the vasoconstrictive response to AVP in large arterioles is more significant than that observed by norepinephrine treatment [40]. On the other hand, smaller arterioles do not show differences between treatments in rodents' microcirculation [40]. Hence, AVP has clinical importance and use in advanced hypovolemic or vasodilatory shock states [41]. In the CNS, AVP acts as a neurotransmitter and neuromodulator and participates in various physiological and behavioral functions, as summarized in the remainder of this section. In concert with the corticotropin-releasing hormone, AVP regulates corticosteroids' secretion within the CNS and periphery, contributing to the endocrine stress response [42,43]. In the CNS, AVP regulates the hypothalamic-pituitary-adrenal (HPA) axis through a third characterized receptor named V1b located in the corticotropic cells in the anterior pituitary gland (V1bR, also called V3) [44]. 
Additionally, AVP directly activates V1aR in the adrenocortical cells to release adrenocorticotropic hormone (ACTH), hence potentiating the release of cortisol, an essential hormone involved in stress response [45,46]. Moreover, AVP is present within the so-called intrinsic vasopressin fiber system composed of astrocytes and astrocyte-like cells. Although the precise function thereof remains unknown, it is postulated that the vasopressin fiber system facilitates neocortical water flux via AQP4 in response to V1aR activation. Thus, playing a role in brain water regulation and ion homeostasis [26,47]. Within the suprachiasmatic nuclei (SCN), daily rhythm regulation is tightly connected to AVP signaling and HPA activity [51]. HPA-regulated cortisol increase might be inhibited when the AVP is derived from parvocellular SCN neurons or stimulated if derived from parvocellular PVN neurons [52,53]. Although AVP's role in daily rhythm regulation has not been clearly defined, based on recent bioluminescence findings, AVP signaling influences SCN period, precision, and organization [54]. AVP affects neuronal oscillations generated at various scales, from individual neurons through local networks to multiple neural systems across the brain [55]. Among other functions, AVP activation translates into fine-tuning neural responses behind emotional reactivity, risk acceptance, cognitive functioning, social interactions, and maternal behavior in mammals besides oxytocin [56,57]. Additionally, AVP is one of the modulators of neurogenesis. In rat models, brain AVP content gradually increases from 16 days of pregnancy, while pituitary AVP rapidly increases from 19 days until birth [58]. Trials on rodents showed that lower AVP concentration neonatally is associated with reduced brain weight, mainly affecting the cerebellum. In 10-15% of cases, these abnormalities persist throughout life [58]. Arginine Vasopressin Hypersecretion and Release Mechanisms during Stroke Several studies indicate excess release of AVP in association with somatic and psychological stressors present in acute onset diseases, including myocardial infarction, sepsis, and stroke [59]. Interestingly, the stress-related release of AVP in response to pathological stimuli shows a much higher increase in magnitude than classical osmotic stimuli. For instance, in the baboon model of hemorrhagic shock, baseline copeptin level (reflecting AVP) increased approximately 36-fold (median level from 7.5 to 269 pmol/L) [60]. Indications of increased release of AVP in ischemic stroke were reported in animal models of stroke and stroke patients. AVP gene and protein expression were reported as elevated within the SON and PVN in an experimental model of cerebral ischemia and reperfusion, suggesting transcription and translation of AVP are increased in acute brain injury [61]. Patients with ischemic stroke show a significant increase of AVP in plasma and cerebrospinal fluid [62][63][64][65]. Elevated AVP levels in blood plasma were observed in patients with ischemic stroke during the 24 h period [63]. The observed AVP hypersecretion was independent of plasma osmolality and mean arterial pressure and correlated with the mean size of the lesion and level of neurological deficit. However, no correlation was observed between AVP release and vascular territory of ischemic injury [63]. 
During the acute phase of stroke, secretion of AVP occurs mainly from neurosecretory MNCs neurons located in the PVN and SON, as the AVP neurons are resilient to ischemia and are capable of secreting AVP after ischemic insult [66,67]. However, other local sources of AVP during brain injury were also identified, including activated microglia, choroid plexus, and to a lesser extent, brain endothelial cells [62,68]. Notably, AVP release in experimental stroke was found in the infarct and peri-infarct spatial location areas [62,66]. In the acute phase of ischemic stroke, overactivation of the angiotensin-convertingenzyme (ACE)/angiotensin (Ang II)/angiotensin receptor 1 (AT1R) axis, sympathetic and baroreceptor dysregulation are frequently observed. Such dysregulation causes blood pressure fluctuations, impairs cerebrovascular autoregulation, increases pro-inflammatory cytokine production in the parenchyma, and induces hyperglycemia [67,[69][70][71][72]. The state of sympathetic predominance after stroke can cause catecholamines release, which activates the magnocellular neurons in PVN and SON, resulting in AVP release [73,74]. Moreover, the state of hyperglycemia following stroke (as a result of direct post-stroke hyperglycemia or exacerbation of pre-existing diabetes) may increase the number of AVP-expressing neurons in PVN and SON, resulting in increased release of AVP during stroke [75,76]. Another nonosmotic systemic stimulus contributing to AVP release during stroke is the increased intracranial pressure (ICP) [77]. ICP in stroke was classically attributed to cerebral edema formation after stroke [78]. However, recent evidence demonstrates that increased ICP is also associated with the acute phase of stroke before the edema formation [79]. In stroke, mechanical pressure created before/during edema formation exerts direct pressure on the hypothalamus, which results in AVP release from PVN and SON [77]. The local mechanisms contributing to AVP release during stroke include glutamate release, local hyperosmotic environment formation, structural and functional changes in astrocytes located in PVN and SON, and pro-inflammatory mediators' release. Excessive and prolonged glutamate release occurs during stroke in infarct and peri-infarct areas, leading to exaggerated depolarization of postsynaptic neurons and neuronal death through excitotoxicity [80]. After the stroke, elevated pools of circulating glutamate are detected in patients' plasma and cerebrospinal fluid, which can directly interact with AVP neurons in PVN and SON [81,82]. AVP neurons in the hypothalamus receive dense glutaminergic innervation, accounting for approximately 25% of the total number of synapses formed in PVN and SON. Previous work shows glutamate induces AVP release in conscious rats by stimulating non-n-methyl-d-aspartate (NMDA) receptors [83]. Furthermore, after middle cerebral artery occlusion (MCAO) in rats, glutamate levels rapidly increase in PVN and SON in the first 15 min after the procedure. Moreover, ischemic stroke induces increased BBB permeability, enabling the passage of ions, molecules, and fluids into the brain parenchyma, disrupting ion homeostasis [14]. Particularly after ischemic injury, an increase in BBB Na+/H+ exchangers, Na+-K+-Cl− cotransporters, or the calcium-activated potassium channel KCa3.1, induces the uptake of the Na+ ions in the brain parenchyma [14,84]. Consequently, the formation of a local hyperosmotic environment in PVN and SON can enhance the release of AVP [85]. 
The release of AVP during stroke is also regulated by interactions of astrocytes and microglia with magnocellular neurons located in PVN and SON. In physiological conditions, the activity of AVP neurons is under negative feedback modulation from astrocytes [86,87]. Increased AVP secretion in PVN and SON increases the aquaporin 4 (AQP4) expression, which is associated with the extension of glial fibrillary acidic protein (GFAP) filaments. Therefore, astrocytic processes can reversely expand around AVP neurons and inhibit AVP secretion [87]. However, during stroke, the maladaptation of astrocyte plasticity causes the regulatory volume decrease of astrocytes, followed by the release of glutamate into the extracellular space [88]. It has been demonstrated that MCAO and basilar artery occlusion (BAO) reduce the GFAP and AQP4 in astrocytes located around AVP neurons, followed by the separation of GFAP from AQP4, thereby increasing the activity of AVP neurons [89,90]. Additionally, activated microglia and astrocytes can release many pro-inflammatory mediators during stroke [91]. Previous works indicate that AVP can be released during a cerebral injury in response to TNFa, IL-1, and IL-6. Furthermore, AVP acts synergistically with those cytokines and increases the local release of CXC and CC chemokines [92]. Following the hypersecretion of AVP during stroke onset, an early (1-2 h) increase of V1aR is observed in astrocytes, neurons during axonal beading, and brain endothelium [6,31,93]. An increase in V1aR expression is correlated with AQP4 upregulation. The V1aR is also redistributed from the astrocyte body to the astrocytic processes, where the AQP4s are localized [90]. Consequently, the injured brain parenchyma is more susceptible to AVP-induced effects, particularly brain edema formation. Arginine Vasopressin and Hyponatremia The detrimental effects of AVP overproduction can be observed in patients with the syndrome of inappropriate antidiuretic hormone secretion (SIADH), which is commonly associated with stroke (3.9-45.3% and 40-45% on admission and during hospitalization, respectively) [94,95]. AVP's action to effectively lower the plasma osmolality is mainly focused on reducing the concentration of sodium, a predominant osmolyte. Thus, a pathological increase in AVP secretion results in hyponatremia. Of note, although AVP induces water conservation in the kidneys, the regulatory mechanisms, i.e., the reninangiotensin system, prevent extracellular volume expansion. Therefore, the elevated levels of AVP observed in SIADH result in the pathological state of euvolemic hyponatremia [96]. The severity of CNS damage caused by hyponatremia is causally associated with the magnitude of sodium deficiency and the rate of change in plasma sodium concentration [96]. In severe cases of hyponatremia, the hypoosmotic state produces a gradient-driven water influx into the brain causing cerebral edema [97]. The edema and increased intracranial pressure (ICP) promote further AVP secretion followed by edema accretion [26,77]. For instance, posterior reversible encephalopathy syndrome (PRES), a clinicoradiological condition characterized by brain edema as a primary symptom was associated with an increased AVP secretion [98]. As mentioned above, nearly half of stroke patients develop hyponatremia either on admission or during hospitalization [94]. Moreover, sodium concentration below 135 mmol/L is associated with higher mortality and larger baseline intracerebral hemorrhage volume [99][100][101][102]. 
Although there are many overlapping causes of low sodium levels in stroke patients, AVP's role seems significant, especially in hemorrhagic stroke. Indeed, in ischemic stroke and mild/moderate subarachnoid hemorrhage cases, 7% and 71% of hyponatremia cases were attributed to SIADH, respectively [103,104]. Arginine Vasopressin, Autonomic Nervous System, Inflammation, and Immunosuppression Response Strokes may provoke damage to anatomical structures of the autonomic nervous system, causing autonomic dysregulation. Likewise, AVP acts centrally, coordinating autonomic responses, i.e., AVP administered intraventricularly in rodent models causes sympathetic activation [105][106][107]. AVP-mediated sympathetic activation and stroke-elicited disruption of central control of the autonomic nervous system could account for catecholamine release, and thus immunosuppression. The diminished immune response can be detrimental in stroke patients due to the increased risk of infection complications and potential bystander autoimmune response [108]. AVP is secreted in response to IL-6, an inflammatory cytokine released during infection. This temporal relationship may explain the hyponatremia associated with high CRP values [109]. Moreover, AVP modulates immune response by increasing IFN-y and primary antibody production in inflammatory cells. Furthermore, AVP stimulates the release of prolactin, a hormone with pro-inflammatory properties [110]. This data suggests the involvement of AVP in inflammatory response exacerbation (caused by, e.g., hospital-acquired infection), which may be related to mid-term stroke outcome worsening. Arginine Vasopressin and Stress Response The release of AVP, and other stroke-related events, such as cytokine release and ischemic disruption of HPA inhibitory areas, can entail the HPA axis activation [111,112]. Indeed, one of the earliest events following a brain injury caused by ischemia or hemorrhage (depending on the type of stroke) is the activation of the HPA axis and subsequent hypercortisolism [57,112]. As previously mentioned, AVP, in concert with corticotrophin-releasing hormone (CRH), can stimulate the pituitary release of ACTH, subsequently activating adrenal glucocorticoid secretion [45,46]. AVP, in physiological conditions, is a weaker ACTH secretagogue compared to CRH. However, AVP acts synergistically with CRH and causes extensive ACTH release after stimulation by various acute stressors, including stroke [113]. The resulting increase in cortisol starts promptly after the stroke onset and persists from seven days to months after an insult [114]. The effect of cortisol on brain tissue is detrimental and is associated with structural changes in all brain regions, but predominantly gray matter [115]. Mounting evidence suggests that chronic cortisol elevation causes atrophic changes in the hippocampus, leading to hippocampus-dependent learning and memory impairment [116]. The loss of hippocampal volume may be attributable to brain-derived trophic factor expression changes in the hippocampus [117]. Moreover, the prolonged high cortisol levels are associated with an exacerbation of Alzheimer's disease, which is putatively a result of an increase in oxidative stress and amyloid beta peptide toxicity [118]. Regarding stroke, in the most recent systematic review involving a total of 1340 patients, high cortisol levels at admission were associated with greater dependency, morbidity, and mortality in patients with an ischemic stroke. 
Nevertheless, there is no evidence of cortisol's causal role in worsening stroke outcomes to date [114]. Moreover, the degree of involvement of AVP in ischemia-elicited cortisol rise has not yet been clarified. Hypersecretion of AVP and consequent increase in plasma AVP concentration may lead to depression, including endogenous or major depressive disorder and anxiety due to HPA disorder and following hypercortisolemia [57,112]. The exact biochemical mechanism is not well known; however, there seems to be functional interaction between AVP (via V1aR) and serotonin (via 5-HT receptor) at the level of the hypothalamus. This association might be significant in aggressive behavior as well [119,120]. Such behavior often exists in stroke patients during the acute phase, who may experience depressive mood, aggression, or hostility even if they have never been diagnosed with depression or other mood disorders before [121]. Arginine Vasopressin and Platelet Aggregation AVP-induced platelet aggregation is facilitated by V1R activation and subsequent thromboxane release [122]. However, the effect of AVP on platelets could only be observed in physiologically unattainable concentrations of AVP. Moreover, there is no link between AVP and platelet activation in human pathology [123]. Thus, the significant influence of AVP on platelet aggregation in stroke is theoretically unlikely and unsubstantiated. Central Nervous System Local Effects of Arginine Vasopressin System in Stroke Hydromineral disturbance and neurovascular unit damage are strongly associated with stroke. Moreover, phenomena such as BBB disruption, neuroinflammation, astrocyte, and neuronal swelling, take place during the acute phase of the ischemic stroke and account for brain edema formation (Table 1) [11]. Under the term "brain edema" associated with ischemic/hemorrhagic stroke, cytotoxic and vasogenic edema can be distinguished [11,124]. Although both forms of edema coexist simultaneously, the former dominates in ischemic stroke, whereas the latter is associated with hemorrhagic stroke [125]. The water exchange between vascular and perivascular space and between perivascular and cellular space is regulated by AQP4, a water channel expressed mainly on astrocytes and endothelial cells [126]. In cytotoxic edema, AQP4 upregulation causes water movement from extracellular space into the astrocytes aggravating astrocyte compensatory swelling, thus increasing brain edema [127][128][129]. However, in vasogenic edema, AQP4 upregulation aids water movement from brain interstitial fluid to the perivascular drainage system, i.e., the glymphatic system, thus attenuating the damaging effects of local edema [125,[130][131][132]. The data on AVP's influence on AQP4 expression is unclear as studies show its up-regulatory and down-regulatory effects [31,33,133]. Regarding ischemic stroke, in one study, treatment with a V1R antagonist increased AQP4 expression [134]. Contrarily, in a study on intracranial hemorrhage animal models, V1aR antagonists led to the downregulation of the AQP4 expression [135]. Interestingly, in both of these studies, AVP receptor antagonism proved beneficial in brain edema attenuation [134,135]. It is clear that further studies are needed to address differences between ischemic and hemorrhagic stroke regarding AVP-mediated AQP4 expression modulation and edema formation. AVP influences ion balance in astrocytes and endothelial cells. 
The ion imbalance caused by Na/K pump dysfunction is aggravated by V1aR-mediated luminal ion transporters' activity enhancement [26,136]. AVP activates endothelial V1R, which results in AMPK and ERK1/2 pathways activation leading to NKCC1, NHE 1, and NHE2 upregulation [26,136,137]. Moreover, AVP enhances sympathetic response, which also stimulates NKCC1 [138]. The resulting influx of sodium into the endothelial cells and astrocytes promotes osmosis-driven water movement into the extracellular space and astrocytes [139]. Stimulation of V1aRs found on cerebral vasculature leads to endothelin-1 overexpression and protein kinase C (PKC) pathway activation, which translates into oxidative stress exacerbation and BBB disruption. A study conducted by Faraco et al. has shown that water deprivation-induced AVP hypersecretion elicits AVP-mediated oxidative stress leading to cerebrovascular dysregulation in murine brains [140]. Moreover, AVP may damage BBB by stimulating matrix metallopeptidase 9 (MMP9) expression in the brain endothelium [92]. The activation of MMP9 causes proteolytic degradation of the basal lamina, increasing the vessel permeability [141,142]. Furthermore, MMP9 alters the expression of tight junction components, resulting in increased neutrophil and macrophage infiltration through BBB into the brain [143]. The influx of inflammatory cells is augmented by AVP-mediated neutrophil (CXCL1 and CXCL2) and monocyte (CCL2) chemoattractant production in the brain endothelium and astrocytes [92]. AVP also enhances the production of VEGF, a potent vascular permeability-increasing agent, in mesangial cells acting through V1aR, further enhancing BBB leakage [144,145]. AVP elicits the vasoconstriction of cerebral blood vessels via V1aR, causing an increase in cerebral perfusion pressure (CPP) [146]. In the setting of SAH-induced AVP release, vasoconstriction causes a decrease in cerebral perfusion [65]. Contrarily, the increase in CPP caused by AVP administration was associated with the improvement of cerebral blood flow in TBI patients [146]. Most probably, the elevated CPP counteracted the dysfunction of cerebral blood flow autoregulation elicited by sympathetic activation and arterioles distention [147,148]. These results support the rationale for further research regarding the influence of AVP on cerebral blood flow in other diseases associated with cerebral autoregulation impairment, including ischemic stroke [149]. In the rodent TBI model, AVP aggravated injury-elicited inflammatory response. Mechanistically, AVP amplifies CXC and CC chemokines synthesis in the endothelium and astrocytes. In line with this, AVP stimulates the expression of high-mobility group box1, a well-known inflammatory cytokine, in astrocytes during hypoxia/reoxygenation [150,151]. A recent study has shown AVP involvement in IL-6 production in murine hearts, which may play a role in myocardial inflammation in heart failure [152]. Taken together, local immunogenic effects of AVP in combination with enhanced immune cell infiltration caused by AVP-mediated BBB disruption and pro-inflammatory mediators' production may play a significant role in stroke-associated neuroinflammation. As highlighted in Table 1, several neuropathological mechanisms associated with stroke pathology may benefit from the AVP receptor antagonism as a target to improve secondary mechanisms of stroke. 
The Prognostic Value of Copeptin in Acute Stroke Patients As summarized above, increases in AVP are detrimental to stroke and have a considerable potential of being used as a therapeutic target as the mechanisms are directly associated with AVP receptors. Furthermore, although AVP is an unstable peptide, it is released in equimolar concentrations with copeptin and validated as a surrogate marker of AVP. Clinically, copeptin measurements in acute ischemic stroke serve as a tool to predict functional outcomes [20,166]. A meta-analysis of the correlation between copeptin levels and functional outcomes in acute ischemic stroke has shown that a 10-fold increase in copeptin level was associated with a higher risk of poor 3-month and 1-year outcomes (OR = 2.56, 95% CI: 1.97-3.32) and all-cause mortality (OR = 4.16, 95% CI: 2.77-6.25) [24]. Moreover, the authors observed that using copeptin levels in addition to the National Institutes of Health Stroke Scale (NIHSS) score provided a better prediction for unfavorable outcomes than using the NIHSS score alone, showing the importance of such predictive biomarkers in everyday clinical practice [24]. Accordingly, De Marchis and colleagues recently proposed a scale for risk stratification based on clinical features and copeptin levels [167]. The CoRisk score, available for calculation online, considers the patient's age, NIHSS score, copeptin plasma concentration levels, and information on whether the patient received intravenous therapy [168]. A validation study of the CoRisk score scale showed good sensitivity in outcome prediction, with three in four patients classified correctly, with an area under the curve (AUC) of 0.819 [167]. Another promising application of copeptin measurement is to predict stroke after an episode of TIA. Previous studies have shown that patients with a higher copeptin level measured immediately after the TIA event are at a greater risk of developing a stroke or any cerebrovascular re-event [21,169]. In addition, in a long-term follow-up of patients after TIA or stroke, copeptin was a predictive factor of the recurrent vascular event [25]. Together, these data show that it may be beneficial to improve the ABCD2 score for TIA (age, blood pressure, clinical features, duration of TIA, and presence of diabetes) by adding copeptin values or monitoring the patients with large artery stenosis and high copeptin concentration [21,23]. In general, copeptin seems to be an effective biomarker in risk stratification after ischemic stroke and in predicting vascular events after TIA. However, copeptin is also associated with low specificity or low discrimination as a biomarker in other conditions. For instance, a few studies assessed the risk of stroke-associated infections using copeptin [20,[170][171][172]. While higher levels of the biomarker were associated with a higher risk of complications, especially pneumonia, the results were unsatisfactory regarding diagnostic ability [20]. Copeptin alone had a similar predictive value observed with white blood count or C-reactive protein (CRP), and its addition to established scales resulted in a minor improvement in discrimination ability [170][171][172]. Likewise, applying copeptin as a diagnostic tool to distinguish stroke patients from stroke-free subjects has been tested with inconsistent results [19,22,173]. However, it may help to exclude vascular causes in patients with dizziness in the emergency department [173]. 
In summary, copeptin seems to be an effective biomarker opening an opportunity to improve stroke prognosis. It should be considered that the field of novel biomarkers regarding cerebrovascular disease management is growing rapidly. The combination of both circulating blood protein biomarkers and genetic testing can be used for cerebrovascular risk assessment and outcome stratification after an ischemic event [174][175][176]. Still, the interpretation of blood-based biomarkers should always be correlated with individual clinical judgment and the physician's best knowledge to avoid the danger of undertreatment of patients who are believed to be at a high risk of a poor outcome [177]. Targeting Arginine Vasopressin Receptors in the Acute Phase of Ischemic Stroke The mechanisms mentioned above, especially brain edema exacerbation, prompted the rationale for AVP receptor antagonism in stroke. The use of a selective V1aR antagonist (SR49059) in MCAO rodent stroke models improved neurological outcomes and reduced infarct area [30,135,178]. The histopathological findings, such as decreased brain water content in the infarct area, sodium shift into the brain, reduced AQP4 expression, and reduction of BBB disruption, could be a plausible explanation for the beneficial outcome of SR49059-treated animals. Notably, both histological (reduced brain water content and Na+ accumulation) and neurobehavioral (higher modified Garcia score and better performance in beam balance test and wire hanging test) outcomes were achieved when the V1aR antagonist was administered no more than one hour after the procedure [29,81,135,178]. The drug administration after three or six hours since the MCAO proved ineffective [30]. OPC-31260, a V2R antagonist, prevented water and sodium accumulation in the brain in both general cerebral hypoxia and SAH rat models. Moreover, in both studies, V2R antagonism enhanced plasma AVP increase in subjected rodents [27,28]. Tolvaptan is a clinically approved V2R antagonist, successfully used in hyponatremia treatment [34]. However, in the ischemic stroke mouse model, Tolvaptan and other selective V2R antagonists, apart from the established aquaretic effect, did not produce the neuroprotective effects displayed by V1aR antagonists [134,179,180]. Conivaptan is an FDA-approved V1aR and V2R antagonist [35]. The main indication for Conivaptan use is euvolemic hyponatremia in specific disorders, including SIADH [181]. In animal stroke models, Conivaptan improved neurological outcomes, reduced brain water content, and attenuated BBB disruption [179]. In a different study, AVP receptor antagonism reduced brain edema and BBB disruption and improved neurological deficits in mice subjected to MCAO. Importantly, in this study, Conivaptan proved effective when administered three hours after an infarct induction [182], suggesting it may also be effective when administered within a similar time window of standard stroke treatment (i.e., thrombolysis). Recently, Can et al. have shown evidence for Conivaptan superiority over mannitol in diuretic activity in ischemic stroke achieved by a 30-min common carotid occlusion. Moreover, Conivaptan decreased serum 2-Phospho-D-Glycerate-Hydrolase (NSE) and increased progranulin. The former is a clinical biomarker associated with post-injury brain dysfunction, whereas the latter is known for its neuroprotective properties [183]. 
Regarding the safety of Conivaptan in stroke, the standard dose of the drug (20 mg) administered every 12 h for 2 days in combination with standardized intracranial hemorrhage (ICH) management proved safe and well tolerated by the patients [184]. In a case report published by Hedna et al. presenting a patient with post-operational ICH, the addition of Conivaptan to ineffective conventional anti-edema therapy resulted in both conscious level and motor function improvement, as well as radiologically-assessed brain edema resolution with no reported side effects [185]. Overall, antagonism of just V1R or both V1R and V2R provides a better outcome in rodent stroke models than using V2R antagonists alone. Recent literature regards Conivaptan as a safe, mixed AVP receptor antagonist that has the potential to supplement stroke treatment. Nevertheless, phase II studies are needed to test the effectiveness of Conivaptan in stroke. Discussion During the acute phase of stroke, AVP release is uncontrolled and larger than observed in physiological conditions. The mechanisms for AVP release in stroke are independent of plasma osmolality and involve a complex interplay and synergism between regulatory systems, systemic factors, and local components. Among them, AVP secretion underlying mechanisms include an interplay between the sympathetic and renin-angiotensin system; systemic changes in glycemia; and regional increase in glutamate excitability, as well as maladaptation of the neurovascular unit (reviewed in [11]). The resultant AVP hypersecretion is associated with many pathological detrimental phenomena, including (1) brain edema caused by hyponatremia aggravation, AQP4 expression modulation, and ion transporters' function disruption, (2) BBB integrity disruption through MMP9 and VEGF upregulation (3) hypercortisolemia caused by HPA axis activation, (4) neuroinflammation through inflammatory cytokines upregulation and enhancement of inflammatory cells infiltration, and (5) vasoconstriction through direct V1a-mediated effect on cerebral blood vessels and the promotion of oxidative stress, causing cerebrovascular dysregulation ( Figure 1, and Tables 1 and 2). Using the stable peptide copeptin as a surrogate marker of AVP, studies have found an association between a higher concentration of copeptin in stroke patients (hence, AVP) with a higher risk of poor outcomes and all-cause mortality. Furthermore, patients with higher copeptin levels at the time of TIA were at a greater risk of developing a stroke or any cerebrovascular re-event. Altogether, the revised individual literature establishes the potential of using copeptin as a biomarker for AVP plasma levels. After the stroke, the initial disruption of the blood-brain barrier causes local hyperosmotic environment formation and stimulation of osmoreceptors (A). The release of AVP is also regulated by the interaction of astrocytes with AVP-containing neurons. During a stroke, the regulatory volume decrease of astrocytes occurs, followed by the downregulation of GFAP and AQP4 (A). Additionally, stroke causes the release of glutamate and pro-inflammatory cytokines, which stimulate AVP release (A). After the release, AVP activates the V1aR and V1b(V3)R located in the brain, brain vasculature, and pituitary, respectively (B). Upon release, AVP exacerbates brain edema formation (1). Consequently, the expanding brain edema can increase the intracranial pressure and exert direct pressure on adenohypophysis, further increasing the AVP release (1). 
Vasopressin causes time-dependent vasoconstriction of arteries and arterioles and increases cerebral perfusion pressure (CPP) but not cerebral blood flow (CBF) (2). Vasopressin also acts synergistically with corticotropin-releasing hormone (CRH), causing excessive cortisol release (3). Abbreviations: ACTH, adrenocorticotropic hormone; AQP4, aquaporin-4; AVP, arginine vasopressin; BBB, blood-brain barrier; CBF, cerebral blood flow; CPP, cerebral perfusion pressure; CRH, corticotropin-releasing hormone; GFAP, glial fibrillary acidic protein; IL-1, interleukin 1; IL-6, interleukin 6; PVN, paraventricular nucleus; RAAS, renin-angiotensin-aldosterone system; and RVD, regulatory volume decrease. Targeting the AVP receptor with pharmacological receptor antagonists, either individually or in combination, aims to prevent pathological events spurred by V1R and V2R activation. A mixed AVP antagonist, Conivaptan, achieved the best safety and neurological outcomes, showing the greatest potential for stroke treatment. Of note, most of the beneficial effects of AVP receptor antagonists were achieved in ischemia-reperfusion models. Thus, it is reasonable to infer that AVP receptor antagonism-oriented therapy is more efficient when applied together with the recanalization of the occluded vessel. In line with this, we believe that future studies involving AVP receptor antagonists should focus on using AVP antagonists as an add-on therapy for standard recanalization (thrombolysis/thrombectomy) treatment. It is crucial to recognize the potential of the anti-edematous effects of AVP receptor antagonists in post-stroke secondary brain damage alleviation. We strongly feel that treatment of post-stroke brain edema using AVP receptor antagonists in concert with standard brain edema treatment could be a viable means of stroke outcome improvement and mortality rate reduction, especially in patients with refractory brain edema [185]. As demonstrated previously, a phase I clinical trial on Conivaptan use, along with standard treatment for reduction of perihematomal edema, has proven its safety and tolerability in patients with ICH. In our view, the relatively large amount of data pointing to the efficacy of AVP receptor antagonists in ischemic stroke animal models supports the rationale for future clinical trials in ischemic stroke patients. A limited number of studies have focused on the possibility of using AVP receptor antagonists in animal models of hemorrhagic stroke. Although ischemic stroke is epidemiologically more frequent than hemorrhagic stroke, we suggest that the efficacy of vasopressin antagonists in the latter should also be investigated [187]. Thus, we encourage researchers to include hemorrhagic stroke models in their studies. In this review, we pointed out the involvement of AVP in some stroke-related pathophysiological events. The multiplicity of mechanisms involved in primary and secondary brain damage in stroke points out the still underexplored areas of possible AVP influence. We propose that further research should be undertaken in the following areas: (1) reperfusion injury, (2) neuroinflammation, (3) vasospasm, (4) oxidative stress, and (5) BBB disruption. This review provides an update on understanding AVP involvement in ischemic stroke. Moreover, it highlights viable future research areas to explore. 
Our work would lend itself well to clinicians conducting clinical trials involving AVP receptor antagonists and researchers exploring the molecular mechanisms behind other neurological conditions. The ultimate goal of this review is to provide a springboard for further studies on the involvement of AVP in stroke settings and the implication thereof in modern medicine. In our view, the biggest challenge would be the time-consuming and cost-intensive process of clinical trials, particularly considering other competing research areas regarding stroke therapy, including neuroprotective drugs, safer/cheaper thrombolytic agents, and extended time windows protocols for thrombolysis/thrombectomy. Moreover, since the animal model for human ischemic stroke does not adequately reflect the human brain conditions in stroke, the discussed results of animal studies could be delusive, thus undermining the rationale for large-scale studies. To overcome these challenges, further experimental investigations are needed to establish the complete basis of the AVP mechanism of action in terms of stroke. For instance, the current animal stroke models should be improved by considering the age, metabolic state, and common comorbidities of stroke patients, as well as the anatomical and molecular differences between experimental animals and humans. Due to the significant burden of stroke in the world's rapidly growing and aging population, the scientific fields studying novel stroke therapeutics and stroke pathomechanism are one of the central issues currently being researched. The line of research regarding the involvement of AVP in stroke is especially significant, given that some drugs that could putatively diminish the detrimental effect of AVP are widely available for treatment in other medical scenarios, thus making them easily accessible in case of a positive translation process. Conclusions In conclusion, overlapping detrimental events occurring both in stroke and AVP hypersecretion models and the effectiveness of AVP receptor antagonists provide evidence to infer that AVP plays a vital role in stroke development. However, the indirect nature of the gathered data does not provide us with the essential details to determine the overall involvement of AVP-mediated mechanisms in stroke. Moreover, the differences between ischemic and hemorrhagic stroke regarding AVP release and the mechanism of action remain largely uncovered. Nevertheless, monitoring AVP levels and targeting AVP receptors are of paramount potential in clinical applications. Undoubtedly, molecular and prospective clinical studies are needed to discover the precise mechanisms behind AVP effects and integrate the measuring of copeptin or/and the use of AVP receptor antagonists in clinical practice. Based on the data included in the sections of the manuscript, we can summarize that: • A stroke is associated with the release of AVP into the blood in response to various endogenous stimuli; • Overproduction of AVP is most likely detrimental to the brain, primarily due to the AVP-elicited brain edema; • Copeptin is a surrogate marker of AVP and can be used as a prognostic biomarker of stroke outcome; • AVP receptor antagonists can potentially be used as an add-on therapeutic intervention in stroke. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
2023-01-25T16:13:12.725Z
2023-01-20T00:00:00.000
{ "year": 2023, "sha1": "957491a0eaa48d48281ca79be433f4850a0737e8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/3/2119/pdf?version=1674229379", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ad0170986d4477643c0d807b41111657c5829fdc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250712913
pes2o/s2orc
v3-fos-license
Inflammatory Indexes as Prognostic Factors of Survival in Geriatric Patients with Hepatocellular Carcinoma: A Case Control Study of Eight Slovak Centers Background and Aims: Hepatocellular cancer (HCC) often occurs in geriatric patients. The aim of our study was to compare overall survival and progression-free survival between geriatric patients (>75 years) and patients younger than 75 years and to identify predictive factors of survival in geriatric patients with HCC. Material and Methods: We performed a retrospective analysis of patients with HCC diagnosed in Slovakia between 2010–2016. Cases (HCC patients ≥75 years) were matched to controls (HCC patients <74 years) based on the propensity score (gender, BCLC stage and the first-line treatment). Results: We included 148 patients (84 men, 57%) with HCC. There were no differences between cases and controls in the baseline characteristics. The overall survival in geriatric patients with HCC was comparable to younger controls (p = 0.42). The one-, two-, and three-year overall survival was 42% and 31%, 19% and 12%, and 12% and 9% in geriatric patients and controls, respectively (p = 0.2, 0.4, 0.8). Similarly, there was no difference in the one- and two-year progression-free survival: 28% and 18% vs. 10% and 7% in geriatric HCC patients and controls, respectively (p = 0.2, 1, -). There was no case–control difference between geriatric HCC patients and younger HCC controls in the overall survival in the subpopulation of patients with no known comorbidities (p = 0.5), one and two comorbidities (p = 0.49), and three or more comorbidities (p = 0.39). Log (CRP), log (NLR), log (PLR), and log (SII) were all associated with the three-year survival in geriatric HCC patients in simple logistic regression analyses. However, this time, only log (NLR) remained associated even after controlling for the age and BCLC confounding (OR 5.32, 95% CI 1.43–28.85). Conclusions. We found no differences in overall survival and progression-free survival between older and younger HCC patients. Parameters of subclinical inflammation predict prognosis in geriatric patients with HCC. A limitation of the study is small number of the treated patients; therefore, further investigation is warranted. 
Introduction Liver cancer was the sixth most frequently diagnosed cancer and the third most common cause of cancer death worldwide in 2020. Incidence and mortality rates are 2-3 times greater in the male population, and the incidence rates are highest mainly in transitioning countries. Among all histological subtypes, hepatocellular carcinoma (HCC) represents 75-85% of all diagnosed cases. Most hepatocellular cancer cases are attributed to chronic liver disease resulting from hepatitis B virus (HBV) or hepatitis C virus (HCV) infection, alcohol abuse, non-alcoholic fatty liver disease (NAFLD), aflatoxin-contaminated food intake, and smoking. All of these risk factors vary by region [1]. The Barcelona Clinic Liver Cancer (BCLC) system is the most widely accepted staging system for providing prognostic information and guidance in therapeutic strategy for patients with HCC. According to the European Association for the Study of the Liver (EASL), the European Society for Medical Oncology (ESMO), the American Association for the Study of Liver Diseases (AASLD), and the American Society of Clinical Oncology (ASCO), curative treatment (including resection, transplantation, and radiofrequency ablation) is indicated for patients in stages BCLC 0 and A, palliative treatment (transarterial chemoembolization and systemic treatment with biological treatment and/or immunotherapy) is recommended for patients with stages BCLC B and C, and the best supportive care is reserved for subjects in stage BCLC D [2][3][4][5]. Data from developed countries (the UK, the USA, Canada, and Taiwan) show a trend of increasing incidence in the elderly population [6,7]. In addition, inflammatory responses have recently been shown to influence tumor prognosis by interfering with the tumor microenvironment [8]. Many studies show a negative impact of increased inflammatory indexes such as the neutrophil to lymphocyte ratio (NLR), platelet to lymphocyte ratio (PLR), and systemic immune-inflammation index (SII) on overall survival of treated patients with HCC [9][10][11][12][13][14]. The aim of our presented study was to compare overall survival in both age groups (elderly and younger patients) and to investigate the influence of inflammatory markers (CRP level, NLR, PLR, and SII), ALBI score, and number of comorbidities on survival parameters. Materials and Methods We performed a multicenter retrospective longitudinal case-control study of patients diagnosed with HCC at eight specialized centers in Slovakia during the period from 2010 to 2016 (Banska Bystrica, Bratislava (2), Kosice (2), Michalovce, Nitra, and Poprad). 
The inclusion criterion was the diagnosis of HCC consistent with the EASL-EORTC guidelines (HCC confirmed by either histopathological examination or magnetic resonance imaging). Patients with uncertain histology, combined histology with cholangiocellular carcinoma, or any concurrent malignancy were excluded from the study. Initially, we screened all patients diagnosed with a malignant neoplasm of the liver and intrahepatic bile ducts (ICD-10 cm C22) and identified 483 patients with a diagnosis of hepatocellular carcinoma (ICD-10 cm C22.0). Among these subjects, we identified 74 cases aged 75 and older at the time of hepatocellular carcinoma presentation. The Child-Pugh score was calculated to estimate cirrhosis severity, and performance status was evaluated using the Eastern Cooperative Oncology Group Scale. CT scans of the thorax, abdomen, and pelvis were used to identify potential extrahepatic spread. All centers used the Barcelona Clinic Liver Cancer (BCLC) staging system to guide the management of patients. Case report forms (CRFs) included baseline blood test results, which were also later used to calculate the neutrophil-to-lymphocyte ratio (NLR) [15], the platelet-to-lymphocyte ratio (PLR) [16], systemic immune-inflammation index (SII) [17], the model for end-stage liver disease score (MELD) [18], and the albumin-bilirubin grade (ALBI) [19]. If any condition that could have influenced baseline values was present (acute infection, corticosteroid treatment, etc.), repeated analyses were performed after the restoration of that condition. The CRFs also included the date of death, extracted either from the patients' medical records or from the database of the Slovak Health Care Surveillance Authority. Comorbidities of interest included cardiovascular diseases (arterial hypertension, congestive heart failure, arterial fibrillation, ischemic heart disease, valve disorders) chronic kidney disease, chronic pulmonary diseases (chronic pulmonary obstructive disease, bronchial asthma, interstitial lung fibrosis), endocrine disorders (diabetes mellitus-including longterm related complications, hypothyroidism), gastroenterological disorders (inflammatory bowel disease, chronic pancreatitis, esophageal varices), and neurological disorders (peripheral exotoxic neuropathy and Parkinson's disease), and were collected from CRFs' medical history at the time of first presentation of hepatocellular carcinoma. The study protocol was in accordance with the 1964 Declaration of Helsinki, its later amendments, and the principles of good clinical practice. The study protocol was approved by the Ethics Committee of East Slovakia Oncological Institute on 27 May 2021 (approval code, EK/2/05/2021). The committee waived the need for the patients' informed consent due to the retrospective nature of the data collection and analysis and publication of only anonymous data. Statistical Analyses Data are presented as absolute counts and frequencies and medians and interquartile ranges (IQR). A Kaplan-Meier plot was used to describe the survival data graphically. The significance of differences in data distribution was tested using the Wilcoxon rank-sum test, Pearson's Chi-squared test, Fisher's exact test, or Log-rank test as appropriate. Simple logistic regression analyses were performed to analyze the association of baseline factors and the survival of patients, and multiple logistic regression analyses were later used to control for confounding effects of particular variables. 
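Before turning to the results, the arithmetic behind the composite indexes listed in the case report forms is worth making explicit. NLR, PLR, and SII follow their usual definitions (SII = platelet count × neutrophil count / lymphocyte count); the ALBI coefficients and cut-offs used in the sketch below are the commonly cited ones and should be checked against reference [19] before reuse. The numerical inputs are invented for illustration.

```python
import math

def inflammation_indexes(neutrophils, lymphocytes, platelets):
    """NLR, PLR and SII from absolute counts (all in 10^9 cells per litre)."""
    return {
        "NLR": neutrophils / lymphocytes,
        "PLR": platelets / lymphocytes,
        "SII": platelets * neutrophils / lymphocytes,
    }

def albi_grade(bilirubin_umol_l, albumin_g_l):
    """ALBI score and grade; coefficients and cut-offs as commonly cited
    (an assumption here - verify against reference [19])."""
    score = 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l
    grade = 1 if score <= -2.60 else 2 if score <= -1.39 else 3
    return round(score, 2), grade

# Hypothetical baseline labs for one patient.
print(inflammation_indexes(neutrophils=4.8, lymphocytes=1.2, platelets=210))
print(albi_grade(bilirubin_umol_l=28.0, albumin_g_l=32.0))
```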
Results The analyses included 148 patients (84 men, 57%) with HCC. Controls (<75 years) were matched to cases (≥75 years) according to gender, BCLC stage, and the first-line treatment (see Figure 1). There were no differences between cases and controls in these matching variables or in the other baseline characteristics (Table 1). The overall survival in cases was comparable to controls (p = 0.42, Figure 2). The one-, two-, and three-year overall survival was 42% and 31%, 19% and 12%, and 12% and 9% in cases and controls, respectively (p = 0.2, 0.4, 0.8). Similarly, there was no difference in the one- and two-year progression-free survival: 28% and 18% vs. 10% and 7% in cases and controls, respectively (p = 0.2, 1). 
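The survival comparisons above correspond to the Kaplan-Meier estimation and log-rank testing named in the Statistical Analyses. A minimal sketch of such a case-control comparison with the lifelines package is shown below; the follow-up times, event indicators, and group sizes are invented solely to demonstrate the workflow, not taken from the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months to death or censoring, 1 = death observed.
df = pd.DataFrame({
    "group":  ["case"] * 5 + ["control"] * 5,   # case = patients aged 75 or older
    "months": [4, 11, 15, 26, 38, 3, 9, 14, 22, 41],
    "event":  [1, 1, 0, 1, 0, 1, 1, 1, 0, 0],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], sub["event"], label=name)
    print(name, "median overall survival (months):", kmf.median_survival_time_)

cases, controls = df[df.group == "case"], df[df.group == "control"]
result = logrank_test(cases["months"], controls["months"],
                      event_observed_A=cases["event"],
                      event_observed_B=controls["event"])
print("log-rank p-value:", round(result.p_value, 3))
```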
There were several factors associated with the one- and two-year survival in geriatric patients with HCC: log (CRP), log (NLR), log (PLR), log (SII), and ALBI 3; although after controlling for the age and BCLC confounding, none of these associations remained significant. Similarly, in simple logistic regression analyses, log (CRP), log (NLR), log (PLR), and log (SII) were all associated with the three-year survival. However, this time, log (NLR) remained associated even after controlling for the age and BCLC confounding (OR 5.32, 95% CI 1.43-28.85). Complete results of regression analyses are presented in the Supplementary Material. Finally, there was no case-control difference in the overall survival in the subpopulation of patients with no known comorbidities (p = 0.5), one and two comorbidities (p = 0.49), and three or more comorbidities (p = 0.39) (Figures 3-5). 
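The two-step regression strategy reported here - a simple (univariable) logistic model for survival followed by a model adjusted for age and BCLC stage - can be sketched with statsmodels as below. The data frame is synthetic and the variable names are assumptions; the point is only the model structure, with log10-transformed NLR as the exposure so that the odds ratio is expressed per 10-fold increase.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic analysis set: 3-year survival status, baseline NLR, age, and BCLC stage.
rng = np.random.default_rng(0)
n = 74
df = pd.DataFrame({
    "survived_3y": rng.integers(0, 2, n),
    "nlr": rng.lognormal(mean=1.0, sigma=0.5, size=n),
    "age": rng.integers(75, 92, n),
    "bclc": rng.choice(["A", "B", "C", "D"], n),
})
df["log_nlr"] = np.log10(df["nlr"])

simple = smf.logit("survived_3y ~ log_nlr", data=df).fit(disp=False)
adjusted = smf.logit("survived_3y ~ log_nlr + age + C(bclc)", data=df).fit(disp=False)

# Odds ratios per 10-fold increase in NLR, before and after adjustment.
print("crude OR:", round(np.exp(simple.params["log_nlr"]), 2))
print("age/BCLC-adjusted OR:", round(np.exp(adjusted.params["log_nlr"]), 2))
```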
The results of overall survival and progressionfree survival were comparable in patients of both age subgroups at the same BCLC stage, performance status, and identical treatment modality. This conclusion is supported by the results published in the study with an identical age cut-off for performing resection [30,31], radiofrequency ablation [32,33], transarterial chemoembolization [24,34,35], and use of sorafenib [36][37][38], lenvatinib [39], and ramucirumab [40]. Likewise, several studies did not show the difference in recurrence-free survival when performing resection [30,41], worsened recurrence rate when performing RFA [33], or progression-free survival in systemic therapy [37,39]. An important role in the process of carcinogenesis is also played by a chronic, subclinical, ongoing inflammation in the tumor microenvironment, which represents a dynamic component that promotes tumor growth, proliferation, neoangiogenesis, and metastatic spread [42]. A key role in suppressing the cellular response is played by tissue macrophages (Kupffer cells), monocyte-derived macrophages, regulatory T cells (T reg ), and monocytederived tumor-associated macrophages (TAMs). Macrophage components polarized into the M2 phenotypic form produce IL-10 and TGF-β, and chemokines promote chemotaxis of Tregs and ineffective Th2 cell response, promoting neoangiogenesis and tissue remodalation via production of VEGF and EGF. Via PD-L1 expression, they disable the effector phase CD8+ of T-cell immune response and reduce the expression of MHC glycoproteins class II molecules, producing IL17, which leads to an increase in the neutrophil count in the peripheral blood [43,44]. An important role in the tumor microenvironment is held by platelets, which, by producing PDGF, may directly promote the growth of the tumor tissue; moreover, they produce a chain of proinflammatory cytokines (P-selectin, lL-1, IL-3, IL-3) and anti-inflammatory factors (TGF-β) [45]. The production of TGF-β leads to significant immunosuppression and a reduced lymphocyte count [46]. NLR, PLR, and SII have been described as effectively independent factors of overall survival in geriatric patients with high-grade gliomas [47], non-small cell lung cancer [48], esophageal cancer [49], gastric cancer [50,51], and glioblastoma [52]. There are only published results providing data of elderly patients with HCC. In our study, NLR and SII indexes have been evaluated as statistically significant predictors of 3-year survival after controlling for the influence of age and BCLC, while none of them were significant in evaluating 1-to 2-year survival. These data are in contrast with the results of Li et al. (2018), who evaluated SII as an effective predictor of 1-year survival in patients over 75 years of age; however, these data have purposely monitored the geriatric population and showed significant heterogeneity, considering miscellaneous histological subtypes [10]. According to the data presented by Zaour et al. (2019), the elevated values of AFP, NLR, and PLR were associated with higher mortality in geriatric patients with HCC who underwent resection [53]. On the other hand, PLR values are not associated with overall survival in some malignancies; for instance, pancreatic cancer [54]. Another discussed topic in the treatment of geriatric patients is the impact of comorbidities on patients' overall survival. 
A higher number of comorbidities in geriatric patients is linked to poorer survival in geriatric patients with head and neck cancers [55]; they reduce the number of patients on adjuvant therapy in the treatment of colorectal cancer [56] and increase the number of deaths in patients with breast cancer in stage I.-III. [57]. In the case of hepatocellular carcinoma, the presence of cardiovascular and respiratory tract diseases worsens the overall survival of patients who have undergone curative RFA treatment [26]. The outcomes of our study point out comparable overall survival and progression-free survival in patients 75 years or older in comparison with a younger age group. The overall survival in our study population has not been affected by the number of comorbidities in the semi-quantitative division into three groups. The results of our study were limited by the low number of included patients and the retrospective character of the study. Another limitation of the study is the heterogeneous etiology of the primary liver disease and different stages of HCC according to BCLC criteria in both geriatric and younger patients. Considering the retrospective study design, we were unable to compare the incidence of treatment side effects in both groups of patients with HCC. Conclusions In our study, we found that patients older than 75 years with HCC have overall survival and progression-free survival comparable to younger patients matched in age, BCLC stage, and first-line treatment. In patients older than 75 years, the NLR value was an independent predictor of 3-year survival. These findings allow us to state that geriatric patients at risk of developing HCC should have the same surveillance program as younger patients. In indicating the anticancer treatment, age should not be considered a limitation, but in treatment, it is necessary to take comorbidities into account. Furthermore, the treatment should be adequately monitored, focusing on the incidence of the adverse events. Due to the small number of treated geriatric HCC patients, further studies confirming our results are required. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm11144183/s1. Table S1. Association of baseline factors and one-year overall survival-simple analysis. Table S2. Association of baseline factors and one-year overall survival-multivariant analysis adjusted for age and BCLC. Table S3. Association of baseline factors and two-year overall survival-simple analysis. Table S4. Association of baseline factors and two-year overall survival-multivariant analysis adjusted for age and BCLC. Table S5. Association of baseline factors and three-year overall survival-simple analysis. Table S6. Association of baseline factors and three-year overall survival-multivariant analysis adjusted for age and BCLC. Institutional Review Board Statement: The study protocol was in accordance with the 1964 Declaration of Helsinki, its later amendments, and the principles of good clinical practice. The study protocol was approved by the Ethics Committee of East Slovakia Oncological Institute on 27 May 2021 (approval code, EK/2/05/2021). Informed Consent Statement: Patients' consent was waived due to the retrospective nature of the data collection. Data Availability Statement: The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest: The authors declare that they have no conflicts of interest.
2022-07-21T15:16:20.003Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "d89d74ed234bf6b0ef83af2c4ada78baa73f438c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/11/14/4183/pdf?version=1658223578", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cc2589cf0356b1253ce580733123ab4ac0921eb9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248154759
pes2o/s2orc
v3-fos-license
Corrosion behavior and mechanism of X80 steel in silty soil under the combined effect of salt and temperature In this study, X80 pipeline steel was embedded in silty soil with different salinities and subjected to corrosion at a constant temperature for 24 h before electrochemical testing. The effect of soil medium, temperature, and salt content on the kinetics of corrosion behavior of X80 steel was analyzed. Furthermore, the compositions and structures of the corrosion products were analyzed by X-ray photoelectron spectroscopy and scanning electron microscopy. Based on the results, the anodic dissolution reaction mechanism of X80 steel in silty soil was determined, the differences in the corrosion process caused by different soil systems were comprehensively contrasted, and the impact of the migration process of heterogeneous silty soil on corrosion behavior under different conditions was systematically explored. Comparative analysis revealed that chloride ions possess strong adsorption ability at temperatures above freezing point and that more oxidized substances are present in the deposited layer on the surface of corroded steel, which facilitates the occurrence of corrosion under deposition. At temperatures below freezing point, the sulfate ions present in the pore solution contribute to crystallization-induced expansion and lead to swelling and deformation of the soil, rendering the X80 steel more prone to corrosion in sulfate corrosion environments. Introduction The global energy demand was estimated to have decreased by about 4.5% in 2020, attributed to the abrupt outbreak of the novel coronavirus disease (COVID-19) that stalled global economic activity and impacted the completion of various tasks in the energy sector of different countries. However, with a total share of 55.9%, oil and natural gas continue to occupy the largest share of the primary energy consumption structure. 1,2 Moreover, various energy policies in different countries are capable of strengthening support for the industry to mitigate the impact of the epidemic as well as to actively achieve progress by further intensifying systemic energy reforms and advancing in the construction of support systems and clean, low-carbon, efficient, and safe energy systems. 3,4 Most recently developed oil and gas elds are present in more remote geographic locations, and cause problems in construction and support systems due to difficult service conditions. Among them, the construction and operation of oil and gas pipelines encounter harsh environmental conditions and operational requirements such as low-temperature environments, highly corrosive soil, high-pressure delivery, and long-range transport. In order to improve the operating quality and assure the security of energy supply, pipeline steel requires higher strength and toughness. 5 Each enhancement in the pipeline steel grade results in a cost reduction of 5-15%. For this reason, currently, the API 5L X80 steel with high tensile strength is widely used in the oil and gas industry. [6][7][8] The failure that occurs during the service life of pipelines can be broadly divided into the following three categories: excavation damage, material defects, and corrosion effects. [9][10][11][12] Inhomogeneous and porous soil cause relatively complex corrosion of buried pipeline steel, based on the interaction of multiple simultaneous reactions. 
As a result, the direct contact between the steel surface and the erosive soil environment is regarded as a significant cause of its corrosion failure. 9,13 Previously, extensive research efforts have been devoted to the study of the electrochemical corrosion reaction at the soil-steel interface, [14][15][16][17] and a relatively mature theory has been established. Most of the reactions correspond to oxygen-depolarized corrosion. The elementary reaction processes are as follows: First, iron dissolves in the anode area under the loss of electrons (anodic oxidation reaction: Fe → Fe2+ + 2e−), and oxygen reduction occurs at the cathode (cathodic reduction reaction: O2 + 2H2O + 4e− → 4OH−). Then, in acidic soil environments, iron ions enter the soil solution as hydrated iron cations (Fe2+ + nH2O → Fe2+·nH2O), whereas in near-neutral or alkalescent soil environments, iron ions, after entering the soil, react with hydroxyl ions (OH−) to form iron(II) hydroxide Fe(OH)2 (Fe2+ + 2OH− → Fe(OH)2). However, this hydroxide, being unstable, is partly dehydrated to iron(II) oxide (FeO) and partly reacts further to form the corrosion products iron(III) oxide hydroxide (FeO(OH)), iron(III) oxide (Fe2O3), and iron(II,III) oxide (Fe3O4). Saline soil is the generic term for soil containing excessive amounts of salt and/or alkaline components. Owing to the adverse effects of increasing soil salinity, soil salinization has gradually become an important global ecological and environmental problem that requires urgent research attention. Chloride and sulfate anions are major components of soil-soluble salts. Chloride contained in the soil is primarily affected by climate, terrain, and human activities, while sulfate content in the soil is mostly affected by acid rain, landform, geological, and climatic conditions. 18 Indeed, the migration, aggregation, and transformation of inorganic salts in the soil vary in different areas, and their spatial variability is large. It has been demonstrated that the permeability of chloride ions is a leading cause of corrosion of bars used in a working environment containing chloride ions, while the presence of sulfate restrains the diffusion of these ions. 19 Salt heave and frost heave deformations caused by physical or chemical reactions in sulfate-containing soil due to temperature changes have also become a considerable and continuing hurdle for transportation security. 20 The safety and durability of buried pipeline steel under the synergistic effect of salt and temperature have rarely been explored to date. Thus, improved studies on the differences in corrosion behavior caused by chloride ions and sulfate ions in the soil pore solution and on their respective corrosion mechanisms are highly desirable. The corrosion behavior of pipeline steel buried in soil under the combined influence of salt and temperature requires further and deeper exploration, and the theoretical questions involved require systematic investigation. The corrosive nature of soil, explicitly of the external corrosion environment of buried pipelines, should be assessed in a viable way to provide basic data and a scientific basis for research on corrosion-related questions in other similar soil environments, with the objective of improving the effectiveness and efficiency of corrosion-prevention engineering. 
For this purpose, chloride ions and sulfate ions, which are ubiquitous in various soil environments and highly corrosive, were selected in this study to investigate the corrosion behavior of X80 steel under the synergistic effect of salt and temperature in the soil environment. Electrochemical measurements were conducted to determine the corrosion rate, and further analysis was performed to investigate the corrosion mechanism in different soil environments. The structure, morphology, and type of the corrosion products were analyzed by scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and other surface analyses techniques. Materials and sample preparation The test soil samples were taken from the vicinity of buried oil and gas pipelines in the district of Xiaodian, Taiyuan City, Shanxi Province, China. The sampling area is schematically outlined in Fig. 1. The sampling depth of the soil samples was 1.2 m. The soil was Quaternary sediment formed by weathering processes, and soil texture was classied as alkaline silty soil (1.8% sand, 68.9% silt, 19.3% clay). Some of the physical parameters are listed in Table 1. Aer retrieving the undisturbed soil, the soil samples were air-dried, mechanically crushed, passed through a soil sieve with a mesh size of 1 mm, and washed with sterile deionized water. Further, excess salt was washed away, and the soil samples were placed in a thermostatic blast drying oven at 378.15 K for at least 6 h prior to their use to ensure the complete removal of residual moisture. In the trials, sodium chloride (NaCl, analytical grade) and sodium sulfate (Na 2 SO 4 , analytical grade) were dissolved in deionized water and stirred evenly to obtain saline solution as a simulation of the Cl À and SO 4 2À , respectively, in the soil pore solution. The use of remolded soil ensured the repeatability and comparability of the experiments. The X80 pipeline steel with high tensile strength and used widely in the oil and gas industry, was selected as the research object of the present study, and its chemical composition is listed in Table 2. The used X80 steel met the API Spec 5L criteria, and the carbon equivalent CE pcm value is below the maximum allowable value specied in the standard. The size of the X80 steel sample for the electrochemical test was 10 mm  10 mm  2 mm. The sample was sanded level by level with waterresistant sandpaper on the grinding/polishing machine. Aer polishing with 2000# sandpaper, the sample was then cleaned in an ultrasonic bath to acquire smooth surface without scratches and no residual adsorbed materials. The copper conductor was welded to the steel and sealed with epoxy resin to retain a nominal working surface area of 1 cm 2 . In order to characterize the microstructure, the surface of steel sample was prepared via the following steps: inlay / grinding / polishing / erosion. First, phenolic resin was used to hot mosaic the sample. Aer cooling, 800-3000# sandpaper was used for surface grinding, and then diamond polishing agents with a particle size of 2.5 and 0.5 mm were utilized for coarse polishing and ne polishing, respectively. Finally, the sample was etched with electrolyte solution prepared using 96 vol% alcohol and 4 vol% nitric acid for about 6 s until the surface turned from bright to off-white (the sample used to capture SEM image was etched for about 12 s). Optical micrographs were obtained using a Leica Dmi 8C metallurgical microscope (Leica Microsystems, Wetzlar, Germany). 
The surface microstructure of X80 steel used in this experiment is shown in Fig. 2. The optical micrographs of X80 steel surface reveal that it predominantly consists of ferrite and bainite phases. The shape of ferrite is drawn into thin strips and polygons, corresponding to acicular ferrite (AF) and polygonal ferrite (PF) morphologies. The shape of bainite is granular (GB), with ne grains and irregular distribution. Among them, GB, as a typical nonequilibrium phase, has high electrochemical activity. 21 Owing to the electrochemical inhomogeneity, the presence of tiny areas with different levels of potential in the metal surface constitutes various corrosion micro-galvanic cells. The organization with more active chemical properties and prone to be deprived of electrons in the structure of the material acts as the rst place for the corrosion to occur. In the original structure of X80 steel, due to the different corrosion potentials of the phases, the alpha phase with a lower corrosion potential gets corroded as the anode, the beta phase with a higher corrosion potential is relatively preserved as the cathode, and the anode alpha phase acts as the active site of pitting corrosion. Notably, this heterogeneous inhomogeneity of metal can cause pitting-type corrosion. 22 The GB in the structure of X80 steel usually acts as the early-developing region for pitting. These locations lead to the initiation of the interfacial corrosion process when they are in contact with the corrosive environment. The workow of sample preparation is illustrated in Fig. 3. Experimental setup The dried soil sample was adequately stirred with the salt solution and then maintained in a standard constant temperature and humidity conservation box for 24 h in order to ensure the even distribution of ions in the soil pore solution. The optimal moisture content of the silty soil used in this study was calculated to be 15.30% by the standard compaction test, the maximum dry density was 1.82 g cm À3 , and the soil compaction coefficient was 0.90. A square insulated box with a dimension of 70.7 mm  70.7 mm  70.7 mm was used as a test mold, and the height of the soil sample in the mold was controlled based on the compaction coefficient. The soil sample and steel sample were placed together in the mold box to ensure that the bottom end of the steel to be tested was 1 cm from the bottom of the mold box, and the steel working surface was in tight contact with the soil. An external thermostatic bath with an error of AE0.15 K was used as the sample temperature control setup. A temperature sensor was inserted in the middle section of the soil sample to record the soil temperature. The electrochemical test was performed aer soil temperature reached the target temperature for 24 h. Electrochemical measurements Electrochemical measurements were performed on a CorrTest CS-350 electrochemical workstation. Electrochemical measurements were conducted using a three-electrode system with X80 steel, Pt plate electrode, and saturated calomel electrode (SCE) as the working, counter, and reference electrodes, respectively. The measurement was repeated three times, and the results of a representative experiment were obtained. Prior to electrochemical measurements, the open circuit potential (OCP) test was rst performed for the X80 steel. The sampling time was 2500 s, and the sampling frequency was 5 Hz. 
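As an aside on the specimen preparation described earlier in this section, the amount of remolded soil packed into the 70.7 mm cubic mold follows directly from the compaction parameters quoted above (maximum dry density 1.82 g cm−3, compaction coefficient 0.90, optimal moisture content 15.30%). The back-calculation below only illustrates that arithmetic under the usual geotechnical definition of moisture content (mass of water per mass of dry soil); it is not a prescription taken from the original protocol.

```python
def soil_batch_for_mold(side_mm=70.7, rho_d_max=1.82, compaction=0.90,
                        moisture=0.1530):
    """Dry-soil and water masses needed to fill a cubic mold at the target
    compaction state (densities in g/cm^3, moisture as a dry-mass fraction)."""
    volume_cm3 = (side_mm / 10.0) ** 3        # mold volume in cm^3
    rho_d_target = compaction * rho_d_max     # controlled dry density
    dry_soil_g = rho_d_target * volume_cm3
    water_g = moisture * dry_soil_g           # water added at the optimal content
    return {"volume_cm3": round(volume_cm3, 1),
            "dry_soil_g": round(dry_soil_g, 1),
            "water_g": round(water_g, 1)}

print(soil_batch_for_mold())
# expected: {'volume_cm3': 353.4, 'dry_soil_g': 578.9, 'water_g': 88.6}
```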
Impedance-frequency sweep test was performed with perturbation voltage signal of 10 mV in the frequency range from 100 kHz to 0.01 Hz with 12 data points analyzed per frequency decade. The obtained spectra were tted by using Zview soware in terms of appropriate equivalent circuits to ensure experimental precision, and the chi-square values were found to be within the required range. Before the potentiodynamic scanning test, the working electrode was stabilized by polarizing at the polarization potential for 3 min. The scanning range of the potentiometric polarization curve was À0.75 to 3 V, and the scanning rate was 1.5 mV s À1 . Mass loss test The X80 steel sample used for the weight loss method was 120 mm  90 mm  2 mm dimension. The sample was ground, polished, cleaned and stored in ambient conditions. The dimensions were measured by vernier caliper and the samples were weighted by analytical balance with an accuracy of 0.0001 g. The sample was embedded into epoxy resin, leaving a nominal working surface area of 120 mm  90 mm. Soil temperature and salt content control were the same as the electrochemical test. Aer burying the X80 steel sample in silty soil for 24 h and removing the epoxy resin, a descaling agent (500 mL HCl + 500 mL distilled water + 3.5 g hexamethylenetetramine) was used to remove the corrosion layer. The steel sample was then cleaned in an ultrasonic bath, naturally airdried and weighed again. Three parallel experiments were performed for each group, and the average corrosion rate (ACR) was calculated using the eqn (1). where ACR is the average corrosion rate (mm y À1 ), W is the mass loss (g), A is the work area (cm 2 ), r is the steel density (7.86 g cm À3 ) and t is the corrosion time (h). Surface characterization techniques The corroded working surface was slightly wiped with absolute ethanol to remove the loose soil particles attached to the surface. Then a high-magnication digital camera (Canon 6D SLR) was used to obtain the macroscopic images of surface morphology. Further, the surface morphologies of the samples were observed by all-in-one scanning electron microscopy (JSM-IT200), and surface elemental compositions were detected by energy-dispersive X-ray spectroscopy (EDS). The valence states of Fe elements were measured by an X-ray photoelectron spectrometer (XPS, Thermo Scientic K-Alpha) with the Al Ka excitation. The XPS data processing was completed by using the Avantage data soware and XPSPEAK soware. The 3D morphology and cross-section plot of X80 steel surface deposits were characterized using the optical prolometer (VHX-7000, Keyence). Results and discussion 3.1. Corrosion behavior of X80 steel in silty soil with salt as the independent variable 3.1.1. Surface morphology and corrosion products. The surface macro-micro morphology of the X80 steel sample aer corrosion in silty soil with salt concentration as the independent variable and temperature at 293.15 K is shown in Fig. 4 and 5. The macroscopic morphology indicates the formation of the Fe-oxides rust layer on the steel surface, and some amount of soil is also attached to different degrees. The sediment layer covered on the steel surface functions as protective lm, which suppressed the uniform corrosion and impeded the part of reducing substances in soil to arrive at the surface. The analysis of micro-morphology indicates that the sediment layer may be subdivided into the inner layer and the outer layer. 
The inner layer is thin and compact iron oxide lm, with the bulk enrichment of Fe elements and a small number of O elements. The outer layer is thick but porous iron oxide lm and some soil particles, with the characteristics of loose texture and scattered distribution. Moreover, defects such as pores and through-thickness cracks are observed at these sediment layers. The reduced mediator (pore water and corrosive ions) in the soil crosses the defect band, forming channels for the reduced mediator to diffuse into the steel substrate surface. As a result, the sediment layer cannot be used as a protective layer to inhibit the corrosion of the metal substrate for a long period, and this also promotes the corrosion electrochemical reaction to occur again. The porous and thin iron oxide layer (compared to the thick and compact oxide layer) would provide more routes to capture and cross more nearby corrosive ions. Noteworthy, soil is a very complicated system consisting of non-uniform threephase system made up of solids, liquids, and gases, therefore, the occurrence of local corrosion behavior on the metal surface leads to the increase and consecutive acceleration of the reaction that could drive corrosion behavior. Notably, the concentration of ions primarily impacts the number of pits. 23 With the increase in the ion concentration, the sediment layer on the steel surface gradually becomes denser. However, the presence of a high salt concentration leads to enhanced adsorption, which results in enlarging and spreading of the corrosion area, and it is prone to form continuous ulcerlike pits under the sediment layer. Fig. 5(a)-(c) exhibit that the rust layers formed on the surface of X80 steel in sulfatecontaining silty soil are mainly grown perpendicular to the surface. Layers of loose sediments are mostly distributed in a uffy cluster, with large quantities of cluster-like corrosion products, and the degree of erosion is relatively mild. However, in the corrosion environment of chloride-containing silty soil for the X80 steel (as shown in Fig. 4(a)-(c)), the rust layers grow along the steel surface, and the corrosion is severe. For the area marked as red dots in Fig. 4 and 5, an EDS test was conducted to analyze the local element mapping. The results of these samples are presented in tables in each group of gures. Clearly, with increasing ion concentration, the amount of iron-dissolved corrosion products increases gradually. The elemental oxygen contents of the rust layers in the corrosive environments of chloride are higher and the iron contents are lower under 293.15 K temperature conditions. These results indicate that the elemental oxygen content is strongly inuenced by the amount of soil adhesion and the content of oxide corrosion products, and it can thus be concluded that the contents of Fe oxides in the chloride-containing environment are more, and it has a stronger inuence on the metal corrosion. EDS as a semi-quantitative method could only be used for determine the elemental content and not the specic types of compounds. 24 Therefore, it was necessary to use XPS analysis technique to further investigate the composition of the corrosion products. Notably, Fe is a transition metal element and its ne spectrum has asymmetric bimodal and satellite peaks. These peaks can be used to qualitatively and quantitatively analyze the ionic state of Fe. Thus, the Fe 2p 3/2 orbital was further analyzed by peak tting. 
11,25,26 The XPS Fe 2p 3/2 ne spectrum test and tting results are shown in Fig. 6. In view of the results, several conclusions can be achieved. The corrosion products formed on the X80 steel surface in various silty media were composed predominantly of Fe 2 O 3 , FeOOH, and Fe 3 O 4 , this is in accordance with the results of a few earlier studies. [27][28][29] Since the reduced mediator in the soil can act as the oxidant to oxidize ferrous ions to ferric ions, within various silty media, the formation and transformation processes of corrosion products on the X80 steel surface were different, and the components of the surface deposits were also different. The deposited layer was composed of the products (primary products) of cathodic reaction and anodic reaction and the products (secondary products) produced due to the continued reactions of the primary products, both containing soluble and non-soluble corrosion products. FeOOH and Fe 2 O 3 in the sediment have a loose and porous structure with poor protection against the metal matrix, while Fe 3 O 4 is the non-soluble corrosion product with a compact structure that exhibits a better protective effect on the matrix. When the non-soluble corrosion product reaches a certain threshold, the effect of inhibiting the corrosion of the matrix increases. In the sulfate-containing silty soil medium, with the increase in the ion concentration to 1% (as shown in Fig. 6(e)), the content of oxidation state in the X80 steel surface sediments decreases, the content of reduced state in the inner layer corrosion products increases, and soil-steel interface corrosion is suppressed. With the increase in concentration to 2% (as shown in Fig. 6(f)), the Fe 3 O 4 content decreases, the content of FeOOH outer layer corrosion product increases, and the corrosion rate increases. In the chloride-containing silty soil medium (as shown in Fig. 6(a)-(c)), the Fe 3 O 4 content decreases slowly with respect to an increasing ion concentration, indicating that the interface exhibits the absence of sufficient amount of oxygen supply to further continue the reaction and the corrosion products content of complete oxidation decreases. The incomplete oxidation of corrosion products due to reduced oxygen content leads to promoting the occurrence of corrosion under deposition and the onset of pitting. Fig. 7 shows the XPS spectra of O 1s. It can be seen that the O 1s spectrum of surface deposits exhibited three peaks at 530.0 eV (metal oxides), 531.8 eV (hydroxides) and 532.9 eV (soil particles). 30,31 Different soil environments could affect working surface deposits not only composition but also morphology. In the sulfate-containing silty soil, with the increase in the ion concentration to 1% (as shown in Fig. 7(e)), there are more soil particles in surface deposits, which would show some hindrance effects and inhibit the soil-steel interface corrosion. With the increase in concentration to 2% (as shown in Fig. 7(f)), the Fe-oxides in surface deposits and corrosion kinetics behavior increases. In the chloride-containing silty soil, with the increase of ion concentration, the content of soil particles in the surface deposits decreases, whereas the content of corrosion products increases, and the corrosion rate increases. The tting results of the XPS spectrum of the corrosion surface deposits O 1s and Fe 2p are consistent with the corrosion morphology characterization. 
To better reect the information of the corrosion deposits on the X80 steel surface and further analyze the thickness and coverage of the deposits, the 3D morphology and cross-section plot of surface deposits is supplemented. The optical prolometry technique was found to be an efficient way for characterizing the structure and microtopography of the sample surface. It could provide qualitative images at the microscale level and further evaluate the degree of corrosion surface coverage. 32 The results of measurements are shown in Fig. 8 and 9. The 3D morphology clearly shows that the corrosion deposits are of double-layer structures. In the silty soil environment with low ion concentration (as shown in Fig. 8(a)), a clear boundary line can be observed for the dense inner and loose outer layers of the X80 steel surface corrosion deposits. Still, the boundary line gradually becomes blurred with the increasing ion concentration (as shown in Fig. 8(c)). Fig. 9(a)-(c) exhibit that the corrosion deposits on the surface of X80 steel in sulfatecontaining silty soil are relatively thin and at. With the increase in ion concentration, the area covered by corrosion deposits rst decreases and then increases. However, in the corrosion environment of chloridecontaining silty soil for the X80 steel, the relatively deep corrosion pits on the X80 steel surface can be observed. As the ion concentration increases, the thickness of the corrosion product becomes thicker, the degree of pitting corrosion becomes more serious, and the corrosion region becomes larger. As a result, it can be thus inferred that the chloridecontaining silty soil environment is more corrosive. The 3D morphology and cross-section plot of the corrosion deposits reect that the corrosion degree of X80 steel in different soil environments is consistent with the corresponding SEM observations. 3.1.2. Electrochemical characterization. Fig. 10 presents the OCP monitoring process for X80 steel in silty soil with salt concentration as the independent variable and temperature at 293.15 K. The thermodynamic stability information of the working electrode surface in different corrosive environments could be reected by the OCP value. By the OCP monitoring results, it can be observed that the silty soil corrosion system is in aerobic conditions. 39 The OCP value of the test system uctuates slightly and tends to a stable value over time, and the OCP value of the chloride-containing environment increases as the salt content increases, whereas the value of the sulfatecontaining environment rst decreases and then increases. It is obvious that the former is larger than the latter. The increase of the OCP value would reect, to a certain extent, the cathode current increases, the anode reaction promotes accordingly, and the surface electrochemical reaction in the electrode accelerates. The polarization curves of the X80 steel in silty soil with salt concentration as the independent variable and temperature at 293.15 K are shown in Fig. 11. Evidently, when the X80 steel is present in a corrosive soil environment with chloride ion concentration of 2%, the current density maintains a low value when the measured potential is above the self-corrosion potential. When the potential reaches À0.39E, a signicant increase of current is observed. 
The polarization curves of the steel in the remaining salty soil environments show a passivation region, indicating that the passivation film on part of the anodic domains of the steel surface prevented continued active dissolution. Of course, a higher ion concentration of the soil electrolyte does not imply a greater likelihood of metal passivation. On the contrary, a higher salt content of the soil makes it easier to damage the passivation film, thus promoting local corrosion. In the corrosive soil environment, within the range of concentrations used in this study, the passive current densities increase with the chloride ion concentration, whereas they remain almost the same with increasing sulfate concentration. A relatively large passive current density indicates a large corrosion rate and a greater vulnerability to the reduced mediators in the electrolyte. Accordingly, it can be inferred that the chloride-containing silty soil environment is more corrosive. The corrosion current density (i corr ) and corrosion potential (E corr ) values (as shown in Fig. 12(a) and (b)) were derived by extrapolating the linear portions of the anodic and cathodic branches of the measured polarization curves. Clear trends can be observed: with the increase in ion concentration, the polarization curve shifts toward higher currents and the E corr of the chloride-containing soil environment also increases; however, the i corr of the sulfate-containing soil environment first decreases and then increases. The positive shift of the potential reveals that the deposits formed on the steel surface in the chloride-containing soil gradually grow thicker and denser. Nonetheless, in the sulfate-containing soil, the i corr decreases. This indicates that the corrosion deposits formed on the steel surface in sulfate-containing soil with a concentration of 1% exhibit some inhibitory effect on the cathodic hydrogen evolution reaction and the anodic dissolution reaction. The corrosion rate was further calculated from i corr (eqn (2)), 24 and the corresponding results are shown in Fig. 12(c). The variation tendency of the corrosion rate was the same as that of i corr , and the corrosiveness of the chloride-containing silty soil was significantly higher than that of the sulfate-containing silty soil. Here CR is the corrosion rate (mm y −1 ), i corr is the corrosion current density (A cm −2 ), M is the molar mass (g mol −1 ), z is the number of electrons transferred per atom, F is the Faraday constant (96 485 C mol −1 ), ρ is the steel density (7.86 g cm −3 ), and A is the working area (cm 2 ). The equivalent weight (EW) is typically considered to be the mass of metal in grams that is oxidized by the passage of one Faraday of electric charge. For an alloy, the EW value is calculated using eqn (3), 33 where n i is the valence of the ith element of the alloy, f i is the mass fraction of the ith element of the alloy, and W i is the atomic weight of that element. The EW value of the X80 steel used here was 27.92. The penetration rate (PR) and mass loss rate (MR) of the steel can then be calculated with the EW value (eqn (4) and (5)), where the value of i corr can be obtained from the Stern-Geary equation (eqn (6) and (7)), 34,35 in which B is the Stern-Geary coefficient, R p is the polarization resistance, and b a and b c are the slopes of the anodic and cathodic Tafel reactions, respectively.
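Since the numbered equations (2)-(7) are not reproduced in the extracted text, the following minimal Python sketch implements the standard Faraday-law and Stern-Geary relations described in the surrounding prose (ASTM G102-style conversion constants); the function names and the numerical inputs in the example are illustrative assumptions rather than values from the study.

```python
# Sketch of the corrosion-rate relations described above (the numbered eqns
# are not reproduced in the text, so standard Faraday-law / Stern-Geary forms
# are assumed). Numerical inputs below are illustrative, not measured values.

F = 96485.0            # Faraday constant, C mol^-1
RHO_STEEL = 7.86       # density of X80 steel, g cm^-3

def equivalent_weight(fractions, valences, atomic_weights):
    """EW = 1 / sum(n_i * f_i / W_i) for an alloy (cf. eqn (3))."""
    return 1.0 / sum(n * f / w for n, f, w in zip(valences, fractions, atomic_weights))

def corrosion_rate_mm_per_year(i_corr_A_cm2, ew, rho=RHO_STEEL):
    """CR (mm y^-1) from the corrosion current density via Faraday's law;
    3.27e3 converts A cm^-2 * (g mol^-1) / (g cm^-3) to mm y^-1 (ASTM G102)."""
    return 3.27e3 * i_corr_A_cm2 * ew / rho

def mass_loss_rate_g_m2_day(i_corr_A_cm2, ew):
    """MR (g m^-2 d^-1) from i_corr (ASTM G102-type constant)."""
    return 8.954e3 * i_corr_A_cm2 * ew

def stern_geary_i_corr(R_p_ohm_cm2, b_a, b_c):
    """i_corr = B / R_p with B = b_a*b_c / (2.303*(b_a + b_c)) (cf. eqns (6)-(7))."""
    B = b_a * b_c / (2.303 * (b_a + b_c))
    return B / R_p_ohm_cm2

if __name__ == "__main__":
    ew_x80 = 27.92                                   # value quoted in the text
    # illustrative Tafel slopes (V/decade) and polarization resistance (ohm cm^2)
    i_corr = stern_geary_i_corr(R_p_ohm_cm2=5.0e3, b_a=0.06, b_c=0.12)
    print("i_corr =", i_corr, "A cm^-2")
    print("CR     =", corrosion_rate_mm_per_year(i_corr, ew_x80), "mm y^-1")
    print("MR     =", mass_loss_rate_g_m2_day(i_corr, ew_x80), "g m^-2 d^-1")
```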
The working electrode has different relaxation times in response to the applied signal. Therefore, the electrochemical parameters related to corrosion of the electrode can be calculated from different frequency bands. Thus, the R p value can be calculated from the EIS measurement of the working electrode, the i corr value can be estimated from the impedance data analysis, and the mechanism of soil-steel interface corrosion can be monitored from the shapes of the impedance patterns. The corrosion-rate values obtained from the different approaches can then be compared to make the result more plausible and to reduce the error. 36 The EIS results for the X80 steel sample in silty soil with salt concentration as the independent variable and the temperature at 293.15 K are shown in Fig. 13. In general, R p can be calculated in two ways from the impedance spectra. 36 The first method is based on the impedance modulus: the low-frequency region of the EIS reflects the Faraday process, so the low-frequency impedance magnitude can be used to characterize the electrochemical kinetic parameters of the corrosion process (i.e., R p = |Z| 0.01 Hz ). The second method fits the R(Q(R(QR))) equivalent circuit (inset of Fig. 13) over a limited frequency range of the impedance spectrum, 10 000-0.01 Hz. The fitted curves are shown in Fig. 13 (black solid curves). The results confirm that χ 2 meets the requirements, with high accuracy and small deviation. The R p value is the sum of the resistance values of the relevant circuit components (i.e., R p = R 1 + R 2 ). In order to exclude the effect of the semicircle in the high-frequency region, which is mainly affected by corrosion products and by salt ions penetrating the cracks in the corrosion products, the fit range did not include the high-frequency region of 10 5 -10 4 Hz. Using this method, the calculated values of R p are listed in Table 3. With the formulas stated above (eqn (4)-(7)), the values of PR and MR were further calculated and are listed in Table 4. According to the information extracted from the EIS results, the values of PR and CR calculated by the two methods are almost the same, indicating that both calculation methods are reasonable. Moreover, their variation tendency is the same as that of the corrosion rate calculated from the polarization curves displayed above. This shows that, when electrochemical measurements are used to study the corrosion kinetics of steel in a corrosive soil environment, the polarization curve and impedance spectroscopy can be considered together. With the increase in ion concentration, the radii of the semicircular arcs in the Nyquist plot decrease, the reaction resistance decreases, and the corrosion rate and the extent of corrosion increase. The presence of chloride ions increases the conductivity of the soil electrolyte, and the soil-related R s value is relatively low in the chloride-containing silty soil. With increasing soil temperature, the cathodic hydrogen evolution and the anodic iron dissolution reactions accelerate simultaneously. However, the corrosion kinetics of the metal do not simply increase with the soil temperature; rather, they depend strongly on the ionic mobility. The directed movement of the ions in the soil pore solution accelerates with increasing temperature, leading to enhanced ion adsorption.
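The two routes to the polarization resistance described above can be sketched as follows; the constant-phase-element parametrization of the R(Q(R(QR))) circuit and the use of a generic least-squares fitter are assumptions, since the fitting software and settings used in the study are not given.

```python
# Sketch of the two polarization-resistance estimates discussed above.
# The CPE-based R(Q(R(QR))) parametrization is assumed; the actual fitting
# tool used in the study is not specified.
import numpy as np
from scipy.optimize import least_squares

def z_cpe(freq, Q, n):
    """Impedance of a constant-phase element, Z = 1 / (Q * (j*2*pi*f)^n)."""
    return 1.0 / (Q * (2j * np.pi * freq) ** n)

def z_circuit(freq, Rs, Q1, n1, R1, Q2, n2, R2):
    """R(Q(R(QR))): Rs + [CPE1 || (R1 + (CPE2 || R2))]."""
    inner = 1.0 / (1.0 / z_cpe(freq, Q2, n2) + 1.0 / R2)
    branch = R1 + inner
    outer = 1.0 / (1.0 / z_cpe(freq, Q1, n1) + 1.0 / branch)
    return Rs + outer

def rp_from_modulus(freq, z_meas, f_low=0.01):
    """Method 1: approximate R_p by |Z| at the lowest measured frequency (~0.01 Hz)."""
    idx = np.argmin(np.abs(freq - f_low))
    return np.abs(z_meas[idx])

def rp_from_fit(freq, z_meas, p0):
    """Method 2: fit the equivalent circuit over 1e4-0.01 Hz and return R1 + R2."""
    mask = (freq <= 1.0e4) & (freq >= 0.01)      # exclude the 1e5-1e4 Hz region
    f, z = freq[mask], z_meas[mask]

    def residuals(p):
        zm = z_circuit(f, *p)
        return np.concatenate([zm.real - z.real, zm.imag - z.imag])

    fit = least_squares(residuals, p0, bounds=(0, np.inf))
    Rs, Q1, n1, R1, Q2, n2, R2 = fit.x
    return R1 + R2
```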
The strongly adsorbing ions tend to cluster at passivation-film defect sites, which subsequently decreases the soil oxygen content and the corrosion rate, while the formation of dense corrosion deposits blocks the delivery of ferrous ions, making the diffusion rate slower than the iron oxidation. In this case, the electrode surface shows high chemical inertness, which enhances the resistance of the steel and hinders the corrosion process. Thus, the corrosion resistance of X80 pipeline steel could likely be improved at a higher soil temperature. Taken together, in the corrosive soil environment within the range of concentrations used in this study, when the soil temperature is 293.15 K, the corrosion of the X80 steel samples in the chloride-containing silty soil intensifies drastically as the ion concentration increases. The corrosion of X80 steel in the sulfate-containing soil environment first decreases and then continues to increase. The corrosivity of the chloride-containing soil environment is significantly higher than that of the sulfate-induced corrosion environment. 3.1.3. Mechanism analysis. The simultaneous presence of the electrolyte (salt-containing solution) and the oxidant (oxygen) in the soil is a sufficient condition for corrosion to occur. The salt ions play a facilitating role in the cathodic reaction of the steel. When salt ions present in the soil pore solution act at the metal-soil pore interface, local corrosion occurs. The different salts present in the soil pore solution have diverse effects on the corrosion behavior, resulting in differences in corrosion resistance. At a constant soil temperature of 293.15 K, the corrosion of X80 steel in silty soil containing chloride ions is significantly stronger than that in the sulfate corrosion environment. Part of the reason is that chloride ions show stronger adsorption behavior. Through the adsorption-dissolution process between chloride ions and metal atoms, corrosion can be accelerated. The adsorption of chloride ions on the metal surface is essentially a process of exchange with H 2 O molecules. The X80 steel surface is positively charged, and the oxygen atoms of lattice water molecules in the soil pore solution can adsorb on the metal surface atoms owing to the high electronegativity of oxygen, as shown in Fig. 14(b). Nevertheless, chloride ions feature typical physical adsorption; under electrostatic interaction they adsorb on the positively charged surface, preferentially at defects of the metal crystal surface (Fig. 14(c) and (d)). Simultaneously, chloride ions react with two adjacent metal atoms, decrease the total free energy of the metal surface, and repel the water molecules initially adsorbed on the metal surface. These effects result in competitive adsorption between water molecules and chloride ions on the metal crystal surface. The water molecules initially adsorbed near the metal atoms gradually move away from the metal surface over time. Chloride ions interact with the electrode surface to form covalent or coordination bonds. The metal atoms on the surface are attracted by the chloride ions in the soil pore solution, causing them to deviate from their original lattice positions and then vibrate near their equilibrium positions. Concurrently, the diffusion of metal atoms is affected, resulting in disordered movement of metal ions and an increase in the activity of the anodic metal ions. 37
The adsorption of chloride ions likewise causes them to continue to diffuse and adsorb inward due to the attraction of other atoms on the crystal surface, which increases the concentration and content of active anions inside the crystal defects. Gradually, the chloro-complex formed on the steel surface replaces the more stable oxide ion and becomes the active site of corrosion, reducing the activation energy of the anodic dissolution process of the metal and eventually accelerating the corrosion rate. The other possible reason is indicated by the XPS results: the corrosion products produced in the soil corrosion system in this test consisted mainly of FeO, FeOOH, Fe 3 O 4 , and other iron oxides (as shown in Fig. 14(a)). These corrosion products continue to react chemically with the strongly adsorbed chloride ions. For instance, Fe(OH) 2 reacts with chloride ions to release ions that can continue to participate in the reaction (eqn (8)), thus increasing the local electrochemical activity. Moreover, FeOOH is dissolved by chloride ions to form the layered compound FeOCl (eqn (9)). FeOCl belongs to the orthorhombic system, appears as stacks of irregular lamellae, and the interlayer interfaces are relatively fuzzy (as shown in Fig. 4(b)). 24,38,39 The unstable compound FeOCl further decomposes into hydroxide anions, ferric ions, and chloride ions (eqn (10)). The presence of trivalent iron and chloride ions accelerates the corrosion process, causes corrosion under the passivation film on the metal surface, and increases the occurrence of pitting corrosion. In summary, the corrosion kinetics of X80 steel in the chloride-containing soil environment are even more dramatic. 3.2. Corrosion behavior of X80 steel in silty soil with temperature as the independent variable More systematic exploration is required to investigate this topic. Therefore, the soil temperature range was set from 283.15 to 263.15 K, and electrochemical tests were performed on X80 steel samples at ±5 K temperature intervals. 3.2.1. Electrochemical characterization. The stable values from the OCP monitoring of the X80 steel in silty soil with temperature as the independent variable are shown in Fig. 15. It is found that with decreasing soil temperature the OCP values show an overall downward trend, and the OCP difference (Δ OCP ) first increases and then decreases, ranging between 0.006 and 0.147. The minor change of the OCP values within a certain range may be related to the microstructural properties of the steel, the physico-chemical parameters of the soil, and the soil-steel interface reaction. Therefore, subsequent electrochemical measurements are used to study the electrochemical reactions that may occur at the soil-steel interface. The results from the electrochemical measurements of the X80 steel in medium-saline silty soil with temperature as the independent variable are shown in Fig. 16 and 19. During the polarization process, electrochemical, mass-transport, chemical, and adsorption-desorption processes occur on the steel surface. With the gradual positive shift of the potential up to the passivation potential of the material, the material primarily undergoes passivation. As the potential continues to increase, the passivation state of the sample steel changes, and the curve shows the feature of entering the trans-passive zone.
Fig. 16 illustrates that, with decreasing soil temperature, the polarization curve shifts toward lower currents, the passivation potential becomes higher, and the corrosion current density as well as the passive current density decrease. Consequently, both the cathodic hydrogen evolution reaction and the anodic iron dissolution are significantly reduced. In other words, the sample steel in the corrosive soil environment exhibits higher chemical inertness and better corrosion resistance. The corresponding corrosion parameters and surface corrosion morphologies of the X80 steels are shown in Fig. 17. In this temperature range, along with the lowering of the soil temperature, the corrosion kinetics of the X80 sample steel in the corrosive silty soil environment also show a decreasing tendency. In soil environments at temperatures above the freezing point, the chloride-containing silty soil is more corrosive. According to the surface and microscopic morphology of X80 steel presented in the inset of Fig. 17(c), the deposited corrosion layers, composed of oxy-hydroxide compounds and hydrous ferric oxides, have a bilayer structure at temperatures above the freezing point. The outer layer is a warm-brown sediment, and the inner layer corresponds to a black corrosion product. The petal-like sediment layer with surface cracks has already reached a critical thickness at 283.15 K, leading to poorer layer adhesion and causing the sediment to peel off easily from the surface. 40 When the temperature was decreased to 278.15 K, the surface of the X80 steel was covered with looser flocculent sediments. At 273.15 K, the corrosion rates of X80 steel in the two corrosive environments were similar, and when the soil temperature decreased to 263.15 K, the sulfate-containing silty soil was more corrosive than the chloride-containing silty soil. With decreasing temperature, the unevenly distributed corrosion deposits on the surface of X80 steel in the sulfate-containing silty soil decreased and showed a scattered trend. This is attributed to the formation of ice in the porous structure of the soil by the phase transition as the soil temperature decreases, resulting in a hypoxic microenvironment at the soil-steel interface. It is known that the dissolution rate of the metal is controlled by oxygen diffusion in the soil electrolyte. 41 The dissolution rate of the metal matrix becomes lower owing to the high oxygen diffusion resistance at this point. In order to gain better insight into the differences in corrosion behavior caused by the different silty soil systems, we determined the samples to be tested according to the PC test results (as shown in Fig. 17) and further supplemented the XPS characterization of typical interfacial corrosion deposits on X80 steel at different temperatures. The XPS spectra of O 1s and Fe 2p 3/2 are shown in Fig. 18. As depicted in Fig. 18, the corrosion deposits on the surface of X80 steel at different temperatures are mainly composed of iron oxides and some soil particles. Different soil temperatures have a relatively small impact on the type of corrosion deposits but greatly affect their composition and morphology. In particular, the decrease in soil temperature has an inhibitory effect on the kinetics of metal corrosion, which is consistent with the analysis of the surface topography at the macro- and micro-scales presented above.
The formation and transformation of the corrosion deposits on the surface of X80 steel differ among the different media. At temperatures above the freezing point, the O 1s spectrum of the corroded X80 surface can be decomposed into three Gaussian peaks corresponding to iron oxides (O 2− ), hydroxyl groups (OH − ) and soil particles (SiO 2 ), respectively. 30,31 At temperatures below the freezing point, however, the O 1s spectrum can be decomposed into three peaks attributed to hydroxyl groups (OH − ), soil particles (SiO 2 ) and a small amount of iron oxides (O 2− ), respectively. The XPS spectra of Fe 2p 3/2 can be decomposed into three major peaks, assigned to FeOOH, Fe 2 O 3 , and Fe 3 O 4 , respectively. 16,42 It can be seen that the content of corrosion deposits decreases with decreasing soil temperature. At temperatures above the freezing point, the Fe 2 O 3 content is higher. This is because a dehydration reaction took place, and the poorly crystalline FeOOH was converted into the more stable Fe 2 O 3 . At temperatures below the freezing point, the ice phase gradually precipitated from the pore solution to the soil-steel interface, hindering the oxygen-spreading depolarization. Since a relatively low concentration of iron ions flows into the soil as the soil temperature decreases, the degree of corrosion decreases, and the content of corrosion products formed at the soil-steel interface decreases. The EIS results changed markedly under the different soil temperatures (as shown in Fig. 19). In the Nyquist plots of the X80 steels, the centers of the high-frequency semicircles are positioned below the real axis, indicating the formation of a coarse and non-homogeneous product layer after the corrosion tests. Moreover, with decreasing temperature, the radii of the low-frequency arc and the high-frequency capacitive arc increase, the impedance modulus at 0.01 Hz (Fig. 19(a-2) and (b-2)) increases, and the corrosion rate decreases. This is consistent with the PC results mentioned above. The diffusion impedance in the low-frequency region indicates the diffusion rate of oxygen through the porous corrosion-product layer, and oxygen reduction occurs on the active metal surface at the bottom of the porous product layer. In soil environments at temperatures below the freezing point, the ice phase gradually precipitates from the pore solution, and its existence reduces the mass-transfer efficiency of oxygen. At this stage, the soil environment exhibits very poor electrical conductivity. In the phase-angle diagrams (Fig. 19(a-3) and (b-3)), the change of the maximum phase angle indicates a change of the electrochemical reaction occurring on the electrode surface. Clearly, the maximum phase angle in the sulfate-containing silty soil undergoes a larger shift with decreasing soil temperature, indicating that at temperatures below the freezing point its corrosion degree is greater than that in the chloride-containing silty soil. 3.2.2. Average corrosion rate. The weight-loss method is a useful measure of metal corrosion. 35 From the weight-loss results, the average corrosion rate (ACR) of the X80 steel was determined (as shown in Table 5). The results exhibit a decrease in ACR with decreasing soil temperature. The conductivity of silty soil deteriorates at temperatures below the freezing point, as the ice phase gradually precipitates from the pore solution and impedes the flow of electrical current.
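For reference, the weight-loss (mass-loss) corrosion-rate conversion mentioned above can be sketched as below; the ASTM G1-type constant is assumed, and the example inputs are placeholders rather than the measured values of Table 5.

```python
# Average corrosion rate from weight loss (ASTM G1-style conversion assumed).
# Inputs below are placeholders, not the measured values from Table 5.

def average_corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=7.86):
    """ACR (mm y^-1) = K * W / (A * t * rho), with K = 8.76e4 for these units."""
    K = 8.76e4   # converts g, cm^2, h, g/cm^3 to mm per year
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Example: 25 mg lost from a 10 cm^2 coupon after 30 days of exposure.
acr = average_corrosion_rate_mm_per_year(mass_loss_g=0.025, area_cm2=10.0, hours=30 * 24)
print(f"ACR = {acr:.4f} mm per year")
```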
At temperatures above the freezing point, the corrosion of X80 steel in the chloride-containing silty soil is significantly stronger than that in the sulfate-containing silty soil. Nevertheless, at temperatures below the freezing point, the sulfate-containing silty soil is more corrosive, and its ACR is twice that of the chloride-containing silty soil. The trend of the weight-loss results is the same as the trend of the corrosion rate calculated from the polarization curves above, which confirms the difference in corrosion behavior caused by chloride ions and sulfate ions in silty soil at different temperatures. 3.2.3. Mechanism analysis. After the metal comes into contact with the soil, the soil-steel interface undergoes a complex and special corrosion process owing to the non-homogeneity and porous nature of the soil. The anodic reaction of the corrosion process corresponds to the anodic dissolution of the metal or the formation of metal oxides and hydrous metal oxides, that is, the release of dissolved metal ions and the deposition of oxides or hydroxides. Metal-ion diffusion has a relatively significant effect on the soil corrosion process. Notably, the migration of dissolved oxygen and iron ions affects the formation process and the proportions of the corrosion products formed. Fig. 20 shows the anodic iron dissolution reaction mechanism and the iron-ion diffusion model. [43][44][45] The upper and lower parts correspond, respectively, to the electrolyte environments containing sulfate ions and chloride ions. Taking the X80 steel as an example, a thin liquid membrane forms at the interface after the soil-steel system comes into contact. When dissolution of the iron anode occurred and Fe 2+ was formed, a concentration gradient was generated, leading to gradual diffusion. 16 The soil electrolyte has a large inherent heterogeneity; therefore, the vast majority of iron ions continued to react to form iron oxides or oxy-hydroxide compounds attached to the steel surface, while a small fraction diffused into the pores of the soil. The structure and particle size of the soil determine its permeability and directly affect the migration rate of the liquid and gas components present in the soil. 16 Soil is a complicated system in terms of composition and properties; therefore, the corrosion resistance of the metal can be improved compared with that in a simulated soil solution. 46,47 The composition and state of the soil pore solution near the metal surface also affect the soil corrosion process. Specifically, under aerobic conditions, the main reduced mediators that affect the corrosion behavior are the pore solution and oxygen. The mass-transfer process has a substantial impact on the corrosion rate. In other words, the content of the effectively reduced mediator in the electrolyte exerts a decisive control over the corrosion process. 42 Soil corrosion at temperatures below the freezing point is a complex stage controlled by the freezing process. The decrease of soil corrosiveness at temperatures below the freezing point is mainly attributed to the ice phase gradually precipitating from the pore solution and blocking electric/ionic conduction. Moreover, the oxygen content of the soil pores gradually decreases; the oxygen becomes increasingly dispersed in the pore solution further away from the crystalline plane, which leads to less adsorption on the plane and results in depletion of the oxidizing agent on the metal surface.
In this case, mainly the hydrogen-ion reduction reaction occurs at the metallic cathodes, generating hydrogen atoms on the surface. However, the combination of atomic hydrogen to form hydrogen molecules requires a high activation energy, making it difficult to sustain the corrosion cells, and an adsorbed film of hydrogen atoms forms on the cathode surface, thus hindering the cathodic reaction. The corrosion of X80 steel in the sulfate-containing silty soil at temperatures below the freezing point was significantly greater than that in the chloride-containing environment. One possible reason is that the adsorption of chloride ions becomes weaker with decreasing temperature. Another possible reason is that Na 2 SO 4 absorbs water to form Na 2 SO 4 ·10H 2 O, and the volume expansion of mirabilite (the decahydrate) is about 3-4 times larger than that of the NaCl·2H 2 O developed from NaCl. 48 With increasing test time and decreasing temperature, the sulfate in the soil pore solution becomes more prone to crystallization-induced expansion. The expansion created by the hydration and migration of sulfate ions leads to bulging deformation of the soil and simultaneously generates some level of mechanical tension that acts on the soil-steel interface. The swelling deformation of the soil comprises salt-heaving and frost-heaving deformation. For the sulfate-containing silty soil, the salt-heaving deformation plays the determining role. Temperature changes act as the external condition for salt-swelling deformation, while the salt concentration and water content of the soil pore solution are the internal conditions. The deformation properties are affected primarily by the state of the salt in the soil (i.e., salt precipitation and dissolution) and by the soil structure. With the continuous decrease of temperature, salt-heaving deformation gradually occurs in the soil, and the deformation of the soil is cumulative and irreversible. 49 The larger the soil volume, the greater the spatial extent available for deformation. Therefore, the salt-heaving deformation of fine-grained soil is greater than that of coarse-grained soil. When the soil particles are mixed with crystallized powdered sodium sulfate, an overhead structure appears, and the interparticle pores provide space for deformation owing to the honeycomb-like structure of the silty soil. In parallel, larger amounts of adsorbed cations are present in the sulfate-containing silty soil. Owing to the strong hydrophilicity of the cations, they interact with neighboring colloids, forming a bound-water film around the colloidal and clay particles. This reduces the cohesion between the soil particles, separates them, and eventually causes swelling deformation of the soil. Under the phase transition from water to ice, the diffusion of solute molecules, hydrodynamic dispersion, and other physicochemical processes, the salt becomes attached to the metal surface by migration, aggregation, and precipitation. The adsorbed ions exhibit a certain electrical conductivity. Thus, the places where the salt ions adhere to the surface become corrosion-active sites again, forming localized corrosion cells on the metal surface, accelerating metal dissolution, and promoting the formation of pitting-type corrosion.
Conclusions This study offers a comprehensive and objective analysis of the synergistic effect of salt and temperature in the silty soil system on the corrosion of API 5L X80 pipeline steel. The following major findings emerged from this study: (1) At 293.15 K, the loose surface-deposited layer on X80 steel in the sulfate-containing silty soil contains more reduced-state substances and grows perpendicular to the surface; the layer is mostly distributed in fluffy clusters, and the degree of corrosion is relatively mild. For X80 steel in the chloride-containing silty soil, the deposited layer contains more oxidation-state substances and grows along the steel surface, which readily promotes the occurrence of under-deposit corrosion. (2) Impedance spectroscopy was used to determine the polarization resistance. In turn, it is concluded that the variation tendency of the penetration rate and mass loss rate of X80 steel in the various silty soil media is the same as that of the corrosion rate calculated from the polarization curves. Therefore, extracting information about the corrosion of the specimen from the impedance spectroscopy data is a plausible approach. (3) The corrosion kinetic analysis shows that the different ions in the silty soil pore solution cause differences in the corrosion behavior of X80 steel at different soil temperatures. At temperatures above the freezing point, owing to the adsorption-dissolution process between chloride ions and metal atoms and the continuous chemical reaction with the corrosion products, the corrosion of X80 steel in the chloride-containing silty soil is significantly stronger than that in the sulfate-containing silty soil. (4) At temperatures below the freezing point, soil corrosion is a complex stage controlled by the freezing process. The sulfate in the soil pore solution is more prone to crystallization-induced expansion. The expansion of the sulfate leads to bulging deformation of the soil body and simultaneously generates some level of mechanical tension acting on the soil-steel interface, resulting in greater corrosion of the X80 steel in the sulfate corrosion environment. Conflicts of interest There are no conflicts to declare.
2021-12-22T16:52:34.759Z
2021-12-20T00:00:00.000
{ "year": 2021, "sha1": "96f38531c2f787c69523982cbf4c73fa5242bd4a", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ra/d1ra08249c", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14e8bb3bb7f81ddf6146a10a6e75c29e7a6b2213", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
25537803
pes2o/s2orc
v3-fos-license
Effect of hypercortisolism on bone mineral density and bone metabolism: A potential protective effect of adrenocorticotropic hormone in patients with Cushing’s disease Objective To investigate the effects of Cushing’s disease (CD) and adrenal-dependent Cushing’s syndrome (ACS) on bone mineral density (BMD) and bone metabolism. Methods Data were retrospectively collected for 55 patients with hypercortisolism (CD, n = 34; ACS n = 21) from January 1997 to June 2014. BMD was examined in all patients, and bone turnover markers were tested in some patients. Healthy controls (n = 18) were also recruited. Results The lumbar spine and femoral neck BMD were significantly lower in the ACS and CD groups than in the control group. Lumbar BMD was significantly lower in the ACS than CD group. The collagen breakdown product (CTX) concentrations were significantly higher while the osteocalcin and procollagen type I N-terminal propeptide (PINP) concentrations were significantly lower in the ACS and CD groups than in the control group. The PINP concentration was significantly lower while the CTX concentration was significantly higher in the ACS than CD group. In the CD group only, lumbar BMD and serum adrenocorticotropic hormone had a significant positive correlation. Conclusions Bone turnover markers indicated suppressed osteoblast and enhanced osteoclast activities. PINP and CTX changes might indicate bone mass deterioration. Adrenocorticotropic hormone might be protective for lumbar BMD in patients with CD. Introduction Hypercortisolism is a syndrome characterized by chronically high cortisol levels. It is classified as either adrenocorticotropic hormone (ACTH)-dependent or ACTHindependent (i.e., adrenal tumor) based on the underlying cause. Osteoporosis has been recognized as a serious consequence of endogenous hypercortisolism since the first description by Harvey Cushing in 1932. 1 The reported prevalence of osteoporosis due to excess endogenous cortisol ranges from 50% to 59%. 2,3 Pathological fractures, particularly in the vertebral spine, can be the presenting manifestation of hypercortisolism. 4,5 The proposed mechanism by which excess glucocorticoid leads to the development of secondary osteoporosis is multifactorial. Early identification of the characteristic changes in bone mass with hypercortisolism assists with early identification of bone mass loss and timely treatment, thus reducing the occurrence of adverse events such as fractures. The current study was performed to compare the characteristics of the bone mineral density (BMD) and bone metabolism in patients with ACTH-dependent or ACTH-independent hypercortisolism versus healthy controls and to analyze the effects of ACTH on BMD in patients with adrenal-dependent Cushing's syndrome (ACS) and Cushing's disease (CD). Patients We retrospectively analyzed data for 78 patients with hypercortisolism, including 52 patients with CD and 26 patients with ACS. All patients had signs and symptoms of overt hypercortisolism. Patients were excluded from the study if they had subclinical Cushing's syndrome, forms of hypercortisolism other than CD and ACS (e.g., ectopic ACTH syndrome, adrenocortical hyperplasia, or adrenocortical carcinoma), recurring diseases, or comorbid diabetes mellitus prior to admission. Finally, 55 patients (21 with ACS and 34 with CD) were included in this study. Data for 18 healthy controls were collected from the physical examination center of the Tianjin Medical University General Hospital. 
Cushing's syndrome was diagnosed based on clinical features and the following endocrine work-up: measurement of serum cortisol and ACTH at 08:00, 16:00, and 24:00; measurement of 24-h urinary free cortisol (UFC) excretion, evaluated as the mean value of three different samples collected on consecutive days; an overnight low-dose dexamethasone suppression test (2 mg/d administered orally, with measurement of urine cortisol the following day) and overnight high-dose dexamethasone suppression test (8 mg/d administered orally, with measurement of urine cortisol the following day); and findings of a space-occupying lesion on adrenal computed tomography or pituitary magnetic resonance imaging. All healthy controls were confirmed healthy based on their clinical history, physical examination findings, and routine laboratory test results. The patients and controls were matched in terms of sex, age, body mass index (BMI), ethnicity, and geographic origin. The exclusion criteria for all patients and healthy controls in this study were pregnancy, alcoholism, smoking, chronic disease, and previous or current exposure to drugs affecting the pituitary-adrenal axis or bone metabolism. Biochemical evaluation In all patients and healthy controls, blood samples were obtained in a fasting state, and calcium, phosphorus, alanine aminotransferase, creatinine, total cholesterol, and triglycerides in the plasma samples were measured using standard enzymatic techniques (Hitachi 7600 Automatic Biochemistry Analyzer; Hitachi, Tokyo, Japan). Serum thyroid hormones (free triiodothyronine, free thyroxine, and thyroid stimulating hormone), gonadal hormones (testosterone and estrogen), adrenal cortex function (serum adrenocorticotropic hormone and cortisol), parathyroid hormone (PTH), and 24-h excretion of UFC were measured using chemiluminescent methods (Siemens Healthcare Diagnostics Inc., Erlangen, Germany). The 25(OH)D 3 level in the plasma samples was measured using an enzyme-linked immunosorbent assay (Immunodiagnostic Systems Ltd., The Boldons, UK). Serum bone turnover markers (serum osteocalcin [OC], procollagen type I N-terminal propeptide [PINP], and serum collagen type I cross-linked C-telopeptide [CTX]) were measured using a chemiluminescent method (Roche Diagnostics Ltd., Mannheim, Germany). Urinary calcium was measured using a colorimetric method (Roche Diagnostics Ltd.). This study was conducted in accordance with the principles of good clinical practice and the Declaration of Helsinki and was approved by the ethics committee of the Tianjin Medical University General Hospital. All participants provided written informed consent. Assessment of BMD Bone densitometry was performed by dual-energy X-ray absorptiometry using a Prodigy-GE densitometer (GE Healthcare, Chicago, IL, USA). The coefficient of variation of the BMD measurements in our laboratory was <0.1%. Data were analyzed using absolute BMD values (g/cm 2 ). Statistical analysis SPSS v19.0 (IBM Corp., Armonk, NY) was used for data analysis. All data were tested for a normal distribution. Normally distributed data are reported as mean ± standard deviation. Statistical analyses were performed using analysis of variance for multiple-group comparisons among the ACS, CD, and control (N) groups, and the Student-Newman-Keuls test was used for comparisons between two groups. The chi-square test was used for count data. Correlations were tested using Pearson's test. Linear regression analysis was used for further correlation testing.
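A minimal sketch of the group-comparison and correlation steps described above is given below; the study used SPSS, so the scipy/statsmodels calls are only a stand-in, Tukey's HSD replaces the Student-Newman-Keuls post-hoc step (which is not directly available in these libraries), and all numerical arrays except the male/female counts quoted in the text are synthetic placeholders.

```python
# Sketch of the statistical workflow: one-way ANOVA across the three groups,
# a pairwise post-hoc step (Tukey HSD as a stand-in for SNK), chi-square for
# count data, and Pearson correlation. Numbers are placeholders, not study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
bmd_acs, bmd_cd, bmd_n = (rng.normal(m, 0.1, 20) for m in (0.95, 1.05, 1.20))

# One-way ANOVA across ACS, CD and control (N) groups
f_stat, p_anova = stats.f_oneway(bmd_acs, bmd_cd, bmd_n)

# Post-hoc pairwise comparison
values = np.concatenate([bmd_acs, bmd_cd, bmd_n])
groups = ["ACS"] * 20 + ["CD"] * 20 + ["N"] * 20
posthoc = pairwise_tukeyhsd(values, groups)

# Chi-square test for count data (men vs women per group, counts from the text)
counts = np.array([[2, 19], [9, 25], [5, 13]])
chi2, p_chi2, _, _ = stats.chi2_contingency(counts)

# Pearson correlation, e.g. lumbar BMD vs serum ACTH in the CD group
acth_cd = rng.normal(60, 15, 20)
r, p_corr = stats.pearsonr(bmd_cd, acth_cd)

print(p_anova, p_chi2, r, p_corr)
print(posthoc.summary())
```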
Binary logistic regression analysis was used to compare the correlations of categorical variables. Statistical significance was set at P < 0.05. Patient characteristics This study included 21 patients with ACS, 34 patients with CD, and 18 healthy controls. Age, BMI, estimated disease duration, renal function, blood glucose concentrations, blood lipid concentrations, and thyroid hormone concentrations were not significantly different among the ACS, CD, and N groups (Tables 1 and 2). The ACS, CD, and N groups contained one, two, and one postmenopausal women and two, nine, and five men, respectively. The numbers of postmenopausal vs. nonpostmenopausal women and men vs. women were not significantly different among the three groups as shown by the chi-square test. The patients with hypercortisolism were divided into two groups according to their gonadal status (those with hypogonadism and those with normal gonads). Sex, BMD, and age were taken as independent variables. Binary logistic regression analysis showed that lumbar BMD and gonad function had no correlation (b ¼ 7.778). Comparison of adrenal cortex function The serum ACTH concentration, serum and urinary cortisol concentrations, and 24-h UFC concentration were significantly different among the three groups. The serum ACTH concentration was significantly lower in the ACS group than in the CD and N groups (P < 0.05). The serum and urinary cortisol concentrations were significantly higher in the ACS and CD groups than in the N group (P < 0.05); however, the 24-h UFC concentration in the ACS group was significantly higher than that in the CD group (P < 0.05) ( Figure 1, Table 3). Comparison of BMD Lumbar, femoral, and whole-body BMDs were significantly lower in the CD and ACS groups than in the N group (P < 0.05). Lumbar BMD was significantly lower in the ACS group than in the CD group (P < 0.05) ( Figure 2, Table 3). Comparison of bone turnover markers Bone turnover markers were measured for 10 patients in the ACS group, 11 patients in the CD group, and 9 patients in the N group. The serum calcium and phosphorus concentrations were significantly lower in the ACS and CD groups than in the N group (P < 0.05). However, the serum calcium corrected by serum albumin was not significantly different among the groups. No significant difference was observed in the serum alkaline phosphatase (ALP) or urinary calcium concentration between any of the groups. Additionally, no significant difference was observed in the serum calcium or phosphorus concentration between the ACS and CD groups ( Figure 3, Table 4). The serum OC and PINP concentrations in the ACS and CD groups were significantly lower than those in the N group (P < 0.05). The serum CTX concentrations in the ACS and CD groups were significantly higher than those in the N group (P < 0.05). The CTX concentrations were significantly higher and the PINP concentrations were significantly lower in the ACS group than CD group. No significant differences were observed in the serum 25 (OH)D 3 concentration among the groups. The serum PTH concentration in the ACS group was significantly higher than that in the CD and N groups (P < 0.05). Correlations with BMD In the patients with hypercortisolism, Pearson analysis showed a significant positive correlation between BMD and the PINP concentration (P < 0.05), a significant positive correlation between BMD and the OC concentration (P < 0.05), and a significant negative correlation between BMD and the CTX concentration (P < 0.05) (Figure 4). 
We found no significant correlations between BMD and the following parameters: the PTH, urinary calcium, 25(OH)D 3 , urinary cortisol, and serum cortisol concentrations (data not shown). In the CD group but not in the ACS group, there was a significant positive correlation between lumbar BMD and the serum ACTH concentration (Figure 5). No significant correlations were found for femoral or whole-body BMD (data not shown). Further linear regression analysis was performed in the CD group, in which age, sex, BMI, disease course, serum ACTH, and serum cortisol were taken as independent variables. The results showed that lumbar BMD and serum ACTH had a linear correlation (b = 0.524, P = 0.003). However, when more independent variables were added (e.g., free triiodothyronine, testosterone, estradiol, and serum calcium), the results showed that serum cortisol was associated with lumbar BMD (b = −0.723, P = 0.033). The small sample size may have resulted in insufficient statistical power. Discussion In the present study, the serum cortisol and urinary cortisol concentrations were significantly higher in patients with ACS and CD than in healthy controls. In the patients with hypercortisolism, the serum ACTH concentration was significantly higher in patients with CD than in those with ACS, while the urinary cortisol concentration was significantly higher in patients with ACS than in those with CD; however, the serum cortisol concentration was similar between the ACS and CD groups. The urinary cortisol concentration in the ACS group was significantly higher than that in the CD group, indicating that adrenal adenoma has a strong effect. Figure 4. Correlation between bone turnover markers and lumbar bone mineral density (BMD) in patients with hypercortisolism. OC, osteocalcin; PINP, procollagen type I N-terminal propeptide; CTX, collagen type I cross-linked C-telopeptide. Despite the different numbers of men and postmenopausal women among the three groups, no significant differences were observed.
The lumbar vertebrae contain more cancellous bone (trabecular bone) than do the femurs; therefore, the lumbar vertebrae are more vulnerable to endogenous glucocorticoid damage. 8 This is one possible reason for the differences among the various regions of BMD in patients with hypercortisolism, but further studies on this topic are needed. There were some differences in the biochemical markers of bone metabolism between patients with hypercortisolism and healthy controls. In the present study, the serum calcium and phosphorus concentrations were significantly lower in the patients with ACS or CD than in the healthy controls. The serum calcium that we examined was the total serum calcium, which comprises both free calcium and protein-bound calcium, and the level of protein-bound calcium varies as does the level of serum albumin. Affected by the higher levels of cortisol, the serum albumin concentration in patients with hypercortisolism is lower than that in healthy people; therefore, the serum calcium concentration corrected by albumin was not significantly different among the groups in the present study. In patients with hypercortisolism, the serum ALP concentration is slightly higher than that in healthy people; but the concentration was not significantly different among the groups in this study. The ALP that we measured was the total ALP, not one of the bone isoenzymes of ALP. The serum PTH concentration was significantly higher in the patients with ACS or CD than in the healthy controls; similarly, higher PTH concentrations were previously detected in patients with hypercortisolism than in healthy controls, and the authors suggested that this reflected active bone resorption and secondary hyperparathyroidism. 9 Meng et al. 10 showed that urinary calcium excretion was higher in patients with hypercortisolism and that hypercalciuria might decrease the serum calcium. Thus, the parathyroid glands were stimulated and the PTH secretion increased. Excess PTH stimulates bone resorption. Rozhinskaia et al. 11 reported that secondary hyperparathyroidism was found in 25% of patients with hypercortisolism. Of the bone turnover markers, OC is the most abundant in the bone matrix and is produced late in the bone formation process. It is synthesized by mature osteoblasts and then mixed into the bone matrix. Only a small fraction enters the circulation. Its function is not clear; it might affect bone mineralization and play a negative feedback role during bone remodeling. OC increases with bone absorption by osteoclasts; therefore, the OC concentration might reflect not only the reactionary state of bone formation but also the comprehensive state of bone transformation. CTX, a bone resorption marker, is a product of the cleavage of type I collagen due to osteoclast activity. PINP, a bone formation marker, is a metabolite of the pyrolysis of total type I tropocollagen (synthesized and secreted by osteoblasts) under the action of polypeptidase. Its serum concentration reflects the ability of osteoblasts to synthesize collagen. The serum OC and PINP concentrations in patients with hypercortisolism are lower and the CTX concentrations are higher than those in healthy people, suggesting that bone formation is suppressed and bone resorption is increased in hypercortisolism. 
The bone marker concentrations in the present study suggested significantly greater suppression of bone formation and bone resorption in the patients with ACS than in the patients with CD, which is supported by the lower lumbar BMD in patients with ACS than in those with CD. O'Brien et al. 12 suggested that glucocorticoids directly or indirectly accelerate apoptosis of osteoblasts and osteocytes and reduce osteoclast apoptosis, causing loss of bone mass. Recent studies have revealed obviously lower OC concentrations in patients with hypercortisolism than in healthy people; the concentrations quickly recovered postoperatively, reaching peak values at 6 months. 13,14 Regarding factors associated with BMD, there was a significant correlation between the ACTH concentration and lumbar BMD in the patients with CD; therefore, ACTH could potentially have a protective effect on lumbar BMD. However, the same relationship was not detected for the femoral neck and whole-body BMDs. ACTH reportedly stimulates proliferation of osteoblasts and increases collagen I mRNA in the osteoblastic cell line SaOs2 in vitro 15,16 ; ACTH binds to a specific member of the melanocortin receptor family, MC2R, which is expressed in osteoblastic cells in vivo and then promotes osteoblast proliferation. In vitro exposure of bone marrow stromal cells to ACTH and leptin promotes osteoblast differentiation by increased gene expression of osterix and collagen type I alpha. 17 Furthermore, ACTH might stimulate proliferation of osteoblasts in a dosedependent manner; i.e., ACTH at 10 nM stimulates proliferation, while lower ACTH concentrations might oppose osteoblast differentiation. 15,16 In the present study, ACTH was not associated with lumbar BMD in the patients with ACS, indicating that ACTH might not have a protective effect on lumbar BMD in patients with ACS; this finding might have been related to the low ACTH concentration in this group. ACTH might have a protective effect on bone; however, it is not sufficient to act against the adverse effects of high cortisol levels on bone metabolism in patients with CD. Conclusions Long-term exposure to excess glucocorticoids in patients with hypercortisolism results in bone loss, with loss of lumbar bone occurring earlier and more extensively. To enable early identification and timely intervention for bone loss, BMD examinations should be prioritized to prevent fractures. At the same time, it is important to conduct imaging of the vertebrae for early identification of fractures. Measurement of bone turnover markers (PTH, OC, CTX, and PINP) might help with early assessment of the influence of hypercortisolism on bone quality. ACTH might have a protective effect on bone; however, it is not sufficient to act against the adverse effects of high cortisol levels on bone metabolism. Because of the limited sample size and retrospective design in the present study, the effect of endogenous hypercortisolism on bone mass should be examined in a larger prospective cohort of patients.
2018-04-03T02:15:37.613Z
2017-08-29T00:00:00.000
{ "year": 2017, "sha1": "fce881f42ea3221e4cbe519c476e38526f98c3c3", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0300060517725660", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fce881f42ea3221e4cbe519c476e38526f98c3c3", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247451104
pes2o/s2orc
v3-fos-license
White Paper on Forward Physics, BFKL, Saturation Physics and Diffraction The goal of this whitepaper is to give a comprehensive overview of the rich field of forward physics. We discuss the occurrence of BFKL resummation effects in special final states, such as Mueller-Navelet jets, jet gap jets, and heavy quarkonium production. It further addresses TMD factorization at low x and the manifestation of a semi-hard saturation scale in (generalized) TMD PDFs. More theoretical aspects of low x physics, probes of the quark gluon plasma, as well as the possibility to use photon-hadron collisions at the LHC to constrain hadronic structure at low x, and the resulting complementarity between the LHC and the EIC are also presented. We also briefly discuss diffraction at colliders as well as the possibility to explore further the electroweak theory in central exclusive events using the LHC as a photon-photon collider. For successful runs at any collider, such as the LHC at CERN or the upcoming EIC at BNL, and future projects such as the FCC at CERN, it is fundamental to fully understand the complete final states. This obviously includes the central part of the detector that is used in searches for beyond-Standard-Model physics, but also the forward part of the detector, the kinematic region close to the outgoing particles after the collision. The detailed understanding of final states with high forward multiplicities, as well as those with an absence of energy in the forward region (the so-called rapidity gap), in elastic, diffractive, and central exclusive processes is of greatest importance. Some of these configurations originate from purely non-perturbative reactions, while others can be explained in terms of multi-parton chains or other extensions of the perturbative QCD parton picture such as the Balitsky-Fadin-Kuraev-Lipatov (BFKL) formalism. Future progress in this fundamental area of high energy physics requires the combination of experimental measurements and theoretical work. Forward physics addresses physics that takes place in the forward region of the detectors, defined at first as the region complementary to the central region. The latter is the region dominantly employed in the search for new physics at e.g. the Large Hadron Collider. It is also the central region where collinear factorization of hard processes, in terms of a partonic cross-section convoluted with corresponding collinear parton distribution functions, is well defined. 'Hard process' refers here to a reaction subject to strong interactions which is characterized by the presence of a hard scale M with M ≫ Λ QCD , where Λ QCD is the characteristic scale of Quantum Chromodynamics (QCD), of the order of a few hundred MeV. Physics in the forward region is, on the other hand, at first characterized by production at large values of rapidity with respect to the central region.
For hard reactions, where the underlying partonic sub-process is resolved, one therefore deals with the interplay of partons with a relatively large proton momentum fraction x_1, with x_1 ∼ 0.1 ... 1, and partons with very small proton momentum fractions x_2, down to 10^-6 in the most extreme scenario. Such small momentum fractions generally lead to a breakdown of the convergence of the perturbative expansion and require resummation, which is achieved by Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution. The latter gives rise to the so-called hard or BFKL Pomeron, which predicts a strong, power-like rise of the gluon distribution in the proton in the region where x_2 → 0. While such a rise is clearly seen in data, unitarity bounds prohibit this rise from continuing forever: at a certain value of x_2, it must slow down and eventually come to a halt. The latter is strongly related to the formation of an over-occupied system of gluons, known as the Color Glass Condensate, whose exploration is one of the central physics goals of a future Electron Ion Collider (EIC). While at an EIC a dense QCD state will be achieved through scattering of electrons on heavy ions, forward physics at the LHC allows for a complementary exploration, since high gluon densities are there at first produced through the low-x evolution of the gluon distribution in the proton. Forward physics therefore allows both for the exploration of BFKL evolution (perturbative evolution towards the low-x region) and for the search for, and investigation of, effects related to the onset of gluon saturation.

Besides the direct interaction of partons at very low proton momentum fraction, forward physics also allows for the observation of a different class of events, so-called diffractive events. The latter are characterized by the presence of large rapidity gaps and therefore probe physics beyond conventional collinear factorization. In the case of hard events, they give access to complementary information on the physics of high gluon densities as well as on corrections due to soft re-scattering. At the same time such processes are themselves of direct interest for the exploration of electroweak physics and physics beyond the Standard Model: due to the presence of rapidity gaps, such events are characterized by a very small number of particles in the final state and therefore allow for very clean measurements with a strongly reduced background, in comparison to conventional LHC measurements. Closely related to such diffractive events are photon-induced reactions, which can be observed at the LHC. While such reactions can produce final states both in the central and the forward region, control of the forward region is of particular importance for them, since it allows us to check whether in a given event the scattering proton indeed stayed intact and in this way acted as the photon source. In such events, either one or both of the two scattering protons or ions at the LHC act as a photon source. The former case allows for the study of exclusive photon-hadron interactions at the highest center-of-mass energies and therefore yields another tool for the study of the highest gluon densities, with high precision; as for inclusive reactions, such exclusive reactions are complementary to measurements at the future Electron Ion Collider, since at the LHC high parton densities are predominantly generated by high energy evolution, while the EIC, due to its lower center-of-mass energy, relies on the nuclear enhancement.
Photon-photon interactions are, on the other hand, of high interest, since they provide very clean probes of electroweak and Beyond-the-Standard-Model physics. With both scattering hadrons intact after the interaction, the QCD background is suppressed to a minimum in such a reaction, which complements LHC searches for new physics based on inclusive events in the central region. The outline of this white paper is as follows: Sec. II is dedicated to attempts to pin down BFKL evolution at the LHC as well as to its actual use for phenomenology. Sec. III deals with high gluon densities, saturation, and their relation to TMD PDFs. Sec. IV deals with the investigation of

[Figure: Left: The azimuthal-angle difference distribution measured for Mueller-Navelet jets in the rapidity interval 6.0 < ∆y < 9.4. Right: Comparison of the measured ratio C_2/C_1 as a function of rapidity difference ∆y to SHERPA, HEJ+ARIADNE and analytical NLL BFKL calculations at the parton level [38].]

It is fundamental to test the dependence of the ∆φ correlations as a function of the outermost jets' p_T, in addition to the ∆y scan. The region of applicability of the BFKL formalism is expected to occur in cases where the outermost jets have similar p_T. At the same time, we are interested in studying the radiation pattern between the jets in the form of "mini-jets" between the outermost jets. Indeed, as the rapidity interval increases there is more phase space available for extra radiation to be emitted, so it is natural for the average jet multiplicity to increase. The number of mini-jets as well as the emission pattern in y-φ space could potentially be used in addition to the azimuthal angular decorrelation to further characterize MN dijet events. The main focus will be on the definition of more "exclusive" observables that exploit the two-jet angular correlations between the mini-jets and the outermost jets in y-φ space, together with a measurement of cos(∆φ) between the outermost jets. From the phenomenological studies so far, it became apparent that more precise theoretical work is needed (e.g. see [38]). In Refs. [39,40] new observables were proposed aiming at probing novel multi-Regge kinematics signatures. In order to define the proposed observables properly, we assume that a MN event is characterized by:

k_a, k_b : transverse momenta of the MN jets,
y_0 = y_a = Y, y_{N+1} = y_b = 0 : rapidities of the MN jets,
k_1, k_2, ..., k_N : transverse momenta of the minijets,
y_1, y_2, ..., y_N : rapidities of the minijets, with y_{i-1} > y_i.

The observables of Eqs. (2)-(4) are constructed from these quantities; Eq. (3) differs from the original definition in [39] since now i runs over the minijets and excludes the leading MN jets. R_ky incorporates a p_⊥ dependence which carries information related to the decoupling between the transverse and longitudinal components of the emitted gluons. For the proposed observables, one can see that events where the minijets have relatively low p_⊥, contrary to what one would naively expect, give a very significant contribution to the gluon Green's function (Fig. 1 in [39]) and consequently to the cross-section. The experimental analyses, however (mainly to deal with jet energy reconstruction uncertainties), impose a veto on the p_⊥ of any resolved minijet. Usually the p_⊥ veto value for ATLAS and CMS is Q_0 = 20 GeV, which is rather large if we compare it to the Q_0 = 1 GeV value which was the jet p_⊥ infrared cutoff for the plots in [39].
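As an illustration of how the azimuthal decorrelation observables discussed above are built, the following minimal Python sketch extracts the moments C_n = <cos(n(π − ∆φ))> and their ratio C_2/C_1 from a sample of toy dijet events. The event generation here is purely illustrative (a Gaussian smearing of the back-to-back configuration) and is not a BFKL prediction; sign and angle conventions may differ between analyses.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_mn_events(n_events, decorrelation=0.8):
    """Generate toy Mueller-Navelet dijet azimuthal-angle differences.

    The two jets are back-to-back (delta_phi = pi) up to a Gaussian smearing
    whose width mimics the decorrelation induced by extra radiation between
    the jets (purely illustrative, not a dynamical model)."""
    dphi = np.pi + rng.normal(0.0, decorrelation, size=n_events)
    # fold into [0, pi], as done for the measured |delta phi|
    return np.abs((dphi + np.pi) % (2 * np.pi) - np.pi)

def azimuthal_moment(dphi, n):
    """C_n = <cos(n (pi - delta_phi))> (one common convention)."""
    return np.mean(np.cos(n * (np.pi - dphi)))

dphi = toy_mn_events(100_000)
c1, c2, c3 = (azimuthal_moment(dphi, n) for n in (1, 2, 3))
print(f"C1 = {c1:.3f}, C2 = {c2:.3f}, C3 = {c3:.3f}")
print(f"C2/C1 = {c2 / c1:.3f}, C3/C2 = {c3 / c2:.3f}")
```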
Here we perform a first comparison between the predictions from a fixed-order calculation and a BFKL-based computation for the observables described in Eqs. 2, 3 and 4. We focus on events where two jets with rapidity y_a in the forward direction and y_b in the backward direction can be clearly identified. In order for the BFKL dynamics to be relevant, the difference Y = y_a − y_b needs to be large enough so that terms of the form α_s^n Y^n are important order-by-order to get a good description of the partonic cross-section, which can be written in the factorized form

σ̂(Q_1, Q_2, Y) = ∫ d²k_a d²k_b φ_A(Q_1, k_a) f(k_a, k_b, Y) φ_B(Q_2, k_b).

In this expression φ_{A,B} are impact factors depending on the external scales, Q_{1,2}, and on the off-shell reggeized gluon momenta, k_{a,b}. The gluon Green function f depends on k_{a,b} and the center-of-mass energy in the scattering, ∼ e^{Y/2}. Here, we will work at leading order (LO) with respect to an expansion in the strong coupling constant α_s; however, for BFKL phenomenology at the LHC it is mandatory to work within the next-to-leading order (NLO) approximation for both the impact factors and the gluon Green's function, which introduces the dependence on physical scales such as the one associated with the running of the coupling and the one related to the choice of energy scale in the resummed logarithms [41-44]. It is possible to write the gluon Green function in an iterative way in transverse momentum and rapidity space at LO [45] and NLO [46,47]. In the iterative solution at LO (for the NLO expressions see Refs. [46,47]), the quantity ω(q) = −(α_s N_c/π) ln(q²/λ²) corresponds to the gluon Regge trajectory, which carries a regulator, λ, of infrared divergences. All these expressions have been implemented in the Monte Carlo code BFKLex, which has already been used for different applications ranging from collider phenomenology to more formal studies in the calculation of scattering amplitudes in supersymmetric theories [48-53]. For the fixed-order QCD computation of the observables in Eqs. 2, 3 and 4 we use POWHEG [54-56] and Pythia 8 [57]. In both the BFKL-based computation and the fixed-order one, the anti-kt jet clustering algorithm has been used, as implemented in fastjet [58,59]. Together with the kinematic cuts of the analysis, the jet radius was taken to be R = 0.5 and the NNPDF31 [60] PDF sets were used. In Figs. 2 and 3 we present some preliminary plots of the observables defined in Eqs. 2, 3 and 4. At the moment, there are no clear conclusions to draw here; this is still work in progress and the final results will be reported elsewhere.

One well-known issue in the BFKL approach is that NLO corrections to the Green's function turn out to be large and with opposite sign with respect to the LO contribution. This is generally true also for the impact factors, describing the transition in the fragmentation region of the colliding particles, all of which results in a strong instability of the high-energy series. A notable example in this respect is represented by the Mueller-Navelet reaction, where instabilities can be damped only by unnaturally large values of the renormalization and factorization scales [29,30,71], obtained through suitable optimization schemes, such as the Brodsky-Lepage-Mackenzie (BLM) method [103-106]. This leads to a substantial lowering of cross sections and hampers any chance of making precision studies.
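The comparison above uses anti-kt jets with R = 0.5 (via fastjet in the actual computation). For completeness, here is a minimal, self-contained Python sketch of the anti-kt clustering algorithm itself; it only illustrates the jet definition and is in no way a substitute for the FastJet library.

```python
import math

def antikt_cluster(particles, R=0.5, ptmin=20.0):
    """Minimal anti-kt clustering with E-scheme recombination.

    particles: list of (px, py, pz, E) tuples.  Returns jets with pT > ptmin.
    O(N^3) illustration only; real analyses use FastJet."""
    def pt2(p):
        return p[0] * p[0] + p[1] * p[1]

    def rapidity(p):
        return 0.5 * math.log((p[3] + p[2]) / (p[3] - p[2]))

    def dij(a, b):
        dphi = abs(math.atan2(a[1], a[0]) - math.atan2(b[1], b[0]))
        dphi = min(dphi, 2.0 * math.pi - dphi)
        dr2 = (rapidity(a) - rapidity(b)) ** 2 + dphi ** 2
        return min(1.0 / pt2(a), 1.0 / pt2(b)) * dr2 / (R * R)

    objs = [tuple(map(float, p)) for p in particles]
    jets = []
    while objs:
        # smallest beam distance d_iB = 1/pT_i^2
        i_beam = min(range(len(objs)), key=lambda i: 1.0 / pt2(objs[i]))
        best_kind, best_idx, best_d = "beam", i_beam, 1.0 / pt2(objs[i_beam])
        # smallest pairwise distance d_ij
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                d = dij(objs[i], objs[j])
                if d < best_d:
                    best_kind, best_idx, best_d = "pair", (i, j), d
        if best_kind == "beam":
            jet = objs.pop(best_idx)
            if math.sqrt(pt2(jet)) > ptmin:
                jets.append(jet)
        else:
            i, j = best_idx
            merged = tuple(objs[i][k] + objs[j][k] for k in range(4))
            objs = [p for k, p in enumerate(objs) if k not in (i, j)]
            objs.append(merged)
    return jets

# tiny usage example: two collimated particles merging into one jet, one
# separate hard particle, and one soft particle falling below the threshold
parts = [(30.0, 1.0, 40.0, 50.1), (28.0, -1.5, 38.0, 47.3),
         (-25.0, 2.0, -60.0, 65.1), (0.3, 0.2, 1.0, 1.1)]
for jet in antikt_cluster(parts, R=0.5, ptmin=20.0):
    print(f"jet: pT = {math.hypot(jet[0], jet[1]):.1f} GeV, "
          f"y = {0.5 * math.log((jet[3] + jet[2]) / (jet[3] - jet[2])):.2f}")
```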
Recently, however, a set of semi-hard reactions was singled out that exhibits a first, clear stability of the typical BFKL observables under higher-order corrections calculated at natural scales. This is the case for the forward emission of objects with a large transverse mass, such as Higgs bosons [61,107-109] and heavy-flavored jets [110-112], studied with partial NLO accuracy. Strong stabilizing effects at full NLO emerged in recent studies of inclusive emissions of Λ_c baryons [109,113] and bottom-flavored hadrons [62]. Here, corroborating evidence was provided that the characteristic behavior of variable-flavor-number-scheme (VFNS) collinear fragmentation functions (FFs) describing the production of those heavy-flavored bound states at large transverse momentum [114-116] acts as a fair stabilizer of the high-energy dynamics. We refer to this property, namely the existence of semi-hard reactions that can be studied in the BFKL approach without applying any optimization scheme or artificial improvements of the analytic structure of the cross section, as the natural stability of the high-energy resummation. Figure 4 (left) summarizes the key features of a well-behaved perturbative series in the case of the p_T-distribution of a forward Higgs inclusively produced together with a backward jet (rapidity difference ∆Y = 5) in proton-proton collisions at √s = 14 TeV: Born and NLO fixed-order predictions are clearly separated from LO and NLO BFKL, and the latter show a very moderate dependence on scale variation. Similar features are seen if a bottom-flavored hadron is detected instead of a Higgs boson in the forward region, see Fig. 4 (right). This supports the statement that high-energy emissions in forward regions of rapidity bring along a high discovery potential and a concrete opportunity to widen our understanding of hadronic structure and, more generally, of strong interactions at new-generation colliders, such as the EIC [117-119], the HL-LHC [120], the International Linear Collider (ILC) [121], the Forward Physics Facility (FPF) [122,123], and NICA-SPD [124,125].

C. Unintegrated gluon distribution (UGD)

Inclusive emissions of single forward particles represent a golden channel to access the proton content at low x via an unintegrated gluon distribution. The original definition of the unintegrated gluon distribution relies on high energy factorization and Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution [1-3,5]. It takes the form of a convolution in transverse momentum space between the BFKL Green's function and the proton impact factor. The Green's function is process-independent and accounts for the resummation of small-x logarithms, while the proton impact factor represents the non-evolved part of the density and is of non-perturbative nature. Our knowledge of the proton impact factor is very limited, and different models for it and for the unintegrated gluon distribution itself have been proposed so far. First analyses of unintegrated gluon distributions were performed in the context of deep-inelastic-scattering (DIS) structure functions [127,128]. Subsequently, the unintegrated gluon distribution was probed via the exclusive electro- or photo-production of vector mesons at HERA [129-139] and the EIC [126,140,141], the single inclusive heavy-quark emission at the LHC [142], and forward Drell-Yan production at LHCb [143-146].
In Fig. 5 we show the dependence on the hard scale Q² of seven different unintegrated gluon distributions, presented in Section 3 of Ref. [126]. To be specific, we single out the exclusive production of a ρ-meson in lepton-proton collisions via the sub-process in which a photon with virtuality Q² and polarization λ_i is absorbed by a proton and a ρ-meson with polarization λ_f is detected in the final state. The two spin states λ_{i,f} can be longitudinal (0) or transverse (1). The (00) combination gives rise to the longitudinal cross section, σ_L(Q²), while the (11) one gives the transverse cross section, σ_T(Q²). Here the semi-hard scale ordering, W² ≫ Q² ≫ Λ²_QCD (with W the hard-scattering center-of-mass energy), is stringently preserved, and the small-x regime, x = Q²/W², is accessed. We further present new results for the EIC [117-119] at the reference energy of W = 30 GeV (right panel). We make use of the twist-2 (twist-3) distribution amplitudes for the longitudinal (transverse) configuration, and we gauge the impact of the collinear evolution of the distribution amplitudes describing the exclusive emission of the ρ via a variation of the non-perturbative parameter a_2(µ_0 = 1 GeV) in the range 0.0 to 0.6 (see Section 2 of Ref. [126] for further details). We point out that our predictions are spread over a large range. This provides us with clear evidence that polarized cross sections for the exclusive production of light vector mesons (such as the ρ particle) in lepton-proton collisions act as a discriminator for the unintegrated gluon distribution. We expect that future studies at the EIC will substantially extend our knowledge of the gluon content of the proton at small x.

[Figure 6: p_T-integrated prompt η_c hadroproduction cross section (left panel, adapted from Ref. [183]) and prompt inclusive J/ψ photoproduction cross section (right panel, adapted from Ref. [184]) at NLO in α_s in the Color-Singlet Model for various choices of factorization (µ_F = ξ_F M) and renormalization (µ_R = ξ_R M) scales.]

D. BFKL resummation of NLO collinear factorization: Heavy quarkonium production

Another way BFKL dynamics manifests itself is within a direct resummation of contributions enhanced by logarithms of the partonic center-of-mass energy in collinear factorization. Such an approach allows one to combine high energy resummation with fixed-order theoretical predictions, which are in general available at a higher perturbative order than their counterparts obtained within high energy factorization. In the following we focus on the production of heavy quarkonia, bound states of cc̄ or bb̄ heavy quark pairs; see [173-175] for a recent review. New quarkonium-related measurements have been proposed for the experimental programs of the High-Luminosity LHC [120] and the Spin Physics Detector at NICA [124], as well as for the fixed-target program at the LHC [176,177]. It is believed that the non-relativistic nature of these bound states should allow for a description of the hadronization of the heavy quark-antiquark pair into an observed quarkonium state with a modest number of free parameters.
Despite the availability of several factorization approaches, such as the Color-Singlet Model [178], the Non-Relativistic QCD Factorization approach [179] and the more recent potential-NRQCD [180] and Soft-Gluon Factorization [181,182], none of them is yet capable of fully describing the rich phenomenology of inclusive heavy quarkonium production observables, which includes differential cross sections and polarization observables in proton-proton and lepton-proton collisions and in e+e− annihilation, as described in more detail in the reviews [120,124,173-175] cited above. As pointed out in Ref. [183] for the case of the p_T-integrated prompt η_c hadro-production cross section and in Ref. [184] for the total inclusive photoproduction cross section of prompt J/ψ, the collinear NLO calculations of these quantities, based on the Color-Singlet (CS) Model, become unreliable if the collision energy √s significantly exceeds the heavy quarkonium mass M (see Fig. 6). Large logarithmic corrections ∼ α_s^n ln^{n−1}(ŝ/M²) arise in this limit.

[Figure 7: p_T-integrated cross section as a function of √s_pp; plots adapted from Ref. [186].]

The necessary resummation can be addressed using the BFKL resummation in the form of High-Energy Factorization (HEF), provided by Refs. [147-149,185]. The formalism described in these papers allows one to perform a resummation of the Leading Logarithmic (LL) corrections ∼ α_s^n ln^{n−1}(ŝ/M²) to the partonic cross-section to all orders in α_s. As shown in [186], in order for this resummation to be consistent with the factorization scale dependence of collinear PDFs, subject to standard NLO DGLAP evolution, one needs to truncate the full LL(ln ŝ/M²) resummation for the partonic cross-section by taking into account only the Doubly-Logarithmic (DL) terms ∼ α_s^n ln^{n−1}(ŝ/M²) ln^n(q_T²/µ_F²) in the resummation functions of high energy logarithms. This allows one to obtain the double-logarithmically resummed expression σ̂_{ij}^{(HEF)}(ŝ, µ_F, µ_R) (i, j = q, q̄, g), which is guaranteed to reproduce the leading logarithmic terms in the ŝ ≫ M² asymptotics of the exact partonic cross-section up to NNLO in α_s; it therefore serves as an approximation to the latter in the Regge limit. In Ref. [186] the resummed expression was then combined with the exact collinear NLO result through the introduction of smooth weight functions 0 < w < 1, constructed by suitably adapting the Inverse Squared Errors Weighting (InEW) matching method of Ref. [187], in such a way that the NLO CF term is suppressed when ŝ ≫ M² and the resummation term is suppressed outside of the Regge limit. The numerical results of such a matched calculation are illustrated by the plots in Fig. 7. The left panel shows how the InEW matching of NLO collinear factorization and the resummed contributions works. In the right panel, the plot of the InEW-matched total cross section (red line) is shown together with its µ_F and µ_R scale-variation uncertainty (shaded band). One can see that the scale uncertainty of the matched prediction does not show any pathological behaviour at high energy, unlike the scale-variation plots in Fig. 6, and that it is reduced compared to the scale uncertainty of the LO cross section. Thus we conclude that the problems with the high-energy behaviour of p_T-integrated cross sections of quarkonium production described in Refs. [183,184] are manifestations of the necessity to perform a BFKL-type resummation of high-energy logarithms in the partonic coefficient functions of these processes.
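The matching strategy described above can be illustrated schematically: two predictions are combined with weights inversely proportional to the square of their estimated theoretical errors, so that each dominates where it is expected to be reliable. The Python sketch below is only a caricature of the InEW method of Ref. [187] and of its adaptation in Ref. [186]; the error models and all numbers are invented for illustration.

```python
import numpy as np

def inew_match(sigma_fixed_order, sigma_resummed, err_fixed_order, err_resummed):
    """Schematic inverse-squared-error weighting of two predictions.

    Each prediction is weighted by 1/error^2, so the fixed-order result
    dominates where the resummation is unreliable and vice versa.  In a
    real matching the error estimates are built from the neglected
    higher-order / power-suppressed terms."""
    w_fo = 1.0 / err_fixed_order ** 2
    w_res = 1.0 / err_resummed ** 2
    return (w_fo * sigma_fixed_order + w_res * sigma_resummed) / (w_fo + w_res)

# toy example: trust the fixed order at low energy, the resummation at high energy
s_hat_over_M2 = np.array([2.0, 10.0, 1e2, 1e4])
sigma_fo  = np.array([1.00, 0.80, 0.40, -0.10])   # NLO CF, pathological at high energy
sigma_res = np.array([0.70, 0.75, 0.45,  0.12])   # HEF resummation, approximate at low energy
err_fo  = 0.05 * np.log(s_hat_over_M2)            # grows with the unresummed logarithm
err_res = 2.0 / s_hat_over_M2                     # power-suppressed terms missing at low energy

print(inew_match(sigma_fo, sigma_res, err_fo, err_res))
```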
Nevertheless, such a resummed partonic cross section is only part of the full answer: at realistic energies √s the region M²/ŝ ∼ 1 gives a comparably large contribution, as is clear from the left plot in Fig. 7, so both contributions should be matched. Interestingly, at high energies the µ̂_F-prescription of Refs. [183,184] predicts a much lower cross section (dashed line in the right panel of Fig. 7) than the matched NLO + resummed calculation, which shows the importance of a systematic resummation formalism; see Sec. 2.5 of Ref. [186] for a more detailed discussion. In the future, these calculations based on the double-logarithmic resummation of high energy logarithms matched with NLO collinear factorization in the scheme described above need to be extended to the J/ψ photoproduction case, studied in Ref. [184], as well as to the rapidity- and p_T-differential hadroproduction cross sections of prompt η_c and χ_{c0,1,2} mesons, including the Color-Octet contributions for the latter. Another important intermediate-term goal is to study how the DL HEF+NLO CF calculation changes the prompt J/ψ p_T-spectrum in the Color-Singlet Model. And crucially, we must find a way to extend our formalism beyond the double-logarithmic approximation in order to reduce the scale-variation uncertainties of our results and make them useful, e.g., for the determination of the gluon PDF at low scales and small x. Experimentally, inclusive J/ψ photoproduction in a wider range of √s_γp than was available at HERA could be accessed using ultra-peripheral collisions at the LHC, as well as with future EIC data, which will lie at lower energies but will be more precise due to the increased luminosity. The χ_{c0,1,2} production cross sections over a wide range of energies could be studied by the fixed-target experiments using the LHC beams [176,177,188].

E. Diffraction: Gaps between jets

Another jet probe of BFKL dynamics at the LHC is the production of two high-p_T jets separated by a large (pseudo)rapidity interval void of particle activity, as proposed by Mueller and Tang nearly 30 years ago [189]. The rapidity gap signature between the jets is indicative of an underlying t-channel color-singlet exchange mechanism. The hard scale of the process, justified by the high jet p_T, allows for a treatment of this exchange in terms of perturbation theory. A natural mechanism in QCD to explain this process is BFKL Pomeron exchange between partons. This description is expected to become more justified as the jets become more separated in rapidity. Contributions based on DGLAP evolution are expected to be strongly suppressed in dijet events with a central rapidity gap by virtue of a Sudakov form factor that needs to be supplemented to the calculation. Thus, the jet-gap-jet process may allow us to directly access the small-x dynamics of interest, complementary to other standard probes of this regime of QCD interactions. Measurements of jet-gap-jet events have been presented by CDF, D0, and CMS at √s = 0.63, 1.8, 7, and 13 TeV [190-193]. At the Tevatron and at the LHC, the pseudorapidity gap between the jets is defined as the absence of particles with p_T > 200 MeV (or 300 MeV in some cases) in |η| < 1 between the highest-p_T jets. The threshold is constrained by the capability of the detectors to reconstruct charged-particle tracks and by the calorimeter noise energy threshold.
Experimentally, these events are very clean and can be separated from the overwhelming color-octet exchange dijet background using data-driven methods or with Monte Carlo generators. The observable extracted in these measurements is the fraction of color-singlet exchange dijet events in the inclusive dijet sample, f_CSE. The f_CSE fraction is measured as a function of the second-leading jet p_T^{jet2}, the pseudorapidity separation between the jets ∆η_jj ≡ |η_jet1 − η_jet2|, and, in some cases, a measure of the momentum imbalance between the jets, such as ∆φ_jj ≡ |φ_jet1 − φ_jet2|. Theoretical uncertainties related to the choice of PDF and the variation of renormalization and factorization scales partially cancel in f_CSE. Correlated experimental uncertainties related to jet energy corrections, luminosity, and acceptance and efficiency effects cancel in the ratio. The f_CSE fractions are of the order of 0.5-1%, depending on the collision energy and the dijet kinematics. This means that about 0.5-1% of the inclusive dijet cross section is due to hard color-singlet exchange. Previous phenomenological studies of the jet-gap-jet process were based on the PYTHIA6 and HERWIG6 Monte Carlo generators [194-198]. This is a good motivation to revisit the phenomenological predictions in light of recent developments in event generator tuning at the LHC and with the advent of NLO + PS generators for the calculation of the inclusive dijet production cross section. This helps us assess possible theoretical shortcomings and ideas for future experimental measurements. To understand these measurements in the context of BFKL dynamics, we have embedded the BFKL Pomeron exchange amplitudes at NLL accuracy with LO impact factors in the PYTHIA8 event generator. We use the recent CP1 tune of PYTHIA8, which has an improved phenomenology of initial- and final-state radiation, multiple parton interactions, and hadronization for a wide range of energies and collision systems, including 13 TeV pp collisions [199]. We use POWHEG+PYTHIA8 for the NLO+PS calculation of the inclusive dijet cross section using the CP5 tune of PYTHIA8. We compared our calculations to the measurements by the Tevatron and LHC experiments using the same rapidity gap selection as the experiments (p_T > 200 MeV in |η| < 1) and with a rapidity gap definition that is closer to the theoretical expectation (|η| < 1, no p_T requirement). In Fig. 8, we show a few predictions for 13 TeV on top of the measurement by CMS. In doing these studies, we discovered that initial-state radiation effects play an important role in the destruction of central gaps as one goes to larger √s. We find that the descriptions using a theoretical-like gap and the experimental gap agree with each other, modulo a global normalization factor, at 1.8 TeV and 7 TeV, but a clear disagreement is observed at 13 TeV. The theoretical gap prediction gives a better description of the data at 13 TeV. In investigating the source of the phenomenological differences between the gap definitions, we find that there is sensitivity to the modeling of fragmentation, with the additional production of color charges by ISR in PYTHIA8. The Run-2 tunes of CMS were fit to reproduce charged-particle spectra measurements in minimum-bias events split into single-diffractive (forward rapidity gap), non-diffractive, and inelastic topologies [200].
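A minimal sketch of a data-driven extraction of the f_CSE fraction defined above, in which the color-octet background is estimated from the charged-particle multiplicity distribution between the jets and extrapolated under the low-multiplicity excess; the background shape, the fit range and all numbers below are illustrative and do not reproduce any specific experimental procedure.

```python
import numpy as np

def f_cse(n_mult_counts, gap_bins=2, fit_bins=(5, 15)):
    """Schematic estimate of the color-singlet-exchange fraction.

    n_mult_counts[i] = number of dijet events with i charged particles in the
    inter-jet region.  The color-octet background is assumed here to fall
    exponentially at low multiplicity; it is fitted in `fit_bins` and
    extrapolated under the excess at multiplicity < gap_bins."""
    n = np.arange(len(n_mult_counts))
    lo, hi = fit_bins
    coeffs = np.polyfit(n[lo:hi], np.log(n_mult_counts[lo:hi]), 1)
    bkg = np.exp(np.polyval(coeffs, n))
    excess = np.clip(n_mult_counts[:gap_bins] - bkg[:gap_bins], 0, None).sum()
    return excess / n_mult_counts.sum()

# toy multiplicity spectrum: a smooth bulk plus a small excess at N = 0, 1
rng = np.random.default_rng(1)
background = 1e5 * np.exp(-0.15 * np.arange(30))
signal = np.array([600.0, 150.0] + [0.0] * 28)
counts = rng.poisson(background + signal).astype(float)
print(f"f_CSE ~ {f_cse(counts):.3%}")
```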
For a better phenomenological interpretation of the jet-gap-jet process, additional experimental input from measurements of minimum-bias events with central rapidity gap topologies, similar to the ones used for the jet-gap-jet process but without the requirement of high-p_T jets, will be necessary to further tune the ISR and fragmentation modelling effects.

[Figure 8: Predictions compared to the CMS measurement [193]. Theoretical predictions based on BFKL calculations at next-to-leading logarithmic accuracy assuming an experimental gap ("exp gap"), a theoretical-like gap ("strict gap"), and no gap requirement ("full BFKL") are presented. The bands represent uncertainties related to factorization and renormalization scale variations. The gap survival probabilities, indicated in the legend, were fit with a χ² scan.]

To get more insight into this aspect, future phenomenological calculations should be done by embedding the BFKL calculations in the HERWIG7 generator, which has a different evolution variable in the parton shower and a different fragmentation model. The present BFKL calculations for the jet-gap-jet process account for the resummation of logarithms of the energy at NLL accuracy using LO impact factors. The NLO impact factors were calculated in recent years [201,202], but they have not been incorporated into the phenomenological analysis yet. In addition, to improve the phenomenological description, it will be important to take into account the effect of wide-angle, soft gluon emissions into the gap region. These lead to so-called non-global logarithms, which are not resummed in the BFKL framework. The resummation is known exactly in the large-N_c limit, and is described by the Banfi-Marchesini-Smye equation [203]. The effect of these non-global logarithms on the jet-gap-jet topology is a suppression of the gluon-gluon processes relative to the quark-gluon and quark-quark processes [204]. Experimentally, future measurements could benefit from exploring different definitions of the pseudorapidity gap between the jets, for example by scanning the p_T threshold used to define the pseudorapidity gap in order to better control the aforementioned ISR effects, or by defining a "sliding" pseudorapidity gap interval event-by-event. To suppress the underlying event activity, one could target hadron-hadron collisions where at least one of the colliding hadrons remains intact due to Pomeron exchange. Such a measurement has been presented by CMS and TOTEM [193], demonstrating the feasibility of such studies. However, the sample size was rather limited and did not allow for a differential measurement of f_CSE. Special runs at the LHC at √s = 14 TeV with single proton-proton collisions and an integrated luminosity of L = 10 pb⁻¹ would allow for a highly differential measurement in a controlled hadronic environment. To date, we still do not have a global and satisfactory description of jet-gap-jet events from the point of view of QCD. In principle, given the hard scale of the process, the jet-gap-jet process should be describable in terms of perturbation theory, and could potentially be a venue for understanding BFKL dynamics. It is counter-intuitive that such a simple signature is more complicated to describe than the significantly "busier" inclusive dijet events. According to the Tevatron and LHC measurements, about 0.5-1% of the inclusive dijet cross section is due to t-channel hard color-singlet exchange. The subprocess for QCD hard color-singlet exchange is not currently implemented as a standard subprocess in modern Monte Carlo event generators.
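The gap survival probabilities quoted in the figure above were obtained from a χ² scan; a minimal sketch of such a one-parameter scan (best-fit normalization and ∆χ² = 1 interval) is given below, with invented data and theory values.

```python
import numpy as np

def chi2_scan(data, data_err, theory, s_values=np.linspace(0.05, 1.5, 291)):
    """One-parameter chi^2 scan for a global normalization S (e.g. a gap
    survival probability) multiplying a theory prediction.

    Returns the best-fit S, the 1-sigma interval from Delta(chi^2) = 1,
    and the minimum chi^2."""
    chi2 = np.array([np.sum(((s * theory - data) / data_err) ** 2) for s in s_values])
    i_min = int(np.argmin(chi2))
    within = s_values[chi2 <= chi2[i_min] + 1.0]
    return s_values[i_min], (within.min(), within.max()), chi2[i_min]

# toy f_CSE points (in %) and an un-normalized BFKL-like prediction
data     = np.array([0.55, 0.60, 0.72, 0.90])
data_err = np.array([0.06, 0.06, 0.08, 0.12])
theory   = np.array([1.00, 1.10, 1.30, 1.65])   # before applying the survival factor
s_best, (s_lo, s_hi), chi2_min = chi2_scan(data, data_err, theory)
print(f"S = {s_best:.2f} (+{s_hi - s_best:.2f} / -{s_best - s_lo:.2f}), "
      f"chi2_min = {chi2_min:.2f}")
```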
As the experimental precision of inclusive jet cross section measurements increases, the absence of t-channel color-singlet exchange subprocesses becomes more important, for example for PDF or α_s extractions. Thus, for the next years in high-energy physics, it will be important to reach a proper understanding of this process, both experimentally and theoretically.

Even though BFKL resummation can stabilize collinear factorization, this approach is expected to eventually break down due to the resulting high gluon occupation numbers. While the gluon distribution grows rapidly at small x, due to the radiation of more and more small-x gluons enabled by the large longitudinal phase space available at high energy, at some point the number of partons (gluons) occupying the same transverse area in the target hadron or nucleus becomes large, and the QCD-improved parton model, in which nearby partons are treated as non-interacting, ceases to be applicable to high energy collisions. This is the phenomenon of so-called gluon saturation. In the Color Glass Condensate formalism one treats this state with a large gluon occupation number as a classical color field generated by the large-x color degrees of freedom, generically called sources of color charge ρ. In this formalism a high energy hadronic or nuclear collision is then treated as a collision of two highly contracted classical color fields, i.e. shock waves. This is a highly non-trivial problem and so far not amenable to analytic solutions in general. A somewhat easier problem is the scattering of a dilute system of partons on a dense system of gluons described as a saturated state. In this so-called dilute-dense collision the relevant degrees of freedom are Wilson lines in the fundamental or adjoint representation, resumming multiple scatterings of a quark or gluon projectile parton on the classical color field A⁻ describing the target dynamics in the light cone gauge A⁺ = 0. Production cross sections in this approach involve two- and four-point correlation functions of Wilson lines, known as dipoles and quadrupoles (these are the only two correlation functions in the leading-N_c approximation). Quantum loop effects are then incorporated into this formalism via a functional renormalization group equation known as the JIMWLK equation [205-215], which in the Gaussian and large-N_c approximation reduces to a closed equation known as the BK equation [216,217]. While applications of the Color Glass Condensate to high energy collisions at HERA, RHIC and the LHC have yielded tantalizing hints of gluon saturation effects, there is still no firm evidence.

B. Transverse Momentum Dependent Parton Distribution Functions

Unlike collinear factorization, such an approach is based on high energy factorization, which yields cross-sections as convolutions in transverse momenta or coordinates, in contrast to the convolutions in hadron momentum fraction encountered in collinear factorization. In [165] it has been shown that, neglecting any higher-twist corrections, one finds in the small-x description of semi-inclusive observables the sought-after distributions which contain the information on the transverse momentum of partons inside the hadron: the Transverse Momentum Dependent (TMD) gluon distributions. These distributions allow for a three-dimensional imaging of hadrons: one accesses one longitudinal and two transverse dimensions of the momentum inside them.
Such TMD distributions are not only of interest for characterizing effects related to the presence of high gluon densities in hadrons. More generally, they allow one to obtain a 3D imaging of the proton content and to answer fundamental questions about the dynamics of strong interactions, such as the origin of the proton mass and spin; see Refs. [218,219] and references therein. The complete list of unpolarized and polarized gluon TMDs at leading twist (twist-2) was given for the first time in Ref. [220]. Note that this list is generic and is at first not restricted to high energy factorization and/or the presence of high parton densities. These TMDs however provide useful tools to map the information contained in correlators of multiple Wilson lines, Eq. (12). Tab. I contains the eight twist-2 gluon TMDs for a spin-1/2 target, using the nomenclature defined in Refs. [221,222]. The two functions on the diagonal of Tab. I respectively stand for the density of unpolarized gluons inside an unpolarized nucleon, f_1^g, and of circularly polarized gluons inside a longitudinally polarized nucleon, g_1^g. They are the counterparts of the well-known unpolarized and helicity gluon PDFs obtained within collinear factorization. According to TMD factorization, i.e. factorization in the limit where a certain transverse momentum k_⊥ is significantly smaller than a certain hard scale M, M ≫ k_⊥, all these densities embody the resummation of transverse-momentum logarithms, which constitutes their perturbative input. Much is known about this resummation [223-225], but very little is known about the non-perturbative content of these TMD distributions. It is then this non-perturbative content (from the point of view of TMD factorization) which promises to give information on the Color Glass Condensate. The distribution of linearly polarized gluons in an unpolarized hadron, h_1^{⊥g}, is particularly relevant at low x, since it leads to spin effects in collisions of unpolarized hadrons [226-231], whose size is expected to become more and more relevant as x diminishes. The Sivers function, f_{1T}^{⊥g}, gives, on the other hand, information about unpolarized gluons in a transversely polarized nucleon, and is relevant to the study of transverse-spin asymmetries emerging in collisions with polarized-proton beams; within the context of low-x physics it is of particular interest due to its connection with the QCD Odderon [232].

[Table I: Leading-twist gluon TMDs for spin-1/2 targets. U, L, T stand for unpolarized, longitudinally polarized and transversely polarized hadrons, whereas U, 'circular', 'linear' denote unpolarized, circularly polarized and linearly polarized gluons, respectively. T-even (odd) functions are given in blue (red). Black functions are T-even and survive the integration over the gluon transverse momentum.]

The phenomenology of f_{1T}^{⊥g} was discussed in Refs. [236-239]. Due to the shortage of experimental data on the gluon-TMD sector, exploratory analyses of gluon TMDs via simple and flexible models are required. Pioneering analyses along this direction were conducted within the so-called spectator framework [220,240,241]. Originally employed to model quark TMD distributions [221,242-246], it relies on the assumption that a gluon is extracted from the struck hadron, and that what remains is treated as an effective on-shell spin-1/2 object. Spectator-model T-even gluon TMDs at twist-2 were recently obtained in Ref. [247] (see also Refs.
[248-250]), while a preliminary calculation of the T-odd ones can be found in Refs. [251-253]. The T-even gluon correlator is taken at tree level and does not account for the gauge-link dependence, which appears in our model in the T-odd case. For an unpolarized proton, we identify the unpolarized distribution with the probability of extracting unpolarized gluons at given x and p_T, while the Boer-Mulders density represents the probability of extracting linearly polarized gluons in the transverse plane at x and p_T. The contour plots in Fig. 9 refer to the behavior in p_T of the ρ-densities in Eqs. (13) and (14), respectively, obtained at Q_0 = 1.64 GeV and x = 10⁻³ for an unpolarized proton virtually moving towards the reader. The color code is related to the size of the oscillation of each density along the p_x and p_y directions. To better catch these oscillations, ancillary 1D plots representing the corresponding density at p_y = 0 are shown below each contour plot. As expected, the density of Eq. (13) has a cylindrical pattern around the direction of motion of the proton. Conversely, the Boer-Mulders ρ-density in Eq. (14) presents a dipolar structure, which reflects the fact that the gluons are linearly polarized. The departure from cylindrical symmetry is emphasized at small x, because the Boer-Mulders function is particularly large there. From the analytic point of view, the ratio between the f_1^g and h_1^{⊥g} TMDs tends to a constant in the asymptotic limit x → 0⁺. This is in line with the prediction coming from linear BFKL evolution, i.e. that at low x the "number" of unpolarized gluons equals the number of linearly polarized ones, up to higher-twist effects (see, e.g., Refs. [254-258]). Therefore, a connection between our gluon TMDs and high-energy QCD dynamics has been established.

To access more dimensions of the partonic content, exclusive processes are necessary. The richest distributions one encounters in perturbative QCD processes are the so-called Generalized TMD distributions (GTMDs). They parameterize master correlators depending on 5 variables, (x, ξ, k_⊥², ∆_⊥², k_⊥ · ∆_⊥), where x and k_⊥ are, respectively, the fraction of the longitudinal target momentum and the transverse momentum carried by a parton, and ξ and ∆_⊥ are the longitudinal and transverse momentum transfer to the hadron. For (ξ = 0, ∆_⊥ = 0_⊥) the GTMD reduces to a TMD. In a given process, the set of GTMD distributions which are accessed depends on the final-state kinematics, but also on the parton and hadron polarizations, with repercussions on the measured final state. Multidimensional tomography of hadrons as a probe of Wigner distributions and TMD distributions has been a major focus of theoretical and experimental efforts, and it will be an important share of the physics goals of future colliders such as the Electron Ion Collider [259]. Examples of relevant processes, which also allow, at least in principle, for an analysis at next-to-leading order in the perturbative expansion, are meson production in γ^(*) p collisions [260], inclusive and diffractive dijets [170,259,261-264] (see also [265] for a first phenomenological application to HERA data), as well as exclusive pion production in unpolarized electron-proton scattering in the forward region, which is a direct probe of both the gluon Sivers function and the QCD Odderon [232].

C. Complementarity between EIC and LHC

There exists a very nice complementarity between EIC and LHC experiments.
Indeed, it is possible to gather important input on such (G)TMD distributions from photon-hadron reactions, which at the LHC are accessible through ultraperipheral proton-nucleus and nucleus-nucleus collisions; see Sec. V for a detailed discussion. A photoproduction process which provides the necessary hard scale for an analysis based on the perturbative QCD expansion is the production of quarkonia, in particular the diffractive exclusive photoproduction of J/ψ charmonia with negative charge parity, which allows one to study the distribution of gluons in the target. On the other hand, the same type of production of charmonia with positive charge parity, such as η_c or χ_c, permits studying gluonic exchanges with negative charge parity, corresponding to Odderon exchanges. A related process is exclusive diffractive meson production at large momentum transfer |t|, which could provide a new set of observables to reveal saturation effects. While HERA measurements are limited by statistics, UPC reactions at the LHC might allow for the observation of a different scaling in t when passing from a high energy kinematics governed by BFKL-like descriptions to extreme rapidities in which saturation effects are expected. Instead of using t as a hard scale, it is also possible to study diffractive states which contain at least one meson carrying a large p_⊥, originating from the collinear fragmentation of the virtual qq̄ pair produced by the photon. The large value of p_⊥ ensures that each of the q and q̄ carries an (opposite) large q_⊥ through the usual ordering of collinear fragmentation. Relying on the recent results obtained for the impact factors γ → qq̄g at leading order and γ → qq̄ at next-to-leading order, which were used for the computation of the γ → dijet impact factor at NLO, a complete NLO study could be done in the future. Most probably, the pion channel, which is the best known for fragmentation functions, would be the most promising. Last but not least, it is worth mentioning large-invariant-mass systems: in the spirit of time-like Compton scattering, in which the hard scale is provided by the virtuality of the emitted virtual photon, exclusive production at JLab of a γ-meson pair of large invariant mass is a very promising process to access Generalized Parton Distributions (GPDs) [266,267], another particular facet of the master correlators, obtained by integrating out their k_⊥ dependence. It turns out that the same process in the large-s limit belongs to the class of diffractive processes, therefore furnishing another probe for studying gluonic saturation. Thus, by covering a very wide kinematical range, the same process studied at JLab, the EIC (in photoproduction), and the LHC (in UPCs) would allow one to pass from a description based on collinear factorization involving a nucleon GPD to a high energy description in which linear and non-linear resummation effects are expected.

D. Forward dijets: from LHC to EIC

Dijets produced in the forward direction of the detectors are characterized by final states at large rapidities, and hence they correspond to events in which the partons from the nucleus carry a small longitudinal momentum fraction x. This kinematic setup is well suited to investigate the properties of dense partonic systems and the phenomenon of saturation.
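A concrete, if schematic, feel for the saturation scale invoked here can be obtained from the widely used Golec-Biernat-Wüsthoff (GBW) parametrization of the dipole amplitude; the parameter values in the sketch below are indicative of published GBW fits and are not taken from any of the studies discussed in this section.

```python
import numpy as np

def Qs2(x, x0=3.0e-4, lam=0.29):
    """GBW-like saturation scale in GeV^2: Q_s^2(x) = (x0/x)^lambda."""
    return (x0 / x) ** lam

def dipole_amplitude(r, x):
    """GBW dipole scattering amplitude N(r, x) = 1 - exp(-r^2 Qs^2(x)/4),
    with r in GeV^-1.  N -> 1 (the black-disc limit) for r*Qs >> 1."""
    return 1.0 - np.exp(-(r ** 2) * Qs2(x) / 4.0)

for x in (1e-2, 1e-4, 1e-6):
    r_sat = 2.0 / np.sqrt(Qs2(x))   # dipole size at which saturation sets in
    print(f"x = {x:.0e}:  Qs^2 = {Qs2(x):.2f} GeV^2,  r_sat ~ {r_sat:.2f} GeV^-1,"
          f"  N(r = 1 GeV^-1) = {dipole_amplitude(1.0, x):.2f}")
```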
The following study is based on the so-called small-x Improved Transverse Momentum Dependent (ITMD) factorization framework [168,171,268-271], which accounts for the exact kinematics of the scattering process with off-shell initial-state gluons, a gauge-invariant formulation of the TMD gluon densities as well as of the off-shell partonic amplitudes, and complete kinematic twists, while neglecting genuine twists. The framework therefore covers both k_T-factorization [148,272], in the limit of large off-shellness of the initial-state gluon from the nucleus, and small-x TMD factorization [165], in the limit where the momenta of the final-state jets are much larger than the momentum of the incoming off-shell gluon. Furthermore, Refs. [169,170] demonstrate a very good agreement of this approach with the full CGC result in the region dominated by hard jets, i.e. k_T, p_T > Q_s. While the original Improved TMD framework includes gluon saturation effects, it does not account for the complete set of contributions proportional to logarithms of the hard scale set by the large transverse momenta of the jets, the so-called Sudakov logarithms. As shown in Refs. [273,274], the inclusion of the Sudakov logarithms is necessary in order to describe the LHC jet data in the region of small x. In the low-x domain, the resummation leading to the Sudakov logarithms has been developed in Refs. [275,276], see also [277]. In Ref. [278], it was shown for the first time that the interplay between the saturation effects and the resummation of the Sudakov logarithms is essential to describe the small-x forward-forward dijet data. In this section we present two results that demonstrate the relevance of both effects, i.e. of the nonlinearity, accounting for saturation, and of the Sudakov effects, accounting for emissions of soft gluons. We shall consider two processes:

• inclusive dijet production in p + A collisions,
• dijet production in deep inelastic scattering on A,

where A can be either a lead nucleus or a proton. To describe the former process, we use a hybrid approach in which one assumes that the proton p is a dilute projectile, whose partons are collinear to the beam and carry momenta p = x_p P_p. The hadron A is probed in a dense state. The jets j_1 and j_2 originate from the hard partons produced in a collision of the probe a with a gluon belonging to the dense system A. This gluon is off-shell, with momentum k = x_A P_A + k_T and k² = −|k_T|². The ITMD factorization formula for the above process can be found in Ref. [171], while the formula for the e-A collision can be found in Ref. [279]. As argued above, in order to provide realistic cross section predictions one needs to include also the Sudakov effects. For dijet production at the LHC, we used a DGLAP-based Sudakov form factor [278]. In Fig. 10 we show normalized cross sections as functions of ∆φ in p-p and p-Pb collisions. The three panels correspond to three different cuts on the transverse momenta of the two leading jets. The points with error bars represent experimental data from Ref. [280]. The main results for the p-Pb collisions are represented by the blue solid lines. The broadening of the distributions as we go from p-p to p-Pb comes from the interplay of the non-linear evolution of the initial state and the Sudakov resummation. In Fig. 11 we show predictions for the nuclear modification in the DIS process as the energy of the collision is increased. In particular, we present results for three values of the energy that are relevant for the EIC, the LHeC and the FCC-eh.
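The nuclear modification factor shown in Fig. 11 is, schematically, the per-nucleon ratio of the nuclear spectrum to the proton one. A minimal sketch of how such a ratio and its statistical uncertainty are formed from two binned spectra follows; all numbers are invented for illustration.

```python
import numpy as np

def nuclear_modification(sigma_pA, err_pA, sigma_pp, err_pp):
    """Per-nucleon nuclear modification factor R = sigma_pA / sigma_pp
    (both already normalized per nucleon), with uncorrelated error
    propagation.  R < 1 indicates suppression, e.g. from saturation."""
    r = sigma_pA / sigma_pp
    err = r * np.sqrt((err_pA / sigma_pA) ** 2 + (err_pp / sigma_pp) ** 2)
    return r, err

# toy azimuthal-angle spectra (arbitrary units), with stronger suppression
# away from the back-to-back peak
dphi_bins = np.array([2.4, 2.6, 2.8, 3.0])
pp = np.array([1.0, 2.2, 4.5, 8.0]); pp_err = 0.05 * pp
pA = np.array([0.7, 1.7, 3.8, 7.2]); pA_err = 0.06 * pA
r, r_err = nuclear_modification(pA, pA_err, pp, pp_err)
for b, ri, ei in zip(dphi_bins, r, r_err):
    print(f"dphi = {b:.1f}:  R_pA = {ri:.2f} +/- {ei:.2f}")
```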
We see that for larger energies the suppression due to saturation is larger, and that the Sudakov form factor cancels to a large degree. This result demonstrates that, having results for the absolute cross sections [279] and for the nuclear modification ratio, one can in principle isolate effects due to saturation from effects coming from the Sudakov resummation.

[Figure 11: Nuclear modification factor R_pA as a function of the azimuthal angle between the jet system and the scattered electron in the LAB frame (left) and as a function of the azimuthal angle between the jets in the Breit frame (right). The calculations are done for three different CM energies per nucleon. Solid lines correspond to calculations using the Sudakov form factor and the KS-based WW gluon density.]

E. Color Glass Condensate beyond high energy factorization

Theory predictions for Color Glass Condensate correlators are genuinely based on high energy factorization, which can be recast as the first order of an expansion in the parton momentum fraction x. However, in particular for EIC kinematics, but also if descriptions are to be extended to central rapidities, there is the danger that one is leaving the regime of applicability of this expansion. This is most easily seen in the kinematic relation

x_{1,2} = (p_⊥/√s) e^{±y},

where x_{1,2} are the projectile and target momentum fractions probed in the collision, while p_⊥ and y are the transverse momentum and rapidity of the produced particle. As one looks at higher and higher transverse momenta of the produced particles, one probes larger and larger values of the target x, which is bound to eventually become too large for the CGC to be a valid description of the target. Furthermore, it is essential to realize that the multiple scatterings of the projectile on the target, which need to be resummed due to the high gluon density of the target, are treated in the eikonal, i.e. recoil-less, approximation, which can accommodate only a small-angle deflection of the projectile. This eikonal approximation to multiple scatterings is the reason one can elegantly resum them into a Wilson line. As one considers particle production at higher p_⊥ this recoil-less approximation breaks down and partons are scattered at large angles. This is not included in the CGC formalism.

[Figure 12: A comparison of the target-x range contributing to single inclusive pion production in proton-proton collisions at RHIC. The left panel, from Ref. [281], shows the target x-range as predicted by the collinear factorization formalism, while the right panel shows the x-range obtained from the Color Glass Condensate formalism, taken from Ref. [282].]

In Fig. 12 we show the difference in the target x involved in pion production in proton-proton collisions in the forward rapidity region at RHIC. Both the collinear factorization and CGC approaches fit the data well, and yet the physical cross section is dominated by very different target x's in the two approaches, so clearly there is a discrepancy. Both approaches cannot be correct at the same time, so at least one of the two must be missing some crucial physics. This demonstrates the need for a more general formalism that includes collinear factorization at high p_⊥ and the Color Glass Condensate formalism at small x. In [283-285] a new approach to particle production at high energies was proposed which aims to accomplish this: to include large-x (high-p_⊥) physics as encoded in collinear factorization together with gluon saturation effects at small x as described by the JIMWLK equation.
A related, but different, approach [286,287] aims at first at a unification of collinear factorization and low-density high energy factorization. All these approaches imply the necessity to go beyond the eikonal approximation and to allow a large-angle deflection of the projectile parton. This cannot happen if one considers scattering only off the small-x modes of the target, as is the case in the CGC. Therefore one must include scattering off the large-x modes of the target. This is currently work in progress and will be reported elsewhere. Apart from the study of specific observables which allow one to obtain information on different TMD distribution functions, there exist also more global features of QCD phenomenology at a hadron-hadron collider which allow one to study and quantify the manifestation of high, and potentially saturated, parton distributions.

Evidence for nonlinear QCD dynamics in the fragmentation region in pA scattering

A first region where such evidence for non-linear QCD dynamics could be found is the fragmentation region of proton-nucleus scattering. In a collision of two nucleons or nuclei, a parton with given hadron momentum fraction x_1 resolves partons in the other nucleon down to momentum fractions of the order of

x_2 ∼ 4 p_⊥² / (x_1 s),

with p_⊥ the transverse momentum of the parton and √s the center-of-mass energy of the collider. At the LHC, for x_1 = 0.3 and |p_⊥| = 2 GeV/c, and √s = 14 TeV in proton-proton collisions, one is therefore sensitive down to values of approximately x_2 ∼ 3 · 10⁻⁷. Even smaller values of x, down to x ∼ 10⁻⁹, are resolved in collisions with cosmic rays. At such low values of x, the secondary hadron is generally characterized by the presence of high parton (in particular gluon) densities. A parton propagating through such a dense medium of small-x ≥ x_2 partons acquires a significant transverse momentum, due to the interaction with the dense gluonic field, and loses a finite fraction of its momentum [288]; this is in particular the case for central proton-nucleus collisions, where parton densities are further subject to nuclear enhancement. One consequence of this is a strong suppression of the leading-particle spectrum as compared to minimum-bias events: each parton fragments independently and splits into a couple of partons with comparable energies. This suppression is especially pronounced for the production of nucleons: for values of Feynman x_F ∼ x_1 above 0.1 the differential multiplicity of pions should exceed that of nucleons, see [289] for a study which however does not include the additional suppression due to finite fractional energy losses. Suppression of forward pion production was observed at RHIC in deuteron-gold collisions at x_F ≥ 0.3 [290,291], see also the discussion in [281]. It is important to note that in this kinematic regime perturbative QCD works pretty well. Indeed, the essential values of x_2 in these reactions are of the order of ∼ 0.01 [281], see the left-hand side of Fig. 12, which is still far from the phase space region where gluon densities are so high that nonlinear effects may become important in the evolution of nuclear parton distribution functions. Note that in the CGC scenario central collisions dominate, and one has to assume the existence of a mechanism for a very strong suppression of the scattering in the DGLAP x_2 ≳ 0.01 kinematics.
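As a numerical cross-check of the values quoted above (a worked evaluation, not part of the original text): for x_1 = 0.3, |p_⊥| = 2 GeV/c and √s = 14 TeV,
\[
x_2 \sim \frac{4\,p_\perp^2}{x_1\, s}
      = \frac{4\,(2~\mathrm{GeV})^2}{0.3 \times (1.4\times 10^{4}~\mathrm{GeV})^2}
      \approx 2.7\times 10^{-7},
\]
consistent with the value x_2 ∼ 3 · 10⁻⁷ quoted above.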
Nevertheless, the observed suppression of the inclusive per-nucleon dAu → π⁰ + X cross section relative to the inclusive pp → π⁰ + X cross section, R_dAu, is in gross contradiction with the naive perturbative QCD prediction of R_dAu(pQCD) = 1.0. Moreover, the analysis of the STAR data on the multiplicity of hadrons produced in events with a forward π⁰ trigger indicates that the dominant contribution to the pion yield originates from peripheral collisions. This suggests that for central collisions the actual suppression is much larger, about a factor of five. The observed pattern is consistent with the scenario of an effective fractional energy loss, which leads to a large suppression: the inclusive pion cross section drops strongly with increasing x_F. Leaving details aside, the observed effect is strong evidence for a breakdown of the perturbative QCD approximation. It is natural to suspect that this is due to effects of strong small-x gluon fields in nuclei, as the forward kinematics is sensitive to small-x effects. Overall, the generic features expected in all models in which the interaction strength is comparable to the black-disk limit are: (i) a strong suppression of the large-x_F spectra at moderate values of p_⊥; (ii) a broadening of the transverse momentum distributions of leading hadrons at large x_F. Both effects should become more and more pronounced with increasing collision energy and centrality of the collision and/or an increase of the number of nucleons A. They should be studied as a function of A and centrality. Hence one may expect a much stronger suppression of the pion spectrum at x_F ≳ 0.4 and a stronger p_t broadening at the LHC as compared to RHIC. Note that these effects should be much milder for hadron production at central rapidities in proton-nucleus collisions, due to the weak dependence of the cross section on x_F. This is in line with the observation of the ALICE experiment [292], which measured the pion yield as a function of p_⊥ at central rapidities at the LHC and found only a small enhancement of the yield for central collisions. Note that the rapidity interval between the pion and the initial nucleon is in this case similar to that of the RHIC experiments. This is consistent with the scenario that at central rapidities the only significant effect is p_⊥ broadening due to elastic re-scatterings, which leads to an enhancement of the cross section rather than to its suppression. It would be informative to measure a recoil gluon mini-jet from the underlying quark-gluon collision to study the suppression as a function of x_g for fixed pion x_F. Further exploration of these effects in pp scattering would be possible by studying the production of leading mesons with a centrality trigger, like dijet production at central rapidities. Also, it would be instructive to study these effects in ultra-peripheral collisions at the LHC, since in this case one can reach center-of-mass energies for the photon-nucleus reaction which are comparable to the center-of-mass energies for nucleus-nucleus scattering at RHIC.

Mini-jet dynamics at collider energies

The leading-order cross section for the scattering of two partons grows rapidly within perturbative QCD with decreasing momentum transfer between the scattering partons. Combining this effect with the growth of the gluon densities at small x, one obtains an inclusive mini-jet cross section which exceeds even the inelastic proton-proton cross section. This suggests that the average mini-jet multiplicity may exceed one.
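A schematic numerical illustration of this growth, and of its taming by a smooth infrared regularization of the kind discussed next: the sketch integrates an α_s²(p_⊥²)/p_⊥⁴-like spectrum above p_⊥min and compares it with a smoothly damped version α_s²(p_⊥²+p_⊥0²)/(p_⊥²+p_⊥0²)² of the type used in general-purpose generators. Parton luminosities and the overall normalization are omitted, so only the trend with p_⊥min is meaningful.

```python
import numpy as np
from scipy.integrate import quad

def alpha_s(q2, lam2=0.04, nf=5):
    """One-loop running coupling (q2 and lam2 in GeV^2)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
    return 1.0 / (b0 * np.log(q2 / lam2))

def spectrum_unregulated(pt2):
    """LO-like 2->2 behaviour ~ alpha_s^2(pt^2)/pt^4 (luminosities omitted)."""
    return alpha_s(pt2) ** 2 / pt2 ** 2      # d sigma / d pt^2

def spectrum_damped(pt2, pt0=2.0):
    """Smooth regularization: alpha_s^2(pt^2 + pt0^2)/(pt^2 + pt0^2)^2."""
    q2 = pt2 + pt0 ** 2
    return alpha_s(q2) ** 2 / q2 ** 2

for ptmin in (5.0, 3.0, 2.0, 1.0):
    raw, _ = quad(spectrum_unregulated, ptmin ** 2, 100.0 ** 2)
    damped, _ = quad(spectrum_damped, ptmin ** 2, 100.0 ** 2)
    print(f"ptmin = {ptmin:4.1f} GeV:  unregulated integral = {raw:8.4f}, "
          f"damped = {damped:8.4f}  (arbitrary units)")
```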
To tame this mini-jet cross section, Monte Carlo models either introduce a hard (no collisions with p ⊥ < p 0 (s) or a soft cutoff (smoothly switching off interactions with p ⊥ 's below a few GeV/c. Interestingly, models seem to suggest that the suppression factor grows rather rapidly with collision energy. Clarifying the precise mechanism for the energy dependence of the suppression of mini-jets in both in proton-proton and proton-nucleus scattering is one of the current challenges of high energy QCD. Note that while gluon saturation generates such an effect for the small x region, it does not explain a similar suppression for peripheral proton-nucleus and proton-proton collisions. It does also not explain suppression for x ∼ 10 −2 ÷ 10 −3 far from the black disk limit [293]. A comparison of transverse momentum distributions of hadrons produced in the very forward region and central rapidities would certainly help for a better understanding of this suppression mechanism. A direct observation of mini-jets is pretty difficult, since transverse momenta of hadrons generated in the fragmentation of mini-jets may be rather close to the soft scale. The hadron density close to the fragmentation region of protons is however much smaller than for central rapidities, which should make extraction of the mini-jet signal easier. One possible strategy is to select a hadron (a mini-jet) at y ∼ 2 ÷ 4 or even higher with fixed p ⊥ and measure the average transverse momentum of hadrons produced at negative rapidities. A distinctive feature of this mechanism is the presence of transverse correlations between hadrons only at small rapidity intervals, ∆y ≤ 2, which follow a Gaussian distribution in ∆y in contrast to the power law suppression of correlations in the hard mechanism [294]. Another suggestion is to use proton -nucleus scattering to distinguish the production of two pairs of mini-jets in a two parton collisions (2 → 4 mechanism) from production of four mini-jets in two binary collisions 4 → 4 mechanism) [295]. The procedure is based on the centrality dependence of two mechanisms -the production rate in the 2 → 4 mechanism grows linearly with the nuclear thickness, while in the 4 → 4 mechanism it is quadratic in the thickness [295]. B. Forward direct photon measurements Prompt photons provide a direct access to the parton kinematics, since they couple to quarks, and unlike hadrons are not affected by final state effects. At leading order (LO), the photon is produced directly at the parton interaction vertex without fragmentation. At LHC collision energies, the quark-gluon Compton cross section is significantly larger than quark-anti-quark annihilation. At next-to-leading order (NLO) or higher order, photons may also be produced by bremsstrahlung or fragmentation of one of the outgoing partons. However, fragmentation photons are accompanied by hadronic fragmentation products and the contribution of this process can be largely suppressed by application of isolation cuts. Isolation cuts ensure that the remaining particle production process is dominantly from Compton scattering, where the measured photon is directly sensitive to the gluon PDF [296]. In this paragraph, we will focus on direct photon measurements at forward rapidities, as enabled by the LHCb experiment [297,298] and the planned forward calorimeter (FoCal) upgrade of AL-ICE [299]. 
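Before turning to the detectors, a rough sense of the Bjorken-x values such forward photon measurements probe can be obtained from LO Compton kinematics, x_2 ≈ (2 p_T/√s) e^{-η}. The sketch below is only indicative: it assumes the recoiling parton sits at a similar rapidity, and the p-Pb energy of 8.16 TeV and the chosen (p_T, η) points are illustrative assumptions rather than an analysis setup.

```python
import math

def x2_probe(pt: float, eta: float, sqrt_s: float) -> float:
    """Crude LO estimate of the target momentum fraction probed by a
    forward photon of transverse momentum pt at pseudorapidity eta,
    assuming the recoiling parton at a similar rapidity:
    x2 ~ (2*pt/sqrt(s)) * exp(-eta)."""
    return 2.0 * pt / sqrt_s * math.exp(-eta)

sqrt_s_pPb = 8_160.0   # GeV, p-Pb collision energy assumed for illustration
for pt, eta in [(4.0, 4.0), (4.0, 5.0), (10.0, 5.5)]:
    print(f"pt = {pt:4.1f} GeV, eta = {eta:.1f}:  x2 ~ {x2_probe(pt, eta, sqrt_s_pPb):.1e}")
```

For photons of a few GeV at pseudorapidities around 4-5 this lands in the 10^-5 to 10^-6 range, consistent with the small-x reach quoted below for the FoCal acceptance.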
LHCb is a single-arm spectrometer equipped with tracking and particle-identification detectors as well as calorimeters with a forward angular coverage of about 2 < η < 5. The FoCal is a calorimeter at 3.4 < η < 5.8 consisting out of a high-granularity, compact silicon-tungsten (Si+W) sampling electromagnetic part with longitudinal segmentation and a conventional high granularity metal/scintillating hadronic part providing good hadronic resolution and compensation. In fact, measurements of isolated photon spectra at forward rapidities at the LHC, are sensitive to the gluon density at small Bjorken-x of up to about x = 10 −5 over a large range of momentum transfer, Q 2 . By comparing measurements in p-Pb and pp collisions one can hence extract the gluon nuclear modification at small x and Q 2 . The parton structure of protons and nuclei is described by momentum distributions at an initial momentum scale, and the scale dependence of the structure can be calculated with linear QCD evolution equations, such as the DGLAP [9,12,13] and BFKL [3][4][5]300] equations. At small x, hadronic structure is expected to evolve non-linearly due to the presence of high gluon densities, as predicted by the JIMWLK [301] and BK [302] evolution equations. These non-linear effects should affect multi-parton dynamics, resulting in phenomena beyond a reduction of inclusive yields, including for instance observable effects in coincidence measurements. Measurements of photon-jet correlations will allow to study this effect quantitatively by constraining the parton kinematics as precisely as possible in hadron interactions. To illustrate the expected performance of future forward isolated photon measurements, the expected uncertainties of the gluon PDFs for the nNNPDF fit using either pseudo-data for the EIC (starting from nNNPDF1.0) [304] or the FoCal above 4 GeV/c based on nNNPDF2.0 [303], are presented in the left panel of Fig. 13. As expected, the higher-energy option of the EIC (which will be realized by eRHIC at BNL) will constrain the gluon PDF for x down to about 5 · 10 −3 , while isolated photon measurements provided by the FoCal would lead to significantly improved uncertainties even significantly below 10 −4 . We also note that the uncertainty on the nNNPDF2.0 parametrisation is already smaller than the EIC band. The fit to EIC pseudo data from the nNNPDF group gives a qualitatively similar result to an earlier study based on modified EPPS16 nuclear PDFs [305], although the uncertainty estimates at small x where there is no direct constraint from the pseudo data differ. Clearly, the FoCal measurements will probe much smaller x than the existing and possible future EIC measurements, and lead to high precision results due to the excellent direct photon performance. However, in case of the EIC, unlike at a hadron collider, the initial state is precisely known, and one can map x and Q 2 , independently, and measure not only longitudinal but generalized parton (GPDs) and transverse-momentum (TMDs) distributions [118]. Furthermore, the EIC will allow us to scan the A (nucleus) dependence using several nuclear beams. So far, it was assumed in the re-weighting process that the central value of the FoCal measurement of the isolated photon R pPb would be the same as the central value of the initial baseline prediction. Instead in the right panel of Fig. 13, we show the effect of the FoCal pseudo-data for a value of R pPb shifted to about 0.7. 
In this case, the FoCal data would add a significant amount of new information to the global fit, leading to a deviation of R_g from the expected value [303] in almost the entire range of 10^-5 < x < 10^-2.

FIG. 13: (Left) Uncertainties of the gluon nuclear PDF for the nNNPDF fits and for fits to the FoCal pseudo-data above 4 GeV/c (red band), as well as to "high energy" EIC pseudo-data (green band; starting from the nNNPDF1.0 parametrization) [304]. In all cases, 90% confidence-level uncertainty bands are drawn, and the nuclear PDFs are normalized by the proton NNPDF3.1. (Right) Comparison of the original and the reweighted nNNPDF2.0 fits, where the FoCal pseudo-data were shifted to about 0.7 [303].

Therefore, this analysis indicates that FoCal measurements could be sensitive either to gluon shadowing effects or to possible non-linear QCD dynamics. To disentangle one from the other, a dedicated analysis of the χ^2 in global PDF fits and of the nPDF behavior in the small-x region would be required, following the approach of Ref. [162]. Significant suppression of R_g arises also from forward charm measurements [306] when they are included in the determination of nuclear PDFs, as recently done [307]. In this case, comparing precise forward photon and charm measurements will allow us to test factorization and universality of the nuclear PDFs.

C. Top quark pair production as a tool to probe Quark-Gluon-Plasma formation

Past Deep-inelastic-scattering (DIS) experiments provided accurate information on the partonic structure of the free proton. Notwithstanding the phenomenological success of Quantum Chromodynamics (QCD) analyses, a detailed understanding of the partonic structure modifications in bound nuclei is still lacking. Compared to the parton distribution functions (PDFs) in the proton, nuclear PDFs (nPDFs) are less constrained, mainly because of the lack of data across the momentum fraction x - squared momentum transfer Q^2 plane and the nuclear mass number range. The scheduled proton-lead (pPb) and lead-lead (PbPb) Runs 3-4 at the LHC provide the opportunity to precisely constrain the nPDFs for the lead nucleus. This is demonstrated in Fig. 14, where a simple projected scenario of the existing CMS measurement [308] is assumed, considering improvements in the statistical and the currently dominant systematic uncertainties. There is even a complementarity between the physics programs at the LHC and the planned Electron Ion Collider, allowing for stringent tests of nPDF universality too. Top quark production (inclusively or differentially) in pPb and PbPb collisions has been suggested as a valuable probe of the high-x (~10^-2 - 10^-1) gluon distribution at very high Q^2 in the Pb ions [312].

FIG. 14: Top quark production as measured [308] and projected at HL-LHC with either lead-lead or argon-argon collisions. Vertical lines and bands represent Quantum Chromodynamics predictions at next-to-next-to-leading order with soft-gluon resummation at next-to-next-to-leading logarithmic accuracy [309,310], with nuclear modification effects and their uncertainties, respectively, as parametrized by the nCTEQ15 bound-nucleon distribution functions [311]. The shown cross section values are their nucleon-nucleon equivalent.

One powerful probe of the quark-gluon plasma (QGP) is "jet quenching", i.e., the study of jet modifications while passing through the QGP. Processes used so far, e.g., dijet or Z/γ+jet production, are only sensitive to the properties of the QGP integrated over its lifetime.
Hadronically decaying W bosons can provide key novel insights into the time structure of the QGP when studied in events with a top-antitop quark pair [313], thanks to a "time delay" between the moment of the collision and the moment when the W boson decay products start interacting with the QGP. Although there seems to exist only limited potential to obtain first information on the time structure of the QGP in the baseline LHC scenario of Runs 3-4, lighter ions are potentially promising candidates despite their expected smaller quenching effects. Because of the potential for orders-of-magnitude higher effective integrated nucleon-nucleon luminosities, in this paragraph we advocate the usage of an "optimal" nucleus-nucleus colliding system at the HL-LHC. Such an example for the inclusive top quark pair production cross section is also shown in Fig. 14, considering the expected luminosity increase for the case of argon-argon collisions [314]. Substantially increased LHC partonic and photon-photon luminosities at the HL-LHC (or future higher energy colliders) could also be achieved via isoscalar beams, even opening up opportunities for studies not accessible with high-pileup collisions [315]. The high-luminosity collisions of isoscalar nuclei could provide a new environment to study the QGP and complement the QGP studies in the low-luminosity collisions of heavy nuclei.

Photons are clean probes of the QCD structure of nuclear targets [316,317]. At hadron colliders, like RHIC and the LHC, photoproduction processes can be studied in ultra-peripheral collisions (UPCs), where the incoming particles pass each other at impact parameters larger than the sum of their radii, such that strong interactions are suppressed and photon-induced processes are dominant. For a recent review, see [318].

V. ULTRA-PERIPHERAL COLLISIONS AT HADRONIC COLLIDERS AND EXCLUSIVE REACTIONS AT THE EIC

The most measured process, but not the only one, in UPCs is the diffractive production of vector mesons. Photons fluctuate to virtual qq̄ pairs which then scatter elastically from nuclear targets, emerging as real vector mesons. They are copiously produced. Their decays into a few charged particles provide clean experimental signatures that can be used to trigger on and select the corresponding events. On the theoretical side, the different vector meson masses allow us to study QCD at different scales: the production of a ρ^0 serves to investigate the approach to the black-disc limit of QCD, while that of a J/ψ sheds light on aspects of perturbative QCD at high energies like saturation [319] and shadowing [320]. Additionally, in a Good-Walker approach [321,322], coherent and incoherent processes, where the photon interacts with the full target or just with a piece of it, respectively, give access to the average behaviour of the gluonic field (coherent) or to the variance of its quantum fluctuations (incoherent). Figure 15 shows the x and Q^2 ranges covered by UPCs at the LHC. Coherent ρ^0 production has been measured in Au-Au UPCs at RHIC [325]. At the LHC, the ALICE Collaboration has carried out measurements at 2.76 TeV [326] and 5.02 TeV [327] in Pb-Pb UPCs and at 5.44 TeV in Xe-Xe UPCs [328]. The measured ρ cross-section is seen to scale nearly linearly with the atomic number, with ALICE finding a best fit σ ∝ A^{0.96±0.02}. The cross-section is smaller than expected from a Glauber calculation (even including generalized vector meson dominance), but is consistent with a Glauber-Gribov calculation [329]. The latter approach accounts for high-mass intermediate-state fluctuations.
These topics can be further studied by examining excited meson states. The STAR Collaboration also observed the coherent production of four charged pions [330], which could be related to an excited state of the ρ^0 vector meson but at a perturbative scale. Determining the cross-section is not possible without making assumptions about the branching ratios, but the rate seems consistent with the expectations of generalized vector meson dominance [331]. Both collaborations also observed an intriguing signal of a state in coherent di-pion production at a mass around 1.7 GeV/c^2 which could be related to another ρ^0 excited state [327,332]. All these studies were performed at mid-rapidity in the corresponding laboratory frame. The availability of data for all these systems and energies provides an exacting challenge to theoretical descriptions of this process, whose cross section has been described more or less successfully using a variety of approaches; e.g. [333][334][335].

FIG. 15: The x and Q^2 ranges covered by UPCs at the LHC in quarkonium production, dijet and di-hadron production. The Q value for the typical gluon virtuality in exclusive quarkonium photoproduction is shown for J/Ψ and Υ. The transverse momentum of the jet or leading pion sets the scale for dijet and ππ production, respectively. For comparison, the kinematic ranges for J/Ψ at RHIC, F_2^A and σ_L^A at eRHIC and Z^0 hadro-production at the LHC are also shown. Figure taken from [317].

In addition, there are at least three specific measurements to highlight: (i) the observation of interference effects originating from the fact that each of the two incoming projectiles can act either as a source of the photon or as a target of the interaction [336,337], (ii) the mapping of the impact-parameter dependence of the target structure for gold ions obtained as a Fourier transform of the Mandelstam-t dependence of the cross section [325], and (iii) the dependence on the atomic mass number of this process at a centre-of-mass energy of the photon-nucleus system of 65 GeV per nucleon, obtained from measurements off Pb [327], Xe [328], and protons [338], where a clear indication of strong shadowing was found along with the observation that the black-disc limit of QCD has not yet been reached. Regarding the diffractive photoproduction of J/ψ, the main results from UPCs have been obtained at the LHC [339] after a first proof-of-principle measurement at RHIC [340]. Coherent production has been investigated by the ALICE Collaboration in two ranges of rapidity, central and forward, and at two different energies, √s_NN = 2.76 TeV and 5.02 TeV [341][342][343][344][345]. The CMS Collaboration published a cross section at semi-central rapidities [346] at √s_NN = 2.76 TeV, and recently the LHCb Collaboration presented results at √s_NN = 5.02 TeV covering forward rapidities [347]. Together, these measurements allow for the study of the rapidity dependence of J/ψ diffractive photoproduction in a large kinematic range which corresponds to three orders of magnitude in Bjorken x, from 10^-2 to 10^-5. These results provide new constraints on the evolution of the nuclear gluon distribution at large energies, see e.g. [348,349]. In particular, the results from [345] provide a look at the transverse structure of Pb nuclei in the Bjorken-x range (0.3 − 1.4) × 10^-3, which is a first step towards the mapping of the gluon distribution in impact parameter at a perturbative scale. A proof-of-principle measurement of incoherent production has been performed by the ALICE Collaboration in Pb-Pb UPCs at √s_NN =
2.76 TeV [342]. The coherent production of the ψ(2S) has also been measured by the ALICE Collaboration [344,350]; this state is interesting to understand the spin structure of the interaction, in particular to constrain the modelling of the wave function of the vector meson (see e.g. [351]), which is a non-perturbative component of all theoretical predictions. Finally, the CMS collaboration has made an initial measurement of Υ photoproduction in pA collisions [352]. Another interesting related result is the measurement of coherent J/ψ photoproduction in peripheral collisions, that is, with a geometrical overlap of the colliding nuclei, which has been performed by the ALICE Collaboration in the Pb-Pb system [353] and by the STAR Collaboration in the Au-Au and U-U systems [354]. These measurements open up interesting questions on the meaning of coherence in these quantum processes and the possibility to study new effects, e.g. the interaction of the J/ψ with the quark-gluon plasma created in such collisions [355][356][357]; they also offer a tool to study the energy dependence of J/ψ production by offering a measurement in a different impact-parameter range than those in UPCs [358].

B. Future measurements on diffractive vector meson photoproduction in UPCs

In the near future, Runs 3 and 4 of the LHC will provide an enormous data set of UPCs; see e.g. Table 12 of [314]. In the medium term, the Electron Ion Collider will be the experimental facility to study the QCD structure of nuclei, including diffractive vector meson production [117]. One of the key measurements to be performed with the new LHC data set is the Bjorken-x evolution of coherent diffractive vector meson production for as many different mesons - that is, mass scales - as possible. To achieve this, the two contributions to the nucleus-nucleus cross section, one with a high-energy and the other with a low-energy photon, have to be disentangled. In principle, this requires performing a given measurement at a fixed rapidity but in different impact-parameter ranges. There are two proposals on how to do this: using UPCs in conjunction with the peripheral collisions mentioned above [358], and using events where, in addition to the photon exchange producing the vector meson, there is an extra nuclear dissociation process that acts as a selector of a different impact-parameter range [359,360]. Measurements of coherent ρ^0 production at mid-rapidity in Pb-Pb [327] and Xe-Xe [328] UPCs accompanied by neutrons at beam rapidities, the product of the nuclear dissociation, are correctly described by the NOON model [361], giving us confidence that the relevant physics is understood (at the current precision of the data) and paving the way to the application of this method. Another eagerly awaited measurement is the dependence on Mandelstam-t of the incoherent production of vector mesons at a given rapidity. This process is sensitive to quantum fluctuations at the sub-nucleon scale [362]. Model predictions, e.g. [363,364], expect a one-order-of-magnitude increase of the cross section at |t| ~ 1 GeV^2 when the sub-nuclear quantum fluctuations are taken into account with respect to the case where the relevant degrees of freedom are the nucleons. Such a measurement will be feasible at the LHC in the near future. Another interesting measurement to be performed at the LHC is the study of the angular correlations of the decay products of the vector meson. Both the quasi-real photons and the gluons participating in the interaction are linearly polarised.
This, and the presence of the interference effects mentioned above produce new angular correlations with a particular dependence on the transverse momentum of the vector meson. These observables are a complement to the traditional polarisation measurements and are also sensitive to the QCD structure of the target. See e.g. [365][366][367]. At least two other techniques can be used to measure gluon distributions using UPCs: dijets and open charm. These measurements are theoretically cleaner than vector mesons, since they involve only single-gluon exchange, so the uncertainties involving color neutralization are much smaller. But, the final states are more complicated, and since there is a color string connecting the midrapidity state and the target nucleon remnants, the reaction cannot be fully exclusive. The ATLAS collaboration has already made the first preliminary measurements of dijet photoproduction [368]. ATLAS explored the region where the leading jet had p T > 20 GeV, and dijet mass above 35 GeV, giving them a reach down to x ≈ 3 × 10 −3 . Charm photoproduction is an attractive alternative approach to reach down to lower x values, since it should be possible to measure charm down to threshold, M cc ≈ 4 GeV. The rates for charm are high [369][370][371], and, at the LHC, open bb and potentially, with pA collisions or lighter ions, tt [372]. C. The ratio of Ψ(2s) and J/Ψ photoproduction cross-sections as a tool to quantify non-linear QCD evolution Exclusive photoproduction of charmonium at the Large Hadron Colllider (LHC) provides an excellent testing ground for the description of the low x gluon distribution, since it allows for a direct observation of the energy dependence of the photoproduction cross-section which directly translates into the x-dependence of the underlying gluon distribution. photoproduction of bound states of charm quarks, i.e. J/Ψ and Ψ(2s) vector mesons, are of particular interest, since the charm mass provides a hard scale at the border between soft and hard physics and the observable is therefore expected to be particularly sensitive to the possible presence of a semi-hard scale associated with the transition to the saturation region, the so-called saturation scale. It is therefore ideal to search for potential deviations from linear QCD evolution. Studies in the literature for this process, which take into account effects due to gluon saturation, exist both on the level of dipole models [364,[373][374][375][376][377][378] and complete solutions to non-linear BK equation [379][380][381]. At the same time also descriptions on collinear factorization [382][383][384][385][386] and linear NLO BFKL evolution [137,381], provide an excellent description of data, see also the discussion in [139,381]. It is therefore not entirely clear, which is the appropriate description of data. While at first one might conclude that center of mass energies are simply not yet high enough to see the onset of non-linear effects, there are also indications that at least some linear frameworks turn unstably at highest center of mass energies available at LHC [381]. A similar conclusion has been drawn in [139], where it has been found that the energy dependence of the J/Ψ and Ψ(2s) cross-section is not able to distinguish between the non-linear (Kutak-Sapeta (KS) gluon [387], subject to non-linear Balitsky-Kovchegov (BK) evolution) and linear (Hentschinski-Salas-Sabio Vera gluon (HSS) [127,128], subject to linear NLO BFKL evolution) low x evolution. 
On the other hand, the ratio of both cross-sections was found to reveal a characteristically different energy dependence for linear and non-linear QCD evolution, see Fig. 16, left. Leaving large uncertainties associated with fixed scale HSS evolution aside (see [139] for a detailed discussion), the stabilized "dipole scale" HSS gluon, subject to linear NLO BFKL evolution, predicts a constant ratio of both photoproduction cross-sections. The KS gluon, subject to non-linear BK evolution, predicts on the other hand a rise of the ratio. To understand this behavior better, it is instructive to analyze the same observable within a simple saturation model. The latter yields a particularly simple form which allows us to gain some intuitive understanding of why linear and non-linear QCD dynamics yield a different prediction for the Ψ(2s) over J/Ψ ratio. In the high energy limit, the relevant quantity of interest is the imaginary part of the scattering amplitude, which encodes the energy dependence related to the inclusive low x evolution,

Im A_T(W^2) = ∫ d^2r ∫_0^1 dz Σ_T^{(1,2)}(r, z) σ_qq̄(x, r).    (20)

Here r = |r| is the transverse separation of the quark anti-quark pair, z the photon momentum fraction, and Σ_T^{(1,2)} describes the transition of a transversely polarized photon into a vector meson V [390]; see [166] for details. Within this approach the entire energy dependence is contained in the dipole cross-section σ_qq̄, which for the Golec-Biernat-Wüsthoff saturation model takes the following simple form [391],

σ_qq̄(x, r) = σ_0 [1 − exp(−r^2 Q_s^2(x)/4)],    (21)

where Q_s(x) yields within this model the saturation scale, which carries the entire energy dependence; a linearized version of this model, which yields a power-like growth with energy, is then obtained through an expansion of the above expression for small saturation scales,

σ_qq̄(x, r) ≈ σ_0 r^2 Q_s^2(x)/4.    (22)

Inserting Eq. (22) into Eq. (20), it is immediately clear that the saturation scale - which carries the essential W dependence - cancels for the ratio of Ψ(2s) and J/Ψ photoproduction cross-sections, up to a small logarithmic correction related to the energy dependence of the diffractive slopes of the J/Ψ and Ψ(2s) cross-sections [166,390]. While the complete saturation model agrees with the linear approximation in the region r → 0, they start to disagree for large dipole sizes, and it is this region where the wave-function overlap differs for the production of the vector mesons Ψ(2s) and J/Ψ, due to the presence of the node in the 2s wave function. We further show results due to the DGLAP-improved saturation model, the so-called Bartels-Golec-Biernat-Kowalski (BGK) model, which replaces Q_s^2(x) → 4πα_s(µ(r)) x g(x, µ(r))/3, where α_s(µ) and xg(x, µ) denote the strong coupling constant and the collinear gluon distribution respectively, evaluated at an r-dependent scale µ in the perturbative region. Similar to the HSS gluon, such a dipole size dependent saturation scale prevents an exact cancellation of the energy dependence in the linear approximation. Nevertheless this merely affects the perturbative region of small dipole sizes r < 1 GeV^-1 and therefore maintains the observed rise of the ratio for the complete saturation model versus an approximately constant ratio for the linear approximation. We therefore suggest to extract the Ψ(2s) over J/Ψ ratio from existing data and future measurements, since the energy behavior of this ratio allows one to draw conclusions on the presence of non-linear QCD dynamics.
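As a toy numerical illustration of this cancellation (a sketch only, not the full calculation with vector meson wave functions), the snippet below evaluates the GBW dipole cross-section of Eq. (21) and its linearization, Eq. (22), at two fixed dipole sizes, used as a crude stand-in for the different wave-function overlaps of the J/Ψ and Ψ(2s); the GBW parameters are the published fit values, and the two dipole sizes are arbitrary illustrative choices.

```python
import math

# GBW dipole cross-section, Eq. (21), and its linearization, Eq. (22).
# sigma0 (mb), lambda and x0 are the published GBW fit values; the two
# dipole sizes below are crude, illustrative stand-ins for the different
# wave-function overlaps of the J/Psi and Psi(2s).
sigma0, lam, x0 = 23.0, 0.29, 3.0e-4

def Qs2(x):
    """Saturation scale squared (GeV^2), Q_s^2(x) = (x0/x)^lambda."""
    return (x0 / x) ** lam

def sigma_full(x, r):
    """Full GBW dipole cross-section."""
    return sigma0 * (1.0 - math.exp(-r * r * Qs2(x) / 4.0))

def sigma_lin(x, r):
    """Linearized version: the saturation scale factorizes."""
    return sigma0 * r * r * Qs2(x) / 4.0

r_small, r_large = 0.3, 1.5   # GeV^-1, illustrative dipole sizes
for x in (1e-2, 1e-3, 1e-4, 1e-5):
    ratio_full = sigma_full(x, r_small) / sigma_full(x, r_large)
    ratio_lin = sigma_lin(x, r_small) / sigma_lin(x, r_large)
    print(f"x = {x:.0e}:  full model ratio = {ratio_full:.3f},  "
          f"linearized ratio = {ratio_lin:.3f}")
```

In the linearized case the saturation scale cancels exactly and the ratio stays flat in x, while in the full model the larger dipole saturates first and the ratio rises towards small x, mirroring the qualitative behavior discussed above.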
The feature is both present for unintegrated gluon distributions, which have been obtain from a numerical solution to linear and non-linear QCD evolution, fitted to DIS data, as well as for analytic dipole models, where the distinction of linear and non-linear realization is somehow easier. While there exist LHCb data for the energy dependence of the J/Ψ and Ψ(2s) photoproduction cross-section, extracted from pp collisions [392] (see also [344] for P bP b data), there exist only H1 data for the energy dependence of the ratio of both cross-sections, at relatively low values of W and with still considerable uncertainties. We believe that an extraction of the ratio from both combined HERA and LHC data would be highly beneficial to pin down the size of non-linear low x QCD evolution at the LHC. D. Planned measurements at the Electron Ion Collider Looking further ahead, in the early 2030s the U. S. Electron Ion Collider (EIC) should provide high-precision measurements of vector mesons over a wide range of Bjorken−x and Q 2 ; the Q 2 of the photon can be measured independently of the rest of the reaction, allowing us to probe the nucleus using qq dipoles of different lengths [117,118]. The high center-of-mass energy (up to about 140 GeV) and high luminosity will allow the EIC to study large samples of light and heavy mesons (including the three Υ states) [393]. The expected event samples range from about 50 billion ρ 0 per year down to about 140,000 Υ(1S) per year. It will also be able to study exotic states (including the XYZ states) via Reggeon exchange reactions [394,395]. The high luminosity will allow for precise multi-dimensional studies, including measurements of Generalized Parton Distributions, measurements out to kinematic extremes (i. e. large |t| etc.) and studies of rarely produced mesons and decays. The EIC will take data with a variety of different ions, so will be able to study how low−x gluons evolve with nuclear size. Light ions will be of special interest. The EIC detectors forward spectrometers are expected to be able to detect scattered protons and light ions, allowing for a measurement of |t| even if the scattered electron is not seen or poorly measured. Light ion studies will allow for the study of neutron targets, and studies with deuterium and other very light ions will allow for measurements of the nuclear force in relatively simple systems. The EIC detectors are being designed to be extremely hermetic, so will be able to record vector mesons over a wide range in Bjorken−x and to accurately separate coherent and incoherent production over a wide range of |t| [118]; this is necessary to fully apply the Good-Walker paradigm. The large event samples and precision detectors will allow for precise studies of the variation in gluon density and transverse position within the target. And, the inclusion of relatively precise calorimetry that is sensitive down to low energies will allow us to study final states that include γ and π 0 , allowing for the study of a wider range of mesons. In this section we provide some details on the recent discovery of the Odderon as well as inclusive diffractive measurements and explorations of the soft Pomeron structure. A. Soft diffraction and the Odderon discovery by the D0 and TOTEM experiments Soft diffraction and elastic interactions have been studied for the last 50 years at different colliders. 
Elastic pp and pp scattering at high energies of the Tevatron and the LHC for instance corresponds to the pp → pp and pp → pp interactions where the protons and antiprotons are intact after interaction and scattered at very small angle, and nothing else is produced. In order to measure these events, it is necessary to detect the intact protons/antiprotons after interactions in dedicated detectors called roman pots and to veto on any additional activity in the main detector. Many experiments have been looking for evidence of the existence of the Odderon [396,397] in the last 50 years, and one may wonder why the Odderon has been so elusive. At ISR energies, at about a center-of-mass energy of 52.8 GeV [398][399][400][401][402], there was already some indication of a possible difference between pp and pp interactions. Differences are about 3σ but this was not considered to be a clean proof of the Odderon. This is due to the fact that elastic scattering at low energies can be due to exchanges of additional particles to Pomeron and Odderon, namely ρ, ω, φ mesons and Reggeons. It is not easy to distinguish between all these possible exchanges, and it becomes quickly model dependent. This is why the observed difference at 52.8 GeV was estimated to be due to ω exchanges and not to the existence of the Odderon. The advantage of being at higher energies (1. 96 TeV for the Tevatron and 2.76, 7, 8 and 13 TeV at the LHC [403][404][405][406][407] is that meson and Reggeon exchanges can be neglected. It means that a possible observation of differences between pp and pp elastic interactions at high energies would be a clear signal of the Odderon. The D0 and TOTEM elastic dσ/dt data are shown in Fig. 17. The difficulty to compare between pp and pp elastic scatterings is that one has to extrapolate the pp measurements from TOTEM to Tevatron center-of-mass energies [408]. The comparison between the pp elastic dσ/dt measurement by the D0 collaboration and the extrapolation of the TOTEM pp elastic dσ/dt measurements is shown in Fig. 18, including the 1σ uncertainty band as a red dashed line [408]. The comparison is only made in the common t domain for both pp and pp measurements and show some differences in the dip and bump region between |t| of 0.55 and 0.85 GeV 2 . Given the constraints on the optical point normalization and logarithmic slopes of the elastic cross sections, the χ 2 test leads to a significance of 3.4σ. Combining this result with previous measurements of TOTEM of ρ [409] and the total cross section, the significance ranges from 5.3 to 5.7σ (depending on the model). Models without colorless C-odd gluonic compound or the Odderon are excluded by more than 5σ. Further measurements of elastic pp cross sections will happen at higher LHC energies (such as 13.6 and 14 TeV) and the Odderon production will be performed in additional channels, such as the production of ω mesons. It is also clear that the discovery of the Odderon is likely related to the existence of glueballs, and the search for their production will happen at the LHC, RHIC and the EIC. B. Inclusive diffraction measurements at the LHC and sensitivity to the Pomeron structure Hard diffraction correspond to events when at least one proton is intact after interaction at the LHC and correspond to the exchange of a colorless object called the Pomeron. Many measurements at the LHC can constrain the Pomeron structure in terms of quarks and gluons that has been derived from QCD fits at HERA and at the Tevatron. 
All the studies have been performed using the Forward Physics Monte Carlo (FPMC), a generator that has been designed to study forward physics, especially at the LHC [410,411]. One can first probe if the Pomeron is universal between ep and pp colliders, or in other words, if we are sensitive to the same object at HERA and the LHC. Tagging both diffractive protons in ATLAS and CMS allows to probe the QCD evolution of the gluon and quark densities in the Pomeron and to compare with the HERA measurements. In addition, it is possible to assess the gluon and quark densities using the dijet and γ + jet productions [412][413][414][415]. The different diagrams of the processes that can be studied at the LHC are shown in Fig. 1, namely double Pomeron exchange (DPE) production of dijets (left), of γ+jet (middle), sensitive respectively to the gluon and quark contents of the Pomeron, and the jet gap jet events (right). The measurement of the dijet cross section is directly sensitive to the gluon density in the Pomeron and the γ+jet and W asymmetry measurements [415] are sensitive to the quark densities in the Pomeron. However, diffractive measurements are also sensitive to the survival probability which needs to be disentangled from PDF effects, and many different measurements will be needed to distinguish between them. It is clear that understanding better diffraction and probing different models will be one of the key studies to be performed at the high luminosity LHC, the EIC and any future hadron collider. While the bulk of this white paper focuses on aspects related to strong interactions in the limit of high energies and densities, there exists also an increased interest in the study of electroweak processes, which rely on dedicated forward detectors for their analysis and which are capable to contribute to searches for new physics at LHC. A. Precision Proton Spectrometer (PPS) and ATLAS Forward Proton detector (AFP) at high luminosity Introduction The CERN's Large Hadron Collider (LHC) will be restarting its operation this year at a recordbreaking energy of √ s = 13.6 TeV. The physics run is expected to last until the end of 2025, collecting integrated luminosity of about 300 fb −1 . LHC will undergo a major upgrade following the four-year physics run, increasing its instantaneous luminosity by a factor of 5-10 larger than the nominal LHC nominal value. The High Luminosity LHC (HL-LHC) is expected to collect data corresponding to an integrated luminosity of a few ab −1 , and measure the rarest processes of the Standard Model (SM). Central Exclusive Production (CEP) is a unique process where an object X is produced via t-channel exchange of colorless objects, photon (γ) for electromagnetic or pomeron (IP) for strong interactions, pp → p ⊕ X ⊕ p, where ⊕ stands for an absence of additional interaction between the final states. When final state particles are produced with high invariant mass, the dominant production mechanism is via photon exchange [416], in which the LHC can be considered a photon collider. Figure 20 shows a comparison between pomeron-pomeron (IP − IP) and photon-photon (γ − γ) initiated processes for production cross-section of central exclusive bb and γγ events as a function of a mass, which shows the enhancement of the photon-photon scattering at high masses. In CEP, interacting protons often emerge intact but lose a fraction of momentum and are scattered at small angles. The LHC accelerator magnets can be seen as longitudinal momentum spectrometers. 
The protons are deflected away from the proton bunch and can be measured by near-beam detectors installed downstream of the LHC beamline, hundreds of meters from the interaction point. Such detectors, installed in movable vessels (Roman Pots) with tracking and timing capabilities, were brought online during Run 2 in 2016 by the ATLAS and CMS collaborations and were operated in standard runs.

Near-beam proton spectrometers in LHC Runs 2 and 3

The Precision Proton Spectrometer (PPS) [418] is a CMS sub-detector installed in 2016, ~210 m from the interaction point. Initially called CT-PPS (it started as a joint CMS and TOTEM project), the PPS apparatus is equipped with tracking and timing detectors. It collected more than 100 fb^-1 of integrated luminosity during LHC Run 2 and will continue to be operational, with some upgrades and optimizations, during LHC Run 3. During Run 2, the PPS tracking detectors measured protons that have lost approximately 2.5% to 15% of their initial momentum, resulting in a mass acceptance between 350 GeV and 2 TeV [419]. The data collected with PPS during 2016, with an integrated luminosity of 10 fb^-1, led to the first measurement of central exclusive di-lepton production [420] and the first search for the high-mass exclusive production of photon pairs [421], both using tagged protons. Next, using 2017 data and an integrated luminosity of ~30 fb^-1, a search for the exclusive production of a pair of top quarks and a search for new physics in the missing mass spectrum in pp → p ⊕ Z/γ + X ⊕ p events were performed [422,423]. Finally, searches for the exclusive production of di-bosons using the full Run 2 dataset were published as well [424].

FIG. 20: Integrated cross sections of different exclusive processes with intact protons at √s = 14 TeV, plotted as a function of the required minimum central system mass. Taken from Ref. [417].

The ATLAS Forward Proton detector (AFP) [425] comprises two Roman Pot stations on each side of the interaction point with four planes of silicon pixel sensors to measure proton tracks. The far stations are additionally equipped with time-of-flight (ToF) detectors. During Run 2, the ToF detectors demonstrated 20-40 ps resolution but suboptimal efficiency. AFP recorded ~30 fb^-1 of integrated luminosity during Run 2, and these data were used to report on exclusive di-lepton production [426].

Physics perspectives at HL-LHC

For the HL-LHC (LHC Run 4), the accelerator will be rearranged, and the current forward detectors will be dismounted. While the new detector design of forward proton spectrometers is currently under development (for example [417]), the physics perspectives are presented in the following section. Two scenarios are under consideration:

• Stations located in a "warm" region - a few stations ~200 m from the interaction point, suitable for the Roman Pot technology (ATLAS and CMS);

• A station located at 420 m in a "cold" region - which requires a bypass cryostat and a movable detector vessel approaching the beam from between the two beam pipes, for which new developments are needed (CMS).

While QCD-induced processes are typically dominant at low masses, photon-photon scattering is enhanced at high masses (Fig. 20).
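The quoted double-tag mass acceptance can be checked with a one-line kinematic estimate (a sketch only; the exact acceptance depends on beam optics and detector geometry, which are not modelled here): for two tagged protons the central-system mass is M ≈ √(ξ1 ξ2 s), so the accessible window follows directly from the ξ acceptance.

```python
import math

def central_mass(xi1: float, xi2: float, sqrt_s: float) -> float:
    """Central-system mass from the two fractional proton momentum
    losses, M = sqrt(xi1 * xi2 * s)."""
    return math.sqrt(xi1 * xi2) * sqrt_s

sqrt_s = 13_000.0             # GeV, LHC Run 2
xi_min, xi_max = 0.025, 0.15  # approximate PPS xi acceptance quoted above

# Double-tag window: both protons at the edges of the xi acceptance
m_low = central_mass(xi_min, xi_min, sqrt_s)
m_high = central_mass(xi_max, xi_max, sqrt_s)
print(f"mass acceptance: {m_low:.0f} GeV to {m_high:.0f} GeV")
# -> roughly 325 GeV to 1950 GeV, i.e. the ~350 GeV - 2 TeV range quoted
```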
Fiducial cross sections for different standard model processes at √ s = 14 TeV for IP − IP and γ − γ production modes are shown in Table II Physics w/o 420 meter station: Standard model γγ → + − production is an important channel for both calibration and validation of the proton reconstruction, and to measure ElectroWeak contribution to Drell-Yan processes. In addition the γγ → τ + τ − channel is of particular interest as it is sensitive to the anomalous magnetic moment (or "g − 2") of the τ lepton . For diboson production, γγ → W + W − (with W + W − → µ + e − ν µνe ) is a particularly clean channel. The configuration of stations considered here would substantially increase the acceptance for 2-arm events, allowing a significant measurement of the SM cross section in the µ + e − final state, which will serve as a benchmark for diboson searches in other channels and at higher masses, which provides a good means to test the interactions of photons and W bosons at high energies, and to search for Anomalous Quartic Gauge Couplings (AQGC) or other nonresonant signals of BSM physics. A wide variety of BSM scenarios involving γγ production with forward protons have been explored in the theoretical literature (e.g [427]). For exclusive production with intact protons, only spin-one resonances and any spin-odd states with negative parity are forbidden in γγ interactions [428,429]. This type of search is particularly interesting for resonances with large couplings to photons but not to gluons, which may appear in the γγ → X → γγ channel [430][431][432][433][434]. It was shown that the expected sensitivity for axion-like particles (ALP) in CEP is expected to be competitive and complementary to other collider searches for masses above 600 GeV [434]. Conversely, if a resonance is detected via decays to two photons, measuring the cross section with forward protons will help constrain its couplings to photons in a model-independent way [433]. The use of forward protons was recently been revisited as a possible means to improve searches for pair production of supersymmetric sleptons or charginos in compressed mass scenarios [435,436]. Physics including the 420 meter station: Central exclusive Higgs boson production has been extensively studied theoretically and in simulations (including the original detailed studies of the FP420 project [437]). In this case, unlike higher-mass and weakly coupled final states, gluon-gluon production is expected to dominate over γγ production. The cross section for CEP Higgs production in the SM has been evaluated by several groups, and the total cross section ranging between a few fb and a few tenths of a fb, depending on details of the survival probabilities, parton distribution functions (PDFs), Sudakov factors and other assumptions of the calculations. A measurement of CEP dijets at the same energy and mass range would therefore remove most of the remaining theoretical uncertainties in the Higgs cross section predictions. For the 125.4 GeV Higgs boson production, protons could be detected in the 420 m stations on both arms, and in the combination of the 234 m and 420 m stations, while the associated production with W + W − vector-boson pair has the potential for probing the Higgs sector in CEP events in the absence of the ±420 m stations. Although the exclusive production cross section is estimated to be σ ≈ 0.04 fb at tree-level, a high acceptance is expected because of the large invariant mass of the central system. 
As discussed in [438,439], experiments with forward proton spectrometers at the HL-LHC would open a promising way to perform a search for QCD instantons, which are a non-trivial consequence of the vacuum structure of non-abelian theories (for a recent review and references see e.g. [440]). Instantons describe quantum tunneling between different vacuum sectors of QCD and are arguably the best motivated yet experimentally unobserved nonperturbative effects predicted by the Standard Model. It is shown in [439] that for an instanton mass M_inst ≥ 50 GeV the expected central production cross sections for the instanton-induced processes are of the order of picobarns in the pure exclusive case and increase up to hundreds of pb when the emission of spectator jets is allowed. These signal cross-sections are encouragingly large, and under favourable background conditions there is a tantalising chance that QCD instanton effects can either be seen or ruled out. The expected experimental signature for the instanton-induced process in the central detector is a large multiplicity and transverse energy (Σ_i E_T^i) in a relatively small rapidity interval (δy ~ 2 − 3) and a large sphericity S > 0.8 of the event. Note that the mean number of gluon jets radiated by the instanton is ~ 1/α_s, while the probability of instanton creation is ∝ exp(−4π/α_s). Therefore, to observe a clear signature of the instanton-induced signal it is most feasible to consider the case of moderately heavy instantons, M_inst ≥ 50 − 100 GeV. This would require measurements with the 420 m stations.

B. Non-elastic contribution in photon-photon physics

The two-photon production of di-leptons has been extensively studied by the LHC experiments in the past years [420,426,441-447], investigating elastic interactions at distinct colliding energies. This exclusive production presents a final state composed of the lepton pair produced in the central detector, where large rapidity gaps are present between the pair and the outgoing protons in the beam line direction. Such a signature differs from the usual QCD production by the absence of particle (gluon) radiation populating the detector, which otherwise largely reduces the possibility of observing this signature in the data [33]. The interest in di-leptons comes from the fact that they can be used as luminosity monitors [448,449]; the production of W boson pairs via their decay channel into leptons, however, provides a way to investigate evidence of New Physics with the use of effective theories including anomalous gauge couplings. The signal yield includes both the elastic production - with two intact outgoing protons in the forward direction - as well as the nonelastic production, with one or both protons dissociating into a hadronic final state, classified as semi-elastic and inelastic production, respectively (see e.g. Ref. [450] for more details). Figure 21 illustrates the production cases. While the former is easily computed analytically with the use of photon fluxes [452], the latter is based on parton distribution functions (PDFs) with a QED contribution. The typical production cross section can be expressed in terms of effective photon luminosities, σ_i ∝ L_eff^i × σ̂(γγ → ℓ+ℓ−), where σ̂(γγ → ℓ+ℓ−) is the tree-level cross section and L_eff^i is the effective photon luminosity for each process, with x_i the momentum fraction of the proton carried by the photon and Q^2 the photon virtuality.
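To make the notion of an effective photon luminosity concrete, the sketch below builds dL/dW by convolving two photon fluxes. It uses a point-like Weizsäcker-Williams flux with arbitrary virtuality cutoffs purely for illustration; the elastic and non-elastic fluxes discussed in the text are instead obtained from proton form factors and photon PDFs, respectively.

```python
import math

ALPHA = 1.0 / 137.036

def ww_flux(x, q2_max=2.0, q2_min=1e-2):
    """Point-like Weizsacker-Williams photon flux (leading-log), used
    only as an illustrative stand-in for the elastic/inelastic fluxes
    discussed in the text; q2_max, q2_min (GeV^2) are arbitrary cutoffs."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return ALPHA / (2.0 * math.pi * x) * (1.0 + (1.0 - x) ** 2) * math.log(q2_max / q2_min)

def dL_dW(W, sqrt_s, n=2000):
    """Effective two-photon luminosity dL/dW at invariant mass W:
    dL/dW = (2W/s) * integral dx/x f(x) f(W^2/(s x))."""
    s = sqrt_s ** 2
    tau = W * W / s
    lo, hi = math.log(tau), 0.0
    dt = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x1 = math.exp(lo + (i + 0.5) * dt)     # integrate in ln(x1)
        total += ww_flux(x1) * ww_flux(tau / x1) * dt
    return 2.0 * W / s * total

for W in (350.0, 1000.0, 2000.0):   # GeV, within the double-tag acceptance
    print(f"W = {W:6.0f} GeV:  dL/dW ~ {dL_dW(W, 13_000.0):.2e} GeV^-1")
```

The steep fall of dL/dW with W seen in such a convolution is what drives the rapidly decreasing exclusive cross sections at high central-system masses.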
The non-elastic cases made use of photon PDFs based on the DGLAP evolution equations modified to include the QED parton splitting functions. Considering the different approaches used in the literature for the elastic and nonelastic contributions, an estimate of the uncertainties associated with these choices can be obtained. Figure 22 shows the differential cross section as a function of the invariant mass of muon pairs [451]: the curves correspond to the predictions averaged at Q = 300 GeV among the recent parametrizations for the photon PDF: LUXqed17 [453], MMHT2015qed [454], and NNPDF31luxQED [455]. All these parametrizations are based on the approach proposed in Ref. [456] (see also Ref. [457]). The bands are evaluated as one standard deviation around the averages. There are Monte Carlo event generators providing predictions for the elastic contribution in the two-photon di-lepton and WW productions; however, the nonelastic contribution is not a common feature. Given that the curves show similar shapes, this favors the possibility of obtaining a multiplicative factor that can be used to re-weight generated event samples to account for the nonelastic contributions [451]. This prediction can be experimentally tested with forward detectors capable of observing the intact protons emerging from elastic and semi-elastic collisions, such as the CMS Precision Proton Spectrometer (PPS) [418] and the ATLAS Forward Proton detector (AFP) [425]. A multiplicative factor has already been evaluated in previous CMS analyses [445,446] in the high-mass region, F = (N_µµ(data) − N_DY)/N_elastic, where N_µµ(data) is the total number of events passing the selection criteria, N_DY the total number of events identified as coming from the Drell-Yan production process, related to events with one or more extra tracks, and N_elastic is the estimated number of elastic events from theory. In a similar fashion, theoretical predictions can be used to provide an estimate of this ratio. Using the set of parametrizations for the photon PDF, one is able to evaluate these ratios in the phase-space region accessible by the LHC forward detectors. Figure 23 presents the predictions in the mass range of 300 GeV to 2 TeV, including different approaches for the elastic photon flux; see also [458] for a recent study of these effects for WW production. It shows an uncertainty of 20-40% considering the available parametrizations. An experimental measurement of this observable would provide new insight into the parametrizations and yield a data-driven result that could be used in event generators and extend the stringency of limits on anomalous couplings. The upcoming Run 3 of the LHC may provide a unique opportunity to collect enough luminosity for such a measurement, opening new fronts for the investigation of photon interactions and the improvement of the computational tools available in the literature (see e.g. Dark Matter searches [436] or tt production [459,460], both in the exclusive mode).

C. Exclusive production of Higgs boson

Central Exclusive Production (CEP) is especially attractive for three reasons: firstly, if the outgoing protons remain intact and scatter through small angles then, to a very good approximation, the primary di-gluon system obeys a J_z = 0, C-even, P-even selection rule [461,462]. Here J_z is the projection of the total angular momentum along the proton beam axis. This therefore allows a clean determination of the quantum numbers of any observed resonance.
Thus, in principle, only a few such events are necessary to determine the quantum numbers, since the mere observation of the process establishes that the exchanged object is in the 0 ++ state. Secondly, from precise measurements of the proton momentum losses, ξ 1 and ξ 2 , and from the fact that the process is exclusive, the mass of the central system can be measured much more precisely than from the central detector, by the so-called missing mass method [463], M 2 = ξ 1 ξ 2 s which is independent of the decay mode. Thirdly in CEP the signal-to-background (S/B) ratios turn out to be close to unity, if the contribution from pile-up is not considered. This advantageous S/B ratio is due to the combination of the J z = 0 selection rule, the potentially excellent mass resolution, and the simplicity of the event signature in the central detector. For pp collisions, the dominant contribution is expected to be from exclusive gluon-fusion production gg → h for which the cross section predictions are still known with a limited accuracy. A similar statement applies to photon-fusion production, which is strongly enhanced in PbPb collisions with respect to the pp case, see for instance [464]. While the gg → h is in principle calculable in perturbative QCD, a non-negligible (but conservative) spread in cross section predictions of 0.5 -3.0 fb is seen due to such basic ingredients as the parton distribution function (PDF) used and a limited control over the non-perturbative theory of soft survival factors, S 2 , for gluon-initiated processes in this mass range [465] (although these uncertainties cancel in the S/B ratio for many backgrounds). Existing experimental data from CDF exclusive di-photon [466] or LHCb J/Ψ pair [467] or quarkonia [468] analyses rather prefer values towards the higher end of the spread (see discussions in Refs. [469,470]) nevertheless direct measurements of the exclusive Higgs production would undoubtedly allow its production rate to be directly constrained (or for example by monitoring rates of CEP dijets or di-photons, since the same PDFs and S 2 enter the respective production cross sections at the same central system mass). The exclusive production of Higgs boson was a flagship topic of the project FP420 (see e.g. the title of the main document, "The FP420 R&D Project: Higgs and New Physics with forward protons at the LHC" [437]) whose main goal was to install forward proton detectors (FPDs) at 420 m from the interaction point of ATLAS and CMS experiments to detect forward protons coming from diffractive proton-induced or photon-induced interactions. Another important feature of forward proton tagging in the case of the Higgs boson is the fact that it enables the dominant decay modes, namely bb, W W ( * ) , ZZ ( * ) and τ τ to be observed in one process. In this way, it may be possible to access the Higgs boson coupling to bottom quarks. This is challenging in conventional search channels at LHC due to large QCD backgrounds, even though h → bb is the dominant decay mode for a light SM Higgs boson. The bb, W W ( * ) and τ τ decay modes were studied in detail and are documented in literature (bb in Refs. [471][472][473][474][475][476][477][478], W W ( * ) in Refs. [471,473,[477][478][479][480] and τ τ in Ref. [473,474] and in an unpublished diploma thesis [481]). 
It was the bb mode that was studied in greatest detail -thanks to advantages enumerated above and also thanks to the most favourable prospects for this decay mode in enhancing the production cross section in Minimal SuperSymmetric SM (MSSM), the most popular model of BSM of those days. Prospects for other extensions were outlined in Ref. [482] for NMSSM (Next-to-Minimal SuperSymmetric SM) and in Ref. [483] for a possible triplet Higgs sector. Results of the above studies, including SM and BSM Higgs bosons, were reviewed in 2014 in Ref. [484] and can be summarized in the following way, noting especially the fact that all were performed prior to the Higgs boson discovery. Although studies of properties of the Higgs boson with mass close to 125.5 GeV discovered by the ATLAS [485] and CMS [486] (see for example a global analysis in Ref. [487]) suggest that the Higgs boson is compatible with the Standard Model, there is still room for models of New Physics, e.g. at lower or higher masses than 125.5 GeV, and the central exclusive production of the Higgs boson still represents a powerful tool to complement the standard strategies at LHC. A striking feature of the CEP Higgs-boson is that this channel provides valuable additional information on the spin and the coupling structure of Higgs candidates at the LHC. We emphasize that the J z = 0, C-even, P-even selection rule of the CEP process enables us to estimate very precisely (and event-by-event) the quantum numbers of any resonance produced via CEP. Signal selection and background rejection cuts are based on requiring a match between measurements in the central detector and FPD within assumed subdetector resolutions. In addition, pile-up backgrounds are suppressed by using Time-of-Flight (ToF) detectors, a natural part of FPD whose utilization necessitates protons to be tagged on both sides from the interaction point (see a recent ToF performance study in Ref. [488]). The significances for the CEP Higgs boson decaying into bb, W W or τ τ pairs in SM are moderate but 3 σ can surely be reached if the analysis tools, ToF measurement resolution or L1 trigger strategies are improved, among others by knowing the Higgs boson mass precisely, as discussed in Ref. [484]. For example we can surely expect improvements in the gluon-jet/b-jet mis-identification probability P g/b . In the original analyses in Refs. [472][473][474][475]478] a conservative approach has been followed by taking the maximum of two values available at that time in ATLAS and CMS. Meanwhile new developments were reported in reducing the light-quark-b mis-identification probabilities in ATLAS [489] and CMS [490]. Other possibilities to improve the significances in searching for the SM Higgs in CEP are a possible sub-10 ps resolution or finer granularity of timing detectors, the use of multivariate techniques or a further fine-tuning or optimization of the signal selection and background rejection cuts, thanks to the fact that the mass of the SM-like Higgs boson is already known with a relatively high precision. The known Higgs boson mass can also greatly facilitate proposals for a dedicated L1 trigger to efficiently save events with the CEP H → bb candidates. Proposals made in Ref. [491], well before the SM-like Higgs boson discovery, can thus be further optimized. Studying properties of Higgs bosons born exclusively with a mass around 125 GeV would require building FPDs in the region 420 m from the interaction point. 
Such a possibility, as a possible upgrade of FPDs at HL-LHC, is considered by the CMS collaboration (see e.g. Ref. [417]). Equipping that region of the LHC beam pipe (so called "cold region") by Roman Pots or Hamburg Beampipe devices was thoroughly discussed in the framework of the FP420 collaboration and all the know-how has been then put in the R&D document [437]. The constraints coming from experimental data exclude the heavy Higgs boson mass region below 400 GeV, although in special MSSM scenarios, for example Mh125 alignment scenario [492], masses lower than 400 GeV would still be possible, but for "fine-tuned" points rather than larger areas. Other extreme scenarios that are still possible are represented by the M 125 H scenario [492], in which the light CP-even Higgs is lighter than 125 GeV, and the discovered Higgs boson corresponds to the heavy CP-even MSSM Higgs boson. The development of the M 125 H scenario was triggered by the observation of a local excess of 3σ at about 96 GeV in the diphoton final state, based on the CMS Run 2 data [493]. First Run 2 results from ATLAS with 80 fb −1 in the γγ final state (see e.g. Ref [494]) or full Run 2 ATLAS results in the τ + τ − final state [495] turned out to be weaker, but a full Run 2 analysis of the CMS data is still awaited. D. Anomalous quartic couplings with proton tagging High-energy photon-photon fusion processes can be studied at the CERN LHC in proton-proton collisions. In comparison to the ultraperipheral heavy-ion collisions, the impact parameter range is much smaller in pp collisions for photon exchange. The quasi-real photon energy spectrum can easily reach the TeV scale for 14 TeV pp collisions, although with a much smaller photon flux since one does not have the same Z 4 enhancement factor as in heavy-ion collisions. One of the main interests for studying photon-fusion processes in proton-proton collisions is its potential for discovering physics beyond the standard model (BSM). Such prospects for discovering new physics are complementary to the standard searches at the LHC, which rely on quark-and gluon-initiated processes. In a fraction of the quasi-real photon exchange processes, the colliding protons may remain intact. In these central exclusive production processes, the photon exchange can be modelled within the equivalent photon approximation, which is based on the parametrization of the electromagnetic form factors of the proton from elastic photon-proton precision data. Non-perturbative corrections related to the underlying event activity or QCD initial-state radiation effects are absent in this case. The survival probability, which quantifies the probability that the protons remain intact after the photon exchange, has been calculated and measured to be on the order of 70-90% (depends on the invariant mass of the central system). The intact protons retain most of the original beam momentum, and are thus deflected at small angles with respect to the beam line. The magnetic lattice of the LHC can be used to separate these intact protons from the beam protons that did not collide. Then, these intact protons can be detected with the Roman pot detectors located at about 200 m with respect to the interaction point. If these two protons are detected together with a hard, central system at central pseudorapidities, then all the decay products of the collision have been successfully measured. 
The PPS and AFP detectors of CMS and ATLAS have such setups for the detection of protons at the nominal instantaneous luminosities. The mass m_X and rapidity y_X of the central system are directly related to the fractional momentum losses of the scattered protons, $\xi_{1,2} = \Delta p_{1,2}/p_{\mathrm{beam}}$. This kinematical correlation is used to suppress the contributions from pileup interactions, which are the largest source of background for these measurements. The pileup contributions are such that a hard-scale process (e.g., QCD production of a photon pair, jets) is paired with uncorrelated forward protons from diffractive pileup interactions. The signature would be similar to that of central exclusive production: two protons and a hard-scale system at central rapidities. The cross section for soft diffractive interactions is large (on the order of 20 mb at 13 TeV). Together with the high pileup multiplicities at the LHC and at the future HL-LHC, it becomes more important to control this background. The aforementioned kinematical correlation between the forward and central systems mitigates pileup. Pileup is further mitigated with time-of-flight measurements. We now discuss a number of examples of new physics searches using proton tagging at the LHC. The scattering of light-by-light (γγ → γγ) is induced via box diagrams in the SM at the lowest order in perturbation theory. The experimental signature would be two photons back-to-back, with no hadronic activity, and two scattered protons. Exotic particles can contribute to light-by-light scattering via virtual exchanges at high mass [496,497]. Generic manifestations of physics beyond the SM can be modelled within the effective field theory (EFT) formalism, under the assumption that the invariant mass of the diphoton system is much smaller than the energy scale where new physics manifests. Among these operators, the pure photon dimension-eight operators $\mathcal{L}_{4\gamma} = \zeta_1^{4\gamma} F_{\mu\nu}F^{\mu\nu}F_{\rho\sigma}F^{\rho\sigma} + \zeta_2^{4\gamma} F_{\mu\nu}F^{\nu\rho}F_{\rho\lambda}F^{\lambda\mu}$ induce the γγγγ interaction. The quartic photon couplings have been constrained at the CERN LHC by the CMS Collaboration with values of $|\zeta_1^{4\gamma}|\,(|\zeta_2^{4\gamma}|) < 2.88\,(6.02)\times 10^{-13}\ \mathrm{GeV}^{-4}$ at 95% CL [421]. At the HL-LHC, these bounds can in principle be improved down to $|\zeta_1|\,(|\zeta_2|) \approx 4\,(8)\times 10^{-14}\ \mathrm{GeV}^{-4}$ [498]. Time-of-flight measurements will be very important to suppress the larger amount of pileup interactions. Projections for HL-LHC conditions are shown in Fig. 24. The γγ → γZ scattering process can be probed with proton tagging as well [499]. This process is induced at the lowest order in perturbation theory via box diagrams of particles charged under hypercharge, analogous to the SM light-by-light scattering box diagram. In the leptonic decay channel, the background can be controlled to a similar degree as the one in light-by-light scattering. New physics manifestations can be modelled using the dimension-eight effective operators $\mathcal{L}_{\gamma\gamma\gamma Z} = \zeta_1^{3\gamma Z} F^{\mu\nu}F_{\mu\nu}F^{\rho\sigma}Z_{\rho\sigma} + \zeta_2^{3\gamma Z} F^{\mu\nu}\tilde{F}_{\mu\nu}F^{\rho\sigma}\tilde{Z}_{\rho\sigma}$. The quartic $\zeta_1$, $\zeta_2$ couplings can be constrained down to $\approx 2\times 10^{-13}\ \mathrm{GeV}^{-4}$ in Run-3 conditions [499]. This constraint surpasses projections based on measurements of the branching fraction of the rare Z → γγγ decay at the HL-LHC by about two orders of magnitude. The channel is experimentally very clean (an isolated photon recoiling back-to-back against a reconstructed Z boson, with no soft hadronic activity associated with the primary vertex). Competitive limits can already be extracted with existing data collected by ATLAS and CMS.
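The kinematical correlation between the tagged protons and the central system, used above to reject pileup, can be sketched as follows: the proton-tag mass and rapidity, m = sqrt(ξ1 ξ2 s) and y = 0.5 ln(ξ1/ξ2), are compared with the values reconstructed in the central detector, and the event is kept only if both agree within assumed resolutions. The window sizes and function names are hypothetical placeholders, not experimental numbers.

```python
import math

SQRT_S = 14e3  # GeV; assumed 14 TeV pp collisions

def proton_tag_kinematics(xi1, xi2):
    """Central-system mass and rapidity inferred from the two proton momentum losses."""
    m_pp = math.sqrt(xi1 * xi2) * SQRT_S
    y_pp = 0.5 * math.log(xi1 / xi2)
    return m_pp, y_pp

def passes_matching(m_central, y_central, xi1, xi2,
                    rel_mass_window=0.10, y_window=0.15):
    """Keep an event only if the central detector and the proton tags agree.

    The window sizes stand in for the (detector-dependent) mass and rapidity
    resolutions; genuine exclusive events pass, uncorrelated pileup protons mostly fail.
    """
    m_pp, y_pp = proton_tag_kinematics(xi1, xi2)
    return (abs(m_central - m_pp) < rel_mass_window * m_pp
            and abs(y_central - y_pp) < y_window)
```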
At the future HL-LHC, the search can be expanded by considering boosted topologies of the Z boson. This could help populate the region of phase space at large γZ invariant masses, complementing the reach of the (cleaner) fully leptonic decay channel. Projections for the HL-LHC conditions are shown in Fig. 25. Another process of interest is the electroweak gauge boson scattering γγ → W+W−. Unlike the two previous instances, the γγ → W+W− process is induced at tree level in the SM via the triple γWW and quartic γγWW couplings in the electroweak sector [501]. The process has already been observed by the ATLAS Collaboration without the use of proton tagging, by focusing on the purely leptonic decay channel [502]. However, in order to probe a region of phase space that is sensitive to modifications of the SM interactions (high-mass modifications specifically), the proton tagging technique is necessary [500]. At high diboson invariant masses and high boson p_T, boosted topologies are kinematically favorable. The fully hadronic channel, where each of the hadronically decaying W bosons is reconstructed as a large-radius jet, provides the best sensitivity to new physics manifestations [500]. Modifications to the SM can be modelled with a dimension-six interaction Lagrangian density, $\mathcal{L}_{\mathrm{eff}}$, built from the anomalous $a_0^W$ and $a_C^W$ quartic couplings; these are the only operators allowed after imposing U(1)_em and global custodial SU(2)_C symmetries. The expected limit on the anomalous $a_0^W$ and $a_C^W$ couplings would be at least one order of magnitude larger in the hadronic channel than in the semi-leptonic or leptonic channels combined. The projections for 14 TeV Run-3 combining all channels are shown in Fig. 26 (Fig. 26 caption: the yellow and green areas represent respectively the projected sensitivities at 95% CL and 5σ, combining the hadronic, semi-leptonic, and leptonic decay channels of the W+W− system; the blank area in the center represents the region where no sensitivity to the anomalous coupling parameter is expected; time-of-flight measurements with 20 ps precision are assumed; figure extracted from Ref. [500]). However, the use of jet substructure variables that are sensitive to the number of hard prongs in the jet (for example, N-subjettiness ratios) is necessary in order to tame the large QCD jet background. The sensitivity can be further expanded by considering ungroomed jet substructure variables; the ungroomed jet mass and jet shapes for central exclusive W boson jets should closely resemble the jet substructure of a groomed W boson jet from a typical QCD interaction. The SM γγ → WW scattering can be probed in the semi-leptonic channel at high WW invariant masses, in a way such that it complements the phase space covered by the fully leptonic channel. In addition to pure gauge boson scattering, one can probe electromagnetic interactions in other processes such as γγ → tt̄ scattering, which is induced at tree level in the SM with the elementary QED vertices of the top quark and the photon. The SM process has not yet been observed. The CMS Collaboration has set an upper limit on the cross section of 0.59 pb at 95% CL [422]. Although the process is induced at tree level, the cross section is on the order of 10⁻¹ fb before branching fraction corrections and for a typical RP acceptance in ξ. It is likely that evidence could be established considering the full HL-LHC luminosity. For BSM physics, we considered six different operators (four dimension-six and two dimension-eight) with γγtt̄ quartic couplings.
We embedded the corresponding amplitudes for six different operators, each representative of different underlying symmetries of the BSM scenarios at high masses. The constraints we expect for a typical Run-3 scenario are about $\zeta_i^{\gamma\gamma t\bar{t}} \approx 10^{-12}\ \mathrm{GeV}^{-4}$, for i = 1, ..., 6, where the $\zeta_i$ represent the anomalous quartic couplings. Focusing on high-mass back-to-back top quark pairs with proton tagging, one expects a residual QCD tt̄ background of the order of 100 counts for 300 fb⁻¹ at 14 TeV. The mass distributions at particle level for QCD tt̄ production and predictions for anomalous couplings are shown in Fig. 27. The search could be expanded to include the fully hadronic case at the HL-LHC, where the larger statistical sample allows the coverage of the region of phase space of highly boosted top quarks. To summarize this section, there are good prospects for expanding the search for new physics at the LHC in photon-fusion processes in a way that is complementary to the existing program of the CERN LHC. Other prospects for the HL-LHC era can be read in Ref. [417]. VIII. CONCLUSIONS Forward physics allows us to address fundamental research questions related to the growth of gluon distributions in the perturbative high-energy limit and their potential saturation due to the onset of unitarity corrections. It allows us to search for imprints of such effects both in the parton distribution functions of colliding hadrons and directly in the final state of events. Carrying out this physics program is essential for two reasons: preparation for the future Electron Ion Collider (EIC) and the potential to answer central research questions already at LHC runs. In comparison to the LHC forward physics program, the future EIC will allow us to probe dense nuclear matter with an electron beam, ideal for the investigation and characterization of hadronic structure. Identifying suitable probes at the LHC is, on the other hand, far more cumbersome. Nevertheless, this is a worthwhile effort: due to its high center-of-mass energy, the LHC allows us to probe hadronic matter at unprecedented values of x, which are several orders of magnitude below the values to be reached at the Electron Ion Collider. This is particularly true when using dedicated events in the forward region. It therefore covers regions of phase space which are completely inaccessible at the EIC and allows for a direct comparison between high parton densities generated through low-x evolution and those present in large nuclei. A related topic addresses the direct analysis of emission patterns related to low-x (in that case BFKL) evolution, which can be studied using multi-jet events. While challenging at the LHC, the study of such evolution effects at an Electron Ion Collider is clearly restricted by the limitations in available phase space. Within the foreseeable future, such questions will either be studied at the LHC within the forward physics program, or they will not be studied at all. While somewhat orthogonal from the point of view of the physics program, it is natural to employ forward detectors not only for the exploration of strong interactions but also for new physics searches and the study of electroweak dynamics. In particular, photon-photon reactions and related Pomeron-Pomeron fusion processes allow for the observation of very clean events at the LHC, due to the detection of intact scattered protons and/or large rapidity gaps between the centrally produced object and the scattered proton.
While their exploration is of high interest in itself, such events further have the potential to improve existing bounds on new degrees of freedom and to contribute to searches for new physics at the LHC. Forward physics therefore allows us to address central questions of both nuclear and particle physics. Its physics program is strongly related to the physics of the future EIC as well as to searches for new physics at the LHC. The region of phase space explored by LHC forward physics is unique and therefore allows us to address research questions that are not accessible anywhere else.
2022-03-16T01:15:56.012Z
2022-03-15T00:00:00.000
{ "year": 2022, "sha1": "86ce07393cc2b650b3a645d5d14260df817fd4c0", "oa_license": null, "oa_url": "https://www.actaphys.uj.edu.pl/R/54/3-A1/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "86ce07393cc2b650b3a645d5d14260df817fd4c0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
233486530
pes2o/s2orc
v3-fos-license
Using First Principles for Deep Learning and Model-Based Control of Soft Robots Model-based optimal control of soft robots may enable compliant, underdamped platforms to operate in a repeatable fashion and effectively accomplish tasks that are otherwise impossible for soft robots. Unfortunately, developing accurate analytical dynamic models for soft robots is time-consuming, difficult, and error-prone. Deep learning presents an alternative modeling approach that only requires a time history of system inputs and system states, which can be easily measured or estimated. However, fully relying on empirical or learned models involves collecting large amounts of representative data from a soft robot in order to model the complex state space–a task which may not be feasible in many situations. Furthermore, the exclusive use of empirical models for model-based control can be dangerous if the model does not generalize well. To address these challenges, we propose a hybrid modeling approach that combines machine learning methods with an existing first-principles model in order to improve overall performance for a sampling-based non-linear model predictive controller. We validate this approach on a soft robot platform and demonstrate that performance improves by 52% on average when employing the combined model. INTRODUCTION Soft robots have many desirable characteristics which make them attractive candidates for a wide variety of tasks where traditional rigid robots are ill-suited. For example, rigid robots are often restricted to operating in well-defined enclosures to avoid dangerous collisions with the environment or human operators. In contrast, soft robots are able to operate safely in unstructured environments, where incidental contact is likely or even desired, due to their inherent flexibility and adaptability. In this work, the main contribution we present is a methodology for learning model discrepancies for use in a real-time non-linear model predictive control (NMPC) scheme. We validate this approach in simulation and on a soft robot platform. This platform is an ideal test bed for our approach because the actual dynamics (both in terms of joint configuration and air pressure in the joint chambers over time) are intrinsically more uncertain than previously presented rigid robot systems and control methods discussed in section 1.1. While we apply our approach to soft robotics to demonstrate its potential to learn both uncertain and unknown dynamics, the proposed method could generalize to any platform using a model predictive controller. The structure of this paper is as follows. Section 2 presents our hardware platform, the analytical model used to generate training data, our deep neural network (DNN) training methods, and evaluation of each model's accuracy. Section 3 explains the non-linear evolutionary model predictive control (NEMPC) algorithm we employ and shows the results of our experiments and explores their implications. Section 4 discusses the importance of this work as well as current limitations and future directions for additional research. Related Work The many desirable characteristics of soft robots present challenging problems when it comes to modeling and controlling them. Accurate physics-based (first-principles) models that are tractable for real-time model-based control are difficult to obtain because of uncertain material properties, hysteresis, non-linear dynamics, and complicated pneumatic flow dynamics. 
Soft robot physics-based modeling efforts range from finite element (FEM) approaches as in Pozzi et al. (2018) and Katzschmann et al. (2019) to Cosserat Rod models as in Till et al. (2019) or piecewise constant curvature (PCC) models as in Allen et al. (2020) and Della Santina et al. (2020). Many of these methods have shown promise. However, the effort and expertise required to accurately model all of the aforementioned effects is formidable. Even if a perfectly accurate analytical model could be derived, it may be useless for real-time model-based control due to the high computational time required for evaluation, as will be shown in the experiments of section 3.5. Additionally, even if the model is made tractable using appropriate simplifications, it would likely still require significant effort in system identification to obtain acceptable closed loop control performance. Rus and Tolley (2015) and Thuruthel et al. (2018) both summarize the wide spectrum of strategies that have been proposed to overcome the aforementioned modeling challenges. Among these, data-driven modeling specifically addresses many difficulties of physics-based modeling for control. Generally, data-driven control algorithms are based on various forms of machine learning, such as neural networks as in Thuruthel et al. (2017) and Mohajerin et al. (2018), Gaussian processes (GP) in Ostafew et al. (2016), Kabzan et al. (2019), Soloperto et al. (2018), and Hewing et al. (2020), reinforcement learning (RL) as in Thuruthel et al. (2019), or sparse optimization (also known as SINDY) as in Kaiser et al. (2018). Notably, deep learning has proven to be a valuable tool for robot modeling and control and is explored thoroughly in Pierson and Gashler (2017) and Sünderhauf et al. (2018). Deep learning has more recently demonstrated the ability to approximate soft robot dynamic models accurately in Gillespie et al. (2018) and Hyatt and Killpack (2020). A major benefit of such approaches is that they are largely data-driven and as such, do not require an analytical model or specialized expertise. However, using these learned models in a real-time, model-based control formulation for soft robots (such as in Gillespie et al., 2018;Hyatt and Killpack, 2020) has been explored to a much lesser extent. Specifically, by using specialized hardware for accelerated computing, such as Graphics Processing Units (GPUs), data-driven models can be forward sampled in large batches and at high rates using a parallelized architecture. This enables their direct use to solve an optimal control problem using a non-linear model predictive control strategy (see Hyatt and Killpack, 2017;Hyatt et al., 2020b). This is the approach on which we build for this paper. On the other hand, an undesirable characteristic of datadriven modeling techniques is the need for large amounts of representative data, which is difficult to collect on hardware platforms where exploring the whole state space of the robot is infeasible or dangerous. Our approach in this paper is to use a simplified, first-principles model to train a deep neural network (DNN) to represent general trends in state variables for the dynamics, and then add another deep neural network to compensate for additional error in the predicted states. To accomplish this, while also benefiting from the parallel computation available on a GPU, we first train a DNN to learn the first-principles model. Then we train a second DNN to learn the simulation-to-reality error gap. 
Because the first-principles DNN learns the general form of the dynamics from simulation, much less hardware training data is required. The hardware data only serves to make adjustments to capture unmodeled dynamics and does not necessarily need to be as representative or as plentiful as would be required if hardware data was exclusively used to train the neural network. Our work toward compensating for modeling error with data-driven learning is similar to Sun et al. (2019), where the authors use deep learning to predict physics-based modeling error of water resources, Kaheman et al. (2019), where they present an algorithm to learn a discrepancy model on a double inverted pendulum, and Della Santina et al. (2020), where the authors augment a model-based disturbance observer with a learned correction factor on a soft robot. Most similar to our work is that of Koryakovskiy et al. (2018), where they augment a non-linear model predictive controller with various forms of learned actions to compensate for model-plant mismatch on a rigid humanoid robot. Other works that include using neural networks as the backbone for predictive control are Piche et al. (2000) and Lu and Tsai (2008). FIRST PRINCIPLES AND DEEP LEARNING We start by providing an overview of our approach and how it fits with the methods and hardware presented in subsequent sections. Our overall approach to compensate for unknown modeling errors starts with training a deep neural network to act as a surrogate for the analytical model derived in section 2.1. This surrogate DNN is needed to exploit the parallelized architecture of modern GPUs, which, in turn, affords higher control rates for our non-linear MPC algorithm described in section 3.1. Details related to the training of the surrogate DNN are presented in section 2.2. Next, we train a second deep neural network to compensate for the modeling discrepancies described in section 2.3. The methods for training this error DNN are presented in section 2.4. Once the surrogate and the error DNN are trained, we evaluate both in parallel, resulting in a combined forward prediction model (that we refer to as a combined DNN) which reflects the dynamics of the hardware platform more accurately. By improving the forward prediction capabilities of our model, we enable the controller to find more optimal input trajectories and thereby improve control performance. The methods involved in validating the control performance using the combined DNN are presented in section 3.4. Robot Platform Description and Modeling The platform used for this work is a continuum joint comprised of four pressurized bellows which encircle an inextensible steel cable, as shown in Figure 1. Controlling the pressure in each of the bellows results in a net torque which causes the joint to bend. We use the same singularity-free kinematic relationships derived by Allen et al. (2020), where the curvature of the continuum joint is parameterized as two separate rotations (u and v) about orthogonal axes (x and y), which lie at the base of the joint. For notational clarity in this paper, we define θ = u and φ = v.
FIGURE 1 | Photograph of the soft robotic continuum joint used for this work. θ and φ are the rotations about the joint's x and y axes, respectively.
The dynamic model of the continuum joint is of the form $M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau$ (1), where $M(q) \in \mathbb{R}^{2\times 2}$ is the symmetric mass matrix, $C(q,\dot{q}) \in \mathbb{R}^{2\times 2}$ is the Coriolis matrix, $g(q) \in \mathbb{R}^2$ is a vector of torques caused by gravity, $q(t) = [\theta, \varphi]^\top$ is a vector of generalized coordinates, and $\tau \in \mathbb{R}^2$ is a vector of generalized forces. An analytical equation of motion of the form shown in Equation (1) can be derived using principles of Lagrangian mechanics by modeling the joint as an infinite set of infinitesimally thin disks and integrating along the length of a piecewise constant curvature (PCC) arc. This method was developed in Hyatt et al. (2020a), which includes a detailed derivation of this model. There are also significant non-linear pressure dynamics inside the bellows actuators, where the rate of change in pressures is on the same order of time response as the actual motion of the robot. We model the pressure dynamics as a first-order system such that $\dot{p} = \alpha\,(p_{\mathrm{ref}} - p)$ (2), where $p(t) \in \mathbb{R}^4$ is a vector of pressures, $p_{\mathrm{ref}}(t) \in \mathbb{R}^4$ is a vector of reference (i.e., commanded) pressures, and $\alpha \in \mathbb{R}^{4\times 4}$ is a diagonal matrix of coefficients representing the fill/vent rate of the pneumatic valves. Numerical values for the parameters used in this model are included in the repository accompanying this paper. Because each of the pressure bellows is made of deformable plastic, there are several effects from material properties, such as stiffness and damping, that are not accounted for in Equation (1). We include these effects as a linear spring term ($K_{\mathrm{spring}}\,q$, where $K_{\mathrm{spring}}$ is a diagonal matrix), which pulls the joint toward a completely vertical configuration, and a viscous damping term ($K_d\,\dot{q}$, where $K_d$ is also a diagonal matrix). The pressure-to-torque mapping term ($K_{\mathrm{prs}}\,p$) maps pressure differentials in each antagonistic pair of bellows to a torque about each axis where bending in φ and θ occurs. These additions, coupled with Equation (2), result in our final analytical dynamic model: $M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) + K_d\dot{q} + K_{\mathrm{spring}}\,q = K_{\mathrm{prs}}\,p$ (3). For conciseness, we rearrange Equations (2) and (3) into a non-linear state variable form $\dot{x}(t) = f(x(t), u(t))$ (4), with state $x = [q^\top, \dot{q}^\top, p^\top]^\top \in \mathbb{R}^8$ and input $u = p_{\mathrm{ref}} \in \mathbb{R}^4$. We use x(t) and u(t) for the remainder of this work. Surrogate DNN Training We first train a neural network to learn a state transition function from simulated data using the analytical model described in Equation (4). Previous work in the field of reinforcement learning (OpenAI et al., 2019; Rao et al., 2020) demonstrated that using simulated data, while not reflecting the world perfectly, allows the model to learn more quickly and perform better than when the model is trained on real-world data alone. Leveraging analytical models and simulation environments also allows us to collect more data than would be physically possible, since simulations can run for long periods of time without supervision and without the risk of damaging hardware. In simulation, data is cheap and easy to collect. With the only cost for collecting data being computing power and time, we theoretically have access to an infinite dataset without risking any damage to the real robot hardware. Training data is generated by numerically integrating Equation (4) with a fourth-order Runge-Kutta integration scheme using a constant time step of 0.001 s in order to get accurate simulation data. The pressure commands (u(t)) are square waves randomly distributed between the minimum and maximum safe operating pressures (8-400 kPa) in order to record both transient and steady-state responses for DNN training.
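To make the structure of Equations (2)-(4) and this data-generation procedure concrete, a minimal sketch of the state-space model and the fourth-order Runge-Kutta step is given below. It is an illustration only, not the authors' code: the mass, Coriolis, and gravity terms are crude stand-ins for the PCC Lagrangian expressions of Hyatt et al. (2020a), and every numerical constant is hypothetical (the true values live in the repository accompanying the paper).

```python
import numpy as np

# Illustrative constants only -- the real values live in the authors' repository.
K_SPRING = np.diag([3.0, 3.0])      # joint stiffness, hypothetical
K_D      = np.diag([0.5, 0.5])      # viscous damping, hypothetical
K_PRS    = 0.01 * np.array([[1.0, -1.0, 0.0, 0.0],   # pressure-to-torque map,
                            [0.0, 0.0, 1.0, -1.0]])  # one antagonistic pair per axis
ALPHA    = np.diag([5.0, 5.0, 5.0, 5.0])             # valve fill/vent rates (1/s)

def dynamics(x, u):
    """Continuous-time model x_dot = f(x, u) in the spirit of Eq. (4).

    x = [theta, phi, theta_dot, phi_dot, p1..p4], u = commanded pressures p_ref.
    M, C, g are placeholders for the PCC Lagrangian terms of Hyatt et al. (2020a).
    """
    q, q_dot, p = x[:2], x[2:4], x[4:]
    M = np.diag([0.05, 0.05])                   # placeholder mass matrix
    C = np.zeros((2, 2))                        # placeholder Coriolis matrix
    g = 9.81 * 0.02 * np.sin(q)                 # placeholder gravity torque
    tau = K_PRS @ p                             # Eq. (3): pressure differentials -> torque
    q_ddot = np.linalg.solve(M, tau - C @ q_dot - g - K_D @ q_dot - K_SPRING @ q)
    p_dot = ALPHA @ (u - p)                     # Eq. (2): first-order pressure dynamics
    return np.concatenate([q_dot, q_ddot, p_dot])

def rk4_step(x, u, dt=0.001):
    """One fourth-order Runge-Kutta step, matching the 0.001 s simulation time step."""
    k1 = dynamics(x, u)
    k2 = dynamics(x + 0.5 * dt * k1, u)
    k3 = dynamics(x + 0.5 * dt * k2, u)
    k4 = dynamics(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```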
We use square waves because of their ability to excite (and therefore learn) more dynamic modes in the system compared to other common test signals (e.g., sine waves or ramps). The simulated training data consists of 12 simulation runs, each over a period of 250 s. Sampled at a rate of 0.001 s, this came out to three million data points. We frame the training process as a supervised learning problem, with the current state ($x_t$) and commanded pressures ($u_t$) being inputs, and the difference between the current state and the next state ($\Delta x_t = x_{t+1} - x_t$) as the output. Since the changes in state are small over small time steps, by only requiring the model to learn the difference in states, we free the model from mostly having to learn the identity operation of copying over the previous state with only small adjustments. In training, we use a simple fully-connected network. In situations where accuracy is desired more than speed, one might instead opt for a long short-term memory (LSTM, Hochreiter and Schmidhuber, 1997) or transformer neural architecture (Vaswani et al., 2017). However, we chose to use this small, simple network to allow for the very quick evaluation time that is needed for NEMPC. Each network is composed of three intermediate fully-connected networks which we call N_x, N_u, and N_out. All hyperparameters, including the number of hidden layers and hidden layer sizes, were chosen using a hyperparameter search while maintaining the speed necessary for real-time control (see Table 1 for a full list of parameters). N_x and N_u are fully-connected networks with two hidden layers and 256 hidden nodes. N_out has two hidden layers and 512 hidden nodes. For context, we let the state and network outputs be $x_t, \Delta x_t \in \mathbb{R}^8$, and let the commanded pressure be $u_t \in \mathbb{R}^4$. We run $x_t$ through N_x and $u_t$ through N_u to produce intermediate outputs, which are then run through N_out to produce the state transition from the current time step to 0.02 s (the prediction rate of the controller) in the future, $\Delta x_t = x_{t+1} - x_t$. Because our data was recorded at 1,000 Hz, we had to sample from our training data at the correct frequency of 50 Hz (taking every twentieth data point) to help the network learn at the desired control rate of 50 Hz. We use 70% of the data for training and reserve 30% for validation. For a diagram of the architecture, please refer to Figure 2. We calculate the loss to be the L1-norm plus the cosine distance, $\mathcal{L} = \lVert \Delta\hat{x}_t - \Delta x_t \rVert_1 + c$ (5), where the cosine distance (c) is $c = 1 - \frac{\Delta\hat{x}_t \cdot \Delta x_t}{\lVert \Delta\hat{x}_t \rVert_2\,\lVert \Delta x_t \rVert_2}$ (6). We chose this loss in order to account for both the total absolute error and the direction of change. This direction of change matters because if the predicted rate of change in our state has the wrong sign, this can cause significant stability problems for model-based control. Note that we normalize $x_t$ and $u_t$ to have a mean of zero and standard deviation of one before running them through N_x and N_u to allow for faster training. The difference between states $\Delta x_t = x_{t+1} - x_t$ is scaled by the standard deviation before calculating the loss function to allow the loss function to weight all state variables equally, regardless of the unit, but is not shifted by the mean to preserve direction. At evaluation time, we re-scale the derivative to the correct units with our cached standard deviations.
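A compact sketch of how the N_x/N_u/N_out architecture and the L1-plus-cosine-distance loss described above could look in PyTorch follows. The hidden-layer sizes match the numbers quoted in the text; the activation function, the concatenation of the two branch outputs, and the reduction over the batch are assumptions rather than details taken from the authors' implementation.

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden, out_dim):
    # Two hidden layers, matching the sizes quoted in the text; ReLU is an assumption.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class SurrogateDNN(nn.Module):
    """N_sim: predicts the normalized state change delta_x over one 0.02 s step."""
    def __init__(self, x_dim=8, u_dim=4):
        super().__init__()
        self.N_x = mlp(x_dim, 256, 256)       # state branch
        self.N_u = mlp(u_dim, 256, 256)       # input branch
        self.N_out = mlp(512, 512, x_dim)     # combined head (concatenation is assumed)

    def forward(self, x, u):
        h = torch.cat([self.N_x(x), self.N_u(u)], dim=-1)
        return self.N_out(h)

def l1_plus_cosine_loss(dx_pred, dx_true, eps=1e-8):
    """L1 norm of the error plus cosine distance between predicted and true changes."""
    l1 = (dx_pred - dx_true).abs().sum(dim=-1)
    cos = torch.nn.functional.cosine_similarity(dx_pred, dx_true, dim=-1, eps=eps)
    return (l1 + (1.0 - cos)).mean()
```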
[Algorithm 1 (excerpt): for each simulated training sequence of length n do ⊲ around 2 million training sequences]
For non-linear model predictive control (NEMPC), or any other predictive control algorithm, we need to be able to accurately predict more than just one time step ahead. To make our DNN more robust and accurate across longer time intervals, we do not train the network to only predict one time step into the future. Instead, we only allow the network to see the first state in a sequence (like an initial condition for numerical integration). Then, using the pressure inputs, we train the network to estimate n steps forward by recursively running the estimated states through the network. Note that we backpropagate the total loss over the entire trajectory at each training step. By training this way, we can be more confident that any unmodeled error will not propagate forward in time. When choosing the number of steps (n) for forward propagation of the dynamic model, one should consider the desired horizon where we need the most accurate predictions. We chose n = 100 so that our model would be able to predict 2 s (100 · 0.02 s) into the future, a horizon longer than most that would be used with NEMPC, as detailed in section 3 (for reference, the time horizon we use in this work for real-time control is 0.1 s). We train the surrogate DNN on data gathered from the analytical model described in section 2.1. For a detailed description of the training procedure, please refer to Algorithm 1. Note that we refer to the surrogate DNN as N_sim. Dynamic Model Inaccuracies In this section we discuss in more detail the modeling errors and partially correct assumptions that exist in the dynamic model presented in section 2.1, in an attempt to understand and gain intuition as to how the trained error model will compensate during real-time control. Regarding Equation (2), we acknowledge that the real pressure dynamics on hardware are not simply first order. For example, we do not model the dynamics of the valves used to control pressures (which can cause choked or unchoked fluid flow) or the differences in the pressure dynamics depending on whether the chambers are filling or venting from different pressure reservoirs. In Equation (3), we assume a linear pressure-to-torque mapping ($K_{\mathrm{prs}}\,p$), a linear damping term ($K_d\,\dot{q}$), and a linear spring term ($K_{\mathrm{spring}}\,q$). These terms do not capture non-linear behaviors, such as the increased stiffness and damping that exist near joint limits, nor do they reflect any wear in the materials due to usage over time. We also suspect some hysteresis in the movement of the joint, as well as an offset in the resting equilibrium position for φ and θ of the robot due to plastic deformation in each of the robot's pressure chambers. None of these effects are explicitly included in the dynamic model of the robot. While previous work (Hyatt et al., 2020a) demonstrated that this formulation of the dynamic model was accurate enough for model-based control, improvements are needed in order to control soft robots in uncertain environments or during highly dynamic movements. Certainly, further system identification would improve this model; however, because of the complexities and uncertainties inherent in soft robots and the processes to manufacture them, system identification techniques scale poorly with high degree-of-freedom systems and do not necessarily generalize well between platforms.
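Before turning to the error-model training in the next section, the sketch below illustrates the recursive n-step rollout training just described (Algorithm 1), and how the same loop can be reused for the error network of Algorithm 2 with the surrogate frozen. The per-step loss target and the optimizer bookkeeping are assumptions made for the sake of a runnable example; the loss form is the one from the previous sketch.

```python
import torch

def step_loss(dx_pred, dx_true):
    # Same L1-plus-cosine-distance loss as in the previous sketch.
    cos = torch.nn.functional.cosine_similarity(dx_pred, dx_true, dim=-1)
    return (dx_pred - dx_true).abs().sum(dim=-1).mean() + (1.0 - cos).mean()

def rollout_loss(model, x0, u_seq, x_true_seq, error_model=None):
    """Recursive n-step rollout loss in the spirit of Algorithms 1 and 2.

    Only the initial state x0 is seen by the network; predicted states are fed
    back in for each of the n steps and the summed loss is backpropagated once.
    If error_model is given, its output is added to the surrogate output, as in
    the error-DNN training where the surrogate weights are frozen.
    """
    x = x0
    total = 0.0
    for t in range(u_seq.shape[0]):
        dx = model(x, u_seq[t])
        if error_model is not None:
            dx = dx + error_model(x, u_seq[t])
        x = x + dx                                   # recursive state prediction
        dx_true = x_true_seq[t + 1] - x_true_seq[t]  # per-step target (an assumption)
        total = total + step_loss(dx, dx_true)
    return total

# Hypothetical use for the error network (surrogate frozen, only N_err updated):
#   for p in surrogate.parameters():
#       p.requires_grad_(False)
#   loss = rollout_loss(surrogate, x0, u_seq, x_hw_seq, error_model=error_net)
#   loss.backward(); optimizer.step()
```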
The error model developed in this paper offers a scalable technique to compensate for modeling error while still maintaining generality between platforms. Error DNN Training To train an error model that is capable of compensating for the model inaccuracies described in section 2.3, we first collect hardware data by sending and recording bounded random pressure inputs u(t) to the robot. The robot's internal pressure controller ensures that the pressure in each of the bellows reaches the commanded pressure. Note that the minimum and maximum safe operating pressures (8-400 kPa) are also respected here through external pressure regulation. We record the joint positions and pressures directly and estimate joint velocities numerically. This process is repeated for each time step until a suitable quantity of training data is gathered (see Figure 3). By nature, this hardware data is noisy and inconsistent, with sampling rates varying slightly during the data collection process. In order to train on data with uniform spacing, we interpolate between real data points to estimate the state vector and inputs at regular 0.001 s intervals. We trained with more simulated data than hardware data, with only seven hardware runs which are each 90 s long, coming out to 630,000 data points. We use 540,000 data points for training and 90,000 to validate the model. With the data gathered from the hardware, we were able to train the error DNN. First, we sample the data at the desired time interval for which the surrogate model was trained (0.02 s). We then freeze the weights of the surrogate DNN, and divide our dataset into sequences of length n = 100. In a similar training procedure as before, we only allow the network to see the first state x 0 , and task it with predicting the next n states given the commanded pressures u 0 , u 1 , ..., u n−1 . We run the states through the surrogate DNN, and add to its output the output of the error DNN. Thus, the error model does not have to learn the first-principle physics that the surrogate DNN has already learned. Instead, it only has to learn the discrepancies between the simulation and reality, as discussed in section 2.3. We pass the first state and sequence of commanded pressures n times recursively through both the surrogate and error networks, calculate loss between the true and predicted error, and update the error network's weights (see Algorithm 2). We use 80% of the data for training and reserve 20% for validation. Note that we refer to this error DNN as N err . When convergence is reached, both DNN models are ready to be utilized within NEMPC for forward prediction of the robot's behavior. Modeling Results To test the relative fidelity of each model and compare their responses, we simulate the analytical model, the surrogate DNN, and the combined DNN (i.e., the surrogate DNN plus the error DNN or N sim + N err ) with a random step trajectory of commanded pressures (u(t)). This same pressure trajectory is then also commanded on hardware to enable a complete comparison between all models and the actual hardware platform. Figures 4-6 compare the dynamic response of the four different systems (e.g., analytical model, N sim , N sim + N err , actual hardware) in pressure, angular velocity, and joint angles, respectively. It is important to note that while there is significant steady-state error as well as some unmodeled transients in all three figures, the analytical model captures the general trends of the hardware data. 
Because these trends are naturally embedded in the training data, the error DNN is only required to learn small adjustments, which requires much less data than learning the dynamics from scratch. Additionally, in all three figures, the surrogate DNN tracks the analytical model with relatively small error. This indicates that the surrogate DNN training was successful. Because the surrogate DNN is trained with simulated data, it could easily be improved further by running more simulations. Likewise, the combined DNN tracks the hardware data well. It is clear from Figure 4 that the error DNN learned that the actual pressure dynamics on hardware are not first order, as is predicted by the analytical model and the surrogate DNN. We believe these differences arise from valve/flow dynamics when venting or filling a pressure chamber aggressively. The most salient feature in Figure 5 is that the velocities on hardware tend to lag behind those of the analytical model due to the filtering of measured position data, which introduces a small phase lag. Interestingly, the combined DNN still tracks the hardware data well, revealing a promising ability to compensate for errors introduced not only by modeling error, but also by state estimation. We also note that in a few cases (around 6 s in the lower subplot of Figure 5), the analytical model actually predicts a velocity in the wrong direction. We suspect this may be due to unmodeled non-linear stiffness properties of the robot, because at this moment in time (6 s) the robot is near its upper joint limit of 1.5 radians in both θ and φ (see Figure 6). Our primary observation from Figure 6 is the large steady-state offset caused by plastic deformation on the real hardware, resulting in a non-zero equilibrium configuration in the joint angles θ and φ that is not present in the analytical model or the surrogate DNN. The error model is able to eliminate most of the offset and track the hardware data well. While there are some small transient dynamics in the hardware data that were not learned by the error DNN (e.g., the 4, 6, and 18 s marks for θ or the 4, 6, and 16 s marks for φ in Figure 6), the error DNN prediction performance is clearly superior to that of either the analytical or surrogate DNN.
[Algorithm 2 (excerpt): Let u_0, u_1, ..., u_{n-1} be the sequence of commanded pressures; x_1 = N_err(x_0, u_0) + N_sim(x_0, u_0) + x_0 ⊲ add the N_err output to the simulation model to learn the gap]
FIGURE 3 | This diagram shows the method used to generate error training data. A random step input pressure trajectory u is sent to the hardware and the states are recorded. The same input trajectory is simulated using the surrogate DNN, which takes u and x as inputs. The resulting state trajectory is subtracted from the hardware state trajectory to get the state tracking error over time. This allows the error DNN to predict error given the current state x and the current input u.
FIGURE 4 | Comparison of pressure dynamics between the four different systems used. The dashed line indicates the commanded pressure, while each of the solid lines is the pressure response resulting from the commanded pressure input. Note that the states from the analytical model and the surrogate DNN match well and that when using the combined DNN, the simulation closely resembles the hardware data.
In an effort to explore the transferability of our model, we also tested our DNNs, sending pressure trajectories that were not used for training (e.g., sine waves and ramps). The results of this experiment are shown in Figure 7.
The left column is the model prediction using sine wave pressure inputs and the right column is the model prediction using ramp pressure inputs. It is interesting to note that the error DNN (N_err) learned to compensate for some coupling clearly visible in θ, as well as some offsets in both θ and φ that are not captured by the analytical model or the surrogate DNN (N_sim) alone. CONTROL In this section, we present our control algorithm and our findings based on several experiments in simulation and on hardware. Non-linear Evolutionary Model Predictive Control Non-linear evolutionary model predictive control (NEMPC) was developed as a real-time control algorithm for high degree-of-freedom (DoF) robot platforms. A variant of model predictive control (MPC), NEMPC utilizes an evolutionary algorithm to solve the MPC optimization. By using an evolutionary algorithm, it is able to approximate a global minimum (as opposed to an exact local minimum) because it explores more of the solution space than local optimization methods. Extensive implementation details can be found in papers by Hyatt and Killpack (2020) and Hyatt et al. (2020b). The implementation of NEMPC in this work differs from the work in Hyatt and Killpack (2020) in that the algorithm no longer mutates every child generated during mating. With some probability P_mutate, children are selected for mutation. Those children have each of their genes perturbed by a uniform distribution on the interval (−σ, σ). This allows the search to refine individual trajectories while still preserving others. For this paper, we implement the typical quadratic cost function formulation used in other MPC schemes, with one small modification that places a cost on the change in inputs (i.e., $\Delta u_t = u_t - u_{t-1}$) as opposed to $u_t$ itself. This forces NEMPC to generate more conservative solutions which, in turn, cause pressures to vary more smoothly over time. Note that the cost on change in inputs is a competing optimization objective with position tracking and requires some tuning of Q and R to achieve good tracking performance while also maintaining smooth input trajectories. The optimization is formulated as $J = \sum_{t=0}^{T-1}\big[(x_t - x_{\mathrm{goal}})^\top Q\,(x_t - x_{\mathrm{goal}}) + \Delta u_t^\top R\,\Delta u_t\big] + (x_T - x_{\mathrm{goal}})^\top Q_f\,(x_T - x_{\mathrm{goal}})$ (7), subject to $x_{t+1} = x_t + N(x_t, u_t)$, where $N(x_t, u_t) = N_{\mathrm{sim}}(x_t, u_t)$ (8) or $N(x_t, u_t) = N_{\mathrm{sim}}(x_t, u_t) + N_{\mathrm{err}}(x_t, u_t)$ (9). In Equation (7), J is a scalar representing the cost of a given input sequence, T is the simulation horizon over which that input series is applied, and $Q \in \mathbb{R}^{8\times 8}$, $Q_f \in \mathbb{R}^{8\times 8}$, and $R \in \mathbb{R}^{4\times 4}$ are diagonal weighting matrices penalizing error, error at the final time step of the horizon, and actuator effort, respectively. $x_t$ represents the state vector and $u_t$ is the input vector. $x_{\mathrm{goal}}$ is the commanded robot state. Q and Q_f are weighted such that the only values of $x_{\mathrm{goal}}$ that contribute to the cost J are the position and velocity states. The variable N is a placeholder for the DNN that NEMPC uses. For the case using the surrogate DNN defined in section 2.2, NEMPC enforces the constraint given in Equation (8). For the combined case defined in section 2.4, NEMPC uses the constraint given in Equation (9). At each time step, the optimizer is allowed to take a single step toward the optimum (or one generation of the genetic algorithm). NEMPC then returns the input associated with the lowest-cost member of the population for the current time step, which is applied to the hardware system. As soon as that command is sent, NEMPC takes another step toward the optimum, given new measurements of the robot's state. The fact that the previous time step's population is used to warm start the next optimization causes the algorithm to converge quickly.
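To illustrate how the cost of Equation (7) can be evaluated for an entire population of candidate input sequences in one GPU batch, consider the sketch below. It covers only the rollout-and-cost step of NEMPC; the evolutionary operators (mating, mutation with probability P_mutate) are omitted, the change in input at the first step is simplified, Q_f is set equal to Q purely for brevity, and all weights and names are placeholders rather than the authors' values.

```python
import torch

def nempc_population_cost(model, x0, U, x_goal, Q_diag, R_diag):
    """Batched evaluation of the Eq. (7) cost for a population of input sequences.

    U has shape (pop, T, 4): e.g. 1,500 candidate pressure trajectories rolled
    forward in parallel through the DNN constraint x_{t+1} = x_t + N(x_t, u_t).
    """
    pop, T, _ = U.shape
    x = x0.expand(pop, -1).clone()
    u_prev = U[:, 0, :]                      # so delta_u = 0 at the first step (simplification)
    J = torch.zeros(pop, device=U.device)
    for t in range(T):
        err = x - x_goal
        du = U[:, t, :] - u_prev
        J = J + (err * Q_diag * err).sum(-1) + (du * R_diag * du).sum(-1)
        x = x + model(x, U[:, t, :])         # constraint (8) or (9)
        u_prev = U[:, t, :]
    err = x - x_goal
    return J + (err * Q_diag * err).sum(-1)  # terminal term with Q_f = Q (assumption)

# One NEMPC generation per control step (sketch): evaluate the population and
# apply the first input of the lowest-cost member, then mate/mutate the population.
#   best_u = U[nempc_population_cost(model, x0, U, x_goal, Q_diag, R_diag).argmin(), 0]
```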
As a practical note, the tuned weights in Q corresponding to the pressure states are 0 because we are not trying to follow a pressure trajectory or specify stiffness. This allows NEMPC to find any valid set of pressure states that will enable tracking of desired velocities and positions. Positions are weighted heavily and velocities relatively lightly. The introduction of a DNN as NEMPC's internal model of the plant is a key component that enables NEMPC's execution at real-time speed and the evaluation of an entire population of solutions in batches. This allows a large graphics processing unit (GPU) to simultaneously evaluate all 1,500 potential input series at any given time step. In our work, we are able to control the eight-state soft robot continuum joint at a rate of 100 Hz with a time horizon of 0.1 s. Simulation Experiment To validate the efficacy of the NEMPC controller, a simulated experiment is run using the analytical model of the soft robot continuum joint as the plant, and the surrogate DNN as the internal control model of the system. As in the later hardware experiments, NEMPC is fed a reference trajectory in θ and φ, and calculates a set of reference pressures $u_t^*$ which are then applied to the dynamic system (simulated with the analytical model in this case). This experiment is not run in real time, due to the computational time required to numerically integrate the analytical model of the robot. Simulation Results The results of the simulation experiment can be seen in Figure 8. Since the surrogate DNN is a good approximation of the simulated robot, NEMPC is able to find near-optimal solutions with relative ease. From these results, we see that Non-linear Evolutionary Model Predictive Control is capable of generating excellent control inputs for a system that is well-approximated by a surrogate DNN. However, when NEMPC is used to control the hardware with a surrogate DNN, the results are much worse because the surrogate DNN is a poor approximation of the dynamics for the real hardware (see section 3.4). Hardware Experiments After validating NEMPC's performance in simulation, we evaluate the performance of NEMPC while controlling the soft robot continuum joint, following a reference trajectory in θ and φ. This experiment is run twice, once while NEMPC's internal model of the robot is represented by the surrogate DNN (N_sim), and once while NEMPC's internal model is represented by the combined DNN (N_sim + N_err). We use two HTC Vive Trackers rigidly attached to the robot base and tip in order to measure the joint angles (θ and φ) in real time (see Figure 9), while the joint velocities (θ̇ and φ̇) are numerically differentiated from the angle measurements. The pressures in each of the robot's four chambers are measured by onboard sensors and controlled by an embedded high-frequency PID controller. All of this data is packaged and published via the Robot Operating System (ROS) at 400 Hz to a separate computer on the network with an 8-core Intel Xeon E5-1620 CPU and an NVIDIA GeForce GTX 1080 Ti GPU, which is dedicated to running the NEMPC algorithm. As shown by Thompson et al. (2020), the hardware requirements for major deep learning papers have increased quickly with time, so we believe that our single-GPU setup is relatively inexpensive and computationally cheap.
FIGURE 7 | Comparison of joint angle dynamics between the four different models used on different test signals. The left column is in response to sine waves in pressure commands and the right column is in response to ramp inputs in pressure commands. Note that the combined DNN, although not trained on sines and ramps, still predicts unmodeled dynamics.
Figure 10 illustrates the process as a control diagram. The controller is given an x_des(t) which is used in conjunction with the current state estimate x̂_t to calculate an optimal pressure command u*. This command is sent to the embedded PID pressure controller, and then pressures and joint angles are measured directly. Hardware Results The results of the hardware experiments are presented in Figure 11. When the surrogate DNN is used as NEMPC's internal model to control the soft robot hardware, NEMPC struggles to follow the desired path for θ and φ. This behavior is likely due to the surrogate DNN's poor approximation of the hardware dynamics, as evaluated in section 2.5. Evidence of this is found in the performance of NEMPC while internally simulating with the combined DNN. When NEMPC controls the hardware while using the combined DNN as its internal model, the reference tracking performance shown in Figure 11 improves significantly. With a more accurate internal model, NEMPC is able to generate solutions that better account for factors such as the robot's plasticity (e.g., non-zero equilibrium configuration), hysteresis, and increased stiffness and damping near joint limits. This results in a much lower steady-state offset, and more rapid convergence in some cases.
FIGURE 9 | Diagram of the experimental setup for the hardware experiments. Also illustrated here is the inherent plasticity of the robot, resulting in a variable offset in θ and φ. Over time, the plastic in the pressure chambers deforms and causes the robot to have an equilibrium configuration that is not vertical.
Quantitatively, the reference tracking behavior of NEMPC can be measured through a statistical analysis of the tracking error for each experiment. A statistical comparison of NEMPC performance can be found in Table 2. The mean tracking error decreased from 0.378 to 0.182 rad, a 52% decrease. The median tracking error decreased by almost an order of magnitude. Of particular note is the difference in the integral of the time-weighted absolute error (ITAE) for each trial. This measure penalizes errors that persist over time, and allows a controller to be slightly less aggressive, as long as it converges and stays close to its target. The ITAE is calculated for each step input individually, summed over the whole series of step inputs, then recorded. As seen in the table, NEMPC with the combined DNN greatly outperforms NEMPC with the surrogate DNN with regard to ITAE, in part due to its lack of significant steady-state error. The surrogate DNN could be helped by the addition of an integrator to the controller, as done in previous work with NEMPC by Hyatt and Killpack (2020). What is most impressive in this case is that by incorporating the combined DNN with NEMPC, we achieve very low steady-state error with no integral control at all. All of our prior work (and most of the soft robot control literature) has required some sort of integral or adaptive control to compensate for this steady-state error (see Hyatt et al., 2020a for an example of model-reference adaptive control (MRAC), which essentially exhibits integral action to achieve low steady-state error).
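As a small aside, the ITAE measure described above can be computed from logged tracking data roughly as follows; the per-step segmentation and the trapezoidal integration are assumptions about the bookkeeping rather than a description of the authors' evaluation code, and all names are illustrative.

```python
import numpy as np

def itae(t, error, step_times):
    """Integral of time-weighted absolute error, computed per step command.

    Time is re-zeroed at the start of each commanded step, the weighted absolute
    error is integrated over that segment, and the segment values are summed.
    """
    total = 0.0
    bounds = list(step_times) + [t[-1]]
    for start, stop in zip(bounds[:-1], bounds[1:]):
        mask = (t >= start) & (t < stop)
        local_t = t[mask] - start                      # time since this step began
        total += np.trapz(local_t * np.abs(error[mask]), t[mask])
    return total
```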
The implementation of an integrator could help reduce steady-state tracking error for the surrogate DNN controller, but the control would still suffer from overshoot and generally poor performance. The mean, median, and standard deviation of the tracking error would likely remain indicators of the surrogate DNN's relatively poor performance. To visualize the insights offered by the mean, median, and standard deviation of the tracking error, Figure 12 presents a histogram of the normalized frequency of error for each of the two experiments on hardware. Visible in the plot for the surrogate DNN is the angle offset due to the robot's non-zero equilibrium configuration. The surrogate DNN causes NEMPC to tend toward negative error in θ and positive error in φ. When the error model in the combined DNN is introduced, both θ and φ error are pulled toward zero, becoming uni-modal and more normally distributed. Overall, the combined DNN is a much better approximation of the robot's dynamics, allowing NEMPC to follow the given reference trajectory much more effectively, even with fast changes (step inputs) in the commanded values for φ and θ.
FIGURE 10 | Control diagram for running NEMPC in conjunction with the learned error model. u* indicates the optimal input chosen by the controller. This input is sent to the embedded pressure controller and we measure pressures p and positions q directly, while estimating q̇.
FIGURE 11 | Comparison of tracking performance on the physical soft robot continuum joint while using the two categories of DNN model approximation. Note that the control performance of NEMPC while using the combined DNN contains much less steady-state tracking error than the control performance of NEMPC while using the surrogate DNN.
To validate that the combined DNN can be used for control trajectories other than step inputs, we conducted two more experiments: one for tracking sine waves in φ and θ, and a second for tracking ramps in φ and θ. The results can be seen in Figure 13. From these figures, it is apparent that the training data consisting of only step inputs is enough for the DNN to accurately predict the performance of the robot while tracking other waveforms. There is a nominal amount of phase lag in both cases, but this is expected because, in our implementation of NEMPC, x_goal for the entire prediction horizon remains constant while the waveform continuously changes. This could be overcome (without changing our formulation at all) by simply allowing NEMPC to use a continuous x_goal trajectory instead of the single constant value which we used. CONCLUSIONS AND FUTURE WORK In this work we demonstrate that significant model and control improvement is possible through a data-driven deep learning approach. Our approach does not require specialized expertise or any assumptions about the form of the model. As a result, this method is generally applicable to any model-based control problem where the plant dynamics are highly uncertain or only partially known. Additionally, because our approach is rooted in a physics-based analytical model and our error DNN only needs to learn relatively small adjustments, the error DNN can be smaller, faster, and train with less data than would be required if we took a completely model-free learning approach. This is especially beneficial when gathering training data on hardware is dangerous or expensive, as is often the case in the field of robotics (albeit less so for many soft robots).
FIGURE 12 | A histogram of the normalized frequency of θ and φ tracking error in the hardware experiments. Note that the data gathered while using the surrogate DNN for control has θ error and φ error that is biased in both directions away from zero. This is a result of the surrogate DNN's lack of information regarding the offsets in θ and φ at equilibrium. Also note the difference in y axis scaling for both histograms. FIGURE 13 | Comparison of tracking performance on sine (left column) and ramp (right column) test signals using the N sim + N err DNN configuration for control. Note that although both DNNs were trained solely on step inputs, the models are able to generalize well to other types of signals. In future work we hope to improve DNN accuracy, including using a state buffer. Currently, the DNN state transition model can only see the current state and commanded pressures-in other words, we assume that the state transition model is a firstorder Markov process. If hysteresis and other non-linear, statedependent phenomenon are present, then performance may improve by including a buffer of the last n states. This time sequence data could be leveraged by a fully-connected network, or some kind of recurrent neural network (RNN). However, this approach may slow the evaluation of the network. An important preliminary result, though not discussed indepth in this paper, is that the model and controller were sensitive to the frequency content in the data used for training. The effects of this were significant, but are currently poorly understood. However, we have presented evidence that using square waves to explore and learn the state space is an efficient method because the trained models generalized relatively well to sine waves and ramps. We also note from our experiments that the inverse relationship is not true; models trained on sine waves and ramps generally did not perform well when tested on step inputs. We believe this is because square waves excite more dynamic modes than sine waves or ramp inputs in pressure. Further exploration into deep learning dynamics in a generalized fashion could be valuable as future work, especially in regards to specifically learning frequency content and modes of a dynamical system. This could produce even higher fidelity models. A downside to our approach is that if anything causes the plant dynamics to change, a small period of retraining would be required to maintain model fidelity. Future work could include a learning approach which allows the platform to continuously learn an error model online. Additionally, we recognize that exploring the state space randomly to gather training data is not always possible on some hardware platforms. Future work could include an exploration of how learning from a safe subspace of the state space can generalize to control over the entire reachable state space. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS CJ contributed a general problem formulation, collected the training data, and ran the experiments. TQ contributed with NEMPC controller improvements and running experiments. TS contributed DNN structures and training methods. CJ, TS, and TQ contributed equally to writing the paper. DW and MK assisted in developing the methodology and in advisory roles. All authors contributed to the article and approved the submitted version. 
FUNDING This material was based upon work supported by the National Science Foundation under Grant no. 1935312.
2021-05-04T13:28:12.609Z
2021-05-04T00:00:00.000
{ "year": 2021, "sha1": "15f12a7e925b826569c3cde41a263a1e228a25e9", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/frobt.2021.654398/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "15f12a7e925b826569c3cde41a263a1e228a25e9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
258620569
pes2o/s2orc
v3-fos-license
Chronic pain following totally extra-peritoneal inguinal hernia repair: a randomized clinical trial comparing glue and absorbable tackers Purpose Chronic pain following inguinal hernia repair occurs in up to 20% of patients. The underlying mechanism probably involves sensory nerve damage and abnormal healing that might be influenced by the materials chosen for mesh fixation. The main objective of this study was to compare the effect of glue and absorbable tackers on the rate of chronic pain after surgery in patients undergoing totally extraperitoneal inguinal hernia repair (TEP). Methods Patients undergoing TEP inguinal hernia repair were enrolled in a single-blind randomized clinical trial and were randomized for mesh fixation with glue (LIQUIBAND FIX 8 Neopharm) or absorbable tackers (SECURE STRAP Johnson & Johnson). Pain was assessed using a validated 4-point verbal-rank scale (none, mild, moderate, and severe) at 1 week, 1 month, 6 months, and 1 year postoperatively. Chronic pain was defined as pain persisting beyond 3 months. Results Two hundred and eight patients were analyzed. The groups were similar in age, gender, and hernia side. Chronic pain of any intensity was reported in 31.7% (66/208) after 6 months and in 13% (29/208) after 12 months. No differences in postoperative pain were observed between the two forms of mesh fixation. Still, when only those with severe pain were considered, mesh fixation with glue resulted in less pain compared to fixation by tackers (log-rank p = 0.025). At 1 year, 4 symptomatic recurrent hernias were identified in patients whose mesh was fixated with absorbable tackers. Conclusions Patients who underwent TEP inguinal hernia repair with mesh fixated by glue suffered from less pain. Background Inguinal hernia is one of the most common surgical pathologies. Postoperative chronic inguinal pain has been reported in 16-62% of patients who undergo inguinal hernia repair [1]. The prevalence of groin pain has decreased significantly with the adoption of the endoscopic approach, but up to 20% of patients undergoing laparoscopic repair may still suffer from chronic pain [2]. Although postoperative groin pain is usually mild in nature, quality-of-life studies have shown that chronic pain may significantly interfere with normal daily activity [1,3]. The underlying pathogenesis of postoperative groin pain is not completely clear; it may be multifactorial and, in many cases, impossible to define. One possible explanation relates to nerve damage during surgery, but this seems not to be the only factor, as many patients have postoperative sensory diminution without pain [1]. Another commonly cited reason may be a chronic foreign body inflammatory response due to the use of mesh [2]. The synthetic mesh is secured to the abdominal wall by tackers or glue [4], which may also potentially influence postoperative pain. The assumption is that tackers that are anchored into the tissue, such as the pubic bone or abdominal wall muscles, may result in a higher rate of local response and a higher rate of postoperative chronic pain compared to fixation methods that are not anchored into the tissue. A recent network meta-analysis suggests that absorbable tacks and glue cause less local tissue response and may be associated with the lowest rate of chronic pain and recurrence [5]. For that reason, we decided to compare these two techniques in a randomized trial. 
The main aim of this study was to assess the effect of two methods for mesh fixation in TEP inguinal hernia repair, absorbable tackers vs. glue, on postoperative chronic pain. Design We performed a single-blind, randomized clinical trial comparing mesh fixation using glue to mesh fixation using absorbable tack staples in patients undergoing TEP hernia repair. Ethical committee approval The study was approved by the local Institutional Review Board (193-16-ASF), and all patients gave their written informed consent. Participants Patients aged 18 to 80 years old who were admitted for elective inguinal hernia repair between March 1, 2017, and March 30, 2020, were eligible to participate in the study. Patients with a recurrent, irreducible, or incarcerated hernia, large inguinoscrotal hernia, pregnant women, patients presenting with co-morbid conditions that might interfere with pain assessment (impaired cognitive status, limited mobility, and daily use of pain medicine), and patients with previous surgery in the groin were excluded from the study. The patient flowchart is shown in Fig. 1. Randomization Randomization was performed using sealed envelopes, which were opened in the operating room just before mesh fixation. Intervention In group I, the mesh was secured by glue (LIQUIBAND FIX 8, Liquiband, Advanced Medical Solutions, UK), and in group II the mesh was secured to the tissue by absorbable tackers (SECURE STRAP, Johnson & Johnson). Surgery was performed under general anesthesia by 5 attending surgeons, each with individual experience of more than 200 laparoscopic inguinal hernia repairs. A standard, uniform surgical technique of TEP inguinal hernia repair was applied in all cases. All patients received prophylactic antibiotics with 2 g of intravenous cefazolin before surgery. The TEP inguinal hernia repair was routinely done using 3 trocars, with an 11-mm camera port at the umbilical border and two 5-mm ports along the midline about 4 cm and 8 cm above the pubis. Pre-peritoneal balloons were not used to dissect the pre-peritoneal space in this study. An appropriately sized knitted polypropylene mesh (Bard 3Dmax, BD, USA) was used in all patients. The mesh fixation points were standardized for all participants. The mesh was secured medially, just above the pubic bone, into Cooper's ligament and to the back of the transverse fascia laterally (maximum of 4 tackers or glue applications) [6]. Patients were discharged from the hospital within 24 h following surgery. Standard postoperative care instructions were given by staff nurses, who were blinded to the randomization. Outcomes All patients were examined in the outpatient clinic at 7 days and 1 year after surgery. A telephone interview was also performed at 1 month, 3 months, and 6 months after surgery. The examinations and telephone interviews were performed by an attending surgeon blinded to the method of mesh fixation. All the interview questions were standardized. Pain was assessed using a 4-point verbal-rank scale [7]. Patients were asked about the degree of pain, use of pain medications, restriction of daily activity, and use of medical resources such as medical consultation for inguinal pain. Mild pain was defined as occasional pain or discomfort that did not limit daily activity and did not require pain medications; moderate pain interfered with normal daily activity with a rare analgesic requirement; and severe pain was incapacitating, occurred at frequent intervals, or interfered with daily activities with a frequent need for painkillers. 
Daily activity included both physical and sport activities such as walking, lifting a bag, or jogging. Chronic pain was defined as pain persisting beyond 3 months after surgery [7]. Statistical analysis Statistical analysis was performed with R version 4.1.0 (R Foundation for Statistical Computing) and GraphPad Prism (version 6.00 for Windows, GraphPad Software, La Jolla, California, USA). Descriptive statistics were presented as mean (standard deviation) or number (percentage) for continuous and categorical covariates, respectively. Comparisons between the two groups were analyzed with independent t-tests and χ²-tests for continuous and categorical variables, respectively. Time-to-event curves for residual pain were constructed for each of the groups. The log-rank test was used to test the differences between the time-to-event curves. Means, standard deviations, and percentages were rounded to the nearest decimal, and p-values to the nearest thousandth. Recruitment data Three hundred and seventy-two patients who underwent laparoscopic TEP inguinal hernia repair at the Shamir Medical Center during the study period were screened for this study. Eighty-nine of them did not meet the inclusion criteria, and 68 patients refused to participate. Two hundred fifteen patients (199 males and 16 females) were randomly assigned to the 2 study groups. Seven patients were lost to follow-up and were excluded from the final analysis. Figure 1 summarizes the study flowchart from eligibility assessment to final analysis. Seventeen patients were unable to visit the outpatient clinic for examination 1 year after surgery due to COVID-19 pandemic restrictions. These patients were interviewed by telephone regarding pain and were considered a minor protocol deviation. Baseline characteristics Two hundred and eight patients participated in the study and underwent 346 laparoscopic TEP repairs, including 70 patients who underwent unilateral and 138 who underwent bilateral hernia repairs. No significant differences were observed in age and gender distribution between the groups (Table 1). Overall, 17 patients were unable to visit the outpatient clinic for physical examination 1 year after surgery (8 from group 1 and 9 from group 2). Postoperative pain assessment Residual pain of any intensity following surgery was reported in 54.8% (114/208) after 1 month, 31.7% (66/208) after 6 months, and 13% (29/208) after 12 months (Fig. 2). When the patients were stratified according to the technique of mesh fixation, no difference in moderate to severe residual pain was observed (Fig. 3). Severe pain after surgery was infrequent and declined from 11% of the patients 1 week after surgery to 1% 1 year later. At each time point, there were fewer patients suffering from severe pain when mesh fixation was performed with glue compared to tack fixation (log-rank p = 0.025) (Fig. 4). Postoperative complications and hernia recurrences Two patients from group 1 developed urinary retention, and one patient from group 2 was readmitted due to a large inguinoscrotal hematoma. No wound-related complications were observed. Ten recurrent unilateral hernias were diagnosed out of 329 hernia repairs (3%): 1 in the glue fixation group and 9 in the tacker fixation group (p = 0.02). Patients with recurrence were older (70 ± 8), and two of them were female. Eight patients had an indirect hernia at the first surgery. Nine patients initially underwent bilateral indirect inguinal hernia repair. Four patients from group 2 underwent surgery due to symptomatic recurrence. 
All symptomatic recurrent hernias were diagnosed in the first 4 months after surgery. Asymptomatic recurrent inguinal hernia was diagnosed by physical examination and confirmed by ultrasonography in 6 patients. Discussion The main aim of this study was to evaluate whether the form of mesh fixation may affect chronic pain. Overall, no differences in chronic pain were found between patients who underwent mesh fixation with glue and those who underwent fixation with tackers. Our results do show that patients in whom mesh was fixed by glue had a lower probability of suffering from severe chronic pain. These results suggest that mesh fixation with glue may be superior in patients undergoing TEP hernia repair. Laparoscopic TEP inguinal hernia repair is a standard practice today. In a large cohort study based on the Swedish national hernia registry, Lindstrem et al. showed a significant decrease in chronic groin pain after TEP repair compared with open repair, at the cost of increased risk of recurrence requiring surgery [8]. Mesh is traditionally used to cover hernia defects. Many surgeons routinely fix the mesh to the tissue to avoid its displacement. Several authors recommended proper mesh placement in preperitoneal space with no fixation in order to avoid additional tissue damage and decrease the rate of chronic pain [9]. In a randomized control trial (RCT), Moreno-Egea et al. showed no difference in the rate of chronic pain when comparing mechanical mesh fixation with non-fixation technique [10]. Different types of mesh fixation have been proposed, varying from mechanical staples or tacks and glue to simple sutures, but high-quality evidence for differences between mesh fixation techniques and their influence on chronic pain is still lacking. Most of the RCTs were either based on small number of patients or included a mix of TEP and transabdominal preperitoneal repair. Chronic groin pain appears in 1-20% of patients after laparoscopic inguinal hernia repair [4]. Techapongsatorn et al. recently evaluated 11 RCTs and found no significant difference between various types of mesh fixation for laparoscopic hernia repair, but ranked glue as the best method for less chronic pain [5]. Chih-Chin Yu et al. prospectively studied single institution data of 583 patients after TEP repair [11] and found higher rate of acute groin pain in patients with mechanical fixation compared to the glue. However, chronic groin pain rate was similar between the groups. Recurrent inguinal hernia after laparoscopic repair appears in 1 to 13% of the patients [4,12]. These figures probably underestimated the real recurrence rate as many studies did not actively follow their patients in the long run, and patients were actively examined only if they experienced symptoms of recurrent hernia. All but 17 patients in our study were examined by attending surgeons blinded to study randomization 1 year after surgery. Indeed, only 4 patients had a symptomatic recurrence. Six patients had no symptoms of recurrence, diagnosed by physical examination as part of the follow-up protocol and confirmed by ultrasonography. Recurrent hernia in our study was significantly more common in patients in whom the mesh was secured using absorbable trackers compared to those where the mesh was secured with glue, suggesting an additional benefit for the glue. The reason for this difference is not completely clear, as both methods should secure the mesh in place in the pre-peritoneal space during the initial healing phase to allow fibrosis. 
The use of tackers may require more precise application at the pubic bone to ensure appropriate anchoring. We standardized the technique of mesh securing, and all the attending surgeons participating in the study were highly experienced in the use of tackers for laparoscopic hernia repair, and it seems unlikely that misuse of the tackers can explain this difference. The study is an investigator-initiated trial that was not supported by any of the industrial companies, and both products were readily available for the use of surgeons in our Institute. This limits potential publication bias and other potential biases that may be associated with sponsored studies. Our study has certain limitations that might influence the final conclusions. A non-response bias or loss for followup might have occurred in this study, which was affected by the COVID-19 pandemic and may have weakened our observed differences. A larger study sample size could have observed a more accurate picture of the pain assessment and postoperative complication rate. Furthermore, in this study, we focused only on pain and its interference with daily activities. We did not evaluate other outcomes related to quality of life. In conclusion, this study shows that the use of glue for mesh fixation decreases severe groin pain and hernia recurrence in patients undergoing laparoscopic TEP hernia repair compared to absorbable tackers.
2023-05-12T14:53:44.364Z
2023-05-12T00:00:00.000
{ "year": 2023, "sha1": "e409f5e2819f6e6344c977fcabdbcedca244296f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "e409f5e2819f6e6344c977fcabdbcedca244296f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73600185
pes2o/s2orc
v3-fos-license
Determining Growing Season of Potatoes Based on Rainfall Prediction Result Using System Dynamics Received Aug 17, 2017 Revised Jan 22, 2018 Accepted May 4, 2018 The potato has long been a staple food in many countries. However, the uncertainty in rainfall patterns caused by climate change has a significant impact on potato production from year to year, so a new growing season needs to be determined in line with the changed climate. In this study, the growing season is determined from rainfall predictions produced with system dynamics in previous work, covering the five years from 2017 to 2021. The resulting model indicates that the dry season begins between mid-April and mid-May and that the growing season lasts 162-192 days. The growing season prediction model has a small error of only about two dasarian (ten-day periods). In the middle of the dry season, rainfall is expected to be very low, which puts the potatoes into water deficit and affects the harvest; this can be overcome with an irrigation system. INTRODUCTION The potato is a tuber crop with a relatively high carbohydrate content [1]. Potatoes can substitute for staple crops and are counted among the five major world food crops alongside wheat, corn, rice, and wheat flour [2]. Potatoes have long been a basis of the human diet in many countries and have a very large potential to ensure food security in the future. Tengger, Indonesia, is one of the potato-producing areas; potatoes have been the main agricultural commodity of the Tengger people for about a hundred years [3]. Potato planting depends heavily on rainfall, and the uncertainty in rainfall patterns brought by climate change has had a significant impact on potato yields from year to year [4]. According to Indriantoro [1], before the climate change of 2004 potato production was 14,165 kg/ha, a harvest higher than the post-climate-change harvest of 10,580 kg/ha in 2010. Rainfall is one of the meteorological parameters with the greatest bearing on livelihoods [5], particularly agriculture [6]. The agricultural industry now applies many technologies to improve harvests, such as digital information systems, sensors, monitors, and other devices [7]. To improve crop harvests, especially of potatoes, a planting schedule is required that matches current rainfall patterns, because rainfall is one of the most dominant variables, especially for rain-fed agricultural commodities [8]. 
The integration of planting schedules for agricultural commodities with climate forecasting results was carried out by Hansen [9]. In that study, Hansen noted that integrating crop simulation models with dynamic seasonal climate forecast models is expanding in response to a perceived opportunity to add value to seasonal climate forecasts for agriculture. The simulation results are intended to serve as a decision support system that can assist farmers in determining planting times and managing agricultural risk. Similar research was conducted by Manjula & Rengalakshmi [10] to ensure sustainable agriculture and food security for rain-dependent farmers in India based on information derived from climate prediction. That study produced a series of steps for adaptive risk reduction for rain-dependent farmers in India based on seasonal climate forecasts and short- and medium-range weather estimates. Rainfall prediction using system dynamics was carried out successfully by Wahyuni et al. [11]. That work modeled rainfall prediction in four districts in Tengger, namely Puspo, Sumber, Tosari, and Tutur. The root mean square error (RMSE) obtained in each district was quite small; the highest RMSE value was only 7.0756, obtained in Sumber District. A method that predicts rainfall well is also able to determine the seasonal time scales that contribute to the management and resilience of farming systems [12], so the rainfall prediction results can be used as a reference for determining the future potato growing season according to the crop's rainfall requirements. Based on the above, this study determines the growing season of potatoes in Tengger, Indonesia, from the rainfall predictions produced by Wahyuni et al. [11]. The growing season is determined using the system dynamics method to build a decision support system. The prototype is intended as a reference for determining the potato growing season over the next five years, from 2017 to 2021. LITERATURE REVIEW 2.1. Potato (Solanum tuberosum) Characteristics Potatoes (scientific name Solanum tuberosum) can be planted in areas at more than 500 m above sea level, but the most suitable locations are plateau areas at altitudes of 1000-2000 m above sea level with temperatures around 20°C. Indonesia therefore has many regions where potatoes can be planted, such as Cipanas, Lembang, Canning, Batu Malang, Tengger [3], Wonosobo, Tawangmangu, Bukit Tinggi, Kerinci, and Malino [13]. The conditions required for potato plants include fertile, slightly sandy soil with plenty of topsoil, ground water that does not stagnate, and a pH between 5 and 5.5. According to Alberta Agriculture and Forestry, potato cultivation generally lasts 80-150 days [14]; the potato development stages are detailed in Table 1. Potato plants have different water requirements at each growth stage. During the dry season, the water deficit from rainfall is generally overcome by irrigation. The amount of water required for potato cultivation is shown in Figure 1. 
System Dynamics System dynamics is a modeling and simulation approach for analyzing and designing new policies with the help of computers. It is characterized by dependencies among variables, mutual interaction, information feedback, and causal loops [15]. System dynamics is often used to manage, model, and simulate new policies based on existing systems. The approach begins with identifying the main problem, especially when it can be expressed as a time-series graph. A model is then developed by identifying the variables that affect the main problem, including circular causality and information feedback loops. The developed model is used to identify stocks, accumulations, and flows in the system, producing a Stock and Flow Diagram (SFD). The SFD is used as a simulation model that runs with the aid of a computer. The policymaker then needs to interpret the simulation results and use them as decision support material for implementing policy changes toward a better system [15]. Rainfall Prediction Using System Dynamics Rainfall is important in agriculture, so early rainfall prediction supports better economic growth in agricultural countries [6]. System dynamics has been used by Wahyuni et al. to model and simulate rainfall prediction [11]. That study predicted rainfall in four districts in Tengger: Puspo, Sumber, Tosari, and Tutur. The rainfall prediction models incorporate various influencing factors such as temperature and humidity; the models used by Wahyuni et al. are shown in Figure 2. The smallest RMSE, 6.4219, was obtained in Tosari and the largest, 7.0756, in Sumber. The validation and RMSE values obtained in each district are shown in Table 2. The system dynamics method used by Wahyuni et al. [11] predicted rainfall with smaller RMSE values than other methods that have been applied to rainfall prediction, such as GSTAR-SUR [4], Tsukamoto FIS [16], and Tsukamoto FIS with GA [17]. Because system dynamics predicts rainfall with a small error, the predictions can be used to determine the future potato growing season according to the potato's rainfall requirements [12]. MODELING 3.1. Season Prediction Modeling The beginning and the end of the dry season can be determined from rainfall data. The dry season begins when rainfall in a dasarian is below 50 mm, or of the shower rain type (44.52 mm/h), and the next two dasarian are also below 50 mm, or of the widespread rain type (14.21 mm/h) [18]. The end of the dry season is reached when the rainfall of the two previous dasarian is still below 50 mm and the rainfall of the next dasarian exceeds 50 mm [19]. The stock and flow diagram model of dry season prediction is shown in Figure 3. Figure 3. Stock and flow diagram model of dry season prediction The data used are BMKG data expressed in dasarian units, i.e., the average rainfall over ten days [20]. To obtain the total rainfall for ten days, the dasarian value is simply multiplied by ten. The dasarian data are explained in detail in Table 3. 
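To make the dry-season rule concrete, the sketch below applies the thresholds described above to a list of dasarian rainfall totals. It is an illustrative reconstruction only, not the authors' implementation (which was built as a stock-and-flow model in a system dynamics tool); the function name and the example rainfall values are assumptions.

```python
def find_dry_season(dasarian_rainfall_mm):
    """Return (start_index, end_index) of the dry season in a list of
    dasarian (10-day) rainfall totals, following the rule described above:
    onset at the first dasarian below 50 mm that is followed by two more
    dasarian below 50 mm; end when two consecutive dasarian below 50 mm
    are followed by one exceeding 50 mm."""
    start = end = None
    for i in range(len(dasarian_rainfall_mm) - 2):
        if start is None and all(r < 50 for r in dasarian_rainfall_mm[i:i + 3]):
            start = i                                  # onset of the dry season
        elif (start is not None
              and dasarian_rainfall_mm[i] < 50
              and dasarian_rainfall_mm[i + 1] < 50
              and dasarian_rainfall_mm[i + 2] > 50):
            end = i + 2                                # first wet dasarian afterwards
            break
    return start, end

# Hypothetical dasarian rainfall for one year (36 dasarian, mm per 10 days).
rain = [90, 85, 70, 65, 60, 55, 52, 48, 45, 40, 30, 20,
        10, 5, 3, 2, 4, 6, 8, 12, 18, 25, 35, 45,
        48, 52, 60, 70, 80, 85, 90, 95, 100, 105, 110, 115]
print(find_dry_season(rain))  # -> (7, 25): dasarian indices of onset and end
```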
Growing Season Modeling The best time to plant potatoes is at the end of the rainy season, and the crop grows well at air temperatures around 20°C. However, potatoes can also be planted at the beginning of the rainy season, on the condition that the plants are two months old or have large tubers by the time heavy rain occurs [10]. In other words, determining the beginning and end of the dry season, together with the air temperature, drives the prediction of the potato growing season. The dependencies among the variables that determine the potato growing season are depicted in Figure 4. Season Prediction The rainfall predictions can be used to predict the beginning and end of the dry season, which in turn determines the suitable time for potato planting. The simulation was first used to calculate the beginning and end of the dry season for the past 10 years, from 2005 to 2014. The dry season determination is expressed in dasarian units, and the errors in the predicted beginning and end of the dry season are listed in Table 4. Using the system dynamics approach, the growing season can be simulated with an average error in the beginning and end of the dry season of only about two dasarian, as shown in Table 4. The rainfall predictions produced with system dynamics in the previous study are expressed in dasarian units [11]. The rainfall prediction results for 2005 to 2014 are illustrated in Figures 5 to 7, which also show the rainfall threshold that separates the rainy season from the dry season. It can be seen that the dry season begins around April to June. From these data, the predicted dry and rainy seasons are listed in Table 5. Temperature data obtained from the Meteorological, Climatological and Geophysical Agency of Indonesia for the 14 years from 2000 to 2014 did not show any significant change [20]. The average temperature over a one-year period is shown in Figure 8. GROWING SEASON PREDICTION The potato growing season can be predicted from the season prediction generated for the next five years, 2017 to 2021. The average temperature at the end of the rainy season and the beginning of the dry season is below 20°C, indicating that this is a suitable time to start planting potatoes. The predicted growing seasons and their lengths are shown in Table 6, which serves as decision support for farmers on when to begin planting potatoes. The growing season prediction has a small error, as shown in Table 4, so it could improve potato harvests in the future. CONCLUSION System dynamics modeling can be used as a prediction tool for decision support in determining the potato growing season. The prediction is based on rainfall prediction data for the next five years, 2017-2021, obtained with the system dynamics approach in a previous study [11]. The simulation results using system dynamics show that the earliest planting time for potatoes is around mid-April to mid-May, with a season length of 162-192 days. The predicted beginning and end of the dry season have an average error of only about 2 dasarian. This result is good because the error is less than 4 dasarian, or 1 month, so it can be used by potato farmers to decide when to start planting. 
In the mid-growth stage, a potato plant requires about 7 mm of water per day [14]. Based on the rainfall prediction graphs, the amount of rainfall in the middle of the dry season is very low, which may put the potatoes into a water deficit; this can be overcome with an irrigation system. Future research will simulate the potato planting season based on rainfall predictions obtained with a combination of ANFIS-GA [21] and system dynamics. Figure 2. Stock and flow diagram model of rainfall prediction. Figure 4. Stock and flow diagram of potato growing season. Table 2. Validation and RMSE of rainfall prediction using system dynamics. Table 5. Prediction results of dry and rainy season in Tengger, Indonesia. Table 6. Prediction results of potato planting time in the dry season.
2018-12-27T09:22:55.301Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "e9d340fcfffba556a2a278aeb09b103f76f149ee", "oa_license": "CCBY", "oa_url": "http://section.iaesonline.com/index.php/IJEEI/article/download/315/345", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e9d340fcfffba556a2a278aeb09b103f76f149ee", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
55759352
pes2o/s2orc
v3-fos-license
Further Investigation on Building and Benchmarking A Low Power Embedded Cluster for Education Sritrusta Sukaridhoto 1, Achmad Subhan Khalilullah 1, and Dadet Pramadihanto 2 Abstract Embedded parallel computing has become popular, and the future of innovation in the semiconductor industry lies in ubiquitous computing. Many researchers have built embedded cluster systems with a limited number of devices; we instead utilize the devices from an embedded systems classroom to build a larger number of parallel computing units. In this paper we build a low-power cluster consisting of 32 ARM boards with a low-cost customized power supply for a high performance computing class for education purposes, test it with several benchmarks, and analyse the raw performance. Keywords Embedded Cluster System, ARM Board, Benchmark, High Computing Cluster for Education. I. INTRODUCTION The large systems used today in HPC are dominated by processors that use the x86 and Power instruction sets supplied by big vendors such as Intel, AMD, and IBM. These processors have been designed mainly to cater to the server, desktop PC, and laptop markets. They provide very good single-thread performance but suffer from high cost and power usage. One of the main goals in building an HPC system is to stay within a power budget. In order to achieve this ambitious goal, other low-power processor architectures such as embedded systems are currently being explored, since these processors have been primarily designed for the mobile and embedded device market. Embedded processors (CPUs) can be found in a vast variety of products, from cellular phones and digital cameras up to network-connected household appliances. Some of these embedded processors can run advanced operating systems, such as Linux, to achieve flexible network connectivity and to provide logically the same functionality as high-end processors designed for PCs and workstations. With these abilities, building an HPC system from embedded systems is also possible. The issue of providing high performance computing for education has been widely investigated in the literature, in particular with reference to embedded systems. In [1][2], the authors built the UCC embedded parallel computer based on the SH4 processor, with 4 nodes. Sasaki et al. [3] describe the M32RUCC parallel computer based on an ARM processor; that module also uses only 4 nodes. In [4], virtual machines (VMs) were used to teach parallel and distributed systems, but building such a system with VMs is still expensive. Balakrishnan [5] built an ARM cluster, but with fewer than 10 nodes, and powering a larger number of boards with individual adapters would be difficult. 
One of the problems in education for HPC systems is the lack of a cost-effective standardized platform for prototyping, testing, and evaluating application programs on network-connected embedded CPUs. Addressing this problem, in this paper we present a compact high performance computing cluster system with embedded CPUs, called the "EEPIS Embedded Cluster Computer (EECC)", which provides a rapid-prototyping environment for high performance computing at very low cost and low power consumption compared with conventional PC clusters. EECC consists of 32 embedded computing nodes, network switches (100 Mbps Fast Ethernet) taken from the embedded systems class, and a customized power supply. The key idea is to fully utilize System on Chip (SoC) embedded products to realize a cost-effective prototyping environment. In this context, we selected a commercially available embedded system as the computing node for EECC, consisting of a dual-core embedded CPU, memory, storage, a network adapter, and I/O interfaces. The computing nodes run Linux with the daemons and libraries required for inter-processor communication in parallel processing, where MPI (Message Passing Interface) and PVM (Parallel Virtual Machine) can be employed for parallel programming. Multiple EECC units can easily be stacked to extend the number of computing nodes. By the use of SoC products and a customized power supply, we achieve a compact size of 650 mm x 480 mm x 130 mm, similar to a 2U rack mount system, a low power consumption of 400 W to drive 32 embedded CPUs, and low cost. Thus, EECC can easily be introduced into educational programs in universities for Linux-based cluster computing. Another interesting feature of EECC is that every computing node has 2 USB interfaces, so EECC can easily be extended to various real-world application systems using USB-based sensors, such as USB cameras. This paper is organized as follows: Section 2 gives the system overview of EECC. Section 3 describes performance tests of basic data transfer bandwidth through MPI. Section 4 discusses an implementation of EECC as an educational module in the class. In Section 5, we conclude. II. SYSTEM OVERVIEW Figure 1 shows the system overview of the EEPIS Embedded Cluster Computer (EECC). It consists of 16 computing nodes, #1 to #16, where node #1 works as the server node for various services, such as NIS, NFS, and SSH, and is directly accessible from an outside terminal or from a locally connected monitor and keyboard. These 16 computing nodes, a network switch, and a 5 V power supply are mounted together. The computing nodes are connected over conventional 100 Mbps Fast Ethernet, and multiple EECC units can be stacked to extend the number of computing nodes. We decided to use a System on Chip (SoC) product with embedded CPUs in building EECC. A PandaBoard ES [7], as shown in Figure 2,
is employed as the computing node. It is a low-power, low-cost single-board computer development platform based on the Texas Instruments OMAP4460 system on a chip (SoC). The OMAP4460 SoC on the PandaBoard features a dual-core 1.2 GHz ARM Cortex-A9 MPCore, a 384 MHz PowerVR SGX540 GPU, an IVA3 multimedia hardware accelerator with a programmable DSP, and 1 GB of DDR2 SDRAM. Primary persistent storage is via an SD card slot accepting SDHC Class 10 cards with 8 GB capacity. The board includes wired 10/100 Ethernet as well as wireless Ethernet and Bluetooth connectivity. Its size is slightly larger than the ETX/XTX computer form factor at 4 × 4.5 in (100 × 110 mm). The board can output video signals via DVI and HDMI interfaces and has 3.5 mm audio connectors. It has two USB host ports and one USB On-The-Go port, supporting USB 2.0. The PandaBoard has a real-time clock that can be synchronized with an NTP server, and runs the Linux kernel for the ARM architecture. The detailed specification of the PandaBoard used as the computing node is shown in Table 1. The ARM Cortex-A9 in the PandaBoard is a 32-bit multicore processor that implements the ARMv7 instruction set architecture. The Cortex-A9 can have a maximum of 4 cache-coherent cores and clock frequencies ranging from 800 to 2000 MHz. Each core in the Cortex-A9 CPU has a 32 KB instruction cache and a 32 KB data cache. One of the key features of the ARM Cortex-A series processors is the option of Advanced SIMD (NEON) extensions. NEON is a 128-bit SIMD instruction set that accelerates applications such as multimedia, signal processing, video encode/decode, gaming, and image processing. Its features include separate register files, independent execution hardware, and a comprehensive instruction set. It supports 8-, 16-, 32-, and 64-bit integer as well as single-precision 32-bit floating-point SIMD operations. EECC uses SDHC Class 10 cards with 8 GB of storage, as shown in Figure 3. An operating system such as Linux can easily be installed on this SDHC card. SDHC Class 10 provides a 30 MB/s data transfer rate, which gives better performance for running parallel computing inside the node. We used a DC 5 V, 40 A, 200 W switching power supply with a cooling fan. With this power supply we can power 16 PandaBoard nodes; each PandaBoard requires 5 V DC at a minimum of 2 A to run. Figure 4 shows the power supply. To distribute the power, we made a custom cable distribution board, shown in Figures 5, 6, and 7, to parallelize the supply. With this board we can also stack power supplies to power additional PandaBoards. For the switching device we decided to use a 24-port 10/100 Mbps switch. Sixteen ports are used for the PandaBoard computing nodes; the remaining ports can be used to stack the network with other EECC modules or to connect to the Internet or another LAN. Figure 8 shows the switching device. The dimensions of the switch are 28 cm x 12.5 cm x 4 cm, which is relatively small. The software environment includes the following: with the Midnight Commander application, the user is able to manage files and transfer them outside the parallel computing environment; EECC implements NIS to administer the user accounts used on the nodes; EECC is connected to EEPIS's local Debian Linux mirror to update and upgrade software; and with BASH shell scripting, the user can easily manage and control many nodes. Figure 9
shows a photograph of a prototype of two 16-node EECC modules. We achieve an extremely compact size of 650 mm x 480 mm x 130 mm, similar to a 2U rack mount system, a total power consumption of 400 W, and low cost by employing SoC embedded devices. III. BENCHMARK In this section we discuss the various benchmarks that were run on the EECC. The benchmarks were chosen based on performance metrics of the system and cluster that are of interest to HPC applications. We describe the performance of basic MPI functions on EECC. A. Pallas MPI Benchmark (PMB) The Pallas MPI Benchmark (PMB) [9] is employed to evaluate the data transfer bandwidth and latency of basic MPI ping-pong, send-recv, and exchange communications. Figure 10 shows the result of the ping-pong test, where two processors send and receive alternately. The peak performance of ping-pong communication is estimated at about 7.5 MB/s (~60 Mbps), showing that EECC can communicate with sufficient bandwidth. Figure 11 shows the result of the send-recv test, where 4 processors communicate in full duplex. With all processors sending and receiving packet data, EECC achieves about 12 MB/s (~96 Mbps), which means the EECC system is capable of high-performance communication. Figure 12 shows the result of MPI exchange communication, where 4 processors send and receive packet data in a round-robin topology; here the EECC system achieves about 10 MB/s (~80 Mbps). B. High Performance Linpack (HPL) High Performance Linpack (HPL) [10] is a parallel implementation of the Linpack benchmark and is portable to a wide range of machines. HPL uses double precision 64-bit arithmetic to solve a linear system of equations of order N. It is usually run on distributed memory computers to determine the double precision floating-point performance of the system. The HPL benchmark uses LU decomposition with partial row pivoting. It uses MPI for inter-node communication and relies on various routines from the BLAS and LAPACK libraries. Table 4 shows that EECC passed the HPL test in comparison with a single node. IV. HPC COURSE The course provides an introduction to advanced computer architectures, parallel algorithms, parallel languages, and performance-oriented computing, and uses real-world case studies from computational science and engineering application domains. As hardware designers turn to multi-core CPUs and GPUs, software developers must embrace parallel programming to increase performance. No single approach has yet established itself as the "right way" to develop parallel software, especially as the hardware evolves so rapidly. The course starts by overviewing the architecture of modern processors, including multi-core, many-core, and general APUs. This discussion includes not only the computation cores themselves but also the importance of understanding the memory hierarchy and caching. The course then turns to the programmability of these systems and works from the ground up: multithreading, higher-level directive- and task-based programming, message passing, and map-reduce. The course also moves from shared memory to distributed memory to the cloud, showing examples of C++11, CUDA, Thrust, OpenMP, PPL, MPI, and Hadoop for programming these systems. Additional topics include measuring performance, linear speedup, Amdahl's law, profiling and debugging tools, types of parallelism (data, task, dataflow, embarrassing), and common patterns (fork-join, reduction, and map-reduce). 
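As a flavor of the message-passing measurements behind the PMB results in Section III, and of the kind of exercise the hands-on labs build toward, the sketch below times a ping-pong exchange between two ranks and reports the effective bandwidth. It uses Python's mpi4py rather than the C-based PMB suite actually used on EECC; the message size and repetition count are arbitrary choices for illustration.

```python
# Run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

size_bytes = 1 << 20                      # 1 MiB message (illustrative)
reps = 100
buf = np.zeros(size_bytes, dtype=np.uint8)

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)     # send to rank 1
        comm.Recv(buf, source=1, tag=0)   # wait for the reply
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each repetition moves the message twice (there and back).
    mbytes = 2 * reps * size_bytes / 1e6
    print(f"ping-pong bandwidth: {mbytes / elapsed:.2f} MB/s")
```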
Hands-on lab exercises in C and C++ are an integral part of the course; attendees should expect to bring a laptop. The syllabus of the course is as follows: a. Intro to HPC and multicore hardware b. Modern many-core hardware c. Types of parallelism d. Parallel programming in C++11 e. The dangers of parallel programming f. OpenMP: a higher-level abstraction g. Task-based parallelism with the TPL h. Tools: debuggers, profilers, and analyzers i. Parallelism at scale: clusters and MPI j. Parallelism at scale: cloud and Hadoop k. Future research directions The approach is hands-on: students are expected to use the lecture material, a series of assignments, and a final project to emerge at the end of the class with parallel programming knowledge that can be immediately applied to their research projects. Students learning about HPC on the EECC system are shown in Figure 13. In this section we compare our old HPC system with the EECC system for the HPC course. The old HPC system is a Power Macintosh G4 (2003), shown in Figure 14. We compare them in terms of economics, performance, and the education process. Table 5 shows the comparison between the old HPC system and EECC. We assign values from 1 to 10, where 1 is the worst and 10 is the best; note that in this comparison a larger value is not always better. In economic terms, EECC is better than the old HPC system. To build an HPC system with 32 nodes using PCs, we would need an investment of around Rp. 160,000,000, whereas building it with EECC costs around half of that. An HPC system using PCs also needs more space than EECC, because EECC is the same size as a 2U rack server. For power consumption, EECC needs only 400 W for 32 nodes, whereas PCs at 550 W each would consume around 17,000 W for 32 nodes. In terms of performance, EECC gives almost the same performance as the old HPC system. A significant difference is storage: EECC uses an 8 GB SDHC card, whereas the old HPC system used a 500 GB HDD. Although EECC has only 8 GB of storage, this is already enough for teaching parallel computing; a basic installation of the EECC system with the parallel programming applications uses about 2 GB of storage. In terms of the education process, EECC offers better value than the old HPC system because it can also be used for other subjects. For preparation, EECC and the old HPC system require the same steps to turn on the nodes. We used the same course material and added more information about embedded systems. Students learn how to do research and find solutions in the HPC course. V. RESULT AND ANALYSIS In this paper we presented a new parallel computer built from embedded systems, called the "EEPIS Embedded Computing Cluster (EECC)", whose purpose is to provide a cost-effective prototyping environment for designing and testing parallel computing in an education program. EECC gives a better design, low cost, and low power consumption. For future work, we want to give EECC better packaging and more examples for the HPC course, and we also want to try the EECC system in robotics. Figure 1. Architecture of the EECC
2018-12-05T00:32:53.432Z
2015-01-28T00:00:00.000
{ "year": 2015, "sha1": "3bb5c9837551c85f3eb5781660977e2609fc4271", "oa_license": "CCBYSA", "oa_url": "http://iptek.its.ac.id/index.php/jps/article/download/355/455", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3bb5c9837551c85f3eb5781660977e2609fc4271", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Education" ], "extfieldsofstudy": [ "Engineering" ] }
246780289
pes2o/s2orc
v3-fos-license
X-Ray fluorescence as a method of characterizing inorganic pigment patterns in the work of Julian Onderdonk This study utilized portable X-Ray fluorescence to analyze pigment patterns in 33 paintings by Julian Onderdonk, a 19th-20th century Texas impressionist. This analysis led to the identification of distinctive pigment preferences for Onderdonk at different periods of his career. Using the pigment preference patterns identified in the paintings that were dated by the artist, undated works were analyzed and assigned to different periods in the artists career based on their pigment patterns. This study represents a non-destructive method for organizing the artist's work without solely relying on stylistic changes. had only tentative dates based on associations with similar, dated works. Analysis of Onderdonk's signature revealed a shift in style between 1906 and 1908, allowing some of the undated works to be grouped to a general date range [3]. To further narrow down the dates, this study utilized portable energy dispersive X-Ray fluorescence (pXRF) analysis to attempt to identify pigment markers that can further refine these groupings, and to develop pigment patterns that are unique to certain periods of Onderdonk's career. There has been a growing need for non-destrcutive in situ analysis of pigments, and the non-invasive, non-destructive nature of pXRF analysis was essential to gain access to the paintings for this study [4,5]. By then comparing the pigment patterns in these refined groups to the undated works, date ranges can be assigned with greater certainty than stylistic comparisons. Methods For each painting analyzed, several small areas were chosen to examine based on pigment hue, apparent homogeneity of the paint, and the thickness of the paint. Data was obtained with a Bruker Tracer III-SD pXRF detector, and analyzed in Bruker's S1pXRF software. The spectra for the different points were compared to one another in order to attempt to isolate the constituent elemental building blocks responsible for a particular hue. In some cases, a distinct pigment color was determined, and in others, the colors were clearly a blend of two or more colors. Though some of the pigments analyzed remain ambiguous, like other similar studies, for the most part this method of analysis was successful [6][7][8]. There are two major drawbacks to using pXRF to analyze pigments. The first is that the Bruker pXRF instrument is unable to detect organic elements due to the low fluorescence energy levels of low Z-number elements [9]. This means that the analyses presented here are likely incomplete. A number of organic based pigments exist and are frequently used, but given the inability of the instrument to detect them, they were necessarily ignored for this study. In some cases, it is noted that due to a lack of otherwise compelling data, organic pigments are the most likely source of a color, but this is inference based on a lack of evidence. The second drawback is that fundamentally, XRF analysis is unable to detect chemical structure. The elemental composition is read, but the chemical bonds between the elements are not [10,11]. Many elements are present in a wide variety of pigments. Iron (Fe) for example can be found in hematite red, orange and yellow ochre, and even Prussian blue [12]. When each pigment point is analyzed, the hue of the point must be taken into account when attempting to find the best chemical fit for the elements present. 
Table 1 below summarizes the pigments identified in this study with their chemical formula and earliest known use. Sample painting analysis Elemental data points were collected across all 33 paintings, then collated and compared. An example of how the analysis was carried out across the assemblage can be seen here with the 1922 painting Dawn in the Hills (Fig. 1). Apart from being a particularly beautiful painting in its own right, Dawn in the Hills is highly treasured because it is the last painting that Onderdonk ever painted [15]. A total of seven points were analyzed for this painting. Points 1 and 2 are the whites of the sky, points 3 and 4 are blues, point 5 is the green of the tree to the left of the painting, point 6 is the reddish background in the lower left-hand corner, and point 7 is the signature. This painting was framed and unable to be removed from the frame, but for those paintings that were unframed or able to be removed from a frame, a portion of non-painted canvas was analyzed to determine the elements present in the primer. After each point was tested, the points were analyzed by color grouping. White As seen in Fig. 2, the elemental composition of the white of the clouds in points 1 and 2 is predominantly zinc and lead, and therefore most likely a zinc white and a lead white. The zinc and lead peaks are consistent across all points, so it stands to reason that the painting was primed with both. The presence of calcium and cadmium in the scan is likely due to the blending of paints to achieve a different hue of white; they are found in similar intensities across all spectra. Blue Unlike the rest of Onderdonk's paintings in the 1920s, this cobalt-based blue does not have the indicative tin or aluminum peaks that would point toward cerulean blue or cobalt blue. Nonetheless, this cobalt signature is consistent with the other blues in Onderdonk's late period work, and could indicate the use of smalt or ultramarine. The presence of calcium, chromium, and iron is consistent with the background across the rest of the spectra and therefore is not likely part of the blue pigment. Green The green of point 5 shown in Fig. 4 is a chromium-based green, so it is likely viridian, a chromium oxide. Many commonly used greens are copper (Cu) based, like verdigris, emerald green, and malachite, but the lack of a copper peak here rules those out. The presence of calcium, chromium, and iron is consistent with the background across the rest of the spectra and therefore is not likely part of the green pigment. Red and signature Fig. 5 shows that the area behind the signature is a reddish orange, which appears to be iron based, indicating hematite. In point 6, there is also a trace of mercury, indicating that the red highlights are vermilion (HgS) instead of iron. Point 7, the signature, has a slightly higher peak for chromium, likely indicating that the red and green were mixed to make a darker color. For Dawn in the Hills, as the spectra were analyzed, the results were added to a spreadsheet for comparison across the collection. As the results continued to build, patterns and trends started to become clear. 
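The spreadsheet comparison described above can be thought of as matching each point's dominant peaks against a small lookup of characteristic pigment elements. The sketch below is an illustrative reconstruction of that idea; the element intensities, the pigment table, and the threshold are hypothetical and are not the authors' actual workflow, which relied on Bruker's S1pXRF software and manual comparison of spectra.

```python
# Characteristic key elements for a few pigments discussed in the text.
PIGMENT_ELEMENTS = {
    "zinc white":     {"Zn"},
    "lead white":     {"Pb"},
    "viridian":       {"Cr"},
    "vermilion":      {"Hg"},
    "hematite red":   {"Fe"},
    "cadmium yellow": {"Cd"},
}

def candidate_pigments(point_counts, background_counts, threshold=2.0):
    """Return pigments whose key elements stand out above the background.

    point_counts / background_counts: dicts of element -> peak intensity.
    threshold: how many times above background an element must be to count.
    """
    elevated = {
        el for el, counts in point_counts.items()
        if counts > threshold * background_counts.get(el, 1e-9)
    }
    return [name for name, keys in PIGMENT_ELEMENTS.items() if keys <= elevated]

# Hypothetical intensities for point 6 of Dawn in the Hills vs. the canvas background.
point6 = {"Fe": 1200, "Hg": 300, "Ca": 80, "Zn": 900, "Pb": 700}
background = {"Fe": 150, "Hg": 5, "Ca": 75, "Zn": 850, "Pb": 650}
print(candidate_pigments(point6, background))
# -> ['vermilion', 'hematite red'] (mercury and iron elevated over the background)
```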
Table 2 lists each painting with a known date and the characteristic elements for each hue analyzed, with the exception of Bluebonnets in San Antonio; this painting had titanium in each point analyzed, indicating that it was primed with titanium white, a paint that was not available until after Onderdonk's death [16]. Additionally, Milkweed in Bandera was omitted due to an inability to separate out conclusive elemental data for the different pigments. Based on this table, Onderdonk's preference for zinc white over lead white grew, and by 1914 it seems he had phased the lead white out of use almost entirely until he re-introduced it in 1921. He seems to have eliminated mercuric red pigment by 1908, though it does show up again in the 1922 Dawn in the Hills. In 1909 there was a shift away from iron-based blues to cobalt-based blues, along with a shift from zinc-based yellows to cadmium-based yellows. Discussion and conclusions Despite the limitation of XRF to obtain only elemental data about the pigments used from which the pigments can only be inferred, with enough data points and a moderately restricted pool of possible chemical formulas, it is entirely possible to establish the chemical makeup of cultural resources like art and archaeological artifacts. With the work of Julian Onderdonk, clear patterns of changing pigment preferences through time revealed themselves when each of the paintings was analyzed. It has been shown that undated works can be reasonably associated with the spectral signatures of dated works, allowing for a completely non-destructive way to categorize the undated works into the various phases of Onderdonk's career. It has also been shown that by establishing these patterns, paintings incorrectly attributed to an artist can be discovered, as is most likely the case with Bluebonnets in San Antonio. The aim of this study was to utilize a portable non-destructive analysis technique to expand the knowledge available about a collection of cultural heritage materials. The results of the study outlined here have proven to be very satisfactory in achieving this aim, though clearly refinements in instrumentation would be advantageous. Future studies incorporating the use of analytic methods capable of detecting organic materials and the chemical bonds between the elements detected would greatly enhance the breadth and depth of the results outlined in this study. However, given that the pXRF was easily transported to the various locations of the paintings and obtained excellent data sets without any adverse effects on these priceless works of art, this was overall a successful endeavor. Authors contributions Christopher Dostal, PhD: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Using this table as a stepping off point for spectral comparisons between paintings, Table 3 shows the elemental composition of the dated paintings compared to the undated paintings to see which paintings fit with which date range the best. Several paintings illustrate shortcomings in the date ranges derived from the dated paintings; At the Edge of the Forest fits best with the 1909 range of pigments, but the inclusion of mercury hints that Onderdonk continued to use mercuric red throughout his entire time in New York instead of stopping in 1906. Likewise, Seascape V, signed Chas. 
Turner, fits best with the dates of 1905-1908, and if this is correct, the supposition that zinc whites were not introduced until 1909 must be incorrect. Table 4 lists the undated paintings with their most parsimonious compositional match to the dated works. To further substantiate the matches made via the elemental composition, the undated paintings were crosschecked with the signature analysis of selected paintings carried out by Baker [3]. There were no conflicts between the assigned date ranges and the signature estimated dates, which, if nothing else is at least encouraging. Data availability statement Data will be made available on request. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This study was made possible by the generous access to several collections, including that of James and Kimel Baker, the Witte Museum and the Villa Finale Museum & Gardens of San Antonio, TX. Additionally, the Bakers facilitated access to this museum, and were vital to the success of project. The use of the pXRF unit was made possible by Texas A&M University's Center for Maritime Archaeology and Conservation.
2022-02-13T16:20:49.953Z
2022-02-11T00:00:00.000
{ "year": 2023, "sha1": "60b960f8ce502a4d847b891ad61e6e15516d57a7", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844023077174/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d22e45e4629d73ef4a6ee3acd2bdd12b68a7e9e2", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [] }